ASAP Cloud Native Deployment Guide
Release 7.4
F40784-02
September 2022
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software" or "commercial computer software documentation" pursuant to the
applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use,
reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or
adaptation of i) Oracle programs (including any operating system, integrated software, any programs
embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle
computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the
license contained in the applicable contract. The terms governing the U.S. Government’s use of Oracle cloud
services are defined by the applicable contract for such services. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience
Documentation Accessibility
Diversity and Inclusion
Using Load Balancer as a Service (LBaaS)
About Using Oracle Cloud Infrastructure Domain Name System (DNS) Zones
Using Persistent Volumes and File Storage Service (FSS)
Leveraging Oracle Cloud Infrastructure Services
Validating Your Cloud Environment
Performing a Smoke Test
Validating Common Building Blocks in the Kubernetes Cluster
6 Creating an Order Balancer Cloud Native Instance
Installing the Order Balancer Artifacts and the Toolkit
Installing the Traefik Container Image
Creating an Order Balancer Instance
Setting Environment Variables
Creating Secrets
Registering the Namespace
Creating an Order Balancer Instance
Validating the Order Balancer Instance
Deleting and Recreating Your Order Balancer Instance
Cleaning Up the Environment
Troubleshooting Issues with the Scripts
Next Steps
7 Planning Infrastructure
Sizing Considerations
Securing Operations in Kubernetes Cluster
9 Integrating ASAP
Integrating With ASAP Cloud Native Instances
Connectivity Between the Building Blocks
Inbound HTTP Requests
Inbound JMS Requests
Applying the WebLogic Patch for External Systems
Configuring SAF On External Systems
Setting Up Secure Communication with SSL/TLS
Configuring Secure Incoming Access with SSL
Generating SSL Certificates for Incoming Access
Setting Up ASAP Cloud Native for Incoming Access
Configuring Incoming HTTP and JMS Connectivity for External Clients
Debugging SSL
Preface
This guide describes how to install and administer Oracle Communications ASAP Cloud
Native Deployment Option.
Audience
This document is intended for DevOps administrators and those involved in installing and
maintaining Oracle Communications ASAP Cloud Native Deployment.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
1 Overview of the ASAP Cloud Native Deployment
Get an overview of Oracle Communications ASAP cloud native deployment, architecture, and
the ASAP cloud native toolkit.
This chapter provides an overview of Oracle Communications ASAP deployed in a cloud
native environment using container images and a Kubernetes cluster.
ASAP Cloud Native Architecture
The ASAP cloud native architecture requires components such as the Kubernetes
cluster. The WebLogic domain is static in the Kubernetes cluster. For any
modifications, you should update the Docker image and redeploy it. The ASAP cloud
native artifacts include a container image built using Docker and the ASAP cloud
native toolkit.
Note:
The cloud native tar file is available for the Linux platform only.
The artifacts in the tar file are extracted to the same directory where you ran the
command.
The following zip files are extracted:
• asap-img-builder.zip: To build the ASAP and Order Balancer Docker images.
• asap-cntk.zip: To create an ASAP instance.
• ob-cntk.zip: To create an Order Balancer instance.
2 Planning and Validating Your Cloud Environment
In preparation for Oracle Communications ASAP cloud native deployment, you must set up
and validate prerequisite software. This chapter provides information about planning, setting
up, and validating the environment for ASAP cloud native deployment.
See the following topics:
• Required Components for ASAP and Order Balancer Cloud Native
• Planning Your Cloud Native Environment
• Setting Up Persistent Storage
• Planning Your Container Engine for Kubernetes (OKE) Cloud Environment
• Validating Your Cloud Environment
If you are already familiar with traditional ASAP, for important information on the differences
introduced by ASAP cloud native, see "Differences Between ASAP Cloud Native and ASAP
Traditional Deployments".
Planning Your Cloud Native Environment
To plan your ASAP cloud native environment, you must choose, install, and set up various
components and services in ways that are best suited for your cloud native environment. The
following sections provide information about each of those required components and services,
the available options that you can choose from, and the way you must set them up for your
ASAP cloud native environment.
kubectl version
• Flannel
To check the version, run the following command on the master node running the kube-
flannel pod:
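For example, one way to check this (the exact command depends on how Flannel is deployed in
your cluster) is to list the flannel container image on that node and inspect its tag:
docker images | grep flannel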
• Docker
To check the version, run the following command:
docker version
Typically, Kubernetes nodes are not used directly to run or monitor Kubernetes workloads.
You must reserve worker node resources for the execution of the Kubernetes workload.
However, multiple users (manual and automated) of the cluster require a point from which to
access the cluster and operate on it. This can be achieved by using kubectl commands
(either directly on the command line and shell scripts or through Helm) or Kubernetes APIs.
For this purpose, set aside a separate host or set of hosts. Operational and administrative
access to the Kubernetes cluster can be restricted to these hosts and specific users can be
given named accounts on these hosts to reduce cluster exposure and promote traceability of
actions.
In addition, you need the appropriate tools to connect to your overall environment, including
the Kubernetes cluster. For instance, for a Container Engine for Kubernetes (OKE) cluster,
you must install and configure the Oracle Cloud Infrastructure Command Line Interface.
Additional integrations may need to include appropriate NFS mounts for home directories,
security lists, firewall configuration for access to the overall environment, and so on.
Kubernetes worker nodes should be configured with the recommended operating system
kernel parameters listed in "Configuring a UNIX ASAP Group and User" in ASAP Installation
Guide. Use the documented values as the minimum values to set for each parameter. Ensure
that Linux OS kernel parameter configuration is persistent, so as to survive a reboot.
The ASAP cloud native instance, for which specification files are provided with the toolkit for
large systems, requires up to 16 GB of RAM and 2 CPUs, in terms of Kubernetes worker
node capacity. For more details about database sizes, see "ASAP Server Hardware
Requirements" in ASAP Installation Guide. A small increment is needed for Traefik. Refer to
those projects for details.
routes and proxy) and include authentication for logging in to the repository. Oracle
recommends that you choose a repository that provides centralized storage and
management for the container image.
Failing to ensure that all nodes have access to a centralized repository means that images
have to be synced to the hosts manually or through custom mechanisms (for example, using
scripts), which are error-prone operations as worker nodes are commissioned,
decommissioned, or even rebooted. When an image on a particular worker node is not
available, the pods using that image are either not scheduled to that node, wasting resources,
or fail on that node. If image names and tags are kept constant (such as myapp:latest), the
pod may pick up a pre-existing image of the same name and tag, leading to unexpected and
hard-to-debug behaviors.
Installing Helm
ASAP cloud native requires Helm, which delivers reliability, productivity, consistency,
and ease of use.
In an ASAP cloud native environment, using Helm enables you to achieve the
following:
• You can apply custom domain configuration by using a single and consistent
mechanism, which leads to an increase in productivity. You no longer need to
apply configuration changes through multiple interfaces such as WebLogic
Console, WLST, and WebLogic Server MBeans.
• Changing the ASAP domain configuration in the traditional installations is a
manual and multi-step process that may lead to errors. This can be eliminated with
Helm because of the following features:
– Helm Lint allows pre-validation of syntax issues before changes are applied
– Multiple changes can be pushed to the running instance with a single upgrade
command
– Configuration changes may map to updates across multiple Kubernetes
resources (such as domain resources, config maps, and so on). With Helm,
you merely update the Helm release, and it is Helm's responsibility to determine
which Kubernetes resources are affected.
• Including configuration in Helm charts allows the content to be managed as code,
through source control, which is a fundamental principle of modern DevOps
practices.
To co-exist with older Helm versions in production environments, ASAP requires Helm
3.1.3 or later saved as helm in PATH.
The following text shows sample commands for installing and validating Helm:
$ cd some-tmp-dir
$ wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
$ tar -zxvf helm-v3.4.1-linux-amd64.tar.gz
# Find the helm binary in the unpacked directory and move it to its desired destination. You need the root user.
$ sudo mv linux-amd64/helm /usr/local/bin/helm
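You can then verify the installation by checking the reported version, for example:
$ helm version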
Helm leverages kubeconfig for users running the helm command to access the Kubernetes
cluster. By default, this is $HOME/.kube/config. Helm inherits the permissions set up for this
access into the cluster. You must ensure that if RBAC is configured, then sufficient cluster
permissions are granted to users running Helm.
Note:
ASAP does not support multiple replicas. However, if you do not want to expose
Kubernetes node IP addresses to users, use a load balancer.
The ingress controller monitors the ingress objects created by the ASAP cloud native
deployment, and acts on the configuration embedded in these objects to expose ASAP HTTP
and HTTPS services to the external network. This is achieved using NodePort services
exposed by the ingress controller.
The ingress controller must support:
• Sticky routing (based on standard session cookie)
• SSL termination and injecting headers into incoming traffic
Examples of such ingress controllers include Traefik, Voyager, and Nginx. The ASAP cloud
native toolkit provides samples and documentation that use Traefik as the ingress controller.
An external load balancer serves to provide a highly reliable single-point access into the
services exposed by the Kubernetes cluster. In this case, this would be the NodePort
services exposed by the ingress controller on behalf of the ASAP cloud native instance.
Using a load balancer removes the need to expose Kubernetes node IPs to the larger user
base, and insulates the users from changes (in terms of nodes appearing or being
decommissioned) to the Kubernetes cluster. It also serves to enforce access policies. The
ASAP cloud native toolkit includes samples and documentation that show integration with
Oracle Cloud Infrastructure LBaaS when Oracle OKE is used as the Kubernetes
environment.
Using Traefik as the Ingress Controller
2-5
Chapter 2
Planning Your Cloud Native Environment
If you choose to use Traefik as the ingress controller, the Kubernetes environment
must have the Traefik ingress controller installed and configured.
For more information about installing and configuring Traefik ingress controller, see
"Installing the Traefik Container Image".
For details about the required version of Traefik, see ASAP Compatibility Matrix.
Note:
The hosts file is located in /etc/hosts on Linux and MacOS machines and in
C:\Windows\System32\drivers\etc\hosts on Windows machines.
However, the solution of editing the hosts file is not easy to scale and coordinate
across multiple users and multiple access environments. A better solution is to
leverage DNS services at the enterprise level.
With DNS servers, a more efficient mechanism can be adopted. The mechanism is the
creation of a domain level A-record:
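For example, a wildcard A-record of the following form (illustrative only; asap.org and the
target address are placeholders for your own domain and external load balancer IP address)
maps all instance hostnames to a single point of access:
*.asap.org.    IN    A    <load balancer IP address>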
If the target is not a load balancer, but the Kubernetes cluster nodes themselves, a DNS
service can also insulate the user from relying on any single node IP. The DNS entry can be
configured to map *.asap.org to all the current Kubernetes cluster node IP addresses. You
must update this mapping as the Kubernetes cluster changes with adding a new node,
removing an old node, reassigning the IP address of a node, and so on.
With these two approaches, you can set up an enterprise DNS once and modify it only
infrequently.
The exported filesystems must have enough capacity to support the intended
workload. Given the dynamic nature of the ASAP cloud native instances, and the fact
that the ASAP logging volume is highly dependent on cartridges and on the order
volume, it is prudent to put in place a set of operational mechanisms to:
• Monitor disk usage and warn when the usage crosses a threshold
• Clean out the artifacts that are no longer needed
If a toolchain such as ELK Stack picks up this data, then the cleanup task can be built
into this process itself. As artifacts are successfully populated into the toolchain, they
can be deleted from the filesystem. You must take care to only delete log files that
have rolled over.
Planning Your Container Engine for Kubernetes (OKE) Cloud Environment
Note:
The reference to logs in this section applies to the container logs and other
infrastructure logs. The space considerations still apply even if the ASAP
cloud native logs are being sent to an NFS Persistent Volume.
Connectivity Requirements
ASAP cloud native assumes the connectivity between the OKE cluster and the Oracle
CDBs is LAN-equivalent in reliability, performance, and throughput. This can be
achieved by creating the Oracle CDBs within the same tenancy as the OKE cluster
and in the same Oracle Cloud Infrastructure region.
ASAP cloud native allows for the full range of Oracle Cloud Infrastructure "cloud-to-
ground" connectivity options for integrating the OKE cluster with on-premise
applications and users. Selecting, provisioning, and testing such connectivity is a
critical part of adopting Oracle Cloud Infrastructure OKE.
DNS mapping difficult to achieve. Additionally, it is also required to balance the load between
the worker nodes. To fulfill these requirements, you can use Load Balancer as a Service
(LBaaS) of Oracle Cloud Infrastructure.
The load balancer can be created using the service descriptor in
$ASAP_CNTK/samples/oci-lb-traefik.yaml. The subnet ID referenced in this file must be filled in
from your Oracle Cloud Infrastructure environment (using the subnet configured for your
LBaaS). The port values assume you have installed Traefik using the unchanged sample values.
The configuration can be applied using the following command (or for traceability, by
wrapping it into a Helm chart):
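For example, assuming Traefik was installed in the traefik name space, the descriptor can be
applied as follows; adjust the name space to match your environment:
kubectl apply -f $ASAP_CNTK/samples/oci-lb-traefik.yaml -n traefik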
The Load Balancer service is created for Traefik pods in the Traefik name space. Once the
Load Balancer service is created successfully, an external IP address is allocated. This IP
address must be used for DNS mapping.
up and leverages the NFS storage provisioner that is typically available in all
Kubernetes installations. However, the data flows through the mount target, which
models an NFS server.
FSS can also be used natively, without requiring the NFS protocol. This can be
achieved by leveraging the FSS storage provisioner supplied by OKE. The broad
outline of how to do this is available in the blog post "Using File Storage Service with
Container Engine for Kubernetes" on the Oracle Cloud Infrastructure blog.
Table 2-1 Oracle Cloud Infrastructure Services for OKE Cloud Environment
Validating Your Cloud Environment
Note:
The requirement of the nginx container image for the smoke test can change over
time. See the content of the deployment.yaml file in step 3 of the following
procedure to determine which image is required. Alternatively, ensure that you have
logged in to Docker Hub so that the system can download the required image
automatically.
These commands must run successfully and return information about the pods and the
port for nginx.
4. Open the following URL in a browser:
http://master_IP:port/
where:
• master_IP is the IP address of the master node of the Kubernetes cluster or the
external IP address for which routing has been set up
• port is the external port for the external-nginx service
5. To track which pod is responding, on each pod, modify the text message in the webpage
served by nginx. In the following example, this is done for the deployment of two pods:
$ echo "This is pod A - nginx-deployment-5c689d88bb-g7zvh - worker1" > index.html
$ kubectl cp index.html nginx-deployment-5c689d88bb-g7zvh:/usr/share/nginx/html/index.html
$ echo "This is pod B - nginx-deployment-5c689d88bb-r68g4 - worker2" > index.html
$ kubectl cp index.html nginx-deployment-5c689d88bb-r68g4:/usr/share/nginx/html/index.html
$ rm index.html
6. Check the index.html webpage to identify which pod is serving the page.
7. Check if you can reach all the pods by running refresh (Ctrl+R) and hard refresh
(Ctrl+Shift+R) on the index.html webpage.
8. If you see the default nginx page, instead of the page with your custom message,
it indicates that the pod has restarted. If a pod restarts, the custom message on
the page gets deleted.
Identify the pod that restarted and apply the custom message for that pod.
9. Increase the pod count by patching the deployment.
For instance, if you have three worker nodes, run the following command:
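The following is a typical patch; it assumes the deployment is named nginx-deployment, as in
the earlier steps:
kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":3}}'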
Note:
Adjust the number as per your cluster. You may find you have to
increase the pod count to more than your worker node count until you
see at least one pod on each worker node. If this is not observed in your
environment even with higher pod counts, consult your Kubernetes
administrator. Meanwhile, try to get as much worker node coverage as
reasonably possible.
10. For each pod that you add, repeat step 5 to step 8.
Ensuring that all the worker nodes have at least one nginx pod in the Running state
ensures that all worker nodes have access to Docker Hub or to your private Docker
repository.
For the Kubernetes environment, identify an NFS server and create or export an NFS
filesystem from it.
Ensure that this filesystem:
• Has enough space for the ASAP logs and performance data
• Is mountable on all the Kubernetes worker nodes
Create an nginx pod that mounts an NFS PV from the identified server. For details, see the
documentation about "Kubernetes Persistent Volumes" on the Kubernetes website. This
activity verifies the integration of NFS, PV/PVC, and the Kubernetes cluster. To clean up the
environment, delete the nginx pod, the PVC, and the PV.
Ideally, data such as logs and JFR data is stored in the PV only until it can be retrieved into a
monitoring toolchain such as Elastic Stack. The toolchain must delete the rolled over log files
after processing them. This helps you to predict the size of the filesystem. You must also
consider the factors such as the number of ASAP cloud native instances that will use this
space, the size of those instances, the volume of orders they will process, and the volume of
logs that your cartridges generate.
Validating the Load Balancer
For a development-grade environment, you can use an in-cluster software load balancer.
ASAP cloud native toolkit provides documentation and samples that show you how to use
Traefik to perform load balancing activities for your Kubernetes cluster.
It is not necessary to run through "Traefik Quick Start" as part of validating the environment.
However, if the ASAP cloud native instances have connectivity issues with HTTP/HTTPS
traffic, and the ASAP logs do not show any failures, it might be worthwhile to take a step back
and validate Traefik separately using Traefik Quick Start.
A more intensive environment, such as a test, pre-production, production, or performance
environment, can additionally require a more robust load balancing service to handle the
HTTP/HTTPS traffic. For such environments, Oracle recommends using a load balancing
hardware that is set up outside the Kubernetes cluster. A few examples of external load
balancers are Oracle Cloud Infrastructure LBaaS for OKE, Google's Network LB Service in
GKE, and F5's Big-IP for private cloud. The actual selection and configuration of an external
load balancer is outside the scope of ASAP cloud native itself but is an important component
to sort out in the implementation of ASAP cloud native. For more details on the requirements
and options, see "Integrating ASAP".
To validate the ingress controller of your choice, you can use the same nginx deployment
used in the smoke test described earlier. This is valid only when run in a Kubernetes cluster
where multiple worker nodes are available to take the workload.
To perform a smoke test of your ingress setup:
1. Run the following commands:
cat <<EOF > ./smoke-internal-nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: smoke-internal-nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
EOF
kubectl apply -f ./smoke-internal-nginx-svc.yaml
kubectl get svc smoke-internal-nginx
2. Create your ingress targeting the internal-nginx service. The following text shows
a sample ingress annotated to work with the Traefik ingress controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: smoke-nginx-ingress
  namespace: default
spec:
  rules:
  - host: smoke.nginx.asaptest.org
    http:
      paths:
      - backend:
          serviceName: smoke-internal-nginx
          servicePort: 80
If the Traefik ingress controller is configured to monitor the default name space,
then Traefik creates a reverse proxy and the load balancer for the nginx
deployment. For more details, see Traefik documentation.
If you plan to use other ingress controllers, refer to the documentation about the
corresponding controllers for information on creating the appropriate ingress and
make it known to the controller. The ingress definition should be largely reusable,
with ingress controller vendors describing their own annotations that should be
specified, instead of the Traefik annotation used in the example.
3. Create a local DNS/hosts entry in your client system mapping
smoke.nginx.asaptest.org to the IP address of the cluster, which is typically the
IP address of the Kubernetes master node, but could be configured differently.
4. Open the following URL in a browser:
http://smoke.nginx.asaptest.org:Traefik_Port/
where Traefik_Port is the external port that Traefik has been configured to expose.
5. Verify that the web address opens and displays the nginx default page.
Your ingress controller must support session stickiness for ASAP cloud native. To learn
how stickiness should be configured, refer to the documentation about the ingress
controller you choose. For Traefik, stickiness must be set up at the service level itself. For
testing purposes, you can modify the internal-nginx service to enable stickiness by running
the following commands:
Other ingress controllers may have different configuration requirements for session
stickiness. Once you have configured your ingress controller, and smoke-nginx-ingress
and smoke-internal-nginx services as required, repeat the browser-based procedure to
verify and confirm if nginx is still reachable. As you refresh (Ctrl+R) the browser, you should
see the page getting served by one of the pods. Repeatedly refreshing the web page should
show the same pod servicing the access request.
To further test session stickiness, you can either do a hard refresh (Ctrl+Shift+R) or restart
your browser (you may have to use the browser in Incognito or Private mode), or clear your
browser cache for the access hostname for your Kubernetes cluster. You may observe that
the same nginx pod or a different pod is servicing the request. Refreshing the page
repeatedly should stick with the same pod while hard refreshes should switch to the other
pod occasionally. As the deployment has two pods, the chances of a switch with a hard
refresh are 50%. You can modify the deployment to increase the number of replica nginx
pods (controlled by the replicas parameter under spec) to increase the odds of a switch.
For example, with four nginx pods in the deployment, the odds of a switch with hard refresh
rise to 75%. Before testing with the new pods, run the commands for identifying the pods to
add unique identification to the new pods. See the procedure in "Performing a Smoke Test"
for the commands.
To clean up the environment after the test, delete the following services and the deployment:
• smoke-nginx-ingress
• smoke-internal-nginx
• nginx-deployment
3 Creating an ASAP Cloud Native Image
An ASAP cloud native image is required to create and manage ASAP cloud native instances.
This chapter describes creating an ASAP cloud native image.
An ASAP cloud native instance requires a container image and access to the database. The ASAP
image is built on top of a Linux base image and the ASAP image builder script adds Java,
WebLogic Server components, database client, and ASAP.
The ASAP cloud native image is created using the ASAP cloud native builder toolkit. You
should run the ASAP cloud native builder toolkit on Linux and it should have access to the
local Docker daemon.
See the following topics for further details:
• Downloading the ASAP Cloud Native Image Builder
• Prerequisites for Creating ASAP Image
• Creating the ASAP Cloud Native Image
• Working with Cartridges
Note:
If the required swap space is not available, contact your administrator.
• ASAP 7.4.0.0 or later Linux Installer. Download the .tar file from Oracle Software Delivery
Cloud:
https://edelivery.oracle.com
• Create the disk1 directory and copy the contents of the .tar file to this directory.
• Installers for WebLogic Server and JDK. Download these from Oracle Software
Delivery Cloud:
https://edelivery.oracle.com
• Oracle Database Client. Download this from Oracle Software Downloads:
https://www.oracle.com/downloads/
• Java, installed with JAVA_HOME set in the environment.
• ASAP is installed in a silent installation mode using the asap.properties file. You
should update the properties file with the database, WebLogic Server,
ORACLE_HOME, port numbers, and all required details.
• Keep the node port details of the Traefik ingress service ready for the environment where it
is deployed.
• Create ASAP database users. For more information, see "Creating Oracle
Database Tablespaces, Tablespace User, and Granting Permissions" in ASAP
Installation Guide.
For details about the required and supported versions of the prerequisite software, see
ASAP Software Compatibility Matrix.
Note:
After you download the installer, locate the cloud native image builder asap-
img-builder.zip in the cloud native tar file. The ASAP Docker images are
created automatically for ASAP 7.4.0.1 or later.
Creating the ASAP Cloud Native Image
orcl19c =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = database host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = service name)
    )
  )
Note:
The variables in the section Docker details, Installer filenames, and Installation
locations are populated by default with the appropriate information.
base_image=oraclelinux:8
HTTPS_PROXY=
HTTP_PROXY=
# Docker details
ASAP_IMAGE_TAG="7.4.0.0.0"
ASAP_VOLUME=dockerhost_volume
ASAP_CONTAINER="asap-c"
DOCKER_HOSTNAME="asaphost"
# Installer filenames
JDK_FILE=jdk-8u321-linux-x64.tar.gz
DB_CLIENT_FILE=LINUX_193000_client.zip
FMW_FILE=fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip
# Installation locations
TNS_ADMIN=/scripts/
JAVA_HOME=/usr/lib/jvm/java/jdk1.8.0_321
ORACLE_HOME=/home/oracle/oracle/product/19.3.0/dbhome_1
PATH=$ORACLE_HOME:$JAVA_HOME/bin:$PATH
CV_ASSUME_DISTID=OEL8
WLS_HOME=/home/oracle/weblogic14110/wlserver
Note:
Ensure that the file names of the JDK_FILE, DB_CLIENT_FILE, and FMW_FILE
variables match the file names in the /asap-img-builder/installers/ folder.
asap.tar.file=ASAP.R7_4_0.B196.linux.tar
asap.envid=CNE1
asap.installLocation=/scratch/oracle/asap
asap.db.alias=
asap.db.username=
asap.db.password=
asap.db.tablespace=
asap.db.temp.ts=
weblogic.domainName=asapDomain
weblogic.domainLocation=/u01/oracle/user_projects/domain/
weblogic.username=
weblogic.password=
weblogic.port=7601
weblogic.channel.listenport=7602
weblogic.channel.publicport=30301
asap.server.adm.password=
asap.server.ctrl.password=
asap.server.nep.password=
asap.server.sarm.password=
asap.server.srp.password=
asap.weblogic.adminPassword=
asap.weblogic.cmwsPassword=
asap.weblogic.monitorPassword=
asap.weblogic.operatorPassword=
asap.weblogic.wsPassword=
## ASAP.cfg properties
asap.properties.MSGSND_RETRIES=5
Run the build-asap-images.sh script to build the ASAP Docker images:
./build-asap-images.sh -i asap
The script creates the staging Docker image by installing WebLogic Server, Java, and the
database client.
The script also creates the staging ASAP Docker image for the environment IDs specified
in the asap.properties file.
Working with Cartridges
To install cartridges:
1. Copy the required cartridges to the $asap-img-builder/cartridges directory.
2. Run the following command to copy installers and cartridges to the volume:
$asap-img-builder/upgradeASAPDockerImage.sh
3. Create a new container with the ASAP Docker image created by the build-asap-
images.sh script using the following command:
Where:
• CONTAINER_NAME is the $ASAP_CONTAINER.
• version is the version of the ASAP Docker image. This version should be
higher than the previous version.
11. Exit the ASAP container by using the following command:
exit
12. Stop and remove the containers using the following commands:
docker stop CONTAINER_NAME
docker rm CONTAINER_NAME
4 Creating an ASAP Cloud Native Instance
This chapter describes how to create an ASAP cloud native instance in your cloud
environment using the operational scripts and the base ASAP configuration provided in the
ASAP cloud native toolkit. You can create an ASAP instance quickly to become familiar with
the process, explore the configuration, and structure your own project. This procedure is
intended to validate that you are able to create an ASAP instance in your environment.
Before you create an ASAP instance, you must do the following:
• Download the ASAP cloud native tar file and extract the asap-cntk.zip file. For more
information about downloading the ASAP cloud native toolkit, see "Downloading the
ASAP Cloud Native Artifacts".
• Install the Traefik container images
Set the ASAP_CNTK environment variable to the directory where you extracted the ASAP cloud
native toolkit:
$ export ASAP_CNTK=asap_cntk_path
Where asap_cntk_path is the installation directory of the ASAP cloud native toolkit.
Note:
If you are installing Order Balancer in the ASAP namespace, ignore this section.
Installing the Traefik Container Image
To leverage the ASAP cloud native samples that integrate with Traefik, the Kubernetes
environment must have the Traefik ingress controller installed and configured.
If you are working in an environment where the Kubernetes cluster is shared, confirm
whether Traefik has already been installed and configured for ASAP cloud native. If
Traefik is already installed and configured, set your TRAEFIK_NS environment variable
to the appropriate name space.
The instance of Traefik that you installed to validate your cloud environment must be
removed as it does not leverage the ASAP cloud native samples. Ensure that you
have removed this installation in addition to purging the Helm release. Check that any
roles and rolebindings created by Traefik are removed. There could be a clusterrole
and clusterrolebinding called "traefik-operator". There could also be a role and
rolebinding called "traefik-operator" in the $TRAEFIK_NS name space. Delete all of
these before you set up Traefik.
To download and install the Traefik container image:
1. Ensure that Docker in your Kubernetes cluster can pull images from Docker Hub.
See ASAP Compatibility Matrix for the required and supported versions of the
Traefik image.
2. Run the following command to create a name space ensuring that it does not
already exist:
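For example, the following sequence checks the existing name spaces and then creates one; it
assumes the name space is called traefik and is exported as TRAEFIK_NS for later use:
$ export TRAEFIK_NS=traefik
$ kubectl get namespaces
$ kubectl create namespace $TRAEFIK_NS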
Note:
You might want to add the traefik name space to the environment setup
such as .bashrc.
Note:
Set kubernetes.namespaces and the chart version specifically using
command-line.
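The exact Helm command depends on your chart source and on the Traefik version listed in
ASAP Compatibility Matrix. As a sketch only, with the release name, chart reference, chart
version, and values file location shown as assumptions to be adapted to your environment,
the installation resembles the following:
$ helm install traefik-operator <traefik-chart> \
  --namespace $TRAEFIK_NS \
  --version <chart-version> \
  --values $ASAP_CNTK/samples/charts/traefik/values.yaml \
  --set "kubernetes.namespaces={$TRAEFIK_NS}"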
After the installation, Traefik monitors the name spaces listed in its
kubernetes.namespaces field for Ingress objects. The scripts in the toolkit manage
this name space list as part of creating and tearing down ASAP cloud native projects.
When the values.yaml Traefik sample in the ASAP cloud native toolkit is used as is,
Traefik is exposed to the network outside of the Kubernetes cluster through port
30305. To use a different port, edit the YAML file before installing Traefik. Traefik metrics are
also available for Prometheus to scrape from the standard annotations.
Traefik function can be viewed using the Traefik dashboard. Create the Traefik dashboard by
running the instructions provided in the $ASAP_CNTK/samples/charts/traefik/traefik-dashboard.yaml
file. If you use the values.yaml file provided with the ASAP cloud native toolkit, the dashboard
is available at http://traefik.asap.org; you can change the hostname as well as the port to your
desired values.
Creating Secrets
You must store sensitive data and credential information in the form of Kubernetes Secrets
that the scripts and Helm charts in the toolkit consume. Managing secrets is out of the scope
of the toolkit and must be implemented while adhering to your organization's corporate
policies. Additionally, ASAP cloud native does not establish password policies.
For an ASAP cloud native instance, the following secrets are required:
• imagepull-secret: If the private registry or repository is password protected, create this
secret.
• tls-secret: If the traefik ingress is ssl-enabled, create this secret. For more information
about creating tls-secret, see "Setting Up ASAP Cloud Native for Incoming Access."
To create imagepull-secret:
1. Run the following command:
docker login
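For a password-protected registry, a secret of the following form can then be created. This is an
illustrative sketch: the registry server and credentials are placeholders, and the secret name
asap-imagepull and the sr name space match the samples used in this chapter:
kubectl create secret docker-registry asap-imagepull \
  --docker-server=<registry-server> \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace sr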
$ASAP_CNTK/scripts/register-namespace.sh -p sr -t targets
# For example, $ASAP_CNTK/scripts/register-namespace.sh -p sr -t
traefik
Note:
traefik is the name of the targets for registration of the namespace sr. The
script uses TRAEFIK_NS to find these targets. Do not provide the traefik
target if you are not using Traefik.
readiness:
  enabled: true
  initialDelaySeconds: 240
  periodSeconds: 60
liveness:
  enabled: true
  periodSeconds: 120
  initialDelaySeconds: 120
  failureThreshold: 3
For detailed description of the readiness and liveness parameters, see step 1 in
"Creating an ASAP Instance".
replicaCount: 1

image:
  repository: asapcn
  pullPolicy: IfNotPresent
  tag: "7.4.0.0.0"

imagePullSecrets:
  - name: asap-imagepull

asapEnv:
  envid: cne1
  port: 7601
  host: asaphost

persistence:
  enabled: false

readiness:
  enabled: true
  initialDelaySeconds: 240
  periodSeconds: 60

liveness:
  enabled: true
  periodSeconds: 120
  initialDelaySeconds: 120
  failureThreshold: 3

nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

servicechannel:
  name: channelport
  type: ClusterIP
  port: 7601

service:
  name: adminport
  type: ClusterIP
  port: 7602

ingress:
  type: TRAEFIK
  enabled: true
  sslIncoming: false
  adminsslhostname: adminhost.asap.org
  adminhostname: adminhostnonssl.asap.org
  secretName: project-instance-asapcn-tls-cert
  hosts:
    - host: adminhost.asap.org
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}

autoscaling:
  enabled: false

nodeSelector: {}

tolerations: []

affinity: {}
Where:
• repository is the repository name of the configured container registry
• tag is the version name that is used when you create a final Docker image
from the container
• name is the name of the imagepull-secret if it is configured. For more
information about imagepull-secret, see "Creating Secrets".
• envid is the unique environment ID of the instance. This ID must contain only
lowercase alphanumeric characters. For example, asapinstance1.
• port is the port number of the WebLogic Server where ASAP is deployed.
• host is the hostname of the Docker container.
Note:
The hostname must match the hostname used when you created the ASAP
Docker image. If the hostnames do not match, the ASAP servers may not start.
4-6
Chapter 4
Creating an ASAP Instance
Note:
adminsslhostname and secretName are applicable only if sslIncoming is set to
true.
# This is pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <project>-<asapEnv.envid>-nfs-pv
  labels:
    type: local
spec:
  storageClassName: asaplogs
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # hostPath:
  #   path: "/mnt/asap/logs/"
  nfs:
    server: <server>
    path: <path>

# This is pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <obEnv.envid>-pvc
  namespace: sr
spec:
  storageClassName: asaplogs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Where:
• <project> is the namespace. In the above example, value is sr.
• <asapEnv.envid> is the environment ID provided in the $ASAP_CNTK/
samples/charts/traefik/values.yaml file. In the above example, value is
cn96.
This path will be mounted on the Pod as /scratch/oracle/asap/DATA/logs/
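For example, assuming the samples are saved as pv.yaml and pvc.yaml, they can be applied as
follows:
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml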
3. Run the following command to create the ASAP instance:
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
The create-instance.sh script uses the Helm chart located in the charts/asap
directory to deploy the ASAP docker image, service, and ingress controller for your
instance. If the script fails, see "Troubleshooting Issues with the Scripts" before
you make additional attempts.
4. Validate the important input details such as Image name and tag, specification files
used (Values Applied), hostname, and port for ingress routing:
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
NAME: sr-quick
5. If you query the status of the ASAP pod, the READY state of the ASAP pod displays 0/1
for several minutes when the ASAP application is starting.
When the READY state shows 1/1, your ASAP instance is up and running. You can then
validate the instance by submitting work orders.
The base hostname required to access this instance using HTTP is quick.sr.asap.org.
See "Planning and Validating Your Cloud Environment" for details about hostname resolution.
The create-instance script prints out the following valuable information that you can use
when you work with your ASAP domain:
• The T3 URL: http://t3.quick.sr.asap.org. This is required for external client
applications such as JMS and WLST.
• The URL for accessing the WebLogic UI, which is provided through the ingress controller
at host: http://admin.quick.sr.asap.org:30305/console.
To get the ASAP server status, enter the pod by using the following command:
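For example, assuming the instance was created in the sr name space, you can open a shell in
the pod as follows (substitute the name of your ASAP pod):
kubectl exec -it <asap-pod-name> -n sr -- /bin/bash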
You are now inside the ASAP pod. Navigate to the ASAP installation directory and check the
server status by using the following commands:
cd $ASAP_BASE
source Environment_Profile
status
Note:
After an ASAP instance is created, it may take a few minutes to start ASAP
servers and WebLogic Server.
To access WebLogic Administration Console outside the cluster, enter the following
URL in the browser:
http://adminhostnonssl.asap.org:30305/console
The system prompts for the user name and password. Enter the WebLogic domain
user name and password.
On the machine from which you access the URL, update the hosts file with the master IP
address and the hostname, as follows:
ip_address adminhostnonssl.asap.org
Note:
The hosts file is located in /etc/hosts on Linux and MacOS machines and in
C:\Windows\System32\drivers\etc\hosts on Windows machines.
Submitting Orders
ASAP is installed with the default POTS cartridge.
To submit ASAP orders over JMS, use an external runJMSclient. The endpoint must be as
follows:
System.${ENV_ID}.ApplicationType.ServiceActivation.Application.1-0;7-4;ASAP.Comp.MessageQueue
http://adminhostnonssl.asap.org:30305/
$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
Note:
After recreating an instance, client applications such as SoapUI may need to
be restarted to avoid using expired cache information.
If another ASAP instance is created in the same database using the same
Environment ID, the ASAP installer deletes the previous ASAP database
users and recreates new users.
You should not create multiple ASAP instances with the same Docker image.
$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick
2. Unregister the name space, which in turn deletes the Kubernetes name space and the
secrets:
$ASAP_CNTK/scripts/unregister-namespace.sh -p sr -d -t target
Note:
traefik is the name of the target for registration of the name space.
The script uses TRAEFIK_NS to find this target. Do not provide the
"traefik" target if you are not using Traefik.
Note:
If a script fails, do not immediately rerun the create-instance script or the upgrade-instance
script to fix the errors, as the rerun returns errors. The upgrade-instance script may appear
to work, but rerunning it does not complete the operation.
$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick
Recreating an Instance
If you encounter issues when creating an instance, do not try to re-run the create-
instance.sh script as this will fail. Instead, perform the cleanup activities and then run the
following command:
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
Note:
Use HTTP or HTTPS based on the IngressRoute configuration. The port
number is the Traefik node port.
3. Once the database is updated, launch the thin client with the following URL:
https://<HOST>:<ingressport>/<ENVID>/OCA
For the thick client, the OCA.cfg file contains HTTPS_PORT inside SESSION,
specifying the ingress port.
Next Steps
The ASAP instance is now ready to be added to an Order Balancer cloud native instance. The
URL for adding the ASAP instance to Order Balancer is:
t3://<service-name>.<namespace>.svc.cluster.local:<portnumber>
Where:
• <service-name> is the name of the service.
• <namespace> is the namespace of the ASAP instance.
• <portnumber> is the port number of the WebLogic Server Domain in the ASAP
instance.
Here is an example for adding the ASAP instance to Order Balancer:
./addASAPServer -asapSrvName ASAP1 \
  -asapSrvURL t3://cn96-service.sr.svc.cluster.local:7601 \
  -asapSrvUser weblogic \
  -asapSrvRequestQueue "System.CN96.ApplicationType.ServiceActivation.Application.1-0;7-4;ASAP.Comp.MessageQueue"
For more information about managing ASAP instances, see "Setting Up ASAP for High
Availability" in ASAP System Administrator's Guide.
5 Creating an Order Balancer Cloud Native Image
An Order Balancer cloud native image is required to create and manage Order Balancer cloud
native instances. This chapter describes creating an Order Balancer cloud native image.
An Order Balancer cloud native instance requires a container image and access to the database.
The Order Balancer image is built on top of a Linux base image and the Order Balancer
image builder script adds Java, WebLogic Server components, and Order Balancer.
The Order Balancer cloud native image is created using the Order Balancer cloud native
builder toolkit. You should run the Order Balancer cloud native builder toolkit on Linux and it
should have access to the local Docker daemon.
See the following topics for further details:
• Downloading the Order Balancer Cloud Native Image Builder
• Prerequisites for Creating an Order Balancer Image
• Creating the Order Balancer Cloud Native Image
Note:
If the required swap space is not available, contact your administrator.
Note:
After you download the installer, locate the Order Balancer cloud native
image builder asap-img-builder.zip in the ASAP cloud native tar file. The
Order Balancer Docker images are created automatically for ASAP 7.4.0.1 or
later.
ob.tar.file=ASAP.R7_4_0.B196.ob.tar
ob.weblogic.username=weblogic
ob.weblogic.password=
ob.weblogic.port=7501
ob.weblogic.domainName=ob
ob.weblogic.channel.listenport=7502
ob.weblogic.channel.publicport=30301
ob.ssl.incoming=0
#Time in seconds
ob.all.servers.down.wait.interval=3600
ob.all.servers.down.retry.interval=120
ob.server.down.retry.interval=2
ob.server.poll.interval=60
ob.webservice.res.timeout=0
ob.asap.conn.timeout=10
## Values allowed: SEVERE, WARNING, INFO , FINE , FINEST ,ALL
ob.logger.info=INFO
ob.db.host=
ob.db.port=1521
ob.db.service.name=
ob.db.user=
ob.db.password=
ob.jms.user=
ob.jms.password=
Where:
• ob.weblogic.username is the user name to log in to WebLogic Server.
• ob.weblogic.password is the password to log in to WebLogic Server.
• ob.weblogic.port is the port of the WebLogic Server.
• ob.weblogic.domainName is the WebLogic Server domain.
• ob.weblogic.channel.listenport is the channel listen port of the WebLogic Server.
• ob.weblogic.channel.publicport is the public port of the WebLogic Server.
• ob.ssl.incoming is set to enable SSL on Order Balancer WebLogic Server. The
default value is 0 which specifies non-SSL.
• ob.all.servers.down.wait.interval specifies the duration in seconds that Order
Balancer waits before routing the request back to queue when all the ASAP
instances are down. The default value is 3600.
• ob.all.servers.down.retry.interval specifies the duration in seconds that Order
Balancer waits before retrying to connect to fetch for an active ASAP member
instance while waiting when all servers are down. The default value is 120.
• ob.server.down.retry.interval specifies the duration in seconds that Order
Balancer waits before reattempting to route the order to the same instance. If the re-
attempt fails, the instance is marked as down. The default value is 2.
• ob.server.poll.interval specifies the duration in seconds that Order Balancer
waits before it retries to check the ASAP instance status. The default value is 60.
• ob.webservice.res.timeout specifies the duration in seconds that Order Balancer
waits for a response before the read times out. The Order Balancer Web Service waits for a
response from the ASAP member instance after invoking the operation. A value of zero
means Order Balancer waits indefinitely until it receives a response from ASAP.
The default value is 0 seconds (no read time-out).
• ob.asap.conn.timeout specifies the duration in seconds that Order Balancer
reattempts the connection to the ASAP instance. The default value is 10.
• ob.logger.info specifies the log level for initializing the Order Balancer
application root logger. The valid values are SEVERE, WARNING, INFO,
FINE, FINEST, and ALL.
• ob.db.host is the database host name or IP address.
• ob.db.port is the database port.
• ob.db.service.name is the database service name.
• ob.db.user is the database user name.
• ob.db.password is the database password.
• ob.jms.user is the JMS user.
• ob.jms.password is the JMS password.
Note:
Do not add an ASAP instance when you are building the Order Balancer
Docker image. The wallet store is mounted dynamically in the
Kubernetes cluster. The wallet files created in the Docker image are not
accessible in the Kubernetes Pod.
In the cloud native deployment, the WebLogic domain is non-SSL and the ingress
controller is configured as SSL.
5. Update the HTTPS_PROXY and HTTP_PROXY variables in the build_ob_env.sh
script:
base_image=oraclelinux:8
HTTPS_PROXY=
HTTP_PROXY=
# Docker details
OB_IMAGE_TAG="obcn:7.4.0.0.0"
OB_VOLUME=obhost_volume
OB_CONTAINER="ob-c"
DOCKER_HOSTNAME="obhost"
# Installer filenames
WEBLOGIC_DOMAIN=/u01/oracle/user_projects/domains/
JDK_FILE=jdk-8u321-linux-x64.tar.gz
FMW_FILE=fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip
# Installation locations
JAVA_HOME=/usr/lib/jvm/java/jdk1.8.0_321
PATH=$JAVA_HOME/bin:$PATH
WEBLOGIC_HOME=/home/oracle/weblogic141100
Note:
The file names of JDK_FILE and FMW_FILE variables must match with the file
names in the /asap-img-builder/installers/ folder.
6. Run the build-asap-images.sh script to build the Order Balancer Docker images:
./build-asap-images.sh -i ob
The script creates the Order Balancer Docker images by running the Docker container and
committing the Order Balancer image.
6 Creating an Order Balancer Cloud Native Instance
This chapter describes how to create an Order Balancer cloud native instance in your cloud
environment using the operational scripts and the base Order Balancer configuration
provided in the Order Balancer cloud native toolkit. You can create an Order Balancer
instance quickly to become familiar with the process, explore the configuration, and structure
your own project. This procedure is intended to validate that you are able to create an Order
Balancer instance in your environment.
Before you create an Order Balancer instance, you must do the following:
• Download the ASAP cloud native tar file and extract the ob-cntk.zip file. For more
information about downloading the Order Balancer cloud native toolkit, see "Downloading
the ASAP Cloud Native Artifacts".
• Install the Traefik container images
Set the OB_CNTK environment variable to the directory where you extracted the Order Balancer
cloud native toolkit:
$ export OB_CNTK=ob_cntk_path
Where ob_cntk_path is the installation directory of the Order Balancer cloud native toolkit.
Installing the Traefik Container Image
If you are working in an environment where the Kubernetes cluster is shared, confirm whether
Traefik has already been installed and configured for Order Balancer cloud native. If Traefik is
already installed and configured, set your TRAEFIK_NS environment variable to the appropriate
name space.
The instance of Traefik that you installed to validate your cloud environment must be
removed as it does not leverage the Order Balancer cloud native samples. Ensure that
you have removed this installation in addition to purging the Helm release. Check that
any roles and rolebindings created by Traefik are removed. There could be a
clusterrole and clusterrolebinding called "traefik-operator". There could also be a
role and rolebinding called "traefik-operator" in the $TRAEFIK_NS name space.
Delete all of these before you set up Traefik.
To download and install the Traefik container image:
1. Ensure that Docker in your Kubernetes cluster can pull images from Docker Hub.
See ASAP Compatibility Matrix for the required and supported versions of the
Traefik image.
2. Run the following command to create a name space ensuring that it does not
already exist:
Note:
You might want to add the traefik name space to the environment setup
such as .bashrc.
Note:
Set kubernetes.namespaces and the chart version specifically using
command-line.
After the installation, Traefik monitors the name spaces listed in its
kubernetes.namespaces field for Ingress objects. The scripts in the toolkit manage
this name space list as part of creating and tearing down Order Balancer cloud native
projects.
When the values.yaml Traefik sample in the Order Balancer cloud native toolkit is
used as is, Traefik is exposed to the network outside of the Kubernetes cluster through
port 30305. To use a different port, edit the YAML file before installing Traefik. Traefik
metrics are also available for Prometheus to scrape from the standard annotations.
Traefik function can be viewed using the Traefik dashboard. Create the Traefik dashboard by
running the instructions provided in the $OB_CNTK/samples/charts/traefik/traefik-dashboard.yaml
file. If you use the values.yaml file provided with the Order Balancer cloud native toolkit, the
dashboard is available at http://traefik.asap.org; you can change the hostname as well as the
port to your desired values.
Creating Secrets
You must store sensitive data and credential information in the form of Kubernetes Secrets
that the scripts and Helm charts in the toolkit consume. Managing secrets is out of the scope
of the toolkit and must be implemented while adhering to your organization's corporate
policies. Additionally, ASAP cloud native does not establish password policies.
For an Order Balancer cloud native instance, the following secrets are required:
• imagepull-secret: If the private registry or repository is password protected, create this secret.
• tls-secret: If the Traefik ingress is SSL-enabled, create this secret. For more information about creating tls-secret, see "Setting Up ASAP Cloud Native for Incoming Access."
To create imagepull-secret:
1. Run the following command:
docker login
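A typical way to create the pull secret, assuming the secret name ob-imagepull used in the sample values.yaml and the project name space sr, is:
# Create a docker-registry secret that the Order Balancer pod uses to pull images
kubectl create secret docker-registry ob-imagepull \
  --docker-server=<registry-host> \
  --docker-username=<registry-user> \
  --docker-password=<registry-password> \
  -n sr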
Creating an Order Balancer Instance
$OB_CNTK/scripts/register-namespace.sh -p sr -t targets
# For example, $OB_CNTK/scripts/register-namespace.sh -p sr -t traefik
Note:
traefik is the name of the target for registering the name space sr. The script uses TRAEFIK_NS to find this target. Do not provide the traefik target if you are not using Traefik.
image:
  repository: obcn
  pullPolicy: IfNotPresent
  tag: "7.4.0.0"
imagePullSecrets:
  - name: ob-imagepull
obEnv:
  envid: ob96
  port: 7501
  host: obhost
servicechannel:
  name: channelport
  type: ClusterIP
  port: 7502
service:
  name: adminport
  type: ClusterIP
  port: 7501
ingress:
  type: TRAEFIK
  enabled: true
  sslIncoming: false
  adminsslhostname: adminobhost.asap.org
  adminhostname: adminobhost.asap.org
  secretName: project-instance-obcn-tls-cert
Where:
• repository is the repository name of the configured container registry.
• tag is the version name that is used when you create a final Docker image from the
container.
• name is the name of the imagepull-secret if it is configured. For more information
about imagepull-secret, see "Creating Secrets".
• envid is the unique environment ID of the instance. This ID must contain only lowercase alphanumeric characters. For example, asapinstance1.
• port is the port number of the WebLogic Server where Order Balancer is deployed.
• host is the hostname of the Docker container.
• servicechannel.port is the channel port that is used when you create a channel in the WebLogic domain.
• service.port is the admin port of the WebLogic Server.
• type is the ingress controller type. The type can be TRAEFIK, GENERIC, or OTHER.
• enabled specifies whether the ingress controller is enabled. The value is true or false. By default, this is set to true.
• sslIncoming specifies whether SSL/TLS is enabled on incoming connections. The value is true or false. By default, this is set to false. If you want to set the value to true, create the keys, certificate, and secret by following the instructions in "Setting Up ASAP Cloud Native for Incoming Access".
• adminsslhostname is the hostname for HTTPS access.
• adminhostname is the hostname for HTTP access.
• secretName is the name of the secret that holds the certificate created for SSL/TLS. For more information about creating keys and the secret name, see "Setting Up ASAP Cloud Native for Incoming Access."
Note:
adminsslhostname and secretName are applicable only if sslIncoming is set to
true.
2. Create a PV and PVC for the Order Balancer instance. This is a mandatory step. The PV path is used to store the wallet and logs of the ASAP instances that are added to the Order Balancer. The sample files are available in the ob_cntk.zip file at $OB_CNTK/samples/nfs/.
# This is pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <project>-<obEnv.envid>-nfs-pv
  labels:
    type: local
spec:
  storageClassName: wallet
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # hostPath:
  #   path: "/mnt/ob/wallet"
  nfs:
    server: <server>
    path: <path>
# This is pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <obEnv.envid>-pvc
  namespace: sr
spec:
  storageClassName: wallet
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Where:
• <project> is the name space. In the above example, the value is sr.
• <obEnv.envid> is the environment ID provided in the $OB_CNTK/samples/charts/traefik/values.yaml file. In the above example, the value is ob96.
This path is mounted on the pod as /u01/oracle/user_projects/domains/domain/oracle_communications/asap.
3. Run the following command to create pv and pvc:
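For example, assuming the sample files are named pv.yaml and pvc.yaml:
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml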
4. Verify whether the PV and PVC are created successfully by running the following commands:
kubectl get pv
kubectl get pvc -n sr
5. Create the Order Balancer instance by running the following command:
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
The create-instance.sh script uses the Helm chart located in the charts/ob
directory to deploy the Order Balancer docker image, service, and ingress
controller for your instance. If the script fails, see "Troubleshooting Issues with the
Scripts" before you make additional attempts.
6. Validate the important input details such as Image name and tag, specification files used
(Values Applied), hostname, and port for ingress routing:
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
If the PV and PVC are not configured beforehand, the pod remains in the Pending state.
7. When the Pod state shows READY 1/1, your Order Balancer instance is up and running.
The base hostname required to access this instance using HTTP is
adminobhostnonssl.asap.org. See "Planning and Validating Your Cloud Environment"
for details about hostname resolution.
The create-instance script prints out the following valuable information that you can use
when you work with your Order Balancer domain:
• The URL for accessing the WebLogic UI, which is provided through the ingress controller
at host: http://adminobhostnonssl.asap.org:30305/console.
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/obinstance1-service   ClusterIP   10.99.231.206   <none>        7502/TCP,7501/TCP   5d21h

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/obinstance1-deployment   1/1     1            1           5d21h

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/obinstance1-deployment-d5bd787f8   0         0         0       5d15h
Note:
After an Order Balancer instance is created, it may take a few minutes to
start Order Balancer servers.
To access WebLogic Administration Console outside the cluster, enter the following
URL in the browser:
http://adminhostnonssl.asap.org:30305/console
The system prompts for the user name and password. Enter the WebLogic domain
user name and password.
Update the hosts file with the hostname and the master IP address on the machine from which you access the URL.
Note:
The hosts file is located in /etc/hosts on Linux and MacOS machines and in
C:\Windows\System32\drivers\etc\hosts on Windows machines.
ip_address adminhostnonssl.asap.org
$OB_CNTK/scripts/delete-instance.sh -p sr -i quick
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
Note:
If another Order Balancer instance is created in the same database, create new
database users.
You should not create multiple Order Balancer instances with the same Docker
image.
$OB_CNTK/scripts/delete-instance.sh -p sr -i quick
2. Unregister the name space, which in turn deletes the Kubernetes name space and the secrets:
$OB_CNTK/scripts/unregister-namespace.sh -p sr -d -t target
Note:
traefik is the name of the target for registration of the name space. The
script uses TRAEFIK_NS to find this target. Do not provide the "traefik" target if
you are not using Traefik.
When a create-instance script fails, you must clean up the instance before making
another attempt at instance creation.
Note:
Do not immediately re-run the create-instance or upgrade-instance script to fix errors, as they will return errors. The upgrade-instance script might appear to work, but re-running it does not complete the operation.
$OB_CNTK/scripts/delete-instance.sh -p sr -i quick
Recreating an Instance
If you encounter issues when creating an instance, do not try to re-run the create-
instance.sh script as this will fail. Instead, perform the cleanup activities and then run
the following command:
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
Next Steps
The Order Balancer instance is now ready for ASAP cloud native instances to be added.
For more information about managing ASAP instances, see "Setting Up ASAP for High
Availability" in ASAP System Administrator's Guide.
7
Planning Infrastructure
In Creating an ASAP Cloud Native Instance, you learned how to create an ASAP instance in
your cloud native environment. This chapter provides details about setting up infrastructure
and structuring ASAP instances for your organization.
See the following topics:
• Sizing Considerations
• Securing Operations in Kubernetes Cluster
Sizing Considerations
The hardware utilization for an ASAP cloud native deployment is approximately the same as
that of the ASAP traditional deployment.
Consider the following when sizing for your cloud native deployment:
• For ASAP cloud native, ensure that the database is sized to account for work orders
residing in the database. For details, see "ASAP Oracle Database Tablespace Sizing
Requirements" in ASAP Installation Guide.
• Oracle recommends sizing by using a known configuration as a building block and adjusting the ASAP.cfg file to meet target order volumes.
Note:
Update the ASAP.cfg file when you build the Docker image.
Securing Operations in Kubernetes Cluster
Kubernetes cluster operations must be secured so that each class of user has only the privileges that match the requirements for their approved actions. The Kubernetes objects concerned are service accounts and RBAC objects.
All ASAP and Order Balancer cloud native users fall into the following three
categories:
• Infrastructure Administrator
• Project Administrator
• ASAP User
Infrastructure Administrator
Infrastructure Administrators perform the following operations:
• Create a project for ASAP and Order Balancer cloud native and configure the
projects
• After creating a new project, run the register-namespace.sh script provided with
the ASAP cloud native toolkit
• Before deleting the ASAP and Order Balancer cloud native projects, run the
unregister-namespace.sh script
• Delete the ASAP and Order Balancer cloud native projects
Project Administrator
Project Administrators can perform all instance-level tasks for ASAP and Order Balancer cloud native deployments within a given project. This includes creating and updating ASAP and Order Balancer cloud native instances. A Project Administrator works on one specific project. However, a given human user may be assigned Project Administrator privileges on more than one project.
RBAC Requirements
The RBAC requirements for Traefik are documented in its user guide. The Infrastructure Administrator must be able to create and delete name spaces, including the Traefik name space (if Traefik is used as the ingress controller). Depending on the specifics of your Kubernetes cluster and RBAC environment, this may require cluster-admin privileges.
The Project Administrator has limited RBAC privileges. For a start, it would be limited
to only that project's name space. Further, it would be limited to the set of actions and
objects that the instance-related scripts manipulate when run by the Project
Administrator. This set of actions and objects is documented in the ASAP and Order
Balancer cloud native toolkit sample located in the samples/rbac directory.
Structuring Permissions Using the RBAC Sample Files
There are many ways to structure permissions within a Kubernetes cluster. There are
clustering applications and platforms that add their own management and control of
these permissions. Given this, the ASAP and Order Balancer cloud native toolkit
provides a set of RBAC files as a sample. You will have to translate this sample into a
configuration that is appropriate for your environment. These samples are in the
samples/rbac directory within the toolkit.
The key files are project-admin-role.yaml and project-admin-rolebinding.yaml.
These files govern the basic RBAC for a Project Administrator.
Do the following with these files:
1. Make a copy of both these files for each particular project, renaming them with the
project/namespace name in place of "project". For example, for a project called "biz",
these files would be biz-admin-role.yaml and biz-admin-rolebinding.yaml.
2. Edit both the files, replacing all occurrences of project with the actual project/namespace
name.
For the project-admin-rolebinding.yaml file, replace the contents of the "subjects"
section with the list of users who will act as Project Administrators for this particular
project.
Alternatively, replace the contents with reference to a group that contains all users who
will act as Project Administrators for this project.
3. Once both files are ready, they can be activated in the Kubernetes cluster by the cluster
administrator using kubectl apply -f filename.
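For example, for the project "biz" described above:
kubectl apply -f biz-admin-role.yaml
kubectl apply -f biz-admin-rolebinding.yaml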
It is strongly recommended that these files be version controlled as they form part of the
overall ASAP cloud native configuration.
In addition to the main Project Administrator role and its binding, the samples contain two
additional and optional role-rolebinding sets:
• project-admin-addon-role.yaml and project-admin-addon-rolebinding.yaml: This role
is per project and is an optional adjunct to the main Project Administrator role. It contains authorization for resources and actions in the project name space that are not required by the toolkit, but might be useful to the Project Administrator for debugging purposes.
8
Exploring Alternate Configuration Options
The ASAP cloud native toolkit provides samples and documentation for setting up your ASAP
cloud native environment using standard configuration options. However, you can choose to
explore alternate configuration options for setting up your environment, based on your
requirements. This chapter describes alternate configurations you can explore, allowing you
to decide how best to configure your ASAP cloud native environment to suit your needs.
You can choose alternate configuration options for the following:
• Choosing Worker Nodes for Running ASAP Cloud Native
• Working with Ingress, Ingress Controller, and External Load Balancer
• Using an Alternate Ingress Controller
• Managing Logs
• Managing ASAP Cloud Native Metrics
The sections that follow provide instructions for working with these configuration options.
# key: oracle.com/licensed-for-coherence
# values:
# - true
Working with Ingress, Ingress Controller, and External Load Balancer
The Traefik ingress controller works by creating an operator in its own "traefik" name space and exposing a NodePort service. However, not all ingress controllers behave the same way. To accommodate all types of ingress controllers, the values.yaml file provides the loadBalancerPort parameter by default.
Using an Alternate Ingress Controller
Note:
If you choose Traefik or another ingress controller option, such as GENERIC or OTHER, update the ingress section in the asap_cntk/charts/asap/values.yaml file.
Table 8-1 Service Name and Service Ports for Ingress Rules
If none of the supported ingress controllers, or even a generic ingress, meets your requirements, you can choose "OTHER".
With this option, ASAP cloud native does not create or manage any ingress required for accessing the ASAP cloud native services. Instead, you can create your own ingress objects based on the service and port details listed in Table 8-1.
Note:
Regardless of the choice of Ingress controller, it is mandatory to provide the
value of loadBalancerPort in one of the specification files. This is used
for establishing a front-end cluster.
Managing Logs
ASAP cloud native generates traditional textual logs. By default, these log files are
generated in the managed server pod but can be re-directed to a Persistent Volume
Claim (PVC) supported by the underlying technology that you choose. See "Setting Up
Persistent Storage" for details.
When you update the staging container, update the LOGDIR attribute in
the $ASAP_BASE/Environment_Profile file:
# LOGDIR=/asaplogs
Managing ASAP Cloud Native Metrics
The ASAP cloud native metrics path is:
asapcn.metricspath: /ENV_ID/OrderMetrics
ASAP cloud native metrics expose the Order Balancer metrics along with the work order metrics. The Order Balancer metrics path is:
metrics_path: /ASAPOB/metrics
- job_name: 'asapmetrics'
  scrape_interval: 120s
  scrape_timeout: 60s
  metrics_path: /ENV_ID/OrderMetrics
  scheme: http/https
  basic_auth:
    username: WebLogic user name
    password: WebLogic password
  static_configs:
    - targets: ['hostname:port number']
  params:
    query: [all]
- job_name: 'obmetrics'
  scrape_interval: 120s
  scrape_timeout: 60s
  metrics_path: /ASAPOB/metrics
  scheme: http/https
  basic_auth:
    username: WebLogic user name
    password: WebLogic password
  static_configs:
    - targets: ['hostname:port number']
  params:
    query: [all]
Where:
• WebLogic user name is the user name of the WebLogic Server.
• WebLogic password is the password of the WebLogic Server.
• hostname is the host name configured in the values.yaml file:
– ASAP: $asap_cntk/charts/asap/values.yaml
– Order Balancer: $ob-cntk/charts/ob/values.yaml
• port number is the Traefik node port number.
Note:
The filter options are: all, today, and total.
If you use a filter, update query: [filter] in the prometheus.yml file.
If you do not use a filter, comment out params: query: [filter] in the
prometheus.yml file.
If multiple ASAP instances are added, add the corresponding jobs to the prometheus.yml file.
http://hostname:traefik_Port/ENV_ID/OrderMetrics
http://hostname:traefik_Port/ASAPOB/metrics
Where
• hostname is the configured host name in the values.yaml file
– ASAP: $asap_cntk/charts/asap/values.yaml
– Order Balancer: $ob-cntk/charts/ob/values.yaml
• traefik_Port is the traefik node port number.
These URLs provide metrics only from the WebLogic Server that serves the request. They do not provide consolidated metrics for the entire cluster. Prometheus queries and Grafana dashboards provide consolidated metrics.
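For example, you can retrieve a consolidated count of in-progress work orders across all scraped instances through the Prometheus HTTP API; the Prometheus host and port below are placeholders for your environment:
curl 'http://<prometheus-host>:9090/api/v1/query?query=sum(asap_wo_inprogress_total)'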
Name                              Notes
asap_wo_complete_total            The total work orders in the completed state.
asap_wo_loading_total             The total work orders in the loading state.
asap_wo_failed_total              The total work orders in the failed state.
asap_wo_cancelled_total           The total work orders in the canceled state.
asap_wo_inprogress_total          The total work orders in the in progress state.
asap_wo_complete_last_interval    The total work orders that are in the completed state in the last interval.
asap_wo_complete_today            The total work orders that are in the completed state as of the current date.
asap_wo_loading_today             The total work orders that are in the loading state as of the current date.
asap_wo_failed_today              The total work orders that are in the failed state as of the current date.
asap_wo_cancelled_today           The total work orders that are in the canceled state as of the current date.
asap_wo_inprogress_today          The total work orders that are in the in-progress state as of the current date.
9
Integrating ASAP
Typical usage of ASAP involves the ASAP application receiving work orders from upstream systems. Upstream systems interact with ASAP using T3/T3S or HTTP/HTTPS. This chapter examines the
considerations involved in integrating ASAP cloud native instances into a larger solution
ecosystem.
This section describes the following topics and tasks:
• Integrating with ASAP cloud native instances
• Applying the WebLogic patch for external systems
• Configuring SAF on External Systems
• Setting up Secure Communication with SSL/TLS
Note:
Connectivity with the OCA client and SRT is not supported in the ASAP cloud native environment.
Integrating with ASAP Cloud Native Instances
Invoking the ASAP cloud native Helm chart creates a new ASAP instance. In the
above illustration, the name of the instance is "quick" and the name of the project is
"sr". The instance consists of an ASAP pod and a Kubernetes service.
The Cluster Service contains endpoints for both HTTP and T3 traffic. The instance
creation script creates the ASAP cloud native Ingress object. The Ingress object has
metadata to trigger the Traefik ingress controller as a sample. Traefik responds by
creating new front-ends with the configured "hostnames" for the cluster
(quick.sr.asap.org and t3.quick.sr.asap.org in the illustration). The IngressRoute
connects the hostname to the service exposed on the pod. The service is created on
the ASAP WebLogic admin server port.
The prior installation of Traefik has already exposed Traefik itself via a selected port
number (30305 in the example) on each worker node.
This leads to an interruption of access and requires intervention. The recommended pattern
to avoid these concerns is for the DNS Resolver to be populated with all the applicable IP
addresses as resolution targets (in our example, it would be populated with the IPs of both
Worker node 1 and node 2), and have the Resolver return a random selection from that list.
An alternate mode of communication is to introduce a load balancer configured to balance
incoming traffic to the Traefik ports on all the worker nodes. The DNS Resolver is still
required, and the entry for *.mobilecom.asap.org points to the load balancer. Your load
balancer documentation describes how to achieve resiliency and load management. With this
setup, a user (User Client A in our example) sends a message to
dev2.mobilecom.asap.org, which actually resolves to the load balancer - for instance,
http://dev2.mobilecom.asap.org:8080/OrderManagement/Login.jsp. Here, 8080 is the
public port of the load balancer. The load balancer sends this to Traefik, which routes the
message, based on the "hostname" targeted by the message to the HTTP channel of the
ASAP cloud native instance.
By adding the hostname resolution such that admin.dev2.mobilecom.asap.org also
resolves to the Kubernetes cluster access IP (or Load Balancer IP), User Client B can access
the WebLogic console via http://admin.dev2.mobilecom.asap.org/console and the
credentials specified while setting up the "wlsadmin" secret for this instance.
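For example, assuming the hostnames used in this illustration and a cluster access or load balancer IP of 192.0.2.10, the local hosts file entries would look like this:
192.0.2.10 dev2.mobilecom.asap.org
192.0.2.10 admin.dev2.mobilecom.asap.org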
Note:
Access to the WebLogic Admin console is provided for review and debugging use
only. Do not use the console to change the system state or configuration; any such manual changes (whether made using the console, WLST, or other such mechanisms) are not retained in pod reschedule or reboot scenarios. The only way to change the state or configuration of the WebLogic domain or the ASAP installation is by modifying the Docker image.
Applying the WebLogic Patch for External Systems
0.0.0.0 project-instance-ms1
0.0.0.0 project-instance-ms2
0.0.0.0 project-instance-ms3
0.0.0.0 project-instance-ms4
0.0.0.0 project-instance-ms5
0.0.0.0 project-instance-ms6
0.0.0.0 project-instance-ms7
0.0.0.0 project-instance-ms8
0.0.0.0 project-instance-ms9
0.0.0.0 project-instance-ms10
0.0.0.0 project-instance-ms11
0.0.0.0 project-instance-ms12
0.0.0.0 project-instance-ms13
0.0.0.0 project-instance-ms14
0.0.0.0 project-instance-ms15
0.0.0.0 project-instance-ms16
0.0.0.0 project-instance-ms17
0.0.0.0 project-instance-ms18
Add these entries for all the ASAP cloud native instances that the external system interacts with. Set the IP address to 0.0.0.0. Each server in the ASAP cloud native instance must be listed.
Setting Up Secure Communication with SSL/TLS
To enable domain trust, in your domain configuration, under Advanced, edit the Credential
and ConfirmCredential fields with the same password you used to create the global trust
secret in ASAP cloud native.
When ASAP cloud native dictates secure communication, then it is responsible for generating
the SSL certificates. These certificates must be provided to the appropriate client.
Note:
Traefik 2.x moved to use IngressRoute (a CustomResourceDefinition)
instead of the Ingress object. If you are using Traefik, change all references
of ingress to ingressroute in the following command:
rules:
- host: admin.instance.project.asap.org
  http:
    paths:
    - backend:
        serviceName: ENV_ID-service
        servicePort: 7601
mkdir $ASAP_CNTK/charts/asap/ssl
ingress:
  sslIncoming: true
3. After creating the instance by running the create-instance.sh script, you can
validate the configuration by describing the ingress controller for your instance.
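A typical way to do this, assuming the project name space sr (for Traefik 2.x, use ingressroute instead of ingress), is:
kubectl get ingress -n sr
kubectl describe ingress <ingress-name> -n sr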
Once you have the name of your ingress from the first command, the describe output shows each of the certificates you generated terminating one of the hostnames:
TLS:
  project-instance-admin-tls-cert terminates admin.instance.project.asap.org
The ASAP instance is now created with a secure connection to the ingress controller.
Note:
Remember to have your DNS resolution set up on any remote hosts that will
connect to the ASAP cloud native instance.
# For example
./keytool -importcert -v -trustcacerts -alias asapcn -file /scratch/t3.crt -keystore /jdk1.8.0_202/jre/lib/security/cacerts -storepass default_password
Debugging SSL
To debug SSL, do the following:
• Verify Hostname
• Enable SSL logging
Verifying Hostname
When the keystore is generated for the on-premise server, if FQDN is not specified,
then you may have to disable hostname verification. This is not secure and should
only be done in development environments.
To do so, when you build the Docker image, update the build_env.sh script and add
the following Java options:
project:
  #JAVA_OPTIONS for all managed servers at project level
  java_options: "-Dweblogic.StdoutDebugEnabled=true -Dssl.debug=true -Dweblogic.security.SSL.verbose=true -Dweblogic.debug.DebugSecuritySSL=true -Djavax.net.debug=ssl"
10
Upgrading the ASAP Cloud Native
Environment
This chapter describes the tasks you perform in order to apply a change or upgrade to a
component in the cloud native environment.
ASAP supports only one replica per instance. If the same Docker image is used in two
instances, the behavior is undefined. Due to these constraints, ASAP supports only offline
upgrades.
$asap-img-builder/upgradeASAPDockerImage.sh
5. Create a new container using the previous version of the Docker image. For example:
docker run --name asap-c -dit -h asaphost -p 7601 -v dockerhost_volume:/dockerhost_volume asapcn:7.4.0.0
A container named asap-c is created.
6. Enter into the ASAP container using the following command:
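For example, assuming the container name asap-c from the step above:
docker exec -it asap-c /bin/bash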
You are now inside the ASAP container. Next, upgrade the ASAP installation in console mode.
cd /dockerhost_volume/installers/new installer
/asap74ServerLinux -console
Where version is the version of the ASAP Docker image. This version should be
higher than the previous version.
2. To deploy a new Docker image in the Kubernetes cluster, the image must be available in the configured Docker registry or on all worker nodes. To push the Docker image to the Kubernetes Docker registry, run the following commands:
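For example, assuming the image asapcn tagged with a new version and a private registry host (both placeholders for your environment):
docker tag asapcn:version <registry-host>/asapcn:version
docker push <registry-host>/asapcn:version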
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
$asap-img-builder/upgradeOBDockerImage.sh
4. Create a new container using the previous version of the Docker image. For example:
docker run --name ob-c -dit -h obhost -p 7601 -v obhost_volume:/obhost_volume obcn:7.4.0.0
A container named ob-c is created.
5. Enter into the container using the following command:
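For example, assuming the container name ob-c from the step above:
docker exec -it ob-c /bin/bash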
You are now inside the Order Balancer container. To upgrade Order Balancer, see "Updating and Redeploying Order Balancer" in ASAP System Administrator's Guide.
Creating an Image from the Staging Container
The staging container is deployed with all the required updates to route work orders to ASAP
instances. Save this container as a Docker image to deploy in the Kubernetes cluster.
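A typical sequence, assuming the staging container is named ob-c as in the earlier example, is:
# Commit the staging container as a new Order Balancer image
docker commit ob-c obcn:version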
Where version is the version of the Order Balancer Docker image. This version
should be higher than the previous version.
2. To deploy the new Docker image in the Kubernetes cluster, the image should be
available in the configured Docker registry or on all worker nodes. To push the
Docker image to the Kubernetes docker registry, run the following commands:
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
Upgrades to Infrastructure
From the point of view of ASAP instances, upgrades to the cloud infrastructure fall into
two categories:
• Rolling upgrades
• One-time upgrades
Note:
All infrastructure upgrades must continue to meet the supported types and
versions listed in the ASAP documentation's certification statement.
Rolling upgrades are those where, with proper high-availability planning (such as anti-affinity rules), the instance as a whole remains available while parts of it undergo temporary outages. Examples include Kubernetes worker node OS upgrades, Kubernetes version upgrades, and Docker version upgrades.
One-time upgrades affect a given instance all at once. The instance as a whole suffers either an operational outage or a control outage. An example is an ingress controller upgrade.
Kubernetes and Docker Infrastructure Upgrades
Follow standard Kubernetes and Docker practices to upgrade these components. The impact
at any point should be limited to one node - master (Kubernetes and OS) or worker
(Kubernetes, OS, and Docker). If a worker node is going to be upgraded, cordon and drain the node first. This causes all pods to move to other worker nodes, assuming your cluster has the capacity for this; you may have to temporarily add a worker node or two. For ASAP instances, any pods on the cordoned worker will suffer an outage until
they come up on other workers. However, their messages and orders are redistributed to
surviving pods and processing continues at a reduced capacity until the affected pods
relocate and initialize. As each worker undergoes this process in turn, pods continue to
terminate and start up elsewhere, but as long as the instance has pods in both affected and
unaffected nodes, it will continue to process orders.
Ingress Controller Upgrade
Follow the documentation of your chosen Ingress Controller to perform an upgrade.
Depending on the Ingress Controller used and its deployment in your Kubernetes
environment, the ASAP instances it serves may see a wide set of impacts, ranging from no
impact at all (if the Ingress Controller supports a clustered approach and can be upgraded
that way) to a complete outage.
The new Traefik can be installed into a new name space, and one-by-one, projects can be
unregistered from the old Traefik and registered with the new Traefik.
During this transition, there will be an outage in terms of the outside world interacting with
ASAP. Any data that flows through the ingress will be blocked until the new Traefik takes
over. This includes GUI traffic, order injection, API queries, and SAF responses from external
systems. This outage will affect all the instances in the project being transitioned.
11
Moving to ASAP Cloud Native from a
Traditional Deployment
You can move to an ASAP cloud native deployment from your existing ASAP traditional
deployment. This chapter describes tasks that are necessary for moving from a traditional
ASAP deployment to an ASAP cloud native deployment.
Supported Releases
You can move to ASAP cloud native from all supported traditional ASAP releases. In addition,
you can move to ASAP cloud native within the same release, starting with the ASAP release
7.3.0.6.0.
Pre-move Development Activities
Note:
The values of ENV_ID and port numbers are present in the
asap73ServerLinux.response file of the ASAP installation directory.
2. Create an ASAP cloud native test instance and test your instance.
3. Validate the solution.
4. Shut down your test instance and remove the associated secrets and ingress.
that no messages get queued or dequeued. The result is that ASAP is up and running, but
completely idle.
Cleaning Up
Once the ASAP cloud native instance is deemed operational, you can release the resources
used for the ASAP traditional application layer.
You can delete the database used for the ASAP traditional instance and release its resources as well.
12
Debugging and Troubleshooting
This chapter provides information about debugging and troubleshooting issues that you may
face while setting up an ASAP cloud native environment and creating ASAP cloud native
instances.
This chapter describes the following:
• Troubleshooting Issues with Traefik and WebLogic Administration Console
• Common Error Scenarios
• Known Issues
Note:
These steps apply for local DNS resolution via the hosts file. For any other
DNS resolution, such as corporate DNS, follow the corresponding steps.
Troubleshooting Issues with Traefik and WebLogic Administration Console
Note:
If the Traefik service is not running, install or update the Traefik Helm
chart.
5. Verify whether the Traefik back-end systems are registered by using one of the following options:
• Run the following commands to check if your project name space is being monitored
by Traefik. The absence of your project name space means that your managed
server back-end systems are not registered with Traefik.
$ cd $ASAP_CNTK
$ source scripts/common-utils.sh
$ find_namespace_list 'namespaces' traefik traefik-operator
"traefik","project_1", "project_2"
• Verify the Traefik Dashboard and add the following DNS entry in your hosts
configuration file:
Kubernetes_Access_IP traefik.asap.org
Add the same entry regardless of whether you are using Oracle Cloud Infrastructure
load balancer or not. Navigate to: http://traefik.asap.org:30305/dashboard/ and
check the back-end systems that are registered. If you cannot find your project name
space, install or upgrade the Traefik Helm chart. See "Installing the Traefik Container
Image" for more information.
Reloading Instance Backend Systems
If your instance's ingress is present, yet Traefik does not recognize the URLs of your
instance, try to unregister and register your project name space again. You can do this by
using the unregister-namespace.sh and register-namespace.sh scripts in the toolkit.
Note:
Unregistering a project name space stops access to any existing instances in that name space that were working prior to the unregistration.
3. Enabling access logs generates large amounts of information in the logs. After
debugging is complete, disable access logging by running the following command:
Cleaning Up Traefik
Note:
Clean up is not usually required and should be performed only as a last resort. Before cleaning up, make a note of the monitored project name spaces. Once Traefik is re-installed, run $ASAP_CNTK/scripts/register-namespace.sh for each of the previously monitored project name spaces.
Warning: Uninstalling Traefik in this manner will interrupt access to all ASAP instances in the monitored project name spaces.
Cleaning up of Traefik does not impact actively running ASAP instances. However,
they cannot be accessed during that time. Once the Traefik chart is re-installed with all
the monitored name spaces and registered as Traefik back-end systems successfully,
ASAP instances can be accessed again.
Setting up Logs
As described earlier in this guide, ASAP and WebLogic logs can be stored in the
individual pods or in a location provided via a Kubernetes Persistent Volume. The PV
approach is strongly recommended, both to allow for proper preservation of logs (as
pods are ephemeral) and to avoid straining the in-pod storage in Kubernetes.
Within the pod, WebLogic logs are available at: /u01/oracle/user_projects/domains/domain/servers/AdminServer/logs
ASAP logs are available at: /scratch/oracle/asap/DATA/logs/
When a PV is configured, logs are available at the following path starting from the root
of the PV storage:
project-instance/logs.
Pod Status
While the introspection is running, you can check the status of the introspection pod by
running the following command:
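For example, assuming the project name space sr:
kubectl get pods -n sr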
If the READY field shows 1/1, the pod status is healthy.
If there is an issue accessing the image specified in the instance specification, the introspection pod status is not healthy. If the image can be pulled, it is possible that it took a long time to pull the image.
To resolve this issue, verify the image name and tag, and confirm that the pod can access the image from the repository.
You can also try the following:
• Pull the container image manually on all Kubernetes nodes where the ASAP cloud native
pods can be started up.
Known Issues
This section describes known issues that you may come across, their causes, and the
resolutions.
Email Plugin
The ASAP Email plugin is currently not supported. Users who require this capability can
create their own plugin for this purpose.
A
Differences Between ASAP Cloud Native and
ASAP Traditional Deployments
If you are moving from a traditional deployment of ASAP to a cloud native deployment, this
section describes the differences between ASAP cloud native and ASAP traditional.
• ASAP Installer
Distributed installations are not supported in the ASAP cloud native environment. All
ASAP components, including WebLogic Server, must be installed in the same container.
Also, SRT, custom SRPs, and custom NEPs are not supported in the ASAP cloud native
environment.
• WebLogic Domain Configuration
In a traditional deployment of ASAP, the WebLogic domain configuration is done using
WLST or the WebLogic Admin Console. In ASAP cloud native, domain configuration is
done by using WLST. ASAP cloud native does not support the deployment of ASAP in a
Managed Server.
• Incoming JMS and SAF
For incoming JMS and SAF messages, the originator must use T3 over HTTPS
tunneling.
• ASAP OCA
The Order Control Application (OCA) is available in both ASAP traditional and ASAP
cloud native deployments. In a cloud native environment, you can access OCA using the
hostname configured in the ASAP values.yaml file and the port number in the Traefik
values.yaml file. For example, to access the OCA, use:
https://adminhostnonssl.asap.org:30443/<ENV_ID>/OCA