
HP 9000 Containers A.03.01 on HP Integrity Server Administrator Guide
HP-UX 11i v3

Abstract
This document describes the configuration, file system layout, management, troubleshooting, and known limitations of HP 9000
Containers. It is intended for system administrators who want to configure and administer HP 9000 containers, and for
solution architects involved in transitioning applications from legacy HP 9000 servers to HP-UX 11i v3 on HP Integrity
servers using HP 9000 Containers.

HP Part Number: 5900-3112


Published: June 2013
Edition: 1
© Copyright 2011, 2013 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products
and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. UNIX is a registered
trademark of The Open Group.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside, and the Intel Inside logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in
the United States and other countries. Microsoft Windows®, Windows XP®, and Windows NT® are U.S. registered trademarks of Microsoft
Corporation. Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated. Java and Oracle are registered trademarks of Oracle and/or
its affiliates. UNIX® is a registered trademark of The Open Group.
Contents
1 Introduction...............................................................................................9
1.1 Overview............................................................................................................................9
1.2 Features of HP 9000 Containers...........................................................................................9
1.3 HP 9000 container types...................................................................................................10
1.4 When to use HP 9000 Containers.......................................................................................10
1.5 Consolidating HP 9000 servers using HP 9000 Containers....................................................11
1.6 Sizing an HP 9000 container.............................................................................................11
1.7 Resource entitlement..........................................................................................................12
1.8 Using ARIES without HP 9000 Containers............................................................................12
1.9 ISV software license and support.........................................................................................13

2 Installing and configuring HP 9000 Containers............................................15


2.1 Prerequisites.....................................................................................................................15
2.2 Recommended patches......................................................................................................15
2.3 Additional requirements for HP 9000 classic containers.........................................................16
2.4 Configuring HP-UX SRP......................................................................................................16
2.5 Installing HP 9000 Containers............................................................................................16

3 Transition of application environments from HP 9000 server to HP 9000 container...............17
3.1 Selecting HP 9000 container type.......................................................................................17
3.2 Creating HP 9000 server file system image..........................................................................17
3.3 Transitioning kernel tunable parameters...............................................................................18
3.4 Choosing name for HP 9000 container...............................................................................19
3.5 Creating file systems for the container..................................................................................19
3.6 Installing driver and management software..........................................................................20
3.7 Creating and configuring an HP 9000 container..................................................................20

4 Creating and configuring HP 9000 system container.....................................21


4.1 Setting up user environment for HP 9000 image recovery.......................................................21
4.2 Creating container root directory........................................................................................21
4.3 Recovering HP 9000 image...............................................................................................21
4.3.1 Configuring mount points............................................................................................21
4.3.2 Using Ignite-UX network recovery archive.....................................................................22
4.3.3 Using Ignite-UX tape recovery archive..........................................................................22
4.3.4 Using cpio, tar, and frecover recovery archive...............................................................22
4.3.5 Using other tools for recovery archive..........................................................................23
4.3.6 Completing the recovery............................................................................................23
4.4 Creating HP 9000 system container....................................................................................23
4.4.1 Configuring PRM.......................................................................................................23
4.4.2 Adding hp9000sys template......................................................................................23
4.4.3 Verifying configuration of HP 9000 system container.....................................................25
4.4.4 Changing configuration parameters (if required)...........................................................25
4.4.5 Reverting configuration (if required).............................................................................25
4.5 Additional container configuration......................................................................................25
4.5.1 Configuring host name and node name........................................................................25
4.5.2 Configuring IP address...............................................................................................25
4.5.3 Configuring additional IP addresses.............................................................................26

4.5.4 Configuring additional devices...................................................................................26
4.5.5 Configuring mount points...........................................................................................26
4.5.6 Restoring or deleting HP 9000 startup services..............................................................26
4.5.7 Configuring DCE services...........................................................................................26
4.5.8 Configuring root cron jobs..........................................................................................27
4.5.9 Configuring or disabling trusted mode features.............................................................27
4.5.10 Configuring inittab...................................................................................................27
4.5.11 Configuring printers..................................................................................................27
4.5.12 Configuring X server.................................................................................................27
4.5.13 Configuring additional privileges for HP 9000 system container.....................................28
4.5.14 Configuring DDFA....................................................................................................28
4.5.15 Disabling Autofs.......................................................................................................29
4.5.16 Configuring telnet for HP-UX 10.xx containers..............................................................29
4.5.17 Configuring OSI Transport Services.............................................................................29
4.5.18 Enabling auditing....................................................................................................29
4.6 Testing HP 9000 system container......................................................................................29
4.7 Workarounds for known issues...........................................................................................30
4.8 Tweaking ARIES configuration............................................................................................30
4.8.1 Configuring for more threads......................................................................................30
4.8.2 Configuring for more stack size...................................................................................30
4.8.3 Configuring machine-specific parameters.....................................................................30

5 Creating HP 9000 classic container............................................................33


5.1 Setting up user environment for image recovery.....................................................................33
5.2 Creating container root directory........................................................................................33
5.3 Recovering HP 9000 image ..............................................................................................34
5.3.1 Configuring mount points............................................................................................34
5.3.2 Using Ignite-UX network recovery archive.....................................................................34
5.3.3 Using Ignite-UX tape recovery archive..........................................................................34
5.3.4 Using tar and frecover recovery archive........................................................................34
5.3.5 Using other tools for recovery......................................................................................35
5.3.6 Completing the recovery............................................................................................35
5.4 Creating HP 9000 classic container....................................................................................35
5.5 Additional container configuration......................................................................................36
5.5.1 Configuring host name or node name...........................................................................36
5.5.2 Configuring IP address...............................................................................................36
5.5.3 Configuring additional IP addresses.............................................................................36
5.5.4 Configuring mount points...........................................................................................36
5.5.5 Restoring HP 9000 startup services..............................................................................36
5.5.6 Configuring root cron jobs..........................................................................................37
5.5.7 Configuring inittab startup file.....................................................................................37
5.5.8 Configuring printers...................................................................................................37
5.5.9 Configuring non-local users........................................................................................37
5.5.10 Configuring trusted users...........................................................................................37
5.5.11 Configuring sendmail daemon...................................................................................38
5.5.12 Configuring xinetd service.........................................................................................38
5.6 Testing HP 9000 classic container.......................................................................................38
5.7 Configuring ARIES parameters............................................................................................38
5.8 Workarounds for common issues........................................................................................39

6 Upgrading HP 9000 Containers versions....................................................41


6.1 Upgrading from HP 9000 Containers A.03.0x......................................................................41
6.2 Upgrading from HP 9000 Containers A.01.0x......................................................................41

6.2.1 Upgrading to HP 9000 classic container......................................................................41
6.2.2 Upgrading to HP 9000 system container......................................................................42

7 HP 9000 Containers file system layout........................................................45


7.1 HP 9000 system container file system...................................................................................45
7.2 HP 9000 classic container file system..................................................................................46
7.3 HP 9000 Containers directories..........................................................................................48

8 Administration of HP 9000 Containers........................................................49


8.1 Administrator privileges......................................................................................................49
8.2 Start and stop the HP 9000 container.................................................................................49
8.3 User account management.................................................................................................49
8.3.1 HP 9000 system container..........................................................................................49
8.3.2 HP 9000 classic container.........................................................................................50
8.4 Configuring SSH authorization keys.....................................................................................50
8.4.1 HP 9000 system container..........................................................................................50
8.4.2 HP 9000 classic container.........................................................................................50
8.5 Configuring mount and export points..................................................................................50
8.5.1 Configuring NFS and Autofs clients..............................................................................51
8.5.2 Configuring VxFS mount points...................................................................................51
8.5.3 Configuring NFS exports............................................................................................52
8.6 Modifying IP address configuration of HP 9000 container.....................................................52
8.6.1 Changing primary IP address......................................................................................52
8.6.2 Adding a new IP address for an HP 9000 container......................................................52
8.6.3 Changing global IP address.......................................................................................53
8.7 Modifying host name........................................................................................................53
8.8 Modifying resource entitlements..........................................................................................53
8.9 Monitoring HP 9000 containers processes from Integrity host.................................................53
8.10 Patching HP 9000 Containers...........................................................................................54
8.10.1 Patching native files inside container...........................................................................54
8.10.2 Commands disallowed inside container......................................................................54
8.10.3 Applying kernel patches inside the container...............................................................55
8.10.4 Patching commands and libraries...............................................................................55
8.10.5 Errors reported by swverify command.........................................................................55
8.10.6 SD post session scripts..............................................................................................55
8.11 Run level support..............................................................................................................56
8.12 Backing up and cloning a container...................................................................................56
8.12.1 Exporting and importing an HP 9000 system container.................................................56
8.12.2 Exporting and importing an HP 9000 classic container.................................................57
8.12.3 Cloning an HP 9000 system container........................................................................57
8.12.4 Backup applications with HP 9000 system containers...................................................57
8.12.5 Backup applications with HP 9000 classic containers...................................................58
8.13 Auditing with HP 9000 Containers....................................................................................58

9 Using Container Manager.........................................................................61


9.1 Accessing Container Manager from HP SMH........................................................................61
9.2 Container Manager home page.........................................................................................61
9.3 Setting up the environment for container...............................................................................62
9.4 Creating an HP 9000 container..........................................................................................63
9.5 Viewing and modifying configuration of an HP 9000 container..............................................64
9.6 Starting and stopping an HP 9000 container.......................................................................66

10 Integration with SG.................................................................................67
10.1 Setting up the SG cluster...................................................................................................67
10.2 Configuring system on each node in the cluster...................................................................67
10.3 Selecting the package model............................................................................................67
10.4 Selecting application that manages file system and network interface....................................68
10.5 Configuring the SG package on the primary node..............................................................68
10.5.1 Using container package model.................................................................................68
10.5.2 Using application package model.............................................................................70
10.6 Copying and applying package configuration....................................................................71

11 Limitations of HP 9000 Containers............................................................73


11.1 Application limitations.......................................................................................................73
11.2 Setup limitations..............................................................................................................73
11.3 Access limitations.............................................................................................................74
11.4 Patching limitations..........................................................................................................74
11.5 User management limitations.............................................................................................74
11.6 Commands limitations......................................................................................................74
11.7 Unsupported tasks and utilities...........................................................................................75
11.8 Performance limitations of HP 9000 Containers...................................................................76

12 HP 9000 Containers troubleshooting.........................................................77


12.1 Verifying HP 9000 container health...................................................................................77
12.2 Recovering from HP 9000 container startup and shutdown issues..........................................78
12.3 Triaging HP 9000 container access issues..........................................................................78
12.4 Collecting application and system call logs.........................................................................79
12.5 Debugging applications...................................................................................................79
12.6 Known issues and workarounds.........................................................................................80
12.7 Troubleshooting HP ARIES................................................................................................82
12.8 Troubleshooting HP-UX Containers.....................................................................................83
12.9 Reconfiguring HP 9000 containers....................................................................................83
12.9.1 Reconfiguring HP 9000 container...............................................................................83
12.9.2 Switching to newer HP 9000 libraries.........................................................................83
12.9.3 Restoring restricted HP 9000 commands.....................................................................83
12.10 Performance tuning.........................................................................................................84
12.10.1 Switching commands to Integrity native commands......................................................84
12.10.2 Tuning ARIES emulation...........................................................................................84
12.10.3 Tuning kernel parameters.........................................................................................85
12.10.4 Profiling ARIES emulation.........................................................................................85

13 Support and other resources.....................................................................87


13.1 Information to collect before you contact HP........................................................................87
13.2 How to contact HP..........................................................................................................87
13.3 HP authorized resellers.....................................................................................................87
13.4 Related Information.........................................................................................................87
13.5 Typographic conventions..................................................................................................88

14 Documentation feedback.........................................................................91

Glossary....................................................................................................93

Index.........................................................................................................95

1 Introduction
This chapter provides an overview of HP 9000 Containers including its features and types.

1.1 Overview
HP 9000 Containers is a set of tools designed to enable quick transition of application environment
from an HP 9000 server with PA-RISC processor to an HP Integrity server. HP 9000 Containers
allows rehosting the complete HP 9000 user-space environment without recompiling or reinstalling
individual applications, or reconstructing the application ecosystem, with minimal reconfiguration
and application inventory preparation effort.
The transitioned applications reside, along with HP 9000 commands, libraries, and other user-space
components, in a chroot environment known as an HP 9000 container. An HP 9000 container has its own
IP address and login credentials, and can be started, stopped, modified, exported, imported, and
deleted. It does not support applications that are kernel intrusive, or applications related to system
administration, management, and resource monitoring.
HP 9000 Containers is built with two key HP-UX technologies:
• HP ARIES dynamic binary translator, which provides the execution layer for PA-RISC
applications.
• HP-UX Containers (formerly known as SRP), which enables creation of multiple secure isolated
execution environments on the same HP-UX operating system instance.

1.2 Features of HP 9000 Containers


Table 1 (page 9) lists the features that HP 9000 Containers currently supports and does not
support.
Table 1 Features supported and not supported by HP 9000 Containers

HP 9000 Containers supports:
• Transition of an HP 9000/PA-RISC application environment to a chroot environment on an HP Integrity server
• Transition of HP-UX 11i v1, v2, and v3 (HP 9000) to HP-UX 11i v3 (Integrity)
• Creation of the container environment from existing HP 9000 servers
• Transition of all application binaries and configuration files together
• Emulation of executables inside the container using the HP ARIES dynamic binary translator
• Assignment of IP addresses and login credentials
• Management of the container life cycle: start, stop, export, import, modify, and delete
• Well-behaved, pure user-space applications that do not perform system management tasks
• SG (Serviceguard) integration using modified packages for high availability

HP 9000 Containers does not support:
• Running the HP 9000 HP-UX kernel inside the container
• HP 9000 environments earlier than HP-UX 11i v1
• Pre-populated HP 9000 components inside containers
• HP 9000 platform virtualization
• Native mode or mixed mode execution inside containers
• System administration and resource monitoring tools and services
• Online migration
• Kernel intrusive applications, device drivers, system management, and monitoring related applications
• SG inside containers

NOTE: HP-UX 11.00 and HP-UX 10.20 environments usually work inside HP 9000 containers,
but these environments are not officially supported.

1.3 HP 9000 container types


HP 9000 Containers A.03.01 allows you to create two container types: HP 9000 system containers
(feature-rich) and HP 9000 classic containers (for compatibility with HP 9000 Containers A.01.0x).
Table 2 (page 10) lists the differences between the two types.
Table 2 Differences between HP 9000 system container and HP 9000 classic container
• System container: Supports inetd services (access to the container through telnet, ftp, rlogin, remsh, and rexec; no telnet yet for HP-UX 10.20). Classic container: Does not support inetd services (access only through SSH based protocols).
• System container: Supports SSH based access only if SSH is available in the HP 9000 image. Classic container: Supports SSH based access even if SSH is not configured in the HP 9000 image.
• System container: Supports SD patching inside the container. Classic container: Supports only non-SD patching inside the container.
• System container: Can coexist with other HP 9000 system containers on the same HP-UX instance. Classic container: Only one classic container is supported on an HP-UX instance.
• System container: Can coexist with native HP-UX containers. Classic container: Cannot coexist with native HP-UX containers.
• System container: Has a private HP 9000 file system. Classic container: A part of the HP 9000 file system is shared with the host (mainly /etc, /dev, /tcb, and parts of /var).
• System container: Supports user management inside the container. Classic container: User management is performed on the host system.
• System container: Most commands report container-related information inside the container. Classic container: Some commands report system-wide information inside the container.
• System container: Supports run levels inside the container. Classic container: Supports only partial run level functionality inside the container.
• System container: Supports mount inside the container. Classic container: Does not support mount inside the container.
• System container: Supports SG integration in both the SRP package and application package models. Classic container: SG integration is supported only in the application package model.
• System container: Does not support user quotas. Classic container: User quotas can be enabled because user management is performed on the host system.
• System container: Supports trusted mode inside the container (with some differences compared to a native system). Classic container: Trusted mode support is similar to that on a native system (managed entirely from the host).
• System container: Does not support HP SMH or SAM to manage users. Classic container: HP SMH or SAM can be used from the host to manage users.

1.4 When to use HP 9000 Containers


HP 9000 Containers can be considered for transition when the following criteria are met:
• When upgrading or porting applications to native Integrity version is infeasible.
• If the license agreement for ISV software allows copying application-related files to a new
platform (or license can be migrated).
• If ISV supports the application on ARIES, or if ISV support is not a critical requirement for the
customer.
• If the applications to be transitioned are pure user-space and not related to system
administration or management.
• When traditional ARIES based migration is costly due to one or more of the following reasons:

◦ Complete information about the application inventory such as list of applications,
executables, libraries, configuration files, or dependencies is not available.
◦ The number of servers targeted for migration is large and resources are limited to carry
out individual application transition.
◦ There is a dependency on legacy stand-alone development environments, which are not
supported by HP XPADE. For more information about HP XPADE, see http://www.hp.com/
go/xpade.
• When the limitations of HP 9000 Containers are acceptable. For more information about
limitations, see Chapter 11 (page 73).
• When it is possible to perform detailed Proof-of-Concept testing prior to moving to production.
This testing is required because latent application or emulation defects might get exposed in
the container environment.

1.5 Consolidating HP 9000 servers using HP 9000 Containers


Multiple options are available for consolidating HP 9000 servers using HP 9000 Containers:
• Use multiple HP 9000 system containers.
• Use HP 9000 containers in HP Integrity VM guests.
• Use HP 9000 containers in HP-UX vPars.
Multiple HP 9000 system containers can be used when the following conditions are satisfied:
• Complete isolation of application environments is not required (multiple containers share the
same kernel).
• Dynamic migration of resources (memory and CPU) is not needed.
• Online migration is not needed at container level.
• There are no conflicting requirements for kernel tunable parameters.
• There are no conflicting manageability requirements (management applications must run on
the host system in the global container).
• Application downtimes can be coordinated easily when the server needs a reboot.
• Enough resources (memory, CPU) are available to account for emulation overhead.
• Shared memory usage is compatible: some legacy 32-bit PA-RISC applications require the kernel
tunable parameter shmmax to be less than 0x40000000, which limits the number of applications that
can be stacked together if the applications use shared memory.
• The number of concurrent processes on a system hosting legacy containers does not exceed 30000
(larger PIDs cannot be supported with legacy commands and applications).

1.6 Sizing an HP 9000 container


Use the following guidelines to size an HP 9000 container, accounting for the ARIES emulation overhead
and for the loss in performance that results because the applications were not compiled to take
advantage of the Itanium processor architecture (see the worked example at the end of this section):
• ARIES might incur an average memory overhead of 10 MB per process. To compute the total
requirement, count all the processes that run concurrently on the HP 9000 server at peak load,
including processes related to user sessions (these are also emulated in the container), and add
10 MB for each of them.
• The HP-UX 11i v3 kernel has a larger memory footprint (more than 20%) compared to earlier
versions, which must be accounted for.

• The CPU requirements vary with the workload characteristics. Workloads that have a higher
requirement inside HP 9000 containers are the following:
◦ Applications that spawn several short lived processes or threads.

◦ Applications that concurrently run several CPU bound processes.

◦ Script intensive applications.

◦ Java based applications.

◦ Applications that perform intensive floating-point computation.

◦ Applications that load and unload several libraries dynamically.


• A guideline, assuming the target server uses Intel Itanium Processor 9350 cores, is to start
with an approximate 1:<frequency in GHz on the HP 9000 server> core ratio. For example, if the
HP 9000 server uses 1 GHz PA-RISC cores, use a 1:1 ratio for sizing. The core count can be up to
25% less with the Intel Itanium Processor 9560.

NOTE: The guidelines are common case estimates and some changes might be needed based
on the results of Proof-of-Concept testing.

1.7 Resource entitlement


HP-UX Containers can allocate CPU and memory usage per container. By default, a PRM group is
allocated to each container on the system, and CPU and memory allocations can be assigned to each
PRM group. PRM provides two allocation models for CPU cores:
• Share based allocation—Restrictions (excluding maximum utilization caps) are not applied
until the managed resource is fully utilized, at which point the operating system scheduler or
memory manager applies an algorithm to allocate resources proportional to the share size of
each PRM group. This model ensures that individual containers can utilize available resources
without frequent tuning of allocations.
• Dedicated allocation—The specified PRM group is allocated a fixed quantity of resource for
its own exclusive use. This model guarantees immediate and complete access to the resource
at the expense of the ability to allow other PRM groups access to the currently unused resource.
Dedicated CPU allocation is used to limit the software license requirements for some software
products.
You can apply a combination of resource allocation models on a single server. You can also
disable PRM, either to use a different resource allocation utility such as WLM or gWLM, or to
disable resource management per container.
For more information about PRM and WLM, see the documents related to PRM and WLM at http://
www.hp.com/go/hpux-core-docs.

1.8 Using ARIES without HP 9000 Containers


You can use the stand-alone ARIES emulator to run applications compiled for PA-RISC. The transition
involves copying the application-related files from the HP 9000 server to the HP Integrity system.
ARIES is supported with other HP virtualization solutions (such as HP Integrity VM, HP-UX vPars, and
HP-UX nPars) and with native HP-UX system and workload containers.
Table 3 (page 13) lists a comparison of transition using stand-alone ARIES and HP 9000 Containers.

Table 3 Comparison of transition using stand-alone ARIES and HP 9000 Containers
• Stand-alone ARIES: Application dependencies must be identified and transferred manually. HP 9000 Containers: All dependencies are included in the HP 9000 file image that is used to create a container.
• Stand-alone ARIES: There is no PA-RISC environment on the Integrity server except for system libraries and applications. HP 9000 Containers: The container has a virtualized PA-RISC user-space environment.
• Stand-alone ARIES: A separate product called XPADE must be used for PA-RISC C/C++ code development. HP 9000 Containers: The PA-RISC development environment comes along with the HP 9000 file image.
• Stand-alone ARIES: Direct installation and patching of applications might need some workarounds (for example, if the HP-UX version and platform information are verified). HP 9000 Containers: Installation and patching of applications do not need workarounds.
• Stand-alone ARIES: Non kernel intrusive system management applications can be run on ARIES. HP 9000 Containers: System management and resource monitoring related applications generally do not run inside the container.
• Stand-alone ARIES: Better performance compared to containers if applications are highly script intensive. HP 9000 Containers: Might need to switch to native shells and commands in script intensive environments.
• Stand-alone ARIES: Does not introduce any new manageability aspects. HP 9000 Containers: There are some additional management tasks related to containers.
• Stand-alone ARIES: No changes to SG packages other than those required for an SG version upgrade. HP 9000 Containers: Changes to SG packages are required to integrate with containers.

1.9 ISV software license and support


ISV product license and support issues must be discussed directly with the respective application
vendors. HP does not own issues related to software license (LTU or RTU) migration during
application transition to a new platform. If licensing policy explicitly prohibits copying applications
to a new server, HP recommends that you apply for fresh licenses before using HP 9000 Containers.

2 Installing and configuring HP 9000 Containers
This chapter explains installation and configuration of HP 9000 Containers, prerequisites, and
recommended patches.

2.1 Prerequisites
HP-UX 11i v3 March 2011 or later
Install HP-UX 11i v3 March 2011 Base OE or Data Center OE.

NOTE:
• While installing the operating environment, configure /var on a file system separate
from the root file system (see the check after this note).
• HP recommends that you host applications only inside containers. Do not install or use
applications outside containers. The exceptions are system management-related applications
(such as HP OpenView, HP SMH, and HP SG), device drivers, and other applications with
kernel modules, which are not supported inside containers.
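For example, to confirm that /var resides on its own file system (the first item in the note above), list the file system that backs it:
$ bdf /var
If the Mounted on column shows /var rather than /, then /var is on a separate file system.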

HP-UX Containers A.03.01 or later


If the latest version is not already available in the OE, download and install the HP-UX Containers
depot from http://www.software.hp.com —> HP-UX Containers (SRP).
Install the product and verify the installation:
$ swinstall -x autoreboot=true -s <HP-UX Containers depot path> \*
$ swverify HP-UX-SRP

HP ARIES patch PHSS_41423 or later


Download and install the most recent HP ARIES patch for HP-UX 11i v3 from http://www.hp.com/
go/hpsc.
Examine the patch level:
$ what /usr/lib/hpux32/aries32.so

Perl v5.8.8 or later


Verify the Perl version on the system:
$ perl -v
If the version is earlier than 5.8.8, get the latest version from http://www.software.hp.com —>
perl on hp-ux 11i (PA-RISC) and hp-ux 11i (IPF).

HP-UX Secure Shell A.05.00.012 or later


Verify the Secure Shell version on the system:
$ swlist | grep SecureShell
If the version is earlier than A.05.00.012, get the latest version from http://www.software.hp.com
—> HP-UX Secure Shell.

2.2 Recommended patches


HP recommends the following patches because they address some of the known issues related to
HP 9000 containers:

PHKL_41967 : 11.31 fs_select cumulative patch
PHKL_42716 : 11.31 vfs_vnops cumulative patch
PHNE_42470 : 11.31 cumulative ARPA Transport patch
PHSS_42623 : 11.31 mksf(1M) cumulative patch
PHSS_42863 : 11.31 Aries cumulative patch
PHCO_43198 : 11.31 audcmnds cumulative patch
HP also recommends the latest version of the following bundles:
FileSystem-SRP
HPUXTransportSRP
HPUX-Streams-SRP
AuditExt
The latest version can be downloaded from http://www.software.hp.com.
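To check whether a recommended patch or bundle is already installed, you can query the installed-software database with swlist; PHSS_42863 and the SRP bundles are used here only as examples:
$ swlist -l patch | grep PHSS_42863
$ swlist -l bundle | grep -E 'SRP|AuditExt'
Note that a later cumulative patch may supersede a listed patch, so also check the patch documentation for supersession chains.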

2.3 Additional requirements for HP 9000 classic containers


If you are using an HP 9000 classic container, the HP-UX OS instance must be dedicated for
hosting a single HP 9000 container. HP 9000 Containers does not support the creation of other
containers, or running applications outside the classic container on the same OS instance. The
only exceptions are system management-related applications.

2.4 Configuring HP-UX SRP


Log in as root user and run the following command:
$ srp_sys -setup
Enable PRM only if you want to host multiple containers on the system, and partition resources
(memory and CPU) among these containers. When you use an HP 9000 classic container, disable
PRM because the host system must be dedicated for a single container.
You can either accept the default values for all SRP configuration parameters or select to customize.
Ensure that SSHD on the host is configured to listen to the HP-UX 11i v3 host IP address.
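One way to verify the SSHD listen configuration (assuming the HP-UX Secure Shell configuration file is at /opt/ssh/etc/sshd_config; adjust the path to your installation) is:
$ grep -i ListenAddress /opt/ssh/etc/sshd_config
$ netstat -an | grep LISTEN | grep '\.22 '
The second command confirms that sshd is listening on port 22 for the host IP address.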
For more information about the configuration parameters, see HP-UX Containers (SRP) A.03.01
Administrator's Guide at www.hp.com/go/hpux-srp-docs.
Reboot the server after configuring SRP.

2.5 Installing HP 9000 Containers


To install HP 9000 Containers and verify the installation, enter the following command:
$ swinstall -s <HP9KContainers depot path> \*
$ swverify HP9KContainers

3 Transition of application environments from HP 9000
server to HP 9000 container
This chapter explains the steps for transitioning application environments from an HP 9000 server
to an HP 9000 container on an HP Integrity server running HP-UX 11i v3.
The transition process from an HP 9000 server to an HP 9000 container typically involves the
following steps:
• “Selecting HP 9000 container type” (page 17)
• “Creating HP 9000 server file system image” (page 17)
• “Transitioning kernel tunable parameters” (page 18)
• “Choosing name for HP 9000 container” (page 19)
• “Creating file systems for the container” (page 19)
• “Installing driver and management software” (page 20)
• “Creating and configuring an HP 9000 container” (page 20)

NOTE: Data migration related issues must be addressed separately. HP 9000 Containers does
not provide any new tools or documentation for data migration.

3.1 Selecting HP 9000 container type


HP recommends the HP 9000 system container type for transitioning applications. Select the HP
9000 classic container type only when any of the following is true:
• There is a need to quickly migrate from HP 9000 Containers A.01.02 or A.01.06.
• There is a need for user level disk quotas.
• There is a need for managing users and auditing with HP SMH or SAM.
For more information about container types, see Section 1.3 (page 10).

3.2 Creating HP 9000 server file system image


Overview
Archive all the directories from HP 9000 server except the NFS mounted directories. In the backup,
at least include /bin, /dev, /etc, /lib, /net, /opt, /sbin, /stand, /usr, and /var
directories.

NOTE:
• Copy all the application data together to prevent inconsistencies.
• Before creating the image, stop all the applications on HP 9000 server to prevent the archival
of transient files.
Create the HP 9000 server file system image using any utility that can eventually make the files
visible under an alternate root directory and preserve file ownership and permissions. The commonly
used tools for image creation are tar, cpio, pax, and fbackup. Existing Ignite-UX images can
also be used to create the HP 9000 server file system image.

Using fbackup to create image of HP 9000 files
The steps for archiving all the required directories in a single session using the fbackup command
are explained here. If the image is created on a live production server, opt for multiple sessions
to reduce memory and I/O overhead. For more information about fbackup, see fbackup(1M).
1. Prepare a graph file with the include-exclude list.
For example, a graph file for a system level backup can contain the following:
i /
e /var/adm/crash
For a directory level backup, an example for the graph file is:
i /var
e /var/adm/crash
NFS mounted directories are excluded from the backup by default. If you want to include the
NFS mounted directories, use the -n option with the fbackup command.
2. Compute the space requirement for the archive.
3. Select the archive location.
The files can be archived on a tape or in a local or remote file. If the directories are archived
on a file, ensure that largefiles is supported on the file system and there is enough free
space to copy the archive.
$ fsadm <file system root directory>
$ df -k <file system root directory>
4. Run fbackup:
$ /usr/sbin/fbackup -0 \
-f <output device or file path> \
-g <graph file path> -I <index file> 2>&1 | \
tee <fbackup log file>
Errors for some temporary locations such as /var/tmp or /var/spool/sockets might appear in the
log; they can be ignored.
Do not use the kill command on fbackup.

Using tar and cpio to create image of HP 9000 files


NOTE: HP 9000 Containers does not support using cpio images for creating classic containers.
When using the tar or cpio tools to create the image, ensure that the archived paths do not include
the leading "/" prefix. This is because the backup is restored under an alternate root, and not at
the system root on the Integrity system.
For example,
$ cd /
$ tar -cvf archive.tar bin dev etc lib net opt sbin stand tmp usr var
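A comparable cpio invocation (not usable for classic containers, as noted above) also uses relative paths; the archive location shown is illustrative:
$ cd /
$ find bin dev etc lib net opt sbin stand tmp usr var -print | \
cpio -o > /backup/hp9000_image.cpio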

3.3 Transitioning kernel tunable parameters


Kernel parameters on the target system must be altered to accommodate the requirements of the
HP 9000 server being migrated.
1. Log in to the HP 9000 server and get the parameter values:
$ kmtune >/tmp/tunables_hp9000.txt
2. Transfer the output file to the target HP-UX 11i v3 server.

3. Log in to the target server and run the kernel parameter configuration script:
$ /opt/HP9000-Containers/bin/hp9000_conf_tunables \
<HP 9000 kmtune file> <HP 9000 host name>
The hp9000_conf_tunables script provides an option to choose between batch mode
and interactive mode. In the batch mode, a set of selected tunable parameters is updated
automatically. In the interactive mode, users can select the list of tunable parameters to be
updated based on the values on the PA-RISC server.
4. Review the data in the log file /var/opt/HP9000-Containers/logs/
transition_tunables_<hostname>.log and make further changes as required. This
is required because the hp9000_conf_tunables script does not guarantee that all the
required changes are applied. The limitations of the script are as follows:
• It ignores parameters that do not have an impact on applications and are more related
to system administration.
• It does not modify tunable parameters, where there can be conflicts when moving from
multiple HP 9000 servers into containers.
• It does not handle inter-tunable dependencies. Hence, some of the attempted changes
might fail.
• It does not add up the values of parameters when creating additional containers.
5. Containers hosting environments earlier than HP-UX 11i v3 do not support large PID values;
certain commands and applications within such containers fail when they encounter a PID value
larger than 30000. Limit the PID range accordingly:
$ kctune process_id_max=30000
$ kctune nproc=30000
6. HP 9000 Containers does not support tunable base page size. Ensure that the kernel tunable
parameter base_pagesize is set to its default value of 4 KB.
$ kctune base_pagesize=4
7. If you migrate Java or other heavily multi-threaded applications, increase the value of the
parameter pa_maxssiz_32bit to 128 MB.
$ kctune pa_maxssiz_32bit=128MB
8. Currently, if legacy HP 9000 containers are hosted on a system, the name of the global host
cannot contain more than 8 characters. The workaround is to disable overflow error checking
using the command:
$ kctune uname_eoverflow=0
9. Reboot the server.

3.4 Choosing name for HP 9000 container


The container name is also used as its node name and host name. If the HP 9000 environment
being migrated does not support long node names (as is the case with HP-UX 11i v1), limit the
container name to less than or equal to 8 characters. Later, you can edit the container name to a
longer name when needed.
In the following chapters, the container name is referred as <srp_name>.

3.5 Creating file systems for the container


NOTE: Host the container root directories in separate file systems for better isolation and
management. By placing the home for each container in its own LUN, storage performance can
be optimized.

If the container is created on the primary node of an SG cluster and the container package model
is used, the HP 9000 root directory must be a mount point.
Create additional logical volumes and file systems as necessary under the HP 9000 root directory
for mounts to be performed in the future (for /var, application data, and so on). Provide about 4 GB
of additional space under the container /var and about 4 GB under the container /usr for internal
use by HP 9000 Containers.
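The following is a minimal sketch of creating a separate file system for the container /var. The volume group (/dev/vg01), logical volume name, and size are assumptions to adjust for your environment, and the mount is performed only after the container root directory exists (see Chapter 4):
$ lvcreate -L 8192 -n lv_hp9k_var /dev/vg01    # illustrative VG name and 8 GB size
$ newfs -F vxfs -o largefiles /dev/vg01/rlv_hp9k_var
$ mkdir -p /var/hpsrp/<srp_name>/var
$ mount -F vxfs /dev/vg01/lv_hp9k_var /var/hpsrp/<srp_name>/var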

3.6 Installing driver and management software


Install any special drivers that are needed on the host HP-UX 11i v3 server.
Any applications that do system management-related activities (such as resource monitoring) must
be installed and used on the host.

3.7 Creating and configuring an HP 9000 container


For more information about creating an HP 9000 system container, see Chapter 4 (page 21) and
for more information about creating an HP 9000 classic container, see Chapter 5 (page 33).
For more information about container types, see Section 1.3 (page 10).

4 Creating and configuring HP 9000 system container
This chapter explains how to create and configure an HP 9000 system container.

4.1 Setting up user environment for HP 9000 image recovery


If a third-party tool is used to create the HP 9000 server image, and the tool gives preference to
user name and group name over UID and GID respectively, the following settings must be completed
on the host system before recovering the image:
(These steps imply that only the root user can log in to the system during image recovery.)
• To back up the host user-related files, run the following commands:
$ cp -p /etc/passwd /etc/passwd.backup
$ cp -p /etc/group /etc/group.backup
$ cp -p /etc/nsswitch.conf /etc/nsswitch.conf.backup

• Edit the /etc/nsswitch.conf entry for passwd to include only files.


passwd files

• Delete all the entries except root, other, bin, sys, adm, and daemon from the /etc/group file.
• Delete all the entries except root, daemon, bin, sys, and adm from the /etc/passwd file.

4.2 Creating container root directory


Create root directory for HP 9000 system container:
$ mkdir /var/hpsrp/<srp_name>
Mount the VxFS file system for the container root (if applicable):
$ mount -F vxfs <from where> /var/hpsrp/<srp_name>
Set ownership and permissions to the root directory:
$ chown root:sys /var/hpsrp/<srp_name>
$ chmod 0755 /var/hpsrp/<srp_name>
In the following sections of the document, the container root directory is referred as
<hp9000_root>.

4.3 Recovering HP 9000 image


This section explains the steps for recovering an HP 9000 file system image.

4.3.1 Configuring mount points


Create the mount points on the HP-UX 11i v3 host to recover container subdirectories (if applicable).
For example:
$ mkdir <hp9000_root>/var
$ chown bin:bin <hp9000_root>/var
$ mount -F vxfs <from where> <hp9000_root>/var
HP 9000 Containers does not support the /dev and /sbin directories as mount points inside a
container file system.

4.3.2 Using Ignite-UX network recovery archive
You can use an existing Ignite-UX network recovery archive to recover an HP 9000 image on the
target server. However, you cannot use Ignite-UX itself to perform the recovery, because Ignite-UX
cannot restore the archive to an alternate root directory.
To recover an HP 9000 image from an Ignite-UX network recovery archive:
1. Identify the archive. By default, it is located on the Ignite-UX server in the /var/opt/ignite/
recovery/archives/<HP 9000-host-name> directory.
2. Copy the archive file to the Integrity server (or make it visible via an NFS-mount). Do not keep
the archive in the root directory on the system.
3. Uncompress the archive.
4. Recover the image:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
5. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

4.3.3 Using Ignite-UX tape recovery archive


You can use an existing Ignite-UX tape recovery archive to recover an HP 9000 image on the
target server. However, you cannot use Ignite-UX itself to perform the recovery, because Ignite-UX
cannot restore the archive to an alternate root directory.
To recover an HP 9000 image from an Ignite-UX tape recovery archive:
1. Insert the tape into a compatible drive.
2. Extract the archive into file system:
$ copy_boot_tape -u /dev/rmt/0mn -d <directory>
3. Identify the file that corresponds to the file system image in the extracted archive. Typically,
this is the largest file in the extract. In the case of HP-UX 11i v1, it is usually named file0002.
4. Copy the archive file to the Integrity server (or make it visible via an NFS-mount). Do not keep
the archive in the root directory on the system.
5. Recover the archive:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
6. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

4.3.4 Using cpio, tar, and frecover recovery archive


To recover an HP 9000 image using cpio, tar, and frecover:
1. Do one of the following:
—In the case of file archive, copy the files to HP-UX 11i v3 server (or make it visible via an
NFS mount). Do not keep the archive in the root directory on the system.
Recover the archive:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
—In the case of tape archive, insert the tape into the Integrity server and configure the tape
device file in the HP-UX 11i v3 system (/dev/rtape/tape1_BEST).
Recover the archive:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> /dev/rtape/tape1_BEST
2. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

4.3.5 Using other tools for recovery archive


IMPORTANT: If you use third-party tools for recovery, ensure that proper permissions and
ownerships (UID or GID) are preserved. Some tools do not preserve setuid and setgid bits.
For example, verify permissions of the /usr/sbin/sendmail file to ensure that the setuid and
setgid bits are preserved.

4.3.6 Completing the recovery


After recovering the HP 9000 image:
1. Manually verify whether all the basic directories (/etc, /opt, /sbin, /usr, /var) are
recovered.
2. Create directories that are not recovered, and assign proper ownership and permissions to
them.
For example,
$ mkdir <hp9000_root>/var/adm/crash
$ chmod 0755 <hp9000_root>/var/adm/crash
$ chown root:root <hp9000_root>/var/adm/crash
3. If a third-party tool other than cpio, tar, or fbackup is used, restore the files that were
modified before the recovery.
$ cp –p /etc/passwd.backup /etc/passwd
$ cp –p /etc/group.backup /etc/group
$ cp –p /etc/nsswitch.conf.backup /etc/nsswitch.conf

4.4 Creating HP 9000 system container


This section explains the steps for creating an HP 9000 system container including configuring
PRM, adding hp9000sys template, and verifying configuration.

4.4.1 Configuring PRM


If you use PRM to allocate resources between multiple containers, choose whether FSS or PSETs
must be used for CPU allocation. FSS provides finer-grained sizing and more flexibility, but might
cause conflicts when the application performs resource management internally (for example, when
using database resource managers). With PSETs, the cores are reserved even when the container is down.
For FSS, the percentage entitlement is calculated as:
Number of shares assigned to a particular PRM Group
--------------------------------------------------- x 100
Sum of the shares assigned to all PRM Groups
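For example, with illustrative values: if the container's PRM group is assigned 30 shares and the
PRM groups on the system are assigned 30, 50, and 20 shares respectively, the FSS entitlement of
the container is 30 / (30 + 50 + 20) x 100 = 30%.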

4.4.2 Adding hp9000sys template


To create an HP 9000 system container, add the hp9000sys template:
$ srp –add <srp_name> -t hp9000sys
The container creation might take up to 30 minutes to complete.

Configuration parameters
Auto start setting: Determines whether the container must be automatically started at system
boot time. Answer no if the container is being created on the primary node of an SG cluster and
the container package model is chosen.


Root user configuration: A system container has a root user with fewer privileges than the root
user of the HP-UX 11i v3 host system. Enter the root password when prompted.
DNS configuration: By default, the DNS configuration for an HP 9000 system container is the
same as that in the HP 9000 image. To change the configuration, specify the new values of the
DNS parameters when prompted.
Disallowed commands configuration: HP 9000 Containers A.03.01 provides the following two
options for disabling system administration-related commands inside the container:
• By default, HP 9000 Containers A.03.01 overwrites the disallowed commands with a dummy
executable that displays an error message and exits.
• Use compartment rules to restrict commands.
For HP-UX 11.xx containers, select the default option to begin with. For HP-UX 10.xx containers,
select the option to use compartment rules to restrict commands:
Use rules to restrict unsupported commands [no] yes
PRM configuration: HP 9000 Containers uses PRM to allocate resources to individual containers.
During container creation, enter the following parameters:
• CPU Entitlement (shares)—Minimum share of the CPU when the system is at peak load.
• Max CPU Usage (%)—Maximum percentage of the CPU that the container can use.
• Memory Entitlement (shares)—Minimum share of the private real memory when the system is
at peak load.
• Max Memory (%)—Maximum percentage of the private real memory that can be allocated for
the container from the available user memory.
• Shared Memory (megabytes)—The minimum size of the real memory, in megabytes, allocated
to the container for shared memory usage.
Network parameters: A static IP address is essential for the container. DHCP is currently not
supported. Ensure that the IP address, LAN interface, gateway IP, and subnet mask are configured.
The LAN interface can be either private to the container or shared. If an IPv4 address is used on
the HP 9000 server, use the same for the HP 9000 container because the environment might not
support IPv6 completely. If this configuration is on one of the nodes of an SG cluster and the
container IP address is to be managed by SG, answer no to the following question:
Add IP address to netconf file? [yes] no



Error messages
If you get an error message, Could not generate swlist:
• Verify whether the /var/adm/sw directory exists under <hp9000_root>.
• Verify whether the /var/adm/sw directory, or a directory under it such as /var/adm/sw/
products, is a symbolic link. For example, if the link points to the /softdepot directory, create
a link to this directory (with the same path) on the host system:
$ ln -s /<hp9000_root>/softdepot /softdepot
If you get an error message Could not find HP 9000 HP-UX version, configure the OS
version manually by editing the -pa_os_ver parameter in the files /var/hpsrp/<srp_name>/
.ariesrc and /var/hpsrp/<srp_name>/.aries64rc.

4.4.3 Verifying configuration of HP 9000 system container


To verify the configuration of HP 9000 system container:
$ srp –list <srp_name> -v | more

4.4.4 Changing configuration parameters (if required)


To change the configuration parameters:
$ srp –replace <srp_name>

4.4.5 Reverting configuration (if required)


If you encounter any errors during configuration, delete the partial container:
$ srp –delete <srp_name> delete_changes_ok=yes

4.5 Additional container configuration


This section explains the additional configurations that might be required for certain HP 9000
system containers.

4.5.1 Configuring host name and node name


By default, the container name is also used as the node name and host name for the HP 9000
container. For more information about how to modify this configuration, see Section 8.7 (page 53).
Change the host name in the following scenarios:
• Legacy HP 9000 environments might not support long names. If the container name is more
than 8 characters in length, give a different host name or node name to the container.
• If SG is used with the HP 9000 container in the application package model, the HP 9000
container on each node must have a different IP address (but the same compartment name).
Hence, HP recommends configuring different host names on each node.
Configure applications inside the HP 9000 container with the host name. Typically, this involves
editing application configuration files with new host names, but some applications store this name
in databases or in internal formats. For more information about how to edit the host name, see the
application documentation.

4.5.2 Configuring IP address


Configure applications inside the HP 9000 container to listen to the IP address of the container.
Some applications store the IP address in the databases. See the respective application
documentation for reconfiguring IP addresses.
Use the HP 9000 system IP address for the HP 9000 container if the application license depends
on the IP address and cannot be migrated, or if reconfiguring the application for a new IP address
is complex.
For more information about how to reconfigure IP address, see Section 8.6 (page 52).

4.5.3 Configuring additional IP addresses


Applications inside an HP 9000 container might require multiple IP addresses. Analyze the
configuration in the <hp9000_root>/var/opt/HP9000-Containers/etc/rc.config.d/
netconf file to find the number of configured IP addresses on the HP 9000 server.
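One quick way to list the configured addresses, assuming the standard IP_ADDRESS[n] variable
names in netconf, is:
$ grep "^IP_ADDRESS" \
<hp9000_root>/var/opt/HP9000-Containers/etc/rc.config.d/netconf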
If SG is used with an HP 9000 container in the application package model, applications might
require an additional floating IP address.
For more information about how to reconfigure IP address, see Section 8.6 (page 52).

4.5.4 Configuring additional devices


Creating and managing device files is not supported inside an HP 9000 container. The devices
must be provisioned on the HP-UX 11i v3 host system.
To copy a device file from the host into the container, run the following command:
$ srp –add <srp_name> -tune device=<device_path>
For example, this command can be used to provision raw devices used by database applications
for the container. The data from raw devices must be recovered separately from the host system
using standard tools such as dd. You might have to configure the raw device names in application
specific files.
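For example, a minimal sketch of copying raw volume data from the host, assuming a hypothetical
backup image and raw logical volume (substitute the actual names used by the application):
$ dd if=/backup/hp9000_rawvol.img of=/dev/vg01/rlvol3 bs=1024k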
The list of device files copied from the host into the container is recorded in the /var/hpsrp/
<srp_name>.setup/srpdevices.lst file.

4.5.5 Configuring mount points


Configure mounts inside the container root directory in container local fstab or container
pre-start fstab. For more information about how to configure NFS, Autofs, and VxFS mount
points for HP 9000 container, see Section 8.5 (page 50).

4.5.6 Restoring or deleting HP 9000 startup services


As a part of HP 9000 container setup, several daemons are deleted from the HP 9000 RC
directories. The rule applied is that all services that appear in the HP 9000 swlist output, except
the services that are supported inside the container, are moved out of <hp9000_root>/sbin/
init.d. Application daemons that were installed using SD might also get removed.
To restore a deleted service (if required), execute the following script:
$ /opt/HP9000-Containers/bin/hp9000_restore_service
The script prompts for the container name and the name of the RC script, which was moved to the
/sbin-hp9000/init.d directory inside the container during setup. The script also updates the
record in /var/opt/HP9000-Containers/deleted_services.
To delete a service from the container, execute the following script:
$ /opt/HP9000-Containers/bin/hp9000_remove_service

4.5.7 Configuring DCE services


To view or configure DCE, execute the following script:
$ /opt/HP9000-Containers/bin/hp9000_dce_setup <srp_name>
If the script detects that the container is configured as a DCE server, the host name and IP address
of the HP 9000 container must be changed to the values used on the HP 9000 server to avoid
reconfiguring all clients.
If the tool finds a DCE client configuration, and the HP 9000 container is using a host name or IP
address different from that on the HP 9000 server, add this new client to the DCE server using the
dce_config command. If this is not feasible, the only option is to reuse the host name and IP
address from the HP 9000 server.
Edit IP address configuration in the /etc/opt/security/pe_site file if required.

4.5.8 Configuring root cron jobs


During HP 9000 container creation, all cron jobs configured by root are moved out because
they might contain system administration-related jobs, which might not be supported inside the
container. To restore any of these jobs in the HP 9000 container, run the crontab command and
reconfigure the job, or restore entries from the backup file <hp9000_root>/var/opt/
HP9000-Containers/var/spool/cron/crontabs/root.
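For example, a minimal sketch of restoring the complete root crontab from the backup (review the
file first and remove any system administration jobs that are not supported inside the container):
$ srp_su <srp_name> root
$ crontab /var/opt/HP9000-Containers/var/spool/cron/crontabs/root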

4.5.9 Configuring or disabling trusted mode features


Trusted mode support with HP 9000 system containers has the following differences compared to
a native system:
• Audit management happens mostly from the global container, and there are some known
limitations. For more information about how to configure auditing, see Section 4.5.18 (page 29).
• HP SMH or SAM is not available to manage trusted mode inside the container.
To configure trusted mode, carry out the same steps as for configuring auditing.
To convert from trusted mode to standard mode (if required), run the following commands:
$ srp -start <srp_name>
$ srp_su <srp_name> root -c "/usr/lbin/tsconvert -r"
$ srp -stop <srp_name>

4.5.10 Configuring inittab


Examine the <hp9000_root>/var/opt/HP9000-Containers/etc/inittab file to view
the configuration present on the HP 9000 server. Open the container inittab file /var/hpsrp/
<srp_name>/etc/inittab and verify whether application-specific configurations are retained.

4.5.11 Configuring printers


Printer configuration is available as a part of the file system backup from the HP 9000 server. Any
additional printer settings can be configured later using the lp commands inside the container.
If the printers must also be used from the global container, save the configuration on the HP
9000 server and restore it to the HP-UX 11i v3 server using lpmgr as follows:
On the HP 9000 server, run the following commands:
$ mkdir /tmp/lpsave
$ /usr/sam/lbin/lpmgr –S –xsavedir=/tmp/lpsave
Transfer the /tmp/lpsave directory to the Integrity host HP-UX 11i v3 system and run the following
command:
$ /usr/sam/lbin/lpmgr –R –xsavedir=<dir>

4.5.12 Configuring X server


X server with a graphics adapter is not supported inside HP 9000 containers. Xvfb is supported
inside an HP 9000 system container.
To use Xvfb:



1. Open the display screens file:
For example, /etc/X11/X0screens
2. Comment out any line containing /dev/crt.
3. Add configuration:
ServerOptions
ServerMode XVfb
4. Open the /etc/dt/config/Xservers file.
5. Comment out line containing /usr/bin/X11/X :0 or modify it to /usr/bin/X11/Xvfb
:0 -fbdir /tmp.
If an X server is required in only one container on the system, and not required in the global
container, it can be configured with graphics devices using the following steps:
1. Open the /etc/cmpt/<srp_name>.rules file and insert the following line just before the
first line containing #include:
#define ALLOW_RDEVOPS
2. Reset compartment rules:
$ setrules
3. Enable graphics module on the host system and reboot (if not already loaded). For example,
$ kcmodule gvid_core=best gvid=best
$ reboot
4. Verify whether the module is loaded:
$ kcmodule | grep gvid
5. Copy the graphics devices into the container:
$ srp -add <srp_name> -tune device=/dev/gvid
$ srp -add <srp_name> -tune device=/dev/gvid0
$ srp -add <srp_name> -tune device=/dev/gvid_info
6. Copy the input devices into the container. For example,
$ srp -add <srp_name> -tune device=/dev/hid
7. Change the /var/hpsrp/<srp_name>/etc/X11/XF86Config file to reflect new devices.
For example,
Option "Device" "/dev/hid/hid_000"

4.5.13 Configuring additional privileges for HP 9000 system container


The setprivgrp command is not currently supported inside an HP 9000 container. Hence,
privileges such as RTPRIO and MLOCK cannot be configured in the /etc/privgroup file inside
the container. A workaround is to use the setprivgrp command and the /etc/privgroup
configuration file in the global container, after copying the affected group name and GID to the
global /etc/group file.

NOTE: The global configuration applies to groups with the same GIDs in other system containers
on the same host. Therefore, this approach is not recommended when there are multiple containers
on the host, unless it can be ensured that a unique GID is used (across the system) for the groups
that need the privilege.
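For example, a minimal sketch from the global container, assuming a hypothetical group dbgrp
(already added to the global /etc/group with its GID) that needs the RTPRIO and MLOCK privileges:
$ echo "dbgrp RTPRIO MLOCK" >> /etc/privgroup
$ setprivgrp -f /etc/privgroup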

4.5.14 Configuring DDFA


If DDFA is required inside the container, open the /etc/cmpt/<srp_name>.rules file and
insert the line #define ALLOW_MKNOD before the first line containing #include.



NOTE: This enables the MKNOD privilege inside the container, but ensure that the mknod command
is not used for any purpose other than by DDFA itself. In general, using mknod inside the
container is not supported and can result in an undefined system state.

4.5.15 Disabling Autofs


If Autofs is not used, disable this service to save container startup and shutdown time.
Open the file /etc/rc.config.d/nfsconf and set AUTOFS=0.

4.5.16 Configuring telnet for HP-UX 10.xx containers


The HP-UX 10.xx version of telnetd is incompatible with ARIES emulation on HP-UX 11i v3. The
workaround is to copy the files from an HP-UX 11i v1 or HP-UX 11.00 system to the 10.xx container.
The files to be copied are as follows:
/usr/lbin/telnetd
/usr/lib/libc.2
/usr/lib/libsis.1
/etc/inetsvcs.conf
Create a symbolic link inside the container:
$ ln –s /usr/lib/libsis.1 /usr/lib/libsis.sl

4.5.17 Configuring OSI Transport Services


If OSI Transport Services (OTS) is in use on the HP 9000 server, download the version for Integrity
HP-UX 11i v3 and install it on the host system (global). Copy the related devices into the container:
$ srp –add <srp_name> –tune device=/dev/osotipi
$ srp –add <srp_name> –tune device=/dev/otsop

4.5.18 Enabling auditing


Auditing is not supported inside an HP 9000 system container, but auditing can be enabled from
the global container and records can be filtered at container granularity. The selection of users
for auditing must be done inside the container.
To enable auditing in the global, run the audsys(1M) command. For example,
$ audsys -n -c /var/adm/audit_trail -N 1
To select events or system calls for auditing, use the audevent command. Migrating the list of
selected system calls from the PA-RISC file system image to the global is a manual process.
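For example, a minimal sketch of selecting a couple of event categories in the global (the categories
shown are illustrative; match them to the selection that was in use on the HP 9000 server):
$ audevent -P -F -e admin -e login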
The user selection for auditing is retained inside the container file system. You can change the user
settings inside the container by running the audusr (with trusted mode) or userdbset (with
SMSE) command. HP SMH or SAM is not supported inside an HP 9000 container.
For more information about how to filter and view audit records for a container and view the list
of known auditing limitations, see Section 8.13 (page 58).

4.6 Testing HP 9000 system container


If the HP 9000 system container is configured on the primary node of an SG cluster with container
package model, see Section 10.5.1 (page 68) for more information about how to start the HP
9000 container for testing.
Otherwise, start the container:
$ srp –start <srp_name>



All startup messages must say OK. Look for any startup error messages in the /var/hpsrp/
<srp_name>/etc/rc.log file.
Verify the status of the container:
$ srp –status <srp_name> -v
Log in to the container from the host:
$ srp_su <srp_name>
Also, attempt to log in using the telnet and ssh commands.
Start applications using the normal procedures and perform exhaustive functional and performance
testing to ensure compatibility.
Stop the container:
$ srp –stop <srp_name>

4.7 Workarounds for known issues


Some applications, especially database servers and old Java applications, have known issues
inside HP 9000 containers. To verify whether any of the workarounds are applicable to the migrated
environment, see Section 12.6 (page 80).

4.8 Tweaking ARIES configuration


This section explains the additional configuration that might be required for ARIES binary translator
in certain environments.

4.8.1 Configuring for more threads


The number of threads that a 32-bit application can support under ARIES is limited by the value
of the kernel tunable parameter pa_maxssiz_32bit. With the default value of this parameter,
85 threads can be supported. For every additional thread, the parameter must be increased
by 215 KB. For example, if an application needs 300 threads, pa_maxssiz_32bit must be
increased by (300-85)*215*1024 bytes.
If the number of threads that the application needs is unknown, set the parameter to 128 MB, which
suffices for more than 300 threads.
$ kctune pa_maxssiz_32bit=128MB
In addition to tuning the kernel tunable parameter, configure ARIES to support more threads. Add
the following line to the /.ariesrc file:
# start config for more threads
<executable path name> -mem_tune heap_max
# end config for more threads
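For example, a minimal sketch for a hypothetical application at /opt/app/bin/server that needs
about 300 threads: increase pa_maxssiz_32bit by (300-85)*215 KB (roughly 45 MB), or simply
set it to 128 MB as shown above, and add the corresponding entry:
# start config for more threads
/opt/app/bin/server -mem_tune heap_max
# end config for more threads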

4.8.2 Configuring for more stack size


The main thread stack for emulated applications is allocated by ARIES, not by the kernel. Hence,
it is not possible to modify this stack size using the ulimit -s command. The default stack size is
8 MB for 32-bit applications. To increase the stack size to 16 MB, increase the value of the
pa_maxssiz_32bit parameter by 8 MB and add the following lines to the /.ariesrc file:
# start configuration for 16 MB stack
<executable path name> -ssz 16384
# end configuration for 16 MB stack

4.8.3 Configuring machine-specific parameters


To match applications with certain machine-specific parameters from the HP 9000 server, ARIES
(PHSS_41423 or a later patch) provides the following configuration options, which can be added
to the existing /.ariesrc and /.aries64rc files:



-machine_id <uname -i on HP 9000 server>
-machine_ident <getconf MACHINE_IDENT on HP 9000 server>
-machine_serial <getconf MACHINE_SERIAL on HP 9000 server>
-partition_ident <getconf PARTITION_IDENT on HP 9000 server>

Add the options in the following format:
# start config for machine specific parameters
<executable path> <ARIES option> <HP 9000 server value>
# end config for machine specific parameters

To apply this configuration to all executables, use the wildcard / in place of the executable path
name. For example,
/ -machine_id 1338625371 -machine_ident Z3e123a334fc9cd5b -machine_serial SGH4632J0F -partition_ident Z3e123a334fc9cd5b
These parameters are effective only when ARIES option -pa_os_cpu is also specified in the ARIES
RC (/.ariesrc or /.aries64rc) file. This option is automatically configured in these files when
an HP 9000 container is created.

NOTE: The ARIES options described here must not be used where doing so is not legally permitted.
For example, if the configuration is used for reusing an application license, approval from the
respective vendor is required.



5 Creating HP 9000 classic container
This chapter explains how to create and configure an HP 9000 classic container.

5.1 Setting up user environment for image recovery


A classic container shares the /etc directory and login mechanism with the HP-UX 11i v3 host
system. Therefore, merge the HP 9000 users and groups into the host before the recovery process.
To set up the user environment:
1. Recover the HP 9000 /etc directory.
The input for user migration process is a copy of the /etc directory from the HP 9000 server.
Get a tar archive of /etc and recover it under the /tmp directory on the HP Integrity server.
You can also recover /etc from the image.
The following example shows how to extract the /etc directory from a complete fbackup
image:
$ mkdir /tmp/HP9000
$ echo "i etc" > /tmp/HP9000/graph
$ cd /tmp/HP9000
$ frecover -x -X -f <image file> -g /tmp/HP9000/graph
2. Configure the system.
If the HP 9000 server is configured with trusted mode, enable trusted mode on the HP Integrity
host using HP SMH.
If the HP 9000 server is configured with shadow passwords, enable shadow mode on the HP
Integrity host using the pwconv command.
3. Migrate users and groups.
Run the user merge tool:
$ /opt/HP9000-Containers/bin/hp9000_conf_users \
<path to recovered /etc directory>
Examine errors or warnings in the /var/opt/HP9000-Containers/logs/
user_config.log file.
4. Install and configure user management-related products on the host.
The SSH login process to a classic container is native (it does not use products from the HP
9000 image). Towards the end of the login process, sshd does a chroot into the HP 9000
file system and invokes a PA-RISC shell. So, if NIS, LDAP, or any other directory service is
required, configure it on the host system.

5.2 Creating container root directory


The container root directory must be created under the host root directory.
For example,
$ mkdir /hp9000-root
Mount the file system created to host the container root (if applicable):
$ mount –F <fstype> <from where> /hp9000-root
Set ownership and permissions:
$ chown root:sys /hp9000-root
$ chmod 0755 /hp9000-root



5.3 Recovering HP 9000 image
This section explains the steps for recovering HP 9000 image.

5.3.1 Configuring mount points


To recover the files within the container onto mount points (if applicable), create them on the HP-UX
11i v3 host. For example,
$ mkdir <hp9000_root>/var
$ chown bin:bin <hp9000_root>/var
$ mount –F vxfs <from where> <hp9000_root>/var

5.3.2 Using Ignite-UX network recovery archive


You can use an existing Ignite-UX network recovery archive to replicate the files on the target
server. However, you cannot use Ignite-UX itself for recovery because the image cannot be restored
to an alternate root directory.
To recover an HP 9000 image from an Ignite-UX network recovery archive:
1. Identify the archive. By default, it is located on the Ignite-UX server in the /var/opt/ignite/
recovery/archives/<HP 9000-host-name> directory.
2. Copy the archive file to the Integrity server (or make it visible via an NFS-mount). Do not keep
the archive in the root directory on the system.
3. Uncompress the archive.
4. Recover the image:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
5. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

5.3.3 Using Ignite-UX tape recovery archive


You can use an existing Ignite-UX tape recovery archive to recover an HP 9000 image on the
target server. However, you cannot use Ignite-UX itself to perform the recovery because the image
cannot be restored to an alternate root directory.
To recover an HP 9000 image from an Ignite-UX tape recovery archive:
1. Insert the tape into a compatible drive.
2. Extract the archive into file system:
$ copy_boot_tape -u /dev/rmt/0mn -d <directory>
3. Identify the file in the extract that corresponds to the file system image. Typically, this is the
largest file in the extract. For HP-UX 11i v1, it is usually named file0002.
4. Copy the archive file to the Integrity server (or make it visible via an NFS-mount). Do not keep
the archive in the root directory on the system.
5. Recover the archive:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
6. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

5.3.4 Using tar and frecover recovery archive


To recover an HP 9000 image using tar and frecover:



1. Do one of the following:
—In the case of file archive, copy the file to the HP-UX 11i v3 server (or make it visible via
an NFS-mount). Do not keep the archive in the root directory on the system.
Recover the archive:
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> <image-file>
— In the case of tape archive, insert the tape into the Integrity server and present the tape to
the HP-UX 11i v3 system. For example, /dev/rtape/tape1_BEST.
$ /opt/HP9000-Containers/bin/hp9000_recover_image \
<hp9000_root> </dev/rtape/tape1_BEST>
2. Ignore any errors related to the recovery of the dev directory that are recorded in the log file.

5.3.5 Using other tools for recovery


IMPORTANT: If you use third-party tools for recovery, ensure that proper permissions and
ownership (UID and GID) are preserved. Some tools do not preserve setuid and setgid bits.
For example, verify permissions in the /usr/sbin/sendmail file to ensure that the setuid and
setgid bits are preserved.

5.3.6 Completing the recovery


After recovering the HP 9000 image:
• Manually verify whether all the basic directories (/etc, /home, /opt, /tmp,/usr, /var,
/stand) are recovered.
• Manually create the directories that are not copied, and assign proper ownership and
permissions.
For example,
$ mkdir <hp9000_root>/var/adm/crash
$ chmod 0755 <hp9000_root>/var/adm/crash

5.4 Creating HP 9000 classic container


To create HP 9000 classic container, add the hp9000cl template:
$ srp –add <srp_name> -t hp9000cl
The following configuration parameters must be set for an HP 9000 classic container:
Auto start setting: Controls whether the container must be started (through an RC script) at the
time of server boot.
HP 9000 root directory: Specifies the root path <hp9000_root>, where the HP 9000 files are
recovered.
Network parameters: A static IP address is essential for the container. DHCP is currently not
supported. Ensure that the IP address, LAN interface, gateway IP, and subnet mask are configured
properly. The LAN interface can be either private to the container or shared. The network
configuration is actually performed on the host system (not inside the container). If the HP 9000
server uses an IPv4 address, use the same for the HP 9000 container because the environment
might not provide complete IPv6 support.



The container creation can take up to 30 minutes.
To list configuration:
$ srp –list <srp_name> -v | more
To change configuration, when required:
$ srp –replace <srp_name>
To revert the configuration, when required:
$ srp –delete <srp_name> delete_changes_ok=y

5.5 Additional container configuration


This section explains container configurations that might be required for certain environments.

5.5.1 Configuring host name or node name


By default, the container name is used as the node name and host name for the HP 9000 container.
For more information about how to modify this configuration, see Section 8.7 (page 53). Legacy
HP 9000 environments might not support long names. Hence, if the container name has more than
8 characters, the container must be given a different host name or node name.
Configure applications inside the HP 9000 container with the host name. Typically, this involves
editing the configuration files, but some applications store this name in databases or in internal
formats. For more information about how to edit the host name, see the respective application
documentation.

5.5.2 Configuring IP address


Configure applications inside the HP 9000 container to listen to the IP address of the container.
Some applications store the IP address in the databases. For more information about reconfiguring
IP addresses, see the respective application documentation.
Use the HP 9000 system IP address for the HP 9000 container when the application license depends
on the IP address and cannot be migrated, or when reconfiguring the application for a new IP
address is complex.
For more information about how to reconfigure IP address, see Section 8.6 (page 52).

5.5.3 Configuring additional IP addresses


Applications inside an HP 9000 container might require multiple IP addresses. Analyze the
configuration in the <hp9000_root>/etc-hp9000/rc.config.d/netconf configuration
file to find the number of configured IP addresses on the HP 9000 server.
If SG is used with HP 9000 container in the application package model, applications might require
an additional floating IP address.
For more information about how to reconfigure IP addresses, see Section 8.6 (page 52).

5.5.4 Configuring mount points


For information about how to configure NFS, Autofs, and VxFS mount points for the HP 9000
container, see Section 8.5 (page 50).

5.5.5 Restoring HP 9000 startup services


When setting up an HP 9000 container, several daemons are deleted from the HP 9000 RC directories.
The rule applied is that all the services that appear in the HP 9000 swlist output (except for those
that are supported inside the container) are moved out of <hp9000_root>/sbin/init.d. Application
daemons that were installed using SD might also get removed.
A backup of the RC scripts is available in the /sbin-hp9000/init.d directory and can be
restored manually (if required).



5.5.6 Configuring root cron jobs
During HP 9000 container creation, all cron jobs configured by root are moved out because they
might contain system administration-related jobs, which might not be supported inside the container.
If you want to run any of these jobs in HP 9000 container, reconfigure the jobs using the crontab
command or restore entries from the <hp9000_root>/var/opt/HP9000-Containers/var/
spool/cron/crontabs/root backup file.

5.5.7 Configuring inittab startup file


To configure the startup configuration file inittab:
1. Examine the <hp9000_root>/var/opt/HP9000-Containers/etc/inittab to view
the configuration on the HP 9000 server.
2. Open the file /var/hpsrp/<srp_name>/etc/inittab.
3. Copy each application related entry to the container inittab.
4. Modify the fourth field (which contains the path of the executable) by prefixing with chroot
<hp9000_root>.
For example,
appdaemon:3456:respawn:chroot /hp9000 /opt/app/bin/appd

5.5.8 Configuring printers


To configure printers:
1. Store the configuration from the HP 9000 server and restore it to the HP-UX 11i v3 server
using the lpmgr command.
On the HP 9000 server, run:
$ mkdir /tmp/lpsave
$ /usr/sam/lbin/lpmgr –S –xsavedir=/tmp/lpsave
Transfer the /tmp/lpsave directory to the Integrity host HP-UX 11i v3 system and run:
$ /usr/sam/lbin/lpmgr –R –xsavedir=<dir>

5.5.9 Configuring non-local users


The HP 9000 local users (from /etc/passwd) are automatically added to a container login group
when the hp9000cl template is added. This group can access the container using RBAC. To grant
access to the non-local users, follow the instructions in Section 8.3 (page 49).

5.5.10 Configuring trusted users


Examine the /tcb/files/auth directory on the HP-UX 11i v3 system and verify whether the trusted
mode users are merged from <hp9000_root>/tcb.
Examine the /var/opt/HP9000-Containers/logs/migrated_users file and verify whether
any of the UIDs were changed as a part of the user merge. If there are changed UIDs, the changes
must be reflected under /tcb/files/auth.
For example, if the log file reports a change in the UID of user1, edit the u_id field in the
/tcb/files/auth/u/user1 file.
Enable auditing via HP SMH, if required. The audit IDs are automatically picked up from the files
in /tcb/files/auth.



5.5.11 Configuring sendmail daemon
You can run the sendmail daemon on the host HP-UX 11i v3 system (not inside the HP 9000
classic container). Because the /etc directory is shared with the host, you must copy any specific
configuration from HP 9000 /etc/mail/sendmail.cf, /etc/mail/aliases, and so on to
the global /etc/mail directory.

5.5.12 Configuring xinetd service


A workaround for the unavailability of inetd services inside a classic HP 9000 container is to
use xinetd.

WARNING! HP has not extensively tested xinetd configuration and hence, there are certain
limitations when using it inside a container.
To configure xinetd and setup RC scripts:
1. Stop and delete the existing container:
$ srp -stop <srp_name>
$ srp -delete <srp_name> delete_changes_ok=y -b
2. Install xinetd on the HP 9000 server and restore the backup on the Integrity server as
described in Section 5.3 (page 34).
3. Recreate the container:
$ srp –add <srp_name> -t hp9000cl
4. Run the configuration tool provided with the product:
$ /opt/HP9000-Containers/bin/hp9000_xinetd_setup <srp_name>
5. If the script exits with errors related to itox, update the <hp9000_root>/etc-hp9000/
inetd.conf file so that it contains only the entries related to the minimum required services.
Run the hp9000_xinetd_setup script again.

5.6 Testing HP 9000 classic container


Start HP 9000 classic container:
$ srp –start <srp_name>
All startup messages must say OK. Verify whether there are any startup error messages in the /var
/hpsrp/<srp_name>/etc/rc.log file.
Verify the status of the container:
$ srp –status <srp_name> -v
Log in to the container:
$ ssh <srp_name>
Start applications for testing just the way you do it on the HP 9000 server.
Stop the container:
$ srp –stop <srp_name>

5.7 Configuring ARIES parameters


Some application environments might require additional ARIES configuration. For more information
about ARIES configuration, see Section 4.8 (page 30).



5.8 Workarounds for common issues
Some applications, especially database servers and old Java applications, have known issues
inside HP 9000 containers. To verify whether any of the workarounds are applicable to the migrated
environment, see Section 12.6 (page 80).



6 Upgrading HP 9000 Containers versions
This chapter describes the steps for upgrading the current version of HP 9000 Containers to the
latest version.

6.1 Upgrading from HP 9000 Containers A.03.0x


You can install HP 9000 Containers A.03.0y on a system that has containers created using HP
9000 Containers A.03.0x (where x is less than y).
Before installing the new version of HP 9000 Containers, back up any configuration files that were
manually modified. For example,
/opt/HP9000-Containers/config/hp9000_switch_commands
$ cd /opt/HP9000-Containers/config
$ cp –p hp9000_switch_commands hp9000_switch_commands.bkp
Then, install the new depot:
$ swinstall –s <path to HP9KContainers depot> \*
When the installation is completed, restore the configuration file (that is backed up):
$ cd /opt/HP9000-Containers/config
$ cp –p hp9000_switch_commands.bkp hp9000_switch_commands
During the installation process, an existing HP 9000 container does not undergo any change. If
the container rules file /etc/cmpt/<srp_name>.rules was modified manually after a previous
container creation or upgrade, take a backup of it. Then run the following commands:
$ srp –stop <srp_name>
$ srp –replace <srp_name> -s init,cmpt
$ srp –start <srp_name>

6.2 Upgrading from HP 9000 Containers A.01.0x


HP 9000 Containers A.01.0x used a container model similar to the classic container in HP 9000
Containers A.03.0x. You can manually upgrade such a container to a classic container using
some scripts provided with the newer HP 9000 Containers version.
To switch a container to the system container type, delete and reconfigure the container (with the
same file system). This involves some environment-specific steps and more manual effort than
upgrading to classic container type.
In either case of upgrade, a downtime is required.

6.2.1 Upgrading to HP 9000 classic container


HP 9000 Containers requires HP-UX 11i v3 March 2011 update (or later). If the existing container
host Integrity system uses an older version of HP-UX 11i v3, update it to the latest HP-UX 11.31
OE.
To upgrade a container to HP 9000 classic container type:
1. Stop the container after updating the HP-UX 11i v3 OE:
$ srp –stop <srp_name>
2. Manually associate the compartment tag if it is not present.
Associate the compartment tag for every additional (non-primary) IP address configured for
use by the container in the /etc/rc.config.d/netconf file. This was not a mandatory
requirement in earlier versions of the product, but it is mandatory with HP 9000 Containers
A.03.01. The tag format is the same as that used for the primary IP address of the container.
For example, if there is an IPv4 address configured for use in the mysrp container, such as
IP_ADDRESS[3]=<2nd IP>, associate a tag using:
IPV4_CMGR_TAG[3]='compartment="mysrp" template="base" service="network" id="2"'
3. Increment the id value for each such IP address.
For example, if a third IP address for mysrp, IP_ADDRESS[4]=<3rd IP>, is present, the tag
must be as follows:
IPV4_CMGR_TAG[4]='compartment="mysrp" template="base" service="network" id="3"'
4. Verify the version of HP-UX Containers (SRP) on the system:
$ swlist | grep HP-UX-SRP
If the SRP version is earlier than A.03.01, install the latest HP-UX Containers depot:
$ swinstall –x autoreboot=true –s <path to depot> \*
Do not remove the existing SRP version.
5. Configure system wide SRP parameters after reboot:
$ srp_sys –setup
6. Accept the prompt to migrate existing containers:
Migrate all existing workload containers [y] y
7. Select to disable PRM and IPFilter:
Disable PRM [n]y
8. Select default values for other parameters, if required.
9. Reboot the server, and shut down the container if it is running:
$ srp –stop <srp_name>
10. Run the migration script again if any error is reported during container migration:
$ /opt/hpsrp/bin/util/srp_migrate –c <srp_name>
11. Examine the version of HP 9000 Containers on the system:
$ swlist | grep HP9000-Containers
$ swlist | grep HP9KContainers
If the version is earlier than A.03.00, remove the existing version before installing A.03.0x:
$ swremove HP9000-Containers
12. Install the latest version of HP 9000 Containers.
13. Run the HP 9000 container-specific migration script:
$ /opt/HP9000-Containers/bin/hp9000_migrate <srp_name>
14. Configure the container host name in the /var/hpsrp/<srp_name>/etc/rc.config.d
/netconf file, if it is different from the container name.
15. Start the container and verify it:
$ srp –start <srp_name>

6.2.2 Upgrading to HP 9000 system container


Before upgrading a container to the HP 9000 system container type, verify whether the HP 9000
image can be recreated from the HP 9000 server. If it can, HP recommends creating an HP 9000
system container afresh from the image rather than upgrading. If the HP 9000 image cannot be
recreated, delete the container configuration and reconfigure it with the system container type.
To delete the container configuration and reconfigure:
1. Back up HP-UX 11i v3 server image before the upgrade.
2. Stop the container:
$ srp –stop <srp_name>
3. Back up container network configuration and rules files for future reference:
$ cp -p /etc/rc.config.d/netconf /etc/rc.config.d/netconf.bkp
$ cp –p /etc/cmpt/<srp_name>.rules \
/etc/cmpt/<srp_name>.rules.bkp
4. Delete the container:
$ srp –delete <srp_name>
5. Remove any additional IP addresses from the /etc/rc.config.d/netconf file that are
configured for the container.
6. Move the /tcb folder inside the HP 9000 file system:
$ mv <hp9000_root>/tcb <hp9000_root>/tmp
At this time, the shared directories inside containers are restored to the original versions that came
over from the HP 9000 server. However, this might not be a valid operation for the /etc folder
because some files might have changed after the original container was created. Restore such files
manually as follows:
1. Uncompress and expand the etc-native.tar.gz file created inside <hp9000_root>:
$ cd /tmp
$ cp <hp9000_root>/etc-native.tar.gz .
$ gunzip ./etc-native.tar.gz
$ tar –xvf ./etc-native.tar
2. Copy the files in the /etc directory that might have been updated after the previous container
creation:
For example,
$ cp /tmp/etc/passwd <hp9000_root>/etc/passwd
$ cp /tmp/etc/group <hp9000_root>/etc/group
If the /var directory is not on a separate file system from root file system, ensure that /var/
hpsrp is created on a new file system.
3. Move the HP 9000 file system to its new root:
$ mv <hp9000_root> /var/hpsrp/<srp_name>
4. HP 9000 Containers A.03.0x requires HP-UX 11i v3 March 2011 update (or later). If the
current version is not March 2011 (or later), update it to the latest HP-UX 11i v3 OE.
5. Verify the version of HP 9000 Containers on the system:
$ swlist | grep HP9000-Containers
$ swlist | grep HP9KContainers
If HP 9000 Containers version is earlier than A.03.00, remove the existing version before
installing A.03.0x:
$ swremove HP9000-Containers
6. Verify the version of SRP on the system by running the following command:
$ swlist | grep HP-UX-SRP
If SRP version is earlier than A.03.01, install the latest depot:



$ swinstall –x autoreboot=true <new SRP depot>
Enable SRP:
$ srp_sys –setup
PRM can be enabled if you want to host other containers on the same system.
7. Install the latest version of HP 9000 Containers.
8. Install the recommended patches listed in Section 2.2 (page 15).
When the upgrade is completed, reboot the system. After reboot, create the HP 9000 system
container by following the steps in Chapter 4 (page 21).
There can be other application-specific files in the /etc directory that might have been updated
after the transition from the HP 9000 server. If application testing encounters errors, examine
whether such files need to be restored from the /tmp/etc directory.
After the testing is completed, delete all the HP 9000 users and groups from the global /etc/
passwd and /etc/group files on the Integrity host system.



7 HP 9000 Containers file system layout
This chapter explains the file system layout for HP 9000 system and classic container types, and
the operations performed on the file systems while creating a container.

7.1 HP 9000 system container file system


HP 9000 system container has a private HP 9000 file system in the /var/hpsrp/<srp_name>
directory. The container does not have write permission to the directories that are outside its file
system.
The private directories inside the system container contain files that are recovered from the HP
9000 file system image. However, the file system undergoes the following changes while a container
is created:
• The HP 9000 /dev directory is moved to dev-hp9000. A container private /dev is created during
configuration with a set of default devices copied from the host system. You can view the list
of devices in the /opt/HP9000-Containers/config/hp9000_devices file.
• Unsupported system services are removed from HP 9000 /sbin. A copy of the original
directory is preserved for reference as /sbin-hp9000.
• A set of predefined products and files are copied from the host into the container. You can
find the list of products and individual files copied in the /opt/HP9000-Containers/
config/hp9000sys_copy_products and /opt/HP9000-Containers/config/
hp9000sys_copy_files respectively.
• Changes are made to some specific files inside the container file system such as /etc/fstab,
/etc/inittab, /etc/rc.config.d/netconf, and /etc/hosts.
• ARIES configuration files .ariesrc and .aries64rc are created in the container root
directory.
• The crontab files owned by the root user inside the container are moved so that system
administration-related cron jobs do not get automatically enabled in the container.
• System administration-related commands are either overwritten or denied read and execute
permissions inside the container (depending on the option selected while creating the container).
You can view the list of such commands in the /opt/HP9000-Containers/config/
hp9000sys_delete_commands file.
Figure 1 (page 46) shows the file system layout for a host with HP 9000 system container configured.



Figure 1 HP-UX 11i v3 Integrity file system configured with HP 9000 system container

There are two directories on the host system that are shared (using read only loopback mounts)
with HP 9000 system containers—/usr/lib/hpux32 and /usr/lib/hpux64. These directories
bring in ARIES libraries (for running PA-RISC executables) and other native Integrity libraries (for
running native commands and tools) inside the container.

7.2 HP 9000 classic container file system


On an HP-UX 11i v3 host OS instance, the system has three root directories. The directories are:
• Host (global container) HP-UX 11i v3 root (/)
• HP-UX SRP root (/var/hpsrp/<srp_name>)
• HP 9000 container root (/<hp9000_root>)
The HP 9000 container file system is not completely isolated from the HP-UX 11i v3 file system.
HP 9000 system daemons, other than the cron daemon, are not supported inside a classic container.
Applications inside the container interact with system services running on the HP-UX 11i v3 host
system. To enable this communication, a part of the file system is shared between the container
and the host system.
The shared directories are as follows:
• /dev
• /etc
• /net
• /tcb
• /usr/lib/hpux32
• /usr/lib/hpux64
• /var/adm/syslog
• /var/adm/userdb
• /var/mail
• /var/news
• /var/opt/dce/rpc
• /var/uucp
• /var/yp
• All subdirectories of /var/spool except /var/spool/cron
In addition, there might be a requirement to share file system mount points to be accessed from
within the HP 9000 container. For information about how to implement mount point sharing, see
Section 8.5 (page 50).
File system sharing is implemented through LOFS, also known as loop-back mounts, to HP 9000
container directories from corresponding native directories. LOFS mounts are performed as a part
of the HP 9000 container startup. This is enabled by configuring /var/hpsrp/<srp_name>/
etc/fstab. The mount points are critical and must always stay active for the proper functioning
of applications inside the HP 9000 container.
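For example, an illustrative entry in this fstab for a container rooted at /hp9000-root might look
like the following (the actual entries are generated when the container is created):
/etc /hp9000-root/etc lofs defaults 0 0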
Figure 2 (page 47) shows the file system layout with the HP 9000 classic container configured.

Figure 2 HP-UX 11i v3 Integrity file system configured with HP 9000 classic container

While creating a classic container, the following actions take place:


• Adds HP 9000 users to a login group and grants that group access to the container using
RBAC with the role SRPlogin-<srp_name>.
• Configures /var/hpsrp/<srp_name>/etc/cmpt/fstab with the loop-back mount points
required to implement directory sharing.
• Backs up /sbin, /dev, and all the shared directories inside the container file system. The
backup directories contain -hp9000 suffixed to their original name.
• Merges files from <hp9000_root>/etc and <hp9000_root>/tcb to the corresponding
directories on the HP-UX 11i v3 system based on heuristics.
• Creates a set of symbolic links in <hp9000_root>/usr/lib/security.



• Deletes unsupported system daemons from <hp9000_root>/sbin/init.d (and the
corresponding RC links).
• Copies the HP 9000 container-specific RC script (hp9000_rc) and RC links to /var/hpsrp/
<srp_name>/sbin/init.d.
• Moves root crontab file so that system administration-related cron jobs do not get
automatically enabled inside the container.
• Creates the ARIES resource configuration files .ariesrc and .aries64rc under the
container root directory.

7.3 HP 9000 Containers directories


HP 9000 Containers depot installation creates the following directories under /opt/
HP9000-Containers/:
bin: setup, cleanup and management scripts
docs: documentation
config: configuration for setup
newconfig: default configuration files, which are copied into the container
HP 9000 Containers depot also installs files in the following directories:
/opt/hpsrp/etc/templates
/opt/hpsrp/bin/update-ux
/opt/hpcmgr/lib/Cmgr
/opt/hpcmgr/lib/Util
/opt/hpsmh/data/htdocs/srpgui
/usr/lbin/sw/post_session
/usr/share/man/man5.Z/container_hp9000.5
A record of the changes made to the file system during the HP 9000 container setup is stored in
the <hp9000_root>/var/opt/HP9000-Containers directory.
The HP 9000 container configuration log is available in the /var/opt/HP9000-Containers/
logs directory.

IMPORTANT: Preserving the record of changes is critical for running a proper cleanup if and
when the HP 9000 container is to be deleted or reconfigured.



8 Administration of HP 9000 Containers
Most of the administration tasks for HP 9000 Containers must be performed from the HP-UX 11i
v3 host system (referred to as the global container in the following sections).

8.1 Administrator privileges


By default, the root user on the host system is assigned the administrator privileges for managing
the lifecycle (start, stop, export, import, delete, and modify) of the container. Using RBAC, the
privileges for managing the lifecycle of an HP 9000 container can be assigned to additional users.
To assign administrator privileges, run the following command:
$ roleadm add <user-name> SRPadmin-<srp_name>
To delete an administrator, run the following command:
$ roleadm delete <user-name> SRPadmin-<srp_name>

8.2 Start and stop the HP 9000 container


To start the HP 9000 container:
$ srp –start <srp_name>
To stop the HP 9000 container:
$ srp –stop <srp_name>
If any warning (for example, Certain processes could not be terminated) is displayed
when stopping the container, retry the process using the same command.
The container is locked to prevent other operations during lifecycle operations. The lock file,
srp_<srp_name>.lck is created in the /var/opt/hpsrp/tmp directory.
To enable or disable auto start at system boot time:
$ srp –replace <srp_name> -s init
Startup messages are logged in the /var/hpsrp/<srp_name>/etc/rc.log file on the host
system. Inside a system container, this file is accessible as /etc/rc.log. It is not possible to
access this log file from within a classic container.

8.3 User account management


This section describes how to perform user management activities for HP 9000 system and classic
container types.

8.3.1 HP 9000 system container


User management activities can be performed locally just as on HP 9000 server. The container
has a local root user with complete access to the HP 9000 file system.
To reset the container root password from the global container, run the following command:
$ srp –replace <srp_name> -s init
User accounting and quota are not supported inside system containers.

NOTE: HP recommends not configuring any users in the global container other than the users
and groups related to system administration or system management applications.



8.3.2 HP 9000 classic container
User management must be performed in the global container. As a part of HP 9000 container
configuration, a group <srp_name>-login is created, which has access to the container. All
local users from HP 9000 /etc/passwd file are added to this auxiliary group.

Add a new user for the container


1. Log in to the global container as root and run the useradd command. There is no need to
prefix the <hp9000_root> directory in this step while specifying the home directory.
2. Create the home directory inside the <hp9000_root> directory. Set the permission for the
home directory to 0755.
3. Add user to one of the HP 9000 container login groups. The default login group name (created
at setup time) starts with <srp_name>-login.
$ groupmod –a -l <username> <srp_name>-login

Allow container access for a group of users


To allow access for a group of users to HP 9000 container, run the following command:
$ roleadm assign \&<group-name> SRPlogin-<srp_name>

Deny container access for a group of users


To deny access for a group of users to HP 9000 container, run the following command:
$ roleadm revoke \&<group-name> SRPlogin-<srp_name>

8.4 Configuring SSH authorization keys


The section provides information about configuring SSH authorization keys for both system and
classic container types.

8.4.1 HP 9000 system container


For system container, you can generate and use SSH authorization keys just the way you do it on
an HP 9000 server.

8.4.2 HP 9000 classic container


To automatically log in to an HP 9000 container using SSH authorization keys, you must create
additional home directories in the global container.
To configure SSH keys for a user:
1. Create a home directory on the host system (global container) with the same path as inside
the HP 9000 container. Change permissions of the home directory to 0755 and ownership
to the individual user.
2. Log in to the host system and create a $HOME/.ssh directory with 0700 permissions.
3. Log in to the client system (from where automatic login is to be allowed) and generate an ssh
key:
$ ssh-keygen -t dsa
4. Add the contents of $HOME/.ssh/id_dsa.pub on the client system to $HOME/.ssh/
authorized_keys on the target system (global container).
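For example, a minimal sketch for a hypothetical user appuser whose home directory already exists
on the global container, run from the client system:
$ ssh-keygen -t dsa
$ cat $HOME/.ssh/id_dsa.pub | \
ssh appuser@<global-host> 'cat >> $HOME/.ssh/authorized_keys'
Ensure that the authorized_keys file is owned by the user and is not group or world writable.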

8.5 Configuring mount and export points


This section describes how to configure file system mounts and exports with HP 9000 containers.



8.5.1 Configuring NFS and Autofs clients
HP 9000 system container
NFS and Autofs are supported inside an HP 9000 system container. Therefore, you can configure
them inside the container file system just the way you do it on the HP 9000 server.

HP 9000 classic container


NFS and Autofs are not supported inside an HP 9000 classic container. You can configure them
on the host system and then expose them to the container.
Configure the /etc/fstab, /etc/auto_master, and /etc/auto.direct files on the host.
Perform the actual mounts on the host system. Then make the mount point visible inside the HP 9000
classic container using the following command:
$ /opt/HP9000-Containers/bin/hp9000_link_dir \
<directory> <srp_name>

8.5.2 Configuring VxFS mount points


HP 9000 system container
To configure VxFS mounts with HP 9000 system containers, the following options are available:
Configure pre-start mounts
Pre-start mounts are done from the global to the container file system when the container starts up.
They get unmounted when the container shuts down. The configuration must be done in /var/
hpsrp/<srp_name>.setup/fstab using the same format used in the host /etc/fstab. For
example,
$ echo "\n/dev/vg01/lvol2 /var/hpsrp/mysrp/mnt vxfs \
delaylog 0 2" >> /var/hpsrp/mysrp.setup/fstab
Configure container local mounts
Make the logical volume visible inside the container and configure the mount in the local fstab.
For example,
$ srp –add <srp_name> -tune device=/dev/vg01/lvol2
$ srp –add <srp_name> -tune device=/dev/vg01/rlvol2
$ echo "\n/dev/vg01/lvol2 /mnt vxfs delaylog 0 2" \
>> /var/hpsrp/mysrp/etc/fstab
This facility is not currently supported for the /usr file system because the container has to do an
LOFS mount of /usr/lib/hpux32 and /usr/lib/hpux64 before it can start any service. It is
not supported for /var either, because container startup needs to read this directory.
Configure the mount on the HP-UX 11i v3 host
Configure the mount on the HP-UX 11i v3 host with the mount point inside the container. For
example, if there is a /mnt mount point on the HP 9000 server, configure the following:
$ echo "\n/dev/vg01/lvol2 /var/hpsrp/mysrp/mnt vxfs \
delaylog 0 2" >> /etc/fstab
Currently, there is a known limitation with this option. After a system reboot, the bdf and mount
commands inside the container do not display information for these mount points. The issue is
related to the RC sequencing order between fstab processing and SRP initialization, and does not
have an immediate fix. Hence, HP recommends using either the container pre-start fstab or the
container local fstab file for configuring mounts.
Global mounts can be configured in SG packages (if required) without encountering the issue
mentioned above, because SRP initialization completes before SG performs the mounts.



HP 9000 classic container
An HP 9000 classic container does not support running the mount command inside it. Hence, configure
the mount point on the HP-UX 11i v3 host system, perform the mount, and make it visible inside the
container using the hp9000_link_dir tool.
$ /opt/HP9000-Containers/bin/hp9000_link_dir \
<directory> <srp_name>
For example,
$ echo "\n/dev/vg01/lvol2 /mnt vxfs delaylog 0 2" \
>> /etc/fstab
$ mount /mnt
$ /opt/HP9000-Containers/bin/hp9000_link_dir \
/mnt mysrp
Alternatively, you can mount directly from the global container to a container file system directory.
$ mkdir <hp9000_root>/mnt
$ echo "\n/dev/vg01/lvol2 <hp9000_root>/mnt vxfs \
delaylog 0 2" >> /etc/fstab

8.5.3 Configuring NFS exports


An NFS server is not supported inside an HP 9000 container. Hence, configure the NFS exports in
the global container and specify the complete path <hp9000_root>/dir, where dir indicates
the directory originally exported from the HP 9000 system. On the client systems, ensure that the
host name configured for the NFS mount is the global host name and not the name of the HP 9000
container.
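As a hypothetical example, if the HP 9000 server exported /data and the container root is /hp9000, the export on the host (global container) might look like the following; depending on the NFS/ONC+ version on the host, the entry belongs in /etc/dfs/dfstab (share syntax) or /etc/exports:
share -F nfs -o rw /hp9000/data
On a client, the mount then refers to the global host name:
$ mount -F nfs <global-hostname>:/hp9000/data /data
The path /hp9000 and the host name are placeholders for your own <hp9000_root> and global host name.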

8.6 Modifying IP address configuration of HP 9000 container


8.6.1 Changing primary IP address
To change the IP address:
1. Stop the HP 9000 container:
$ srp -stop <srp_name>
2. Reconfigure the container network parameters:
$ srp -replace <srp_name> -s network
If the IP address is managed by SG, answer no to the following question:
Add IP address to netconf file? [yes] no
3. Open the container sshd configuration file (by default /var/hpsrp/<srp_name>/opt/
ssh/sshd_config) and change the ListenAddress if it was configured:
ListenAddress <new IP-address>
4. Reconfigure applications with the new IP address.
5. Update the /etc/hosts and /var/hpsrp/<srp_name>/etc/hosts files to reflect the new
mapping.
6. Start the HP 9000 container:
$ srp -start <srp_name>

8.6.2 Adding a new IP address for an HP 9000 container


To add a new IP address for an HP 9000 container:



1. Find the number of IP addresses already assigned to the container:
$ srp -list <srp_name> -s network -v
2. Find the id value for the next IP address. For example, if no additional IP address (apart from
the container primary address) has been assigned yet, the next id is 2.
3. Add the new IP address:
$ srp -add <srp_name> -s network -id "<next id>"
If the IP address is managed by SG, answer no to the following question:
Add IP address to netconf file? [yes] no

8.6.3 Changing global IP address


During srp_sys –setup, the host sshd might have been configured to listen specifically to the
host system IP address. Therefore, if the native IP address changes, the sshd configuration file
must be updated.
Edit the configuration file (default /opt/ssh/etc/sshd_config) for the ListenAddress
parameter:
ListenAddress <new IP-address>
Then, restart sshd:
$ /sbin/init.d/secsh stop
$ /sbin/init.d/secsh start

8.7 Modifying host name


For an HP 9000 system container, you can configure the host name and node name from within the
container just as you do on the HP 9000 server. Edit the HOSTNAME parameter in the
/etc/rc.config.d/netconf file, update the /etc/hosts and /etc/mail/sendmail.cw
files, and update any application configuration.
For an HP 9000 classic container, you must configure the host name and node name from the global
container (HP-UX 11i v3 host). Edit the /var/hpsrp/<srp_name>/etc/rc.config.d/
netconf, /etc/hosts, and /etc/mail/sendmail.cw files, and update any application configuration.
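For example, inside a system container the relevant entries might look like the following; the host name and IP address are placeholders:
HOSTNAME="mysrp"                          (in /etc/rc.config.d/netconf)
192.0.2.45   mysrp.example.com   mysrp    (in /etc/hosts)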

8.8 Modifying resource entitlements


You can use HP PRM to manage resource entitlements for an HP 9000 system container.
To modify the PRM configuration for the container, run the following command:
$ srp –replace <srp_name> -s prm
HP SMH or PRM commands can also be used directly to modify the PRM group configuration. For
more information, see prm(1M).

8.9 Monitoring HP 9000 containers processes from Integrity host


The ps command in the global container lists the processes running in all the containers, but with
the process names obfuscated. For example, an sshd running in a container with ID 3 is
displayed as /opt/ssh/sbin/3_Sshd.
The container identifiers are listed in the /etc/cmpt-db file on the host system.
To list only the processes running inside an HP 9000 container:
$ srp_ps <srp_name> <ps options>
To invoke a command in the context of an HP 9000 system container:
$ srp_su <srp_name> root -c “<command full path> <args>”
To invoke a command in the context of an HP 9000 classic container:



$ srp_su <srp_name> root –c “chroot <hp9000_root> \
<command full path> <args>”

8.10 Patching HP 9000 Containers


Patching applications that use custom installers works inside both HP 9000 system and classic
container types. HP 9000 classic containers do not support patching using SD. HP 9000 system
containers support SD patching, but with differences as explained in the following sections.

8.10.1 Patching native files inside container


During container configuration, a set of products (mainly NFS) and files (commands such as ipcs,
mount, netstat, ioscan, traceroute, and so on) are copied from the host HP-UX 11i v3
system to an HP 9000 container. This is done because the corresponding PA-RISC legacy
components do not work with the HP-UX 11i v3 kernel and the differences cannot be bridged using
ARIES user-space emulation. A backup of the copied native files is available in the /var/opt/
HP9000-Containers/native directory inside the container.
When products that include the copied native files are patched inside the container, the files might
get overwritten by the HP 9000 versions. An SD post session script,
/usr/lbin/sw/post_session/hp9000_flag_sync, is automatically run after patching to
copy the files again from the backup available at /var/opt/HP9000-Containers/native.
Copying files from the host is not a one-time process. The following operations trigger the creation
of the hp9000_needs_recovery file inside the container, under /var/adm/sw:
• Patching or installation of products including the native files on the host.
• Removing products or patches from the host.
• Running Update-UX on the host.
• Patching or installation of copied files from within the container.
• Removing products or patches from within the container.
When the container is restarted, this file is detected and the native files are copied again. You can
trigger the copying manually (when the container is stopped) by running the following command:
$ srp -replace <srp_name> -s init

8.10.2 Commands disallowed inside container


Certain commands (mostly related to system administration tasks) are disallowed inside containers.
HP 9000 Containers A.03.01 provides the following ways to restrict these commands:
• Deny execute permission for these commands using compartment rules in /opt/
HP9000-Containers/config/hp9000.disallowed.cmds. This also causes the denial
of read permission on command executable files (compartment rules cannot distinguish between
read and execute).
• Replace unsupported commands with a dummy command that exits after displaying an error
message. Commands listed in the /opt/HP9000-Containers/config/
hp9000sys_delete_commands file are replaced. This option is available from HP 9000
Containers A.03.01 for system containers.
You can choose to restrict the commands at the time of container creation by answering yes (for
rules) or no (for replacement) to the following question:
Use rules to restrict unsupported commands?
Later, you can change this choice using the replace operation:
$ srp -replace <srp_name> -s init,cmpt



Compartment rules provide a better way to restrict the commands. However, read permission on
these files is disabled, and SD operations such as swinstall, swverify, and swremove fail
for products that include these commands.
For example, a quality pack might contain several products, some of which might contain files that
belong to the list of restricted commands, and installation or rollback of the pack can turn out to be
tedious. A workaround is to temporarily disable the compartment rules while the operation is being
performed. Open /etc/cmpt/<srp_name>.rules on the host HP-UX 11i v3 server and comment
out (using #) the line that includes hp9000.disallowed.cmds. Then, run the setrules
command. After patching is completed, re-enable the rules by removing the comment from the rules
file and running setrules again.
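A minimal sketch of this workaround for a hypothetical container named mysrp (the exact line to comment out depends on your rules file):
$ vi /etc/cmpt/mysrp.rules      # comment out the line that includes hp9000.disallowed.cmds
$ setrules
  ... perform the SD operation (swinstall, swremove, and so on) ...
$ vi /etc/cmpt/mysrp.rules      # remove the comment to re-enable the rules
$ setrules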
If the option to replace unsupported commands (which is the default) is chosen, the SD operations
for products that include these files are not affected. However, swinstall and swremove run
relatively slower because a post session script is run again to replace the disallowed commands.

IMPORTANT: Do not interrupt the post session scripts.

8.10.3 Applying kernel patches inside the container


An HP 9000 container does not have an active HP 9000 kernel. Therefore, applying kernel patches
inside the container does not have any effect. The swinstall command updates the files without
restarting the container.

8.10.4 Patching commands and libraries


HP 9000 Containers A.03.01 provides options to use native HP-UX commands and the latest
versions of system libraries using the cmdv3 and libv3 templates, respectively. If either of these
templates is added, there is a risk of overwriting the commands and libraries with the legacy
components when patching products that include these files. To restore the HP-UX 11i v3 native
commands and PA-RISC libraries, stop and replace the container:
$ srp -replace <srp_name> -t cmdv3
$ srp -replace <srp_name> -t libv3

8.10.5 Errors reported by swverify command


The swverify command inside the HP 9000 container might report errors for the following
reasons:
• If compartment rules are used to restrict commands, read permission is not available for the
command files.
• If a command or library is switched, the file attributes differ from what is stored in the SD
database.
• If swremove is performed on a product that includes disallowed commands or other files
copied from the host, the SD database is no longer flagged to ignore errors for these files.

8.10.6 SD post session scripts


As a part of container creation, some scripts and configuration files are copied into the container
to help SD patching, and these scripts and files must be retained. The post session processing takes
care of deleting unsupported services, restoring native files, and overwriting unsupported commands
after patching operations inside the container.
Files related to post session processing are:
/usr/lbin/sw/post_session/hp9000_flag_sync
/usr/lbin/sw/post_session/hp9000_delete_svcs
/var/opt/HP9000-Containers/hp9000sys_sd_filesets



/var/opt/HP9000-Containers/deleted_services
/usr/lbin/sw/post_session/hp9000_replace_cmds
/var/opt/HP9000-Containers/hp9000sys_delete_commands

8.11 Run level support


This section describes the run level support for HP 9000 system and classic container types.
Table 4 (page 56) lists the differences between the two types.
Table 4 Comparison of run level support between HP 9000 system container and HP 9000 classic container

HP 9000 system container:
The srp_init daemon in the container is the equivalent of the system srp_init(1M) daemon. It is
the first process started in the container and spawns and monitors processes based on the
/etc/inittab file.
The srp_init command can be run inside the container to communicate with the srp_init daemon
and change the run level:
$ /sbin/srp_init 0|1|2|3|4|5|6|Q|q
Examine the /srp.log file inside the container for messages from the srp_init daemon.
For information about srp_init, see srp_init(1M).

HP 9000 classic container:
HP 9000 classic container provides only partial support for run levels. It does not support any
system daemon other than cron. Application services are supported and the processing of RC
directories follows the same order as on a physical server. However, switching run level inside the
container is not supported.

8.12 Backing up and cloning a container


This section describes backing up and cloning an HP 9000 container.
Backup is discussed for both HP 9000 system and classic containers. Cloning is currently supported
only for HP 9000 system containers.

8.12.1 Exporting and importing an HP 9000 system container


To back up only the container configuration:
$ srp -export <srp_name> -b
To back up the container configuration and the file system:
$ srp -export <srp_name> ok_export_dirs=yes
To delete the container configuration and restore it from backup:
$ srp -delete <srp_name> -b
$ srp -import -xfile <name of configuration export file>
To delete the container completely (including the file system) and restore it from backup:
$ srp -delete <srp_name> delete_changes_ok=yes
$ mv /var/hpsrp/<srp_name> <some backup location>
$ srp -import -xfile <path of export file>
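For example, a complete delete-and-restore cycle for a hypothetical container named mysrp might look like the following (the export file path and backup location are placeholders):
$ srp -export mysrp ok_export_dirs=yes -xfile /backup/mysrp_full.xfile
$ srp -delete mysrp delete_changes_ok=yes
$ mv /var/hpsrp/mysrp /backup/mysrp_root_old
$ srp -import -xfile /backup/mysrp_full.xfile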

NOTE: A known issue with HP-UX Containers A.03.01 is that the ownership of imported files
changes if the same users are configured on the Integrity host system with different UIDs (or the same
groups with different GIDs). Therefore, either do not configure users or groups in the global container,
or use the same IDs (by using LDAP or NIS) in both the global container and all the containers.



8.12.2 Exporting and importing an HP 9000 classic container
An HP 9000 classic container shares a part of the file system with the HP-UX 11i v3 host on which it
resides. So, take the file system backup at the system level, not at the container level.
To back up the container configuration alone:
$ srp -export <srp_name> -b
To delete and reimport the container configuration:
$ srp -delete <srp_name> -b
$ srp -import -xfile <configuration export file>

8.12.3 Cloning an HP 9000 system container


Cloning a system container involves backing up the source container and creating a new container
from that backup.
To back up the source container configuration and the file system, run:
$ srp -export <srp_name> ok_export_dirs=yes -b
Run the following command to use the exchange file to create a new container on the same host
or on a different host. You are prompted to enter new network parameters for the container:
$ srp -import <new_srp_name> -xfile <path of export file>

8.12.4 Backup applications with HP 9000 system containers


Backup applications can run inside HP 9000 system containers, but with the following limitations:
• The backup application cannot attempt tasks or commands (for example, commands related
to system administration) that are unsupported inside the container.
• Ignite-UX does not work inside HP 9000 containers.
• If compartment rules are used to restrict unsupported commands, read permissions on these
files are disabled inside an HP 9000 container. Therefore, an image that includes the command
directories (/sbin, /usr/bin, /usr/contrib/bin, /usr/lbin, and /usr/sbin) might
not contain the complete set of files.
• Do not recover a complete image from an HP 9000 server inside the container. If you have
recovered such an image, make sure that you reconfigure the container before the next container start:
$ srp -replace <srp_name> -s init
Tape device files can be copied to the container using the following command (if required):
$ srp -add <srp_name> -tune device=<tape device file>
Some backup applications (such as Legato NetWorker) can be configured to read /etc/fstab
and back up all configured file systems. To use this option, the mount points must be configured in
the container local fstab (not in the container pre-start fstab). However, the /usr and /var directories
are not supported in the local fstab. For these file systems and the container root directory, the backup
might have to be initiated separately from the host.
When using HP DP, run the disk agent from within the container and the media agent from the
global container. In addition, you might have to configure the disk agent in the global container
to back up the container root file system /var/hpsrp/<srp_name>.
If backup applications are used from the global container, and mounts are configured in the
container pre-start or local fstab file, the container must be up when the backup is executed.
When using backup applications from the global container, synchronize UIDs and GIDs in all the
containers (including the global) on the system.



8.12.5 Backup applications with HP 9000 classic containers
In addition to the limitations for the system container type, a classic container requires additional
care because directories such as /etc and /dev are shared with the host system. HP recommends
using backup applications from the global container rather than from a classic container.

WARNING! Do not attempt to restore a complete image from an HP 9000 server to an HP 9000
classic container because it destroys the contents of the HP-UX 11i v3 /etc directory. HP recommends
storing the backup applications on the Integrity host system to avoid this.
If backup applications need to run commands inside the container for any reason, use the following
command syntax:
$ srp_su <srp_name> root -c “chroot <hp9000_root> <command> <args>”

8.13 Auditing with HP 9000 Containers


The HP-UX audit subsystem is not virtualized at the container level, so auditing cannot be managed
completely from within the container. However, you can enable auditing at the global container
level and filter container-specific records.
Audit management in the global is no different from that on a system without containers.
At the command level, audsys(1M) is used for enabling or disabling auditing, audevent(1M)
is used to select events, audomon(1M) is used for monitoring, and so on.
For HP 9000 system containers, users are selected from within the container using the userdbset
(with SMSE) or audusr (with trusted mode) commands.
For example,
$ srp_su <srp_name>
$ audusr -a <user>
$ userdbset –u <user> AUDIT_FLAG=1
After configuring, audit records generated by processes in all the containers are written to audit
log files in the global view. To view all the audit records generated, run the following command:
$ auditdp -r <global_log>
To view records for a specific system container from the global, run one of the following commands:
$ audisp -C <srp_name>
$ auditdp -r <global_log> -s "+cmpt=<srp_name>"
The records displayed in the global might show an incorrect mapping between user or group IDs
and names. This is because the records contain only the IDs, and the UID to user name (or GID to
group name) mapping in the global might be different from the mapping inside container.
To view raw audit data of all containers with IDs correctly mapped to names, run the sample script
provided in /opt/audit/AudReport/bin/hp9000_audit_global. This script is included
in AuditExt B.11.31.04.01 (or later), which can be downloaded from the HP Software Depot
website at http://www.software.hp.com —> HP-UX Auditing System Extensions.
To view audit logs for a specific system container:
$ hp9000_audit_global -C <srp_name> -a <global_log>
To view audit logs for all the containers:
$ hp9000_audit_global -a <global_log>
To copy the relevant records from the global into a system container:
$ /opt/audit/AudReport/bin/srp_auditdp_copy \
–r <global_log> -R <local_log> -C <srp_name>
To copy the records from the global to all system containers:



$ /opt/audit/AudReport/bin/srp_auditdp_copy \
–r <global_log> -R <local_log>
To view the copied records from within a system container:
$ audisp <local_log>
Legacy system containers have a major limitation: the login and logoff events are not included in
the audit logs. The workaround is to write an init.d startup script that runs the following commands:
echo "audit_en_logins_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem
echo "audit_logoff_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem
For HP 9000 classic containers, users are selected in the global (/etc and /tcb are shared
between the host and the container). The audit records contain information for the global and the
container (only one HP 9000 classic container is supported on a host). Currently, there is no way
to filter records for the container.



9 Using Container Manager
The Container Manager component that is integrated in HP SMH provides a GUI to manage system,
workload, and HP 9000 containers.
The following tasks are supported by HP 9000 Containers with the Container Manager:
• Enable or disable system-wide configuration of containers
• Monitor container status and activity
• Create and delete containers
• Start and stop containers
• Modify container configuration
• Export and import containers

9.1 Accessing Container Manager from HP SMH


Log in to SMH at http://<hostname>:2301/ using the SMH administrator or root credentials.
On the SMH Tools page, select Container Manager from the Container Management menu. Figure 3
(page 61) shows the Tools home page.

Figure 3 SMH Tools home page

9.2 Container Manager home page


The Container Manager home page provides a view of all the containers on the Integrity host
system, including the current state and resource utilization of each container.
Figure 4 (page 62) shows the Container Manager home page.



Figure 4 Container Manager home page

For further help regarding Container Manager, click the ? icon in the upper-right corner.

9.3 Setting up the environment for container


Enable the core subsystem properties before creating containers either by using the srp_sys
command (with the -enable or -setup option) or by carrying out the following steps using the
Container Manager:
1. Select the System Properties tab.
2. Enable the SRP core subsystems.
3. Wait until the message about successful completion is displayed.
4. Enable the compartment login feature and sshd configuration properties.
Enable the PRM service to support multiple containers on the system.
Enabling the other properties is optional.
5. Reboot the system.
The Container Listing tab is displayed only if the SRP core subsystem property status is set to OK.
Figure 5 (page 63) shows the options for the System Properties tab.



Figure 5 Container Manager—System properties

9.4 Creating an HP 9000 container


To create an HP 9000 container:
1. In the Container Manager home page, click Create a container.
2. By default, the selected Container Type is System. Select hp9000sys to create an HP 9000
system container or hp9000cl to create an HP 9000 classic container. Figure 6 (page 63)
shows the container types.

Figure 6 Container Manager—Select container type

3. Click Next. Figure 7 (page 64) shows the options and services for creating a container.



Figure 7 Container Manager—Create a new container

4. Enter the container name.

NOTE: You cannot use the keywords system, workload, hp9000sys, or hp9000cl as container
names.

5. Enter the parameters.


6. After entering the container details, click Create.
A window pops up and shows the container creation logs as they are generated on the host.
A message indicates whether the container was created successfully or the creation failed.
7. Click Back to Container Listing and close the result window.

9.5 Viewing and modifying configuration of an HP 9000 container


After creating the container, you can view or modify its configuration when the container is
stopped.
To modify the container configuration:
1. From the Container Manager home page, click the Container Listing tab and select the container
to view or modify. Details of the selected container are displayed.
The Overview tab displays the key properties. The Process View tab displays the processes
running inside the container. The Base tab provides the detailed configuration of the container.
Figure 8 (page 65) shows the list of containers.
2. Select the configuration tab you want to view or modify.



Figure 8 Container Manager—Container Listing

3. To modify configuration, click Modify Container. To add a new instance, click + add new
instance. Figure 9 (page 65) shows the container properties.

Figure 9 Container Manager—View or modify container properties

4. A new window displays the output of the modification. After modifying the configuration, click
Close This Window.



9.6 Starting and stopping an HP 9000 container
You can start a container if its status is stopped. The status of each container is displayed on the
Container Manager home page.
Figure 10 (page 66) shows the Start option.

Figure 10 Container Manager—Start a container

To start a container, select the container and click Start in the task bar on the right.
To stop a container, select the container and click Stop in the task bar on the right.



10 Integration with SG
HP SG allows you to create high availability clusters of HP 9000 or HP Integrity servers. SG is not
supported inside an HP 9000 container, but it can be installed in the global container (that is, on
the host system) and configured for applications running inside containers.
HP 9000 system containers support two methods of SG integration:
• Application package model—HP 9000 containers are up on all the failover nodes, but the
monitored application runs inside only one of the containers at any given time. Only the
application is failed over to the other container when required.
• Container (SRP) package model—HP 9000 container is active only on one node at a time.
Failover happens at a container level. The entire container is shut down on the primary node
and started on the failover node. RC scripts must be written to start applications along with
container startup.
HP 9000 classic containers currently support the application package model only.
To configure SG cluster for HP 9000 containers:
1. Set up the SG cluster.
2. Configure system on each node in the cluster.
3. Select the package model.
4. Select which application (SG or HP-UX Containers) manages the file system and network interface.
5. Configure SG package on the primary node.
After selecting the package model, SG cluster configuration requires the following:
a. Configure shared logical volumes.
b. Configure HP 9000 container on the primary node.
c. Configure HP 9000 container on each failover node.
d. Create or migrate monitor script.
e. Create RC scripts to start applications (container package only).
f. Create or migrate SG package configuration.
6. Copy and apply package configuration.

10.1 Setting up the SG cluster


For information about setting up the SG cluster, see Managing Serviceguard A.11.20 at
www.hp.com/go/hpux-SG-docs.

10.2 Configuring system on each node in the cluster


All the nodes in the cluster must be identical with respect to OE and patch levels. For more
information about how to prepare each node for HP 9000 Containers, see Chapter 2 (page 15).

10.3 Selecting the package model


For SG integration, you can choose either the application package model or the container (SRP) package
model. HP 9000 classic containers support only the application package model of SG integration.
For an HP 9000 system container, select the model based on your requirements. It is easier to transition
existing SG packages to the application package model, and application failover is quicker
compared to the container package model. The container package model simplifies manageability,
primarily by supporting a shared container file system.



10.4 Selecting application that manages file system and network interface
When you use the application package model, HP recommends using SG to manage the network
interface and file system mounts. In the case of container package model, use HP-UX Containers
to manage the network interface and file system mounts.
To use the SG network failover capability, the network interface must be managed by SG. Ensure
that an SG managed container and a non-SG managed container on the same host do not share
the same physical network interface.

10.5 Configuring the SG package on the primary node


10.5.1 Using container package model
Only HP 9000 system containers support the container package model.

Selecting the file system model


The container file system in the /var/hpsrp/<srp_name> directory can either be on a shared
logical volume or be replicated on all the nodes. A shared file system provides better manageability
and lower storage costs. To use a shared file system, the host system must be at the same OE
update and patch level on all the nodes.

Configuring shared logical volumes


To share the container file system, configure a shared logical volume on the primary node, export
the volume group configuration, and then import it on the failover nodes. Do the same for logical
volumes used to host shared data, if applicable.

Configuring primary node


To configure the primary node, follow the steps described in either Chapter 3 (page 17) or Chapter 4
(page 21), based on the HP 9000 container type. When creating the container, follow the specific
instructions for integration with SG. In particular, during container creation, answer no to the
following question:
Autostart container at system boot? [yes] no
If SG manages the IP address of the container, also answer no to the following question:
Add IP address to netconf file? [yes] no

Writing RC scripts for applications


In the container package model, the container is failed over and started on the failover node. This
does not start applications unless RC scripts are written to start them along with the container.
For more information about creating RC scripts, see rc(1M). A minimal sketch of such a script is shown below.
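The following is a sketch only; the script name /sbin/init.d/myapp, its run-level links, the application user, and the start and stop commands are placeholders for your own application:
#!/sbin/sh
# Hypothetical /sbin/init.d/myapp inside the container, linked as
# /sbin/rc3.d/S990myapp and /sbin/rc2.d/K010myapp (names are examples).
case "$1" in
start_msg) echo "Starting myapp" ;;
stop_msg)  echo "Stopping myapp" ;;
start)     su - appuser -c "/opt/myapp/bin/start_myapp" ;;
stop)      su - appuser -c "/opt/myapp/bin/stop_myapp" ;;
*)         echo "usage: $0 {start|stop|start_msg|stop_msg}" ;;
esac
exit 0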

Testing applications on primary node


To start an HP 9000 container and test applications:
1. If the IP address of the container is managed by SG, enable it manually and add the route
entry for testing:
$ ifconfig <container-lan-interface> \
<container-ip-addr> netmask <netmask>
$ /usr/sbin/route add default <gateway-ip-addr> 1 \
source <container-ip-address>
2. Start the HP 9000 container:
$ srp –start <srp_name>

Examine the /var/hpsrp/<srp_name>/etc/rc.log file to verify whether the applications
configured in the RC scripts are started properly.
3. Log in to the HP 9000 container and test applications.
4. Stop the HP 9000 container after testing is completed:
$ srp –stop <srp_name>
Examine the /var/hpsrp/<srp_name>/etc/rc.log file to verify whether the applications
configured in the RC scripts are stopped.
5. If the container IP addresses are managed by SG, disable them and remove the route entry:
$ ifconfig <srp-lan-interface> 0
$ /usr/sbin/route delete default <gateway-ip-addr> 1 \
source <srp-ip-addr>

Configuring failover nodes


To configure failover nodes:
1. Do one of the following depending on whether or not the container file system is shared:
—If the container file system is shared, export only the container configuration on the primary
node:
$ srp -export <srp_name> -xfile <path name of exchange file>
—If the container file system is not shared, include both the file system and the configuration in the
export file:
$ srp -export <srp_name> ok_export_dirs=yes -xfile <path name of exchange file>
2. Unmount the container file system and volume data, and deactivate the volume group on the
primary node.
3. Copy the exchange file to failover nodes:
$ cmcp <exchange file> <failover node>:<exchange file>
4. Create the root directory:
$ mkdir /var/hpsrp/<srp_name>
$ chown root:sys /var/hpsrp/<srp_name>
$ chmod 0755 /var/hpsrp/<srp_name>
5. If you do not use a shared file system, ensure that no users other than the default set of users
provided by the operating system are configured on the failover node.
6. Import the container onto the failover node:
$ srp –import –xfile <exchange file> autostart=no
7. Configure kernel tunable parameters on the failover node to match the primary node.
8. Configure printers if applicable.
9. Configure devices on the failover nodes and ensure that the device files have the same major
and minor device numbers as on the primary node.
10. Install any special device drivers if applicable.
11. Install any manageability software if applicable.
12. If applicable, transition the /etc/privgroup configuration file from the primary node (global)
to the failover node (global).



Creating monitor scripts
Place the monitor scripts for applications in a directory under <hp9000_root>. You can use the
existing monitor scripts if they are compatible with the SG version on the Integrity server. For
information about how to migrate older packages, see Migrating packages from legacy to modular
style at www.hp.com/go/hpux-SG-docs.

Configuring SG package
To configure SG package:
1. If the container file system is on a shared volume, specify /var/hpsrp/<srp_name> as an
SG managed file system:
fs_name /dev/<vg_name>/<container_lv>
fs_directory /var/hpsrp/<srp_name>
2. Specify the monitor script to be executed inside the container:
service_cmd "/opt/hpsrp/bin/srp_su <srp_name> <user> \
-c "<command line for monitor script>""
3. If SG manages the container IP addresses, specify the same addresses:
ip_subnet <subnet>
ip_address <IP address>
4. If SG manages the container IP addresses, configure the package to create default routes for
these:
For example,
# srp_route_script configures the required source
# based routing entries for the SG managed IP
# addresses
external_script /etc/cmcluster/pkg1/srp_route_script
The /opt/hpsrp/example/serviceguard/srp_as_sg_package/srp_route_script
file provides a reference implementation of this route script.
5. Write a control script for starting and stopping the container during failover:
external_script /etc/cmcluster/pkg/srp_control_script
The /opt/hpsrp/example/serviceguard/srp_as_sg_package/
srp_control_script file provides a reference implementation of the control script.
The /opt/hpsrp/example/serviceguard/srp_as_sg_package/srp_package.conf
file provides a reference implementation of a container SG package.

10.5.2 Using application package model


Configuring shared volumes
The application data is allowed to reside in shared volumes, but the container file system must be
replicated on each node. Application package model does not support a shared container file
system.

Configuring primary and failover nodes


Configure HP 9000 containers separately on each node in the cluster by following the steps
described in Chapter 3 (page 17). Configure a unique IP address and a unique hostname for the
HP 9000 container on each server, but use the same container name. After you complete the
configuration on each node, start each container and verify whether the applications run properly.

Configuring SG package
Configuration from the HP 9000 server can be reused with minor modifications as long as the
configuration is compatible with the SG version on the host system. For more information about
how to migrate older packages, see Migrating packages from legacy to modular style at
www.hp.com/go/hpux-SG-docs.
1. Use the srp_su command for starting and monitoring applications:
a. For HP 9000 system container, use the following configuration:
service_cmd "/opt/hpsrp/bin/srp_su <srp_name> <user name> \
-c "<command line>""
b. For HP 9000 classic container, to run the command as the root user:
service_cmd "/opt/hpsrp/bin/srp_su <srp_name> root -c \
"chroot <hp9000_root> <command line>""
c. For HP 9000 classic container, to run the command as a non-root user:
service_cmd "/opt/hpsrp/bin/srp_su <srp_name> root -c \
"chroot <hp9000_root> /usr/bin/su - <user> -c \
<command line>""

2. Configure the package to create default routes for container IP addresses managed by SG
(the package IPs):
For example,
# srp_route_script configures the required source based
# routing entries for the SG managed IP addresses
external_script /etc/cmcluster/pkg1/srp_route_script

A reference implementation of this script is available in the /opt/hpsrp/example/
serviceguard/srp_as_sg_package/srp_route_script file.

10.6 Copying and applying package configuration


Copy the package configuration to the failover node. Apply the configuration on the primary node
and test failover using the following commands:
$ cmcheckconf -P <package configuration>
$ cmapplyconf -P <package configuration>
$ cmrunpkg -v -n <primary node name> <package>
Test the SG failover by manually stopping an application.



11 Limitations of HP 9000 Containers
The following sections explain the limitations of HP 9000 Containers.

11.1 Application limitations


Does not support kernel intrusive applications and applications that use privileged instructions
Applications that are kernel intrusive or that use privileged instructions do not work inside an HP
9000 container. This includes device drivers and applications that read or write /dev/kmem, or
that use DLKMs.
Does not support system management applications
Management utilities such as SAM and HP SMH, and applications such as HP OpenView agents and
HP SG, that perform tasks related to system management or monitoring are not supported inside
an HP 9000 container. You can run these applications in the global container.
Does not support system resource monitoring applications
Performance agents and applications that perform disk, memory, or CPU monitoring are not
supported inside an HP 9000 container. You can run these applications in the global container.
Does not support non-manageability applications in the global container
Running applications in the global container is not supported, except for those related to system
management or monitoring.
Does not support DHCP server, DNS server, NFS server, X server, and IP routing
DHCP, DNS, NFS, and X servers are currently not supported inside an HP 9000 container. Using
the container as a router is also not supported.
Other limitations
Fundamental limitations of the HP ARIES binary translator are listed at http://www.hp.com/go/
aries —> Documentation —> ARIES limitations.
For more information about other known emulation issues with common application stacks, see
Section 12.6 (page 80).

11.2 Setup limitations


Does not support DHCP IP address
HP 9000 Containers supports only static IP addresses for containers. Using DHCP for the container IP
address is currently not supported.
Does not support large base page size
Applications running in an HP 9000 container on an HP-UX 11i v3 system where the kernel tunable
parameter base_pagesize is configured to a non-default value might experience correctness
issues with symptoms such as application hangs and aborts. The workaround is to set the value of the
kernel parameter base_pagesize to 4 KB (which is the default).
Does not support large PID, UID, host name with legacy containers
Legacy (earlier than HP-UX 11i v3) containers might not have the necessary user-space components
to support large values of PID, UID, GID, host name, and node name. Therefore, such large values
cannot be supported with HP 9000 containers built from such environments. In particular, you must
set the value of process_id_max parameter on the HP-UX 11i v3 host system to 30000 even
though the kernel can support larger values for this parameter.
Additional limitations specific to HP 9000 classic containers
Additional limitations of HP 9000 classic containers are as follows:



• Does not support multiple HP 9000 containers on the same host.
• Does not support coexistence with native containers.
• Some server applications that register RPC services might need a virtual host name
configuration (matching the global host name) to work. For more information, see
Section 12.6 (page 80).

11.3 Access limitations


Limitations specific to HP 9000 classic containers
Using the telnet command to log in to an HP 9000 classic container is not supported. Instead,
you can use ssh. If telnet is used, the user login occurs in the global container and, therefore,
applications cannot be run. However, using telnet from an HP 9000 container to other servers is
supported.
Using commands such as remsh, rlogin, rcp, and rexec to access the HP 9000 classic
container is not supported. To achieve similar functionality, use SSH based protocols (ssh,
slogin, scp) along with authorization keys.

11.4 Patching limitations


SD patching is not supported inside an HP 9000 classic container.
SD patching inside an HP 9000 system container has the following limitations:
• If compartment rules are used to restrict unsupported commands, patching of products that
include the disallowed commands fails.
• The swverify and check_patches commands might report errors.
• Patching is not supported for libraries and commands that are switched using the libv3 and
cmdv3 templates.

11.5 User management limitations


HP 9000 system container
• Does not support user quota.
• Does not support the configuration of users, apart from those related to system administration
or system management-related applications, in the global container.
HP 9000 classic container
• The users must be configured on the host. Access to the container is managed via RBAC.
• For storing SSH authorization keys, separate home directories must be created in the global
file system.

11.6 Commands limitations


The df -k command fails inside both HP 9000 system and HP 9000 classic containers.
Due to the shared file system, the following commands provide global information (not
container-specific data) inside an HP 9000 classic container:
bdf, df, last, lastb, mount, who, and finger
In an HP 9000 classic container, the bdf command also reports errors for loopback mounts.
For more information about HP 9000 classic container file system, see Section 7.2 (page 46).



11.7 Unsupported tasks and utilities
Most of the unsupported tasks in an HP 9000 container are related to system administration and
can be performed outside the container (in the global container). Some of the tasks and utilities
that are unsupported and disabled inside an HP 9000 container are as follows:
• Assembly debugging
• Enable or disable accounting
• Enable or disable auditing
• CacheFS
• CIFS client
• Cluster management
• Compartment rule configuration
• Date and time setting
• DHCP configuration
• DHCP client
• Device creation
• Disk management
• Driver installation and management
• Event Monitoring Service
• File system management and export
• Global Instant Capacity Management
• HP-UX Containers (SRP) creation and management
• Interrupt configuration
• IP address configuration
• IPFilter and IPSec configuration
• Ignite-UX
• Kernel debugging
• Kernel make
• Kernel memory read
• Kernel module (DLKM) administration
• Kernel registry services
• Kernel tunable management
• Logical and physical volume management
• Network tunable configuration
• NIC administration
• NFS server and exports or shares
• NLIO
• NTP
• OLAR
• Partition management



• Portable file system
• Printer management (classic container limitation only)
• Privilege management
• PRM management
• Processor set management
• Processor binding
• RAID control
• Reboot, shutdown system
• Resource (CPU, memory, disk, and so on) monitoring
• Routing configuration and advertisement
• SAM, SMH
• SCSI control
• Serviceguard configuration and management
• SD based installation and patching (specific to classic container)
• Storage or disk management
• STREAMS administration
• STM
• Swap space management
• System activity reporter
• System boot configuration
• System crash configuration
• System diagnostics and statistics
• Update-UX
• VxFS, VxVM, and Volume Replicator-related tasks

11.8 Performance limitations of HP 9000 Containers


For most application stacks, the sizing guidelines described in Section 1.6 (page 11) ensure that
performance of an HP 9000 container is the same or better than the source HP 9000 server. The
following types of applications might incur a larger overhead when run under ARIES emulation:
• Applications that are short lived and have very flat execution profile (for example, compilers,
interpreters, shells, and scripts).
• Applications that spawn several short lived threads or processes.
• Applications that load and unload several libraries dynamically.
• Applications that perform intensive floating point arithmetic.
• Some database servers which require ARIES to enforce strong memory ordering.
• Applications that are debugged using PA-RISC WDB or traced using the PA-RISC tusc
utility.



12 HP 9000 Containers troubleshooting
This chapter explains how to troubleshoot HP 9000 Containers, lists common problems you might
encounter, and suggests how to resolve them.

12.1 Verifying HP 9000 container health


Verifying container status
Log in to the global container as root and run the following command:
$ srp -status <srp_name> -v
Verify whether the connectivity to the HP 9000 container is properly working from both the global
container and another system.

Verifying container startup logs


Examine the /var/hpsrp/<srp_name>/etc/rc.log file to verify whether the previous startup
and shutdown were proper. Search the /var/adm/syslog/syslog.log file on the host for
SRP to know the list of operations performed on the container.

Verifying container configuration


Log in to the global container and run the following command:
$ srp –list <srp_name> -v

Verifying PRM configuration and statistics


Log in to the global container and run the following commands:
$ prmlist –g –s
$ prmmonitor

Verifying network configuration


Log in to the global container and run the following command to verify whether the container
interfaces are up:
$ netstat –in
Also, verify whether a default gateway is associated with the container.
$ netstat –rn

Verifying kernel parameters


Legacy containers cannot support large PIDs and large base page size.
Set the related kernel parameters to their supported values using:
$ kctune base_pagesize=4
$ kctune process_id_max=30000
$ kctune nproc=30000
Some legacy 32-bit applications also expect a lower value for shmmax.
Reduce shmmax value using:
$ kctune shmmax=0x40000000



Verifying host name and node name
Legacy containers have issues when a long host name or node name is used for either the global
container or the HP 9000 containers. Examine the /etc/rc.config.d/netconf and /var/hpsrp/
<srp_name>/etc/rc.config.d/netconf files for these parameters.
There is a workaround for using a long host name in the global: run the following command:
$ kctune uname_eoverflow=0

12.2 Recovering from HP 9000 container startup and shutdown issues


If an HP 9000 container hangs at startup:
• Run the $ srp_ps <srp_name> -ef command and identify the RC service that hangs at
startup.
• Kill the service daemon and the RC script to complete the rest of the startup.
• If these options do not work, try to get the prompt using Ctrl+\.
If an HP 9000 container reports errors at shutdown indicating that some processes are not stopped,
run the command again:
$ srp -stop <srp_name>
An HP 9000 container is locked during life cycle operations. The lock file, srp_<srp_name>.lck,
is available in the /var/opt/hpsrp/tmp directory. You can remove this file manually if it is left
over from a previous operation (which was incomplete for some reason), as long as no other life cycle
operation is in progress at the time of deletion.
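For example, for a hypothetical container named mysrp, a leftover lock can be checked for and removed as follows:
$ ls /var/opt/hpsrp/tmp/srp_mysrp.lck
$ rm /var/opt/hpsrp/tmp/srp_mysrp.lck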

12.3 Triaging HP 9000 container access issues


Examine the /etc/rc.config.d/netconf file on the host HP-UX 11i v3 server and ensure that
the IP address, gateway, and subnet mask for the container are configured correctly. To change
the values, see Section 8.6 (page 52).
Examine the /opt/ssh/etc/sshd_config file on the host and ensure that the ListenAddress
parameter is set to host IP address. Do not comment out or set this parameter to any container IP
address.
If you fail to access HP 9000 classic containers using SSH, verify whether the following are true:
• The ListenAddress parameter in the /var/hpsrp/<srp_name>/opt/ssh/
sshd_config file is set to the container IP address.
• The chroot directory in the /var/hpsrp/<srp_name>/opt/ssh/sshd_config file is
set to the root directory (<hp9000_root>), where HP 9000 files are recovered.
• The host HP-UX 11i v3 root directory (/) has 755 permissions and root:sys or root:root
ownership.
• All other directory components of the path leading up to <hp9000_root> have 755
permissions and root:sys or root:root ownership.
If you fail to access HP 9000 system containers using SSH, verify the following:
• Is the UseDNS parameter set in the sshd_config file inside the container? If yes, is the DNS
server accessible from the container, and is the container host name registered in DNS? Does
setting UseDNS to no help?
• Is the PermitRootLogin parameter in the sshd_config file set to no?
• Are the host keys regenerated and used on the clients?
• Is the routing configuration correct?
• Does SSH version upgrade inside the container help?



If the node name on the global container (host system) is longer than 8 characters in length (and
the kernel parameter expanded_node_host_names is set to 1), only two login sessions are
allowed to legacy containers on the system. The workaround is to run the following command:
$ kctune uname_eoverflow=0
If you fail to access HP 9000 system containers using telnet, verify whether the value of kernel
parameters npty and nstrpty are sufficient. Also, verify if there are enough /dev/pty and
/dev/pts devices exposed to the container. If not, run the following commands:
$ kctune npty=<new value>
$ insf -e -n <new npty value>
$ srp –add <srp_name> -tune device=/dev/pty/*
There is a known limitation that legacy inetd services cannot be used with large PIDs. Ensure
that the kernel tunable parameter process_id_max is set to less than or equal to 30000.

NOTE: Even if the login services are not functioning properly, login using srp_su <srp_name>
works as long as the status of the HP 9000 system container is started. This can be used for debugging
purposes. For example, it can be used to get a tusc log on sshd or inetd as described in
Section 12.4 (page 79).

12.4 Collecting application and system call logs


If an application fails inside the container:
• Check the application logs and any files to which stdout or stderr are redirected.
• Install the HP-UX system call tracer utility tusc for HP-UX 11.31/Itanium on the host. It can be
downloaded from http://hpux.connect.org.uk/hppd/hpux/Sysadmin/tusc-8.1.
• Copy the /usr/local/bin/tusc binary from the global container to the HP 9000 container.
• Log in to the HP 9000 container and run the tusc utility on the failing application:
$ tusc -o <output file path> -lfpkaev \
-s \!sigprocmask,sigaction,sigsetreturn \
<executable> <arguments>

• Search the tusc log for clues such as failing system calls. Verify whether any of the HP 9000
container limitations are encountered. For example, analyze execve(2) system calls to see
if any unsupported command is invoked.

12.5 Debugging applications


Use PA-RISC HP WDB to debug applications inside a container, just as on the HP 9000 server.
The only additional requirement is to set the PA_DEBUG environment variable before initiating gdb,
using the $ export PA_DEBUG=1 command.
ARIES generates PA-RISC HP-UX core files when the application aborts. WDB can be used to
analyze these core files.
Currently, there is a known limitation: when an application running under ARIES
emulation is debugged or traced, it runs much slower. This is because ARIES emulation of ttrace(2)
system calls works only when ARIES is running in pure interpreter mode with no translations or
caching.
ARIES does not support debugging or tracing applications with GNU GDB or custom tools (in
particular, 64-bit debuggers and tracers).

NOTE: You can recompile and link applications, as long as the required compilers and tools are
available inside the container.



12.6 Known issues and workarounds
Table 5 (page 80) lists some known issues and workarounds for HP 9000 Containers. The patches
and products listed in Section 2.2 (page 15) resolve some of the known issues when using
containers.
Table 5 Known issues and workarounds

Issue: Legacy lsof command fails inside HP 9000 containers.
Workaround: Install the lsof depot on the host system and copy the /usr/local/bin/lsof file into the container.

Issue: Some Java 1.2 and Java 1.3 applications (such as TIBCO) might fail to run inside HP 9000 containers.
Workaround: Upgrade to Java 1.4.2 for such applications. It is usually possible to change the application startup script to point to the new Java version. If the application has an incompatibility with Java 1.4, try upgrading to the latest version of Java 1.3. In some cases, specifying */java -noopt in the ARIES configuration file helps.

Issue: UDP broadcast messages might not reach the container. This issue is frequently encountered when using the TIBCO Rendezvous agent inside the container.
Workaround: Contact HP for a fix or workaround.

Issue: Communication between containers on the same system does not honor the subnet mask when selecting the source IP address.
Workaround: Contact HP for a fix or workaround.

Issue: IBM Informix Dynamic Server hangs intermittently inside an HP 9000 container.
Workaround: Set the number of CPU VPs to 1 in the Informix configuration file along with the ARIES configuration -mem_fence for Informix binaries. If the issue still persists, activate strong memory ordering in ARIES by configuring <path to Informix DB server install dir>/* -mem_order -mem_fence.

Issue: Progress database server might hang or crash inside an HP 9000 container.
Workaround: Restart the database after specifying -mem_fence in the ARIES configuration file. If the issue still persists, activate strong memory ordering in ARIES by configuring <path to Progress DB server install dir>/* -mem_order -mem_fence. Using -spin at the startup of the Progress database might help reduce the performance impact of enabling strong memory ordering.

Issue: Progress database sometimes reports an error, SYSTEM ERROR: muxfree 24 not owner.
Workaround: Use the -mux 0 parameter at the startup of the Progress database. For information about how to use the parameter, see http://knowledgebase.progress.com/articles/Article/P22598.

Issue: Oracle database server crashes when the parallel_automatic_tuning parameter is set to TRUE.
Workaround: Set the parallel_automatic_tuning parameter to FALSE.

Issue: Oracle database server sometimes crashes when the parallel_threads_per_cpu parameter is set to a value greater than 1.
Workaround: Set the parallel_threads_per_cpu parameter to 1.

Issue: Oracle database server sometimes crashes with ORA-0600 errors, or reports ORA-0600 errors to the application.
Workaround: Enable strong memory ordering in ARIES by configuring <path to Oracle DB server install dir>/* -mem_order -mem_fence. This might, however, incur a performance overhead. If the ORA-0600 error still persists, contact HP for support.

Issue: Oracle database server core dumps and the stack trace shows function name sjontlo_threa_main.
Workaround: Contact HP for details about the ARIES patch that resolves the issue.

Issue: PRM FSS cannot be used along with Oracle Database Resource Manager. See http://www.hp.com/go/hpux-prm-docs —> Using HP PRM with Oracle databases.
Workaround: Switch to PSETs if the resource manager is in use.

Issue: HP GlancePlus returns no information for PRM groups configured with PSETs.
Workaround: Use the prmmonitor command instead of GlancePlus to monitor resource usage. Another option is to create an application record in GlancePlus to group container-specific processes together (using a filter). Processes running inside a container are prefixed with the container identifier when they are listed. Hence, the container identifier can be used as a filter to group container processes (container identifiers are listed in /etc/cmpt-db).

Issue: CIFS client, smbd, and nmbd (part of CIFS server) fail inside an HP 9000 container.
Workaround: Contact HP for a fix or workaround.

Issue: When earlier versions of SAP are used, the stopsap command hangs and produces a core dump of the sapstart process.
Workaround: Upgrading to the SAP kernel 1773 patch generally solves the issue. You can also contact HP for a workaround.

Issue: Earlier versions of Connect Direct fail when the kernel parameter maxfiles is larger than 2048.
Workaround: Set the maxfiles and maxfiles_lim parameters to 2048.

Issue: Earlier versions of Java (before 1.4.2.28) fail to run inside a container and display an error message: java.lang.InternalError: URLSeedGenerator file:/dev/random generated exception: Permission denied.
Workaround: Comment out the line securerandom.source=file:/dev/random in the java.security file in <java_home>/jre/lib/security.

Issue: Some of the terminal settings might be lost when moving to an HP 9000 container. For example, Ctrl+C might no longer interrupt processes when logged in using telnet or rlogin.
Workaround: Edit the /etc/profile file to initiate stty for the required settings.

Issue: HP-UX 10.20 system container has no telnet access.
Workaround: Copy telnetd and its dependencies from an HP-UX 11i v1 or HP-UX 11.00 system to the container. For more information about configuring telnet, see Section 4.5.16 (page 29).

Issue: If mounts for a container are configured in the global /etc/fstab, they do not appear in the output of the bdf command inside the container after reboot. Also, subsequent unmount operations might report errors.
Workaround: Configure container pre-start mounts as described in Section 8.5 (page 50).

Issue: The hp9000_conf_tunables script does not add up parameters when multiple containers are created on the same host.
Workaround: Increase parameters such as npty or maxfiles to accommodate all the users and applications in all the containers.

Issue: When auditing is enabled with HP 9000 system containers, the login and logoff events do not get recorded.
Workaround: Write an init.d script that runs the following commands after system reboot:
echo "audit_en_logins_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem
echo "audit_logoff_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem

Issue: The srp -stop operation sometimes returns before all the processes are killed.
Workaround: Issuing a second or third srp -stop generally works. This issue usually occurs when Autofs is enabled. If there is no requirement, turn off Autofs in the /etc/rc.config.d/nfsconf file inside the container.

Issue: The srp -export operation does not include files that are larger than 8 GB in size.
Workaround: Back up the large files separately.

Issue: The srp -import operation changes ownership of files if the same users exist on the host system with different UIDs (or the same groups with different GIDs).
Workaround: During import, do not configure any users on the host system apart from the default users.

HP 9000 commands with argument strings larger than 768 Copy the native command from the global container to the
KB fail inside HP 9000 containers. For example, the $ls HP 9000 container.
command on a directory with a large number of files.

No support for inetd in an HP 9000 classic container. Install xinetd on the HP 9000 server and create the
container again. To configure xinetd, run the following
script:
$ /opt/HP9000-Containers/bin/ \
\hp9000_xinetd_setup <srp_name>

Some server applications might fail to start up inside a Configure ARIES with a virtual host name that matches the
classic container and might throw errors such as unable Integrity host name. Include the following configuration in
to register RPC service. the /.ariesrc (32-bit) or /.aries64rc (64-bit) file:
<executable path> -cmpt_host_name <name of
the host 11i v3 system>
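
The following is a minimal sketch of an init.d script for the auditing workaround listed above. The script name (hp9000_audit_compat) and the run-level link shown are illustrative choices, not part of the product; adapt them to your site conventions.
$ cat /sbin/init.d/hp9000_audit_compat
#!/sbin/sh
# Re-apply the audit compatibility flags after every reboot so that login
# and logoff events from HP 9000 system containers are recorded.
case "$1" in
start)
    echo "audit_en_logins_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem
    echo "audit_logoff_compat/W 1" | adb -o -w /stand/vmunix /dev/kmem
    ;;
stop)
    ;;   # nothing to undo at shutdown
*)
    echo "usage: $0 {start|stop}"
    ;;
esac
exit 0
$ ln -s /sbin/init.d/hp9000_audit_compat /sbin/rc2.d/S900hp9000_audit_compat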

12.7 Troubleshooting HP ARIES


If the application complains about thread creation or stack growth failures, see Section 4.8
(page 30) for resolution details.
Updating the HP ARIES patch
Update the HP ARIES patch to the latest version. Each patch delivers defect fixes that can save
considerable troubleshooting effort.
Configuring strong memory ordering
Configure strong memory ordering and memory fencing by appending the following lines in
/.ariesrc or /.aries64rc (particularly applicable for database servers):
# Enable strong memory ordering
<executable-path> -mem_order -mem_fence
# End
Disabling ARIES optimizations
Disable ARIES optimizations by appending the following lines to /.ariesrc (for 32-bit) or
/.aries64rc (for 64-bit):
# Disable optimizations
<executable-path> -noopt
# End
Disabling ARIES translations
Disable ARIES translations by appending the following lines to /.ariesrc (for 32-bit) or
/.aries64rc (for 64-bit):
# Disable translation for testing
<executable-path> -notrans
# End
Use the -notrans option only for testing, because it slows down application performance
significantly.
If ARIES reports a heap exhaustion error, see aries(5M) for information about how to configure
larger values for the ARIES heap parameters. The ARIES manpage is accessible only on the host system.
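
All of the entries shown in this section go into the same configuration file, one line per executable path pattern. The following is a small illustration of the format; the application paths are placeholders, not shipped defaults:
# Strong memory ordering for a database server's binaries
/opt/mydb/bin/* -mem_order -mem_fence
# Disable optimizations for one problematic executable
/opt/myapp/bin/report_gen -noopt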



For information about HP ARIES troubleshooting, see http://www.hp.com/go/ARIES.
For any assistance with troubleshooting ARIES issues, contact HP-UX Support Center at http://
www.hp.com/go/hpsc.

12.8 Troubleshooting HP-UX Containers


For information about troubleshooting HP-UX Containers, see the HP-UX Containers A.03.01
Administrator's Guide. For known defects and workarounds, see the HP-UX Containers (SRP) A.03.01
Release Notes at www.hp.com/go/hpux-srp-docs.

12.9 Reconfiguring HP 9000 containers


This section discusses the reconfiguration of container parameters and file systems.

12.9.1 Reconfiguring HP 9000 container


If there are suspected issues with the HP 9000 container parameters or file systems, log in to the
global container and run the following commands:
$ srp -stop <srp_name>
$ srp -replace <srp_name>
$ srp -start <srp_name>

12.9.2 Switching to newer HP 9000 libraries


By default, applications inside the HP 9000 container use system libraries included in the HP 9000
file system image. An HP 9000 container can be configured to use PA-RISC libraries shipped with
Integrity HP-UX 11i v3 instead. This enables a newer set of libraries to be used in the HP 9000
container with potentially more defect fixes.
You must stop the HP 9000 container before switching the libraries. Log in to the global container as
root and run the following commands:
$ srp -stop <srp_name>
$ srp -add <srp_name> -t libv3 -b
The process might take about 10 minutes because the PA-RISC HP-UX 11i v3 libraries are copied
into the HP 9000 container and any application libraries are merged in.

NOTE: Patching the libraries that are switched is not supported inside the container. If the files
are overwritten as a result of patching, recover them manually. Also, there is no automatic copying
into the container when these libraries are patched on the host. You can run the replace operation
to copy all the latest libraries again, but this requires container downtime.
To recopy the set of libraries from the host system, run the following commands:
$ srp -stop <srp_name>
$ srp -replace <srp_name> -t libv3

To switch to the original libraries, run the following commands:


$ srp -stop <srp_name>
$ srp -delete <srp_name> -t libv3

12.9.3 Restoring restricted HP 9000 commands


HP 9000 Containers provides the following options to restrict commands inside the container:
• Replace unsupported commands with a dummy command that exits with an error message.
In this case, the original HP 9000 commands are backed up under /sbin-hp9000 (for
/sbin commands) or under /var/opt/HP9000-Containers inside the container. These
can be copied back for testing.
For example,
$ cp -p /var/opt/HP9000-Containers/usr/sbin/<command> /usr/sbin

• Use container rules to disallow execution. To allow execution of the command, remove the
command's entry from the /opt/HP9000-Containers/config/hp9000.disallowed.cmds
file, and then run the setrules command.
If the recovered command works as expected inside the container, remove the entry for the command
from the /var/opt/HP9000-Containers/hp9000sys_delete_commands file inside the
container. Also, remove the entry from the files
/opt/HP9000-Containers/config/hp9000sys_delete_commands and
/opt/HP9000-Containers/config/hp9000.disallowed.cmds.
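For example, a minimal sequence to restore and re-enable a single command might look like the following. The command name swapinfo is used only as an illustration, and the entries are assumed to be stored as full command paths; verify the exact entry format in the files on your system before editing them.
# Inside the container: copy the backed-up command into place for testing
$ cp -p /var/opt/HP9000-Containers/usr/sbin/swapinfo /usr/sbin
# If it works as expected, remove its entry from the configuration files
# listed above and re-apply the container rules
$ grep -v '/usr/sbin/swapinfo' \
    /opt/HP9000-Containers/config/hp9000.disallowed.cmds > /tmp/cmds.tmp
$ mv /tmp/cmds.tmp /opt/HP9000-Containers/config/hp9000.disallowed.cmds
$ setrules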

12.10 Performance tuning


This section discusses the options to tune HP 9000 containers for better performance.

12.10.1 Switching commands to Integrity native commands


If the application is script-intensive, performance might suffer significantly when it is run inside an
HP 9000 container. This is because scripts are usually short-lived processes with flat execution
profiles that do not suit emulation.
HP 9000 Containers provides an option to replace commonly used shells and commands with
Integrity native versions. The list of commands that are copied is available in the /opt/
HP9000-Containers/config/hp9000_switch_commands file.
To switch commands, run the following commands:
$ srp -stop <srp_name>
$ srp -add <srp_name> -t cmdv3 -b

NOTE: After switching the commands, patching them inside the container is not supported. If the
files are overwritten as a result of patching, recover them manually. Also, when the commands are
patched on the host, the commands cannot be automatically copied into the container. You can
run the replace operation to copy all the latest commands again, but this requires container
downtime.
To copy the commands again, run the following commands:
$ srp -stop <srp_name>
$ srp -replace <srp_name> -t cmdv3

To switch to HP 9000 commands, run the following commands:


$ srp -stop <srp_name>
$ srp -delete <srp_name> -t cmdv3

12.10.2 Tuning ARIES emulation


Performance of certain applications can be improved by tuning ARIES. The ARIES configuration
parameters are specified in /.ariesrc for 32-bit applications and /.aries64rc for 64-bit
applications. The configuration files might also reside in the user home directory or application
directory, if required.
The following options are available for tuning ARIES emulation:
• Enable trace scheduling:



# Enable trace scheduling
<executable-path> -sched_trace
# End

• Reduce memory overhead:


# Reduce memory footprint
<executable-path> -mem_min
# End

• Enable the use of native code for common APIs (with ARIES patch PHSS_42863 or later):
# Turn on API optimization
<executable-path> -opt_api_trans
# End

• Enable preservation of shared library translations across unloads (with ARIES patch
PHSS_42863 or later):
# Turn on shared library preservation
<executable-path> -shlib_preserve
# End
# Increase ARIES private heap
<executable-path> -ap_heap_ssz 8192
# End
Also, increase the value of the kernel parameter pa_maxssiz_32bit by 8 MB (see the example
after this list).
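
As an example of the kernel change, the following sketch shows one way to raise the tunable with kctune from the global environment; the second command uses a placeholder, so substitute the value currently reported on your system plus 8 MB (8388608 bytes):
$ kctune pa_maxssiz_32bit
$ kctune pa_maxssiz_32bit=<current value + 8388608>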

NOTE: HP recommends proper testing before enabling such configurations in production because
these configuration options can have adverse impact on performance, or accuracy in some cases.

12.10.3 Tuning kernel parameters


For some applications, reducing the parameters filecache_min and filecache_max to 5-10%
of physical memory might help.
$ kctune filecache_min=5% filecache_max=10%
Compare all the kernel parameters on the Integrity server with the HP 9000 server and ensure that
the required values are set.
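One simple way to capture both sets of tunables for comparison is sketched below; kmtune is the tunable-listing command on older HP-UX releases, and the two output formats differ, so review the differences manually rather than relying on an exact diff.
# On the HP 9000 server:
$ kmtune > /tmp/hp9000_tunables.txt
# On the HP-UX 11i v3 Integrity host:
$ kctune > /tmp/integrity_tunables.txt
$ diff /tmp/hp9000_tunables.txt /tmp/integrity_tunables.txt | more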
For any assistance with kernel tuning, contact HP-UX Support Center at http://www.hp.com/go/
hpsc.

12.10.4 Profiling ARIES emulation


You can use HP Caliper to profile ARIES emulation of an application from its startup. However, you
cannot attach Caliper to an already running ARIES-emulated process.
To profile ARIES emulation:
1. Install Caliper on the host system (global) and copy the files into the container:
$ cp -p -r /opt/caliper /var/hpsrp/<srp_name>/opt
2. Log in to the container.
3. Create a directory to hold ARIES profiles:
$ mkdir /tmp/ARIES_profdb
4. Modify the application startup command:
export PA_BOOT32_DEBUG=3
export PA_BOOT64_DEBUG=3
export CALIPER_HOME=/opt/caliper
$CALIPER_HOME/bin/caliper fprof -r a \
--database=/tmp/ARIES_profdb/db,unique \
--scope=process \
--des=all \
--process=all \
--thread=all \
--output-file=/tmp/ARIES_profdb/aries_profile.txt,per-process,unique \
<Application executable or startup script> <arguments>
unset PA_BOOT64_DEBUG
unset PA_BOOT32_DEBUG
5. Run the test and save /tmp/ARIES_profdb.
6. Contact HP support for detailed analysis of the report.



13 Support and other resources
13.1 Information to collect before you contact HP
Be sure to have the following information available before you contact HP:
• Software product name
• Hardware product model number
• Operating system type and version
• Applicable error message
• Third-party hardware or software
• Technical support registration number (if applicable)

13.2 How to contact HP


Use the following methods to contact HP technical support:
• In the United States, see the Customer Service / Contact HP United States website for contact
options: http://welcome.hp.com/country/us/en/contact_us.html
• In the United States, call 1-800-HP-INVENT (1-800-474-6836) to contact HP by telephone.
This service is available 24 hours a day, 7 days a week. For continuous quality improvement,
conversations might be recorded or monitored.
• In other locations, see the Contact HP Worldwide website for contact options: http://
welcome.hp.com/country/us/en/wwcontact.html

13.3 HP authorized resellers


For the name of the nearest HP authorized reseller, see the following sources:
• In the United States, see the HP U.S. service locator web site:
http://www.hp.com/service_locator
• In other locations, see the Contact HP worldwide web site:
http://welcome.hp.com/country/us/en/wwcontact.html

13.4 Related Information


• HP Integrity family
http://www.hp.com/go/integrity
• HP-UX 11i v3
http://www.hp.com/go/hpux11i
• HP ARIES dynamic binary translator
http://www.hp.com/go/aries
• HP-UX Containers
www.hp.com/go/hpux-srp-docs
• HP 9000 Containers
http://www.hp.com/go/hp9000-containers



• HP 9000 Containers Software Access
http://www.software.hp.com
• HP Process Resource Manager
http://www.hp.com/go/prm
• HP Virtualization Continuum for HP-UX
http://www.hp.com/go/vse
• HP Serviceguard for HP-UX
http://www.hp.com/go/serviceguard
• HP-UX Manuals
http://www.hp.com/go/hpux-core-docs

13.5 Typographic conventions


This document uses the following typographical conventions:
%, $, or # A percent sign represents the C shell system prompt. A dollar sign
represents the system prompt for the Bourne, Korn, and POSIX
shells. A number sign represents the superuser prompt.
audit(5) A manpage. The manpage name is audit, and it is located in
Section 5.
Command A command name or qualified command phrase.
Computer output Text displayed by the computer.
Ctrl+x A key sequence. A sequence such as Ctrl+x indicates that you
must hold down the key labeled Ctrl while you press another key
or mouse button.
ENVIRONMENT VARIABLE The name of an environment variable, for example, PATH.
ERROR NAME The name of an error, usually returned in the errno variable.
Key The name of a keyboard key. Return and Enter both refer to the
same key.
Term The defined use of an important word or phrase.
User input Commands and other text that you type.
Variable The name of a placeholder in a command, function, or other
syntax display that you replace with an actual value.
[] The contents are optional in syntax. If the contents are a list
separated by |, you must select one of the items.
{} The contents are required in syntax. If the contents are a list
separated by |, you must select one of the items.
... The preceding element can be repeated an arbitrary number of
times.
| Separates items in a list of choices.
WARNING A warning calls attention to important information that if not
understood or followed will result in personal injury or
nonrecoverable system problems.
IMPORTANT This alert provides essential information to explain a concept or
to complete a task.



NOTE A note contains additional information to emphasize or supplement
important points of the main text.



14 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
([email protected]). Include the document title and part number, version number, or the URL
when submitting your feedback.

Glossary
ARIES Automatic Retranslation and Integrated Environment Simulation.
CIFS Common Internet File System.
DCE Distributed Computing Environment.
DDFA Data Communications and Terminal Controller Device File Access.
DHCP Dynamic Host Configuration Protocol.
DLKM Dynamically Loadable Kernel Module.
DNS Domain Name Server.
DP Data Protector.
FSS Fair Share Scheduler.
GID Group Identifier.
gWLM Global Workload Manager.
HP-UX OS HP-UX Operating System.
ISV Independent Software Vendor.
LDAP Lightweight Directory Access Protocol.
LOFS Loopback File System.
LTU License To Use.
LUN Logical Unit Number.
NFS Network File System.
NIC Network Interface Controller.
NIS Network Information Service.
NLIO Native Language Input/Output.
NTP Network Time Protocol.
OE Operating Environment.
OLAR Online Addition and Replacement.
PA-RISC Precision Architecture Reduced Instruction Set Computing.
PID Process Id.
PRM Process Resource Manager.
PSET Processor Set.
RAID Redundant Array of Independent Disks.
RBAC Role Based Access Control.
RC Run Control.
RPC Remote Procedure Call.
RTU Right To Use.
SCSI Small Computer System Interface.
SD Software Distributor.
SG Serviceguard.
SMH System Management Homepage.
SMSE Standard Mode Security Extensions.
SRP Secure Resource Partitions.
SSH Secure Shell.
SSHD Secure Shell Daemon.
STM Support Tools Manager.

UDP User Datagram Protocol.
UID User Identifier.
VM Virtual Machine.
vPar Virtual Partition.
VxFS Veritas File System.
VxVM Veritas Volume Manager.
WDB Wildebeest Debugger.
WLM Workload Manager.
XVfb X Virtual Frame Buffer.

Index
A
additional container configuration, 25, 36
Additional requirements, 16
assign administrator privileges, 49
auditing, 29
Auto start setting, 23, 35

B
Backup applications, 57
  HP 9000 classic containers, 58
  HP 9000 system containers, 57
batch mode, 19

C
change container configuration, 25
choosing container name, 19
chroot, 9
commands
  unsupported
    restricted, 54
complete recovery, 23, 35
Configuration parameters, 23
configure
  additional devices, 26
  additional IP addresses, 26
  additional privileges, 28
  container local mounts, 51
  container pre-start mounts, 51
  cron jobs, 27, 37
  host name, 36
  HP 9000 local users, 37
  IP address, 25, 36
  machine-specific parameters, 31
  mount points, 26, 34, 36
  NFS and Autofs clients, 51
  NFS exports, 52
  node name, 36
  printers, 27, 37
  SG package, 70
  stack size, 30
  threads, 30
Configure mount points, 21
configure PRM, 23
configure trusted mode, 27
configure VxFS
  HP 9000 classic container, 52
  HP 9000 system container, 51
consolidating HP 9000 servers, 11
container
  start and stop, 66
  view or modify configuration, 64
container cloning, 56
container directories, 48
Container Manager, 61
container package model, 20, 68
container types
  HP 9000 system containers and HP 9000 classic containers, 10
cpio, 17, 18
CPU and memory allocation, 12
CPU Entitlement, 24
Create
  container root directory, 21
  HP 9000 classic container, 33
create
  HP 9000 system container, 21
  root directory, 33
Create HP 9000 container, 20
Create system container, 23
Creating file systems, 19

D
Data migration, 17
DCE
  DCE client, 27
  DCE server, 27
DDFA, 28
Dedicated allocation, 12
delete administrator, 49
DHCP, 24
disable Autofs, 29
Disable PRM, 16
Disallowed commands configuration, 24
DNS configuration, 24

E
Enable PRM, 16
error messages, 25
errors, 25, 55
export and import
  HP 9000 classic container, 57
  HP 9000 system container, 56

F
failover nodes, 69
fbackup, 17, 18
file archive, 23, 35
file system
  HP 9000 classic container, 46
  HP 9000 system container, 45
file system image, 17
floating IP address, 26
frecover, 34
FSS, 23

G
gWLM, 12

H
host IP address, 53
host name, 19, 25
HP 9000 container, 9
  add new IP address, 52
  primary IP address, 52
  shut down, 49
  start, 49
HP 9000 Containers, 9
  administration, 49
  auditing, 58
  limitations, 73
HP 9000 root directory, 35
HP 9000 server, 12, 17
HP 9000 system container, 27
  cloning, 57
HP 9000 system containers, 25
HP ARIES dynamic binary translator, 9
HP Integrity server, 9, 17
HP virtualization solutions, 12
HP XPADE, 11
HP-UX Containers
  SRP, 9
hp9000cl, 35
hp9000sys, 23
hp9000sys template, 23
HPSC, 15

I
Ignite-UX, 17
Ignite-UX network recovery, 34
Ignite-UX tape recovery, 34
Install drivers, 20
Installing and configuring HP 9000 Containers, 15
Installing HP 9000 Containers, 16
Integrity host, 53
Integrity server, 13
interactive mode, 19
IPv4, 24
IPv6, 24
ISV software, 10
ISV software license, 13

K
kernel patches, 55
kernel tunable parameters, 18

L
LAN, 24
latest version, 16
legacy HP 9000 containers, 19
limitations, 11
LOFS, 47
loop-back mounts, 47
LTU, 13

M
machine-specific parameters, 30
Max CPU Usage, 24
Max Memory, 24
Memory Entitlement, 24
MKNOD, 29
MLOCK, 28
Modify
  host name, 53
  resource entitlements, 53
multiple IP addresses, 26

N
Network parameters, 24, 35
NFS, 17
NFS mounted directories, 18
node name, 19, 25

O
OSI Transport Services, 29
other tools, 23

P
PA-RISC, 12
PA-RISC environment, 13
package model
  application package model, 67
  container package model, 67
  select package model, 67
patching, 54
  commands and libraries, 55
pax, 17
performance tuning, 84
Perl, 15
Prerequisites, 15
primary node, 68
PRM, 12
PRM configuration, 24
PRM group, 12
Proof-of-Concept testing, 11
PSET, 23

R
RBAC, 37
RC script, 26, 36
Recommended patches, 15
Recover HP 9000 image, 21, 34
  cpio, 22
  frecover, 22
  Ignite-UX network recovery, 22
  Ignite-UX tape recovery, 22
  tar, 22
resource entitlement, 12
restore service, 26
Root user configuration, 24
RTPRIO, 28
RTU, 13
run level, 10
Run level support, 56

S
SD, 26
SD post session scripts, 55
Secure Shell, 15
select container types
  HP 9000 classic containers, 17
Set up
  user environment, 33
SG, 13, 26
  integration, 67
SG cluster, 24
Share based allocation, 12
Shared Memory, 24
sizing HP 9000 container, 11
SSH authorization keys
  HP 9000 classic container, 50
  HP 9000 system container, 50
SSHD, 16
stack size, 30
stand-alone ARIES, 12
standard mode, 27

T
tape archive, 23, 35
tar, 17, 18, 34
testing
  HP 9000 classic container, 38
  HP 9000 system container, 29
third-party tools for recovery, 23
traditional migration, 11
Transition of application, 17
transition using HP 9000 Containers, 10
troubleshooting, 77
trusted mode, 27
tweaking, 30

U
upgrade
  container version, 41
  HP 9000 classic container, 41
  HP 9000 system container, 42
use HP 9000 Containers, 10
user environment
  HP 9000 image recovery, 21
User management
  Add new user, 50
  allow container access, 50
  deny container access, 50
  HP 9000 classic container, 50
  HP 9000 system container, 49

V
verify system container installation, 25
VxFS, 21

W
WLM, 12
Workarounds, 30

X
X server
  XVfb, 27
xinetd, 38

