Site Preparation Guide
Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.
U.S. Government Rights – Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions
of the FAR and its supplements.
This distribution may include materials developed by third parties.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other
countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, Java, JavaHelp, J2EE, JumpStart, Solstice, Sun Blade, SunSolve,
SunSpectrum, ZFS, Sun xVM hypervisor, OpenSolaris, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. or its subsidiaries in the U.S.
and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other
countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. DLT is claimed as a trademark of Quantum
Corporation in the United States and other countries. Netscape and Mozilla are trademarks or registered trademarks of Netscape Communications Corporation in
the United States and other countries.
The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts
of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to
the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license
agreements.
Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in
other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export
or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially
designated nationals lists is strictly prohibited.
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO
THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Copyright ©2010 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 U.S.A. All rights reserved.
Contents
Preface ...................................................................................................................................................11
1 Architecture ..........................................................................................................................................13
Architecture Introduction .................................................................................................................. 13
Enterprise Controller .......................................................................................................................... 14
Proxy Controller .................................................................................................................................. 15
Example of a Co-Located Deployment Architecture .............................................................. 15
Example of a Deployment Architecture with Multiple Proxy Controllers ........................... 15
Agents ................................................................................................................................................... 16
Management Network ........................................................................................................................ 17
Data Network ....................................................................................................................................... 17
7 Decision: What Type of Systems for the Enterprise Controller and Proxy Controllers? ........... 33
Ops Center System Requirements ..................................................................................................... 33
Ops Center Enterprise Controller Requirements .................................................................... 33
Ops Center Proxy Controller Requirements ............................................................................ 34
Ops Center Agent Requirements ............................................................................................... 35
Firmware Requirements ..................................................................................................................... 36
Supported Systems Matrix ................................................................................................................. 40
Supported Operating Systems ........................................................................................................... 44
Supported Operating System by Feature ................................................................................... 45
Supported Operating Systems for Logical Domains ............................................................... 46
Supported Browsers ............................................................................................................................ 46
Cache Planning .................................................................................................................................... 47
Cache Recommendations for Connected Mode Configurations ........................................... 48
Cache Requirements for Disconnected Mode Configurations .............................................. 49
System Scaling ...................................................................................................................................... 50
Enterprise Controller Matrix ...................................................................................................... 51
Proxy Controller Matrix .............................................................................................................. 51
11 Provision an OS ....................................................................................................................................69
Provision an OS Introduction ............................................................................................................ 69
14 Virtualization .......................................................................................................................................79
Logical Domains .................................................................................................................................. 79
Solaris Containers ............................................................................................................................... 80
21 OC Doctor ..............................................................................................................................................97
Utility Download ................................................................................................................................. 97
OC Doctor Version 1.11 (March 12 2010) ................................................................................ 97
Preface

Ops Center is a data center life-cycle management tool that enables you to provision, patch, and monitor the managed hardware, storage, and software, or assets, in one or more of your data centers from a single browser user interface. The remote management capabilities are designed to help increase availability and utilization and minimize downtime.

The user interface displays a consolidated view of all the discovered and managed resources in your data centers, including SPARC and x86 systems, Linux and Solaris Operating Systems (Solaris OS), and Solaris Containers and zones.
The following are some of the tasks that you can perform from the Ops Center console:
■ Provision bare metal systems with Solaris, Red Hat, or SUSE Linux operating systems
■ Automate patching and updates for Solaris and Linux OS
■ Update firmware
■ Manage and monitor your assets
■ Generate a variety of reports
Components
■ Enterprise Controller - The Enterprise Controller is the central server that consolidates the
data about the managed systems in your datacenters. You use the Enterprise Controller's
browser user interface (BUI) to view and administer the managed systems. The Enterprise
Controller connects to the managed systems through one or more Proxy Controllers.
■ Proxy Controller - The Proxy Controller increases the scale of the Enterprise Controller's
operations. In a simple deployment or small datacenter, you can install the Enterprise
Controller and Proxy Controller on the same system (co-located). In a larger, more complex
data center, you can install multiple proxy controllers to manage your assets.
■ Agent software - An agent is deployed on an asset so that the Enterprise Controller can
identify the asset. When the agent is installed on the hardware or software, the asset appears
in the Managed Assets section of the Navigation panel.
■ Managed Assets - Assets that have been discovered and have agent software. The agent
software responds to commands from the Enterprise Controller, allowing the asset to be
identified and managed.
■ Virtualization Controller - The Virtualization Controller is a specialized agent that identifies and manages Solaris 10 OS global zones. Solaris 8, 9, and 10 OS (including non-global zones in Solaris 10) and the Linux OS use the agent software.
Chapter 1 • Architecture
Architecture Introduction
The three-tier architecture consists of the Enterprise Controller, the Proxy Controllers, and the managed systems. In a typical data center scenario, the managed systems connect to the Enterprise Controller through Proxy Controllers. You can also use one Proxy Controller to manage both the management and data networks.
Enterprise Controller
The Enterprise Controller is the central server that consolidates management of the connected systems. You manage the connected systems using the browser-based user interface (BUI). The Enterprise Controller connects to the managed systems through Proxy Controllers that are deployed for each network.
In Connected mode, the Enterprise Controller has Internet access to download patch information from Sun Knowledge Services, and to download patches from software vendors such as Sun, Oracle, Red Hat, and Novell. You can also choose to use the software in Disconnected mode.
Proxy Controller
The Enterprise Controller requires one or more proxies to handle the managed systems. Proxy
Controllers increase the scale of the Enterprise Controller's operations. In a simple data center,
one Proxy Controller is co-located with the Enterprise Controller.
A Proxy Controller manages the flow of actions and data between the Enterprise Controller and the managed systems. You can perform actions on only a subset of the managed systems at any one time. Actions are placed in a job queue in chronological order. When a job finishes, the next job in the queue starts.
If you anticipate having a large number of concurrent, parallel jobs, consider using multiple
proxy controllers to improve performance and scalability.
Agents
Agent software is deployed on an asset so that the Enterprise Controller can identify the asset
and manage it. Agents communicate with a specific Proxy Controller; they do not communicate
with the Enterprise Controller directly.
Some Ops Center features, such as firmware provisioning, do not use agents. Other features,
such as operating system updates, rely on agents to perform tasks within the operating system
on managed systems.
Management Network
On the management network, the physical systems are managed separately. You can remotely control the physical systems that are discovered and managed by Ops Center. You can perform the following functions through this network:
■ Power on or off
■ Power usage
■ Firmware update
■ OS provisioning
■ Locator lights information
■ Boot device information
■ Hardware variable information such as temperature and fan speed
■ Boot parameters
Data Network
On the data network, the operating systems running on the managed systems are managed separately. A separate Proxy Controller is required to manage this network. You can perform the following functions through this network:
■ Provision an OS (using the manual net boot option during OS provisioning)
■ Patch, or update, an OS
■ Reboot an OS
■ Obtain OS information such as type and version
■ Obtain CPU, memory, and network usage information
■ Obtain zone-related information, such as representation of global and non-global zones
Chapter 2
Ops Center software downloads operating system patches and other new software using
Internet access, a mode of operation called Connected mode. By default, Ops Center is in
Connected mode. Before beginning an installation, consider whether you want Ops Center to
access the Internet. In Disconnected mode, the Enterprise Controller cannot be updated
automatically so all updates must be scheduled and managed according to a site policy for
manual procedures. After you have completed the Ops Center installation, you can change
modes.
Connected Mode
In Connected mode, Ops Center uses an Internet connection to access patches and patch
information. This mode is useful for most datacenters.
Disconnected Mode
In Disconnected mode, Ops Center can be used in a secured environment that does not allow Internet access. You must load the patches and other new software from a media device, such as a CD or DVD, onto the Enterprise Controller. To obtain the software, you run a harvester script on a system that is connected to the Internet and download the software to a CD or DVD.
Semi-Disconnected Mode
You can use a combination of Connected and Disconnected modes to maintain your data center. In semi-disconnected mode, you run your data center in Disconnected mode until you need to access the knowledge base or third-party vendors. For example, when you
want to check for patches, you switch the Enterprise Controller to Connected Mode, connect to
the Internet to get the needed information, then switch the Enterprise Controller back to
Disconnected Mode.
See “Cache Planning” on page 47 for more information about configuring the Enterprise
Controller for these Connection modes.
Be aware that some software updates cause the Enterprise Controller to reboot or run in
single-user mode.
If you enable the Auto Update option after the initial configuration of the Enterprise Controller,
you must also perform the following procedures:
1. Configure the co-located Proxy Controller.
2. Install and configure agent software on the Enterprise Controller.
Chapter 4 • High Availability
Your High Availability (HA) architecture must consider all single points of failure, such as
power, SAN and other storage, and network connectivity in addition to the Ops Center system.
The Ops Center High Availability capability consists of the transfer of Enterprise Controller
functions from one system to another system. The secondary Enterprise Controller takes over
much of the primary Enterprise Controller's identity, including its host name, its IP addresses,
its ssh keys, and its Ops Center data and role.
In an HA configuration, the primary Enterprise Controller has Ops Center software installed,
configured, and operational. The secondary Enterprise Controller has Ops Center software
installed, but not configured, and not operational. In the failover procedure, the data that is
saved on the primary Enterprise Controller is transferred to the secondary Enterprise
Controller to duplicate the primary Enterprise Controller's configuration.
However, root user passwords on the primary and secondary Enterprise Controllers are not
changed.
When the primary Enterprise Controller fails, you initiate the failover to the secondary
Enterprise Controller by:
■ Shutting down the primary Enterprise Controller, if possible
■ Preparing the secondary Enterprise Controller for failover
■ Transferring the storage asset that holds the /var/opt/sun/xvm directory structure from
the primary Enterprise Controller to the secondary Enterprise Controller
■ Restoring the Ops Center configuration on the secondary Enterprise Controller
■ Rebooting the secondary Enterprise Controller and starting Ops Center operations
Only one Enterprise Controller, either primary or secondary, can be operational at any
given time.
Requirements
Use two systems of the same model that are configured identically:
■ Processor class (SPARC or x86)
■ Operating system (Solaris or RHEL 5.0)
■ Ops Center software version, including updates
■ Set of network interfaces that are cabled identically to the same subnets
■ Transportable storage
Add an asset tag to identify the primary Enterprise Controller and to distinguish it from the
secondary Enterprise Controller.
If you use ZFS to provide the file system that mounts as /var/opt/sun/xvm, avoid setting the ZFS sharenfs property to share /var/opt/sun/xvm/osp/share/allstart. This allows the Ops Center software to use the legacy NFS sharing tools to share the /var/opt/sun/xvm/osp/share/allstart directory.
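For example, a minimal sketch of keeping the share under legacy control, assuming a hypothetical pool and dataset named xvmpool/xvm:

    # zfs set sharenfs=off xvmpool/xvm                            (keep ZFS from managing the NFS share)
    # share -F nfs -o ro /var/opt/sun/xvm/osp/share/allstart      (share through the legacy share command or an /etc/dfs/dfstab entry)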
Limitations
■ User accounts and data that are not associated with Ops Center are not part of the failover
process. Only Ops Center data is moved between the primary and secondary Enterprise
Controllers.
■ BUI sessions are lost on failover.
■ The HA configuration applies only to the Enterprise Controller and its co-located Proxy
Controller and not to other standalone Proxy Controllers.
A wide variety of storage solutions can meet these criteria, including hardware RAID arrays and
external JBODs. Storage can be attached to the Enterprise Controllers using various means,
including Storage Area Networks, or directly connected Fibre Channel (FC) or SCSI interfaces.
You must determine what storage solution offers the capacity, performance, connectivity, and
redundancy capabilities required for use with Ops Center. Configuration procedures vary
greatly among the available storage solutions, and between operating systems.
Note – You must configure the transferable storage on the system that you want to use as the primary Enterprise Controller before you install Ops Center software on that system.
In this configuration, both systems have access to all of the disks in the array. Using the FC ports
in this way avoids changing interface connections in the failover procedure. You must prevent
the two systems from using the same disks at the same time. In this example configuration, only
the primary Enterprise Controller accesses the /var/opt/sun/xvm directory on the array.
The example array has no inherent data redundancy capability, so ZFS is used to create a
mirrored storage pool and a file system that will mount as the /var/opt/sun/xvm directory.
To resolve an issue regarding when ZFS and LOFS mounts take place in the system boot
process, the configuration sets the mountpoint property of the example ZFS file system to
legacy. The legacy value indicates that the legacy mount and umount commands, and the
/etc/vfstab file, will control mounting and unmounting this ZFS file system. Other storage
solutions typically use these legacy commands and the /etc/vfstab file to control mounting
and unmounting operations. Refer to the Release Notes for more information about the LOFS
race condition issue.
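The following is a minimal sketch of such a configuration, assuming a hypothetical pool named xvmpool built from two example disks; actual device names and redundancy choices depend on your storage solution:

    # zpool create xvmpool mirror c2t0d0 c2t1d0       (mirrored pool on the shared array)
    # zfs create xvmpool/xvm
    # zfs set mountpoint=legacy xvmpool/xvm           (legacy commands and /etc/vfstab control mounting)

Then add a line such as the following to /etc/vfstab and mount the file system:

    xvmpool/xvm  -  /var/opt/sun/xvm  zfs  -  yes  -

    # mount /var/opt/sun/xvm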
Chapter 6
The Discovery feature of the Ops Center software installs the agent software on each asset it
discovers. If you prefer, you can install the agent software on only those assets that you select.
Chapter 7 • Decision: What Type of Systems for the Enterprise Controller and Proxy Controllers?
Ops Center System Requirements

Ops Center Enterprise Controller Requirements

Operating system: Ops Center Enterprise Controller and Proxy Controllers require at least Solaris 10 11/06 (x64 or SPARC), Red Hat Enterprise Linux (RHEL) 5.0, RHEL 5.3, or Oracle Enterprise Linux (OEL) 5.3.
Ops Center Enterprise Controller software, and the data it stores, consume space below the
/var/opt/sun/xvm and /opt directory structures.
Ops Center software installation procedures use /var/tmp/OC as the example working directory for software installation. The directory that you use for this purpose requires about 2 GB of available space.
On Solaris Ops Center Enterprise Controllers, the software update data is stored below
/opt/SUNWuce, and data for OS provisioning is stored below /var/opt/sun/xvm/images.
On RHEL Ops Center Enterprise Controllers, software update data is stored below /usr/local/uce/server and data for OS provisioning is stored below /var/opt/sun/scn. OS images often consume about 4 GB of space each.
When an Ops Center Proxy Controller is located on the same system as an Ops Center Enterprise Controller, no duplication of OS images or software update data occurs.
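To verify that these locations have enough space before you install, one simple check uses standard OS commands (no Ops Center specifics assumed):

    # df -h /var /opt        (confirm available space in the file systems that hold /var/opt/sun/xvm, /var/tmp, and /opt)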
Ops Center Proxy Controller Requirements

Operating system: Ops Center Proxy Controllers require at least Solaris 10 11/06 (x64 or SPARC), Red Hat Enterprise Linux (RHEL) 5.0, RHEL 5.3, or Oracle Enterprise Linux (OEL) 5.3.
Ops Center Proxy Controller software, and the data it stores, consume space below the
/var/opt/sun/xvm and /opt directory structures.
Ops Center software installation procedures use /var/tmp/OC as the example working directory for software installation. The directory that you use for this purpose requires about 2 GB of available space.
When an Ops Center Proxy Controller is located on the same system as an Ops Center Enterprise Controller, no duplication of OS images occurs.
On Solaris Ops Center Proxy Controllers, Solaris OS images are stored below /var/js and
Linux OS images are stored below /var/opt/sun/xvm/osp/.
On RHEL Ops Center proxies, Solaris OS images are stored below /var/js and Linux OS
images are stored below /var/opt/sun/xvm/osp/.
Ops Center Agent Requirements
Known Dependencies
Testing Ops Center agent installations on various operating systems has demonstrated that the
following specific dependencies exist:
■ “RHEL 3 Dependencies” on page 36
■ “SLES 10, 64-bit Dependencies” on page 36
■ “SUSE LINUX Enterprise Server 9 (i586) Dependencies” on page 36
RHEL 3 Dependencies
Ops Center agent updates on systems running Red Hat Enterprise Linux 3 require that the
libxml2 library is installed. This library is delivered by the libxml2-2.5.10-5.i386.rpm
package.
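A quick way to verify this dependency is with standard RPM commands; a minimal sketch, using the package file name from the text above:

    # rpm -q libxml2                              (check whether the library is already installed)
    # rpm -ivh libxml2-2.5.10-5.i386.rpm          (install the package if it is missing)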
Firmware Requirements
Note - The information on this page is being updated, and might be out of date or incomplete.
Ops Center supports a wide range of Sun servers and chassis, as indicated by the table below.
However, system support is not static. Ops Center can support new Sun hardware without
requiring a new software release.
Each hardware group tests new systems for the ability to be supported by Ops Center. As a
result, the supported list is dynamic and will change as new Sun hardware, or a system variant,
is released.
An "X" in the Qualified for Firmware Provisioning column below indicates that the Ops Center
engineers have tested and qualified the system for firmware provisioning. The recommended
firmware version is the most recently tested version.
System | Qualified for Firmware Provisioning | Firmware Versions | Supported Operating Systems
Sun Blade X6220 Server Module | X | 2.0.3.2, 2.0.3.2 | Solaris 10 11/06 x86, RHEL4U4 AS 64bit
Sun Blade X6250 Server Module | X | 4.0.43, 4.0.45 | Solaris 10 11/06 x86, RHEL5 AS 32bit, SUSE10-64bit
Sun Blade X8440 Server Module | X | 2.0.0.0, 2.0.1.11 | Solaris 10 11/06 x86, Solaris 10 8/07 x86, RHEL5, SLES10
Sun Fire V125 Server | X | 1.6.3, 1.6.3 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V215 Server | X | 1.6, 1.6 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V240 Server | X | 1.6.2, 1.6.4 | Solaris 10 11/06 SPARC, Solaris 10 5/08 SPARC
Sun Fire V245 Server | X | 1.6.3, 1.6.9 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V445 Server | X | 1.6, 1.6 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire T1000 Server | X | Sun System Firmware 6.1.2 | Solaris 10 11/06 SPARC, Solaris 10 8/07 SPARC, Solaris 10 5/08 SPARC
Sun Fire T2000 Server | X | Sun System Firmware 6.1.2 | Solaris 10 11/06 SPARC, Solaris 10 8/07 SPARC
Sun Fire X4200 Server | X | - | Solaris 10 6/06 x86, RHEL5 64 bit, SLES10 64 bit
Supported Systems Matrix
Ops Center supports a wide range of Sun servers and chassis, including the following:
■ All ILOM-based Sun Servers
■ M3000, M4000, M5000, M8000, M9000
■ All ALOM, ELOM, and RSC service processor enabled systems
Other systems, such as the v240, are displayed in the table below.
Note - System support is not static. Ops Center can support new Sun hardware without
requiring a new software release. Each hardware group tests new systems for the ability to be
supported by Ops Center. As a result, the supported list is dynamic and will change as new Sun
hardware, or a system variant, is released.
An "X" in the Qualified for Firmware Provisioning column below indicates that the Ops Center
engineers have tested and qualified the system for firmware provisioning. The recommended
firmware version is the most recently tested version.
System | Qualified for Firmware Provisioning | Firmware Versions | Supported Operating Systems
Sun Blade X6220 Server Module | X | 2.0.3.2, 2.0.3.2 | Solaris 10 11/06 x86, RHEL4U4 AS 64bit
Sun Blade X6250 Server Module | X | 4.0.43, 4.0.45 | Solaris 10 11/06 x86, RHEL5 AS 32bit, SUSE10-64bit
Sun Blade X8440 Server Module | X | 2.0.0.0, 2.0.1.11 | Solaris 10 11/06 x86, Solaris 10 8/07 x86, RHEL5, SLES10
Sun Fire V125 Server | X | 1.6.3, 1.6.3 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V215 Server | X | 1.6, 1.6 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V240 Server | X | 1.6.2, 1.6.4 | Solaris 10 11/06 SPARC, Solaris 10 5/08 SPARC
Sun Fire V245 Server | X | 1.6.3, 1.6.9 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire V445 Server | X | 1.6, 1.6 | Solaris 10 1/06, Solaris 10 11/06, Solaris 10 8/07
Sun Fire T1000 Server | X | Sun System Firmware 6.1.2 | Solaris 10 11/06 SPARC, Solaris 10 8/07 SPARC, Solaris 10 5/08 SPARC
Sun Fire T2000 Server | X | Sun System Firmware 6.1.2 | Solaris 10 11/06 SPARC, Solaris 10 8/07 SPARC
Sun Fire X4200 Server | X | - | Solaris 10 6/06 x86, RHEL5 64 bit, SLES10 64 bit
Supported Operating Systems
Supported Operating System by Feature

[The feature-support table could not be recovered in this copy. Its rows cover Enterprise Controller, Proxy Controller, Discovery, Provisioning, Monitoring, Updating, Live Upgrade, and Solaris 10 8/07 Branded Zones; the operating-system column headings are missing.]

*For installation, other RHEL, OEL, and Solaris OS releases are not supported. Solaris 10 1/06, 6/06, and 10/09 OS are not supported.
Supported Operating Systems for Logical Domains

Refer to Migrating Zones for detailed information about supported Solaris 10 OS update versions.

Ops Center supports only Logical Domains that are created through the BUI. The Logical Domains host must run on specific hardware and OS combinations, and must also meet specific patch and firmware requirements. For a detailed list of these requirements, see Requirements of Logical Domains.
Supported Browsers
Ops Center's browser user interface (BUI) is accessible from the following supported browsers:
■ Firefox 2.0.x
■ Firefox 3.0.x
■ Internet Explorer 7

The following browsers are not supported:
■ Internet Explorer 6
■ Internet Explorer 8
■ Safari
■ Opera
Cache Planning
Ops Center uses a centralized file cache to manage its content. The Enterprise Controller and
Proxy Controllers use /var/opt/sun/xvm as the base directory. Agents use
/var/scn/update-agent as their base directory. The Enterprise Controller's global file cache
contains some or all of the following content, depending on what Ops Center is used for:
■ Provisioning Content
■ Firmware
■ OS Images
■ Update Content
■ Knowledge Base data - Metadata that shows what updates exist for a given update channel (such as Red Hat Enterprise Linux 5 or Solaris 10 X86)
■ Updates - Packages, Patches and RPM files that are a standard part of an OS update
channel
■ Local Content - User-designated content (software bundles, configuration files, scripts)
Ops Center propagates content from the cache as required. The requester downloads the
content on a per-job basis so a proxy controller downloads the content it needs from the
Enterprise Controller to perform a job, and an agent downloads the required content from the
proxy controller. After content is cached on a proxy controller or agent, it can be re-used
without additional downloads. This provides operational efficiency for Ops Center.
Example – A user runs a job that patches five Solaris 10 SPARC OS agents on a single proxy controller. The proxy controller downloads and caches all of the patches required by the agents, and each agent downloads and caches only the patches it requires. If an agent has already cached several updates, it re-uses those updates and downloads only what it needs from the proxy controller.
Example – A user runs a job to provision an OS ISO image to three systems that are managed by two proxy controllers. Each proxy controller downloads and caches the ISO image. The three systems do not cache the OS image, because they download and install the images from their respective proxy controllers.
Cache Recommendations for Connected Mode Configurations
Many installations use a co-located configuration, in which the proxy controller is installed on
the same OS instance as the Enterprise Controller. In this case, the proxy and enterprise
controllers share a global file cache and no additional disk space is required for the proxy
controller's cache.
Because agents store only update content for their OS instance, they have reduced caching requirements. It is recommended that 2 GB be available on the agent for both the Ops Center software and the update cache.
Example – An Ops Center installation uses an Enterprise Controller with a co-located proxy controller and one standalone proxy controller. The installation performs OS provisioning for Solaris 10 X86, Solaris 10 SPARC (update 6), and Red Hat Linux 5.3, with one ISO image for each distribution. It patches Solaris 10 X86, Solaris 10 SPARC, and Red Hat Enterprise Linux 5 32-bit X86. The standalone proxy controller is used to provision and update Solaris 10 systems on both SPARC and X86 architectures.
In this scenario, the Enterprise Controller with its co-located proxy controller and the standalone proxy controller together need a combined cache of 74 GB (44 GB plus 30 GB, as itemized below). No additional caching is required for the co-located proxy controller because it shares the Enterprise Controller's cache.
The Enterprise Controller must have a minimum cache size of 44 GB because of the following
requirements:
■ 30 GB for the three OS update channels in /var/opt/sun/xvm
■ 12 GB for the three OS provisioning ISO images in /var/opt/sun/xvm
■ 2 GB for the Ops Center software in /var/tmp and /opt
The standalone proxy controller must have a minimum cache of 30 GB, with the following
requirements:
■ 20 GB for the two Solaris OS update channels in /var/opt/sun/xvm
■ 8 GB for the two Solaris OS provisioning ISOs in /var/opt/sun/xvm
■ 2 GB for the Ops Center software in /var/tmp and /opt
Cache Requirements for Disconnected Mode Configurations

Provisioning content is managed in the same way as in Connected mode configurations, except that it is not possible to download Solaris OS images.
The following cache operations work the same in both Connected and Disconnected modes:
■ Import OS image
■ Load OS image from CD or DVD
■ Create firmware image
Update content is usually managed differently in Disconnected mode. Users must manually upload the knowledge base (KB) and update content to the Enterprise Controller.
The KB content is available as a TAR bundle, which users can obtain by running the Ops Center harvester script. "Obtaining a KB Bundle with the Harvester Script" provides details and examples of how to run the script. Depending on the settings, users can download the KB content only, or they can obtain patch content for one or more Solaris baselines.
To cache update content (such as patches, packages, or RPMs), users perform one or more bulk uploads on the Enterprise Controller. "Uploading Local Software in Bulk" explains how to perform bulk uploads of update content in Ops Center.
System Scaling
The Enterprise Controller Matrix and the Proxy Controller Matrix on this page are intended to
provide guidance on the minimum amount of memory and disk space needed to optimize
performance for your environment.
To improve performance, consider the following if you plan to install more than 100 hosts in
your data center:
■ Deploy the Enterprise Controller and the Proxy Controller on separate systems.
■ Using the OS Update functionality requires faster disks and striped disk configurations. This is critical for large-scale deployments.
■ Consider Solaris Zones and Sun Logical Domains as additional hosts, or agents.
■ Monitoring is optimized for multiple cores. Proxy Controllers that manage service
processors (SPs) will benefit from more cores.
Use the Chapter 21, “OC Doctor,” utility with the --performance flag to determine your
hardware's benchmark times (BT).
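For example, a hedged sketch of such a run; the --performance flag is described above, but the script name and location vary by release, so running OCDoctor.sh from the utility's download directory is an assumption:

    # ./OCDoctor.sh --performance        (reports benchmark times for this hardware)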
Note - For maximum performance, avoid using a co-located Proxy Controller (Enterprise
Controller and Proxy Controller installed on the same system) in environments with more than
100 hosts.
The systems specified in the table are intended as examples. Machine speed might vary based on OS, core CPU speed, the number of cores, and the number of disks. Service processor monitoring generates a heavier load than OS monitoring because the monitoring is done from the Proxy Controller.
Note - For maximum performance in environments with more than 100 hosts, avoid using a
co-located Proxy Controller (Enterprise Controller and Proxy Controller installed on the same
system). OS update functionality benefits from a faster core CPU speed on the Enterprise
Controller.
You can use Ops Center to discover, provision, update, and manage your Sun SPARC Enterprise M3000/M4000/M5000/M8000/M9000 servers and Fujitsu SPARC Enterprise M3000/M4000/M5000/M8000/M9000 servers, also referred to as SPARC Enterprise M-series servers.
The SPARC Enterprise servers contain eXtended System Control Facility (XSCF) firmware,
which is a system monitoring and control facility consisting of a dedicated processor that is
independent from the system processor. The XSCF provides an interface between the user and
the server.
The XSCF is the firmware that runs on the service processor in the server. The firmware provides a single centralized point for managing hardware configuration, hardware monitoring, the cooling system (fan units), domain status monitoring, power on and power off of peripheral devices, and error monitoring. XSCF firmware uses different functions to achieve high system availability, and it provides a partitioning function to configure and control domains.
A single XSCF service processor is installed in the SPARC Enterprise M3000, M4000, and
M5000 servers. In the SPARC Enterprise M8000 and M9000 servers, two XSCF service
processors are installed in the server; these two service processors are highly available and only
one service processor is active at a time.
When hardware resources in the server are logically divided into one or more units, each set of
divided resources can be used as one system, which is called a domain. A Solaris OS can operate
in each domain.
Requirements
Ops Center is qualified to run on SPARC Enterprise M3000/M4000/M5000/M8000/M9000 servers running the Solaris 10 10/08 operating system with the following requirements:
■ Create and configure domains manually on the server. See Domain Configuration in the Sun SPARC Enterprise M3000/M4000/M5000/M8000/M9000 Servers XSCF User's Guide.
■ In the XSCF service processor, create an xvmoc user with the platadm privilege, as shown in the sketch after this list.
■ Create a group for the SPARC Enterprise servers.
■ Use the firmware version that is recommended for the server.
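The following is a minimal sketch of creating that user from the XSCF shell; adduser, password, and setprivileges are standard XSCF commands, but verify the exact syntax against your XSCF User's Guide:

    XSCF> adduser xvmoc                      (create the user account)
    XSCF> password xvmoc                     (set the account password)
    XSCF> setprivileges xvmoc platadm        (grant the platadm privilege)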
Specific procedures are required to discover and provision firmware on the SPARC Enterprise M-series servers. The tasks for updating the domain OS are the same as updating a system OS.
In addition, depending on the environment being managed, the Enterprise Controller might
need to access a number of vendor sites to download patches or other knowledge. Review the
Chapter 26, “Vendor Download Sites,” page for a list of vendor sites.
Network Port Requirements and Protocols
Connection | Protocol and Port | Purpose
Proxy Controller to Enterprise Controller | HTTPS, TCP 443 | Proxy Controller push of asset inventory data to the Enterprise Controller; Proxy Controller pull of jobs, update, agent, and OS images
Proxy Controller to Systems | FTP, TCP 21; SSH, TCP 22; Telnet, TCP 23; DHCP, UDP 67, 68; SNMP, UDP 161, 162; IPMI, TCP+UDP 623; Service Tags, TCP 6481 | Discovery, bare metal provisioning, system management, and monitoring
Agent to Proxy Controller | HTTPS, TCP 21165 | Agent push of asset inventory data to the Proxy Controller; Agent pull of jobs
Agent to Proxy Controller | HTTPS, TCP 8002 | Agent download of updates from the Proxy Controller
Java client to public APIs | Transport Layer Security (TLS), port 11162 | JMX access from clients
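Before installation, one hedged way to confirm that a required port is reachable is with the standard telnet client; the host names here are hypothetical placeholders:

    # telnet enterprise-controller.example.com 443     (Proxy Controller to Enterprise Controller)
    # telnet proxy-controller.example.com 8002         (agent update downloads)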
Reference Configurations
This section provides the reference configurations and connectivity information for Ops
Center.
Other configurations are possible, such as using separate switches for each network. You can
implement your network using any combination of VLANs and switches. Each network,
whether management, provisioning, or data, should be assigned to separate VLANs.
Section | Description
"Separate Management, Provisioning, and Data Networks" on page 58 | Describes the connectivity requirements for the separate management, provisioning, and data networks configuration.
"Combined Management and Provisioning Network and a Separate Data Network" on page 60 | Describes the connectivity requirements for the combined management and provisioning network and separate data network configuration.
"Combined Provisioning and Data Network and a Separate Management Network" on page 64 | Describes the connectivity requirements for the combined provisioning and data network and separate management network configuration.
"Combined Provisioning, Data, and Management Network" on page 62 | Describes the connectivity requirements for the combined provisioning, data, and management network configuration.
Separate Management, Provisioning, and Data Networks

The following list summarizes the connectivity requirements for the separate management, provisioning, and data networks configuration.
■ Enterprise Controller/Proxy Controller
The enterprise controller/proxy controller should provide connectivity to the management
network, provisioning network, and corporate network as follows:
■ ETH0 connects the enterprise controller/proxy controller to the corporate network to
provide external access. The ETH0 IP address, netmask, and gateway should be
configured to meet your corporate environment connectivity requirements.
■ ETH1 connects the enterprise controller/proxy controller to the provisioning network
and should be on the same network as the ETH0 connections of the agents. No devices
other than the enterprise controller/proxy controller and the agents should reside on the
provisioning network. ETH1 should be a 1-Gbit NIC interface.
■ ETH2 connects the enterprise controller/proxy controller to the management network
and should be on the same network as the management port connections of the agents.
The ETH2 IP address, netmask, and gateway should be configured to enable
connectivity to the agent's management port IP addresses. ETH2 should be a
100-megabit NIC interface.
■ The DHCP service allocates IP addresses to the agents for loading operating systems.
■ Agents
Each agent should provide connectivity to the management network, provisioning network,
and data network as follows:
■ The management port connects the agent to the management network and should be on
the same network as the ETH2 connection of the enterprise controller/proxy controller.
The management port should be a 100-megabit connection.
■ ETH0 connects the agent to the provisioning network and must be on the same network as the ETH1 connection of the enterprise controller/proxy controller. ETH0 should be a 1-Gbit connection.
■ ETH1 connects the agent to the data network through the switch to provide external corporate network access to the agent. ETH1 should be a 1-Gbit connection.
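As a hedged illustration of this cabling on a Solaris system, persistent interface addresses can be defined with /etc/hostname.<interface> files; the device names (e1000g0 through e1000g2 standing in for ETH0 through ETH2) and host names below are hypothetical:

    # echo "ec-corp" > /etc/hostname.e1000g0      (ETH0: corporate network)
    # echo "ec-prov" > /etc/hostname.e1000g1      (ETH1: provisioning network)
    # echo "ec-mgmt" > /etc/hostname.e1000g2      (ETH2: management network)

Add matching entries for ec-corp, ec-prov, and ec-mgmt to /etc/hosts, then reboot or plumb the interfaces.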
Combined Management and Provisioning Network and a Separate Data Network

For this configuration, an additional NIC does not need to be installed on the enterprise controller/proxy controller. The combined management and provisioning network reduces system and network security.
The following list summarizes the connectivity requirements for the combined management
and provisioning network and the separate data network configuration.
■ Enterprise Controller/Proxy Controller
The enterprise controller/proxy controller should provide connectivity to the management
and provisioning network as follows:
■ ETH0 connects the enterprise controller/proxy controller to the corporate network to
provide external access. The ETH0 IP address, netmask, and gateway should be
configured to meet your corporate environment connectivity requirements.
■ ETH1 connects the enterprise controller/proxy controller to the management and
provisioning network and should be on the same network as the MGMT and ETH0
connections of the agents. No devices other than the enterprise controller/proxy
controller and the agents should reside on the management and provisioning network.
The ETH1 IP address, netmask, and gateway should be configured to enable
connectivity to the agent's management port IP addresses. ETH1 should be a 1-Gbit NIC
interface.
■ The DHCP service allocates IP addresses to the agents for loading operating systems.
■ Agents
Each agent should provide connectivity to the management and provisioning network and
the separate data network as follows:
■ The management port connects the agent to the management and provisioning network
and should be on the same network as the ETH1 connection of the enterprise
controller/proxy controller. The management port should be a 100-megabit connection.
■ ETH0 connects the agent to the management and provisioning network and must be on the same network as the ETH1 connection of the enterprise controller/proxy controller. ETH0 should be a 1-Gbit connection.
■ ETH1 connects the agent to the data network through the switch to provide external corporate network access to the agent. ETH1 should be a 1-Gbit connection.
Combined Provisioning, Data, and Management Network

For this configuration, an additional NIC does not need to be installed on the enterprise controller/proxy controller. The combined management, provisioning, and data network greatly reduces system and network security.

The following list summarizes the connectivity requirements for the combined management, provisioning, and data networks configuration.
Combined Provisioning and Data Network and a Separate Management Network

The following list summarizes the connectivity requirements for the combined data and provisioning network and the separate management network configuration.
■ Enterprise Controller/Proxy Controller
Chapter 11 • Provision an OS

Provision an OS Introduction
Operating system (OS) provisioning enables you to use Ops Center to automatically install
operating systems onto systems that are attached to your network. In most circumstances, OS
provisioning requires no manual interaction with the system that you want to install. You
initiate these OS installations from a centralized location, using the Ops Center BUI, rather
than from the systems that you want to install.
Check “Supported Operating Systems” on page 44 for the list of operating systems that you can
provision with Ops Center.
Creating OS images and creating OS profiles are one-time tasks for each OS configuration that
you want to provision. After an OS image and associated OS profile exist in Ops Center, you can
provision the OS onto systems that are attached to your network.
Chapter 12 • Provision Firmware
Firmware provisioning enables you to install firmware updates on a server by using firmware
images and firmware profiles.
Chapter 13 • About Updating an OS
Ops Center helps you keep operating systems secure and current. You can patch the following operating systems:
■ Solaris 8, 9, and 10 (SPARC)
■ Solaris 10 (x86)
■ Red Hat Linux Advanced Server 3, 4, and 5
■ SUSE Linux Enterprise 8, 9, and 10
■ Microsoft Windows
The processes for installing patches on Solaris and Linux operating systems are very similar.
The process for updating Windows is different. Detailed information is available in each
OS-specific section.
Managing Systems
Before you can use Ops Center to patch and update an OS, you must discover the OS to gather identification information for each operating system, and then you must manage the OS to install the agent controller software. The agent controller software allows Ops Center to check the current condition of the operating system and to perform update operations.
Obtaining Patches
By default, Ops Center software downloads patches and new software using Internet access.
The Enterprise Controller is connected to the Internet and to the Solaris Knowledge Services
database. You can configure Ops Center to connect to third party vendors, such as Red Hat, and
provide authentication details. When you run an update job, the patches are downloaded from
the corresponding site. For example, Solaris OS patches are available from the SunSolve web site
and Red Hat patches are available from the Red Hat site.
Ops Center downloads only signed patches from SunSolve or an EIS DVD. The patches must be in .jar or .jar.gz format or in the patch directory.
If your data center cannot have direct Internet access, configure the software to operate in
Disconnected mode. In this mode, the Enterprise Controller is not connected to the Internet
and you must upload all content, such as patches, to the Enterprise Controller. To obtain the
patches and packages, you must run the harvester script on a system outside of the data center
that does have Internet access. You then save the downloaded information to a portable media
device, such as a CD or DVD, and bring it to your data center for manual upload. The uploaded
software is stored in the Local Content section of the Updates Library.
Another option is to run your Enterprise Controller in Disconnected Mode until you need to
download patches or packages. You then change the Enterprise Controller's mode to
Connected only to download the required patches and packages, and then change back to the
Disconnected mode.
You can add categories for your content in the Updates Library, edit a component file, and
delete a local component from your library.
Reports
Several OS Update reports are available. Reports are OS-specific, but many reports check for
new patches and security advisories. You can get a general report, or test a system or installed
package for available fixes. For auditing purposes, you can create an Ops Center job history report.
Detailed information is available in each OS-specific section. When you create a report, you
select the criteria that are relevant to you, such as a list of hosts that have a specific patch or a list
of hosts that do not have a specific patch. You can export the results of most reports to a CSV
format.
Solaris Baseline Analysis Reports run much faster if you run a patch simulation and do not download the patches.
The BUI supports column-based sorting in the Report Results section for all OS Update reports except the Job History Report and the Baseline Analysis Report. Clicking any field in the header of the results table in the center panel sorts the results by that column.
You can upload patches, packages, and local content and save it in the Updates Library. Local
content includes files, scripts, executables, or binaries that are not known to the hosted tier and
are private to your organization. Your local content files typically include instructions that must
be carried out before or after an update job.
Update Job
Ops Center contains the following options in an update job to maintain control and consistency
across your data center:
■ Groups - Help you to organize the display of assets in the user interface and act as targets for
many types of jobs.
■ Roles - Enable you to determine the tasks that a user can perform on a specific asset, or a
group of assets.
■ Update Profiles - Define what you will allow, or not allow, to be installed on a target. You
can select from a list of predefined profiles, your existing custom profiles, or you can create a
new profile by modifying an existing profile.
■ Update Policies - Define how a job is performed and set the automation level of the job. You can select from a list of your existing policies or you can create a new policy.
■ Solaris Baselines, white lists, and black lists - Enable you to bring all systems to a baseline, and to remove or add patches from the list of patches to install.
■ Local Content - Enables you to add custom packages, software, and scripts.
■ Patch Simulations - Estimate how much time is required to complete an update job, based on the policy and profile, and whether the job will succeed.
■ Rollback and recovery capabilities - Enable you to back out patches.
■ Reports - Maintain patch records, including compliance reports and patch history.
You can define the following job parameters while creating a new update job:
■ Job Name and Description - Identifies the job in the Jobs list. A detailed description is
helpful in clearly identifying the job in the historical record. You can rerun existing jobs.
■ Profile - Defines what you will allow, or not allow, to be installed on a target. You can select
from a list of predefined profiles, your existing custom profiles, or you can create a new
profile by modifying an existing profile.
■ Policy - Defines how a job is performed and sets the automation level of the job. You can
select from a list of your existing policies or you can create a new policy.
■ Target Settings - Defines whether the target should be different or similar for each task in the
job.
■ Run Type - Defines whether this job is in simulation mode or is an actual run. You can
choose to deploy the job, or to run a job simulation. A job simulation determines the actions
and results of a job, and estimates how much time is required to complete the job. Job
simulations also indicate if your policy and profile responses will enable the job to succeed.
You can run a job simulation without downloading patches and packages.
■ Task Execution Order - Specifies whether the tasks should be run in parallel or sequentially.
■ Task Failure Policy - Specifies what action to take if the task fails.
■ Targets - Selects the target systems for the job.
Solaris OS Patching
The following package and patch services and features are supported for patching the Solaris
OS in Ops Center:
■ Recommended patch clusters
■ Solaris baseline reports
■ Custom packages
■ Active dependency rules
■ Patch analysis
■ Job simulation
■ Job scheduling
■ Rollback and recovery
You can use Solaris Live Upgrade to update your Solaris software or you can update your Solaris
Containers and zones.
Linux OS Patching
The following package and RPM installation services and features are supported for patching
Linux systems in Ops Center:
■ Linux Red Hat Package Manager (RPM)
■ Custom packages
■ Active dependency rules
■ Patch analysis
■ Job simulation
■ Job scheduling
■ Rollback and recovery
Windows OS Patching
The following features are supported for patching Windows systems in Ops Center:
■ Patch analysis
■ Job scheduling
Chapter 14 • Virtualization
Ops Center can manage assets and resources even if they are virtual assets and resources.
The Virtualization Controller manages and monitors the agent software on a virtual asset or
storage resource as if it were a physical component.
Logical Domains
Logical Domains, or LDoms, technology is part of a suite of methodologies for consolidation
and resource management for SPARC CMT systems. This technology allows you to allocate a
system's various resources, such as memory, CPU threads, and devices, into logical groupings
and create multiple discrete systems. Each of these discrete systems has its own operating
system, resources, and identity within a single physical system. Through careful architecture, a
Logical Domains environment can help you achieve better resource utilization, better scaling,
and increased security and isolation.
Solaris Containers
Solaris Containers are an integral part of the Solaris 10 operating system (OS). Solaris
Containers isolate software applications and services using flexible software-defined
boundaries. They enable you to create many private execution environments within a single
instance of the Solaris 10 OS. Each environment has its own identity that is separate from the
underlying hardware. Each environment behaves independently as if running on its own
system, making consolidation simple, safe, and secure.
Using Groups
Groups are administrative structures that contain assets. They appear in the Assets section of
the Navigation panel. Groups can contain any number of assets, and assets can be placed in
more than one group.
User-Defined Groups
User-defined groups can contain any type of asset:
■ Homogeneous groups contain a single type of asset: server, chassis, or operating system.
■ Heterogeneous groups can contain several types of assets.
Smart Groups
Smart groups are automatically generated to organize all of your assets by type.
You can use groups to organize your assets, and groups can act as targets for many types of
jobs. Homogeneous server groups, for example, can be targeted with OS provisioning or
firmware update jobs.
Roles
Each role grants a user a specific set of authorizations. To perform a job, you must have the
correct role for the assets or group targeted by the job. Administrators can grant roles to a user
that cover the following assets or groups:
■ Enterprise Controller
■ All Assets group
■ User-created groups
Note – Subgroups inherit the roles assigned to the parent group.
Enterprise Controller Admin Role
Group Roles
An Enterprise Controller Admin can grant one or more of these roles to any user for any
user-defined group:
■ Group Admin - This role allows the user to use administration actions such as adding or removing assets.
■ Group Provision - This role allows the user to provision new operating systems and firmware.
■ Group Update Simulate - This role allows the user to run simulated update jobs.
■ Group Manage - This role allows the user to use management and monitoring actions.
Notifications
Notification Profiles determine how notifications are sent to a user and what levels of
notifications are sent. By configuring separate notification profiles, different users can receive
specific levels of notifications through the BUI, through email, or through a pager. Different
levels of notifications can be sent for specific Virtual Pools, Groups, or top-level Smart Groups.
Four levels of notification can be sent to a destination:
■ None
■ Low and Higher
■ Medium and Higher
■ High
If a user has no notification profile, all notifications for all assets are sent to the BUI, and no
notifications are sent to other destinations.
Getting Ready
Tasks for Preparing a Site
For ILOM, ALOM, and SP-based agents, see the server documentation for information about
assigning IP addresses to the server's management port. You can also locate the server
documentation at http://sunsolve.sun.com/handbook_pub/Systems/.
Ops Center requires that you provide a valid Sun Online Account name and password when
you register the Enterprise Controller with the Sun Inventory online service. If you have Linux
systems that you intend to update using Ops Center, a valid Red Hat Network or Novell account
must be available.
Your login succeeds if you have a valid Sun Online Account. The My Account tab on the My
Sun Connection site enables you to manage your Sun Online Account, including updating
account information, and managing support contracts and licenses.
Verifying Your Red Hat Network or Novell Account
OC Doctor
The Ops Center Doctor utility is designed to check requirements and identify potential issues
before deploying Ops Center and to assist with post-deployment troubleshooting.
The utility has an internal knowledge base for detecting known issues and workarounds,
which is updated on a regular basis.
Utility Download
The utility is updated on a regular basis. You can use the built-in Auto-Update option
(--update) to download the latest version automatically.
Running the OCDoctor
# ./OCDoctor.sh
[ --troubleshoot ] [--fix] - Scans the installed components for issues. The --fix option attempts to fix the issues that are found.
Options
The following options are available:
■ “Pre-Installation” on page 99
■ “Troubleshooting and Tuning” on page 100
■ “Auto-Update” on page 101
Pre-Installation
The following pre-installation tests are available:
■ -sat-prereq - Verifies that the Enterprise Controller requirements are met.
■ -performance - Checks the machine speed and provides a Benchmark Time (BT) score. To
ensure the best results, run this command when the machine is idle.
■ -agent-prereq - Verifies that the Agent requirements are met.
sat-prereq Option
Run the following command on the system that will be your Enterprise Controller to verify that
the minimum requirements are met.
# ./OCDoctor.sh --sat-prereq
performance Option
The output of this option enables you to determine the best Enterprise Controller and Proxy
Controller configuration for your data center. For maximum performance in environments
with more than 100 systems, avoid using a co-located Proxy Controller (Enterprise Controller
and Proxy Controller installed on the same system).
Run the following command on the systems that will be your Enterprise Controller and Proxy
Controller to determine the system speed and establish a BT score:
# ./OCDoctor.sh --performance
After you obtain the output, go to “System Scaling” on page 50. The Enterprise Controller and
Proxy Controller matrices provide general guidelines for planning your system requirements.
agent-prereq Option
Run the following command on systems that you are planning to install an agent on to verify
that the minimum requirements are met.
# ./OCDoctor.sh --agent-prereq
Chapter 21 • OC Doctor 99
Options
Tip – You can also run the -troubleshoot option inside a broken zone to troubleshoot a problem. An example invocation follows this list.
■ -needhelp - Displays information about how to gather additional data and how to open
a support case. Try this option if the -troubleshoot option did not identify the problem.
Note – Support often asks for GDD output. The latest GDD utility is bundled inside the
OCDoctor in the GDD folder. For more information about the GDD, see Sun Gathering Debug
Data for Sun Ops Center.
■ -tuning - Scans current configuration and suggests improvements.
■ -whatisblobid <BLOBID> - Used for debugging. Blob IDs are what Ops Center uses to point to
patches, RPMs, and local files, and they appear in various log files. This option provides
details about a specific blob ID.
Auto-Update
■ -update - Checks for a newer version of the Doctor online and automatically installs the
newer version.
# ./OCDoctor.sh --update
If your system needs a proxy server to access the web, you can configure the proxy settings with
the following:
# export http_proxy="http://proxyuser:proxypass@proxyname:port"
# export http_proxy="http://sunproxy.sun.com:8080"
# ./OCDoctor.sh --update
Verify that your system is ready to accept the Ops Center Enterprise Controller or Proxy
Controller software before you proceed with the installation. This page describes the system
resources to check.
Run the Chapter 21, “OC Doctor,” utility to check requirements and to identify potential issues
before you install Ops Center. You can also run the utility after installation at any time to
identify problems such as missing Ops Center patches.
The Ops Center Doctor utility performs the following operations. You can perform the same
tasks manually.
■ “To Check the Operating System Release” on page 104
■ “To Check the Installed Software Group” on page 104
■ “To Check the Zone Identity” on page 104
■ “To Check the Available Disk Space” on page 105
■ “To Check Swap Space” on page 106
■ “To Verify the Amount of System Memory” on page 106
■ “To Verify the Amount of Shared Memory” on page 106
■ “To Verify the webservd User and Group” on page 107
■ “To Verify That an Alternate Administrative User Exists” on page 107
■ “Ops Center Users and Groups” on page 108
■ “To Verify the umask Value” on page 109
■ “To Verify the Locations of ssh Binaries” on page 109
■ “To Verify Correct IP Address Resolution” on page 110
■ “To Verify That /usr/local Is Writeable” on page 110
■ “To Verify the Date and Time” on page 110
■ “To Verify Online cryptosvc and gss Services” on page 111
■ “To Remove the SMClintl Package” on page 111
■ “To Verify Network Access to Required Web Sites” on page 111
■ “To Verify ssh Access for the root User” on page 114
■ “To Verify Network Port Access” on page 114
Before You Begin
# cat /etc/release
# cat /var/sadm/system/admin/CLUSTER
CLUSTER=SUNWCall
# zonename
global
# df -h
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
(output omitted)
Ops Center software, and the data that it stores, primarily consume space below the
/var/opt/sun/xvm and /opt directory structures. In this example, the /opt and
/var/opt/sun/xvm directories are located within the root (/) file system, which has 78 Gbytes
of space available. The install script checks for 2 Gbytes of space in /opt and 70 Gbytes of
space in /var/opt/sun/xvm.
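You can check the available space in both locations with a single command. For example:
# df -h /opt /var/opt/sun/xvm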
High availability (HA) configurations for Ops Center use transferable storage to hold the
/var/opt/sun/xvm directory structure within a separate file system. Refer to About High
Availability and “Configuring Storage for High Availability” on page 26 for more information
about HA configurations.
# swap -l
The values in the blocks and free columns are expressed in 512-byte blocks.
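You can display the shared memory limit for the user.root project with the prctl command. For example:
# prctl -n project.max-shm-memory -i project user.root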
project: 1: user.root
project.max-shm-memory
At least 500 MB of shared memory is required. If the privileged value is less than 500 MB, use
the following command to set it to 500 MB.
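For example, the following projmod command sets the privileged limit for the user.root project shown above to 500 MB:
# projmod -s -K "project.max-shm-memory=(privileged,500MB,deny)" user.root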
Examine the content of the /etc/passwd, /etc/shadow, and /etc/group files to confirm that
the webservd user and group exist. For example:
webservd:*LK*:::::::
webservd::80:
If the webservd user or group does not exist, create the missing user or group using the UID and
GID values listed in the example above.
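For example, the following commands create the group and user (a sketch, assuming the standard webservd UID and GID of 80):
# groupadd -g 80 webservd
# useradd -u 80 -g webservd -d / webservd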
# logins -l droot
This example system uses droot as the administrative user for Ops Center. You must create the
administrative user before you install Ops Center.
Users: svctag, allstart, scndb, scn, scncon, uce-sds, xvm
Groups: svctag, allstart, uce-sds
Ops Center creates these users and groups with the following UID and GID values:
# cat /etc/group
(output omitted)
uce-sds::98194050:
scndb::98194051:
jet::98194052:
# cat /etc/passwd
(output omitted)
scn:x:231796:3::/:/bin/sh
xvm:x:60:60::/:/bin/sh
scncon:x:231798:1::/:/bin/true
uce-sds:x:231799:98194050:UCE Engine:/opt/SUNWuce/server:/bin/sh
allstart:x:231801:1:AllStart User:/var/opt/sun/xvm/osp/data:/bin/sh
All user accounts have locked (*LK*) passwords, except the scncon user. A password is required
for the scncon user, but it has no login shell. If you must create the scncon user before installing
the software, you must enter the password that you want to use, in clear text, in the
/var/opt/sun/xvm/persistence/scn-satellite/satellite.properties file. Associate the
password with the scncon.password parameter in this file. For example:
scncon.password=2EzafaJE
# sh
# umask
0022
# ksh
# umask
022
# csh
<hostname># umask
22
For example:
# host system.domain
Verify that the /etc/hosts file contains the correct host name and IP address for your system.
For example:
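A typical /etc/hosts entry looks like the following (the address shown is a placeholder):
192.0.2.10   system.domain   system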
# df -h /usr/local
# ls -ld /usr/local
In this example, the /usr/local directory is stored in the root (/) file system, and is writeable
by the root user and group.
# date
If the date and time are not correct, reset them. See Troubleshooting for a description of an error
that might occur in the Enterprise Controller Configuration wizard if the date and time are not
set correctly.
You can use the svcadm command to enable these services if they are not online.
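For example, using the abbreviated service names:
# svcs cryptosvc gss
# svcadm enable cryptosvc gss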
# pkgrm SMClintl
(output omitted)
https://getupdates1.sun.com
https://inv-cs.sun.com
https://inventory.sun.com
https://a248.e.akamai.net
https://identity.sun.com
ftp://ftp.sunfreeware.com
The https://getupdates1.sun.com site should display a login authentication screen for the
Sun Update Connection Download Server. The https://inv-cs.sun.com and
https://inventory.sun.com sites should display the Sun Connection page.
For access to Red Hat Linux updates, verify that your system can access the following URLs:
https://www.redhat.com
http://rhn.redhat.com
https://rhn.redhat.com
https://download.rhn.redhat.com
For access to SUSE Linux updates, verify that your system can access the following URLs:
http://www.novell.com
https://www.novell.com
http://download.novell.com
https://you.novell.com
Use the wget command to verify that you can access the getupdates1.sun.com web site and
download a sample file.
1. If you use a proxy server to access the Internet, set the https_proxy environment variable to
point to the proxy server:
# export https_proxy="http://myproxy.company.com:8080"
In this example, [email protected] and password represent the SOA and SOA password
that you provide:
--http-user="[email protected]" --http-password="password"
--11:43:41-- https://getupdates1.sun.com/channels3/channels.xml
=> ‘/tmp/channels.xml’
Location: https://a248.e.akamai.net/f/248/21808/15m/sun.download.akamai.com/
21808/sc/channels3/channels.xml?AuthParam=1236019547_e9120d30e1ac62650c8f928
4dfe47663&TUrl=L0QdUQV8Z4i0fdED3QTP3SJDWA8FMyaJsHfIWf4X29kTWQpKEzIbwqFuyRPZ
&TicketId=3qfzk1SIPR9R&GroupName=SWUP&BHost=sdlc3h.sun.com&FilePath=
/sc/channels3/channels.xml&File=channels.xml [following]
--11:43:42-- https://a248.e.akamai.net/f/248/21808/15m/sun.download.akamai.com/
21808/sc/channels3/channels.xml?AuthParam=1236019547_e9120d30e1ac62650c8f9284
dfe47663&TUrl=L0QdUQV8Z4i0fdED3QTP3SJDWA8FMyaJsHfIWf4X29kTWQpKEzIbwqFuyRPZ
&TicketId=3qfzk1SIPR9R&GroupName=SWUP&BHost=sdlc3h.sun.com&FilePath=
/sc/channels3/channels.xml&File=channels.xml
=> ‘/tmp/channels.xml’
To verify ssh access for the root user, try using ssh to log in as root to the system. If that
attempt succeeds, no further action is necessary. If that attempt fails, check the value of the
PermitRootLogin parameter in the /etc/ssh/sshd_config file. If PermitRootLogin is set to
no, edit the /etc/ssh/sshd_config file, and change the PermitRootLogin setting to yes. Then
use the svcadm command to restart the svc:/network/ssh:default service. For example:
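# svcadm restart svc:/network/ssh:default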
Before you install Ops Center on an RHEL or OEL system, verify that the system conforms to
the recommendations described below. This page describes the system resources to check.
Run the Chapter 21, “OC Doctor,” utility to check requirements and to identify potential issues
before you install Ops Center. You can also run the utility after installation at any time to
identify problems such as missing Ops Center patches.
The Ops Center Doctor utility performs the following operations. You can perform the same
tasks manually:
■ “To Check the Operating System Release” on page 116
■ “To Check the Available Disk Space” on page 116
■ “To Verify the Amount of System Memory and Swap Space” on page 117
■ “To Verify the SELinux Setting” on page 117
■ “To Verify the umask Value” on page 118
■ “Ops Center Users and Groups” on page 119
■ “To Verify That Required Packages Are Installed” on page 120
■ “To Verify Correct IP Address Resolution” on page 121
■ “To Verify the Locations of ssh Binaries” on page 121
■ “To Verify That /usr/local Is Writeable” on page 121
■ “To Verify the Date and Time” on page 122
■ “To Verify Network Access to Required Web Sites” on page 122
■ “To Verify Network Port Access” on page 125
■ “Verifying kernel.shmall and kernel.shmmax Values” on page 125
Before You Begin
On a system that has a complete installation of Linux, use the following procedures to verify
that its resources meet the requirements for Ops Center installation.
These procedures assume that you are logged in as the root user on the system on which you
intend to install Enterprise Controller or Proxy Controller software.
# cat /etc/redhat-release
# df -h
/dev/mapper/VolGroup00-LogVol00
Ops Center software, and the data it stores, primarily consume space below the
/var/opt/sun/xvm and /opt directory structures. In this example, the /var/opt/sun/xvm and
/opt directories are located within the root (/) file system, which has 119 GBytes of space
available.
High availability (HA) configurations for Ops Center use transferable storage to hold the
/var/opt/sun/xvm directory structure within a separate file system. Refer to About High
Availability and “Configuring Storage for High Availability” on page 26 for more information
about HA configurations.
# free -m
You should have at least 6 GBytes of installed memory and swap space for Ops Center
Enterprise Controller installations, and at least 4 GBytes of installed memory and swap space
for Ops Center Proxy Controller installations. The value in the total column indicates the total
amount of installed memory or configured swap space.
You can also use the dmesg command to display the amount of memory installed. For example:
Memory: 4022900k/4063168k available (2043k kernel code, 39036k reserved, 846k data, 232k init, 3145664k highmem)
# sestatus
# cat /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
If the SELinux state is either enforcing or permissive, edit the /etc/selinux/config file and
change the SELINUX value to disabled. After making this change, reboot your system for the
change to take effect.
# sh
# umask
0022
# ksh
# umask
0022
# csh
# umask
22
# bash
# umask
0022
Check the umask value set in /etc/bashrc. The umask value must be set to 022, even for
non-root users. For example, if the file contains the following lines, change the umask 002
value to 022:
umask 002
umask 022
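You can list these settings with the grep command. For example:
# grep umask /etc/bashrc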
Users: svctag, allstart, scndb, scn, scncon, uce-sds, xvm
Groups: svctag, allstart, uce-sds
Ops Center creates these users and groups with the following UID and GID values:
# cat /etc/group
(output omitted)
uce-sds::98194050:
scndb::98194051:
jet::98194052:
# cat /etc/passwd
(output omitted)
scn:x:231796:3::/:/bin/sh
xvm:x:60:60::/:/bin/sh
scncon:x:231798:1::/:/bin/true
uce-sds:x:231799:98194050:UCE Engine:/opt/SUNWuce/server:/bin/sh
allstart:x:231801:1:AllStart User:/var/opt/sun/xvm/osp/data:/bin/sh
All user accounts have locked passwords, except the scncon user. A password is required for the
scncon user, but it has no login shell. If you must create the scncon user before installing the
software, you must enter the password that you want to use, in clear text, in the
/var/opt/sun/xvm/persistence/scn-satellite/satellite.properties file. Associate the
password with the scncon.password parameter in this file. For example:
scncon.password=2EzafaJE
# rpm -q dhcp
dhcp-3.0.5-3.el5
# host x4200-brm-13
For example:
# df -h /usr/local
/dev/mapper/VolGroup00-LogVol00
# ls -ld /usr/local
In this example, the /usr/local directory is stored in the root (/) file system and is writeable by
the root user and group.
# date
If the date and time are not correct, reset them. See Troubleshooting for a description of an error
that might occur in the Enterprise Controller Configuration wizard if the date and time are not
set correctly.
https://getupdates1.sun.com
https://inv-cs.sun.com
https://inventory.sun.com
https://a248.e.akamai.net
https://identity.sun.com
ftp://ftp.sunfreeware.com
The https://getupdates1.sun.com site should display a login authentication screen for the
Sun Update Connection Download Server. The https://inv-cs.sun.com and
https://inventory.sun.com sites should display the Sun Connection page.
For access to Red Hat Linux updates, verify that your system can access the following URLs:
https://www.redhat.com
http://rhn.redhat.com
https://rhn.redhat.com
https://download.rhn.redhat.com
For access to SUSE Linux updates, verify that your system can access the following URLs:
http://www.novell.com
https://www.novell.com
http://download.novell.com
https://you.novell.com
Use the wget command to verify that you can access the getupdates1.sun.com web site and
download a sample file.
1. If you use a proxy server to access the Internet, set the https_proxy environment variable to
point to the proxy server. For example:
# export https_proxy="http://myproxy.company.com:8080"
2. Use the wget command to download a sample file. The wget command is stored by default in
/usr/bin on Linux systems. In this example, [email protected] and password represent the SOA
user name and password that you must provide:
# /usr/bin/wget -O /tmp/channels.xml --http-user="[email protected]" --http-password="password" https://getupdates1.sun.com/channels3/channels.xml
--12:07:40-- https://getupdates1.sun.com/channels3/channels.xml
Location: https://a248.e.akamai.net/f/248/21808/15m/sun.download.akamai.com/
21808/sc/channels3/channels.xml?AuthParam=1236020624_01b507faf428706c2c0b14
a7462004e4&TUrl=L0QdUQV8Z4i0fdED3QTP3SJDWA8FMyaJsHfIWf4X29kTWQpKEzIbwqFuyRPZ
&TicketId=3qfzk1SANhtW&GroupName=SWUP&BHost=sdlc3h.sun.com&FilePath=
/sc/channels3/channels.xml&File=channels.xml [following]
--12:07:41-- https://a248.e.akamai.net/f/248/21808/15m/sun.download.akamai.com/
21808/sc/channels3/channels.xml?AuthParam=1236020624_01b507faf428706c2c0b14a746
2004e4&TUrl=L0QdUQV8Z4i0fdED3QTP3SJDWA8FMyaJsHfIWf4X29kTWQpKEzIbwqFuyRPZ
&TicketId=3qfzk1SANhtW&GroupName=SWUP&BHost=sdlc3h.sun.com&FilePath=
/sc/channels3/channels.xml&File=channels.xml
The kernel.shmall and kernel.shmmax kernel parameters must be set to at least the following values:
kernel.shmall 268435456
kernel.shmmax 4294967295
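1. Display the current values with the sysctl command. For example:
[root@x4200-2 ~]# sysctl -a | grep shm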
vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 33554432
[root@x4200-2 ~]#
2. If the values for kernel.shmall and kernel.shmmax are lower than the values listed above,
edit the /etc/sysctl.conf file and set the variables equal to the values listed above.
[root@x4200-2 ~]# vi /etc/sysctl.conf
(output omitted)
kernel.shmmax = 4294967295
(output omitted)
kernel.shmall = 268435456
3. Reboot the system.
[root@x4200-2 ~]# reboot
Verify that the systems that you intend to manage are ready for Ops Center agent software
installation. These required resources are typically available in systems that are running current
versions of operating system software. Review the list of required resources to determine if it is
likely that any resource is missing from your systems.
This page describes the system resources to check for both Solaris and Linux systems.
Regardless of the operating system supporting the Enterprise Controller, both Linux and
Solaris systems can be managed.
Sun Support Services might have tools available that automate verifying many of the system
requirements and resources listed here. Check with Sun Support Services for the following
items:
■ Pre-installation checklist
■ Pre-installation check script
■ Patches to apply to the Ops Center software
■ Updated Ops Center agent bundles
■ Advice about specific patch dependencies that relate to Ops Center agent installation
Solaris OS: To Verify Required Packages and Devices
SUNWadmap
SUNWbash
SUNWctpls
SUNWdtcor
SUNWesu
SUNWgzip
SUNWlibC
SUNWlibms
SUNWloc
SUNWmfrun
SUNWswmt
SUNWtoo
SUNWxcu4
SUNWxwdv
SUNWxwfnt
SUNWxwice
SUNWxwplt
SUNWxwrtl
SUNWzip
SUNWzlib
/dev/random
/dev/urandom
SUNWlmsx
SUNWnisr
SUNWnisu
SUNWtltk
SUNWxildh
SUNWxilow
SUNWxilrl
SUNWzlibx
SUNWcpp
SUNWgcmn
SUNWlibpopt
SUNWlmsx
SUNWlxml
SUNWpl5u
SUNWpl5v
SUNWzlibx
SUNWbzip
SUNWcpp
SUNWgcmn
SUNWlibmsr
SUNWlibpopt
SUNWlxml
SUNWperl584core
SUNWperl584usr
SUNWxwplr
Check Solaris 8 systems in particular for the SUNWbash package and the /dev/random and
/dev/urandom devices. The patch 112438-03 installs these devices.
You can use the pkginfo command to verify that a package is installed. For example:
# pkginfo SUNWadmfr
Linux OS: Ops Center Agent installation on Linux systems requires the 32-bit versions of the
following packages to be installed:
coreutils
file
gettext
grep
tar
unzip
xinetd
You can use the rpm -qf file command to find the name of the package that installed a file.
You can use the rpm -q package command to verify that a specific package has been installed.
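For example, the following illustrative commands check two of the packages listed above:
# rpm -qf /usr/bin/unzip
# rpm -q xinetd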
Solaris OS: Use the pkginfo command to verify that the SUNWsshu package is installed on
Solaris systems. For example:
# pkginfo SUNWsshu
Linux OS: Use the rpm command to check for ssh installation. For example:
# which ssh
/usr/bin/ssh
# rpm -qf /usr/bin/ssh
openssh-clients-4.3p2-16.el5
Patches 122660-07 and 122661-07 are required on systems with non-global zones installed.
These patches must be installed in single user mode. Because these patches depend on kernel
patch 118833-36 or 118855-36, a reboot is required after you install them. Plan for the time
required to take the affected systems offline to install these patches. Systems that are running at
least Solaris 10 8/07 already have these patches applied.
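The following sequence is a sketch of a typical installation of these patches; the /var/tmp paths are placeholders for the location where you unpacked the patches, and each patch README remains the authoritative procedure:
# init S
# patchadd /var/tmp/122660-07
# patchadd /var/tmp/122661-07
# init 6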
1. Use the stclient -x command to view the service tag registry. For example:
# stclient -x
(output omitted)
<service_tag>
<instance_urn>urn:st:c76d9a11-f64b-418b-e9dc-a2fb18e7b76e</instance_urn>
<product_version>10</product_version>
<product_urn>urn:uuid:5005588c-36f3-11d6-9cec-fc96f718e113</product_urn>
<product_parent_urn>urn:uuid:596ffcfa-63d5-11d7-9886-ac816a682f92</product_parent_urn>
<product_defined_inst_id/>
<product_vendor>Sun Microsystems</product_vendor>
<platform_arch>sparc</platform_arch>
<container>global</container>
<source>SUNWstosreg</source>
<installer_uid>95</installer_uid>
</service_tag>
</registry>
2. Compare the instance_urn values on the systems that were installed using Solaris flash
archives, and determine if duplicate URNs exist.
If the instance_urn for the Solaris operating system matches the instance_urn from
another system, you can remove and re-generate the service tag registry to correct the
problem.
3. To remove the service tag registry, remove the
/var/sadm/servicetag/registry/servicetag.xml file. For example:
# rm /var/sadm/servicetag/registry/servicetag.xml
4. Use the ls command to verify that the service tag registry file has been re-created. For example:
# ls /var/sadm/servicetag/registry/servicetag.xml
/var/sadm/servicetag/registry/servicetag.xml
5. Use the stclient -x command to verify that the new instance_urn values are unique. For
example:
# stclient -x
<service_tag>
<instance_urn>urn:st:cbf9acfb-0c48-c248-fb07-9816382ceb29</instance_urn>
<product_version>10</product_version>
<product_urn>urn:uuid:5005588c-36f3-11d6-9cec-fc96f718e113</product_urn>
<product_parent_urn>urn:uuid:596ffcfa-63d5-11d7-9886-ac816a682f92</product_parent_urn>
<product_defined_inst_id/>
<product_vendor>Sun Microsystems</product_vendor>
<platform_arch>sparc</platform_arch>
<container>global</container>
<source>SUNWstosreg</source>
<installer_uid>95</installer_uid>
</service_tag>
</registry>
For systems running Solaris 10 versions earlier than Solaris 10 6/06: Agent provisioning installs
the patchadd patch 119254-52 or 119255-52. These patches depend on patches 120900 and
120901 or 121133 and 121334 respectively, which are incorporated into the Solaris OS starting
with Solaris 10 6/06. The patches 120900, 120901, 121133, and 121334 require a reboot to
ensure proper installation. Plan for the down time required to install these patches, if necessary.
The patches 119254-63 and 119255-63 correct issues with Solaris 10 single user mode
operations. Before you provision an Ops Center agent, verify that no IDR patches have been
installed that address Solaris 10 single user mode operations.
Check with Sun Support Services for updated Ops Center agent bundles.
# sh
# umask
0022
# ksh
# umask
022
# csh
<host_name># umask
22
If you are a qualified Sun customer with an engaged Sun sales support representative, a Sun
field or systems engineer can provide access to the Ops Center software for you to download.
The software license agreement for Ops Center is presented as part of the download process.
You must read and accept the software license agreement before you can use Ops Center.
Vendor Download Sites
■ https://www.redhat.com
■ http://rhn.redhat.com
■ https://rhn.redhat.com
■ http://download.rhn.redhat.com
■ https://content-web.rhn.redhat.com
■ https://e2595.c.akamaiedge.net
Terminology
Agent
The agent software communicates with the Enterprise Controller. The agent is installed
automatically when an asset is discovered, making the asset a managed asset.
Appliance
An appliance is a pre-installed and pre-configured application and operating system
environment. Using appliances eliminates the installation, configuration, and maintenance
costs associated with running complex stacks of software. Appliance images of the format
VMDK are supported in Ops Center.
Assets
Anything that Ops Center can discover and manage. Hardware, software, operating systems,
and hypervisors are all assets.
Automatic Discovery
Automatic discovery is a discovery method that searches for Service Tags on subnets associated
with the Proxy Controllers.
Baseline
A baseline, or Solaris baseline, is a dated collection of Solaris patches, patch metadata, and tools.
Sun releases Solaris baselines on a monthly basis. You can modify a baseline to create a custom
patch set by the use of black lists and white lists.
Black List
A black list is a list of Solaris OS patch IDs that you never want to be applied to a host. The black
list is used when you are using a baseline to update a Solaris OS.
Boot environment
A collection of mandatory file systems (disk slices and mount points) that are critical to the
operation of the Solaris OS. These disk slices might be on the same disk or distributed across
multiple disks.
Channel
A channel is an OS distribution, such as Solaris 10 5/09 or Red Hat Enterprise Linux 5.3.
Connected Mode
Connected mode is the default connection mode for Ops Center. With this mode, patch data is
regularly downloaded through an Internet connection.
Control Domain
The control domain is a domain that is created when Logical Domains is installed. The control
domain allows you to create and manage guest domains and allocate virtual resources to the
guest domains.
Custom Discovery
Custom discovery is a discovery method that uses user-specified targets (IP addresses or
subnets) and discovery protocols.
Declare Assets
The Declare Assets option allows you to add assets to Ops Center without performing an
Automatic Discovery or Custom Discovery.
Disconnected Mode
Disconnected mode is the alternate connection mode for Ops Center. Instead of relying on an
Internet connection for updates, patch data is user supplied.
Domain
A domain is created when Logical Domains is installed. See Control Domain.
Enterprise Controller
The Enterprise Controller is the top portion of the Ops Center software. The Enterprise
Controller hosts the user interface and communicates with the Sun Datacenter.
Global zone
In Solaris Containers, the global zone is both the default zone for the system and the zone used
for system-wide administrative control. The global zone is the only zone from which a
non-global zone can be configured, installed, managed, or uninstalled. Administration of the
system infrastructure, such as physical devices, routing, or dynamic reconfiguration (DR), is
only possible in the global zone. Appropriately privileged processes running in the global zone
can access objects associated with other zones. See also Solaris Containers and Non-Global
Zones.
Group
A group consists of user-defined assets. Assets can be organized into a group by any number of
properties, such as type or location. A group can include other groups.
Guest
Guests are virtual machines of a virtualization host such as a Logical Domain host. The control
domain is a privileged domain (Dom0) and the virtual machines are unprivileged domains
(domUs). An unprivileged domain is a domain with no special hardware access.
Host name
The name by which a system is known to other systems on a network. This name must be
unique among all the systems within a particular domain (usually, this means within any single
organization). A host name can be any combination of letters, numbers, and minus signs (-),
but it cannot begin or end with a minus sign.
Hypervisor
A hypervisor is the software that allows multiple virtual machines to be multiplexed on a single
physical machine. The hypervisor code runs at a higher privilege level than the supervisor code
of its guest operating systems to manage use of the underlying hardware resources by multiple
supervisor kernels.
JMX
Java Management Extensions (JMX) technology provides the tools for building distributed,
modular, and dynamic solutions for managing and monitoring devices, applications, and
networks. The JMX API defines the notion of MBeans, or manageable objects, which expose
attributes and operations in a way that allows remote management applications to access them.
The public API in Ops Center can be accessed through JMX-Remoting.
Library
A library is a collection of virtual machine images and disk images that are located under the
same file system. When a virtual pool is created, one or more libraries are assigned to the virtual
pool. Virtual pools can share the same libraries.
Logical Domain
Logical Domain technology is part of a suite of methodologies for consolidation and resource
management for SPARC systems. This technology allows you to allocate a system's various
resources, such as memory, CPUs, and devices, into logical groupings and create multiple
discrete systems. A Logical Domain is a full virtual machine, with a set of resources, such as a
boot environment, CPU, memory, I/O devices, and its own operating system.
Network
A network allows guests to communicate with each other or with the external world (that is, the
Internet). When a virtual pool is created, one or more networks are assigned to the virtual pool.
Virtual pools can share the same networks.
Non-global zone
A virtualized operating system environment created within a single instance of the Solaris OS.
One or more applications can run in a non-global zone without interacting with the rest of the
system. Non-global zones are also called zones. See also Solaris Containers and Global Zone.
Policy
A policy defines how a job is performed and sets the automation level of the job. A policy file is
similar to a response file. If there is a conflict between a profile and policy, the profile overrides
the policy.
Profile
A profile defines the configuration of components for a specific type of system. By using a
profile, you can define what is allowed, and not allowed, to be installed on a system. If there is a
conflict between a profile and policy, the profile overrides the policy.
Proxy
The proxy is the mid-level portion of the Ops Center software. The proxy pulls jobs from the
Satellite Server and directs their execution.
Root file system
The top-level file system, on which all other file systems are mounted and which is never
unmounted. The root (/) file system contains the directories and files critical for system
operation, such as the kernel, device drivers, and the programs that are used to start the system.
Root directory
The top-level directory from which all other directories stem.
Solaris Containers
Solaris Containers, sometimes referred to as Solaris Zones, are a software partitioning
technology used to virtualize operating system services and provide an isolated and secure
environment for running applications. When you create a non-global zone, you produce an
application execution environment in which processes are isolated from all other zones. This
isolation prevents processes that are running in a zone from monitoring or affecting processes
that are running in any other zone. See also Global Zone and Non-Global Zone.
Static Route
A static route specifies the route that must be taken by the network for external access. You
might define a default gateway for the network; however, this default gateway might not be able
to reach a given subnet. In this case, you need to add a static route for this specific subnet.
SCCM
Microsoft's System Center Configuration Manager (SCCM) updates Windows operating
systems.
Unclassified assets
Assets that appear in the Unclassified Assets tab. The hardware and software are discovered, but
there is not enough information to manage them. Typically, assets are placed in this category
when you run an Automatic discovery job or if you run a Custom Discovery job that finds
service tags, but fails on protocol-based authentication. To move assets to the Available to be
Managed or Managed Assets tabs, you must run a Custom Discovery or Declare Assets job.
Virtual Pool
A virtual pool is a resource pool of virtualization hosts that share compatible chip architecture,
which facilitates actions such as moving guests between virtualization host instances. Members
of the virtual pool have access to the same network and storage library resources. Guests can
access the images contained in the virtual pool's library. Several virtual pools can share the same
network and library storage resources.
Virtualization Host
A virtualization host is a system that runs a hypervisor.
White List
A white list is a list of Solaris OS patch IDs that you always want to be applied to a host. The
white list is used when you are using a baseline to update a Solaris OS.
WS-Management
Web Services for Management (WS-MAN) is a specification for managing servers, devices, and
applications using web services standards. It provides a common way for systems to access and
exchange management information across the entire IT infrastructure. The public API in Ops
Center can be accessed through WS-Management.
Zone
A zone, also called a non-global zone, is a virtualized operating system environment created
within a single instance of the Solaris OS. One or more applications can run in a zone without
interacting with the rest of the system. See also Solaris Containers, Non-Global Zone, and
Global Zone.
ZFS
A Solaris OS file system that uses storage pools to manage physical storage.