Veritas InfoScale™ 8.0
Installation Guide - Linux
Last updated: 2022-04-19
Legal Notice
Copyright © 2022 Veritas Technologies LLC. All rights reserved.
Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies
LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third-party (“Third-Party Programs”). Some of the Third-Party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the third-party legal notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Veritas Technologies LLC
2625 Augustine Drive
Santa Clara, CA 95054
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan: [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
[email protected]
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
Storage Foundation Cluster File System High Availability (SFCFSHA)
■ Licensing notes
Installation without a license does not eliminate the need to obtain a license.
The administrator and company representatives must ensure that a server or
cluster is entitled to the license level for the products installed. Veritas reserves
the right to ensure entitlement and compliance through auditing.
See “Registering Veritas InfoScale using keyless license” on page 19.
■ Veritas collects licensing and platform related information from InfoScale products
as part of the Veritas Product Improvement Program. The information collected
helps identify how customers deploy and use the product, and enables Veritas
to manage customer licenses more efficiently. See “About telemetry data
collection in InfoScale” on page 14.
For more information about the licensing process, visit the Veritas licensing
Support website:
www.veritas.com/licensing/process
Licensing The following data points are collected:
■ Product ID
■ Serial number
■ Serial ID
■ License meter
■ Fulfillment ID
■ Platform
■ Version
■ SKU type
■ VXKEYLESS
■ License type
■ SKU
By default, the Veritas Telemetry Collector will collect telemetry data every Tuesday
at 1:00 A.M. as per the local system time. The time and interval of data collection
can be customized by the user if required.
You can configure the Veritas Telemetry Collector while installing or upgrading the
product. See “Installing Veritas InfoScale using the installer” on page 55. You can
also manage the Veritas Telemetry Collector on each of your servers by using the
/opt/VRTSvlic/tele/bin/TelemetryCollector command. For more information,
see “Commands to manage the Veritas telemetry collector on your server”
on page 85.
Configure the firewall policy such that the ports required for telemetry data collection
are not blocked. Refer to your respective firewall or OS vendor documents for the
required configuration.
Note: Reboot the server after uninstalling the product to ensure that all services
related to the Veritas Telemetry Collector are stopped successfully.
Licensing notes
Review the following licensing notes before you install or upgrade the product.
■ If you use a keyless license option, you must configure Veritas InfoScale
Operations Manager within two months of product installation and add the node
as a managed host to the Veritas InfoScale Operations Manager Management
Server. Failing this, a warning message for non-compliance is displayed
periodically.
Note: The license key file must not be saved in the root directory (/) or the
default license directory on the local host (/etc/vx/licenses/lic). You can
save the license key file inside any other directory on the local host.
■ You can manage the license keys using the vxlicinstupgrade utility.
See “About managing InfoScale licenses” on page 21.
■ Before upgrading the product, review the licensing details and back up the older
license key. If the upgrade fails for some reason, you can temporarily revert to
the older product using the older license key to avoid any application downtime.
■ You can use the license assigned for a higher Stock Keeping Unit (SKU) to install
the lower SKUs.
For example, if you have procured a license that is assigned for InfoScale
Enterprise, you can use the license for installing any of the following products:
■ InfoScale Foundation
■ InfoScale Storage
■ InfoScale Availability
The following table provides details about the license SKUs and the
corresponding products that can be installed:
License SKU              Foundation  Storage  Availability  Enterprise
InfoScale Foundation     ✓           X        X             X
InfoScale Storage        ✓           ✓        X             X
InfoScale Availability   X           X        ✓             X
InfoScale Enterprise     ✓           ✓        ✓             ✓
Note: At any given point in time you can install only one product.
Note: The license key file must not be saved in the root directory (/) or the default
license directory on the local host (/etc/vx/licenses/lic). You can save the
license key file inside any other directory on the local host.
You can register your permanent license key file in the following ways:
Registering Veritas InfoScale using permanent license key file
Using the installer
You can register your InfoScale product using a permanent license key file
during the installation process:
./installer
Alternatively, you can register your InfoScale product using the installer
menu:
./installer
Manual
If you are performing a fresh installation, run the following commands on
each node:
# cd /opt/VRTS/bin
Even though other products are included on the enclosed software discs, you can
only use the Veritas InfoScale software products for which you have purchased a
license.
Using the installer
You can enable keyless licensing for InfoScale during the installation process:
./installer
Alternatively, you can enable keyless licensing using the installer menu:
./installer
1 Set the PATH to include the licensing utilities:
# export PATH=$PATH:/opt/VRTSvlic/bin
2 View the keyless product code for the product you want to install:
# vxkeyless displayall
Warning: Within 60 days of choosing this option, you must install a valid license
key file corresponding to the license level entitled, or continue with keyless licensing
by managing the systems with Veritas InfoScale Operations Manager. If you fail to
comply with the above terms, continuing to use the Veritas InfoScale product is a
violation of your End User License Agreement, and results in warning messages.
For more information about keyless licensing, see the following URL:
http://www.veritas.com/community/blogs/introducing-keyless-feature-
enablement-storage-foundation-ha-51
For more information about using keyless licensing and to download Veritas
InfoScale Operations Manager, see the following URL:
www.veritas.com/product/storage-management/infoscale-operations-manager
# cd /opt/VRTS/bin
where <key file path> is the absolute path of the .slf
license key file saved on the current node.
Example:
/downloads/InfoScale_keys/XYZ.slf
Using the vxkeyless utility
To add or update a keyless license, perform the following steps:
1 Set the PATH to include the licensing utilities:
# export PATH=$PATH:/opt/VRTSvlic/bin
2 View the keyless product code for the product you want to install:
# vxkeyless displayall
■ If the current key is keyless and the newly entered license key file is a permanent
license of the same product
Example: If the 8.0 Foundation Keyless license key is already installed on a
system and the user tries to install 8.0 Foundation permanent license key file,
then the vxlicinstupgrade utility installs the new license at
/etc/vx/licenses/lic and the 8.0 Foundation Keyless key is deleted.
Note: When registering license key files manually during upgrade, you have to use
the vxlicinstupgrade command. When registering keys using the installer script,
the same procedures are performed automatically.
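As an illustrative sketch of the manual registration that the note above refers to, the utility can be run from the licensing binaries directory with the key file path shown in the earlier example (the exact path is illustrative; substitute your own .slf file location):

```
# cd /opt/VRTS/bin
# ./vxlicinstupgrade -k /downloads/InfoScale_keys/XYZ.slf
```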
Generating license report with vxlicrep command
-g default report
-v print version
■ Hardware requirements
https://www.veritas.com/support/en_US/doc/infoscale_scl_80_lin
Table 3-2 lists the minimum disk space requirements for each product on SLES
when the /opt, /root, /var, and /bin directories are created on the same disk.
Hardware requirements
This section lists the hardware requirements for Veritas InfoScale.
Table 3-3 lists the hardware requirements for each component in Veritas InfoScale.
Component Requirement
Storage Foundation (SF) See “SF and SFHA hardware requirements” on page 27.
Storage Foundation for
High Availability (SFHA)
Storage Foundation See “SFCFS and SFCFSHA hardware requirements” on page 27.
Cluster File System
(SFCFS) and Storage
Foundation Cluster File
System for High
Availability (SFCFSHA)
Storage Foundation for See “SF Oracle RAC and SF Sybase CE hardware requirements”
Oracle RAC (SF Oracle on page 28.
RAC)
For additional information, see the hardware compatibility list (HCL) at:
https://www.veritas.com/content/support/en_US/doc/infoscale_hcl_8x_unix
Requirement Description
Node All nodes in a Cluster File System must have the same
operating system version.
Shared storage Shared storage can be one or more shared disks or a disk
array connected either directly to the nodes of the cluster or
through a Fibre Channel Switch. Nodes can also have
non-shared or local devices on a local I/O channel. It is
advisable to have /, /usr, /var and other system partitions
on local devices.
Fibre Channel or iSCSI Each node in the cluster must have a Fibre Channel I/O
storage channel or iSCSI storage to access shared storage devices.
The primary component of the Fibre Channel fabric is the
Fibre Channel switch.
Cluster platforms There are several hardware platforms that can function as
nodes in a Veritas InfoScale cluster.
For a cluster to work correctly, all nodes must have the same
time. If you are not running the Network Time Protocol (NTP)
daemon, make sure the time on all the systems comprising
your cluster is synchronized.
SAS or FCoE Each node in the cluster must have an SAS or FCoE I/O
channel to access shared storage devices. The primary
components of the SAS or Fibre Channel over Ethernet
(FCoE) fabric are the switches and HBAs.
Item Description
Disks All shared storage disks support SCSI-3 Persistent Reservations (PR).
Note: The coordinator disk does not store data, so configure the disk
as the smallest possible LUN on a disk array to avoid wasting space.
The minimum size required for a coordinator disk is 128 MB.
Swap space For SF Oracle RAC: See the Oracle Metalink document: 169706.1
Oracle RAC requires that all nodes use the IP addresses from the same
subnet.
Fibre Channel or At least one additional SCSI or Fibre Channel Host Bus Adapter per
SCSI host bus system for shared data disks.
adapters
Item Description
DVD drive One drive in a system that can communicate to all the nodes in the
cluster.
Item Description
The SFHA I/O fencing feature requires that all data and coordinator
disks support SCSI-3 Persistent Reservations (PR).
Network Interface In addition to the built-in public NIC, VCS requires at least one more
Cards (NICs) NIC per system. Veritas recommends two additional NICs.
Veritas recommends that you turn off the spanning tree on the LLT
switches, and set port-fast on.
Fibre Channel or Typical VCS configuration requires at least one SCSI or Fibre Channel
SCSI host bus Host Bus Adapter per system for shared data disks.
adapters
■ Planning the installation setup for SF Oracle RAC and SF Sybase CE systems
3 Unzip the patch tar file. For example, run the following command:
# gunzip cpi-8.0P2-patches.tar.gz
where sys1, sys2 are the names of the nodes in the cluster.
The program proceeds in a non-interactive mode, examining the systems for
licenses, RPMs, disk space, and system-to-system communications. The
program displays the results of the check and saves them in a log file. The
location of the log file is displayed at the end of the precheck process.
You need to configure at least two independent networks between the cluster nodes
with a network switch for each network. You can also interconnect multiple layer 2
switches for advanced failure protection. Such connections for LLT are called
cross-links.
Figure 4-2 shows a private network configuration with crossed links between the
network switches.
4 Test the network connections. Temporarily assign network addresses and use
telnet or ping to verify communications.
LLT uses its own protocol, and does not use TCP/IP. So, you must ensure that
the private network connections are used only for LLT communication and not
for TCP/IP traffic. To verify this requirement, unplumb and unconfigure any
temporary IP addresses that are configured on the network interfaces.
The installer configures the private network in the cluster during configuration.
You can also manually configure LLT.
5 In case of LLT configured over UDP, ensure that the firewall or any other
security measure is properly configured and all the UDP ports for the LLT high
priority links are enabled over those measures.
For example, you must enable network ports 50000 through 50006 for two high
priority links, ports 50000 through 50007 for three high priority links, and so on
up to eight high priority links. These examples are based on the default port
number 50000. If the default port number in your environment is different, use
the corresponding port range. You can find the default port number mentioned
in /etc/llttab.
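As a sketch, the port range can be derived from the pattern in the examples above (2 links → 50000-50006, 3 links → 50000-50007, base port 50000), and the corresponding firewall commands generated for review. The use of firewalld and the link count of 2 are assumptions; adapt both to your environment:

```shell
# Derive the upper UDP port for N LLT high-priority links from the guide's
# examples: 2 links -> 50006, 3 links -> 50007 (default base port 50000).
links=2
max_port=$((50000 + links + 4))
# Print the firewalld commands that would open the range (review, then run
# as root on each node; assumption: firewalld is the active firewall).
for port in $(seq 50000 "$max_port"); do
  echo "firewall-cmd --permanent --add-port=${port}/udp"
done
echo "firewall-cmd --reload"
```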
Guidelines for setting the maximum transmission unit (MTU) for LLT
interconnects in Flexible Storage Sharing (FSS) environments
Review the following guidelines for setting the MTU for LLT interconnects in FSS
environments:
■ Set the maximum transmission unit (MTU) to the highest value (typically 9000)
supported by the NICs when LLT (both high priority and low priority links) is
configured over Ethernet or UDP. Ensure that the switch is also set to 9000
MTU.
Note: MTU setting is not required for LLT over RDMA configurations.
■ For virtual NICs, all the components—the virtual NIC, the corresponding physical
NIC, and the virtual switch—must be set to 9000 MTU.
■ If a higher MTU cannot be configured on the public link (because of restrictions
on other components such as a public switch), do not configure the public link
in LLT. LLT uses the lowest of the MTU that is configured among all high priority
and low priority links.
12 Verify that you can view the shared disk using the fdisk command.
3 Verify that the system detects the Fibre Channel disks properly.
4 Create volumes. Format the shared disk and create required partitions on it
and perform the following:
■ Identify your shared disk name. If you have two internal SCSI hard disks,
your shared disk is /dev/sdc.
Identify whether the shared disk is sdc, sdb, and so on.
■ Type the following command:
# fdisk /dev/shareddiskname
# fdisk /dev/sdc
Where the name of the disk group is dg, the name of the volume is vol01,
and the file system type is vxfs.
5 Repeat step 2 and step 3 for all nodes in the clusters that require connections
with Fibre Channel.
6 Power off this cluster system.
7 Connect the same disks to the next cluster system.
8 Turn on the power for the second system.
9 Verify that the second system can see the disk names correctly—the disk
names should be the same.
■ The vxfenconfig thread in the vxfen configuration path waits for GAB to seed.
■ The vxfenswap thread in the online coordinator disks replacement path waits
for the snapshot of peer nodes of the new coordinator disks.
To disable the kernel.hung_task_panic tunable:
■ Set the kernel.hung_task_panic tunable to zero (0) in the /etc/sysctl.conf
file. This step ensures that the change is persistent across node restarts.
■ Run the command on each node.
# sysctl -w kernel.hung_task_panic=0
To verify the kernel.hung_task_panic tunable value, run the following command:
# sysctl -a | grep hung_task_panic
the surviving nodes first, followed by Oracle Clusterware. The CSS miss-count
value indicates the amount of time Oracle Clusterware waits before evicting
another node from the cluster, when it fails to respond across the interconnect.
For more information, see the Oracle Metalink document: 782148.1
Note: The private IP addresses of all nodes that are on the same physical network
must be in the same IP subnet.
Note: The PrivNIC and MultiPrivNIC agents are no longer supported in Oracle
RAC 11.2.0.2 and later versions for managing cluster interconnects.
For 11.2.0.2 and later versions, Veritas recommends the use of alternative
solutions such as bonded NIC interfaces or Oracle High Availability IP (HAIP).
■ Configure Oracle Cache Fusion traffic to take place through the private network.
Veritas also recommends that all UDP cache-fusion links be LLT links.
Oracle database clients use the public network for database services. Whenever
there is a node failure or network failure, the client fails over the connection, for
both existing and new connections, to the surviving node in the cluster with
which it is able to connect. Client failover occurs as a result of Oracle Fast
Application Notification, VIP failover and client connection TCP timeout. It is
strongly recommended not to send Oracle Cache Fusion traffic through the
public network.
■ Use NIC bonding to provide redundancy for public networks so that Oracle RAC
can fail over virtual IP addresses if there is a public link failure.
Table 4-1 High availability solutions for Oracle RAC private network
Options Description
Using link aggregation/NIC bonding for Oracle Clusterware
Use a native NIC bonding solution to provide redundancy in case of NIC failures.
Make sure that a link configured under an aggregated link or NIC bond is not
configured as a separate LLT link.
When LLT is configured over a bonded interface, do one of the following steps
to prevent GAB from reporting jeopardy membership:
set-dbg-minlinks 2
Using HAIP Starting with Oracle RAC 11.2.0.2, Oracle introduced the High
Availability IP (HAIP) feature for supporting IP address failover. The
purpose of HAIP is to perform load balancing across all active
interconnect interfaces and fail over existing non-responsive interfaces
to available interfaces. HAIP has the ability to activate a maximum of
four private interconnect connections. These private network adapters
can be configured during the installation of Oracle Grid Infrastructure
or after the installation using the oifcfg utility.
Table 4-2 Type of storage required for SF Oracle RAC and SF Sybase CE
Figure 4-3 OCR and voting disk storage configuration for external
redundancy
Option 1: OCR and voting disk on CFS with two-way mirroring — the files
/ocrvote/ocr and /ocrvote/vote reside on ocrvotevol, a CVM volume mirrored
on Disk 1 and Disk 2.
Option 2: OCR and voting disk on CVM raw volumes with two-way mirroring —
separate volumes ocrvol and votevol, each a CVM volume mirrored on Disk 1
and Disk 2.
■ If you want to place OCR and voting disk on a clustered file system (option 1),
you need to have two separate files for OCR and voting information respectively
on CFS mounted on a CVM mirrored volume.
■ If you want to place OCR and voting disk on ASM disk groups that use CVM
raw volumes (option 2), you need to use two CVM mirrored volumes for
configuring OCR and voting disk on these volumes.
For both option 1 and option 2:
■ The option External Redundancy must be selected at the time of installing
Oracle Clusterware/Grid Infrastructure.
■ The installer needs at least two LUNs for creating the OCR and voting disk
storage.
See the Oracle RAC documentation for Oracle RAC's recommendation on the
required disk space for OCR and voting disk.
Figure 4-4 OCR and voting disk storage configuration for normal redundancy
[The figure shows two sets of mirrored volumes: Vol1 and Vol2 on Disk 1 and
Disk 2, and Vol1, Vol2, and Vol3 on Disk 1, Disk 2, and Disk 3.]
The OCR and voting disk files exist on separate cluster file systems.
Configure the storage as follows:
■ Create separate file systems for OCR and the OCR mirror.
■ Create separate file systems for a minimum of 3 voting disks for redundancy.
■ The option Normal Redundancy must be selected at the time of installing
Oracle Clusterware/Grid Infrastructure.
Planning the storage for Oracle RAC binaries and data files
The Oracle RAC binaries can be stored on local storage or on shared storage,
based on your high availability requirements.
Note: Veritas recommends that you install the Oracle Clusterware and Oracle RAC
database binaries local to each node in the cluster.
Table 4-3 Type of storage for Oracle RAC binaries and data files
Store the Oracle RAC database files on CFS rather than on raw
device or CVM raw device for easier management. Create
separate clustered file systems for each Oracle RAC database.
Keeping the Oracle RAC database datafiles on separate mount
points enables you to unmount the database for maintenance
purposes without affecting other databases.
Supported by ASM ASM provides storage for data files, control files, Oracle Cluster
Registry devices (OCR), voting disk, online redo logs and archive
log files, and backup files.
Not supported by ASM ASM does not support Oracle binaries, trace files, alert logs,
export files, tar files, core files, and application binaries.
■ Use CVM mirrored volumes with dynamic multi-pathing for creating ASM disk
groups. Select external redundancy while creating ASM disk groups.
■ The CVM raw volumes used for ASM must be used exclusively for ASM. Do
not use these volumes for any other purpose, such as creation of file systems.
Creating file systems on CVM raw volumes used with ASM may cause data
corruption.
■ Do not link the Veritas ODM library when databases are created on ASM. ODM
is a disk management interface for data files that reside on the Veritas File
System.
■ Use a minimum of two Oracle RAC ASM disk groups. Store the data files, one
set of redo logs, and one set of control files on one disk group. Store the Flash
Recovery Area, archive logs, and a second set of redo logs and control files on
the second disk group.
For more information, see Oracle RAC's ASM best practices document.
■ Do not configure DMP meta nodes as ASM disks for creating ASM disk groups.
Access to DMP meta nodes must be configured to take place through CVM.
■ Do not combine DMP with other multi-pathing software in the cluster.
■ Do not use coordinator disks, which are configured for I/O fencing, as ASM
disks. I/O fencing disks should not be imported or used for data.
■ Volumes presented to a particular ASM disk group should be of the same speed
and type.
# umask 0022
2 Reboot the system once the appropriate file has been modified.
See the operating system documentation for more information on I/O
schedulers.
Section 2
Installation of Veritas
InfoScale
■ Installing or upgrading Veritas InfoScale using the installer with the -yum option
# cd /mnt/cdrom
3 From this directory, type the following command to start the installation on the
local system.
# ./installer
5 The list of available products is displayed. Select the product that you want to
install on your system.
If you enter y, the installer configures the product after installation. If you enter
n, the installer quits after the installation is complete.
7 At the prompt, specify whether you accept the terms of the End User License
Agreement (EULA).
Do you agree with the terms of the End User License Agreement as
specified in the EULA/en/EULA.pdf file
present on media? [y,n,q,?] y
8 The installer performs the pre-checks. On a fresh system, the product is set
as the user defined it. If the system already has a different product installed,
the product is set to Veritas InfoScale Enterprise, and a warning message is
displayed after the pre-check.
9 Choose the licensing method. Answer the licensing questions and follow the
prompts.
Note: You can also register your license using the installer menu by selecting
the L) License a Product option.
See “Registering Veritas InfoScale using permanent license key file”
on page 17.
■ -matrixpath
■ -upgradestart
■ -upgradestop
Note: The new installer options are supported only with InfoScale 8.0. You can
perform upgrades from an earlier version to 8.0. The supported versions for upgrades
are 6.2.1, 7.3.1, 7.4.1, 7.4.2, and 7.4.3.
Notes:
■ If a repository URL is passed as an argument with the -yum option, you do not
need to set the yum repository manually. The CPI installer creates the repository
on each node. The repository URL is the base URL that you specify in the
repository file while configuring the yum repository, and the values for the base URL
attribute begin with http://, ftp://, file:///, or sftp://
■ If a repository name is passed as an argument with the -yum option, the CPI
installer assumes that the repository is already configured and enabled on the
node; hence, you do not need to configure the repository. If a repository name is
used and the repository has not yet been configured, the CPI installer exits
with an appropriate error.
Using -yum and -patch_path options together with -matrixpath
The following is the syntax and examples for performing patch installation or patch
upgrade along with GA upgrade of InfoScale with RPM files:
Note: After running any of the following yum installation commands, select the
Install a product or upgrade a product option from the menu displayed by installer
script.
Syntax:
./installer -yum [repo_name | repo_url] -patch_path [repo_name |
repo_url] -matrixpath
When you run this command, you need to enter the release matrix data path in the
command. You must use the -matrixpath option when there is no SORT connectivity
on a machine and the -yum and -patch_path options are used together. Because the
installer runs pre-checks on the release matrix data, the patch installation or patch
upgrade may fail if a correct release matrix data path is not provided.
Direct or manual yum installation
Ensure that you set the yum repository manually on each node of the cluster before
running the yum install command.
For more details on Installing Veritas InfoScale using yum, refer to the topic:
See “Installing Veritas InfoScale using yum” on page 77.
2 Specify all the Veritas InfoScale RPMs using RPM glob. For example: # yum
install 'VRTS*'
3 Specify the group name if a group is configured for Veritas InfoScale's RPMs.
Note: Ensure that the specified name is consistent with the one in the XML file. For
example, consider the group name usage as ENTERPRISE80: # yum install
@ENTERPRISE80 or # yum groupinstall -y ENTERPRISE80.
Use the upgradestop option before you begin to upgrade InfoScale using the yum
upgrade command. This option performs the required pre-upgrade checks and backs
up all the configuration files before the upgrade.
Syntax for upgradestop:
/opt/VRTS/install/installer -upgradestop
Use the upgradestart option to start the services after upgrading the InfoScale
RPMs using yum, such as starting CVM agents, registering extra types.cf files, and
updating the protocol version.
Syntax for upgradestart:
/opt/VRTS/install/installer -upgradestart
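Putting the two options together, a minimal sketch of the yum-based upgrade flow looks as follows (the VRTS* glob assumes all InfoScale RPM names start with VRTS, as in the install examples in this chapter):

```
# /opt/VRTS/install/installer -upgradestop
# yum upgrade 'VRTS*'
# /opt/VRTS/install/installer -upgradestart
```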
To upgrade InfoScale using yum
1 Disable all the service groups on a cluster.
2 Unmount the file system which is not under the VCS control.
3 Use the following command to disable the dmp native support:
# vxdmpadm settune dmp_native_support=off
Note: The base version for upgradestop is 8.0. You cannot perform a direct yum
upgrade from earlier versions of InfoScale to 8.0 using upgradestop; for those
versions, use the -stop option with the installer (./installer -stop). After
running that command, ensure that all the modules and services are stopped,
using the lsmod and systemctl status commands, and verify the status before
proceeding with the yum upgrade.
i. Create a .repo file using any editor (vi, vim, or nano), for example:
# vi /etc/yum.repos.d/infoscale80.repo
ii. Insert the following values in the .repo file:
Note: The values for the baseurl attribute can start with http://, ftp://,
or file://. The URL you choose must be able to access the repodata
directory, as well as all the Veritas InfoScale RPMs in the repository that
you create or update.
iii. Save and exit the text editor.
Note: If you copy the .repo file directly from the installation media, you need
to update the baseurl and gpgkey entries in
/etc/yum.repos.d/infoscale80.repo to point to your yum repository directory,
using any text editor.
■ # yum updateinfo
■ # yum grouplist
10 Run the following command to manually install the VRTSrest package on all
the cluster nodes:
# yum install VRTSrest
After the yum upgrade completes successfully, ensure that the cluster is up and
running. You can verify the CVM protocol version using the vxdctl protocolversion
command, and the VCS protocol version as follows:
/opt/VRTS/bin/haclus -value ProtocolNumber
Note: Ensure that you set up the yum repository manually on each node of the cluster
before running the yum install and yum upgrade commands.
Table 5-1 Response file variables (columns: Variable, Description, List or Scalar,
Mandatory or Optional)
The following response file installs InfoScale Enterprise using a local yum repository:
#
# Configuration Values:
#
our %CFG;
$CFG{accepteula}=1;
$CFG{keys}{keyless}=[ "ENTERPRISE" ];
$CFG{opt}{install}=1;
$CFG{opt}{yum}="repo-Infoscale80";
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ "dl380g10-10-vip17" ];
1;
The following response file specifies an HTTP URL as the yum repository:
#
# Configuration Values:
#
our %CFG;
$CFG{accepteula}=1;
$CFG{keys}{keyless}=[ "ENTERPRISE" ];
$CFG{opt}{install}=1;
$CFG{opt}{yum}="http://xyz.com/rhel8_x86_64/rpms/";
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ "dl380g10-10-vip17" ];
1;
The following response file additionally applies patches from a patch repository:
#
# Configuration Values:
#
our %CFG;
$CFG{accepteula}=1;
$CFG{keys}{keyless}=[ "ENTERPRISE" ];
$CFG{opt}{install}=1;
$CFG{opt}{matrixpath}="/root/patch_matrix/";
$CFG{opt}{patch_path}="repo-Infoscale80P";
$CFG{opt}{yum}="repo-Infoscale80";
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ "dl380g10-10-vip17" ];
1;
Note: For all upgrade operations, enter the newly added options wherever required.
The rest of the configuration values are the same as for a traditional installation or
upgrade.
The following response file performs the upgradestop operation:
#
# Configuration Values:
#
our %CFG;
$CFG{opt}{gco}=1;
$CFG{opt}{stop}=1;
$CFG{opt}{upgradestop}=1;
$CFG{opt}{vr}=1;
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ "dl380g10-10-vip17","dl380g10-10-vip18" ];
$CFG{vcs_allowcomms}=1;
1;
The following response file performs the upgradestart operation:
#
# Configuration Values:
#
our %CFG;
$CFG{opt}{gco}=1;
$CFG{opt}{start}=1;
$CFG{opt}{upgradestart}=1;
$CFG{opt}{vr}=1;
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ "dl380g10-10-vip14" ];
$CFG{vcs_allowcomms}=1;
1;
Chapter 6
Installing Veritas InfoScale
using response files
Note: Veritas recommends that you use the response file created by the installer
and then edit it to suit your requirements.
$CFG{Scalar_variable}="value";
$CFG{Scalar_variable}=123;
5 Mount the product disc and navigate to the directory that contains the installation
program.
6 Start the installation from the system to which you copied the response file.
For example:
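The example command itself is not reproduced above. Based on the response-file invocation shown later in this guide for uninstallation, it likely takes the following form; the file path is an example:

```shell
# ./installer -responsefile /tmp/response_file
```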
Table 6-1 Response file variables for installing Veritas InfoScale (continued)
Variable: CFG{opt}{logpath}
Description: Specifies the location where the log files are to be copied.
The default location is /opt/VRTS/install/logs.
our %CFG;
$CFG{accepteula}=1;
$CFG{keys}{keyless}=[ qw(ENTERPRISE) ];
$CFG{opt}{gco}=1;
$CFG{opt}{install}=1;
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ qw(system1 system2) ];
1;
The following example shows a response file for installing Veritas InfoScale using
a permanent license.
our %CFG;
$CFG{accepteula}=1;
$CFG{keys}{licensefile}=["<path_to_license_key_file>"];
$CFG{opt}{gco}=1;
$CFG{opt}{install}=1;
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ qw(system1 system2) ];
1;
Chapter 7
Installing Veritas InfoScale
using operating
system-specific methods
For example:
Key fingerprint = C031 8CAB E668 4669 63DB C8EA 0B0B C720 A17A 604B
To display details about the installed Veritas key file, use the rpm -qi command
followed by the output from the previous command:
You can also use the following command to show information for the installed Veritas
key file:
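The query commands themselves are not shown above. Assuming the key was imported as an RPM public key, a hedged sketch of the usual rpm queries follows; the key package name, derived from the last eight hex digits of the fingerprint shown above, is an assumption:

```shell
# rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'
# rpm -qi gpg-pubkey-a17a604b
```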
To check the GnuPG signature of an RPM file after importing the builder's GnuPG
key, use the following command:
# rpm -K <rpm-file>
md5 gpg OK
# mkdir /kickstart_files/
2 Generate the Kickstart configuration files. The configuration files have the
extension .ks. Enter the following command:
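The command itself is not reproduced above. A sketch, assuming the installer supports a -kickstart option that takes the directory created in step 1:

```shell
# ./installer -kickstart /kickstart_files/
```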
3 Set up an NFS exported location which the Kickstart client can access. For
example, if /nfs_mount_kickstart is the directory which has been NFS
exported, the NFS exported location may look similar to the following:
# cat /etc/exports
/nfs_mount_kickstart *(rw,sync,no_root_squash)
4 Copy the rpms directory from the installation media to the NFS location.
5 Verify the contents of the directory.
# ls /nfs_mount_kickstart/
6 In the Veritas InfoScale Kickstart configuration file, modify the BUILDSRC variable
to point to the actual NFS location. The variable has the following format:
BUILDSRC="hostname_or_ip:/nfs_mount_kickstart"
7 Append the entire modified contents of the Kickstart configuration file to the
operating system ks.cfg file.
8 Launch the Kickstart installation for the operating system.
9 After the operating system installation is complete, check the file
/opt/VRTStmp/kickstart.log for any errors that are related to the installation
of RPMs and product installer scripts.
10 Verify that all the product RPMs have been installed. Enter the following
command:
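The verification command is not reproduced above; a common way to list the installed Veritas RPMs is:

```shell
# rpm -qa | grep -i vrts
```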
11 If you do not find any installation issues or errors, configure the product stack.
Enter the following command:
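The configuration command is not shown above. Based on the configure invocation used elsewhere in this guide, it is likely:

```shell
# /opt/VRTS/install/installer -configure
```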
%packages
libudev.i686
device-mapper
device-mapper-libs
parted
libgcc.i686
compat-libstdc++-33
ed
ksh
nss-softokn-freebl.i686
glibc.i686
libstdc++.i686
audit-libs.i686
cracklib.i686
db4.i686
libselinux.i686
pam.i686
libattr.i686
libacl.i686
%end
%post --nochroot
# Add any scripts or commands that you need here
# This generated kickstart file is only for the automated
PATH=$PATH:/sbin:/usr/sbin:/bin:/usr/bin
export PATH
#
# Notice:
# * Modify the BUILDSRC below according to your real environment
# * The location specified with BUILDSRC should be NFS accessible
# to the Kickstart Server
# * Copy the whole directories of rpms from installation media
# to the BUILDSRC
#
BUILDSRC="<hostname_or_ip>:/path/to/rpms"
#
# Notice:
# * You do not have to change the following scripts
#
mkdir -p ${BUILDDIR}
mount -t nfs -o nolock,vers=3 ${BUILDSRC} ${BUILDDIR} >> ${KSLOG} 2>&1
umount ${BUILDDIR}
CALLED_BY=KICKSTART ${ROOT}/opt/VRTS/install/bin/UXRT8.0/add_install_scripts >> ${KSLOG} 2>&1
exit 0
%end
# cat /etc/yum.repos.d/veritas_infoscale7.repo
[repo-Veritas InfoScale]
name=Repository for Veritas InfoScale
baseurl=file:///path/to/repository/
enabled=1
gpgcheck=1
gpgkey=file:///path/to/repository/RPM-GPG-KEY-veritas-infoscale7
The values for the baseurl attribute can start with http://, ftp://, or file:///.
The URL you choose needs to be able to access the repodata directory.
It also needs to access all the Veritas InfoScale RPMs in the repository that
you create or update.
■ Run the following commands to get the yum repository updated:
# yum repolist
# yum updateinfo
ENTERPRISE80
FOUNDATION80
STORAGE80
Refer to the Red Hat Enterprise Linux Deployment Guide for more information
on yum repository configuration.
2 Install the RPMs on the target systems.
■ To install all the RPMs
2. Specify all of the Veritas InfoScale RPMs using their RPM glob. For example:
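The example itself is elided above; a sketch using the RPM glob, quoted so the shell does not expand it:

```shell
# yum install 'VRTS*'
```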
3. Specify the group name if a group is configured for the Veritas InfoScale RPMs.
This name must be consistent with the group name in the XML file. In this example,
the group name is ENTERPRISE8.0:
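The group installation command is not shown; assuming a yum group named ENTERPRISE8.0 is configured, it would typically be:

```shell
# yum groupinstall ENTERPRISE8.0
```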
Or
# ./installer -allpkgs
2. Use the same order as the output from the installer -allpkgs command:
# /opt/VRTS/install/bin/UXRT80/add_install_scripts
Creating install/uninstall scripts for installed products
Creating /opt/VRTS/install/installer for UXRT80
Creating /opt/VRTS/install/showversion for UXRT80
2 Log on to the Red Hat Satellite admin page. Select the Systems tab. Click on
the target system.
3 Select Alter Channel Subscriptions to alter the channel subscription of the
target system.
4 Select the channel which contains the repository of Veritas InfoScale.
5 Enter the following command to check the YUM repository on the target system.
# yum repolist
6 Enter the following command to install the Veritas InfoScale RPMs using YUM:
7 Enter the following command to create the installation scripts for the installed
products:
# /opt/VRTS/install/bin/UXRT8.0/add_install_scripts
8 Enter the following command to configure Veritas InfoScale using the installer:
# ./installer -configure
Chapter 8
Completing the post
installation tasks
This chapter includes the following topics:
# /opt/VRTS/install/installer -version
To find out about the installed RPMs and their versions, use the following command:
# /opt/VRTS/install/showversion
After every product installation, the installer creates an installation log file and a
summary file. The name and location of each file is displayed at the end of a product
installation, and are always located in the /opt/VRTS/install/logs directory.
Veritas recommends that you keep the files for auditing, debugging, and future use.
The installation log file contains all commands that are executed during the
procedure, their output, and the errors generated by the commands.
The summary file contains the results of the installation by the installer or the product
installation scripts. The summary includes the list of the RPMs, and the status
(success or failure) of each RPM, and information about the processes that were
stopped or restarted during the installation. After installation, refer to the summary
file to determine whether any processes need to be started.
■ If you want to use a shell such as csh or tcsh, enter the following:
On a Red Hat system, also include the 1m manual page section in the list defined
by your MANSECT environment variable.
■ If you want to use a shell such as sh or bash, enter the following:
■ If you want to use a shell such as csh or tcsh, enter the following:
If you use the man(1) command to access manual pages, set LC_ALL=C in your
shell to ensure that they display correctly.
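The export commands themselves are elided above. A sketch of typical MANPATH settings, assuming the Veritas manual pages are installed under /opt/VRTS/man (the path is an assumption):

```shell
# For sh or bash:
MANPATH=$MANPATH:/opt/VRTS/man; export MANPATH
# For csh or tcsh:
setenv MANPATH ${MANPATH}:/opt/VRTS/man
```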
Commands to manage the Veritas telemetry collector on your server
Operation: Start the collector (if the collector is not already running).
Description: Use the following command to start a collector that is not yet sending
telemetry data to the edge server.
/opt/VRTSvlic/tele/bin/TelemetryCollector -start
Operation: Restart the collector (if the collector is already running).
Description: Use the following command to restart the collector that is sending
telemetry data to the edge server.
/opt/VRTSvlic/tele/bin/TelemetryCollector -restart
Operation: Check whether the collector is running or not.
Description: Use the following command to check the status of the collector on
your server.
/opt/VRTSvlic/tele/bin/TelemetryCollector -status
Storage Foundation and High Availability: See the Storage Foundation and High
Availability Configuration and Upgrade Guide.
Storage Foundation Cluster File System HA: See the Storage Foundation Cluster
File System High Availability Configuration and Upgrade Guide.
Storage Foundation for Oracle RAC: See the Storage Foundation for Oracle RAC
Configuration and Upgrade Guide.
Storage Foundation for Sybase ASE CE: See the Storage Foundation for Sybase
ASE CE Configuration and Upgrade Guide.
# df -T | grep vxfs
2 Make backups of all data on the file systems that you wish to preserve, or
recreate them as non-VxFS file systems on non-VxVM volumes or partitions.
3 Unmount all Storage Checkpoints and file systems:
# umount /checkpoint_name
# umount /filesystem
4 Comment out or remove any VxFS file system entries from the /etc/fstab
file.
Removing rootability
Perform this procedure if you configured rootability by encapsulating the root disk.
Uninstalling Veritas InfoScale using the installer 90
Moving volumes to disk partitions
To remove rootability
1 Check if the system’s root disk is under VxVM control by running this command:
# df -v /
For example, the following command removes the plexes mirrootvol-01, and
mirswapvol-01 that are configured on a disk other than the root disk:
Warning: Do not remove the plexes that correspond to the original root disk
partitions.
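The plex removal command referenced above is elided. A hedged sketch using vxplex; the plex names come from the surrounding text, and the exact options are an assumption:

```shell
# vxplex -o rm dis mirrootvol-01 mirswapvol-01
```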
3 Enter the following command to convert all the encapsulated volumes in the
root disk back to being accessible directly through disk partitions instead of
through volume devices:
# /etc/vx/bin/vxunroot
# dd if=/dev/vx/dsk/diskgroup/volume-name of=/dev/sdb2
where sdb is the disk outside of VxVM and 2 is the newly created partition on
that disk.
7 Replace the entry for that volume (if present) in /etc/fstab with an entry for
the newly created partition.
8 Mount the disk partition if the corresponding volume was previously mounted.
9 Stop the volume and remove it from VxVM using the following commands:
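The commands are not reproduced above. Stopping and removing a volume is typically done as follows; diskgroup and volume-name are placeholders:

```shell
# vxvol -g diskgroup stop volume-name
# vxedit -g diskgroup -rf rm volume-name
```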
10 Remove any disks that have become free (have no subdisks defined on them)
by removing volumes from VxVM control. To check if there are still some
subdisks remaining on a particular disk, use the following command:
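The check command is elided above; one hedged way to count the subdisks on a disk is with vxprint (disk_media_name is a placeholder):

```shell
# vxprint -F '%sdnum' disk_media_name
```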
11 If the output is not 0, there are still some subdisks on this disk that must be
subsequently removed. If the output is 0, remove the disk from VxVM control
using the following commands:
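The removal commands are elided above. A sketch, assuming the disk belongs to diskgroup; the group and device names are placeholders:

```shell
# vxdg -g diskgroup rmdisk disk_media_name
# vxdisk rm device_name
```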
12 The free space now created can be used for adding the data in the next volume
to be removed.
13 After all volumes have been converted into disk partitions successfully, reboot
the system. After the reboot, none of the volumes should be open. To verify
that none of the volumes are open, use the following command:
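The verification command is elided above; vxprint can list any open volumes, as in this hedged sketch:

```shell
# vxprint -Aht -e v_open
```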
Note: If you are upgrading Volume Replicator, do not remove the Replicated Data
Set.
Removing the Replicated Data Set
The argument local_rvgname is the name of the RVG on the local host and
represents its RDS.
The argument sec_hostname is the name of the Secondary host as displayed
in the output of the vradmin printrvg command.
3 Remove the Secondary from the RDS by issuing the following command on
any host in the RDS:
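The command itself is elided above. Removing a Secondary from the RDS is done with the vradmin delsec command, sketched here; the disk group name is a placeholder:

```shell
# vradmin -g diskgroup delsec local_rvgname sec_hostname
```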
The argument local_rvgname is the name of the RVG on the local host and
represents its RDS.
The argument sec_hostname is the name of the Secondary host as displayed
in the output of the vradmin printrvg command.
4 Remove the Primary from the RDS by issuing the following command on the
Primary:
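The command is elided above; as the following text notes, it is the vradmin delpri command. A sketch, with the disk group name as a placeholder:

```shell
# vradmin -g diskgroup delpri local_rvgname
```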
When used with the -f option, the vradmin delpri command removes the
Primary even when the application is running on the Primary.
The RDS is removed.
5 If you want to delete the SRLs from the Primary and Secondary hosts in the
RDS, issue the following command on the Primary and all Secondaries:
Note: After you uninstall the product, you cannot access any file systems you
created using the default disk layout version in Veritas InfoScale 8.0 with a previous
version of Veritas InfoScale.
# umount /mount_point
3 If the VxVM RPM (VRTSvxvm) is installed, read and follow the uninstallation
procedures for VxVM.
See “Removing rootability” on page 89.
4 If a cache area is online, you must take the cache area offline before uninstalling
the VxVM RPM. Use the following command to take the cache area offline:
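The command for this step appears to have been elided. Taking a cache area offline is typically done with the sfcache utility, sketched here; cachearea_name is a placeholder and the exact syntax is an assumption:

```shell
# sfcache offline cachearea_name
```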
# hastop -local
# hastop -all
# cd /opt/VRTS/install
# ./installer -uninstall
8 The uninstall script prompts for the system name. Enter one or more system
names, separated by a space, from which to uninstall Veritas InfoScale.
9 The uninstall script prompts you to stop the product processes. If you respond
yes, the processes are stopped and the RPMs are uninstalled.
The uninstall script creates log files and displays the location of the log files.
10 Most RPMs have kernel components. In order to ensure complete removal, a
system reboot is recommended after all RPMs have been removed.
11 In case the uninstallation fails to remove any of the VRTS RPMs, check the
installer logs for the reason for failure or try to remove the RPMs manually
using the following command:
# rpm -e VRTSvxvm
# cat /var/vx/vxdba/rep_loc
{
"sfae_rept_version" : 1,
"oracle" : {
"SFAEDB" : {
"location" : "/data/sfaedb/.sfae",
"old_location" : "",
"alias" : [
"sfaedb"
]
}
}
}
# rm -rf /data/sfaedb/.sfae
# rm -rf /db2data/db2inst1/NODE0000/SQL00001/.sfae
# rm -rf /db2data/db2inst1/NODE0000/SQL00001/MEMBER0000/.sfae
# rm -rf /var/vx/vxdba/rep_loc
4 Start the uninstallation from the system to which you copied the response file.
For example:
# /opt/VRTS/install/installer -responsefile
/tmp/response_file
Variable: CFG{opt}{logpath}
Description: Specifies the location where the log files are to be copied.
The default location is /opt/VRTS/install/logs.
our %CFG;
$CFG{opt}{uninstall}=1;
$CFG{opt}{vr}=1;
$CFG{prod}="ENTERPRISE80";
$CFG{systems}=[ qw(system1 system2) ];
1;
Section 4
Installation reference
-keyfile ssh_key_file Specifies a key file for secure shell (SSH) installs.
This option passes -i ssh_key_file to every
SSH invocation.
-rsh Specify this option when you want to use rsh and
RCP for communication between systems instead
of the default ssh and SCP.
Installation scripts 105
Installation script options
6 Start the installer for the installation, configuration, or upgrade. For example:
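The example invocation is not reproduced above. Based on the tunables-file handling described in this chapter, it likely looks like the following; the option name and path are assumptions:

```shell
# ./installer -tunablesfile /tmp/tunables_file
```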
Where /tmp/tunables_file is the full path name for the tunables file.
7 Proceed with the operation. When prompted, accept the tunable parameters.
Certain tunables are only activated after a reboot. Review the output carefully
to determine if the system requires a reboot to set the tunable value.
8 The installer validates the tunables. If an error occurs, exit the installer and
check the tunables file.
Where /tmp/tunables_file is the full path name for the tunables file.
Setting tunables with an un-integrated response file
7 Proceed with the operation. When prompted, accept the tunable parameters.
Certain tunables are only activated after a reboot. Review the output carefully
to determine if the system requires a reboot to set the tunable value.
8 The installer validates the tunables. If an error occurs, exit the installer and
check the tunables file.
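The combined invocation is elided above; a hedged sketch using both the response file and the tunables file options (the option names are assumptions):

```shell
# ./installer -responsefile response_file_name -tunablesfile tunables_file_name
```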
Where response_file_name is the full path name for the response file and
tunables_file_name is the full path name for the tunables file.
7 Certain tunables are only activated after a reboot. Review the output carefully
to determine if the system requires a reboot to set the tunable value.
8 The installer validates the tunables. If an error occurs, exit the installer and
check the tunables file.
Preparing the tunables file
# ./installer -tunables
You see a list of all supported tunables, and the location of the tunables file
template.
To manually format tunables files
◆ Format the tunable parameter as follows:
$TUN{"tunable_name"}{"system_name"|"*"}=value_of_tunable;
For the system_name, use the name of the system, its IP address, or a wildcard
symbol. The value_of_tunable depends on the type of tunable you are setting. End
the line with a semicolon.
The following is an example of a tunables file.
#
# Tunable Parameter Values:
#
our %TUN;
$TUN{"tunable1"}{"*"}=1024;
$TUN{"tunable3"}{"sys123"}="SHA256";
1;
$TUN{"dmp_daemon_count"}{"node123"}=16;
In this example, you are changing the dmp_daemon_count value from its default
of 10 to 16. You can use the wildcard symbol "*" for all systems. For example:
$TUN{"dmp_daemon_count"}{"*"}=16;
■ Inaccessible system
Suggested solution: You need to set up the systems to allow remote access using
ssh or rsh.
Note: Remove remote shell permissions after completing the Veritas InfoScale
installation and configuration.
Inaccessible system
The system you specified is not accessible. This can happen for a variety of reasons:
for example, the system name was entered incorrectly, or the system is not available
over the network.
Suggested solution: Verify that you entered the system name correctly; use the
ping(1M) command to verify that the host is accessible.