
TECS Openpalette

CLI Reference

Version: V7.22.30

ZTE CORPORATION
ZTE Plaza, Keji Road South, Hi-Tech Industrial Park,
Nanshan District, Shenzhen, P.R.China
Postcode: 518057
Tel: +86-755-26771900
URL: http://support.zte.com.cn
E-mail: [email protected]
LEGAL INFORMATION
Copyright 2022 ZTE CORPORATION.

The contents of this document are protected by copyright laws and international treaties. Any reproduction or
distribution of this document or any portion of this document, in any form by any means, without the prior written
consent of ZTE CORPORATION is prohibited. Additionally, the contents of this document are protected by
contractual confidentiality obligations.


All company, brand and product names are trade or service marks, or registered trade or service marks, of ZTE
CORPORATION or of their respective owners.

This document is provided as is, and all express, implied, or statutory warranties, representations or conditions are
disclaimed, including without limitation any implied warranty of merchantability, fitness for a particular purpose,
title or non-infringement. ZTE CORPORATION and its licensors shall not be liable for damages resulting from
the use of or reliance on the information contained herein.

ZTE CORPORATION or its licensors may have current or pending intellectual property rights or applications
covering the subject matter of this document. Except as expressly provided in any written license between ZTE
CORPORATION and its licensee, the user of this document shall not acquire any license to the subject matter
herein.
ZTE CORPORATION reserves the right to upgrade or make technical changes to this product without further notice.

Users may visit the ZTE technical support website http://support.zte.com.cn to inquire about related information.

The ultimate right to interpret this product resides in ZTE CORPORATION.

Statement on the Use of Third-Party Embedded Software:


If third-party embedded software such as Oracle, Sybase/SAP, Veritas, Microsoft, VMware, and Redhat is delivered
together with this product of ZTE, the embedded software must be used only as a component of this product. If this
product is discarded, the licenses for the embedded software are also void and must not be transferred. ZTE will
provide technical support for the embedded software of this product.

Revision History

Revision No. Revision Date Revision Reason


R1.0 2023-04-15 First edition.

Serial Number: SJ-20221017102955-015


Publishing Date: 2023-04-15 (R1.0)
CLI Reference

■ Revision History

■ About This Document

■ Software Repository CLI

■ Software Repository CLI

■ Introduction to Zartcli

■ Configuration File

■ Querying a List

■ Fuzzy Query

■ Query Details

■ Performing Synchronization

■ Querying the Synchronization Result

■ Deleting a Synchronization Task

■ Exporting

■ Importing

■ Creating a Push List File

■ Performing the Push Operation

■ Querying the Synchronization Result

■ Deleting a Push Record

■ Platform Management CLI

■ Platform Management CLI

■ Syntax of Pdm-cli Commands

■ PaaS Deployment

■ PaaS Components

■ PaaS Cluster Management

■ Resource Pool

■ Volume

■ Port Range

■ SSH Keys of Nodes

■ Time Zone of the PaaS

■ External Clock Source of the PaaS

■ Clock Source Whitelist of the PaaS

■ Setting the Timestamp Function for the Clock Synchronization Service

■ PaaS Date and Time

■ Restarting the PaaS

■ Health Check on Common Services

■ Graceful Shutdown of the PaaS

■ Commands for Hot Patches of the PaaS

■ Modifying the Hostname of a Node


■ Offline Updating the Component Version in the Local Software Repository

■ Modifying the Shelf/Blade Configuration

■ Apache Package Related Information

■ Node Resources Management CLI

■ Node Resources Management CLI

■ Overview of the Cnrm-cli Tool

■ Using Help Commands

■ Configuration Sub-commands

■ Query Sub-commands

■ Modification Sub-commands

■ Status Sub-commands

■ Status Query Sub-commands

■ Firewall Rule Management CLI

■ Firewall Rule Management CLI

■ Overview of the Inetrules-cli Tool

■ Using Help Commands

■ Querying Firewall Rules

■ Enabling Firewall Rules

■ Disabling Firewall Rules

■ Adding Firewall Rules

■ Deleting a Firewall Rule

■ System Traffic Management CLI

■ System Traffic Management CLI

■ Overview of the Globalinetrules-cli Tool

■ Using Help Commands

■ Querying System Traffic Filtering Rules

■ Adding a System Traffic Filtering Rule

■ Deleting an Existing System Traffic Filtering Rule

■ Deleting All System Traffic Filtering Rules

■ One-Click Collection CLI

■ Overview of One-Click Collection CLI

■ Data Collection

■ Modifying a Configuration File

■ Modifying the Quota of a Data Collection Directory

■ Configuration File Description

Revision History


Each revision record contains the following columns:

■ CLI Change Operation: addition, modification, or deletion

■ CLI Command

■ Parameter Change Operation: addition, modification, or deletion

■ Parameter

■ Changed Contents: describes the contents that are changed

■ Reason for Change

■ Revision Version

About This Document


Document Description
This document describes the software repository, platform management, node resource management, firewall rule
management, system traffic management, and one-click collection CLI commands.
Intended Audience
This manual is intended for:

■ Debugging engineers

■ Maintenance engineers

■ Network management engineers

Required Skills and Knowledge


Before using this document, you need to understand the fundamentals of Linux and the PaaS.
What Is in This Document
This document contains the following chapters:

Chapter Overview

Software Repository CLI: Describes the CLI commands of the software repository, including the configuration file,
query list, and fuzzy query commands.

Platform Management CLI: Describes the CLI commands for platform management, including PaaS deployment,
PaaS components, and PaaS cluster commands.

Node Resources Management CLI: Describes the CLI commands for node resource management, including
configuration sub-commands, query sub-commands, and modification sub-commands.

Firewall Rule Management CLI: Describes the CLI commands for firewall rule management, including the commands
for viewing, enabling, and disabling firewall rules.

System Traffic Management CLI: Describes the CLI commands for system traffic management, including the
commands for querying system traffic filtering rules, adding system traffic filtering rules, and deleting existing
system traffic filtering rules.

One-Click Collection CLI: Describes the CLI commands for one-click collection, including commands for data
collection, modifying configuration files, and modifying the quota of a data collection directory.


Software Repository CLI


■ Software Repository CLI

■ Introduction to Zartcli

■ Configuration File

■ Querying

■ Querying a List

■ Fuzzy Query

■ Query Details

■ Uploading

■ Downloading

■ Updating

■ Deleting

■ Building an Image

■ Pushing a Local Image

■ Online Synchronization

■ Performing Synchronization

■ Querying the Synchronization Result

■ Deleting a Synchronization Task

■ Importing and Exporting a Version

■ Exporting

■ Importing

■ Pushing

■ Creating a Push List File

■ Performing the Push Operation

■ Querying the Synchronization Result

■ Deleting a Push Record

■ Garbage Collection of the Registry

Software Repository CLI

Introduction to Zartcli

Zartcli is a CLI client of the software repository. It is a binary file, and the current version supports 64-bit Linux.
It provides commands for querying, uploading, downloading, updating, deleting and synchronizing four types of versions:
image, blueprint (bp), software package (bin) and component.
Zartcli is released together with the PaaS version. After the PaaS environment is installed successfully, zartcli can be used.
It is in the /root/zartcli/ directory on the controller node of the PaaS.
Parameter Descriptions:


-b public attribute (publicview), "yes" or "no".
-D marks the push operation.
-E displays error codes.
-e desc (description).
-g image type. At present, it is set to "devframe" when dev is used to upload a programming framework. If this
parameter is not specified, the default setting "app" is used.
-i tenant ID (tenantid), for example, tcfs in http://127.0.0.1:5000/swr/v1/tenants/tcfs/images.
-l alias.
-m application type (model); currently, the options are image, com, bp, and bin.
-n name, for example, tcfs and iportal.
-o operation type, including query, detail, download, upload, delete, update, push, build, and sync.
-p path used when files are uploaded or downloaded.
-w waits for the actual execution result of uploading and synchronization tasks, "true" or "false".
-r remark.
-S saves or adds a server address in the configuration file.
-s specifies a configuration name for the server address.
-t blueprint tag: service, microservice, or commonservice.
-V client version number.
-v version number (record version), for example, v1.0.
-Y marks the synchronization operation.
-logpath sets the log path.

Configuration File

The configuration file zartcli.ini of zartcli is located in the same directory as the binary program. The initial contents
are as follows:

[zartsrv]
default = ip:port // ip is the address of the PaaS software repository server, and port is its port (6000 by default)

[logpath]
path = /paasdata/op-log/cf-zartcli

To operate another independent repository (for example, 10.1.1.123:6000), run the following command to add the
repository configuration:

./zartcli -S=swr123:10.1.1.123:6000

■ -S format description: “repository name:repository address:repository port”, or “repository address:repository
port” (in this case, the repository name is default).


■ swr123: Name of the newly added independent repository.

After this configuration is added, you can use the “-s swr123” parameter to perform version-related operations for this
repository.
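After the -S command above runs, zartcli.ini would carry an entry for the new repository alongside the default one. The following is only a sketch of the resulting file; the exact key layout that zartcli writes may differ:

```ini
[zartsrv]
default = ip:port
swr123 = 10.1.1.123:6000

[logpath]
path = /paasdata/op-log/cf-zartcli
```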
If no log path is configured in the zartcli.ini file, the logs are saved in the same directory as zartcli by default. To set the log
path, run the following command:

./zartcli -logpath=path


Querying

Querying a List

The querying operation can query a list of versions of the image, bp, bin, and com types for a specified project (tenant).
For the bp type, the -t parameter is needed. For the image, bin, and com types, the -t parameter is not needed.

./zartcli -o=query -i=tcfs -m=image -n=image1 -v=v1


./zartcli -o=query -i=tcfs -m=bp -n=bp1 -v=v1 -t=service

■ Obtains the versions of the type specified by -m under the tenant specified by -i.

■ -m can be set to image, bp, bin, and com.

Example:

1 The following example shows how to query all versions of a bp under the tcfs project:

./zartcli -o=query -i=tcfs -m=bp -n=bp1 -t=service

2 The following example shows how to query the specified version of a bp under the tcfs project:

./zartcli -o=query -i=tcfs -m=bp -n=bp1 -v=v1 -t=service

Fuzzy Query

Fuzzy conditions can be used when querying a version list.

./zartcli -o=query -i=tcfs -m=bp -n=\* -v=v\* -t=\*service

■ -m can be set to image, bp, bin, and com.

■ * stands for a wildcard, which can be *, *abc, and abc*.

■ Fuzzy query is only applicable to -t, -n and -v.

■ Enclose keywords containing the * wildcard in double quotation marks (or escape the * as shown above) so
that the shell does not expand them.

Example:

1 The following example shows how to query all versions of each image under the tcfs project. In this example, the
image name suffix is “test”.

./zartcli -o=query -i=tcfs -m=image -n=*test
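The shell, not zartcli, interprets an unquoted `*`. The following local demonstration (independent of zartcli; file names are hypothetical) shows why a pattern such as `*test` must be quoted or escaped:

```shell
# Local demonstration (no zartcli involved): the shell expands unquoted
# glob patterns against files in the current directory before the command runs.
cd "$(mktemp -d)"
touch mytest yourtest
unquoted=$(echo *test)   # expanded by the shell: becomes "mytest yourtest"
quoted=$(echo "*test")   # quoted: the literal pattern "*test" is passed through
echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```

Hence `-n=*test` should be written as `-n="*test"` or `-n=\*test` so that the literal pattern reaches zartcli.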

Query Details

Query the detailed information of a specific version of the image, bp, bin and com types for the specified project (tenant).
(The parameters carried must specify a unique version.)

./zartcli -o=detail -i=tcfs -m=bp -t=service -n=name -v=version


./zartcli -o=detail -i=tcfs -m=image -n=name -v=version


./zartcli -o=detail -i=tcfs -m=bin -n=name -v=version

■ Obtain the detailed information of the specified version.

Example:

1 The following example shows how to query the blueprint details under the tcfs project. In this example, the
blueprint name is bp1, the version is v1 and the tag is service.

./zartcli -o=detail -i=tcfs -m=bp -t=service -n=bp1 -v=v1

Uploading

Upload a local version to the software repository.

./zartcli -o=upload -i=tcfs -m=image -n=name -v=version -p=path


./zartcli -o=upload -i=tcfs -m=image -n=name -v=version -p=path -w=true
./zartcli -o=upload -i=tcfs -m=bp -t=service -n=name -v=version -p=path

■ The version to be uploaded should be placed in the path corresponding to -p. This path cannot contain a
sub-directory.

■ When a blueprint is uploaded, all the files except .detail and .info under the path are written to the list field
through stream transmission and carried to the server.

■ When a blueprint is uploaded, if -t is not specified, “default” is set by default.

■ When an image is uploaded, only one tar package is allowed under the path (this tar package must be saved by
imagename:version, not imageid).

■ When a component is uploaded, only one image tar package is allowed under the path (this tar package must be
saved by imagename:version, not imageid). Other files cannot have the .tar suffix.

■ When a blueprint, an image, or a software package (bin) is uploaded, the detail and info fields are obtained
from the .detail and .info files under the path. To add other fields, add them directly to the upload command.
Currently, the com type does not support updating detail and info through files.

■ -m can be set to bp, bin, com, and image.

■ -w is optional. The value is “true” or “false”, and the default value is false. “true” indicates that the result is
returned only after the status of the uploaded version is “available” or “unavailable”. “false” indicates that the
final result does not need to be known, and the result is returned as soon as the request is sent successfully.

Example:

1 The following example shows how to upload an image to the tcfs project. In this example, the image name is
image1, the version is v1, and the storage path of the image tar package is /home/upload/.

./zartcli -o=upload -i=tcfs -m=image -n=image1 -v=v1 -p=/home/upload/

2 The following example shows how to upload a blueprint to the tcfs project. In this example, the blueprint name
is bp1, the type is service, the version is v1, and the storage path of the json file of the blueprint is /home/upload/.


./zartcli -o=upload -i=tcfs -m=bp -n=bp1 -v=v1 -t=service -p=/home/upload/

3 The following example shows how to upload a software package to the tcfs project. In this example, the software
package name is bin1, the version is v1, and the storage path of all the files in the software package is /home/upload/,
which contains no sub-directory.

./zartcli -o=upload -i=tcfs -m=bin -n=bin1 -v=v1 -p=/home/upload/

4 The following example shows how to upload a component package to the tcfs project. In this example, the component
package name is com1, the version is v1, and the storage path of the component package and the image tar package is
/home/upload/, which contains no sub-directory and only one tar package file.

./zartcli -o=upload -i=tcfs -m=com -n=com1 -v=v1 -p=/home/upload/
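The directory rules above can be sketched for a blueprint upload. This is a hypothetical local layout (file names and contents are placeholders, not prescribed by zartcli): the payload file plus the optional .detail/.info files, and no sub-directories.

```shell
# Sketch of an upload directory that follows the rules above.
# File names and contents are hypothetical placeholders.
dir=$(mktemp -d)                                    # in practice, the path passed to -p
printf 'blueprint description\n' > "$dir/.detail"   # source of the detail field
printf 'blueprint info\n'        > "$dir/.info"     # source of the info field
printf '{}\n'                    > "$dir/bp1.json"  # blueprint payload file
# The path must not contain any sub-directory:
subdirs=$(find "$dir" -mindepth 1 -type d | wc -l | tr -d ' ')
echo "sub-directories: $subdirs"
```

The prepared directory would then be passed to the upload command via -p.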

Downloading

Download a software package (bin), a component package, or a blueprint from the software repository to a specified
local directory as files. (This function cannot be used to download an image. For a component, only the file list can be
downloaded; the image of the component cannot be downloaded.)

./zartcli -o=download -i=tcfs -m=bp -t=service -n=name -v=version -p=path

■ -p is the path where the specified version is downloaded to the local computer.

■ If the path does not exist, the system creates one automatically. If this version has been downloaded before, the
original one is overwritten.

■ If the version contains a docker image, the docker image is not downloaded to the local computer.

■ -m can be set to bin, com, and bp.

Example:

1 The following example shows how to download a blueprint of the tcfs project. In this example, the blueprint name is
bp1, the version is v1, the type is service, and the download path is /home/download/.

./zartcli -o=download -i=tcfs -m=bp -t=service -n=bp1 -v=v1 -p=/home/download/

2 The following example shows how to download a software package of the tcfs project. In this example, the
software package name is bin1, the version is v1, and the download path is /home/download/.

./zartcli -o=download -i=tcfs -m=bin -n=bin1 -v=v1 -p=/home/download/

3 The following example shows how to download a component package of the tcfs project. In this example, the
component package name is com1, the version is v1, and the download path is /home/download/.

./zartcli -o=download -i=tcfs -m=com -n=com1 -v=v1 -p=/home/download/

Updating


Set the public attribute (publicview) of a specified version under a specified project.

./zartcli -o=update -i=tcfs -m=bp -t=service -n=name -v=version -b=yes

■ The -i, -m, -t, -n, and -v parameters identify exactly one record. You can modify it by adding other parameters at
the end, for example, -b=yes.

■ When -m is set to bin, image, or com, the -t parameter is not required, because there is no tag label in the version
attributes.

Example:

1 The following example shows how to set a blueprint version under the tcfs project to a public blueprint.

./zartcli -o=update -i=tcfs -m=bp -t=service -n=bp1 -v=v1 -b=yes

2 The following example shows how to set an image version under the tcfs project to a project image (private
image).

./zartcli -o=update -i=tcfs -m=image -n=image1 -v=v1 -b=no

Deleting

Delete a version of the bin, image, com, or bp type under the specified project.

./zartcli -o=delete -i=tcfs -m=bp -t=service -n=name -v=version


./zartcli -o=delete -i=tcfs -m=image -n=name -v=version

■ If -t is not specified when a blueprint is deleted, -t=default is used.

■ When an image is deleted, only the image record in the software repository and the image tag in the registry are
deleted. The data layers of the image are not deleted, and the space occupied by the image is not released.

■ -m can be set to bin, image, com, and bp.

Example:

1 The following example shows how to delete the specified version of an image under the tcfs project.

./zartcli -o=delete -i=tcfs -m=image -n=image1 -v=v1

2 The following example shows how to delete all the versions of an image under the tcfs project.

./zartcli -o=delete -i=tcfs -m=image -n=image1

Building an Image

Upload all the files required for building an image (such as a Dockerfile) to a temporary directory on the software
repository server, build the image, and then put the image into the software repository. Finally, the files in the
temporary directory are deleted.


./zartcli -o=build -i=tcfs -m=image -n=name -v=version -p=path


./zartcli -o=build -i=tcfs -m=image -n=name -v=version -timeoutseconds=second -p=path

■ For an image, the files required for the build are pushed to the server. After the image is built, it is stored in the
software repository.

■ -timeoutseconds: timeout for building an image, in seconds. Default: 1800; range: 0‒7200.

Example:

1 The following example shows how to build an image. In this example, the project name is tcfs, the image name is
image1, the version is v1, and the directory is /home/build/.

./zartcli -o=build -i=tcfs -m=image -n=image1 -v=v1 -p=/home/build/


./zartcli -o=build -i=tcfs -m=image -n=image1 -v=v1 -timeoutseconds=2000 -p=/home/build/

Pushing a Local Image

Directly push a local image tar package to the registry. After that, the image record is written to the software repository.
Difference between uploading and pushing an image: the push function runs the docker load (of the tar package),
docker tag, and docker push commands locally, and then writes an image record to the software repository. The upload
function uploads the image tar package directly to the software repository server, and the server completes the
subsequent operations.

./zartcli -o=push -i=tcfs -m=image -n=name -v=version -p=path

■ Only one tar package is allowed under the path.

■ For an image, docker needs to be pre-installed locally.

Example:

1 The following example shows how to push an image of the tcfs project. In this example, the image name is
image1, the version is v1, and the storage path of the image tar package is /home/push/.

./zartcli -o=push -i=tcfs -m=image -n=image1 -v=v1 -p=/home/push/

Online Synchronization

Synchronize the related software versions from the remote software repository to the local software repository.
(This is manual synchronization; it is performed once each time it is initiated.)

Performing Synchronization

./zartcli -o=sync -i=admin -p=path


./zartcli -o=sync -i=admin -p=path -w=true

■ The -i parameter in the synchronization operation can only be admin.

■ -m can be set to bp, image, bin and com.

■ -w is optional. The value is “true” or “false”, and the default value is false. “true” indicates that the result is
returned after the synchronization task ends in “sync success” or “sync failed!”. “false” indicates that the final
result of the synchronization task does not need to be known, and the result is returned immediately after the
request is sent successfully.

■ After the client sends a synchronization request, the procedure is over. To know the synchronization result,
query the execution of the synchronization task.


■ The synchronization list file sync.list should be saved under the path. The file should be in JSON format. A
reference example is as follows:

{
"sourcezart": "10.62.52.117:6000", // address and port number of the remote repository
"sourceregistry": "10.62.52.117:6666", // address and port number of the registry of the remote repository, used during image synchronization
"name": "sync1", // synchronization name
"overwrite": "yes", // whether the synchronized contents overwrite the local records
"versions": [
{"reponame": "cg", "name": "bp1", "version": "v1", "tag":"default", "model": "bp"},
{"reponame": "cg", "name": "bp1", "version": "v2", "tag":"default", "model": "bp"},
{"reponame": "cg", "name": "image1", "version": "v1", "model": "image"},
{"reponame": "cg", "name": "bin1", "version": "v1", "model": "bin"},
{"reponame": "cg", "name": "com1", "version": "v1", "model": "com"}
]
}

■ sourcezart: address and port number of the remote repository.

■ sourceregistry: address and port number of the registry of the remote repository, used during image
synchronization. If there is no image in the versions list, this configuration can be omitted.

■ name: name of the synchronization task (unique).

■ overwrite: whether to overwrite the local versions that are to be synchronized.

■ versions: a list of versions of the objects to be synchronized. You can add or delete objects as required. The format
is fixed to {“reponame”: “project in the remote repository”, “name”: “version name”, “version”: “version
number”, “model”: “object type”}. For a blueprint, you also need to add “tag”: “blueprint tag”.

Example:
Compile a sync.list file by referring to the JSON-format sync.list file above, and save it to a directory, for example,
/home/sync/. The following example shows how to start synchronization.

./zartcli -o=sync -i=admin -p=/home/sync/
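The annotated listing above uses // comments for explanation only; a parser that expects strict JSON will reject them, so the safest sync.list is comment-free. The following sketch writes a minimal comment-free file and validates it locally with Python (a generic check, not a zartcli feature; the addresses and names are the example values from above):

```shell
# Write a minimal, comment-free sync.list and verify that it is valid JSON.
dir=$(mktemp -d)   # in practice, the directory passed to -p, e.g. /home/sync/
cat > "$dir/sync.list" <<'EOF'
{
  "sourcezart": "10.62.52.117:6000",
  "sourceregistry": "10.62.52.117:6666",
  "name": "sync1",
  "overwrite": "yes",
  "versions": [
    {"reponame": "cg", "name": "bp1", "version": "v1", "tag": "default", "model": "bp"},
    {"reponame": "cg", "name": "image1", "version": "v1", "model": "image"}
  ]
}
EOF
python3 -m json.tool "$dir/sync.list" > /dev/null && echo "sync.list: valid JSON"
```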

Querying the Synchronization Result

The following example shows how to query the execution results of all or specified synchronization tasks.

./zartcli -o=query -i=admin -Y


./zartcli -o=query -i=admin -n=sync1 -Y

■ The -Y parameter identifies the synchronization operation.

■ If the -n parameter is not specified, all history synchronization operation records are returned. If the -n parameter
is specified, the result of a specific synchronization operation is returned.

Deleting a Synchronization Task

Deletes all or specified synchronization tasks. (Note that the versions that have been synchronized are not deleted.)

./zartcli -o=delete -i=admin -Y


./zartcli -o=delete -i=admin -n=sync1 -Y

■ The -Y parameter identifies the synchronization operation.

■ If the -n parameter is not specified, all history synchronization operation records are deleted. If the -n parameter
is specified, the record of a specific synchronization operation is deleted.

Importing and Exporting a Version

The system provides the function of importing and exporting versions, which facilitates offline version transfer.

Exporting

Save a version list (export.list) under a specified directory (path), and then export the versions in the list to that
directory by using the export command. The path should be specified explicitly.

Note:
If zartcli is not used in the PaaS, configure /etc/default/docker before running the export command. The IP address of
the repository and the PORT of the registry are required. For example, add the insecure-registry options as follows:

DOCKER_OPTS="--insecure-registry=10.67.18.xxx:33777 --insecure-registry=193.168.4.5:6666 -H unix:///var/run/docker.sock -H 0.0.0.0:5555"

Obtain the IP address and port number. There are two cases:

1 For the repository inside the PaaS platform, run the pdm-cli node list command on the controller node to view
the IP address of the soft-repo node, which is used as the registry IP address, and PORT is 6666.

2 For the version server from which PaaS versions are released, use the IP address and PORT of its registry, for
example, 10.67.18.xxx:33377.

./zartcli -o=export -p=/path

Example of export.list:

{"reponame":"admin","name":"aerospike","model":"com","version":"3.7.5.1"},
{"reponame":"admin","name":"c0-ms","model":"bp","tag":"microservice","version":"v1.16.20.04.37529"},
{"reponame":"admin","name":"c0","model":"image","version":"v1.16.20.04.I74ba1f"},
{"reponame":"admin","name":"cf-base","model":"bin","version":"1.0.1"},
{"reponame":"admin","name":"cf-csm","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-common","model":"bin","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pcluster","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pdeploy","model":"com","version":"v1.16.20.04.37498"},


{"reponame":"admin","name":"cf-pdman","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pnode","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-srcli","model":"bin","version":"v1.16.20.04.35650"},
{"reponame":"admin","name":"cf-srepo","model":"com","version":"v1.16.20.04.35650"},
{"reponame":"admin","name":"cf-vnpm","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-zartcli","model":"bin","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cnimaster","model":"bin","version":"v1.16.20.04.37529"},
{"reponame":"admin","name":"cpp-centos-7.2.1511","model":"image","version":"v1.16.10.03.p01.de3d69f"},
{"reponame":"admin","name":"cradle_master","model":"com","version":"v1.16.20.04.039a8a1"}
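Note that export.list, as shown above, is a sequence of comma-terminated JSON objects rather than one JSON document. If you hand-edit such a file, each entry can be checked individually with a short helper (generic, not part of zartcli; the two entries here are illustrative):

```shell
# Check an export.list written in the entry-per-line format shown above:
# each line is a JSON object, usually followed by a trailing comma.
dir=$(mktemp -d)
cat > "$dir/export.list" <<'EOF'
{"reponame":"admin","name":"cf-base","model":"bin","version":"1.0.1"},
{"reponame":"admin","name":"image1","model":"image","version":"v1"}
EOF
bad=0
while IFS= read -r line; do
  [ -z "$line" ] && continue
  # Strip the trailing comma, then parse the remaining object as JSON.
  printf '%s' "${line%,}" | python3 -m json.tool > /dev/null || bad=$((bad+1))
done < "$dir/export.list"
echo "invalid entries: $bad"
```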

Importing

Import the versions that were exported by zartcli into the repository. The import command is in the following format,
where the path should be specified explicitly.

./zartcli -o=import -p=/path

When the versions are imported, zartcli creates the alreadimportedbackup directory under the path and moves the
imported versions to this directory.

Pushing

Creating a Push List File

The push list file is named distr.list, and its content is in JSON format. For example:

{
"swrcache": ["10.62.52.117:6000"], //address and port number of the remote repository
"name": "distr1", //push name
"versions": [
{"project":"projname1", "reponame": "cg", "name": "bp1", "version": "v1", "tag":"default", "model": "bp"},
// "reponame" and "model" are required; "project" is the destination project that the version is pushed to (default: admin).
{"project":"projname1", "reponame": "cg", "name": "bp1", "version":"v2", "tag":"default", "model": "bp"},
{"project":"projname1", "reponame": "cg", "name": "image1", "version":"v1", "model": "image"},
{"project":"projname1", "reponame": "cg", "name": "bin1", "version":"v1", "model": "bin"},
{"project":"projname1", "reponame": "cg", "name": "com1", "version": "v1", "model": "com"}
]
}
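As the comments note, reponame and model are required for every entry, and project defaults to admin. A comment-free distr.list can be checked for the required fields before pushing, for example with this generic helper script (not part of zartcli):

```shell
# Write a comment-free distr.list and check that every entry carries the
# required "reponame" and "model" fields before pushing.
dir=$(mktemp -d)
cat > "$dir/distr.list" <<'EOF'
{
  "swrcache": ["10.62.52.117:6000"],
  "name": "distr1",
  "versions": [
    {"project": "projname1", "reponame": "cg", "name": "bp1", "version": "v1", "tag": "default", "model": "bp"},
    {"reponame": "cg", "name": "image1", "version": "v1", "model": "image"}
  ]
}
EOF
missing=$(python3 -c '
import json, sys
entries = json.load(open(sys.argv[1]))["versions"]
print(sum(1 for v in entries if "reponame" not in v or "model" not in v))
' "$dir/distr.list")
echo "entries missing required fields: $missing"
```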

Performing the Push Operation

./zartcli -o=distr -i=admin -p=path -s=zartname

■ The -i parameter in the push operation can only be admin.

■ -m can be set to bp, image, bin and com.

■ When -s is omitted, -s=default is used.

■ The push list file (distr.list) in json format should be saved under the path.

Querying the Synchronization Result


./zartcli -o=query -i=admin -n=sync_multy_image -s=zartname -D // query the result of the push operation named sync_multy_image
./zartcli -o=query -i=admin -s=zartname -D // query all historical push records

■ -D is used to mark the push operation.

■ When -s is omitted, -s=default is used.

Example of the query result:

{
"Id": 35,
"Created_at": "2017-01-24T19:33:59+08:00",
"Updated_at": "2017-01-24T19:33:59+08:00",
"Name": "sync_multy_image", // push name
"Srcipport": "10.63.241.165:6000", // address and port number of the remote repository
"Srcregistry": "10.63.241.165:6666", // address and port number of the registry of the remote repository, used during image synchronization
"TotalTask": 2, // number of versions to be synchronized in the synchronization list
"DoneTask": 2, // number of versions that have been synchronized
"SameSkipVer": 0, // number of local versions skipped by the synchronization operation when overwrite=no
"Overwrite": "yes", // whether the synchronized contents overwrite the local records
"Status": "distr success", // synchronization operation status: in progress, success, or failure
"Remark": "", // records the failure cause if synchronization fails; left blank on success
"SyncFailVer": "" // records the model, reponame, name, and version if synchronization fails
}

Deleting a Push Record

./zartcli -o=delete -i=admin -n=sync_multy_image -s=zartname -D //delete the push record named sync_multy_image.
./zartcli -o=delete -i=admin -s=zartname -D //delete all the historical push operation records.

■ -D is used to mark the push operation.

■ When -s is omitted, -s=default is used.

■ The deletion operation only deletes the push records; it does not delete the versions pushed to the remote
repository. To delete a version, perform the deletion operation.

Garbage Collection of the Registry

./zartcli -o=mtngc -i=admin -m=quota

■ This feature is only applicable to the admin user.

■ After this command is executed, the untagged layers in the registry are actually deleted. In addition, the repository
and registry enter the maintenance state during the execution and are unavailable; upload and update operations
performed during this time will fail. Use this command with caution.

Platform Management CLI


■ Platform Management CLI

■ Syntax of Pdm-cli Commands

■ PaaS Deployment

■ PaaS Components

■ PaaS Cluster Management

■ Cluster

■ Enhanced Components of a Cluster

■ Enabling the Kata Function

■ Resource Pool

■ Volume

■ Port Range

■ SSH Keys of Nodes

■ Time Zone of the PaaS

■ External Clock Source of the PaaS

■ Immediately Synchronizing with the Clock Source

■ Adding an External Clock Source for the PaaS

■ Deleting an External Clock Source of the PaaS

■ Viewing External Clock Sources of the PaaS

■ Modifying an External Clock Source of the PaaS

■ Clock Source Whitelist of the PaaS

■ Setting the Timestamp Function for the Clock Synchronization Service

■ PaaS Date and Time

■ Restarting the PaaS

■ Health Check on Common Services

■ Graceful Shutdown of the PaaS

■ Commands for Hot Patches of the PaaS

■ Modifying the Hostname of a Node

■ Offline Updating the Component Version in the Local Software Repository

■ Modifying the Shelf/Blade Configuration

■ Apache Package Related Information

Platform Management CLI

Syntax of Pdm-cli Commands

pdm-cli <subcommand> …
Here, <subcommand> includes deploy and cluster. The subcommand is followed by parameters, which are given in sequence and separated by spaces. Parameters starting with "--" are optional; they can be omitted, and their positions may vary.


PaaS Deployment

Querying Deployment Tasks

pdm-cli task list [--all]

--all is an optional parameter. If --all is specified in the command, all deployment tasks in the environment are displayed. If
--all is not specified, some old historical tasks are not displayed. The historical tasks that are not displayed are as follows:

■ Deployment tasks on nodes that have been deleted are not displayed.

■ If components are deployed or upgraded on the same node multiple times, only the last three deployment or upgrade tasks are displayed; other tasks are not displayed.

Querying the Software Repository Information


pdm-cli registry list

PaaS Components

Updating Patches
pdm-cli patch <model> <reponame> <name> <version>
For example,
pdm-cli patch bin admin nwnode v1.0.1
Updating Components Locally
pdm-cli update <model> <reponame> <name> <version> <path> <role>
role: specified installation nodes, including: paas_controller, master, minion, and elk.
Log path: /paasdata/op-log/cf-pdeploy/pdeploy_ansible.log
For example,
Executable file component: pdm-cli update bin admin nwnode v1.0.1 /root/nwnode minion
Container component: pdm-cli update com admin utm v1.0.1 /root/utm paas_controller
Querying the Component Version
pdm-cli version <model> <reponame> <name>
Independently Deploying a Container Component
pdm-cli deploy_com <model> <reponame> <name> <version> <role>
role: specified installation nodes, including: paas_controller, master, minion, and elk.
Log path: /paasdata/op-log/cf-pdeploy/pdeploy_ansible.log
If the component is not in the local repository, obtain the version from the install_center (install_center is set in
/etc/pdm/conf/softcenter.json).
For example,
pdm-cli deploy_com com admin zenap_cos v1.17.20.02.245446 paas_controller
To use the local component version, add the <path> parameter in the command line as follows:
pdm-cli deploy_com <model> <reponame> <name> <version> <path> <role>
path: directory where the container component version is located
For example,
pdm-cli deploy_com com admin ndr v1.1.0 /home/ubuntu/ndr paas_controller


PaaS Cluster Management

Cluster

Querying All Clusters


pdm-cli cluster list
Querying Details of a Cluster
pdm-cli cluster show <uuid>

Note:
In the returned cluster information, the pict_eviction_pod under the cluster_config field records the eviction switch and
the thresholds of the parameters related to the eviction:

■ enabling: user-defined eviction switch

■ eviction_rate: maximum number of pods that are evicted at a time

■ target_thresholds_cpu: CPU usage threshold of the node that triggers the eviction

■ target_thresholds_memory: memory usage threshold of the node that triggers the eviction

■ target_thresholds_loadavg: load average threshold of the node that triggers the eviction
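A minimal sketch of how these thresholds might be evaluated. The field names follow the description above, but the numeric values and the checking logic are hypothetical illustrations, not TECS internals:

```python
# Hypothetical pict_eviction_pod settings; field names follow the
# description above, the numeric values are made up for illustration.
pict_eviction_pod = {
    "enabling": True,                 # user-defined eviction switch
    "eviction_rate": 2,               # max pods evicted at a time
    "target_thresholds_cpu": 80,      # CPU usage % triggering eviction
    "target_thresholds_memory": 85,   # memory usage % triggering eviction
    "target_thresholds_loadavg": 90,  # load-average threshold
}

def should_evict(cfg, cpu, mem, loadavg):
    """Return True if eviction is enabled and any metric exceeds its threshold."""
    if not cfg["enabling"]:
        return False
    return (cpu > cfg["target_thresholds_cpu"]
            or mem > cfg["target_thresholds_memory"]
            or loadavg > cfg["target_thresholds_loadavg"])

print(should_evict(pict_eviction_pod, cpu=90, mem=50, loadavg=10))  # True
print(should_evict(pict_eviction_pod, cpu=50, mem=50, loadavg=10))  # False
```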

Querying All Nodes of the Cluster


pdm-cli cluster node list
Creating a Cluster
pdm-cli cluster create <cluster_file>
For the contents of the cluster_file, refer to the /etc/pdm/conf/example/cluster.example
For example,
pdm-cli cluster create /etc/pdm/conf/example/cluster.example
Continuing Installation Upon Failure to Create a Cluster
pdm-cli cluster continue_deploy <uuid> <cluster_file>

Note:

■ This command is used in the scenario where the PaaS is deployed successfully but a cluster fails to be created, or a new cluster fails to be added.

■ The uuid parameter refers to the uuid of the cluster whose installation needs to be continued. The cluster_file parameter refers to the configuration path of the created cluster; it is optional, and if it is not specified, the original configuration of the cluster is used by default.

■ You can check the cluster deployment status by using the pdm-cli cluster list command. The command for continuing installation is valid only when the cluster deployment status is init-para-fail, applyfail, taskfail, deployfailed or labelfail.

■ The cluster_file parameter is valid only when the cluster deployment status is init-para-fail.

Deleting a Cluster
pdm-cli cluster delete <uuid>
Deleting a Cluster Node
pdm-cli cluster delete <uuid> node <node_uuid>


Expanding a Cluster
pdm-cli cluster scaleout <uuid> <scale_file>
For the contents of scale_file, refer to the /etc/pdm/conf/example/scale.example.
For example,
pdm-cli cluster scaleout <uuid> /etc/pdm/conf/example/scale.example
Adding a Label to the Cluster Node
pdm-cli label add <key=value> node <node_uuid>
Modifying a Cluster Node Label
pdm-cli label update <key=value> node <node_uuid>
Deleting a Cluster Node Label
pdm-cli label delete <key> node <node_uuid>
Displaying All Cluster Nodes Labeled with a Label
pdm-cli label list node <key=value>
Displaying All the Node Labels
pdm-cli label list
Setting the Default Value of a Label (after the command is executed successfully, you can run the “pdm-cli label list”
command to check whether the setting is successful)
pdm-cli label set default_operator <key> <DoesNotExist/DoNotCare>
Blocking a Node
pdm-cli cluster node unschedule <cluster_node_uuid>
Unblocking a Node
pdm-cli cluster node schedule <cluster_node_uuid>
Deleting a Node Pod
pdm-cli cluster node drain <uuid>
Modifying Reserved Cluster Resources
pdm-cli cluster update <cluster_uuid> reserved_res <config_file>
For the config_file contents, refer to the /etc/pdm/conf/example/cluster_config.example. This command supports the
modification of the following configuration: reserved_res_prf (resource reservation threshold).
Modifying Reserved Cluster Node Resources
pdm-cli cluster node update <node_uuid> reserved_res <cpu=,mem=>
The <node_uuid> can be queried by using the pdm-cli cluster node list command. The values after cpu= and mem= can be
negative numbers.
Batch Modifying Reserved Cluster Node Resources
pdm-cli cluster nodes update reserved_res <reserved_conf_file>
For the contents of the reserved_conf_file, refer to: /etc/pdm/conf/example/nodes_reserved_conf.example

Enhanced Components of a Cluster

Rules of Enhanced Components

■ Create an enhanced component deployment rule.

pdm-cli cluster add encomp_deploy_rule <encomp_deploy_rule file>


For the contents of the encomp_deploy_rule file, refer to the /etc/pdm/conf/example/create_encomp_deploy_rule.example. An example is as follows:

{
"encomp_deploy_rule":
{
"name": "rule1", # rule name
"cluster_uuids":[], # cluster uuid list, which can be empty
"label_selectors":["k1=v1"], # node label list, which can be empty
"encomps":["tipc"] # enhanced component list, which must not be empty
}
}

■ Update an enhanced component deployment rule.

pdm-cli cluster update encomp_deploy_rule <encomp_deploy_rule file>


For the contents of the encomp_deploy_rule file, refer to the
/etc/pdm/conf/example/update_encomp_deploy_rule.example. An example is as follows:

{
"encomp_deploy_rule":
{
"uuid": "1b423225-2574-47aa-92c5-0b9c18a2ac86", # rule uuid
"cluster_uuids":[], # cluster uuid list, which can be empty
"label_selectors":["k1=v1"], # node label list, which can be empty
"encomps":["gpu"] # enhanced component list, which must not be empty
}
}

■ Query all the enhanced component deployment rules.

pdm-cli cluster get encomp_deploy_rules


The returned values of an enhanced component rule are described as follows:

name: Name of the enhanced component rule

uuid: UUID of the enhanced component rule

cluster_uuids: UUID list of specified clusters. If the value is null, the rule is applicable to all clusters.

label_selectors: Label list of specified nodes, supporting three label operation types: =, notin and in, for example: k1=v1; k1 notin (v1); k1 in (v1, v2). If the value is null, the rule is applicable to all minion nodes.

encomps: List of specified enhanced components. It cannot be null, and indicates the enhanced components that the rule is applicable to.

■ Query the specified enhanced component deployment rule.

pdm-cli cluster show encomp_deploy_rule <encomp_deploy_rule_uuid>


■ Delete the specified enhanced component deployment rule.

pdm-cli cluster delete encomp_deploy_rule <encomp_deploy_rule_uuid>
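The three label_selectors operator types described above (=, in, notin) can be illustrated with a small evaluator. This is a sketch of the documented matching semantics, not the platform's actual selector code:

```python
import re

def matches(selector: str, labels: dict) -> bool:
    """Evaluate one label selector of the three documented forms:
    'k=v', 'k in (v1, v2)', 'k notin (v1)'."""
    m = re.fullmatch(r"(\w+)\s*=\s*(\w+)", selector)
    if m:
        return labels.get(m.group(1)) == m.group(2)
    m = re.fullmatch(r"(\w+)\s+(in|notin)\s+\(([^)]*)\)", selector)
    if m:
        key, op, values = m.group(1), m.group(2), m.group(3)
        vals = {v.strip() for v in values.split(",")}
        present = labels.get(key) in vals
        return present if op == "in" else not present
    raise ValueError(f"unsupported selector: {selector}")

node_labels = {"k1": "v1", "zone": "a"}
print(matches("k1=v1", node_labels))           # True
print(matches("k1 in (v1, v2)", node_labels))  # True
print(matches("k1 notin (v1)", node_labels))   # False
```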

Enhanced Component Deployment


■ Deploy an enhanced component.

pdm-cli cluster deploy encomp <encomp file>


■ Upgrade an enhanced component.

pdm-cli cluster upgrade encomp <encomp file>


■ Roll back an enhanced component.

pdm-cli cluster rollback encomp <encomp file>

For the contents of the encomp file in the deployment, upgrade and rollback commands, refer to the
/etc/pdm/conf/example/encomp.example. An example is as follows:

{
"encomp":
{
"name": "tipc", # name of the enhanced component
"version":"v1.20.20.20.111111", # version number of the enhanced component
"model":"bin", # type of the enhanced component
"rules":["rule1"] # rule list specified when the enhanced component is deployed
}
}

Note:

■ Before deploying, upgrading, or rolling back enhanced components, if there are no deployment rules for enhanced components, you must create them first.

■ The target version of the enhanced component must be specified for deployment and upgrade. It is not required for rollback.

■ An enhanced component is deployed, upgraded, and rolled back in accordance with the rules specified in the encomp.example file as required. If no rule is specified, the rules applicable to the component are automatically selected from the created rules.

■ Query all enhanced components.

pdm-cli cluster get encomps


Some returned values of the enhanced component are described as follows:

name: Name of the enhanced component

version: Current version of the enhanced component running on the node corresponding to node_uuid

src_version: Previous version of the enhanced component that ran on the node corresponding to node_uuid

deploy_state: Deployment status of the enhanced component

node_uuid: Node where the enhanced component is deployed

operation: Last operation on the enhanced component. The value can be deploy, upgrade or rollback.


Enabling the Kata Function

Enabling the Kata Function of the Minion Node


pdm-cli cluster enable kata <cluster_node_uuid>
Disabling the Kata Function of the Minion Node
pdm-cli cluster disable kata <cluster_node_uuid>

Resource Pool

Querying All Resource Pools


pdm-cli nodepool list
Querying the Details of a Resource Pool
pdm-cli nodepool show <nodepool_id>
Querying the Nodes in a Resource Pool
pdm-cli nodepool node list <nodepool_id>
Creating a Resource Pool
pdm-cli nodepool create <nodepool_file>
For the contents of the nodepool_file, refer to the /etc/pdm/conf/example/nodepool.example.
For example,
pdm-cli nodepool create /etc/pdm/conf/example/nodepool.example
Deleting a Resource Pool
pdm-cli nodepool delete <nodepool_id>
Adding or Deleting a node_identity to/from the Specified Nodepool (only applicable to bare metal scenarios)
pdm-cli nodepool add_node_identity <nodepool_id> <node_identity>
pdm-cli nodepool delete_node_identity <nodepool_id> <node_identity>
Querying All Nodes of the PaaS
pdm-cli node list
Viewing the Details of a Node of the PaaS
pdm-cli node show <uuid>
Deleting a Node (not in Use) of the PaaS
pdm-cli node delete <uuid>
Registering Volumes Manually Created on a Node

pdm-cli node volume_register <node_id> <volume_type> <backend> --lun-id=xxx/--mountpoint=xxx

Note:

■ Before executing the registration command, make sure that the backend corresponding to the local /etc/pdm/conf/blockstorage/cinder_conf/cinder.conf configuration contains storwize_svc_vol_rsize=-1.

■ volume_type is the storage pool type at the back end.

■ backend is the name of the back-end storage pool.

■ --lun-id indicates the ID of the LUN to be registered. The LUN ID is the unique ID of the volume, which can be obtained from the disk array interface. The IDs of different disk arrays may be named in different ways, for example, uuid.

■ --mountpoint indicates the mount path. The system automatically searches for all LUNs in the mount path, filters out the LUNs that have already been registered, and then registers the remaining LUNs.

■ Select either --lun-id or --mountpoint.

■ You can check whether the volume is registered successfully by using the pdm-cli node volume_show <node_id> command.

Querying the Volumes Mounted to a Node


pdm-cli node volume_show <node_id>
Querying the Disks of a Node
pdm-cli node disk_show <node_id>
Recovering paas_controller in an HA Error Scenario
pdm-cli recover <oes/nfv/cloudt/itran>

Volume

Creating a Volume
pdm-cli volume create <volume_file>
Querying a Volume
pdm-cli volume show <volume_uuid>
Deleting a Volume
pdm-cli volume delete <volume_uuid>

Port Range

The commands related to the port range can be executed before or after the deployment of the PaaS.
When you run a command to modify the port range, the security rules of the firewall are automatically updated.
Querying All Port Ranges
pdm-cli port_range list
An example of the execution result is as follows:

+-----------------------+---------------------------------------------------------------------------+
| range_name | value |
+-----------------------+---------------------------------------------------------------------------+
| plat_com_range | 53-53, 69-69, 80-80, 112-112, 443-443, 1022-4499, 5000-28000, 31942-31999 |
| common_services_range | 4500-4608, 4610-4999, 29951-29953 |
| ftp_data_range | 29900-29950 |
| public_services_range | 28001-29800 |
| config_services_range | |
+-----------------------+---------------------------------------------------------------------------+

■ plat_com_range: port range of platform components

■ common_services_range: port range of common services

■ ftp_data_range: FTP data port range

■ public_services_range: public port range of applications

■ config_services_range: reserved port range of applications


Querying the Specified Port Range


pdm-cli port_range show <port_range_type>
port_range_type: port range type. Options:

■ plat_com: port range of platform components

■ com_srv: port range of common services

■ ftp_data: FTP data port range

■ app_public: public port range of applications

■ app_config: reserved port range of applications

Adding a Port Segment to the Port Range of Platform Components


pdm-cli port_range add plat_com <port1-port2>[,…]
port1-port2: Multiple port segments are supported, separated by commas. If a segment list contains a space, it must be enclosed in double quotation marks, for example: 1) 1-10; 2) 1-10,20-30; 3) "1-10, 20-30".
Example:

pdm-cli port_range add plat_com 1-10,20-30


pdm-cli port_range add plat_com "1-10, 20-30"

Requirement: The new port segment must not conflict with the port range of other types.
If the system prompts that the added port segment conflicts with other port ranges, you can execute a command to delete
the corresponding port segment from the port ranges.
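The port-segment format and the no-conflict requirement above can be sketched as follows. The parsing and overlap check are illustrative only, not the pdm-cli implementation; the sample ftp_data range is taken from the example table above:

```python
def parse_segments(spec: str):
    """Parse a port-segment string such as '1-10,20-30' (spaces allowed
    after commas, as in "1-10, 20-30") into (start, end) tuples."""
    segments = []
    for part in spec.split(","):
        start, end = (int(x) for x in part.strip().split("-"))
        if start > end:
            raise ValueError(f"invalid segment: {part}")
        segments.append((start, end))
    return segments

def conflicts(new_spec: str, existing: list) -> bool:
    """True if any new segment overlaps any existing (start, end) segment."""
    for ns, ne in parse_segments(new_spec):
        for es, ee in existing:
            if ns <= ee and es <= ne:  # standard interval-overlap test
                return True
    return False

# Existing ftp_data range from the sample table above.
ftp_data = [(29900, 29950)]
print(conflicts("1-10, 20-30", ftp_data))  # False
print(conflicts("29940-29960", ftp_data))  # True
```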
Deleting a Port Segment from the Port Range of Platform Components
pdm-cli port_range delete plat_com <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirements:

■ Before the PaaS deployment, the port segment to be deleted must not include the ports reserved for platform components (defined in /etc/pdm/conf/paas.conf).

■ After the PaaS deployment, the port segment to be deleted must not contain any platform component ports (defined in /root/common/port_vars.yml).

Adding a Port Segment to the Port Range of Common Services


pdm-cli port_range add com_srv <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirement: The new port segment must not conflict with the port range of other types.
If the system prompts that the added port segment conflicts with other port ranges, you can execute a command to delete
the corresponding port segment from the port ranges.
Deleting a Port Segment from the Port Range of Common Services
pdm-cli port_range delete com_srv <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirements:

■ Before the PaaS deployment, there must be no intersection between the port segment to be deleted and the current common service port range; otherwise, a failure is returned. (The common service port segment cannot be deleted before the PaaS is deployed.)

■ After the PaaS deployment, the port segment to be deleted must not include the ports that have been allocated to common services.

Deleting a Port Segment from the FTP Data Port Range


pdm-cli port_range delete ftp_data <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirement: There is no intersection between the port segment to be deleted and the current FTP data port range.
Otherwise, a failure will be returned. (At present, it is not allowed to delete an FTP data port segment.)
Adding a Port Segment to the Public Port Range of Applications
pdm-cli port_range add app_public <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirement: The new port segment must not conflict with the port range of other types.
If the system prompts that the added port segment conflicts with other port ranges, you can execute a command to delete
the corresponding port segment from the port ranges.
Deleting a Port Segment from the Public Port Range of Applications
pdm-cli port_range delete app_public <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirements:

■ Before the PaaS deployment, there is no requirement.

■ After the PaaS deployment, the port segment to be deleted must not include the ports that are in use.

Adding a Port Segment to the Reserved Port Range of Applications


pdm-cli port_range add app_config <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.
Requirement: The new port segment must not conflict with the port range of other types.
If the system prompts that the added port segment conflicts with other port ranges, you can execute a command to delete
the corresponding port segment from the port ranges.
If the reserved port range contains port numbers that conflict with system services or platform components and are allocated to applications, including 22 (sshd), 67 (dhcpd), 123 (ntpd), 547 (dhcpd), and 49602, 49603, 49604 and 49605 (platform component ports), the applications can only be deployed in the SLB scenario, and the protocol used to publish the services must be TCP or UDP, not HTTP, HTTPS or other protocols. The applications can only be accessed through the SLB address.
For a list of the ports that can be used by applications and conflict with system services or platform components, you can
view the value of the zenap_msb_router_disable_listen_ports variable in the /root/common/com_vars.yml, for example, -
{zenap_msb_router_disable_listen_ports: ‘22,67,123,547,49602,49603,49604,49605’}.
In this case, the related ports exist in both the port range of platform components and the reserved ports range of
applications, and they are not regarded as conflicting with each other.
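A sketch of the constraint described above. The port list is the one quoted from com_vars.yml; the helper function and its return strings are hypothetical, added only to make the rule concrete:

```python
# Port list quoted from the com_vars.yml example in the text above.
zenap_msb_router_disable_listen_ports = "22,67,123,547,49602,49603,49604,49605"

DISABLED = {int(p) for p in zenap_msb_router_disable_listen_ports.split(",")}

def publish_constraints(port: int) -> str:
    """Return the publishing constraint for an application port,
    per the rule described above (hypothetical helper)."""
    if port in DISABLED:
        return "SLB only, TCP/UDP only"
    return "no extra constraint"

print(publish_constraints(123))   # SLB only, TCP/UDP only
print(publish_constraints(8080))  # no extra constraint
```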
Deleting a Port Segment from the Reserved Port Range of Applications
pdm-cli port_range delete app_config <port1-port2>[,…]
port1-port2: port segment. For the format, refer to the Adding a Port Segment to the Port Range of Platform Components.


Requirements:

■ Before the PaaS deployment, there is no requirement.

■ After the PaaS deployment, the port segment to be deleted must not include the ports that are reserved for applications.

Changing a Port Number


pdm-cli port update_port --<port_name>=<port_value> [...]
port_name is the name of the port variable, and port_value is the new port value. Multiple ports can be specified in one command. Currently, this command is only applicable to two port variables: zenap_msb_router_port and zenap_msb_router_https_port.
For example, pdm-cli port update_port --zenap_msb_router_port=123
Opening or Closing a Port
At present, there are two ways to open or close a port:
Method 1: Before the deployment, change the [open_or_close_ports] section in the /etc/pdm/conf/paas.conf file. Add the
port name and whether the port is open or closed at the end of the section. If you want to close the zenap_cos_port, add
zenap_cos_port=close. If you want to open the zenap_cos_port, add zenap_cos_port=open.
Method 2: Use the pdm-cli command. This method is recommended. The command is as follows:
pdm-cli port update <port_name> open/close
<port_name> is the name of the port variable; specify open or close as the action.
For example, pdm-cli port update zenap_cos_port close

SSH Keys of Nodes

Updating SSH Keys of Nodes

pdm-cli sshkey update

After this command is executed, the SSH keys for logging in to all nodes will be replaced with the automatically generated
keys.

Note:
After the SSH keys are replaced, if the PaaS is uninstalled and then redeployed, the keys on all nodes will be restored to
the initial default keys. For the sake of security, it is necessary to run this command again to update the SSH keys.

Time Zone of the PaaS

Different countries and regions lie at different longitudes and therefore observe different local times, so the world is divided into time zones. When a product is delivered to the site, assembled, and powered on, the time zone needs to be changed to the local time zone.

Note:

■ The time zone is adjusted after the environment deployment and before the formal use.

■ After the time zone is modified, the data in a period of time may be incorrect.

■ The environment must be checked before the time zone adjustment. For details, refer to Environment Check.

■ Do not run this command without permission!


Adjusting the Time Zone

Attention:
Modifying the time zone may affect the local time of the system and the operation of components. Do not modify the
time zone without permission.

pdm-cli timezone set <timezone>

<timezone> represents the time zone parameter. To obtain a valid time zone parameter, run the Linux command tzselect on any node of the PaaS and select the corresponding area by following the wizard steps. The wizard then generates the time zone of the corresponding area, for example, TZ='Asia/Shanghai'. Here, Asia/Shanghai is the generated time zone, which can be used as the <timezone> parameter.
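Before passing a value to pdm-cli timezone set, the candidate string can be validated locally. This sketch uses Python's standard zoneinfo module (Python 3.9+, with a system time zone database available); it is a convenience check, not part of pdm-cli:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_timezone(tz: str) -> bool:
    """Check a candidate <timezone> value (e.g. produced by tzselect)
    against the local tz database before using it."""
    try:
        ZoneInfo(tz)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

print(is_valid_timezone("Asia/Shanghai"))  # True
print(is_valid_timezone("Asia/Nowhere"))   # False
```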
After this command is executed, the system first checks whether all nodes are reachable. If all nodes are reachable, this
command is executed on each node. If a node is unreachable, the system terminates the execution of this command, and
the time zone of each node is not changed.
To make the time zone change take effect, run the nohup pdm-cli reboot & command to restart all PaaS nodes. nohup means that the command runs in the background. In the directory where the restart command was executed, run the tail -f nohup.out command to view the screen output.
Querying the Time Zone

pdm-cli timezone get

After the above command is executed, the system will display the current time zone configured for the PaaS system and
the time zone actually used by each node.
Command Execution Result Example:

collect nodes time zone info ...


paas controller node configuration time zone is:
paas controller node 192.168.2.189 configuration time zone: Indian/Mauritius
paas controller node 192.168.2.188 configuration time zone: Indian/Mauritius
paas controller node 192.168.2.187 configuration time zone: Indian/Mauritius
each node actual timezone is:
node 192.168.2.189 actual time zone: Indian/Mauritius
node 192.168.2.188 actual time zone: Indian/Mauritius
node 192.168.2.187 actual time zone: Indian/Mauritius
node 192.168.2.193 actual time zone: Indian/Mauritius
node 192.168.2.194 actual time zone: Indian/Mauritius
node 192.168.2.195 actual time zone: Indian/Mauritius
node 192.168.2.197 actual time zone: Indian/Mauritius

External Clock Source of the PaaS

Before deploying the PaaS, multiple external clock sources can be configured in the configuration file.
In all nodes of the PaaS, the clock source of the controller node points to an external clock source, and the clock sources
of the non-controller nodes point to the controller node. Therefore, to modify the external clock source of the PaaS is to
modify the clock source of the controller node.


The PaaS clock source can be synchronized slowly or immediately:

■ Slow synchronization: the clock sources are aligned through the NTP mechanism without human intervention. During the slow synchronization, the controller node of the PaaS synchronizes with the external clock source slowly until they are the same, and then the other nodes of the PaaS synchronize with the controller node until they are the same. Advantage: the PaaS time does not hop, which has little impact on the system. Disadvantage: the synchronization process takes a long time. If the time difference between the PaaS system and the external clock source is one minute, it takes about ten minutes for the PaaS system to complete the synchronization with the external clock source.

■ Immediate synchronization: run a command manually to trigger the time to be aligned with the external clock source. Operation method: first, immediately synchronize the time of the PaaS with the specified clock source (see Immediately Synchronizing with the Clock Source), and then restart the PaaS system (see Restarting the PaaS). Advantage: time synchronization can be completed quickly. Disadvantage: time hopping has a great impact on the system. After the synchronization, you need to run a command to restart the PaaS system.

Note:
If the time difference is less than ten minutes and the chrony service does not exit, select slow synchronization. If the time difference is so large that the chrony service exits, select immediate synchronization.

Note:
By default, an external clock source of the PaaS can only be an IP address. If you want to set a domain name as an
external clock source, perform the following steps:

1 Before the deployment of PaaS, ping the domain name of an external clock source to obtain its IP address, and
then fill the IP address in the ntp_server field in the /etc/pdm/conf/paas.conf file. If there are multiple clock sources,
separate them with commas.

2 After the deployment of PaaS, enable the msb to interconnect with an external DNS server. For the configuration method, refer to Setting Upstream DNS.

3 On the controller node, run the pdm-cli ntpserver replace <old ntpserver> with <new ntpserver> command to
replace the IP address of the clock source with the domain name. <old ntpserver> is the IP address and <new
ntpserver> is the corresponding domain name.

The steps of adding/modifying an external clock source of the PaaS are as follows:

1 View the time difference between the PaaS system and the external clock source.

ntpdate -q <ntp_server>, where <ntp_server> indicates the address of the new clock source, for example, ntpdate -q 10.30.1.105. An example of the query result is as follows:

server 10.30.1.105, stratum 1, offset 5.545793, delay 0.10478
4 May 08:58:22 ntpdate[10220]: step time server 10.30.1.105 offset 5.545793 sec


The returned offset value (unit: s) is the time difference between the PaaS system and the clock source.

2 Perform the corresponding operation in accordance with the time difference.

■ If the time difference is less than ten minutes: synchronize the clock by adding/modifying a clock source and using the slow synchronization method. Skip steps 3 and 4 and directly add or modify the clock source. The PaaS then relies on the slow synchronization mechanism of NTP itself to gradually align the time with the new clock source. If the time difference is one minute, the synchronization takes about ten minutes: a long time, but with little impact on the system.

■ If the time difference is more than ten minutes: synchronize the clock by using the immediate synchronization method and then adding/modifying a clock source. Go to step 3. The PaaS synchronizes with the clock source immediately, but you need to restart the PaaS. In this case, the services are interrupted, and it takes about ten minutes to recover.

3 Run the pdm-cli ntpdate <ntp_server> command to make the PaaS synchronize time with the specified clock
source immediately. For details, refer to Immediately Synchronizing with the Clock Source.

4 Restart the PaaS. For details, refer to Restarting the PaaS.

5 Add or modify a clock source. For details, refer to Adding an External Clock Source for the PaaS or Modifying an
External Clock Source of the PaaS.
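The decision rule above can be sketched as a small helper that reads the offset reported by ntpdate -q and picks a synchronization method. The parsing and the helper itself are illustrative assumptions, not part of pdm-cli:

```python
import re

# Example ntpdate -q output line, as shown in step 1 above.
ntpdate_line = "server 10.30.1.105, stratum 1, offset 5.545793, delay 0.10478"

def choose_sync_method(output: str, threshold_s: float = 600.0) -> str:
    """Pick slow vs. immediate synchronization from the reported offset,
    applying the ten-minute rule described above."""
    m = re.search(r"offset\s+(-?\d+(?:\.\d+)?)", output)
    if not m:
        raise ValueError("no offset found in ntpdate output")
    offset = abs(float(m.group(1)))
    return "immediate" if offset > threshold_s else "slow"

print(choose_sync_method(ntpdate_line))  # slow (offset is about 5.5 s)
```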

Attention:
If the time difference exceeds one minute and you do not perform immediate synchronization but modify the external clock source directly, the PaaS NTP service may exit and the time cannot be aligned.

Immediately Synchronizing with the Clock Source

If there is a big time difference between a PaaS node and the new clock source, it may take a long time for
synchronization, or the NTP service exits and automatic time synchronization fails. To immediately synchronize the time
with the NTP clock source, run the following command to synchronize the time of all PaaS nodes with the clock source.
First, ensure that no other external clock source is configured in the PaaS. Otherwise, the NTP service of the controller
node may exit.

pdm-cli ntpserver delete <old ntpserver>

The value of <old ntpserver> is the external clock source configured in the system. Delete all the external clock sources
one by one. Then, immediately synchronize the time with the specified external clock source.

pdm-cli ntpdate <ntp_server>

<ntp_server> indicates the address of the new clock source, for example, pdm-cli ntpdate 10.30.1.105.
After the above command is executed, the system synchronizes the date and time of each node in the PaaS system with
the new clock source.

Attention:


After executing the above command, restart the PaaS system. For details, refer to Restarting the PaaS.

Adding an External Clock Source for the PaaS

pdm-cli ntpserver add <new ntpserver>

<new ntpserver> is the IP address of the new clock source.

Note:

1 The PaaS supports a maximum of 10 external NTP servers. If the PaaS is not configured with an external clock source, the first
NTP server to be added is regarded as the master NTP server. Ensure that it is connected with the network of the
PaaS controller nodes and can provide the time synchronization service.

2 If the clock source configurations on the PaaS controller nodes are different, you need to delete the different
configurations first and then add the clock source configuration.

Deleting an External Clock Source of the PaaS

pdm-cli ntpserver delete <old ntpserver>

<old ntpserver> is the IP address of the existing clock source in the /etc/ntp.conf or /etc/pdm/conf/paas.conf file.

Note:

■ If you manually add a clock source after upgrading an old version to V1.19.40.06 or later, you need to delete the manually added clock source with a command before rolling back the PaaS version.

■ If you deploy a version later than V1.19.40.06 and upgrade it to a higher version, and then manually add an external clock source, you do not need to delete the clock source when rolling back the version.

■ In the NTP configuration in the /etc/pdm/conf/paas.conf file, the first NTP server on the right of the equal sign is regarded as the master NTP server. In a deployed PaaS environment, you are not allowed to use this command to delete the master NTP server, but you can use the pdm-cli ntpserver replace command to replace it. To delete the master NTP server configuration, you need to manually delete the NTP server configuration in the paas.conf file and the line containing the NTP server configuration in the /etc/chrony.conf file. If there are multiple PaaS controller nodes, the configuration on all of them needs to be deleted; if other nodes do not have this configuration, you do not need to delete it from them. If the PaaS deployment fails and you are going to deploy the PaaS again, you can modify the configurations in the paas.conf file before re-deployment; you do not need to delete the previous NTP configurations.
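The manual cleanup described above involves removing the matching server line from /etc/chrony.conf on every controller node. The sketch below demonstrates that edit on a throwaway copy rather than the live file; the file contents and IP addresses are hypothetical, and in a real system the same edit must be repeated on each controller node.

```shell
#!/bin/sh
# Demonstrate removing one NTP server line from a copy of chrony.conf.
remove_ntp_server() {
    # $1 = config file, $2 = server IP whose "server <ip> ..." line is removed
    tmp=$(mktemp)
    grep -v "^server $2 " "$1" > "$tmp" && mv "$tmp" "$1"
}

conf=$(mktemp)
cat > "$conf" <<'EOF'
driftfile /var/lib/chrony/drift
server 192.168.1.35 iburst
server 192.168.1.34 iburst
EOF

remove_ntp_server "$conf" 192.168.1.34
cat "$conf"      # the 192.168.1.34 line is gone
rm -f "$conf"
```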


Viewing External Clock Sources of the PaaS

pdm-cli ntpserver get

After the above command is executed, the system will display the external clock source information configured for the
PaaS.
Command Execution Result Example:
If an external clock source is configured properly, the query result is as follows:

collect nodes ntp server info ...


PaaS ntp server configuration is: 10.30.1.105

If no external clock source is configured, the following information is displayed:

collect nodes ntp server info ...


No external ntp server configured for PaaS

If the NTP address configuration in the NTP service configuration file is different from that in the PaaS configuration file,
the configuration information on each node is printed, as shown below:

collect nodes ntp server info ...


WARNING: the ntp server configuration is different in /etc/pdm/conf/paas.conf and /etc/ntp.conf.
The ntp server in ntp.conf is:
In paas-controller 192.168.1.6, ntp server is: 192.168.1.35
In paas-controller 192.168.1.60, ntp server is: 192.168.1.35
In paas-controller 192.168.1.59, ntp server is: 192.168.1.35
The ntp server in paas.conf is:
In paas-controller 192.168.1.6, ntp server is: 192.168.1.35
In paas-controller 192.168.1.60, ntp server is: 192.168.1.35
In paas-controller 192.168.1.59, ntp server is: 192.168.1.35 192.168.1.34
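Inconsistent nodes in output like the example above can be spotted by extracting the per-node server lists. The awk sketch below parses a captured copy of the command output (embedded here as sample text, since the live output depends on your environment) and prints each controller with its configured servers.

```shell
#!/bin/sh
# Print "node: servers" pairs from captured `pdm-cli ntpserver get` output.
parse_ntp_info() {
    awk '/In paas-controller/ {
        # line shape: In paas-controller <node-ip>, ntp server is: <ip> [<ip> ...]
        node = $3; sub(/,$/, "", node)
        servers = ""
        for (i = 7; i <= NF; i++)
            servers = servers (servers == "" ? "" : " ") $i
        print node ": " servers
    }'
}

parse_ntp_info <<'EOF'
In paas-controller 192.168.1.6, ntp server is: 192.168.1.35
In paas-controller 192.168.1.59, ntp server is: 192.168.1.35 192.168.1.34
EOF
```

A node whose server list differs from the others is the one whose configuration needs to be cleaned up before the clock source is re-added.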

Modifying an External Clock Source of the PaaS

pdm-cli ntpserver replace <old ntpserver> with <new ntpserver>

If an external clock source has been configured and needs to be modified, set the existing clock source as <old ntpserver> and set the new clock source as <new ntpserver>.

Note:
The <old ntpserver> and <new ntpserver> values can be set as IP addresses.

Attention:
Modifying an external clock source may cause an alarm. If an alarm is reported, handle it in accordance with the alarm
information.

Clock Source Whitelist of the PaaS


Querying the NTP Whitelist


Query the current NTP whitelist of the PaaS by using the following command:

pdm-cli chrony-white-list get

The query result is displayed as follows:

2020-09-26 12:26:23,678 - INFO - All controller nodes are reachable.


2020-09-26 12:26:25,058 - INFO - controller [172.20.0.3] chrony whitelist is: [10.230.147.134]
2020-09-26 12:26:25,058 - INFO - controller [172.20.0.4] chrony whitelist is: [10.230.147.134]
2020-09-26 12:26:25,058 - INFO - controller [172.20.0.7] chrony whitelist is: [10.230.147.134]

Adding an IP Address to the NTP Whitelist

1 Add an IP address to the NTP whitelist of the PaaS by using the following command:

pdm-cli chrony-white-list add <ip>

Command Execution Result Example:

[root@paas-controller-1:/home/ubuntu]$ pdm-cli chrony-white-list add 10.230.147.134


2020-09-26 12:26:15,228 - INFO - All controller nodes are reachable.
2020-09-26 12:26:18,535 - INFO - add chrony whitelist 10.230.147.134 successful.

2 Add an IP address segment to the NTP whitelist of the PaaS by using the following command:

pdm-cli chrony-white-list add <ip/subnet-mask>

Command Execution Result Example:

[root@paas-controller-1:/home/ubuntu]$ pdm-cli chrony-white-list add 10.230.147.0/24


2020-09-26 13:06:24,383 - INFO - All controller nodes are reachable.
2020-09-26 13:06:27,576 - INFO - add chrony whitelist 10.230.147.0/24 successful.

Deleting an IP Address from the NTP Whitelist

1 Delete an IP address from the NTP whitelist of the PaaS by using the following command:

pdm-cli chrony-white-list delete <ip>

Command Execution Result Example:

[root@paas-controller-1:/home/ubuntu]$ pdm-cli chrony-white-list delete 10.230.147.134


2020-09-26 12:53:02,489 - INFO - All controller nodes are reachable.
2020-09-26 12:53:05,784 - INFO - delete chrony whitelist 10.230.147.134 successful.

Deleting an IP Address Segment from the NTP Whitelist

pdm-cli chrony-white-list delete <ip/subnet-mask>

Command Execution Result Example:


[root@paas-controller-1:/home/ubuntu]$ pdm-cli chrony-white-list delete 10.230.147.0/24


2020-09-26 12:55:02,439 - INFO - All controller nodes are reachable.
2020-09-26 12:55:05,684 - INFO - delete chrony whitelist 10.230.147.0/24 successful.

Setting the Timestamp Function for the Clock Synchronization Service

Setting the Timestamp Function for the Clock Synchronization Service

pdm-cli disable-ntp-randomtx < 0 | 1 >

disable-ntp-randomtx can only be followed by the parameter "0" or "1". "1" indicates that the timestamp is disabled. "0" indicates that the timestamp is enabled.

PaaS Date and Time

After deploying the PaaS, you can modify the date and time of the PaaS system.
Depending on whether the PaaS is configured with an external clock server, perform the following operations as required.

If the PaaS is not configured with an external clock server:
Run the pdm-cli date set <date> <time> command to modify the time. For details, refer to Modifying the PaaS Date and Time.

If the PaaS has been configured with an external clock server, modify the time as follows:

a Delete the external clock server. For details, refer to Changing an External Clock Server.
b Run the pdm-cli date set <date> <time> command to modify the time. For details, refer to Modifying the PaaS Date and Time.
c Add an external clock server again. For details, refer to Changing an External Clock Server.

Note:

■ Modifying the time may affect the system stability. Be cautious about modifying the time.

■ If the PaaS keeps synchronizing with the external clock source, after the time is modified, the difference between the local time and the clock source may be large, causing the NTP service to exit. Pay attention to the alarms. If there are related alarms, follow the instructions.

■ The time modification operation may have an impact on the PaaS. For example, if the clock is changed to a future time point and then changed back, the performance data will be affected. Refer to the technical notice and perform the operation as required.

■ Run the following command to check the allowable time range of the PaaS system before modifying the time:

docker exec zenap_msb_router /bin/sh -c 'openssl x509 -in /opt/application/zenap-msb-apigateway/openresty/ssl/cert/cert.crt -noout -text | grep "Validity" -A 2'
■ After step c is executed, the alarm about NTP service exiting on the controller node may be raised.
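The certificate check above prints the Validity section of the MSB gateway certificate. The sketch below extracts the two boundary dates from a captured copy of that output (embedded as sample text with hypothetical dates), which makes them easy to compare against the time you intend to set.

```shell
#!/bin/sh
# Extract the Not Before / Not After dates from openssl's Validity section.
extract_validity() {
    sed -n 's/^ *Not Before: *//p; s/^ *Not After : *//p'
}

extract_validity <<'EOF'
        Validity
            Not Before: Mar 28 00:00:00 2019 GMT
            Not After : Mar 28 00:00:00 2039 GMT
EOF
```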


Modifying the PaaS Date and Time

pdm-cli date set <date> <time>

<date>, for example: 2019-3-28


<time>, for example: 10:05:21
If only the date of the PaaS needs to be modified, specify only the <date> value, for example, pdm-cli date set 2019-3-28.
If only the time of the PaaS needs to be modified, specify only the <time> value, for example, pdm-cli date set 10:05:21.
To modify both the date and time of the PaaS, specify both the <date> and <time> values, for example, pdm-cli date set 2019-3-28 10:05:21.

Attention:
After executing the above command, restart the PaaS system. For details, refer to Restarting the PaaS.
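The three invocation forms can be exercised with a tiny dry-run helper that only echoes the command it would run, which is useful for verifying the argument form before touching a live system. The helper is an illustration written for this manual, not part of pdm-cli.

```shell
#!/bin/sh
# Echo (do not run) the pdm-cli date set command for each invocation form.
build_date_cmd() {
    echo "pdm-cli date set $*"
}

build_date_cmd 2019-3-28            # date only
build_date_cmd 10:05:21             # time only
build_date_cmd 2019-3-28 10:05:21   # date and time
```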

Querying the Date and Time of the PaaS

pdm-cli date get

After the above command is executed, the system displays the date and time of each node in the PaaS system.
Command Execution Result Example:

collect nodes date info ...


each node date is:
node 192.168.2.142 date is: 2019-03-28 16:05:21
node 192.168.2.141 date is: 2019-03-28 16:05:21
node 192.168.2.146 date is: 2019-03-28 16:05:21
node 192.168.2.145 date is: 2019-03-28 16:05:21
node 192.168.2.144 date is: 2019-03-28 16:05:21

Restarting the PaaS

Command for Restarting the PaaS

nohup pdm-cli reboot <node_ips> &

<node_ips> indicates the IP address of the net_api network plane of a PaaS node.

■ To restart a single node of the PaaS, set <node_ips> to the IP address of the net_api network plane of the node, for example, nohup pdm-cli reboot 192.168.200.109 &, where nohup indicates that the command is executed in the background. In the directory where the reboot command is executed, run the tail -f nohup.out command to view the screen output.

Command Execution Result Example:

[root@paas-controller-192-168-200-103:/home/ubuntu]$ nohup pdm-cli reboot 192.168.200.109 &


[root@paas-controller-192-168-200-103:/home/ubuntu]$ tail -f nohup.out


2019-07-15 10:16:20,177 - INFO - getting paas nodes begin ...


{"rests": {"hosts": ["192.168.200.107", "192.168.200.106", "192.168.200.109", "192.168.200.108"], "vars": {"current_host":
"192.168.200.103"}},
"nodes": {"hosts": ["192.168.200.106", "192.168.200.104", "192.168.200.105", "192.168.200.103", "192.168.200.107",
"192.168.200.108", "192.168.200.109"],
"vars": {"current_host": "192.168.200.103"}}, "paas_controllers": {"hosts": ["192.168.200.104", "192.168.200.105",
"192.168.200.103"],
"vars": {"current_host": "192.168.200.103"}}}
2019-07-15 10:16:20,786 - INFO - getting paas nodes successfully!
2019-07-15 10:16:20,787 - INFO - checking reboot node ip begin ...
2019-07-15 10:16:24,800 - INFO - check reboot node ip successfully!
2019-07-15 10:16:24,801 - INFO - check if reboot node ip is paas node or not ...
2019-07-15 10:16:25,254 - INFO - reboot node ip is paas node, check OK!
2019-07-15 10:16:25,255 - INFO - stopping services begin, it will take at least 5 minutes, please wait ...

■ To restart all nodes of the PaaS system, do not specify the <node_ips> parameter, for example, nohup pdm-cli reboot &, where nohup indicates that the command is executed in the background. In the directory where the reboot command is executed, run the tail -f nohup.out command to view the screen output.

Command Execution Result Example:

[root@paas-controller-192-168-200-103:/home/ubuntu]$ nohup pdm-cli reboot &


[root@paas-controller-192-168-200-103:/home/ubuntu]$ tail -f nohup.out
2019-07-15 11:01:10,084 - INFO - getting paas nodes begin ...
{"rests": {"hosts": ["192.168.200.107", "192.168.200.106", "192.168.200.109", "192.168.200.108"], "vars": {"current_host":
"192.168.200.103"}},
"nodes": {"hosts": ["192.168.200.106", "192.168.200.104", "192.168.200.105", "192.168.200.103", "192.168.200.107",
"192.168.200.108", "192.168.200.109"],
"vars": {"current_host": "192.168.200.103"}}, "paas_controllers": {"hosts": ["192.168.200.104", "192.168.200.105",
"192.168.200.103"],
"vars": {"current_host": "192.168.200.103"}}}
2019-07-15 11:01:10,551 - INFO - getting paas nodes successfully!
2019-07-15 11:01:10,551 - INFO - check nodes reachable status ...
2019-07-15 11:01:23,960 - INFO - all nodes reachable status are OK!
2019-07-15 11:01:23,960 - INFO - stopping services begin, it will take at least 5 minutes, please wait ...

Note:
The above command is not applicable to some nodes.

Health Check on Common Services

Command for Health Check on Common Services

pdm-cli common_service health_check

After the above command is executed, the system displays whether the common services are available. If the status is
unavailable, an error is displayed.
Command Execution Result Example:


+-------------------------------+---------+---------+---------------------------------------------------------------------------+
| check_item | result | errcode | fail_info |
+-------------------------------+---------+---------+---------------------------------------------------------------------------+
| CHECK_AVA_COM_CSM | success | 0 | |
| CHECK_AVA_TENANT_OPCS | success | 0 | |
| CHECK_AVA_CMS_PostgreSQLCACHE | success | 0 | |
| CHECK_AVA_CMS_PostgreSQL | fail | 2001 | PostgreSQL_pg-vnpm check failed: deploy_status is deploy_ing,
please wait |
+-------------------------------+---------+---------+---------------------------------------------------------------------------+
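When automating around this check, failing rows can be filtered out of the table. The awk sketch below runs over a captured copy of the output (embedded as abridged sample text) and prints the name and error code of every check whose result is fail.

```shell
#!/bin/sh
# List failing items from a captured health-check table.
failed_checks() {
    awk -F'|' '/\|/ {
        gsub(/ /, "", $2); gsub(/ /, "", $3); gsub(/ /, "", $4)
        if ($3 == "fail") print $2 " errcode=" $4
    }'
}

failed_checks <<'EOF'
| check_item                    | result  | errcode | fail_info                   |
| CHECK_AVA_COM_CSM             | success | 0       |                             |
| CHECK_AVA_CMS_PostgreSQL      | fail    | 2001    | deploy_status is deploy_ing |
EOF
```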

Graceful Shutdown of the PaaS

Command for Graceful Shutdown of the PaaS

pdm-cli shutdown <node_ip> --service-only

<node_ip> indicates the IP address of the net_api network plane of the default network type (IPv4 or IPv6) of a PaaS node.

--service-only is optional (only applicable to TECS scenarios). When the --service-only parameter is used, only the services of the node are stopped, but the node is not shut down.

■ To shut down a single node of the PaaS system, set <node_ip> to the IP address of the net_api network plane of

the node, for example, pdm-cli shutdown 192.168.200.109.

Command Execution Result Example:

[root@paas-controller-192-168-200-103:/home/ubuntu]$ pdm-cli shutdown 192.168.200.109


2019-07-15 11:15:55,753 - INFO - checking shutdown node ip begin ...
2019-07-15 11:15:59,766 - INFO - check shutdown node ip successfully!
2019-07-15 11:15:59,766 - INFO - check if shutdown node ip is paas node or not ...
2019-07-15 11:16:00,202 - INFO - shutdown node ip is paas node, check OK!
2019-07-15 11:16:00,203 - INFO - shutdown nodes begin ...

When the --service-only parameter is used, only the services of the node are stopped, but the node is not shut down. For example, pdm-cli shutdown 192.168.200.109 --service-only.
Command Execution Result Example:

[root@paas-controller-192-168-200-103:/home/ubuntu]$ pdm-cli shutdown 192.168.200.109 --service-only


2020-03-17 13:15:13,333 - INFO - checking shutdown node ip begin ...
2020-03-17 13:15:17,344 - INFO - check shutdown node ip successfully!
2020-03-17 13:15:17,345 - INFO - check if shutdown node ip is paas node or not ...
2020-03-17 13:15:17,812 - INFO - shutdown node ip is paas node, check OK!
2020-03-17 13:15:17,812 - INFO - stop service on nodes begin ...
2020-03-17 13:17:32,230 - INFO - stop all service on nodes successfully!

■ To shut down all nodes of the PaaS system, do not specify the <node_ip> parameter, for example, pdm-cli

shutdown.

Command Execution Result Example:

[root@paas-controller-192-168-200-103:/home/ubuntu]$ pdm-cli shutdown


2019-07-15 13:51:15,667 - INFO - check nodes reachable status ...


2019-07-15 13:51:29,061 - INFO - all nodes reachable status are OK!
2019-07-15 13:51:29,062 - INFO - shutdown nodes begin ...

Note:
The above command does not support shutdown of some nodes. To shut down all nodes, you need to log in from the
local end and run the command. Otherwise, the command will fail or you cannot see the execution result due to the
floating IP address disconnection.

Commands for Hot Patches of the PaaS

Querying Hot Patches to be Installed

pdm-cli hotfix info

Querying Hot Patches Installed on All Nodes of the PaaS

pdm-cli hotfix list

Querying Hot Patches Installed on a Node of the PaaS

pdm-cli hotfix show <host_ip>

Installing Hot Patches


You can specify the node address or the hot patch name. If you specify neither, all hot patches are installed on all nodes.

pdm-cli hotfix install

Uninstalling Hot Patches


You can specify the node address or the hot patch name. If you specify neither, all hot patches are uninstalled from all nodes.

pdm-cli hotfix uninstall

Modifying the Hostname of a Node

Modifying the Hostname of a Node

pdm-cli hostname update <node_id> <hostname_value>

<node_id> can be the ID or uuid of the node.

Offline Updating the Component Version in the Local Software Repository

Updating a Blueprint (bp) Component

pdm-cli update_soft_package bp <reponame> <name> <version> <tag> <path>

Updating the Version (Non-Snap Format) of a Component of the Bin, Com, or Image Type


pdm-cli update_soft_package <model> <reponame> <name> <version> <path>

Parameter Description

<model> Component type: bin, com, image, bp

<reponame> Default user of the software repository, generally admin. For details, refer to the information in the
/etc/pdm/deploylist/pkg_ver.lig.

<name> Component name

<version> Component version number

<tag> Blueprint tag. If it is not specified, the default tag is marked by the software repository.

<path> Version path of the local software
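Putting the parameters together, the command line can be assembled with a dry-run helper that echoes rather than executes. The values shown are hypothetical; note that the <tag> argument applies only to the bp type and, per the bp command form above, is placed before the path.

```shell
#!/bin/sh
# Echo (do not run) the update_soft_package command line.
# $1=model $2=reponame $3=name $4=version $5=path [$6=tag, bp only]
build_update_cmd() {
    if [ "$1" = "bp" ]; then
        echo "pdm-cli update_soft_package bp $2 $3 $4 $6 $5"
    else
        echo "pdm-cli update_soft_package $1 $2 $3 $4 $5"
    fi
}

build_update_cmd image admin my-image 1.0.0 /opt/pkgs/my-image.tar
build_update_cmd bp admin my-bp 2.1.0 /opt/pkgs/my-bp.tgz stable
```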

Modifying the Shelf/Blade Configuration

Modifying the Shelf/Blade Configuration

pdm-cli node update_baremetal <baremetal_nodes_file_new>

Note:

■ This command is only applicable to bare metal scenarios.

■ Currently, only the following fields can be modified: managePassword, manageUser, snmpProtocolType,

manageIp, snmpV2Info, snmpV3Info


■ The configurations of different blades in the same shelf are written together. The number of blades configured in the slot must be the same as the actual number; that is, the configurations of the blades in the same shelf must be modified at the same time.

■ After the PaaS is rolled back, you must modify the hardware server or modify the configuration information to

ensure that the configuration information is consistent.


■ Fields that do not need to be modified should be kept unchanged.

Preparing the Configuration File

■ Copy the /etc/pdm/conf/baremetal_nodes.json file to any directory and rename it baremetal_nodes_update.json.

■ Specify the managePassword parameter (this field is required for verification, regardless of whether the password needs to be modified).

■ Change the original manageIp in the file to old_manageIp (this parameter must be modified; it serves as an identifier indicating that the file is used to modify the shelf/blade configuration information).

■ Add manageIp and configure a new management address. If the address is not changed, it can be the same as old_manageIp.

■ Modify other fields that can be modified.
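Before running the update command, the edited file can be sanity-checked for the two markers the steps above require: a managePassword value and the renamed old_manageIp key. The grep-based sketch below checks a throwaway sample file; it verifies key presence only, not full JSON validity, and the sample content is hypothetical.

```shell
#!/bin/sh
# Check that an edited baremetal_nodes_update.json carries the required keys.
check_update_file() {
    grep -q '"managePassword"' "$1" && grep -q '"old_manageIp"' "$1"
}

f=$(mktemp)
cat > "$f" <<'EOF'
[{"slot": ["8"], "managePassword": "XXXXXX",
  "manageIp": "192.168.3.100", "old_manageIp": "192.168.2.100"}]
EOF

if check_update_file "$f"; then
    echo "required keys present"
else
    echo "missing managePassword or old_manageIp"
fi
rm -f "$f"
```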


Example of Updating a Configuration Information Template

[{
"slot": ["8", "9", "10", "11"],
"managePassword": "XXXXXX", # It can be modified.
"deviceModel": "ZTE-E9000-xx",
"manageUser": "xxxxxxxx", # It can be modified.
"snmpProtocolType": "v2c", # It can be modified.
"manageIp": "192.168.3.100", # It can be modified.
"old_manageIp": "192.168.2.100", # origanl management IP address
"snmpV2Info": { # It can be modified.
"readCommunity": "public"
},
"snmpV3Info": { # It can be modified.
"auth_protocol": null,
"priv_password": "",
"priv_protocol": null,
"user": null,
"security_level": null,
"auth_password": ""}
}]

Example of the Verification Configuration Command

pdm-cli node update_baremetal baremetal_nodes_update.json

Command Execution Result Example

[root@paas-controller1:/home/pict]$ pdm-cli node update_baremetal baremetal_nodes_update.json


Begin to update_baremetal_nodes from file
update_baremetal_nodes called from file: baremetal_nodes_update.json
check_before_update_baremetal_nodes called
Update baremetal_nodes info SUCCESS, clean the password in baremetal_nodes_update.json

Apache Package Related Information

Querying Installation of the Apache Package


Run the following command to query installation of the apache package on the controller node:

pdm-cli apache inquire

The query result may be installed, not installed, or timeout; if an unknown error occurs, an error code is displayed. An example is as follows:

2021-01-19 21:43:07,649 - INFO - Get paas.conf scenario


+-----------------------+-----------------------+
| node ip | apache inquire result |
+-----------------------+-----------------------+
| 3ffe:ffff:0:f101::103 | installed |
| 3ffe:ffff:0:f101::104 | installed |
| 3ffe:ffff:0:f101::102 | installed |
+-----------------------+-----------------------+


Uninstalling Apache Packages on Nodes


Run the following command to uninstall apache packages on all nodes:

pdm-cli apache remove

If the operation is successful, a success result is returned. If the operation fails, a failure cause is displayed. An example is
as follows:

2021-01-19 22:10:08,250 - INFO - Get paas.conf scenario


Romove apache packages successfully.

Installing Apache Packages on Nodes


Run the following command to install the apache package on the controller node:

pdm-cli apache install

If the operation is successful, a success result is returned. If the operation fails, a failure cause is displayed. An example is
as follows:

2021-01-19 22:11:08,250 - INFO - Get paas.conf scenario


Install apache packages successfully.

Node Resources Management CLI


■ Node Resources Management CLI

■ Overview of the Cnrm-cli Tool

■ Using Help Commands

■ Configuration Sub-commands

■ Query Sub-commands

■ Modification Sub-commands

■ Status Sub-commands

■ Status Query Sub-commands

Node Resources Management CLI

Overview of the Cnrm-cli Tool

The cnrm-cli tool provides the node resource configuration and status management functions. Currently, the following
node resources are supported:

■ Exclusive core

■ Huge Page


Currently, the following functions are supported:

■ Querying the configuration information of the exclusive cores and huge pages of all nodes in the cluster.

■ Querying the configuration information of the exclusive cores and huge pages of the specified node in the cluster.

■ Querying the exclusive core configuration of the specified node in the cluster.

■ Querying the huge page configuration of the specified node in the cluster.

■ Saving the configuration file of the exclusive cores of the specified node in the cluster.

■ Saving the configuration file of the huge pages of the specified node in the cluster.

■ Modifying the configuration of the exclusive cores of the specified node in the cluster.

■ Modifying the configuration of the huge pages of the specified node in the cluster.

■ Modifying the exclusive core list configuration of the specified node in the cluster in accordance with the

configuration file.
■ Modifying the huge page configuration of the specified node in the cluster in accordance with the configuration

file.
■ Querying the status of the exclusive cores and huge pages of all nodes in the cluster.

■ Querying the status of the exclusive cores and huge pages of the specified node in the cluster.

■ Querying the status of the exclusive cores of the specified node in the cluster.

■ Querying the status of the huge pages of the specified node in the cluster.

Note:

■ The configuration and status of huge pages of a node queried by the cnrm-cli tool are the sum of the huge pages of all NUMA nodes on the node.

■ There are two ways to set huge pages with the cnrm-cli tool: one is to configure the same number of huge pages for each NUMA node, and the other is to configure a different number of huge pages for each NUMA node.

Using Help Commands

The cnrm-cli tool uses a tree command structure. You can get help through the help command or the -h option. The parameters of cnrm-cli consist of three parts: resources, operations, and global filters.

cnrm-cli -h
NAME:
cnrm-cli - <subcommand> ...
USAGE:
cnrm-cli [global options] command [command options] [arguments...]
VERSION:
v1
AUTHOR:
nw <[email protected]>
COMMANDS:
config, c <subcommand> ...
state, s <subcommand> ...
help, h Shows a list of commands or help for one command


GLOBAL OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help
--version, -v print the version

Configuration Sub-commands

The configuration sub-commands are used to query and modify configurations.

cnrm-cli config -h
NAME:
cnrm-cli config - <subcommand> ...
USAGE:
cnrm-cli config command [command options] [arguments...]
COMMANDS:
get get config
get_to_file store cpu or hugepage config to file /etc/cnrm-cli/cpu_config_file.json or
/etc/cnrm-cli/hp_config_file.json
set set resource config for node
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help

Query Sub-commands

The query sub-commands can query all resource configurations of all nodes, or the specified resource configurations of a
specified node through a filter.

cnrm-cli config get -h


NAME:
cnrm-cli config get - get config
USAGE:
cnrm-cli config get [command options] [arguments...]
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)

cnrm-cli config get_to_file -h


NAME:
cnrm-cli config get_to_file - store config to config file
USAGE:
cnrm-cli config get_to_file [command options] [arguments...]
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)

Modification Sub-commands

The modification sub-commands must be used together with the filter to modify the configuration of the specified
resources on the specified node.


cnrm-cli config set -h


NAME:
cnrm-cli config set - set resource config for node
USAGE:
cnrm-cli config set command [command options] [arguments...]
COMMANDS:
exclusive_cpu_count set Exclusive CPU count
hugepage_2m_count set 2M huge page count,for per numa
hugepage_1g_count set 1G huge page count,for per numa
by_config_file set Exclusive CPU List or hugepage of every numa by config file
OPTIONS:
--node value, -n value node uuid
--help, -h show help

Status Sub-commands

The status sub-commands include only the query sub-commands of resource status.

cnrm-cli state -h
NAME:
cnrm-cli state - <subcommand> ...
USAGE:
cnrm-cli state command [command options] [arguments...]
COMMANDS:
get get resource state
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help

Status Query Sub-commands

The query sub-commands can query the status of all resources of all nodes, or the status of the specified resources of a specified node through a filter.

cnrm-cli state get -h


NAME:
cnrm-cli state get - get resource state
USAGE:
cnrm-cli state get [command options] [arguments...]
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)

Firewall Rule Management CLI


■ Firewall Rule Management CLI

■ Overview of the Inetrules-cli Tool

■ Using Help Commands


■ Querying Firewall Rules

■ Enabling Firewall Rules

■ Disabling Firewall Rules

■ Adding Firewall Rules

■ Deleting a Firewall Rule

Firewall Rule Management CLI

Overview of the Inetrules-cli Tool

The inetrules-cli tool provides the functions of configuring firewall rules for nodes, querying firewall rules, and managing
the status.
Currently, the following functions are supported:

■ Querying all firewall rules of the current node

■ Disabling the firewall

■ Enabling the firewall

■ Adding an open port range of the specified network plane

■ Adding a port range for each network plane

■ Deleting a rule that is added manually

Using Help Commands

The inetrules tool uses a single-line command structure, and you can use the inetrules help command to get help.

inetrules show --list all inetrules


inetrules off --turn off all inetrules
inetrules on --turn on all inetrules
inetrules add-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]
inetrules del-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]
inetrules show [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]

Querying Firewall Rules

If no parameter is specified in the command, all firewall rules are queried, including IPv4 and IPv6 rules, which are displayed separately.
Command:

inetrules show

Returned result:

Chain INETBLOCK (1 references)


target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state ESTABLISHED multiport dports 1024:65535
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state ESTABLISHED multiport dports 1024:65535


DROP icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 13


DROP icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 14
ACCEPT tcp -- 180.16.2.0/24 0.0.0.0/0 match-set net_api_inet_tcp dst
ACCEPT udp -- 180.16.2.0/24 0.0.0.0/0 match-set net_api_inet_udp dst

If a parameter is specified in the command, the firewall rules of the specified network plane are queried. If only the --srccidr parameter is specified, all rules of this network plane are queried; the rules for multiple network planes are separated by commas. The --srccidr parameter must be specified. If both the --srccidr and --portrange parameters are specified in the command, the system determines the status of the port range on the specified network plane.

inetrules show [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]

--srccidr This parameter indicates the source IP address plus the mask length of the network plane, which can be viewed by the inetrules show command. --portrange This parameter specifies the destination port to be queried.

1 Query all open ports of the specified subnet.

inetrules show --srccidr 21.0.1.0/24

2 Query the status of the specified port set on the specified network plane.

inetrules show --srccidr 21.0.1.0/24 --portrange 1000

Enabling Firewall Rules

Command:

inetrules on

Returned result:

turn on inetrules success


turn on inet6rules success

Disabling Firewall Rules

Command:

inetrules off

Returned result:

turn off inetrules success


turn off inet6rules success

Adding Firewall Rules

Add an open port to the subnetwork plane.


inetrules add-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]

--srccidr This parameter indicates the source IP address plus the mask length of the network plane. --portrange This parameter specifies the destination port.
Example:

1 The parameter contains both the address and port.

inetrules add-rule --srccidr 192.168.1.2/16 --portrange 10000:15000

2 Add all subnet ports of Inetblock.

inetrules add-rule --portrange 10000:15000

Deleting a Firewall Rule

Delete a rule that is added manually; that is, change add-rule in the addition command to del-rule and keep the other contents unchanged.

inetrules del-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]

--srccidr This parameter indicates the source IP address plus the mask length of the network plane. --portrange This parameter specifies the destination port.
Example:

1 The following example shows how to delete the address and port of the specified subnet.

inetrules del-rule --srccidr 192.168.1.2/16 --portrange 10000:15000

2 The following example shows how to delete all subnet ports of Inetblock.

inetrules del-rule --portrange 10000:15000

Note: The CIDR must carry a prefix length.
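The note above means that a bare address such as 192.168.1.2 is rejected. A minimal sketch of this check using Python's standard ipaddress module (the helper name is illustrative, not part of the tool):

```python
import ipaddress

def check_cidrlist(cidrlist: str) -> list[str]:
    """Validate that every comma-separated entry is an address with an
    explicit prefix length (e.g. 192.168.1.2/16), as inetrules requires."""
    valid = []
    for entry in cidrlist.split(","):
        if "/" not in entry:
            raise ValueError(f"{entry!r} must carry a prefix length")
        # strict=False accepts host bits set, e.g. 192.168.1.2/16
        ipaddress.ip_network(entry, strict=False)
        valid.append(entry)
    return valid

print(check_cidrlist("192.168.1.2/16,21.0.1.0/24"))
# ['192.168.1.2/16', '21.0.1.0/24']
```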

System Traffic Management CLI


■ System Traffic Management CLI

■ Overview of the Globalinetrules-cli Tool

■ Using Help Commands

■ Querying System Traffic Filtering Rules

■ Adding a System Traffic Filtering Rule

■ Deleting an Existing System Traffic Filtering Rule

■ Deleting All System Traffic Filtering Rules


System Traffic Management CLI

Overview of the Globalinetrules-cli Tool

The globalinetrules-cli tool provides commands for filtering the system traffic of the equipment.
Currently, the following functions are supported:

■ Using help commands

■ Querying the traffic filtering rules of all controller nodes.

■ Adding traffic filtering rules to all controller nodes.

■ Deleting an existing system traffic filtering rule.

■ Deleting all system traffic filtering rules.

Using Help Commands

The globalinetrules tool uses a single-line command structure, and you can use the globalinetrules help command to get
help.

1) globalinetrules add-rule [--action drop|accept] [--srcmac macaddr]
   [--srccidr cidr] [--dstcidr cidr] [--protocol protocol]
   [--srcport srcport] [--dstport dstport]
2) globalinetrules del-rule [--action drop|accept] [--srcmac macaddr]
   [--srccidr cidr] [--dstcidr cidr] [--protocol protocol]
   [--srcport srcport] [--dstport dstport]
3) globalinetrules clear-rules
4) globalinetrules show

Parameter Descriptions

--action rule action (policy); the default is drop
--srcmac source MAC address of a packet
--dstcidr subnet where the destination address of a packet is located
--srccidr subnet where the source address of a packet is located
--protocol protocol used for packets; the default is all, that is, all protocols. You can set it to tcp, udp, or icmp.
--srcport source port of a packet
--dstport destination port of a packet

Querying System Traffic Filtering Rules

Query the traffic filtering rules of all controller nodes and display them in a dictionary list.
Command:

globalinetrules show

Returned result:

[{'dstcidr': '30.27.0.6/32', 'protocol': 'tcp', 'srcmac': '00:d0:d0:12:34:56', 'srcport': '100',
'srccidr': '30.20.0.6/32', 'action': 'drop', 'dstport': '100'},
{'dstcidr': '30.27.0.6/32', 'protocol': 'tcp', 'srcmac': '1111', 'srcport': '100',
'srccidr': '30.20.0.6/32', 'action': 'drop', 'dstport': '110'},
{'dstcidr': '2.2.2.2', 'protocol': 'tcp', 'srcmac': '1234', 'srcport': '33',
'srccidr': '1.1.1.1', 'action': 'drop', 'dstport': '44'}]
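The returned result is a Python-style dictionary list. Because it uses single quotes it is not valid JSON, but it can be parsed safely with the standard ast module; the snippet below is an illustrative sketch, not part of the tool:

```python
import ast

# A shortened sample of the 'globalInetrules show' output shown above.
output = ("[{'dstcidr': '30.27.0.6/32', 'protocol': 'tcp', "
          "'srcmac': '00:d0:d0:12:34:56', 'srcport': '100', "
          "'srccidr': '30.20.0.6/32', 'action': 'drop', 'dstport': '100'}]")

rules = ast.literal_eval(output)          # safe parse, no eval()
dropped = [r for r in rules if r["action"] == "drop"]
print(len(dropped), dropped[0]["dstcidr"])  # 1 30.27.0.6/32
```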

Adding a System Traffic Filtering Rule

Add traffic filtering rules to all controller nodes.


Command:

globalinetrules add-rule [--action drop|accept] [--srcmac macaddr]


[--srccidr cidr] [--dstcidr cidr] [--protocol protocol]
[--srcport srcport] [--dstport dstport]

Returned result:

"accept" means that the setting is successful.


"error" means that the setting is unsuccessful.

Example:

1 The following example shows how to set a rule that allows all packets from the subnet 192.168.1.2/16.

globalinetrules add-rule --action accept --srccidr 192.168.1.2/16

Deleting an Existing System Traffic Filtering Rule

Delete an existing system traffic filtering rule.


Command:

globalinetrules del-rule [--action drop|accept] [--srcmac macaddr]


[--srccidr cidr] [--dstcidr cidr] [--protocol protocol]
[--srcport srcport] [--dstport dstport]

Returned result:

"accept" means that the setting is successful.


"error" means that the setting is unsuccessful.

Example:

1 The following example shows how to delete a rule that allows all packets from the subnet 192.168.1.2/16.

globalinetrules del-rule --action accept --srccidr 192.168.1.2/16

Deleting All System Traffic Filtering Rules

Command:

globalinetrules clear-rules


Returned result:

"success" means that all rules are deleted successfully.

Note:
The CIDR must carry a prefix length.

One-Click Collection CLI

Overview of One-Click Collection CLI

This CLI is used to collect files, shell command execution results, PaaS platform operation logs and PaaS platform
performance data.
This CLI provides the following collection modes:

■ Collecting data by node role.

■ Collecting data by component

■ Collecting data by IP address

■ Collecting data by scenario

■ Collecting data by common service instance

This CLI is executed only on the controller node by a user with root rights. If there are multiple controller nodes, you
can execute the collection commands on each controller node independently, without any influence on one another.
The collected data can be stored in two ways:

■ Local storage

Path: /paasdata/collect_data on the controller node where this CLI is executed.


■ Remote storage

Path: A directory on the remote machine, specified by the user. The remote machine must support SSH
login.

There are four collect-cli commands:

■ collect-cli: data collection.

■ collect-cli cfg: shows/modifies a file directory, a file name, or a shell command.

■ collect-cli quota: shows/modifies/restores the quota.

■ collect-cli source list: lists the resource objects supported by data collection.

Data Collection

Parameter Descriptions

Abbreviation Full Name Description


-r --role Node role. It refers to each role configured on the node. The role names are
separated by commas, for example, paas_controller,master. To learn about the role
contents, you can execute the pdm-cli node list command on the controller node and
view the roles field. Or you can view the roles configured on each node through the
UI. If you do not enter a specific role name but enter all, the data of all roles is
collected. The collect-cli source list command can be used to view the supported
roles.

-i --ipaddr IP address of the node. The IP addresses you entered are allocated by the net_api
and multiple IP addresses are separated by commas. For example,
192.10.20.123,192.10.20.122.

-s --scene Scenario. This parameter can be used when you can roughly identify the scenario
where the problem occurs. Options: network, storage and deploy, which indicate the
network scenario, storage scenario, and deployment scenario respectively. One
scene parameter is input at a time during data collection. The collect-cli source list
command can be used to view the supported scenes.

-c --component Component name. Multiple components are supported, such as slb. The specific
component names can be queried by using the collect-cli source list command. If you
do not enter a specific component name but enter all, the data of all components is
collected. The collect-cli source list command can be used to view the supported
components.

-cs --commonservice Common service name, corresponding to “Common Service Name” on the
portaladmin. You can use the collect-cli source list command to view the names of
all the supported common service instances. Only one common service name can be
entered at a time. This parameter must be used together with the -inst parameter.

-inst --instance Common service instance name. You can enter one or more instance names at a
time, separated by commas, for example, kafka1,kafka2. This parameter
must be used together with the -cs parameter. If you do not enter a specific name but
enter all, the data of all instances of the common service is collected.

-d --debug Outputs the collect_log.txt. The collect_log.txt file contains the detailed information
(file size, last modification time) of the files that have been collected or have not been
collected on each node.

-l --last Period, from a time point in the past to the current time point. Unit: days; Integer;
Minimum: 1. For example, last 2 means that the files modified in the [now-24*2
hours,now] period range are collected.

-p --packet Format of the collected data. The default format is tar.gz. To compress files into
another format, use this parameter. Options are tar and zip. For example, -p zip
means that a .zip package is generated.

-rt --remote Remote storage mode. The path for file storage is <IP port directory>. The IP address,
port number and storage directory of the remote device are separated by spaces. If
you do not enter a port number, the default port 22 is used. For example, ‘100.20.0.1
/home/temp’ indicates that files are stored in the /home/temp/ directory on the
device whose IP address is 100.20.0.1 through the port 22.

-st --starttime Start time, format: yyyy-mm-dd hh:mm:ss (local time) or yyyy-mm-dd, for example,
‘2019-03-17 01:01:01’ or ‘2019-03-17’ (it will be supplemented automatically as
‘2019-03-17 00:00:00’). If this parameter exists, the files whose last modification
time is within the time range of [starttime, now] will be collected.


-et --endtime End time, format: yyyy-mm-dd hh:mm:ss (local time) or yyyy-mm-dd, for example,
‘2019-03-17 01:01:01’ or ‘2019-03-17’ (it will be supplemented automatically as
‘2019-03-17 00:00:00’). This parameter must be used together with starttime. At
present, [starttime, endtime] is only applicable to the ops component.

none --interval Sampling interval, which is only applicable to performance data collection of the OPS
component. The supported values are 30 s, 5 m, and 15 m, and the default value is 5
m.

none --all-common- Flag for collecting the logs of all the common service instances. This flag is used only
service when the “-r all” parameter exists. The value is true or false. The default value is
true. If this parameter is not specified or “all-common-service=true” is specified
explicitly, all common service instance logs will be collected. If “all-common-
service=false” is specified, no common service logs will be collected. Note that
there is no space around “=”.

Note:

■ The operation logs and performance data of the PaaS platform can be collected only when the component name

is ops. The performance data collection function supports two types of objects: node and component instance. The
performance data of a type of object within the specified period can be collected.
■ The endtime and interval parameters are invalid when the data of non-ops components is collected.

The starttime, endtime, and interval parameters are valid when the data of ops component is collected. When the
starttime and endtime parameters are not specified, the performance data collection period is determined based
on the interval parameter. If the interval is 5 minutes by default, the performance data within 12 hours is collected.
If the interval is 30 s, the performance data within two hours is collected. If the interval is 15 minutes, the
performance data within 24 hours is collected.
■ By default, for the ops component, the data within the last 12 hours is collected. For other objects, the data
within the complete time range is collected by default.
■ If the last parameter and the starttime or endtime parameters coexist, the last parameter prevails.

■ The -s parameter cannot be used together with the -r, -i, and -c parameters.

■ The -cs and -inst parameters must be used together, and cannot be used together with the -r, -i, -c and -s
parameters.
■ When using the remote storage mode (-rt, --remote), if you want to access the remote machine without a
password, make sure that you have completed the related configuration.

a Use ssh-keygen -m PEM -t rsa to generate an SSH key pair (public key and private key). To prevent
overwriting the original key pair, the SSH key can be generated outside the PaaS environment.

b In the /paasdata/ops-tools/remote_config.json, enter the correct username for ssh login of the
remote machine.

c In the /root/.ssh/ directory of the main controller node where the data is to be collected, place a private key
named id_rsa_collect with the permission of 700.

d On the remote machine, add the corresponding public key to the ~/.ssh/authorized_keys file of the
login user.
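The parameter-combination rules above (-s excludes -r, -i, and -c; -cs and -inst must appear together and exclude the rest) can be expressed as a pre-flight check. The following is a hedged sketch; the function name and argument names are illustrative, not part of collect-cli:

```python
def validate_collect_args(scene=None, role=None, ipaddr=None,
                          component=None, commonservice=None, instance=None):
    """Reject collect-cli parameter combinations the CLI does not accept."""
    if scene and (role or ipaddr or component):
        raise ValueError("-s cannot be used together with -r, -i or -c")
    if bool(commonservice) != bool(instance):
        raise ValueError("-cs and -inst must be used together")
    if commonservice and (role or ipaddr or component or scene):
        raise ValueError("-cs/-inst cannot be combined with -r, -i, -c or -s")
    return True

print(validate_collect_args(component="slb", ipaddr="110.0.0.12"))  # True
```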

Example

1 Collect the logs and shell data on nodes 100.20.0.171 and 100.20.0.170.


collect-cli --ipaddr 100.20.0.171,100.20.0.170

2 Collect the logs and shell data of all the nodes in the system, including the logs of the common service instances.

collect-cli -r all

Or

collect-cli -r all --all-common-service=true

Note:
This command can be used only after the system is deployed.

3 Collect the logs and shell data of all the nodes in the system, excluding the logs of the common service
instances.

collect-cli -r all --all-common-service=false

Note:
This command can be used only after the system is deployed.

4 Collect the logs and shell data of all the controller nodes (node role: paas_controller) in the system within the
recent 2 days.

collect-cli -r paas_controller -l 2

5 Collect the logs and shell data of all nodes (as the master or minion role) in the system.

collect-cli --role master,minion

Note:
This command can be used only after the system is deployed.

6 Collect the logs and shell data of a single controller node (as the paas_controller role) whose IP address is
100.20.0.171 in the system.

collect-cli -r paas_controller -i 100.20.0.171

Note:
This command can be used only after the system is deployed.

7 Collect the logs and shell data of the slb component.

collect-cli --component slb

Note:
This command can be used only after the system is deployed.

8 Collect the logs and shell data of all components. The logs to be collected must be within the time range, that is,


the last modification time is within the range of [2019-3-17 01:01:01, current time].

collect-cli --component all --starttime '2019-03-17 01:01:01'

Note:
This command can be used only after the system is deployed.

9 Collect the logs and shell data of the slb component, and record the details of the collected logs in the
collect_log.txt file.

collect-cli --component slb --debug

Note:
This command can be used only after the system is deployed.

10 Collect the data of the slb component, and save the collected results to the /home/ubuntu/ directory of the
remote machine (100.20.0.1).

collect-cli --remote '100.20.0.1 /home/ubuntu/' -c slb

Note:
Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.

11 If there is a problem with the network in the current environment, collect the data of the network scenario and
save the collected results to the /home/ubuntu/ directory of the remote machine (100.20.0.1).

collect-cli --remote '100.20.0.1 /home/ubuntu/' --scene network

Note:
Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.

12 Collect the performance data within 12 hours and operation logs within 12 hours at an interval of 5 minutes, and
save them in the local disk.

collect-cli --component ops

13 Collect the performance data within two hours and operation logs within two hours at an interval of 30 seconds,
and save them on the local disk. Assume that the current time is 2019-08-24 02:00:00.

collect-cli --component ops --interval 30s --starttime '2019-08-24 00:00:00' --endtime '2019-08-24 02:00:00'

14 Collect the data of the slb component within the time range of [2019-08-24 00:00:00, 2019-08-26 00:00:00], and
save the collection results in the /home/ubuntu/ directory of the remote machine (100.20.0.1).

collect-cli -c slb -st '2019-08-24' -et '2019-08-26' -rt '100.20.0.1 /home/ubuntu/'

Note:


Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.

15 Collect the data of the slb component in the last three days, and save it in a .zip file.

collect-cli -c slb -l 3 -p zip

16 Collect the data of the k8s-minion component on the nodes 110.0.0.12 and 110.0.0.5 only.

collect-cli -c k8s-minion -i 110.0.0.12,110.0.0.5

17 Collect the data of the toposervice component on the nodes whose role is paas_controller.

collect-cli -c toposervice -r paas_controller

Note:

If a large amount of data is to be collected, and the collection time may be longer than five minutes, you can use
nohup to run the collection command in the background.

■ If a command contains nohup...&, the command runs in the background. This prevents the command from being

terminated after the SSH connection is interrupted. For example,

nohup collect-cli -c all &

This indicates that the collect-cli -c all command runs in the background.
■ When a command runs in the background, there is no output on the screen. You can execute the tail -f nohup.out
command in the current directory to view the output. Use Ctrl+C to stop viewing the output.

18 Collect the data of the kafka-zyh1 instance under the common service instance Apache-Kafka.

collect-cli -cs Apache-Kafka -inst kafka-zyh1

19 Collect the data of all instances under the common service instance Apache-Kafka.

collect-cli -cs Apache-Kafka -inst all

Output Result
The contents collected on each node are saved into the following files (taking the zip format as an example):

File Name Exist or Not Description

logs.zip Exist All collected files

shells.zip Exist All the shell commands, including the shell commands executed
in the component containers (if the input parameter is the
component) and the shell commands executed on the nodes

output.txt Exist Result statistics of the collected files and shell commands


collect_log.txt Optional. This file exists only when the --debug parameter is entered. Collected file name, size,
modification time, and discard reason.

Modifying a Configuration File

Command line format collect-cli cfg

collect-cli cfg show


collect-cli cfg edit

Example

Output the currently configured collection directory and shell commands. For the displayed contents, see the
section “Configuration File Description”.

collect-cli cfg show

Edit the directory to be collected and the shell commands. For the edited contents, refer to Section
“Configuration File Description”.

collect-cli cfg edit

Note:
Press “Insert” to enter the edit mode, and “Esc” to exit the edit mode.
After editing, enter :w and press “Enter” to save the configuration, and enter :q to exit the editor.

Modifying the Quota of a Data Collection Directory

To ensure that the controller node can operate properly, the upper limit of the quota of the data collection directory
/paasdata/collect_data is set to 5 GB.
If there are too many data collection nodes or the data to be collected on each node is too large, and the default upper
limit of the collection directory (5 GB) is exceeded, you need to modify the quota of the data collection directory in
accordance with the size of the collected data and the free space of the disk of the execution node. The unit is GB.
If the free space of the collection directory is too small to meet the collection requirements, all historical collection data is
deleted automatically.
Command line format collect-cli quota

collect-cli quota show


collect-cli quota modify
collect-cli quota default

Example

Display the quota of the current collection directory and the available disk space.

collect-cli quota show


Change the quota of the collection directory to 2 GB.

collect-cli quota modify 2G

Reset the quota of the collection directory to the initial value.

collect-cli quota default

Configuration File Description

To modify the file collection range and shell commands of a node or component, you need to edit the configuration file.
For how to obtain the configuration file, refer to Section “Modifying a Configuration File”.
Note:

■ The format is json.

■ The following fields are contained in the file header:

Parameter Description

LOG_DIR_NUM_LIMIT Maximum number of files to be collected in a directory. If the number of files in
a directory exceeds this value, the directory will not be collected. Default: 200.

LOG_FILE_SIZE_LIMIT Maximum size of a single file. If the size of a file exceeds this value, the file will
not be collected. Default: 1 GB.

COLLECT_TIMEOUT Duration during which the execution node waits for collection to be completed.
If the collection time of a single node exceeds this duration, the collection fails.
Default: 5 minutes.

SHELL_EXEC_TIMEOUT Execution duration of a single shell. If the execution duration of a single shell on
a node exceeds this duration, the collection fails. Default: 30 seconds.

QUOTA_LIMIT Upper limit of the quota, that is, the data size when the collected data is saved
locally. When you modify the quota value through the quota modify or quota
default command, the macro value changes. Default: 1GB.

Dictionary Description

LOGS_FOR_ZIP Files to be collected on the node

LOGS_FOR_COMPONENT_ZIP Files or file directories to be collected on the component

SHELLS_FOR_EXC Shell commands executed on the node. For each shell, a file is output.

SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG Shell commands executed in the container. All shells are output in one
file.

SHELLS_FOR_EXEC_IN_CONTAINER Commands executed in the container, not limited to shell or Python
commands. The commands are executed only, and no files are output.

COFNIG_FOR_ZIP Configuration files to be collected on the controller node

■ The files or commands in the common part of LOGS_FOR_ZIP and SHELLS_FOR_EXC are collected for each
role, so you do not need to define them repeatedly.


■ Data collection parameter configuration, which can be modified as required. The contents to be collected should

be defined uniquely. Otherwise, data will be collected repeatedly.
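A sketch of how the LOG_DIR_NUM_LIMIT and LOG_FILE_SIZE_LIMIT header fields might be applied when selecting files in a directory (illustrative only, not the tool's actual implementation):

```python
import os

def select_files(directory, dir_num_limit=200, file_size_limit=1 << 30):
    """Skip the whole directory if it holds more than dir_num_limit files;
    skip any single file larger than file_size_limit (default 1 GB)."""
    names = os.listdir(directory)
    if len(names) > dir_num_limit:
        return []                      # directory not collected at all
    selected = []
    for name in names:
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getsize(path) <= file_size_limit:
            selected.append(path)
    return selected
```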

Example

1 For the node whose role is elk, add the /etc/resolv.conf file for collection.

Before the modification:

LOGS_FOR_ZIP = {
'elk': [
'/root/info/logs/'
],
}

After the modification:

LOGS_FOR_ZIP = {
'elk': [
'/root/info/logs/',
'/etc/resolv.conf'
],
}
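The same kind of edit can also be made programmatically. This sketch assumes the configuration has been exported to a JSON file on disk; the path handling and helper name are hypothetical, and the supported way to edit the file is collect-cli cfg edit:

```python
import json

def add_collect_path(cfg_path, role, new_path):
    """Append new_path to LOGS_FOR_ZIP for the given role, skipping
    duplicates (duplicated entries would be collected repeatedly)."""
    with open(cfg_path) as f:
        cfg = json.load(f)
    logs = cfg.setdefault("LOGS_FOR_ZIP", {}).setdefault(role, [])
    if new_path not in logs:
        logs.append(new_path)
    with open(cfg_path, "w") as f:
        json.dump(cfg, f, indent=4)
    return logs
```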

2 For the node whose role is minion, add the shell command ps.

Before the modification:

SHELLS_FOR_EXC = {
'minion': [
'systemctl status knitter-agent.service'
],
}

After the modification:

SHELLS_FOR_EXC = {
'minion': [
'systemctl status knitter-agent.service',
'ps'
],
}

3 For the slb component, add the /paasdata/op-log/apiroute file for collection.

Before the modification:

LOGS_FOR_COMPONENT_ZIP = {
'slb': [
'/paasdata/op-log/eslb'
],
}


After the modification:

LOGS_FOR_COMPONENT_ZIP = {
'slb': [
'/paasdata/op-log/eslb',
'/paasdata/op-log/apiroute'
]
}

4 For the slb component, add the shell command ps that is executed on the node.

Before the modification:

SHELLS_FOR_COMPONENT_EXC = {
'slb': [
'cat /proc/meminfo'
],
}

After the modification:

SHELLS_FOR_COMPONENT_EXC = {
'slb': [
'cat /proc/meminfo',
'ps'
]
}

5 For the slb component, add the shell command ps that is executed in the container.

Before the modification:

SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'ifconfig'
],
},
],
}

After the modification:

SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'ifconfig',
'ps'
],


},
],
}

6 For the slb component, add the security check script python /data/autocheck.py 1 that is executed in the container. The
script is executed only, and no data is output.

Before the modification:

SHELLS_FOR_EXEC_IN_CONTAINER = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'cp -r /etc/pod-config /vnslog/'
]
}
]
}

After the modification:

SHELLS_FOR_EXEC_IN_CONTAINER = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'cp -r /etc/pod-config /vnslog/',
'python /data/autocheck.py 1'
]
}
]
}

7 For the slb component, add the root/nodes file for collection. The file is on the controller node.

Before the modification:

COFNIG_FOR_ZIP = {
'slb': [
'/etc/pdm/conf/vnm_network.conf'
],
}

After the modification:

COFNIG_FOR_ZIP = {
'slb': [
'/etc/pdm/conf/vnm_network.conf',
'/root/nodes'
],
}


8 For the sys_server component, add the contents to be collected by using wildcards.

Before the modification:

COFNIG_FOR_ZIP = {
'sys_server': [
'/var/log/messages',
'/var/log/messages.1.gz',
'/var/log/messages.2.gz',
'/var/log/messages.3.gz',
'/var/log/messages.4.gz',
'/var/log/messages.5.gz'
],
}

After the modification:

COFNIG_FOR_ZIP = {
'sys_server': [
'/var/log/messages*'
],
}

9 Collect data by using simple reverse filtering rules. For example, only the valid files in /var/log are collected; files
with the suffixes sig, ver, doc, and txt are not collected, and the file named etcd1 is not collected.

Before the modification:

COFNIG_FOR_ZIP = {
'common': [
'/var/log/'
],
}

After the modification:

COFNIG_FOR_ZIP = {
'common': [
'/var/log/!(*.sig, *.ver, *.doc, *.txt, etcd1)'
],
}
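The !( ... ) pattern above excludes the listed suffixes and the file name etcd1. An equivalent filter can be sketched with Python's fnmatch module (illustrative; not the tool's actual matcher):

```python
import fnmatch

# Patterns taken from the reverse filtering rule in the example above.
EXCLUDE = ["*.sig", "*.ver", "*.doc", "*.txt", "etcd1"]

def keep(filename):
    """Return True if the file survives the reverse filtering rule."""
    return not any(fnmatch.fnmatch(filename, pat) for pat in EXCLUDE)

files = ["messages", "boot.log", "etcd1", "readme.txt", "pkg.sig"]
print([f for f in files if keep(f)])   # ['messages', 'boot.log']
```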

