11-TECS Openpalette (V7.22.30) CLI Reference
CLI Reference
Version: V7.22.30
ZTE CORPORATION
ZTE Plaza, Keji Road South, Hi-Tech Industrial Park,
Nanshan District, Shenzhen, P.R.China
Postcode: 518057
Tel: +86-755-26771900
URL: http://support.zte.com.cn
E-mail: [email protected]
LEGAL INFORMATION
Copyright 2022 ZTE CORPORATION.
The contents of this document are protected by copyright laws and international treaties. Any reproduction or
distribution of this document or any portion of this document, in any form by any means, without the prior written
consent of ZTE CORPORATION is prohibited. Additionally, the contents of this document are protected by contractual confidentiality obligations.
This document is provided as is, and all express, implied, or statutory warranties, representations or conditions are
disclaimed, including without limitation any implied warranty of merchantability, fitness for a particular purpose,
title or non-infringement. ZTE CORPORATION and its licensors shall not be liable for damages resulting from
the use of or reliance on the information contained herein.
ZTE CORPORATION or its licensors may have current or pending intellectual property rights or applications
covering the subject matter of this document. Except as expressly provided in any written license between ZTE
CORPORATION and its licensee, the user of this document shall not acquire any license to the subject matter
herein.
ZTE CORPORATION reserves the right to upgrade or make technical changes to this product without further notice.
Users may visit the ZTE technical support website http://support.zte.com.cn to inquire about related information.
The embedded software delivered together with this product of ZTE must be used only as a component of this
product. If this product is discarded, the licenses for the embedded software become void as well and must not be
transferred. ZTE will provide technical support for the embedded software of this product.
■ Revision History
■ Introduction to Zartcli
■ Configuration File
■ Querying a List
■ Fuzzy Query
■ Query Details
■ Performing Synchronization
■ Exporting
■ Importing
■ PaaS Deployment
■ PaaS Components
■ Resource Pool
■ Volume
■ Port Range
■ Configuration Sub-commands
■ Query Sub-commands
■ Modification Sub-commands
■ Status Sub-commands
■ Data Collection
Revision History
The revision history table records, for each revision: the CLI change operation, the CLI command, the parameter change operation, the parameter, the changed contents, the reason for the change, and the revision version.
This document is intended for:
■ Debugging engineers
■ Maintenance engineers
Chapter Overview
Software Repository CLI: Describes the CLI commands of the software repository, including the configuration file, query list, and fuzzy query commands.
Platform Management CLI: Describes the CLI commands for platform management, including PaaS deployment, PaaS components, and PaaS cluster commands.
Node Resources Management CLI: Describes the CLI commands for node resource management, including configuration sub-commands, query sub-commands, and modification sub-commands.
Firewall Rule Management CLI: Describes the CLI commands for firewall rule management, including the commands for viewing, enabling, and disabling firewall rules.
System Traffic Management CLI: Describes the CLI commands for system traffic management, including the commands for querying system traffic filtering rules, adding system traffic filtering rules, and deleting existing system traffic filtering rules.
One-Click Collection CLI: Describes the CLI commands for one-click collection, including commands for data collection, modifying configuration files, and modifying the quota of a data collection directory.
■ Introduction to Zartcli
■ Configuration File
■ Querying
■ Querying a List
■ Fuzzy Query
■ Query Details
■ Uploading
■ Downloading
■ Updating
■ Deleting
■ Building an Image
■ Online Synchronization
■ Performing Synchronization
■ Exporting
■ Importing
■ Pushing
Introduction to Zartcli
Zartcli is a CLI client of the software repository. It is a binary file, and the current version supports 64-bit Linux.
It provides commands for querying, uploading, downloading, updating, deleting and synchronizing four types of versions:
image, blueprint (bp), software package (bin) and component (com).
Zartcli is released together with the PaaS version. After the PaaS environment is installed successfully, zartcli can be used.
It is in the /root/zartcli/ directory on the controller node of the PaaS.
Parameter Descriptions:
Configuration File
The configuration file zartcli.ini of zartcli is located in the same directory as the binary program. The initial contents are as
follows:
[zartsrv]
default = ip:port // ip is the address of the PaaS software repository server, and port is the port of the server (6000 by default).
[logpath]
path = /paasdata/op-log/cf-zartcli
To operate another independent repository (for example, 10.1.1.123:6000), run the following command to add the
repository configuration:
./zartcli -S=swr123:10.1.1.123:6000
After this configuration is added, you can use the “-s swr123” parameter to perform version-related operations for this
repository.
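Assuming the pattern of the query examples later in this chapter (-o for the operation, -i for the project, -m for the version type, -s for the repository name), an operation against the added repository might look like the following sketch; the exact flag values are not confirmed by this section:
./zartcli -o=query -i=tcfs -m=image -s=swr123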
If no log path is configured in the zartcli.ini file, the logs are saved in the same directory as zartcli by default. To set the log
path, run the following command:
./zartcli -logpath=path
Querying
Querying a List
The querying operation can query a list of versions of the image, bp, bin, and com types for a specified project (tenant).
For the bp type, the -t parameter is needed. For the image, bin, and com types, the -t parameter is not needed.
Example:
1 The following example shows how to query all versions of a bp under the tcfs project:
2 The following example shows how to query the specified version of a bp under the tcfs project:
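The original command screenshots are not reproduced here. The following invocations are hypothetical sketches assembled from the parameters documented in this chapter (-o=query and -i appear in the Pushing section; -m, -n, -v and -t are described under Uploading and Updating); the names bp1, service and v1 are placeholders:
./zartcli -o=query -i=tcfs -m=bp -n=bp1 -t=service
./zartcli -o=query -i=tcfs -m=bp -n=bp1 -v=v1 -t=service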
Fuzzy Query
■ If the keyword contains the * wildcard, enclose the keyword in double quotation marks.
Example:
1 The following example shows how to query all versions of each image under the tcfs project. In this example, the
image name suffix is “test”.
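A hypothetical invocation for this example, using the documented -o=query pattern and quoting the wildcard keyword as required above (the actual syntax may differ):
./zartcli -o=query -i=tcfs -m=image -n="*test"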
Query Details
Query the detailed information of a specific version of the image, bp, bin and com types for the specified project (tenant).
(The parameters carried must specify a unique version.)
Example:
1 The following example shows how to query the blueprint details under the tcfs project. In this example, the
blueprint name is bp1, the version is v1 and the tag is service.
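A hypothetical invocation, supplying all five identifying parameters (-i, -m, -n, -v, -t) so that a unique version is specified; the operation value follows the -o=query pattern shown in the Pushing section:
./zartcli -o=query -i=tcfs -m=bp -n=bp1 -v=v1 -t=service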
Uploading
■ The version to be uploaded should be placed in the path corresponding to -p. This path cannot contain a sub-
directory.
■ When a blueprint is uploaded, all the files except .detail and .info under the path are written to the list field by the upload operation.
■ When an image is uploaded, only one tar package is allowed under the path (this tar package must be saved as
imagename:version, not imageid). Other files cannot have the .tar suffix.
■ When a blueprint, an image, or a software package (bin) is uploaded, the detail and info fields are obtained
through the .detail and .info files under the path. To add other fields, add them directly to the upload command.
Currently, the com parameter does not support updating detail and info through files.
■ -m can be set to bp, bin, com, and image.
■ -w is optional. The value is “true” or “false” and the default value is false. “True” indicates that the result is
returned only after the status of the uploaded image is “available” or “unavailable”. “False” indicates that the
final result of the uploaded image does not need to be known, and the result is returned after the request is sent
successfully.
Example:
1 The following example shows how to upload an image to the tcfs project. In this example, the image name is
image1, the version is v1, and the storage path of the image tar package is /home/upload/.
2 The following example shows how to upload a blueprint to the tcfs project. In this example, the blueprint name
is bp1, the type is service, the version is v1, and the storage path of the json file of the blueprint is /home/upload/.
3 The following example shows how to upload a software package to the tcfs project. In this example, the software
package name is bin1, the version is v1, and the storage path of all the files in the software package is /home/upload/,
which does not contain a sub-directory.
4 The following example shows how to upload a component package to the tcfs project. In this example, the component
package name is com1, the version is v1, and the storage path of the component package and the image tar package is
/home/upload/, which contains no sub-directory and only one tar package file.
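The four upload invocations are not reproduced here. The following sketches are hypothetical: the operation value -o=upload is an assumption, while -i, -m, -n, -v, -t and -p follow the parameter descriptions above:
./zartcli -o=upload -i=tcfs -m=image -n=image1 -v=v1 -p=/home/upload/
./zartcli -o=upload -i=tcfs -m=bp -n=bp1 -t=service -v=v1 -p=/home/upload/
./zartcli -o=upload -i=tcfs -m=bin -n=bin1 -v=v1 -p=/home/upload/
./zartcli -o=upload -i=tcfs -m=com -n=com1 -v=v1 -p=/home/upload/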
Downloading
Download a software package (bin), a component package, or a blueprint from the software repository to a specified
local directory as a file. This function cannot be used to download an image. For a component, only the file list can be
downloaded; the image of the component cannot be downloaded.
■ -p is the path where the specified version is downloaded to the local computer.
■ If the path does not exist, the system creates one automatically. If this version has been downloaded before, the previously downloaded files are overwritten.
Example:
1 The following example shows how to download a blueprint of the tcfs project. In this example, the version is
v1, the type is service, and the download path is /home/download/.
2 The following example shows how to download a software package of the tcfs project. In this example, the
software package name is bin1, the version is v1, and the download path is /home/download/.
3 The following example shows how to download a component package of the tcfs project. In this example, the
component package name is com1, the version is v1, and the download path is /home/download/.
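Hypothetical sketches of the three download invocations; the operation value -o=download is an assumption, the blueprint name bp1 is a placeholder, and the remaining parameters follow the descriptions above:
./zartcli -o=download -i=tcfs -m=bp -n=bp1 -t=service -v=v1 -p=/home/download/
./zartcli -o=download -i=tcfs -m=bin -n=bin1 -v=v1 -p=/home/download/
./zartcli -o=download -i=tcfs -m=com -n=com1 -v=v1 -p=/home/download/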
Updating
Set the public attribute (publicview) of a specified version under a specified project.
■ The -i, -m, -t, -n, and -v parameters identify a unique record. You can modify the record by adding other parameters at the end, for
example, -b=yes.
■ When -m is set to bin, image, or com, the -t parameter is not required, because there is no tag label in the version
attribute.
Example:
1 The following example shows how to set a blueprint version under the tcfs project to a public blueprint.
2 The following example shows how to set an image version under the tcfs project to a project image (private
image).
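Hypothetical sketches of the two update invocations; the operation value -o=update and the value -b=no for a private image are assumptions, while -b=yes is documented above:
./zartcli -o=update -i=tcfs -m=bp -n=bp1 -v=v1 -t=service -b=yes
./zartcli -o=update -i=tcfs -m=image -n=image1 -v=v1 -b=no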
Deleting
Delete a version of the bin, image, com or bp type under the specified project.
■ When an image is deleted, only the image record in the software repository and the image label in the registry are
deleted. The data layer of the image cannot be deleted, and the space occupied by the image will not be released.
■ -m can be set to bin, image, com, and bp.
Example:
1 The following example shows how to delete the specified version of an image under the tcfs project.
2 The following example shows how to delete all the versions of an image under the tcfs project.
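Hypothetical sketches of the two delete invocations; -o=delete appears in the Pushing section, and omitting -v to delete all versions is an assumption:
./zartcli -o=delete -i=tcfs -m=image -n=image1 -v=v1
./zartcli -o=delete -i=tcfs -m=image -n=image1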
Building an Image
Upload all the files required for building an image such as Dockerfile to a temporary directory on the software repository
server, build an image, and then put the image into the software repository. Finally, delete the files in the temporary
directory.
■ For an image, the files required for the build are pushed to the server. After the image is built, it is stored in the software
repository.
■ timeoutseconds: timeout for building an image. Unit: seconds; default: 1800; range: 0‒7200.
Example:
1 The following example shows how to build an image. In this example, the project name is tcfs, the image name is
image1, the version is v1, and the directory is /home/build/.
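A hypothetical sketch of the build invocation; the operation value -o=build and the flag form of timeoutseconds are assumptions, while the other parameters follow the descriptions above:
./zartcli -o=build -i=tcfs -m=image -n=image1 -v=v1 -p=/home/build/ -timeoutseconds=1800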
Pushing an Image
Directly push a local image tar package to the registry. After that, the image record is written to the software repository.
Difference between uploading and pushing an image: The push function executes the docker load, docker
tag and docker push commands, and then writes an image record to the software repository. The upload function directly
uploads the image tar package to the software repository server, and the server completes the subsequent operations.
Example:
1 The following example shows how to push an image of the tcfs project. In this example, the image name is
image1, the version is v1, and the storage path of the image tar package is /home/push/.
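A hypothetical sketch of the push invocation; the operation value -o=push is an assumption:
./zartcli -o=push -i=tcfs -m=image -n=image1 -v=v1 -p=/home/push/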
Online Synchronization
Synchronize the related software version from the remote software repository to the local software repository. (This is
manual synchronization: synchronization is performed once each time it is initiated.)
Performing Synchronization
■ -w is optional. The value is “true” or “false”, and the default value is false. “True” indicates that the
result is returned only after the synchronization task status is “sync success” or “sync failed!”. “False” indicates that the
final result of the synchronization task does not need to be known, and the result is returned immediately after the
request is sent successfully.
■ After the client sends a synchronization request, the procedure is complete. To know the synchronization result, you
need to query the synchronization task separately (see the query example below). A sync.list file reference is as follows:
{
"sourcezart": "10.62.52.117:6000", ///address and port number of the remote repository
"sourceregistry":"10.62.52.117:6666", //address and port number of the registry of the remote repository, used during
image synchronization.
"name": "sync1", //synchronization name
"overwrite": "yes", //whether the synchronization contents overwrite the local record
"versions": [
{"reponame": "cg", "name": "bp1", "version": "v1", "tag":"default", "model": "bp"},
{"reponame": "cg", "name": "bp1", "version": "v2", "tag":"default", "model": "bp"},
{"reponame": "cg", "name": "image1", "version": "v1", "model": "image"},
{"reponame": "cg", "name": "bin1", "version": "v1", "model": "bin"},
{"reponame": "cg", "name": "com1", "version": "v1", "model": "com"}
]
}
■ sourceregistry: address and port number of the registry of the remote repository, used during image
synchronization. If there is no image in the following version list, this configuration can be omitted.
■ name: Name of the synchronization task (unique).
■ versions: A list of versions of the objects to be synchronized. You can add or delete objects as required. The format
is fixed to {“reponame”: “project in the remote repository”, “name”: “version name”, “version”: “version
number”, “model”: “object type”}. For a blueprint, you need to add “tag”:”blueprint tag”.
Example:
Compile a sync.list file to be synchronized by referring to the above sync.list file in json format, and save it to a directory,
for example, /home/sync/. The following example shows how to start synchronization.
The following example shows how to query the execution results of all or specified synchronization tasks.
■ If the -n parameter is not specified, all history synchronization operation records are returned. If the -n parameter is specified, only the execution result of the specified synchronization task is returned.
Delete all or specified synchronization tasks. (Note that the versions that have been synchronized will not be deleted.)
■ If the -n parameter is not specified, all history synchronization operation records are deleted. If the -n parameter is specified, only the specified synchronization task record is deleted.
The system provides the function of importing and exporting versions, which facilitates offline version transfer.
Exporting
A version list (export.list) is saved under a specified directory (path). The version list can be exported to a directory (path)
by using the export command. The path should be specific.
Note:
If zartcli is not used in the PaaS, configure /etc/default/docker before running the export command. The IP address of
the repository and the PORT of the registry are required. For example, add the following content:
DOCKER_OPTS="--insecure-registry=10.67.18.xxx:33777 --insecure-registry=193.168.4.5:6666 -H unix:///var/run/docker.sock -H 0.0.0.0:5555"
Obtain the IP address and port number. There are two cases:
1 For the repository inside the PaaS platform, run the pdm-cli node list command on the controller node to view
the IP address of the soft-repo node, which is used as the registry IP address, and PORT is 6666.
2 The IP address and PORT of the registry on the version server where PaaS releases versions, for example,
10.67.18.xxx:33377.
Example of export.list:
export.list
{"reponame":"admin","name":"aerospike","model":"com","version":"3.7.5.1"},
{"reponame":"admin","name":"c0-ms","model":"bp","tag":"microservice","version":"v1.16.20.04.37529"},
{"reponame":"admin","name":"c0","model":"image","version":"v1.16.20.04.I74ba1f"},
{"reponame":"admin","name":"cf-base","model":"bin","version":"1.0.1"},
{"reponame":"admin","name":"cf-csm","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-common","model":"bin","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pcluster","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pdeploy","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pdman","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-pnode","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-srcli","model":"bin","version":"v1.16.20.04.35650"},
{"reponame":"admin","name":"cf-srepo","model":"com","version":"v1.16.20.04.35650"},
{"reponame":"admin","name":"cf-vnpm","model":"com","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cf-zartcli","model":"bin","version":"v1.16.20.04.37498"},
{"reponame":"admin","name":"cnimaster","model":"bin","version":"v1.16.20.04.37529"},
{"reponame":"admin","name":"cpp-centos-7.2.1511","model":"image","version":"v1.16.10.03.p01.de3d69f"},
{"reponame":"admin","name":"cradle_master","model":"com","version":"v1.16.20.04.039a8a1"}
Importing
Import the version that is exported by zartcli into the repository. The import command is in the following format, where
path should be specific.
When the version is imported, zartcli creates the alreadimportedbackup directory under the path, and moves the imported
versions to this directory.
Pushing
The push list name is distr.list, and the file content is in json format. For example,
{
"swrcache": ["10.62.52.117:6000"], //address and port number of the remote repository
"name": "distr1", //push name
"versions": [
{"project":"projname1", "reponame": "cg", "name": "bp1", "version": "v1", "tag":"default", "model": "bp"},
//"reponame" and "model" are required, "project" is the destination where the version is pushed to, default is admin.
{"project":"projname1", "reponame": "cg", "name": "bp1", "version":"v2", "tag":"default", "model": "bp"},
{"project":"projname1", "reponame": "cg", "name": "image1", "version":"v1", "model": "image"},
{"project":"projname1", "reponame": "cg", "name": "bin1", "version":"v1", "model": "bin"},
{"project":"projname1", "reponame": "cg", "name": "com1", "version": "v1", "model": "com"}
]
}
■ The push list file (distr.list) in json format should be saved under the path.
./zartcli -o=query -i=admin -n=sync_multy_image -s=zartname -D //query the result of the push operation named
sync_multy_image.
./zartcli -o=query -i=admin -s=zartname -D //query all history synchronization records.
{
"Id": 35,
"Created_at": "2017-01-24T19:33:59+08:00",
"Updated_at": "2017-01-24T19:33:59+08:00",
"Name": "sync_multy_image", //push name
"Srcipport": "10.63.241.165:6000", //address and port number of remote repository
"Srcregistry": "10.63.241.165:6666", //address and port number of the registry of the remote repository, used during
image synchronization.
"TotalTask": 2, //number of versions to be synchronized in the synchronization list
"DoneTask": 2, //number of versions that have been synchronized
"SameSkipVer": 0, //number of local versions skipped by the synchronization operation when overwrite=no
"Overwrite": "yes", //overwrite status, whether the synchronized contents overwrite the local records when
overwrite=yes
"Status": "distr success", //synchronization operation status: in progress, success and failure
"Remark": "", //record the failure cause if synchronization fails. It is left blank if synchronization is successful.
"SyncFailVer": "" //record the model, reponame, name and version if synchronization fails.
}
./zartcli -o=delete -i=admin -n=sync_multy_image -s=zartname -D //delete the push record named sync_multy_image.
./zartcli -o=delete -i=admin -s=zartname -D //delete all the historical push operation records.
■ The deletion operation only deletes the push records, but does not delete the versions pushed to the remote repository.
■ After this command is executed, the untagged layers in the registry are actually deleted. In addition, the repository
and registry enter the maintenance status during the execution and they are unavailable. If the upload and update
operations are performed, they will fail. Use this command with caution.
■ PaaS Deployment
■ PaaS Components
■ Cluster
■ Resource Pool
■ Volume
■ Port Range
pdm-cli <subcommand> …
Where, <subcommand> includes deploy and cluster. The sub-command is followed by parameters, which are sorted in
sequence and separated by spaces. The parameters starting with "--" are optional. Optional parameters can be
omitted, and their positions are variable.
PaaS Deployment
--all is an optional parameter. If --all is specified in the command, all deployment tasks in the environment are displayed. If
--all is not specified, some old historical tasks are not displayed. The historical tasks that are not displayed are as follows:
■ Deployment tasks on the node that has been deleted are not displayed.
■ If components are deployed or upgraded on the same node multiple times, only the upgrade or deployment
tasks of the last three times are displayed, and other tasks are not displayed.
PaaS Components
Updating Patches
pdm-cli patch <model> <reponame> <name> <version>
For example,
pdm-cli patch bin admin nwnode v1.0.1
Updating Components Locally
pdm-cli update <model> <reponame> <name> <version> <path> <role>
role: specified installation nodes, including: paas_controller, master, minion, and elk.
Log path: /paasdata/op-log/cf-pdeploy/pdeploy_ansible.log
For example,
Executable file component: pdm-cli update bin admin nwnode v1.0.1 /root/nwnode minion
Container component: pdm-cli update com admin utm v1.0.1 /root/utm paas_controller
Querying the Component Version
pdm-cli version <model> <reponame> <name>
Independently Deploying a Container Component
pdm-cli deploy_com <model> <reponame> <name> <version> <role>
role: specified installation nodes, including: paas_controller, master, minion, and elk.
Log path: /paasdata/op-log/cf-pdeploy/pdeploy_ansible.log
If the component is not in the local repository, obtain the version from the install_center (install_center is set in
/etc/pdm/conf/softcenter.json).
For example,
pdm-cli deploy_com com admin zenap_cos v1.17.20.02.245446 paas_controller
To use the local component version, add the <path> parameter in the command line as follows:
pdm-cli deploy_com <model> <reponame> <name> <version> <path> <role>
path: directory where the container component version is located
For example,
pdm-cli deploy_com com admin ndr v1.1.0 /home/ubuntu/ndr paas_controller
Cluster
Note:
In the returned cluster information, the pict_eviction_pod under the cluster_config field records the eviction switch and
the thresholds of the parameters related to the eviction:
■ target_thresholds_cpu: CPU usage threshold of the node that triggers the eviction
■ target_thresholds_memory: memory usage threshold of the node that triggers the eviction
■ target_thresholds_loadavg: average CPU usage threshold of the node that triggers the eviction
Note:
■ This command is used in the scenario where the PaaS is deployed successfully but a cluster fails to be created or
deployed. The cluster_file parameter refers to the configuration path of the created cluster. The cluster_file parameter is optional. If it is not
specified, the original configuration of the cluster is used by default.
■ You can check the cluster deployment status by using the pdm-cli cluster list command. Only when the cluster
deployment status is any status of init-para-fail, applyfail, taskfail, deployfailed or labelfail, the command for
continuing installation is valid.
■ The cluster_file parameter is valid only when the cluster deployment status is init-para-fail.
Deleting a Cluster
pdm-cli cluster delete <uuid>
Deleting a Cluster Node
pdm-cli cluster delete <uuid> node <node_uuid>
Expanding a Cluster
pdm-cli cluster scaleout <uuid> <scale_file>
For the contents of scale_file, refer to the /etc/pdm/conf/example/scale.example.
For example,
pdm-cli cluster scaleout <uuid> /etc/pdm/conf/example/scale.example
Adding a Label to the Cluster Node
pdm-cli label add <key=value> node <node_uuid>
Modifying a Cluster Node Label
pdm-cli label update <key=value> node <node_uuid>
Deleting a Cluster Node Label
pdm-cli label delete <key> node <node_uuid>
Displaying All Cluster Nodes Labeled with a Label
pdm-cli label list node <key=value>
Displaying All the Node Labels
pdm-cli label list
Setting the Default Value of a Label (after the command is executed successfully, you can run the “pdm-cli label list”
command to check whether the setting is successful)
pdm-cli label set default_operator <key> <DoesNotExist/DoNotCare>
Blocking a Node
pdm-cli cluster node unschedule <cluster_node_uuid>
Unblocking a Node
pdm-cli cluster node schedule <cluster_node_uuid>
Deleting a Node Pod
pdm-cli cluster node drain <uuid>
Modifying Reserved Cluster Resources
pdm-cli cluster update <cluster_uuid> reserved_res <config_file>
For the config_file contents, refer to the /etc/pdm/conf/example/cluster_config.example. This command supports the
modification of the following configuration: reserved_res_prf (resource reservation threshold).
Modifying Reserved Cluster Node Resources
pdm-cli cluster node update <node_uuid> reserved_res <cpu=,mem=>
The <node_uuid> can be queried by using the pdm-cli cluster node list command. The values after cpu= and mem= can be
negative numbers.
Batch Modifying Reserved Cluster Node Resources
pdm-cli cluster nodes update reserved_res <reserved_conf_file>
For the contents of the reserved_conf_file, refer to: /etc/pdm/conf/example/nodes_reserved_conf.example
{
"encomp_deploy_rule":
{
"name": "rule1", # rule name
"cluster_uuids":[], # clust uuid list, which can be empty
"label_selectors":["k1=v1"], # node label list, which can be empty
"encomps":["tipc"] # enhanced component list, which must not be empty
}
}
{
"encomp_deploy_rule":
{
"uuid": "1b423225-2574-47aa-92c5-0b9c18a2ac86", # rule uuid
"cluster_uuids":[], # clust uuid list, which can be empty
"label_selectors":["k1=v1"], # node label list, which can be empty
"encomps":["gpu"] # enhanced component list, which must not be empty
}
}
cluster_uuids: UUID list of specified clusters. If the value is null, the rule is applicable to all clusters.
label_selectors: Label list of specified nodes, supporting three label operation types: =, notin and in, for example:
k1=v1; k1 notin (v1); k1 in (v1, v2). If the value is null, the rule is applicable to all minion nodes.
encomps: List of specified enhanced components. It cannot be null, and indicates the enhanced components that the rule is applicable to.
For the contents of the encomp file in the deployment, upgrade and rollback commands, refer to the
/etc/pdm/conf/example/encomp.example. An example is as follows:
{
"encomp":
{
"name": "tipc", # name of the enhanced component
"version":"v1.20.20.20.111111", # version number of the enhanced component
"model":"bin", # type of the enhanced component
"rules":["rule1"] # rule list specified when the enhanced component is deployed
}
}
Note:
■ Before deploying, upgrading, or rolling back enhanced components, if there are no deployment rules for the components, create the deployment rules first; this also applies for rollback.
■ An enhanced component is deployed, upgraded and rolled back in accordance with the specified rules in the
encomp.example file as required. If no rule is specified, the rules applicable to the component are automatically
selected from the created rules.
version: Current version of the enhanced component that is running on the node corresponding to the node_uuid.
src_version: Version of the enhanced component that previously ran on the node corresponding to the node_uuid.
operation: Last operation on the enhanced component. The value can be deploy, upgrade or rollback.
Resource Pool
Note:
■ Before executing the registration command, make sure that the backend corresponding to the local
■ --lun-id indicates the ID of LUN to be registered. Lun-id is the unique ID of the volume, which can be
obtained from the disk array interface. The IDs of different disk arrays may be named in different ways, for
example, uuid.
■ --mountpoint indicates the mount path. The system automatically searches for all LUNs in the mount
path, filters out the LUNs that have been registered, and then registers the remaining LUNs.
■ Select either --lun-id or --mountpoint.
■ You can query whether the volume is registered successfully by using the pdm-cli node volume_show
<node_id> command.
Volume
Creating a Volume
pdm-cli volume create <volume_file>
Querying a Volume
pdm-cli volume show <volume_uuid>
Deleting a Volume
pdm-cli volume delete <volume_uuid>
Port Range
The commands related to the port range can be executed before or after the deployment of the PaaS.
When you run a command to modify the port range, the security rules of the firewall are automatically updated.
Querying All Port Ranges
pdm-cli port_range list
An example of the execution result is as follows:
+-----------------------+---------------------------------------------------------------------------+
| range_name | value |
+-----------------------+---------------------------------------------------------------------------+
| plat_com_range | 53-53, 69-69, 80-80, 112-112, 443-443, 1022-4499, 5000-28000, 31942-31999 |
| common_services_range | 4500-4608, 4610-4999, 29951-29953 |
| ftp_data_range | 29900-29950 |
| public_services_range | 28001-29800 |
| config_services_range | |
+-----------------------+---------------------------------------------------------------------------+
Requirement: The new port segment must not conflict with the port range of other types.
If the system prompts that the added port segment conflicts with other port ranges, you can execute a command to delete
the corresponding port segment from the port ranges.
Deleting a Port Segment from the Port Range of Platform Components
pdm-cli port_range delete plat_com <port1-port2>[,…]
port1-port2: port segment. For the format, refer to Adding a Port Segment to the Port Range of Platform Components.
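For illustration, deleting two port segments from the platform component port range with the syntax above (the port values are arbitrary examples within plat_com_range):
pdm-cli port_range delete plat_com 27000-27500,27600-27800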
Requirements:
■ Before the PaaS deployment, it is required that the port segment to be deleted should not include the reserved ports. Otherwise, a failure is returned.
■ Before the PaaS deployment, it is required that there is no intersection between the port segment to be deleted
and the current common service port range. Otherwise, a failure is returned. (The common service port range can be viewed by running the pdm-cli port_range list command.)
■ After the PaaS deployment, it is required that the port segment to be deleted should not include the ports that are
in use.
Requirements:
■ After the PaaS deployment, it is required that the port segment to be deleted should not include the ports that are in use.
After this command is executed, the SSH keys for logging in to all nodes will be replaced with the automatically generated
keys.
Note:
After the SSH keys are replaced, if the PaaS is uninstalled and then redeployed, the keys on all nodes will be restored to
the initial default keys. For the sake of security, it is necessary to run this command again to update the SSH keys.
Different countries and regions are in different time zones. When a product is delivered to the site, assembled, and
powered on, the time zone needs to be changed to the local time zone.
Note:
■ The time zone is adjusted after the environment deployment and before the formal use.
■ After the time zone is modified, the data in a period of time may be incorrect.
■ The environment must be checked before the time zone adjustment. For details, refer to Environment Check.
Attention:
Modifying the time zone may affect the local time of the system and the operation of components. Do not modify the
time zone without permission.
<timezone> represents the time zone parameter. The method to obtain valid time zone parameters is as follows: run the
Linux command tzselect on any node of the PaaS, and then select the corresponding area in accordance with the
wizard steps. Finally, the wizard will generate the time zone of the corresponding area. For example, if the wizard outputs
TZ='Asia/Shanghai', Asia/Shanghai is the generated time zone, which can be used as the time zone parameter
<timezone>.
After this command is executed, the system first checks whether all nodes are reachable. If all nodes are reachable, this
command is executed on each node. If a node is unreachable, the system terminates the execution of this command, and
the time zone of each node is not changed.
To make the time zone change take effect, run the nohup pdm-cli reboot & command to restart all PaaS nodes. nohup
means that the command is executed in the background. In the directory where the restart command is executed, run the
tail -f nohup.out command to view the screen output.
Querying the Time Zone
After the above command is executed, the system will display the current time zone configured for the PaaS system and
the time zone actually used by each node.
Command Execution Result Example:
Before deploying the PaaS, multiple external clock sources can be configured in the configuration file.
In all nodes of the PaaS, the clock source of the controller node points to an external clock source, and the clock sources
of the non-controller nodes point to the controller node. Therefore, to modify the external clock source of the PaaS is to
modify the clock source of the controller node.
The PaaS clock source can be synchronized slowly or immediately. For details, refer to the following descriptions.
Slow synchronization: The clock sources are aligned through the NTP mechanism without human intervention. During the slow synchronization, the controller node of the PaaS synchronizes with the external clock source slowly until they are the same, and then the other nodes of the PaaS synchronize with the controller node until they are the same. The PaaS time does not hop, which has little impact on the system, but the synchronization process takes a long time. If the time difference between the PaaS system and the external clock source is one minute, it takes about ten minutes for the PaaS system to complete the synchronization with the external clock source.
Immediate synchronization: Run a command manually to trigger the time to be aligned with the external clock source. Operation method: first, immediately synchronize the time of the PaaS with the specified clock source (Immediately Synchronizing PaaS Time with the Specified Clock Source), and then restart the PaaS system (Restarting the PaaS). Time synchronization can be completed quickly, but time hopping has a great impact on the system. After the synchronization, you need to run a command to restart the PaaS system.
Note:
If the time difference is less than ten minutes and the chrony service does not exit, select slow synchronization. If the
time difference is so large that the chrony service exits, select immediate synchronization.
Note:
By default, an external clock source of the PaaS can only be an IP address. If you want to set a domain name as an
external clock source, perform the following steps:
1 Before the deployment of PaaS, ping the domain name of an external clock source to obtain its IP address, and
then fill the IP address in the ntp_server field in the /etc/pdm/conf/paas.conf file. If there are multiple clock sources,
separate them with commas.
2 After the deployment of PaaS, enable the msb to interconnect with an external DNS server. For the configuration
method, refer to Setting Upstream DNS.
3 On the controller node, run the pdm-cli ntpserver replace <old ntpserver> with <new ntpserver> command to
replace the IP address of the clock source with the domain name. <old ntpserver> is the IP address and <new
ntpserver> is the corresponding domain name.
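For illustration, the following sketch replaces the clock source IP address 10.30.1.105 with the hypothetical domain name ntp.example.com, using the syntax described above:
pdm-cli ntpserver replace 10.30.1.105 with ntp.example.com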
The steps of adding/modifying an external clock source of the PaaS are as follows:
1 View the time difference between the PaaS system and the external clock source.
Run ntpdate -q <ntp_server>, where <ntp_server> indicates the address of the new clock source, for example, ntpdate -q
10.30.1.105. An example of the query result is as follows:
The returned offset value (unit: s) is the time difference between the PaaS system and the clock source.
If the time difference is less than ten minutes: synchronize the clock by adding/modifying a clock source and using the slow synchronization method. Skip steps 3 and 4 and directly add or modify the clock source. At this time, the PaaS will rely on the slow synchronization mechanism of the NTP itself to gradually align the time with the new clock source. If the time difference is one minute, the synchronization time is about ten minutes, which is a long time but has little impact on the system.
If the time difference is more than ten minutes: synchronize the clock by using the immediate synchronization method and adding/modifying a clock source. Go to Step 3. The PaaS synchronizes with the clock source immediately, but you need to restart the PaaS. In this case, the services will be interrupted. It takes about ten minutes to recover.
3 Run the pdm-cli ntpdate <ntp_server> command to make the PaaS synchronize time with the specified clock
source immediately. For details, refer to Immediately Synchronizing with the Clock Source.
5 Add or modify a clock source. For details, refer to Adding an External Clock Source for the PaaS or Modifying an
External Clock Source of the PaaS.
Attention:
If the time difference exceeds one minute and you do not perform immediate synchronization but modify the external
clock source directly, the PaaS NTP service may exit and the time cannot be aligned.
If there is a big time difference between a PaaS node and the new clock source, it may take a long time for
synchronization, or the NTP service exits and automatic time synchronization fails. To immediately synchronize the time
with the NTP clock source, run the following command to synchronize the time of all PaaS nodes with the clock source.
First, ensure that no other external clock source is configured in the PaaS. Otherwise, the NTP service of the controller
node may exit.
The value of <old ntpserver> is the external clock source configured in the system. Delete all the external clock sources
one by one. Then, immediately synchronize the time with the specified external clock source.
<ntp_server> indicates the address of the new clock source, for example, pdm-cli ntpdate 10.30.1.105.
After the above command is executed, the system synchronizes the date and time of each node in the PaaS system with
the new clock source.
Attention:
After executing the above command, restart the PaaS system. For details, refer to Restarting the PaaS.
Note:
1 The PaaS supports 10 external NTP servers. If the PaaS is not configured with an external clock source, the first
NTP server to be added is regarded as the master NTP server. Ensure that it is connected with the network of the
PaaS controller nodes and can provide the time synchronization service.
2 If the clock source configurations on the PaaS controller nodes are different, you need to delete the different
configurations first and then add the clock source configuration.
<old ntpserver> is the IP address of the existing clock source in the /etc/ntp.conf or /etc/pdm/conf/paas.conf.
Note:
If you manually add a clock source after upgrading the old version to V1.19.40.06 or later, you need to delete the manually added clock source with a command before rolling back the PaaS version.
If you deploy a version later than V1.19.40.06, upgrade it to a higher version, and then manually add an external clock source, you do not need to delete the clock source when rolling back the version.
In the NTP configuration in the /etc/pdm/conf/paas.conf, the first NTP server on the right of the equal sign is
regarded as master NTP server.
In the deployed PaaS environment, it is not allowed to use this command to delete the NTP server, but you can
use the pdm-cli ntpserver replace command to replace the NTP server. To delete the master NTP server
configuration, you need to manually delete the NTP server configuration in the paas.conf file and the line
containing the NTP server configuration in the /etc/chrony.conf file. If there are multiple PaaS controller nodes,
the configuration on all nodes needs to be deleted. If other nodes do not have this configuration, you do not
need to delete it. If the PaaS deployment fails and you are going to deploy the PaaS again, you can modify the
configurations in the paas.conf before re-deployment. You do not need to delete the previous NTP
configurations.
After the above command is executed, the system will display the external clock source information configured for the
PaaS.
Command Execution Result Example:
If an external clock source is configured properly, the query result is as follows:
If the NTP address configuration in the NTP service configuration file is different from that in the PaaS configuration file,
the configuration information on each node is printed, as shown below:
If an external clock source has been configured, when the external clock source needs to be modified, set it as the <old
ntpserver>, and set a new clock source as the <new ntpserver>.
Note:
The <old ntpserver> and <new ntpserver> values can be set as IP addresses.
Attention:
Modifying an external clock source may cause an alarm. If an alarm is reported, handle it in accordance with the alarm
information.
1 Add an IP address to the NTP whitelist of the PaaS by using the following command:
2 Add an IP address segment to the NTP whitelist of the PaaS by using the following command:
1 Delete an IP address from the NTP whitelist of the PaaS by using the following command:
disable-ntp-randomtx can only be followed by the parameter “0” or “1”. “1” indicates that the timestamp is
disabled. “0” indicates that the timestamp is enabled.
After deploying the PaaS, you can modify the date and time of the PaaS system.
Depending on whether the PaaS is configured with an external clock server, perform the following operations as required.
If the PaaS is not configured with an external clock server, run the pdm-cli date set <date> <time> command to modify the time. For details, refer to Modifying the PaaS Date and Time.
If the PaaS has been configured with an external clock server, modify the time as follows:
a Delete the external clock server. For details, refer to Changing an External Clock Server.
b Run the pdm-cli date set <date> <time> command to modify the time. For details, refer to Modifying the PaaS Date and Time.
c Add an external clock server again. For details, refer to Changing an External Clock Server.
Note:
■ Modifying the time may affect the system stability. Be cautious about modifying the time.
■ If the PaaS keeps synchronous with the external clock source, after the time is modified, the difference between
the local time and the clock source may be large, causing the NTP service to exit. Pay attention to the alarms. If there
are related alarms, follow the instructions.
■ The time modification operation may have an impact on the PaaS. For example, if the clock is changed to a future
time point and then changed back, the performance data will be affected. Refer to the technical notice and perform the
operation as required.
■ Run the following command to check the allowable time range of the PaaS system before modifying the time:
Attention:
After executing the above command, restart the PaaS system. For details, refer to Restarting the PaaS.
After the above command is executed, the system displays the date and time of each node in the PaaS system.
Command Execution Result Example:
<node_ips> indicates the IP address of the net_api network plane of a PaaS node.
■ To restart a single node of the PaaS, set <node_ips> to the IP address of the net_api network plane of the node, for
example, nohup pdm-cli reboot 192.168.200.109 &, where nohup indicates that the command is executed in the
background. In the directory where the reboot command is executed, run the tail -f nohup.out command to view the
screen output.
■ To restart all nodes of the PaaS system, do not specify the <node_ips> parameter, for example, nohup pdm-cli reboot
&, where nohup indicates that the command is executed in the background. In the directory where the reboot command
is executed, run the tail -f nohup.out command to view the screen output.
Note:
The above command is not applicable to some nodes.
After the above command is executed, the system displays whether the common services are available. If the status is
unavailable, an error is displayed.
Command Execution Result Example:
+-------------------------------+---------+---------+---------------------------------------------------------------------------+
| check_item | result | errcode | fail_info |
+-------------------------------+---------+---------+---------------------------------------------------------------------------+
| CHECK_AVA_COM_CSM | success | 0 | |
| CHECK_AVA_TENANT_OPCS | success | 0 | |
| CHECK_AVA_CMS_PostgreSQLCACHE | success | 0 | |
| CHECK_AVA_CMS_PostgreSQL | fail | 2001 | PostgreSQL_pg-vnpm check failed: deploy_status is deploy_ing, please wait |
+-------------------------------+---------+---------+---------------------------------------------------------------------------+
<node_ips> indicates the IP address of the net_api network plane of the default network type (V4 or V6) of a PaaS node.
--service-only is optional (only applicable to TECS scenarios). When the --service-only parameter is used, only the
services of the node are stopped, but the node is not shut down.
■ To shut down a single node of the PaaS system, set <node_ip> to the IP address of the net_api network plane of the
node, for example, pdm-cli shutdown 192.168.200.109.
When the --service-only parameter is used, only the services of the node are stopped, but the node is not shut down. For
example, pdm-cli shutdown 192.168.200.109 --service-only.
Command Execution Result Example:
■ To shut down all nodes of the PaaS system, do not specify the <node_ip> parameter, for example, pdm-cli
shutdown.
Note:
The above command does not support shutdown of some nodes. To shut down all nodes, you need to log in from the
local end and run the command. Otherwise, the command will fail or you cannot see the execution result due to the
floating IP address disconnection.
Updating the Version (Non-Snap Format) of a Component of the Bin, Com, or Image Type
Parameter Description
<reponame>: Default user of the software repository, generally admin. For details, refer to the information in the
/etc/pdm/deploylist/pkg_ver.lig.
<tag>: Blueprint tag. If it is not specified, the default tag is marked by the software repository.
Note:
■ Currently, only the following fields can be modified: managePassword, manageUser, snmpProtocolType, manageIp,
snmpV2Info, and snmpV3Info.
The configurations of different blades in the same shelf are written together. The number of blades configured in the
slot must be the same as the actual number, that is,
the configurations of the blades in the same shelf must be modified at the same time.
■ After the PaaS is rolled back, you must modify the hardware server or modify the configuration information in
baremetal_nodes_update.json.
■ Specify the managePassword parameter (this field must be filled in as a required field for verification, regardless of
whether it is modified), and specify the value of old_manageIp.
■ Modify other fields that can be modified.
[{
"slot": ["8", "9", "10", "11"],
"managePassword": "XXXXXX", # It can be modified.
"deviceModel": "ZTE-E9000-xx",
"manageUser": "xxxxxxxx", # It can be modified.
"snmpProtocolType": "v2c", # It can be modified.
"manageIp": "192.168.3.100", # It can be modified.
"old_manageIp": "192.168.2.100", # origanl management IP address
"snmpV2Info": { # It can be modified.
"readCommunity": "public"
},
"snmpV3Info": { # It can be modified.
"auth_protocol": null,
"priv_password": "",
"priv_protocol": null,
"user": null,
"security_level": null,
"auth_password": ""}
}]
The query result may be installed, not installed, timeout, or error code if an unknown error occurs. An example is as
follows:
If the operation is successful, a success result is returned. If the operation fails, a failure cause is displayed. An example is
as follows:
If the operation is successful, a success result is returned. If the operation fails, a failure cause is displayed. An example is
as follows:
■ Configuration Sub-commands
■ Query Sub-commands
■ Modification Sub-commands
■ Status Sub-commands
The cnrm-cli tool provides the node resource configuration and status management functions. Currently, the following
node resources are supported:
■ Exclusive core
■ Huge Page
■ Querying the configuration information of the exclusive cores and huge pages of all nodes in the cluster.
■ Querying the configuration information of the exclusive cores and huge pages of the specified node in the cluster.
■ Querying the exclusive core configuration of the specified node in the cluster.
■ Querying the huge page configuration of the specified node in the cluster.
■ Saving the configuration file of the exclusive cores of the specified node in the cluster.
■ Saving the configuration file of the huge pages of the specified node in the cluster.
■ Modifying the configuration of the exclusive cores of the specified node in the cluster.
■ Modifying the configuration of the huge pages of the specified node in the cluster.
■ Modifying the exclusive core list configuration of the specified node in the cluster in accordance with the
configuration file.
■ Modifying the huge page configuration of the specified node in the cluster in accordance with the configuration
file.
■ Querying the status of the exclusive cores and huge pages of all nodes in the cluster.
■ Querying the status of the exclusive cores and huge pages of the specified node in the cluster.
■ Querying the status of the exclusive cores of the specified node in the cluster.
■ Querying the status of the huge pages of the specified node in the cluster.
Note:
■ The configuration and status of huge pages of a node queried by the cnrm-cli tool are the sum of the huge pages of
all NUMA nodes on the node.
■ There are two ways to set huge pages with the cnrm-cli tool: one is to configure the same number of huge pages for
each NUMA node, and the other is to configure a different number of huge pages for each NUMA node.
The cnrm-cli tool uses a tree command structure. You can get help through the help or -h command. The parameters of
cnrm-cli consist of three parts: resources, operations and global filter.
cnrm-cli -h
NAME:
cnrm-cli - <subcommand> ...
USAGE:
cnrm-cli [global options] command [command options] [arguments...]
VERSION:
v1
AUTHOR:
nw <[email protected]>
COMMANDS:
config, c <subcommand> ...
state, s <subcommand> ...
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help
--version, -v print the version
Configuration Sub-commands
cnrm-cli config -h
NAME:
cnrm-cli config - <subcommand> ...
USAGE:
cnrm-cli config command [command options] [arguments...]
COMMANDS:
get get config
get_to_file store cpu or hugepage config to file /etc/cnrm-cli/cpu_config_file.json or
/etc/cnrm-cli/hp_config_file.json
set set resource config for node
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help
Query Sub-commands
The query sub-commands can query all resource configurations of all nodes, or the specified resource configurations of a
specified node through a filter.
Modification Sub-commands
The modification sub-commands must be used together with the filter to modify the configuration of the specified
resources on the specified node.
Status Sub-commands
The status sub-commands include only the query sub-commands of resource status.
cnrm-cli state -h
NAME:
cnrm-cli state - <subcommand> ...
USAGE:
cnrm-cli state command [command options] [arguments...]
COMMANDS:
get get resource state
OPTIONS:
--node value, -n value node uuid
--type value, -t value resource type(cpu or hugepage)
--help, -h show help
The query sub-commands can query the status of all resources of all nodes, or the specified resources of a specified node
through a filter.
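For illustration, a sketch that queries the huge page status of one node with the documented filters (flag placement may differ in practice):
cnrm-cli state get --node <node_uuid> --type hugepage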
The inetrules-cli tool provides the functions of configuring firewall rules for nodes, querying firewall rules, and managing
the status.
Currently, the following functions are supported:
The inetrules tool uses a single-line command structure, and you can use the inetrules help command to get help.
If no parameter is specified in the command, all firewall rules are queried, including IPv4 and IPv6 rules, which are
displayed separately.
Command:
inetrules show
Returned result:
If a parameter is specified in the command, the firewall rules of the specified network plane are queried. If only the
--srccidr parameter is specified in the command, all rules of this network plane are queried. The rules for multiple network
planes are separated by commas. The --srccidr parameter must be specified. If both the --srccidr and --portrange
parameters are specified in the command, the system determines the status of the port range on the specified network
plane.
inetrules show [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]
--srccidr: This parameter indicates the source IP address plus the mask length of the network plane, which can be viewed
by the inetrules show command.
--portrange: This parameter specifies the destination port to be queried.
2 Query the status of the specified port set on the specified network plane.
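For illustration, a query following the syntax above; the subnet and port values are arbitrary examples:
inetrules show --srccidr 192.168.1.0/24 --portrange 8080-8090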
Command:
inetrules on
Returned result:
Command:
inetrules off
Returned result:
inetrules add-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]
--srccidr: This parameter indicates the source IP address plus the mask length of the network plane.
--portrange: This parameter specifies the destination port.
Example:
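For illustration, adding a rule with the syntax above; the subnet and port values are arbitrary examples:
inetrules add-rule --srccidr 192.168.1.0/24 --portrange 8080-8090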
Delete a rule that is added manually, that is, change add-rule of the addition command to del-rule, and keep other
contents unchanged.
inetrules del-rule [--srccidr <cidrlist, split by ,>] [--portrange <portrangelist, split by ,>]
--srccidr: This parameter indicates the source IP address plus the mask length of the network plane.
--portrange: This parameter specifies the destination port.
Example:
1 The following example shows how to delete the address and port of the specified subnet.
2 The following example shows how to delete all subnet ports of Inetblock.
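For illustration, sketches for the two deletion examples; the first follows the documented syntax with arbitrary values, and the second assumes that specifying only --srccidr removes all rules of that network plane, by analogy with the show command:
inetrules del-rule --srccidr 192.168.1.0/24 --portrange 8080-8090
inetrules del-rule --srccidr 192.168.1.0/24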
The globalinetrules-cli tool provides commands for filtering the system traffic of the equipment.
Currently, the following functions are supported:
The globalinetrules tool uses a single-line command structure, and you can use the globalinetrules help command to get
help.
Parameter Descriptions
Query the traffic filtering rules of all controller nodes and display them in a dictionary list.
Command:
globalInetrules show
Returned result:
Returned result:
Example:
1 The following example shows how to set a rule that allows all packets from the subnet 192.168.1.2/16.
Returned result:
Example:
1 The following example shows how to delete a rule that allows all packets from the subnet 192.168.1.2/16.
Command:
globalinetrules clear-rules
Returned result:
Note:
The cidr must carry the prefix length.
This CLI is used to collect files, shell command execution results, PaaS platform operation logs and PaaS platform
performance data.
This CLI provides the following collection modes:
This CLI is executed only on the controller node by a user with root rights. If there are multiple controller nodes, you
can execute the collection commands on each controller node independently, without affecting one another.
The collected data can be stored in two ways:
■ Local storage
■ Remote storage. Path: the specified directory on the remote machine specified by the user. The remote machine must
support login in SSH mode.
Data Collection
Parameter Descriptions
-r --role Node role. It refers to each role configured on the node. The role names are
separated by commas, for example, paas_controller,master. To learn about the role
contents, you can execute the pdm-cli node list command on the controller node and
view the roles field. Or you can view the roles configured on each node through the
UI. If you do not enter a specific role name but enter all, the data of all roles is
collected. The collect-cli source list command can be used to view the supported
roles.
-i --ipaddr IP address of the node. Enter the IP addresses allocated on the net_api plane; multiple IP
addresses are separated by commas. For example,
192.10.20.123,192.10.20.122.
-s --scene Scenario. This parameter can be used when you can roughly identify the scenario
where the problem occurs. Options: network, storage and deploy, which indicate the
network scenario, storage scenario, and deployment scenario respectively. One
scene parameter is input at a time during data collection. The collect-cli source list
command can be used to view the supported scenes.
-c --component Component name. Multiple components are supported, such as slb. The specific
component names can be queried by using the collect-cli source list command. If you
do not enter a specific component name but enter all, the data of all components is
collected. The collect-cli source list command can be used to view the supported
components.
-inst --instance Common service instance name. You can enter one or more instance names at a
time, separated by commas. For example, kafka1,kafka2. This parameter
must be used together with the -cs parameter. If you do not enter a specific name but
enter all, the data of all instances of the common service is collected.
-d --debug Outputs the collect_log.txt. The collect_log.txt file contains the detailed information
(file size, last modification time) of the files that have been collected or have not been
collected on each node.
-l --last Period, from a time point in the past to the current time point. Unit: days; integer;
minimum: 1. For example, --last 2 means that the files modified in the [now-24*2
hours, now] range are collected.
-p --packet Format of the collected data. The default format is tar.gz. To compress files into
another format, use this parameter. Options are tar and zip. For example, -p zip
means that a .zip package is generated.
-rt --remote Remote storage mode. The path for file storage is <IP port directory>. The IP address,
port number and storage directory of the remote device are separated by spaces. If
you do not enter a port number, the default port 22 is used. For example, ‘100.20.0.1
/home/temp’ indicates that files are stored in the /home/temp/ directory on the
device whose IP address is 100.20.0.1 through the port 22.
-st --starttime Start time, format: yyyy-mm-dd hh:mm:ss (local time) or yyyy-mm-dd, for example,
‘2019-03-17 01:01:01’ or ‘2019-03-17’ (it will be supplemented automatically as
‘2019-03-17 00:00:00’). If this parameter exists, the files whose last modification
time is within the time range of [starttime, now] will be collected.
-et --endtime End time, format: yyyy-mm-dd hh:mm:ss (local time) or yyyy-mm-dd, for example,
‘2019-03-17 01:01:01’ or ‘2019-03-17’ (it will be supplemented automatically as
‘2019-03-17 00:00:00’). This parameter must be used together with starttime. At
present, [starttime, endtime] is only applicable to the ops component.
none --interval Sampling interval, which is only applicable to performance data collection of the OPS
component. The supported values are 30s, 5m, and 15m, and the default value is
5m.
none --all-common-service Flag for collecting the logs of all the common service instances. This flag is used only
when the “-r all” parameter exists. The value is true or false. The default value is
true. If this parameter is not specified or “all-common-service=true” is specified
explicitly, all common service instance logs will be collected. If “all-common-
service=false” is specified, no common service logs will be collected. Note that
there is no space around “=”.
Note:
■ The operation logs and performance data of the PaaS platform can be collected only when the component name
is ops. The performance data collection function supports two types of objects: node and component instance. The
performance data of a type of object within the specified period can be collected.
■ The endtime and interval parameters are invalid when the data of non-ops components is collected.
The starttime, endtime, and interval parameters are valid when the data of ops component is collected. When the
starttime and endtime parameters are not specified, the performance data collection period is determined based
on the interval parameter. If the interval is 5 minutes by default, the performance data within 12 hours is collected.
If the interval is 30 s, the performance data within two hours is collected. If the interval is 15 minutes, the
performance data within 24 hours is collected.
■ By default, for the ops component, the data within the last 12 hours is collected. For other objects, the data
within the complete time range is collected by default.
■ If the last parameter and the starttime or endtime parameters coexist, the last parameter prevails.
■ The -s parameter cannot be used together with the -r, -i, and -c parameters.
■ The -cs and -inst parameters must be used together, and cannot be used together with the -r, -i, -c and -s
parameters.
■ When using the remote storage mode (-rt, --remote), if you want password-free access, make sure that the
following configuration has been completed (see the example commands after these steps).
a Use ssh-keygen -m PEM -t rsa to generate an SSH key pair (public key and private key). To prevent
overwriting the original key pair, the SSH key can be generated outside the PaaS environment.
b In /paasdata/ops-tools/remote_config.json, enter the correct username for SSH login to the
remote machine.
c In /root/.ssh/ of the main controller node where the data is to be collected, place the private key
named id_rsa_collect, with permission 700.
d On the remote machine, add the corresponding public key to the ~/.ssh/authorized_keys file of the
login user.
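A minimal sketch of these steps, assuming the key pair is generated under /tmp and the remote login user is named remoteuser (both are illustrative; only id_rsa_collect and remote_config.json come from the steps above):
# a. Generate a dedicated key pair (outside the PaaS environment if you want to keep existing keys untouched)
ssh-keygen -m PEM -t rsa -f /tmp/id_rsa_collect -N ''
# b. Record the SSH username of the remote machine in the collection configuration
vi /paasdata/ops-tools/remote_config.json
# c. Place the private key on the main controller node with permission 700
cp /tmp/id_rsa_collect /root/.ssh/id_rsa_collect
chmod 700 /root/.ssh/id_rsa_collect
# d. Append the public key to the login user's authorized_keys on the remote machine
cat /tmp/id_rsa_collect.pub | ssh [email protected] 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'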
Example
1 Collect the logs and shell data on nodes 100.20.0.171 and 100.20.0.170.
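A possible form of this command, using the -i parameter described above (illustrative sketch):
collect-cli -i 100.20.0.171,100.20.0.170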
2 Collect the logs and shell data of all the nodes in the system, including the logs of the common service instances.
collect-cli -r all
Or
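For example, a sketch that makes the default explicit using the all-common-service flag described above:
collect-cli -r all --all-common-service=true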
Note:
This command can be used only after the system is deployed.
3 Collect the logs and shell data of all the nodes in the system, excluding the logs of the common service
instances.
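A possible invocation, assuming the all-common-service flag is combined with -r all as described in the parameter table (sketch):
collect-cli -r all --all-common-service=false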
Note:
This command can be used only after the system is deployed.
4 Collect the logs and shell data of all the controller nodes (node role: Paas_controller) in the system within the
recent 2 days.
collect-cli -r paas_controller -l 2
5 Collect the logs and shell data of all nodes (as the master or minion role) in the system.
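A plausible command, passing both roles to -r as a comma-separated list (illustrative):
collect-cli -r master,minion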
Note:
This command can be used only after the system is deployed.
6 Collect the logs and shell data of a single controller node (as the paas_controller role) whose IP address is
100.20.0.171 in the system.
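One way to express this with the -i parameter (sketch; the node could equally be narrowed down by role):
collect-cli -i 100.20.0.171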
Note:
This command can be used only after the system is deployed.
Note:
This command can be used only after the system is deployed.
8 Collect the logs and shell data of all components. The logs to be collected must be within the time range, that is,
the last modification time is within the range of [2019-3-17 01:01:01, current time].
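A possible command, combining -c all with the -st parameter described above (sketch):
collect-cli -c all -st '2019-03-17 01:01:01'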
Note:
This command can be used only after the system is deployed.
9 Collect the logs and shell data of the slb component, and record the details of the collected logs in the
collect_log.txt file.
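For example (illustrative, using the documented -c and -d parameters):
collect-cli -c slb -d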
Note:
This command can be used only after the system is deployed.
10 Collect the data of the slb component, and save the collected results to the /home/ubuntu/ directory of the
remote machine (100.20.0.1).
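A sketch using the -rt parameter in the '<IP port directory>' form described above (the port is omitted, so 22 is assumed):
collect-cli -c slb -rt '100.20.0.1 /home/ubuntu'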
Note:
Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.
11 If there is a problem with the network in the current environment, collect the data of the network scenario and
save the collected results to the /home/ubuntu/ directory of the remote machine (100.20.0.1).
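A possible invocation with the -s scene parameter (illustrative sketch):
collect-cli -s network -rt '100.20.0.1 /home/ubuntu'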
Note:
Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.
12 Collect the performance data within 12 hours and operation logs within 12 hours at an interval of 5 minutes, and
save them in the local disk.
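A plausible command; with the 5-minute interval the ops data of the last 12 hours is collected and, because -rt is omitted, stored locally (sketch):
collect-cli --component ops --interval 5m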
13 Collect the performance data within two hours and operation logs within two hours at an interval of 30 seconds,
and save them on the local disk. Assume that the current time is 2019-08-24 02:00:00.
collect-cli --component ops --interval 30s --starttime '2019-08-24 00:00:00' --endtime '2019-08-24 02:00:00'
14 Collect the data of the slb component within the time range of [2019-08-24 00:00:00, 2019-08-26 00:00:00], and
save the collection results in the /home/ubuntu/ directory of the remote machine (100.20.0.1).
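A sketch combining the time range with remote storage (illustrative):
collect-cli -c slb -st '2019-08-24 00:00:00' -et '2019-08-26 00:00:00' -rt '100.20.0.1 /home/ubuntu'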
Note:
Collection can be performed only if the remote machine can be accessed correctly. Ensure that you have the
right to write data to the /home/ubuntu/ directory.
15 Collect the data of the slb component in the last three days, and save it in a .zip file.
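For example (sketch, using the -l and -p parameters described above):
collect-cli -c slb -l 3 -p zip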
16 Collect the data of the 8s-minion component on the nodes 110.0.0.12 and 110.0.0.5 only.
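A possible invocation (sketch; the component name is taken from the description above and can be verified with collect-cli source list):
collect-cli -c 8s-minion -i 110.0.0.12,110.0.0.5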
17 Collect the data of the toposervice component on the nodes whose role is paas_controller.
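A plausible command combining -c and -r (illustrative):
collect-cli -c toposervice -r paas_controller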
Note:
If a large amount of data is to be collected, and the collection time may be longer than five minutes, you can use
nohup to run the collection command in the background.
■ If a command contains nohup...&, the command runs in the background. The purpose is to
prevent the command from being terminated after the ssh connection is interrupted. For example,
nohup collect-cli -c all &
This indicates that the collect-cli -c all command runs in the background.
■ When a command runs in the background, there is no output on the screen. You can execute the tail -f nohup.out
command in the current directory to view the output. You can press Ctrl+C to stop the command output.
18 Collect the data of the kafka-zyh1 instance under the common service Apache-Kafka.
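A sketch using the -cs and -inst parameters together, as required in the notes above:
collect-cli -cs Apache-Kafka -inst kafka-zyh1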
19 Collect the data of all instances under the common service Apache-Kafka.
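Similarly, a sketch that passes all as the instance name:
collect-cli -cs Apache-Kafka -inst all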
Output Result
The contents collected on each node are saved into the following files (taking the zip format as example):
shells.zip Always generated. All the shell command outputs, including the shell commands executed
in the component containers (if the input parameter is the
component) and the shell commands executed on the nodes
output.txt Always generated. Result statistics of the collected files and shell commands
collect_log.txt Optional; generated only when the -d (--debug) parameter is entered. Collected file name, size, modification time, and discard
reason
Example
Output the currently configured collection directory and shell commands. For the displayed contents, see the
section “Configuration File Description”.
Edit the directory to be collected and the shell commands. For the edited contents, refer to Section
“Configuration File Description”.
Note:
Press “Insert” to enter the edit mode, and press “Esc” to exit the edit mode.
After editing, enter :w and press “Enter” to save the configuration. Enter :q to exit the editor.
To ensure that the controller node can operate properly, the upper limit of the quota of the data collection directory
/paasdata/collect_data is set to 5 GB.
If there are too many data collection nodes or the data to be collected on each node is too large, and the default upper
limit of the collection directory (5 GB) is exceeded, you need to modify the quota of the data collection directory in
accordance with the size of the collected data and the free space of the disk of the execution node. The unit is GB.
If the free space of the collection directory is too small to meet the collection requirements, all historical collection data is
deleted automatically.
Command line format: collect-cli quota
Example
Display the quota of the current collection directory and the available disk space.
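For example, based on the command line format above (illustrative):
collect-cli quota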
To modify the file collection range and shell commands of a node or component, you need to edit the configuration file.
For how to obtain the configuration file, refer to Section “Modifying a Configuration File”.
Note:
Parameter Description
LOG_FILE_SIZE_LIMIT Maximum size of a single file. If the size of a file exceeds this value, the file will
not be collected. Default: 1 GB.
COLLECT_TIMEOUT Duration during which the execution node waits for collection to be completed.
If the collection time of a single node exceeds this duration, the collection fails.
Default: 5 minutes.
SHELL_EXEC_TIMEOUT Execution duration of a single shell. If the execution duration of a single shell on
a node exceeds this duration, the collection fails. Default: 30 seconds.
QUOTA_LIMIT Upper limit of the quota, that is, the data size when the collected data is saved
locally. When you modify the quota value through the quota modify or quota
default command, the macro value changes. Default: 1 GB.
Dictionary Description
SHELLS_FOR_EXC Shell commands executed on the node. For each shell, a file is output.
SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG Shell commands executed in the container. All shells are output in one
file.
■ The files or commands in the common part of the LOGS_FOR_ZIP and SHELLS_FOR_EXC are collected for each node.
■ Data collection parameter configuration, which can be modified as required. The contents to be collected should
Example
1 For the node whose role is elk, add the /etc/resolv.conf file for collection.
LOGS_FOR_ZIP = {
'elk': [
'/root/info/logs/'
],
}
LOGS_FOR_ZIP = {
'elk': [
'/root/info/logs/',
'/etc/resolv.conf'
],
}
2 For the node whose role is minion, add the shell command ps.
SHELLS_FOR_EXC = {
'minion': [
'systemctl status knitter-agent.service'
],
}
SHELLS_FOR_EXC = {
'minion': [
'systemctl status knitter-agent.service',
'ps'
],
}
3 For the slb component, add the /paasdata/op-log/apiroute file for collection.
LOGS_FOR_COMPONENT_ZIP = {
'slb': [
'/paasdata/op-log/eslb'
],
}
LOGS_FOR_COMPONENT_ZIP = {
'slb': [
'/paasdata/op-log/eslb',
'/paasdata/op-log/apiroute'
]
}
4 For the slb component, add the shell command ps that is executed on the node.
SHELLS_FOR_COMPONENT_EXC = {
'slb': [
'cat /proc/meminfo'
],
}
SHELLS_FOR_COMPONENT_EXC = {
'slb': [
'cat /proc/meminfo',
'ps'
]
}
5 For the slb component, add the shell command ps that is executed in the container.
SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'ifconfig'
],
},
],
}
SHELLS_FOR_EXEC_IN_CONTAINER_TO_LOG = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'ifconfig',
'ps'
],
},
],
}
6 For the slb component, add the security check script python /data/autocheck.py 1 that is executed in the container. The
script is executed only, and no data is output.
SHELLS_FOR_EXEC_IN_CONTAINER = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'cp -r /etc/pod-config /vnslog/'
]
}
]
}
SHELLS_FOR_EXEC_IN_CONTAINER = {
'slb': [
{
'container_name': 'c-eslb',
'shell': [
'cp -r /etc/pod-config /vnslog/',
'python /data/autocheck.py 1'
]
}
]
}
7 For the slb component, add the /root/nodes file for collection. The file is on the controller node.
COFNIG_FOR_ZIP = {
'slb': [
'/etc/pdm/conf/vnm_network.conf'
],
}
COFNIG_FOR_ZIP = {
'slb': [
'/etc/pdm/conf/vnm_network.conf',
'/root/nodes'
],
}
8 For the sys_server component, add the contents to be collected by using wildcards.
COFNIG_FOR_ZIP = {
'sys_server': [
'/var/log/messages',
'/var/log/messages.1.gz',
'/var/log/messages.2.gz',
'/var/log/messages.3.gz',
'/var/log/messages.4.gz',
'/var/log/messages.5.gz'
],
}
COFNIG_FOR_ZIP = {
'sys_server': [
'/var/log/messages*'
],
}
9 Collect data by using simple reverse filtering rules. For example, only the valid files in /var/log are collected; files
with the suffixes sig, ver, doc, and txt are not collected, and the file named etcd1 is not collected.
COFNIG_FOR_ZIP = {
'common': [
'/var/log/'
],
}
COFNIG_FOR_ZIP = {
'common': [
'/var/log/!(*.sig, *.ver, *.doc, *.txt, etcd1)'
],
}