
H3C CloudOS 5.0 Cloud Operating System

Technical White Paper (IaaS)

Product version: 5.0

Copyright © 2022 H3C Technologies Co., Ltd. All rights reserved.

No part of the contents of this document may be extracted or reproduced by any company or individual without prior
written consent of H3C, and shall not be transmitted in any form.

Except for the trademarks of H3C, the trademarks, product logos, and product names of other companies in this
manual are owned by their respective rights holders.
The information in this document is subject to change without notice.
Contents

1 Overview
1.1 Glossary
1.2 Background
1.3 Platform capabilities
1.4 Platform architecture
1.5 Platform features
1.6 Optimization of open source technologies

2 Introduction of IaaS service function technology
2.1 General introduction
2.1.1 Glossary
2.1.2 Background
2.1.3 Service-oriented architecture
2.1.4 Service features
2.2 Cloud host service
2.2.1 Service introduction
2.2.2 Product architecture
2.2.3 Functional characteristics
2.3 Bare metal service
2.3.1 Service introduction
2.3.2 Product architecture
2.3.3 Functional characteristics
2.3.4 Application restrictions
2.4 Image service
2.4.1 Service introduction
2.4.2 Product architecture
2.4.3 Functional characteristics
2.4.4 Application restrictions
2.5 Snapshot service
2.5.1 Service introduction
2.5.2 Product architecture
2.5.3 Functional characteristics
2.6 Cloud desktop service
2.6.1 Service introduction
2.6.2 Product architecture
2.6.3 Functional characteristics
2.7 Cloud hard disk service
2.7.1 Service introduction
2.7.2 Product architecture
2.7.3 Functional characteristics
2.7.4 Application restrictions
2.8 Object storage service
2.8.1 Service introduction
2.8.2 Product architecture
2.8.3 Functional characteristics
2.9 File storage service
2.9.1 Service introduction
2.9.2 Product architecture
2.9.3 Functional characteristics
2.10 Cloud network disk service
2.10.1 Service introduction
2.10.2 Product architecture
2.10.3 Functional characteristics
2.10.4 Application restrictions
2.11 Cloud backup
2.11.1 Service introduction
2.11.2 Product architecture
2.11.3 Functional characteristics
2.11.4 Application restrictions
2.12 Virtualization management
2.12.1 Service introduction
2.12.2 Product architecture
2.12.3 Functional characteristics
2.12.4 Application restrictions
2.13 Classic network
2.13.1 Service introduction
2.13.2 Product architecture
2.13.3 Functional characteristics
2.13.4 Application restrictions
2.14 Virtual private cloud
2.14.1 Service introduction
2.14.2 Product architecture
2.14.3 Functional characteristics
2.14.4 Application restrictions
2.15 DNS service
2.15.1 Service introduction
2.15.2 Product architecture
2.15.3 Functional characteristics
2.15.4 Application restrictions
2.16 QoS service
2.16.1 Service introduction
2.16.2 Product architecture
2.16.3 Functional characteristics
2.16.4 Application restrictions
2.17 Server load balancer service
2.17.1 Service introduction
2.17.2 Product architecture
2.17.3 Functional characteristics
2.17.4 Application restrictions
2.18 Elastic IP service
2.18.1 Service introduction
2.18.2 Product architecture
2.18.3 Functional characteristics
2.18.4 Application notice
2.19 NAT gateway service
2.19.1 Service introduction
2.19.2 Product architecture
2.19.3 Functional characteristics
2.19.4 Application restrictions
2.20 Firewall service
2.20.1 Service introduction
2.20.2 Product architecture
2.20.3 Functional characteristics
2.20.4 Application restrictions
2.21 Anti-virus service
2.21.1 Service introduction
2.21.2 Product architecture
2.21.3 Functional characteristics
2.21.4 Application restrictions
2.22 IPS service
2.22.1 Service introduction
2.22.2 Product architecture
2.22.3 Functional characteristics
2.22.4 Application restrictions
2.23 Service chain service
2.23.1 Service introduction
2.23.2 Product architecture
2.23.3 Functional characteristics
2.23.4 Application restrictions
2.24 VPN service
2.24.1 Service introduction
2.24.2 Product architecture
2.24.3 Functional characteristics
2.24.4 Application restrictions
2.25 Security group service
2.25.1 Service introduction
2.25.2 Product architecture
2.25.3 Functional characteristics
2.26 Hosting service
2.26.1 Service introduction
2.26.2 Product architecture
2.26.3 Functional characteristics
2.26.4 Application restrictions
2.27 Minicomputer service
2.27.1 Service introduction
2.27.2 Product architecture
2.27.3 Functional characteristics
2.27.4 Application restrictions
1 Overview
1.1 Glossary

IaaS (Infrastructure as a Service): The services provided to consumers are the use of the computing infrastructure, including processing (CPU), memory, storage, network, and other basic computing resources. Users can deploy and run arbitrary software, including operating systems and applications. Consumers do not manage or control the underlying cloud computing infrastructure, but can control the choice of operating system, storage space, and deployed applications, and may also gain control of limited network components (such as routers, firewalls, load balancers, etc.).

PaaS (Platform as a Service): The service provided to consumers is the deployment of applications onto the vendor's cloud infrastructure. Applications may be purchased by customers or developed by customers using the development languages and tools (such as Java, Python, .NET, etc.) provided by the product. Customers do not need to manage or control the underlying cloud infrastructure, including the network, servers, operating systems, and storage, but they can control the deployed applications and possibly the hosting environment configuration of the running applications.

DevOps (Development and Operations): The middle ground connecting development and operations, and also a best-practice combination of tool chain and methodology.

K8s (Kubernetes): K8s is an open source container cluster management system originated by Google. Based on container technology such as Docker, K8s provides a series of functions for containerized applications, such as deployment and operation, resource scheduling, service discovery, and dynamic scaling, which makes large-scale container cluster management more convenient.

MicroServices (MicroServices): Microservices are an architectural style in which a large, complex application consists of one or more microservices. Each microservice in the system can be deployed independently, and the microservices are loosely coupled. Each microservice focuses only on completing one task and accomplishing it well; each task represents a small service capability.

XaaS (X as a Service): That is, everything is a service, representing "X as a service", "anything as a service", or "everything as a service". The abbreviation refers to services that are increasingly provided over the Internet rather than locally or on site. The essence of cloud computing is XaaS.

REST API (RESTful API): A RESTful API is provided by the back-end server and invoked by the front end. The front end calls the API to send HTTP requests to the back end, and the back end responds to the requests and returns the processing results to the front end. REST is a typical HTTP-based architectural style.

VLAN (Virtual Local Area Network): A physical LAN is divided into multiple logical LANs, namely VLANs. Hosts in the same VLAN can communicate directly, while hosts in different VLANs cannot. After VLAN division, broadcast packets are limited to the same VLAN, that is, each VLAN is a broadcast domain.

VXLAN (Virtual Extensible Local Area Network): VXLAN technology takes the existing Layer 3 physical network as the underlay network and builds a virtual Layer 2 network, namely the overlay network, on top of it. Nodes in the overlay network do not need to be aware of the underlying physical implementation; a P2P network is a familiar example of an overlay network.

CI/CD (Continuous Integration/Continuous Deployment): Continuous Integration/Continuous Deployment.

PV (Persistent Volume): A piece of network storage in the cluster that has been provisioned by an administrator. A PV is a volume plug-in like a volume, but has a life cycle independent of any individual Pod that uses it.

PVC (Persistent Volume Claim): A user's request for storage, which is similar to a Pod. A Pod consumes node resources, while a PVC consumes storage resources.

DRBD (Distributed Replicated Block Device): A software-based, shared-nothing storage solution for distributed replicated block devices, aiming to mirror block devices (hard disks, partitions, logical volumes, etc.) between servers. Mirroring means that when an application completes a write operation, the data it submits is not only saved on the local block device, but is also copied by DRBD and transmitted over the network to the block device of another node, so that the data on the block devices of the two nodes stays consistent.

FlexVolume: A GA feature of Kubernetes 1.8 and later that lets users write their own drivers and add volume support to Kubernetes. If the --enable-controller-attach-detach Kubelet option is enabled, the vendor driver should be installed in the volume plug-in path on each Kubelet node and master node.
1.2 Background
With the rapid development of information technology, long-standing problems in traditional
data center management, such as resource bottlenecks, information silos, inconsistent standards,
complex systems, and low service levels, have become increasingly acute. The overall IT
management and control model urgently needs to shift to a cloud model. Therefore, more and more
enterprises and organizations are focusing on the transformation from traditional IT to cloud IT,
aiming to unify IT operations through cloud computing technologies and services and to improve
operational efficiency. In the process of business development and digital transformation, more
and more enterprises and organizations expect IT departments to respond to continuously evolving
business requirements and to analyze massive data through intelligent applications, so as to
improve their level of digitalization and intelligence.
H3C CloudOS 5.0 (hereinafter referred to as CloudOS) is a cloud operating system of H3C, and
is also a basic support platform for the corporate ICT transformation and rapid business
innovation in the cloud era. As a full-stack cloud platform, CloudOS integrates multiple technical
capabilities and cloud scenario capabilities of various industries, such as AI, big data, and IoT.
With the help of powerful computing power and mass storage, relying on intelligent data analysis,
CloudOS enables the virtualization of resources and provides service operations, realizing the
simplification, platformization and servitization of IT technology, and helping users to deliver
outstanding applications and functions in a complex and diverse IT environment. At the same
time, CloudOS provides support for important IT initiatives such as containerization and
microservices, and helps hundreds of users to realize digital transformation.

1.3 Platform capabilities


Basic capabilities
The basic capabilities of CloudOS mainly include super-large-scale distributed management, the
capability to convert resources into services, comprehensive automated O&M management, and
application life cycle management. For example, users can manage cloud hosts and storage through
the self-service portal (startup, shutdown, restart, resume, suspend, modify, snapshot, migrate,
change owner, configure QoS policies, etc.), and monitor the CPU, memory, disk, and I/O usage of VMs.
The super-large scale distributed management capability
Its core technology is based on the open source Kubernetes project (originated by Google and now
hosted under the Linux Foundation's CNCF). It is responsible for underlying distributed cluster
management, giving CloudOS users access to advanced open source cloud technology and making the
user's cloud platform highly available, scalable, and maintainable.
The capability to convert resources into services
It is mainly manifested in converting the resources of the underlying data center, such as
virtualization resources, bare metal resources, storage, network, software, and data into on-
demand cloud services, such as computing storage services, network security services,
cloud database services, application management services, data analysis services, and
management department services.
The comprehensive automatic O&M management capability
Cloud application monitoring management that allows users to monitor the deployed business;
Cloud service monitoring management that allows users to monitor the requested cloud
services;
Cloud operation and maintenance analysis management that analyzes cloud operation and
maintenance data and optimizes cloud operation and maintenance capabilities;
Cloud infrastructure monitoring management that monitors all infrastructure resource clusters;
Cloud infrastructure automatic deployment management capability that performs automatic
deployment on all infrastructure resource clusters.

The application life cycle management capability
At the data center level, resources are coordinated and scheduled based on applications to
maximize resource utilization;
The business scale automatically and flexibly expands to ensure that the business is available
at any time;
Through Blue-Green Deployment, Canary Release and other methods, early detection of
business problems and adjustment can be realized to reduce the risks of application
upgrade;
All kinds of software, such as office, government affairs, education and finance, are transformed
into service resources to achieve application classification management;
The application partition management is realized by user isolation, development and O&M
isolation, special area setup, etc;
The basic applications are arranged into different cloud services to support the automatic
deployment of users, which is convenient for users to use in the form of application.
XaaS capability
Cloud operating system CloudOS realizes the unified management and intelligent scheduling of
heterogeneous resources in the data center, and provides the following support capabilities for
the upper layer XaaS:
Effectively connect to data center infrastructure resources based on stable and reliable IaaS
service capabilities, and realize automated delivery through an integrated operation and
maintenance portal, helping users elevate their information infrastructure from a cost center
to a value center.
Provide users with the best resource platform for cloud applications based on the container
service capability, ensuring the stable operation of customer business and providing strong
support for the continuous growth of customer business.
Help clients establish an integrated development and operations management system, shorten the
business delivery cycle, and build new business pipelines, based on DevOps service capabilities
and H3C's practical experience with DevOps.
Provide users with an industry-leading microservice framework based on the microservice
governance capability, so that clients can focus on the business itself, realize the rapid
development and efficient management of microservice applications, and facilitate the
transformation of customer business into the cloud.
Effectively connect multiple scenarios such as microservices and DevOps, based on the
comprehensive resource support provided by IaaS and container services and combined
with PaaS related capabilities such as database as a service, middleware as a service,
application management service, etc., to solve the challenges and difficulties faced by
enterprise software products in the whole life cycle.
Based on cloud big data service, provide users with a full range of solutions, including data
acquisition and conversion, computing and storage, analysis and mining, BI display, and
operation and maintenance management, to help users build a massive data processing
system, excavate the intrinsic value of data, and create new business opportunities.
Fully meet the user's demand for elastic expansion of AI computing resources, and help
enterprises to truly move towards networking, digitization and intelligence, based on the
optimized distributed training model and rich user-defined software and image library.

1.4 Platform architecture


The H3C CloudOS platform is in a “1+1+n” architecture. The platform adopts a service-oriented
architecture design. The REST API is used for interaction between subsystems, and each
subsystem can run independently. The H3C CloudOS platform is delivered as a basic platform, a
set of system components, and n sets of cloud services.

Figure 1 CloudOS product architecture

Basic platform
The basic platform is the minimum set of functions to ensure the normal operation of CloudOS
management components, and provides service support capabilities for the normal operation of
management components, mainly including:
Portal framework: It is the front-end framework of CloudOS, where subsequent cloud service
interfaces can be registered and displayed as required;
API gateway: It provides service routing, access control and other related capabilities for
CloudOS management components;
User management and authentication: It provides unified user management for cloud services, so
that you only need to focus on service-related permission control for each cloud service;
Container-related components: It ensures the deployment and normal operation of H3C CloudOS
management components;
Database-related components: It provides strong support for the data storage and reliability of cloud
services;
Fault detection and monitoring: It provides basic O&M capabilities for H3C CloudOS management
components. It monitors the operating status of each component, detects faults in management
components, and sends the corresponding information to the designated receiver through the
notification service.
System components
System components mainly provide high-level service O&M, log analysis and operation-related
capabilities for CloudOS management components.
Microservice-related components: They provide service governance, service topology and other
functions for management components, to realize the observability of each microservice in
CloudOS;
Log-related components: They provide more advanced log management capabilities, which can
be used by all microservices on the upper layer, to achieve cloud service log management
through the unified log center.
Application deployment: It supports traditional software deployment;
Performance acquisition: It provides the storage of resources, applications and other related
performance data for each service component.
Cloud services

Cloud services mainly provide users with different levels of cloud service capabilities. From IaaS
service capabilities to PaaS service capabilities and to AI service capabilities, users can choose
the corresponding service capabilities according to their own needs.
Platform key features
Support for multiple virtualization platforms: supports CAS, VMware, KVM, Xen, PowerVM, and other
virtualization platform types.
Extensible management scale: provides two extension schemes for multiple data centers, namely
hierarchical management and subregional management.
Flexible networking schemes: supports a variety of H3Cloud networking schemes for data centers
of different scales and management modes, such as the VLAN light scheme, the VLAN VPC
scheme, and the host, network, and hybrid VXLAN VPC schemes.
Application scheduling and resource management: realizes automated whole-life-cycle
management, from application modeling and orchestrated deployment to resource scheduling,
elastic scaling, monitoring, and self-healing.
Pipeline management of application development: realizes end-to-end CI/CD automation, from
source code compilation and building to automatic deployment and upgrade.
Cloud middleware services: provides databases, message-oriented middleware, and other services
required by cloud applications.
Microservice management: provides applications with a series of distributed/microservice
governance capabilities such as automatic registration, discovery, governance, isolation,
and call analysis, shielding the complexity of distributed systems.

1.5 Platform features


ABC integration
Interface integration: provide AI (A), big data (B) and cloud computing (C) related services in the
form of cloud services on a unified interface, to realize the overall control with one platform.
Architecture integration: the underlying integration of product architecture, which performs the
componentization and integration of AI and big data services into CloudOS, to achieve
unified scheduling.
Function integration: IaaS provides infrastructure resources and provides resource support and
automatic linkage for the AI, big data, and PaaS application layers.
Open cloud service access capability
K8s, the mainstream open source container orchestration engine framework in the industry, is
adopted to maximize the value of resource pool.
According to the needs of users, AI (A), big data (B), cloud (C) services are arranged and
combined to provide users with business services on demand.
The plug-in open architecture is adopted to realize the deep integration of ecological applications
in multiple industries, providing users with more abundant service capabilities, and
accelerating the business innovation of clients.
Achieve rapid business launch with agile development
The PaaS capability has been significantly enhanced to accelerate application delivery
through the combined capabilities of containers, microservices, and DevOps, providing an
integrated support platform, from R&D to operation, for both traditional and cloud native
applications.

1.6 Optimization of open source technologies


Decoupling the basic platform from shared storage
A disk partition on each management node is combined through DRBD into highly available
block storage at the kernel level. Based on this block storage, the Pods of the management
platform's management components achieve high-availability switching among the three servers
by mounting disk blocks through PV -> PVC and the PV + PVC + FlexVolume mechanism, and no
longer depend on external shared storage. In informal scenarios such as POC demonstrations,
both the basic platform and the cloud services can use this built-in shared local storage,
which greatly reduces the resource requirements in such scenarios.
Figure 2 Principle for decoupling shared storage of basic platforms
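The following minimal sketch illustrates the PV + PVC binding pattern described above using the official Kubernetes Python client. It is illustrative only: the FlexVolume driver name (h3c/drbd), the DRBD resource name, the namespace, and the capacity are hypothetical placeholders, not the actual CloudOS driver or configuration.

# Illustrative sketch: expose a DRBD-backed block device to a management Pod
# through a PV bound by a PVC. Driver name, options, and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside the cluster
core = client.CoreV1Api()

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "drbd-pv-0"},
    "spec": {
        "capacity": {"storage": "20Gi"},
        "accessModes": ["ReadWriteOnce"],
        "persistentVolumeReclaimPolicy": "Retain",
        # FlexVolume lets an out-of-tree driver present the DRBD device.
        "flexVolume": {"driver": "h3c/drbd", "options": {"resource": "r0"}},
    },
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "drbd-pvc-0"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "20Gi"}},
        "volumeName": "drbd-pv-0",   # bind the claim to the PV above
    },
}

core.create_persistent_volume(body=pv)
core.create_namespaced_persistent_volume_claim(namespace="cloudos", body=pvc)

A management component's Pod would then reference drbd-pvc-0 in its volumes section; if that Pod is rescheduled to another of the three servers, DRBD replication keeps the underlying block data consistent.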

High-availability database clusters: enhanced stability, automatic fault recovery, and no
dependence on shared storage
A self-developed MHA + MaxScale + MySQL high-availability database cluster, with no shared
storage dependence, substantial stability enhancements, and a fast automatic fault recovery
mechanism with no data loss.
On-demand request rate limiting
Extended HAProxy supports automatic rate limiting and circuit breaking. The system can
quickly process and respond to user requests while ensuring the availability of the
platform.
When a large number of requests arrive in a short period of time (high concurrency), the
system responds slowly. The most common user reaction is to abandon the previous request and
reload the page to initiate a new one. The backlog of "legacy" operations, coupled with new
requests, makes the situation worse. When the load balancer detects that a client's requests
exceed the warning threshold within a certain period, it performs rate limiting, and HAProxy
filters out the over-limit requests. The system can then quickly process and respond to the
user's requests, so that the business can proceed normally.
When requests time out due to service unavailability or other factors, clients refresh the
page, and again the backlog of "legacy" operations plus new requests makes the situation
worse. HAProxy detects that the server returns 5xx errors and performs rate limiting so that
system pressure does not become too high. Moreover, thanks to this protection at the load
balancer, retried requests from the client, and requests from other clients that are
throttled, do not put excessive pressure on the back end, so the system keeps running
normally.

Figure 3 Extended HAProxy supports automatic rate limiting and circuit breaking

Fault detection, visualization and self-healing


Based on years of experience implementing CloudOS projects, containers, and microservice
platforms, dozens of system problems have been summarized, and corresponding automatic
inspections and repairs are carried out for them, so as to achieve a stable basic
environment, fault visibility, and self-healing. Stability management of the basic
environment is realized by means of basic environment stability guarantees, fault warning,
fault detection, and automatic fault repair. In extreme abnormal situations, there are three
ways to locate and repair a problem: checking the exception warning on the system interface,
locating the cause of the exception with one-key fault detection, and letting the system
background repair the exception automatically. The underlying system is extended on a native
basis to ensure stability: classifying services by QoS to ensure the operation of basic
services such as databases and fault detection services; reserving resources for basic
services so that they are not preempted by business services; and optimizing Etcd code to
reduce sensitivity to I/O performance.

2 Introduction of IaaS service function technology


2.1 General introduction
2.1.1 Glossary

OpenStack: OpenStack is an open source cloud computing management platform project, which consists of several main components that accomplish specific tasks. OpenStack supports almost all types of cloud environments. The project aims to provide a simple, scalable, rich, and unified cloud computing management platform. OpenStack delivers Infrastructure as a Service (IaaS) solutions through a variety of complementary services, each of which provides APIs for integration.

Container: Containers package software into standardized units for development, delivery, and deployment.
• A container image is a lightweight, executable, stand-alone package that contains everything required to run the software: code, runtime environment, system tools, system libraries, and settings.
• Containerized software is suitable for Linux- and Windows-based applications and runs consistently in any environment.
• Containers give software independence from differences in the external environment (for example, differences between development and staging environments), helping to reduce conflicts between teams running different software on the same infrastructure.

Docker: Docker is the world's leading software container platform. It is developed and implemented in Google's Go language. Based on technologies such as Linux kernel cgroups, namespaces, and union file systems such as AUFS, Docker encapsulates and isolates processes, which is virtualization at the operating system level. An isolated process is also called a container because it is independent of the host and of other isolated processes. Docker was originally implemented based on LXC. Docker can automate repetitive tasks, such as setting up and configuring development environments, which frees developers to focus on what really matters: building outstanding software. Users can easily create and use containers and put their applications into them. Containers can also be version-managed, copied, shared, and modified just like ordinary code.

Bare metal (Bare Metal): Computing services with both the elasticity of virtual machines and the performance of physical machines, providing you and your enterprise with dedicated physical cloud servers that offer excellent computing performance and data security for core databases, key application systems, high-performance computing, big data, and other businesses. Tenants can apply flexibly and use on demand.

Virtualization: Virtualizing one computer into multiple logical computers through virtualization technology. With multiple logical computers running on a single computer simultaneously, each logical computer can run a different operating system, and applications run in independent spaces without affecting each other, significantly improving the efficiency of the computer.

DNS (Domain Name System): Converts the domain names that people commonly use (for example, www.test.com) into the IP addresses (for example, 192.168.1.1) used for computer connections.

QoS (Quality of Service): Controls the network bandwidth of virtual machines. By specifying QoS policies for virtual machines, the limited network resources can be better utilized.

2.1.2 Background

Construction requirements of cloud data center


The large-scale use of IT technology makes modern enterprises more competitive than
ever before. However, with the wide application of IT technology, the construction cost and
management cost of IT platform have been greatly increased. In addition, with the long
business deployment cycle and isolated infrastructure, the IT system which should have
contributed to the development of enterprises has gradually become the bottleneck of rapid
development of enterprises.
The emergence of cloud computing technology subversively changes the IT platform
construction and service mode of data centers. Its features of online management, on-
demand acquisition, and rapid delivery are favored by most enterprises. More and more
enterprises begin to build data centers using cloud computing technology, namely cloud
data center.
As an important part of cloud data center construction, CloudOS IaaS can integrate a
large and complex data center into a single management object, namely the cloud. Cloud users
can apply for and manage the required IT resources in the cloud through the CloudOS IaaS
cloud service portal.
Management requirements of cloud data center
The computing virtualization, storage virtualization and network virtualization technology
widely used in data center make the computing, storage and network resources of data
centers highly flexible. However, the flexibility of virtualization is greatly undermined by the
fact that “all resource allocation and use still need to be completed by the administrator”. If
these tasks are assigned to users themselves, the data center will be in inevitable chaos.
CloudOS IaaS can parse the service requests submitted by cloud users into the
instructions that can be understood by the infrastructure and execute them automatically.

With the help of operation functions such as specification, billing, process, and lease term,
CloudOS IaaS enables the resources of the whole cloud data center to be used and
allocated automatically and orderly, which can reduce the workload of administrators and
improve the operation efficiency of the data center.

2.1.3 Service-oriented architecture

CloudOS IaaS services mainly provide cloud services such as cloud host, cloud disk, cloud
network disk, cloud firewall, cloud load balancer, and elastic IP, and help users quickly open cloud
services through process service templates and operation guides. At the same time, it provides
administrators with rich operation and O&M support capabilities such as multi-level organizational
structure, business approval process, billing, monitoring, and statistical analysis.
Figure 4 IaaS service-oriented architecture

CloudOS IaaS service platform is composed of multiple services. Services are called by Restful
APIs, and each module performs its own duties to provide virtual machine, bare metal and other
services for the platform. IaaS service mainly manages the underlying virtualization software
through the middle layer, OpenStack. CloudOS IaaS does not directly communicate with
virtualization software. All operations are completed by calling OpenStack standard APIs, and
OpenStack calls the virtualization software to complete corresponding actions. The component
logic diagram of OpenStack is as follows:
Figure 5 The component logic diagram of OpenStack

Keystone: identity service, which is used for authentication and authorization between users and
modules.
Glance: image service, which provides image services for virtual machines.
Nova: computing service, which provides VM services, and can be combined with Ironic to
provide bare metal services.
Neutron: network service, which provides network services for virtual machines, bare metal, etc.
Cinder: block storage service, which provides block storage services for virtual machines.
Swift: object storage service.
Ironic: bare metal service, which provides bare metal services combined with Nova.
Ceilometer: monitoring service.
OpenStack, as the middleware of protocol adaptation layer, exists in the whole architecture of
CloudOS IaaS. The core value of OpenStack is the design idea of modular system architecture
based on open APIs and complete decoupling, which makes OpenStack system architecture
open and compatible. The relationship between OpenStack and external systems can be seen
more clearly in the CloudOS IaaS business logic architecture diagram below.
Figure 6 CloudOS IaaS business logic architecture diagram

OpenStack provides the northbound REST APIs for the upper layer. The business modules of
CloudOS IaaS communicate with OpenStack through REST APIs to realize resource scheduling
and business process.
OpenStack communicates with the third-party virtualization software through the plugin
mechanism. For example, through the Nova plugin provided by CVM, OpenStack can control the
computing resources managed by CVM and carry out VM creation and other operations. Through
the Cinder plugin provided by CVM, OpenStack can control the storage resources managed by
CVM to create and mount disk volumes. Through the Neutron Plugin provided by VCF Controller,
OpenStack can control the underlying VXLAN network.
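The sketch below illustrates this northbound call pattern with the openstacksdk Python library, invoking Keystone, Glance, Nova, and Neutron the way an upper-layer business module would. It is a simplified, assumption-laden example (the clouds.yaml profile name and the image, flavor, and network names are placeholders), not CloudOS internal code.

# Illustrative sketch of the northbound OpenStack call pattern (not CloudOS code).
# Assumes a clouds.yaml entry named "cloudos" with valid Keystone credentials.
import openstack

conn = openstack.connect(cloud="cloudos")              # authenticates via Keystone

image = conn.image.find_image("centos-7-x86_64")       # resolved by Glance
flavor = conn.compute.find_flavor("2c4g")              # resolved by Nova
network = conn.network.find_network("tenant-vlan100")  # resolved by Neutron

# Nova schedules the instance; the virtualization backend (for example CAS)
# is driven through its Nova plugin, as described above.
server = conn.compute.create_server(
    name="demo-host",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)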

2.1.4 Service features

• OpenStack-based development
CloudOS IaaS is developed based on the OpenStack core components (Nova, Cinder, Neutron,
Glance, Keystone, etc.), and this solid foundation makes CloudOS IaaS flexible and highly
scalable.
• Convenient container deployment

CloudOS cloud operating system repackages various components and services into the
unified installation framework of CloudOS basic platform, realizing the automatic installation
and deployment based on Docker container and shielding the complexity of various
component installation for users.
• Support multiple virtualization platforms
CloudOS IaaS supports virtualization platforms such as CAS, VMware, KVM, Xen and
PowerVM. Users can build cloud computing data center based on existing virtualization
platform, so as to protect existing investment and reduce business changes.
• Secure and flexible networking mode
CloudOS IaaS supports the networking with VLAN or VXLAN technology, which can be done
by both physical network elements and NFV network elements. The secure and flexible
networking mode makes CloudOS IaaS suitable for cloud data centers of various scales.
• Complete cloud operation function
CloudOS IaaS provides powerful operation functions such as quota management,
specification management, billing management, process management and tenant
management. Users can customize their own operation strategies according to their actual
situations.
• Rich cloud computing service
CloudOS IaaS has powerful resource abstraction and on-demand supply capability, which
can provide users with computing, storage, network, security, database, application and
other resources of traditional data center in the form of cloud services.
• Open application program interface
The REST APIs of CloudOS IaaS provide users with the capability to re-develop based on
CloudOS IaaS, which facilitates users to connect CloudOS IaaS with existing business
systems or third-party management platforms.
• Good user experience
CloudOS IaaS integrates H3C's deep understanding of user requirements, rich development
experience and professional UI design to guarantee good user experience of CloudOS IaaS.

2.2 Cloud host service


2.2.1 Service introduction

Based on the server virtualization technology, it provides users with host resources and whole life
cycle management function of resources. Users can install their own applications in the host and
provide external services.

2.2.2 Product architecture

CloudOS cloud host services are compatible with various virtualization types. At the bottom of the
whole system are various virtualization resources. Compute nodes of OpenStack connect with
virtualization resources. OpenStack Controller provides REST interface services for the upper
layer. The upper-layer cloud host services provide a portal for user interaction and
encapsulate and extend OpenStack.
At the same time, CloudOS cloud host services provide monitoring of both the cloud host and
physical resource pool.

Figure 7 Cloud host service system architecture

• Cloud host service portal: provide interfaces for user operation.


• H3C CloudOS cloud host service: encapsulates the OpenStack interfaces into interfaces suitable
for the portal to call, and extends the OpenStack functionality; it comes from open source but
offers more than open source.
• OpenStack Controller: provide OpenStack standard interface, manage the resource pool
represented by each OpenStack computing node, schedule the request after receiving it, and
select the appropriate computing node to process the request. OpenStack Controller and
OpenStack Compute communicate with each other through RabbitMQ.
• OpenStack Compute: responsible for managing various virtualization resource pools,
converting common request calls into various virtualization instructions, polling virtualization
resources, and converting various virtualization resources into general OpenStack format.

2.2.3 Functional characteristics

1. Cloud services
• Cloud host life cycle management
• Power operation of cloud host: turn on, turn off, suspend, resume and restart (see the sketch at the end of this section).
• Cloud host console access control.
• The operation capability of the attached resources (virtual NIC, cloud disk, etc.) of the cloud
host.
• Modifying the cloud host specifications, which include vCPU, memory, disk, GPU and vGPU.
• Cloud host snapshot
2. O&M
• Physical node basic information of physical computing resource pool.
• Resource usage information of physical computing resource pool.
• Addition and deletion of physical computing resource pool
• Cloud host monitoring.
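For illustration, the power operations listed above map directly onto the OpenStack compute API; the hedged sketch below shows the corresponding openstacksdk calls. The cloud profile and server name are placeholders, and the actual CloudOS portal wraps these calls behind its own service layer.

# Illustrative only: cloud host power operations at the OpenStack layer.
import openstack

conn = openstack.connect(cloud="cloudos")          # assumed clouds.yaml profile
server = conn.compute.find_server("demo-host")     # placeholder server name

conn.compute.stop_server(server)                   # turn off
conn.compute.start_server(server)                  # turn on
conn.compute.suspend_server(server)                # suspend
conn.compute.resume_server(server)                 # resume
conn.compute.reboot_server(server, reboot_type="SOFT")  # restart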

2.3 Bare metal service


2.3.1 Service introduction

In some cases, virtual machines cannot meet the needs of users. At this time, bare metal services
are needed to provide users with the sole use of a physical machine. These cases include high-
performance computing, database applications, and user applications that require specific
hardware but cannot be virtualized.

2.3.2 Product architecture

The bare metal service is implemented based on OpenStack. The bare metal service calls the
OpenStack controller interface, which then forwards the request to an OpenStack compute node.
Finally, the OpenStack Ironic interface is called to provision the bare metal node; at the
bottom layer are the physical servers of the various manufacturers.
Figure 8 Bare metal service system architecture

The specific usage process is as follows:


(1) The administrator connects the bare metal node to the network, configures the IPMI user
name and password, sets the NIC PXE boot, starts the bare metal node PXE boot, and
obtains the DHCP address and boot image from the Ironic node.
(2) The administrator enters the node information in bare metal service to register the node,
including IPMI user name, password and IP address. After the capability set is found, the
specific hardware information of the node can be seen.
(3) When a user applies for a bare metal node through the bare metal service, the OpenStack
Nova module schedules and selects an appropriate bare metal node and sends a message to the
Compute node. The Compute node calls the Ironic API to prepare the bare metal host. After the
Ironic API receives the message, it is sent to the Ironic Conductor, and the Conductor
downloads the image from Glance to the Ironic node as a cache. The image is then written to
the disk of the physical machine through the iSCSI protocol. Finally, the physical machine is
set via the IPMI interface to restart from the hard disk.
Figure 9 Bare metal schematic diagram
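As a hedged illustration of step (2) above, the snippet below registers a bare metal node with Ironic through openstacksdk and moves it into a schedulable state. The BMC address, credentials, and driver choice are placeholders rather than CloudOS defaults.

# Illustrative only: register a bare metal node with Ironic and make it available.
import openstack

conn = openstack.connect(cloud="cloudos")

node = conn.baremetal.create_node(
    name="bm-node-01",
    driver="ipmi",
    driver_info={
        "ipmi_address": "192.168.10.21",   # placeholder BMC address
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
)
# Walk the node through Ironic's state machine so Nova can schedule onto it.
conn.baremetal.set_node_provision_state(node, "manage")
conn.baremetal.set_node_provision_state(node, "provide")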

2.3.3 Functional characteristics

• Life cycle management of bare metal

• Bare metal power on and off
• Bare metal port aggregation
• Bare metal multi-network support

2.3.4 Application restrictions

• Currently, hard disk expansion with bare metal is not supported


• Mounting cloud disk is not supported

2.4 Image service


2.4.1 Service introduction

When creating a cloud host, users need to select a pre-installed system, which is the image. The
image contains the basic operating system and other software that users want to use.

2.4.2 Product architecture

The image service is developed based on OpenStack. After the image service passes the user
creation request to OpenStack, OpenStack stores the data to the storage source.
Figure 10 Image service system architecture

The specific usage process is as follows:


(1) The administrator or user clicks to upload an image and selects a local image file to upload.
(2) The image service caches the image and then passes it on through the OpenStack Glance
interface.
(3) Glance stores the image on the storage source.
(4) When the image is required by services such as cloud host creation, the Glance interface
is called to obtain the image. Glance reads the image data from the storage source and
returns it to the user.
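The upload flow above can be reproduced at the OpenStack layer with openstacksdk, as in the hedged sketch below; the image name, local file path, and format are placeholders chosen from the formats listed in 2.4.4.

# Illustrative only: upload a local image file through Glance.
import openstack

conn = openstack.connect(cloud="cloudos")

image = conn.create_image(
    name="centos-7-x86_64",
    filename="/tmp/centos7.qcow2",   # local image file selected by the user
    disk_format="qcow2",             # one of the supported formats (see 2.4.4)
    container_format="bare",
    wait=True,                       # block until Glance has stored the image
)
print(image.id)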

2.4.3 Functional characteristics

• Upload, download and deletion of the image


• Image attribute settings

2.4.4 Application restrictions

• At present, the supported image types are Qcow2, raw, VMDK and ISO.

• The supported back-end storage type is File system.

2.5 Snapshot service


2.5.1 Service introduction

The user uses the cloud host after its creation. At this time, the disk data of the cloud host and the
image data are not the same. If the user wants to save the disk data at this time, the snapshot
function can be applied.

2.5.2 Product architecture

The snapshot service is implemented based on OpenStack. The snapshot service calls
OpenStack interface to generate the cloud host snapshot, and OpenStack calls OpenStack
Compute interface to transfer cloud host system disk data to OpenStack Glance image
management module.
Figure 11 Snapshot image service system architecture
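At the OpenStack layer, the flow above corresponds to Nova's create-image operation, shown in the hedged sketch below; the server and snapshot names are placeholders.

# Illustrative only: create a cloud host snapshot as a Glance image via Nova.
import openstack

conn = openstack.connect(cloud="cloudos")
server = conn.compute.find_server("demo-host")     # placeholder server name

# Nova transfers the system disk data to Glance as a new image.
conn.compute.create_server_image(server, name="demo-host-snap-001")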

2.5.3 Functional characteristics

• Snapshot creation
• Snapshot deletion
• Snapshot to image

2.6 Cloud desktop service


2.6.1 Service introduction

Cloud desktop is a desktop virtualization service that centrally manages and allocates desktop
resources through virtualization technology. CloudOS provides users with self-service,
pay-on-demand cloud desktop services by connecting to H3Cloud Desktop products.

2.6.2 Product architecture

The cloud desktop service connects to the cloud desktop back-end system, H3Cloud
Desktop, and its back-end desktop pool. The cloud desktop service and the desktop back end are
connected to the same LDAP system.
Figure 12 Cloud desktop service system architecture

Users log in to H3CloudOS with LDAP user accounts and apply for a cloud desktop using the cloud
desktop service. The cloud desktop service calls the H3C VDI or H3Cloud Desktop interface to
apply for a desktop. After the desktop application is completed, users can log in to the desktop
using the desktop client, and the desktop uses the same LDAP server for authentication.
After the cloud desktop application is completed, users can also perform cloud desktop release,
power operations, and console output display.

2.6.3 Functional characteristics

• Cloud desktop life cycle management


• Viewing cloud desktop Console port output
• Startup, shutdown and restart of cloud desktop host

2.7 Cloud hard disk service


2.7.1 Service introduction

The cloud disk service is provided through the OpenStack Cinder component. It supports connecting
to virtualization back-end storage, and cloud hard disks can be created and other management
operations performed on the back-end storage through the Cinder service. These cloud disks can
be used as data disks mounted to virtual machines, and can also be used as system disks for
creating cloud hosts.

2.7.2 Product architecture

CloudOS provides cloud disk management services through web pages and northbound
interfaces. The request is eventually sent to the cinder-api service, which dispatches the task
through a message queue based on the operation request, and the corresponding service
receives and processes it.
Figure 13 Cloud hard disk service system architecture

At present, CloudOS also provides a backup function for individual cloud hard disks. The backup
function requires the storage manufacturer to provide a Cinder backup driver to create a
corresponding hard disk backup resource on the storage. Hard disk backups can be used to create
and restore cloud hard disks.
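For illustration, a minimal openstacksdk sketch of the cloud hard disk life cycle described above (create, attach, back up) is shown below; the cloud entry, server name, and size are placeholders, and the backup call assumes the back end ships a Cinder backup driver (per the restrictions in 2.7.4, CAS back-end storage).

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

# Create a 20 GB data disk and wait until it is available.
volume = conn.block_storage.create_volume(name="data-disk-01", size=20)
conn.block_storage.wait_for_status(volume, status="available")

# Mount it to an existing cloud host as a data disk.
server = conn.compute.find_server("app-vm-01")  # placeholder cloud host name
conn.attach_volume(server, volume)

# Create a hard disk backup (requires a Cinder backup driver on the back end).
backup = conn.block_storage.create_backup(volume_id=volume.id, name="data-disk-01-bak")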
Figure 14 Cloud hard disk backup flow chart

2.7.3 Functional characteristics

• Cloud hard disk life cycle management


Cloud hard disk creation, modification, deletion, lease renewal, owner modification, capacity
expansion, cloning, etc.
Mounting the cloud hard disk to a virtual machine and unmounting it from the host
• Cloud hard disk snapshot life cycle management
Creation, modification, and recovery of cloud hard disk snapshots
• Cloud hard disk backup life cycle management
Creation, deletion, and restoration of cloud hard disk backups
• Hard disk type
Creation, modification, and deletion of hard disk types

2.7.4 Application restrictions

• Cloud hard disk backup related functions are supported only when connecting to CAS back-end storage.
• Cloud hard disk online capacity expansion is supported only when connecting to CAS back-end storage.

2.8 Object storage service


2.8.1 Service introduction

The object storage service is an object-based mass storage service, providing customers with
safe, reliable and low-cost data storage capabilities. CloudOS provides object storage services by
connecting to ONEStor storage through S3 APIs, and can also provide object storage services by
connecting to third-party storage through AWS S3 SDK.

2.8.2 Product architecture

CloudOS provides page-based operations. User operations on the page are sent to the adaptation
layer through REST APIs, and the adaptation layer forwards the request to the storage
management platform for processing, also through REST APIs.
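Because the service is exposed through S3 APIs, a typical client-side interaction can be sketched with the AWS S3 SDK for Python (boto3); the endpoint URL, credentials, and object names below are placeholders.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://onestor-s3.example.com",  # placeholder ONEStor S3 endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder
    aws_secret_access_key="SECRET_KEY",            # placeholder
)

s3.create_bucket(Bucket="demo-bucket")                          # bucket life cycle
s3.upload_file("report.pdf", "demo-bucket", "docs/report.pdf")  # object upload
s3.download_file("demo-bucket", "docs/report.pdf", "copy.pdf")  # object download
s3.delete_object(Bucket="demo-bucket", Key="docs/report.pdf")   # object deletion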
Figure 15 System architecture of the object storage service

2.8.3 Functional characteristics

• Bucket life cycle management


Creation, deletion, and folder creation of the bucket
• Object life cycle management
Upload, download, copy and deletion of the object

2.9 File storage service


2.9.1 Service introduction

CloudOS provides file storage services through the OpenStack Manila component and supports the
NFS and CIFS protocols. The shared file system created by this service can provide shared access
for cloud hosts and bare metal nodes, realizing shared file storage across multiple cloud resources.
It also provides access authorization to restrict read and write access to the file system.

2.9.2 Product architecture

Figure 16 File storage system architecture

2.9.3 Functional characteristics

• Shared file life cycle management


Shared file creation, expansion, modification, deletion, addition of shared permission,
removal of shared permission, etc
• Shared node life cycle management
Shared node creation, modification, and deletion

2.10 Cloud network disk service
2.10.1 Service introduction

The cloud network disk provides information storage, reading, downloading, and other services for
individual users, featuring security, stability, and mass storage. Through the cloud network
disk service, users can manage the life cycle of the cloud network disk.

2.10.2 Product architecture

CloudOS provides page-based operations. User operations on the interface are sent to the
adaptation layer through REST APIs, and the adaptation layer forwards the request to the cloud
network disk management platform for processing, also through REST APIs.
Figure 17 Cloud network disk system architecture

2.10.3 Functional characteristics

• Cloud network disk life cycle management


Cloud network disk creation and expansion
• Shared document/personal document life cycle management
New folder, file uploading, authorization configuration, copying, moving, deletion, renaming
and downloading
• Recycle bin
The emptying of the recycle bin, and copying, moving, deletion and restoring of the
files/folders in the recycle bin

2.10.4 Application restrictions

• This service is only available when connecting to AnyBackup.

2.11 Cloud backup
2.11.1 Service introduction

CloudOS provides life cycle management of virtual machines. To enhance the virtual machine
service, CloudOS implements the whole-machine backup function by connecting to
AnyBackup, and supports scheduled backup, immediate backup, full backup, incremental backup,
and recovery of virtual machines.

2.11.2 Product architecture

CloudOS provides page-based operations. User operations on the interface are sent to the
adaptation layer through REST APIs, and the adaptation layer forwards the request to the
AnyBackup management platform for processing, also through REST APIs.
To connect CloudOS to AnyBackup, users need to register the CloudOS information on the
AnyBackup management platform and then create a virtual machine client corresponding to
CloudOS. Each created virtual machine backup task is associated with a specific client.
According to the organization information configured for the client, the list of virtual machine data
sources that can be backed up is obtained.
Figure 18 Backup service system architecture

2.11.3 Functional characteristics

• Create a backup task


• Modify a backup task
• Move the backup task to recycle bin
• Recycle bin: restore the backup task and delete the backup task
• Restore
• Immediate backup
• Backup task details
• Backup task historic records
• Backup task monitoring
• Delete backup data

2.11.4 Application restrictions

• Only available when connecting to AnyBackup.


• Only CAS virtual machines created through CloudOS are supported.
• Due to limitations of the AnyBackup backup platform, a virtual machine retains only one NIC
when it is restored.
• After a virtual machine is backed up by AnyBackup, a corresponding snapshot exists on the
virtual machine in the virtualization layer, and specification modification cannot be carried
out through the cloud.
• When a user creates a backup task, only virtual machines created by the current user can be
backed up.
• Backup of virtual machines with bare disks is not supported.

2.12 Virtualization management


2.12.1 Service introduction

When the IaaS service is deployed, a built-in CVM container is created to provide CVM
virtualization management capabilities. Users can manage cloud resources and virtualization resources
on the same page.

2.12.2 Product architecture

Figure 19 Virtualization management system architecture

After IaaS service deployment is completed, you can log in to CloudOS interface to operate CVM
virtualization resources.

2.12.3 Functional characteristics

• Cluster life cycle management


Cluster addition, intelligent resource scheduling, high reliability, application HA, performance
monitoring, dynamic resource scheduling, storage and connection of all hosts
• Host life cycle management
Host addition, modification, deletion, connection, shutdown, restart, maintenance mode
operation, virtual machine import, monitoring, storage, physical NIC, virtual switch, and
advanced settings
• Virtual machine management
Virtual machine backup strategy, modification, power-on, safe shutdown, power-off,
migration, snapshot management, immediate backup, backup management, migration
history, performance monitoring, and CASTools upgrade
• Shared file system
• Alarm configuration
• Network policy template
• NTP Server

2.12.4 Application restrictions

• The IaaS service supports only one built-in CVM container.


• The functional characteristics above are supported only for the built-in CVM container.

2.13 Classic network


2.13.1 Service introduction

1. Network type
A classic network is an isolated layer-2 broadcast domain. CloudOS supports various types of
networks, including flat, VLAN, and VxLAN.
• The flat network is a network without VLAN tagging. The flat network is supported when the
network planning is VLAN-free, and a flat network monopolizes a network exit. Instances in
the flat network can communicate with instances in the same network, and can span multiple
nodes.
• VLAN: The VLAN network is supported when the network planning is the SDN-free or H3C
SDN-free mode. The VLAN network is a network with 802.1q tagging. A VLAN is a layer-2
broadcast domain: virtual machines in the same VLAN can communicate with each
other, while virtual machines in different VLANs can only communicate through a router. The
VLAN network can span nodes, and is the most widely used network type.
• VxLAN: The VxLAN network is supported when the network planning is the H3C SDN mode.
VxLAN is an overlay network based on tunneling technology; each VxLAN network is
distinguished from others by a unique segment ID (also known as a VNI). Packets are
encapsulated into UDP packets carrying the VNI for transmission. Because layer-2 packets
are tunneled over layer 3, the limitations of VLAN and the physical network
are overcome.
2. Layer-2 isolation
Different networks are isolated at layer 2. Taking VLAN networks as an example, network A
and network B are assigned different VLAN IDs, which ensures that broadcast packets in
network A do not leak into network B.
3. Layer-3 interworking
Taking VLAN networks as an example, the subnets in a network can be associated with routers.
Although different networks have different VLAN IDs, their subnets can interwork after
connecting to the same router. After subnet A and subnet B are connected to the router, the
router creates two ports, portA and portB, whose IPs are the gateway addresses of the two
subnets. In subnet A, traffic whose destination is outside segment A is sent to the gateway by
default, namely router portA. If the destination address of a packet is in segment B, the router
changes VLAN ID A to VLAN ID B and forwards the packet out of portB, so that it interworks
with subnet B.
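The layer-2 isolation and layer-3 interworking described above can be sketched with the underlying Neutron API via openstacksdk; the names and CIDRs below are placeholders, and the CloudOS portal normally performs these steps for the user.

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

# Two isolated networks (each mapped to its own VLAN by the platform).
net_a = conn.network.create_network(name="net-a")
sub_a = conn.network.create_subnet(network_id=net_a.id, name="subnet-a",
                                   ip_version=4, cidr="192.168.1.0/24",
                                   gateway_ip="192.168.1.1")
net_b = conn.network.create_network(name="net-b")
sub_b = conn.network.create_subnet(network_id=net_b.id, name="subnet-b",
                                   ip_version=4, cidr="192.168.2.0/24",
                                   gateway_ip="192.168.2.1")

# Connecting both subnets to one router creates portA/portB with the gateway
# IPs, enabling layer-3 interworking between the two layer-2 domains.
router = conn.network.create_router(name="demo-router")
conn.network.add_interface_to_router(router, subnet_id=sub_a.id)
conn.network.add_interface_to_router(router, subnet_id=sub_b.id)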

2.13.2 Product architecture

1. VLAN classic network


The VLAN classic network can be deployed in an SDN-free scheme or an SDN-enabled scheme.
• SDN-free

VLAN technology is adopted to isolate tenant traffic. IP addresses of different tenants cannot
overlap, and network and security services cannot be provided. Network communication
between the tenant and the external network needs to be configured manually by the
administrator.
• SDN-enabled
VLAN technology is adopted to isolate tenant traffic. IP addresses of different tenants can
overlap, and network and security services can be provided. Network communication
between the tenant and the external network can be managed independently
through the service portal. The H3C VCF controller is also needed to support the
implementation of the scheme.
2. VxLAN classic network
VxLAN technology is adopted to isolate tenant traffic. IP addresses of different tenants
can overlap, and network and security services can be provided. Network communication
between the tenant and the external network can be managed independently through the service
portal. The H3C VCF controller and an access switch supporting VLAN-VxLAN mapping are also
needed to support the implementation of the scheme.

2.13.3 Functional characteristics

• VLAN
• VXLAN
• Support IPv4 and IPv6 functions

2.13.4 Application restrictions

• Virtualization types support only KVM, CAS, and VMware.

2.14 Virtual private cloud


2.14.1 Service introduction

1. Overview
With Virtual Private Cloud (VPC), the same IP address segment (implemented by VPN
technology) can be used between tenants without interfering with each other. Any organization
looks like an independent “private cloud”, so as to build an isolated and user-managed virtual
network environment for cloud hosts, improve the security of resources in the cloud, and simplify
the network deployment of users.

The structure of VPC is shown in Figure 20. In the VPC, users can define security group (traffic
control between virtual machines), IP address segment and other network features.
Figure 20 VPC structure

2. Network type
The VPC is a virtual private cloud. Private networks in the VPC are connected to routers by
default, so they interwork at layer 3. The VPC supports various types of private
networks, including flat, VLAN, and VxLAN.
• Flat: The flat network is a network without VLAN tagging. The flat network is supported when
the network planning is VLAN-free, and a flat network monopolizes a network exit.
• VLAN: The VLAN network is supported when the network planning is the SDN-free or H3C
SDN-free mode. The VLAN network is a network with 802.1q tagging. The VLAN network
can span nodes, and is the most widely used network type.
• VxLAN: The VxLAN network is supported when the network planning is the H3C SDN mode.
VxLAN is an overlay network based on tunneling technology; each VxLAN network is
distinguished from others by a unique segment ID (also known as a VNI). Packets are
encapsulated into UDP packets carrying the VNI for transmission. Because layer-2 packets
are tunneled over layer 3, the limitations of VLAN and the physical network
are overcome.
3. Layer-2 isolation
Different networks are isolated at layer 2. Taking VLAN networks as an example, network A
and network B are assigned different VLAN IDs, which ensures that broadcast packets in
network A do not leak into network B.

2.14.2 Product architecture

1. VLAN VPC
VLAN VPC can be divided into SDN-free scheme and SDN-enabled scheme.
• SDN-free
VLAN technology is adopted to isolate tenant traffic, and network and security services
cannot be provided. The network communication between the tenant and the external
network needs to be configured manually by the administrator.
• SDN-enabled
VLAN technology is adopted to isolate tenant traffic, and network and security services can
be provided. The network communication between the tenant and the external network can
be managed independently through the service portal. The H3C VCF controller is also
needed to support the implementation of the scheme.

2. VxLAN VPC
VxLAN technology is adopted to isolate tenant traffic, and network and security services can be
provided. The network communication between the tenant and the external network can be
managed independently through the service portal. The H3C VCF controller and access switch
supporting VLAN-VxLAN mapping are also needed to support the implementation of the scheme.

2.14.3 Functional characteristics

• VLAN
• VxLAN
• Support IPv4 and IPv6 functions

2.14.4 Application restrictions

• Virtualization types support only KVM, CAS, and VMware.

2.15 DNS service


2.15.1 Service introduction

The DNS service (CDNS) converts the domain name (for example, www.test.com) that people
use commonly to IP address (for example, 192.168.1.1) for computer connection. Users can set
the DNS server as CloudOS address and create a domain name to access the website or web
application by inputting the domain name on the browser.
CloudOS DNS service supports private domain name resolution.

2.15.2 Product architecture

The DNS service is provided by the container in CloudOS, and the container has two main
functions:
• Domain name management function: users manage the private network domain name
through CloudOS, and the actions of adding/deleting/checking/changing the domain name
will call the APIs of the container, which will be reflected in the specific DNS server
connected to the back end.
• Domain name resolution function: the container provides resolution function for private
network domain names created on CloudOS. The DNS server is set as CloudOS address by
the virtual machine or PC outside the cloud to resolve and access the domain name
managed on the cloud platform.
DNS service can provide services for inner-cloud hosts (or load balancer) and PCs outside the
cloud. After setting domain names on CloudOS for inner-cloud hosts (or load balancer) and PCs
outside the cloud:
• The PC outside the cloud can access the inner-cloud services by setting its own DNS server
as CloudOS address and accessing the inner-cloud domain names (the domain name of the
inner-cloud hosts or the domain name of the load balancer). When the IP of an inner-cloud
service changes, clients outside the cloud do not need to be aware of it; only the value of
the domain name record set on the cloud management platform needs to be modified;
• The inner-cloud hosts (or load balancer) can set its own DNS server as CloudOS address
and access inner-cloud domain names or domain names outside the cloud, to access inner-
cloud services or services outside the cloud.
As shown in Figure 21, the cloud platform consists of a cluster of three nodes, one of which is the
master node. Each node has a built-in domain name resolution module, Dnsmasq, which is
responsible for domain name resolution for the internal modules of the cloud platform and is
invisible to users. The designate container provides the domain name resolution functions
available to users, corresponding to the DNS service function on the cloud platform; this
container exists on only one node. Inner-cloud resources (hosts or load balancers) and PCs
outside the cloud can use the domain name resolution function by setting their DNS server to the
CloudOS address.
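As a minimal sketch against the underlying Designate API (which the designate container wraps), a private zone and a host record can be created with openstacksdk; the zone name, record, and cloud entry are placeholders.

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

# Private zone, then an A record for a cloud host (for example vm1 at 192.168.1.1).
zone = conn.dns.create_zone(name="example.org.", email="admin@example.org", ttl=3600)
conn.dns.create_recordset(zone, name="vm1.example.org.", type="A",
                          records=["192.168.1.1"])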
Figure 21 DNS service system architecture

2.15.3 Functional characteristics

1. Host name record


Create a private domain name, such as example.org. The private domain name can be
associated with the VPC (virtual private cloud). After associating with the VPC, users can manage
the host name record under the VPC, that is, manage the domain name of the virtual machine
under the VPC. Assuming that the cloud host name is vm1 and the IP is 192.168.1.1, the host
name is recorded as vm1.example.org, and the value of the domain name is 192.168.1.1.
2. Domain name automatic update
After the private network domain name is associated with the VPC, the actions of creating a
virtual machine in the VPC, deleting the virtual machine, loading/unloading the virtual NIC of the
cloud host, and disassociating the private domain name with the VPC will cause the automatic
update of the host name record in the private network domain name. Users do not need to
manage the domain name of the cloud host manually.
3. DNS server modification
The DNS server to which CloudOS is connected is provided by the DNS resolution container by
default. CloudOS supports setting users' own back-end DNS server clusters. Users can set multiple
existing DNS servers on CloudOS as back-end DNS servers that provide cloud resolution
functions, increasing the high availability of cloud resolution services. At the same time, when the
DNS server is provided by the default container, CloudOS supports the configuration of
forwarding server, and the domain names that cannot be resolved locally are sent to the
configured DNS server for resolution.
4. DNS server cascade connection
When the user has a physical DNS server, the inner-cloud DNS server can be used as the
forwarding server of the physical DNS server. For example, through configuration, specific
domain names are forwarded to the inner-cloud DNS server for resolution, and the user's own
physical DNS server serves as the cache server for these domain names.

2.15.4 Application restrictions

• Currently, the domain name recordset types are A, AAAA, PTR reverse resolution, and
CNAME. Other recordset types are not supported for the time being.

• DNS server only provides private domain name resolution, and does not support public
domain name resolution for the time being.

2.16 QoS service


2.16.1 Service introduction

QoS means quality of service. For network services, the factors that affect the quality of
service include transmission bandwidth, transmission delay, packet loss rate, and so on. The
network bandwidth of virtual machines can be controlled in the system. By specifying QoS policies
for virtual machines, limited network resources can be better utilized. After a QoS policy is bound
to a virtual machine and a bandwidth restriction policy is configured, the outbound rate of the
virtual machine can be controlled.
Figure 22 QoS speed limit indication


2.16.2 Product architecture

On-cloud QoS can connect to the QoS policy in CAS or the QoS policy in the VCF controller.
Currently, the speed limit policy in CAS is used by default. The following takes the CAS
connection as an example:
At present, the QoS policy supports only outbound speed limiting and only limits the bandwidth
rate. The specific bandwidth limit needs to be set in the bandwidth restriction rules.
A QoS policy is bound to a virtual NIC of a virtual machine. When the virtual machine has
multiple virtual NICs, a different QoS policy can be set for each virtual NIC.
When a QoS policy is set for the virtual NIC of a virtual machine, the corresponding
configuration takes effect in the XML configuration file of the CAS virtual machine. The rate
limit can be verified with a traffic test tool or by checking the values in the XML file.
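Although CloudOS applies the speed limit through the CAS policy by default, the same egress bandwidth limit can be expressed with the standard Neutron QoS model; the following openstacksdk sketch (the port name and cloud entry are placeholders) is shown only as an assumption-based illustration of that model, not the CAS implementation.

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

# A 10 Mbit/s outbound limit, expressed in kbps as Neutron expects.
policy = conn.network.create_qos_policy(name="limit-10m")
conn.network.create_qos_bandwidth_limit_rule(policy, max_kbps=10000,
                                             max_burst_kbps=1000,
                                             direction="egress")

# Bind the policy to one virtual NIC (Neutron port) of the virtual machine.
port = conn.network.find_port("vm1-nic0")  # placeholder port name
conn.network.update_port(port, qos_policy_id=policy.id)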

2.16.3 Functional characteristics

• Support QoS policy of network bandwidth limitation.


• Limit the outbound speed of virtual NIC of virtual machine.

2.16.4 Application restrictions

• VMware virtualization is not supported.

2.17 Server load balancer service


2.17.1 Service introduction

The server load balancer service is a service that distributes traffic to multiple real servers.
Through the traffic distribution, the external service capability of the application system is
extended, single points of failure are eliminated, and the availability of the application system is
improved.
The server load balancer service is a kind of clustering technology, which distributes specific
traffic and services to multiple network devices (such as servers), so as to improve the capability
of service processing and the reliability of services. The main function of the server load balancer
service consists of two elements: the virtual server providing virtual services and the real server
processing actual services. The IP address of virtual service is configured on the server load
balancer device for users to request the service. After the user's access request reaches the
server load balancer device through the network and matches the virtual service, the server load
balancer device distributes it to the real server.

2.17.2 Product architecture

The server load balancer service is divided into LBaaS v1 and LBaaS v2, whose data models
are different. In LBaaS v1, the virtual service IP and the real server group are in a one-to-one
relationship, that is, a virtual service IP can load balance only one service. In LBaaS v2, the virtual
service IP becomes an attribute of the load balancer (LB), and the port becomes an attribute of the
listener, so the same virtual service IP can load balance multiple services through different
ports. Compared with LBaaS v1, LBaaS v2 supports multiple listeners attached to one LB, which
can save public IP addresses. LBaaS v2 also supports TLS offloading, which reduces the burden on
HTTPS servers, and provides load balancing based on L7 content.
Figure 23 Comparison of data structure between LBaaS v1 and LBaaS v2

In LBaaS v2, the load balancer receives inbound traffic from clients and forwards requests to
one or more available real servers. One or more listeners are added to the load balancer; each
listener checks client requests according to its configured protocol and port, and forwards them
to a real server group according to the defined forwarding policy. Each real server group
forwards requests to one or more real servers using the specified protocol and port
number. The health check function configures health checks for each real server group.
When the health check of a back-end real server is abnormal, elastic load balancing
automatically distributes new requests to other real servers whose health checks are normal.
When the real server returns to normal operation, elastic load balancing automatically restores
it to the elastic load balancing service.
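The LBaaS v2 object model (load balancer, listener, real server group, members, health check) can be sketched as follows with openstacksdk, assuming an Octavia-compatible LBaaS v2 endpoint; the subnet ID, member addresses, and cloud entry are placeholders and the CloudOS portal is not shown.

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

lb = conn.load_balancer.create_load_balancer(name="web-lb",
                                             vip_subnet_id="SUBNET_ID")  # placeholder
listener = conn.load_balancer.create_listener(name="http-80", protocol="HTTP",
                                              protocol_port=80,
                                              load_balancer_id=lb.id)
pool = conn.load_balancer.create_pool(name="web-pool", protocol="HTTP",
                                      lb_algorithm="ROUND_ROBIN",
                                      listener_id=listener.id)

# Two real servers behind the same virtual service IP and port.
conn.load_balancer.create_member(pool, address="192.168.1.11", protocol_port=8080)
conn.load_balancer.create_member(pool, address="192.168.1.12", protocol_port=8080)

# Unhealthy members stop receiving new requests until they recover.
conn.load_balancer.create_health_monitor(pool_id=pool.id, type="HTTP",
                                         delay=5, timeout=3, max_retries=3)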

Figure 24 Load balancing (LBaas V2) system architecture

2.17.3 Functional characteristics

• Listener
Methods such as ping, TCP, HTTP, and HTTPS are applied to provide effective monitoring and
ensure that members can process requests properly.
• Real server group
Supports round robin, least connections, and source address scheduling algorithms. Provides the
session persistence function: during the life cycle of a session, requests from the same client
can be forwarded to the same back-end server.
• Health check
Checks the health of the back-end servers. When a back-end server is detected to be in poor
health, traffic is not sent to it but is forwarded to other healthy back-end servers.

2.17.4 Application restrictions

• CloudOS version 2.0 supports only LBaaS v1.


• CloudOS version 3.0 supports only LBaaS v2 and does not support LBaaS v1, but supports
smooth upgrades.

2.18 Elastic IP service


2.18.1 Service introduction

An elastic IP address is an IP address that can directly access the Internet. Private IP addresses
are LAN addresses inside the cloud and are prohibited from appearing on the Internet.
The elastic IP service creates an IP address that can directly access the Internet and binds it to a
virtual NIC or load balancer in the CloudOS LAN, so that cloud resources can communicate with
the public network through the elastic IP address.
An elastic IP can be used by only one cloud host virtual NIC or load balancer at a time.

2.18.2 Product architecture

1. Allocate elastic IP to cloud hosts


When a virtual NIC of a cloud host is bound to an elastic IP address, the cloud host can
communicate with the public network through the bound elastic IP address. Each virtual NIC can
be bound to an elastic IP address.
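A minimal openstacksdk sketch of binding and re-binding an elastic (floating) IP to a cloud host virtual NIC is shown below; the external network name, port name, and cloud entry are placeholders, and CloudOS performs the equivalent steps through its own pages.

import openstack

conn = openstack.connect(cloud="cloudos")  # placeholder clouds.yaml entry

ext_net = conn.network.find_network("external-net")  # placeholder external network
fip = conn.network.create_ip(floating_network_id=ext_net.id)

# Bind the elastic IP to one virtual NIC (Neutron port) of the cloud host.
port = conn.network.find_port("vm1-nic0")            # placeholder port name
conn.network.update_ip(fip, port_id=port.id)

# To move the elastic IP to another cloud host, unbind it first, then rebind it.
conn.network.update_ip(fip, port_id=None)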
Figure 25 Cloud host allocation elastic IP architecture

2. Allocate elastic IP to load balancer


After the load balancer is bound to an elastic IP, it can forward external traffic to the real
servers behind it; the real servers process the requests and return the results to the public
network through the load balancer.
Figure 26 Load balancing allocation elastic IP architecture

2.18.3 Functional characteristics

After creating the elastic IP, you can bind the elastic IP as a public network IP to the virtual NIC
instance and load balancing instance.
1. Allocate elastic IP to cloud hosts
After the cloud host is bound to the elastic IP, the host can become a cloud host that can be
acquired at any time and is flexible and extensible, providing a reliable, safe, flexible and efficient
application environment to ensure the sustained and stable operation of business.
When users need to switch cloud hosts, they only need to unbind the elastic IP address and
then rebind it to the virtual NIC of the new cloud host to continue using the elastic
IP service, without re-resolving the domain name at the domain name service provider, which
helps improve service stability.
2. Allocate elastic IP to load balancer
After binding the elastic IP, load balancer may accept requests from clients and forward them to
back-end real servers in one or more real server groups. Through the traffic distribution, the
external service capability of the application system is extended, single points of failure are
eliminated, and the availability of the application system is improved.
One or more listeners can be added to the load balancer. The listener uses the configured
protocol and port to check for the connection request from the client, and forwards the request to
a back-end server group according to the user-defined forwarding policy. Each back-end real
server group forwards the request to one or more back-end real servers using the user-defined
protocol and port number.
Health check function is enabled to configure health check for each back-end real server group.
When the health check of a real server in the back end is abnormal, elastic load balancer will
automatically distribute new requests to other back-end real servers where the health checks are
normal. When the back-end real server returns to normal operation, elastic load balancer will
automatically restore it to the load balancing service.

2.18.4 Application notice

1. Allocate elastic IP to cloud hosts


• A virtual NIC of a cloud host can only be bound to an elastic IP address.
• The network types of the virtual NIC for cloud hosts can be VPC networks and classic
networks.
• The network to which the virtual NIC of the cloud host belongs must be bound to a router, and
the external gateway set on the router and the elastic IP address must be in the same network.
2. Allocate elastic IP to load balancer
• A load balancer can only be bound to an elastic IP address.
• The network types of the load balancer can be VPC networks and classic networks.
• The external gateway of the router bound to the load balancer and the elastic IP address must
be in the same network.

2.19 NAT gateway service


2.19.1 Service introduction

The NAT gateway provides network address translation services for cloud hosts within a VPC,
allowing cloud hosts inside the VPC to access the Internet and the Internet to access the cloud
hosts. The NAT gateway has two functions: SNAT and DNAT.

1. SNAT
When the VPC inner cloud host accesses the Internet through the NAT gateway, the NAT
gateway replaces the source IP address in the message with a valid elastic IP address and
records the conversion. When the message returns from the external network side, the NAT
gateway finds the original record, replaces the destination address of the message back to the
original private network address of the original VPC cloud host, and forwards it to the VPC inner
cloud host. This process is called SNAT.
Figure 27 SNAT working process

SNAT working process:


(1) When the IP message sent by the VPC inner cloud host (192.168.1.3) to the external
network server (1.1.1.2) passes through the NAT gateway, if the NAT gateway checks the IP
header content of the message and finds that the message is sent to the external network, it
then converts the inner network address 192.168.1.3 of the source IP address field into a
routable external network address of 20.1.1.1, and sends the message to the external
network server. At the same time, it establishes the mapping of table entry record on NAT
gateway.
(2) After the reply message sent by the external network server to the VPC inner cloud host
arrives at the NAT device, the NAT device uses the message information to match the
established table entry, to find the matching table entry record, and replace the original
destination IP address 20.1.1.1 with the cloud host private address 192.168.1.3.
2. DNAT
When the external server on the Internet accesses the service in the cloud host, it first accesses
the elastic public IP address associated with the cloud host. After receiving the message, the NAT
gateway will replace the destination address with the private IP of the cloud host, and record the
conversion. Then, when the message returns from the virtual machine, the NAT gateway will find
the original record, replace the source IP address with the elastic public IP address, and
forward the message to the external server. This process is called DNAT.
Figure 28 DNAT working process

DNAT working process:


(1) When the server (20.1.1.2) on the Internet accesses the service provided by the cloud host
(192.168.1.3), what is actually accessed is the elastic public IP (20.1.1.1) of the cloud host.
The NAT gateway matches the destination address of the IP message against the NAT
configuration of the interface, and converts the destination address of the message into
the private IP (192.168.1.3) of the cloud host;

(2) When the cloud host responds to the message, the NAT gateway modifies the source IP
address of the response message to the elastic public IP (20.1.1.1) according to the
existing address mapping relationship.
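The following is a toy Python sketch (not CloudOS code) of the translation-table logic described in the SNAT and DNAT working processes above; the addresses reuse the 192.168.1.3 / 20.1.1.1 example.

SNAT_EIP = "20.1.1.1"
snat_table = {}                              # (eip, port) -> original (private IP, port)
dnat_table = {"20.1.1.1": "192.168.1.3"}     # elastic public IP -> private IP

def snat_outbound(src_ip, src_port):
    """VPC host -> Internet: replace the source with the elastic IP and record it."""
    snat_table[(SNAT_EIP, src_port)] = (src_ip, src_port)
    return SNAT_EIP, src_port

def snat_return(dst_ip, dst_port):
    """Internet reply: look up the record and restore the private destination."""
    return snat_table[(dst_ip, dst_port)]

def dnat_inbound(dst_ip):
    """Internet -> cloud host: map the elastic public IP to the private IP."""
    return dnat_table[dst_ip]

print(snat_outbound("192.168.1.3", 40000))   # ('20.1.1.1', 40000)
print(snat_return("20.1.1.1", 40000))        # ('192.168.1.3', 40000)
print(dnat_inbound("20.1.1.1"))              # '192.168.1.3'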
3. Advantages
The NAT gateway has the following advantages:
• Communication within the VPC private network uses private network addresses. If the private
network needs to communicate with the external network or access external resources, multiple
hosts in different available zones within the same VPC can share the same elastic public
IP, which alleviates the pressure of the increasingly exhausted IPv4 address space to some
extent.
• Internet users access the services provided by the cloud host through the elastic public IP
address assigned to it, while the real IP address of the internal server is hidden, which helps
prevent external attacks on the internal server and even the internal network.
• It is convenient for network management. For example, when the services provided in the
cloud host are migrated, little configuration needs to change; the change is reflected simply
by adjusting the mapping table of internal servers.

2.19.2 Product architecture

1. SNAT
The SNAT function can realize the conversion from private IP in the cloud to elastic public IP by
binding elastic public IP, and can realize the sharing of elastic public network IP by multiple cloud
hosts across the available zone within VPC, so as to access the Internet safely and efficiently.
The SNAT architecture is shown as in Figure 29. When the traffic of the VPC inner cloud host
accessing the Internet passes through the NAT gateway, the source address in the message will
be converted to the elastic public network IP and forwarded to the Internet. At the same time, a
table entry is established in the NAT gateway to record the mapping, the returned message
matches the established table entry, and the destination address in the message is replaced by
the private network IP address of the cloud host and forwarded.
Figure 29 SNAT architecture diagram

2. DNAT
The DNAT function completes the conversion from elastic public IP to private IP by binding elastic
public IP, and supports one-to-one mapping between elastic public IP and private IP, so that
virtual machine can provide services for the Internet.
The DNAT architecture is shown as in Figure 30. Each cloud host that provides services to the
Internet is assigned an elastic public network IP. When Internet users access cloud services, the
corresponding elastic public network IP is accessed. After receiving the message, the NAT
gateway will match the destination address of the IP message with the configuration of the
interface NAT, convert the destination address to the private IP of the cloud host and forward it.
When the cloud host responds to the message, the NAT gateway modifies the source IP address
of the response message to elastic public network IP and forwards it according to the existing
address mapping relationship.
Figure 30 DNAT architecture diagram

2.19.3 Functional characteristics

1. Support SNAT
• Flexible deployment
Supports deployment across subnets and across available zones. The NAT gateway supports
cross-available-zone deployment with high availability, and a fault in a single available zone
does not affect NAT gateway business continuity.
• Cost reduction
Multiple virtual machines in different available zones of a VPC can share one elastic public
IP without applying for many elastic public IP addresses.
2. Support one-to-one DNAT
• One-to-one DNAT service enables cloud hosts to provide services to the Internet.

2.19.4 Application restrictions

• DNAT does not support port forwarding.

2.20 Firewall service


2.20.1 Service introduction

The core concepts of the firewall are firewall policies and firewall rules. A policy is an ordered set of
rules that specify the attributes (such as port ranges, protocols, and IP addresses) that make up
the matching criteria, as well as the actions (allow or deny) to be taken on matching
traffic.

The firewall can allow or restrict the data flow according to the different specified rules, so as to
filter the north-south traffic, protect the network security comprehensively, and assist the business
to establish a complete access control and security isolation capability.
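To make the "ordered rules with match criteria and actions" idea concrete, the following toy Python sketch (not the CloudOS or VCFC implementation) evaluates a packet against a prioritized rule list; all rules and addresses are made up for illustration.

import ipaddress

rules = [
    {"protocol": "tcp", "ports": (80, 80),   "src": "0.0.0.0/0",  "action": "allow"},
    {"protocol": "tcp", "ports": (1, 65535), "src": "10.0.0.0/8", "action": "allow"},
    {"protocol": "any", "ports": (1, 65535), "src": "0.0.0.0/0",  "action": "deny"},
]

def evaluate(protocol, dst_port, src_ip):
    src = ipaddress.ip_address(src_ip)
    for rule in rules:                               # rules are matched in order
        if rule["protocol"] not in ("any", protocol):
            continue
        low, high = rule["ports"]
        if not (low <= dst_port <= high):
            continue
        if src not in ipaddress.ip_network(rule["src"]):
            continue
        return rule["action"]                        # first match decides
    return "deny"                                    # implicit default action

print(evaluate("tcp", 80, "203.0.113.7"))  # allow
print(evaluate("udp", 53, "203.0.113.7"))  # deny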

2.20.2 Product architecture

Firewall is very simple and easy to use, and can be used after simple policy configuration. There
is no need for traditional firewall image installation, routing settings and other complex basic
systems and network configuration operations, and users do not need to pay attention to disaster
recovery, capacity expansion or access and other issues. Through the rule configuration, the rule
set corresponding to the firewall can be added, so as to achieve smooth expansion in function.
The firewall needs to be associated with a rule set in which multiple rules can be added, and rules
have priority. After the firewall is associated with the router, the traffic can be controlled and
protected.
Figure 31 Firewall architecture

When connecting to the H3C VCF controller, the VCFC plug-in must be installed on CloudOS.
The plug-in implements the OpenStack driver. After a firewall is created on the page, its default
state is inactive; the related firewall resources are not sent to the VCFC
but are only stored in the CloudOS database. When the firewall is connected to a router, the
firewall status becomes active. The VCFC plug-in issues the firewall, its rule sets, and its
rules to the VCFC, and at the same time creates a virtual firewall context on the firewall device,
into which the firewall rules, routes, and the external gateways bound to the routers are sent. At
the same time, public network IP information can be published through the uplink of the context.
When the firewall is disconnected from the router, the VCFC plug-in sends an API request to the
VCFC to delete the firewall, its rule sets, and its rules, and the firewall status becomes inactive.

2.20.3 Functional characteristics

1. Support object groups


Multiple objects can be configured in the object group. After the object group is referenced by
rules, multiple discrete sources/destination addresses or ports can be restricted simultaneously to
work as the condition of matching messages. The object group is divided into IPv4 object group
and service object group. The IPv4 address objects can be configured within the IPv4 object
group to match the IPv4 address in the message. The service objects can be configured within
the service object group to match the upper protocol carried in the message, and supports TCP,
UDP and ICMP protocols.
2. Support IPS strategy
See IPS service for detailed description.

3. Support anti-virus strategy
See Anti-virus service for detailed description.

2.20.4 Application restrictions

• One rule can only be used by one firewall.


• A firewall must be associated with one rule set.

2.21 Anti-virus service


2.21.1 Service introduction

CloudOS anti-virus function integrates with Kaspersky Anti-Virus Engine, with a professional built-
in virus library. Adopting the most advanced anti-virus technologies such as the second
generation heuristic code analysis, iChecker real-time monitoring and unique script virus
interception, it can detect and kill a large number of file, network and hybrid viruses in real time. A
new generation of virtual unpacking and behavior judgment technology is used to accurately
detect and kill varieties of variant viruses and unknown viruses.

2.21.2 Product architecture

The CloudOS anti-virus function is implemented by connecting directly to the H3C VCF controller,
which realizes the function. It is used in combination with the firewall anti-virus function on the
VCFC: an anti-virus policy is configured on CloudOS, which can be either predefined or custom.
A predefined anti-virus policy is automatically generated by the system according to the virus
library; users cannot modify or delete it, and it is only updated according to the virus library when
the virus library is upgraded. Custom anti-virus policies are configured based on inherited anti-virus
templates; users can modify and delete them, and they are also updated when the virus
library is upgraded.
The anti-virus policy can be referenced by firewall rules and issued to firewall devices along with
the corresponding firewall configuration. After the configuration takes effect, the function of virus
detection for the specified traffic can be realized. The process of virus detection for the firewall
device is as follows:
(1) The firewall device receives a message that matches a firewall rule. If the rule is configured
to act in depth detection and references an anti-virus policy, the firewall continues to identify
the application-layer protocol of the message.
(2) If the application layer protocol of the message is supported by the anti-virus function, the
firewall matches the message with the virus characteristics in the virus library. Otherwise, it
will not be treated with anti-virus.
(3) If the message matches the virus feature successfully, it is further judged whether the virus
message conforms to the virus exception. Otherwise, the permissible action is executed.
(4) If the virus message conforms to the virus exception, the permissible action is executed for
this message. Otherwise, the application exception continues to be determined.
(5) If the virus message conforms to the application exception, the processing action of the
application exception is executed. Otherwise, the processing action of the application layer
protocol is executed.

2.21.3 Functional characteristics

• The firewall policy supports the anti-virus function. For detailed anti-virus function, please
refer to the introduction of firewall products of H3C products, such as M9K anti-virus.

2.21.4 Application restrictions

• Anti-virus templates, virus libraries, and application libraries need to be imported into the VCFC
in advance before anti-virus policies can be created on CloudOS.
• The anti-virus service must be used in combination with the firewall policy and cannot be
used alone.
• The service chain firewall does not support the anti-virus function.

2.22 IPS service


2.22.1 Service introduction

IPS (Intrusion Prevention System) is usually used to prevent the intrusion from internal or external
network into internal network servers and clients. It is a kind of security defense technology which
can detect and defend against the application-layer attacks. By analyzing the network traffic,
intrusion detection (including buffer overflow attack, Trojan horse, worm, etc.) is carried out, and
the intrusion behavior is stopped in real time through certain response mode to protect enterprise
information system and network architecture from infringement.

2.22.2 Product architecture

The IPS function is implemented by connecting directly to the H3C VCF controller, which
realizes the function. On CloudOS, the IPS service runs on the firewall. The IPS policy can be
referenced by firewall rules and issued to firewall devices along with the corresponding firewall
configuration, to realize IPS detection on the specified traffic.
The process of IPS detection for the firewall device is as follows:
(1) If the message received by the firewall device matches the IP blacklist successfully, the
message will be discarded directly.
(2) If the received message matches the firewall rule, and the action configured by the rule is
depth detection and references the IPS policy, the firewall performs a more detailed analysis
of the message and extracts the message characteristics.
(3) The firewall matches the extracted message characteristics with IPS characteristics, and
processes the matching message as follows:
If the message successfully matches multiple IPS features simultaneously, it is processed
according to the action with the highest priority among these features. However, the three
actions of source blocking, log generation, and packet capture are executed as long as
they exist in any of the matching actions. The priority of actions from high to low
is: resetting > redirecting > source blocking/discarding > allowing, where source blocking
has the same priority as discarding.
If the message matches only one feature, it is processed according to the
action specified in that feature.
If the message does not match any IPS feature, the firewall performs the
allow action on the message.
The IPS policies include predefined IPS policies and custom IPS policies. A predefined IPS policy
is automatically generated by the system according to the feature library; users cannot modify
or delete it, and it is only updated according to the feature library when the
feature library is upgraded. Custom IPS policies are configured based on inherited predefined IPS
policies; users can modify and delete them, and they are also updated when the feature library
is upgraded.

2.22.3 Functional characteristics

• The firewall policy supports the IPS function. For detailed IPS function, please refer to the
introduction of firewall products of H3C products, such as M9K IPS.

2.22.4 Application restrictions

• The IPS feature library and template need to be imported in VCFC in advance. Otherwise,
the IPS policies cannot be created on the cloud.
• The IPS must run on the firewall and cannot run alone.
• The service chain firewall does not support the IPS function.

2.23 Service chain service


2.23.1 Service introduction

1. Service chain
When the data message is transmitted in the network, it needs to go through a variety of service
nodes to ensure that the network can provide users with safe, fast and stable network services
according to the design requirements. These service nodes include well-known firewalls, intrusion
detection, load balancing, etc. Generally, network traffic needs to pass through these network
service points in the given order required by the business logic, and this is called a service
chain.
In CloudOS, network service nodes are also required to provide the necessary security processing
capabilities. Traffic is automatically steered through the service nodes, realizing topology-
independent, flexible, convenient, efficient, and safe delivery of traffic to the service nodes for
security processing, thus forming service function chaining on CloudOS. The service chain can be
understood as an application-based business form.

2.23.2 Product architecture

The service chain on CloudOS is mainly used to achieve the forwarding control of east-west
traffic, which is usually the traffic within the tenant.
Figure 32 East-west traffic protection diagram

2.23.3 Functional characteristics

1. Traffic feature group


At present, traffic feature groups are divided into the “subnet” type and the “port” type, which are
obtained from the data of the corresponding subnet (a subnet of the non-public-network IP
address pool) and from the ports of the organization, respectively. Traffic feature group data is
obtained by directly calling the controller interface and is not stored in the database. A traffic
feature group cannot be modified or deleted after being referenced by a service chain.

2. Service chain firewall
The service chain firewall is the instance object served by the service chain. A new firewall needs
to be created on the firewall page of the service chain, and the firewall here is an east-west
firewall (unlike the firewall created on the Firewall page, which is a north-south firewall).
When a service chain firewall is created, a context is first created on the physical device, then a
resource is created and bound to the context, and finally the controller interface is called to create
the firewall. The resource pool name of the service chain firewall must be consistent with the
resource pool name on the controller. The service chain firewall directly calls the controller
interface, so the resource is not stored in the database.
The functions of the service chain firewall page include: firewall, rule set, and rules. Please refer
to the detailed description of the Firewall service.
3. Service chain
According to the firewall rules, the service chain restricts the traffic from the “source traffic feature
group” to the “destination traffic feature group”. The service chain takes effect on the controller,
and the data is not stored in the database.
When a new service chain is created, it is necessary to select the service type “firewall” and
select the firewall instance. The firewall instance is created on the service chain firewall page.

2.23.4 Application restrictions

• Currently, the service type of service chain on CloudOS supports only firewall instances (i.e.
service chain firewalls).

2.24 VPN service


2.24.1 Service introduction

A VPN (virtual private network) is a private network built on a shared public network through
virtual connections and managed under a private management policy. The VPN
is used to establish a secure, encrypted public-network communication channel between the
remote user and the virtual private cloud (VPC), as well as between the remote user and the H3C
cloud internal network. The encrypted channel makes it easy to connect enterprise data centers,
enterprise office networks, and VPCs or cloud intranets securely and reliably. Currently, VPN
connections based on IPsec policies and IKE policies are supported.
IPsec (IP Security) is a layer-3 tunnel encryption protocol that provides high-quality,
cryptography-based security for data transmitted on the Internet, and it is a
traditional security technology for implementing layer-3 VPNs. IPsec protects user data
transmitted between communicating peers by establishing tunnels between them.
The IPsec protocol is not a separate protocol. It provides a complete security architecture for
network data security on the IP layer, including the AH (Authentication Header) and ESP
(Encapsulating Security Payload) security protocol, IKE (Internet Key Exchange) and some
algorithms for network authentication and encryption. Among them, the AH protocol and ESP
protocol are used to provide security services, and the IKE protocol is used for key exchange.
Before using IPsec to protect data, a security association (IPsec SA) needs to be established.
IKE provides IPsec with the service of automatically negotiating the establishment of IPsec SAs,
with simple configuration and strong scalability.
A VPN is composed of a VPN gateway and a VPN connection. The VPN gateway provides the
external network exit of the virtual private cloud and corresponds to the remote gateway on the
side of the user's local data center. The VPN connection connects the VPN gateway with the
remote gateway through network encryption technology, so that local data center can
communicate with virtual private cloud, and build hybrid cloud environment more quickly and
safely. The networking of the VPN service is shown as in Figure 33.

Figure 33 VPN networking

The VPN gateway is an exit gateway device established in the virtual private cloud. Through the
VPN gateway, secure and reliable encrypted communication between the virtual private cloud
and the VPC in the enterprise data center or other regions can be established.
The VPN gateway needs to be used in combination with a remote gateway in the user's local
data center, with one local data center bound to one remote gateway and one virtual private
cloud bound to one VPN gateway. The VPN supports point-to-point or point-to-multipoint connections,
so the VPN gateway and the remote gateway are in a one-to-one or one-to-many relationship.
The VPN connection is an Internet-based IPsec encryption technology that helps you quickly
build a secure and reliable encryption channel between the VPN gateway and the remote
gateway in the user's local data center. The current VPN connection supports the IPsec VPN
protocol.
The IKE and IPsec protocols are used in the VPN connection to encrypt the transmitted data to
ensure the security and reliability of the data. And the VPN connection uses public network
technology, so the cost is low.

2.24.2 Product architecture

1. Site-to-site connection
The user's business system can exist in both the local data center room and the cloud room of
H3C cloud. Through the VPN service provided by H3C cloud, the user's local data center is
connected to H3C cloud resources. The business interaction between clouds can be carried out
through the public network to easily deploy the hybrid cloud, as shown in Figure 34.
Figure 34 Functional architecture of site-to-site connection

Users can build a backup system for local data center services within VPC of H3C cloud, and
synchronize data from cloud and data center through VPN service, so as to realize remote
disaster recovery and ensure high availability of services.

2. VPC-to-VPC connection
In addition to the above application scenarios, the H3C cloud VPN also supports interconnection
between VPCs. Through the VPN gateway service provided by the H3C cloud, two different
virtual private clouds are connected, so as to achieve resource sharing between virtual private
clouds.
Figure 35 Functional architecture of VPC-to-VPC connection

2.24.3 Functional characteristics

• Simple and easy to use


It can quickly build the private network, reduce the deployment cycle, and simplify the user
side configuration and maintenance.
• High reliability and high security
IPsec VPN, IKE VPN and other technologies are used to encrypt the transmitted data to
ensure the security and reliability of data transmission.
• Low cost
It is much cheaper to build an encrypted channel over the Internet than to use a dedicated
line.

2.24.4 Application restrictions

• When a new VPN is created, the router connected to the internal network must set up an
external gateway.
• Currently, only the VPN connection of IPsec policy and IKE policy is supported.

2.25 Security group service


2.25.1 Service introduction

A security group is a named collection of network access rules that restrict the types of traffic
that can access an instance. The security group service in this system is based on the virtual
switch and provides users with security group resources and full life cycle management of those
resources. By configuring security group rules, users can implement east-west protection of the
host network.

2.25.2 Product architecture

In CloudOS, each security group can carry organization, owner, and user information.
By default, the system creates a security group for each organization; it is shared within the
organization and has no owner. Its default rules permit all outbound and inbound traffic (relative
to the virtual machine). These rules are created from the security group template, whose
configuration users can modify manually. If a user creates a security group manually, it contains
two rules by default: all traffic leaving the virtual machine is permitted, and traffic entering the
virtual machine is dropped.

After a security group is created, users can edit its rules. Rules take effect in real time after they
are created or modified. When a rule is created, the following can be specified (see the sketch
after this list):
• The rule type, such as ICMP, UDP, TCP, or HTTP;
• The direction, outbound or inbound, to control traffic leaving or entering the instances that
apply the security group;
• A specific port or port range, effective for TCP/UDP, to control traffic on specific ports of the
protocol;
• The peer type, either an IP address or a remote security group. If a security group is
selected as the peer type, any instance in that group is allowed to access any instance
using the rule;
• The actual IP address or security group of the peer, according to the selected peer type; for
example, if the peer type is a security group, specify the security group used by the peer
end;
• The IP version; currently IPv4 and IPv6 are supported.
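To make these options concrete, the following minimal sketch creates a security group and one inbound rule through the OpenStack SDK. It assumes a configured clouds.yaml entry named "cloudos" and uses placeholder names and addresses; the interface actually exposed by CloudOS may differ.

# Minimal sketch (assumed cloud name and placeholder values): creating a
# security group and one inbound rule with the OpenStack SDK.
import openstack

conn = openstack.connect(cloud="cloudos")       # assumes a clouds.yaml entry

sg = conn.network.create_security_group(
    name="web-sg", description="allow SSH from the office network")

conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",                 # inbound to the instance
    protocol="tcp",                      # rule type: tcp / udp / icmp / ...
    port_range_min=22,                   # specific port or port range
    port_range_max=22,                   #   (meaningful for TCP/UDP only)
    remote_ip_prefix="203.0.113.0/24")   # peer as an IP range; remote_group_id
                                         # would name a peer security group instead
# The IP version defaults to IPv4; IPv6 rules are also supported.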
After a security group is created, modified, or deleted, a message is published to RabbitMQ. The
back-end virtualization plug-in responds to the message, parses the rules in the security group,
converts them into specific ACL rules, and delivers them to the virtualization software. The virtual
machine applies these ACL rules to control the data flowing into and out of it.
When an instance is started, one or more security groups can be specified for it. If no security
group is explicitly specified, the default security group is automatically assigned to the new
instance.
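A short sketch of this step with the OpenStack SDK, using placeholder IDs and the security group created above; if security_groups is omitted, the default group is applied:

# Minimal sketch (placeholder IDs): attaching security groups at boot time.
import openstack

conn = openstack.connect(cloud="cloudos")

server = conn.compute.create_server(
    name="app-01",
    image_id="<image-uuid>",
    flavor_id="<flavor-uuid>",
    networks=[{"uuid": "<network-uuid>"}],
    security_groups=[{"name": "web-sg"}])   # one or more groups may be listed
conn.compute.wait_for_server(server)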
The rules associated with each security group control the traffic that may access instances in the
group. Any inbound traffic that does not match a rule is rejected by default. You can add rules to
or remove rules from a security group, and you can modify the rules of the default security group
or of any other security group.
Figure 36 Security group system architecture

2.25.3 Functional characteristics

• A security group must be specified for each cloud host, and each cloud host has a security
group by default.
• The system creates a security group for each user by default. Its rules are shown in
Table 1.

Table 1 Default rules for security groups
Direction   IP protocol type   Protocol   Port   Remote
Inbound     IPv4               Any        Any    0.0.0.0/0
Inbound     IPv6               Any        Any    ::/0
Outbound    IPv4               Any        Any    0.0.0.0/0
Outbound    IPv6               Any        Any    ::/0

• Each security group has its own default rules, and you can also add custom rules. The
default rules of a manually created security group are shown in Table 2.
Table 2 Default rules of a manually created security group
Direction   IP protocol type   Protocol   Port   Remote
Outbound    IPv6               Any        Any    ::/0
Outbound    IPv4               Any        Any    0.0.0.0/0

• Traffic sources support both IPv4 and IPv6.
• Supported protocols include: specified TCP, specified UDP, specified ICMP, other IP
protocols, all TCP, all ICMP, all UDP, DNS, HTTP, HTTPS, IMAP, IMAPS, MS SQL, MySQL,
POP3, POP3S, RDP, SMTP, SMTPS, and SSH.

2.26 Hosting service


2.26.1 Service introduction

In recent years, cloud computing services have developed rapidly, but many users still have
existing virtualized applications. How to bring these existing virtualized workloads onto the cloud
is a problem that cloud vendors must face. The community approach is to create an image of the
existing virtual machine, import it into the cloud, and then create a cloud host from it. However,
this operation interrupts the business and is time-consuming. CloudOS offers a hosting solution
that keeps the business running without interruption and avoids the time-consuming process of
making and uploading images.

2.26.2 Product architecture

The hosting service is implemented on top of the OpenStack interface, but the native OpenStack
workflow is modified. When the user selects a cloud host for hosting, the hosting service calls the
OpenStack cloud host creation interface. However, OpenStack Compute does not really create a
virtual machine; instead, it looks up the existing virtual machine in the virtualization environment
and configures the network and other parameters.

Figure 37 Hosting service system architecture

The native OpenStack API does not support hosting. CloudOS extends the API to support
creating virtual machines by specifying a UUID; when a UUID is specified, OpenStack Compute
does not actually create a virtual machine.
The specific process is as follows:
(1) The user uses the hosting service to scan the existing virtual machines in vCenter and
creates the required OpenStack network in CloudOS.
(2) The user selects the cloud host to be hosted. The hosting service calls the extended
OpenStack interface to create the cloud host, specifying the UUID of the existing virtual
machine.
(3) The request is passed to OpenStack Compute, which detects that a UUID has been
specified; instead of actually creating a virtual machine, it looks up the virtual machine in
the virtualized environment by that UUID and configures the network and other
parameters.
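The sketch below mirrors this flow as a single server-create request against the Nova servers API. The field carrying the UUID of the existing virtual machine represents the CloudOS extension described above, whose real parameter name is not documented here, so "hosted_instance_uuid" is purely a hypothetical placeholder; the endpoint, token, and IDs are placeholders as well.

# Hypothetical sketch of the hosting call. "hosted_instance_uuid" stands in
# for the undocumented CloudOS API extension; the real field name may differ.
import requests

NOVA = "http://nova.example.com:8774/v2.1"        # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>"}    # assumed auth token

body = {"server": {
    "name": "hosted-vm-01",
    "imageRef": "<image-uuid>",               # required by the API schema, but the
                                              # existing VM's disks are kept (assumption)
    "flavorRef": "<flavor-uuid>",
    "networks": [{"uuid": "<network-uuid>"}],
    # CloudOS extension (hypothetical name): UUID of the existing vCenter/CAS
    # virtual machine to adopt instead of creating a new one.
    "hosted_instance_uuid": "<existing-vm-uuid>"}}

r = requests.post(f"{NOVA}/servers", json=body, headers=HEADERS)
r.raise_for_status()
print(r.json()["server"]["id"])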

2.26.3 Functional characteristics

• Hosting cloud host
• Hosting cloud hard disk

2.26.4 Application restrictions

• Currently, the NSX scenario is not supported.
• Currently, only VMware and CAS virtual machines are supported.

2.27 Minicomputer service


2.27.1 Service introduction

Minicomputer, a term used in China for UNIX servers, refers to a high-performance computer
positioned between the PC server and the mainframe. It is widely used in industries with high
reliability requirements, especially the financial industry. CloudOS is compatible with a variety of
virtualization types and also provides comprehensive support for minicomputers.

2.27.2 Product architecture

There are two ways to connect H3CloudOS and PowerVM: the PCenter method and the
PowerVC method.
PCenter method

The minicomputer management driver is the link between H3CloudOS and PCenter. Through
this driver, H3CloudOS communicates with the PCenter service program over its REST API to
obtain information about the Power minicomputers managed by PCenter, such as power state,
number of CPUs, memory size, hard disk size, network connections, and other hardware
information.
The PCenter REST API is the standard access interface provided by the PCenter service
program; external programs and clients communicate with the PCenter service program through
this API. The PCenter service program is the core of the entire Power minicomputer
management system. Based on a client/server model, it manages the Power minicomputers and
obtains their resource information through the HMC. It also supports micro-partitioning on the
Power minicomputers, including the creation, deletion, query, and other operations on
micro-partition hardware devices (such as SCSI clients and virtual Ethernet cards). The PCenter
service program can also communicate directly with the VIOS of a Power minicomputer to create
and manage the hard disks and networks of its micro-partitions.
• Based on the collected hardware information of the managed Power minicomputers, the
PCenter service program can automatically schedule CloudOS virtual machine creation
requests to an appropriate Power minicomputer for virtual machine life cycle management.
• The HMC is dedicated to managing Power minicomputers; the PCenter service program
manages and controls the Power minicomputers through the HMC.
• The VIOS is a special partition dedicated to I/O management on a Power minicomputer.
Generally speaking, it owns all the I/O hardware resources such as hard disk controllers
and physical NICs, and provides virtual I/O services, based on these physical devices, to
the micro-partitions created on the Power minicomputer. The micro-partitions act as clients
of the services provided by the VIO server.
CloudOS manages the Power minicomputers entirely according to the standard OpenStack
model: the OpenStack API is called first, the request is dispatched to the corresponding
OpenStack Nova compute node, and Nova Compute then calls the PCenter REST interface
through the PCenter driver.
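This call chain can be pictured as a Nova virt driver that, instead of managing a local hypervisor, forwards requests to the PCenter REST service. The skeleton below is only an illustration: the real PCenter driver, its REST paths, and the exact ComputeDriver method signatures (which vary between Nova releases) are not documented here and are therefore assumptions.

# Illustrative skeleton only: a Nova virt driver delegating to PCenter over
# REST. The endpoint, paths, and simplified signatures are assumptions.
import requests
from nova.compute import power_state
from nova.virt import driver, hardware

POWER_STATE_MAP = {"on": power_state.RUNNING, "off": power_state.SHUTDOWN}


class PCenterDriver(driver.ComputeDriver):
    def __init__(self, virtapi):
        super().__init__(virtapi)
        self.endpoint = "http://pcenter.example.com:8080/api"   # assumed URL

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, allocations, network_info=None,
              block_device_info=None, power_on=True, accel_info=None):
        # Ask PCenter to create a micro-partition on a suitable Power machine;
        # PCenter talks to the HMC and VIOS to provision CPU, memory, disk
        # and virtual Ethernet resources for it.
        requests.post(f"{self.endpoint}/partitions", json={
            "name": instance.display_name,
            "vcpus": instance.vcpus,
            "memory_mb": instance.memory_mb,
        }).raise_for_status()

    def get_info(self, instance, use_cache=True):
        # Query PCenter for the partition's power state and map it onto
        # Nova's power_state constants.
        r = requests.get(f"{self.endpoint}/partitions/{instance.uuid}")
        r.raise_for_status()
        return hardware.InstanceInfo(state=POWER_STATE_MAP[r.json()["state"]])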
PowerVC method
The H3CloudOS minicomputer management driver communicates with PowerVC through the
REST API that PowerVC provides. PowerVC has a built-in program that uses the HMC to control
the Power minicomputers. The rest is similar to the PCenter method.

Figure 38 Minicomputer management system architecture

2.27.3 Functional characteristics

• Create a cloud host
• Power operation
• Add cloud hard disk
• Add virtual NIC

2.27.4 Application restrictions

• Console port output is not supported currently.
