
2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC)

Possibility of HPC application on Cloud infrastructure by container cluster

Kyunam Cho, Hyunseok Lee, Kideuk Bang, Sungsoo Kim
Samsung Research, Samsung Electronics, Seoul, Republic of Korea
[email protected], [email protected], [email protected], [email protected]

DOI 10.1109/CSE/EUC.2019.00059

Abstract— Today, High Performance Computing (HPC) is used to solve problems in many domains, including science and engineering as well as streaming-processing applications. All of these domains require large amounts of computational resources. For HPC applications, wall time and calculation accuracy are the important factors. Cloud infrastructure has therefore rarely been acceptable for HPC applications, because it carries performance overhead compared with native environments. However, new hardware acceleration devices, the increased demand for large-scale calculations by AI applications, and the evolution of container technology have increased the possibility of running HPC applications on cloud infrastructure. In this study, we evaluate and compare the performance of several applications - a Poisson's equation solver, ResNet50, and Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) models - to assess the possibility of running HPC applications on cloud infrastructure. We find only 0.5% overhead for ResNet50 on cloud infrastructure, which indicates that special-purpose HPC applications, such as AI training or General-Purpose computing on Graphics Processing Units (GPGPU) oriented applications, can run on cloud infrastructure. We also observe no performance overhead in cache miss rate or InfiniBand latency.

Keywords— HPC, high performance computing, container, cloud, infrastructure, AI
The second reason lies in the characteristics of AI
I. INTRODUCTION

Today, cloud environments are widely used in numerous fields of science and engineering. In recent years, the adoption rate has increased dramatically due to the robustness of cloud technology. First, cloud environments are much easier to deploy and terminate than traditional computing resources based on physical servers. Second, cloud computing is more cost effective than traditional server computing, especially in the early development stage, and competition among cloud providers such as Amazon Web Services (AWS) [1], Microsoft Azure [2], and Google Cloud Platform [3] has accelerated the decrease in operating cost. Next, cloud resources are easily accessible: public cloud resources such as AWS can be used with just a few clicks. Finally, cloud technology provides various computing environments within a single hardware environment, and keeps those environments isolated even though the cloud computing resources are located on a single server machine. In spite of these strengths, cloud technology is not used for many HPC applications. The perceived performance overhead of cloud technology poses the biggest challenge to its acceptance for HPC applications.

Many researchers have evaluated the performance overhead of cloud environments. Zheng Li et al. [4] showed that a cloud environment built on virtual machines has a maximum computation overhead of 8% compared with the native environment. Roberto Morabito et al. [5] measured a network I/O performance overhead of 28.4% for a TCP stream in a virtual machine environment.

However, the situation has changed in the past two to three years, and we see three major influencing factors. The first is the revolution in Artificial Intelligence (AI) over this period. AI research uses HPC technology for its computations, and due to the evolution of deep neural networks, many researchers in the natural sciences and engineering have started applying AI technology in their own research areas. The increased use of AI applications led to increased demand for HPC infrastructure and changed the way infrastructure is shared, e.g., by enabling AI infrastructure in the public cloud. This is the first reason why cloud technology is now considered in the HPC domain.

The second reason lies in the characteristics of AI technology. AI workloads perform their large computations on GPGPUs, and there is no performance loss in the GPGPU resources when cloud technology provides the environment, because cloud technology consumes CPU resources only for management and virtualization; its performance overhead therefore barely affects the computation. Moreover, AI research applies many different methodologies to a single problem, which creates demand for varied computation configurations and environments. Cloud technology has no performance overhead for GPGPU resources and provides various configurations and environments on the same hardware, strengths that make the cloud environment well suited to AI applications.

The last reason comes from the evolution of cloud technology itself. Linux containers (LXC) [6] are a method for creating virtual environments with low performance overhead, and container orchestration technologies such as Docker [7] and Kubernetes [8] use LXC internally.

Several benchmarks show that container-based cloud environments have very low performance overhead [9][10]. Based on these three factors, we present a number of results showing the possibility of running HPC applications on cloud infrastructure.

We run a Poisson's equation solver in two environments - a bare-metal server cluster and a container cluster - and compare traditional HPC application performance between the two. We also run Tensorflow [11] benchmark applications using Horovod [12] in both environments, and we evaluate two further performance factors - cache miss rate and InfiniBand bandwidth - in the cloud environment. Our experimental results confirm a partial possibility of running HPC applications on cloud infrastructure: there is only 0.5% overhead for ResNet50 and 3% overhead for RNNs with LSTM, establishing that the performance overhead of running GPGPU-computing-oriented applications in a container environment is very small. We also found no InfiniBand performance overhead in the container environment.

We make the following contributions:

- We evaluate and compare the performance of several applications on cloud infrastructure.
- We observe that HPC applications can run on cloud infrastructure conditionally.
- We identify that there is no performance overhead in cache miss rate or InfiniBand latency on cloud infrastructure.

The outline of this paper is as follows. In Section II, we describe related works and the background of the cloud environment. In Section III, we present our experimental environments and conditions. In Section IV, we show the experimental results. Conclusions and future work are covered in Section V.
II. RELATED WORKS

A. Kubernetes
For container orchestration in our evaluation environment we use Kubernetes [8], a cluster management and container orchestration tool developed by Google whose popularity has grown rapidly; it works on top of Docker [7]. Kubernetes schedules pods - groups of containers - onto the cluster based on the resources they require, and provides auto scale-out/in and high availability according to the user's description.

B. Kubernetes-coreos-cluster
To enable MPI communication on Kubernetes, we use the kubernetes-coreos-cluster [13] project, which deploys multiple containers on a Kubernetes cluster and configures MPICH across them. With kubernetes-coreos-cluster, an MPI application runs in exactly the same manner as in a native environment, as the sketch below illustrates.
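As an illustration, the following minimal MPI program (our sketch, using mpi4py; the project itself wires up MPICH, so a compiled C MPI binary behaves the same way) is launched identically in the native and container environments, e.g. with mpirun -np 44 python hello.py:

```python
from mpi4py import MPI  # assumes mpi4py is installed in the container image

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id within the MPI job
size = comm.Get_size()          # total number of MPI processes

print("rank %d of %d running on %s" % (rank, size, MPI.Get_processor_name()))

# Collectives cross container boundaries exactly as they cross
# bare-metal nodes once MPICH knows the pod addresses.
total = comm.allreduce(rank, op=MPI.SUM)
if rank == 0:
    print("sum of all ranks:", total)
```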
C. Performance benchmarking
Fig. 1 illustrates that a container environment has a smaller software stack than a virtual machine environment, which implies that a container environment has less overhead than a virtual machine. Several pioneering studies have compared the performance of native, virtual machine, and container environments. Roberto Morabito et al. [5] presented a performance comparison among the three environments across four benchmark categories: CPU performance, disk I/O performance, memory performance, and network I/O performance. Ann Mary Joy [14] presented a performance comparison between virtual machines and containers using a web front-end application. Zheng Li et al. [4] compared the performance overhead of hypervisor-based and container-based virtualization using the HPC Challenge (HPCC) benchmark suite [15].

Fig. 1. Overall architecture of virtual machine and container environment

D. Container in HPC
Some frontier studies have suggested methodologies for utilizing containers in the HPC domain. Maximilien de Bayser et al. [16] proposed integrating MPI with Docker for HPC and introduced an overlay network method. Douglas M. Jacobsen et al. [17] introduced an automated scale-out cluster that deploys containers from user-defined Docker images. Reid Priedhorsky et al. [18] showed that a container cluster for HPC has no performance penalty for KB-size messages on 2,048-node clusters.

E. Poisson's Solver
Poisson's equation is a partial differential equation of elliptic type with broad application to engineering problems. In electrostatics, for example, the potential field caused by a charge density distribution can be calculated using Poisson's equation. The 1-dimensional Poisson's equation can be solved analytically, while a 2-dimensional Poisson's equation can be discretized into a linear system using a finite numerical method such as the finite difference method (FDM). That linear system can then be solved with iterative numerical methods such as the conjugate gradient method, which parallelizes easily: the matrix produced by the FDM discretization is symmetric and highly sparse, so the calculation can proceed in a peer-to-peer communication manner.
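The sketch below (our illustration in Python, sized to match the 40,401-DOF problem and stopping criteria reported in Section IV) shows the serial skeleton of such a solver; in the MPI version, each rank owns a block of the grid, the two dot products per iteration become allreduce operations, and the stencil application requires halo exchanges with neighboring ranks:

```python
import numpy as np

n = 201                        # 201 x 201 grid -> 40,401 unknowns, as in Section IV
h = 1.0 / (n + 1)              # grid spacing on the unit square

def apply_laplacian(u):
    """5-point finite-difference stencil for -laplacian(u) with zero
    Dirichlet boundaries; this plays the role of the sparse symmetric A."""
    au = 4.0 * u
    au[1:, :]  -= u[:-1, :]    # north neighbor
    au[:-1, :] -= u[1:, :]     # south neighbor
    au[:, 1:]  -= u[:, :-1]    # west neighbor
    au[:, :-1] -= u[:, 1:]     # east neighbor
    return au / h**2

def conjugate_gradient(b, tol=1e-10, max_iter=10000):
    u = np.zeros_like(b)
    r = b - apply_laplacian(u)       # initial residual
    p = r.copy()
    rs = np.vdot(r, r)               # in MPI: local dot product + MPI_Allreduce
    for _ in range(max_iter):
        ap = apply_laplacian(p)      # in MPI: halo exchange, then local stencil
        alpha = rs / np.vdot(p, ap)
        u += alpha * p
        r -= alpha * ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:    # tolerance of 1.0e-10, as in Section IV
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

u = conjugate_gradient(np.ones((n, n)))  # constant source term f = 1
```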
F. Tensorflow
Tensorflow [11] is a software library for high-performance numerical computation. It uses data flow graphs, in which the graph nodes represent mathematical operations while the graph edges represent the multidimensional data arrays that flow between them.

Due to its flexible and modular architecture, computation can be deployed to one or more CPUs or GPGPUs in any environment without rewriting code. Tensorflow was originally developed by researchers and engineers on the Google Brain team; it is now open source and widely used for machine learning applications such as neural networks.

G. Horovod
Horovod [12] is an open-source distributed deep learning framework on top of Tensorflow, developed by Uber. It makes it easy to take a single-GPGPU Tensorflow program and train it on many GPGPUs in parallel to increase training speed. Its developers found the MPI model more straightforward, and requiring fewer code changes, than distributed Tensorflow. Horovod is built on MPI concepts such as size, rank, local rank, allreduce, allgather, and broadcast.
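In outline, adapting a single-GPGPU program follows the pattern below (our sketch with a toy loss, written against the TF1-era Horovod API of the paper's timeframe, not the paper's actual benchmark script):

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()  # one process per GPGPU, launched e.g. via `mpirun -np 64 python train.py`

# Pin each process to its local GPGPU (local rank = card index on this server).
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Toy model standing in for the ResNet50 / LSTM graphs used in the paper.
x = tf.Variable(tf.random_normal([1000, 10]))
loss = tf.reduce_mean(tf.square(x))

# Scale the learning rate by the worker count, then wrap the optimizer:
# DistributedOptimizer averages gradients across all workers with allreduce.
opt = tf.train.GradientDescentOptimizer(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
train_op = opt.minimize(loss, global_step=tf.train.get_or_create_global_step())

hooks = [
    hvd.BroadcastGlobalVariablesHook(0),   # broadcast rank 0's initial weights
    tf.train.StopAtStepHook(last_step=1000),
]
with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```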
III. EXPERIMENTS AND EVALUATION DETAILS

We conducted a number of experiments to check the possibility of running HPC applications on cloud infrastructure using containers. We evaluated the performance of a parallel Poisson's equation solver and the InfiniBand bandwidth of the native and container environments. The native environment means there is no software stacked over the OS; the container environment is built using container technology. In the Poisson's equation solver evaluation, both environments communicate between nodes by MPI, and we also measure cache misses in both environments. The other experiments are machine learning training applications that offload most of their computation to GPGPUs. For these, we evaluated two types of training algorithms: ResNet50 [19] and Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) [20]. Both run on distributed Tensorflow with the Horovod framework.

A. Hardware environment
We use two kinds of bare-metal server clusters: a CPU cluster and a GPGPU cluster. Every server in the CPU cluster has 2 x 2.2GHz Intel Xeon Broadwell (E5-2640-V4) CPUs, 8 x Micron 8GB DDR4 memory, and a SuperMicro AOC-UR-i4XT network card with a maximum network speed of 10,000Mbps. Every server in the GPGPU cluster has 2 x 2.6GHz Intel Xeon (E5-2690-V4) CPUs, 32 x 8GB Samsung DDR4 memory, an MCX555A-ECAT ConnectX-5 VPI adapter card with EDR InfiniBand (100Gb/s), and NVIDIA Tesla P40 GPGPU cards.

B. Experiment methodology
To evaluate the performance of the Poisson's equation solver, we measure scalability on multiple MPI nodes with three different communication manners: collective, synchronous, and asynchronous communication. Compact affinity is used for node affinity. To measure the scalability of the solver, we launch the MPI application on the CPU cluster from 1 node to 44 nodes. Additionally, we measure the cache miss rate of the solver with valgrind, and we measure InfiniBand performance. Finally, we measure the performance of the two machine learning training applications on the GPGPU cluster: ResNet50 and RNNs with LSTM, both included in Tensorflow as sample applications. We use the ImageNet [21] dataset for the performance evaluation of ResNet50 and the PTB [22] dataset for the evaluation of RNNs with LSTM. Every evaluation is repeated 10 times and the mean value is used as the result. Evaluations are performed in the native environment and in the container environment, with Kubernetes as the orchestration tool for the latter.

IV. RESULTS AND DISCUSSION

A. MPI application scalability
Fig. 2 and Table I show the scalability of the Poisson's equation solver in the native and container environments. To show the optimal result, Fig. 2 and Table I present only the asynchronous communication manner. The matrix used in solving Poisson's equation had 40,401 degrees of freedom (DOF), the conjugate gradient ran for a maximum of 10,000 iterations, and the solution tolerance was 1.0e-10. We measure only strong-scaling performance. Overhead represents the relative wall-time gap between the container and native environments.

TABLE I. WALL TIME FOR POISSON'S EQUATION SOLVER (SEC)

Node Count     1        10      20     30     40     44
Native         3203.19  103.41  28.10  12.57  8.39   6.26
Container      3205.04  103.43  28.14  16.14  10.15  10.27
Overhead (%)   0.06     0.02    0.13   22.15  17.27  39.07
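As a reading aid, the Overhead row is the wall-time gap normalized by the container time; this formulation is our inference, but it reproduces the tabulated values:

```latex
\mathrm{Overhead}(n) \;=\; \frac{T_{\mathrm{container}}(n) - T_{\mathrm{native}}(n)}{T_{\mathrm{container}}(n)} \times 100\%,
\qquad \text{e.g.}\quad \frac{10.27 - 6.26}{10.27} \times 100\% \approx 39.0\% \quad \text{at } n = 44.
```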
Fig. 2. Scalability evaluation result of Poisson's equation solver (wall time in seconds vs. MPI node count, 1 to 44, for native and container) – Lower is faster; the time axis is shown in log scale

Below 20 nodes, there is no significant performance gap between the two environments. However, when more than 20 nodes are involved in the computation, the performance gap grows. It appears from 21 nodes onward, i.e., as soon as there is one more MPI node than fits in a single physical server, and it stems from network overhead when communicating between physical servers: that overhead is due to the virtual network overlay used for communication among containers. As a result, at 44 nodes the container environment is not only slower than the native environment but also slower than its own 40-node result. There is a 17.27% performance overhead when using 40 nodes in the container environment, in line with the evaluation results of R. Morabito et al. [5], and the overhead approaches 40% when 44 nodes are used.

This suggests that the container environment cannot fully utilize the CPU computing resources. Fig. 3 and Fig. 4 show the efficiency of communication optimization in both environments, where the efficiency is the wall time of each evaluation expressed as a ratio against the corresponding result at node count 1. We confirmed that MPI communication optimization methods can also be applied in the container environment.

Fig. 3. Communication optimization efficiency in the native environment – Lower is better. The underlying ratios are:

Node count     1     10    20    30    40    44
Collective     1     1     1     1     1     1
Synchronous    1.00  0.31  0.16  0.11  0.09  0.08
Asynchronous   1.00  0.31  0.16  0.11  0.09  0.08

Fig. 4. Communication optimization efficiency in the container environment – Lower is better. The underlying ratios are:

Node count     1     10    20    30    40    44
Collective     1     1     1     1     1     1
Synchronous    1.00  0.31  0.16  0.11  0.09  0.09
Asynchronous   1.00  0.31  0.16  0.11  0.09  0.09


B. MPI application cache miss rate
We measure the MPI application cache miss rate in both environments using valgrind's cachegrind tool [23], with the Poisson's equation solver as the workload; the cache miss rate measurement is performed separately from the MPI application scalability evaluation. Fig. 5 and Table II show the measurement results. We find a slightly lower cache miss rate in the container environment. The cache miss rate is one of the most important optimization points for an HPC application, so based on this result we confirm that the cache miss rate in containers is acceptable for HPC applications.

TABLE II. CACHE MISS EVALUATION RESULT OF MPI APPLICATION (%)

Type        I1     LLi    D1     LLd    LL
Native      0.31%  0.17%  1.40%  0.80%  0.30%
Container   0.28%  0.15%  1.30%  0.70%  0.30%

Fig. 5. Cache miss rate evaluation result (I1, LLi, D1, LLd, and LL miss rates, native vs. container) – Lower is better

C. InfiniBand latency
To measure communication performance via InfiniBand in both environments, we measure InfiniBand bandwidth and latency using the OpenFabrics Enterprise Distribution [24]. The InfiniBand bandwidth is determined by the hardware, so there is no difference between the two environments. The latency comparison also shows very little difference between them; Table III and Fig. 6 show the InfiniBand latency evaluation results. When measuring InfiniBand performance, we changed the Kubernetes CNI from Calico to Flannel, because Calico does not support the VXLAN mode required for using InfiniBand in the container environment. The test application sends an 8MB (8,388,608-byte) message 1,000 times. The evaluation results are reported as minimum, maximum, typical (Typ), average, 99th-percentile, and 99.9th-percentile values.

TABLE III. INFINIBAND LATENCY EVALUATION RESULT (µs)

Evaluation  Min     Max     Typ     Avg     99%     99.9%
Native      715.39  751.18  718.78  719.68  735.58  751.18
Container   714.87  745.01  717.66  718.38  735.70  745.01

Fig. 6. InfiniBand latency evaluation result (time in µs for Min, Max, Typical, Avg, 99th and 99.9th percentiles) – Lower is better

There is no significant difference between the two environments in the average and typical values; however, at the maximum and the 99.9th percentile there is a small difference in performance.
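The paper's measurements use the OFED utilities; a rough equivalent of the latency test, sketched here in mpi4py (our construction, run across two nodes with mpirun -np 2), is a classic ping-pong:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

reps = 1000                                       # 1,000 sends, as in the text
buf = np.zeros(8 * 1024 * 1024, dtype=np.uint8)   # 8,388,608-byte message

comm.Barrier()                                    # align both ranks before timing
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1, tag=0)
        comm.Recv([buf, MPI.BYTE], source=1, tag=0)
    else:
        comm.Recv([buf, MPI.BYTE], source=0, tag=0)
        comm.Send([buf, MPI.BYTE], dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    # Half of the mean round-trip time approximates the one-way latency.
    print("one-way latency: %.2f us" % ((t1 - t0) / (2 * reps) * 1e6))
```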

D. Machine learning training application performance
To measure the performance overhead of machine learning training applications in the container environment, we evaluate two distributed machine learning training applications - ResNet50 and RNNs with LSTM - in both environments. This measurement shows the performance gap of GPGPU-oriented HPC applications between the two environments. Both evaluations use a total of 64 GPGPU cards on 8 physical servers, each containing 8 GPGPU cards; one ResNet50 or RNNs-with-LSTM worker runs on one GPGPU card. The ResNet50 result is reported as images processed per second, and the RNNs-with-LSTM result as words per second. Table IV shows the evaluation results of the two applications, and Fig. 7 presents them as ratios against the native results.
TABLE IV. MACHINE LEARNING APPLICATION EVALUATION RESULT - 64 WORKERS ARE USED

Application  ResNet50             RNNs with LSTM
Native       3,622.08 images/sec  49,296.24 words/sec
Container    3,604.11 images/sec  47,782.82 words/sec
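Reading the overheads back from the table (our arithmetic, not additional measurement data):

```latex
\frac{3622.08 - 3604.11}{3622.08} \times 100\% \approx 0.5\% \quad (\text{ResNet50}),
\qquad
\frac{49296.24 - 47782.82}{49296.24} \times 100\% \approx 3.1\% \quad (\text{RNNs with LSTM}),
```

consistent with the Fig. 7 ratios of 0.995 and 0.97.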
For ResNet50, there is only a 0.5% performance overhead. ResNet50 consists largely of GPGPU code blocks and sends only the parameter values needed to compute the neural network weights and biases. RNNs with LSTM show a 3% performance overhead - more than ResNet50, but still less than the overheads previously measured for other applications by Zheng Li et al. [4]. This can be attributed to the characteristics of machine learning training applications: both applications perform far more of their computation on the GPGPU than on the CPU, so machine learning training is less affected by the computation performance overhead of the container environment.

Fig. 7. Machine learning application performance comparison - throughput as a ratio against the native result (Native = 1 for both; Container = 0.995 for ResNet50 and 0.97 for RNNs with LSTM)
V. CONCLUSIONS AND FUTURE WORK

We presented the possibility of running HPC applications on cloud infrastructure built on a container cluster. We observed performance overhead for CPU-oriented applications; even though container technology has many benefits, this level of performance overhead cannot be accepted for HPC applications. However, we found that communication optimization methods can still be applied with container technology, and we found no cache miss rate overhead in the container environment. We also observed that machine learning training applications have very small overhead in the container environment, and no performance loss in InfiniBand usage. We can therefore confirm that HPC applications can run on cloud infrastructure conditionally: if an HPC application is GPGPU oriented or works on heterogeneous computing, it can run in a container cluster environment with the many benefits of cloud computing.

A cloud environment built with container technology still has limitations in several respects. It needs extra computational resources for management and orchestration, so some computing resources must be assigned to it; that overhead should be reduced in the future. It also has network overhead. In our next research, we will investigate the most suitable network configuration in a container environment for HPC applications and improve network performance, and we will study the best-fit optimization methods for HPC applications in container-based environments.

REFERENCES
[1] "Amazon Web Services." [Online]. Available: https://aws.amazon.com/
[2] "Microsoft Azure." [Online]. Available: https://azure.microsoft.com/en-us/
[3] "Google Cloud Platform." [Online]. Available: https://cloud.google.com/
[4] Z. Li, M. Kihl, Q. Lu, and J. A. Andersson, "Performance Overhead Comparison between Hypervisor and Container Based Virtualization," in 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), 2017, pp. 955–962.
[5] R. Morabito, J. Kjallman, and M. Komu, "Hypervisors vs. Lightweight Virtualization: A Performance Comparison," in 2015 IEEE International Conference on Cloud Engineering, 2015, pp. 386–393.
[6] "Linux Containers." [Online]. Available: http://linuxcontainers.org
[7] "Docker." [Online]. Available: https://www.docker.com/
[8] "Kubernetes." [Online]. Available: https://kubernetes.io/
[9] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and Linux containers," in 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2015, pp. 171–172.
[10] C. Ruiz, E. Jeanvoine, and L. Nussbaum, "Performance evaluation of containers for HPC," in European Conference on Parallel Processing, 2015, pp. 813–824.
[11] M. Abadi et al., "Tensorflow: a system for large-scale machine learning," 2016.
[12] A. Sergeev and M. Del Balso, "Horovod: fast and easy distributed deep learning in TensorFlow," Feb. 2018.
[13] "MPICH Multinode Cluster." [Online]. Available: https://github.com/ContinUSE/kubernetes-coreos-cluster/tree/master/examples/mpich
[14] A. M. Joy, "Performance comparison between Linux containers and virtual machines," in 2015 International Conference on Advances in Computer Engineering and Applications, 2015, pp. 342–346.
[15] P. R. Luszczek et al., "S12---The HPC Challenge (HPCC) benchmark suite," in Proceedings of the 2006 ACM/IEEE conference on Supercomputing - SC '06, 2006, p. 213.
[16] M. de Bayser and R. Cerqueira, "Integrating MPI with Docker for HPC," in 2017 IEEE International Conference on Cloud Engineering (IC2E), 2017, pp. 259–265.

[17] D. M. Jacobsen and R. S. Canon, "Contain This, Unleashing Docker for HPC," 2015.
[18] R. Priedhorsky and T. Randles, "Charliecloud," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis - SC '17, 2017, pp. 1–10.
[19] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[20] W. Zaremba, I. Sutskever, and O. Vinyals, "Recurrent Neural Network Regularization," 2014.
[21] J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[22] M. P. Marcus, B. Santorini, M. A. Marcinkiewicz, and A. Taylor, "Treebank-3 LDC99T42." Philadelphia: Linguistic Data Consortium, 1999.
[23] N. Nethercote, "Dynamic binary analysis and instrumentation," University of Cambridge, Computer Laboratory, 2004.
[24] "OpenFabrics Enterprise Distribution." [Online]. Available: https://www.openfabrics.org/

