Orchestration Tools

The document provides a comprehensive comparison of various orchestration tools, including Docker Swarm, Kubernetes, Terraform, and TOSCA/Cloudify. It outlines the characteristics, methodologies, and processes involved in orchestration, emphasizing the importance of automation in managing infrastructure and application deployment. The document concludes with a summary of the pros and cons of each tool, aiding in the decision-making process for selecting the appropriate orchestration solution.

Uploaded by akdeniz.erdem
© All Rights Reserved

Orchestration Tool Roundup - Docker Swarm vs. Kubernetes, Terraform vs. TOSCA/Cloudify vs. Heat
Agenda
• Orchestration 101
• Different approaches for orchestration
• Method of comparison
• Comparison
• Synergies
• Summary - which tool to choose?
Orchestration 101

Orchestration is a means of automating manual processes


Orchestration 101
• Common Characteristics
– Use a DSL to define a blueprint
– Execute a process based on input from the blueprint
– Pass context information between the deployed entities

• Different assumptions lead to different approaches
– Application Architecture
– Infrastructure
– Scope of automation
Goals of this Exercise

Explore the different approaches to orchestration:
– Infrastructure Centric
– Container Centric
– Pure Play
Method of Comparison
• Same Application Requirements
• Full Production Deployment
• Broken into three main groups
– Container Centric – Kubernetes, Docker
– Pure Play – Cloudify/TOSCA, Terraform
– Infrastructure Centric – Heat

• Out of scope*
– PaaS, Configuration Management (e.g. Chef, Puppet, Ansible, ...)
– Covering all orchestration solutions
– Deep dive into each orchestration technology
The Test Application

A load balancer in front of NodeJS VMs, each also running a mongos router; three Mongo-cfg VMs; and sharded mongod replica sets, with each mongod running in its own VM.
Orchestration Process - Setup
1. Create network and compute resources: VMs, security group, network, subnet, routers, LB pool
2. Install Mongo and Node binaries
3. Start mongod processes
4. Start mongo-cfg processes
5. Start mongos processes, pointing to the mongo-cfg servers
6. Pick one VM per shard and initialize the replica set
7. Pick one mongos and add shards, one at a time
8. Pick one mongos and initialize data in mongodb
9. Start nodejs processes
10. Add nodejs VMs to the LB pool
Orchestrating in Production
• Monitoring and log collection
• Manual/Auto healing
• Manual/Auto scaling
• Maintenance:
– Backup and restore
– Continuous deployment
– Infrastructure upgrades and patches
Common Requirements
• Dependency management
• Reproducible
• Cloneable
• Recoverable
Series 1: Container Centric
Quick Overview of Docker Swarm
A Docker-native clustering system
• Use a pool of hosts through a single swarm
master endpoint
• Placement constraints, affinity/anti-affinity

docker run \
  --name rs1 \
  -e affinity:container!=rs* \
  ...
Swarm Architecture
Solution Overview - Deploy - Create Replica Sets

for i in 1..{number_of_replica_sets}
  for j in 1..{number_of_nodes_for_replica_set}
    docker run \
      --name rs{i}_srv{j} \
      -e affinity:container!=rs* \
      -e affinity:container!=cfg* \
      -e constraint:daemon==mongodb \
      -d example/mongodb \
      --replSet rs{i}

Then, SSH into one host per replica set to configure it.
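The pseudocode above can be expanded into a small bash helper. This is a minimal sketch, not the talk's actual tooling: the counts are illustrative, and each command is only echoed so the script stays side-effect free (drop the `echo` to run it against a real Swarm endpoint):

```shell
#!/usr/bin/env bash
# Sketch: print the `docker run` command for every replica-set member.
# `echo` keeps the script inert; arguments mirror the slide's pseudocode.
generate_rs_commands() {
  local number_of_replica_sets=$1
  local nodes_per_replica_set=$2
  for i in $(seq 1 "$number_of_replica_sets"); do
    for j in $(seq 1 "$nodes_per_replica_set"); do
      echo docker run \
        --name "rs${i}_srv${j}" \
        -e "affinity:container!=rs*" \
        -e "affinity:container!=cfg*" \
        -e "constraint:daemon==mongodb" \
        -d example/mongodb \
        --replSet "rs${i}"
    done
  done
}

generate_rs_commands 3 3   # 3 replica sets, 3 members each
```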
Solution Overview - Deploy - Start Node.js Application Containers

Make sure you inject all mongos endpoints for the application.

for i in 1..{number_of_nodejs_servers}
  docker run \
    -P --name nodejs{i}_v1 \
    -e constraint:daemon==nodejs \
    -e affinity:container!=nodejs* \
    -e MONGO_HOSTS=<LIST_OF_MONGOS_IPs> \
    -d example/nodejs_v1 \
    nodejs server.js
Solution Overview - Deploy - Reconfigure HAProxy

Extract the Node.js container IPs using docker inspect and then:
for i in 1..{number_of_nodejs_servers}
docker exec haproxy1 \
reconfigure.sh \
--add=<IP_of_nodejs{i}:port>
Solution Overview - Mongodb Scale Out

Identical to the process of deploying the initial mongodb shards; mongodb will take care of migrating data to the new shard.
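A hedged sketch of that scale-out: start the containers for one more replica set, then register it through any existing mongos with `sh.addShard`. Hostnames and ports here are hypothetical placeholders, and the commands are echoed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: commands to add one more shard and register it via a mongos.
# Hostnames/ports are hypothetical placeholders; `echo` keeps it inert.
add_shard() {
  local rs=$1 members=$2
  for j in $(seq 1 "$members"); do
    echo docker run --name "${rs}_srv${j}" \
      -e "constraint:daemon==mongodb" -d example/mongodb --replSet "$rs"
  done
  # Register the new shard through any existing mongos; mongodb itself
  # migrates chunks onto it afterwards.
  echo mongo --host mongos1:27017 --eval "sh.addShard('${rs}/${rs}_srv1:27017')"
}

add_shard rs4 3
```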
Docker Swarm - Pros and Cons

Pros:
● Easy modeling
● Placement/Affinity

Cons:
● Basic infrastructure handling
● Manual handling of multiple instances
● "Manual" workflow
● Requires other tools for production aspects - monitoring, healing, scaling
Kubernetes
Quick Overview of Kubernetes
Container cluster manager
• Pods: tightly coupled groups of containers
• Replication controller: ensures that a specified number of pod "replicas" are running at any one time
• Networking: each pod gets its own IP address
• Service: load-balanced endpoint for a set of pods
Kubernetes Architecture
Sample Replication Controller
apiVersion: v1beta3
kind: ReplicationController
spec:
  replicas: 5
  selector:
    name: mongod-rs1
  template:
    metadata:
      labels:
        name: mongod-rs1
    spec:
      containers:
      - command: [mongod, --port, 27017, --replSet, rs1]
        image: example/mongod
        name: mongod-rs1
      - command: [mongod-rs-manager, --replSet, rs1]
        image: example/mongod-rs-manager
        name: mongod-rs1-manager
Sample Service Configuration
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    type: nodejs
  name: nodejs
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    type: nodejs
  createExternalLoadBalancer: true
Solution Overview - Deploy

• Create mongod config servers

for i in 1..3
  kubectl create -f mongod-configsvr{i}-controller.yaml
  kubectl create -f mongod-configsvr{i}-service.yaml

• Create mongos router

kubectl create -f mongos-controller.yaml
kubectl create -f mongos-service.yaml
Solution Overview - Deploy - Create Data Nodes

for i in 1..{number_of_replica_sets}
  kubectl create -f \
    mongod-rs{i}-controller.yaml

  # Now configure each replica set
  # by picking a pod to be the initial "master"
  # of each replica set and extract all
  # container IPs using "kubectl get -l ..."

  # dynamically update replica set
  # members (this will kick off this process)
  kubectl create -f mongod-rs{i}-service.yaml
Solution Overview - Node.js Heal

Failing pods are identified by Kubernetes and are automatically rescheduled.
Solution Overview - Node.js Continuous Deployment

# initially configured with 0 replicas
kubectl create -f nodejs-v{new_version}-controller.yaml

for i in 1..{number_of_nodejs_replicas}
  kubectl resize rc nodejs_v{new_version} \
    --current-replicas={i - 1} \
    --replicas={i}

  # smoke test and rollback everything if testing failed

  kubectl resize rc nodejs_v{previous_version} \
    --current-replicas={number_of_nodejs_replicas - i + 1} \
    --replicas={number_of_nodejs_replicas - i}
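The replica arithmetic in that loop is easy to get wrong, so here is the same rolling swap as a bash sketch that only prints the resize sequence. Version numbers and totals are illustrative; in later Kubernetes releases `kubectl resize` became `kubectl scale`:

```shell
#!/usr/bin/env bash
# Sketch: print the resize sequence that shifts one replica per iteration
# from the old controller to the new one. `echo` keeps it inert.
rolling_swap() {
  local old=$1 new=$2 total=$3
  for i in $(seq 1 "$total"); do
    echo kubectl resize rc "nodejs_v${new}" \
      --current-replicas=$((i - 1)) --replicas="$i"
    # (smoke test here; roll everything back if it fails)
    echo kubectl resize rc "nodejs_v${old}" \
      --current-replicas=$((total - i + 1)) --replicas=$((total - i))
  done
}

rolling_swap 1 2 3   # v1 -> v2, three replicas total
```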
Kubernetes - Pros and Cons

Pros:
● (almost) zero-configuration autoheal
● Out-of-the-box load balancer
● Simple scaling

Cons:
● No placement (yet)
● Not simple to manage stateful services
Series 2: Pure Play Orchestration
Introduction to Terraform

• By Hashicorp
• Simple (in a good way) command line tool
– Resources
– Providers and provisioners
– Modules
– Variables and outputs
Sample Configuration
resource "openstack_compute_secgroup_v2" "nodejs_security_group" {
  name        = "nodejs_security_group"
  description = "security group for mongodb"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
  rule {
    from_port   = "${var.nodejs_port}"
    to_port     = "${var.nodejs_port}"
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}
Sample Configuration
#
# Create a Network
#
resource "openstack_networking_network_v2" "tf_network" {
  region         = ""
  name           = "tf_network"
  admin_state_up = "true"
}

#
# Create a subnet in our new network
# Notice here we use a TF variable for the name of our network above.
#
resource "openstack_networking_subnet_v2" "tf_net_sub1" {
  region     = ""
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.1.0/24"
  ip_version = 4
}
Sample Configuration
resource "openstack_compute_instance_v2" "mongod_host" {
  count           = "3"
  region          = ""
  name            = "mongod_host"
  image_name      = "${var.image_name}"
  flavor_name     = "${var.flavor_name}"
  key_pair        = "tf-keypair-1"
  security_groups = ["mongo_security_group"]
  network {
    uuid = "${openstack_networking_network_v2.tf_network.id}"
  }
  ...
  provisioner "remote-exec" {
    scripts = [
      "scripts/install_mongo.sh",
      "start_mongod.sh"
    ]
  }
}
Solution Overview

• Single top-level configuration file
• Creates: network, subnet, router, floating IP, security groups, VMs, LBaaS pool
• TF module to model a mongodb shard
– No easy way to specify "I want X occurrences of this module"
– Just copy and paste...
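Since the Terraform of that era had no module-level count, each additional shard meant a hand-copied module block. A minimal sketch of what that duplication looks like (the module names, source path, and variable names are hypothetical):

```hcl
# Hypothetical layout: one hand-copied module block per shard, because
# "count" could not be applied to a module at the time.
module "shard1" {
  source      = "./mongodb_shard"  # hypothetical module path
  replica_set = "rs1"
}

module "shard2" {
  source      = "./mongodb_shard"
  replica_set = "rs2"
}
```

Newer Terraform releases (0.13+) lift this restriction by allowing `count` and `for_each` on module blocks.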
Master Assignment & Registration of Shards
• Issue - no "cluster wide" way of invoking provisioners
– Needed for configuring shard masters and adding shards to the cluster
• Option 1: use Consul
– e.g. the first instance acquires a lock and waits for the others to join
• Option 2: static allocation in the configuration
• Option 3: local-exec with locks
Terraform - Pros and Cons

Pros:
● Infrastructure & framework neutrality
● Solid support for OpenStack
● Simple and elegant
● Present plan before applying
● Support for incremental updates

Cons:
● Configurations are not portable across cloud providers
● Hard to model non-infrastructure components
● Everything is done in the context of a single resource instance
TOSCA / Cloudify
What is TOSCA?
TOSCA defines the interoperable description of applications, including their components, relationships, dependencies, requirements, and capabilities...
Cloudify – Open Source Implementation of TOSCA

Can be used as a command line tool or as a managed service. (Diagram: a provision → configure → monitor → manage cycle, with monitoring & alarming feeding back in, and plugins for CM and infrastructure.)
Containers Portability in TOSCA

(Diagram: a Container node with a containee requirement on a Docker runtime and a container capability satisfied by a Docker or Rocket runtime, hosted on a software component; the Docker image (.TAR) artifact is referenced by a URI relative to a repository such as Docker Hub.)

artifact_types:
  tosca.artifacts.impl.Docker.Image:
    derived_from: tosca.artifacts.Root
    description: Docker Image TAR
    mime_type: TBD
    file_ext: [ tar ]

# NOT YET IN TOSCA SPEC. TO BE INVENTED...
repositories:
  docker_hub:
    url: xxx
    credentials: yyy

node_templates:
  docker_webserver:
    type: tosca.nodes.Container
    requirements:
      - host:
          # omitted for brevity
    artifacts:
      - my_image: < URI of Docker Image in Repo. >
        type: tosca.artifacts.impl.Docker.Image
        repository: docker_repo

Source: VMware proposal


Solution Overview

Inputs: #nodeJS instances; mongodb deployment id or MongoConfig; #config instances; #Shards; #Replica sets per shard.

Topology: Load Balancer → NodeJS (*scalable) → substitutable Mongo, composed of Mongo-cfg, MongoS, and Mongod-shard / replica-set nodes (each *scalable), with initialization steps for the replica sets and MongoS.

Outputs: App endpoint = Load-Balancer IP/path; MongoConfig hosts; shards endpoint.
Infrastructure setup
node_templates:
  nodecellar_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      security_group:
        name: nodecellar_security_group
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          port: { get_property: [ nodecellar, port ] }
Create Mongo Shards
mongodb:
  type: tosca.nodes.mongodb.Shard
  directives: [substitutable]
  properties:
    count: { get_input: servers_count_in_replica_set }
  requirements:
    - host:
        node: mongo_server
  capabilities:
    scalable:
      properties:
        min_instances: 1
        max_instances: 10
        default_instances: { get_input: mongodb_rs_count }
Create Compute Instances
mongo_server:
  type: tosca.nodes.Compute
  capabilities:
    host:
      properties: *host_capabilities
    os:
      properties: *os_capabilities
    scalable:
      properties:
        min_instances: 1
        max_instances: 10
        default_instances: 5
Create MongoDB Replica Set
mongo_db_replica_set:
  type: tosca.nodes.DBMS
  requirements:
    - host:
        node: mongo_server
  interfaces:
    Standard:
      create: Scripts/mongodb/create.sh
      configure:
        implementation: Scripts/mongodb/config.sh
        inputs:
          mongodb_ip: { get_attribute: [mongo_server, addr] }
      start: Scripts/mongodb/start.sh
Create NodeJS Containers
nodecellar_container:
  type: tosca.nodes.NodeCellarAppContainer
  properties:
    port: { get_input: nodejs_app_port }
  interfaces:
    cloudify.interfaces.lifecycle:
      create:
        inputs:
          ....
          command: nodejs server.js
          environment:
            NODECELLAR_PORT: { get_property: [SELF, port] }
            MONGO_PORT: { get_property: [SELF, database_connection, port] }
            MONGO_HOST: { get_attribute: [SELF, database_connection, private_address] }
          .....
Create Load Balancer
haproxy:
  type: tosca.nodes.Proxy
  properties:
    frontend_port: 80
    statistics_port: 9000
    backend_app_port: { get_property: [ nodecellar, port ] }
  requirements:
    - host:
        node: haproxy_frontend_host
    - member:
        node: nodecellar_container

Get the web containers through the relationship and update the load balancer accordingly.
Handling Post Deployment through Workflow & Policies
● Cloudify Workflows
cfy executions start -w install ...
Script execution in Python with context to the deployment graph
● Built-in workflows
o Install
o Uninstall
o Heal
o Scale
● Discovery through graph navigation
● Remote/Local execution
Summary TOSCA/Cloudify

Pros:
● Infrastructure & framework neutrality
● Complete life cycle management
● Handles infrastructure & software
● Production orchestration*
o Monitoring
o Workflow
o Policies
o Logging

Cons:
● The spec is still evolving
● Cloudify isn't 100% compliant yet
● Limited set of tooling

*Implementation specific
Series 3: Infrastructure Centric
• Overview of Heat
• Orchestrating NodeJS/MongoDB with Heat
• Summary – Benefits/ Limitations
What is Heat?

Heat provides a mechanism for orchestrating OpenStack resources through the use of modular templates.
Heat Architecture
Solution Overview

Inputs: #nodeJS instances; #config instances; #Replica sets per shard; MongoConfig hosts; Mongo shard hosts.

Topology: Load Balancer → NodeJS; mongo-cfg replica set (initialized by a replica-set script); MongoS (initialized by a MongoS script).

Outputs: App endpoint = Load-Balancer IP/path; mongos node hosts; mongo-cfg node hosts; replica set node hosts; ssh-key and private IP of the init node.
Infrastructure setup
resources:
  secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      name:
        str_replace:
          template: mongodb-$stackstr-secgroup
          params:
            $stackstr:
              get_attr:
                - stack-string
                - value
      rules:
        - protocol: icmp
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22
        - protocol: tcp
          port_range_min: 27017
          port_range_max: 27019
Create Compute Instances
mongo_host:
  type: OS::Nova::Server
  properties:
    name:
      str_replace:
        template: $stackprefix-$stackstr
        params:
          $stackprefix:
            get_param: stack-prefix
          $stackstr:
            get_attr:
              - stack-string
              - value
    image:
      get_param: image
    flavor:
      get_param: flavor
    security_groups:
      - get_param: security_group
Create MongoDB Replica Servers
mongodb_peer_servers:
  type: "OS::Heat::ResourceGroup"
  properties:
    count: { get_param: peer_server_count }
    resource_def:
      type: { get_param: child_template }
      properties:
        server_hostname:
          str_replace:
            template: '%name%-0%index%'
            params:
              '%name%': { get_param: server_hostname }
        image: { get_param: image }
        flavor: { get_param: flavor }
        ssh_key: { get_resource: ssh_key }
        ssh_private_key: { get_attr: [ssh_key, private_key] }
        kitchen: { get_param: kitchen }
        chef_version: { get_param: chef_version }
Configure the Replica Servers
server_setup:
  type: "OS::Heat::ChefSolo"
  depends_on:
    - mongodb_peer_servers
  properties:
    username: root
    private_key: { get_attr: [ssh_key, private_key] }
    host: { get_attr: [mongodb_peer_servers, accessIPv4, 0] }
    kitchen: { get_param: kitchen }
    chef_version: { get_param: chef_version }
    node:
      mongodb:
        ruby_gems:
          mongo: '1.12.0'
          bson_ext: '1.12.0'
        bind_ip: { get_attr: [mongodb_peer_servers, privateIPv4, 0] }
        use_fqdn: false
        replicaset_members: { get_attr: [mongodb_peer_servers, privateIPv4] }
        config:
          replset: myreplset
      run_list: [ "recipe[config_replset]" ]
Create NodeJS Container
nodestack_chef_run:
  type: 'OS::Heat::ChefSolo'
  depends_on: nodestack_node
  properties:
    ...
    node:
      nodejs_app:
        ...
        deployment:
          id: { get_param: stack_id }
          app_id: nodejs
      run_list: ["recipe[apt]",
                 "recipe[nodejs]",
                 "recipe[ssh_known_hosts]",
                 "recipe[nodejs_app]"]
    data_bags:
      nodejs:
        id: { get_param: stack_id }
        nodejs_app:
          password: { get_attr: [nodejs_user_password, value] }
          deploy_key: { get_param: deploy_key }
          database_url:
            str_replace:
              template: 'mongodb://%dbuser%:%dbpasswd%@%dbhostname%'
              params:
                '%dbuser%': { get_param: database_username }
                '%dbpasswd%': { get_param: database_user_password }
                '%dbhostname%': { get_param: db_server_ip }
Summary

Pros:
● Native to OpenStack
● Built-in mapping of all the OpenStack infrastructure resource types

Cons:
● Limited to OpenStack
● Software configuration is limited
● Lack of built-in workflow
● Production orchestration is limited
o Requires integration with other tools/projects
Potential Synergies
• Magnum - Kubernetes + Docker, Heat
• Cloudify/TOSCA + Docker
• Cloudify/TOSCA + Heat
Which orchestration tool should I choose?
Final Words..
The Only Constant Is Change!
More Change Ahead..
Further Reading..
OpenStack Vancouver Session
