
Lab 2: Deploying the Ceph Cluster

Introduction
In this lab, you will learn how to deploy a Ceph cluster using the Cephadm tool.

There are several ways to deploy a Ceph cluster:


Cephadm: Installs and manages a Ceph cluster using containers and systemd.
Rook: Deploys and manages Ceph clusters running in Kubernetes.
ceph-ansible: Deploys and manages Ceph clusters using Ansible.
ceph-deploy: A tool for quickly deploying clusters.
DeepSea: Installs Ceph using Salt.
jaas.ai/ceph-mon: Installs Ceph using Juju.
github.com/openstack/puppet-ceph: Installs Ceph via Puppet.
Ceph can also be installed manually.
Note: Ensure that the prerequisites from Lab 1 are installed on all of the nodes before
proceeding with this lab.
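
As a quick sanity check (this sketch assumes the Lab 1 setup used podman as the container runtime and chrony for time synchronization), you can confirm the key prerequisites on each node before continuing:

# podman --version
# systemctl is-active chronyd
# python3 --version
# lvm version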

1. Ensure that you are logged in as the root user (password: linux) on the ceph-mon1 node.

1.1 Let us first fetch the most recent version of the cephadm standalone script from GitHub
using curl.

# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm

1.2 Add execute permission to the cephadm script.

# chmod +x cephadm

1.3 Add the Octopus release repository on the host.

# ./cephadm add-repo --release octopus

Output:

1.4 Install the cephadm package.

# ./cephadm install

Output:

1.5 Confirm that the cephadm command is installed and is in your path.

# which cephadm

Output:

1.6 Let us bootstrap a new cluster.

# mkdir -p /etc/ceph
# cephadm bootstrap --mon-ip 192.168.100.51

Output:

Note: At the end of the installation, the dashboard credentials are displayed; save these
details for future use.

This command will:

1. Create a monitor and manager daemon for the new cluster on the local host.
2. Generate a new SSH key for the Ceph cluster and add it to the root user’s
/root/.ssh/authorized_keys file.
3. Write a minimal configuration file needed to communicate with the new cluster to
/etc/ceph/ceph.conf.
4. Write a copy of the client.admin administrative (privileged!) secret key to
/etc/ceph/ceph.client.admin.keyring.
5. Write a copy of the public key to /etc/ceph/ceph.pub.

Bootstrap writes these files to /etc/ceph for convenience, so that any Ceph packages
installed on the host can easily find them. Run cephadm bootstrap -h to see all available
options; an example with a few common ones is shown below.
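
For example, a bootstrap invocation that also sets the initial dashboard credentials might look like the following sketch (the user name and password are placeholders; confirm the exact flags with cephadm bootstrap -h on your version):

# cephadm bootstrap --mon-ip 192.168.100.51 --initial-dashboard-user admin --initial-dashboard-password 'StrongPassword123'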
1.7 Create an alias for the cephadm shell for ease of access.

# alias ceph='cephadm shell -- ceph'
# echo "alias ceph='cephadm shell -- ceph'" >> ~/.alias

Note: The cephadm shell command launches a bash shell in a container with all of the Ceph packages
installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are
passed into the container environment so that the shell is fully functional.
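
You can also start an interactive shell instead of using the alias; for example (the prompt shown is indicative and will reflect your own hostname):

# cephadm shell
[ceph: root@ceph-mon1 /]# ceph -s
[ceph: root@ceph-mon1 /]# exit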

1.8 Verify that containers have been created and are running in the backend.

# ceph orch ps
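
Optionally, you can cross-check the same daemons directly with the container runtime (this assumes podman; use docker ps if your nodes run Docker). Each daemon started by cephadm runs in its own container:

# podman ps --format "{{.Names}}"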

1.9 Install the ceph-common package, which contains all of the ceph commands, including ceph, rbd,
and mount.ceph (for mounting CephFS file systems).

# cephadm add-repo --release octopus


# yum clean all
# yum clean dbcache
# yum repolist

Output:

# cephadm install ceph-common

1.10 Verify that the command is accessible.

# ceph -v

Output:

1.11 Confirm that the ceph command can connect to the cluster by checking the status.

# ceph status

Output:

1.12 Add hosts to the cluster

To add each new host to the cluster, install the cluster’s public SSH key in the new host’s root user’s
authorized_keys file:

# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon2


# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon3
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd3
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd4
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-client
Note: Enter root’s password “linux” when prompted.
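
If you want to confirm that the cluster key has been installed, you can list the remote authorized_keys file. You will still be prompted for the password here, because the matching private key is held by cephadm rather than in root's default SSH keypair:

# ssh root@ceph-mon2 cat /root/.ssh/authorized_keys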

1.13 Let us limit the number of monitors to 3. The default number of monitors is 5.

# ceph orch apply mon 3
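
You can confirm the new monitor placement target with the orchestrator service listing; the PLACEMENT column should now show a count of 3:

# ceph orch ls mon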

1.14 Tell Ceph that the new nodes are part of the cluster:

# ceph orch host add ceph-mon2


# ceph orch host add ceph-mon3

1.15 Let us add labels to the hosts for our convenience and ease of deployment

# ceph orch host label add ceph-mon1 mon


# ceph orch host label add ceph-mon1 mgr
# ceph orch host label add ceph-mon2 mon
# ceph orch host label add ceph-mon2 mgr
# ceph orch host label add ceph-mon3 mon

1.16 To view the current hosts and labels:

# ceph orch host ls

Output:

1.17 Deploy additional monitors to the cluster. This can be done by explicitly listing the
hostnames or by applying the monitors by label:

# ceph orch apply mon "ceph-mon1 ceph-mon2 ceph-mon3"


----OR----
# ceph orch apply mon label:mon

Note: Be sure to include the first (bootstrap) host in this list. It takes around 5 minutes for all the
daemons to spawn. Be patient!

Cephadm will automatically deploy monitors on the hosts labeled mon; this behavior can be
disabled by passing the --unmanaged option. We will see this in the next step.

Output:

----OR----
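
To confirm that the monitors have been deployed, you can filter the daemon listing by type and check the monitor quorum:

# ceph orch ps --daemon-type mon
# ceph mon stat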

1.18 Let us see how to disable the automated deployment of managers.

# ceph orch apply mgr --unmanaged

1.19 Let us manually apply the manager daemons to 2 nodes.

# ceph orch apply mgr "ceph-mon1 ceph-mon2"
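
To verify the manager placement, list the mgr service and check the cluster status; after a few minutes, ceph -s should report one active manager and one standby:

# ceph orch ls mgr
# ceph -s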

1.20 Let us add the OSD nodes to the cluster.

# ceph orch host add ceph-osd1


# ceph orch host add ceph-osd2
# ceph orch host add ceph-osd3
# ceph orch host add ceph-osd4
1.21 Let us add labels to the hosts for our convenience and ease of deployment

# ceph orch host label add ceph-osd1 osd


# ceph orch host label add ceph-osd2 osd
# ceph orch host label add ceph-osd3 osd
# ceph orch host label add ceph-osd4 osd

1.22 To view the current hosts and labels:

# ceph orch host ls


Output:

1.23 An inventory of storage devices on all cluster hosts can be displayed with:

# ceph orch device ls

Output:

A storage device is considered available if all of the following conditions are met:

• The device must have no partitions.

• The device must not have any LVM state.

• The device must not be mounted.

• The device must not contain a file system.

• The device must not contain a Ceph BlueStore OSD.

• The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.
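
Note: If a lab disk is not listed as available because of leftover partitions or LVM metadata, it can be wiped with the orchestrator's zap command. This permanently destroys any data on the device, so use it only on disks you are sure are disposable:

# ceph orch device zap ceph-osd1 /dev/sdb --force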

1.24 Create an OSD from a specific device on a specific host:

# ceph orch daemon add osd ceph-osd1:/dev/sdb


# ceph orch daemon add osd ceph-osd2:/dev/sdb
# ceph orch daemon add osd ceph-osd3:/dev/sdb
# ceph orch daemon add osd ceph-osd4:/dev/sdb

Note: Alternatively, we can tell Ceph to consume any available and unused storage device by running
the command “ceph orch apply osd --all-available-devices”. A more declarative alternative using a
service specification is sketched below.
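
The same result can also be expressed with an OSD service specification. The sketch below (the file name and service_id are arbitrary) uses the osd label we applied earlier and consumes all available data devices:

service_type: osd
service_id: all_available_osds
placement:
  label: osd
data_devices:
  all: true

Because the alias from step 1.7 runs ceph inside a container that cannot see arbitrary host files, apply the spec with the host's ceph binary from the ceph-common package installed in step 1.9:

# /usr/bin/ceph orch apply osd -i /etc/ceph/osd_spec.yml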

1.25 Check the Ceph cluster status to verify that all the monitor and OSD daemons are reflected.

# ceph status

Output:

Note: It takes around 5 minutes for all the daemons to spawn and for the cluster to report HEALTH_OK. Be patient!
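
As an additional check, you can confirm that all 4 OSDs are up and in, and that every daemon reports a running state:

# ceph osd tree
# ceph orch ps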

