DevOps UNIT-5
The Puppet client decides that it's time to check in with the Puppet
master to discover any new configuration changes. This can be due to
a timer or manual intervention by an operator at the client. The dialogue
between the Puppet client and master is normally encrypted using SSL.
The Puppet client presents its credentials so that the Puppet master
knows exactly which client is calling. Managing the client's credentials
is a separate issue.
The Puppet master figures out which configuration the client should
have by compiling the Puppet catalogue and sending it to the client.
This involves a number of mechanisms, and a particular setup doesn't
need to use all of them. It is pretty common, for example, to have both a
role-based configuration and a concrete, node-specific configuration for a
Puppet client. Role-based configurations can be inherited.
The Puppet client then applies the catalogue locally so that the
configuration matches the one decided on by the Puppet master.
In this sense, a Puppet configuration is declarative. You declare what
configuration a machine should have, and Puppet figures out how to get from
the current client state to the desired one.
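To make the role-based and declarative ideas more concrete, here is a minimal sketch of how node data can be layered in Hiera, Puppet's YAML data backend. The file paths, class name, and parameters are illustrative assumptions (and the classes key assumes the site manifest includes whatever classes it lists); they are not details taken from the text above:

# data/roles/webserver.yaml -- role-based configuration shared by all web nodes
classes:
  - apache
apache::default_vhost: false

# data/nodes/web01.example.com.yaml -- concrete configuration for one client
apache::mpm_module: prefork

When the Puppet master compiles the catalogue for web01.example.com, it merges the role-level defaults with the node-specific overrides, and the client then applies the result.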
There are both pros and cons of the Puppet ecosystem, and they are as
follows:
Puppet has a large community, and there are a lot of resources on the
internet for Puppet. There are a lot of different modules, and unless you
have a really unusual component to deploy, there is in all likelihood an
existing module written for your component that you can modify according
to your needs.
Puppet requires a number of dependencies on the Puppet client
machines. Sometimes, this gives rise to problems. The Puppet agent
requires a Ruby runtime that sometimes needs to be newer than the Ruby
version available in your distribution's repositories. Enterprise
distributions often lag behind in versions.
Puppet configurations can be complex to write and test.
Ansible
Ansible is a deployment solution that favours simplicity.
The Ansible architecture is agentless; it doesn't need a running daemon on
the client side like Puppet does. Instead, the Ansible server logs into the
Ansible node and issues commands over SSH in order to install the required
configuration.
While Ansible's agentless architecture does make things simpler, you need a
Python interpreter installed on the Ansible nodes. Ansible is somewhat more
lenient about the Python version required for its code to run than Puppet is
for its Ruby code to run, so this dependence on Python being available is not
a great hassle in practice.
Like Puppet and others, Ansible focuses on configuration descriptors that are
idempotent. This basically means that the descriptors are declarative and the
Ansible system figures out how to bring the server to the desired state. You
can safely rerun a configuration run any number of times, which is not
necessarily the case with an imperative system.
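As a small sketch of the difference (the file path and banner text are illustrative, not from the text above), consider two tasks as they would appear under a playbook's tasks section. The first is idempotent because the lineinfile module checks the current state before acting; the second appends the line again on every run, so rerunning it keeps changing the system:

# idempotent: only changes the file if the line is missing
- name: ensure the banner line is present
  lineinfile:
    path: /etc/motd
    line: "managed by ansible"

# not idempotent: appends a new copy of the line on every run
- name: append the banner line
  shell: echo "managed by ansible" >> /etc/motd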
Let's try out Ansible with the Docker method we discussed earlier.
We will use the williamyeh/ansible image, which has been developed for
this purpose, but it should be possible to use any other Ansible Docker
image, or a base image to which we just add Ansible later:
1. Create a Dockerfile with this statement:
FROM williamyeh/ansible:centos7
2. Build the Docker container with the following command:
docker build .
This will download the image and create an empty Docker container that we
can use. Normally, you would, of course, have a more complex Dockerfile that
can add the things we need, but in this case, we are going to use the image
interactively, so we will instead mount the directory with Ansible files from
the host so that we can change them on the host and rerun them easily.
3. Run the container:
The following command can be used to run the container. You will need the
hash from the previous build command:
docker run -v `pwd`/ansible:/ansible -it <hash> bash
Now we have a prompt, and Ansible is available. The -v option makes parts
of the host file system visible to the Docker guest container. The files will be
visible in the /ansible directory in the container.
The playbook.yml file is as follows:
---
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest
This playbook doesn't do very much, but it demonstrates some concepts of
Ansible playbooks.
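The ansible-playbook command used below also refers to an inventory file. A minimal sketch, assuming we only target the container itself, is a one-line file named inventory, placed for example alongside playbook.yml in the /ansible directory:

localhost ansible_connection=local

(The ansible_connection=local host variable is strictly optional here, since the command below also passes --connection=local.)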
Now we can try to run our Ansible playbook:
cd /ansible
ansible-playbook -i inventory playbook.yml --connection=local --sudo
Ansible will print each task as it is applied, followed by a play recap summarizing which tasks resulted in changes.
Deployment tools: Chef, SaltStack and Docker
Chef:
Deploying with Chef in DevOps involves integrating Chef into the overall
DevOps workflow to automate the deployment of infrastructure and
applications. Here's how Chef fits into the DevOps context:
1. Infrastructure as Code: Chef follows the Infrastructure as Code (IaC)
principle, allowing you to define your infrastructure and application
configurations as code using Chef cookbooks. This enables version control,
collaboration, and reproducibility, essential aspects of DevOps practices.
2. Continuous Integration and Continuous Deployment (CI/CD): Chef can
be integrated with CI/CD pipelines to automate the deployment process. As
part of the CI/CD workflow, you can include steps to build and test your
cookbooks, validate configurations, and deploy them to target environments.
3. Version Control and Collaboration: Chef cookbooks and associated files
can be stored in version control systems like Git. This enables collaboration
among team members, tracking changes, and rolling back to previous
versions if needed. DevOps teams can use branching and merging strategies
to manage cookbook versions and promote changes across different
environments.
4. Configuration Management: Chef provides a powerful configuration
management system that helps manage the desired state of your
infrastructure. By defining and applying cookbooks and recipes, you can
ensure that the configurations of your systems remain consistent and aligned
with your infrastructure requirements.
5. Infrastructure Orchestration: Chef can be used to orchestrate the
provisioning and configuration of infrastructure resources. It integrates with
various cloud providers and infrastructure-as-a-service platforms, allowing
you to provision virtual machines, configure networking, and manage other
resources required for your application deployments.
6. Automated Testing: Chef provides tools like Test Kitchen and ChefSpec
that enable automated testing of your cookbooks. You can write unit tests,
integration tests, and acceptance tests to validate the correctness and
behavior of your configurations; a minimal Test Kitchen sketch follows this
list. Testing helps ensure that your deployments are reliable and free of errors.
7. Monitoring and Compliance: Chef allows you to define and enforce
compliance policies within your infrastructure. You can monitor the
configuration drift, apply security patches, and ensure that your systems
adhere to regulatory and organizational standards. Compliance automation is
an integral part of DevOps practices.
8. Infrastructure Monitoring and Logging: Integrating Chef with monitoring
and logging tools helps capture real-time data about your infrastructure and
applications. This data can be used to gain insights, troubleshoot issues, and
improve the overall performance and reliability of your deployments.
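As a concrete illustration of the automated-testing point above, here is a minimal Test Kitchen configuration sketch (kitchen.yml). The driver, platform, cookbook name, and test path are illustrative assumptions rather than details from the text:

---
driver:
  name: vagrant              # local VM driver; Docker is another common choice

provisioner:
  name: chef_zero            # converges the node against an in-memory Chef server

verifier:
  name: inspec

platforms:
  - name: ubuntu-22.04       # illustrative target platform

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]    # hypothetical cookbook under test
    verifier:
      inspec_tests:
        - test/integration/default      # assumed location of the InSpec tests

Running kitchen test would then create the platform instance, converge it with the cookbook, run the tests, and destroy the instance.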
By leveraging Chef within the DevOps workflow, you can automate the
deployment and management of your infrastructure and applications,
improve collaboration among teams, ensure consistency and reliability, and
achieve faster and more predictable deployments.
SaltStack
Deploying with SaltStack in DevOps involves utilizing SaltStack's
configuration management and orchestration capabilities to automate the
deployment and management of infrastructure and applications. Here's an
overview of deploying with SaltStack in a DevOps context:
1. Infrastructure Orchestration: SaltStack enables infrastructure
orchestration by defining infrastructure configurations as code. Using
SaltStack's configuration files called "states," you can specify the desired state
of your infrastructure, including server configurations, networking settings,
package installations, and more.
2. Salt Master and Minions: SaltStack uses a master-minion architecture.
The Salt Master is the central control node that manages and controls the
Salt Minions, which are the managed systems. Minions communicate with the
Salt Master to receive instructions, retrieve configurations, and report their
current state.
3. Salt States: SaltStack's configuration management is based on Salt States.
States are written in YAML or Jinja, representing the desired configuration of
a system or a group of systems. States can be used to define package
installations, file configurations, service management, user management, and
other aspects of your infrastructure; a minimal state sketch follows this list.
4. Pillars and Grains: SaltStack provides two key concepts for managing data:
Pillars and Grains. Pillars allow you to securely store sensitive data, such as
credentials and secrets, separate from the Salt State files. Grains, on the other
hand, are system-specific data that can be used for targeting specific
configurations or applying customizations based on system properties.
5. Orchestration and Reactors: SaltStack offers powerful orchestration
capabilities that allow you to automate complex workflows and coordination
between multiple systems. Orchestration modules enable you to define
sequences of tasks, parallel executions, event-driven actions, and more.
Reactors enable reacting to specific events and triggering actions based on
those events.
6. Remote Execution: SaltStack allows you to execute commands and scripts
remotely on targeted minions, making it convenient for managing and
deploying configurations across multiple systems simultaneously. Remote
execution capabilities provide efficiency and consistency in managing your
infrastructure.
7. High Availability and Scalability: SaltStack supports high availability
setups, where multiple Salt Masters can be configured for redundancy and
failover. It also supports scalability with the ability to manage thousands of
minions efficiently.
8. Testing and Continuous Integration: Salt States and configurations can be
covered by automated tests using tools from the Salt ecosystem. Integrating
SaltStack into the continuous integration and delivery (CI/CD) pipeline
ensures that changes to infrastructure configurations are tested and validated
before deployment.
9. Monitoring and Logging: Integrating SaltStack with monitoring and
logging tools allows you to capture and analyze real-time data about your
infrastructure and applications. Monitoring helps detect issues and
performance bottlenecks, while logging provides visibility into the actions and
changes made by SaltStack during deployments.
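As a concrete illustration of the Salt States point above, here is a minimal state sketch, saved for example as apache.sls on the Salt Master. The package and service name httpd assumes a RedHat-family minion and is illustrative:

# ensure the web server package is installed and its service is running
httpd:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: httpd

A state file like this could then be applied to targeted minions with a remote-execution call such as salt '*' state.apply apache.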
By leveraging SaltStack within the DevOps workflow, you can automate
infrastructure configurations, enforce consistency, reduce manual effort, and
achieve faster and more reliable deployments. SaltStack's flexible and scalable
architecture makes it suitable for managing small to large-scale
infrastructures efficiently.
Docker
Deploying with Docker in DevOps involves leveraging Docker containers to
streamline the deployment, scaling, and management of applications. Docker
provides a lightweight and portable runtime environment that encapsulates
applications and their dependencies, making them highly portable and
consistent across different environments. Here's an overview of deploying with
Docker in a DevOps context:
1. Containerization: Docker allows you to package applications, along with
their dependencies and configurations, into self-contained units called
containers. Containers are isolated and encapsulate the application and its
runtime environment, including libraries, binaries, and configuration files.
This enables consistent and reproducible deployments across different
environments.
2. Docker Images: Docker images are the building blocks of containers. An
image is a read-only template that contains the application, its dependencies,
and the instructions to run it. Images are created using Dockerfiles, which
define the steps to build the image. Docker images can be versioned, stored
in registries, and easily shared among team members.
3. Docker Engine: The Docker Engine is the core component of Docker that
runs and manages containers. It provides the runtime environment for
containers and handles tasks such as container creation, resource allocation,
networking, and lifecycle management.
4. Docker Compose: Docker Compose is a tool that allows you to define and
manage multi-container applications. It uses a YAML file to specify the
services, networks, and volumes required by your application. Docker
Compose simplifies the orchestration of multiple containers, allowing you to
define and manage the relationships between them; a minimal Compose file
sketch follows this list.
5. Container Orchestration: In larger-scale deployments, container
orchestration platforms like Kubernetes or Docker Swarm can be used. These
platforms provide advanced features for managing containerized applications,
including scaling, load balancing, service discovery, rolling updates, and fault
tolerance.
6. Continuous Integration and Continuous Deployment (CI/CD): Docker
integrates well with CI/CD pipelines, enabling automated builds, tests, and
deployments. Docker images can be built, tested, and pushed to a registry as
part of the CI/CD workflow. Containers can then be deployed to different
environments, such as development, staging, and production, using the same
Docker image.
7. Infrastructure as Code: Docker, along with tools like Docker Compose or
Kubernetes, allows you to define your infrastructure as code. Infrastructure
configurations can be version-controlled, reviewed, and managed alongside
your application code, ensuring consistency and reproducibility in
deployments.
8. Monitoring and Logging: Docker provides logging mechanisms to capture
container logs, which can be aggregated and analyzed using logging
frameworks. Monitoring tools can be integrated to collect metrics from
containers, enabling visibility into resource usage, performance, and health
of the deployed applications.
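As a concrete illustration of the Docker Compose point above, here is a minimal docker-compose.yml sketch. The service names, images, port mapping, and credentials are illustrative assumptions, not details from the text:

services:
  web:
    build: .                  # build the application image from the local Dockerfile
    ports:
      - "8080:80"             # publish the container's port 80 on host port 8080
    depends_on:
      - db
  db:
    image: postgres:15        # illustrative database image and version
    environment:
      POSTGRES_PASSWORD: example    # placeholder value for local use only
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Running docker compose up would build the web image, create a shared network, and start both containers with the declared dependency order.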
By adopting Docker in the DevOps workflow, you can achieve faster and more
reliable deployments, improved scalability, simplified environment
management, and better isolation of applications. Docker's containerization
approach enhances consistency, portability, and flexibility, making it easier
to manage and scale applications across different environments.