Unit II TCS 750
Cloud orchestration tools: Cloud orchestration technologies integrate automated tasks and
processes into a workflow to perform specific business functions. Cloud orchestration tools
entail policy enforcement and ensure that processes have the proper permission to execute or
connect to a workload.
AWS CloudFormation: CloudFormation is a method of provisioning AWS infrastructure
using code. It allows you to model a collection of related resources, both AWS and third
party, to provision them quickly and consistently. AWS CloudFormation also provides you
with a mechanism to manage the resources through their lifecycle. CloudFormation is
designed to help you manage your AWS resources, especially associated resources. You can
use CloudFormation to group resources with dependencies into stacks using templates.
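As a rough illustration of what such a template looks like, the sketch below defines a single EC2 instance in YAML (CloudFormation also accepts JSON). The AMI ID, instance type, and logical names here are placeholder values, not taken from the walkthrough that follows.

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative template - a single EC2 instance
Parameters:
  KeyName:
    Description: Name of an existing EC2 key pair for SSH access
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro          # placeholder instance type
      ImageId: ami-0abcdef1234567890  # hypothetical AMI ID, replace with a real one
      KeyName: !Ref KeyName
Outputs:
  InstanceId:
    Description: ID of the created instance
    Value: !Ref WebServer

A stack built from a template like this can be launched from the CloudFormation console or the AWS CLI, and CloudFormation tracks all of its resources together through their lifecycle.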
Launching an EC2 instance and installing the LAMP package (Apache, MySQL, and PHP) on top of it using AWS CloudFormation
Using AWS CloudFormation, we will create a template that launches an instance and automatically installs the LAMP package on top of it.
Step 1: Go to the CloudFormation dashboard on the AWS Management Console. Click on Create stack.
Step 2: You will be redirected to this webpage. We will be using the sample LAMP Stack template here. Select the option Use a sample template, select the LAMP Stack template, and click on View in Designer to view the design of the template.
Step 3: Now you will be redirected to the designer page, which shows the design of the template. It shows the instance which will be created, with Apache and MySQL installed on it. It also shows the security group attached to the instance. Here you can design your own infrastructure accordingly.
Step 4: These are the components of the template which we discussed earlier. Rename the
template accordingly.
Step 5: This is the code written in JSON format which contains all the specifications and
dependencies about the infrastructure to be created.
Step 6: Now click on the cloud-shaped upload button to come out of the designer.
Step 7: We will come back to the same web page. Click on Next.
Step 8: Mention the name of the database you want to create on the MySQL database. Also, specify the password and name of the db-user.
Step 9: Choose the instance type. Select any available key pair, which will be used in making an SSH connection with the instance. Click on Next.
Step 10: You don’t have to worry about the advanced settings. Click on Next.
Step 11: Click on Create stack. The instance will be created with the LAMP package installed on it. You can easily work with PHP and MySQL on the instance.
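The database name, user, password, instance type, and key pair entered in Steps 8 and 9 correspond to parameters declared in the template. As a hedged sketch (the parameter names below are illustrative and only approximate AWS's sample LAMP template), such a Parameters section might look like this:

Parameters:
  DBName:
    Type: String
    Default: MyDatabase              # name of the MySQL database to create
  DBUser:
    Type: String
    NoEcho: true                     # database user name
  DBPassword:
    Type: String
    NoEcho: true                     # database password, hidden in console output
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName # key pair used for the SSH connection
  InstanceType:
    Type: String
    Default: t2.small
    AllowedValues:
      - t2.micro
      - t2.small
      - t2.medium

The console form shown in the walkthrough is generated from parameter declarations of this kind.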
IBM Bluemix: IBM Bluemix is a service provider that offers a Platform as a Service (PaaS). It supports several programming languages and services, as well as integrated DevOps, to build, run, deploy, and manage applications on the cloud. Bluemix is based on Cloud Foundry open technology and runs on SoftLayer infrastructure. Bluemix supports several programming languages including Java, Node.js, Go, PHP, Swift, Python, Ruby Sinatra, and Ruby on Rails.
Bluemix gives a user 2GB of run-time and container memory free for 30 days, plus access
to provision up to 10 services.
Cloud Foundry tool: the PaaS service of Bluemix.
Following are the steps to host a website on IBM Cloud:
1. First, create an account on IBM Bluemix. Verify the email mentioned in the account by clicking on the link in the mail sent by the IBM team. Select the Cloud Foundry space Dev and the location United Kingdom (Cloud Foundry is an open-source Platform as a Service technology). The cf command (cf: Cloud Foundry) is used to allocate space on the cloud and to store data, such as the website, on the cloud. After that, click on Create.
2. If using the Windows operating system, download the cf installer; it helps in installing the Cloud Foundry CLI on the local machine. Then open the terminal. If using Linux, just open the terminal.
3. Enter cf -v to verify that the CLI is installed.
4. After that, log into the account created, through the terminal or cmd, by typing cf login -u ********@g***l.com
5. Make a folder from where the website is to be deployed on IBM Cloud and go to that location. To move to a location, one can run the following command on cmd: cd Location
6. Now, target the Cloud Foundry tool and the email from which IBM Bluemix is registered:
cf target -o *********@gmail.com -s dev
7. Now the website is successfully hosted on IBM Bluemix with the name mentioned as a toolchain. Here, the name taken is geeksforgeeks, so the toolchain shows the name geeksforgeeks.
8. The resulting webpage contains the string GeeksForGeeks.
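For reference, a Cloud Foundry application is usually described by a manifest.yml placed in the folder that is pushed, and the deployment itself is done with the cf push command. The sketch below is illustrative only; the application name matches the example above, but the memory size and buildpack are assumptions, not taken from the steps.

---
applications:
- name: geeksforgeeks               # application name, becomes part of the route
  memory: 128M                      # memory allocated to the app instance (assumed)
  instances: 1
  buildpack: staticfile_buildpack   # assumed buildpack for a plain static website

With a file like this in the project folder, running cf push deploys the site to the targeted org and space.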
RedHat Ansible: Ansible is an IT automation engine that can automate various IT needs. Its features include application deployment (you can deploy your applications easily as per your requirements), cloud provisioning, configuration management (you can configure and describe your automation jobs), and intra-service orchestration. Ansible uses YAML (Yet Another Markup Language) for configuration, which helps describe automation jobs as per requirement. Designed for multi-tier deployments, Ansible models the IT infrastructure by describing how the various systems interrelate, instead of managing one system at a time.
Features :
It requires no agents and no additional custom security infrastructure, so it is easy to deploy.
It uses a very simple language called YAML (Yet Another Markup Language) in the form of Ansible Playbooks, which you can configure as per your requirements; playbooks describe automation jobs in a way that reads almost like basic English.
The Ansible Automation Engine has a direct interaction with the users who write
playbooks and also interacts with cloud services and the Configuration Management
Database (CMDB).
Architecture components :
Here, we will discuss the architecture part and will discuss its components. The Ansible
automation engine consists of various components as described below as follows.
Inventories: Ansible inventories are lists of the hosts (with their IP addresses), servers, and databases which have to be managed, via SSH for UNIX, Linux, or networking devices, and WinRM for Windows systems.
Modules: Modules are executed directly on remote hosts through playbooks and can control resources like services, packages, and files, or execute system commands. They act on system files, install packages, and make API calls to the service network. Ansible ships with over 450 modules that automate various jobs in an environment. For example, cloud modules such as cloudformation create or delete an AWS CloudFormation stack.
Plugins: Plugins are pieces of code that augment Ansible’s core functionality and allow executing Ansible tasks as a job build step. Ansible ships with several handy plugins, and one can also write their own. For example, action plugins act as front-ends to modules and can execute tasks on the controller before calling the modules themselves.
Hosts: Hosts refer to the nodes or systems (Linux, Windows, etc) which are
automated by Ansible.
Playbooks: Playbooks are simple files written in YAML format which describe the tasks to be executed by Ansible. Playbooks can declare configurations, orchestrate the steps of any manual ordered process, and launch various tasks (a minimal playbook sketch is shown after this list).
Cloud: It is a network of remote servers hosted on the internet to store, manage and
process data instead of storing it on a local server.
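As a minimal sketch of what such a playbook can look like (the group name webservers and the assumption of Debian/Ubuntu managed hosts are illustrative, not from the text above), the following YAML installs and starts Apache on all hosts in a webservers inventory group:

# site.yml - minimal illustrative playbook (assumes Debian/Ubuntu managed hosts)
- name: Configure web servers
  hosts: webservers        # group defined in the Ansible inventory
  become: true             # run tasks with privilege escalation
  tasks:
    - name: Install Apache
      ansible.builtin.apt:
        name: apache2
        state: present
        update_cache: true
    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: true

It would typically be run with ansible-playbook -i inventory site.yml, where the inventory file lists the hosts in the webservers group.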
Microsoft Azure Automation: Automation is a powerful tool for optimizing cost in Azure. Automation in Azure can help you save time and money and reduce complexity.
Automation allows you to use existing resources more efficiently, increase productivity, and
reduce errors. Automation can also help you automate repeatable processes and tasks, freeing
up resources for more strategic initiatives. Automation in Azure can be used to optimize costs
in a variety of ways. For example, automation can be used to monitor and manage usage and
spend, as well as to automate workloads and services. Automation can also be used to
identify and eliminate redundant services and resources, which can reduce costs.
Ways to Optimize Costs by Automation in Azure: Azure’s automation tools also offer the
ability to automate processes such as deployment, configuration, and monitoring of cloud
resources. This can be done by using Azure Automation runbooks. A runbook is a set of
instructions that can be used to automate a particular task or workflow. For example,
businesses can use runbooks to automate the deployment and configuration of virtual
machines or applications in the cloud.
Monitoring and Managing Usage and Spend: Automation can help you track and analyze
your usage and spend over time, allowing you to identify opportunities for cost savings.
Automation can also be used to set thresholds and alerts, so you can be notified when costs
start to exceed budgeted amounts. Automation can also be used to automate the process of
shutting down and scaling up resources, to ensure that you are only using the resources you
need, when you need them.
Automating Workloads and Services: Automation can be used to automate the process of
provisioning and configuring resources, as well as the process of deploying and managing
applications. Automation can also be used to automate the process of monitoring, managing,
and scaling workloads and services. Automation can help you ensure that the resources you
are using are optimized for performance, cost, and reliability.
Identifying and Eliminating Redundant Services and Resources: Automation can be used
to identify and eliminate resources that are no longer needed or are being underutilized.
Automation can also be used to identify and eliminate services that are redundant or
unnecessary. Automation can help you reduce the overall cost of running services and
resources, as well as reduce the complexity of managing them.
Automating Managing, Scaling Services, and Workloads: Automation can be used to
automate the process of scaling up and down services and workloads, as well as to automate
the process of managing and monitoring them. Automation can help you ensure that the
services and workloads you are using are optimized for cost, performance, and reliability.
Azure’s automation tools provide businesses with a variety of options to optimize costs and
increase efficiency. By automating processes such as deployment, configuration, and
monitoring of cloud resources, businesses can reduce costs and increase efficiency.
Additionally, businesses can use the cost optimization feature to identify underutilized
resources and optimize their cloud spending accordingly. Finally, businesses can use an ARM
to quickly deploy and manage their applications in the cloud. All of these features provide
businesses with the tools they need to optimize costs and increase efficiency.
Types of Automation Tools: There are many different types of automation tools available,
including:
Scripting Tools: Scripting tools allow developers to automate tasks by writing scripts
in a programming language. Scripting tools can automate tasks such as web page
updates, data entry, and report generation.
Workflow Automation Tools: Workflow automation tools allow developers to create
automated processes. Workflows can be used to automate tasks such as document
approval, data entry, and customer notification.
Machine Learning Tools: Machine learning tools use artificial intelligence to
automate tasks such as data analysis and product recommendations.
Robotic Process Automation: RPA tools use robots to automate tasks such as data
entry, report generation, and customer service.
Natural Language Processing: NLP tools use artificial intelligence to understand and
interpret natural language. NLP tools can be used to automate tasks such as customer
service and document analysis.
Cost Savings Through Automation: Optimizing cost with Azure’s automation tools is
a great way to reduce IT costs and increase efficiency. Automation helps to streamline
processes, reduce manual labor, and improve accuracy. It also helps to reduce the
need for manual labor, reducing the need for more employees and allowing a business
to focus more on its core operations.
Azure Automation also provides a number of Cost-Saving Features: Create and Manage
Scheduled Tasks, Automate Workflows and Processes, and Integrate with other Systems.
This helps to streamline processes and reduce the need for manual labor. It also helps to
reduce the time it takes to complete tasks and eliminates human errors, which can save time
and money. Azure Cost Management is another cost-saving tool. It helps to optimize cloud
costs and provides visibility into cloud spending. It allows businesses to gain insights into
their cloud spending, identify cost optimization opportunities, and set budgets. This helps to
reduce costs associated with cloud services and can help to identify areas where costs can be
reduced. Azure Cost Optimizer is another cost-saving tool. It helps to identify cost
optimization opportunities and reduce costs associated with cloud services. It provides
detailed cost insights and helps to identify areas where costs can be reduced. It also helps to
identify areas where costs can be optimized for a given workload. Azure’s automation tools
provide a number of cost-saving opportunities. By reducing the need for manual labor,
streamlining processes, and eliminating human errors, automation can reduce costs associated
with cloud services and increase efficiency. Additionally, Azure’s cost-saving features can
help to identify cost optimization opportunities and reduce costs associated with cloud
services. All of these features can help businesses to optimize costs and increase efficiency.
Automation Strategies for Cloud-Based Applications: The cloud offers organizations a
variety of advantages when it comes to hosting their applications. By leveraging the various
cloud service models and architectures, organizations can benefit from increased scalability,
reduced costs, and improved performance. However, managing cloud-based applications can
be a challenging task, requiring organizations to optimize their use of the cloud and its
associated tools. Automation strategies are essential for organizations to maximize their cloud
use cost-effectively and efficiently. Azure is a powerful cloud platform, enabling
organizations to create, manage, and deploy cloud-based applications with ease. As such,
Azure offers a range of automation tools and features to help organizations better manage
their applications and reduce costs. Some Automation Strategies available to organizations
with Azure to optimize their Cloud-Based Apps and reduce costs:
Scripting and Automation Tools: Azure provides a range of scripting and automation
tools, such as Azure Automation, Azure Resource Manager, and PowerShell, to
enable organizations to automate various tasks associated with their applications.
These tools can be used to automate the deployment and management of applications,
as well as the provisioning and scaling of resources. By automating these tasks,
organizations can gain greater control over their applications and reduce costs.
Azure Policy: Azure Policy is a service which allows organizations to define and
enforce standards and restrictions across their Azure resources. This can help
organizations ensure that their applications are secure and compliant with industry
standards, while also helping to reduce costs by preventing unnecessary resource
usage.
Azure Automation: Azure Automation is a service which allows organizations to
automate the deployment and management of their applications. Organizations can
use this service to automate the provisioning and scaling of resources, as well as the
deployment and management of applications. This can help organizations reduce
costs by allowing them to quickly deploy and manage their applications without
having to manually manage the resources involved.
Azure Resource Manager: Azure Resource Manager is a service which allows
organizations to manage their Azure resources in a unified manner. This can help
organizations reduce costs by ensuring that their applications are deployed and
managed in a consistent and cost-effective manner.
Azure Storage Accounts: Azure Storage Accounts enable organizations to store and
manage their data in a secure and cost-effective manner. By leveraging Storage
Accounts, organizations can reduce their storage costs, while ensuring that their data
is secure and accessible.
Automated Cloud Provisioning: Automated Cloud Provisioning is the process of using
automated tools to manage and configure cloud-based services. It is the process of
automating the provisioning process, which includes the creation, deployment, and
management of cloud-based services such as servers, applications, and data storage.
Automated cloud provisioning allows organizations to efficiently and cost-effectively
manage their IT infrastructure in the cloud. It eliminates the need for manual processes,
which can be time-consuming and expensive. By automating the processes organizations can
reduce their costs by eliminating the need for manual labor and other associated costs. It also
helps organizations streamline their IT operations, as it allows them to quickly deploy and
configure cloud-based resources, as well as manage and monitor their cloud environment.
Organizations can also use it to improve their security posture. Automated Cloud
Provisioning ensures that all cloud resources are configured securely, and it also allows
organizations to monitor their cloud environment and detect any potential security threats
with the ability to quickly respond to any security incidents, as the automated tools can detect
and alert the organization of any suspicious activity. Organizations can also use Automated
Cloud provisioning to improve their scalability. Automated cloud provisioning allows
organizations to quickly and efficiently scale up or down their cloud resources, depending on
their needs. This can help organizations save time and money in the long run, as they don’t
need to manually add or remove cloud resources.
Automated Resource Management: Automated Resource Management (ARM) is a set of tools that allows organizations to better manage their cloud resources. ARM provides automated infrastructure management, cost optimization, and resource-utilization insights. Organizations can provision and manage cloud resources to reduce the cost of cloud operations and improve the efficiency of cloud deployments. They can define policies that govern their cloud infrastructure: the types of resources that should be provisioned, the amount of resources that can be used, and the rules that govern resource utilization. They can also define cost-optimization rules that determine when resources should be scaled up or down to optimize cost. Resource-utilization insights help organizations identify resources that are underutilized, overutilized, or low-performing; this information can then be used to adjust resource allocation and scale resources up or down to optimize cost. Automation tools help organizations automate the deployment and management of their cloud resources. By using Azure’s automation tools, organizations can save time and money by automating manual tasks, optimizing cost and resource utilization, and monitoring their cloud deployments. These tools can help organizations reduce the cost of cloud operations and improve the efficiency of cloud deployments.
Automated Performance Monitoring: Automated performance monitoring is a process that
involves the use of automated tools to monitor the performance of an IT system or
application. This process is used to ensure that the system or application is working as
expected and that it meets the required performance objectives.
Automated performance monitoring offers various benefits:
Identifying and addressing any performance issues quickly and efficiently can save
both time and money, as it reduces the need for manual testing and troubleshooting.
Identifying Potential Security threats and other vulnerabilities, allows organizations to
take action to address these issues before they become serious problems.
Analyzing the system or application in order to identify areas of improvement and
then implementing strategies to improve those areas.
Create Reports on the performance of the system or application which can then be
used to identify areas of improvement.
Optimize the cost of an IT system or Application. This can be done by reducing the
amount of manual testing and troubleshooting that is required, as well as by reducing
the cost of resources used in the optimization process.
Reduce the amount of time it takes to identify and address any performance issues,
thus reducing the cost of addressing those issues.
Automated Deployment and Configuration: Azure Automated Deployment and
Configuration is a cloud-based solution that allows users to quickly and easily deploy and
configure their Azure resources. By automating the process of deploying and configuring
Azure resources, users are able to optimize their Azure costs, reduce complexity, and save
time. Azure Automated Deployment and Configuration is a simple but powerful solution that
makes it easy to deploy and configure a wide range of Azure resources. This includes Virtual
Networks, Compute Resources, Storage, Databases, and more. It allows users to easily
specify the type and size of resources they need, and then quickly and easily deploy and
configure them in Azure. This ensures that users have the resources they need for their
applications without having to manually manage and configure them. The solution also
provides users with the ability to quickly and easily scale their Azure resources, so that they
can take advantage of cost savings when their applications require more resources. This
ensures that users are able to make the most of their Azure investments, while also keeping
their costs down. Azure Automated Deployment and Configuration also helps users save
time. It eliminates the need for users to manually deploy and configure their resources. This
allows them to quickly and easily deploy and configure their resources, and save time in the
process.
Automated Security and Compliance: Automated Security and Compliance (AS&C) is
a cloud-based security and compliance solution that helps organizations of all sizes to
quickly and cost-effectively secure their applications and data in the cloud. It combines
advanced security features with automated compliance processes to provide an end-to-
end security and compliance solution.
Automated Security and Compliance provide various advantages:
Achieve and Maintain Compliance with Industry Standards and Regulations such as
HIPAA, ISO 27001, PCI DSS, and other global regulations.
Quickly Identify and address Security and Compliance Issues, and provides
comprehensive reporting to help organizations maintain compliance with security and
compliance requirements.
A Comprehensive set of Security and Compliance Services that can be tailored to
meet the unique needs of each organization. It provides automated security and
compliance processes, such as vulnerability scanning, monitoring, and reporting. It
also provides automated compliance processes, such as regulatory reporting and audit
trail generation.
Optimize Costs by providing a comprehensive security and compliance solution that
is designed to reduce the cost of maintaining compliance with security and
compliance regulations. It also reduces the costs associated with manual security and
compliance processes, such as vulnerability scanning, monitoring, and reporting.
Automating Compliance Processes, such as regulatory reporting and audit trail
generation. Automating these processes can reduce the manual effort and time
associated with compliance processes, and can help reduce the cost associated with
managing compliance.
Real-time Security and Compliance Monitoring provides a comprehensive set of
reports and alerts that can help organizations quickly identify and address security and
compliance issues. This can help organizations to quickly identify and address
security and compliance issues and avoid costly penalties.
Automated Backup and Disaster Recovery: Automated Backup and Disaster
Recovery is a cloud-based service that provides automated backup and disaster
recovery capabilities to Azure customers. It helps organizations to protect their data
and applications in the event of a disaster or other unforeseen event.
The main advantages of using Azure Automated Backup and Disaster Recovery are:
Cost Savings: The service is billed based on the amount of storage used for the
backups, the number of recovery points, the duration of the backups, and other
factors. This allows customers to scale their backup environment as their needs
change, without incurring any additional costs.
Scalability: The service allows customers to increase their backup and disaster
recovery capabilities as their needs increase. This helps to ensure that their data and
applications are always safe and secure. Additionally, the service provides customers
with the ability to monitor the performance of their backups, allowing them to make
changes as needed.
Reliability: The service is designed to be highly reliable and secure. It is designed to
handle large volumes of data and applications, ensuring that customers always have
access to their data when they need it.
Follow the step-by-step process below to get started from the Azure Portal.
Step 1: Log in to your Azure Portal
Step 2: From the left top corner of the page >> click Menu and select Dashboard
Step 3: After opening the Dashboard >> click on +New Dashboard/Create >> select Blank
or Custom Dashboard of your choice to create one. Then give some names to the dashboard
and then click on save. Refer to the below images.
Now You will see a blank dashboard. Follow the next steps to add the azure monitor data.
Step 4: Now, open the Azure Monitor >> Navigate to the logs section.
Step 5: Run some KQL queries in the Log section then click on Pin to >> Azure
Dashboards.
Step 6: After selecting the Azure Dashboards >> select your dashboard which you have
created in step 3 >> click on Pin to add it to the selected dashboard.
Step 7: Now go to your dashboard which you have pinned and check the preview to verify.
That’s it, you are done creating a dashboard in Azure. You can share your dashboard with other colleagues in your organization or team, or you can keep it private to restrict access.
Terraform
Terraform is a popular infrastructure-as-code tool that allows you to automate the
provisioning and management of infrastructure resources. It uses configuration files written
in the HashiCorp Configuration Language (HCL) to define the desired state of your
infrastructure, and it uses various commands to apply those configurations and manage
your infrastructure resources.
How Does Terraform Work?
With Terraform, users can define infrastructure resources using a simple, declarative
configuration language. These resources can include virtual machines, networking
components, storage resources, and more. Once the configuration is defined, Terraform can
be used to create, modify, and destroy these resources in a repeatable and predictable way.
One of the key benefits of Terraform is its ability to support multiple cloud providers, as
well as on-premises and open-source tools. This means that users can define infrastructure
resources using a single configuration and use Terraform to manage resources across
different environments.
Overall, Terraform is a powerful and flexible tool that enables users to define and manage
infrastructure resources in a reusable and automated way. It is widely used in a variety of
industries and scenarios, including cloud infrastructure, data centers, and hybrid
environments.
Configuration files: These files contain the definitions of the infrastructure resources that Terraform will manage, as well as any input and output variables and modules. The configuration files are written in the HashiCorp Configuration Language (HCL), which is a domain-specific language designed specifically for Terraform.
State file: This file stores the current state of the infrastructure resources managed by Terraform. The state file is used to track the resources that have been created, modified, or destroyed, and it is used to ensure that the infrastructure resources match the desired state defined in the configuration files.
Infrastructure as Code
Terraform allows you to use code to define and manage your infrastructure, rather than
manually configuring resources through a user interface. This makes it easier to version,
review, and collaborate on infrastructure changes.
Cloud APIs: These are the APIs or other interfaces that Terraform uses to create, modify, or destroy infrastructure resources. Terraform supports multiple cloud providers, as well as on-premises and open-source tools.
Providers
Terraform integrates with a wide range of cloud and infrastructure providers, including
AWS, Azure, GCP, and more. These providers allow Terraform to create and manage
resources on those platforms.
Overall, the architecture of a Terraform deployment consists of configuration files, a state
file, and a CLI that interacts with cloud APIs or other infrastructure providers to create,
modify, or destroy resources. This architecture enables users to define and manage
infrastructure resources in a declarative and reusable way.
Terraform Modules
In Terraform, a module is a container for a set of related resources that are used together to
perform a specific task. Modules allow users to organize and reuse their infrastructure code,
making it easier to manage complex infrastructure deployments.
Modules are defined using the module block in a Terraform configuration. A module block takes the following arguments:
source: The source location of the module. This can be a local path or a URL.
name: The name of the module. This is used to reference the module in other parts
of the configuration.
version: The version of the module to use. This is optional and can be used to
specify a specific version of the module.
Inside a module block, users can define the resources that make up the module, as well as
any input and output variables that the module exposes. Input variables allow users to pass
values into the module when it is called, and output variables allow the module to return
values to the calling configuration. Modules can be nested, allowing users to create
complex infrastructure architectures using a hierarchical structure. Modules can also be
published and shared on the Terraform Registry, enabling users to reuse and extend the
infrastructure code of others.
What is Terraform Core?
The open-source binary for Terraform Core is available for download and usage on the
command line. The configuration files you provide (your desired state) and the present state
(a state file generated and managed solely by Terraform) are the two input sources used by
Terraform’s Core. The Core then develops a plan for what resources need to be added,
altered, or eliminated using this knowledge.
Why Terraform?
Terraform offers many benefits and it is a widely used tool in present organizations for
managing their infrastructure.
Private Module Registry: A private module registry is a repository for Terraform modules
that is only accessible to a specific group of users, rather than being publicly available.
Private module registries are useful for organizations that want to manage and distribute
their own infrastructure code internally, rather than using publicly available modules from
the Terraform Registry. To use a private module registry, users need to configure their
Terraform CLI to authenticate with the registry and access the modules. This typically
involves setting up an access token or other authentication method and specifying the
registry URL in the Terraform configuration. Once configured, users can use the module block in their Terraform configuration to reference the modules in the private registry, just
like they would with publicly available modules. Private module registries can be hosted on
a variety of platforms, including cloud providers, on-premises servers, and open-source
tools. Overall, private module registries are a useful tool for organizations that want to
manage and distribute their own Terraform modules internally, enabling them to better
control and reuse their infrastructure code.
Terraform Commands
Terraform Init
The init command initializes a working directory that contains Terraform configuration files: it downloads the required provider plugins and modules and prepares the backend used to store state.
$ terraform init
Terraform Validate
The validate command does precisely what its name implies: it checks that the code is internally consistent and examines it for syntax mistakes. Only the configuration files (*.tf) in the current working directory are examined; configurations inside subdirectories (for example, a module/ directory) are not validated unless the command is run in those directories.
$ terraform validate
Terraform Apply
Terraform apply command applies the changes defined in the configuration to your
infrastructure. It creates or updates the resources according to the configuration, and it also
prompts you to confirm the changes before applying them.
$ terraform apply
Terraform Destroy
Terraform destroy command will destroy all the resources created by Terraform in the
current working directory. It is a useful command for tearing down your infrastructure
when you no longer need it.
$ terraform destroy
Terraform Import
The import command brings infrastructure that was created outside of Terraform under Terraform management by writing the existing resource into the state file.
$ terraform import <resource address> <resource id>
Terraform Console
The console command opens an interactive console for evaluating Terraform expressions and inspecting values from the configuration and state, which is useful when debugging.
$ terraform console
Terraform Refresh
This command updates the state of your infrastructure to reflect the actual state of your
resources. It is useful when you want to ensure that your Terraform state is in sync with the
actual state of your infrastructure.
$ terraform refresh
Advantages of Terraform
Declarative Configuration: Terraform uses a declarative configuration language,
which means that users define the desired state of their infrastructure resources, rather
than the specific steps required to achieve that state. This makes it easier to understand
and manage complex infrastructure deployments.
Support for Multiple Cloud Providers: Terraform supports multiple cloud
providers, as well as on-premises and open-source tools, which means that users can
define and manage their infrastructure resources using a single configuration.
Reusable Infrastructure Code: Terraform allows users to define their
infrastructure resources in a reusable and modular way, using features such as modules
and variables. This makes it easier to manage and maintain complex infrastructure
deployments.
Collaboration and Version Control: Terraform configuration files can be stored in
version control systems such as Git, which makes it easier for teams to collaborate and
track changes to their infrastructure.
Efficient Resource Management: Terraform has features such as resource
dependencies and provisioners that enable users to manage their infrastructure resources
efficiently, minimizing duplication and ensuring that resources are created and
destroyed in the correct order.
Disadvantages of Terraform
Complexity: Terraform can be complex to learn and use, especially for users who
are new to infrastructure automation. It has a large number of features and can be
difficult to understand the full scope of its capabilities.
State Management: Terraform uses a state file to track the resources it manages,
which can cause issues if the state file becomes out of sync with the actual
infrastructure. This can happen if the infrastructure is modified outside of Terraform or
if the state file is lost or corrupted.
Performance: Terraform can be slower than some other IaC tools, especially when
managing large infrastructure deployments. This can be due to the need to communicate
with multiple APIs and the overhead of managing the state file.
Limited Error Handling: Terraform does not have robust error handling, and it can
be difficult to diagnose and fix issues when they arise. This can make it difficult to
troubleshoot problems with infrastructure deployments.
Limited Rollback Capabilities: Terraform does not have a built-in rollback
feature, so it can be difficult to undo changes to infrastructure if something goes wrong.
Users can use the terraform destroy command to destroy all resources defined in
the configuration, but this can be time-consuming and may not be feasible in all
situations.
Kubernetes: Kubernetes Cluster mainly consists of Worker Machines called Nodes and a
Control Plane. In a cluster, there is at least one worker node. The Kubectl CLI
communicates with the Control Plane and Control Plane manages the Worker Nodes.
Kubernetes – Cluster Architecture
Kubernetes has a client-server architecture, with master and worker nodes: the master is installed on a single Linux system and the nodes run on many Linux workstations.
Kubernetes Components
Kubernetes is composed of a number of components, each of which plays a specific role in
the overall system. These components can be divided into two categories:
Nodes: Each Kubernetes cluster requires at least one worker node; the worker nodes are the machines on which our containers are deployed.
Control plane: The worker nodes and any pods contained within them will be
under the control plane.
Control Plane Components
The control plane is basically a collection of various components that help us in managing the overall health of a cluster, for example setting up new pods, destroying pods, scaling pods, etc. Basically, 4 services run on the control plane:
Kube-API server
The API server is a component of the Kubernetes control plane that exposes the Kubernetes
API. It is like an initial gateway to the cluster that listens to updates or queries via CLI like
Kubectl. Kubectl communicates with API Server to inform what needs to be done like
creating pods or deleting pods etc. It also works as a gatekeeper. It generally validates
requests received and then forwards them to other processes. No request can be directly
passed to the cluster, it has to be passed through the API Server.
Kube-Scheduler
When API Server receives a request for Scheduling Pods then the request is passed on to
the Scheduler. It intelligently decides on which node to schedule the pod for better
efficiency of the cluster.
Kube-Controller-Manager
The kube-controller-manager is responsible for running the controllers that handle the
various aspects of the cluster’s control loop. These controllers include the replication
controller, which ensures that the desired number of replicas of a given application is
running, and the node controller, which ensures that nodes are correctly marked as “ready”
or “not ready” based on their current state.
etcd
It is a key-value store of a Cluster. The Cluster State Changes get stored in the etcd. It acts
as the Cluster brain because it tells the Scheduler and other processes about which
resources are available and about cluster state changes.
Node Components
These are the nodes where the actual work happens. Each Node can have multiple pods and
pods have containers running inside them. There are 3 processes in every Node that are
used to Schedule and manage those pods.
Container runtime
A container runtime is needed to run the application containers inside pods. Example: Docker.
kubelet
kubelet interacts with both the container runtime as well as the Node. It is the process
responsible for starting a pod with a container inside.
kube-proxy
It is the process responsible for forwarding the request from Services to the pods. It has
intelligent logic to forward the request to the right pod in the worker node.
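To make the routing role of kube-proxy concrete, the sketch below shows a minimal Service manifest (the name, label, and ports are illustrative assumptions); kube-proxy forwards traffic arriving at this Service to the pods whose labels match its selector:

apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative Service name
spec:
  selector:
    app: web           # traffic is forwarded to pods carrying this label
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the container actually listens on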
Addons Plug-ins
Add-ons extend cluster functionality and are installed using standard Kubernetes resources (DaemonSet, Deployment, etc.). Because they provide cluster-level functionality, namespaced add-on resources are placed in the kube-system namespace. Common add-ons include:
CoreDNS
KubeVirt
ACI
Calico etc.
Commands for Kubectl
Here are some common commands for interacting with a Kubernetes cluster:
To view a list of all the pods in the cluster, you can use the following command:
kubectl get pods
To view a list of all the nodes in the cluster, you can use the following command:
kubectl get nodes
To view a list of all the services in the cluster, you can use the following command:
kubectl get services
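Beyond listing resources, kubectl is also used to create them from YAML manifests with kubectl apply -f <file>. As a minimal sketch (the names and container image are illustrative assumptions), the Deployment below runs two replicas of an nginx container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2                  # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # illustrative container image
          ports:
            - containerPort: 80

After kubectl apply -f deployment.yaml, the resulting pods appear in the output of kubectl get pods.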
Prerequisites
Installing Docker − Docker is required on all the instances of Kubernetes. Following
are the steps to install the Docker.
Step 1 − Log on to the machine with the root user account.
Step 2 − Update the package information. Make sure that the apt package is
working.
Step 3 − Run the following commands.
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates
Step 4 − Add the new GPG key.
$ sudo apt-key adv \
--keyserver hkp://ha.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee
/etc/apt/sources.list.d/docker.list
Step 5 − Update the API package image.
$ sudo apt-get update
Once all the above tasks are complete, you can start with the actual installation of
the Docker engine. However, before this you need to verify that the kernel version
you are using is correct.
On the Kubernetes master, copy the init scripts and default configuration files for the master components:
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-apiserver /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-controller-manager /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-scheduler /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-apiserver /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-controller-manager /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-scheduler /etc/default/
The next step is to update the copied configuration files under the /etc directory.
Configure etcd on master using the following command.
$ ETCD_OPTS = "-listen-client-urls = http://kube-master:4001"
Configure kube-apiserver
For this on the master, we need to edit the /etc/default/kube-apiserver file which
we copied earlier.
$ KUBE_APISERVER_OPTS = "--address = 0.0.0.0 \
--port = 8080 \
--etcd_servers = <The path that is configured in ETCD_OPTS> \
--portal_net = 11.1.1.0/24 \
--allow_privileged = false \
--kubelet_port = < Port you want to configure> \
--v = 0"
Configure the kube Controller Manager
We need to add the following content in /etc/default/kube-controller-manager.
$ KUBE_CONTROLLER_MANAGER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--machines = kube-minion \ -----> # this is the kubernetes node
--v = 0"
Next, configure the kube scheduler in the corresponding file.
$ KUBE_SCHEDULER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--v = 0"
Once all the above tasks are complete, we are good to go ahead by bringing up the Kubernetes master. In order to do this, we will restart Docker.
$ service docker restart
Kubernetes Node Configuration
Kubernetes node will run two services the kubelet and the kube-proxy. Before
moving ahead, we need to copy the binaries we downloaded to their required folders
where we want to configure the kubernetes node.
Use the same method of copying the files that we did for kubernetes master. As it
will only run the kubelet and the kube-proxy, we will configure them.
$ cp <Path of the extracted file>/kubelet /opt/bin/
$ cp <Path of the extracted file>/kube-proxy /opt/bin/
$ cp <Path of the extracted file>/kubecfg /opt/bin/
$ cp <Path of the extracted file>/kubectl /opt/bin/
$ cp <Path of the extracted file>/kubernetes /opt/bin/
Now, we will copy the content to the appropriate dir.
$ cp kubernetes/cluster/ubuntu/init_conf/kubelet.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-proxy.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kubelet /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-proxy /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/
We will configure the kubelet and kube-proxy conf files.
We will configure the /etc/init/kubelet.conf.
$ KUBELET_OPTS = "--address = 0.0.0.0 \
--port = 10250 \
--hostname_override = kube-minion \
--etcd_servers = http://kube-master:4001 \
--enable_server = true \
--v = 0"
For kube-proxy, we will configure the /etc/init/kube-proxy.conf file with the following content.
$ KUBE_PROXY_OPTS = "--etcd_servers = http://kube-master:4001 \
--v = 0"
Finally, we will restart the Docker service.
$ service docker restart
Now we are done with the configuration. You can check by running the following
commands.
$ /opt/bin/kubectl get minions
There are two ways to set up an application image:
By downloading
From Docker file
By Downloading
The existing image can be downloaded from Docker hub and can be stored on the
local Docker registry.
In order to do that, run the Docker pull command.
$ docker pull --help
Usage: docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Pull an image or a repository from the registry
-a, --all-tags = false Download all tagged images in the repository
--help = false Print usage
The output of the above command lists the set of images which are stored in our local Docker registry.
If we want to build a container from the image which consists of an application to
test, we can do it using the Docker run command.
$ docker run -i -t ubuntu /bin/bash
From Docker File
In order to create an application from the Docker file, we need to first create a
Docker file.
Following is an example of Jenkins Docker file.
FROM ubuntu:14.04
MAINTAINER [email protected]
ENV REFRESHED_AT 2017-01-15
RUN apt-get update -qq && apt-get install -qqy curl
RUN curl https://get.docker.io/gpg | apt-key add -
RUN echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc openjdk-6-jdk git-core lxc-docker
ENV JENKINS_HOME /opt/jenkins/data
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME/plugins
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war-stable/latest/jenkins.war
RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \
       -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh
VOLUME /var/lib/docker
EXPOSE 8080
ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]
Once the above file is created, save it with the name of Dockerfile and cd to the file
path. Then, run the following command.
$ sudo docker build -t jamtur01/jenkins .
Once the image is built, we can test if the image is working fine and can be
converted to a container.
$ docker run -i -t jamtur01/jenkins /bin/bash
Environment Variables
Before bringing up a cluster with the cluster autoscaler, set the following environment variables.
export NUM_NODES=2
export KUBE_AUTOSCALER_MIN_NODES=2
export KUBE_AUTOSCALER_MAX_NODES=5
export KUBE_ENABLE_CLUSTER_AUTOSCALER=true
Once done, we will start the cluster by running kube-up.sh. This will create a cluster together with the cluster autoscaler add-on.
./cluster/kube-up.sh
On creation of the cluster, we can check our cluster using the following kubectl
command.
$ kubectl get nodes
NAME STATUS AGE
kubernetes-master Ready,SchedulingDisabled 10m
kubernetes-minion-group-de5q Ready 10m
kubernetes-minion-group-yhdx Ready 8m
Now, we can deploy an application on the cluster and then enable the horizontal pod
autoscaler. This can be done using the following command.
$ kubectl autoscale deployment <Application Name> --cpu-percent=50 --min=1 --max=10
The above command specifies that at least one and at most 10 replicas of the pod will be maintained as the load on the application increases.
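The same autoscaling policy can also be expressed declaratively. A roughly equivalent HorizontalPodAutoscaler manifest (assuming the target is a Deployment named php-apache, as in the example output below) would be:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache                     # the workload being scaled
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50     # scale out when average CPU exceeds 50%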
We can check the status of the autoscaler by running the $ kubectl get hpa command.
We will increase the load on the pods using the following command.
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
We can check the hpa by running $ kubectl get hpa command.
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT
php-apache Deployment/php-apache/scale 50% 310%
Setting up Kubernetes dashboard involves several steps with a set of tools required
as the prerequisites to set it up.
Docker (1.3+)
go (1.5+)
nodejs (4.2.2+)
npm (1.3+)
java (7+)
gulp (3.9+)
Kubernetes (1.1.2+)
Setting Up the Dashboard
$ sudo apt-get update && sudo apt-get upgrade
Installing Python
$ sudo apt-get install python
$ sudo apt-get install python3
Installing GCC
$ sudo apt-get install gcc-4.8 g++-4.8
Installing make
$ sudo apt-get install make
Installing Java
$ sudo apt-get install openjdk-7-jdk
Installing Node.js
$ wget https://nodejs.org/dist/v4.2.2/node-v4.2.2.tar.gz
$ tar -xzf node-v4.2.2.tar.gz
$ cd node-v4.2.2
$ ./configure
$ make
$ sudo make install
Installing gulp
$ npm install -g gulp
$ npm install gulp
Verifying Versions
Java Version
$ java -version
java version "1.7.0_91"
OpenJDK Runtime Environment (IcedTea 2.6.3) (7u91-2.6.3-1~deb8u1+rpi1)
OpenJDK Zero VM (build 24.91-b01, mixed mode)
$ node -v
v4.2.2
$ npm -v
2.14.7
$ gulp -v
[09:51:28] CLI version 3.9.0
Building GO
$ ./all.bash
$ vi /root/.bashrc
In the .bashrc
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin
$ go version
go version go1.4.3 linux/arm
Installing Kubernetes Dashboard
$ git clone https://github.com/kubernetes/dashboard.git
$ cd dashboard
$ npm install -g bower
Running the Dashboard
$ gulp serve
[11:19:12] Requiring external module babel-core/register
[11:20:50] Using gulpfile ~/dashboard/gulpfile.babel.js
[11:20:50] Starting 'package-backend-source'...
[11:20:50] Starting 'kill-backend'...
[11:20:50] Finished 'kill-backend' after 1.39 ms
[11:20:50] Starting 'scripts'...
[11:20:53] Starting 'styles'...
[11:21:41] Finished 'scripts' after 50 s
[11:21:42] Finished 'package-backend-source' after 52 s
[11:21:42] Starting 'backend'...
[11:21:43] Finished 'styles' after 49 s
[11:21:43] Starting 'index'...
[11:21:44] Finished 'index' after 1.43 s
[11:21:44] Starting 'watch'...
[11:21:45] Finished 'watch' after 1.41 s
[11:23:27] Finished 'backend' after 1.73 min
[11:23:27] Starting 'spawn-backend'...
[11:23:27] Finished 'spawn-backend' after 88 ms
[11:23:27] Starting 'serve'...
2016/02/01 11:23:27 Starting HTTP server on port 9091
2016/02/01 11:23:27 Creating API client for
2016/02/01 11:23:27 Creating Heapster REST client for http://localhost:8082
[11:23:27] Finished 'serve' after 312 ms
[BS] [BrowserSync SPA] Running...
[BS] Access URLs:
--------------------------------------
Local: http://localhost:9090/
External: http://192.168.1.21:9090/
--------------------------------------
UI: http://localhost:3001
UI External: http://192.168.1.21:3001
--------------------------------------
[BS] Serving files from: /root/dashboard/.tmp/serve
[BS] Serving files from: /root/dashboard/src/app/frontend
[BS] Serving files from: /root/dashboard/src/app
The Kubernetes Dashboard
Cloudify: The Cloudify platform provides infrastructure automation using
Environment-as-a-Service technology to deploy and continuously manage any cloud,
private data center, or Kubernetes service from one central point, while enabling
developers to self-service their environments.
AWS infrastructure automation tools are AWS OpsWorks, AWS Elastic Beanstalk,
EC2 Image Builder, AWS Proton, AWS Service Catalog, AWS Cloud9, AWS
CloudShell, and Amazon CodeGuru.
There are two types of cloud automation. The first is support for corporate data center
operations. The second is hosting for websites and mobile applications at scale. Public
cloud hardware from AWS, Google Cloud, and Microsoft Azure can be used for either
purpose.
Chef Automate: Chef Automate provides a single dashboard and analytics for
infrastructure automation, Chef Habitat for application automation, and Chef InSpec
for security and compliance automation. Chef Automate measurably increases the
ability to deliver software quickly, increasing speed and efficiency while decreasing risk.
Google Cloud Deployment Manager: Google Cloud Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud resources. It can be used to write flexible templates and configuration files and to use them to create deployments that combine multiple services (Cloud Storage, Compute Engine, Cloud SQL, etc.).
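As a hedged illustration of such a configuration file (the resource name, zone, machine type, and image below are assumptions, and real deployments typically use full resource URLs), a Deployment Manager YAML that creates a single Compute Engine instance might look like this; it would be deployed with gcloud deployment-manager deployments create <deployment-name> --config <file>.yaml:

resources:
- name: example-vm                       # illustrative resource name
  type: compute.v1.instance
  properties:
    zone: us-central1-a                  # assumed zone
    machineType: zones/us-central1-a/machineTypes/e2-micro   # assumed machine type
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default   # attach to the default VPC network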
Puppet Enterprise: Puppet is an efficient system management tool for centralizing and
automating the configuration management process. It can also be utilized as open-
source configuration management for server configuration, management, deployment,
and orchestration.
Red Hat Ansible Automation Platform: Red Hat® Ansible® Automation Platform is an
end-to-end automation platform to configure systems, deploy software, and orchestrate
advanced workflows. It includes resources to create, manage, and scale across the entire
enterprise. The following components are included in Red Hat Ansible Automation
Platform: Self-Hosted and/or on-premises components, Automation controller, Private
automation hub, Automation content navigator, Automation execution environments,
Execution environment builder, Automation mesh, and Ansible content tools.
VMware vRealize Automation: VMware vRealize® Automation™ is a modern
infrastructure automation platform that increases productivity and agility by reducing
complexity and eliminating manual or semi-manual tasks. This automation allows you
to rapidly respond to changing business needs. You can put in place personalized and
relevant policies to enforce deployment standards, service levels, and resource quotas.
With vRA, you get a wide selection of multi-cloud, multi-vendor support, and extensible
design. To this effect, integrate your storage array with the vRealize Automation (vRA) and vRealize Orchestrator (vRO) products from VMware. This will empower the users to
design and consume your storage offerings to best fulfill their business case.
Assignment 2 (CO2)