Unit II TCS 750

Unit II Cloud orchestration tools: AWS CloudFormation, IBM Cloud Orchestrator, RedHat Ansible, Microsoft Azure Automation, Terraform, Kubernetes, Cloudify, and Morpheus.
DC/OS container orchestration, Mesos Containers, Docker Containers.
Cloud infrastructure automation tools: Chef Automate, Google Cloud Deployment Manager, Puppet Enterprise, Red Hat Ansible Automation Platform, VMware vRealize Automation.

Cloud orchestration tools: Cloud orchestration technologies integrate automated tasks and
processes into a workflow to perform specific business functions. Cloud orchestration tools
entail policy enforcement and ensure that processes have the proper permission to execute or
connect to a workload.
AWS CloudFormation: CloudFormation is a method of provisioning AWS infrastructure
using code. It allows you to model a collection of related resources, both AWS and third
party, to provision them quickly and consistently. AWS CloudFormation also provides you
with a mechanism to manage the resources through their lifecycle. CloudFormation is
designed to help you manage your AWS resources, especially associated resources. You can
use CloudFormation to group resources with dependencies into stacks using templates.

AWS Cloudformation benefits:

 Automation: AWS CloudFormation helps to automate the process of creating, configuring, and managing AWS resources. This allows the infrastructure to be deployed quickly, reliably, and repeatedly.
 Consistency and standardization: With AWS CloudFormation, it is possible to
create standard templates of infrastructure stacks that can be used to create identical
copies of the same infrastructure. This ensures consistency in the infrastructure
deployment and makes it easier to maintain.
 Cost savings: AWS CloudFormation helps to reduce costs by allowing customers to
use existing infrastructure templates and reuse them across multiple environments.
This reduces the cost of designing and deploying new infrastructure.
 Security: AWS CloudFormation helps to ensure that all AWS resources are
configured in a secure manner by using security policies and rules. This helps to
protect the infrastructure from potential security threats.
 Scalability: AWS CloudFormation allows for the quick and easy scaling of
resources on demand. This means that customers can quickly and easily add
resources to meet their changing needs.

Amazon Web Services is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to individuals, companies, and governments on a paid subscription basis.
Why do we need AWS CloudFormation?
Imagine that you have to develop an application that uses various AWS resources. Creating and managing those resources by hand can be highly time-consuming and challenging, and it becomes difficult to develop the application when you spend all your time managing those AWS resources. What if we had a service for that? This is where AWS CloudFormation comes into the picture.
What is AWS Cloudformation?
This is a service provided by AWS that helps you create and manage the resources so that
you can spend less time managing those resources and more time focusing on your
applications that run in AWS. You just have to create a template which describes all the
resources you require, then AWS Cloudformation will take care of managing and
provisioning all the resources. AWS provides a Cloudformation designer for designing the
template wherein you can put all the resources. You can also define the dependencies of all
the resources that are needed. You can also reuse your templates to replicate your
infrastructure in multiple environments and regions.
Getting Started with AWS Cloudformation
Our template is created in JSON or YAML script. We will be discussing the JSON script in
this article. JSON is a text-based format that represents structured data on the basis of
JavaScript object syntax. It carries the AWS resources details in the structured format
according to which AWS infrastructure is created.

Structure of Cloudformation JSON Template


 Format version: It defines the version of a template.
 Description: Any extra description or comments about your template are written in
the description of the template.
 Metadata: It can be used to provide further information using JSON objects.
 Parameters: Parameters are used when you want to provide custom or dynamic
values to the stack during runtime. Therefore, we can customize templates using
parameters.
 Mappings: Mapping in the JSON template helps you to map keys to a
corresponding named value that you specify in a conditional parameter.
 Conditions: Conditions define whether certain resources are created, or whether certain resource properties are assigned a value, when the stack is created or updated.
 Transform: Transform helps in reusing the template components by building a
simple declarative language for AWS CloudFormation.
 Resources: In this section, you specify the properties of the AWS resources (an EC2 instance, an S3 bucket, an AWS Lambda function, and so on) that you want in your stack.
 Output: The output defines the value which is generated as an output when you
view your own stack properties.
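To make these sections concrete, the skeleton below is a minimal, illustrative JSON template (not one of AWS's samples); the parameter, bucket resource, and output are placeholders invented for this sketch.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal illustrative template: one S3 bucket named by a parameter.",
  "Parameters": {
    "BucketName": { "Type": "String", "Description": "Hypothetical bucket name parameter" }
  },
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": { "BucketName": { "Ref": "BucketName" } }
    }
  },
  "Outputs": {
    "BucketArn": {
      "Description": "ARN of the bucket created by this stack",
      "Value": { "Fn::GetAtt": ["ExampleBucket", "Arn"] }
    }
  }
}
Saving this as a .json file and creating a stack from it exercises the Parameters, Resources, and Outputs sections described above.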

Installing an EC2 instance and the LAMP package (Apache, MySQL, and PHP) on top of it using AWS CloudFormation
Using AWS CloudFormation, we will create a template from which an instance will be launched and the LAMP package will be installed on top of it automatically.
Step 1: Go to the CloudFormation dashboard on the AWS Management Console. Click on Create stack.
Step 2: You will be redirected to this webpage. We will be using a sample template of
Lamp Stack in this. Select the option: Use a sample template. Select the Lamp Stack
template. Click on View in Designer to view the design of the template.

Step 3: Now you will be redirected to the designer page which shows the design of the
template. It shows the instance which will be created with Apache and MySQL installed
over it. It also shows the security groups attached to the security purpose of the instance.
Here you can design your own infrastructure accordingly.
Step 4: These are the components of the template which we discussed earlier. Rename the
template accordingly.

Step 5: This is the code written in JSON format which contains all the specifications and
dependencies about the infrastructure to be created.

Step 6: Now click on the cloud-shaped upload button to come out of the designer.
Step 7: We will come back to the same web page. Click on Next.

Step 8: Specify the desired stack name here.

Step 9: Enter the name of the database you want to create in MySQL. Also specify the DB user name and password.
Step 10: Choose the instance type. Select any available key pair which will be used in
making an SSH connection with the instance. Click on Next.

Step 11: You can leave the advanced settings as they are. Click on Next.
Step 12: Click on Create stack. The instance will be created with the LAMP package installed on it. You can then work with PHP and MySQL on the instance.
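The same stack can also be created without the console. The sketch below assumes the AWS CLI is configured and the sample template has been saved locally as lamp.json; the stack name, file name, and parameter keys/values are illustrative placeholders rather than the exact names used by the sample.
$ aws cloudformation create-stack \
    --stack-name my-lamp-stack \
    --template-body file://lamp.json \
    --parameters ParameterKey=DBName,ParameterValue=mydb \
                 ParameterKey=DBUser,ParameterValue=admin \
                 ParameterKey=DBPassword,ParameterValue=MySecret123 \
                 ParameterKey=KeyName,ParameterValue=my-key-pair
$ aws cloudformation wait stack-create-complete --stack-name my-lamp-stack
The wait command simply blocks until the stack reaches CREATE_COMPLETE, which is handy in scripts.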

IBM Cloud Orchestrator:


To get started with IBM Cloud Orchestrator content authoring, you must install and configure IBM® Process Designer, one of the user interfaces for Business Process Manager. This procedure describes the IBM Process Designer installation and configuration on a Windows operating system.
Procedure
Install and configure IBM Cloud Orchestrator. See the Installing section of the IBM Cloud
Orchestrator information center. Business Process Manager is installed as a product
component.
 Install IBM Process Designer on the Windows operating system on which you
perform content development. See Installing IBM Process Designer in the Business
Process Manager information center.
 Configure IBM Process Designer to connect with Business Process Manager:
 Edit C:\Windows\System32\drivers\etc\hosts.
 On a new line, add the IP address and host name (cs3-hostname) of the machine where Business Process Manager is installed (see the example entry after this list).
 Save the file.
 Start Process Designer to develop content.
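A minimal sketch of the hosts entry referred to above; the IP address is a documentation placeholder, and cs3-hostname stands for the real Business Process Manager host name.
192.0.2.15    cs3-hostname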
Developing toolkit and application content
 Implementing workflow orchestration: Workflow orchestration is a Business Process
Manager based extension to IBM® Cloud Orchestrator which consists of UI panels to
collect extra data.
 Managing business processes: IBM Business Process Manager Standard is a
comprehensive Business Process Management offering. It gives you visibility and
insight to manage business processes. It offers tools and runtime environments for
process design, execution, monitoring and optimization, and basic system-integration
support.
 Implementing extensions with IBM Cloud Orchestrator: You can build custom
extension operations that are based on business processes and human services, for UI
extensions, by using the Business Process Manager Process Designer tool. The
business logic of the extension is implemented by the Business Process Definition.
The association between a UI extension that is based on a human service and business
process definition is done during the registration of the extension operation in the
instance actions or self-service offerings.
 Best practices and guidelines: When you create toolkits or Process Applications, there
are some best practices to be followed in naming conventions, structuring, modeling,
and error handling.
 Importing and exporting toolkits and process applications: You can create toolkits or
reuse toolkits that are shared by other content developers. To use a toolkit that is
created and shared by other content providers, use the import toolkit feature. You can
use export and import utilities to move the toolkit from development process server to
production process server or between any two process servers.
 Business Process Manager security: For a class of use cases, it is important to
understand the security context of a Business Process Manager process or user
interface. The activity-based user authentication and authorization are described here.

IBM Bluemix: IBM Bluemix is a service provider that offers a platform as a service (PaaS). It supports several programming languages and services as well as integrated DevOps to build, run, deploy, and manage applications on the cloud. Bluemix is based on the open Cloud Foundry technology and runs on SoftLayer infrastructure. Bluemix supports several programming languages including Java, Node.js, Go, PHP, Swift, Python, Ruby Sinatra, and Ruby.

Bluemix gives a user 2GB of run-time and container memory free for 30 days, plus access
to provision up to 10 services.
Cloud Foundry tool: the PaaS service of Bluemix.
Following are the steps to host a website on IBM cloud
1.
 First, create an account on IBM Bluemix.
 Verify the email mentioned in the account by clicking on the link in the mail sent by the IBM team.
 Select the Cloud Foundry space Dev and the location United Kingdom. Cloud Foundry is an open-source Platform as a Service (PaaS) technology.
 Use the cf command (cf: Cloud Foundry) to allocate space on the cloud and to store data, such as the website, on the cloud.
 After that, click on Create.
2.
 If using the Windows operating system, download the cf installer. It helps in installing Cloud Foundry (the cf CLI) on the local machine. Then open the terminal.
 If using Linux, just open the terminal.
3. Enter cf -v

4. Enter cf api https://api.eu-gb.bluemix.net

This command connects to Bluemix and shows you the API endpoint and the API version.

5. After that, log in to the account created, through the terminal or cmd, by typing cf login -u ********@g***l.com

6.
 Make a folder from which the website is to be deployed on the IBM cloud.
 Go to that location. To move to a location, you can run the following command in cmd: cd <location>

7. Now, Target the cloud foundry tool and the email from which IBM Bluemix is
registered.
cf target -o *********@gmail.com -s dev

8. Push the webpage to IBM by using the command below (a minimal manifest example follows these steps):

cf push <app-name> -p <path-to-war-file>

9. Now the website is successfully hosted on IBM Bluemix with the name mentioned
as a ToolChain.
Here, the name taken is geeksforgeeks, then the toolchain shows the name
geeksforgeeks.

10. The resulting webpage contains the string GeeksForGeeks.
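As a convenience, cf push can read its settings from a manifest.yml placed in the deployment folder, so the application name, memory, and artifact path do not have to be passed as flags each time. The sketch below uses placeholder values, not the exact ones from the steps above.
applications:
- name: geeksforgeeks        # application name shown in Bluemix
  memory: 256M               # stays well inside the 2 GB free runtime quota
  path: ./mysite.war         # WAR file (or site folder) to upload
With this file present, running cf push from the same folder is enough.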
RedHat Ansible: Ansible is an IT automation engine that can automate various IT needs. Its features include application deployment (you can deploy your application easily as per your requirements), cloud provisioning, configuration management (the main feature, where you configure systems and describe your automation jobs), and intra-service orchestration. Ansible uses YAML (YAML Ain't Markup Language) for configuration, which helps describe automation jobs as per requirement. Designed for multi-tier deployments, Ansible models the IT infrastructure by describing how the various systems interrelate, instead of managing one system at a time.
Features :
 It uses no agents and no extra custom security infrastructure, so there is no extra overhead or cost, and it is easy to deploy.
 It uses a very simple language, YAML (YAML Ain't Markup Language), in the form of Ansible Playbooks that you can configure as per your requirements; playbooks describe automation jobs in a way that reads like basic English.
 The Ansible Automation Engine has a direct interaction with the users who write
playbooks and also interacts with cloud services and the Configuration Management
Database (CMDB).
Architecture components :
Here, we will discuss the architecture part and will discuss its components. The Ansible
automation engine consists of various components as described below as follows.

ANSIBLE ARCHITECTURE DIAGRAM

 Inventories: Ansible inventories are lists of managed hosts (servers, databases, network devices) with their IP addresses. They are managed over SSH for UNIX, Linux, or networking devices, and over WinRM for Windows systems.

 APIs: Application Programming Interfaces (APIs) are used as a mode of transport for public and private cloud services.

 Modules: Modules are executed directly on remote hosts through playbooks and can control resources like services, packages, and files, or execute system commands. They act on system files, install packages, and make API calls to the service network. There are over 450 modules shipped with Ansible that automate various jobs in an environment. For example, cloud modules such as cloudformation create or delete an AWS CloudFormation stack.

 Plugins: Plugins are pieces of code that augment Ansible’s core functionality and
allow executing Ansible tasks as a job build step. Ansible ships with several handy
plugins and one can also write it on their own. For example, Action plugins act as
front-ends to modules and can execute tasks on the controller before calling the
modules themselves.

 Networking: Ansible uses a simple, powerful, and agentless automation framework to automate network tasks. It uses a separate data model and spans different network hardware.

 Hosts: Hosts refer to the nodes or systems (Linux, Windows, etc) which are
automated by Ansible.

 Playbooks: Playbooks are simple files written in YAML format which describe the
tasks to be executed by Ansible. Playbooks can declare configurations, orchestrate
the steps of any manual ordered process and can also launch various tasks.

 CMDB: It stands for Configuration Management Database. It is a repository or data warehouse that holds data about a collection of IT assets and also defines the relationships between those assets.

 Cloud: It is a network of remote servers hosted on the internet to store, manage and
process data instead of storing it on a local server.
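To make the Playbooks component above concrete, here is a minimal sketch of a playbook; the webservers group and the Apache package choice are assumptions for illustration, not taken from any real inventory.
---
- name: Install and start Apache on the web tier
  hosts: webservers            # group defined in the inventory
  become: true                 # escalate privileges for package installation
  tasks:
    - name: Ensure Apache is installed
      ansible.builtin.apt:
        name: apache2
        state: present
    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: true
Such a playbook would typically be run with: ansible-playbook -i inventory site.yml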

Microsoft Azure Automation: Automation is a powerful tool for optimizing cost in Azure.
Automation in Azure can help you save time and money and reduce complexity. It allows you to use existing resources more efficiently, increase productivity, and
reduce errors. Automation can also help you automate repeatable processes and tasks, freeing
up resources for more strategic initiatives. Automation in Azure can be used to optimize costs
in a variety of ways. For example, automation can be used to monitor and manage usage and
spend, as well as to automate workloads and services. Automation can also be used to
identify and eliminate redundant services and resources, which can reduce costs.

Ways to Optimize Costs by Automation in Azure: Azure’s automation tools also offer the
ability to automate processes such as deployment, configuration, and monitoring of cloud
resources. This can be done by using Azure Automation runbooks. A runbook is a set of
instructions that can be used to automate a particular task or workflow. For example,
businesses can use runbooks to automate the deployment and configuration of virtual
machines or applications in the cloud.
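As an illustrative sketch only (not an official Microsoft sample), a PowerShell runbook in an Automation account could deallocate tagged virtual machines outside business hours. The AutoShutdown tag name is an assumption, and the Automation account is assumed to have a system-assigned managed identity with permission to stop VMs.
# Authenticate with the Automation account's managed identity (assumed to be enabled).
Connect-AzAccount -Identity | Out-Null
# Find VMs carrying the hypothetical AutoShutdown=true tag and deallocate them.
$vms = Get-AzVM -Status | Where-Object { $_.Tags["AutoShutdown"] -eq "true" }
foreach ($vm in $vms) {
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}
Linking such a runbook to an Automation schedule is what turns it into a recurring cost-saving job.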
Monitoring and Managing Usage and Spend: Automation can help you track and analyze
your usage and spend over time, allowing you to identify opportunities for cost savings.
Automation can also be used to set thresholds and alerts, so you can be notified when costs
start to exceed budgeted amounts. Automation can also be used to automate the process of
shutting down and scaling up resources, to ensure that you are only using the resources you
need, when you need them.
Automating Workloads and Services: Automation can be used to automate the process of
provisioning and configuring resources, as well as the process of deploying and managing
applications. Automation can also be used to automate the process of monitoring, managing,
and scaling workloads and services. Automation can help you ensure that the resources you
are using are optimized for performance, cost, and reliability.
Identifying and Eliminating Redundant Services and Resources: Automation can be used
to identify and eliminate resources that are no longer needed or are being underutilized.
Automation can also be used to identify and eliminate services that are redundant or
unnecessary. Automation can help you reduce the overall cost of running services and
resources, as well as reduce the complexity of managing them.
Automating Managing, Scaling Services, and Workloads: Automation can be used to
automate the process of scaling up and down services and workloads, as well as to automate
the process of managing and monitoring them. Automation can help you ensure that the
services and workloads you are using are optimized for cost, performance, and reliability.
Azure’s automation tools provide businesses with a variety of options to optimize costs and
increase efficiency. By automating processes such as deployment, configuration, and
monitoring of cloud resources, businesses can reduce costs and increase efficiency.
Additionally, businesses can use the cost optimization feature to identify underutilized
resources and optimize their cloud spending accordingly. Finally, businesses can use an ARM
to quickly deploy and manage their applications in the cloud. All of these features provide
businesses with the tools they need to optimize costs and increase efficiency.
Types of Automation Tools: There are many different types of automation tools available,
including:
 Scripting Tools: Scripting tools allow developers to automate tasks by writing scripts
in a programming language. Scripting tools can automate tasks such as web page
updates, data entry, and report generation.
 Workflow Automation Tools: Workflow automation tools allow developers to create
automated processes. Workflows can be used to automate tasks such as document
approval, data entry, and customer notification.
 Machine Learning Tools: Machine learning tools use artificial intelligence to
automate tasks such as data analysis and product recommendations.
 Robotic Process Automation: RPA tools use robots to automate tasks such as data
entry, report generation, and customer service.
 Natural Language Processing: NLP tools use artificial intelligence to understand and
interpret natural language. NLP tools can be used to automate tasks such as customer
service and document analysis.
Cost Savings Through Automation: Optimizing cost with Azure’s automation tools is a great way to reduce IT costs and increase efficiency. Automation helps to streamline processes, reduce manual labor, and improve accuracy, reducing the need for additional employees and allowing a business to focus more on its core operations.
Azure Automation also provides a number of Cost-Saving Features: Create and Manage
Scheduled Tasks, Automate Workflows and Processes, and Integrate with other Systems.
This helps to streamline processes and reduce the need for manual labor. It also helps to
reduce the time it takes to complete tasks and eliminates human errors, which can save time
and money. Azure Cost Management is another cost-saving tool. It helps to optimize cloud
costs and provides visibility into cloud spending. It allows businesses to gain insights into
their cloud spending, identify cost optimization opportunities, and set budgets. This helps to
reduce costs associated with cloud services and can help to identify areas where costs can be
reduced. Azure Cost Optimizer is another cost-saving tool. It helps to identify cost
optimization opportunities and reduce costs associated with cloud services. It provides
detailed cost insights and helps to identify areas where costs can be reduced. It also helps to
identify areas where costs can be optimized for a given workload. Azure’s automation tools
provide a number of cost-saving opportunities. By reducing the need for manual labor,
streamlining processes, and eliminating human errors, automation can reduce costs associated
with cloud services and increase efficiency. Additionally, Azure’s cost-saving features can
help to identify cost optimization opportunities and reduce costs associated with cloud
services. All of these features can help businesses to optimize costs and increase efficiency.
Automation Strategies for Cloud-Based Applications: The cloud offers organizations a
variety of advantages when it comes to hosting their applications. By leveraging the various
cloud service models and architectures, organizations can benefit from increased scalability,
reduced costs, and improved performance. However, managing cloud-based applications can
be a challenging task, requiring organizations to optimize their use of the cloud and its
associated tools. Automation strategies are essential for organizations to maximize their cloud
use cost-effectively and efficiently. Azure is a powerful cloud platform, enabling
organizations to create, manage, and deploy cloud-based applications with ease. As such,
Azure offers a range of automation tools and features to help organizations better manage
their applications and reduce costs. Some Automation Strategies available to organizations
with Azure to optimize their Cloud-Based Apps and reduce costs:

 Scripting and Automation Tools: Azure provides a range of scripting and automation
tools, such as Azure Automation, Azure Resource Manager, and PowerShell, to
enable organizations to automate various tasks associated with their applications.
These tools can be used to automate the deployment and management of applications,
as well as the provisioning and scaling of resources. By automating these tasks,
organizations can gain greater control over their applications and reduce costs.
 Azure Policy: Azure Policy is a service which allows organizations to define and
enforce standards and restrictions across their Azure resources. This can help
organizations ensure that their applications are secure and compliant with industry
standards, while also helping to reduce costs by preventing unnecessary resource
usage.
 Azure Automation: Azure Automation is a service which allows organizations to
automate the deployment and management of their applications. Organizations can
use this service to automate the provisioning and scaling of resources, as well as the
deployment and management of applications. This can help organizations reduce
costs by allowing them to quickly deploy and manage their applications without
having to manually manage the resources involved.
 Azure Resource Manager: Azure Resource Manager is a service which allows
organizations to manage their Azure resources in a unified manner. This can help
organizations reduce costs by ensuring that their applications are deployed and
managed in a consistent and cost-effective manner.
 Azure Storage Accounts: Azure Storage Accounts enable organizations to store and
manage their data in a secure and cost-effective manner. By leveraging Storage
Accounts, organizations can reduce their storage costs, while ensuring that their data
is secure and accessible.
Automated Cloud Provisioning: Automated Cloud Provisioning is the process of using
automated tools to manage and configure cloud-based services. It is the process of
automating the provisioning process, which includes the creation, deployment, and
management of cloud-based services such as servers, applications, and data storage.
Automated cloud provisioning allows organizations to efficiently and cost-effectively
manage their IT infrastructure in the cloud. It eliminates the need for manual processes,
which can be time-consuming and expensive. By automating the processes organizations can
reduce their costs by eliminating the need for manual labor and other associated costs. It also
helps organizations streamline their IT operations, as it allows them to quickly deploy and
configure cloud-based resources, as well as manage and monitor their cloud environment.
Organizations can also use it to improve their security posture. Automated Cloud
Provisioning ensures that all cloud resources are configured securely, and it also allows
organizations to monitor their cloud environment and detect any potential security threats
with the ability to quickly respond to any security incidents, as the automated tools can detect
and alert the organization of any suspicious activity. Organizations can also use Automated
Cloud provisioning to improve their scalability. Automated cloud provisioning allows
organizations to quickly and efficiently scale up or down their cloud resources, depending on
their needs. This can help organizations save time and money in the long run, as they don’t
need to manually add or remove cloud resources.
Automated Resource Management: Automated Resource Management (ARM) is a set of
tools that allow organizations to better manage their cloud resources. ARM provides
automated infrastructure management, cost optimization, and resource utilization. Provision
and Manage Cloud Resources to reduce the cost of cloud operations and improve the
efficiency of cloud deployments. Define policies that govern their Cloud Infrastructure and
define the types of resources that should be provisioned, the amount of resources that can be
used, and the rules that govern resource utilization. Define Cost-Optimization Rules that can
be used to determine when resources should be scaled up or down to optimize cost. Resource
Utilization Insights can help organizations identify resources that are underutilized,
overutilized, and low-performing. This information can then be used to adjust resource
allocation and scale resources up or down to optimize cost. Automation Tools to help
organizations automate the deployment and management of their cloud resources. By using
Azure’s automation tools, organizations can save time and money by automating manual
tasks, optimizing cost and resource utilization, and monitoring their cloud deployments.
These tools can help organizations reduce the cost of cloud operations and improve the
efficiency of cloud deployments.
Automated Performance Monitoring: Automated performance monitoring is a process that
involves the use of automated tools to monitor the performance of an IT system or
application. This process is used to ensure that the system or application is working as
expected and that it meets the required performance objectives.
Automated performance monitoring provides various benefits:
 Identifying and addressing any performance issues quickly and efficiently can save
both time and money, as it reduces the need for manual testing and troubleshooting.
 Identifying Potential Security threats and other vulnerabilities, allows organizations to
take action to address these issues before they become serious problems.
 Analyzing the system or application in order to identify areas of improvement and
then implementing strategies to improve those areas.
 Create Reports on the performance of the system or application which can then be
used to identify areas of improvement.
 Optimize the cost of an IT system or Application. This can be done by reducing the
amount of manual testing and troubleshooting that is required, as well as by reducing
the cost of resources used in the optimization process.
 Reduce the amount of time it takes to identify and address any performance issues,
thus reducing the cost of addressing those issues.
Automated Deployment and Configuration: Azure Automated Deployment and
Configuration is a cloud-based solution that allows users to quickly and easily deploy and
configure their Azure resources. By automating the process of deploying and configuring
Azure resources, users are able to optimize their Azure costs, reduce complexity, and save
time. Azure Automated Deployment and Configuration is a simple but powerful solution that
makes it easy to deploy and configure a wide range of Azure resources. This includes Virtual
Networks, Compute Resources, Storage, Databases, and more. It allows users to easily
specify the type and size of resources they need, and then quickly and easily deploy and
configure them in Azure. This ensures that users have the resources they need for their
applications without having to manually manage and configure them. The solution also
provides users with the ability to quickly and easily scale their Azure resources, so that they
can take advantage of cost savings when their applications require more resources. This
ensures that users are able to make the most of their Azure investments, while also keeping
their costs down. Azure Automated Deployment and Configuration also helps users save
time. It eliminates the need for users to manually deploy and configure their resources. This
allows them to quickly and easily deploy and configure their resources, and save time in the
process.
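For example, assuming an ARM template and a parameter file already exist (the file names and resource group below are placeholders), a deployment can be scripted with the Azure CLI so the same environment is reproduced on every run:
$ az group create --name demo-rg --location westeurope
$ az deployment group create \
    --resource-group demo-rg \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json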
Automated Security and Compliance: Automated Security and Compliance (AS&C) is
a cloud-based security and compliance solution that helps organizations of all sizes to
quickly and cost-effectively secure their applications and data in the cloud. It combines
advanced security features with automated compliance processes to provide an end-to-
end security and compliance solution.
Automated Security and Compliance provide various advantages:
 Achieve and Maintain Compliance with Industry Standards and Regulations such as
HIPAA, ISO 27001, PCI DSS, and other global regulations.
 Quickly Identify and address Security and Compliance Issues, and provides
comprehensive reporting to help organizations maintain compliance with security and
compliance requirements.
 A Comprehensive set of Security and Compliance Services that can be tailored to
meet the unique needs of each organization. It provides automated security and
compliance processes, such as vulnerability scanning, monitoring, and reporting. It
also provides automated compliance processes, such as regulatory reporting and audit
trail generation.
 Optimize Costs by providing a comprehensive security and compliance solution that
is designed to reduce the cost of maintaining compliance with security and
compliance regulations. It also reduces the costs associated with manual security and
compliance processes, such as vulnerability scanning, monitoring, and reporting.
 Automating Compliance Processes, such as regulatory reporting and audit trail
generation. Automating these processes can reduce the manual effort and time
associated with compliance processes, and can help reduce the cost associated with
managing compliance.
 Real-time Security and Compliance Monitoring provides a comprehensive set of
reports and alerts that can help organizations quickly identify and address security and
compliance issues. This can help organizations to quickly identify and address
security and compliance issues and avoid costly penalties.
Automated Backup and Disaster Recovery: Automated Backup and Disaster Recovery is a cloud-based service that provides automated backup and disaster recovery capabilities to Azure customers. It helps organizations to protect their data and applications in the event of a disaster or other unforeseen event.
The main advantages of using Azure Automated Backup and Disaster Recovery are:
 Cost Savings: The service is billed based on the amount of storage used for the
backups, the number of recovery points, the duration of the backups, and other
factors. This allows customers to scale their backup environment as their needs
change, without incurring any additional costs.
 Scalability: The service allows customers to increase their backup and disaster
recovery capabilities as their needs increase. This helps to ensure that their data and
applications are always safe and secure. Additionally, the service provides customers
with the ability to monitor the performance of their backups, allowing them to make
changes as needed.
 Reliability: The service is designed to be highly reliable and secure. It is designed to
handle large volumes of data and applications, ensuring that customers always have
access to their data when they need it.
Follow the step-by-step process below to get started from the Azure Portal.
Step 1: Log in to your Azure Portal
Step 2: From the left top corner of the page >> click Menu and select Dashboard

Step 3: After opening the Dashboard >> click on +New Dashboard/Create >> select a Blank or Custom dashboard of your choice to create one. Then give the dashboard a name and click on Save. Refer to the below images.
Now you will see a blank dashboard. Follow the next steps to add the Azure Monitor data.

Step 4: Now, open the Azure Monitor >> Navigate to the logs section.
Step 5: Run some KQL queries in the Log section then click on Pin to >> Azure
Dashboards.
Step 6: After selecting the Azure Dashboards >> select your dashboard which you have
created in step 3 >> click on Pin to add it to the selected dashboard.

Step 7: Now go to the dashboard to which you pinned the query and check the preview to verify. That’s it; you are done creating a dashboard in Azure. You can share your dashboard with other colleagues in your organization or team, or you can keep it private to restrict access.
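The query pinned in step 5 can be any KQL query. A minimal example, assuming the Log Analytics workspace receives the standard Heartbeat table from monitored machines, lists when each machine last reported in:
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| order by LastHeartbeat desc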
Terraform
Terraform is a popular infrastructure-as-code tool that allows you to automate the
provisioning and management of infrastructure resources. It uses configuration files written
in the HashiCorp Configuration Language (HCL) to define the desired state of your
infrastructure, and it uses various commands to apply those configurations and manage
your infrastructure resources.
How Does Terraform Work?
With Terraform, users can define infrastructure resources using a simple, declarative
configuration language. These resources can include virtual machines, networking
components, storage resources, and more. Once the configuration is defined, Terraform can
be used to create, modify, and destroy these resources in a repeatable and predictable way.
One of the key benefits of Terraform is its ability to support multiple cloud providers, as
well as on-premises and open-source tools. This means that users can define infrastructure
resources using a single configuration and use Terraform to manage resources across
different environments.
Overall, Terraform is a powerful and flexible tool that enables users to define and manage
infrastructure resources in a reusable and automated way. It is widely used in a variety of
industries and scenarios, including cloud infrastructure, data centers, and hybrid
environments.

Components of Terraform Architecture

Terraform Configuration Files

These files contain the definition of the infrastructure resources that Terraform will
manage, as well as any input and output variables and modules. The configuration files are
written in the HashiCorp Configuration Language (HCL), which is a domain-specific
language designed specifically for Terraform.
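A minimal sketch of such a configuration file is shown below; the AWS provider, region, and AMI ID are assumptions chosen purely for illustration.
# main.tf - illustrative only; provider, region, and AMI ID are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"
  tags = {
    Name = "hcl-example"
  }
}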

Terraform State File

This file stores the current state of the infrastructure resources managed by Terraform. The
state file is used to track the resources that have been created, modified, or destroyed, and it
is used to ensure that the infrastructure resources match the desired state defined in the
configuration files.

Infrastructure as Code

Terraform allows you to use code to define and manage your infrastructure, rather than
manually configuring resources through a user interface. This makes it easier to version,
review, and collaborate on infrastructure changes.

Cloud APIs or other Infrastructure Providers

These are the APIs or other interfaces that Terraform uses to create, modify, or destroy
infrastructure resources. Terraform supports multiple cloud providers, as well as on-
premises and open-source tools.
Providers

Terraform integrates with a wide range of cloud and infrastructure providers, including
AWS, Azure, GCP, and more. These providers allow Terraform to create and manage
resources on those platforms.
Overall, the architecture of a Terraform deployment consists of configuration files, a state
file, and a CLI that interacts with cloud APIs or other infrastructure providers to create,
modify, or destroy resources. This architecture enables users to define and manage
infrastructure resources in a declarative and reusable way.
Terraform Modules
In Terraform, a module is a container for a set of related resources that are used together to
perform a specific task. Modules allow users to organize and reuse their infrastructure code,
making it easier to manage complex infrastructure deployments.
Modules are defined using the ‘ module ‘ block in a Terraform configuration. A module block has a name label, which is used to reference the module in other parts of the configuration, and takes arguments such as:
 source: The source location of the module. This can be a local path or a URL.
 version: The version of the module to use. This is optional and can be used to pin a specific version of the module.
Inside a module block, users can define the resources that make up the module, as well as
any input and output variables that the module exposes. Input variables allow users to pass
values into the module when it is called, and output variables allow the module to return
values to the calling configuration. Modules can be nested, allowing users to create
complex infrastructure architectures using a hierarchical structure. Modules can also be
published and shared on the Terraform Registry, enabling users to reuse and extend the
infrastructure code of others.
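A hedged sketch of calling a module is shown below; the ./modules/network path, the cidr_block input, and the public_subnet_id output are hypothetical names used only to illustrate the source argument and the input/output variables described above.
# Calling a local module and passing an input variable.
module "network" {
  source     = "./modules/network"   # a registry source could also pin a version
  cidr_block = "10.0.0.0/16"
}

# Consuming an output variable exposed by the module.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder
  instance_type = "t3.micro"
  subnet_id     = module.network.public_subnet_id
}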
What is Terraform Core?
The open-source binary for Terraform Core is available for download and usage on the
command line. The configuration files you provide (your desired state) and the present state
(a state file generated and managed solely by Terraform) are the two input sources used by
Terraform’s Core. The Core then develops a plan for what resources need to be added,
altered, or eliminated using this knowledge.
Why Terraform?
Terraform offers many benefits and it is a widely used tool in present organizations for
managing their infrastructure.

Multi-Cloud and Multi-Provider Support: Terraform can manage multiple clouds at a time, such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP), and you can also manage your on-premises infrastructure. The language used in Terraform is the HashiCorp Configuration Language (HCL).

Terraform Is a Declarative Management Tool: There is no need to tell Terraform how to achieve the desired state step by step; you just describe the desired state you want and Terraform will automatically achieve it. That is why Terraform is called a declarative management tool.

Mutable and Immutable Infrastructure: Mutable infrastructure refers to upgrading software by modifying the existing infrastructure in place. Immutable infrastructure refers to infrastructure that is never modified once it is created; it is replaced instead. Which one to choose depends on your requirements.

State Management: Terraform maintains information about the resources it has created in a state file (terraform.tfstate). This enables Terraform to know which resources are under its control and when to update or destroy them.

Private Module Registry: A private module registry is a repository for Terraform modules
that is only accessible to a specific group of users, rather than being publicly available.
Private module registries are useful for organizations that want to manage and distribute
their own infrastructure code internally, rather than using publicly available modules from
the Terraform Registry. To use a private module registry, users need to configure their
Terraform CLI to authenticate with the registry and access the modules. This typically
involves setting up an access token or other authentication method and specifying the
registry URL in the Terraform configuration. Once configured, users can use the ‘ module ‘
block in their Terraform configuration to reference the modules in the private registry, just
like they would with publicly available modules. Private module registries can be hosted on
a variety of platforms, including cloud providers, on-premises servers, and open-source
tools. Overall, private module registries are a useful tool for organizations that want to
manage and distribute their own Terraform modules internally, enabling them to better
control and reuse their infrastructure code.
Terraform Commands

Terraform init

The terraform init command initializes a Terraform working directory by downloading and installing any required plugins and dependencies. It should be run before any other Terraform command.
$ terraform init

Terraform Validate

The validate command does precisely what its name implies: it ensures that the code is internally coherent and examines it for syntax mistakes. Only the configuration files (*.tf) in the current working directory, along with the modules they reference, are examined.
$ terraform validate

Terraform Apply

Terraform apply command applies the changes defined in the configuration to your
infrastructure. It creates or updates the resources according to the configuration, and it also
prompts you to confirm the changes before applying them.
$ terraform apply
Terraform Destroy

Terraform destroy command will destroy all the resources created by Terraform in the
current working directory. It is a useful command for tearing down your infrastructure
when you no longer need it.
$ terraform destroy

Terraform Import

The terraform import command imports an existing resource into the Terraform state, allowing it to be managed by Terraform.
$ terraform import <resource-address> <resource-id>

Terraform Console

Opens an interactive console for evaluating expressions in the Terraform configuration.
$ terraform console

Terraform Refresh

This command updates the state of your infrastructure to reflect the actual state of your
resources. It is useful when you want to ensure that your Terraform state is in sync with the
actual state of your infrastructure.
$ terraform refresh
Advantages of Terraform
 Declarative Configuration: Terraform uses a declarative configuration language,
which means that users define the desired state of their infrastructure resources, rather
than the specific steps required to achieve that state. This makes it easier to understand
and manage complex infrastructure deployments.
 Support for Multiple Cloud Providers: Terraform supports multiple cloud
providers, as well as on-premises and open-source tools, which means that users can
define and manage their infrastructure resources using a single configuration.
 Reusable Infrastructure Code: Terraform allows users to define their
infrastructure resources in a reusable and modular way, using features such as modules
and variables. This makes it easier to manage and maintain complex infrastructure
deployments.
 Collaboration and Version Control: Terraform configuration files can be stored in
version control systems such as Git, which makes it easier for teams to collaborate and
track changes to their infrastructure.
 Efficient Resource Management: Terraform has features such as resource
dependencies and provisioners that enable users to manage their infrastructure resources
efficiently, minimizing duplication and ensuring that resources are created and
destroyed in the correct order.
Disadvantages of Terraform
 Complexity: Terraform can be complex to learn and use, especially for users who
are new to infrastructure automation. It has a large number of features and can be
difficult to understand the full scope of its capabilities.
 State Management: Terraform uses a state file to track the resources it manages,
which can cause issues if the state file becomes out of sync with the actual
infrastructure. This can happen if the infrastructure is modified outside of Terraform or
if the state file is lost or corrupted.
 Performance: Terraform can be slower than some other IaC tools, especially when
managing large infrastructure deployments. This can be due to the need to communicate
with multiple APIs and the overhead of managing the state file.
 Limited Error Handling: Terraform does not have robust error handling, and it can
be difficult to diagnose and fix issues when they arise. This can make it difficult to
troubleshoot problems with infrastructure deployments.
 Limited Rollback Capabilities: Terraform does not have a built-in rollback
feature, so it can be difficult to undo changes to infrastructure if something goes wrong.
Users can use the ‘ terraform destroy ‘ command to destroy all resources defined in
the configuration, but this can be time-consuming and may not be feasible in all
situations.

Kubernetes: Kubernetes Cluster mainly consists of Worker Machines called Nodes and a
Control Plane. In a cluster, there is at least one worker node. The Kubectl CLI
communicates with the Control Plane and Control Plane manages the Worker Nodes.
Kubernetes – Cluster Architecture
As can be seen in the diagram below, Kubernetes has a client-server architecture and has
master and worker nodes, with the master being installed on a single Linux system and the
nodes on many Linux workstations.
Kubernetes Components
Kubernetes is composed of a number of components, each of which plays a specific role in
the overall system. These components can be divided into two categories:
 Nodes: The worker machines that host the pods where our containers are deployed. Each Kubernetes cluster requires at least one worker node.
 Control plane: The control plane manages the worker nodes and any pods contained within them.
Control Plane Components
It is basically a collection of various components that help us in managing the overall
health of a cluster. For example, if you want to set up new pods, destroy pods, scale pods,
etc. Basically, 4 services run on Control Plane:
Kube-API server
The API server is a component of the Kubernetes control plane that exposes the Kubernetes
API. It is like an initial gateway to the cluster that listens to updates or queries via CLI like
Kubectl. Kubectl communicates with API Server to inform what needs to be done like
creating pods or deleting pods etc. It also works as a gatekeeper. It generally validates
requests received and then forwards them to other processes. No request can be directly
passed to the cluster, it has to be passed through the API Server.
Kube-Scheduler
When API Server receives a request for Scheduling Pods then the request is passed on to
the Scheduler. It intelligently decides on which node to schedule the pod for better
efficiency of the cluster.
Kube-Controller-Manager
The kube-controller-manager is responsible for running the controllers that handle the
various aspects of the cluster’s control loop. These controllers include the replication
controller, which ensures that the desired number of replicas of a given application is
running, and the node controller, which ensures that nodes are correctly marked as “ready”
or “not ready” based on their current state.
etcd
It is a key-value store of a Cluster. The Cluster State Changes get stored in the etcd. It acts
as the Cluster brain because it tells the Scheduler and other processes about which
resources are available and about cluster state changes.
Node Components
These are the nodes where the actual work happens. Each Node can have multiple pods and
pods have containers running inside them. There are 3 processes in every Node that are
used to Schedule and manage those pods.
Container runtime
A container runtime is needed to run the application containers running on pods inside a
pod. Example-> Docker
kubelet
kubelet interacts with both the container runtime as well as the Node. It is the process
responsible for starting a pod with a container inside.
kube-proxy
It is the process responsible for forwarding the request from Services to the pods. It has
intelligent logic to forward the request to the right pod in the worker node.
Add-ons Plug-in
We may install extra functionality in the cluster (such as a DaemonSet, Deployment, etc.) with the aid of add-ons. Because add-ons provide cluster-level functionality, their resources belong to the kube-system namespace.

Some of the available add-ons

 CoreDNS
 KubeVirt
 ACI
 Calico etc.
Commands for Kubectl
Here are some common commands for interacting with a Kubernetes cluster:
 To view a list of all the pods in the cluster, you can use the following command:
kubectl get pods
 To view a list of all the nodes in the cluster, you can use the following command:
kubectl get nodes
 To view a list of all the services in the cluster, you can use the following command:
kubectl get services
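To tie the components together, here is a minimal sketch of a Deployment manifest; the image and replica count are illustrative. The API server stores the object in etcd, the scheduler places the pods on nodes, and the kubelet starts the containers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 2                    # desired pod count maintained by the controller manager
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
        ports:
        - containerPort: 80
It could be applied with kubectl apply -f deployment.yaml and checked with kubectl get pods.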

Prerequisites
Installing Docker − Docker is required on all the instances of Kubernetes. Following
are the steps to install the Docker.
Step 1 − Log on to the machine with the root user account.
Step 2 − Update the package information. Make sure that the apt package is
working.
Step 3 − Run the following commands.
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates
Step 4 − Add the new GPG key.
$ sudo apt-key adv \
--keyserver hkp://ha.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee
/etc/apt/sources.list.d/docker.list
Step 5 − Update the APT package index.
$ sudo apt-get update
Once all the above tasks are complete, you can start with the actual installation of
the Docker engine. However, before this you need to verify that the kernel version
you are using is correct.

Install Docker Engine


Run the following commands to install the Docker engine.
Step 1 − Logon to the machine.
Step 2 − Update the package index.
$ sudo apt-get update
Step 3 − Install the Docker Engine using the following command.
$ sudo apt-get install docker-engine
Step 4 − Start the Docker daemon.
$ sudo service docker start
Step 5 − To verify that Docker is installed, use the following command.
$ sudo docker run hello-world
Install etcd 2.0
This needs to be installed on Kubernetes Master Machine. In order to install it, run
the following commands.
$ curl -L https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.0-linux-amd64.tar.gz -o etcd-v2.0.0-linux-amd64.tar.gz ->1
$ tar xzvf etcd-v2.0.0-linux-amd64.tar.gz ------>2
$ cd etcd-v2.0.0-linux-amd64 ------------>3
$ mkdir /opt/bin ------------->4
$ cp etcd* /opt/bin ----------->5
In the above set of commands −

 First, we download etcd and save it with the specified name.
 Then, we have to un-tar the tar package.
 We make a directory named bin inside /opt.
 We copy the extracted files to the target location (/opt/bin).
Now we are ready to build Kubernetes. We need to install Kubernetes on all the
machines on the cluster.
$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes
$ make release
The above command will create an _output directory in the root of the kubernetes folder. Next, we can copy the built binaries into any directory of our choice, such as /opt/bin.
Next, comes the networking part wherein we need to actually start with the setup of
Kubernetes master and node. In order to do this, we will make an entry in the host
file which can be done on the node machine.
$ echo "<IP address of master machine> kube-master
< IP address of Node Machine>" >> /etc/hosts
Following will be the output of the above command.

Now, we will start with the actual configuration on the Kubernetes Master.
First, we will start copying all the configuration files to their correct location.
$ cp <Current dir. location>/kube-apiserver /opt/bin/
$ cp <Current dir. location>/kube-controller-manager /opt/bin/
$ cp <Current dir. location>/kube-kube-scheduler /opt/bin/
$ cp <Current dir. location>/kubecfg /opt/bin/
$ cp <Current dir. location>/kubectl /opt/bin/
$ cp <Current dir. location>/kubernetes /opt/bin/
The above command will copy all the configuration files to the required location. Now
we will come back to the same directory where we have built the Kubernetes folder.
$ cp kubernetes/cluster/ubuntu/init_conf/kube-apiserver.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-controller-manager.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-scheduler.conf /etc/init/

$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-apiserver /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-controller-manager /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-scheduler /etc/init.d/

$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/
The next step is to update the copied configuration files under the /etc directory.
Configure etcd on master using the following command.
$ ETCD_OPTS = "-listen-client-urls = http://kube-master:4001"
Configure kube-apiserver
For this on the master, we need to edit the /etc/default/kube-apiserver file which
we copied earlier.
$ KUBE_APISERVER_OPTS = "--address = 0.0.0.0 \
--port = 8080 \
--etcd_servers = <The path that is configured in ETCD_OPTS> \
--portal_net = 11.1.1.0/24 \
--allow_privileged = false \
--kubelet_port = < Port you want to configure> \
--v = 0"
Configure the kube Controller Manager
We need to add the following content in /etc/default/kube-controller-manager.
$ KUBE_CONTROLLER_MANAGER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--machines = kube-minion \ -----> # this is the kubernetes node
--v = 0
Next, configure the kube scheduler in the corresponding file.
$ KUBE_SCHEDULER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--v = 0"
Once all the above tasks are complete, we are good to go ahead by bringing up the Kubernetes Master. In order to do this, we will restart Docker.
$ service docker restart
Kubernetes Node Configuration
The Kubernetes node will run two services: the kubelet and the kube-proxy. Before
moving ahead, we need to copy the binaries we downloaded to their required folders
where we want to configure the kubernetes node.
Use the same method of copying the files that we did for kubernetes master. As it
will only run the kubelet and the kube-proxy, we will configure them.
$ cp <Path of the extracted file>/kubelet /opt/bin/
$ cp <Path of the extracted file>/kube-proxy /opt/bin/
$ cp <Path of the extracted file>/kubecfg /opt/bin/
$ cp <Path of the extracted file>/kubectl /opt/bin/
$ cp <Path of the extracted file>/kubernetes /opt/bin/
Now, we will copy the content to the appropriate dir.
$ cp kubernetes/cluster/ubuntu/init_conf/kubelet.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-proxy.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kubelet /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-proxy /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/
Next, we will configure the kubelet and kube-proxy conf files.
We will start with /etc/init/kubelet.conf.
$ KUBELET_OPTS = "--address = 0.0.0.0 \
--port = 10250 \
--hostname_override = kube-minion \
--etcd_servers = http://kube-master:4001 \
--enable_server = true \
--v = 0"
For kube-proxy, we will add the following to /etc/init/kube-proxy.conf.
$ KUBE_PROXY_OPTS = "--etcd_servers = http://kube-master:4001 \
--v = 0"
Finally, we will restart the Docker service.
$ service docker restart
Now we are done with the configuration. You can check by running the following
commands.
$ /opt/bin/kubectl get minions
In order to create an application for Kubernetes deployment, we need to first create
the application on Docker. This can be done in two ways −

 By downloading
 From Docker file
By Downloading
The existing image can be downloaded from Docker hub and can be stored on the
local Docker registry.
In order to do that, run the Docker pull command.
$ docker pull --help
Usage: docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Pull an image or a repository from the registry
-a, --all-tags = false Download all tagged images in the repository
--help = false Print usage
Once an image has been pulled, it is stored in the local Docker registry and can be
listed with the docker images command.
If we want to build a container from the image which consists of an application to
test, we can do it using the Docker run command.
$ docker run -i -t ubuntu /bin/bash
From Docker File
In order to create an application from the Docker file, we need to first create a
Docker file.
Following is an example of Jenkins Docker file.
FROM ubuntu:14.04
MAINTAINER [email protected]
ENV REFRESHED_AT 2017-01-15
RUN apt-get update -qq && apt-get install -qqy curl
RUN curl https://get.docker.io/gpg | apt-key add -
RUN echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc openjdk-6-jdk git-core lxc-docker
ENV JENKINS_HOME /opt/jenkins/data
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME/plugins
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war-stable/latest/jenkins.war
RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \
    -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh
VOLUME /var/lib/docker
EXPOSE 8080
ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]
Once the above file is created, save it with the name Dockerfile and cd to the file
path. Then, run the following command.
$ sudo docker build -t jamtur01/jenkins .
Once the image is built, we can test if the image is working fine and can be
converted to a container.
$ docker run -i -t jamtur01/jenkins /bin/bash

Deployment is a method of converting images to containers and then allocating
those images to pods in the Kubernetes cluster. This also helps in setting up the
application cluster, which includes deployment of the service, pod, replication
controller, and replica set. The cluster can be set up in such a way that the
applications deployed on the pods can communicate with each other.
In this setup, we can have a load balancer sitting on top of one application, diverting
traffic to a set of pods, which later communicate with backend pods. The
communication between pods happens via the Service object built into Kubernetes.
Nginx Load Balancer YAML File
apiVersion: v1
kind: Service
metadata:
  name: oppv-dev-nginx
  labels:
    k8s-app: omni-ppv-api
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: omninginx
  selector:
    k8s-app: appname
    component: nginx
    env: dev
Nginx Replication Controller YAML File
apiVersion: v1
kind: ReplicationController
metadata:
  name: appname
spec:
  replicas: replica_count
  template:
    metadata:
      name: appname
      labels:
        k8s-app: appname
        component: nginx
        env: env_name
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        env:
        - name: BACKEND_HOST
          value: oppv-env_name-node:3000
Frontend Service YAML File
apiVersion: v1
kind: Service
metadata:
  name: appname
  labels:
    k8s-app: appname
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: appname
    component: nodejs
    env: dev
Frontend Replication Controller YAML File
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      name: frontend
      labels:
        k8s-app: frontend
        component: nodejs
        env: dev
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        env:
        - name: ENV
          valueFrom:
            configMapKeyRef:
              name: appname
              key: config-env
Backend Service YAML File
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    k8s-app: backend
spec:
  type: NodePort
  ports:
  - name: http
    port: 9010
    protocol: TCP
    targetPort: 9000
  selector:
    k8s-app: appname
    component: play
    env: dev
Backend Replication Controller YAML File
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
spec:
  replicas: 3
  template:
    metadata:
      name: backend
      labels:
        k8s-app: backend
        component: play
        env: dev
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        command: [ "./docker-entrypoint.sh" ]
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        volumeMounts:
        - name: config-volume
          mountPath: /app/vipin/play/conf
      volumes:
      - name: config-volume
        configMap:
          name: appname
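Once the placeholders in the above files (appname, IMAGE_TEMPLATE, replica_count, and the request/limit values) are filled in, the manifests can be submitted to the cluster with kubectl. The following is a minimal sketch; the file names used here (for example backend-rc.yaml) are hypothetical and should match whatever names you saved the YAML files under.
$ kubectl create -f nginx-service.yaml -f nginx-rc.yaml
$ kubectl create -f frontend-service.yaml -f frontend-rc.yaml
$ kubectl create -f backend-service.yaml -f backend-rc.yaml
$ kubectl get services,rc,pods
The last command simply verifies that the services, replication controllers, and pods created from these files are up and running.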

Autoscaling is one of the key features of a Kubernetes cluster. It is a feature in which
the cluster is capable of increasing the number of nodes as the demand for service
response increases and decreasing the number of nodes as the requirement
decreases. This feature of autoscaling is currently supported in Google Compute
Engine (GCE) and Google Container Engine (GKE), and support for AWS is expected soon.
In order to set up scalable infrastructure in GCE, we first need an active GCE project
with the Google Cloud Monitoring, Google Cloud Logging, and Stackdriver features enabled.
First, we will set up the cluster with a few nodes running in it. Once done, we need to
set the following environment variables.

Environment Variables
export NUM_NODES=2
export KUBE_AUTOSCALER_MIN_NODES=2
export KUBE_AUTOSCALER_MAX_NODES=5
export KUBE_ENABLE_CLUSTER_AUTOSCALER=true
Once done, we will start the cluster by running kube-up.sh. This will create the cluster
together with the cluster autoscaler add-on.
./cluster/kube-up.sh
On creation of the cluster, we can check our cluster using the following kubectl
command.
$ kubectl get nodes
NAME STATUS AGE
kubernetes-master Ready,SchedulingDisabled 10m
kubernetes-minion-group-de5q Ready 10m
kubernetes-minion-group-yhdx Ready 8m
Now, we can deploy an application on the cluster and then enable the horizontal pod
autoscaler. This can be done using the following command.
$ kubectl autoscale deployment <Application Name> --cpu-percent=50 --min=1 --max=10
The above command shows that we will maintain at least 1 and at most 10 replicas
of the pod as the load on the application increases.
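The same policy can also be written declaratively as a HorizontalPodAutoscaler object instead of using the kubectl autoscale shortcut. The following is a minimal sketch using the autoscaling/v1 API; the Deployment name php-apache matches the example output shown below, and the apiVersion of the target Deployment may differ on older clusters.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1   # assumed API group of the target Deployment
    kind: Deployment
    name: php-apache      # assumed to be an existing Deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Saving this to a file and creating it with kubectl create -f has the same effect as the autoscale command shown above.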
We can check the status of the autoscaler by running the $ kubectl get hpa command.
We will increase the load on the pods using the following command.
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
We can check the hpa by running $ kubectl get hpa command.
$ kubectl get hpa
NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
php-apache   Deployment/php-apache/scale   50%      310%      1         20        2m
$ kubectl get deployment php-apache
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php-apache   7         7         7            3           4m
We can check the number of pods running using the following command.
jsz@jsz-desk2:~/k8s-src$ kubectl get pods
php-apache-2046965998-3ewo6 0/1 Pending 0 1m
php-apache-2046965998-8m03k 1/1 Running 0 1m
php-apache-2046965998-ddpgp 1/1 Running 0 5m
php-apache-2046965998-lrik6 1/1 Running 0 1m
php-apache-2046965998-nj465 0/1 Pending 0 1m
php-apache-2046965998-tmwg1 1/1 Running 0 1m
php-apache-2046965998-xkbw1 0/1 Pending 0 1m
And finally, we can get the node status.
$ kubectl get nodes
NAME STATUS AGE
kubernetes-master Ready,SchedulingDisabled 9m
kubernetes-minion-group-6z5i Ready 43s
kubernetes-minion-group-de5q Ready 9m
kubernetes-minion-group-yhdx Ready 9m

Setting up the Kubernetes dashboard involves several steps, with the following set of
tools required as prerequisites.

 Docker (1.3+)
 go (1.5+)
 nodejs (4.2.2+)
 npm (1.3+)
 java (7+)
 gulp (3.9+)
 Kubernetes (1.1.2+)
Setting Up the Dashboard
$ sudo apt-get update && sudo apt-get upgrade

Installing Python
$ sudo apt-get install python
$ sudo apt-get install python3

Installing GCC
$ sudo apt-get install gcc-4.8 g++-4.8

Installing make
$ sudo apt-get install make

Installing Java
$ sudo apt-get install openjdk-7-jdk

Installing Node.js
$ wget https://nodejs.org/dist/v4.2.2/node-v4.2.2.tar.gz
$ tar -xzf node-v4.2.2.tar.gz
$ cd node-v4.2.2
$ ./configure
$ make
$ sudo make install

Installing gulp
$ npm install -g gulp
$ npm install gulp
Verifying Versions
Java Version
$ java -version
java version "1.7.0_91"
OpenJDK Runtime Environment (IcedTea 2.6.3) (7u91-2.6.3-1~deb8u1+rpi1)
OpenJDK Zero VM (build 24.91-b01, mixed mode)

$ node -v
v4.2.2

$ npm -v
2.14.7

$ gulp -v
[09:51:28] CLI version 3.9.0

$ sudo gcc --version
gcc (Raspbian 4.8.4-1) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc. This is free software;
see the source for copying conditions. There is NO warranty; not even for
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Installing GO
$ git clone https://go.googlesource.com/go
$ cd go
$ git checkout go1.4.3
$ cd src

Building GO
$ ./all.bash
$ vi /root/.bashrc
In the .bashrc
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

$ go version
go version go1.4.3 linux/arm
Installing Kubernetes Dashboard
$ git clone https://github.com/kubernetes/dashboard.git
$ cd dashboard
$ npm install -g bower
Running the Dashboard
$ git clone https://github.com/kubernetes/dashboard.git
$ cd dashboard
$ npm install -g bower
$ gulp serve
[11:19:12] Requiring external module babel-core/register
[11:20:50] Using gulpfile ~/dashboard/gulpfile.babel.js
[11:20:50] Starting 'package-backend-source'...
[11:20:50] Starting 'kill-backend'...
[11:20:50] Finished 'kill-backend' after 1.39 ms
[11:20:50] Starting 'scripts'...
[11:20:53] Starting 'styles'...
[11:21:41] Finished 'scripts' after 50 s
[11:21:42] Finished 'package-backend-source' after 52 s
[11:21:42] Starting 'backend'...
[11:21:43] Finished 'styles' after 49 s
[11:21:43] Starting 'index'...
[11:21:44] Finished 'index' after 1.43 s
[11:21:44] Starting 'watch'...
[11:21:45] Finished 'watch' after 1.41 s
[11:23:27] Finished 'backend' after 1.73 min
[11:23:27] Starting 'spawn-backend'...
[11:23:27] Finished 'spawn-backend' after 88 ms
[11:23:27] Starting 'serve'...
2016/02/01 11:23:27 Starting HTTP server on port 9091
2016/02/01 11:23:27 Creating API client for
2016/02/01 11:23:27 Creating Heapster REST client for http://localhost:8082
[11:23:27] Finished 'serve' after 312 ms
[BS] [BrowserSync SPA] Running...
[BS] Access URLs:
--------------------------------------
Local: http://localhost:9090/
External: http://192.168.1.21:9090/
--------------------------------------
UI: http://localhost:3001
UI External: http://192.168.1.21:3001
--------------------------------------
[BS] Serving files from: /root/dashboard/.tmp/serve
[BS] Serving files from: /root/dashboard/src/app/frontend
[BS] Serving files from: /root/dashboard/src/app
The Kubernetes Dashboard
Cloudify: The Cloudify platform provides infrastructure automation using
Environment-as-a-Service technology to deploy and continuously manage any cloud,
private data center, or Kubernetes service from one central point, while enabling
developers to self-service their environments.

Morpheus: Morpheus is a powerful self-service engine to provide enterprise agility,


control, and efficiency. Quickly enable on-prem private clouds, centralize public cloud
access, and orchestrate change with cost analytics, governance policy, and automation.
DC/OS container orchestration: DC/OS (the Distributed Cloud Operating System) is an
open-source, distributed operating system based on the Apache Mesos distributed
systems kernel. DC/OS manages multiple machines in the cloud or on-premises from a
single interface; deploys containers, distributed services, and legacy applications into
those machines; and provides networking, service discovery and resource management
to keep the services running and communicating with each other.
Mesos Containers: Mesos plays well with existing container technologies (e.g., Docker)
and also provides its own container technology. It also supports composing different
container technologies (e.g., Docker and Mesos). Mesos implements the following
containerizers: the composing, Docker, and Mesos containerizers. Kubernetes and Apache
Mesos are two of the most popular container orchestration engines (COEs). These two
technologies take different approaches to container management. Kubernetes works purely
as a container orchestrator, while Mesos is more like an "operating system for your data
center." Apache Mesos, an open-source project designed for large-scale container
deployments, manages compute clusters, including container clusters and federation.
Docker Containers: Docker is a containerization platform that packages your application
and all its dependencies together in the form of a Docker container to ensure that your
application works seamlessly in any environment. A Docker container is a standardized unit
which can be created on the fly to deploy a particular application or environment. It could be
an Ubuntu container, a CentOS container, etc., to fulfill the requirement from an operating-
system point of view. It could also be an application-oriented container, like a CakePHP
container or a Tomcat-Ubuntu container.
Let’s understand it with an example: a company needs to develop a Java application. In
order to do so, the developer will set up an environment with the Tomcat server installed in it.
Once the application is developed, it needs to be tested by the tester. Now the tester will
again set up a Tomcat environment from scratch to test the application. Once the application
testing is done, it will be deployed on the production server. Again, production needs an
environment with Tomcat installed on it, so that it can host the Java application. As you can
see, the same Tomcat environment setup is done three times. There are some issues that I
have listed below with this approach:
 There is a loss of time and effort.
 There could be a version mismatch in the different setups, i.e., the developer and tester
may have installed Tomcat 7, whereas the system admin installed Tomcat 9 on the
production server.
Now, I will show you how a Docker container can be used to prevent this loss.
In this case, the developer will create a Tomcat Docker image (an image is nothing but a
blueprint to deploy multiple containers of the same configuration) using a base image
like Ubuntu, which already exists in Docker Hub (the Hub has some base images
available for free). Now this image can be used by the developer, the tester, and the
system admin to deploy the Tomcat environment. This is how the container solves the
problem.
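As a rough illustration of what such an image could look like, the following Dockerfile sketch builds on the official Tomcat base image from Docker Hub; the version tag and the application artifact name app.war are assumptions.
# Start from the official Tomcat image on Docker Hub (version tag is an assumption)
FROM tomcat:8.0
# Copy the packaged Java application into Tomcat's webapps directory
# (app.war is a hypothetical artifact name)
COPY app.war /usr/local/tomcat/webapps/app.war
# Tomcat listens on port 8080 by default
EXPOSE 8080
# The base image already starts Tomcat via catalina.sh run, so no CMD is needed
The developer, the tester, and the system admin can then run docker build and docker run against this one Dockerfile and get an identical Tomcat environment each time.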

AWS infrastructure automation tools are AWS OpsWorks, AWS Elastic Beanstalk,
EC2 Image Builder, AWS Proton, AWS Service Catalog, AWS Cloud9, AWS
CloudShell, and Amazon CodeGuru.
There are two types of cloud automation. The first is support for corporate data center
operations. The second is hosting for websites and mobile applications at scale. Public
cloud hardware from AWS, Google Cloud, and Microsoft Azure can be used for either
purpose.
Chef Automate: Chef Automate provides a single dashboard and analytics for
infrastructure automation, Chef Habitat for application automation, and Chef InSpec
for security and compliance automation. Chef Automate measurably increases the
ability to deliver software quickly, increasing speed and efficiency while decreasing risk.
Google Cloud Deployment Manager: Google Cloud Deployment Manager is an
infrastructure deployment service that automates the creation and management of Google
Cloud resources. It can be used to write flexible templates and configuration files and to use
them to create deployments that span multiple services (Cloud Storage, Compute Engine,
Cloud SQL, etc.).
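For illustration, a minimal Deployment Manager configuration could look like the following sketch, which declares a single Compute Engine VM; the resource name, zone, machine type, and image family shown here are assumptions and can be replaced as needed.
resources:
- name: demo-vm               # hypothetical resource name
  type: compute.v1.instance
  properties:
    zone: us-central1-a       # assumed zone
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
A configuration like this would typically be deployed with gcloud deployment-manager deployments create <deployment name> --config <file>.yaml.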

Puppet Enterprise: Puppet is an efficient system management tool for centralizing and
automating the configuration management process. It can also be utilized as open-
source configuration management for server configuration, management, deployment,
and orchestration.
Red Hat Ansible Automation Platform: Red Hat® Ansible® Automation Platform is an
end-to-end automation platform to configure systems, deploy software, and orchestrate
advanced workflows. It includes resources to create, manage, and scale across the entire
enterprise. The following components are included in Red Hat Ansible Automation
Platform: Self-Hosted and/or on-premises components, Automation controller, Private
automation hub, Automation content navigator, Automation execution environments,
Execution environment builder, Automation mesh, and Ansible content tools.
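As a small example of the kind of automation content that runs on the platform, the following is a minimal Ansible playbook sketch; the webservers host group and the httpd package/service names are assumptions that depend on your inventory and operating system.
---
- name: Install and start the Apache web server
  hosts: webservers        # assumed inventory group
  become: true
  tasks:
    - name: Install the httpd package
      ansible.builtin.yum:
        name: httpd        # assumed package name for RHEL-family systems
        state: present
    - name: Ensure the httpd service is running and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
Such a playbook can be run with ansible-playbook from the command line or as a job template from the automation controller.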
VMware vRealize Automation: VMware vRealize® Automation™ is a modern
infrastructure automation platform that increases productivity and agility by reducing
complexity and eliminating manual or semi-manual tasks. This automation allows you
to rapidly respond to changing business needs. You can put in place personalized and
relevant policies to enforce deployment standards, service levels, and resource quotas.
With vRA, you get a wide selection of multi-cloud, multi-vendor support and an extensible
design. To this effect, you can integrate your storage array with the vRealize Automation (vRA)
and vRealize Orchestrator (vRO) products from VMware. This empowers users to design and
consume your storage offerings to best fulfill their business case.
Assignment 2 (CO2)

Q.1 Briefly explain AWS CloudFormation.

Q.2 Briefly explain IBM Cloud Orchestrator.

Q.3 Briefly explain Microsoft Azure Automation.

Submission link: https://forms.gle/eTYHFZ4tBapSryC66
