
Module 1-5 Cloud

This is about cloud computing and its various aspects in edge and fog computing

Uploaded by

Siddhant Yadav

Containerization and Microservices

CCA3010

Module 1
Syllabus

Introduction to DevOps and Kubernetes: Software delivery challenges: Waterfall and static delivery, Agile and digital delivery, software delivery on the cloud, continuous integration, continuous delivery, DevOps configuration management, Infrastructure as Code. Introduction to Kubernetes, running your first container, the demo application, building a container, container registries, life cycle of a container, working with a Dockerfile, Docker commands.
Software delivery challenges:
✓Software delivery can be a complex process, and organizations often
face various challenges throughout the development and deployment
lifecycle. Some common software delivery challenges include:

1. Unclear Requirements
2. Poor Communication
3. Scope Creep
4. Inadequate Testing
5. Integration Issues
6. Lack of Automation
7. Insufficient Quality Assurance
8. Resource Constraints
9. Ineffective Project Management
Software delivery challenges:
10. Security Concerns
11. Regulatory Compliance
12. Scaling Challenges

• Addressing these challenges often involves adopting best practices such as Agile
methodologies, DevOps principles, continuous integration and delivery (CI/CD), and
robust testing and automation strategies.
Waterfall Model:
• The Waterfall model is a traditional and linear software development
methodology where progress is seen as flowing steadily through sequential
phases such as requirements, design, implementation, testing, deployment,
and maintenance.

• Challenges:

1. Rigidity to Changes: Changes in requirements are challenging to accommodate once a phase is completed.
2. Limited Stakeholder Involvement: Stakeholder input is mainly solicited at the beginning and end of the development process.
3. Late Detection of Defects: Testing is typically performed after the development phase, leading to late detection of defects.
4. Long Delivery Times: The linear nature of Waterfall can result in longer development and delivery times.
Static Analysis:
• Static analysis involves the examination of software without executing it. This analysis
can be performed on the source code or other software artifacts to identify potential
issues, vulnerabilities, or adherence to coding standards.

• Challenges:
1.False Positives/Negatives: Static analysis tools may produce false positives (indicating
issues that aren't real problems) or false negatives (missing actual issues).
2.Limited Context Awareness: Static analysis tools might lack the ability to fully
understand the context of the software, leading to potential misinterpretations.
Agile
• Agile is an iterative and incremental approach to software development. It prioritizes flexibility, collaboration, and customer satisfaction. Agile methodologies such as Scrum and Kanban put these values into practice.
• Key Principles:
1.Customer Collaboration Over Contract Negotiation: Agile values customer feedback and
collaboration throughout the development process.
2.Responding to Change Over Following a Plan: Agile embraces change and adapts to
evolving requirements, even late in the development process.
3.Deliver Working Software Regularly: Frequent and consistent delivery of working
software is a core tenet of Agile.
4.Individuals and Interactions Over Processes and Tools: Agile emphasizes the importance
of effective communication and collaboration within development teams.

• Benefits:
• Faster response to changing requirements, enhanced customer satisfaction, and a more
adaptive and collaborative development environment.
Digital Delivery:
• Overview:
• Digital delivery refers to the process of delivering software, services, or products in a
digital format. It often involves the use of digital technologies, online platforms, and
automated processes to bring products and services to end-users.

• Key Components:
1.Digital Technologies: Leveraging technologies such as cloud computing, mobile
applications, and web-based platforms for delivery.
2.Automation: Streamlining and automating various aspects of the delivery process,
including deployment, testing, and monitoring.
3.User-Centric Design: Focusing on designing products and services with the end-user in
mind, often involving iterative feedback loops.

• Benefits:
• Faster time to market, scalability, improved user experiences, and the ability to adapt to
evolving digital trends.
Software Delivery on the Cloud
• Software delivery on the cloud refers to the process of developing, deploying, and
delivering software applications using cloud computing services and resources. Cloud
computing provides a scalable, on-demand, and pay-as-you-go infrastructure, offering
several advantages for software delivery. Here are key aspects and considerations related
to software delivery on the cloud:
1.Cloud Service Models:
1. Infrastructure as a Service (IaaS): Provides virtualized computing resources,
allowing developers to manage the operating system, runtime, middleware, and
applications. Examples include Amazon EC2, Azure Virtual Machines, and Google
Compute Engine.
2. Platform as a Service (PaaS): Offers a platform with development tools, databases,
and runtime environments, abstracting the underlying infrastructure. Examples
include Heroku, Google App Engine, and Azure App Service.
3. Software as a Service (SaaS): Delivers software applications over the internet,
eliminating the need for users to install, manage, and maintain the software.
Examples include Google Workspace, Microsoft 365, and Salesforce.
Software Delivery on the Cloud
2. Key Considerations for Cloud Software Delivery:
• Scalability: Cloud services provide the ability to scale resources up or down based on
demand, ensuring that applications can handle varying workloads effectively.
• Flexibility: Cloud environments offer a range of services and tools that cater to
diverse application needs, enabling developers to choose the best-fit solutions for
their requirements.
• DevOps and CI/CD: Cloud platforms support DevOps practices, enabling
continuous integration and continuous delivery (CI/CD) pipelines. Automation tools
help streamline the development, testing, and deployment processes.
• Resource Optimization: Cloud platforms allow for efficient resource utilization by
dynamically allocating and deallocating resources based on demand. This helps
optimize costs and improve overall performance.
• Security: Cloud providers offer a variety of security features, but it's crucial for
development teams to implement best practices for securing applications and data in
the cloud.
Software Delivery on the Cloud
3. Cloud Delivery Models:
1. Multi-Cloud: Involves using services from multiple cloud providers, offering
redundancy, resilience, and the ability to choose the best services from each provider.
2. Hybrid Cloud: Combines on-premises infrastructure with cloud services, providing
flexibility and allowing organizations to leverage existing investments while
benefiting from cloud capabilities.
4. Serverless Computing:
Serverless architectures, offered through platforms like AWS Lambda, Azure Functions,
and Google Cloud Functions, allow developers to focus on writing code without
managing the underlying infrastructure. This model is event-driven and scales
automatically.
5. Cost Management:
Cloud services are typically pay-as-you-go, and effective cost management involves
optimizing resource usage, leveraging reserved instances, and monitoring spending to
avoid unnecessary expenses.
Software Delivery on the Cloud
In summary, software delivery on the cloud offers numerous advantages, including
scalability, flexibility, and a variety of managed services.

However, teams need to consider factors like security, cost management, and the
appropriate cloud service model based on their specific requirements.
Continuous Integration
Continuous Integration (CI) and Continuous Delivery (CD) are integral practices in modern
software development, helping teams to automate and streamline the software delivery
process.
• Continuous Integration (CI):
• Definition: Continuous Integration is a development practice where developers
frequently integrate their code changes into a shared repository. Each integration triggers
an automated build and a series of tests to detect and address integration issues early in
the development lifecycle.
• Key Concepts:
1.Automated Builds: The CI process involves automatically building the software after
every code integration to ensure that the codebase is always in a functional state.
2.Automated Testing: Automated tests, including unit tests and integration tests, are
executed to verify that the new changes do not introduce regressions or integration issues.
3.Version Control: CI relies on a version control system (e.g., Git) to manage and track
changes to the codebase.
Continuous Integration
4. Frequent Integration: Developers integrate their code changes into the main branch
multiple times a day, promoting collaboration and reducing the risk of integration issues.

• Benefits:

• Early Issue Detection: Integration issues and conflicts are detected early in the
development process, reducing the time and effort required for bug fixing.
• Consistent Builds: Automated builds ensure that the software is built consistently,
avoiding issues related to different environments.
• Collaboration: Frequent integration encourages collaboration among team members,
leading to a more cohesive and rapidly evolving codebase.
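The CI practice described above is usually declared as a pipeline configuration in the repository. A hedged sketch in GitHub Actions syntax is shown below; the workflow name, Node.js setup, and npm steps are illustrative assumptions, not part of the course material:

```yaml
# Hypothetical CI workflow (GitHub Actions syntax) - a sketch, not a
# definitive setup. Every push triggers an automated build and test run.
name: ci
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # fetch the code from version control
      - uses: actions/setup-node@v4   # assumed Node.js project
        with:
          node-version: 20
      - run: npm ci                   # reproducible dependency install
      - run: npm run build            # automated build
      - run: npm test                 # automated unit/integration tests
```

If any step fails, the integration is flagged immediately, which is how CI achieves early issue detection.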
Continuous Delivery
• Definition: Continuous Delivery is an extension of CI that focuses on automating the
entire software delivery process, including testing, deployment, and release. The goal is
to make the software delivery process efficient, reliable, and ready for deployment at any
time.
• Key Concepts:

1. Automated Deployment: Continuous Delivery involves automating the deployment process, ensuring that the software can be deployed to various environments consistently.
2. Pipeline Automation: A delivery pipeline is created to automate the steps involved in building, testing, and deploying the application.
3. Environment Independence: CD enables the deployment of applications to multiple environments (e.g., development, staging, production) with minimal manual intervention.
4. Incremental Releases: The ability to release small, incremental changes to production, reducing the risk and impact of each release.
Continuous Delivery
• Benefits

• Reliable Releases: Automated deployment processes increase the reliability of releases, reducing the risk of human error.
• Faster Time to Market: With continuous delivery, teams can release software more
frequently, leading to a faster time to market for new features and improvements.
• Reduced Manual Intervention: Automation reduces the need for manual intervention in
the deployment process, making it more efficient and less error-prone.
• Improved Collaboration: CD encourages collaboration between development, testing,
and operations teams, fostering a culture of continuous improvement.
Continuous Integration vs. Continuous Delivery:
• Relationship: CI is a subset of CD. CI focuses on the frequent integration of code
changes and automated testing, while CD extends this to include automated deployment
and delivery to different environments.

• Purpose: CI aims to catch integration issues early in the development process, while CD
focuses on automating the entire software delivery lifecycle, making it ready for
production deployment at any time.
Infrastructure as Code (IaC)
• Infrastructure as Code (IaC) is a key concept in modern software development and IT
operations. It involves managing and provisioning computing infrastructure through
machine-readable script files, rather than through physical hardware configuration or
interactive configuration tools.
• Key Concepts of Infrastructure as Code:
1. Automation:
IaC emphasizes the automation of infrastructure provisioning and management. Instead of
manually configuring servers and other infrastructure components, automation scripts are used to
define the desired state.
2. Declarative Syntax:
IaC scripts are written using a declarative syntax, where the focus is on defining the desired end-state of the infrastructure rather than specifying the step-by-step procedures to achieve that state.
3. Idempotence:
IaC scripts should be idempotent, meaning that running the same script multiple times should
produce the same result as running it once. This ensures consistency and repeatability.
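Idempotence can be demonstrated even in plain shell: commands that declare a desired end state converge to the same result no matter how many times they run. A minimal sketch (the directory and file names here are made up for illustration):

```shell
#!/bin/sh
# Idempotent provisioning sketch: safe to run any number of times.
set -e

STATE_DIR="${TMPDIR:-/tmp}/iac-demo"    # hypothetical target directory

mkdir -p "$STATE_DIR"                   # no error if it already exists
# Overwrite the file so every run converges to the same desired state
printf 'server_count=2\n' > "$STATE_DIR/desired.conf"

cat "$STATE_DIR/desired.conf"
```

Contrast this with a plain `mkdir`, which fails on the second run: the non-idempotent form forces you to script step-by-step error handling instead of simply declaring the end state.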
Infrastructure as Code (IaC)
4. Scalability: Infrastructure can be easily scaled up or down by modifying the IaC scripts.
This is particularly beneficial in cloud environments where resources can be dynamically
provisioned based on demand.

5. Documentation: IaC scripts serve as documentation for the infrastructure. By examining the scripts, one can understand how the infrastructure is configured and provisioned.
Infrastructure as Code (IaC)
• Benefits of Infrastructure as Code:
1.Consistency:
IaC ensures that infrastructure is provisioned consistently across different environments,
reducing the risk of configuration drift between development, testing, and production.
2.Efficiency:
Automation through IaC accelerates the provisioning process, enabling faster
development and deployment cycles.
3.Collaboration:
Version control and script sharing facilitate collaboration among team members, enabling
them to work on infrastructure configurations collaboratively.
4.Reproducibility:
IaC allows for the recreation of entire infrastructure environments with a few
commands, making it easier to replicate environments for testing, debugging, or disaster
recovery.
Infrastructure as Code (IaC)
• Benefits of Infrastructure as Code:

5. Scalability:
With IaC, scaling infrastructure up or down based on demand becomes more
straightforward, particularly in cloud environments.
Popular IaC Tools:
1. Terraform:
A widely-used open-source IaC tool that supports multiple cloud providers and on-premises
infrastructure.
2. AWS CloudFormation:
Amazon Web Services' native IaC service for defining and provisioning AWS infrastructure.
3. Azure Resource Manager (ARM) Templates:
Microsoft Azure's native IaC tool for defining and deploying resources within Azure.
4. Ansible:
An open-source automation tool that supports configuration management and infrastructure
provisioning.
5. Chef:
A configuration management tool that can be used for defining and managing infrastructure.
6. Puppet:
A configuration management tool similar to Chef, used for automating the provisioning and
management of infrastructure.
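To give a flavor of the declarative style these tools share, a Terraform definition describes the desired resource rather than the steps to create it. The following is only a sketch; the region, AMI ID, and names are placeholders, not real values:

```hcl
# Sketch of a Terraform resource definition (HCL).
# All values are illustrative placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder image ID
  instance_type = "t2.micro"

  tags = {
    Name = "demo-web"                      # hypothetical name
  }
}
```

Running the tool repeatedly against this file changes nothing once the instance exists, which is the idempotence property described above.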
Introduction to Kubernetes
• Kubernetes is an open-source container management tool which automates container deployment, container scaling, and load balancing.

• It schedules, runs, and manages isolated containers on virtual, physical, or cloud machines.

• All top cloud providers support Kubernetes.

History

• Google developed internal cluster management systems, Borg and later Omega, to deploy and manage thousands of Google applications and services on its clusters.

• In 2014, Google introduced Kubernetes, an open-source platform written in Go (Golang), and later donated it to the CNCF.
Introduction to Kubernetes
Online Platform for K8s

• Kubernetes playground
• Play with K8s
• Play with Kubernetes classroom

Cloud-based K8s Services

• GKE – Google Kubernetes Engine
• AKS – Azure Kubernetes Service
• EKS – Amazon Elastic Kubernetes Service
Introduction to Kubernetes

Kubernetes Installation Tools

• Minikube
• kubeadm
Introduction to Kubernetes

Problems with scaling up containers (before orchestration)

• Containers could not communicate with each other

• Autoscaling and load balancing were not possible

• Containers had to be managed carefully


Introduction to Kubernetes
Features of Kubernetes

• Orchestration (clustering of any number of containers running on different networks)
• Autoscaling
• Auto-Healing
• Load Balancing
• Platform independent (Cloud/ Virtual/ Physical)
• Fault Tolerance (Node/POD Failure)
• Rollback (going back to previous version)
• Health monitoring of containers
• Batch execution (one time, sequential, Parallel)
Introduction to Kubernetes
Kubernetes Vs. Docker Swarm
Feature                                 | Kubernetes                                                        | Docker Swarm
Installation and cluster configuration  | Complicated and time-consuming                                    | Fast and easy
Container support                       | Works with almost all container types (Rocket, Docker, containerd) | Works for Docker only
GUI                                     | GUI available                                                     | GUI not available
Data volumes                            | Shared only with containers in the same Pod                       | Can be shared with any other container
Autoscaling                             | Supports vertical and horizontal scaling                          | Does not support autoscaling
Logging and monitoring                  | Inbuilt monitoring tools                                          | Uses 3rd-party tools like Splunk
Market share                            | 57                                                                | 11
Lifecycle of a Container
The lifecycle of a container in Kubernetes involves several stages, from creation to termination. Here's an overview of the typical lifecycle:
1. Container Image Creation
2. Container Definition
3. Container Scheduling
4. Container Initialization
5. Container Running
6. Monitoring and Logging
7. Scaling and Updates
8. Container Termination
9. Resource Cleanup
10.Container Deletion
Lifecycle of a Container
1. Container Image Creation- The lifecycle begins with the creation of a container image.
This image is a lightweight, standalone, and executable software package that includes
everything needed to run a piece of software, including the code, runtime, libraries, and
system tools.
2. Container Definition- Kubernetes uses YAML or JSON configuration files to define the
specifications of a container and its associated resources, such as CPU and memory
requirements, environment variables, and volume mounts. This configuration is often
encapsulated in a Kubernetes Pod object.
3. Container Scheduling- The Kubernetes scheduler takes the container specifications and
decides on which node within the cluster the container should run.
4. Container Initialization- Once scheduled, the container runtime (e.g., Docker) pulls the
container image from a container registry onto the node where it's scheduled.
5. Container Running- The container is now in a running state, actively serving its intended
purpose. It will continue to run until it completes its task, is manually stopped, or encounters
an issue.
Lifecycle of a Container
6. Monitoring and Logging- Kubernetes monitors the container's health by periodically
checking its status. Logs generated by the container are typically collected and managed by
Kubernetes for later inspection.
7. Scaling and Updates- Kubernetes allows for scaling applications by adjusting the number
of replicas.
8. Container Termination- Containers can be terminated for various reasons, such as
completing their tasks, reaching resource limits, encountering errors, or being manually
stopped. When a container terminates, its resources are released, and it moves to a
"terminated" state.
9. Resource Cleanup- After termination, Kubernetes can perform resource cleanup, ensuring
that any associated resources like volumes, network connections, and temporary storage are
properly released.
10. Container Deletion- The container instance is removed from the node, and its status is
updated in the Kubernetes control plane.
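Stage 2 above (container definition) is typically captured in a YAML manifest. A minimal Pod sketch follows; the names, image tag, environment variable, and resource values are illustrative assumptions:

```yaml
# Minimal Pod manifest sketch - illustrative values only.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical Pod name
spec:
  containers:
    - name: demo-app
      image: nginx:1.25       # image pulled from a registry at initialization
      env:
        - name: APP_MODE      # example environment variable
          value: "demo"
      resources:
        requests:
          cpu: "100m"         # CPU/memory requirements used by the scheduler
          memory: "128Mi"
```

The scheduler reads the `resources.requests` fields when deciding which node should run the container (stage 3), and the runtime pulls the named image during initialization (stage 4).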
Container Registries
• A container registry is a centralized repository for storing and managing container images.
• Container registries store container images, which are snapshots of a file system, application
code, runtime, libraries, and other dependencies needed to run a containerized application.

Registry Types:
• Public Registries: These are open to the public, and anyone can access and download images.
Examples include Docker Hub, Google Container Registry, and Quay.io.
• Private Registries: These are restricted to authorized users or organizations. Private registries
provide a secure and controlled environment for storing proprietary or sensitive images.
Examples include Amazon ECR, Azure Container Registry, and Harbor.
Docker Hub:
• Docker Hub is one of the most widely used public container registries. It hosts a vast number of
public images that developers can use directly or as a base for building their own images. It also
supports private repositories for organizations.
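Building on a public base image from Docker Hub might look like the following minimal Dockerfile. This is a sketch under stated assumptions: the base image choice and the file names (requirements.txt, app.py) are hypothetical, not a prescribed setup:

```dockerfile
# Sketch: builds on a public base image pulled from Docker Hub.
# File names (requirements.txt, app.py) are hypothetical.
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare the container's entry command
COPY . .
CMD ["python", "app.py"]
```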
Container Registries
Authentication and Authorization:
Registries typically implement authentication mechanisms to control access to images. Users or
systems need proper credentials to pull or push images to a registry. Access control policies
ensure that only authorized individuals or systems can perform specific actions.

Image Tagging and Versioning:
Container images are tagged with a version identifier (tag). Proper versioning practices help manage changes to images over time. It is common to use semantic versioning or other versioning schemes to indicate the image's status or changes.

Monitoring and Logging:
Many container registries provide monitoring and logging capabilities, allowing administrators to track image pulls, pushes, and other activities. This information is valuable for auditing and troubleshooting.
Docker
Docker is a platform designed to enable the creation, deployment, and execution of applications using
containers.

Key components and concepts of Docker include:


• Docker Engine:
The core of Docker is the Docker Engine, which is responsible for building, running, and managing
containers.
• Docker Images:
Docker images are the building blocks of containers. An image is a lightweight, standalone, and
executable package that includes the application code, runtime, libraries, and system tools needed to
run the application.
• Docker Containers:
Containers are instances of Docker images. They run in isolated environments, sharing the host
machine's kernel but having their own file system, process space, and network interfaces.
Docker
• Docker Compose:
Docker Compose is a tool for defining and managing multi-container Docker applications.
• Docker Hub:
Docker Hub is a cloud-based registry service provided by Docker that allows users to store and share
Docker images. It serves as a repository for both official and user-contributed Docker images, making
it easy to discover and use pre-built images.
• Docker Swarm:
Docker Swarm is Docker's native clustering and orchestration solution for managing a cluster of
Docker hosts.
• Docker Volumes:
• Docker Volumes are used to persist data generated by and used by Docker containers.
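A Compose file ties several of these pieces together declaratively. A hedged sketch of a two-service application follows; the service names, port mapping, image tag, and credential are illustrative assumptions:

```yaml
# docker-compose sketch: a web service plus a database. Illustrative only.
services:
  web:
    build: .                    # built from a local Dockerfile
    ports:
      - "8080:80"               # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres:16          # public image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example          # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data  # named volume persists data

volumes:
  db-data:
```

Running `docker compose up` would start both containers on a shared network, with the named volume surviving container restarts.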
Docker Commands (Commonly used)
• To install docker
yum install docker -y
• To check docker version
docker -v
or
docker --version
• To check whether docker service is running or not
service docker status
• To start service docker
service docker start

• To check OS type, memory, IP, etc.
docker info
Docker Commands (Commonly used)
• Any image in docker
docker images
• To check running container (process state)
docker ps
• To check all containers
docker ps -a

• To download (if not already present) and run an image (ubuntu)
docker run ubuntu
Docker Commands (Commonly used)
• To create a container with your own name
docker run -it --name abc ubuntu /bin/bash
• To start container
docker start abc
• To go inside container
docker attach abc
• To stop container
docker stop abc

• To delete container
docker rm abc
Docker Commands (Commonly used)
• To delete an image
docker image rm <image-name>
• Only download image (ubuntu) from docker hub to local machine
docker pull ubuntu
• Show docker disk usage
docker system df
• To log in to a docker registry
docker login

• To log out of a docker registry
docker logout
• Remove unused data
docker system prune
