What Is DevOps

DevOps is a culture that bridges the gap between development and operations teams to enhance the software development life cycle (SDLC). It involves continuous processes such as planning, development, integration, testing, deployment, and monitoring to ensure efficient application delivery. The document outlines the roles of DevOps engineers, the environments involved in the process, and the tools used to facilitate these operations.


Now that you are in the DevOps

family

What is DevOps?
DEFINITION

DevOps is the combination of two main words: Dev and Ops.



Let us talk about


DEFINITION

What is DEV?
DEFINITION

Now that you know DEV is application development, what is an application?

An application is a program, or set of programs, that allows end-users to perform particular functions.
DEFINITION

Big grammar aside

Applications are just tools we use daily to ease our lives and solve problems. For example, we use Facebook, Twitter, or WhatsApp to communicate; banking applications to monitor our bank accounts; DoorDash to order food when we are hungry; and Gmail, Yahoo, MSN, AOL, or Outlook to check and monitor our email.
DEFINITION

Now that you know what an application is and what DEV means...

The DevOps role has NOTHING to do with "dev-elop", in other words with writing applications.

If DevOps does not develop applications, who does?

Developers do.
DEFINITION

Any questions about DEV?

Let us talk about OPS.

What is OPS? What are Operations?

In simple words, operations in this context stands for transforming code coming from developers into a working application.

DevOps engineer operation tools

DevOps engineer operation process

Any questions about the DevOps engineer operation process?
In other words: developers write application code; operations, on the other side, give birth to the working application.
Now that you have an idea of what DevOps is...

Tired of the big grammar?

In simple words, DevOps is a tradition, or culture, that helps bridge the gap between development and operations teams to improve and better control the software development life cycle (SDLC).


DevOps engineers are in the middle of all software development life cycle (SDLC) teams and provide tools for those teams as well:

● Development team (DEV)
● Quality Assurance team (QA)
● Security and Operations team (DevSecOps)
● Security and Infosec team
● Network team (IT)

DEVOPS ENGINEERS' COLLEAGUES:

● Developers
● Quality Engineers
● Database Administrators
● CyberSecurity Engineers
● Security Engineers
● Network Engineers
● IT Helpdesk
DevOps process

● Continuous Planning
● Continuous Development
● Continuous Integration (CI)
● Continuous Code Inspection
● Continuous Testing
● Continuous Security
● Continuous Delivery (CD)
● Continuous Deployment
● Continuous Monitoring and Logging
● Continuous Feedback
● Continuous Improvement
DevOps process

Continuous Planning

The process of continuously defining goals, tasks, and requirements for the project, involving regular discussions to ensure everyone is aligned.

The team meets to discuss adding a discount feature to an e-commerce site. They outline what it should look like, how it should work, and which developers will handle each part.
DevOps process

Continuous Development

The practice of constantly writing, testing, and building code in small increments, so features are developed quickly and efficiently.

Developers start working on different parts of the discount feature, like coding the input field and calculating the discount amount. Each part is tested individually to catch errors early.
DevOps process

Continuous Integration (CI)

Frequently merging all code changes into a shared repository, followed by automated testing to ensure new code works well with existing code.

After finishing a part of the discount feature, a developer integrates it into the main codebase. Automated tests check to ensure that the feature doesn't interfere with the checkout process.
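The CI step described above can be sketched in a few lines of Python. This is a minimal simulation, not a real CI system: the change fields, check names, and merge logic are illustrative assumptions.

```python
# Sketch of a CI gate: a change is merged into the shared main branch
# only when all automated checks pass. Field and check names are made up
# for illustration.

def run_automated_tests(change):
    """Run the (simulated) test suite against a proposed change."""
    checks = {
        "discount_applies": change.get("discount_logic") == "valid",
        "checkout_unaffected": change.get("breaks_checkout") is False,
    }
    return all(checks.values()), checks

def integrate(change, main_branch):
    """Merge the change into main only if every check passes."""
    passed, results = run_automated_tests(change)
    if passed:
        main_branch.append(change["name"])
        return f"merged {change['name']}"
    failing = [name for name, ok in results.items() if not ok]
    return f"rejected {change['name']}: failing {failing}"

main = ["checkout", "cart"]
print(integrate({"name": "discount-field", "discount_logic": "valid",
                 "breaks_checkout": False}, main))
```

The key idea is the gate itself: a failing check blocks the merge, so broken code never reaches the shared branch.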
DevOps process

Continuous Code Inspection

Reviewing code regularly, either manually or through automated tools, to identify issues like bugs, security vulnerabilities, and code quality concerns.

A team member reviews the discount code logic and spots a potential security issue, like improperly validated discount codes. They fix it before it goes live.
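The fix from this example, strictly validating discount codes before using them, can be sketched as follows. The code format and the lookup table are assumptions for illustration only.

```python
import re

# Sketch of the inspection fix: reject malformed or unknown discount
# codes before applying them. The code format (e.g. "SAVE10") and the
# table of known codes are illustrative assumptions.
CODE_PATTERN = re.compile(r"^[A-Z0-9]{4,12}$")
KNOWN_CODES = {"SAVE10": 0.10, "WELCOME5": 0.05}

def apply_discount(total, code):
    """Return the discounted total; raise on invalid input."""
    normalized = code.strip().upper()
    if not CODE_PATTERN.fullmatch(normalized):
        raise ValueError("malformed discount code")
    rate = KNOWN_CODES.get(normalized)
    if rate is None:
        raise ValueError("unknown discount code")
    return round(total * (1 - rate), 2)
```

Validating against an allow-list like this is exactly the kind of issue a code inspection is meant to catch before release.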
DevOps process

Continuous Testing

Running automated tests (unit, integration, performance) throughout the pipeline so defects are caught as early as possible.

Continuous
Deployment
Automatically deploying code to the live production
environment after all tests pass, making the latest
updates instantly available to users.

After passing all tests, the discount feature is


automatically deployed to the live website, enabling
users to apply discount codes at checkout without any
downtime.
DevOps process

Continuous Delivery (CD)

Keeping every change in a deployable state: code that passes all automated tests is packaged and staged for release, with the final push to production typically gated by a manual approval.

After passing all tests, the discount feature is staged and ready for release; the team approves the deployment to the live website when the business is ready.
DevOps process

Continuous Security

Integrating security checks, such as dependency scanning, secrets detection, and policy enforcement, throughout the pipeline rather than leaving them to the end.
DevOps process

Continuous Feedback

Regularly collecting feedback from users and monitoring tools to understand how the application performs and where improvements are needed.

The team monitors user feedback on the discount feature. If users report confusion, the team makes a note to improve the user interface.
DevOps process

Continuous Improvement

Using feedback to make enhancements and optimizations to improve the software over time.

Based on feedback, the team makes changes to the discount code field, adding clearer instructions to improve usability.
DevOps process

Continuous Monitoring and Logging

Tracking the application's performance and recording events to detect issues and understand user interactions.

Monitoring tools keep track of how often discount codes are used and log any errors. If multiple users experience a problem, the team is alerted to investigate.
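The monitoring-and-logging example can be sketched with Python's standard logging module. The alert threshold and function names are illustrative assumptions, not a real monitoring stack.

```python
import logging
from collections import Counter

# Sketch of the example above: log each discount-code attempt, count
# failures per code, and "alert" once several users hit the same problem.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("discounts")

errors = Counter()
ALERT_THRESHOLD = 3  # illustrative: alert after 3 failures of one code

def record_usage(code, ok):
    """Log an attempt; return an alert string when failures pile up."""
    if ok:
        log.info("discount code %s applied", code)
        return None
    errors[code] += 1
    log.error("discount code %s failed (%d so far)", code, errors[code])
    if errors[code] >= ALERT_THRESHOLD:
        return f"ALERT: investigate code {code}"
    return None
```

In a real setup the counter and threshold would live in a monitoring system (e.g. Prometheus alert rules), but the flow is the same: record events, aggregate them, and alert on a pattern.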
DevOps Tooling

● Version Control Systems
● Continuous Integration and Continuous Deployment (CI/CD)
● Configuration Management
● Containerization and Orchestration
● Infrastructure as Code (IaC)
● Monitoring and Logging
● Artifact Repository Management
● Collaboration and Communication
● Source Code Analysis (Static Code Analysis)
● Security and Vulnerability Scanning
● Testing
● Backup and Disaster Recovery
Metrics for DevOps Success

● Deployment/Release Frequency: How often deployments/releases happen.
● Lead Time for Changes: Time from code change to production.
● Mean Time to Recovery (MTTR): Average time to recover from failures.
● Change Failure Rate: Percentage of changes causing failures.
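These four metrics can be computed from simple deployment records. The record fields and sample values below are illustrative assumptions, not a prescribed schema.

```python
from datetime import datetime

# Illustrative deployment records: when the change was committed, when it
# reached production, whether it caused a failure, and recovery time.
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17),
     "failed": False, "recovery_minutes": 0},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 4, 9),
     "failed": True, "recovery_minutes": 45},
    {"committed": datetime(2024, 1, 8, 9), "deployed": datetime(2024, 1, 8, 13),
     "failed": False, "recovery_minutes": 0},
]

def deployment_frequency(records, days):
    """Deployments per day over the observed window."""
    return len(records) / days

def lead_time_hours(records):
    """Average time from code change to production, in hours."""
    total = sum((r["deployed"] - r["committed"]).total_seconds() for r in records)
    return total / len(records) / 3600

def change_failure_rate(records):
    """Fraction of changes that caused a failure."""
    return sum(r["failed"] for r in records) / len(records)

def mttr_minutes(records):
    """Average minutes to recover, over the failed changes only."""
    failures = [r for r in records if r["failed"]]
    return sum(r["recovery_minutes"] for r in failures) / len(failures)
```

With the sample data: 3 deployments over 10 days, an average lead time of 12 hours, 1 failure out of 3 changes, and 45 minutes to recover from it.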
Environment classification

These are LOWER environments:
● Dev environment
● QA environment
● Staging/pre-production Environment (PRE-PROD)

This is an UPPER environment:
● Production Environment (PROD)
Environment | Input | Output

● Dev environment: input Code → output Application
● QA environment: input Application → output Tested application
● Staging/pre-production Environment (PREPROD): input Tested application → output Application ready for end user
● Production Environment (PROD): input Application ready for end user → output Fully operational application
Input → Output chain across environments:

Code → Application → Tested application → Application ready for end user → Fully operational application
Environment flow

Dev environment → QA environment → Staging/pre-production Environment (PREPROD) → Production Environment (PROD)
Environment flow

Dev environment

● Purpose: The development (Dev) environment is where developers initially write, build, and test their code. This is an isolated area for experimentation and feature development.
● Process: Code changes are typically tested here to verify initial functionality.
● Outcome: If the code passes preliminary testing, it moves to the next stage (QA environment). If it fails, it remains in the Dev environment for further refinement.
Environment flow

QA environment

● Purpose: The QA (Quality Assurance) environment is a dedicated testing space where QA engineers or automated tests rigorously validate the code for functionality, performance, and security.
● Process: In QA, tests simulate real-world scenarios to check for bugs or issues that may not have been caught in the Dev environment.
● Outcome: If the code passes, it moves to the Staging/Pre-Production environment for further testing. Failing tests here require code to be sent back to the Dev environment for fixes.
Environment flow

Staging/pre-production Environment (PREPROD)

● Purpose: Pre-production is an environment that closely mirrors the production environment but does not impact real users. It's intended to serve as a final verification point before production.
● Process: Code is tested with production-like data and configurations, ensuring it will perform as expected in the live environment.
● Outcome: If the code passes, it proceeds to production. If any issues are found, it will be sent back to Dev.
Environment flow

Production Environment (PROD)

● Purpose: The production environment is the live environment accessible by end-users. It's where the application is fully operational and expected to be stable.
● Process: Once code reaches production, it's released to users. Monitoring tools keep track of performance and user feedback to catch any issues that may have been missed.
● Outcome: A successful deployment to production means the new version is live and accessible to all users. If issues arise in production, they are often handled as hotfixes or emergency patches.
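The whole flow above can be sketched as a small promotion function. The stage names and the pass/fail inputs are illustrative; real pipelines gate each promotion with the tests described in the slides.

```python
# Sketch of the environment flow: a build is promoted
# Dev -> QA -> PREPROD -> PROD only while each stage's checks pass.
# Per the slides, a failure at any stage sends the build back to Dev.

STAGES = ["DEV", "QA", "PREPROD"]

def promote(build, checks):
    """Return the environment where the build ends up.

    `checks` maps stage name -> True/False (did the build pass there).
    All stages passing means the build reaches PROD; any failure sends
    it back to DEV for fixes.
    """
    for stage in STAGES:
        if not checks.get(stage, False):
            return "DEV"  # failed: back to Dev for further refinement
    return "PROD"

print(promote("discount-v1", {"DEV": True, "QA": True, "PREPROD": True}))
```

This mirrors the outcome bullets above: only a build that clears Dev, QA, and PREPROD checks is released to production.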
Preprod vs Prod: Purpose

Pre-production (PREPROD):
● Acts as a final testing ground before releasing new code to end-users.
● Designed to simulate the production environment as closely as possible, providing a space to conduct "real-world" tests without affecting live users.
● Used for staging, testing, and validating code with production-like data and configurations to ensure compatibility, performance, and stability.

Production (PROD):
● The live environment where the application is fully available to end-users.
● Hosts actual customer data and handles live traffic, so it needs to be stable, secure, and highly performant.
● All updates in production are accessible to end-users, so changes here are closely monitored.
Preprod vs Prod: Configuration and Data

Pre-production (PREPROD):
● Configured to mirror the production environment as closely as possible, including settings, database structure, and server configurations.
● May use masked or anonymized data that resembles real data to prevent privacy risks.
● Allows testing of configurations, new releases, and integrations to see if they will work in production.

Production (PROD):
● Configured for optimal stability, performance, and security, as it hosts live user data and handles real traffic.
● Contains actual production data, which must be kept secure and private.
● This environment is tightly controlled, with minimal changes allowed to reduce the risk of downtime or issues for users.
Preprod vs Prod: Testing and Validation

Pre-production (PREPROD):
● Final testing and quality assurance environment, where full regression, performance, load, and integration tests are performed.
● Sometimes includes limited "canary testing" or "blue-green deployments" to validate changes without affecting all users.
● Used to simulate production loads to identify any remaining bugs or issues in a controlled environment.

Production (PROD):
● While not a testing environment, monitoring and real-time validation occur here. Any issues that arise can trigger alerts.
● Changes in production may be deployed incrementally to mitigate risk, such as through phased rollouts or feature toggles.
● Incident management processes are in place to address production issues swiftly.
Preprod vs Prod: User Access

Pre-production (PREPROD):
● Generally limited to developers, QA engineers, and sometimes specific internal stakeholders.
● Not accessible to end-users, ensuring that any testing here does not disrupt the user experience.

Production (PROD):
● Accessible to all end-users.
● Any issues in production can directly impact the user experience, so changes here must be carefully managed.
Preprod vs Prod: Change Control and Frequency of Updates

Pre-production (PREPROD):
● More flexibility for testing new updates, configurations, or integrations before they reach production.
● Deployments to pre-production can happen frequently as part of final validation before release.

Production (PROD):
● Changes are more tightly controlled and typically follow a release schedule or approval process to minimize disruptions.
● Updates in production are carefully planned, often requiring sign-offs or staged rollouts to ensure stability.
Preprod vs Prod: Failure and Rollback Strategy

Pre-production (PREPROD):
● Failures here are expected as part of the testing process, with an easy rollback or reset strategy to address issues.
● Allows for rapid iterations to test and fix issues before releasing to production.

Production (PROD):
● Failures in production can have significant user and business impact, so there are strict protocols for incident management and rapid rollback.
● Monitoring and alerting are crucial to catch and address issues immediately.
Environment classification

These are LOWER environments:
● Dev environment
● QA environment
● Staging/pre-production Environment (PREPROD)

This is an UPPER environment:
● Production Environment (PROD)

In DevOps, the production environment (often referred to simply as "production" or "prod") is the live environment where the application, service, or system is fully operational and accessible by end-users. It is the final stage in the development pipeline, following stages like development, testing, and staging, and represents the "real-world" environment where the application is expected to perform reliably.
Environment classification

Only available to software factory employees:
● Dev environment: www.development.devopseasylearning.com
● QA environment: www.testing.devopseasylearning.com
● Staging/pre-production Environment (PREPROD): www.pre-production.devopseasylearning.com

Available to anyone:
● Production Environment (PROD): www.devopseasylearning.com
Environment software life cycle

cycle-1: Dev v3.0.0 | QA v2.0.0 | PREPROD v1.0.0 | PROD v1.0
cycle-2: Dev v4.0.0 | QA v3.0.0 | PREPROD v2.0.0 | PROD v1.0.0
cycle-3: Dev v5.0.0 | QA v4.0.0 | PREPROD v3.0.0 | PROD v2.0.0

Each cycle, every version moves one environment closer to production.
Environment software life cycle

cycle-4: Dev v6.0.0 | QA v5.0.0 | PREPROD v4.0.0 | PROD v3.0.0
cycle-5: Dev v7.0.0 | QA v6.0.0 | PREPROD v5.0.0 | PROD v4.0.0
cycle-6: Dev v8.0.0 | QA v7.0.0 | PREPROD v6.0.0 | PROD v5.0.0
… cycle-n
Environment owners

● Dev environment: Developers
● QA environment: Quality assurance
● Staging/pre-production Environment (PREPROD): DevOps
● Production Environment (PROD): DevOps
Environment provider

● Dev environment: DevOps provides Processes & Tooling
● QA environment: DevOps provides Processes & Tooling
● Staging/pre-production Environment (PREPROD): DevOps provides Processes & Tooling
● Production Environment (PROD): DevOps provides Processes & Tooling
Deployment vs Release

Release and Deployment are two related but distinct concepts in software development and operations, each with a different purpose and process.
Deployment vs Release

Deployment

● Definition: Deployment is the process of moving code from a development or testing environment to a live or production environment where it can be accessed by end-users.
● Purpose: To make the codebase or application physically available on production servers or infrastructure.
● Scope: Technical process involving infrastructure configuration, code migration, and version updates.
● Visibility: Often internal; may not be visible to end-users, as deployments can happen without immediate access to the new features (e.g., a dark deployment).
● Example: A new version of an application is deployed to production servers but may not be immediately available to users if the feature is hidden behind a feature flag.
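The feature-flag example (a dark deployment) can be sketched as follows. The flag and feature names are assumptions for illustration; real systems use a flag service rather than an in-process dictionary.

```python
# Sketch of a dark deployment: the new code is deployed to production,
# but the feature stays hidden behind a flag until the release flips it.
FEATURE_FLAGS = {"discount_codes": False}  # deployed, not yet released

def checkout_page():
    """Render the checkout page; the new field only appears once the
    flag is on (the release)."""
    items = ["cart summary", "payment form"]
    if FEATURE_FLAGS["discount_codes"]:
        items.append("discount code field")
    return items

assert "discount code field" not in checkout_page()  # deployed, invisible
FEATURE_FLAGS["discount_codes"] = True               # the release
assert "discount code field" in checkout_page()      # now visible to users
```

This is the core of the deployment/release distinction: the code change ships first, and flipping the flag later is the release.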
Deployment vs Release

Release

● Definition: Release is the act of making a deployed version of an application accessible to end-users, often accompanied by announcements, version updates, and documentation.
● Purpose: To introduce new features, improvements, or fixes to end-users.
● Scope: Business-oriented, involving communication with stakeholders, marketing, and possibly training for end-users.
● Visibility: Directly visible to end-users; a release typically marks the point when users can access the changes.
● Example: After deploying a new feature to production, the release is when that feature is made accessible to users, often accompanied by release notes or an announcement.
Deployment vs Release

● Deployment: Only visible to members of the software factory (internal).
● Release: Visible to anyone.
Deployment vs Release

● Deployment happens in lower environments (Dev, QA, Staging/pre-production).
● Release happens in the Production environment.
Deployment vs Release

● Dev environment: the application is Deployed to the DEV area to be checked by Developers.
● QA environment: the application is Deployed to the QA area to be tested by Quality Assurance (Testers).
● Staging/pre-production Environment (PREPROD): the application is Deployed to the PREPROD area to be tested by DevOps Engineers.
● Production Environment (PROD): the application is Released to the PRODUCTION area to be used by customers, end users, anyone.
Deployment vs Release

● Dev environment: www.development.devopseasylearning.com
● QA environment: www.testing.devopseasylearning.com
● Staging/pre-production environment: www.pre-production.devopseasylearning.com
● Production environment: www.devopseasylearning.com
Computer vs Server
Computer vs Server

Computer

A computer is a general-purpose device capable of performing a wide range of tasks based on instructions from software applications. It typically includes essential components such as a central processing unit (CPU), memory (RAM), storage (like a hard drive or SSD), and input/output devices (like a keyboard, monitor, and mouse). Computers are usually designed for personal or business use, allowing users to perform tasks like browsing the web, writing documents, gaming, or working with media.
Computer vs Server

Server

A server is a specialized computer designed to provide services, data, or resources to other computers (clients) over a network. Servers are often more powerful, with robust hardware and software configurations built to manage high workloads, handle multiple requests simultaneously, and run continuously with minimal downtime. Common services provided by servers include web hosting, file storage, database management, and application hosting, making them crucial in data centers, businesses, and online services.
Computer vs Server

Role and Functionality

Computer: Generally intended for personal or business use to perform tasks like web browsing, word processing, gaming, or media consumption.

Server: Specifically designed to provide services, store data, and handle requests from other devices (clients) over a network. Typical services include hosting websites, databases, or applications.
Computer vs Server

Hardware and Performance

Computer: Has hardware geared toward personal or office tasks. It may have moderate processing power and storage, usually designed for individual tasks.

Server: Built to handle multiple, concurrent requests and run 24/7. Servers have robust, high-performance hardware, redundant storage, more RAM, and failover components to prevent downtime.
Computer vs Server

Operating System

Computer: Runs operating systems designed for personal use, like Windows 10/11, macOS, or Linux distributions like Ubuntu Desktop.

Server: Runs specialized server operating systems, such as Windows Server, Ubuntu Server, or Red Hat Enterprise Linux (RHEL), optimized for stability, security, and multi-user operations.
Computer vs Server

Network Connectivity and Access

Computer: Primarily accessed by a single user, usually physically or over the internet for specific purposes like remote work.

Server: Accessed by multiple users or devices on a network. It serves resources, data, or services across local networks or the internet.
Server

Servers are located in Data Centers.
Important DevOps Concepts
What to know from day 1 at work as a new DevOps Engineer

Version Control (e.g., Git)

Why: Almost every DevOps pipeline relies on version control to track code changes. Understanding Git is essential.

Key Skills: Commit, push, pull, merge, branching.

What to know from day 1 at work as a new DevOps Engineer

Continuous Integration and Continuous Deployment (CI/CD)

Why: CI/CD automates code integration and deployment, speeding up development and reducing human error.

Key Tools: Jenkins, GitHub Actions, GitLab CI, Bitbucket Pipelines.

What to know from day 1 at work as a new DevOps Engineer

Infrastructure as Code (IaC)

Why: IaC allows you to manage and provision infrastructure through code, which improves consistency and enables automation.

Key Tools: Terraform, CloudFormation, Ansible.

What to know from day 1 at work as a new DevOps Engineer

Containers and Containerization

Why: Containers, like Docker, package applications and their dependencies, making them portable and consistent across environments.

Key Tools: Docker, Docker Compose.

What to know from day 1 at work as a new DevOps Engineer

Orchestration

Why: Orchestration tools help manage and automate the deployment, scaling, and management of containerized applications.

Key Tools: Kubernetes, Helm.
What to know from day 1 at work as a new DevOps Engineer

Monitoring and Logging

Why: Monitoring allows you to track system health and performance, while logging is essential for debugging and auditing.

Key Tools: Prometheus (monitoring), Grafana (visualization), ELK Stack (Elasticsearch, Logstash, Kibana for logging).

What to know from day 1 at work as a new DevOps Engineer

Automation

Why: Automating repetitive tasks frees up time for strategic work and reduces errors.

Key Tools: Ansible (configuration management), Jenkins (for CI/CD automation).

What to know from day 1 at work as a new DevOps Engineer

Networking Basics

Why: Understanding networking fundamentals is critical for troubleshooting connectivity, load balancing, and security.

Key Concepts: IP addresses, DNS, HTTP/HTTPS, firewalls, VPN.

What to know from day 1 at work as a new DevOps Engineer

Cloud Platforms

Why: Most DevOps work today is cloud-based. Understanding cloud fundamentals is crucial.

Key Platforms: AWS, Azure, Google Cloud Platform (GCP).

What to know from day 1 at work as a new DevOps Engineer

Linux and Shell Scripting

Why: Linux is the dominant OS for servers, and shell scripting is a powerful tool for automation.

Key Skills: Basic Linux commands, writing and debugging shell scripts, managing files and processes.
What to know from day 1 at work as a new DevOps Engineer

Security Fundamentals

Why: DevOps engineers must secure applications, infrastructure, and pipelines.

Key Concepts: SSH, secrets management, IAM (Identity and Access Management), least privilege.

What to know from day 1 at work as a new DevOps Engineer

Configuration Management

Why: Configuration management tools ensure systems are configured consistently across multiple environments.

Key Tools: Ansible, Puppet, Chef.

What to know from day 1 at work as a new DevOps Engineer

CI/CD Pipeline Components

Why: A DevOps engineer needs to understand the stages in a CI/CD pipeline, from build to deployment.

Key Concepts: Build, test, release, deployment, rollback.

What to know from day 1 at work as a new DevOps Engineer

Fundamentals of Load Balancing and Scaling

Why: Ensuring applications can handle varying levels of traffic is a key responsibility.

Key Concepts: Load balancing, horizontal scaling, vertical scaling, auto-scaling.

What to know from day 1 at work as a new DevOps Engineer

Basic Networking and HTTP/HTTPS Protocols

Why: Almost every application interacts with the internet or a network, making network basics crucial.

Key Concepts: IP addresses, DNS, ports, HTTP methods, load balancers.
What to know from day 1 at work as a new DevOps Engineer

Backup and Recovery

Why: Understanding backup strategies and recovery procedures is critical for data protection and disaster recovery.

Key Concepts: RPO (Recovery Point Objective), RTO (Recovery Time Objective).
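The two objectives can be made concrete with a small calculation; the timestamps below are illustrative.

```python
from datetime import datetime

# Illustrative incident timeline.
last_backup = datetime(2024, 5, 1, 2, 0)   # most recent good backup
failure = datetime(2024, 5, 1, 9, 30)      # when the outage began
restored = datetime(2024, 5, 1, 10, 15)    # when service was back

# RPO concerns data loss: everything written between the last backup
# and the failure is gone, so the exposure here is 7.5 hours of data.
rpo_hours = (failure - last_backup).total_seconds() / 3600

# RTO concerns downtime: the time from failure to recovery, 45 minutes.
rto_minutes = (restored - failure).total_seconds() / 60
```

A backup strategy is chosen so that these observed values stay inside the RPO/RTO targets the business has agreed to.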
What to know from day 1 at work as a new DevOps Engineer

Observability

Why: Observability (logging, monitoring, tracing) is key for troubleshooting and understanding system health.

Key Tools: Prometheus (metrics), Grafana (dashboards), ELK Stack (logging).

What to know from day 1 at work as a new DevOps Engineer

Basic SQL and Databases

Why: Knowledge of databases is essential, as most applications rely on them.

Key Concepts: SQL basics, database backups, performance tuning.

What to know from day 1 at work as a new DevOps Engineer

Collaboration and Communication

Why: DevOps is about bridging the gap between development and operations, so communication skills are essential.

Key Tools: Slack, JIRA, Confluence.

What to know from day 1 at work as a new DevOps Engineer

Blameless Postmortems and Incident Management

Why: When incidents happen, understanding the root cause and learning from mistakes is key to continuous improvement.

Key Concepts: Blameless culture, RCA (Root Cause Analysis), incident response.
Here are the key environments that typically require DevOps
practices to ensure smooth operations, consistency, and efficiency
throughout the software development lifecycle

● Development Environment
● Testing/QA Environment
● Staging Environment
● Production Environment
1. Development Environment

● Purpose: Where developers write, test, and debug code. It's often highly flexible to allow rapid changes.
● DevOps Focus: Automate setup, provide quick feedback with CI, and enforce version control to manage changes.
2. Testing/QA Environment

● Purpose: Where code is tested to ensure quality and functionality before moving to staging or production. It might involve multiple types of testing (unit, integration, functional).
● DevOps Focus: Set up CI/CD pipelines for automated testing, use infrastructure as code (IaC) to recreate environments, and maintain configuration consistency.
3. Staging Environment

● Purpose: A close replica of the production environment where final testing occurs. It serves as the last line of testing to detect any issues before deploying to production.
● DevOps Focus: Ensure staging mirrors production as closely as possible, including data and configurations, and provide controlled deployment practices like blue-green or canary deployments.
4. Production Environment

● Purpose: The live environment where users access the application. Stability, performance, and security are paramount here.
● DevOps Focus: Monitor health and performance, ensure robust logging, enforce security policies, automate scaling, and establish disaster recovery plans.
Environment flow

Development Environment → Testing/QA Environment → Staging Environment → Production Environment
What does a DevOps Engineer Deploy?

● Infrastructure
● Applications

As DevOps Engineers, we deploy, secure, and manage:
Application tools

● Version Control - GitHub, GitLab, Bitbucket
● Code Repositories - Artifactory, Nexus
● Database Applications - MySQL, PostgreSQL, MongoDB
● CI/CD Tools - Jenkins, GitHub Actions, GitLab CI/CD
● Monitoring Applications - Grafana, Prometheus, Datadog
● Logging and Observability - ELK Stack (Elasticsearch, Logstash, Kibana), Splunk
● Automation Tools - Ansible Tower, Rundeck
Infrastructure tools

Infrastructure as Code (IaC)
● Terraform
● AWS CloudFormation
● Azure Resource Manager (ARM)
● Google Cloud Deployment Manager

Containerization and Orchestration
● Docker
● Kubernetes
● OpenShift
● Docker Swarm

Configuration Management
● Ansible
● Chef
● Puppet
● SaltStack

Cloud Providers
● Amazon Web Services (AWS)
● Microsoft Azure
● Google Cloud Platform (GCP)
● IBM Cloud
● Oracle Cloud

Continuous Integration/Continuous Deployment (CI/CD)
● Jenkins
● GitLab CI/CD
● CircleCI
● Travis CI
● Bamboo
Infrastructure tools

Monitoring and Logging
● Prometheus
● Grafana
● Datadog
● New Relic
● Nagios
● ELK Stack (Elasticsearch, Logstash, Kibana)
● Splunk

Security and Compliance
● HashiCorp Vault (secrets management)
● AWS IAM (Identity and Access Management)
● Okta (Identity Management)
● Open Policy Agent (OPA)
● Cloud Security Posture Management (CSPM) tools like Prisma Cloud and Dome9

Backup and Disaster Recovery
● AWS Backup

Database Management
● Amazon RDS (Relational Database Service)
● MySQL, PostgreSQL, MongoDB (managed in cloud or on-premises)
● Redis, Cassandra, DynamoDB for NoSQL databases
Important concept to know: Basic Data Types and
Structures

String

String: A sequence of characters, often used for text.

Hello
class
devops
Important concept to know: Basic Data Types and
Structures

Integer

Integer: Whole numbers, often used for counts or IDs.

20
31
15
0
Important concept to know: Basic Data Types and
Structures

Boolean

Boolean: A true/false or yes/no value.

True
False
Yes
No
Important concept to know: Basic Data Types and
Structures

Array

Array: A collection of elements, usually of the same type, stored


in a specific order.

[1, 2, 3, 4, 5]
["apple", "banana", "cherry"]
Important concept to know: Basic Data Types and
Structures

List

List: Similar to an array but can hold mixed data types


(language-dependent).

[1, "apple", True, 3.14]


["soda", 12, True, "apple"]
Important concept to know: Basic Data Types and
Structures

Map

Map: A collection of key-value pairs, often used for


configurations.

{"name": "Alice", "age": 30}


{name: "Alice", age: 30}
Important concept to know: Basic Data Types and
Structures

Key

Key: Identifiers used in key-value pairs or secrets.

In {"environment": "production"}, "environment" is the key.


Important concept to know: Basic Data Types and
Structures

Value

Value: The data or information associated with a key.

In {"environment": "production"}, "production" is the


value.
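The types above map directly onto common scripting languages. A minimal Python sketch (the variable names are illustrative):

```python
# Basic data types and structures as they appear in Python.
name = "devops"                          # string
count = 20                               # integer
enabled = True                           # boolean
fruits = ["apple", "banana", "cherry"]   # array: same-type elements in order
mixed = [1, "apple", True, 3.14]         # list: mixed data types
config = {"environment": "production"}   # map: key-value pairs

# "environment" is the key; "production" is the value.
print(config["environment"])
```

Looking up `config["environment"]` is how configuration values are typically read out of a map.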
Important concept to know: Security and
Authentication

Secret

Secret: Sensitive information like API keys, passwords, or


tokens stored securely.

API_KEY="ABCD1234XYZ"
Important concept to know: Security and
Authentication

Password

Password: A string used for authentication.

"MyS3cureP@ssw0rd"
Important concept to know: Security and
Authentication

Token

Token: A string representing a session or access credentials,


often for APIs.

"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
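The example string above is the header segment of a JWT, which is just Base64-encoded JSON; decoding it in Python makes the structure visible:

```python
import base64
import json

# Header segment of a JWT (Base64-encoded JSON).
segment = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"

# Decode the Base64 text, then parse the resulting JSON.
header = json.loads(base64.b64decode(segment))
print(header)  # -> {'alg': 'HS256', 'typ': 'JWT'}
```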
Important concept to know: Security and
Authentication

Encryption

Encryption: The process of converting data into a secure


format to prevent unauthorized access.

Example (before encryption): "Hello"

Example (after transformation): "SGVsbG8=" (Base64-encoded here for illustration; note that Base64 is a reversible encoding, not real encryption — production systems use algorithms such as AES)
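The Base64 transformation shown above can be reproduced in a couple of lines of Python; as a reminder, it is reversible encoding, not secure encryption:

```python
import base64

plaintext = "Hello"

# Encode the text as Base64.
encoded = base64.b64encode(plaintext.encode()).decode()
print(encoded)  # -> SGVsbG8=

# Base64 is trivially reversible, which is why it is NOT encryption.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # -> Hello
```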
Important concept to know: Metadata and
Organization

Key Pair Value

Key Pair Value: A pair where one element (key) identifies the
other (value).

{"region": "us-west-1"}
Important concept to know: Metadata and
Organization

Tag

Tag: Metadata added to resources for organization or


identification.

{"name": "web-server", "environment": "production"}


Important concept to know: Metadata and
Organization

Variable

Variable: A named placeholder for a value that can change or


be reused.

username = "admin"
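In DevOps tooling, variables frequently come from the environment rather than being hardcoded. A small Python sketch (the APP_USER name is made up for illustration):

```python
import os

# A variable: a named placeholder for a value that can change or be reused.
username = "admin"

# In practice, tooling often reads the value from an environment variable so
# the same code works across environments; falls back to "admin" if unset.
username = os.environ.get("APP_USER", "admin")
print(username)
```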
Important concept to know: Compute, network,
storage

Compute (VM): the virtual machine (server) is the environment where the application lives.

Networking: we use this to connect the virtual machine (server) to the internet, so that anyone else connected to the internet can access it and the application.

Storage: all data collected from or provided to users is stored here.
Important concept to know: Compute, network,
storage

Compute (VM)

What is it?
Compute refers to the virtual machines (VMs) or servers where applications and services run. These can be physical servers, VMs in cloud environments (e.g., AWS EC2, Azure VMs), or containers like Docker.

What is it for?
Compute provides the processing power needed to run applications, handle user requests, and perform backend operations.
Why is it important for DevOps?
DevOps engineers need to:

● Automate the provisioning and management of VMs to ensure consistency.


● Optimize compute resources to control costs and improve performance.
● Implement scaling strategies (e.g., autoscaling) to handle fluctuating demands efficiently.
Important concept to know: Compute, network,
storage

Networking

What is it?
Networking encompasses the infrastructure and configurations that enable communication between applications, services, and users. This includes IP addresses, DNS, load balancers, firewalls, and subnets.

What is it for?
Networking ensures that:
● Services can communicate with each other seamlessly.
● Users can access applications over the internet or intranet.
● Traffic is routed efficiently and securely.

Why is it important for DevOps?


DevOps engineers need to:

● Configure and manage secure network environments (e.g., VPCs, security groups).
● Implement high availability and fault tolerance using load balancers.
● Troubleshoot and monitor network performance to avoid bottlenecks or downtime.
Important concept to know: Compute, network,
storage

Storage

What is it?
Storage refers to the systems and devices used to save data, including block storage (e.g., AWS EBS), object storage (e.g., S3), and file systems (e.g., NFS).

What is it for?
Storage is used to:

● Store application data, backups, and logs.
● Ensure data is persistently available, even if compute instances are stopped or restarted.
● Facilitate disaster recovery by creating snapshots and backups.

Why is it important for DevOps?


DevOps engineers need to:

● Automate storage provisioning to ensure consistency across environments.
● Monitor storage usage to prevent over- or under-provisioning.
Application birth

Code — DevOps get the code from the DEV team.

Docker image — DevOps use automation tools to transform the DEV code into a Docker image.

Application — DevOps take the Docker image and turn it into an application.
Movie birth

Shooting a movie → Code
Movie DVD → Docker image
Movie on TV → Application
Application birth

Code
Devops
Get the code from DEV
team

An application can be written in a single language or


multiple languages
Application birth

Code
Devops
Get the code from DEV
team

Depending on the application at hand, a specific language or
languages will be selected.
Application birth

Code
Devops
Get the code from DEV
team

No matter the language or languages selected by the
developers, they will end up producing code
Application birth

Code
Devops
Get the code from DEV
team

DevOps only expect CODE from the Dev team


Application birth

Code
Devops
Get the code from DEV
team

DevOps DON’T care about the language or languages used
by the Dev team
Application birth

Code
Devops
Get the code from DEV
team

DevOps MUST always know in which language or
languages the application has been written
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

An image is just a sealed box that contains all the code written by the developer.

An image packages the application code, dependencies, libraries, and environment configurations required to run an application, ensuring consistency across different environments (e.g., development, testing, and production).
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

An image is made of multiple layers.

Each layer in an image represents a change or addition to the image, such as installing dependencies, adding configuration files, or copying application code. These layers are cached, allowing for faster builds by reusing unchanged layers.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

An image is read-only.

Once created, an image is immutable. When an image is used to create a container, a writable layer is added on top, but the underlying image layers remain unchanged, ensuring that the application runs consistently every time.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

An application can have multiple images.

Complex applications often use multiple images for different services, such as separate images for the front end, back end, and database, each with its own dependencies and configuration.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images are lightweight and portable.

Images are designed to be small and self-contained, making them portable across different systems. This portability is one of the key advantages, allowing applications to run consistently on any platform that supports containerization.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images can be stored and shared in repositories.

Images are typically stored in container registries (e.g., Docker Hub, AWS ECR, Google Container Registry), making it easy to share and pull images as needed for deployment.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images are versioned.

Each image can have multiple versions, tagged to indicate different releases or configurations. This versioning allows teams to roll back to previous versions if needed and provides clear tracking for updates.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images enable isolation of dependencies.

An image encapsulates all required libraries and dependencies, avoiding conflicts with the host system’s environment and other applications. This isolation ensures that applications behave as expected regardless of the underlying infrastructure.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images can be layered and extended.

Images can be based on other images, building additional functionality on top of a base image. For instance, an application image might extend a base OS image by adding specific dependencies and configurations.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images can be optimized for performance and size.

By reducing unnecessary files, combining commands, and reusing common base images, developers can create optimized images that are faster to build, transfer, and deploy, reducing overhead in production environments.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images allow for rapid scaling.

Since images are standardized and immutable, they allow containers to be spun up or down quickly, enabling rapid scaling in response to demand without the need for lengthy setup processes.
Application birth

Docker
image
Devops
Get the code from DEV
team

Devops use automation


tools to transform DEV
code into a docker
image

Images are the blueprint for containers.

An image serves as the blueprint for creating containers. Each time a container is created, it instantiates from the image, inheriting its configurations and dependencies, and is run in an isolated environment.
Application birth

Application

Devops take the docker


image and turn it into an
application

An application is essentially a
program or set of programs
designed to perform specific
tasks, delivering particular
functionality to end users or
other systems.
An Application is a Set of Code and Configurations to
Achieve Specific Functionalities
Application birth

Application

Devops take the docker


image and turn it into an
application

Complex applications are


typically made up of multiple,
interdependent components or
services. For example, a web
application might include a
user interface, backend API,
database, and caching layer,
each with distinct roles.

An Application is Composed of Multiple Components
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications are often divided


into tiers, such as the front-end
(user interface), business logic
(backend services), and data
storage (database). With
microservices architecture,
applications are broken into
loosely coupled services, each handling specific functionalities.

An Application Can Be Multi-Tiered or Microservices-Based
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications often have


multiple versions, including
development, staging, and
production versions, as well as
updates over time. Versioning
allows for easy tracking of
changes, bug fixes, and
feature enhancements.

An Application Can Have Multiple Versions
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications rely on various


libraries, frameworks, and
external services (e.g.,
databases, authentication
services) that provide essential
functionalities without needing
to build everything from scratch.

An Application Has Dependencies on Libraries and External Services
Application birth

Application

Devops take the docker


image and turn it into an
application

For containerized deployments,


an application can be built from
multiple images, each
corresponding to different
components. For instance, a
database, backend, and
frontend may each have a
dedicated image, allowing them to be deployed independently.

An Application May Use Multiple Images
Application birth

Application

Devops take the docker


image and turn it into an
application

Some applications are


stateless, meaning each
request is independent, while
others are stateful, requiring
persistent data storage (e.g., a
shopping cart application that
needs to remember user
selections).

An Application is Stateless or Stateful
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications are designed with


configurable settings (e.g.,
environment variables,
configuration files) to adapt to
different environments or
requirements. This
configurability makes it easy to
deploy the application in various scenarios without modifying the code.

An Application is Configurable
Application birth

Application

Devops take the docker


image and turn it into an
application

Once deployed, an
application’s health and
performance need to be
continuously monitored to
ensure reliability, track usage
metrics, detect anomalies, and
troubleshoot issues in real time.

An Application Needs to be Monitored and Managed in Production
Application birth

Application

Devops take the docker


image and turn it into an
application

Modern applications are


designed to scale up or down
based on demand, typically by
creating more instances or
containers, enabling reliable
performance under variable
load conditions.
An Application Can Be Scalable
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications often contain


sensitive data or processes, so
they must be secured against
unauthorized access, data
leaks, and other vulnerabilities.
This includes implementing
access controls, encryption,
and regular security audits.

An Application Requires Security Controls
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications may require


persistent data storage (e.g.,
databases, file storage) to save
data even when instances are
restarted or scaled down,
ensuring data consistency and
availability.
An Application Can Use Persistent Storage
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications undergo testing at


various stages (e.g., unit tests,
integration tests, end-to-end
tests) to verify functionality,
compatibility, performance, and
security before being deployed
to production.
An Application is Tested Across Multiple Stages
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications go through a
lifecycle, starting from
development, moving through
testing and staging, and finally
being deployed to production.
This lifecycle may repeat for
every new version or update.

An Application Has a Lifecycle from Development to Deployment
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications can be user-facing


(e.g., mobile or web apps) that
interact with users directly or
backend systems that support
other applications, processing
data or handling business logic
behind the scenes.
An Application is a User-Facing or Backend System
Application birth

Application

Devops take the docker


image and turn it into an
application

Proper documentation of an
application’s functionality, setup
instructions, API references,
and troubleshooting guides is
crucial to support developers,
operators, and end-users,
especially for complex
applications.

An Application Requires Documentation
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications can be packaged


in containers for lightweight
and consistent deployment or
virtualized if they require a full
operating system.
Containerized applications are
preferred for microservices and
cloud-native applications.

An Application Can Be Containerized or Virtualized
Application birth

Application

Devops take the docker


image and turn it into an
application

Applications are typically


deployed in multiple
environments (development,
testing, staging, production) to
ensure quality and stability
before going live, with each
environment configured to
mirror production as closely as possible.

An Application is Deployed in Various Environments
Application birth

Code — DevOps get the code from the DEV team.

Docker image — DevOps use automation tools to transform the DEV code into a Docker image.

Application — DevOps take the Docker image and turn it into an application.
Application birth

Code → Dockerfile → Docker image → DockerHub → Docker container → Application
Application birth

● DevOps get the code from the Dev team.
● DevOps write a Dockerfile to encapsulate the Dev code.
● DevOps use the Dockerfile to build a Docker image.
● DevOps store the Docker image in Docker Hub.
● DevOps use the Docker image to create a Docker container.
● The Docker container uses the application code to run the application.
Movie birth

Shooting a movie → Code
Movie DVD → Docker image
Movie on TV → Application
Movie birth

Shooting a movie → Code
Writing/copying the movie onto a disk → Dockerfile
Movie DVD → Docker image
Storing the movie so it can be used later at any time → DockerHub
DVD player → Docker container
Movie on TV → Application
Application birth

Code: The development team writes the application code, which is


then handed over to the DevOps team.
Code
Application birth

Dockerfile

Dockerfile: The DevOps team creates a Dockerfile, which contains instructions on how to package the code and dependencies into a Docker image. The Dockerfile encapsulates all necessary steps to build the environment.
Application birth

Docker image

Docker Image: Using the Dockerfile, a Docker image is built. This image is a snapshot of the application environment, containing all the code, libraries, and dependencies required to run the application.
Application birth

DockerHub

DockerHub: The created Docker image is then stored in DockerHub (or another container registry). This makes the image accessible for future use and deployment in different environments.
Application birth

Docker container

Docker Container: The Docker image is used to create a Docker container, which is a runnable instance of the image. The container runs the application by isolating it in a consistent environment.
Application birth

Application

Application: The container executes the application, providing a stable and isolated runtime that ensures consistent performance across different systems.
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The Dockerfile is a script containing a set of instructions on how to assemble an image. It defines every step needed to create a self-contained environment for an application, ensuring consistency across all stages.

A Dockerfile is a Blueprint for Building Docker Images
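As a concrete illustration, here is a minimal Dockerfile sketch for a hypothetical Python app (the file names app.py and requirements.txt, the port, and the APP_ENV variable are assumptions for the example):

```dockerfile
# Base image: the foundational layer (FROM).
FROM python:3.9

# Default directory for subsequent instructions (WORKDIR).
WORKDIR /app

# Copy application code and dependency list into the image (COPY).
COPY requirements.txt app.py ./

# Install dependencies inside the image (RUN).
RUN pip install --no-cache-dir -r requirements.txt

# Configuration via environment variable (ENV).
ENV APP_ENV=production

# Document the port the app listens on (EXPOSE).
EXPOSE 8000

# Default command when a container starts (CMD).
CMD ["python", "app.py"]
```

Each instruction above corresponds to one of the Dockerfile capabilities described on the following slides.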
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The first line in most Dockerfiles is the FROM instruction, which defines the base image. This is the foundational layer for the image, providing a starting environment (e.g., FROM python:3.9 or FROM ubuntu:latest) and often including essential libraries or dependencies.

A Dockerfile Specifies the Base Image
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The Dockerfile can include instructions to install additional packages, libraries, or dependencies needed for the application, typically using commands like RUN apt-get install or RUN npm install. This ensures all dependencies are included within the image.

A Dockerfile Can Install Dependencies
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The COPY or ADD instruction in a Dockerfile copies the application’s code and resources from the host machine into the image. This brings the application files into the isolated environment where the image will be run.

A Dockerfile Copies Application Code
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

Using the ENV instruction, a Dockerfile can define environment variables that the application will use. This allows configuration details to be set at build time, keeping sensitive information out of the code and enabling different configurations for various environments.

A Dockerfile Can Set Environment Variables
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The CMD or ENTRYPOINT instruction defines the default command to execute when a container starts from the image. For example, it could specify CMD ["python", "app.py"] to run a Python application or start a web server.

A Dockerfile Specifies Commands to Run the Application
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

Each instruction in the Dockerfile creates a new layer in the final image. Docker caches these layers, allowing for faster rebuilds by reusing unchanged layers. This layered approach makes Docker images efficient and enables incremental updates.

A Dockerfile Contains Multiple Layers
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

Multi-stage builds allow complex applications to be built in one stage (e.g., compiling or bundling code) and packaged in a minimal image in the final stage. This reduces the image size by including only essential files and dependencies in the final image.

A Dockerfile Supports Multi-Stage Builds
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The WORKDIR instruction sets the default directory where commands will be executed within the image. This is helpful for organizing code and setting the context for the application.

A Dockerfile Can Set the Working Directory


Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The EXPOSE instruction in a Dockerfile indicates which network ports the container will listen on at runtime. Although this doesn’t publish the port to the host automatically, it serves as documentation and prepares the container for port binding.

A Dockerfile Allows for Port Exposure
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

Build arguments (ARG) allow dynamic values to be passed into the Dockerfile at build time. These values can help customize the image based on different needs without hardcoding values directly into the Dockerfile.

A Dockerfile Includes Build Arguments
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

By defining all setup steps in a Dockerfile, DevOps teams can ensure that the application will run consistently regardless of the underlying host environment, as the entire environment is encapsulated in the image.

A Dockerfile Ensures Consistency Across Environments
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

Like other code files, Dockerfiles are version-controlled, allowing teams to track changes, roll back to previous versions, and document modifications for future reference.
A Dockerfile is Version-Controlled
Application birth

Dockerfile

Dockerfile: The DevOps team creates a


Dockerfile, which contains instructions on
how to package the code and dependencies
into a Docker image. The Dockerfile
encapsulates all necessary steps to build the
environment.

The Dockerfile enables teams to build the same Docker image on any system with Docker installed, making the process of setting up an application highly portable and repeatable across different machines or cloud environments.

A Dockerfile is Portable Across Systems
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub is a cloud-based registry where Docker images can be stored, managed, and shared. It serves as a central hub for accessing pre-built images and sharing custom images within teams or with the public.

DockerHub is a Repository for Docker Images
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub allows users to create public repositories that anyone can access and download, as well as private repositories that restrict access to specific users or teams. This flexibility allows for both open-source collaboration and secure image storage.

DockerHub Provides Public and Private Repositories
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub includes a collection of verified "official images," such as nginx, mysql, python, etc., maintained by Docker and trusted providers. These images are reliable, widely used, and regularly updated, providing a solid foundation for many applications.

DockerHub Hosts Official Images
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

Images in DockerHub can have multiple versions, managed through tags (e.g., nginx:1.19, python:3.8). Tags allow users to specify exact versions, ensuring consistency when pulling images.

DockerHub Supports Versioning with Image Tags
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub enables users to "push" images (upload) from their local Docker environment to the registry, as well as "pull" images (download) for local use. This enables efficient sharing and reuse of images across different environments and systems.

DockerHub Allows Image Pulling and Pushing
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub can be integrated into CI/CD pipelines, allowing automated pushing of new images to DockerHub after successful builds. This ensures the latest images are always available in the registry for deployment.

DockerHub Integrates with CI/CD Pipelines
Application birth

DockerHub

DockerHub: The created Docker image is


then stored in DockerHub (or another
container registry). This makes the image
accessible for future use and deployment on
different environments.

DockerHub includes a security scanning feature that analyzes images for known vulnerabilities. This helps ensure that images stored in DockerHub are secure and that users are aware of potential security issues.

DockerHub Has Built-in Image Scanning for Vulnerabilities
Application birth

 Docker container

 Docker Container: The Docker image is used to create a Docker container, which is a runnable instance of the image. The container runs the application by isolating it in a consistent environment.

A Container is an Isolated Runtime Environment

A container encapsulates an application and all of its dependencies, running in isolation from other processes on the host system. This isolation ensures that each container behaves consistently regardless of where it's deployed.
Application birth

 Docker container

A Container is Based on an Image

Containers are created from images, which act as templates defining what's inside the container (e.g., code, libraries, and system tools). Once an image is instantiated, it becomes a container that can be executed.
Application birth

 Docker container

A Container is Lightweight and Efficient

Containers share the host operating system kernel and require fewer resources than traditional virtual machines (VMs). This allows for higher density and more efficient resource utilization on the same infrastructure.
Application birth

 Docker container

A Container Has a Writable Layer

While the underlying image is read-only, a writable layer is added to the container when it's created. This layer allows the application to make changes while running, such as writing logs or temporary data, without modifying the original image.
Application birth

 Docker container

A Container is Portable

Containers can run on any system that supports the container runtime (e.g., Docker, Kubernetes), making them highly portable across different environments, from local development to cloud-based production.
Application birth

 Docker container

A Container Starts and Stops Quickly

Containers are lightweight and require minimal setup, allowing them to start and stop almost instantly. This makes them ideal for dynamic scaling, rapid deployment, and ephemeral workloads.
Application birth

 Docker container

A Container is Ephemeral by Nature

Containers are designed to be temporary. When a container is stopped or deleted, any data in the writable layer is lost unless explicitly saved in external storage. This stateless nature encourages best practices for storing persistent data outside of containers.
Application birth

 Docker container

A Container Can Expose Specific Ports

Containers can expose network ports, allowing external systems or other containers to communicate with them. Port mapping allows for controlled access to the containerized application.
Application birth

 Docker container

A Container Runs in a Controlled Environment

Each container has its own isolated filesystem, network, and process space, which helps avoid conflicts between applications. This controlled environment allows containers to run consistently across development, testing, and production.
Application birth

 Docker container

A Container Can Be Managed by Orchestration Tools

Containers are often managed by container orchestration tools like Kubernetes, which handle tasks such as scaling, load balancing, and health checks, ensuring that containers run reliably and can scale as needed.
Application birth

 Docker container

A Container Can Be Stateless or Stateful

Containers are typically stateless, meaning they don't maintain data after shutdown. However, stateful containers can be achieved by attaching persistent storage, allowing them to retain data across restarts.
Application birth

 Docker container

A Container is Defined by a Runtime Specification

Containers follow runtime specifications like the Open Container Initiative (OCI), ensuring that containers behave predictably and can be run across compatible container runtimes (e.g., Docker, Podman).
Application birth

 Docker container

A Container Uses Namespaces and Cgroups for Isolation

Containers use Linux namespaces to isolate processes, networking, and the filesystem, and cgroups (control groups) to control resource allocation (CPU, memory, disk I/O). This ensures that containers don't interfere with each other or the host system.
Application birth

 Docker container

A Container Can Be Connected to a Network

Containers can communicate with each other and external systems through networks. Docker and Kubernetes allow users to define networks, enabling containers to join networks and use DNS for inter-container communication.
Application birth

 Docker container

A Container Has Lifecycle Management

Containers go through a lifecycle: they are created, started, stopped, and removed. Orchestration tools manage the lifecycle of containers, handling restarts, health checks, scaling, and load distribution.
Application birth

 Docker container

A Container Promotes a Microservices Architecture

Containers make it easier to adopt microservices, where applications are broken down into smaller, independently deployable services. Each service can be packaged in its own container, allowing independent scaling and updating.
Application birth

 Docker container

A Container is Often Immutable

Once a container is running, it typically doesn't change. Any updates or changes to the application require building a new image and creating a new container. This immutability makes containers reliable and predictable.
Application birth

 Docker container

A Container is Suitable for Dev, Test, and Prod Environments

Containers allow developers to run applications in the same environment across development, testing, and production, reducing "it works on my machine" issues and improving consistency.
Application birth

 Docker container

A Container Can Be Monitored

Containers can be monitored for metrics like CPU, memory usage, and health status. Monitoring tools like Prometheus and Grafana provide visibility into container performance, helping with performance optimization and troubleshooting.
Questions

Where do developers store code?


Important to know
before we move on

 The Application lives in a Docker container.

 The Docker container lives in a Virtual Machine (VM), aka a server.

 The Virtual Machine (VM) lives in a Data Center.
Important to know
before we move on

Datacenter

Virtual Machine (VM), aka Server

Docker Container

Application
Important to know
before we move on

 A Dockerfile helps create Docker images.

 Docker images help create Docker containers.

 A Docker container is an environment that starts and runs the application and makes it available to the end user in the browser.
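The Dockerfile-to-image-to-container chain can be sketched with a minimal, hypothetical Dockerfile; the base image, file names, and port are illustrative assumptions, not prescribed by this deck.

```dockerfile
# Illustrative Dockerfile (names and versions are hypothetical)
FROM python:3.8-slim                    # base image pulled from DockerHub
WORKDIR /app
COPY . /app                             # copy the application code into the image
RUN pip install -r requirements.txt     # install the app's dependencies
EXPOSE 8000                             # document the port the app listens on
CMD ["python", "app.py"]                # command run when the container starts
```

A typical workflow on a machine with Docker installed would then be `docker build -t myapp:1.0 .` (Dockerfile creates the image), `docker run -p 8000:8000 myapp:1.0` (image creates a running container), and `docker push <user>/myapp:1.0` (image stored in DockerHub); `myapp` is a placeholder name.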
Library, Binary,
Dependency, artifact

Library Dependency

Artifact
Binary
Library, Binary,
Dependency, artifact

Library

 A library is a collection of pre-written code that developers can use to simplify their software development process. Libraries typically provide reusable functions, classes, and modules that help solve common problems or provide specific features.

 Purpose: To share and reuse common functionality.

 Usage: Developers link libraries into their applications to use the provided functionality.
Library, Binary,
Dependency, artifact

Binary

 A binary is an executable file or compiled program that a computer can run directly. It contains machine code that has been generated by a compiler from source code.

 Purpose: To execute a program on a specific platform.

 Usage: The final product users interact with after compilation.


Library, Binary,
Dependency, artifact

Dependency

 A dependency is any external component that a software application requires to function properly. This could include libraries, frameworks, modules, or even other services.

 Purpose: To provide additional functionality or support without reinventing the wheel.

 Types:

 ● Direct dependencies: Used directly by the application.

 ● Transitive dependencies: Indirectly required through another dependency.
Library, Binary,
Dependency, artifact

Artifact

 An artifact refers to any file or package that is produced as a result of the software development process. Artifacts are typically the output of a build pipeline and include things like binaries, libraries, or other packaged resources.

 Purpose: To distribute or deploy the outputs of a build process.

 Usage: Artifacts are often stored in artifact repositories (e.g., Nexus, Artifactory) and are consumed by deployment pipelines or other teams.
Library, Binary,
Dependency, artifact

Library Dependency

Binary Artifact
Questions: Where do developers store code?

Imagine you’re writing a book, but you’re not working on it alone. You
have a team of writers, editors, and designers all making changes to
the manuscript. Now, imagine the chaos if everyone just edited a single
copy of the book on their own computer without any way to track
changes, work on separate sections, or see what others were doing.

In software development, writing code is a lot like writing that book. Developers
(the writers) need a system to store and manage all the code they’re writing so
they can collaborate, make changes safely, and keep track of the history of
every change.
Questions: Where do developers store code?

1. Where Code is Stored – Think of a Central Library

In the example of the book, imagine you have a central library where
all versions of the manuscript are stored. This library allows writers to
check out a copy, make changes, and check it back in.

In coding, this central library is a Version Control System (VCS) like GitHub, GitLab, or Bitbucket.
It’s an online place where all the code (the manuscript) is stored and where developers can safely
collaborate and track changes.
Questions: Where do developers store code?

2. How Changes are Managed – Keeping Track of Edits

In the book example, every time a writer checks out a copy, makes
edits, and returns it, the library keeps a record of who made which
changes. This way, if any mistakes are made, you can go back to an
earlier version.

Similarly, in a VCS, every change to the code is recorded as a “commit” (like a saved snapshot). This
means you can look back through the history of changes, see who made what change, and even
revert to an earlier version if something goes wrong.
Questions: Where do developers store code?

3. Working on Different Parts – Using Branches

Let’s say you have different writers working on different chapters of the
book at the same time. Instead of editing the main manuscript directly,
they work on their own copies of their specific chapters and merge
them into the main book when they’re done.

In coding, developers use branches to work on different parts of the code independently. They can
try out new features or make changes without affecting the main version. When they’re done and
everything looks good, they can “merge” their branch back into the main code.
Questions: Where do developers store code?

4. Reviewing and Approving Changes – Like Editors Reviewing Manuscript


Edits

Before any major edits are added to the final manuscript, an editor
reviews them to make sure they fit with the overall story. Similarly,
before code is added to the main version, other developers review it to
check for errors or improvements.

In VCS systems like GitHub, developers create something called a pull request to ask for their
changes to be reviewed. Once the changes are approved, they’re added to the main code.
Questions: Where do developers store code?

5. Accessing the Code Anywhere – Cloud Libraries

In our book example, the central library could be online, so any writer
can access the manuscript from anywhere in the world.

In coding, VCS platforms like GitHub are online, so developers can access the code from anywhere.
They “clone” (make a copy of) the code to work on their own computer, then “push” their changes
back to the online repository when they’re done.
Example: Building a Simple To-Do List App Together

Imagine you and a friend are building a to-do list app together. You both want to be able to
add, delete, and organize tasks, and each of you will work on different parts of the app.
Here’s how you’d use a Version Control System like GitHub:

Store the Main Code in GitHub:

● You set up a GitHub repository (like an online folder) for your app. This is where the
main version of the code lives.

Work on Different Features:

● You create a branch to work on the feature for adding tasks.


● Your friend creates another branch to work on the feature for deleting tasks.

Commit Your Changes:

● Every time you make progress on your feature, you commit (save) your changes with a
message describing what you did, like “Added feature to add tasks to the to-do list.”
Example: Building a Simple To-Do List App Together

Imagine you and a friend are building a to-do list app together. You both want to be able to
add, delete, and organize tasks, and each of you will work on different parts of the app.
Here’s how you’d use a Version Control System like GitHub:

Review and Merge:

● When you’re both done with your features, you each create a pull request on
GitHub. This allows you to review each other’s work, make sure everything works
correctly, and then merge the changes into the main code.

Pull the Latest Version:

● Now, the main code on GitHub has the features for both adding and deleting tasks. You both
“pull” (download) this latest version to your computers, and now you each have the updated
app.

Why This Matters

Using a Version Control System like GitHub makes it easy to collaborate, keep track of changes, and
build your app without accidentally overwriting each other’s work. If something goes wrong, you can
always go back to an earlier version of the code and fix it.
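The commit/branch/merge workflow above can be sketched as a toy model using only the Python standard library; real systems like Git store snapshots and diffs far more efficiently, so treat this purely as an illustration of the concepts.

```python
# Toy version-control model: commits, branches, and merging.
# Illustrative only -- Git's real data model is much richer.

class Repo:
    def __init__(self):
        self.branches = {"main": []}   # branch name -> list of commits

    def commit(self, branch, message, change):
        # A commit records a change with a message, like a saved snapshot.
        self.branches[branch].append({"message": message, "change": change})

    def branch(self, name, source="main"):
        # A new branch starts from a copy of the source branch's history.
        self.branches[name] = list(self.branches[source])

    def merge(self, source, target="main"):
        # Append commits that exist on source but not yet on target.
        for c in self.branches[source]:
            if c not in self.branches[target]:
                self.branches[target].append(c)

repo = Repo()
repo.branch("add-tasks")
repo.commit("add-tasks", "Added feature to add tasks to the to-do list", "add()")
repo.merge("add-tasks")
print([c["message"] for c in repo.branches["main"]])
```

After the merge, `main` contains the feature commit, mirroring the pull-request flow described above.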
Questions

What is the structure of an application?


Question: What is the structure of an application?

Think of an application like a house. To make it work, you need


different parts with specific functions, all connected and interacting
with each other.
Question: What is the structure of an application?

Simple application

 Frontend (User Interface)

 APIs (Application Programming Interfaces)

 Database
Question: What is the structure of an application?

Complex application

 Frontend (User Interface)

 Backend (Server)

 Database

 APIs (Application Programming Interfaces)

 Middleware

 Services

 Authentication and Authorization

 Cache
Question: What is the structure of an application?

Frontend (User Interface): The "Visible" Part of the Application

 Frontend (User Interface): The visible part that users interact with, like buttons and text fields.

 Imagine the frontend as the walls, windows, doors, and decor of the house. It's the part that people interact with directly.

 In an application, the frontend includes everything the user sees and clicks on: buttons, forms, text, images, and layouts.

 Technologies like HTML, CSS, and JavaScript are commonly used to create the frontend, which can be viewed in a web browser or an app interface.
Question: What is the structure of an application?

Backend (Server): The "Behind-the-Scenes" Part of the Application

 Backend (Server): The behind-the-scenes logic that processes and handles data requests.

 Think of the backend as the rooms and storage areas that you don't see right away but are essential for making the house functional.

 The backend handles the logic, processes, and calculations that power the app. It receives data from the frontend, processes it, and stores it where needed.

 Common backend technologies include Node.js, Python, Java, and PHP. The backend can also use APIs (Application Programming Interfaces) to communicate with the frontend.
Question: What is the structure of an application?

Database: The "Storage Room" of the Application

 Database: The storage room where all data is kept, ready to be accessed and updated.

 The database is like a storage room or filing cabinet where all important information is kept, ready to be accessed when needed.

 It stores data in an organized way, making it easy to retrieve or update. Databases are crucial for storing user information, content, and other essential data.

 Popular databases include MySQL, PostgreSQL, MongoDB, and SQLite.
Question: What is the structure of an application?

APIs (Application Programming Interfaces): The "Messenger"

 APIs: The messenger that connects different parts, letting the frontend and backend communicate.

 APIs are like a phone line connecting different rooms in the house, allowing them to communicate with each other.

 In an application, APIs let the frontend talk to the backend and retrieve or send data. APIs can also connect your app to external services, like a weather app that shows data from a weather provider.

 APIs ensure that data flows smoothly and securely between parts of the application.
Question: What is the structure of an application?

Authentication and Authorization: The "Security System"

 Authentication and Authorization: The security system, controlling who can access and modify data.

 This is like the locks on your doors or a security system in the house, controlling who can enter and what they can access.

 Authentication verifies the identity of users (e.g., login with username and password), while authorization determines what each user can do (e.g., edit tasks only if they're the owner).

 This is often managed by backend code and frameworks that control access to different parts of the application.
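The authentication/authorization split can be sketched in a few lines; the user store, password, and "owner may edit" policy below are hypothetical, and real systems use salted password hashing (e.g., bcrypt or argon2) rather than plain SHA-256.

```python
# Sketch: authentication (verify identity) vs authorization (check permissions).
import hashlib

# Hypothetical user store; hash instead of the raw password.
USERS = {"alice": {"pw_hash": hashlib.sha256(b"s3cret").hexdigest(), "role": "owner"}}

def authenticate(username, password):
    # Authentication: does the presented password match the stored hash?
    user = USERS.get(username)
    if user is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == user["pw_hash"]

def authorize(username, action):
    # Authorization: in this toy policy, only owners may edit.
    if action == "edit":
        return USERS.get(username, {}).get("role") == "owner"
    return True

print(authenticate("alice", "s3cret"))  # identity verified
print(authorize("alice", "edit"))       # owner is allowed to edit
```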
Question: What is the structure of an application?

Cache: The "Shortcut Storage"

 Cache: The shortcut storage for frequently accessed data to make the app faster.

 The cache is like a small storage room near the front of the house, keeping frequently used items for easy access.

 Caching stores data temporarily to make loading faster, like frequently accessed pages or user data, so the app doesn't need to pull from the main database every time.

 Common caching solutions include Redis and Memcached.
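The "shortcut storage" idea can be shown as a read-through cache: check the cache first, hit the slow source only on a miss. The `fake_database` dict below is a stand-in assumption for a real database; Redis and Memcached serve the same role at scale.

```python
# Sketch of a read-through cache in front of a slow data source.
cache = {}
db_reads = 0
fake_database = {"user:1": "Alice"}   # stand-in for the real database

def get(key):
    global db_reads
    if key in cache:          # cache hit: fast path, no database query
        return cache[key]
    db_reads += 1             # cache miss: go to the slow source
    value = fake_database[key]
    cache[key] = value        # store the value for next time
    return value

get("user:1")
get("user:1")
print(db_reads)   # the database was only queried once
```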
Question: What is the structure of an application?

Services: Specialized Workers in the Application

 Services: Specialized helpers that handle specific tasks, such as sending emails or notifications, or processing payments.

 Services are like specialized workers or appliances in your house, handling specific tasks, such as sending notifications, processing payments, or managing files.

 Some services are external, like Google Maps for location or Stripe for payments, which the application accesses via APIs.
Question: What is the structure of an application?

Middleware: The "Connector" Between Parts

 Middleware: The connector that processes data or checks permissions as it moves between parts.

 Middleware acts like a hallway connecting rooms, allowing them to pass information back and forth.

 It sits between the frontend, backend, and database, handling specific tasks such as logging activity, checking permissions, or processing data.

 Middleware makes sure data is processed or filtered correctly before it moves on to its destination.
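A common way to picture middleware is as functions wrapped around a request handler, each doing one job (logging, permission checks) before passing the request along. The handler, request shape, and function names below are hypothetical.

```python
# Sketch: middleware wrapping a request handler.
def handler(request):
    # The final destination: the actual application logic.
    return {"status": 200, "body": "task list"}

def logging_middleware(next_handler):
    def wrapped(request):
        print("request:", request["path"])   # log activity as it passes through
        return next_handler(request)
    return wrapped

def auth_middleware(next_handler):
    def wrapped(request):
        if not request.get("user"):
            # Block the request before it reaches the handler.
            return {"status": 401, "body": "unauthorized"}
        return next_handler(request)
    return wrapped

# The "hallway": logging runs first, then the permission check, then the handler.
app = logging_middleware(auth_middleware(handler))
print(app({"path": "/tasks", "user": "alice"})["status"])
```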
Questions

How many types of applications exist?


Questions

How many types of applications exist?

In software development, applications can be structured in two main ways: Monolithic


and Microservices.

 Monolithic Application

 Microservices Application
Questions

How many types of applications exist?

A monolithic application is built as one large, unified piece. All the components (like the
user interface, backend, database, and business logic) are tightly connected and
operate as a single unit.

Monolithic Application
Questions

How many types of applications exist?

A microservices application is made up of many small, independent services. Each


service is responsible for one specific function of the application, and these services
communicate with each other over a network.

Microservices
Application
Microservice

Think of a microservice like a team of specialists in a pizza restaurant. Instead of


having one person do everything (take orders, make pizza, deliver it), each person has
a specific job:
Microservices
Application ● Order Taker: Handles orders from customers.
● Pizza Chef: Makes the pizzas.
● Delivery Driver: Delivers the pizzas.
● Cashier: Manages payments.

Each person focuses on their own job and works independently, but they work together
to make the restaurant run smoothly. If the cashier has a problem, the chef can still
keep making pizzas, and the delivery driver can still deliver orders.
Microservice

What is a Microservice?
Microservices
Application A microservice is like one of these specialists. Instead of building an app where one
big system does everything, we split the app into smaller pieces, and each piece (or
"microservice") has a specific job.

Example: Online Shopping App

Imagine an online shopping app. Using microservices, the app would be split into smaller
pieces, each with one job:

1. Product Service: Shows the items for sale.


2. Cart Service: Keeps track of what’s in your cart.
3. Order Service: Handles orders when you check out.
4. Payment Service: Processes payments.
5. Notification Service: Sends emails or texts about your order.
Microservice

Each of these parts is a microservice.

Microservices How They Work Together


Application
When you order something:

● The Cart Service sends your cart info to the Order Service.
● The Order Service checks with the Payment Service to process payment.
● The Notification Service sends you a confirmation email.

Each microservice does just one job, and if one has an issue, it doesn’t stop the others from
working.

Why Microservices are Useful

1. Independent Updates: You can change or update one microservice without affecting the
others.
2. Scalability: If the Cart Service gets busy, you can add more resources to it without
changing the rest of the app.
3. Reliability: If one microservice goes down (like Notifications), the rest of the app still works.
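The shopping-app services described above can be sketched as a toy model in which each class stands in for an independently deployed service, and method calls stand in for the network calls between them; the class and method names are illustrative.

```python
# Toy microservices sketch: Order -> Payment -> Notification.
class PaymentService:
    def charge(self, amount):
        # Stand-in for a real payment processor.
        return {"paid": amount}

class NotificationService:
    def send(self, message):
        self.last = message   # stand-in for sending an email/text

class OrderService:
    def __init__(self, payments, notifications):
        self.payments = payments
        self.notifications = notifications

    def checkout(self, cart):
        total = sum(cart.values())
        receipt = self.payments.charge(total)                   # Order -> Payment
        self.notifications.send(f"Order confirmed: ${total}")   # Order -> Notification
        return receipt

orders = OrderService(PaymentService(), NotificationService())
print(orders.checkout({"pizza": 12, "soda": 3}))   # {'paid': 15}
```

Because each service owns one job, a failing NotificationService would not stop PaymentService from charging orders, which is the reliability point made above.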
Now that you know a bit about DevOps, applications, and related concepts,
let me take you to the next step.
Questions

Understanding Stateful vs. Stateless Architectures


Questions

Stateless Stateful

Understanding Stateful vs. Stateless Architectures


Questions

Stateless

A stateless application is a software application that does not retain any


data or session information about user interactions.

Stateless applications don't store user session data on the server, so each request is treated
independently. They are good for dynamic workloads and
changing business requirements. They can easily scale horizontally and are
simpler to develop and maintain. However, they often require frequent database
queries, which can create performance bottlenecks.
Questions

Stateful

Stateful applications are web applications that store data related to user
sessions on the server side

Stateful applications save client session data on the server, allowing for faster processing and
improved performance. They are good for predictable
workloads and consistent user experiences. However, they can be more
difficult to scale than stateless applications.
Questions

Stateless Stateful

 Stateful applications save information about previous sessions, while stateless applications do not.
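The distinction can be shown with the same "counter" feature implemented both ways; the request and session shapes below are hypothetical illustrations.

```python
# Sketch: stateless vs stateful handling of the same feature.

def stateless_count(request):
    # Stateless: all needed state arrives with the request,
    # so any server replica can answer it independently.
    return {"count": request["count"] + 1}

class StatefulServer:
    def __init__(self):
        self.sessions = {}   # session data lives on the server side

    def count(self, session_id):
        # Stateful: the server remembers this session between requests.
        self.sessions[session_id] = self.sessions.get(session_id, 0) + 1
        return {"count": self.sessions[session_id]}

print(stateless_count({"count": 4}))   # {'count': 5}

server = StatefulServer()
server.count("u1")
print(server.count("u1"))              # {'count': 2}
```

Note how the stateless version scales trivially (no shared memory), while the stateful version must keep `sessions` available to every request for that user.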
Client-side vs Server-side
Client-side vs Server-side

Client Side

This is the part of a computer system that the user interacts with directly. It includes everything you see and use on your screen, like clicking buttons, typing in forms, or watching videos. The client side is all about showing information to you and letting you interact with it.

Server Side

This is the behind-the-scenes part of a computer system. The server processes requests, handles data, and sends the right information back to the client. Think of it as the powerful, hidden part that makes things work but isn't directly visible to the user.
Client-side vs Server-side

Imagine you’re using a computer or a phone to play a game online or browse a website. The
client side and the server side work together like a team to make everything happen.
Client-side vs Server-side

Client Side

The client side is everything you see and do on your device. It's what makes the game or website look nice and lets you interact with it.

When you open a game or website, all the colors, images, text, and buttons you see are part of the
client side.

If you click a button, type in a text box, or swipe your screen, the client side handles that. It’s like the
“face” of the program that talks to you.

Think of it like the outside of a vending machine: you press buttons, choose snacks, and see the
display. The client side is all about what you can see, touch, and use.
Client-side vs Server-side

Server Side

The server side is like the engine inside the vending machine that you don't see. It's hidden but does all the important work behind the scenes.

When you press a button to choose a snack, the server side processes that choice, checks if the snack
is in stock, and then sends it down to you.

On a website, when you log in, the server side checks if your password is correct. Or if you search for
something, it finds the results for you.

So, the server side is the “brain” of the operation. It stores all the information, runs calculations, and
makes sure everything works the way it should. You don’t see it, but it’s working hard every time you
interact with the client side.
Client-side vs Server-side

Client Side Server Side

Whenever you do something on the client side, like press a button, the server side responds. The
client and server talk to each other to make sure everything works smoothly. The client asks for
what you need, and the server figures it out and sends it back.
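This request/response conversation can be sketched with the standard library's sockets: a tiny server on localhost plays the "vending machine," and a client plays the user pressing a button. Everything here (the messages, port choice) is an illustrative assumption.

```python
# Minimal client/server exchange on localhost.
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    request = conn.recv(1024)              # server side: receive the request...
    conn.sendall(b"snack for " + request)  # ...process it, send the response back
    conn.close()

threading.Thread(target=serve).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"button 3")                # client side: the user's action
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())   # snack for button 3
```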
Networking

As a DevOps Engineer, do I need to know networking?


Networking

As a DevOps Engineer, do I need to know networking?

Yes, as a DevOps Engineer, having an understanding of


networking is essential.

Here’s why networking knowledge is critical in DevOps


Networking

DevOps often involves setting up, managing, and troubleshooting


infrastructure, much of which relies on network configuration.

Infrastructure Management

Understanding network protocols, IP addressing, subnets,


and DNS helps ensure efficient communication between
servers, services, and users.
Networking

Knowing networking basics is essential when setting up environments, load


balancers, virtual networks, and firewalls

Deployment and Automation

Many cloud platforms require familiarity with network


configurations for secure and scalable deployments.
Networking

Security is paramount, and networking is a core part of securing


applications.

Security

Configuring network security groups, VPNs, firewalls, and


secure connections can prevent unauthorized access and
protect sensitive data.
Networking

During deployment, applications often need to interact with other services


over the network.

CI/CD Pipelines

Knowing how to configure network policies and


troubleshoot connectivity issues is valuable in maintaining
smooth deployment processes.
Networking

When issues arise, such as latency or downtime, understanding networking


can help pinpoint and resolve them.

Monitoring and
Troubleshooting

Tools like Wireshark, netstat, and traceroute are used to


monitor and debug network performance.
Networking

In a microservices environment, services communicate over networks.

Microservices and
Containerization

 Understanding how containers and services reach one another (e.g., Docker
networks, Kubernetes Services, service discovery) is key to keeping
service-to-service communication reliable.
Networking

 Key networking concepts to be familiar with include:

 OSI model, IP, TCP, subnet, CIDR, DNS, Firewalls, Load balancing, reverse proxies, VPN, TLS, SSH
Networking

 Key networking concepts to be familiar with include:

 UDP, TCP, HTTP, HTTPS, GET, POST, DHCP, Certificate, SSL, CA, MX record, A record, CNAME, NS, SOA, Network Layers, NAT, Ports
Networking

 OSI Model

 The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. Each layer is responsible for specific functions, making it easier to troubleshoot and understand networking processes.

 When you access a website, the OSI model explains how each layer interacts. For instance, the Application layer is your web browser, the Transport layer (using TCP) breaks data into packets, and the Network layer routes those packets to the correct IP address.
Networking

OSI Model
Networking

TCP (Transmission Control Protocol)

TCP is a core protocol of the internet protocol suite that ensures
reliable, ordered, and error-checked data delivery between
applications on different devices in a network.

When you download a file, TCP guarantees that every packet of data
arrives and is reassembled in the correct order. If a packet is lost, TCP
resends it, ensuring your file downloads correctly without missing data.
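The reliability described above can be seen in a minimal sketch: a TCP echo over the loopback interface, where the bytes sent arrive intact and in order. This is illustrative only; a real server needs error handling and concurrency.

```python
import socket
import threading

def echo_server(server_sock):
    # Accept one connection and echo back exactly what arrived.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# TCP delivers these bytes reliably and in order to the server, which echoes them back.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)
```

Running this prints the echoed payload, demonstrating that the stream arrived unmodified.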
Networking

IP (Internet Protocol)

IP is a protocol that assigns unique addresses (IP addresses) to
devices in a network and is responsible for routing data from source
to destination.

Typing 192.168.1.1 in a browser often takes you to your home router’s settings page. This
IP address uniquely identifies your router on your local network, allowing communication with
connected devices.
Networking

Subnet

A subnet, short for subnetwork, is a segmented piece of a larger
network. It helps organize networks, improve performance, and
enhance security by isolating sections of the network.

In an office, the IT department might be assigned to a subnet like 192.168.1.0/24, while
HR uses 192.168.2.0/24. Subnets help keep departments' data separate, improving
management and security.
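The department example above can be checked with Python’s standard-library ipaddress module — a quick sketch of testing which subnet an address belongs to:

```python
import ipaddress

# The example subnets from the slide: IT and HR departments.
it_subnet = ipaddress.ip_network("192.168.1.0/24")
hr_subnet = ipaddress.ip_network("192.168.2.0/24")

host = ipaddress.ip_address("192.168.1.42")
in_it = host in it_subnet   # membership test against the IT subnet
in_hr = host in hr_subnet   # membership test against the HR subnet
print(in_it, in_hr)  # True False
```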
Networking

CIDR (Classless Inter-Domain Routing)

CIDR is a way to allocate IP addresses more efficiently by using
flexible subnet masks, which define IP address ranges. CIDR
notation (e.g., 192.168.1.0/24) specifies the range of IP
addresses within a network.

192.168.1.0/24 represents IP addresses from 192.168.1.1 to 192.168.1.254. CIDR
helps networks save IPs by only allocating what’s necessary, rather than following fixed
classes (A, B, C).
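The /24 range quoted above can be enumerated with the ipaddress module — a small sketch confirming the block size and usable host range:

```python
import ipaddress

# The slide's example block: 192.168.1.0/24.
net = ipaddress.ip_network("192.168.1.0/24")

num_addresses = net.num_addresses   # 256 total addresses in the block
hosts = list(net.hosts())           # usable hosts exclude network and broadcast addresses
first, last = hosts[0], hosts[-1]
print(num_addresses, first, last)  # 256 192.168.1.1 192.168.1.254
```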
Networking

DNS (Domain Name System)

DNS translates human-readable domain names (like
www.example.com) into IP addresses that computers use to
identify each other on the network.

When you type "www.google.com" in a browser, DNS translates it to Google’s IP address
(e.g., 142.250.190.78). Without DNS, we’d need to remember numerical IP addresses
instead of website names.
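A tiny sketch of name resolution in Python: resolving "localhost", which is answered by the local hosts file, so it works without internet access (resolving public names would go out to DNS the same way):

```python
import socket

# Translate a hostname into an IPv4 address. "localhost" resolves locally,
# so this sketch runs offline; a public name like "www.google.com" would
# trigger a real DNS lookup.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1 on typical systems
```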
Networking

Firewalls

A firewall is a security device or software that monitors and controls
network traffic, blocking unauthorized access while allowing
permitted traffic.

A company firewall may block access to certain websites, like social media, to enhance
productivity. It may also allow only specific applications, like email, to prevent malware from
entering the network.
Networking

Load Balancing

Load balancing distributes network traffic across multiple servers to
ensure no single server is overwhelmed. It improves service
availability and scalability.

High-traffic websites like Amazon use load balancers to distribute incoming requests across
multiple servers, ensuring that no server gets overloaded and users experience fast
response times.
Networking

Reverse Proxies

A reverse proxy is a server that sits between users and web
servers, forwarding client requests to the appropriate server. It’s
commonly used for load balancing, caching, and securing
applications.

A reverse proxy like Nginx receives requests for mywebsite.com, checks which backend
server is available, and forwards the request to that server. The client only interacts with the
reverse proxy, not the actual backend servers.
Networking

VPN (Virtual Private Network)

A VPN extends a private network across a public network, enabling
secure and encrypted connections. It allows users to send and
receive data securely as if their devices were directly connected to
the private network.

When working remotely, employees use a VPN to securely access the company's internal
network. This allows them to work as if they’re in the office while keeping data secure over
the public internet.
Networking

TLS (Transport Layer Security)

TLS is a cryptographic protocol that provides secure communication
over a network. It’s commonly used for HTTPS to encrypt data
between a browser and a web server, ensuring privacy and data
integrity.

When you visit a bank’s website (indicated by "https://"), TLS encrypts your data, such as
login credentials, to prevent eavesdropping by malicious parties. Sites without TLS are
labeled "http://" and are less secure.
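A quick sketch of how this looks from code: the default TLS context Python builds for HTTPS clients already insists on a valid, hostname-matching certificate before trusting a server.

```python
import ssl

# The default client-side TLS context used for HTTPS connections.
ctx = ssl.create_default_context()

# Certificates are verified by default, and the certificate must
# match the hostname being connected to.
verifies_cert = ctx.verify_mode == ssl.CERT_REQUIRED
checks_hostname = ctx.check_hostname
print(verifies_cert, checks_hostname)  # True True
```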
Networking

SSH (Secure Shell)

SSH is a network protocol that provides secure remote login and
other secure network services over an unsecured network. It uses
encryption to protect data sent between a client and a server.

To manage a remote server, a DevOps engineer might run ssh user@server-address. This
allows them to securely execute commands on the server as if they were physically present,
with data encrypted during the session.
Networking

HTTP (Hypertext Transfer Protocol)

HTTP is the protocol used for transferring data on the web. It
defines how messages are formatted and transmitted, and how web
servers and browsers should respond to various commands.

When you visit a website like http://example.com, your browser uses HTTP to
communicate with the web server and request the webpage.
Networking

HTTPS (Hypertext Transfer Protocol Secure)

HTTPS is the secure version of HTTP, using encryption (TLS/SSL)
to protect the data exchanged between the client and server.

Banking websites use HTTPS (e.g., https://bank.com) to ensure that sensitive information,
like your password, is encrypted and secure from eavesdropping.
Networking

TCP (Transmission Control Protocol)

TCP is a connection-oriented protocol that ensures reliable data
transmission by establishing a connection before data transfer,
resending lost packets, and ensuring data integrity.

TCP is used when downloading a file from the internet. If any packet is lost during transfer,
TCP ensures it is resent, so the file downloads correctly.
Networking

UDP (User Datagram Protocol)

UDP is a connectionless protocol that prioritizes speed over
reliability, meaning it doesn’t guarantee the delivery or order of
packets.

Online gaming and video streaming often use UDP because a few lost packets (causing a
minor glitch) are preferable to the delay caused by resending packets.
Networking

DHCP (Dynamic Host Configuration Protocol)

DHCP automatically assigns IP addresses to devices on a network,
allowing them to communicate without manual configuration.

When you connect to Wi-Fi at a coffee shop, the router assigns your device an IP address
using DHCP, so you can access the internet.
Networking

MX Record (Mail Exchange Record)

MX records are DNS records that specify the mail servers
responsible for receiving email for a domain.

The MX record for example.com might direct emails to mail.example.com, allowing the
domain to receive emails.
Networking

A Record

An A (Address) record is a DNS record that maps a domain name to
an IP address, allowing users to access the site with a human-readable
address.

The A record for example.com might map to 192.168.1.1, so when you type
example.com, you’re directed to that IP address.
Networking

SSL (Secure Sockets Layer)

SSL is an encryption protocol used to secure data sent between a
client and a server. SSL has mostly been replaced by TLS but is still
commonly referenced.

When you see a padlock icon in the browser address bar (usually for HTTPS sites), SSL or
TLS is securing the data between your browser and the website.
Networking

SOA (Start of Authority)

The SOA record is a DNS record that provides essential information
about a domain, including the primary name server, the email of the
domain administrator, and domain serial numbers for DNS
synchronization.

When a DNS server checks the SOA record for example.com, it learns which server is
authoritative for the domain and can contact the administrator if needed.
Networking

NS (Name Server)

NS records in DNS specify which servers are authoritative for a
domain, meaning they contain the actual DNS records for that
domain.

If ns1.example.com is an NS record for example.com, it indicates that
ns1.example.com holds the DNS records for example.com.
Networking

NAT (Network Address Translation)

NAT is a method that allows multiple devices on a local network to
share a single public IP address, making it possible for them to
connect to the internet.

In a home network, your router uses NAT to let multiple devices (laptops, phones) share one
public IP address while assigning each device a unique local IP address.
Networking

Ports

Ports are logical endpoints in a network connection, used by
protocols to specify different types of traffic and applications (e.g.,
web traffic, email).

Port 80 is commonly used for HTTP traffic, and port 443 is used for HTTPS. When you visit
a website, your browser connects to the server’s IP on port 80 or 443.
Networking

Certificate

A digital certificate is an electronic document used to prove the
ownership of a public key. Certificates are used to establish secure
connections on the internet (typically for HTTPS). The certificate
binds a public key with an entity’s identity, ensuring that the data
sent between the user and server is secure.

When you visit a secure website (https://example.com), your browser checks the site’s
digital certificate to verify it’s legitimate. If the certificate is valid and trusted, it establishes a
secure (TLS/SSL) connection. The certificate shows the site’s authenticity and that data
exchange will be encrypted.
Networking

CA (Certificate Authority)

A Certificate Authority (CA) is a trusted organization responsible for
issuing digital certificates. The CA verifies the identity of the
certificate requester (e.g., a website or organization) before issuing
the certificate. This trust is fundamental to the security of HTTPS
and other encrypted communications.

Let’s say example.com wants to secure its website with HTTPS. It requests a certificate
from a CA, like Let’s Encrypt or DigiCert. The CA verifies the identity of example.com and
issues a certificate. Now, when users visit example.com, their browsers trust that the
certificate is legitimate because it’s signed by a trusted CA.
Networking

GET and POST

In web development and DevOps, GET and POST are
two fundamental HTTP methods used for
communication between a client (such as a web
browser or API consumer) and a server.
Networking

GET

Definition: GET is an HTTP method used to request data from a specified resource. It retrieves information from the server without modifying any data.
Purpose: Primarily used to fetch data from the server.
Characteristics:

● Idempotent: Calling GET multiple times should not have any side effects. Each request will yield the same result, without altering data on the server.
● URL Parameters: GET requests can include parameters in the URL (known as query parameters), often used for filtering, sorting, or paginating data.
● Caching: GET requests are often cached by browsers, making them suitable for retrieving data that doesn’t change frequently.

Example of a GET Request:

● Use Case: Retrieving a user’s profile information.
● Request: GET /api/user/profile?id=123
● Response: The server responds with the user’s profile data in JSON or HTML format. No data is altered on the server.

When to Use GET:

● When you need to retrieve information without modifying data (e.g., viewing a webpage, searching for products, listing users).
● For API calls that simply read or fetch data, such as querying database records or returning search results.
● When you want to enable caching, as GET responses can be stored and reused to improve performance.
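The slide’s example request can be sketched with the standard library: the GET parameters live in the URL’s query string, which is what makes them easy to cache, bookmark, and log (example.com is a placeholder host):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build the example request URL: GET /api/user/profile?id=123.
base = "https://example.com/api/user/profile"
query = urlencode({"id": 123})
url = f"{base}?{query}"
print(url)  # https://example.com/api/user/profile?id=123

# The parameters are fully recoverable from the URL itself.
params = parse_qs(urlparse(url).query)
print(params)  # {'id': ['123']}
```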
Networking

POST
Definition: POST is an HTTP method used to send data to the server to create or update a resource. Unlike
GET, POST requests may result in changes to the server’s data.
Purpose: Primarily used for creating new resources or submitting data to the server.
Characteristics:

● Non-Idempotent: Calling POST multiple times can have different results each time. For example,
sending a POST request twice might create duplicate entries.
● Request Body: POST requests carry data in the body (rather than the URL), allowing large and
complex data structures to be sent, such as JSON objects.
● Not Cached: POST requests are generally not cached by browsers or servers, making them
appropriate for operations where fresh data is needed every time.

When to Use POST:

● When creating a new resource (e.g., submitting a form, registering a user, adding a new item).
● When sending sensitive information that shouldn’t be visible in the URL (e.g., passwords, personal
data).
● For operations that modify the server’s data, such as creating or updating records.
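In contrast, a POST carries its payload in the request body. A minimal sketch with the standard library — the endpoint and field names are hypothetical, and nothing is actually sent; we only construct the request object:

```python
import json
from urllib import request

# Hypothetical payload and endpoint; the data travels in the body, not the URL.
payload = {"username": "jane", "plan": "free"}
req = request.Request(
    "https://example.com/api/users",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method)                      # POST
print(req.get_header("Content-type"))  # application/json
```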
Networking

GET
POST
Networking

LoadBalancer

A load balancer is a tool that distributes incoming network or
application traffic across multiple servers.

Its main job is to ensure that no single server gets overwhelmed,
which helps keep websites and applications fast, reliable, and
available even if one server fails.

Imagine it like a traffic cop that directs cars (user requests) to
different open lanes (servers), so traffic flows smoothly and
everyone gets where they need to go without long waits or
pileups.
Networking

LoadBalancer
Networking

Here are some common examples
of load balancers:

AWS Elastic Load Balancer (ELB): Used on Amazon Web Services to
automatically distribute incoming application traffic across multiple targets, like
EC2 instances.

NGINX: Often used as a software load balancer and reverse proxy. It can
distribute requests based on different rules (like round-robin or least
connections) and handle HTTP, HTTPS, and TCP/UDP traffic.

HAProxy: A popular open-source software load balancer that supports both HTTP
and TCP load balancing. It’s used by companies like Airbnb, Instagram, and
Twitter for high-traffic applications.

Google Cloud Load Balancer: Google’s load balancing service that distributes
traffic across multiple servers in Google Cloud and can automatically scale to
handle large volumes of traffic.

Azure Load Balancer: Microsoft’s cloud-based load balancer that distributes
incoming traffic for Azure applications, ensuring high availability.
Networking

Load Balancer Routing Logic

Round Robin: Distributes requests sequentially across servers, looping back
to the start once each has been selected. It’s simple and works well when
servers are of equal capability.

Least Connections: Directs traffic to the server with the fewest active
connections. This is useful when each request can have a different duration, as
it helps prevent any single server from being overloaded.

Least Response Time: Sends requests to the server with the fastest response
time, helping reduce latency for end users.

Weighted Round Robin: Assigns a weight to each server based on capacity, with
higher-weight servers receiving more requests. This is useful when servers have
different resource levels.
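Three of these strategies can be sketched in a few lines of Python (the backend IPs are placeholders):

```python
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholder backend IPs

# Round robin: hand requests to each server in turn, wrapping around.
rr = cycle(servers)
round_robin_picks = [next(rr) for _ in range(4)]
print(round_robin_picks)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']

# Least connections: route the next request to the server with the
# fewest active connections.
active_connections = {"10.0.0.1": 7, "10.0.0.2": 2, "10.0.0.3": 5}
least_loaded = min(active_connections, key=active_connections.get)
print(least_loaded)  # 10.0.0.2

# Weighted round robin: higher-capacity servers appear more often
# in the rotation.
weights = {"10.0.0.1": 3, "10.0.0.2": 1}
rotation = [s for s, w in weights.items() for _ in range(w)]
print(rotation)  # ['10.0.0.1', '10.0.0.1', '10.0.0.1', '10.0.0.2']
```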
Networking

Load Balancer Routing Logic

Random: Routes each request to a randomly selected server. This can work
well in some cases but lacks the optimization of other methods.

Geographic Routing: Directs users to servers based on geographic location,
improving latency by serving users from the nearest server.
Networking

Load Balancer Types

There are two types of load balancers (LBs):

Application Load Balancer (ALB)
Network Load Balancer (NLB)
Networking

Load Balancer Types

Application Load Balancer (ALB)

● Layer: Operates at Layer 7 (Application Layer) of the OSI model.
● Best for: Web applications that require HTTP and HTTPS support with advanced request
routing features.
● Routing: Supports content-based routing, such as path-based and host-based routing.
ALB can route requests to different target groups based on URL path or host.
● Protocol Support: HTTP, HTTPS, and WebSocket.
● Health Checks: Offers health checks at the application level, such as HTTP status codes.
● Ideal Use Cases: Microservices, container-based applications, and applications requiring
SSL termination or redirect capabilities.
Networking

Load Balancer Types

Network Load Balancer (NLB)

● Layer: Operates at Layer 4 (Transport Layer) of the OSI model.
● Best for: Applications needing ultra-low latency and high network throughput, such as
gaming or IoT applications.
● Routing: Uses TCP/UDP-based load balancing; routes traffic to instances based on IP
address and port.
● Protocol Support: TCP, UDP, and TLS.
● Health Checks: Performs health checks at the network level (e.g., TCP connection tests).
● Ideal Use Cases: High-performance applications, real-time data transfer, and
environments where connection stability is crucial (e.g., databases or critical APIs).
Networking

HTTP vs HTTPS

HTTP (Hypertext Transfer Protocol) and HTTPS (Hypertext Transfer
Protocol Secure) are both protocols used to transfer data over the web.
Networking

HTTP vs HTTPS

HTTP (Hypertext Transfer Protocol)

● Security: HTTP does not encrypt data. All information sent over HTTP is in plain
text, making it vulnerable to interception by third parties.
● Port: Uses port 80 by default.
● Data Privacy: Data privacy is not guaranteed. Anyone who intercepts HTTP data
can read it, including login credentials, personal information, etc.
● Use Cases: While still used, HTTP is increasingly discouraged for any sensitive
data transfer. It is mainly used for internal or non-sensitive data transmissions.
Networking

HTTP vs HTTPS

HTTPS (Hypertext Transfer Protocol Secure)

● Security: HTTPS is HTTP with encryption, typically using SSL (Secure Sockets
Layer) or TLS (Transport Layer Security) protocols. This encryption secures the
data so only the intended recipient can decrypt and read it.
● Port: Uses port 443 by default.
● Data Privacy: Encrypts data in transit, ensuring privacy and security. HTTPS also
validates the authenticity of the website, protecting against "man-in-the-middle"
attacks.
● Use Cases: Recommended for any data-sensitive or public-facing websites, such
as those involving login credentials, payments, or personal data. HTTPS is now a
standard requirement for secure websites and is favored by search engines.
Networking

Common known ports
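As a stand-in for the ports table, here is a small lookup of commonly cited well-known port numbers, sketched in Python (a subset; the full assignments are maintained by IANA):

```python
# A few commonly known default ports and the services that use them.
WELL_KNOWN_PORTS = {
    22: "SSH",
    25: "SMTP",
    53: "DNS",
    80: "HTTP",
    443: "HTTPS",
    3306: "MySQL",
    5432: "PostgreSQL",
}

print(WELL_KNOWN_PORTS[80], WELL_KNOWN_PORTS[443])  # HTTP HTTPS
```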


Security

As a DevOps Engineer, do I need to know security?
Security

As a DevOps Engineer, do I need to know security?

Yes, as a DevOps Engineer, having a strong understanding of security is
essential. Security is a crucial part of the DevOps lifecycle, often referred to
as DevSecOps when security practices are integrated into DevOps
workflows.

Here are several key reasons why security knowledge is
vital in a DevOps role.
Security

Securing the CI/CD Pipeline

The CI/CD pipeline automates the process of building, testing, and
deploying code. Since it handles sensitive information (like API keys,
secrets, and credentials) and deploys code to production, it’s a prime target
for attackers.

Ensuring that credentials are stored securely, using tools like
HashiCorp Vault or AWS Secrets Manager, and implementing
proper access controls are essential to prevent unauthorized
access.
Security

Infrastructure as Code (IaC) Security

DevOps engineers often manage infrastructure with IaC tools like Terraform,
Ansible, or CloudFormation. Misconfigurations in IaC can lead to security
vulnerabilities.

If a security group in AWS is accidentally set to allow open access (e.g., 0.0.0.0/0
on SSH), it can expose your infrastructure to attacks. Knowing how to set secure
defaults and review IaC for security issues is critical.
Security

Network Security

DevOps engineers work with network configurations, including firewalls,
VPNs, and load balancers. Understanding network security concepts like IP
whitelisting, NAT, and port management is necessary to prevent
unauthorized access and secure data transmission.

Setting up secure VPCs, subnets, and using security groups and firewalls to
control traffic can help protect sensitive data and systems.
Security

Application Security

Many DevOps pipelines include stages for scanning code and applications
for vulnerabilities. Knowing how to configure and interpret security tools (like
SonarQube, OWASP Dependency-Check, or Snyk) is valuable.

Automating static application security testing (SAST) and dynamic application
security testing (DAST) in the CI/CD pipeline helps detect vulnerabilities early in
the development cycle.
Security

Container Security

Containers, like those managed by Docker and Kubernetes, have unique
security concerns. DevOps engineers should know how to secure container
images, set resource limits, and manage Kubernetes security
configurations.

Using tools like Aqua Security or Twistlock to scan Docker images for
vulnerabilities, managing permissions within Kubernetes, and avoiding running
containers as the root user are all essential container security practices.
Security

Identity and Access Management (IAM)

Proper IAM ensures that only authorized users and services have access to
resources. Managing permissions correctly and following the principle of
least privilege are crucial.

Configuring IAM roles and policies in AWS or other cloud platforms to limit access
based on roles (e.g., giving production access only to certain team members)
helps reduce the risk of unauthorized access.
Security

Monitoring and Incident Response

Security monitoring is vital for detecting suspicious activity, and knowing
how to respond to incidents helps mitigate damage. DevOps engineers
should understand logging, monitoring, and alerting tools and processes.

Setting up monitoring with tools like Prometheus, Grafana, and the ELK stack
(Elasticsearch, Logstash, Kibana) helps detect anomalies. Having incident
response protocols in place helps respond quickly to security incidents.
Security

Compliance and Regulatory Requirements

Many industries have regulatory requirements like GDPR, HIPAA, and SOC
2. Knowing these requirements helps ensure that DevOps practices meet
security standards and legal obligations.

Implementing data encryption, secure storage for sensitive information, and
access audits can help meet compliance requirements.
Important Security
Concepts
Important Security
Concepts

Principle of Least Privilege (PoLP)

Users, processes, and systems should only have the minimum access
necessary to perform their functions.

Reduces the attack surface by limiting access to only what’s necessary, preventing
unauthorized access to sensitive resources.
Important Security
Concepts

Encryption (Data at Rest and Data in Transit)

Encryption is the process of encoding data to prevent unauthorized access.
Data at rest refers to data stored on a disk, while data in transit refers to
data being transferred over a network.

Encrypting data ensures that even if it’s intercepted or accessed by unauthorized
users, it remains unreadable without the decryption key.
Important Security
Concepts

Identity and Access Management (IAM)

IAM is a framework of policies and technologies that ensure the right
individuals and systems have the right access to technology resources.

Helps manage permissions, ensuring that only authenticated and authorized
entities can access certain resources.
Important Security
Concepts

Vulnerability Management

The process of identifying, classifying, remediating, and mitigating
vulnerabilities.

Continuous vulnerability assessment allows organizations to identify and fix
security flaws before they are exploited by attackers.
Important Security
Concepts

Secrets Management

Storing and managing sensitive information (such as API keys, passwords,
and credentials) securely to prevent unauthorized access.

Protects sensitive data from exposure, especially in CI/CD pipelines where secrets
are frequently used.
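A minimal sketch of the idea: resolve a secret from the environment at runtime instead of hard-coding it in source. DB_PASSWORD is a hypothetical variable name; in a real pipeline the CI/CD system or a vault agent injects the value.

```python
import os

# Stand-in for the injection step a vault agent or CI/CD system performs.
os.environ["DB_PASSWORD"] = "injected-by-vault"

def get_db_password():
    # Fail loudly rather than fall back to a hard-coded value.
    secret = os.environ.get("DB_PASSWORD")
    if not secret:
        raise RuntimeError("DB_PASSWORD is not set")
    return secret

password = get_db_password()
print(password == "injected-by-vault")  # True
```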
Important Security
Concepts

Compliance and Regulatory Standards

Compliance with industry standards (e.g., GDPR, HIPAA, SOC 2) involves
following legal guidelines for handling data.

Non-compliance can lead to legal consequences, so it’s essential to know relevant
standards for handling data in a secure and compliant manner.
Important Security
Concepts

Secure Coding Practices

Practices that help prevent security vulnerabilities in code, like input
validation, error handling, and avoiding hard-coded secrets.

Prevents common vulnerabilities (like SQL injection and cross-site scripting) and
ensures code is secure by design.
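One of these practices can be sketched directly: parameterized queries make the driver treat user input as data, never as SQL, which blocks classic injection payloads (illustrated here with SQLite):

```python
import sqlite3

# In-memory database with one known user.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"  # classic injection attempt

# Parameterized query: the payload is treated literally and matches no row.
safe_rows = db.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe_rows)  # []
```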
Important Security
Concepts

Firewall Rules and Network Segmentation

Firewalls control incoming and outgoing network traffic, and network
segmentation divides networks into smaller parts.

Limits the spread of attacks within a network and restricts access to sensitive
areas, providing layered security.
Important Security
Concepts

Monitoring and Logging

Continuously tracking, recording, and analyzing system events and
behaviors to detect and respond to security incidents.

Monitoring enables early detection of suspicious activity, while logging provides an
audit trail to investigate incidents.
Important Security
Concepts

Incident Response and Recovery

A structured approach to managing and addressing security incidents to
limit damage and reduce recovery time.

Being prepared with an incident response plan helps organizations respond swiftly
and minimize the impact of security breaches.
Important Security
Concepts

Patching and Update Management

Regularly applying updates and patches to software to fix known
vulnerabilities.

Helps prevent attackers from exploiting known vulnerabilities and ensures that
systems are up-to-date and secure.
Important Security
Concepts

Threat Modeling

The process of identifying and evaluating potential threats and
vulnerabilities in a system.

Helps proactively identify and mitigate risks, making security a built-in aspect of the
system’s design.
Important Security
Concepts

Zero Trust Architecture

A security model where trust is never assumed; instead, verification is
required at every stage for every user and device attempting access.

Reduces the risk of unauthorized access by continuously verifying trustworthiness
within a system.
Important Security
Concepts

Configuration Management and Secure Defaults

The practice of maintaining secure and consistent settings across all
infrastructure components and software.

Ensures all systems start with secure configurations, reducing vulnerabilities from
default or misconfigured settings.
Important Security
Concepts

Security in CI/CD (DevSecOps)

Integrating security practices into CI/CD pipelines to automatically detect
vulnerabilities and enforce security standards.

Shifts security left, allowing issues to be identified and addressed early in the
development cycle, reducing overall risk.
Important Security
Concepts

API Security

Protecting APIs from vulnerabilities like unauthorized access, injection
attacks, and data breaches.

APIs are widely used to integrate services and access data, so securing them is
crucial to protect both backend services and sensitive information.
Important Security
Concepts

Container Security

Securing containers and the containerized environment, including image
scanning, runtime protection, and secure configurations.

Containers are widely used in DevOps, and they introduce unique security
challenges that require special handling.
Important Security
Concepts

NAT (Network Address Translation) and IP Masking

NAT translates private IP addresses to a public IP address, hiding the
internal network from external users.

Helps prevent direct access to internal network devices, reducing the risk of
external attacks.
Important Security
Concepts

TLS/SSL Certificates and Certificate Management

TLS/SSL certificates encrypt data in transit between a client and server.
Certificate management involves handling these certificates securely.

Certificates ensure data confidentiality and integrity during transmission and help
users verify the authenticity of a website or server.
Important Security
Concepts

Access Control and Multi-Factor Authentication (MFA)

Access control defines who can access resources, and MFA adds an extra
layer of security by requiring multiple forms of verification.

Access control and MFA help prevent unauthorized access, even if passwords are
compromised.
DATABASE

As a DevOps Engineer, do I need to know databases?
DATABASE

As a DevOps Engineer, do I need to know databases?

Yes, as a DevOps Engineer, it’s important to have a solid understanding of
databases. While you might not need to be a full-fledged database
administrator (DBA), knowing the basics of databases, as well as certain
advanced concepts, is essential.

Here are several key reasons why database knowledge is
vital in a DevOps role.
DATABASE

CI/CD Pipelines for Database Changes

DevOps engineers often need to integrate database changes into the CI/CD
pipeline to ensure that schema migrations and updates are deployed
seamlessly alongside application code.

A DevOps pipeline might include a step for running database
migrations using tools like Liquibase or Flyway. This ensures that
any schema changes (e.g., adding a new table or column) are
applied to the database in a controlled way during deployment.
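The core idea that tools like Flyway and Liquibase automate can be sketched in a few lines: apply versioned schema changes in order and record which have run (illustrated here with SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version INTEGER)")

# Ordered, versioned DDL changes — each release adds the next entry.
migrations = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

for version in sorted(migrations):
    conn.execute(migrations[version])
    conn.execute("INSERT INTO schema_version VALUES (?)", (version,))

columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
applied = [row[0] for row in conn.execute("SELECT version FROM schema_version ORDER BY version")]
print(columns)  # ['id', 'name', 'email']
print(applied)  # [1, 2]
```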
DATABASE

Infrastructure as Code (IaC)

Database servers and configurations are part of the infrastructure. As a
DevOps engineer, you’ll often provision, configure, and manage databases
using IaC tools like Terraform or Ansible.

You might use Terraform to create an Amazon RDS instance in
AWS or a MySQL database in Azure, specifying configurations
like instance size, storage, and security settings.
DATABASE

Database Monitoring and Performance Tuning

Database performance has a direct impact on application performance, so
it’s important to know how to monitor and troubleshoot databases. DevOps
engineers often set up monitoring tools to track database metrics like query
performance, CPU usage, and disk space.

Tools like Prometheus, Grafana, or DataDog can be configured
to monitor database health. If query latency spikes, you may
need to investigate the root cause, which could involve checking
slow-running queries or identifying high CPU usage.
DATABASE

Database Backup and Recovery

Ensuring data safety is critical. DevOps engineers often handle automated
backups, as well as restoration and disaster recovery processes, to ensure
data is protected and can be restored in case of an incident.

Setting up automated daily backups for a PostgreSQL database
on AWS RDS and testing the restoration process periodically to
confirm that backups work as expected.
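A minimal backup-and-restore sketch using SQLite’s online backup API. A production setup would back up to durable storage (a file, S3, etc.) and test restores regularly, as described above:

```python
import sqlite3

# Source database with some data worth protecting.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, total REAL)")
source.execute("INSERT INTO orders VALUES (1, 9.99)")
source.commit()

# Stand-in for the backup destination; copy every page of the source.
backup = sqlite3.connect(":memory:")
source.backup(backup)

# "Restore" check: the data is readable from the backup copy.
restored_total = backup.execute("SELECT total FROM orders WHERE id = 1").fetchone()[0]
print(restored_total)  # 9.99
```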
DATABASE

Database Security

Databases often contain sensitive information, making them a high-value
target for attackers. Understanding database security fundamentals (like
access control, encryption, and network restrictions) is crucial for protecting
data.

Configuring database access using roles and permissions,
encrypting data at rest and in transit, and restricting network
access to trusted IP addresses.
DATABASE

Applications may use SQL or NoSQL databases depending on the data


structure and performance requirements. DevOps engineers should
understand the differences and use cases for each type to support
applications effectively.

Understanding SQL
and NoSQL
Databases

A DevOps engineer might support a relational database like


MySQL for an application that needs strong data consistency,
while another application might use MongoDB, a NoSQL
database, for flexible, unstructured data storage.
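The difference is easiest to see with the same data modeled both ways. Here SQLite stands in for a relational database, and a plain dict stands in for a document store like MongoDB:

```python
# Same data modeled two ways: a relational table (fixed schema, enforced
# columns) versus a schemaless document store (each record can differ).
import sqlite3

# Relational: fixed columns, strong structure
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
db.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
row = db.execute("SELECT name FROM users WHERE id = 1").fetchone()

# Document-oriented: each record can carry different fields
documents = {
    1: {"name": "alice", "tags": ["admin"]},
    2: {"name": "bob"},  # no "tags" field, and that's fine
}

print(row[0], documents[2])
```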
DATABASE

Data migrations (moving data from one database to another or from one
version of a database to another) are common in DevOps workflows,
especially when upgrading applications or changing environments. Knowing
how to plan and execute migrations is essential to avoid data loss and
ensure data integrity.

Data Migrations

● Migrating data from an on-premises MySQL database to an AWS RDS


instance or using tools like pg_dump and pg_restore to migrate a
PostgreSQL database.
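The dump-and-restore pattern behind pg_dump and pg_restore can be sketched with SQLite's built-in SQL dump: serialize the source database to SQL statements, then replay them into the target.

```python
# Toy migration: dump the source database as SQL and replay it into the
# target, mirroring what pg_dump followed by pg_restore does for PostgreSQL.
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE users (id INTEGER, email TEXT)")
source.execute("INSERT INTO users VALUES (1, 'a@example.com')")
source.commit()

# "Dump" to SQL statements, then "restore" into the new database
dump_sql = "\n".join(source.iterdump())
target = sqlite3.connect(":memory:")
target.executescript(dump_sql)

migrated = target.execute("SELECT id, email FROM users").fetchall()
print(migrated)
```

After a real migration you would run the same kind of verification query on the target before cutting traffic over.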
DATABASE

As applications grow, databases may need to scale to handle increased load.


Understanding scaling options (e.g., sharding, read replicas) and replication
is important for high availability and performance.

Database Scaling and


Replication

Setting up read replicas in MySQL to offload read requests from the primary
database, which improves performance for read-heavy applications.
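Read replicas only help if the application (or a proxy) routes reads to them while writes still hit the primary. A minimal sketch of that read/write splitting, where connections are just labels:

```python
# Sketch of read/write splitting: writes go to the primary, reads are
# spread round-robin across replicas. Names here are placeholders.
import itertools

class DatabaseRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)  # round-robin reads

    def route(self, query: str) -> str:
        if query.lstrip().upper().startswith("SELECT"):
            return next(self._replica_cycle)
        return self.primary  # INSERT/UPDATE/DELETE must hit the primary

router = DatabaseRouter("primary-db", ["replica-1", "replica-2"])
targets = [router.route(q) for q in [
    "SELECT * FROM orders",
    "UPDATE orders SET status = 'shipped'",
    "SELECT * FROM users",
]]
print(targets)
```

Note that replicas lag the primary slightly, so reads that must see a just-written row usually still go to the primary.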
DATABASE

Proper configuration and optimization can drastically improve database


performance. Knowing how to adjust database settings and optimize queries
can prevent performance bottlenecks.

Database
Configuration and
Optimization

Adjusting PostgreSQL configurations like max_connections, shared_buffers,


and work_mem to better utilize server resources based on the application
workload.
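A commonly cited community starting point is to size shared_buffers at roughly 25% of system RAM, then tune from there based on the workload. A tiny illustrative helper (the rule of thumb, not a definitive recommendation):

```python
# Illustrative sizing helper based on a common PostgreSQL community rule of
# thumb (shared_buffers roughly 25% of RAM); real tuning depends on workload.

def suggest_shared_buffers_mb(total_ram_mb: int) -> int:
    return total_ram_mb // 4

print(suggest_shared_buffers_mb(16384))  # for a 16 GB server
```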
DEVOPS MOST
USED DATABASE
USER MANAGEMENT

As a DevOps Engineer do I need to know User


Management ?
USER MANAGEMENT

As a DevOps Engineer do I need to know User


Management ?

User permissions and access control are crucial for managing


who can access resources and what actions they can perform
within a system. Implementing an effective permissions model
requires planning and a robust understanding of the system’s
requirements.
USER MANAGEMENT

Role-Based Access Control (RBAC): Users are assigned roles, and


permissions are assigned to roles rather than individual users. This simplifies
management in systems where users need similar access based on their
role (e.g., admin, editor, viewer).

Access Control
Models

Attribute-Based Access Control (ABAC): Uses attributes (such as user department, time of access, or security level) to define access control policies. This model allows fine-grained control but can be complex to implement and manage.

Discretionary Access Control (DAC): Access is based on the identity of subjects


and permissions defined by resource owners. Owners control who has access to
their resources and can share or restrict them.

Mandatory Access Control (MAC): Access is enforced based on regulated,


predefined policies (often by government or corporate policy) and is typically non-overridable by individual users. MAC is common in highly sensitive environments
like government or military.
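The contrast between RBAC and ABAC shows up clearly in code: RBAC only asks "what is your role?", while ABAC evaluates attributes of the user and context. A sketch with made-up roles, attributes, and policy:

```python
# Contrast of RBAC and ABAC decisions. Roles, attributes, and the policy
# below are invented for illustration.

ROLE_PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def rbac_allows(role: str, action: str) -> bool:
    # RBAC: the decision depends only on the user's role
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(user_attrs: dict, action: str) -> bool:
    # ABAC: hypothetical policy - writes only for the "finance"
    # department and only during business hours
    if action == "write":
        return user_attrs["department"] == "finance" and 9 <= user_attrs["hour"] < 17
    return True

print(rbac_allows("viewer", "write"))                               # denied by role
print(abac_allows({"department": "finance", "hour": 10}, "write"))  # allowed by attributes
```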
USER MANAGEMENT

Read/Write/Execute (RWE): These fundamental permissions dictate if a


user can view, edit, or execute a file/resource.

Permission Types

Admin/Superuser Access: Allows full access to manage resources, users, and configurations within a system.

USER MANAGEMENT

Access Control Lists (ACLs): List of permissions attached to resources


specifying which users or roles can access them and their actions.

Implementing Access
Control

Permissions Hierarchies: Set up permissions in a hierarchy to avoid conflicts,


where a higher level (like an admin) can override restrictions at a lower level (like a
user).
USER MANAGEMENT

USER PERMISSION

The Read/Write/Execute (RWE) model is a fundamental permissions structure used in


operating systems and file systems to control access to files, directories, and other resources. It
determines what actions a user, group, or process can perform with a particular resource and is
commonly implemented with a combination of these three basic permissions.

Read (R) Write (W) Execute (X)


USER MANAGEMENT

USER PERMISSION

Read (R)

Definition: Grants permission to view or read the contents of a file or list the
contents of a directory.

Use Cases:

● For a text document, the read permission allows users to open and view the
file.
● For directories, it allows users to list the files and subdirectories within that
directory.

Example Scenarios:

● Viewing Documents: A report file can be read by users but not altered unless they have additional
write permissions.
● Website Files: Web server configurations typically grant read access to public files for web access
but restrict write permissions to prevent unauthorized changes.
USER MANAGEMENT

USER PERMISSION

Write (W)

Definition: Grants permission to modify or change the contents of a file, create or delete files within a directory, or modify the metadata of a file or directory.

Use Cases:

● For a document, write permission allows users to edit and save changes to the document.
● For directories, it allows users to add new files, delete existing files, or rename files within the
directory.

Example Scenarios:

● Collaborative Editing: Team members might have write access to project files,
allowing them to contribute to shared documents.
● Application Logs: Application directories require write access so that processes
can log events, but typically, this permission is not granted to regular users.
USER MANAGEMENT

USER PERMISSION

Execute (X)

Definition: Grants permission to run or execute a file, which is especially relevant for scripts, binaries, and applications.

Use Cases:

● For a program file or script, execute permission allows users to run the file as
a process.
● For directories, it allows users to access or traverse through directories and
view their content (if read permission is also set).

Example Scenarios:

● Running Applications: A user needs execute permissions to run software or


scripts directly from the file system.
● Server Access: Users or automated processes need execute access to scripts
or commands to perform scheduled tasks or deployments.
USER MANAGEMENT

USER PERMISSION

The Read/Write/Execute (RWE) model is a fundamental permissions structure used in


operating systems and file systems to control access to files, directories, and other resources. It
determines what actions a user, group, or process can perform with a particular resource and is
commonly implemented with a combination of these three basic permissions.

User: The owner of the file.

Group: Other users in the same group as the file owner.

Others: Any other user with access to the system.
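These three triplets are what an octal mode like 754 encodes: the first digit is the user's rwx bits, the second the group's, the third everyone else's. Python's standard library can decode it:

```python
# Decoding a Unix mode like 754 into User/Group/Others rwx triplets
# using the standard library's stat helpers.
import stat

mode = stat.S_IFREG | 0o754  # a regular file with permissions 754
print(stat.filemode(mode))   # user=rwx, group=r-x, others=r--

# The same information, checked bit by bit:
user_can_write = bool(mode & stat.S_IWUSR)
others_can_write = bool(mode & stat.S_IWOTH)
print(user_can_write, others_can_write)
```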
USER MANAGEMENT

User Permissions and Access Control

This concept is about defining what a user can and cannot do within a system. Permissions and
access control are the core of system security, ensuring that users only have access to
resources essential for their role.

Authentication: Verifies the identity of users (e.g., passwords, multi-factor authentication, SSO). Authentication ensures that only valid users can log in.

Authorization: Once authenticated, authorization determines what actions a user is allowed to perform based on their permissions. For instance, an administrator might have full access, whereas a regular user may have limited access.

Granular Permissions: Defining fine-grained permissions helps in precise control. For example, permissions can be broken down by operations (read, write, execute), resource type (databases, file systems, servers), or even resource location.

Least Privilege Principle: A critical security practice where users are given the minimum permissions necessary for their roles. This limits the potential damage in case of an accidental or malicious action.

Separation of Duties (SoD): Ensures that critical tasks are split across users to avoid conflicts of interest and reduce the risk of fraud or error (e.g., separating deployment and monitoring roles).
USER MANAGEMENT

Role-Based Access Control (RBAC)

RBAC is an access control model that assigns permissions to users based on their role within
an organization. It simplifies permission management by grouping permissions into roles rather
than assigning them individually, which can be especially useful in large environments.
Roles: RBAC groups permissions into roles (e.g., Developer, Administrator, Tester), and users are assigned roles rather than individual permissions. A role represents a function in the organization.

Permissions: Each role has a defined set of permissions associated with it, determining what actions users with that role can perform.

Role Assignment: Users are assigned to roles based on their job responsibilities. This simplifies management since permissions need updating only at the role level.

Dynamic Role Adjustment: As users change roles (e.g., a developer becoming a lead), their permissions are automatically adjusted by assigning them to a new role, reducing the risk of "permission creep."

Role Hierarchy: Roles can be organized hierarchically, where higher roles inherit the permissions of lower roles. For instance, a "Senior Developer" role may have all permissions of a "Developer" role plus additional privileges.
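A role hierarchy like "Senior Developer inherits Developer" can be sketched in a few lines; the role and permission names here are invented for illustration:

```python
# Minimal RBAC with role hierarchy: a role inherits the permissions of the
# role below it. Role and permission names are illustrative.

PERMISSIONS = {
    "developer": {"read_code", "push_code"},
    "senior_developer": {"approve_merge"},  # plus everything a developer has
}
INHERITS = {"senior_developer": "developer"}

def effective_permissions(role: str) -> set:
    """Walk up the inheritance chain, collecting permissions along the way."""
    perms = set()
    while role:
        perms |= PERMISSIONS.get(role, set())
        role = INHERITS.get(role)
    return perms

print(effective_permissions("senior_developer"))
```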
USER MANAGEMENT

User vs Group

In DevOps, understanding the distinction between users and groups is essential for managing
permissions, ensuring security, and promoting efficient collaboration.

USER GROUP
USER MANAGEMENT

User vs Group

USER A user is an individual entity with a unique identity on a system. Each user
represents a person (e.g., developer, sysadmin) or a process (e.g.,
automated deployment bot) that can interact with the system. Users are
typically assigned specific permissions to control what they can access and
what actions they can perform.

● User ID (UID): Each user is assigned a unique identifier, which is used by the system to manage and control
access.
● Home Directory: Users often have their own directories where they can store personal files and configuration
settings.
● Ownership and Permissions: Users can own files and processes, and permissions can be set on resources
to control what users can do (e.g., read, write, execute).
● Authentication and Security: Users authenticate using passwords, SSH keys, or other methods, and each
user’s actions are tracked for security and auditing purposes.

Example Use Case: In a deployment pipeline, each developer has their own user account to access specific servers,
repositories, or configurations. Permissions assigned to each user control their access to only the resources they
need.
USER MANAGEMENT

User vs Group

GROUP
A group is a collection of users that share similar permissions, making it
easier to manage access control for multiple users at once. Groups are
especially useful for organizing users with similar roles or responsibilities
(e.g., developers, testers, admins).

● Group ID (GID): Each group has a unique identifier, similar to user IDs, used to manage and assign
permissions to multiple users.
● Shared Permissions: Permissions set on a group allow all members of that group to inherit access to specific
files, directories, or resources.
● Efficient Access Management: Instead of managing permissions for each user individually, you can set
permissions for a group. This ensures consistency and simplifies permission changes.
● Collaboration and Access Control: Group-based permissions facilitate collaboration by ensuring that all team
members have access to necessary resources without compromising security.

Example Use Case: In a DevOps environment, a “deploy” group can be created for team members responsible for
deployments. Assigning deployment permissions to this group ensures that all members have the same level of
access, reducing the need to manage permissions individually.
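The "deploy" group use case above boils down to: permissions attach to groups, and users inherit them through membership. A small sketch with invented names:

```python
# Group-based access: permissions attach to groups, users inherit them
# through membership. Group, user, and permission names are illustrative.

GROUP_MEMBERS = {"deploy": {"alice", "bob"}, "audit": {"carol"}}
GROUP_PERMISSIONS = {"deploy": {"deploy_app"}, "audit": {"view_logs"}}

def user_permissions(user: str) -> set:
    """Union of permissions from every group the user belongs to."""
    return {perm
            for group, members in GROUP_MEMBERS.items()
            if user in members
            for perm in GROUP_PERMISSIONS[group]}

print(user_permissions("alice"), user_permissions("carol"))
```

Granting "bob" audit access is then a one-line membership change, not a per-user permission edit.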
USER MANAGEMENT

User vs Group

In DevOps, understanding the distinction between users and groups is essential for managing
permissions, ensuring security, and promoting efficient collaboration.

GROUP
USER
USER MANAGEMENT

User vs Group vs Role

GROUP
GROUP
USER

User: An individual person or account that logs into a system. Each user has their own unique identity (username) and can have specific permissions.

Group: A collection of users. Groups are used to assign the same permissions to multiple users at once, making it easier to manage access for similar types of users (like a team).

Role: A set of permissions or actions that users or groups can perform. Instead of assigning permissions directly to a user, you give them a role that has the needed permissions for their job (like "admin" or "viewer").
USER MANAGEMENT

User vs Group vs Role

GROUP
USER GROUP

User = one person/account.


Group = a collection of users.
Role = a set of permissions assigned
based on job duties.
USER MANAGEMENT

User vs Service account

● User: Represents a real person who logs into a system. Each user has a username, password, and permissions based on their role or group. Users typically interact with the system directly and require authentication like passwords or MFA (multi-factor authentication) to access resources.
○ Purpose: Used by people to perform actions
on the system.
○ Example: A developer logging into a cloud
console to deploy code.
● Service Account: Represents a machine or application,
not a person. Service accounts are used by software or
automated processes to interact with systems without
human intervention. They have their own permissions
and credentials (like API keys or tokens) and are often
limited to the specific actions they need to perform.
○ Purpose: Used by applications or
automated tasks to perform specific
functions.
○ Example: A CI/CD pipeline using a service
account to deploy code automatically.

In short:

● User = Real person accessing the system.


● Service Account = Non-human account used by
programs or automated tasks.
AUTH & AUTHZ

As a DevOps Engineer do I need to know Authentication and Authorization?
AUTH & AUTHZ

Yes, as a DevOps engineer, understanding both authentication and


authorization is essential because these concepts are foundational
for securing systems, managing access, and ensuring smooth
operations in any environment.

Authentication (AuthN)

Authorization (AuthZ)
AUTH & AUTHZ

Authentication (AuthN)

● Definition: Authentication is the process of verifying who a user or entity is. It answers
the question, “Are you who you claim to be?”
● Importance in DevOps:
○ Ensures only legitimate users or services can access resources, especially in
cloud, CI/CD, and production environments.
○ Common methods include passwords, multi-factor authentication (MFA), tokens,
and certificates.
○ As a DevOps engineer, you may set up and manage authentication mechanisms
for secure access to tools, repositories, servers, and cloud platforms.
● Example: Configuring OAuth, LDAP, or Single Sign-On (SSO) for accessing systems
like Jenkins, Kubernetes, or AWS.
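The mechanics behind token-based authentication can be sketched with a signed token: the server signs an identity with a secret, and later verifies the signature before trusting it. This is a simplified stand-in for real mechanisms like OAuth tokens or session cookies, not a production design:

```python
# Toy token authentication: sign a username with a shared secret (HMAC),
# then verify the signature before trusting the identity.
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical; never hardcode real secrets

def issue_token(username: str) -> str:
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def authenticate(token: str) -> bool:
    username, _, sig = token.partition(":")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = issue_token("alice")
print(authenticate(token))             # valid token
print(authenticate("mallory:forged"))  # tampered token is rejected
```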
AUTH & AUTHZ

Authorization (AuthZ)

● Definition: Authorization is the process of determining what an authenticated user or


entity is allowed to do. It answers, “What are you allowed to do?”
● Importance in DevOps:
○ Ensures users only access resources necessary for their role, reducing the risk
of accidental or malicious changes.
○ Authorization policies define access control (e.g., who can deploy, access
production, view logs).
○ DevOps engineers often work with tools like IAM (Identity and Access
Management) to set permissions, roles, and access policies for users and
service accounts.
● Example: Using AWS IAM policies to grant specific permissions to a role, or defining
RBAC (Role-Based Access Control) in Kubernetes.
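The shape of IAM-style authorization can be sketched as a list of allow/deny statements evaluated per request, where an explicit deny always wins. The statements below are invented for illustration and are not real AWS policy syntax:

```python
# Simplified IAM-style authorization: statements grant or deny actions on
# resources, and an explicit Deny overrides any Allow (as in AWS IAM).

policy = [
    {"effect": "Allow", "action": "s3:GetObject", "resource": "logs-bucket"},
    {"effect": "Allow", "action": "s3:PutObject", "resource": "logs-bucket"},
    {"effect": "Deny",  "action": "s3:PutObject", "resource": "logs-bucket"},
]

def is_allowed(action: str, resource: str) -> bool:
    decision = False  # default deny: nothing is allowed unless granted
    for stmt in policy:
        if stmt["action"] == action and stmt["resource"] == resource:
            if stmt["effect"] == "Deny":
                return False  # explicit deny overrides any allow
            decision = True
    return decision

print(is_allowed("s3:GetObject", "logs-bucket"))  # allowed
print(is_allowed("s3:PutObject", "logs-bucket"))  # blocked by explicit deny
```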
