DevOps Lead


My name is Pooja Patel. With over 13 years of experience in the IT industry, I have had the privilege of working across various domains, from system administration and software development to cloud infrastructure and continuous integration/continuous deployment (CI/CD) pipelines.

I started my career with HSBC as a DevOps engineer, where the bank wanted to develop a secure web application for online banking. The DevOps team used GitHub to manage the codebase, implemented CI/CD (Continuous Integration/Continuous Deployment), and automated testing and deployment.

After that, I worked with Standard Chartered as a senior DevOps engineer on the security and compliance team, using DevOps automation to deploy Lambda APIs across accounts and environments. In that role we deployed AWS Lambda serverless API code into pre-production and production environments, building on the process described in "Building a CI/CD pipeline for cross-account deployment of an AWS Lambda API with the Serverless Framework" by programmatically automating the deployment of Amazon API Gateway using CloudFormation templates.
Currently I am working with Wipro, where I lead the DevOps practice for the organization, driving collaboration between our development and operations teams to ensure seamless, reliable delivery of our software products.

I specialize in streamlining and automating development and deployment processes to enhance efficiency and reliability. My team and I focus on bridging the gap between development and operations, ensuring seamless integration and continuous delivery of high-quality software. Throughout my career, I've led various initiatives that have significantly reduced deployment times, increased system uptime, and improved overall developer productivity. I'm passionate about fostering a culture of collaboration and innovation, leveraging cutting-edge tools and technologies to drive our organization's success. I'm excited to continue pushing the boundaries of what's possible in DevOps and supporting our teams in delivering exceptional results.

My strong technical expertise covers CI/CD, Python, SQL, Docker, Kubernetes, Terraform, Git, cloud platforms, Linux, Ansible, and Jenkins.

End-to-End AWS DevOps Pipeline
Overview
Objective: Cardholder and merchant feature store generation. Build a shared service that provides cardholder/merchant insights. This service enables financial institutions to increase spend on the different types of cards (credit and debit) offered to their customers by providing a set of model predictions, spend details, and scores.

**1. Planning and Requirements Gathering:


 Tools: AWS CodeCommit or Jira for managing tasks,
user stories, and requirements.
1. Source Data Management
Amazon S3: Store Raw Data Files
 Create an S3 Bucket:
o Use the AWS Management Console, AWS
CLI, or Infrastructure as Code (IaC) tools like
AWS CloudFormation to create a new S3
bucket.
o Define appropriate permissions to control
access to the bucket.
 Upload Raw Data:
o Store raw data files (CSV, JSON, etc.) in the
bucket. These files could come from various
sources like IoT devices, user uploads, or third-
party data providers.
 Organize Data:

o Use a folder structure or naming conventions
(e.g., date-based or source-based) for easier
management and retrieval.
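
As a minimal sketch of this source-data step (the bucket name, region, and file name are placeholders, and AWS credentials are assumed to be configured already), the bucket creation and a date-based upload could look like this with boto3:

import datetime
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Create the raw-data bucket (bucket names must be globally unique).
bucket = "feature-store-raw-data-demo"
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Upload a raw file under a date-based prefix for easier management and retrieval.
today = datetime.date.today().strftime("%Y/%m/%d")
s3.upload_file(
    Filename="transactions.csv",
    Bucket=bucket,
    Key=f"raw/cardholder/{today}/transactions.csv",
)

The same layout can also be expressed in a CloudFormation template; the boto3 version is shown only to keep the sketch short.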
2. Trigger Pipeline
AWS Lambda: Trigger on S3 Event Notifications
 Configure S3 Event Notifications:
o Set up event notifications on the S3 bucket to
trigger on specific events, such as PUT (when
a new object is created) or POST (when an
object is uploaded).
o This configuration can be done through the
AWS Management Console or via the AWS
SDK.
 Create AWS Lambda Function:
o Write a Lambda function that processes the
uploaded data. This function can be written in
Python, Node.js, or any supported language.
o The function can include logic to:
 Validate the incoming data.
 Trigger AWS Glue jobs or other
processing workflows.
 IAM Role:
o Ensure that the Lambda function has an
appropriate IAM role with permissions to read
from the S3 bucket and trigger Glue jobs.
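
A minimal sketch of such a trigger function in Python, assuming its IAM role allows s3:GetObject and glue:StartJobRun; the Glue job name is a placeholder, and "validation" here just means checking the file type:

import urllib.parse
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; validates the object and starts a Glue job."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Simple validation: only process expected raw-data formats.
        if not key.endswith((".csv", ".json")):
            print(f"Skipping unsupported file: s3://{bucket}/{key}")
            continue

        # Trigger the downstream ETL job (job name is a placeholder).
        response = glue.start_job_run(
            JobName="cardholder-feature-etl",
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
        print(f"Started Glue job run {response['JobRunId']} for s3://{bucket}/{key}")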
3. Data Transformation
AWS Glue/Athena: ETL Processes or Querying Data
 Using AWS Glue:
o Create a Glue Crawler:
 Set up a crawler to scan the S3 bucket
for new data and create a data catalog.
This catalog contains metadata about
the data, which can be used for ETL
jobs.
o Define ETL Jobs:
 Create Glue ETL jobs using the AWS
Glue Studio or by writing custom scripts
in Python or Scala.
 The ETL job can read data from S3,
perform transformations (like filtering,
joining, or aggregating), and write the
transformed data back to S3 or a data
warehouse like Amazon Redshift.
 Using Amazon Athena:

o If immediate querying is needed, you can set
up Amazon Athena, which allows you to run
SQL queries directly against data stored in S3.
o Create tables in Athena that point to the S3
location of your raw data and use standard
SQL queries to analyze it.
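
For the Athena path, a small Python sketch that runs a query against the cataloged raw data; the database, table, and results bucket are placeholders for whatever the Glue crawler registered:

import time
import boto3

athena = boto3.client("athena")

# Query spend by card type from the raw-data table registered by the Glue crawler.
query = """
    SELECT card_type, SUM(amount) AS total_spend
    FROM raw_transactions
    GROUP BY card_type
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "feature_store_raw"},
    ResultConfiguration={"OutputLocation": "s3://feature-store-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])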
4. Build and Deploy
AWS CodePipeline: Automate the Process

2. Development:
 Code Repository: Developers use AWS CodeCommit, a managed source control service that hosts Git repositories.
 Branching Strategy: Implement GitFlow or another
branching strategy for managing feature
development, bug fixes, and releases.
 Create a CodePipeline:
o Use AWS CodePipeline to define an end-to-
end workflow that includes stages for building,
testing, and deploying your data processing
jobs.
 Define Pipeline Stages:
o Source Stage: Integrate with the S3 bucket to
pull the latest data files. You can also use AWS
CodeCommit if you're managing ETL scripts.
 Build Stage: Set up AWS CodeBuild if you need to package scripts or applications. (Build Automation: Use AWS CodeBuild to automatically build the application whenever code is committed; it compiles source code, runs tests, and produces software packages.) Code Quality Checks: Integrate tools like SonarQube with AWS CodeBuild to enforce coding standards and perform static code analysis.

o Transformation Stage: Use Lambda or Glue
as part of this stage to perform ETL or trigger
Glue jobs.
 Testing Stage: Optional, but you can include testing of your scripts or transformation results to validate the accuracy of your ETL jobs. (Unit Testing: Run automated unit tests using frameworks like JUnit or PyTest within AWS CodeBuild; a short PyTest sketch follows the Notifications bullet below.)

o Deploy Stage: Deploy the transformed data to
a destination such as another S3 bucket, a
database, or a data warehouse.
 Notifications:
o Set up Amazon SNS or AWS CloudWatch
Events to notify relevant stakeholders on
pipeline successes, failures, or other events.
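
As referenced in the Testing Stage above, here is a minimal PyTest sketch that CodeBuild can run with the pytest command. The transform function is a toy stand-in for the real ETL logic, not the actual feature-store code:

# test_transform.py - executed by "pytest" in the CodeBuild test phase.
import pytest

def add_spend_score(record):
    """Toy transform under test: derive a 0-100 spend score from monthly spend."""
    score = min(100, int(record["monthly_spend"] / 100))
    return {**record, "spend_score": score}

def test_score_is_capped_at_100():
    assert add_spend_score({"monthly_spend": 250000})["spend_score"] == 100

def test_score_scales_with_spend():
    assert add_spend_score({"monthly_spend": 4500})["spend_score"] == 45

def test_missing_spend_raises():
    with pytest.raises(KeyError):
        add_spend_score({})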
Monitoring and Maintenance
 CloudWatch Logs: Use CloudWatch to monitor the
logs generated by Lambda functions and Glue jobs
for debugging and performance tuning.
 Cost Management: Keep an eye on costs associated
with S3 storage, Lambda execution, and Glue jobs to
optimize the pipeline.
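
A small sketch that complements the CloudWatch monitoring above: an alarm that notifies an SNS alerts topic whenever the trigger Lambda reports errors. The function name, account ID, and topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the trigger Lambda reports any errors within a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="feature-store-trigger-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "feature-store-trigger"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:pipeline-alerts"],
)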
This detailed breakdown provides a comprehensive view of
setting up a data processing pipeline using AWS services,
facilitating efficient data ingestion, transformation, and
analysis.
---------------------------------------------------------------------------------
------------------------------
**4. Continuous Testing:
 Integration Testing: Use AWS CodePipeline to
automate integration testing. Integration tests
ensure that different modules work together as
intended.
 Security Testing: Use tools like AWS Inspector for
automated security assessments of your
applications.
 Performance Testing: Use AWS CloudWatch for
performance monitoring and AWS Performance
Insights for database performance tuning.
**5. Continuous Deployment (CD):
 Artifact Repository: Store built artifacts in AWS S3
or AWS CodeArtifact.
 Deployment Automation: Use AWS CloudFormation
or AWS CDK (Cloud Development Kit) to define
and provision infrastructure as code.
 Containerization: Use Amazon ECR (Elastic
Container Registry) for storing Docker images.
 Orchestration: Deploy containerized applications
using Amazon ECS (Elastic Container Service) or
Amazon EKS (Elastic Kubernetes Service).
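
A minimal AWS CDK (Python) sketch of the deployment-automation and containerization pieces above, assuming aws-cdk-lib v2 is installed; the stack, bucket, and repository names are placeholders:

from aws_cdk import App, Stack, aws_ecr as ecr, aws_s3 as s3
from constructs import Construct

class DeploymentStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Artifact bucket for built packages.
        s3.Bucket(self, "ArtifactBucket", versioned=True)

        # ECR repository for the service's Docker images.
        ecr.Repository(self, "ServiceRepo", repository_name="feature-store-service")

app = App()
DeploymentStack(app, "FeatureStorePipelineStack")
app.synth()

Running cdk deploy against this app synthesizes a CloudFormation template and provisions the resources, which keeps the infrastructure definition versioned alongside the application code.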
**6. Environment Management:

 Dev/Test Environments: Use separate AWS
accounts or VPCs for development, testing, and
staging environments to ensure isolation and
consistency.
 Configuration Management: Use AWS Systems
Manager for configuration management across
environments.
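
A small sketch of reading per-environment configuration from AWS Systems Manager Parameter Store; the parameter path and names are placeholders:

import boto3

ssm = boto3.client("ssm")

def get_config(environment: str, name: str) -> str:
    """Read a configuration value stored under a per-environment parameter path."""
    parameter = ssm.get_parameter(
        Name=f"/{environment}/feature-store/{name}",
        WithDecryption=True,  # needed for SecureString parameters
    )
    return parameter["Parameter"]["Value"]

# Example usage: the same code reads dev, test, or prod settings by path.
db_endpoint = get_config("dev", "redshift-endpoint")
print(db_endpoint)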

---------------------------------------------------------------------------------
------------------------------

In a typical DevOps pipeline, the approval of pull requests is usually managed by the following roles:
1. Peer Developers:
o Colleagues working on the same project or
feature review each other's code.
o Ensures that multiple eyes catch potential
issues and maintain code quality.
2. Team Leads or Senior Developers:
o More experienced developers provide in-
depth reviews to ensure adherence to best
practices and project standards.
o They may focus on architectural decisions,
design patterns, and overall code quality.
3. DevOps Engineers:
o Review changes that impact the
deployment pipeline, infrastructure, or
configurations.
o Ensure that any changes comply with
operational and security guidelines.
4. Automated Tools:
o Automated code review tools (like
SonarQube or CodeClimate) can provide
additional checks.
o These tools can enforce coding standards,
detect bugs, and identify security
vulnerabilities.
The specific individuals or groups responsible for approving
pull requests can vary based on the project's size, structure,
and policies. Some organizations also implement mandatory
code review policies where a certain number of approvals
are required before merging a pull request.

————————————

End-to-End Azure DevOps Pipeline


**1. Planning and Requirements Gathering:
 Tools: Azure Boards for managing tasks, user
stories, and project planning.
**2. Development:
 Code Repository: Use Azure Repos for version
control.
 Branching Strategy: Implement a branching strategy
like GitFlow.
**3. Continuous Integration (CI):
 Build Automation: Azure Pipelines automates the
build process.
 Code Quality Checks: Integrate with SonarCloud for
static code analysis.
 Unit Testing: Use frameworks like MSTest, NUnit, or
xUnit, and run tests as part of the CI process.
**4. Continuous Testing:
 Integration Testing: Use Azure Pipelines to
automate testing workflows.
 Security Testing: Integrate with tools like
WhiteSource or Checkmarx for static and dynamic
security tests.
 Performance Testing: Use Azure Load Testing to
evaluate performance under various conditions.
**5. Continuous Deployment (CD):
 Artifact Repository: Use Azure Artifacts for storing
built artifacts.
 Deployment Automation: Use Azure Resource
Manager (ARM) templates or Azure DevTest Labs
for Infrastructure as Code (IaC).
 Containerization: Use Docker to containerize the
application.
 Orchestration: Use Azure Kubernetes Service (AKS)
to manage and scale containerized applications.
**6. Environment Management:
 Dev/Test Environments: Create separate
environments for development, testing, and staging
using Azure Resource Manager (ARM) templates.

 Configuration Management: Use Azure Automation
or Azure Policy to manage configuration across
environments.
**7. Continuous Monitoring:
 Application Monitoring: Use Azure Monitor and
Application Insights to monitor application
performance and availability.
 Log Management: Use Azure Log Analytics to
aggregate and analyze logs.
 Alerting: Configure alerts in Azure Monitor to notify
the team of any issues.
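
A minimal sketch of sending application logs to Application Insights from Python using the opencensus Azure exporter (assumes the opencensus-ext-azure package is installed; the connection string is a placeholder taken from the Application Insights resource):

import logging
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger("feature-store")
logger.setLevel(logging.INFO)

# Ship log records to Application Insights for querying in Log Analytics.
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
))

logger.info("Deployment completed", extra={"custom_dimensions": {"environment": "staging"}})
logger.error("Payment service health check failed")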
**8. Feedback and Iteration:
 User Feedback: Collect feedback using Azure
DevOps Feedback Requests.
 Continuous Improvement: Use Agile methodologies
to iterate on the product, incorporating feedback
and making continuous enhancements.
Example Workflow:
1. Code Commit: A developer commits code to the
Azure Repos repository.
2. Automated Build: Azure Pipelines triggers an
automated build and runs unit tests.
3. Code Quality and Security Checks: SonarCloud
and security tools perform code quality and security
checks.
4. Integration Testing: Automated integration tests
are executed using Azure Pipelines.
5. Artifact Creation: A Docker image is created and
stored in Azure Artifacts.
6. Deployment: ARM templates provision the
necessary infrastructure, and AKS deploys the
Docker container.
7. Monitoring and Alerts: Azure Monitor and
Application Insights monitor the application, and
alerts are sent if any issues arise.
8. Feedback Loop: User feedback is collected, and
the cycle continues with new features and
improvements.
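
To illustrate step 2 of this workflow programmatically, here is a small Python sketch that queues a pipeline run through the Azure DevOps REST API; the organization, project, definition ID, and personal access token are placeholders, and the requests package is assumed:

import requests

ORG = "my-org"
PROJECT = "online-banking"
DEFINITION_ID = 12          # the build/pipeline definition to run
PAT = "<personal-access-token>"

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds?api-version=7.0"

# Queue a new build for the given definition; the PAT is passed via basic auth.
response = requests.post(
    url,
    json={"definition": {"id": DEFINITION_ID}},
    auth=("", PAT),
)
response.raise_for_status()

build = response.json()
print(f"Queued build {build['id']} with status {build['status']}")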

https://www.tekki-gurus.com/use-azure-devops-create-cloud-deployment-pipeline/

https://learn.microsoft.com/en-us/azure/devops/pipelines/architectures/devops-pipelines-baseline-architecture?view=azure-devops

I’m seeking a role where I can have a more significant impact on the organization by
shaping its cloud and DevOps strategy at a higher level. As a Cloud DevOps VP, I would
have the opportunity to influence not just the technical aspects but also the broader
business outcomes, driving innovation and ensuring that the organization stays ahead in
the rapidly evolving cloud landscape.
