Pooja

Pooja Patel is a seasoned professional with over 12 years of experience in designing and managing scalable infrastructure environments, specializing in cloud computing, CI/CD pipelines, and data management. She has a strong background in AWS and Azure, along with expertise in machine learning and data engineering, demonstrated through various leadership roles in companies like IBM and Wipro. Pooja holds multiple certifications in data science, cloud architecture, and project management, showcasing her commitment to continuous professional development.


POOJA PATEL

8826621116
[email protected]

Professional Summary:
 Over 12 years of experience in designing, automating, and managing scalable, secure, and highly available infrastructure environments. Expertise
in CI/CD pipelines, infrastructure as code, cloud computing, and container orchestration. Adept at working in cross-functional teams to build,
deploy, and maintain applications in dynamic environments. Proven track record of optimizing performance, improving system reliability, and
reducing deployment times. Brings strong presentation, analytical, and problem-solving skills.
 Experience in designing, developing, and maintaining data pipelines for large-scale, mission-critical production systems using Databricks, AWS,
Spark, machine learning, AI, and deep learning.
 Experience in architecting and designing applications, creating multi-tier architectures following microservices and service-oriented architectural
principles, business development, and collaboration with technical teams in cloud environments. Experience in configuring Continuous Integration
(CI) servers such as Jenkins, SonarQube, and CodePipeline. High-level understanding of the Amazon Web Services global infrastructure and service
migrations, cloud orchestration and automation, security, identity and access management, monitoring and configuration, governance and compliance,
application delivery, data protection, and image and patch management, while focusing on core business priorities.
 Well versed with AWS services: EC2, ECS, CloudFront, Auto Scaling, CloudFormation, CloudTrail, ELB, and SQS.
 Experience in upstream/downstream marketing and product management within the medical device industry. Demonstrated history of analyzing
client needs and translating customer requirements into innovative, value-added product solutions. Collaborative leader skilled at communicating
with people from diverse backgrounds and at all levels. Utilizes a unique blend of strategic, technical, clinical, and marketing skills to drive cross-
functional teams, leading to successful launches of highly technical products.
 Consistently optimizes and improves NLP systems by evaluating strategies and testing changes in machine learning models.
 Leading Data Management and Governance initiatives to ensure optimal data quality, security, and compliance.
 Created and maintained data pipelines in Azure Data Factory using Linked Services to ETL data from different sources such as Azure SQL, Blob Storage,
ADLS, and Azure SQL Data Warehouse. Strong understanding of Hadoop architecture, Hadoop clusters, HDFS, Job Tracker, Task Tracker, NameNode,
DataNode, MapReduce, and Spark. Created and trained models in ML, deep learning, computer vision, and NLP.
 Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading
nested JSON-formatted data into Snowflake tables.
 Proven track record in mutual funds, managed accounts, and separate accounts.
 Experience with Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure analytical services, big data
technologies (Apache Spark), and Databricks.
 Experienced in working with Amazon Web Services (AWS), including Auto Scaling, DynamoDB, Route 53, EC2 for computing, S3 for storage, EMR, and
CloudWatch to run and monitor Spark jobs, with a strong understanding of machine learning and statistics.
 Hands-on experience in migrating on-premises ETLs to Google Cloud Platform (GCP) using BigQuery, Cloud Storage, Dataproc, and Composer.
 Experience in writing MapReduce programs using Apache Hadoop for analyzing big data.
 Hands-on experience in writing ad-hoc queries for moving data from HDFS to Hive and analyzing the data using HiveQL.
 Experience in designing, developing, testing and maintaining BI applications and ETL applications.
 Extensively worked on Spark with Scala on clusters for analytics; installed it on top of Hadoop and performed advanced analytical
applications using Spark with Hive and SQL/Oracle/Snowflake.
 Expertise in Python data extraction and manipulation, with wide use of Python libraries such as NumPy, Pandas, and Matplotlib for data analysis.
 Built ETL pipelines in and out of data warehouses using a combination of Python and Snowflake's SnowSQL, writing SQL queries against Snowflake.
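The extract-transform-load pattern behind pipelines like these can be sketched in a few lines of Python. This is a minimal illustration only, using the standard library's sqlite3 as a stand-in for a warehouse connection (the real pipelines would use a Snowflake connector and SnowSQL-style statements); table and field names are hypothetical.

```python
import json
import sqlite3

def extract(raw_lines):
    """Extract: parse one JSON record per line."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Transform: keep and clean the fields the target table expects."""
    return [(r["id"], r["name"].strip().title()) for r in records]

def load(rows, conn):
    """Load: bulk-insert the transformed rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    raw = ['{"id": 1, "name": "  ada lovelace "}', '{"id": 2, "name": "alan turing"}']
    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw)), conn)
    print(conn.execute("SELECT * FROM customers ORDER BY id").fetchall())
    # → [(1, 'Ada Lovelace'), (2, 'Alan Turing')]
```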
 Played a key role in migrating Teradata objects into the Snowflake environment on AWS.
 Created a connection from Azure to an on-premises data center using Azure ExpressRoute for single- and multi-subscription setups.
 Experience in streaming data using Kafka as a platform, both in batches and in real time.
 Hands-on experience with the Snowflake data warehouse: created schemas, tables, and views, and improved performance by optimizing the
views used for data validations.
 Scheduled jobs to automate the data pipelines using Airflow, Oozie, and Control-M.
 Implemented data movement from file systems to Azure Blob Storage using Python APIs.
 Wrote a Kafka consumer to move data from Adobe clickstream JSON objects to the data lake.
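The consumer-side transform for a clickstream feed like this typically flattens each nested JSON payload into a flat record before landing it in the data lake. A minimal sketch: the flatten step is pure Python, while the consumer loop (commented out) assumes the kafka-python package; topic and field names are illustrative, not from the actual system.

```python
import json

def flatten_event(message_value: bytes) -> dict:
    """Turn a nested clickstream JSON payload into a flat record for the data lake."""
    event = json.loads(message_value)
    return {
        "visitor_id": event["visitor"]["id"],   # hypothetical field names
        "page_url": event["page"]["url"],
        "timestamp": event["timestamp"],
    }

# Consumer loop sketch (requires kafka-python and a running broker):
# from kafka import KafkaConsumer
# consumer = KafkaConsumer("adobe-clickstream", bootstrap_servers="broker:9092")
# for msg in consumer:
#     record = flatten_event(msg.value)
#     # append `record` to the data lake, e.g. partitioned files on S3/ADLS
```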
 Experience working with file formats such as text, CSV, JSON, SequenceFile, Parquet, and Avro.
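Two of those formats can be converted between with only the standard library; a small sketch (JSON Lines in, CSV out). Parquet and Avro would need third-party packages such as pyarrow or fastavro, which are not shown here.

```python
import csv
import io
import json

def jsonl_to_csv(jsonl_text: str) -> str:
    """Convert JSON Lines text (one object per line) to CSV with a header row."""
    rows = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(jsonl_to_csv('{"a": 1, "b": 2}\n{"a": 3, "b": 4}'))
```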
 Experience with list comprehensions and Python built-ins such as map, filter, and lambda expressions.
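Those constructs, side by side on the same toy task (squares of the even numbers in a list):

```python
nums = [1, 2, 3, 4, 5, 6]

# map + filter + lambda
squares_fn = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

# equivalent list comprehension (usually preferred as more readable)
squares_lc = [n * n for n in nums if n % 2 == 0]

print(squares_fn)  # → [4, 16, 36]
print(squares_lc)  # → [4, 16, 36]
```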
 Knowledge of tools like Snowflake, SSIS, SSAS, SSRS to design warehousing applications.
 Expertise in using Sqoop & Spark to load data from MySQL/Oracle to HDFS or HBase.

 Proficient in the integration of various data sources, including multiple relational databases (Oracle 11g/10g/9i, Sybase 12.5, Teradata) and
flat files, into the staging area, data warehouse, and data marts.
Core Competencies
 DevOps Tools: Jenkins, GitLab CI, CircleCI, Bamboo, Travis CI
 Version Control: Git, GitHub, Bitbucket, GitLab
 Languages: Python, SQL

 Cloud Platforms: AWS (EC2, Lambda, S3, RDS, EKS), Azure, Google Cloud (GCP)
 Infrastructure as Code (IaC): Terraform, CloudFormation, Ansible, Chef, Puppet
 Containerization: Docker, Kubernetes, Docker Swarm
 Monitoring & Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Nagios, Splunk
 Scripting & Automation: Bash, Python, Shell scripting, PowerShell
 Databases: MySQL, PostgreSQL, MongoDB, Redis
 Configuration Management: Ansible, Puppet, Chef
 Operating Systems: Linux (CentOS, Ubuntu), Windows
 Networking & Security: VPNs, Load Balancing, Firewalls, Security Groups
 CI/CD Practices: Automated testing, Blue/Green deployment, Canary Releases, Rolling updates
 Agile Methodologies: Scrum, Kanban
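Of the CI/CD practices listed above, canary releases hinge on a deterministic routing decision: a stable hash of the user ID sends a fixed percentage of traffic to the new version, so a given user always lands on the same side. A minimal sketch; the percentage and version names are illustrative, not from any specific deployment described here.

```python
import hashlib

def route(user_id: str, canary_percent: int = 10) -> str:
    """Deterministically route a user to the canary or stable version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# The same user is always routed consistently:
print(route("user-42") == route("user-42"))  # → True
```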

Certifications:
 IBM Data Science Professional Certificate: IBM
 Databricks Certified Data Engineer Associate: Databricks
 Certified ScrumMaster (CSM): Scrum Alliance
 AWS fundamentals certification
 AWS Certified Solutions Architect – Associate
 Certified Kubernetes Administrator (CKA)
 HashiCorp Certified: Terraform Associate
 Google Cloud Professional DevOps Engineer (in progress)
 AWS Certified Machine Learning – Specialty
 AWS Architect certified
 Certified: CSM, CSPO, CSP, SAFe Agilist
 Project Management Professional (PMP)
 Agile Certified Practitioner (PMI-ACP)
 Professional Scrum Master (PSM I)
 Scrum Product Owner Accredited Certification (SPOC)
 Project Management Methodologies
 MS Project Expert Certification

Projects
CI/CD Pipeline Automation for Multi-Environment Deployments
 Led a project to design and implement a multi-environment CI/CD pipeline using Jenkins and Docker. The solution reduced the time taken to
deploy code to production from 3 days to 1 hour, improving development velocity significantly.
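The Jenkins pipeline itself would be defined in a Groovy Jenkinsfile; this Python sketch only illustrates the fail-fast, stage-by-stage structure such a multi-environment pipeline follows (build, test, then deploy per environment). Stage names and the stub stage functions are hypothetical.

```python
def run_pipeline(stages):
    """Run stages in order; stop at the first failure, like a CI pipeline."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # (finished stages, failed stage)
        completed.append(name)
    return completed, None  # all stages passed

stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("deploy-staging", lambda: True),
    ("deploy-prod", lambda: True),
]
print(run_pipeline(stages))
# → (['build', 'unit-tests', 'deploy-staging', 'deploy-prod'], None)
```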
Cloud Infrastructure Cost Optimization
 Conducted an in-depth analysis of the company’s AWS usage and implemented strategies to reduce costs by 30%, including instance
rightsizing, using reserved instances, and optimizing resource allocation.

Professional Experience:

IBM, Senior Manager – Jan 2024 to present


Responsibilities:

 Lead the design, implementation, and maintenance of highly available and scalable infrastructure on AWS, managing over 50 microservices in
Kubernetes.
 Architected and automated CI/CD pipelines using Jenkins, GitLab CI, and Docker to streamline build and deployment processes, reducing release
cycles by 40%.
 Led the day-to-day management of a non-production environment support team, ensuring stability, efficiency, and alignment with operational
goals.
 Drove the environment stability agenda, implementing strategies to improve system performance and minimize downtime.
 Spearheaded automation adoption within the team, streamlining processes and improving operational efficiency across multiple environments.
 Directed end-to-end projects, from planning and designing to implementation and go-live, ensuring successful delivery of initiatives within the
designated timeframes.
 Collaborated with stakeholders and global team members to gather complex requirements, fostering an inclusive and positive team environment.
 Delivered strong project management, ensuring successful execution of initiatives and maintaining clear communication with stakeholders.
 Developed and maintained Python scripts for automation, orchestration, and system monitoring.
 Leveraged Unix/Linux systems to optimize infrastructure, troubleshoot issues, and enhance operational performance.
 Implemented CI/CD pipelines using Jenkins and Ansible, enabling continuous integration and deployment across environments.
 Deployed and managed containerized applications in a Kubernetes cluster, improving deployment speed and scalability.
 Developed and maintained Infrastructure as Code using Terraform, enabling consistent and reproducible environments for development, staging,
and production.
 Implemented automated monitoring and alerting using Prometheus, Grafana, and AWS CloudWatch, improving system uptime by 25%.
 Collaborated closely with development teams to troubleshoot, debug, and optimize application performance, leading to a 30% improvement in
overall system stability.

 Integrated security best practices into the CI/CD pipeline, ensuring secure code deployment and reducing security vulnerabilities.
 Conducted training sessions for junior DevOps engineers, fostering knowledge-sharing and improving team capabilities.

Wipro, DevOps Lead/Manager – 05/2020 to 04/2023

 Built and maintained CI/CD pipelines using Jenkins and GitLab CI to automate software delivery and infrastructure provisioning, reducing manual
intervention by 50%.
 Managed cloud infrastructure on AWS, including EC2, S3, RDS, and VPC, optimizing resource allocation and reducing costs by 20%.
 Automated infrastructure provisioning and configuration management using Terraform and Ansible, ensuring consistency across all environments.
 Led the migration of legacy applications to microservices-based architectures, significantly improving system scalability and reliability.
 Monitored production environments and resolved performance bottlenecks using New Relic and AWS CloudWatch.
 Participated in on-call rotation, providing incident response, troubleshooting, and post-mortem analysis to ensure system availability.

Standard Chartered Bank – 05/2017 to 04/2020

 Led development initiatives and managed complex projects from concept through delivery.
 Collaborated closely with stakeholders to understand business requirements and translate them into technical solutions.
 Developed and maintained scripts in Python and Shell to automate manual processes and streamline development workflows.
 Utilized RDBMS (Sybase/DB2) for database management and wrote SQL queries to extract and manipulate data for application integrations.
 Provided hands-on technical support for Unix/Linux-based systems, ensuring operational stability and optimizing system performance.
 Implemented key DevOps practices including continuous integration, automated testing, and environment configuration management.
 Worked with global teams to ensure alignment between development, testing, and production environments.

HSBC Bank, Senior Manager – 08/2012 to 04/2017

 Managed cloud infrastructure on AWS, including EC2 instances, Lambda functions, and RDS databases, achieving a 99.9% uptime.
 Implemented Docker and Kubernetes for container orchestration, improving deployment speed and resource management.
 Built and maintained automation scripts for server provisioning, patching, and configuration management.
 Developed monitoring dashboards with Grafana and Prometheus to track infrastructure health and alert on failures, reducing response time to
incidents.
 Collaborated with the development team to build automated testing and release pipelines, improving code quality and reducing deployment errors.
 Managed database backups, replication, and failover strategies to ensure data integrity and disaster recovery.

Education Details
DAVV IBS, Indore
MBA in Business Decision Making, Feb 2012

DAVV University – PIMR, Indore, M.P.
BBA – 70%, Feb 2010

