
Krishna Adimalla

PROFESSIONAL SUMMARY:
 Over 9 years of experience spanning Kubernetes administration, cloud services management, enterprise applications, and server administration.
 Strong experience in managing Kubernetes environments for scalability, availability and zero downtime.
 Experienced in Kubernetes deployment strategies such as rolling update, canary, and blue-green for delivering application updates, and in creating custom ConfigMaps and Secrets in an encoded format for security.
 Experienced in containerizing applications for deployment on managed Kubernetes services such as EKS and AKS.
 Creating alerts and dashboards for monitoring SLOs/SLIs using Datadog and CloudWatch (a minimal boto3 alarm sketch follows this summary).
 Good experience with Helm charts and Kustomize for composing deployment manifests to deploy K8s objects/microservices.
 Experience with Chaos testing for Network exhaustion, Pod failure, Node failure, High CPU load, Memory
exhaustion, new deployment failures, Horizontal Pod Autoscaling (HPA), Container startup failures and
Dependency failures
 Experienced in infrastructure and application monitoring (observability) tools such as Prometheus/Grafana, Splunk, ELK/OpenSearch, OpenTelemetry, Datadog, New Relic, AWS CloudWatch, and AppDynamics.
 Good understanding of Java, microservices architecture, and distributed data streaming systems such as Kafka.
 Efficient in writing Infrastructure as Code (IaC) with Terraform and AWS CloudFormation, with experience using Ansible and Chef for configuration management.
 Strong CI/CD experience with Git, Jenkins, and Azure DevOps for build automation and deployments, and expertise in tools such as Maven, Ant, and MSBuild for building deployable artifacts from source code repositories.
 Experience working on AWS and its services such as VPC, EC2, IAM, ECS, EBS, RDS, S3, Lambda, ELB, Auto Scaling, Route 53, CloudFront, CloudWatch, CloudTrail, SQS, and SNS.
 Experienced with databases such as MySQL, Oracle, MariaDB, MongoDB, and DynamoDB.
 Good understanding of the OSI model and the TCP/IP protocol suite (IP, ARP, TCP, UDP, SMTP, FTP, and TFTP).
 Good understanding of observability and MELT implementation patterns for large-scale services. Solid understanding of Site Reliability Engineering principles, with a proven record of applying SLAs, SLIs, and SLOs to improve and measure system reliability and efficiency.
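Illustrative sketch for the SLO/SLI alerting bullet above: a minimal example of creating a CloudWatch alarm for a latency SLI with boto3. The namespace, metric, dimensions, threshold, and SNS topic ARN are placeholder assumptions, not values taken from this resume.

```python
import boto3

# Hypothetical example: alarm when an ALB latency SLI breaches an SLO threshold.
# Namespace, metric, dimensions, and the SNS topic ARN are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="checkout-latency-slo-breach",          # placeholder alarm name
    Namespace="AWS/ApplicationELB",                   # standard ALB metric namespace
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],
    Statistic="Average",
    Period=300,                                       # evaluate over 5-minute windows
    EvaluationPeriods=3,                              # 3 consecutive breaches before alarming
    Threshold=0.5,                                    # 500 ms SLO threshold (example value)
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:slo-alerts"],  # placeholder SNS topic
)
```

A Datadog monitor could be defined the same way through its API; CloudWatch is shown here because the alarm is a single, well-known boto3 call.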

TECHNICAL SKILLS:

Container Orchestration: Kubernetes, K3s, Docker, Rancher, Clair, Chaos Monkey, Gremlin, EKS, AKS, Containerd, Istio (Service Mesh)
Monitoring: Prometheus, Grafana, Nagios, CloudWatch, Splunk, Datadog, New Relic
Logging: Elasticsearch, Kibana, Promtail-Loki, Kafka, Fluentd, Logstash
Cloud Technologies: Amazon Web Services (AWS), Azure DevOps
Provisioning Tools: Terraform, OpenTofu, CloudFormation
Configuration Management: Ansible, Chef
Languages: Python, Java, Perl, JSON, YAML, PowerShell, Bash/Shell Scripting
CI/CD Tools: Jenkins, Azure DevOps
Build Tools: Maven, Ant, Gradle
Code Quality: SonarQube
Version Control Tools: Git, Bitbucket, Subversion, GitHub
Databases: MySQL, Oracle, Amazon DynamoDB, MariaDB, MongoDB, Redis
Networking/Protocols: DNS, LDAP, TCP/IP, FTP, HTTP, HTTPS, SSH, SFTP, SCP, SSL
Operating Systems: Linux (RHEL 4/5/6/7), UNIX, Ubuntu, CentOS, Windows
App/Web Servers: Tomcat, Nginx, Apache Web Server, WebLogic, IBM WebSphere
CERTIFICATIONS:
 AWS Certified Solutions Architect Associate:
https://www.credly.com/badges/e39fdf6a-24c4-4ff3-8dcb-7fffe1320ebe/public_url
 Certified Kubernetes Administrator (CKA):
https://www.credly.com/badges/4629f9f1-fddb-4413-97d8-7dab05730158/public_url

WORK EXPERIENCE:
Client: AdvizeX, Cleveland, OH                                                                      Oct 2023 - Present
Role: DevOps/Kubernetes Engineer
Responsibilities:
 Designed and implemented Continuous Integration and Continuous Delivery (CI/CD) pipelines using Git,
Jenkins, Bamboo, and GitHub Actions to automate the build, test, and deployment processes across
development, QA, and production environments.
 Expert in automating deployments on AWS, using IAM to integrate Jenkins with AWS CodePipeline and creating EC2 instances as virtual servers.
 Designed and deployed multiple applications using various AWS services (e.g., EC2, S3, RDS, VPC, IAM,
ELB, EMR, CloudWatch, Route 53, Lambda, and CloudFormation) with a focus on high availability and
fault tolerance.
 Implemented event-driven architectures, integrated different AWS services and SaaS applications,
automated workflows based on events, and created real-time data processing applications.
 Managed high-performance applications requiring robust database features and fault-tolerant, scalable database solutions, and migrated on-premises databases to the cloud using AWS Aurora.
 Implemented AWS Lambda functions to run scripts in response to events in Amazon S3 buckets, using Amazon API Gateway (a minimal handler sketch follows this list).
 Provided operational and maintenance support for AWS cloud resources, including launching,
maintaining, and troubleshooting EC2 instances, S3 buckets, VPCs, ELBs, and RDS.
 Expertise in writing Terraform to create AWS infrastructure, pulling Terraform code from GitHub repositories and working closely with teams to ensure high-quality and timely delivery of builds and releases.
 Built and managed Kubernetes clusters using Terraform to automate infrastructure deployment, configuration, and scaling, increasing efficiency and reducing deployment time; managed and scaled containerized applications using EKS.
 Implemented Kubernetes network and service discovery concepts such as Services and Ingress for
application pods and end users to communicate.
 Managed and configured Kubernetes application Deployment YAML files using kubectl and Helm.
 Implemented interactive, customizable visualizations for experiment tracking using Weights & Biases (W&B), making performance easier to analyze, and used its tracking and management features for experiments, datasets, and models to ensure reproducibility and streamline the ML workflow.
 Created alerts and dashboards for monitoring SLO/SLI using Datadog and CloudWatch, ensuring
proactive incident management.
 Designed and implemented a robust CI/CD pipeline using Jenkins integrated with OpenShift to automate
the build, test, and deployment processes.
 Worked with various Docker components, including Docker Engine, Hub, Machine, Compose, Swarm,
and Docker registry, and created custom Docker container images, tagged them, and pushed them to
Docker Hub.
 Containerized legacy applications and microservices using Docker and deployed them on OpenShift
clusters, enabling scalable and efficient management of applications.
 Experience migrating AWS infrastructure from Elastic Beanstalk to Docker containers orchestrated with Kubernetes.
 Implemented role-based access control (RBAC) and integrated OpenShift with LDAP for secure user management and compliance with regulatory requirements; configured network policies and security groups to isolate sensitive data and ensure secure communication between services, adhering to industry standards.
 Configured Kubernetes for high availability, including pod autoscaling, node affinity, and anti-affinity
rules.
 Performed SRE (Site Reliability Engineer) responsibilities with Observability and Monitoring Expertise.
 Responsible for supporting applications experiencing downtime or degraded performance, ensuring the reliability, availability, and performance of applications in production.
 Involved in disaster recovery strategies and processes to restore service availability and data access following catastrophic events.
 Automated the collection and analysis of metrics, logs, and traces. Set up alerts to notify teams of
anomalies or threshold breaches.
 Implemented Observability and APM by using Prometheus, Splunk, Grafana, Dynatrace, ELK Stack, and
Jaeger.
 Expertise in scheduling regular backups and testing recovery processes to ensure data integrity and system availability.
 Automated deployments and integrated Rancher with the CI/CD pipeline to automatically deploy changes to development and staging environments.
 Experience using AppDynamics, Dynatrace, Kibana, Grafana dashboard and Prometheus Alert Manager
for monitoring the health of Kubernetes and OpenShift nodes.
 Set up Redis with multiple replicas and automatic failover to ensure high availability and reliability, along with monitoring solutions to track Redis performance and health.
 Managed databases including Oracle, MySQL, and DynamoDB, along with routine database server tasks.
 Used Packer and Terraform to automate system operations for deployment automation.
 Experienced in Agile and most recently in CI/CD practices.
 Wrote automation scripts using Bash, JSON, Groovy, Python, and Maven for build automation.
 Documented processes and configurations in Confluence, creating structured spaces for effective
collaboration among different scrum teams.
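A minimal, hypothetical sketch of the Lambda-on-S3-events pattern mentioned above; the bucket contents, processing step, and return shape are illustrative assumptions only, not details of the AdvizeX environment.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Minimal sketch: react to S3 ObjectCreated events delivered to Lambda.

    The processing step is a placeholder; a real function would run whatever
    script or transformation the workflow requires.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Placeholder "processing": read object metadata only.
        head = s3.head_object(Bucket=bucket, Key=key)
        results.append({"bucket": bucket, "key": key, "size": head["ContentLength"]})

    # API Gateway proxy integrations typically expect a statusCode/body pair.
    return {"statusCode": 200, "body": json.dumps(results)}
```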
Environment: Git, SVN, Jenkins, Maven, AWS, Azure, Terraform, Kubernetes (Kops, Kubeadm, AKS,
OpenShift, EKS), ELK Stack (Elasticsearch, Logstash, Kibana), Fluent Bit, Splunk, Prometheus, Grafana,
Datadog, Dynatrace, Ansible, Argo CD, Docker (Engine, Hub, Machine, Compose, Swarm, Registry), AWS
Lambda, Amazon API Gateway, AWS RDS, VPC, ELB, Route 53, Ingress, Bash, JSON, Groovy, Python,
CloudWatch, IAM, AWS CloudFormation, Kubectl, Helm.

UPS, Dallas, TX                                                                                     June 2022 - Sep 2023
Project: ECM (Enterprise Customer Management)
Role: DevOps/Kubernetes Engineer
Responsibilities:
 Proficient in using Jenkins for Continuous Integration and End-to-End automation of build and
deployment processes.
 Utilized EKS to orchestrate Docker Container deployment, scaling, and management.
 Automated infrastructure tasks using Ansible playbooks, including Continuous Deployment, application
server setup, and stack monitoring.
 Automated AWS operations using Python scripting (the boto3 library) to manage AWS resources, coordinate processes and workflows, and package and deploy code (see the boto3 sketch after this list).
 Developed and deployed web applications using serverless technologies such as AWS Lambda.
 Collaborated with Development and IT Operations teams to ensure efficient execution of projects.
 Configured, deployed, and managed packaged Docker containers and images built from Dockerfile code.
 Established a complete DevOps pipeline (GitHub, Jenkins with Maven/Gradle, Ansible, Docker, Kubernetes) on the AWS platform; wrote Shell (Bash) and Python scripts in Jenkins to automate deployment processes.
 Experienced in using Ansible with jinja2 Templating for server configuration, software deployment, and
orchestration of continuous deployments or zero downtime rolling updates.
 Implemented end-to-end ML pipelines: built and managed ML pipelines on Kubernetes and leveraged Kubernetes for scaling ML workflows.
 Experienced in tracking experiments and managing models through integration with various tools, supporting multiple ML frameworks and enabling flexibility in model development and training.
 Installed and configured Prometheus and Alertmanager for Kubernetes cluster monitoring and set up alerts to be sent to PagerDuty and Slack.
 Proficient in Ansible Tower, including dashboard usage, role-based access control (RBAC), and developing Ansible playbooks for managing application OS configuration files in GitHub, integrating with Jenkins, and verifying with Jenkins plugins; also deployed applications in Linux environments.
 Created Ansible manifest files, roles, and profile modules to automate system operations and manage
servers on Microsoft Azure Platform, Azure Virtual Machines, and encrypted data using Ansible vault.
 Well-versed in all phases of the Software Development Life Cycle (SDLC), focusing on quality software
build and release.
 Installed OMS agent as a daemon set on Kubernetes cluster.
 Managed Kubernetes charts using Helm, created reproducible builds of Kubernetes applications, and
managed Kubernetes manifest files and Helm package releases.
 Configured Splunk Forwarder as a daemon set to forward application logs from the Kubernetes cluster to
Splunk.
 Set up a New Relic minion in Azure to monitor application services, wrote JSON scripts, and created APM dashboards to identify bottleneck areas.
 Conducted load tests to analyze environmental health.
 Experienced in creating and maintaining Docker containers and images.
 Wrote API proxies on Apigee Edge platform.
 Created jobs in Azure Databricks to extract data from databases.
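A minimal boto3 sketch of the kind of AWS resource automation described above; the tag key/value, region, and stop action are hypothetical, not details of the ECM project.

```python
import boto3

# Hypothetical housekeeping sketch: stop running EC2 instances that carry a
# given tag. The tag key/value and region are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

def stop_tagged_instances(tag_key="Environment", tag_value="dev"):
    # Page through all running instances that match the tag filter.
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )

    instance_ids = [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_instances())
```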
Environment: AWS, Azure, Azure DevOps, Terraform, Ansible, Docker, Git, Jira, Jenkins, Kubernetes,
OpenShift, Maven, SonarQube, ELK, Java, Shell, Bash, Python, WebSphere, WebLogic, Tomcat, Nginx.

Client: RELX, Ohio (Chennai)                                                                        Oct 2019 - Nov 2021
Role: DevOps/Site Reliability Engineer (SRE)
Responsibilities:
 Implemented Kubernetes architecture, setup, images, jobs, labels and selectors, namespace, node,
service, pod, replication controller and Kubernetes deployments.
 Managed Kubernetes Deployments and objects for high availability and scalability using the Horizontal Pod Autoscaler and resource management.
 Deployed Prometheus with Grafana to monitor the Kubernetes cluster and configured alerts to fire when defined conditions were met.
 Integrated EFK (Elasticsearch, Fluentd, Kibana) stack as the logging solution for the deployed Kubernetes
cluster.
 Created alerts and dashboards for monitoring SLOs/SLIs using Datadog and CloudWatch.
 Wrote infrastructure code using Terraform and built CI/CD pipelines.
 Composed deployment pipelines for Kubernetes using Argo CD.
 Implemented deployment strategies such as rolling update, canary, and blue-green in Kubernetes for delivering application updates, and created custom ConfigMaps and Secrets in an encoded format for security (a minimal Secret sketch follows this list).
 Automated secrets management using HashiCorp Vault to tighten security and enable compliance in DevOps workflows for teams managing sensitive data in cloud-native and containerized environments.
 Benchmarked container and orchestration platform performance using open-source tools such as Sysbench, JMeter, and Apache Bench.
 Deployed Fluent Bit as a DaemonSet on each node and integrated it with Fluentd as the aggregator to manage cluster logging.
 Set up the NGINX Ingress controller to manage ingress/egress routing rules for Kubernetes.
 Performed proof of concepts on various open-source CNCF graduated solutions to test and deploy with
Kubernetes. Performed POC on Istio.
 Set up lightweight metrics and log forwarding with Fluent Bit, Telegraf, and Metricbeat to different output plugins.
 Ensured cluster security through image vulnerability scanning with Twistlock, container runtime security, and orchestration platform security.
 Deployed Jaeger for tracing across the containerized environment for better observability.
 Implemented build pipelines using CircleCI.
 Performed proofs of concept (POC) on cloud-native technologies to integrate with lightweight Kubernetes (K3s).
 Extensively used Bash and Python scripting for task automation.
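A minimal sketch of the encoded ConfigMap/Secret bullet above, using the official Kubernetes Python client; the namespace, secret name, and credential values are placeholders.

```python
import base64

from kubernetes import client, config

# Minimal sketch of creating a base64-encoded Secret with the official
# Kubernetes Python client. Namespace, secret name, and values are placeholders.
def create_app_secret(namespace="default"):
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # The Secret "data" field expects base64-encoded strings.
    data = {
        "DB_USER": base64.b64encode(b"app_user").decode("ascii"),
        "DB_PASSWORD": base64.b64encode(b"example-password").decode("ascii"),
    }

    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="app-db-credentials"),
        type="Opaque",
        data=data,
    )
    return v1.create_namespaced_secret(namespace=namespace, body=secret)

if __name__ == "__main__":
    create_app_secret()
```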
Environment: Kubernetes, Prometheus, Grafana, Fluentd, Elastic, EFK, EKS, AKS, Argo CD, CircleCI,
Metricbeat, Twistlock, Vault, Nginx Ingress, AWS Cloud Infrastructure, Chaos Monkey, Clair, Karpenter, Java
Spring Boot, Python.
Client: Forsys Inc, India Apr 2017-Sep 2019
Project: Global Network Solutions Migration
Role: Site Reliability Engineer
Responsibilities:
 Designed and built scalable Kubernetes clusters on AWS for deploying microservices, improving
application scalability and fault tolerance.
 Conducted Chaos Engineering experiments with Gremlin to proactively identify system weaknesses, improve resilience, and collect metrics.
 Extensively worked on scheduling, deploying, and managing container replicas onto nodes using Kubernetes, and experienced in creating Kubernetes clusters.
 Developed Kubernetes Pod definitions and deployments and used Helm Charts to version control
complete deployment strategies.
 Competent in creating Ansible playbooks, encrypting data using Ansible Vault, and maintaining role-based access control with Ansible Tower to manage web applications and environment configuration files.
 Experience in using Ansible as a Configuration management tool, to automate repetitive tasks, quickly
deploy critical applications, and proactively manage change.
 Installed, Configured, and automated the Jenkins Build jobs for Continuous Integration (CI) and AWS
Deployment pipelines using various plugins like Jenkins EC2 plugin, AWS Code Deploy, AWS S3, and
Jenkins CloudFormation plugin.
 Implemented complete automation in the process of CI/CD from scratch with the help of Jenkins pipeline
and CF templates.
 Experience in setting up Jenkins CI/CD pipelines and integrating build and deployment tools like maven,
npm, Artifactory, SonarQube, Ansible, Groovy, Python, Docker, and Kubernetes.
 Automated infrastructure using Terraform and AWS CloudFormation and used AWS CloudFormation for
updating the stacks.
 Built end-to-end CI/CD pipelines in Jenkins integrating SCM, compiling source code, performing tests, and
pushing build artifacts to Nexus.
 Managed storage in AWS using Elastic Block Storage, S3, created Volumes, and configured Snapshots.
 Implemented monitoring and alerting solutions using Prometheus, Grafana, and the ELK Stack, enabling proactive issue detection and reducing mean time to resolution (a Prometheus query sketch follows this list).
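A small sketch of the Prometheus-based monitoring described above: querying the Prometheus HTTP API for an availability-style SLI. The Prometheus URL, metric, and label names are assumed for illustration only.

```python
import requests

# Hypothetical sketch: query the Prometheus HTTP API for an availability-style
# SLI (ratio of non-5xx requests over 30 days). The URL and metric/label names
# are placeholders, not taken from this project.
PROMETHEUS_URL = "http://prometheus.example.internal:9090"

QUERY = (
    'sum(rate(http_requests_total{code!~"5.."}[30d]))'
    " / sum(rate(http_requests_total[30d]))"
)

def query_sli():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An instant query returns a list of samples; each value is [timestamp, "value"].
    return float(result[0]["value"][1]) if result else None

if __name__ == "__main__":
    print("30-day availability SLI:", query_sli())
```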
Environment & Tools: Subversion, AWS, ClearCase, Jenkins, Java/J2EE, ANT, MAVEN, DB2, UNIX.

Client: Genpact (Formerly CSC), India                                                               June 2015 - Mar 2017
Project: Digital Ally
Role: Linux Administrator
Responsibilities:
 Administered Red Hat Linux 4.x/5 servers for several functions, including managing the Apache
Tomcat server, mail server, MySQL database, and firewalls in both development and production
environments.
 Experience in system administration, system builds, server builds, installs, upgrades, patches, migration,
troubleshooting, security, backup, disaster recovery, performance monitoring and fine tuning on SUN
SOLARIS, Red Hat Linux systems and Windows.
 Created, configured, and diagnosed user and group permissions to facilitate System security.
 Implemented scripts for ClearCase repository maintenance and code deployment.
 Installed, configured, and administered Windows servers, Active Directory Services, FTP, WSUS, IIS Web
Server & SQL Database Server.
 Used Logical Volume Manager (LVM) to create disk groups, volumes, and volume groups, and
used RAID tools for backup and recovery.
 Performed kernel and memory upgrades on Linux servers in virtual environments.
 Monitored system performance using performance-monitoring commands like SAR, PROF, VMSTAT,
IOSTAT, and NETSTAT (a small collection sketch follows this list).
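Illustrative wrapper for the performance-monitoring commands above, collecting their output into a timestamped log; the command options and log path are placeholders, not configuration from this engagement.

```python
import subprocess
from datetime import datetime

# Illustrative sketch: wrap the monitoring commands mentioned above (vmstat,
# iostat, sar) so their output is timestamped and appended to a log file.
COMMANDS = {
    "vmstat": ["vmstat", "1", "3"],       # 3 samples, 1 second apart
    "iostat": ["iostat", "-x", "1", "3"],
    "sar_cpu": ["sar", "-u", "1", "3"],
}

def collect(logfile="/var/tmp/perf_snapshot.log"):
    with open(logfile, "a") as out:
        for name, cmd in COMMANDS.items():
            out.write(f"\n===== {name} @ {datetime.now().isoformat()} =====\n")
            try:
                result = subprocess.run(
                    cmd, capture_output=True, text=True, check=True, timeout=60
                )
                out.write(result.stdout)
            except (OSError, subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
                out.write(f"failed to run {cmd}: {exc}\n")

if __name__ == "__main__":
    collect()
```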

EDUCATION:
 Master's in Information Sciences, Trine University - 2022
 Bachelor's in Computer Applications, JNTUH - 2015
