
Raja V

Telco DevOps Engineer/ SRE

Professional Summary:
• Over 10 years of experience in the Telco industry spanning Cloud and DevOps engineering. Effective combination of work
experience in Linux and kernel administration, AWS and Azure, Continuous Integration, Continuous Deployment,
Build/Release Management, and virtualization technologies, including troubleshooting. Experience in creating,
maintaining, troubleshooting, and monitoring Azure and AWS environments.
• Extensive experience in 5G and AWS, including CloudFormation templates, Elastic Load Balancer, Elastic Beanstalk,
CloudWatch, IAM, VPC, CloudTrail, Route 53, EC2, RDS, ECS, Lambda, S3, OAM, DynamoDB, SNS, and SQS.
• Expert in Jenkins CI with extensive work done on Build and Deployment jobs.
• Good experience in integrating Jenkins with Ansible.
• Extensive work done on Docker with orchestration using Docker-compose.
• Good experience with Kubernetes and clustering. Experience in user management and plugin management for Jenkins
and deploying build files to different servers. Set up upstream and downstream jobs in Jenkins.
• Extensive work done on automation and development using Python and PowerShell scripts.
• Performed root cause analysis on numerous internally discovered defects and customer bug reports, hardening core AIX
kernel modules for better reliability.
• Spearheaded the end-to-end release management process, ensuring seamless coordination between development,
quality assurance, and operations teams to deliver software releases on schedule for One Stash.
• Developed and executed comprehensive risk mitigation strategies to anticipate and address potential release blockers,
ensuring minimal disruption to production environments and maximizing customer satisfaction for One Stash.
• Drove continuous improvement of release management processes, leveraging feedback loops and metrics analysis to
identify areas for optimization and streamline workflows for increased efficiency and reliability in One Stash.
• Proficient in designing and implementing microservices architectures, breaking down monolithic applications into
smaller, independent services to improve scalability, agility, and maintainability.
• Experienced in containerization technologies such as Docker and container orchestration platforms like Kubernetes,
facilitating the deployment, scaling, and management of microservices in dynamic and distributed environments.
• Skilled in implementing decentralized data management strategies within microservices architectures, utilizing
databases such as MySQL, PostgreSQL, or NoSQL solutions like MongoDB to ensure each service has its own data store
and can operate autonomously.
• Proficient in configuring and managing Kubernetes cluster networking, ensuring seamless communication between pods
within a cluster.
• Established robust performance monitoring mechanisms utilizing tools such as New Relic, Datadog, or Prometheus,
enabling real-time visibility into application behavior and proactively identifying performance bottlenecks in Application
Performance.
• Proficient in deploying, configuring, and managing OpenStack environments. Experienced in setting up compute,
storage, and networking components using OpenStack services like Nova, Cinder, Neutron, and Keystone.
• Skilled in providing Infrastructure as a Service solutions using OpenStack, enabling on-demand access to virtualized
resources for development, testing, and production environments.
• Implemented optimization strategies at various levels of the application stack, including codebase refactoring, database
query tuning, and resource utilization optimization, resulting in significant improvements in response times and
scalability for Application Performance.
• Demonstrated ability to integrate OpenStack with other technologies and customize deployments to meet specific
organizational requirements. Experience in leveraging OpenStack APIs and plugins for seamless integration with existing
systems.
• Experience eliminating manual, redundant infrastructure work by creating CloudFormation templates with the AWS
Serverless Application Model and deploying RESTful APIs through API Gateway that trigger Lambda functions (a minimal handler sketch follows this summary).
• Experience in provisioning SaaS, PaaS, and IaaS resources through PowerShell and Python in Azure and AWS.
• Experience in building Terraform configurations (JSON syntax) as reusable templates for provisioning infrastructure.
• Analyzed thread dumps (stack traces of all live threads) to diagnose hangs and deadlocks, and heap dumps to identify
causes of performance degradation.
• Assessed on-premises infrastructure, applications, and dependencies to determine target AWS services, prepared the
design documentation, and executed the migration.
• Proficient in leveraging GitHub Enterprise for collaborative software development, managing repositories, and
facilitating team collaboration within an enterprise environment.
• Experienced in setting up and configuring GitHub Enterprise instances, including user management, access control, and
integration with existing authentication systems.
• Skilled in creating and managing repositories, branches, and pull requests on GitHub Enterprise to facilitate agile
development workflows.
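
The serverless bullet above references RESTful APIs on API Gateway that trigger Lambda functions. Below is a minimal, illustrative Python handler for such an endpoint; the DynamoDB table name, payload shape, and behavior are hypothetical and not taken from any project listed here.

import json

import boto3

# Hypothetical table name used only for illustration.
TABLE_NAME = "example-items"

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    """Handle a POST from an API Gateway proxy integration and persist the item."""
    try:
        item = json.loads(event.get("body") or "{}")
        table.put_item(Item=item)
        return {"statusCode": 200, "body": json.dumps({"status": "created"})}
    except Exception as exc:  # keep the API response well-formed on failure
        return {"statusCode": 500, "body": json.dumps({"error": str(exc)})}

In a SAM or CloudFormation template, an AWS::Serverless::Function resource with an Api event would wire a handler like this to an API Gateway route.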

Certifications:
• AWS Developer Associate
• Red Hat Enterprise Linux Certified

TECHNICAL SKILLS:

Languages JSON, YAML, Groovy, Terraform, Shell, C, C++, Ruby, Golang, Core Java, Bash, and Python

Operating Systems RedHat Linux 4/5/6/7, CentOS 5/6/7, Windows Server 2003, 2008, 2008 R2, 2012, 2012
R2, Windows 2000, XP, 7
Build Tools Maven, Ant, Docker

Versioning Tools GIT, GitHub, Gitlab, Bitbucket

Amazon Web Services EC2, S3, VPC, AWS SFTP, SNS, SES, Route 53, CloudWatch, CloudTrail, IAM, SQS, Lambda,
ECS, EKS, RDS, AWS CLI, CloudFront
Monitoring Tools CloudWatch, Nagios, Grafana, Prometheus, AppDynamics, Splunk, New Relic

Cloud Services Amazon Web Services

Application Servers Apache, Tomcat, JBoss

CI/CD Tools Gitlab, GitHub, Git, Jenkins, Bamboo, SonarQube, WhiteSource, Chef, Docker, Kubernetes,
Argo CD, Ansible, Nexus, JFrog, Veracode
Other Tools/Network Protocols WinSCP, SSH, VPN

Virtualization VMware Client, Virtual Box, Vagrant

Database Technologies Oracle, MySQL, NoSQL, MongoDB, Cassandra, DynamoDB

Work Experience:

Client: Verizon – Irving, TX June 2022- Present


Telco DevOps Engineer
Responsibilities:
• Migrated the Telco logs from Splunk to ELK.
• Developed and maintained pipelines for ingesting and analyzing data in Elasticsearch.
• Configured Splunk Forwarders and Fluentd to ship logs from Kubernetes and databases to Logstash.
• Created and customized advanced dashboards, alerts, reports, Splunk searches, and visualizations in Splunk
Enterprise as required by IT teams.
• Monitoring Splunk environment and performing health checks.
• Building security-focused content for Splunk, including creation of complex threat detection logic and operational
dashboards.
• Conducted thorough gap analysis of existing IAM policies and procedures.
• Proficient in optimizing Telco workloads by strategically assigning CPU cores to specific virtual machines or processes
through CPU pinning techniques. Skilled in maximizing CPU performance and minimizing latency by ensuring
dedicated CPU resources for critical applications in virtualized Telco environments.
• Experienced in managing Non-Uniform Memory Access (NUMA) architectures in Telco infrastructures to enhance
memory performance and reduce latency. Proficient in configuring NUMA policies and optimizing resource
allocation for Telco workloads to leverage the locality of memory access.
• Proficient in designing, developing, and deploying complex data pipelines using Apache Airflow.
• Demonstrated ability to leverage Airflow's DAGs (Directed Acyclic Graphs) for orchestrating data workflows,
scheduling tasks, and monitoring job dependencies (a minimal DAG sketch follows this responsibilities list).
• Proficient in using the Hadoop grid for large-scale data processing, enabling efficient handling of petabyte-sized datasets.
• Strong troubleshooting and performance tuning of distributed processing on the Hadoop grid, achieving
significant improvements in data processing speed and reduced computational costs.
• Integrated Kubernetes orchestration with on-premises infrastructure components such as networking, storage, and
security systems, leveraging automation tools and scripts to streamline provisioning, configuration, and
maintenance tasks.
• Proficient in using Ansible to automate playbooks, define tasks and roles, and execute complex automation
workflows.
• Automated the deployment of IAM policies to targeted OUs within AWS Organizations, streamlining the policy
management process and reducing manual intervention.
• Competent in setting up monitoring solutions for Apache Airflow using tools like Prometheus and Grafana to track
job statuses, resource utilization, and performance metrics.
• Proficient in debugging Dockerfiles, ensuring correct syntax and addressing issues during image builds.
• Skilled in optimizing Dockerfiles for efficiency, minimizing image size and reducing the number of layers.
• Expertise in scoping Docker build contexts so that only essential files are included, avoiding unnecessary
resource consumption.
• Designed and developed Python applications to automate Identity and Access Management (IAM) policy
management within AWS Organizations (a boto3 sketch follows this responsibilities list).
• Experienced in integrating GitHub Enterprise with other tools and services such as JIRA, Slack, and Jenkins to
streamline development workflows and communication within the organization.
• Proficient in implementing DevOps principles and practices to enhance collaboration, automation, and efficiency
across software development, testing, and deployment lifecycles.
• Skilled in identifying and resolving issues with Kubernetes YAML or JSON manifests.
• Worked on Splunk architecture and the operation of its components (indexer, forwarder, search head, deployment
server, cluster master), including Heavy and Universal Forwarders.
• Leveraged AWS SDKs and APIs to interact with AWS services programmatically, facilitating efficient policy
automation and management.
• Hands-on experience with APM tools such as Dynatrace, AppDynamics, Splunk, ELK, Grafana, and Prometheus.
• Experience with CI/CD tools such as Jenkins, GitHub, Maven, and Groovy scripting.
• Analyzed log data, filtered fields through Logstash configuration, and sent the output to Elasticsearch.
• Involved in updating the cluster settings using both API calls and configuration file changes.
• Worked on cluster maintenance and data migration from one server to another and upgraded ELK stack.
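
As noted in the Airflow bullet above, data workflows were orchestrated as DAGs. The following is a minimal, illustrative Airflow 2.x DAG in Python; the DAG id, schedule, and task bodies are hypothetical placeholders rather than an actual production pipeline.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder extract step; a real task would pull data from the upstream source.
    print("extracting data")


def load():
    # Placeholder load step; a real task would index documents into Elasticsearch.
    print("loading data")


default_args = {"retries": 1, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="example_ingest_pipeline",  # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds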
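
As noted in the IAM automation bullets above, policies were deployed to targeted OUs within AWS Organizations. Below is a minimal boto3 sketch of that pattern; the OU id, policy name, and policy document are hypothetical, and a production version would add error handling, idempotency checks, and logging.

import json

import boto3

# Hypothetical organizational unit and guardrail policy, for illustration only.
TARGET_OU_ID = "ou-exam-ple12345"
DENY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "iam:DeleteRole", "Resource": "*"}
    ],
}


def deploy_policy_to_ou(name: str, ou_id: str) -> str:
    """Create a service control policy and attach it to the target OU."""
    org = boto3.client("organizations")
    response = org.create_policy(
        Name=name,
        Description="Guardrail deployed by automation",
        Content=json.dumps(DENY_POLICY),
        Type="SERVICE_CONTROL_POLICY",
    )
    policy_id = response["Policy"]["PolicySummary"]["Id"]
    org.attach_policy(PolicyId=policy_id, TargetId=ou_id)
    return policy_id


if __name__ == "__main__":
    print(deploy_policy_to_ou("deny-iam-role-deletion", TARGET_OU_ID))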

Client: Samsung SDS America – Ridgefield Park, NJ April 2021- May 2022
Lead AWS DevOps Engineer/ SRE
Responsibilities:
• Created and maintained fully automated AEM CI/CD pipelines for code deployment using Jenkins.
• Actively managed, improved and monitored cloud infrastructure on AWS EC2, S3, RDS including backups, patches, and
scaling.
• Experienced in utilizing configuration management tools such as Ansible, Puppet, or Chef to automate infrastructure
provisioning, configuration, and management, enabling consistent and reliable deployments.
Environment: Jenkins, Docker, Amazon EC2, Kubernetes, Argo CD, Gitlab, Ansible, SVN, Grafana, Maven, Gradle,
JIRA, Confluence.
Client: Department of Health care - Memphis, TN Nov 2020- Mar 2021
Lead DevOps Engineer
Responsibilities:
• Standardized change management process with the adoption of Kubernetes with AWS EKS.
• Responsible for design, implementation, architecture, and support of cloud-based servers and service solutions.
• Managed multiple AWS accounts with multiple VPCs for both prod and non-prod where primary objectives included
automation, build-out, and integration.

Environment: AWS, IAAS, Splunk, ELK, Ansible, Docker, Kubernetes, EKS, EC2, AMI, S3, RDS, VPC.
Client: DXC Technologies - Dallas, TX Oct 2018- Oct 2020
DevOps Engineer
Responsibilities:
• Authored Terraform modules for Infrastructure management. Authored and published a module to the Terraform
registry for enterprise customers to deploy our product inside the AWS environment.
• Used CloudFormation templates to simplify provisioning and management of EC2 instances, RDS, and VPC on AWS.
Environment: Ansible, AWS, VPC, Docker, Kubernetes, NAT, Terraform, Jenkins, Git, Linux.

Client: Go Air Aviation - Chicago, IL June 2017- Sept 2018


DevOps Engineer
Responsibilities:
• Worked with multiple application and infrastructure teams in this project to implement the DevOps best practices.
• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along
with Shell scripts to automate routine jobs.
• Managed the Subversion branching strategy for a few applications by creating Release branches.
• Efficiently resolved merge conflicts in Subversion, drawing on a J2EE development background.

Client: Amgen, Los Angeles, CA July 2016- May 2017


Build & Release Engineer
Responsibilities:
• Responsible for Build & Release of applications and writing automation scripts.
Environment: Development, QA and UAT, Confluence, Release, WAR files, Installation.

Client: DST Worldwide Services, Hyderabad, India June 2013 - June 2015
Linux Administrator
Responsibilities:
• Performed application installation, upgrades/patches, troubleshooting, maintenance, and monitoring of Linux (RHEL)
servers.
• Configured networking and troubleshot issues related to networking and configuration files.
Environment: EC2, System Backup, Security Setup, On-Premises, Log Files.

EDUCATION:
• Bachelor of Technology in Computer Science and Engineering (2013) – JNTU Hyderabad, India
• Master’s in Computer Science from New England (2017), New Hampshire
