
Candidate Name as per Aadhaar Card (First Name, Last Name): Ranganatha Chandrashekarappa
Current Company: Wipro Limited
Contact Number: +91 9739790144
Total Years of Experience & Relevant Experience: 11.6 years & 5 years of Kafka
Brief on your Technical Experience: More than 10 years of experience in the IT industry. Started my career with Linux, picked up Big Data technologies over time, and for more than 6 years have been working with Kafka on various platforms: Confluent Kafka, open-source Kafka, and OpenShift. Excellent knowledge of DevOps, Google Cloud Platform, and Kubernetes. Currently enhancing my skills in Elasticsearch and RabbitMQ per project requirements, and have deployed Elasticsearch in the production environment. Automation skills include shell scripting and deploying those scripts in live environments.
DOB (DD/MM/YYYY): 18/01/1979
Gender: Male
Wipro email ID: [email protected]
Current Location: Bangalore
Have you previously worked with Standard Chartered Bank (Yes / No): No
If yes, please update the Bank ID (SCB Emp ID); if the above answer is No, please leave this blank:
Note: please don't update the SCB email ID
Notice Period (only for external candidates):
Name: Ranganatha Chandrashekarappa
Email: [email protected] | Phone: +91 9739790144
Professional Summary:
 Over 10 years of IT experience, including 5+ years in Big Data technologies, of which 3+ years are in Kafka administration.
 Extensive experience setting up Confluent Kafka clusters on Kubernetes (GKE).
 Kafka security: secured Kafka clusters using certificates generated with cfssl and cfssljson.
 Set up Confluent Operator, Kafka Connect, and Kafka Replicator from YAML manifests.
 Worked with source code repositories: Git, git-secret, and Ansible.
 Proficient with configuration management tools such as Chef and Ansible.
 Experienced in writing automation scripts for SSL certificate installation, topic creation, etc. (see the sketch after this list).
 Experience architecting, integrating, managing, and monitoring Kafka clusters for real-time data processing.
 Hands-on experience with Confluent Kafka clusters, including cluster operations, interpreting cluster metrics, recovering from failures, and cluster security.
 Install, configure, and manage Schema Registry, Kafka Connect, KSQL, and REST Proxy clusters.
 Install, configure, and manage Confluent Control Center and other monitoring tools such as Prometheus/Grafana.
 Strong knowledge of Kafka cluster capacity planning, performance tuning, and cluster monitoring.
 Extensive experience writing Storm topologies that accept events from Kafka producers and emit them into Cassandra.
 Understand different serialization/deserialization formats (JSON, Avro, etc.) and how producers and consumers handle each.
 In-depth understanding of Hadoop architecture and components such as HDFS, NameNode, JobTracker, DataNode, TaskTracker, and MapReduce. Experience with Ansible and related configuration management tools.
 Extensive experience in installation, configuration, maintenance, design, implementation, and support on Linux. Experience spinning up HiveServer2 and Impala daemons as required.
 Experience designing automatic failover using ZooKeeper and Quorum Journal Nodes.
 Experienced in writing automation scripts for monitoring file systems and key MapR services.
 Experience setting up NameNode high availability for major production clusters.
 Experience analyzing existing Hadoop clusters, identifying performance bottlenecks, and providing tuning solutions accordingly. Experience with Oracle, MongoDB, AWS Cloud, and Greenplum.
 Experience working in large environments and leading infrastructure support and operations.
 Benchmarked Hadoop clusters to validate hardware before and after installation, tweaking configurations for better performance. Experience configuring ZooKeeper to coordinate servers in clusters.
 Experience administering Linux systems to deploy and monitor Hadoop clusters.
 Experience commissioning, decommissioning, balancing, and managing nodes, and tuning servers for optimal cluster performance.
 Good working knowledge of Linux concepts and of building servers ready for Hadoop and Kafka cluster setup.
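As an illustration of the topic-creation automation mentioned in the list above, here is a minimal Bash sketch; the bootstrap address, partition count, and replication factor are illustrative assumptions, not project values:

    #!/usr/bin/env bash
    # Sketch of a topic-creation helper; the broker address and topic
    # settings below are illustrative assumptions.
    set -euo pipefail

    BOOTSTRAP="broker-0:9092"   # assumed bootstrap server
    TOPIC="$1"

    # Create the topic only if it does not already exist.
    if ! kafka-topics --bootstrap-server "$BOOTSTRAP" --list | grep -qx "$TOPIC"; then
      kafka-topics --bootstrap-server "$BOOTSTRAP" --create \
        --topic "$TOPIC" --partitions 6 --replication-factor 3
    fi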
Technical Summary:
 Kafka: AWS Kafka cluster setup (POC), Confluent Kafka 6.1, Confluent Operator, open-source Kafka v2.8
 Big Data Technologies: Hortonworks, Cloudera, HDFS, Hive, MapReduce, Cassandra, Pig, Avro, HBase, Storm, Confluent Kafka 4.0.x and 5.3, open source (Apache Kafka)
 DevOps Tools and Languages: Chef, Ansible, Jenkins, Git, Java, Shell scripting, SQL
 Application Servers: Apache, WebLogic Server, WebSphere
 Cloud Platforms: Google Cloud Platform (GCP), Kubernetes (GKE), Amazon Web Services (AWS)
 NoSQL Databases: HBase, Cassandra
 Operating Systems: Linux, UNIX, Mac OS X 10.9.5, Windows
 Security: Kerberos, SSL, SASL, and mTLS
Professional Experience:

Kafka Engineer July 2020 to Present
Telefonica (Wipro)

 Working on the installation of a new Confluent Kafka v6.1 cluster for our new client at different stages.
 Completed training on Google Cloud Platform and Kubernetes (GKE).
 Involved in preparation of the design document.
 Successfully installed Confluent Operator.
 Worked on generating mTLS certificates to secure the Kafka cluster using cfssl and cfssljson (see the sketch after this list).
 Successfully enabled and tested Kafka ACLs with anonymous users and with different hostnames.
 Worked on generating external client certificates.
 Successfully installed Confluent Kafka in the Dev, Test, and Stage environments.
 Involved in setting up the Confluent Kafka cluster in the production environment.
 Successfully installed the different Confluent Kafka components: Control Center, ZooKeeper, brokers, Connect, and ksqlDB.
 Successfully installed Schema Registry, Connect, and Replicator.
 Worked on connectivity testing and performance tuning (perf tests).
 Worked on topic creation and on producing and consuming messages.
 Worked on storing credential secrets and the root CA in git-secret.
 Successfully installed Kafka connectors (Debezium, etc.).
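A minimal sketch of the cfssl/cfssljson certificate step described above; the file names and the "kafka" signing profile are illustrative assumptions:

    # Sign a broker certificate against the cluster root CA;
    # all file names and the profile name are assumptions.
    cfssl gencert \
      -ca ca.pem -ca-key ca-key.pem \
      -config ca-config.json -profile kafka \
      broker-csr.json | cfssljson -bare broker
    # Writes broker.pem and broker-key.pem for the broker's TLS configuration.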
BestBuy (Wipro), MN
Responsibilities:
 Led the installation, configuration, and deployment of product software on new edge nodes that connect to the Kafka cluster for data acquisition.
 Installed a Kafka cluster with separate nodes for brokers and created backups for all the instances in the Kafka environment.
 Supported the Docker team in installing an Apache Kafka cluster in multi-node mode and enabled security in the DEV environment.
 Successfully completed a POC on the KafkaHQ GUI as a Kafka topic monitoring tool.
 Successfully configured LDAP for group access and SSL for KafkaHQ.
 Installed Kafka Manager for consumer lag and Kafka metrics monitoring; also used it for adding topics, partitions, etc.
 Created a Chef site cookbook that installs and configures KafkaHQ for a multi-node cluster.
 Created and wrote shell (Bash), Python, and PowerShell scripts for automating tasks, including one that deploys schemas into the Schema Registry (see the sketch after this list).
 Worked on creating Grafana dashboards for Kafka consumer group monitoring.
 Involved in creating Kafka topics using Chef automation.
 Successfully generated consumer group lags from Kafka using its API.
 Successfully tested Kafka ACLs with anonymous users and with different hostnames.
 Successfully completed a POC on setting up Kafka clusters in an AWS environment.
 Worked on Kafka MirrorMaker to transfer data between AWS clusters and on-premise Kafka clusters.
 Worked on creating a Chef site cookbook for MirrorMaker.
 Worked on creating Chef site and role cookbooks that install and deploy Schema Registry.
 Involved in writing InSpec tests for IBM MQ inventory Chef site cookbooks.
 Experience working with production servers at multiple data centers.
 Set up MQ and Message Broker pub/sub environments.
 Documented all installations and maintenance, and organized and provided documents for the offshore team.
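A minimal sketch of the schema-deployment automation mentioned above, using the Schema Registry REST API; the registry URL, subject name, and schema are illustrative assumptions:

    #!/usr/bin/env bash
    # Register an Avro schema under a subject via the Schema Registry REST API.
    SR_URL="http://schema-registry:8081"   # assumed registry endpoint
    SUBJECT="orders-value"                 # assumed subject name

    curl -s -X POST \
      -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "{\"type\": \"string\"}"}' \
      "$SR_URL/subjects/$SUBJECT/versions"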
Environment: Apache Kafka 5.3.1, UNIX, SQL, Kafka cluster, Linux, Git, Chef, Jenkins, IBM MQ

Hadoop Kafka Administrator Dec 2019 to June 2020
Verizon (INFOSYS), Dallas, TX
Responsibilities:
 Installed and developed different POCs for various application/infrastructure teams on Apache Kafka and Confluent open source for multiple clients.
 Using Kafka, Elasticsearch, and REST, developed plugins to deserialize non-native Kafka data.
 Worked on upgrading the Kafka cluster to the latest version via rolling upgrades and rolling restarts of the existing servers.
 Led the installation, configuration, and deployment of product software on new edge nodes that connect to the Kafka cluster for data acquisition.
 Installed a Kafka cluster with separate nodes for brokers and created backups for all the instances in the Kafka environment.
 Designed and implemented topic configuration in the new Kafka cluster across all environments.
 Configured various topics on the Kafka server to handle transactions flowing from multiple ERP systems.
 Performed benchmarking of the Kafka cluster to measure performance and resource considerations, and tuned the cluster for optimal performance.
 Integrated Kafka with Flume in the sandbox environment using Kafka source and Kafka sink.
 Implemented a distributed messaging queue integrated with Cassandra using Apache Kafka.
 Performance-tuned Apache Kafka on clusters.
 Successfully secured the Kafka cluster with Kerberos, and also implemented Kafka security features using SSL without Kerberos. For more fine-grained security, set up Kerberos with users and groups, enabling more advanced security features, and integrated Apache Kafka for data ingestion.
 Successfully generated consumer group lags from Kafka using its API; used Kafka for building real-time data pipelines between clusters.
 Implemented a real-time log analytics pipeline using Confluent Kafka, Storm, Elasticsearch, Logstash, and Kibana.
 Worked on Kafka backup indexes, minimized logs with the Log4j appender, and pointed Ambari server logs to NAS storage.
 Extensively worked on managing Kafka logs for traceability and debugging.
 Conducted cluster sizing, tuning, and performance benchmarking on a multi-tenant OpenStack platform to achieve the desired performance metrics.
 Created Kafka topics and provided ACLs to users, setting up REST mirror and MirrorMaker to transfer data between different Kafka clusters (see the MirrorMaker sketch after this list).
 Involved in updating scripts and step actions to install Ranger plugins.
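A minimal sketch of a classic MirrorMaker invocation for the cross-cluster transfer described above; the property-file names and topic pattern are illustrative assumptions:

    # Replicate matching topics from the source cluster to the target cluster;
    # config file names and the whitelist pattern are assumptions.
    kafka-mirror-maker \
      --consumer.config source-cluster.properties \
      --producer.config target-cluster.properties \
      --whitelist "orders.*"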
Environment: Confluent Kafka 4.0, 5.3.1, UNIX, SQL, Kafka cluster, Linux, MySQL, Hortonworks, Hadoop

Kafka Administrator Dec 2018 to Dec 2019
TCF Bank, MN
Responsibilities:
 Installed and developed different POCs for various application/infrastructure teams on Apache Kafka and Confluent open source for multiple clients.
 Created a data pipeline through Kafka connecting two different client applications, SEQUENTRA and LEASE ACCELERATOR.
 Using Kafka, Elasticsearch, and REST, developed plugins to deserialize non-native Kafka data.
 Installed and ran a POC of the DataStax CDC connector for Apache Kafka.
 Installed a Kafka cluster with separate nodes for brokers and created backups for all the instances in the Kafka environment.
 Designed and implemented topic configuration in the new Kafka cluster across all environments.
 Configured various topics on the Kafka server to handle transactions flowing from multiple ERP systems.
 Performed benchmarking of the Kafka cluster to measure performance and resource considerations, and tuned the cluster for optimal performance.
 Implemented a distributed messaging queue integrated with Cassandra using Apache Kafka.
 Performance-tuned Confluent Kafka on clusters.
 Successfully generated consumer group lags from Kafka using its API; used Kafka for building real-time data pipelines between clusters.
 Implemented a real-time log analytics pipeline using Confluent Kafka, Storm, Elasticsearch, Logstash, Kibana, and Greenplum.
 Extensively worked on managing Kafka logs for traceability and debugging.
 Installed a Confluent Kafka cluster with separate nodes for brokers.
 Created Kafka topics and provided ACLs to users, setting up REST mirror and MirrorMaker to transfer data between different Kafka clusters (see the ACL sketch after this list).
 Involved in updating scripts and step actions to install Ranger plugins.
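A minimal sketch of granting a consumer ACL as described above; the principal, topic, and group names are illustrative assumptions:

    # Allow a principal to read one topic as part of one consumer group;
    # principal, topic, and group names are assumptions.
    kafka-acls --bootstrap-server broker-0:9092 \
      --command-config admin.properties \
      --add --allow-principal User:app-consumer \
      --operation Read --topic payments --group payments-app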
Environment: Confluent Kafka 4.0 & 5.0, REST Proxy, Schema Registry, UNIX, SQL, Linux
Hadoop Administrator June 2016 to Dec 2018
BCBS, Dallas, TX

Responsibilities:
 Installed and configured a Hadoop multi-node cluster and monitored it with Nagios.
 Installed and configured different Hadoop ecosystem components such as Spark, HBase, Hive, and Pig as per requirements.
 Configured HA for various services such as NameNode, ResourceManager, and HUE, as required to maintain the organization's SLA.
 Performed POCs to assess workloads, evaluate resource utilization, and configure Hadoop properties based on the benchmark results.
 Tuned the cluster based on the POC and benchmark results.
 Commissioned and decommissioned cluster nodes (see the sketch after this list).
 Monitored system metrics and logs for any problems.
 Expertise building Cloudera and Hortonworks Hadoop clusters on bare metal and on Amazon EC2 cloud.
 Experienced in installation, configuration, troubleshooting, and maintenance of Kafka and Spark clusters.
 Experience setting up Kafka clusters on AWS EC2 instances.
 Worked on setting up Apache NiFi and performed a POC using NiFi to orchestrate a data pipeline.
 Worked as admin on the Hortonworks (HDP) distribution for 4 clusters ranging from POC to PROD.
 Monitored system metrics and logs for problems using the Checkmk monitoring tool.
 User provisioning (creation and deletion) on Prod and Non-Prod clusters according to client requests.
 Ensure that critical customer issues are addressed quickly and effectively.
 Apply troubleshooting techniques to provide solutions to our customer's individual needs.
 Investigate product related issues both for individual customers and for common trends that may arise.
 Resolve customer problems via telephone, email or remote access.
 Maintain customer loyalty through integrity and accountability.
 Research customer issues in a timely manner and follow up directly with the customer with recommendations and
action plans.
 Escalate cases to the engineering team when the problem is beyond the scope of technical support or falls out of the
support team's expertise.
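A minimal sketch of the DataNode decommissioning flow referenced in the list above; the exclude-file path and hostname are illustrative assumptions:

    # Add the node to the HDFS exclude file, then ask the NameNode to re-read it.
    echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes
    # Watch block draining until the node reports as decommissioned.
    hdfs dfsadmin -report | grep -B1 "Decommission Status"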
Environment: Hadoop, YARN, Spark, Kafka, Hive, Pig, Sqoop, Cloudera, Kerberos, NiFi, Java 8.0, Log4j, Git, AWS, JIRA.
Linux Administrator Sept 2011 to Feb 2016
Unitech Ltd, India

Responsibilities:
 Installation, configuration, upgrade, and administration of Windows, Sun Solaris, and Red Hat Linux.
 Linux and Solaris installation, administration, and maintenance.
 User account management, password management, quota setup, and support.
 Worked on Linux Kickstart OS integration, DDNS, DHCP, SMTP, Samba, NFS, FTP, SSH, and LDAP integration.
 Network traffic control, IPSec, QoS, VLAN, proxy, and RADIUS integration on Cisco hardware via Red Hat Linux software.
 Installation and configuration of MySQL on Windows Server nodes.
 Responsible for configuring and managing Squid servers on Linux and Windows.
 Configuration and administration of the NIS environment.
 Package and patch management on Linux servers.
 Worked with Logical Volume Manager to create file systems per user and database requirements (see the sketch after this list).
 Data migration at the host level using Red Hat LVM, Solaris LVM, and Veritas Volume Manager.
 Expertise in establishing and documenting procedures to ensure data integrity, including system failover and backup/recovery, on the AIX operating system.
 Managed 100+ UNIX servers running RHEL and HP-UX on Oracle and HP hardware; Solaris disk mirroring (SVM), zone installation and configuration.
 Escalated issues accordingly and managed the team efficiently to achieve desired goals.
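A minimal sketch of the LVM file-system work described above; the volume group and logical volume names, sizes, and mount point are illustrative assumptions:

    # Carve out a logical volume, lay down a file system, and grow it online.
    lvcreate -L 50G -n lv_data vg_app
    mkfs.ext4 /dev/vg_app/lv_data
    mount /dev/vg_app/lv_data /data
    lvextend -r -L +10G /dev/vg_app/lv_data   # -r also resizes the ext4 FS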
Environment: Linux, TCP/IP, LVM, RAID, Networking, Security, user management, MySQL.
Education:
MSc: 2000 to 2002
Kuvempu University, Shankaraghatta, India
BSc: 1997 to 2000
Kuvempu University, Govt. Science College, Chitradurga
References:
Ramesh Mourya, Manager, TCF Bank. Contact No. 9164138156. Email: [email protected]
Pradeepta Sarangi, Manager. Contact No. 6124138939
