
Praneeth

Python Developer
Email: [email protected]
Phone: 9899991224

Professional Summary:
⮚ 7+ years of overall experience across Python, Apache Spark, Scala, Java, and SQL
technologies.
⮚ Experience fetching live stream data from DB2 into HDFS using Spark Streaming and
Apache Kafka (a minimal sketch of this pattern follows this summary).
⮚ Experience ingesting real-time data from various sources through Kafka pipelines and
applying transformations to normalize the data stored in the HDFS data lake.
⮚ Expertise in developing data-driven applications using Python 2.7 and Python 3.6 in the
PyCharm and Anaconda Spyder IDEs.
⮚ Hands-on experience configuring and working with Flume to load data from multiple
sources.
⮚ Proficient in designing and querying NoSQL databases such as HBase, Cassandra, and
MongoDB, and in querying data with Impala.
⮚ Experienced in MVC architecture with RESTful and SOAP web services (SoapUI) and
high-level Python web frameworks such as Django and Flask. Experience with object-oriented
programming (OOP) concepts using Python, Django, and Linux.
⮚ Experience with GraphQL APIs.
⮚ Good experience in Hadoop big data processing; expertise in developing queries in
Hive and Pig.
⮚ Experienced in MV* frameworks and libraries such as Django, AngularJS, JavaScript,
Backbone.js, jQuery, and Node.js.
⮚ Experience with Kubernetes and Docker as the runtime environment for building,
testing, and deploying systems.
⮚ Good experience working with Amazon Web Services: EC2, Virtual Private Clouds
(VPCs), storage models (EBS, S3, instance storage), and Elastic Load Balancers (ELBs).
⮚ Familiar with JSON-based REST web services.
⮚ Deeply involved in writing complex Spark/Scala scripts using the Spark context and
Cassandra SQL context, working with DataFrame and RDD APIs and Cassandra table
joins, and writing the resulting DataFrames/RDDs back to the Cassandra database.
⮚ Experience with version control tools including SVN, CVS, and Git.
⮚ Knowledge of integrating ecosystems such as Kafka, Spark, and HDFS.
⮚ Good knowledge of Apache Spark and Spark SQL.
⮚ Experience running Spark Streaming applications in cluster mode and debugging Spark logs.
⮚ Experience optimizing volumes and EC2 instances; created multiple VPCs and set up
alarms and notifications for EC2 instances using CloudWatch.
⮚ Extensive knowledge of creating managed tables and external tables in the Hive ecosystem.
⮚ Worked extensively on the design and development of business processes using Sqoop, Pig,
Hive, and HBase.
⮚ Expertise in data encryption (client-side and server-side), securing data at rest and in
transit for S3, EBS, RDS, EMR, and Redshift using the Key Management Service (KMS).
⮚ Good knowledge of Amazon AWS services such as EMR and EC2, which provide fast and
efficient processing of big data.
⮚ Experienced in developing web services in Python, with good working experience
processing large datasets with Spark using Scala and PySpark.
⮚ Knowledge of the Spark framework for batch and real-time data processing.
⮚ Knowledge of the Scala programming language. Good experience with Talend Open Studio
for designing ETL jobs.
⮚ Worked on a Python quant platform for machine-learning analysis.
⮚ Hands-on experience with MVC architecture and Java EE frameworks such as Struts 2,
Spring MVC, and Hibernate.
⮚ Experienced in WAMP (Windows, Apache, MySQL, Python) and LAMP (Linux, Apache,
MySQL, Python) architectures; wrote automation test cases using Selenium WebDriver,
JUnit, Maven, and Spring.
⮚ Good knowledge of the Software Development Life Cycle (SDLC) and Software Testing
Life Cycle (STLC).
⮚ Worked in Agile and Waterfall methodologies, with high-quality deliverables delivered
on time.
⮚ Experience with Test-Driven Development (TDD), Agile, Scrum, and Waterfall
methodologies. Used ticketing systems such as JIRA, Bugzilla, and other proprietary tools.
⮚ Excellent communication and interpersonal skills; a detail-oriented, analytical, time-bound,
responsible team player with strong coordination skills, a high degree of self-motivation,
and the ability to learn quickly.
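
The Kafka-to-HDFS streaming pattern called out above can be illustrated with a minimal PySpark Structured Streaming sketch. The broker address, topic name, event schema, and HDFS paths below are illustrative assumptions, not details of any specific engagement.

```python
# Hypothetical sketch: consume a Kafka topic, normalize JSON events,
# and persist them to an HDFS data lake as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

# Assumed shape of the incoming JSON payload.
schema = StructType([
    StructField("id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", StringType()),
])

# Read the live stream from Kafka (broker and topic are assumptions).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "db2.changes")
          .load())

# Normalize: parse the JSON payload into typed columns.
parsed = (events
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Persist to the HDFS data lake as Parquet, with checkpointing for recovery.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/lake/events")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())
query.awaitTermination()
```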

Technical Skills:

Programming Languages: Python 2.x/3.x, Java, JavaScript, PL/SQL, Shell Scripting
Backend Technologies: Python, Node.js, Bash scripting
Code Maintenance Tools: CVS, TortoiseSVN, Git
Python Libraries: Requests, Scrapy, wxPython, Pillow, SQLAlchemy, Beautiful Soup, Twisted,
NumPy, SciPy, matplotlib, Pygame, Pyglet, PyQt, PyGTK, Scapy, pywin32, NLTK,
nose, SymPy, IPython
Web Frameworks: Django, Pyramid, TurboGears, Muffin, CherryPy
GUI Frameworks: PyJamas, Gnome Python, gui2py, PyFLTK, PyForms, PyGTK, PySide, Tkinter
Version Control Tools: Concurrent Versions System (CVS), Subversion (SVN), Git, GitHub, Mercurial
Automation Tools: Ansible
Testing Tools: unittest, pytest, Pythoscope, PyMock, Mocker, antiparser, webunit, WebTest,
PAMIE, Selenium, Splinter, PyFIT, PyUseCase, Automa, PyChecker
IDEs: Eclipse, Notepad++, NetBeans, PyCharm, PyStudio
Databases: MySQL, SQLite, PostgreSQL, MongoDB, Cassandra (SQL and NoSQL)
Bug Tracking Tools: Bugzilla, JIRA
Operating Systems: Windows 98/NT/2000/XP/Vista/7/8, Unix/Linux, Sun Solaris
Methodologies: Agile, Scrum

Education details:
Master of Science (Computer Science) 3.82/4.0
San Diego State University,
San Diego, CA, USA.

Bachelor of Technology (Computer Science Engineering) 7.63/10.0


Jawaharlal Nehru Technological University (JNTU)
Visakhapatnam, AP, India.
Professional Experience:
Ford Motor Company March 2023 - Present
Sr. Python Developer
Responsibilities:
⮚ Developed applications in a Linux environment and worked extensively with its commands.
⮚ Administered Continuous Integration services (Jenkins, Nexus Artifactory and Repository).
⮚ Designed and developed DB2 SQL procedures and UNIX shell scripts for data
import/export and conversions.
⮚ Extensively worked with Avro and Parquet files, converting data between the two formats;
parsed semi-structured JSON data and converted it to Parquet using DataFrames in
PySpark (see the sketch after this list).
⮚ Performed dynamic UI design with HTML5, CSS3, Less, Bootstrap, JavaScript,
jQuery, JSON, and AJAX.
⮚ Loaded, analyzed, and extracted data to and from Elasticsearch with Python.
⮚ Consumed data from Kafka using Apache Spark.
⮚ Worked with Sqoop jobs to import data from RDBMSs and used various optimization
techniques in Hive, Pig, and Sqoop.
⮚ Developed and maintained reports of all automation issues and test results.
⮚ Updated the test automation suite regularly to ensure its accuracy and usefulness.
⮚ Developed an analytical component using Scala and Kafka.
⮚ Used SQL queries to debug issues through logs.
⮚ Designed Forms, Modules, Views and Templates using Django and Python.
⮚ Involved in application development for cloud platforms using technologies such as
Java/J2EE, Spring Boot, Spring Cloud, microservices, and REST.
⮚ Designed, deployed, and managed a Continuous Integration system, including automated
testing and automated notification of results, using technologies such as Ansible, Terraform,
Packer, CloudFormation, Docker, and Serverspec.
⮚ Created Hive DDL on Parquet and Avro data files residing in both HDFS and S3 buckets.
⮚ Worked with Amazon Web Services (AWS), using EC2 for hosting and Elastic MapReduce
(EMR) for data processing, with S3 as the storage mechanism.
⮚ Worked with various HDFS file formats such as Avro and SequenceFile and compression
formats such as Snappy and bzip2.
⮚ Worked extensively with AWS and related components such as Airflow, Elastic MapReduce
(EMR), Athena, and Snowflake.
⮚ Used GraphQL APIs to fetch the required data while automating regression testing.
⮚ Implemented RESTful web services for sending and receiving data between multiple
systems. Rewrote an existing Python/Flask module to deliver a certain format of data.
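
A hedged sketch of the JSON-to-Parquet conversion described above: the input path, column names, and output location are assumptions for illustration, not actual project details.

```python
# Hypothetical sketch: parse semi-structured JSON and write Parquet
# with PySpark DataFrames.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date, col

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Read semi-structured JSON; Spark infers the schema from the records.
raw = spark.read.json("hdfs:///landing/events/*.json")

# Flatten nested fields and derive a date column for partitioning
# (column names are assumed for illustration).
cleaned = (raw
           .selectExpr("id", "payload.amount AS amount", "payload.ts AS ts")
           .withColumn("dt", to_date(col("ts"))))

# Write columnar Parquet, partitioned for downstream Hive/Athena queries.
cleaned.write.mode("overwrite").partitionBy("dt").parquet(
    "hdfs:///warehouse/events_parquet")
```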
ENVIRONMENT:
Python 2.7, Django, C++, Java, jQuery, MySQL, Oracle 11.2, Linux, Eclipse, Shell
Scripting, HTML, XHTML, SVN, CSS, AJAX, Bugzilla, JavaScript, Apache Web Server, Apache
Spark, Git, Jenkins.

Clearwater Analytics Aug 2021- Mar 2023


Python Developer
Responsibilities:
➢ Designed and developed Applications using Python with Django framework in Linux
environment.
➢ Wrote Python scripts to parse JSON documents and load the data into the database.
➢ Created a REST architecture with token-based authentication and authorization.
➢ Developed views and templates with Django's view controller and template language to
create a user-friendly website interface.
➢ Used pandas to organize the data in time-series and tabular formats for easy timestamp
manipulation and retrieval (a short sketch follows this list).
➢ Created a serverless RESTful API using AWS Lambda and used it as a trigger.
➢ Used the Python library Beautiful Soup for web scraping to extract data for building graphs.
➢ Developed a GUI using webapp2 to dynamically display the test block documentation and
other features of the Python code in a web browser.
➢ Created the backend application using Python, Django, and MySQL. Developed merge jobs
in Python to extract and load data into the MySQL database.
➢ Created data tables utilizing PyQt to display patient and policy information and support
add, delete, and update operations.
➢ Involved in object-oriented programming (OOP) concepts using Python.
➢ Used GIT version control and deployed the project to Jenkins.
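
A minimal pandas sketch along the lines of the time-series bullet above; the file name and column names are hypothetical.

```python
# Hypothetical sketch: index tabular data by timestamp for easy
# time-based slicing and aggregation.
import pandas as pd

# Load tabular data and parse the timestamp column into a DatetimeIndex.
df = (pd.read_csv("positions.csv", parse_dates=["ts"])
        .set_index("ts")
        .sort_index())

# Time-based slicing and resampling become one-liners once indexed by time.
january = df.loc["2022-01"]                     # all rows from January 2022
daily_total = df["amount"].resample("D").sum()  # daily aggregates
print(daily_total.head())
```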
ENVIRONMENT:
Python, Django, AngularJS, AWS, JSON, MySQL, PyQt, GIT, Jenkins, Linux.

San Diego State University, San Diego, CA Aug 2019 – Jul 2021
Python Developer
Responsibilities:
⮚ Worked with AWS components such as Airflow, Elastic MapReduce (EMR), Athena, and
Snowflake.
⮚ Worked with Avro and Parquet files, converting data between the two formats; parsed
semi-structured JSON data and converted it to Parquet using DataFrames in PySpark.
⮚ Developed a Python script to load CSV files into S3 buckets; created AWS
S3 buckets, performed folder management in each bucket, and managed logs and objects within
each bucket (see the sketch after this list).
⮚ Created Hive DDL on Parquet and Avro data files residing in both HDFS and S3 buckets.
⮚ Created Airflow scheduling scripts in Python to automate Sqoop imports across a wide
range of data sets.
⮚ Worked with Amazon Web Services (AWS), using EC2 for hosting and Elastic MapReduce
(EMR) for data processing, with S3 as the storage mechanism.
⮚ Maintained and developed Docker images for a tech stack including Cassandra, Kafka,
Apache, and several in-house Java services running on Kubernetes in Google Cloud
Platform (GCP).
⮚ Created data partitions on large data sets in S3 and DDL on the partitioned data.
⮚ Extensively used Stash (Bitbucket) for code control.
⮚ Developed an analytical component using Scala and Kafka.
⮚ Designed forms, modules, views, and templates using Django and Python.
⮚ Involved in application development for cloud platforms using technologies such as Java/J2EE,
Spring Boot, Spring Cloud, microservices, and REST.
⮚ Worked in Continuous Integration (CI) and Continuous Delivery/Deployment (CD)
environments.
⮚ Automated server infrastructure setup for DevOps services using Ansible, shell, and
Python scripts.
⮚ Implemented RESTful Web-services for sending and receiving the data between multiple
systems.
⮚ Rewrote an existing Python/Flask module to deliver a certain format of data.
⮚ Developed applications in a Linux environment and worked extensively with its commands.
⮚ Administered Continuous Integration services (Jenkins, Nexus Artifactory and Repository).
⮚ Designed and developed DB2 SQL procedures and UNIX shell scripts for data
import/export and conversions.
⮚ Performed dynamic UI design with HTML5, CSS3, Less, Bootstrap, JavaScript,
jQuery, JSON, and AJAX.
⮚ Loaded, analyzed, and extracted data to and from Elasticsearch with Python.
⮚ Created Kafka producers and consumers for Spark Streaming.
⮚ Developed a data pipeline using Kafka and Storm to store data in HDFS.
⮚ Loaded streaming data using Kafka and Flume and processed it in real time using Spark
and Storm.
⮚ Performed analysis, feature selection, and feature extraction on Kafka data using Apache
Spark's machine learning library.
⮚ Used different Spark modules such as Spark Core, Spark SQL, Spark Streaming,
Datasets, and DataFrames.
⮚ Developed and executed scripts on AWS Lambda to generate AWS CloudFormation templates.
⮚ Developed a microservice architecture using Python and Docker on an Ubuntu Linux
platform with HTTP/REST interfaces, deployed into a multi-node Kubernetes
environment.
⮚ Used Spark SQL with Python to create DataFrames and performed transformations on
them, such as adding schemas manually, casting, and joining DataFrames before storing them.
⮚ Worked on Spark Streaming with Apache Kafka for real-time data processing and
implemented an Oozie job for daily imports.
⮚ Used Spark DataFrame operations to perform the required validations on the data and to
run analytics on the Hive data.
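
The S3 loading bullet above might look like the following boto3 sketch; the bucket name, prefix layout, and local directory are assumptions for illustration.

```python
# Hypothetical sketch: upload local CSV exports into an S3 bucket,
# organizing objects under a date-based "folder" prefix.
import glob
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "analytics-landing"  # assumed bucket name

def upload_csvs(local_dir, prefix):
    """Upload every CSV in local_dir under the given S3 prefix."""
    for path in glob.glob(os.path.join(local_dir, "*.csv")):
        key = "{}/{}".format(prefix, os.path.basename(path))
        s3.upload_file(path, BUCKET, key)
        print("uploaded s3://{}/{}".format(BUCKET, key))

upload_csvs("/data/exports", "daily/2021-07-01")
```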
ENVIRONMENT:
Scala with the Akka framework, Java, J2EE, Sqoop, Kafka, CDH3, Kubernetes, PHP, Docker,
Cassandra, Python, Oozie, Scala collections, AWS cloud, Storm, Ab Initio, Apache, SQL, Elastic-
search, NoSQL, Bitbucket, HBase, Flume, Zookeeper, ETL, Agile.

Vertical Solutions, India Dec 2018 – Jul 2019

Python Developer
Responsibilities:
⮚ Responsible for gathering requirements, system analysis, design, development, testing and
deployment.
⮚ Participated in the complete SDLC process.
⮚ Analyzed and reported risk levels for business strategies and transactions.
⮚ Reviewed and updated internal risk management policies to accommodate new business
processes.
⮚ Produced monthly reports using Excel spreadsheets.
⮚ Utilized Joins and sub-queries to simplify complex queries involving multiple tables while
optimizing procedures and triggers in production.
⮚ Tested coding modifications and assisted with application and system testing to minimize
errors and downtime, achieving a 35% reduction in general and specific errors and
downtime problems.
⮚ Developed web-based OpenStack applications using Python and Django for large dataset
analysis.
⮚ Extensively used regular expressions and core Python features such as lambda, map, and
reduce; implemented logging using the Python logging library and profiling using cProfile
(see the sketch after this list).
⮚ Wrote many programs to parse Excel files and process user data with data validations.
⮚ Used a quant platform for numerical analysis of insurance premiums.
⮚ Used Subversion version control tool to coordinate team-development.
⮚ Developed tools to automate some base tasks using Shell Scripting, Python.
⮚ Designed and Developed User Interface using front-end technologies like HTML, CSS,
JavaScript, jQuery, Angular JS, Bootstrap and JSON.
⮚ Worked with regular expressions and the urllib modules.
⮚ Used PySpark to expose the Spark API to Python.
⮚ Wrote MapReduce jobs in Python for data cleaning and data processing.
⮚ Used different types of transformations and actions in Apache Spark.
⮚ Used Spark clusters to manipulate RDDs (resilient distributed datasets) and worked with
RDD partitioning.
⮚ Wrote Python automation tests using Selenium WebDriver across the Chrome, Firefox, and
IE browsers.
⮚ Built a unit test framework using Python's PyUnit for automation testing.
⮚ Experienced in Agile methodologies, Scrum stories, and sprints in a Python-based
environment, along with data analytics, data wrangling, and Excel data extracts.
⮚ Developed views and templates with Python and Django's view controller and templating
language to create a user-friendly website interface.
⮚ Worked in a DevOps environment for continuous integration and continuous deployment
using Jenkins and Puppet.
⮚ Configured and deployed project using the Amazon EC2 on AWS.
⮚ Designed and developed data management system using MySQL. Involved in Agile
Methodologies and SCRUM Process.
⮚ Created unit test/regression test framework for working/new code.
⮚ Used the Git version control tool with Jenkins to integrate the work done by all team
members.
⮚ Used the Scrum agile methodology, along with JIRA, for project tracking.
⮚ Developed entire frontend and backend modules using Python on Django Web Framework.
⮚ Responsible for debugging and troubleshooting the web application.
⮚ Participated in writing scripts for test automation using GraphQL APIs.
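
A minimal sketch of the logging-plus-profiling setup mentioned above; the function and logger names are illustrative assumptions.

```python
# Hypothetical sketch: structured logging with the stdlib logging library,
# plus cProfile to find where time is actually spent.
import cProfile
import logging
from functools import reduce

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("etl")

def normalize(records):
    """Filter and aggregate records with core functional builtins."""
    valid = [r for r in records if r >= 0]
    total = reduce(lambda a, b: a + b, map(float, valid), 0.0)
    log.info("normalized %d records, total=%.2f", len(valid), total)
    return total

# Profile the hot path, sorted by cumulative time per function.
cProfile.run("normalize(list(range(100000)))", sort="cumtime")
```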
ENVIRONMENT:
Python 2.7, Django, C++, Java, jQuery, MySQL, Oracle 11.2, Linux, Eclipse, Shell
Scripting, HTML, XHTML, SVN, CSS, AJAX, Bugzilla, JavaScript, Apache Web Server, Apache
Spark, Git, Jenkins.

Student Zone, India Nov 2016 – Nov 2018


Jr. Full Stack Python Developer
Responsibilities:
⮚ Worked with Linux systems and RDBMS database on a regular basis in order to ingest data
using Sqoop.
⮚ Used cloud-based services (AWS) to retrieve data.
⮚ Hands-on Scala programming for processing real-time information using Spark APIs in
the cloud environment.
⮚ Initiated a Spark context against Kafka brokers and processed live streaming
information as RDDs.
⮚ Worked on Spark using Python and Spark SQL for faster testing and processing of data.
⮚ Involved in enabling Amazon Kinesis Firehose to capture streaming data directly into S3
and Redshift; it automatically scales to match the data throughput and requires
no ongoing administration.
⮚ Developed and maintained continuous integration and deployment systems using Jenkins,
Ant, Akka, and Maven.
⮚ Used Akka as a framework to create reactive, distributed, parallel and resilient concurrent
applications in Scala.
⮚ Installed and configured Talend ETL in single- and multi-server environments.
⮚ Experience creating, dropping, and altering tables at run time, without blocking updates and
queries, using HBase and Hive.
⮚ Developed ETL test scripts based on technical specifications/data design documents and
source-to-target mappings.
⮚ Hands-on experience with Hortonworks tools such as Tez and Ambari.
⮚ Worked on Apache NiFi as an ETL tool for batch and real-time processing.
⮚ Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
⮚ Wrote the user console page in Lift along with the snippets in Scala. The product gives
users access to all their credentials and privileges within the system.
⮚ Used Oozie workflow engine to create the workflows and automate the MapReduce, Hive,
Pig jobs.
⮚ Implemented SEO-based Drupal modules, optimizing the search function across the site.
⮚ Supported setting up the QA environment and updating configurations for implementing
scripts with Pig and Sqoop; handled cluster coordination through Zookeeper.
⮚ Used lambda expressions to streamline employee-processing logic and avoid the need for
a separate class.
⮚ Developed UNIX shell scripts to load a large number of files into HDFS from the Linux file
system.
⮚ Experience creating Hive tables with HiveQL.
⮚ Used Hive join queries to join multiple tables of a source system and load them into
Elasticsearch tables (a sketch follows this list).
⮚ Developed a workflow in Oozie to automate the tasks of loading data into HDFS and pre-
processing it with Pig.
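
A hedged sketch of the Hive join flow described above, run through PySpark with Hive support; the database, table, and column names are assumptions for illustration.

```python
# Hypothetical sketch: join two source-system Hive tables with HiveQL
# and stage the result for a downstream Elasticsearch indexing job.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-join")
         .enableHiveSupport()
         .getOrCreate())

# Join the source tables in HiveQL (schema names are assumed).
joined = spark.sql("""
    SELECT s.student_id, s.name, e.course_id, e.grade
    FROM studentdb.students s
    JOIN studentdb.enrollments e
      ON s.student_id = e.student_id
""")

# Stage the joined result as a managed Hive table.
joined.write.mode("overwrite").saveAsTable("studentdb.student_grades")
```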

ENVIRONMENT:
ETL, Spark, Kafka, Python, Shell Scripting, SQL, Talend, Elasticsearch, Solr, Linux (Ubuntu), AWS,
Hortonworks, MongoDB, VPC, Lambda, Hive, Zookeeper, Pig, Sqoop, Oozie, Tez, Ambari,
YARN, Akka, Jenkins, Kinesis, Ant, MapReduce.
