
AI/DATA ENGINEER

Supriya
[email protected] | 346-375-0907 | www.linkedin.com/in/bhandavisupriyachikkam
-------------------------------------------------------------------------------------------------------------------------------------------
Summary:
• Experienced Data Engineer and AI/ML Specialist with 8+ years of experience delivering data solutions, machine learning models, and cloud applications.
• Proficient in Python, PySpark, SQL, and cloud platforms including AWS, Azure, and GCP.
• Skilled in designing and deploying predictive models that optimize business processes, enhance system performance, and improve operational efficiency.
• Expertise in building scalable ETL pipelines, automating workflows, and ensuring data integrity in cloud environments.
• Strong background in anomaly detection, reinforcement learning, and real-time data processing for high-traffic networks.
• Proven track record of reducing downtime, improving system reliability, and optimizing network performance.
• Collaborative team player with a focus on aligning technical solutions with business goals and enhancing customer satisfaction.
• Experienced in root cause analysis, troubleshooting, and ensuring data quality and compliance with GDPR and CCPA.
• Expertise in cloud-native technologies, machine learning algorithms, and integrating DevOps practices.
• Committed to driving innovation and delivering impactful solutions in AI/ML-driven environments.
-------------------------------------------------------------------------------------------------------------------------------------------
Education:
Master of Science in Information Technology – Arkansas Tech University | Graduated: December 2024
Bachelor of Technology in Information Technology – GITAM University | Graduated: 2016
-------------------------------------------------------------------------------------------------------------------------------------------
Certifications:
• Infosys Certified Global Agile Developer
• Infosys Certified Microsoft Azure Fundamentals (AZ-900)
• Infosys Certified Python Programmer
• Infosys Certified Python Associate
-------------------------------------------------------------------------------------------------------------------------------------------
Technical Skills:
Programming Languages: Python, PySpark, SQL, Unix Shell Scripting
Data Engineering: Apache Spark, Snowflake, Azure Data Factory (ADF), Azure Databricks, Kafka, ETL Tools (e.g., Turbine)
Cloud Platforms: AWS (SageMaker), Microsoft Azure (ADF, Azure ML Studio, Blob Storage, Data Lake Storage Gen2), GCP AI Platform
Machine Learning: ARIMA, LSTM, Isolation Forest, Autoencoders, Reinforcement Learning, Hyperparameter Tuning
Data Visualization: Tableau, Power BI
Big Data Processing: Spark, Databricks
Real-time Data Processing: Kafka, Streaming Analytics
DevOps: Jenkins, Docker, CI/CD Pipelines
Database Technologies: Snowflake, Delta Tables, SQL-based Databases
Anomaly Detection: Isolation Forest, Autoencoders
Model Deployment: AWS SageMaker, Azure ML Studio, GCP AI Platform
Compliance: GDPR, CCPA
Automation: Batch Processing, ETL Pipelines, DevOps Practices
-------------------------------------------------------------------------------------------------------------------------------------------
Professional Experience
Charter Communications, CT Jan 2024 – Present
Sr. Data Engineer/AI-ML
Responsibilities:
• Developed and deployed predictive models to identify network anomalies, minimize downtime, and enhance system reliability, resulting in a 20% improvement in uptime.
• Designed and maintained scalable data pipelines to process and analyze large volumes of real-time network data using Spark, Kafka, and Snowflake.
• Built traffic forecasting models leveraging machine learning algorithms such as ARIMA and LSTM, enabling dynamic bandwidth allocation and reducing network congestion by 30% (see the illustrative forecasting sketch below).
• Implemented anomaly detection algorithms using techniques such as Isolation Forest and Autoencoders to detect unusual traffic patterns and potential security threats in real time (see the illustrative sketch after this role's Environment line).
• Optimized network routing strategies through reinforcement learning algorithms, reducing latency by 15% and improving data throughput across key regions.
• Collaborated with cross-functional teams, including network engineers and product managers, to align AI/ML solutions with business goals and operational needs.
• Deployed machine learning models into production environments using AWS SageMaker, ensuring scalability and robustness.
• Built interactive dashboards and visualization tools using Tableau and Power BI for real-time monitoring of network performance and predictive insights.
• Conducted root cause analysis on network outages using historical data and provided actionable recommendations to mitigate similar issues in the future.
• Ensured the integrity and security of data pipelines by implementing compliance measures aligned with GDPR and CCPA standards.
• Automated network optimization workflows by integrating CI/CD pipelines using Jenkins and Docker, reducing deployment times by 40%.
• Continuously improved model performance through hyperparameter tuning and retraining cycles based on new data and evolving network conditions.
• Designed and implemented cloud-based solutions for real-time streaming analytics, enabling efficient data ingestion and analysis for high-traffic networks.
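
Below is a minimal, hypothetical sketch of ARIMA-based traffic forecasting in the spirit of the bullet above, using statsmodels on synthetic hourly data; the production models, features, and model orders are not described in this resume and are assumed here.

    # Illustrative ARIMA forecasting sketch on synthetic hourly traffic.
    # The model order and data are assumptions, not the production setup.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic hourly bandwidth series with a daily cycle plus noise.
    rng = np.random.default_rng(42)
    hours = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
    traffic = (
        100
        + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)
        + rng.normal(0, 5, len(hours))
    )
    series = pd.Series(traffic, index=hours)

    # Fit a simple ARIMA(2, 0, 1); real order selection would use AIC/BIC.
    model = ARIMA(series, order=(2, 0, 1)).fit()

    # Forecast the next 24 hours to inform bandwidth allocation.
    print(model.forecast(steps=24).head())

In practice the forecast would feed a bandwidth-allocation job rather than a print statement.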
Environment: AWS, Apache Kafka, Apache Spark, Snowflake, TensorFlow, PyTorch, Scikit-learn, Keras, Apache Airflow, Jenkins, Docker, Tableau, Power BI, Grafana, Kibana, GDPR, CCPA.
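
A hedged sketch of the Isolation Forest approach named above, run on invented traffic features (packets/sec and mean packet size); the real feature set, contamination rate, and thresholds are assumptions.

    # Illustrative Isolation Forest anomaly detection on synthetic
    # network-traffic features; not the actual production pipeline.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Normal traffic: packets/sec and mean packet size cluster together.
    normal = rng.normal(loc=[1000, 512], scale=[100, 50], size=(500, 2))
    # A few anomalous bursts, e.g. possible scans or floods.
    bursts = rng.normal(loc=[5000, 64], scale=[500, 10], size=(10, 2))
    X = np.vstack([normal, bursts])

    # contamination is the assumed anomaly share; tune per environment.
    clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
    labels = clf.predict(X)  # -1 = anomaly, 1 = normal
    print("flagged:", int((labels == -1).sum()), "of", len(X))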
-------------------------------------------------------------------------------------------------------------------------------------------
Accenture, India Jan 2023 – Aug 2023
Application Development Senior Analyst
Responsibilities:
• Developed ADB (Azure Databricks) notebooks using PySpark to perform data transformations per business requirements (see the illustrative sketch after this role's Environment line).
• Designed ETL pipelines using Azure Data Factory (ADF) for batch data processing, loading data into Delta tables, and implemented scripts for data storage and transformation.
• Created triggers to schedule pipelines and deployed changes to the production environment.
• Maintained changes in production environments, troubleshooting bugs and resolving issues.
• Led the migration of large volumes of structured and unstructured data from legacy systems to Azure Blob Storage and Azure Data Lake Storage Gen2, ensuring data consistency and minimal downtime.
• Applied DevOps practices to enhance multi-team project integration.
Environment: Azure Data Factory, Azure Blob Storage, Azure Data Lake Storage Gen2, Azure Databricks (PySpark), and Azure DevOps for pipelines.
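
A minimal notebook-style sketch of the PySpark transformation work described above; the path, table name, and columns are hypothetical placeholders, not artifacts from the actual project.

    # Illustrative Azure Databricks (ADB) notebook cell: read a raw batch
    # landed by ADF, apply transformations, write a Delta table.
    # All names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("adb-transform-sketch").getOrCreate()

    # Read raw batch data (path is illustrative).
    raw = spark.read.format("parquet").load("/mnt/raw/orders")

    # Business transformations: typed columns, derived fields, filtering.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("net_amount", F.col("amount") - F.col("discount"))
           .filter(F.col("status").isNotNull())
    )

    # Write to a Delta table (Delta assumed available, as on Databricks).
    clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")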
-------------------------------------------------------------------------------------------------------------------------------------------
LTIMindtree, India Oct 2022 – Jan 2023
Senior Data Engineer
Responsibilities:
• Led a team in debugging data discrepancies and delivering accurate solutions within set timelines using Azure Data Factory, Databricks, and SQL-based solutions.
• Implemented batch data processing and migrated data to cloud-based storage solutions.
• Automated batch processing workflows and scheduled regular data loads using Azure Data Factory pipelines and triggers to ensure seamless data flow.
• Implemented data transformation and reconciliation logic in Azure to ensure consistency between source and target systems.
• Applied optimization techniques such as partitioning, indexing, and parallel processing within Azure Data Factory and Azure Databricks to improve batch processing performance (see the illustrative sketch below).
Environment: Microsoft Azure, Azure Data Factory (ADF), Batch Data Processing, ETL Pipelines, PySpark, SQL-based solutions, Azure Databricks, Data Transformation, Reconciliation Logic.
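
One way the partitioning optimization above might look in Azure Databricks, sketched with hypothetical table and column names:

    # Illustrative partitioning optimization: write a large batch as a
    # Delta table partitioned by load date so downstream jobs can prune
    # partitions. Table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("batch-optimize-sketch").getOrCreate()

    df = spark.read.table("bronze.transactions")

    # Repartition by the partition column first to avoid many small files.
    (
        df.withColumn("load_date", F.to_date("ingest_ts"))
          .repartition("load_date")
          .write.format("delta")
          .mode("overwrite")
          .partitionBy("load_date")
          .saveAsTable("silver.transactions")
    )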
-------------------------------------------------------------------------------------------------------------------------------------------
Infosys Ltd, India Dec 2018 – Oct 2022
Technology Analyst
Responsibilities:
• Worked with SMEs to develop analysis documentation for change requests and for new implementations for new and existing retailers.
• Set up data pipelines using the Turbine ETL tool to onboard new retailers and ensure data accuracy.
• Investigated and addressed complex customer issues and production environment bugs by analyzing root causes, implementing fixes, and testing solutions to ensure smooth operations.
• Migrated and validated data during transitions from on-premises systems to the Azure cloud (see the validation sketch after this role's Environment line).
• Extensively used Azure Databricks, PySpark, and SQL for migration scripts; monitored the data transformation process in the ETL tool and fixed issues whenever processes failed.
• Developed data models for extracting and transforming data from Azure storage containers using Unix shell scripts.
• Supported UAT and production environments, resolving data quality and transformation issues.
• Automated the transfer of large data sets to the Azure cloud using scripts and tools such as Turbine and PySpark.
• Wrote automation scripts to reliably copy high volumes of data to the Azure cloud.
• Troubleshot and resolved data-related issues during ETL processes using Azure Databricks and other ETL tools.
Environment: Microsoft Azure, Turbine, Custom ETL Pipelines, PySpark, Python, SQL, Unix Shell Scripting, Azure SQL Database, Azure DevOps, Version Control Systems.
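
A sketch of the kind of migration validation described above, checking row-count and aggregate parity between an on-premises extract and its Azure copy; every path and column name here is invented.

    # Illustrative migration validation: compare row counts and a simple
    # column aggregate between source and target. Names are invented.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("migration-validate-sketch").getOrCreate()

    source = spark.read.parquet("/mnt/onprem_extract/customers")
    target = spark.read.parquet("abfss://data@account.dfs.core.windows.net/customers")

    # Row-count parity is the cheapest consistency check.
    src_count, tgt_count = source.count(), target.count()
    assert src_count == tgt_count, f"row mismatch: {src_count} vs {tgt_count}"

    # A column aggregate catches silent truncation or type drift.
    src_sum = source.agg(F.sum("balance")).first()[0] or 0.0
    tgt_sum = target.agg(F.sum("balance")).first()[0] or 0.0
    assert abs(src_sum - tgt_sum) < 1e-6, "balance checksum mismatch"

    print("migration validated:", src_count, "rows")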
-------------------------------------------------------------------------------------------------------------------------------------------
Vision Technologies June 2016 – Nov 2018
Data Analyst
Responsibilities:
• Extracted, transformed, and analyzed large datasets using SQL, Excel, and Python to support business decision-making.
• Designed and developed interactive dashboards and reports in Tableau and Power BI to visualize key performance metrics and trends.
• Conducted predictive analysis using statistical models to forecast sales, customer behavior, and inventory needs.
• Performed data cleansing and validation to ensure accuracy and consistency in reporting and analysis.
• Collaborated with cross-functional teams to identify business requirements and translate them into actionable insights.
• Analyzed customer behavior and churn patterns, developing retention strategies that improved customer retention by 12%.
• Automated recurring reports, streamlining reporting processes and reducing manual effort by 25%.
• Conducted A/B testing to evaluate the effectiveness of marketing campaigns and promotional strategies (see the illustrative sketch after this role's Environment line).
• Built financial models and performed variance analysis to monitor revenue, expenses, and budget allocations.
• Partnered with supply chain teams to optimize inventory management, reducing holding costs by 18%.
Environment: SQL, MS Excel, Oracle DB, Tableau, Power BI, Python, R, Regression, Time Series Analysis, Forecasting.
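
A small sketch of the A/B test evaluation referenced above, using a two-proportion z-test from statsmodels; the conversion counts are invented for illustration.

    # Illustrative two-proportion z-test for an A/B campaign comparison.
    # Counts below are invented, not real campaign data.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 495]      # control (A), variant (B)
    samples = [10_000, 10_000]    # visitors in each arm

    stat, p_value = proportions_ztest(conversions, samples)
    print(f"z = {stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Variant B's conversion rate differs significantly from A.")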
