Mounish M is a Data Engineer with over 5 years of experience in Big Data, Hadoop, Spark, and Azure services. He has hands-on expertise in SQL, Python, PySpark, and has worked on projects involving data migration to Azure Delta Lake and analytics development. Mounish has a strong understanding of data processing and Agile methodologies, and has contributed to various projects in roles requiring collaboration with onshore teams and clients.


Mounish M
Mail ID: [email protected]  Mobile No: 7204987052

Professional Summary:

• Around 5.3 years of professional IT experience as a Data Engineer with Big Data, Hadoop ecosystems, Spark, PySpark and Azure services.
• Hands-on experience with SQL, Python, HDFS, Hive, Spark, PySpark and Azure services (ADF, ADB and ADLS).
• Very good understanding of Hadoop and Spark architectures.
• Very good understanding of partitioning and bucketing concepts in Hive.
• Good at understanding data and data processing using PySpark.
• Very good experience writing code in Python and PySpark.
• Knowledge of data science concepts such as Python programming and data preparation.
• Good experience in de-normalizing Hive tables.
• Very good experience in Azure Databricks notebook development; involved in SCD Type 0, Type 1 and Type 2 implementations.
• Proficient in Object-Oriented Programming (OOP) concepts.
• Efficient team player with good communication skills and interaction with clients and team members.
• Experience with Azure Databricks and Data Lake, including building a Delta Lake.
• Hands-on experience writing notebooks using Spark.
• Good knowledge of database queries.
• Created mount points to read data from and ingest into Azure SQL DB using Azure Databricks.
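The SCD implementations mentioned above can be illustrated with a minimal sketch of the Type 2 merge logic in plain Python. Field and key names here are hypothetical; in the actual projects this would typically be expressed as a Delta Lake MERGE in a Databricks notebook.

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key="id", tracked=("city",), today=None):
    """SCD Type 2 sketch: expire changed rows and append new versions.

    dim_rows: existing dimension rows (dicts with 'is_current',
              'start_date', 'end_date').
    incoming: source rows (dicts) keyed by `key`.
    """
    today = today or date.today().isoformat()
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for src in incoming:
        cur = current.get(src[key])
        if cur and all(cur[c] == src[c] for c in tracked):
            continue  # no change in tracked attributes, nothing to do
        if cur:  # close out the old version
            cur["is_current"] = False
            cur["end_date"] = today
        out.append({**src, "is_current": True,
                    "start_date": today, "end_date": None})
    return out
```

With Delta Lake, the same logic is usually written as a MERGE that updates the matched row's current flag and end date and inserts the new version; Type 1 simply overwrites in place, and Type 0 never updates.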

Education:

• B.Tech, SVCET (JNTUA) College of Engineering, 2018

Professional Experience:
• Working as a Data Engineer at Cognizant, Bangalore, from Jan 2023 to date.
• Worked as a Software Engineer at Kyndryl Solutions Pvt Ltd, Bangalore, from Dec 2021 to Dec 2022.
• Worked as an Engineer at VIS Networks Pvt Ltd, Bangalore, from Dec 2018 to Nov 2021.

Technical Skills:

Azure Services : Azure Data Factory, Azure Databricks, Azure SQL, Azure Key Vault
Operating Systems : Windows
Programming Languages : SQL, Python, PySpark
Database Skills : MySQL, Oracle
Project Details:

Project:

Client : Target
Project Title : Build Azure Delta Lake
Role : Big Data Engineer
Skills : PySpark, Azure Databricks, Big Data, Hadoop

Description:

This project focuses on migrating data from multiple data warehouses to the cloud using Microsoft Azure technologies, including Azure Storage accounts, Azure Databricks and Azure Data Factory. Using ADF-provided connectors, data is extracted from the sources and landed in an Azure Gen2 data lake in Delta format. Workday data is processed into the Gen2 lake using API calls. Use cases are implemented as per business requirements, and data is maintained in different zones in the Delta Lake.
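The multi-zone layout described above can be sketched as a small path-builder for Delta tables in ADLS Gen2. The storage account and zone names here are assumptions for illustration, not the project's actual layout.

```python
# Hypothetical storage account and zone names; the real project layout may differ.
ACCOUNT = "targetdatalake"                 # ADLS Gen2 storage account (assumed)
ZONES = ("raw", "curated", "serving")      # Delta Lake zone containers (assumed)

def zone_path(zone: str, dataset: str) -> str:
    """Build an abfss:// path for a dataset in a given Delta Lake zone."""
    if zone not in ZONES:
        raise ValueError(f"unknown zone: {zone}")
    return f"abfss://{zone}@{ACCOUNT}.dfs.core.windows.net/delta/{dataset}"
```

A Databricks notebook would then read a landed Workday extract from the raw zone and write the transformed result to the next zone, e.g. `df.write.format("delta").save(zone_path("curated", "workday_hr"))`.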

Roles & Responsibilities:

• Followed Agile methodology; involved in sprint planning, sprint retrospectives and daily sync-up meetings.
• Involved in several major enhancements and in reprocessing all tables across the three data layers.
• Gathered business requirements from the onshore team and the client.
• Developed PySpark code as per onshore team and client requirements.
• Performed QA in the staging environment and reviewed with the onshore team; once everything looked good, deployed to production.
• Performed post-deployment QA and shared the QA document with the onshore team and the client.
• Logged daily efforts in JIRA at the end of each day.

Project:

Project : Live Landscape
Environment : ADF, PySpark, SQL, Hadoop, Big Data, Spark
Role : Big Data Engineer
Description:

The Live Landscapes application is a Mitie internal web-based application. It covers the creation and editing of buildings and the maintenance of building information, contact information, rooms, and occupancy and utilization information across geographies within Mitie. Since buildings are created in this application, the building data is considered master data. Data from the Live Landscapes application flows to multiple downstream streams, and data from different streams flows back into the application.

Roles & Responsibilities:

• Followed Agile methodology for developing analytics.
• Discussed requirements daily with inventors (data scientists).
• Developed code in Python, Scala or PySpark as suggested by the inventors.
• Conducted MRL meetings before deploying analytics into pre-production.
• Exposure to Azure Data Factory activities such as Lookup, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter and Wait.
• Followed Agile methodology; involved in sprint planning, sprint retrospectives and daily sync-up meetings.
• Involved in several major enhancements and in reprocessing all tables across the three data layers.
• Gathered business requirements from the onshore team and the client.
• Developed PySpark code as per onshore team and client requirements.
• Performed QA in the staging environment and reviewed with the onshore team; once everything looked good, deployed to production.
• Performed post-deployment QA and shared the QA document with the onshore team and the client.
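The ADF activities listed above compose into pipelines roughly like the following fragment, where a Lookup feeds a ForEach. Pipeline, activity and dataset names here are illustrative, not from the actual project.

```json
{
  "name": "pl_ingest_buildings",
  "activities": [
    {
      "name": "LookupBuildingList",
      "type": "Lookup",
      "typeProperties": {
        "source": { "type": "AzureSqlSource" },
        "dataset": { "referenceName": "ds_building_list", "type": "DatasetReference" },
        "firstRowOnly": false
      }
    },
    {
      "name": "ForEachBuilding",
      "type": "ForEach",
      "dependsOn": [
        { "activity": "LookupBuildingList", "dependencyConditions": [ "Succeeded" ] }
      ],
      "typeProperties": {
        "items": { "value": "@activity('LookupBuildingList').output.value", "type": "Expression" },
        "activities": [
          { "name": "CopyBuilding", "type": "Copy" }
        ]
      }
    }
  ]
}
```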

Declaration:

I assure you that, if selected and given the opportunity, I shall do my work most conscientiously, and I am sure I will prove to be an asset to your organization. I also hereby declare that the information given above is true to the best of my knowledge and belief.

(Mounish M)
