Data Engineer - Bangalore - Jio - Data Platform

About the job

Job Description

The Big Data Centre of Excellence builds state-of-the-art data analytics and management systems
used across Jio Platforms. We primarily enable different businesses to ingest, process, manage and
derive insights from data. On top of these systems we have developed numerous data intelligence
capabilities that communicate with each other and help businesses make informed decisions. The
Big Data COE encourages the use of open-source tools and services to build robust, resilient
systems that form the core of various AI/ML capabilities.

This role is for a Data Engineer with solid development experience who will focus on creating robust
data pipelines and enhancing existing ETL processes. You will be an integral part of the development
team, examining requirements and designing optimal solutions. The role suits a self-motivated
individual with knowledge of the Hadoop ecosystem and its various tools and services. The candidate
will perform hands-on activities including design, documentation, development and testing of new
functionality.

Location: Bangalore

Basic Qualifications

3 to 6 years of relevant work experience with a bachelor's degree, master's degree, or PhD.
Preferred Qualifications

Excellent coding skills in Java/Scala/Python, especially with OOP constructs and concurrency;
experience building highly optimized software systems.

Well versed in shell scripting for working on Unix/Linux-based systems.

Experience in designing, building, tuning and troubleshooting distributed, scalable data pipelines and
data streaming solutions.

In-depth knowledge of and hands-on experience with Hadoop-based computing solutions, including
but not limited to MapReduce, Spark, Hive, Tez, YARN, HBase, Kudu, Druid, Presto and Impala.

Experience building real-time streaming solutions with technologies including but not limited to
Kafka, Spark Streaming, Structured Streaming, Apache Flink, Apache Beam and NiFi (see the sketch
after this list for an illustration).
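
For illustration only (not part of the original posting), the following is a minimal sketch of the kind of
real-time pipeline described above: a Spark Structured Streaming job in Scala that reads events from
Kafka and writes them to Parquet with checkpointing. The broker, topic and path names are
hypothetical placeholders.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    object ClickstreamIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clickstream-ingest")
          .getOrCreate()

        // Read raw events from a Kafka topic (hypothetical broker and topic names).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")
          .option("subscribe", "clickstream")
          .load()
          .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")

        // Write the stream to Parquet, checkpointing state for fault tolerance (hypothetical paths).
        val query = events.writeStream
          .format("parquet")
          .option("path", "/data/clickstream")
          .option("checkpointLocation", "/checkpoints/clickstream")
          .trigger(Trigger.ProcessingTime("1 minute"))
          .start()

        query.awaitTermination()
      }
    }

Checkpointing and an explicit trigger interval are what make such a job resilient to restarts, which is
the kind of tuning and troubleshooting this role calls for.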

Responsibilities

Build and extend reusable platform components and frameworks within the Big Data ecosystem, with
a highly modular and extensible design.
Iteratively improve existing solutions and the technology stack by prototyping with the latest
bleeding-edge technologies.

Work efficiently both as an Individual Contributor (IC) and as a team player.

Mentor, guide and lead other developers and team members in building robust, highly resilient data
processing pipelines.

Encourage sound coding practices through rigorous code reviews and a test-driven development
(TDD) approach (illustrated in the sketch after this list).
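
As a hedged illustration of the TDD style mentioned above (the posting does not name a test
framework, so ScalaTest is assumed here), a deterministic unit test for a small, pure pipeline
transformation might look like this:

    import org.scalatest.funsuite.AnyFunSuite

    object Dedup {
      // Keep the highest version seen for each key, assuming (key, version) input pairs.
      def latestPerKey(records: Seq[(String, Int)]): Map[String, Int] =
        records.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).max }
    }

    class DedupSpec extends AnyFunSuite {
      test("latestPerKey keeps the highest version for each key") {
        val in = Seq("a" -> 1, "a" -> 3, "b" -> 2)
        assert(Dedup.latestPerKey(in) === Map("a" -> 3, "b" -> 2))
      }
    }

Writing the test first for pure transformations like this keeps pipeline logic easy to review and refactor.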

Desired Skills and Experience

Apache Spark, Hive, Apache Kafka, Scala, Hadoop, SQL
