Data Engineer JD

This job posting seeks a data engineer to design and implement scalable data solutions. Key responsibilities include collaborating with teams to build data pipelines and to analyze and assess data quality. Candidates should have 3+ years of experience as a data engineer along with skills in Python/R, SQL, cloud technologies, and data migration. The ideal candidate will also have experience designing data processing pipelines and configuring data ingestion and provisioning systems.

Data Engineer

Summary:
In this role, you will apply your skills across a variety of projects suited to your interests.
Whether you are a generalist who wants to do a bit of everything or a specialist in particular
areas of data engineering, we will support your goals and find the right project for you.

Responsibilities:
 Design and implement modern scalable data solutions together with leads and
architects, using a range of new and emerging technologies.
 Work within Agile frameworks and delivery approaches.

Requirements:
Must have:
 Minimum of 3 years of experience working as a Data Engineer
 Solid understanding of cloud-based technologies (at least one of Azure,
GCP, or AWS).
 Experience with Python, R, or Scala.
 Knowledge of SQL and NoSQL databases.
 Demonstrated experience in building data pipelines in data analytics implementations
such as Data Lake / Data Warehouse / Lakehouse.
 Ability to collaborate closely with business analysts, architects, and client stakeholders
to create technical specifications.
 Ability to analyze and profile data, and assess data quality in the context of business
rules.
 Extensive experience with data migration.

Nice to have:
 Proven experience in end-to-end implementation of data processing pipelines.
 Experience configuring or developing custom code components for data ingestion, data
processing, and data provisioning, using Big Data and distributed computing platforms
such as Hadoop or Spark.
 Proficiency in data modeling, for both structured and unstructured data, across various
layers of storage.
 Experience using and understanding infrastructure-as-code platforms such as Terraform or
Cloud Build.
 Hands-on experience with streaming data.
 Understanding of design patterns such as Lambda architecture, Data Lake, and microservices.

Soft Skills:
 Team player with experience working with distributed and global teams.
 Excellent communication skills with the ability to articulate and present complex
information concisely to both technical and non-technical audiences.
 Analytical and critical thinking skills, with the readiness to challenge conventional thinking.
