Data Engineer

Phillips66

Job Overview

The Data Engineer will be responsible for creating and optimizing our data pipeline and data architecture, and for improving data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our data analysts and data scientists on data initiatives and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and projects.

Responsibilities for Data Engineer:


• Create and maintain an optimal data pipeline.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery.
• Build analytics tools and dashboards in Tableau that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
• Work with stakeholders, including business owners and the Data and Design teams, to assist with data-related technical issues and support their data needs.
• Create data tools for analyst and data scientist team members that assist them in building and optimizing models and visualizations.

Required Qualifications for Data Engineer


We are looking for a candidate with 2+ years of experience in a Data Engineer role who has attained a bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience in the following:
• Advanced working knowledge of SQL, including experience with relational databases and query authoring.
• Experience building and optimizing 'big data' pipelines, architectures, and data sets.
• Knowledge of data visualization best practices.
• Minimum of 1 year of experience in Tableau development.
• Experience with data preparation, pipeline, and integration tools such as Alteryx and Azure Data Factory.
• In-depth knowledge of relational databases (e.g., Oracle and SQL Server), including data warehousing concepts and best practices.
• Proficiency in SQL.
• Experience in at least one of the following languages: R, Python, Scala.
• Experience working in an Agile or Scrum-based environment.
• Ability to test and document end-to-end processes.
• Experience working with the Microsoft Azure or AWS platform is preferred.