JD - Associate Data Engineer


Episource has devoted more than a decade to building risk adjustment solutions that measure
healthcare outcomes. The backbone of our consistent success has been our obsession with
data and technology. At Episource, every strategic initiative starts with the question: how
can data be “deployed”? Our analytics platforms and data lakes ingest huge quantities of data
daily to help our clients deliver services. We have also built our own machine learning and NLP
platform to add productivity and efficiency to our workflows. Together, these form a
foundation of tools and practices used by quantitative staff across the company.

We’re looking to hire someone to help scale the machine learning and NLP efforts at Episource.
You’ll work with the team that develops the models powering Episource’s product for
NLP-driven medical coding. The problems include improving our ICD code
recommendations, clinical named entity recognition, and information extraction from clinical
notes.

This is a role for highly technical engineers who combine outstanding oral and written
communication skills with the ability to code up prototypes and productionize them using a
broad range of tools, algorithms, and languages. Most importantly, they need the ability to
autonomously plan and organize their work assignments based on high-level team goals.

You will be responsible for setting an agenda to develop and build machine learning platforms
that positively impact the business, working with partners across the company, including
operations and engineering. You will work closely with the machine learning team to
design and implement back-end components and services. You will evaluate new
technologies, enhance the applications, and provide continuous improvements to produce
high-quality software.

Core Responsibilities

● Create and maintain optimal data pipeline architecture
● Set up CI/CD pipelines
● Design and develop distributed, high-volume, high-velocity, multi-threaded event
processing systems (see the sketch after this list)
● Apply software engineering best practices across the development lifecycle,
including agile methodologies, coding standards, code reviews, source management,
build processes, testing, and operations
● Deploy data pipelines to production based on CI/CD practices
● Design scalable implementations of the models developed by our Data Scientists
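
As a rough illustration of the kind of multi-threaded event processing the bullet above refers to, here is a minimal, self-contained sketch using Python's standard queue and threading modules. The event shape, worker count, and process_event logic are hypothetical placeholders, not Episource's actual system.

```python
# Minimal sketch: a pool of worker threads draining an in-memory event queue.
# All names (process_event, NUM_WORKERS, the event dict shape) are hypothetical.
import queue
import threading

events = queue.Queue()
NUM_WORKERS = 4

def process_event(event):
    # Placeholder transformation: a real pipeline might parse, enrich,
    # and forward the event to a downstream store.
    return {"id": event["id"], "status": "processed"}

def worker():
    while True:
        event = events.get()
        if event is None:          # sentinel value signals shutdown
            events.task_done()
            break
        process_event(event)
        events.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Feed a few sample events, then shut the workers down cleanly.
for i in range(10):
    events.put({"id": i})
for _ in threads:
    events.put(None)
events.join()
```

In a production system the in-memory queue would typically be replaced by a distributed broker and the workers by independently scaled consumers; the sketch only shows the basic producer/worker pattern.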
Job Requirements
● Scripting skills: strong scripting (Python) and automation skills
● Experience building modern web applications (preferably RESTful) with Django/Flask
frameworks or equivalent (see the sketch after this list)
● Experience with cloud platforms (AWS/GCP), preferably AWS
● Experience with streaming architectures
● Good understanding of Docker and container platforms such as Docker Swarm or Kubernetes
● Provisioning/configuration management using Terraform, Ansible, etc.
● Comfortable with Git and GitHub
● Contributions to blogs, open-source projects, and communities will be viewed positively
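
For the RESTful web-application requirement above, the sketch below shows a minimal Flask endpoint of the general kind the role calls for. The /codes route, the payload shape, and the placeholder response are hypothetical examples, not an actual Episource API.

```python
# Minimal sketch of a RESTful endpoint with Flask; route and payload are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/codes", methods=["POST"])
def recommend_codes():
    # Accept a JSON body such as {"note": "..."}; tolerate missing/invalid JSON.
    payload = request.get_json(silent=True) or {}
    note = payload.get("note", "")
    # Placeholder: a real service would call the ML/NLP models here.
    return jsonify({"note_length": len(note), "codes": []})

if __name__ == "__main__":
    app.run(debug=True)
```

Run it locally and POST a JSON body to http://127.0.0.1:5000/codes to see the placeholder response.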
