
Role Profile

Designation: Data DevOps Engineer
Department: Research & Development
Reporting to: Head of Data Platform
Locations available: Global (permanent, full-time)
A bit about us
We’re a world-leading smart mobility SaaS company operating across 23 countries. Our teams are collaborative, vibrant and
fast-growing, and all team members are empowered with the freedom to influence our products and technology.

Links:
https://www.cartrack.co.za/
https://www.cartrack.sg/
https://developer.cartrack.com/

Cartrack has a diversified portfolio of products in the vehicle space, including vehicle recovery, fleet
management, vehicle sales and workforce optimisation, to name a few.

This role sits within the data analytics platform team, which functions across Asia, Africa and Europe.

Are you curious, innovative and passionate? Do you take ownership, embrace challenges, and love problem-solving?

Skills we are looking for, and examples of day-to-day tasks


● Linux system administration – Managing virtual machines on-premises and in the cloud, from initial
provisioning and automation to optimisation for specific workloads; for example, disabling swap for
Kubernetes or tuning the kernel for Elasticsearch (a minimal sketch of such checks follows this list).
Monitoring, debugging and network management are also vital to your success as a Data DevOps Engineer,
since terabytes of data flow through our systems daily.
● Kubernetes management – kubectl, Kustomize, Helm charts. Kubernetes is our preferred tool for managing all
infrastructure: it gives us a good level of abstraction to standardise on, and community Helm charts mean some
of the work is already available from official sources. More often than not, though, these charts need to be
extended and improved for our specialised use cases. We are looking for an individual with a passion for
Kubernetes who enjoys the flexibility it provides, but who also understands that it requires continual
improvement for non-standard use cases: most of our services are stateful, and while Kubernetes was
engineered for stateless systems, it is evolving towards better stateful application management (see the
storage-audit sketch after this list).
● Infrastructure as code to automate on-premises and cloud infrastructure, because flexibility is key when a
Spark cluster needs to be scaled with different configurations for large batch processing. The aim is to
empower data engineers to scale clusters dynamically via configuration files and simple deployment methods,
so they can focus on processing challenges rather than infrastructure challenges.
● Operations work covering high availability, deployment and upgrades of distributed systems such as Kafka,
Kubernetes, Spark, Trino, Flink and Airflow. Your focus will be empowering data scientists and data engineers
by taking systems from initial proof of concept to fully functional deployment, as well as tuning and
monitoring these systems.
● Building and deploying systems for metrics tracking, monitoring and debug logging. Kubernetes Helm charts
often ship with integrations for systems such as Prometheus and Elasticsearch, but sometimes a custom
integration is required, or metrics need to be extended to debug a system or improve its performance (see the
metrics sketch after this list). You’ll work closely with the people who need these extensions, and you will
learn more about how these systems function along the way.
● CI/CD build systems to ensure our teams can deploy frequently and safely. We are looking for experience in
automating Docker image builds and Java code builds, and deploying them to production systems (see the
build-and-push sketch after this list). Having checks and balances in place before deployment is important:
we want to minimise downtime during upgrade rollouts and avoid integration issues. Because we have teams
across the globe, there are no downtime windows and the system needs to remain available at all times.
● Troubleshooting networking (TCP/IP, Calico/Weave, VLANs, tcpdump, routing, etc.) on-premises, in the cloud,
and in and between Kubernetes clusters.
● Security awareness – design systems with security in mind rather than retrofitting solutions after the fact.
Keep up to date with the security landscape and identify important risks before they become a problem. The
Log4j vulnerability gave everyone in the security industry a run for their money; as similar threats become
known, we sometimes need to deprioritise other tasks to roll out images with security fixes and actively
detect these threats in an ever-connected world.
● User access and credential management – liaising with the relevant teams inside Cartrack to provision and
grant access to systems based on each user’s function. For example, a data scientist has a different
permissions and access structure from a data engineer; it is vital to understand these differences and
engineer the systems accordingly, so that we remain compliant with the legislation in each operating territory.
● Responding to incidents and requests, troubleshooting with teammates spread across the globe (Asia,
Europe and South Africa). Because things sometimes do go wrong, being flexible enough to respond to
incidents and remediate them as quickly as possible is vital; for example, quickly detecting an issue with a
database server and adding extra disk so others can resume their tasks without too much interruption.
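
As a hypothetical illustration of the node-tuning work in the Linux bullet above, the sketch below checks two well-known settings: that swap is disabled, which the kubelet expects by default, and that vm.max_map_count meets Elasticsearch’s documented minimum of 262144. The script and its check logic are our own illustration, not an existing Cartrack tool.

    # node_tuning_check.py - hypothetical pre-flight check for a data node (Linux only).
    from pathlib import Path

    ES_MIN_MAX_MAP_COUNT = 262144  # minimum documented by Elasticsearch

    def swap_is_disabled() -> bool:
        # /proc/swaps starts with a header line; any further lines mean active swap.
        return len(Path("/proc/swaps").read_text().strip().splitlines()) <= 1

    def max_map_count() -> int:
        return int(Path("/proc/sys/vm/max_map_count").read_text())

    if __name__ == "__main__":
        problems = []
        if not swap_is_disabled():
            problems.append("swap is enabled; disable it before running the kubelet")
        if max_map_count() < ES_MIN_MAX_MAP_COUNT:
            problems.append(f"vm.max_map_count is {max_map_count()}, below Elasticsearch's minimum")
        print("\n".join(problems) if problems else "node tuning checks passed")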
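
For the stateful-workload concern in the Kubernetes bullet, here is a minimal storage-audit sketch using the official Kubernetes Python client to flag PersistentVolumeClaims that are not Bound. It assumes a reachable cluster and a local kubeconfig; the namespace is a made-up example.

    # pvc_audit.py - list PVCs that are not yet Bound in a given namespace.
    from kubernetes import client, config

    def report_unbound_pvcs(namespace: str) -> None:
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        core = client.CoreV1Api()
        for pvc in core.list_namespaced_persistent_volume_claim(namespace).items:
            if pvc.status.phase != "Bound":  # phases: Pending, Bound, Lost
                print(f"{namespace}/{pvc.metadata.name}: {pvc.status.phase}")

    if __name__ == "__main__":
        report_unbound_pvcs("data-platform")  # hypothetical namespace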
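
The metrics bullet mentions custom integrations; the common pattern with the prometheus_client library is sketched below: expose an application-specific metric over HTTP for a Prometheus server to scrape. The metric name, port and stand-in measurement are all illustrative.

    # pipeline_lag_exporter.py - expose a custom gauge at :8000/metrics.
    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric: observed end-to-end lag of a data pipeline.
    PIPELINE_LAG = Gauge("pipeline_lag_seconds", "Observed end-to-end pipeline lag")

    if __name__ == "__main__":
        start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
        while True:
            PIPELINE_LAG.set(random.uniform(0, 5))  # stand-in for a real measurement
            time.sleep(15)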
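
Finally, for the CI/CD bullet, a minimal build-and-push sketch using the Docker SDK for Python. The registry, repository and tag are placeholders; a real pipeline would derive the tag from the commit SHA and gate the push on passing tests.

    # build_and_push.py - build an image from a context directory and push it.
    import docker

    def build_and_push(context: str, repo: str, tag: str) -> None:
        client = docker.from_env()
        client.images.build(path=context, tag=f"{repo}:{tag}")  # raises BuildError on failure
        for line in client.images.push(repo, tag=tag, stream=True, decode=True):
            if "error" in line:
                raise RuntimeError(line["error"])

    if __name__ == "__main__":
        build_and_push(".", "registry.example.com/data-platform/etl", "v0.1.0")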

You have
Required experience:
● Kubernetes cluster deployment and administration.
● Deep understanding of Kubernetes storage management and abstractions.
● Understanding of how Helm charts work, their limitations, and how to work around those limitations.
● Linux system provisioning and administration (Nutanix, VMware, KVM).
● System and cluster monitoring (Prometheus and/or Elasticsearch).
● Understanding of how Docker works, and familiarity with writing Dockerfiles and automating the build process.
● Network management and troubleshooting across Kubernetes, cloud and on-premises systems.

Interests, or extra skills that may still need refining:


● Open-source database administration (Postgres, MySQL).
● Distributed computing engines (Spark, Trino, Presto, Flink).
● Object storage management and administration.
● CI/CD pipeline development for Docker and/or code builds (Java).
● System automation (Terraform, Cloudformation, Ansible, Python scripting, Shell scripting).
● Event based systems (Kafka, Elasticsearch).
● Cloud computing (Architecture, Operations, Cost management).
● User and access management (LDAP, IAM policies, Databases).
● Platform security, monitoring and continual auditing and improvement.
● Development environment and scheduler management (JupyterHub, Airflow).
● Automation – as you refine a repeatable process, automate it to free up your time for new tasks, so you can
keep improving and evolving your skill set.

Qualifications & Experience:


● Bachelor’s degree or advanced diploma in IT, Computer Science or Engineering, with 3 years of experience in a
software/technology environment where you were responsible for architecting and deploying distributed
systems.
● Candidates without a bachelor’s degree or an advanced diploma must meet an equivalent experience
requirement: a minimum of 6 years of experience in a software/technology environment where you were
responsible for architecting and deploying distributed systems.

Get in touch
We’d love to hear from you, chat with you and learn more about you!
Please send your CV to [email protected]
Subject: Name_Data Devops Engineer

Interview process:

1. 15-minute technical interview based on your work experience, plus an introduction to the Data DevOps position.
2. Short psychometric evaluation.
3. Work sample exercise. You select an exercise from a list to complete; its purpose is to guide the
conversation during the final technical interview. Note that the exercise is not graded, so there is no pass
or fail threshold. It is a simplified example of the day-to-day tasks that would form part of your KPIs.
4. 30–45-minute technical interview.
5. 15–30-minute interview with executive sponsor of the data platform function.

Reference number: 151A
