Clearcode Tracker-Case-Study-by-Clearcode
Case study
Customer data platforms (CDPs), on the other hand, collect first-party data from a
range of sources, create single customer views (SCVs), and push audiences to
other systems and tools.
Although the functionality and goals of AdTech and MarTech platforms vary, they
all have one thing in common: they all need a component that collects and
delivers data from different sources (e.g. websites) to different systems (e.g.
DSPs).
Key points:
• The tracker is used to collect event data (e.g. impressions, clicks, and
video metrics) from different sources.
• We designed and built our own tracker system that can be used for
future client projects.
• The tracker can be integrated into existing platforms and tools, which
saves development time and costs.
• The two best-performing technology stacks were Go (aka Golang) and
Nginx + Lua, but we chose Go because of its growing popularity.
About our Tracker Project
One of our development teams completed an internal research and development
project to build a tracker that can collect a range of events, such as:
• Impressions
• Clicks
• Video metrics (e.g. watch time and video completion rates)
Our project focused on building a tracker for a DSP, but it can be adapted to any
AdTech or MarTech platform that needs to collect event data.
The main goal of the project was to build a tracker with core functionality,
example extensions, example deployment scripts, documentation, and allow the
tracker to be integrated into other components, such as analytics tools and
reporting databases.
With our Tracker project, we followed the same development process that we
apply to all our AdTech and MarTech development projects for our clients.
MVP Scoping Phase
The goal of the Minimum Viable Product (MVP) Scoping phase was to define the
scope of the project and select the architecture and tech stack. This included
deciding what events the tracker should collect and how it would collect them.
Here’s an overview of what the story map looked like:
Defining the Functional Requirements
Based on the results from the story mapping sessions, we created a list of
functional requirements for the tracker.
The functional requirements relate to the features and processes of the tracker.
• Process requests and generate events with proper dimensions and metrics.
• Allow request types and preprocessing of events to be configured.
• Extend its basic functionality with plugins by exposing its API.
• Expose generated events to specified collectors (plugins) and allow them to
be attached to specific event types by configuration.
• Deployable to AWS, GCP, and Azure.
• Ability to integrate with custom plugins.
• Ability to integrate with other DSP components such as the banker.
• Platform scalability — the tracker needs to be able to handle large increases
in events.
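The plugin requirements above — exposing generated events to collectors and attaching collectors to specific event types by configuration — could be modeled as a small interface. This is a hedged sketch; the names `Collector`, `Router`, and the Event fields are our illustrative assumptions:

```go
package main

// Event mirrors the tracker's generated events; the fields here are
// illustrative assumptions, not the actual schema.
type Event struct {
	Type       string            // e.g. "impression", "click", "video"
	Dimensions map[string]string // e.g. campaign ID, creative ID
}

// Collector is the plugin-facing interface: any downstream component
// (analytics tool, reporting database, etc.) implements Collect.
type Collector interface {
	Collect(ev Event)
}

// Router attaches collectors to specific event types (as configuration
// would) and exposes each generated event to the collectors for its type.
type Router struct {
	byType map[string][]Collector
}

func NewRouter() *Router {
	return &Router{byType: make(map[string][]Collector)}
}

// Attach registers a collector for one event type.
func (r *Router) Attach(eventType string, c Collector) {
	r.byType[eventType] = append(r.byType[eventType], c)
}

// Dispatch sends an event to every collector attached to its type.
func (r *Router) Dispatch(ev Event) {
	for _, c := range r.byType[ev.Type] {
		c.Collect(ev)
	}
}

// memoryCollector is a trivial example plugin that buffers events.
type memoryCollector struct{ seen []Event }

func (m *memoryCollector) Collect(ev Event) { m.seen = append(m.seen, ev) }
```

An interface this narrow is what lets the same tracker core feed analytics tools, reporting databases, or custom client plugins without changes to the core itself.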
When benchmarking, we focused on the following metrics:
• Latency of requests
• Requests per second
• Error ratio
• Latency percentiles
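Metrics like these can be computed from raw benchmark samples. A minimal sketch using the nearest-rank method for percentiles (the helper names are hypothetical):

```go
package main

import "sort"

// percentile returns the p-th percentile (0-100) of the given latency
// samples using the nearest-rank method; the input need not be sorted.
func percentile(latencies []float64, p float64) float64 {
	if len(latencies) == 0 {
		return 0
	}
	s := append([]float64(nil), latencies...) // copy; don't mutate input
	sort.Float64s(s)
	rank := int(p/100*float64(len(s)) + 0.5)
	if rank < 1 {
		rank = 1
	}
	if rank > len(s) {
		rank = len(s)
	}
	return s[rank-1]
}

// errorRatio is the share of failed requests among all requests.
func errorRatio(failed, total int) float64 {
	if total == 0 {
		return 0
	}
	return float64(failed) / float64(total)
}
```

In practice these numbers come straight from the benchmark tool's report; the sketch only shows what the figures mean.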
The ideal benchmark tool needed to be easily integrated with our continuous
integration (CI) environment, either run via the command line or a plugin for
Jenkins.
Below are the pros and cons of the benchmark tools that met our requirements.
Wrk
Wrk is an HTTP benchmarking tool capable of generating significant load.
k6
Pros:
• It’s not a domain-specific language (DSL).
• Tests are written in JavaScript, so it’s easy for the team to use.
Cons:
• Tests are written in JavaScript, which is not an ideal testing language.
Locust
Locust is an easy-to-use, distributed, load testing tool. It's written in Python and
built on the Requests library.
Pros:
• It’s not a DSL and tests are written in Python, so it’s easy for the team to use.
• Easy to set up and use.
• Optional web UI with charts.
Cons:
• We found out during the benchmark testing phase that Locust is a bit slow and
requires a lot of infrastructure to create a decent amount of load.
k6 and Locust seemed comparably easy to set up and use, but we chose Locust
because it allows us to write tests in Python, a technology the team knows
very well.
Gatling
Gatling is a powerful open-source load testing solution.
Pros:
• There’s a Jenkins plugin available, which allows us to view reports generated
by Gatling in Jenkins.
Cons:
• It uses a Scala-based DSL for writing tests, meaning we’d have to learn how to
use it before we could start running tests.
Gatling was discarded due to its complexity and the Scala-based DSL used for
test configuration.
Once we had chosen the benchmark tools, we moved on to selecting the
technologies that we would test.
• Golang because it’s growing in popularity and we were familiar with it.
• Rust because other development teams have used it to build trackers in the
past.
We decided to run initial benchmark tests on all the technologies using the
benchmarking tools listed above, but later focused on running more tests using
wrk2, Gatling, and Locust.
Sprint 1
What we achieved and built in this sprint:
Sprint 2
What we achieved and built in this sprint:
Sprint 3
What we achieved and built in this sprint:
Sprint 4
What we achieved and built in this sprint:
Sprint 5
What we achieved and built in this sprint:
• Auto-generated documentation.
• Created a quickstart guide.
• Built docker images and pushed them into the internal registry.
How Can Our Tracker Help You?
By using our tracker component, you’ll save months of development time and
tens of thousands of dollars in costs.
To learn more about how our tracker can benefit your business, contact us via one
of the channels listed on the next page.
About Clearcode
Since 2009, we’ve partnered with tech companies to develop RTB, programmatic,
data management, and analytics platforms for all advertising channels — from
display and mobile to video, audio, and DOOH.