Data Trust, by Design: Principles, Patterns and Practices (Part 2 — Up Front Terms and Conditions)

Nathan Kinch · Greater Than Experience Design · 3.05.2018

We kicked off this series with the why, how and what of Data Trust, by Design. Before continuing, it’s worth
going back to that post and reading or re-reading. It’ll make the contents of this post more applicable.

If you can’t be bothered or you’re short on time, here’s the gist of it:

Why does data trust matter?

The world of data is changing really quickly. There are behavioural, regulatory and technological forces
driving this change. Basically, we could end up going one of two ways. We could go down the Black
Mirror, or maybe even Minority Report route, or we could go down a route that appears to be a bit more
utopian — a future that is inherently human centric.

This doesn’t mean no sharing of data. That couldn’t be further from the truth. What it means is a more
balanced, trust-based ecosystem for data sharing and more importantly, data-driven value exchanges.

This won’t happen through some chance encounter. This future needs to be designed.

How can data trust and DTbD help us achieve this?

If people have a high propensity to willingly share their data, if organisations are incentivised to process
data ethically, and if an entire ecosystem is optimised for individual, communal and societal outcomes,
data can become a ‘thing’ that positively impacts us all.

The job of DTbD principles, patterns and practices is to help make this a reality we can all contribute to.

What is data trust?

In simple terms, Data Trust is the trust a person places in an organisation’s data practices. Data trust has
been earned when people have a high propensity to willingly share their data. As we now know, this is not the case today.

DTbD is the practice of designing transparent, positive-sum, value-generating experiences that give people
the ability to make free and easy choices about how their data is and isn’t used.

And here are the 6 principles.

Principle 1

First contact: Define shared objectives

Principle 2

Before every interaction: Make the purpose clear

Principle 3

Establish a baseline: You are equals

Principle 4

Take your time: Trust has to be earned

Principle 5

Mutual success: Share in the value you co-create

Principle 6

Say goodbye: Make endings matter

In our introductory post we mentioned that we’d be progressively communicating some of the design
patterns we were working on. This post is about the first of those patterns: upfront terms and conditions.

Right now it’s clear the status quo design pattern relied upon for upfront T&Cs is broken. People really
don’t know, and are often unhappy with, what they’re signing up to. With the advent of regulations like the
GDPR and the extension of rights that brings to citizens, hidden-away terms and conditions will have to
become a thing of the past. We will collectively need to find a better model.

It’s for this very reason we decided to focus on communicating some of the work we’d been doing on this
pattern first. As a general rule, almost everyone that interacts with a product or service online has some
experience with terms and conditions. The problem is their experience looks something like this…

[Dribbble shot: “Sport sign up” by Sochnik, via dribbble.com]

Please don’t think I’m picking on this particular sign-up flow. I haven’t critiqued it. I don’t have any data that
supports its efficacy or lack thereof. In fact, at a glance it looks pretty solid. It appears to have been
designed so that the fewest actions required to decrease Time to Value (TtV) are right there waiting for me
to complete at the point in time I’m most motivated to complete them. It’s solid design output.

The real problem is that the terms and conditions — the agreement you’re entering into with said provider
— are hidden in such a way you’ll never actually interact with them. You’re effectively operating in the dark,
blindly trusting said provider will only ever act in your best interests. If only organisations were all that
trustworthy…

And this isn’t unique — it’s the current design pattern for terms and conditions. It’s followed by a mountain
of legalese that’s incomprehensible for most people.

The thing is, there’s enough evidence that our working memory is limited, that we bypass anything even
remotely peripheral to get to the outcome we want, and that we often operate on autopilot, for us to ‘know’
that people don’t read terms and conditions. If you think this point is up for debate, your event log probably
shows the three hits your T&Cs and privacy policy collectively achieved all of last year.

These types of patterns do not inspire data trust. To earn data trust we must be radically transparent. We
must deliver the value we promised we would. Then, throughout the course of the relationship our
organisation has with a person we need to show we’re willing to own the consequences of our actions. If
we screw up we need to mitigate the risk, communicate the impact and demonstrate what we’ve learned
and how it’s made us better and our customers safer.

Equifax is a brilliant example of what not to do in this type of situation. The more recent debacle that cost
Facebook about $58 billion in a week, and plenty more the week later, also highlights this.

Don’t be Equifax. Design for trust instead (medium.com)
“Consumer trust is at an all time low. Questionable data practices, outdated business models and zero-sum experiences…”

Facebook’s not the problem, we all are (medium.com)
“I permanently deleted my Facebook account last week. I hadn’t used the service in months for a variety of reasons…”

All of this thinking underpins the type of work we currently do with clients. And it’s exactly why something
like terms and conditions, which have never really been ‘designed’ before, must now be brought to the
forefront. These types of functions must become part of our proposition.

If right now you’re thinking, “this dude is bonkers, no one even cares,” then you’d be partially correct. What
we’ve come to learn is that a high proportion of people will only engage with things like terms and
conditions at the very highest level. This seems to be largely the result of learned behaviour, but it is the
state of play nonetheless. Even though this is the case it doesn’t mean we can’t consciously design
empowering experiences that cater to different risk profiles, learning types and situational contexts. We
should be motivated to inform, empower and enable the people we serve as customers to make choices
about how they engage with our products and services. Zero sum isn’t the future.

So, without further ado, here’s what we’re currently working on.

This is a very high level, static view of the experience, so let’s look a little closer at the function and
purpose of each core component.

Lead with values

If you’ve read our playbook you’ll know we’re big on leading with values. We think values inform purpose
and intent, purpose informs culture and organisational design, culture and organisational design enable
workflows and practices, and from there, workflows and practices produce outputs in the form of products,
services and business models.

Leading with data values, or some form of promise, does a decent job of communicating intent and purpose
in a way people actually ‘get’. It’s the first notion of transparency and, thanks to the work we’ve been doing,
we now believe it’s a great starting point for establishing a new data relationship. But it’s just a start. Don’t
overvalue it. What matters is what you do next, and the day after, and the day after that.

Break things down into manageable chunks

We’re not here to act as your legal advisors. Seriously, none of this constitutes anything close to legal
advice. Work with lawyers and data protection practitioners early and often. No exceptions when it comes
to data-driven initiatives.

Having said that, the things we sign up to upfront can often be broken down. It may seem like we’re
agreeing to terms and conditions, but it’s more likely there are terms of service, a privacy policy and
perhaps terms of use. If we’re to help people understand what we intend to do with their data, how they
can enact their rights and the value they should be expecting, we need to break things down into meaningful,
manageable chunks. This component serves that purpose.
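To make the chunking concrete, here’s a minimal sketch (in TypeScript) of how an upfront agreement could be modelled as discrete, named sections rather than one monolithic document. The AgreementSection shape and the example wording are illustrative assumptions, not part of any specific product or legal template.

```typescript
// Illustrative model: one agreement broken into discrete, named sections,
// each with a plain-language summary and a link to the full legal text.
interface AgreementSection {
  id: "terms-of-service" | "terms-of-use" | "privacy-policy";
  title: string;
  plainLanguageSummary: string; // the layer most people will actually read
  fullTextUrl: string;          // the complete legal document, always available
  viewed: boolean;              // has the person at least opened this section?
}

interface Agreement {
  version: string;
  sections: AgreementSection[];
}

// Hypothetical example content for a sign-up flow.
const signUpAgreement: Agreement = {
  version: "2018-05",
  sections: [
    {
      id: "terms-of-service",
      title: "Terms of Service",
      plainLanguageSummary: "What we promise to deliver, and what we ask of you.",
      fullTextUrl: "/legal/terms-of-service",
      viewed: false,
    },
    {
      id: "privacy-policy",
      title: "Privacy Policy",
      plainLanguageSummary: "What data we process, why, and the rights you can exercise.",
      fullTextUrl: "/legal/privacy-policy",
      viewed: false,
    },
    {
      id: "terms-of-use",
      title: "Terms of Use",
      plainLanguageSummary: "How the product can and can't be used day to day.",
      fullTextUrl: "/legal/terms-of-use",
      viewed: false,
    },
  ],
};
```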

Contextual guidance*

There’s a * here because contextual guidance isn’t limited to one specific part of the experience. Right
now, things like the GDPR are set to break some existing organisational practices. This is likely to result in
new patterns surfacing to people. Are we expecting them to just get it, or do we think there’s
an opportunity to offer assistance?

Our thinking aligns to the latter. We’ve come to learn that people’s first reaction to a new pattern isn’t
always positive. Anything that causes someone to flip from autopilot to thinking mode can induce friction.
Our objective with this type of contextual guidance is to provide the simplest, least intrusive form of
guidance that supports someone in taking an affirmative action as their next step. It might go without
saying, but the way contextual guidance surfaces in an experience is dependent on a variety of factors
from brand, to product, to reliance on data, to stage of the relationship. No one size fits all here. We take
insights from our work and do our absolute best to contextualise what we learn in our daily practice to the
commercial projects we work on. Just like our entire DTbD scope of work, this is a work in progress.
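As a rough illustration of “no one size fits all”, the sketch below picks a guidance intensity from a handful of contextual factors. The factor names, levels and rules are invented for the example; a real project would derive them from its own research and design system.

```typescript
// Illustrative only: choose how intrusive contextual guidance should be
// based on a few situational factors. Factor names and rules are assumptions.
type GuidanceLevel = "none" | "inline-hint" | "guided-walkthrough";

interface GuidanceContext {
  firstTimeWithPattern: boolean; // has this person seen this pattern before?
  dataSensitivity: "low" | "medium" | "high";
  relationshipStage: "new" | "established";
}

function chooseGuidance(ctx: GuidanceContext): GuidanceLevel {
  // New pattern + sensitive data: offer the most supportive (but still optional) guidance.
  if (ctx.firstTimeWithPattern && ctx.dataSensitivity === "high") {
    return "guided-walkthrough";
  }
  // Anything unfamiliar gets the least intrusive nudge that still supports an affirmative next step.
  if (ctx.firstTimeWithPattern || ctx.relationshipStage === "new") {
    return "inline-hint";
  }
  // Established relationship, familiar pattern: stay out of the way.
  return "none";
}

console.log(
  chooseGuidance({ firstTimeWithPattern: true, dataSensitivity: "high", relationshipStage: "new" })
); // -> "guided-walkthrough"
```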

Visibility of tangible progress

Our limbic brain literally throws a hit of dopamine our way when we cross a task off our to-do list.
Completing stuff feels good. In fact, giving people visibility of the tangible progress they’re making towards
a specific outcome increases the likelihood they’ll keep going. We want that dopamine hit! Neuroscientists
refer to this as “self-directed learning”.

This pattern has been embedded into most onboarding experiences that have been designed over the last
few years. It’s not new to design, but it is new to the design of upfront terms and conditions.

We’re experimenting with different ways we can surface this at the moment. As with most of the
components of the pattern/s we’re proposing, the implementation will be context sensitive.
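Here’s a minimal sketch of the logic behind the progress component: given any list of sections carrying a viewed flag, derive the tangible progress to reflect back to the person. The shape and names are assumptions for illustration.

```typescript
// Illustrative: derive visible progress from which agreement sections have been opened.
interface ProgressView {
  completed: number;
  total: number;
  percent: number; // 0–100, suitable for a progress bar or step indicator
}

function progressFor(sections: { viewed: boolean }[]): ProgressView {
  const total = sections.length;
  const completed = sections.filter((s) => s.viewed).length;
  const percent = total === 0 ? 0 : Math.round((completed / total) * 100);
  return { completed, total, percent };
}

// Usage example:
// progressFor([{ viewed: true }, { viewed: false }, { viewed: false }])
// -> { completed: 1, total: 3, percent: 33 }
```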

Support engagement at each level


If an agreement has a number of components, whether that be separate agreements, a request for many
different data attributes, the proposal of different processing purposes or something else entirely, the core
idea of the EU’s evolving regulatory environment is that people should understand what they’re getting into
and make a choice about what they may or may not be comfortable with. It’s really about defining shared
objectives, making the purpose clear, establishing a relationship as equals and earning each other’s trust
over a period of time. In fact, this model of data sharing more closely represents real life human
relationships.

We believe there is a way to balance engaging at each level with the expectation of a valuable, meaningful
and engaging experience. It’s a tough balance to strike, but we’re working on it by catering to different
learning types, different risk profiles and different situational contexts.

Layering

This is by no means a new pattern. It’s more likely an existing pattern applied to a slightly different use
case. In this context, layering is one of the ways we, and many others, are beginning to help people
understand what they’re signing up to. Basically this means we start at the highest level (e.g. the Privacy
Policy) and progressively expose detail (e.g. What are our legal grounds for processing your personal data?
As part of delivering you our service we process your “non-sensitive” personal data. The legal ground for
our processing of your “non-sensitive” personal data is your consent. This means you’re in control of what
you do and do not share. You even have the ability to revoke our access to your data at any point in time)
so that someone who feels they trust a brand and just wants to get to the thing they’re signing up for can
actually do that. Yet at the same time, depending on the context, the sensitivity of the data and the stage of
the relationship (e.g. an existing customer signing up to a new product or service they’re not yet
familiar with), people can dive deeper and query what they’re getting into.
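To show layering as a content structure rather than a visual treatment, the sketch below models a single layered clause, from the question at the top layer down to the authoritative legal text. The field names are assumptions; the example wording is drawn from the clause above.

```typescript
// Illustrative: a layered clause, from a one-line summary down to the full legal text.
interface LayeredClause {
  question: string;     // the highest layer, e.g. "What are our legal grounds for processing?"
  shortAnswer: string;  // plain-language answer most people will stop at
  detail?: string;      // optional deeper explanation, exposed on request
  legalTextUrl: string; // the authoritative wording, always reachable
}

const legalGroundsClause: LayeredClause = {
  question: "What are our legal grounds for processing your personal data?",
  shortAnswer:
    'We process your "non-sensitive" personal data on the basis of your consent.',
  detail:
    "Because the legal ground is consent, you're in control of what you do and don't share, " +
    "and you can revoke our access to your data at any point in time.",
  legalTextUrl: "/legal/privacy-policy#legal-grounds",
};
```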

We’re constantly questioning whether layering is the most effective approach. Some of the work we’ve
done using this pattern has been extremely effective. Other approaches like video and visualisations have
also proven effective. The approach you take will likely depend on your point of view, your design system
and of course the form factor or platform people are engaging with.

Just keep in mind the objective: to maximise the likelihood that people engaging in the experience actually
achieve comprehension and eventually make a real choice. You want to inform, empower and enable the
people you serve to make choices.

Delay the core action

Something we’re experimenting with is delaying the core action of accepting or declining up front T&Cs.
We’re doing this because we tested it. If you give people the ability to accept or decline (which is already
better than today’s zero sum model by the way, it’s just not anywhere near ideal) on the first screen,
autopilot kicks in and most people accept without really thinking through the potential consequences.
When you dive deeper through contextual inquiry, a variety of rationales get thrown your way, but under the
surface people eventually get to the fact that they do this with everything else they sign up to, so they may as
well do it with this. As we mentioned earlier, this is a learned behaviour. It’s too hard to read everything. It’s
incomprehensible anyway, so we may as well make it easy on ourselves and just get to the thing we want.
By surfacing the explicit action of accept or decline after people have engaged with each individual
component (in this case the top level of the Terms of Service, Terms of Use and Privacy Policy), at least at the
highest level, we’re hopefully starting to help people change the way they sign up to products and services.
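A small sketch of the gating logic behind delaying the core action: accept or decline only becomes available once every top-level component has been engaged with at least at the highest level. The function and field names are illustrative, not a prescribed implementation.

```typescript
// Illustrative: gate the explicit accept/decline action until every top-level
// component (Terms of Service, Terms of Use, Privacy Policy) has been opened.
interface SectionState {
  id: string;
  viewed: boolean;
}

function canSurfaceDecision(sections: SectionState[]): boolean {
  return sections.length > 0 && sections.every((s) => s.viewed);
}

type Decision = "accepted" | "declined";

function recordDecision(sections: SectionState[], decision: Decision): Decision {
  if (!canSurfaceDecision(sections)) {
    // Autopilot guard: the choice isn't offered until each section has been engaged with.
    throw new Error("Decision requested before all sections were viewed");
  }
  return decision;
}
```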

We’ve got some key questions surrounding how long this is necessary. As in, if the environment in which
data is used becomes inherently trustworthy, or we have personalised assistants doing stuff for us, will
engaging this way still be relevant? Right now that’s an unknown. For now though, we need to balance the
value expectations people have with the limitations of organisations and the regulatory environment they
will soon operate in.

Consequence clarification

This will actually be released as its own design pattern; however, it’s worth touching on right now. Given
people will likely have the ability to accept or decline up front terms and conditions, and consent can’t be a
precondition (although there are other ways to legitimise processing activities, so speak to your legal
representatives), declining the terms might not necessarily mean someone can’t continue. We believe
people should be able to continue, and as we’ll reference below, we already advocate progressive disclosure
to the clients we work with.

Consequence clarification, in this context, is about helping the person to understand how the proposition
we promised to deliver them might be impacted. For instance, if the proposition is inherently data driven
and relies on analysing some data to produce an insight, then not analysing the data might render the
application fairly useless. Even though this is the case we shouldn’t halt someone’s progress. We should
let them progress, navigate on their own terms and give them the opportunity to progressively share with
us as we further develop our relationship.
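As a sketch of how consequence clarification might be wired up: map a declined processing purpose to the parts of the proposition it affects and explain the trade-off, rather than blocking progress. The purpose and feature names are invented for the example.

```typescript
// Illustrative: declining a processing purpose degrades, but doesn't block, the experience.
interface ProcessingPurpose {
  id: string;
  description: string;
  featuresAffected: string[]; // what becomes unavailable or less useful if declined
}

const activityInsights: ProcessingPurpose = {
  id: "activity-analysis",
  description: "Analyse your activity data to produce weekly insights.",
  featuresAffected: ["Weekly insight report", "Personalised recommendations"],
};

function consequenceMessage(purpose: ProcessingPurpose, accepted: boolean): string {
  if (accepted) {
    return "You're all set. You can change this choice at any time.";
  }
  // The person can still continue; we just make the trade-off explicit.
  return (
    `You can keep going without this. Without "${purpose.description}" ` +
    `the following won't work as intended: ${purpose.featuresAffected.join(", ")}.`
  );
}

console.log(consequenceMessage(activityInsights, false));
```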

Give ’em a receipt

This is not something unique to Greater Than X. There is even a consent receipt standard being
developed, which has already been implemented by Digi.me and Consentua. The idea is that the person
agreeing to something gets a simple summary they always have access to. In fact, from this summary they
should be able to dynamically engage with what they’ve signed up to. In the context of consent this might
mean revoking access to certain data feeds. In the context of up front T&Cs this might mean something
different entirely.
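The sketch below shows a simplified, illustrative receipt record in the spirit of the idea above. It is not the schema of the consent receipt standard referenced here; the field names are assumptions made for the example.

```typescript
// Illustrative only: a simplified receipt the person keeps after agreeing.
// This is not the actual consent receipt specification's schema.
interface SimpleReceipt {
  receiptId: string;
  issuedAt: string;           // ISO 8601 timestamp
  organisation: string;
  agreementVersion: string;
  purposesAgreedTo: string[]; // what processing the person accepted
  manageUrl: string;          // where the person can review or revoke later
}

const exampleReceipt: SimpleReceipt = {
  receiptId: "rcpt-0001",
  issuedAt: new Date().toISOString(),
  organisation: "Example Service",
  agreementVersion: "2018-05",
  purposesAgreedTo: ["activity-analysis"],
  manageUrl: "/account/data-permissions",
};
```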

In the spirit of transparency, we’re yet to decide whether we will release a standalone design pattern for
data sharing or consent receipts. Lots of work is being done on this so we’ll progressively assess the value
of us further contributing to the pattern and make a decision over the coming months.

There’s lots more I could detail in terms of the work that’s gone into what you see here. That would be a
seriously long post. We’d actually like you to spend time with the people you care about today. Onwards!

Some considerations we’re actively working through right now:

How do we define valuable versus ‘value-less’ friction?


What’s the maximum number of acceptable interactions between starting and accepting or declining
up front T&Cs? We currently have what we think is a pretty strong hypothesis on this, but need more
data to support or refute it
Text heaviness currently burdens the T&Cs landscape. How can we work towards simpler, more
visual mechanisms that support the same outcomes (understanding, choice, ability to dynamically
engage and exercise rights etc.)?
If legalese remains the primary communication method of up front terms and conditions, how do we
synthesise a clause or an entire agreement in such a way that it’s comprehensible (something that is becoming
a little easier for us anyway)?
How can organisations actually operationalise these new experiences? It’s not just design this
impacts, but risk and compliance, product, marketing, privacy and security and of course
engineering. We’ve currently got a fairly strong POV, but every organisation has its own subtle
complexity. Making the adoption and implementation pathway more systematic is key

Who actually owns this initiative? It differs by organisation at present. What if design owned it? With
the support of other departments and key stakeholders, what impact might this have? The person
or team leading this initiative needs to grapple with desirability, viability and feasibility considerations.
We’re actually starting to see new roles emerge to lead core aspects of customer data strategy and
implementation.
How long will it take for new patterns to become broadly accepted? What number of interactions with
radically transparent data practices does one have to have before that level of transparency and
control become a basic expectation?
Many more questions… Have some? Feel free to send them our way :)

What we’ve proposed above isn’t a one-size-fits-all design pattern. It’s a work in progress and, as we know,
there are many ways to peel a carrot! What we hope is that this serves as a frame of reference and point of
inspiration. We hope it challenges you to think differently about informing, empowering and designing
experiences that enable the people you serve as customers to make free and easy choices about how
their data is and isn’t used. In fact, we very much hope this type of thinking catapults you along the
pathway towards data trust.

If you’re wondering what to do next, here are three ideas:

1. Conduct a collaborative, time-boxed terms and conditions design session with a group of colleagues.
Produce some output and put it to the test
2. Critique our work. Try to figure out how you might immediately improve what we’ve proposed. Then, get
started on step 1
3. Get in touch. We’d love to hear about your experiences grappling with these types of challenges. It’s our view the
more people dedicating chunks of time to this type of work, the better the outcomes for the entire
market. So let’s collaborate! If you’re really keen we can send you a sketch file with some basic flows
and screen designs to kick start your effort :)

Before signing off I’d like to say a special thanks to James Harvey for working closely with me on this. Stay
tuned. There’s plenty more to come.
