HCAI Interface Thesis
Veronica Nedar
Simon Proper
Nomenclature and abbreviations
ACD Activity-Centered Design
Agent Any participant within an interaction, like a user or a robot.
App Application, a downloadable program usually for phone or tablet.
Autonomous agent A product or system with AI, such as the robot HUGO
HCAI Human-Centered Artificial Intelligence
HCI Human-Computer Interaction
HUGO The autonomous delivery robot developed by the company HUGO Delivery.
Interaction sequence A set of interactions following each other in a predetermined pattern.
RtD Research through Design
UI User interface
User test In this thesis a combination of user and usability testing was done. The
word ‘user test’ will refer to all tests done with users even though
usability testing is also part of it.
UX User experience
Web app Web application, an application in a browser.
Abstract
The use of autonomous agents is an ever-growing possibility in our day-to-day life and, in some cases, already a reality. One future use might be autonomous robots performing last mile deliveries, a service the company HUGO Delivery is currently developing. The goal of developing their autonomous delivery robot HUGO is to reduce emissions from deliveries in the last mile by replacing delivery trucks with emission-free autonomous robots. However, this new way of receiving deliveries introduces new design challenges, since most people have little to no prior experience interacting with autonomous agents. The user interface is therefore of great importance in helping the user understand and interact comfortably with the autonomous agent, and thus also a key aspect in reaching user adoption. This thesis examines how an interface for an autonomous food delivery service, such as HUGO delivery, could be designed by applying a Human-Centered Artificial Intelligence (HCAI) and Activity-Centered Design (ACD) focus in the design process, resulting in a design proposal for a web app. The conclusion of the thesis includes the identification of the six essential interactions present in an autonomous food delivery service, as well as how HCAI, and which of its guidelines, can be applied when designing an interface for the interaction with an autonomous delivery robot.
Acknowledgement
We have worked on this thesis project during a full semester of 2022 to explore the concept of Human-Centered AI applied to the interaction design of an autonomous robot. We hope that our work can contribute to future research in the area of Human-Centered AI and interaction design for autonomous agents. We are very grateful to Berge and HUGO Delivery for the opportunity to work on this project. We could not have done this without help, and we would therefore like to express our gratitude to everyone who has been involved in the thesis work.
The HUGO Delivery team has given us an exciting project and a great deal of help in researching their product. We are grateful that they chose to give us their time explaining, user testing, and sitting through interviews for our work. We would like to thank our examiner Stefan Holmlid for his feedback and guidance. We also thank our opponents for accompanying us during the project and for their feedback and support, especially on our report.
Lastly, we would like to extend our gratitude to our supervisor Chu Wanjun who has given us great
support during the thesis work and guided us through all our questions. Thanks to him we were
introduced to the theory of Human-Centered AI and Activity-Centered Design. He helped us shift the
focus of our thesis towards these areas, elevating our work and making it more interesting and
innovative.
Thank you all for the insights, inspiration and help during our thesis.
Table of Contents
1. INTRODUCTION
1.1 PURPOSE AND GOAL
1.2 RESEARCH QUESTIONS
1.3 SCOPE
2. THEORETICAL FRAMEWORK
2.1 INTERACTION DESIGN
2.2 LAST MILE DELIVERY (LMD)
2.3 LEAN STARTUP
2.4 RESEARCH THROUGH DESIGN (RTD)
2.5 ACTIVITY CENTERED DESIGN (ACD)
2.6 HUMAN-CENTRED ARTIFICIAL INTELLIGENCE (HCAI)
2.7 THEORY OF EMPLOYED METHOD
2.7.1 Future analysis
2.7.2 Prototyping
2.7.3 Interviews
2.7.4 Service blueprint
2.7.5 Flowchart
2.7.6 User and usability testing
2.7.7 Task analysis
2.7.8 Wireframing
2.7.9 Bodystorming
3. PROJECT PROCESS
4. LITERATURE STUDY
5. PRE-STUDY
5.1 METHOD
5.1.1 Observations and interviews
5.1.2 Future analysis
5.1.3 Flowchart
5.1.4 Service blueprint
5.2 FINDINGS
5.2.1 Business cases
5.2.2 Future analysis
5.2.3 Service blueprint
6. DEVELOPING INTERACTIONS FOR THE ERICSSON CASE
6.1 PROTOTYPING
6.2 USER TESTING
6.3 ITERATION 1
6.3.1 Method
6.3.2 Findings
6.4 ITERATION 2
6.4.1 Method
6.4.2 Findings
6.5 ITERATION 3
6.5.1 Method
6.5.2 Findings
6.6 DESIGN PROPOSAL
7. DISCUSSION
7.1 METHOD DISCUSSION
7.2 RESEARCH QUESTION 1
7.3 RESEARCH QUESTION 2
7.4 FUTURE RESEARCH & SUGGESTIONS TO THE COMPANY
8. CONCLUSION
REFERENCES
APPENDIX A
APPENDIX B
APPENDIX C
APPENDIX D
APPENDIX E
APPENDIX F
Table of figures
Figure 1 The delivery robot HUGO. Copyright 2022 HUGO Delivery AB
Figure 2 The Human-Centered AI framework presented. From Human-Centered AI, by Ben Shneiderman (2022, Chapter 8: Two-Dimensional HCAI Framework). Reprinted with permission.
Figure 3 An example of a service blueprint showing how the categories and actions can be mapped. From Service Blueprints: Definition by Sarah Gibbons (2017). Reprinted with permission.
Figure 4 Example of a flow chart and its components. From Wireflows: A UX Deliverable for Workflows and Apps by Page Laubheimer (2016). Reprinted with permission.
Figure 5 An example of a hierarchical task analysis showing how the goal is broken down into tasks and subtasks. From Task Analysis: Support Users in Achieving Their Goals by Maria Rosala (2020). Reprinted with permission.
Figure 6 Depicting the five phases of the thesis project process
Figure 7 A mind map of two web app interfaces for delivery services
Figure 8 A mind map of the inspirational sources with notes of findings and remarks
Figure 9 The building blocks for the flowcharts
Figure 10 Flowchart visualisation of the ASDA case
Figure 11 Flowchart visualisation of the Borealis case
Figure 12 Flowchart visualisation of the PostNord case
Figure 13 Flowchart visualisation of the Domino's Pizza case
Figure 14 Flowchart visualisation of the Ericsson case
Figure 15 Moodboard with results from the future analysis
Figure 16 Service blueprint of how the HUGO delivery service will operate
Figure 17 Prototype 1 of user test 1 depicting a flow where a curtain design is used.
Figure 18 Prototype 2 of user test 1 depicting a flow with a card design and steps of the interaction at the top.
Figure 19 Prototype 3 of user test 1 depicting a flow with fewer interactions.
Figure 20 Prototype 4 in user test 1 depicting a similar flow as prototype 1 but with another design and different information placement.
Figure 21 Prototype 5 in user test 1 depicting a similar flow as prototype 3 but with some UI inspiration from prototype 2.
Figure 22 Sketch wireframes from iteration 1 showing a concept involving lock and unlock buttons.
Figure 23 Sketch wireframes from iteration 1 showing a concept involving fold-out actions
Figure 24 Example of wireframes produced in iteration 1
Figure 25 The three different prototypes produced in iteration 1
Figure 26 Combining ACD with the HCAI framework
Figure 27 A mock-up HUGO robot
Figure 28 A user opening the mock-up HUGO robot.
Figure 29 Web app frame showing the user's address and an alternative to change it
Figure 30 Web app frame showing where HUGO is on the map and the button making HUGO play a sound to find it.
Figure 31 Web app frame showing the loss of connection. The loss of connection message is at the bottom of the frame.
Figure 32 Prototype produced in iteration 2
Figure 33 Visualisation of the task analysis, showing the tasks involved for the Ericsson case
Figure 34 Activity-Centered Design result (zoomable)
Figure 35 First section of the ACD chart
Figure 36 Second section of the ACD chart
Figure 37 Third section of the ACD chart
Figure 38 Prototype produced in iteration 2
Figure 39 The HUGO delivery robot used in user test 3
Figure 40 The tested prototype in user test 3
Figure 41 Examples of prototypes in iteration 3
Figure 42 Frames 1, 2, and 3 of the web app design proposal
Figure 43 Frames 4, 5, and 6 of the web app design proposal
Figure 44 Frames 7 and 8 in the web app design proposal
Table of tables
Table 1 SUS score results
Table 2 Design alternatives for the localise operation in the ACD mapping
Table 3 Design alternatives for the unlocking operation in the ACD mapping
Table 4 Design alternatives for the opening of the lid operation in the ACD mapping
Table 5 Design alternatives for the closing and locking operation in the third sub-section of the ACD mapping
1. Introduction
In an ever-expanding and globalized world, e-commerce is becoming more common, and sales have grown by 20% each year since 2009 (International Post Corporation, 2021). The last part of the delivery process, bringing the package to the end customer, is called the 'last mile' (Dolan, 2022). The last mile is also the part of the delivery process that is both the most inefficient and the most costly. This is due to multiple reasons, one of them being that individual deliveries can be small. Additionally, the distances between delivery points can span several miles in rural areas, while in cities deliveries are delayed by traffic.
To address this issue, the startup company HUGO Delivery, a subsidiary of Berge Group, is
developing a last mile delivery robot, see Figure 1, to help build a sustainable future (HUGO
Delivery AB, n.d.). The robot is autonomous and will transport packages from A to B. At the
time of the report, the robot is being developed for five different business cases, which will serve as the core of this thesis. The business cases all involve the transportation of goods, but each is unique, operating in a different environment with different prerequisites.
HUGO Delivery wishes for the Ericsson food case to be developed further and has also expressed that a phone web app design for the delivery would be a preferred solution to explore; they are, however, open to other solutions as well. This was taken into consideration, and it slightly narrowed the scope, research questions, and aim.
The science and application of autonomous vehicles is rapidly evolving, and even though it has not yet reached regular consumers at large, it is often viewed as a future reality. This, however, raises questions about how users should interact with the new AI technology, and new fields of research, such as Human-Centered AI (Shneiderman, 2020b), have emerged as a consequence. An autonomous delivery robot solving the last mile delivery issue could have a positive effect on the climate crisis, but for this to work and make a difference, users also need to be ready to adopt it. This is where HCAI becomes an interesting approach, since it specifically targets autonomous agents and the design of safe, reliable, and trustworthy AI interaction. If the service can be designed to appeal to users and leave a positive impression, it is possible that a larger number of people would use it, resulting in a greater environmental benefit.
Designing for autonomous agents requires insight into how users act towards a product like the HUGO robot, which can move on its own and perform tasks without specific instructions from the user. There are few products like this in people's day-to-day lives, which makes it a very novel interaction pattern that is interesting to explore further.
1.3 Scope
The report will focus on investigating interactions between the autonomous agent (HUGO)
and the end user (the customer). Thus, interactions between the robot and other humans or
other systems will not be considered, for example the service providers, pedestrians, and
backstage support systems. The parts of the service in which the user does not interact with the autonomous agent but still interacts with the service will be taken into consideration, although they will not be the main focus of the thesis. In designing the user interface, the focus will be on the functionality of the interface rather than the visual design. Although the aesthetics of the phone interface will be explored when designing, they will not be the main focus of the product.
The thesis will be restricted to investigating the five given cases presented in the introduction. As per the company's request, the design proposal will be developed for the Ericsson food delivery service case. Additionally, since the company has identified web apps as its platform for communication between the user and the service, the thesis will primarily investigate implementing a web app design solution for the service. Moreover, when discussing HUGO and the autonomous agent, this refers to the current state and design implementation of the robot being developed at the time of the thesis.
2. Theoretical framework
This chapter explains the theoretical framework that the thesis builds upon. The thesis case involves an autonomous agent and a digital system that build upon computer technology to guide the human's interaction with the two. This requires researching several areas of theory, approaches, and methods that can be applied to one or several of the components in this interaction.
2.1 Interaction design
'Interaction design is design of the acts that define intended use of things. "Intended use" does not refer to function in a more general sense, i.e. what a given thing does as we use it; a corkscrew opening a bottle of wine for example. It is about acts that define use of this particular corkscrew, i.e. it refers to a particular act interpretation of a given thing as a corkscrew.' - (Hallnäs & Redström, 2006, p. 23)
According to this, to design interactions is to design how a user should interact with a product or interface, which means that the designer creates the conceptual context of the intended use, but without the expectation that a user will interact with the product step by step in a strict or given way. There is always a chance that a user will not re-enact the intended use, which makes it unnecessary to stage exactly how an interaction should unfold. Instead, the designer focuses on how a product can increase the chance of intended use through logic and by nudging the user in the right direction.
Interaction design is a broad term and covers many different disciplines. The most relevant and interesting area of interaction design for this thesis is Human-Centered Artificial Intelligence (HCAI), a discipline deriving from human-computer interaction (HCI) (Shneiderman, 2020a). The design of human-computer interaction is a big part of interaction design when designing digital interfaces, and Arvola (2010) describes the word interaction in human-computer interaction as follows:
‘The word interaction in human-computer interaction design, can be defined as a mutually and
reciprocally performed action between several parties in close contact, where at least one party is
human and at least one is computer-based’ - (Arvola, 2010, p. 1)
The interaction is not necessarily only between a computer-based party, e.g. a phone, and
a human but can also involve other parties, such as an autonomous agent. The definition of
an interaction is therefore not bound to a computer and a user but can involve other parties
as well.
There are broader interaction design principles that are used as guidelines to design
interaction (Cooper et al., 2014). These are general principles that can apply to a variety of
interactions but can also be further developed to specific interactions, like that of an
autonomous robot. There are also UX/UI design guidelines based in cognition and human
psychology, like how humans interpret visual signals and patterns. As an example, one
prevalent such guideline is that a user interface should mimic how people expect the system
to look and work, taking inspiration from industry standards, cognition, and societal codes.
This helps users navigate the system with more ease since the system is recognisable to
them and they quickly pick up on what to do since they have seen it before or are
neurologically coded to understand it (Johnson, 2014).
Ries further stresses the importance of how experiments are conducted and, above all, why. He states that simply planning to ship something, be it a product or a service, in order to see what happens as an experiment will always guarantee success at doing just that: seeing what happens. However, doing it this way will not necessarily provide any validated learning, which is one of the core parts of the Lean Startup feedback loop. As Ries states, 'if you cannot fail, you cannot learn' (2019).
How hypotheses are formulated is therefore important to allow for learning. According to Ries, there are two types of hypotheses that are the most important for entrepreneurs to test: the value hypothesis and the growth hypothesis. The value hypothesis relates to whether the product or service provides value to the customer. The growth hypothesis, on the other hand, focuses on testing how the product or service spreads to new customers and, in turn, how new users discover it.
Zimmerman et al. (2007), however, emphasize the difference between the artifacts produced in design practice and in design research. In design research the intent lies in producing knowledge, whereas design practice aims at creating commercially viable products. Therefore, the focus in design research is on making the right things rather than on what would be commercially successful. The contribution should therefore, according to Zimmerman et al., demonstrate significant invention and not merely update an existing product.
Zimmerman and Forlizzi (2014) propose a five-step process for RtD projects. The five steps are:
1. Select
2. Design
3. Evaluate
4. Reflect & disseminate
5. Repeat
Select
Zimmerman and Forlizzi state that the first step in the process is selecting what to investigate, be it a problem or a design opportunity. Besides what to investigate, how to investigate it is also part of the first step, which means selecting which of the three RtD practices (lab, field, or showroom) to follow. The final part of the selection step is finding and selecting exemplars of RtD projects that can serve as guidelines for the project in question.
Design
When the focus and approach of the project are selected, the next step is to start the design process by setting the initial framing of the project. This includes conducting literature studies in the given field to understand both the state of the art and the uncertainties and problems found in other research. According to Zimmerman and Forlizzi, setting the frame can also involve conducting fieldwork. When the initial frame is set, the exploration and creation of products can start. They propose that the process should be iterative, where the products as well as the framing evolve and are tweaked within the process. Furthermore, Zimmerman and Forlizzi stress the importance of documenting the process, which steps are taken, and the rationale behind those decisions.
Evaluate
Artifacts generated in the design step of the process are the input for this step. The evaluation is performed in relation to the decisions made in the first step, the research questions, and the chosen RtD practice.
Repeat
The final step of Zimmerman and Forlizzi's process is to investigate the problem again, preferably multiple times, to gain the best results.
Norman (2013; 2005) bases ACD on his own adaptation of Activity Theory, which he separates into three abstraction layers: activities, tasks, and operations. Activities, being the highest level in the abstraction layers, operate within the largest scope and work towards a high-level goal; an example by Norman describing the activity layer is 'go shopping'. Activities themselves consist of multiple lower-level components, tasks, that together make up an activity. These tasks are separate, with their own low-level goals, but together the collection of tasks works towards the same high-level goal, the activity. In the case of going shopping, examples of tasks are 'drive to the market' or 'find a shopping basket'. The third abstraction layer, operations, follows the same principle as tasks, where a task is executed by multiple operations.
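To make the three abstraction layers concrete, the sketch below nests Norman's 'go shopping' example into an activity, its tasks, and their operations; the listed operations are hypothetical illustrations added here, not taken from Norman.

```python
# A minimal sketch of the three abstraction layers: an activity is made up
# of tasks, and each task is executed through operations.
# The 'go shopping' activity and the two tasks come from Norman's example;
# the operations listed below are hypothetical illustrations.
activity = {
    "activity": "Go shopping",
    "tasks": [
        {
            "task": "Drive to the market",
            "operations": ["start the car", "follow the route", "park"],
        },
        {
            "task": "Find a shopping basket",
            "operations": ["enter the store", "pick up a basket"],
        },
    ],
}

# Walk the hierarchy from the high-level goal down to the operations.
print(activity["activity"])
for task in activity["tasks"]:
    print("  " + task["task"])
    for operation in task["operations"]:
        print("    " + operation)
```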
Shneiderman (2020a, 2020b, 2022) proposes a two-dimensional framework for HCAI work with the goal of aiding in creating reliable, safe, and trustworthy applications. By exploring the degree of automation in relation to the level of human control, the framework aims at challenging the current notion that increased automation implies that the amount of human control needs to decrease. With the framework, Shneiderman states, automation can in fact be increased while human control is not only retained but also increased. The framework is illustrated as a two-axis graph divided into four different fields, as can be seen in Figure 2. Each axis ranges from low to high, where the vertical axis is the level of human control and the horizontal axis is the level of computer automation.
Shneiderman (2022, Chapter 8: Two-Dimensional HCAI Framework) explains that the most reliable, safe & trustworthy systems are located on the right side of the framework. At the same time, he argues that the desired goal is often, but not always, for a design to be placed in the upper right quadrant, where the level of human control as well as the level of computer automation are both high at the same time. Systems can still be placed in the lower right quadrant and be considered reliable, safe & trustworthy, but as Shneiderman states, 'The lower right quadrant is home to relatively mature, well-understood systems for predictable tasks', whereas the upper right quadrant is for more complex and less understood tasks where the context of use may vary.
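As a minimal sketch of the framework's structure, the two axes and four quadrants could be expressed as below, assuming a simple low/high rating on each axis; the quadrant descriptions paraphrase the text above, while the rating scale and the example call are illustrative assumptions.

```python
from enum import Enum


class Level(Enum):
    LOW = 0
    HIGH = 1


def hcai_quadrant(human_control: Level, computer_automation: Level) -> str:
    """Return the quadrant of the two-dimensional HCAI framework.

    Vertical axis: level of human control; horizontal axis: level of
    computer automation. The upper right quadrant (both high) is the
    often-desired goal for reliable, safe, and trustworthy systems.
    """
    if human_control is Level.HIGH and computer_automation is Level.HIGH:
        return "upper right: high human control, high automation"
    if human_control is Level.LOW and computer_automation is Level.HIGH:
        return "lower right: mature, well-understood, predictable tasks"
    if human_control is Level.HIGH and computer_automation is Level.LOW:
        return "upper left: high human control, low automation"
    return "lower left: low human control, low automation"


# Hypothetical placement of a delivery-robot web app aiming for the upper right.
print(hcai_quadrant(Level.HIGH, Level.HIGH))
```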
However, even though HCAI is an extension of HCI, Xu et al. (2021) state that traditional HCI methodology is not sufficient for HCI practitioners to develop HCAI systems. This is because HCI has previously only involved developing interactions with non-AI systems, where the behaviour of automation is well defined and the result can be anticipated. According to Xu et al., AI has introduced autonomous machine behaviour that is less predictable. The characteristics and implications this brings need to be fully understood by designers in order to create HCAI systems in which these behaviours are managed. Therefore, Xu et al. emphasise that there needs to be a transition in the HCI community towards developing for AI systems to enable adoption of the HCAI approach. To further evolve the HCAI framework, they propose five HCAI design goals to help guide HCI practitioners in developing HCAI systems.
The goal that Xu et al. refer to as Human-controlled AI means that humans, not necessarily
the users, are kept as the ultimate decision makers. This goal also aligns with Shneiderman’s
statements on Human-controlled AI (2020a, 2020b, 2022). Giving the user access to controls
for activation, operation and override can aid in the goal of achieving safe, reliable, and
trustworthy systems. This in turn also helps in fulfilling the goal of achieving decision-
making that is human-driven.
Communicating the intent of the AI, for example its current goal or acknowledgement of a command, should be done using subtle approaches as opposed to more explicit verbal communication between the user and the AI. Fixed signals for interactions can be defined to communicate the internal state of the robot AI.
In addition to the five design goals presented by Xu et al. (2021), Shneiderman (2022) states
that the eight golden rules he created for HCI (Shneiderman, n.d.) are still valid for designing
HCAI systems.
Building upon the Eight Golden Rules, Shneiderman (2022) presents an HCAI pattern language to be used when designing HCAI systems, which according to Shneiderman addresses common design problems through shorter expressions of important ideas.
The second step is to find key information regarding the topic. The information in this step should preferably come from reliable and scientific sources, since it is used to compile statistics and find expert opinions on the matter. This step can be used to back up the trends found in the previous step with actual data and to better understand the product.
In the third step of the method, success factors from other companies, organizations, or services are analysed to understand what they are doing right and how they became so popular. Success factors are, according to the method, defined as attributes and factors that can be seen as integral to popular companies' success (Wikberg Nilsson et al., 2015). By compiling the attributes of different successful products, a pattern of the factors they have in common can be found and used as input and ideas for the new product.
In the end, a collection of ideas, industry standards, and inspirational examples is gathered as a basis for brainstorming new products or improving existing ones.
2.7.2 Prototyping
A prototype is a model built to test a concept and can take many shapes, for example a
sketch on paper, a cardboard model, a staged interaction, or a digital interactive model
(Stickdorn et al., 2018; Wikberg Nilsson et al., 2015). The type of prototype used varies
depending on what is of interest to research and what the end product should be. According to Martin and Hanington (2012, p. 138), prototyping is a critical part of the design process and essential for testing concepts with designers, clients, and end customers.
The prototype can be used to find out more about the product, allowing a designer to develop it further. In contrast to a finished or ready-to-launch product, a prototype is usually less detailed, and its fidelity can range from paper sketches to almost finished physical or digital models (Wikberg Nilsson et al., 2015). This means that a prototype can be simpler, built only to test specific functions, and production technicalities and costs can be ignored. In the end it should give an idea of how the finished product could look and work, and how it should not work, informing the next design decisions in the process.
When prototyping is applied to a product design process, it is usually done in several stages of the process in order to confirm and test things like different aesthetic choices, user friendliness, and the need for different functions, which allows designers to build upon their discoveries. The purposes of prototyping described in Design: process och metod (Wikberg Nilsson et al., 2015) are:
Some specific prototype types are very common within UX and UI design, such as the sketch prototype and the usability prototype. A sketch prototype is usually a paper sketch or a simpler paper or cardboard model used early in the design process to explore different solutions. It is a way to make a visual manifestation of the designer's thoughts in order to structure and explore different ideas. The usability prototype is all about testing solutions with the user of the product. The prototype is built so that it can be interacted with, and user tests, evaluations, and measurements are applied to gather data and strategically find important information about the product's usability (Wikberg Nilsson et al., 2015).
A product can also give the user a certain experience while using it and this is explored
through experience prototypes. The purpose of this kind of prototype is to test what kind of
emotional response a product/service might give the user. The experience prototype is
based on the concept that people understand the product better by experiencing it instead
of just reading or hearing about it (Buchenau & Suri, 2000). This prototype requires the
context of how the product should be used to be clear to the user, for example through roleplay or simulations, since this prototyping method tries to pinpoint how a user reacts to the product or what kind of opinions they form (Wikberg Nilsson et al., 2015).
2.7.3 Interviews
By interviewing, a lot of information can be gathered, and it is a fundamental method used
to be able to understand context, opinions, behaviour, facts and more about the area being
researched (Delft University of Technology, 2020).
Interviews usually provide less measurable data but give a lot of qualitative data (Rev, 2022).
Interviews can be used in several stages of the design process to discover different things.
Early on, they can give context to a problem or situation; at a later stage, they can be used to evaluate a product or gain detailed customer insight. Interviews can be time consuming compared to methods like focus groups, but in return they provide deeper insight due to the ability to probe further into interesting areas.
direction. An unstructured interview might not even have questions but instead be an open
discussion of a certain subject.
When conducting interviews, they are usually recorded, noted, or otherwise saved for later analysis. In this thesis, Thematic Content Analysis is used to find patterns and understand the opinions of the interview subjects, mostly when analysing notes from user tests. Thematic content analysis, shortened to TCA, is a type of descriptive transcript or note analysis that finds common themes in the notes by scanning all the text, finding patterns, and sorting the text into different categories, often by colour coding the text (Braun & Clarke, 2006). The researcher's stance should be objective when grouping and distilling the themes, and the analysis of the themes is performed once all the data has been looked over and categorized.
2.7.4 Service blueprint
A service blueprint usually corresponds to a customer journey map and follows all the different processes needed to let the user achieve their goal. By dividing the different processes into categories, the customer's journey can be followed throughout all the divisions of the service (Gibbons, 2017). The most prominent division is between the frontstage and backstage categories, which differentiate which processes and actions the user can and cannot see. For example, the user can see and interact with a cashier at the store, but they will not be able to see the process their purchase sets in motion to update the inventory when a product is bought.
With the service blueprint all the processes necessary for the flow of the service can be
mapped to gain a better understanding of what is needed to make a certain customer
journey work in the end (Gibbons, 2017). An example of a service blueprint can be seen in
Figure 3.
Figure 3 An example of a service blueprint showing how the categories and actions can be
mapped. From Service Blueprints: Definition by Sarah Gibbons (2017). Reprinted with
permission.
2.7.5 Flowchart
Flowcharts are graphic charts used to analyse, design, or document a sequence of
operations in order to visualise the whole process (Chapin, 2003). This can clarify which
steps are present in the sequence or if there are any loose ends. Flowcharts are often used to track how digital interfaces work or should work but can also be used to chart a flow of actions in a service or the actions in a process for a digital system. An example of a flowchart can be seen
in Figure 4.
Figure 4 Example of a flow chart and its components. From Wireflows: A UX Deliverable for
Workflows and Apps by Page Laubheimer (2016). Reprinted with permission.
The chart consists of a set of textboxes, each defined as a certain type of action or interaction needed to take the next step in the process, such as an option, indicating that the user can make a choice at that step, or a process, indicating that the user's input has influenced the system (Chapin, 2003). The blocks are then connected with arrows showing how the blocks depend on each other and in what order they should come. Standards exist for the symbols, but there is also room to create specific building blocks to fit the process at hand.
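As a minimal sketch of how such building blocks and their connections could be represented, the example below uses hypothetical node types loosely based on the block kinds named above; the tiny delivery-robot flow is illustrative only.

```python
from dataclasses import dataclass

# Hypothetical node kinds loosely based on the block types named above;
# real flowchart standards define more symbols than are shown here.
NODE_KINDS = {"start", "process", "option", "end"}


@dataclass
class Node:
    name: str
    kind: str  # one of NODE_KINDS


@dataclass
class Edge:
    source: str
    target: str
    label: str = ""  # e.g. the choice taken at an option node


# A tiny illustrative flow: the user chooses whether to unlock the robot.
nodes = [
    Node("Delivery arrives", "start"),
    Node("Unlock robot?", "option"),
    Node("Robot unlocks lid", "process"),
    Node("Delivery completed", "end"),
]
assert all(node.kind in NODE_KINDS for node in nodes)

edges = [
    Edge("Delivery arrives", "Unlock robot?"),
    Edge("Unlock robot?", "Robot unlocks lid", label="yes"),
    Edge("Robot unlocks lid", "Delivery completed"),
]

# Print the flow as simple arrows between blocks.
for edge in edges:
    arrow = f" --{edge.label}--> " if edge.label else " --> "
    print(edge.source + arrow + edge.target)
```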
2.7.6 User and usability testing
Usability testing is very similar to user testing, and the two are often used as synonyms, but the aim is different (Moran, 2019). User tests aim to introduce new products to users and find out how users receive them, often for commercial purposes, while usability testing aims to find out how the product functions in the hands of a user and whether they understand how to interact with it. A test with users can, however, involve both aims depending on the plan for the test, which means that a test's aim is not necessarily only one of them.
There are many ways to do usability tests; which method is used can depend on the product in question and what is of interest to find out. There are three components
commonly present in user/usability testing and these are the facilitator, the user and the
tasks the user performs to test the product (Moran, 2019). The facilitator guides the test and
the user to make sure the right things are tested, and that the user knows what to do. A test
can involve more than one facilitator and it is encouraged to have other roles present at a
test as well, so that different roles can focus on their responsibility and to have more than
one observer (Rettig, 1994). These roles could involve taking notes, managing props, or
taking photos. It depends on what type of test is conducted and what needs to be done. The
user is the one participating in testing the product. User testers can be selected according
to a certain profile, like demographic or interests, but can also be picked at random. There
is also the expert evaluation method where a person with relevant knowledge, for example
a usability expert, becomes the user tester (Benyon, 2019, pp. 246–247). The expert user has
the prior experience to pick up on common problems and flaws in the product while testing
it and they can be particularly useful in the early design process. This is because they often
see if the system has any major flaws that could otherwise interfere when regular users test
it, which makes the expert user tester fitting for early design exploration. The task the user
performs is simply a stated question or an objective that aims to focus the user's attention on a certain part of the interaction. This helps when structuring the test and in getting
feedback on interesting areas of the product.
In interaction design, methods like interviewing, observing, and roleplaying in a scenario are commonly incorporated into the testing. Usually, the product or service being tested is not yet fully finished or does not have every component in place to be properly used; this is when facilitators use the Wizard of Oz approach. This means that the facilitators find ways to show the user what is supposed to happen by creating the illusion of it happening by other means. For example, if a product should be able to play a sound but has no such capability, a speaker could be placed nearby and controlled by a facilitator, giving the illusion that the product can play the sound. Having the user play out the whole scenario of using a service or product, and having them more immersed in it, can give great insight. Having several users do the test can give a greater indication of patterns or behaviours. User testing does not necessarily require a lot of participants; a recommended number of user testers according to Jakob Nielsen is about five (Nielsen, 2012). This depends on the tests being conducted, but having more participants usually does not give any new insights since the patterns repeat themselves. Exceptions to this rule are tests like quantitative studies and eye tracking.
There are also more measurable methods in user testing, like regular scoring questions or the SUS method. The System Usability Scale (SUS) is a common tool, regarded as an industry standard, for evaluating how user-friendly a system is perceived to be through a questionnaire the user answers (Sauro, 2011). SUS is used by asking the user tester to rate ten standard statements on a scale of 1-5. The scores are then calculated and converted to a score of 0-100 instead of the 0-40 raw score obtained from the 1-5 scale. The average SUS score is 68, which is seen as a benchmark: scores above it indicate acceptable usability, while scores below it indicate usability that is lower than acceptable (Sauro, 2011).
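To make the 0-40 to 0-100 conversion concrete, a minimal sketch of the standard SUS scoring arithmetic is shown below: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the 0-40 sum is multiplied by 2.5. The example responses are hypothetical.

```python
def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 SUS responses into a 0-100 score.

    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The 0-40 raw sum is then
    scaled by 2.5 to give the familiar 0-100 SUS score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    raw = 0
    for index, response in enumerate(responses, start=1):
        if not 1 <= response <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        raw += (response - 1) if index % 2 == 1 else (5 - response)
    return raw * 2.5  # 0-40 raw score scaled to 0-100


# Hypothetical example: one participant's ten answers.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```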
2.7.7 Task analysis
'Task analysis is crucial for user experience, because a design that solves the wrong problem (i.e., doesn't support users' tasks) will fail, no matter how good its UI' - (Rosala, 2020)
A task analysis allows a designer to learn about how users work or do certain actions. The
first step is to gather information on goals and tasks by observing, interviewing, and
studying the user’s journey to achieve their goal. Next, the information is structured in a
hierarchical diagram with the user's goal on top. Underneath the goal, the tasks needed to achieve it are placed, and below each of the tasks are the subtasks needed to achieve the task above. An example of this can be seen in Figure 5. This information is later used to define what overarching goals a user has and what tasks are needed to achieve them, giving ideas and structure to the design. From the analysis a designer can meet users' expectations and help them achieve their goals in an efficient way (Rosala, 2020).
Figure 5 An example of a hierarchical task analysis showing how the goal is broken down into
tasks and subtasks. From Task Analysis: Support Users in Achieving Their Goals by Maria
Rosala (2020). Reprinted with permission.
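To illustrate the hierarchy described above, the sketch below captures a goal, its tasks, and their subtasks as a nested structure; the food-delivery goal and task names are hypothetical placeholders, not results from the thesis.

```python
from dataclasses import dataclass, field


@dataclass
class TaskNode:
    """One node in a hierarchical task analysis: a goal, task, or subtask."""
    name: str
    children: list["TaskNode"] = field(default_factory=list)


# Hypothetical example: goal on top, tasks below it, subtasks below each task.
goal = TaskNode("Receive a food delivery", [
    TaskNode("Locate the delivery robot", [
        TaskNode("Open the web app"),
        TaskNode("Follow the robot on the map"),
    ]),
    TaskNode("Retrieve the order", [
        TaskNode("Unlock the lid"),
        TaskNode("Take out the food"),
        TaskNode("Close and lock the lid"),
    ]),
])


def print_tree(node: TaskNode, depth: int = 0) -> None:
    """Print the hierarchy with indentation per level."""
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)


print_tree(goal)
```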
2.7.8 Wireframing
A wireframe prototype is often produced by design teams early in the design process with
the primary purpose of creating basic concepts and design directions for the design team.
Wireframe prototypes are further described as high-level sketches usually without any
visual design as the focus of the prototype lies in exploring and visualising the interaction
flow and navigation model. Additionally, wireframes describing the software can be produced and presented in multiple ways, ranging from sketches on paper to visualisations created with graphic design software (Arnowitz et al., 2007, pp. 138–139).
2.7.9 Bodystorming
Bodystorming is mentioned in Imagining and experiencing in design, the role of performances
(Iacucci et al., 2002) as a technique that visualises a user experience through a scenario by using props and interactive environments. The method is explorative in nature and inspired by improvisational theatre. By immersing the designers in the scenario, acting like a regular user using a product or reacting to a situation, bodystorming helps designers generate ideas and understand context through exploration.
The method can be adapted to fit different products and the type of user or persona using the product, and it can differ in how closely it resembles the real scenario. A bodystorming session can for
example be only roleplaying and talking or it can involve sets and props closely resembling
the intended scenario. It is important to note that bodystorming is not like user testing since
the aim is to explore ideas and see potential issues, not to evaluate the product (Iacucci et
al., 2002).
3. Project Process
In this chapter the overall process of the master's thesis is presented. In Figure 6 a visual representation of the process can be seen. The project started with a literature study, followed by a pre-study, to gain knowledge within the researched area, examine the cases given by the company, and find interesting ideas for the design. When a base of knowledge regarding the theory and the case had been established, the thesis moved on to developing the Ericsson case, simplified to food delivery to allow a more general approach. This was done in three iterations, all focusing on prototyping and user testing to continuously improve the design. After the final test, all data and new information were summarized and analysed to discuss the findings and reach conclusions.
4. Literature study
The initial phase of the thesis project was dedicated to establishing the theoretical
framework which would later serve as the foundation for the thesis research and design
work. The literature study focused mostly on methodology within the design field, more specifically within Human-Centered Design, as well as approaches to conducting scientific studies within design practice. Additionally, a great deal of focus was placed on gathering information on the preconditions for the project, mainly last mile delivery. Even though the literature study phase was placed at the beginning of the thesis and a large period was allocated for performing it, collecting information and theory was an ongoing process that continued after the initial phase and was conducted throughout the whole project.
Most of the sources used in the project are scientific papers and reports found at well-known publishers such as SpringerLink, IEEE, and ACM. Moreover, due to the nature of the project, being both in the design field and at the forefront of the development of autonomous delivery, additional sources from industrial practice, such as Google's People + AI Research (PAIR) and the Nielsen Norman Group (NNG), were used to support the work where academia is lacking and to provide state-of-the-art examples from the industry.
5. Pre-study
After performing the literature study, a pre-study was conducted with the focus on
understanding the current status of the product as well as the business cases provided by
the company. This was done by utilizing different methods established in the theoretical
framework to explore the state of the art in the delivery industry, as well as what other companies were doing in similar industries and how their solutions were implemented.
5.1 Method
In this chapter, the process and methods used for the pre-study are presented.
The analysis was carried out by researching different services with similarities to the HUGO project, seeing what they had in common and how people were using the products. To structure the analysis, a mind map of services and products was created to gather screenshots and notes on the different services; see Figure 7 for an example.
Services and products were selected based on the similarity of their interactions to the delivery robot concept. The focus during the analysis was on delivery solutions, digital-to-physical interaction, app UI design, and process design of the service. The results were then used to identify what these services were doing that held them in high regard amongst consumers, i.e. their success factors. They were also used to benchmark how a web app of today could look and function for this kind of service. In other cases, such as the e-scooters, the focus was on the interaction with physical objects through a phone. This was in order to understand what types of technology were available, how they worked, and whether they were commonly used by consumers.
After finding and mapping the inspirational pieces, a thorough brainstorming session was held, discussing and putting up notes with ideas and remarks, as can be seen in Figure 8. These notes and remarks, as well as the results from the discussion, would later be used as inspiration for the development phase of the thesis project.
Figure 8 A mind map of the inspirational sources with notes of findings and remarks
5.1.3 Flowchart
After identifying success factors and patterns in other services, the processes of the five given
cases for HUGO were each individually mapped using the flowchart method, based on a template
found in Figma; the building-block template can be seen in Figure 9. The goal of the process
was to analyse the current state of the services being developed by the HUGO team and to find
similarities between the different cases as well as interesting areas of interaction.
The focus of the mapping was therefore to highlight the different interaction points
between the robot and the user as well as the flow of the process, meaning how the different
phases of the process occur and what type of data is present. The processes were then
cross-examined to identify similarities and interactions of interest to all the cases.
Exploring the different cases and creating the flowcharts was an iterative process based on
the interviews made with the design team behind HUGO. Each iteration of the flowcharts was
reviewed by the team member in charge of that specific company, and based on the
feedback from those meetings the chart was revised and improved to be more accurate to
the given case. In addition to making the flowcharts more accurate, the feedback meetings
also created a better understanding of the different cases and their challenges.
The service blueprint included the customer journey and could be used to better understand
the user and their experience. The service blueprint was used at this stage of the project to
map out the system and get a better understanding of what happens in all the steps and how
the user interacts with the service. In turn, this also provides the requirements of the service,
e.g., what support systems are needed to provide the service to the user, and when, and in
what format, to present the user with information. The goal of producing a service
blueprint was to lay a foundation for a task analysis and to further explore essential
interactions.
The blueprint showed what kind of communication and processes were vital in both the
frontstage and the backstage to make the interaction with a delivery robot work. The focus of
the thesis was on the frontstage actions, but the backstage actions were mapped as
well to better understand the needs of the system. A service blueprint has different sections
of processes but can be adapted depending on the service; the service blueprint made for
this thesis contains the following sections (a minimal data-structure sketch of this layout
follows the list):
• Evidence – The physical or digital evidence the user sees of the interaction or
process, like a text or a receipt.
• Customer journey – The user's actions during the service that lead to their end goal.
• Line of interaction – The line drawn to visualise where the user's actions and the
frontstage actions meet.
• Frontstage – The actions from the service provider that the user can see and
interact with.
o Robot actions – The HUGO robot's interactions during the service process.
o App – The actions of the web application on the user's phone.
o Technology – Actions of other relevant technology, like GPS.
• Line of visibility – The line that symbolises what a user can see of the service and
what is hidden.
• Line of internal interaction – The line that differentiates actions made by the service
provider from actions made by actors outside the service provider's ownership.
• Support processes – Processes not owned by the service provider but needed in the
service, such as fetching data or processing money transfers.
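To make the structure of these sections more tangible, a minimal data-structure sketch is given below. It is purely illustrative: the type names, fields and the example step are assumptions made for illustration, not part of any blueprint tool or of the HUGO system.

```typescript
// Illustrative sketch of the blueprint lanes listed above.
type BlueprintLane =
  | "evidence"
  | "customerJourney"
  | "robotActions"   // frontstage
  | "app"            // frontstage
  | "technology"     // frontstage
  | "supportProcesses";

interface BlueprintStep {
  phase: string;                        // e.g. "Order confirmed", "HUGO arrives"
  entries: Partial<Record<BlueprintLane, string>>;
  crossesLineOfInteraction?: boolean;   // user action meets frontstage action
  belowLineOfVisibility?: boolean;      // hidden from the user
}

// One hypothetical step of the delivery blueprint, purely as an example:
const arrival: BlueprintStep = {
  phase: "HUGO arrives at the address",
  entries: {
    evidence: "SMS with a link to the web app",
    customerJourney: "User walks out to locate the robot",
    robotActions: "Robot parks and waits at the given address",
    app: "Web app shows HUGO's position on a map",
    technology: "GPS position shared with the app",
  },
  crossesLineOfInteraction: true,
};
```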
5.2 Findings
In this chapter, the findings from the pre-study phase are presented, covering the results
from researching the different companies and exploring the cases.
5.2.1.1 ASDA
ASDA is a supermarket chain located in the UK with stores across the country in multiple
sizes and formats (ASDA, n.d.). According to their website they serve 18 million customers in
their stores weekly. In the case of HUGO, a developer on the team explains that the use case
will target short-distance delivery of groceries from the store to the customer. More
specifically, orders will be made by the customers beforehand for pick up and will be packed
by the store's employees. The groceries will then be loaded onto HUGO in crates and
delivered from the store to the customer's car in the parking lot. There will be specific
parking slots for pickup; however, one designer states that, in order to minimize human error,
the customer will not be assigned a specific slot but will instead choose one of the specified
slots themselves and inform the store of when and in which slot they have parked.
The design for the crates holding the groceries was not finalized at the time of the thesis
and was still in development. However, the designers stated that the intention was for the
crates to include some sort of locking mechanism to ensure the safety of the crates and
groceries during travel. This affected the mapping of the service, as the flow therefore
had to include a step for unlocking the crates/box to retrieve the crates with groceries from
HUGO, as can be seen in Figure 10. Another unique aspect of the case was that the current
design for the service required the user to alert the store both that they had arrived at the
store and in what slot they had parked. This implies that an additional interaction is needed,
and that the user starts the interaction by informing the store of their arrival, after which the
store loads the crates onto HUGO and sends it out to the user's car.
5.2.1.2 Borealis
Borealis is one of the leading manufacturers of polyolefin solutions in the world and has
Sweden's only manufacturing plant in Stenungsund (Borealis, n.d.). The lead developer on
the team for Borealis states that the intended use for HUGO in this case is transporting samples
of their produced material within the factory. This means that HUGO will retrieve material
samples from the production line and transport them to a lab for analysis. The factory has
multiple production lines, meaning that there can be multiple stops on the route to the lab.
Borealis presents a unique aspect in that the case has two types of flows for interacting
with HUGO, as can be seen in Figure 11: one where users out in production send samples via
HUGO, and one where users receive the samples in the lab. Despite the difference in tasks at
the two stations, it was evident that the two stations shared similar flows, with the only
difference being placing a sample in HUGO at one station as opposed to retrieving a sample
at the other.
As the factory is a closed-off area, the developer explains that there is no need for the lid to
be locked during transportation, as it needs to be in the other cases. This simplifies the
interaction when sending and receiving samples with HUGO, since the locking and unlocking
steps can be taken out of the design. Despite that, there is still a need for an interaction from
the user at the end of the delivery, signalling the end of the interaction and that the user has
either received a package or placed a new one in HUGO, ready to go to the next stop.
5.2.1.3 PostNord
With a unique distribution network spanning the Nordic countries, PostNord
provides solutions in communications, logistics, e-commerce and distribution (PostNord
Group AB, n.d.). The goal for the future is to integrate HUGO into PostNord's daily operations
to assist them in moving towards a more sustainable and fossil-free future (PostNord Group
AB, 2022). One designer on the HUGO team explains that the intended business case being
developed is Customer to Business (C2B), meaning that the customer will be sending
packages with HUGO, as opposed to receiving them, which is common to the other cases.
Since the PostNord case was intended to be C2B, the user uses the service to send parcels to
a company, and the goal of the user therefore differs from the majority of the other cases
explored. However, even though the user's goal differs from the other cases, the flow of the
service in Figure 12 still shares similarities with the other cases, and is even almost identical
to some of them, such as the Ericsson and Domino's Pizza cases. The main difference found
in sending packages with HUGO is the importance of confirming to the user that the delivery
was a success, which in this case implies that the package has successfully been delivered to
its end destination.
5.2.1.5 Ericsson
The Ericsson office in Kista Science Park wants to test their traffic sensor technology with
the help of HUGO. A developer on the HUGO team explains that in order for Ericsson to be
able to test their sensors and collect data, they need a service in place that lets HUGO
operate in the environment the sensors are intended to operate in. The context of the
Ericsson case is therefore the joint area of Kista Science Park and the Kista gallery. The
operation performed by HUGO, seen in Figure 14, will be delivering food from restaurants in
the Kista gallery food court to employees at the Ericsson office in Kista Science Park.
• Unlock/open box
• Take/drop off goods
• Close/lock box
In the process of creating and analysing the flows, some questions were raised surrounding
the design of the user interaction. One of the biggest questions was how and what
information to present to the user. When to present the user with this information also
became important to take into consideration. Besides informing the user, communicating
to the user when the interaction with the robot starts and ends was found to be another area
that needed to be investigated further. Furthermore, this relates to the user's understanding
of who is in control at a particular point of the process, and the importance of the transfer of
control between the user, phone and robot being evident to the user.
Another insight from analysing the service is the lack of design for when the user fails an
interaction. What should happen when a user makes an action that is not in line with the
intended actions in the design: should the user correct the action, or should the system fix
the error? As can be seen in Figure 10, Figure 12, Figure 13 and Figure 14, when the user
makes a choice different from the intended action, the flow goes out of scope and leads
nowhere. These points of interest were therefore identified as important in the process of
designing the flow of the service, where the implementation of the web app could support
the user in the case of an unintended failure in the interaction.
From the analysis and following discussion a few important ideas and patterns were found:
• It was found that an app or web app was the most common way to make this type of
service accessible to customers. According to the HUGO team there are also some negative
feelings towards traditional downloadable mobile apps among customers, who see them as
an unnecessary extra step in using the service, supporting the idea that a web app would be
what users expect from a service such as HUGO delivery.
• Interaction with a physical object through the phone, like opening the Instabox or
unlocking the e-scooter, was usually done by having the user interact through the phone
together with something physical on the object, like buttons or a QR code.
• The app used for Starship has a few features specific to an autonomous delivery robot
that were interesting to explore, especially how they convey information to the user about
what the user needs to do and what the robot does automatically.
• The Starship app is also an example of what types of interactions are controlled by
the user and what is done automatically. For example, the box unlocks automatically
when the user states that they are next to the robot and locks again when the user
states that they want the robot to leave.
• The analysis also resulted in pointers on how the UI for a delivery app could be designed.
Creating an interface with attributes that are common in other apps could make it more
self-explanatory and let the user know what to expect. There were patterns in how and
where information was presented and in what kind of information the user received, like
map location and time remaining until delivery.
• The interactive steps of the delivery apps provided a framework for what interactions
were necessary in a delivery service, like tracking and confirming an order.
• The delivery service Instabox uses stationary boxes that the user unlocks with their
phone, which was especially interesting for researching how the user could open the box
through a phone interface. This sparked ideas on how to design the unlocking interaction
and indicated that users are accepting of receiving a delivery service through their phone.
The chart represents the service with a focus on the customer's journey and is less thorough
in the backstage section. An interesting finding from the blueprint was that the section of
interest to the thesis started at the step of finding the autonomous agent and ended with it
driving off. This part was unique since the user was interacting with the AI in this section,
making it distinctly different from regular delivery services. The blueprint also showed what
types of actions were necessary for the service to function, which created a comprehensive
layout of what actions should be present in the web app.
Figure 16 Service blueprint of how the HUGO delivery service will operate
6. Developing interactions
for the Ericsson case
To test and evaluate the identified interactions of the five cases, a concept needed to be
developed. The Ericsson case was selected to be further developed in the project, as it was
relevant at that point in time as well as requested by the company. Seeing as the company was
a startup working with an agile and iterative process, the choice was made to adopt the Lean
Startup methodology, working in fast build-measure-learn feedback loops. These loops were
performed in sprints, where one sprint equalled one loop. The sprints lasted two to three
weeks each and the average time for building the prototype was one to two weeks. The last
week of the iteration was used for testing, in other words the measure and learn parts of the
feedback loop.
The service blueprint and flowchart from the pre-study phase were used as prerequisites at
the start of the development phase. The first iteration served as a baseline for the whole
design iteration process, where the initial assumptions made in the pre-study phase were
tested against expert users, meaning the developers of HUGO. By creating quick and
unpolished wireframe prototypes that could be tested against expert users, some initial user
feedback could be collected and used to develop the next iteration.
In accordance with the Lean Startup methodology, each iteration had one hypothesis as its
starting point, deciding what to test, which in turn affected what to build in the MVP for that
iteration. The hypotheses themselves were based on the assumptions made from the
findings of the previous iteration or, in the case of the first iteration, on the pre-study alone.
The hypothesis additionally assisted in keeping the development of the MVP focused on
what was going to be tested.
6.1 Prototyping
Prototyping was consistently used throughout the design process in the project and was
performed with multiple levels of detail and intentions. The purpose, nonetheless, stayed
the same for all the use cases throughout the project, to explore different design ideas
through designing and testing against users. As with the whole development phase itself,
the Lean Startup approach was adopted for the prototyping as well. The prototyping was
therefore performed according to the process proposed by Ries (2019) for creating MVPs.
Which meant planning in reversed order of execution by first deciding what and how to test
the iterations hypothesis before designing the actual prototype. Moreover, the prototypes
were lacking many of the interactive implementations for the components that were
deemed to not be relevant for the testing of the hypothesis. In doing so it helped to keep the
prototypes leaner by avoiding creating too advanced prototypes which only added
unnecessary complexity irrelevant for testing the hypothesis. Saving both time in
developing the prototypes and ensuring focus on the right components, whilst also making
35
DEVELOPING INTERACTIONS FOR THE ERICSSON CASE
it easier and quicker to pivot to other ideas as the complexity of the prototypes were kept
rather low and focus narrow.
Whilst adopting many different forms during the project, the produced prototypes were
essential to the whole design process. To get the most truthful feedback from the users, all
the prototypes were experience prototypes. This was motivated by Buchenau and Suri's (2000)
statement that experiencing the product gives the user a better understanding of it than
reading or hearing about it. Creating these experience prototypes can be accomplished in
multiple ways, both analogue and digital, and for creating more advanced high-fidelity
digital prototypes there are multiple alternative tools to use, each with their own
advantages and disadvantages. In this thesis project the web-based design tool Figma was
chosen for designing the prototypes. The choice was based on the tool's ease of use, previous
experience working with it, and the abilities for collaboration and testing that it offers.
Additionally, the company uses the tool internally, which further motivated the choice as
it allowed for an easier handover of design material to the company at the end of the project.
To start the design process of the user interface in the first iteration, the wireframing
method, see 2.7.8, was chosen because it allowed for rapid and iterative sketching of
low-fidelity designs to explore different concept ideas. The process of creating wireframes
started with quick sketches with pen and paper, to explore ideas swiftly and get up and
running easily. To facilitate the exploration, blank phone templates were printed on paper
to avoid having to draw the framing of the phone when sketching. Additionally, when using
the templates, the results also became more consistent, as the frame gave a reference point
for size, which led to the proportions being more accurate and even across all designs,
making them more realistic in terms of sizing and therefore also easier to reuse in later
stages.
After producing wireframes on paper, the design process moved over to creating higher-quality
wireframes in Figma. The reason for switching to a graphical design tool like Figma was that
one of the main benefits of working in such software, as opposed to on paper, is that iterating
over ideas becomes easier and quicker once some elements have been created and composed
into basic designs. The trade-off of working with software-based tools is the increased time
required initially to get set up and create the first designs, but once that is done, the process
of iterating over design ideas becomes more efficient. The digital wireframes were later used
in the first iteration as a foundation for creating prototypes. The focus was to combine
different ideas into single concepts and to make them interactive; by doing so the level of
fidelity was also raised, making the prototypes more suitable for user testing.
In the second iteration, the prototyping process mostly involved refining the designs from
the first iteration as well as combining the different concepts into one. Specific to the
prototyping in the second iteration was also designing for the alternative cases that the
iteration aimed to explore. This meant creating multiple variations of parts of the design,
where some elements were either added or tweaked in order to introduce the intended
failure, creating the wanted scenario to test. For example, error messages were
implemented to simulate and test the experience of the user losing connection to
HUGO and not being able to interact with it.
For the third and last iteration, the prototyping process focused on combining all the
insights from the first and second iterations into one elaborate high-fidelity prototype. The
prototype was created to be used and validated in the last user test. Therefore, additional
interactions that had previously not been fully designed were added and made usable to
simulate the full experience as closely as possible. This meant adding additional menus,
information overlays and other features that had not been relevant to the previous tests.
The analysis of the user tests was done after each test by reading the protocols/transcripts
from the interviews using the TCA method and discussing the findings. This involved reading the
transcripts/protocols and marking interesting or relevant text by colour categorisation.
Categories differed depending on the questions asked but could, for example, be that
several of the users mentioned a specific improvement or that they reacted negatively to
something. When all the text had been categorised, a summary was created and used as a
basis for discussion and brainstorming.
6.3 Iteration 1
For the first iteration the goal was to design quick and unpolished wireframes in order to
test the hypothesis, which was based on the information gathered in the pre-study. Since the
project concerned a startup company developing a product that had not existed before,
there were no actual users to interview and test with, only future users. Therefore, the
hypothesis of the first iteration was based solely on assumptions made from the material of
the pre-study, and the goal was thereby to test whether these assumptions were accurate or
not.
The assumption was that since this is a new type of interaction for a large portion of the
future users, the service will need to provide the user with information regarding the
delivery process and give clear instructions on what is expected of the user to reach their
goal. Therefore, the hypothesis for the first MVP was formulated as follows:
• The user wants information available and presented to them about how and when to
interact with the autonomous agent.
In addition, when creating the MVP, the goal was also to experiment and test different forms
for how information could be presented to the user.
6.3.1 Method
In this chapter, the process and methods used for the first iteration are presented.
6.3.1.1 Bodystorming
Bodystorming and roleplaying were used to further analyse and explore interacting with
HUGO. The delivery box from an old HUGO model was used to represent the robot in the
roleplaying, allowing for direct interaction with the lid when opening and closing the
box. In the bodystorming session, three scenarios were tested to explore different ways of
interacting with HUGO through the phone as a tool. With the exception of the first scenario,
which was the intended case, the scenarios were created while performing the session in a
'what if' manner.
The goal of using these three scenarios was to explore the initial idea of using a web app as
an interface between the user agent and HUGO, the autonomous agent, as well as to explore
alternative scenarios in a 'what if' manner. In doing so, it opened up the possibility of
identifying aspects of the scenarios that work without a web app and that could, in that case,
be used to simplify the interaction. The other scenarios were therefore compared to the
web app case, which served as a reference point. At this point in the design process, no
actual interactive prototypes of the web app had been produced, and the interaction with the
web app in this case was therefore based on the wireframe drawings produced, as well as an
imagined interaction with a web app. The focus at this stage was therefore not to
explore the design of the user interface for the app or the SMS messages, but rather their
roles as interfaces between the user and the autonomous agent.
Design students were fitting user testers: they tested the design to give insight into how users
without a connection to HUGO would experience the prototypes, while being experienced
enough to give relevant feedback and design ideas. This form of early-stage testing
allows a type of co-design, where the users are not only testing but can also express ideas
and opinions, giving more depth and insight into the design.
The reason that two of the prototypes were not tested with SUS was that they were similar
to already-tested prototypes in the number of interactions and how the flow was built. They
were mostly discussed to examine the placement of information and UI elements. The whole
description of the test and the questions asked can be found in Appendix A.
The tested prototypes that were evaluated with SUS can be seen, in order of testing, in
Figure 17, Figure 18 and Figure 19.
Figure 17 Prototype 1 of user test 1 depicting a flow where a curtain design is used.
Figure 18 Prototype 2 of user test 1 depicting a flow with a card design and the steps of the
interaction at the top.
The prototypes that were not tested but were shown and explained to the user testers after
prototypes 1-3 can be seen in Figure 20 and Figure 21.
Figure 20 Prototype 4 in user test 1 depicting a similar flow as prototype 1 but with another
design and different information placement.
Figure 21 Prototype 5 in user test 1 depicting a similar flow as prototype 3 but with some UI
inspiration from prototype 2.
6.3.2 Findings
In this chapter the findings from the first iteration are presented.
6.3.2.1 Wireframing
By analysing the interaction points in the service blueprint and flowchart for the Ericsson
case, wireframes of different screen views could be designed. These wireframes aimed at
exploring different design concepts for the assumptions and the hypothesis.
The results from the initial wireframing can be seen in Figure 22, Figure 23 and Figure 24.
There were other wireframes as well, but their quality was determined not to be clear
enough to present, as they were only quick sketches.
Figure 22 Sketch wireframes from iteration 1 showing a concept involving lock and unlock
buttons.
Figure 23 Sketch wireframes from iteration 1 showing a concept involving fold out actions
6.3.2.2 Bodystorming
The result from the bodystorming session consists mostly of insights found when performing
the roleplaying and from the discussions that followed during and after performing the different
scenarios. The result is therefore in the form of insights found when summarising the
discussion.
Findings
The bodystorming session also presented other interesting insights not specific to only one
case. Both the first and third case presented an interesting aspect in the interaction where
the user picks up the package from the box. At that point, holding the package in one hand
and closing the lid with the other could prevent the user from accessing the phone. This means
that at that point it might not be possible to provide any new information or instructions to
the user, and at the same time not possible for the user to give input back to the service
through the app, creating a short interval where the web app is out of scope for the service.
This is relevant when designing the information flow of the web app: taking into consideration
when the user will and can have the phone in their hand affects what information to present
and when.
During the session a question was raised related to all three cases, concerning when to
present information to the user about the delivery as well as how to address the
situation if there is an issue with the delivery address. Should an SMS be sent when
the robot has arrived or right before it arrives? How should the service handle it if the
delivery address is not correct? These questions needed to be addressed later in the
design of both the service and the web app.
6.3.2.3 Prototyping
Utilizing the result from the task analysis together with the produced wireframes allowed for
pairing the individual wireframe designs into flows of interactions. This resulted in five
prototypes; however, as four of the prototypes shared many similarities with one another,
only three of the prototypes were turned into interactive prototypes, and these three can be
seen in Figure 25. The three prototypes were designed to test different design concepts and
principles, where each has its own focus with different design suggestions for layout and
service flow.
The first and second prototypes present the user with information on the process: what step
they are on and what steps are left to complete the process and thereby reach their goal.
The two prototypes test different design concepts for presenting the information, where the
first has a status bar at the top of the screen showing the steps. The second one, however,
does not change screen but rather opens a different section, which can be compared to a
dresser where each step is a drawer: as you progress through the process you open the next
drawer and close the previous one.
The third prototype adopts a minimalistic design where the goal is to test the user
interaction of having as few screens, and thereby as few interactions, as possible.
The prototype explores the assumption that there should be a step before unlocking the
robot where the user must verify that they are at the robot. By removing this verification
step in the prototype, the interaction experienced by test users can be compared to the other
two prototypes and used to determine whether the assumption is correct or not.
• All flows evaluated with SUS scored above the required benchmark of 68; the only
exception was one user scoring prototype 2 at 62.5, making the average score for
prototype 2 a bit lower than expected (the standard SUS scoring is sketched after this list).
• On average the students scored the different app flows higher than the expert users.
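For reference, the SUS score mentioned above is computed from ten statements answered on a 1-5 scale, where odd-numbered items are positively worded and even-numbered items negatively worded. The sketch below shows the standard scoring; the example answers are purely illustrative and are not responses from the actual tests.

```typescript
// Standard SUS scoring: 10 items answered on a 1-5 scale.
// Odd-numbered items contribute (answer - 1), even-numbered items (5 - answer);
// the sum is multiplied by 2.5, giving a score between 0 and 100.
function susScore(answers: number[]): number {
  if (answers.length !== 10) {
    throw new Error("SUS requires exactly 10 answers");
  }
  const sum = answers.reduce(
    (acc, answer, i) => acc + (i % 2 === 0 ? answer - 1 : 5 - answer),
    0
  );
  return sum * 2.5;
}

// Illustrative example only (not actual responses from the user tests):
const example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1];
console.log(susScore(example)); // 85 - above the commonly used benchmark of 68
```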
From the interviews and discussions, the following points were raised by users:
• Most of the users displayed a mix of anxiety and some excitement about using HUGO
before testing the flows, e.g., uncertainty about the safety of the package and about
usability for people with less tech experience. Expert users expressed more excitement
about the project, thinking it will be efficient and fast and liking the idea of robots, but
still expressed some uncertainty as well.
• Users preferred having only one thing to do on each screen, and clear, short
instructions were important for the majority.
• Help and undo buttons were wanted to make users feel less anxious.
• Users showed a preference for having a progress bar to be able to tell what the
next interaction would be or how many were left. One user also mentioned that it gave
them a feeling of control.
• 3/5 liked that there were pictures showing how to open HUGO. One user
commented that a picture of HUGO should be shown before the interaction so that
users who have not seen the robot before know what to look for.
• Two users mentioned that a slide bar for opening and closing the box's lock would be
appreciated, since it would make it harder to accidentally press the open/close button.
• There were some opinions that the contact info was unnecessarily large and that it
could instead be folded out when clicked.
• Confirmation of being next to HUGO was deemed unnecessary by one user, but the
rest did not comment on it.
• Opinions differed on how to open the box: 2/5 liked using only the phone, 2/5 would
have wanted a physical interaction with the box, and 1/5 was indifferent but didn't mind
using only their phone.
• The one physical-interaction solution for opening the box that was received positively
overall was the QR code option.
• One opinion concerned the need to be able to see that the robot standing by them is
the HUGO assigned to them in the case of several HUGOs being present, either by a
colour or number on the robot or by a confirmation on the phone from the robot.
• A function to confirm the delivery address before HUGO starts to drive was suggested. In
connection to this, a function where the user pins their location on a map came up as
an alternative to confirming the address.
• The warning that HUGO will drive away was appreciated by most users, but one user
pointed out that a warning symbol could be unnerving to some users, since it could
make them think they have done something wrong.
6.4 Iteration 2
Iteration 2 had a focus on Activity-Centered Design, exploring the combination of ACD with the
HCAI framework to identify potential collaborations and issues. The ideas and feedback from
user test 1 were incorporated into the prototypes, as well as ideas from ACD/HCAI. Signals and
feedback from the autonomous agent and the web app were of great interest, and the
hypothesis for this iteration was:
• The user wants to both interact digitally and physically with the robot as well as
receive digital and physical feedback when interacting.
6.4.1 Method
In this chapter, the process and methods used for the second iteration are presented.
Seeing as the task analysis was a method used to further explore the case, making decisions
based on assumptions was justified, as they were supported by the insights collected from the
first iteration of user testing. The visualisation from the task analysis was, like the service
blueprint, a living document that was later revised with corrections and changes as a
better understanding of the tasks was attained. Since it was later used with other methods,
both new information appeared and changes to the design of the service were made, which
had an impact on the flow of tasks for the user as well as the understanding of it,
meaning that changes also had to be made to the visualisation of the task analysis.
The first step in the process was to take the result from the task analysis and restructure
and transform it in accordance with the different abstraction layers in ACD presented
by Norman (2013; 2005). The highest layer in the abstraction, the activity layer, mapped
directly to the top task in the task analysis, also referred to as the user goal. Next, the first
level of tasks in the analysis corresponded to the action layer. Lastly, the associated
subtasks were placed in the operation layer, with connecting lines to their parent tasks in the
action layer above. The mapping was therefore: user goal to activity layer, first-level tasks to
action layer, and subtasks to operation layer (an illustrative sketch follows below).
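As a purely illustrative sketch of this mapping, the three layers can be thought of as a small tree; the example task names below are drawn loosely from the Ericsson case but are assumptions, not the actual task analysis.

```typescript
// Illustrative sketch of the three ACD abstraction layers.
interface Operation { name: string }                        // subtasks
interface Action { name: string; operations: Operation[] }  // first-level tasks
interface Activity { goal: string; actions: Action[] }      // the user goal

// Example tree, with assumed task names rather than the real analysis:
const foodDelivery: Activity = {
  goal: "Receive a food delivery from HUGO",
  actions: [
    {
      name: "Localise HUGO",
      operations: [{ name: "Find the robot at the given address" }],
    },
    {
      name: "Retrieve food delivery",
      operations: [
        { name: "Unlock the lid" },
        { name: "Open the lid" },
        { name: "Pick up food" },
        { name: "Close and lock the lid" },
      ],
    },
    {
      name: "Send HUGO off",
      operations: [{ name: "Prompt HUGO to leave" }],
    },
  ],
};
```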
After placing all the tasks from the task analysis in the different layers, the next step in the
process focused on the operation layer. Each operation was analysed to identify the
agent, its goal and the tool used to perform the operation. If the operation affected or
involved more than one agent, the correlation between them was of interest. Additionally,
if two or more agents shared a goal, that indicated a collaboration between the agents.
When these collaborations were identified, the next step was both to identify potential errors
caused by the collaboration of the agents and to map their collaboration in
Shneiderman's (2020a, 2020b, 2022) HCAI framework. The interest of HCAI in this project,
however, was in the relation of control between the agents and how it is transferred or
shared in operations, whereas Shneiderman's framework explores the relation
between high and low human control and high and low AI automation. To better align with the
interests of the report, the framework was thus modified to instead explore the dynamics of
when agents are either active or inactive in collaborations. This meant that both axes in the
framework were changed to range from inactive to active, making it a binary metric. An active
agent was defined as an agent taking an active role in reaching the common goal in a
collaboration. Inactive was, on the other hand, defined as taking a passive role in reaching
the goal, still able to participate in the collaboration with actions, but only when explicitly
requested by the other agent. When active, agents could take actions in response to
the other agent's activities.
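To make the modified framing concrete, the sketch below models a collaboration as a pair of activity levels together with the shared goal and the tools used. The type and field names are illustrative assumptions and are not taken from Shneiderman's framework or from any HUGO implementation.

```typescript
// Binary activity metric used instead of the high/low control-automation axes.
type ActivityLevel = "active" | "inactive";

interface AgentRole {
  agent: "user" | "autonomousAgent";
  activity: ActivityLevel;
  tool?: string; // e.g. "web app", "lights and sound"
}

interface Collaboration {
  goal: string;               // the shared goal that defines the collaboration
  roles: [AgentRole, AgentRole];
  potentialConflict?: string; // identified failure mode, if any
}

// Example: the "localise each other" collaboration discussed below (cf. Table 2).
const localise: Collaboration = {
  goal: "User and HUGO find each other at the delivery address",
  roles: [
    { agent: "user", activity: "active", tool: "web app" },
    { agent: "autonomousAgent", activity: "inactive", tool: "lights and sound" },
  ],
  potentialConflict: "Robot obscured by an object, hidden from the user's view",
};
```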
Identifying collaborations in the operation layer and exploring them in the framework was
performed iteratively, where the two parts provided each other with input for the next
iteration. In the process of exploring collaborations in the framework, alternative
collaborations with different tools or goals could be identified, further opening up the
possibility of connecting operations and actions together and turning them into larger
collaborations. The exploration of collaborations was performed by moving between the
different quadrants, developing different design alternatives for the operation. Identifying
alternative collaborations for operations by moving between the four fields in the
framework not only gave a new perspective on the collaboration but also generated solutions
to conflicts by testing and shifting the level of activity of the actors.
The purpose of using this method was to identify potential failures in the design, both from
the user and from the autonomous agent separately, but mainly to identify potential conflicts
caused when they actively collaborate at the same time. Identifying these potential conflicts
opened up the possibility of finding solutions by exploring shifts in the levels of
activity between the agents, to find ways of preventing the conflicts from occurring.
Naturally, there were cases where conflicts could not be solved by tuning the relation of
activity between the agents, which resulted in implementing contingencies in the design of
the web app to address these conflicts when they occur. Exploring these conflicts also raised
questions about the risks of conflicts and their impact on the user experience, which led to
discussions on whether the probability of them occurring and their consequences for the
user experience are severe enough to be worth addressing, or whether doing so adds
unnecessary complexity, making the risk worth accepting over the added complexity.
The test was done using a mock-up version of HUGO (Figure 27), since the real HUGO was
not available for testing. This also allowed for quicker and more efficient testing, since the
technical set-up for HUGO was not needed. The robot used was a smaller type of radio-controlled
car (RC car) with a box mounted on top of it. The RC car was controlled by an app,
and it also had controls for light and sound, making it possible to test light and sound signals
towards the user. Other functions, like the sound of the box clicking or the moving of the lid,
were performed by one of the facilitators.
The user participated in a type of roleplaying session where they interacted with the mock-up
HUGO robot and a Figma prototype of the app on a phone given to them during the test,
allowing for more control of the prototype from the facilitators' point of view. There were
two different sections of the test: the first had the user go through the whole interaction
with HUGO, from receiving the order confirmation to sending HUGO off, while the second
section focused on four potential fail scenarios to see if the user understood the solutions
built into the app. The user was encouraged to think out loud during the session to help the
two facilitators understand what impressions, feelings and thoughts came to mind. The
facilitators in turn explained the scenarios to the users, took notes, and asked interview
questions. At the end of each test section a discussion was held with the participant about
the experience and whether the participant had any ideas for improvement. The questions
from the test can be found in Appendix C.
• The user started by answering the initial questions and the facilitators explained the
scenario and what the HUGO delivery service is, showing a picture of the HUGO robot
for clarity. The user was handed the phone with the Figma prototype with an
explanation on how to use it during the test.
• The test started with the first scenario, where a normal interaction with the HUGO
delivery service takes place. They were given the premise: You have called the
restaurant and ordered to have your food delivered with HUGO to your address. You
want to acquire your food and complete the delivery with HUGO.
• The user played out the scenario with the mock-up HUGO robot by following the
instructions in the prototype. Texts with links to the prototype were sent to the
phone to mimic how a real service contacts its customers. They received a text
telling them that their order was confirmed and one telling them that HUGO had
arrived at their address. The scenario ended when the user managed to tell HUGO to
leave.
• Then the second scenario was presented, consisting of four different and shorter
scenarios where something in the interaction failed and the solutions to these
failures were tested. These were:
o The wrong address was shown. The user was asked to change it. See Figure
29.
o The user was not able to see the HUGO robot and had to use the sound button
to find it. Seen in the bottom of Figure 30.
o Loss of internet connection making it impossible to open the box. Seen in the
bottom of Figure 31.
o The user not closing the lid. This resulted in a text telling the user that they
had forgotten to close it and needed to finish the interaction.
• When the test was finished, the user answered questions and discussed how the
interaction felt. This was done to find out more about how they experienced the
robot and whether they had any suggestions for improvement.
Figure 29 Web app frame showing the user's address and an alternative to change it
Figure 30 Web app frame showing where HUGO is on the map and the button making HUGO
play a sound to find it.
Figure 31 Web app frame showing the loss of connection. Loss of connection message is at
the bottom of the frame.
The frames seen in Figure 32 were the ones tested in the normal scenario by users together
with the mock up version of HUGO. They are presented in chronological order.
6.4.2 Findings
In this chapter the findings from the second iteration are presented.
Figure 33 Visualisation of the task analysis, showing the tasks involved for the Ericsson case
For the third action, Localise HUGO, the first interaction with the autonomous agent is
introduced. In this operation both the user and the autonomous agent share the similar goal of
localising the other; they therefore have a common goal of finding each other. When
exploring this collaboration in the operation layer and in the HCAI mapping process,
multiple design alternatives were suggested, see Table 2, and potential conflicts and failures
were identified. When both the user and the autonomous agent are active at the same time,
searching for one another, a potential risk of them circling each other was identified.
Similarly, if the user expects the autonomous agent to locate them while the agent is designed
to wait stationary for the user, then there is a risk of them both waiting for each other.
When the user is active and searches for the autonomous agent, it was found that there is
a risk of the autonomous agent being obscured by some object, for example standing
behind a car in the street, hiding it from the user's field of view. To address this conflict,
sound and light on the agent were used. There were two alternatives for how these
two elements could be used: one where the agent is active and one where it is inactive. In
the alternative where the agent is active, it would be stationary at the given address and
react to the user's movement by lighting up and/or making a sound as the user
approaches. In the second alternative, where the agent is inactive, the user uses
the web app to make the agent play a sound and blink to make it easier to locate. This is also
the alternative that was chosen to be implemented in the prototype, as it gave the user more
control in locating the autonomous agent (a sketch of this alternative follows Table 2).
Table 2 Design alternatives for the localise operation in the ACD mapping
Nr. | User | Autonomous agent | Description
1. | Active | Inactive | The user is actively searching for the agent and prompts the agent to blink its lights and make a sound to locate it.
2. | Active | Active | The user is actively searching for the agent; the agent lights up and/or makes a sound when the user gets near the autonomous agent.
3. | Active | Active | Both the user and the agent are actively searching for the other.
4. | Inactive | Active | The agent is actively searching for the user.
5. | Inactive | Inactive | Neither the user nor the agent is actively searching for the other.
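The first alternative in Table 2, where the otherwise inactive agent plays a sound and blinks on the user's request, could be realised in the web app roughly as sketched below. The endpoint, payload and identifiers are hypothetical and only illustrate the interaction; they are not the actual HUGO API.

```typescript
// Hypothetical handler for the "play sound" button on the locate screen.
// The endpoint and request shape are illustrative assumptions only.
async function requestLocateSignal(robotId: string): Promise<void> {
  const response = await fetch(`/api/robots/${robotId}/locate-signal`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sound: true, blinkLights: true, durationSeconds: 5 }),
  });
  if (!response.ok) {
    // Surface the failure in the UI rather than failing silently, in line
    // with the contingency-based error handling described later.
    throw new Error("Could not reach HUGO - please try again");
  }
}
```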
Table 3 Design alternatives for the unlocking operation in the ACD mapping
Nr. | User | Autonomous agent | Description
1. | Active | Inactive | The user unlocks the lid through the web app.
2. | Active | Active | The autonomous agent automatically unlocks the lid when the user confirms that they are at the autonomous agent.
3. | Inactive | Active | The autonomous agent automatically unlocks the lid when the user gets in close proximity to the agent, using the location of the phone through the web app.
4. | Inactive | Inactive | The lid is not locked.
As all alternatives except the one where the box is not locked relied on the connection between
the autonomous agent and the web app, the risk of that connection breaking was identified as a
potential conflict leading to a failure. It was also deemed impossible to fully prevent, as the
loss of connection between the web app and the agent could have multiple causes. Thus, the
decision was made to address the issue after the failure has occurred, which resulted in the
error message in the prototype.
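A minimal sketch of this contingency is given below, assuming a hypothetical unlock endpoint and a simple timeout; it illustrates the decision to surface an error message after the failure occurs, rather than the actual implementation.

```typescript
// Hypothetical unlock request with a timeout; on failure the web app shows
// the loss-of-connection message instead of leaving the user waiting.
async function unlockLid(robotId: string, timeoutMs = 5000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(`/api/robots/${robotId}/unlock`, {
      method: "POST",
      signal: controller.signal,
    });
    return response.ok;
  } catch {
    return false; // aborted or network error - treated as loss of connection
  } finally {
    clearTimeout(timer);
  }
}

// Usage in the unlock screen (message text is illustrative):
// if (!(await unlockLid("hugo-01"))) showError("Lost connection to HUGO. Please try again.");
```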
For the second operation, open the lid, no initial collaboration was identified, as the
operation was initially determined to be performed manually by the user without any goal
from the autonomous agent. However, using the HCAI framework to explore other design
alternatives introduced collaborations where the user and agent work towards opening the
lid together, resulting in what can be seen in Table 4.
Table 4 Design alternatives for the opening of the lid operation in the ACD mapping
Nr. | User | Autonomous agent | Description
1. | Active | Inactive | The user opens the lid by hand.
2. | Active | Inactive | The user prompts the autonomous agent to open the lid through the web app.
3. | Active | Active | The agent actively assists the user when they open the lid by hand, using motors to support the user's movement.
4. | Inactive | Active | The agent automatically opens the lid without any interaction from the user.
No conflict was identified for this operation, but rather the insight that alternatives involving
the autonomous agent performing operations require motorized mechanics of some sort
and thus add complexity to the product itself. Therefore, based on the current technical
possibilities of the autonomous agent, the decision was made to seek simplicity and
implement the manual solution in the prototype. Similarly, for the last operation in the
second sub-section, pick up food, a collaboration between the user and the autonomous
agent was deemed unnecessary as it would add unwanted complexity to the product. Thus,
the user is the only agent in that operation in the prototype.
The third and last sub-section of the chart, see Figure 37, presents both the last of the
four operations linked to the retrieve food delivery action and the operation related to the
action of sending the autonomous agent off. When exploring the operations related to the
action of retrieving the food, it became evident that the operations of closing and locking the
lid were closely related to each other and could be combined. Thus, the decision was made to
explore both operations together in the HCAI framework. In exploring both operations,
multiple alternatives were identified, which are presented in Table 5. The main conflict that
was identified, highlighted in red in Table 5, is when the user forgets to close the lid when
leaving. This is present in all alternatives except for the one where the autonomous agent
automatically closes the lid when the user has retrieved the package and walked away from
the autonomous agent. To address this conflict in the prototype, the service sends the user
an SMS reminding them to close the lid after a given time (a sketch of such a reminder follows
below). On the other hand, for the alternative where the autonomous agent leaves
automatically, a potential conflict was instead found where there could be uncertainty for
the user about who should close the lid, and the user might feel uncomfortable leaving the
autonomous agent without closing the lid.
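As a rough illustration of this reminder, the sketch below starts a timer when the package has been retrieved and sends an SMS if the lid has not been reported closed within a given time. The function names, the session object and the SMS-sending callback are assumptions made for illustration, not the service's actual backend.

```typescript
// Hypothetical backend-side reminder: if the lid is still open a set time
// after the package was retrieved, send an SMS asking the user to close it.
interface DeliverySession {
  lidClosed: boolean;
  userPhoneNumber: string;
}

function scheduleCloseLidReminder(
  session: DeliverySession,
  sendSms: (to: string, text: string) => void,
  delayMs = 60_000
): void {
  setTimeout(() => {
    if (!session.lidClosed) {
      sendSms(
        session.userPhoneNumber,
        "You seem to have forgotten to close HUGO's lid - please close it to finish your delivery."
      );
    }
  }, delayMs);
}
```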
Additionally, when exploring the alternatives for closing and locking the lid, ideas involving
the last action of sending off the autonomous agent were also presented. Thus, the decision
was made to also explore that action in combination with the locking operation from the
action before. In doing so, alternative 2 in Table 5 was proposed, which was also the
alternative chosen for the prototype. Since the operations of closing and locking the lid were
explored before the operation in the last action, the alternatives combining all three
operations were placed under the first action in Figure 37. Besides those design alternatives,
the only alternative identified for the operation in the last action was that the user would
manually prompt the autonomous agent to leave through the web app.
Common to all the alternatives in the last sub-section was a potential conflict where the
user's expectation of the autonomous agent's behaviour does not correspond to the actual
behaviour, which, according to the HCAI literature, is considered a failure despite the
autonomous agent behaving according to design. Therefore, information on the
consequences of certain user actions was added to prepare the user and tune their
expectations.
Table 5 Design alternatives for the closing and locking operation in the third sub-section of the ACD mapping
Nr. | User | Autonomous agent | Description
1. | Active | Inactive | The user closes the lid and locks it through the web app.
2. | Active | Active | The user closes the lid and locks it through the web app. The autonomous agent automatically leaves after being locked.
3. | Active | Active | Automatic locking and driving off when the user closes the lid.
4. | Active | Active | Automatic locking when the user closes the lid.
5. | Active | Active | The autonomous agent automatically closes the lid, locks it, and drives off when the user walks away from the agent.
6. | Inactive | Active | The user doesn't close the lid; the autonomous agent sends a reminder text to close the lid.
7. | Inactive | Inactive | The user doesn't close the lid.
6.4.2.3 Prototyping
The result from the prototyping in iteration 2 was a card design combining many of the UI
elements and interactions from the first iteration prototypes. The interaction sequence was
also modified in accordance with the findings from the ACD/HCAI analysis. The prototype can
be seen in Figure 38.
• 3/5 users were interested and excited to try a delivery robot. 2/5 were sceptical of its
efficiency but not entirely negative towards the product.
• Two users explicitly liked that they did not have to interact with humans at all when
using the service.
• Light and sound feedback from HUGO was brought up by many users as a positive
thing and something they would need to clearly understand the robot's intentions.
• One user suggested that the light and sound for locking and unlocking could mimic
that of a car.
• Two users expressed worries about not knowing what the robot actually does
automatically, for example whether it opens the lid by itself. The users were somewhat
nervous to touch or be too close to the robot because of it.
• 4/5 users were positive about the experience of using the robot and thought it was
simple to use. The robot mock-up was however a simple build and not very realistic,
which some users mentioned could influence how they feel about it.
• Opinions were divided on how automatic the robot and app should be; the step least
often commented on as unnecessary was the opening interaction, indicating that users
consider this step less of a hassle than the rest.
• The wording used in the app was not always clear to the users, and all of them pointed
out that they could not connect 'slide to open' with unlocking the lid. This set an
expectation among the users that the lid would open automatically, an expectation of
the AI that was not necessarily correct.
• Users expressed that the curtain menu explaining how the interaction works needed
pictures and shorter text, but being able to see beforehand what to expect was very
appreciated.
• A majority of users thought that interacting with HUGO was simple and not too
complicated. However, 4/5 users expected the lid to open and close automatically.
• No user thought anything was missing or hindered them in doing the task, and all
users could reach their goal of completing the delivery through the app.
• One user tried to lock the lid before closing it. A message telling them that they need
to close the lid before locking could solve this, according to the user.
6.5 Iteration 3
In this iteration the focus was on implementing the changes suggested in iteration 2 into
the design and confirming that the final design was well received by users. The hypothesis for
this iteration was therefore not as specific as in earlier iterations and could be summarised as
confirming that users could understand and use the web app, and seeing whether there were
any major final design suggestions.
6.5.1 Method
In this chapter the process and methods used for the third iteration are presented.
The test involved five user testers with an average age of 29.6 years, the oldest user
being 36 years old and the youngest 22 years old. The users placed themselves at a high level
of technical ability, with an average of 4.8 on a 1 to 5 scale. The users were all working
in the tech industry at an office, which was preferable since the interface was designed
to be used by Ericsson staff in the Ericsson food delivery case. All this meant that the
final test users were similar to the average person working at Ericsson and that, hopefully,
a result closer to actually testing Ericsson employees could be achieved.
The test was conducted with the real HUGO delivery robot (Figure 39) to mimic, as closely as
possible, how the final service would look. The test still had to rely on a Wizard of Oz
technique, since Figma prototypes were used and could not be connected to the actual
robot's actions. The facilitator instead had to explain when the screen would automatically
move to the next page and ask the user tester to change it themselves. The sequence that
the users tested can be seen in Figure 40.
• Users were asked a few demographic questions and about their feelings towards
autonomous delivery before the interaction with the app and the autonomous agent.
• They then enacted the service flow as a customer, using the Figma web app on a
phone given to them by the facilitators to communicate with HUGO.
• Afterwards they were asked interview questions about their opinions, feelings and
ideas regarding their interaction with the autonomous agent HUGO.
All questions asked in the interview portions of the test as well as the full description of the
test can be found in Appendix E.
6.5.2 Findings
In this chapter, the findings from the third iteration are presented.
6.5.2.1 Prototype
The last prototyping round used the feedback from iteration 2 and resulted in the final design proposal shown in 6.6 Design proposal. There were a few other prototypes before the final design, and examples of these can be seen in Figure 41. They mostly focused on different visual designs but also explored shortening the interaction sequence.
From the interviews and discussions, the following conclusions and opinions were found:
• Users expressed an overall positive and curious attitude. Two users, however, expressed some worries about safety and some general anxiety towards the concept and what interacting with it would be like. There were also concerns about how reliable robot delivery would be compared to a traditional delivery service.
• One user mentioned that they would like a map when waiting for the delivery.
• Light and sound were mentioned as important for understanding the robot’s signals.
• 5/5 users expressed that the app was easy to use and understand, the number of interactions was balanced, and the information/instructions were easy to read.
• One user was still sceptical of the whole concept of automated delivery but had no objections against the app interaction.
• A majority of the users liked seeing more information in the beginning, to feel more prepared for what to do when the robot arrived.
• Some users were a little anxious about using HUGO, both about finding it and about how hard they were supposed to close the lid.
• One user commented on the UI and suggested presenting information with more of a hierarchy.
• One user thought it was hard to close the lid while holding their package and the phone at the same time. Curiously, the other users did not have any problem with it and simply switched hands for the phone or put down the package. Only when asked did they reflect on this possibly being a problem.
• All users indicated that they could perform the tasks and that nothing was missing.
• One user would have liked a text confirming that the delivery was done.
• Frame 2 is simply an informative screen that tells the user how long they will have to wait for their delivery and what stage the delivery is in, and gives detailed information on how the delivery works. It also has a picture of HUGO so that the user knows what to look for when it arrives.
• The user then receives a new text with a link, telling them that HUGO has arrived. They are greeted by frame 3, which shows a map with the positions of HUGO and the user. The user can also make HUGO play a sound to find it. When the user is next to the robot, they can choose to confirm this by unlocking it.
In Figure 43 the next step of the interaction can be seen. This is where the user has found
HUGO and starts interacting more physically with the robot.
• In frame 4 the user is told that the box is unlocked and that they can now lift the lid. In doing so, the app will automatically switch to the next screen. There is also a progress bar above the information that tells the user how close they are to completing the interaction with HUGO.
• Frame 5 tells the user to take their goods out of the box, and there is also a manual confirmation that the user needs to press to ensure that they do not close the lid by accident or forget any packages in the box. This prevents HUGO from leaving by mistake, since it would otherwise recognise the delivery as completed when the user closes the lid.
• In the next step, frame 6, the user is told that they can finish the delivery by closing the lid. They are also warned that this will make HUGO leave and that it may start moving, signalling that they are handing control back to the autonomous agent.
The next step in the interaction is about confirming to the user that they have completed
the delivery. This can be seen in Figure 44.
• In frame 7 the user is told that they have successfully completed their delivery and
are also presented with the information that HUGO will leave in a certain amount of
time. This is to communicate to the user that HUGO will start moving soon and give
them time to act if needed, for example if they wish to step back. The user can also
open the lid to stop HUGO from leaving, taking the user back to frame 5. If this is not
needed, they can simply press the ‘I’m done’ button to make HUGO leave
immediately.
• When HUGO has started driving, the user is presented with frame 8, the final frame, which shows that they are completely finished with the interaction and thanks them for using the service. There is also a help-centre button clearly centred at the bottom of the frame, in case the user has problems or notices that they have done something wrong in the previous steps.
A help button is present in frames 4-8 to make it easy for the user to find information or
contact customer support when they are interacting physically with HUGO. All frames also
have the menu in the top right corner where the customer service and information on how
the delivery works can be found.
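As a concrete illustration of the frame sequence described above, the following is a minimal TypeScript sketch of the flow written as a pure transition function between screens. All names and the structure are illustrative assumptions; the design in this thesis exists only as a Figma prototype and is not tied to any particular implementation.

```typescript
type Frame =
  | 'delivery-info' // frame 2: waiting time, delivery stages, picture of HUGO
  | 'find-hugo'     // frame 3: map, play a sound, confirm arrival to unlock
  | 'lift-lid'      // frame 4: box unlocked, lift the lid
  | 'take-goods'    // frame 5: take the goods, confirm retrieval manually
  | 'close-lid'     // frame 6: close the lid to finish, HUGO may start moving
  | 'departing'     // frame 7: countdown before HUGO leaves, reopening goes back
  | 'done';         // frame 8: delivery completed, help centre button

// Events that advance the flow: user events are taps in the web app,
// robot events are physical interactions detected by the autonomous agent.
type FlowEvent =
  | 'user:arrived'            // the text message link is opened when HUGO has arrived
  | 'user:confirm-at-robot'
  | 'robot:lid-opened'
  | 'user:confirm-goods-taken'
  | 'robot:lid-closed'
  | 'robot:lid-reopened'
  | 'user:im-done'
  | 'robot:driving-started';

function nextFrame(current: Frame, event: FlowEvent): Frame {
  switch (current) {
    case 'delivery-info':
      return event === 'user:arrived' ? 'find-hugo' : current;
    case 'find-hugo':
      return event === 'user:confirm-at-robot' ? 'lift-lid' : current;
    case 'lift-lid':
      return event === 'robot:lid-opened' ? 'take-goods' : current;
    case 'take-goods':
      return event === 'user:confirm-goods-taken' ? 'close-lid' : current;
    case 'close-lid':
      return event === 'robot:lid-closed' ? 'departing' : current;
    case 'departing':
      if (event === 'robot:lid-reopened') return 'take-goods';
      if (event === 'user:im-done' || event === 'robot:driving-started') return 'done';
      return current;
    default:
      return current; // 'done' is terminal
  }
}
```

Keeping the flow as a single transition function of this kind makes it easy to see which agent triggers each step, mirroring the frame descriptions above.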
7. Discussion
In this chapter, the methods applied and the results are discussed in relation to the two research questions of the thesis. Additionally, the collaboration between a human and an autonomous AI agent in a delivery service is discussed and compared to a traditional delivery. Lastly, future research in the area is discussed, as well as suggestions for future implementations for the company.
However, despite user testing being performed at the end of each iteration throughout the development phase, points can still be made about potential bias in the design of the interactions. Even though the design was mainly based on the feedback from the iterations, it was still partly rooted in our own mental model of the AI, and as designers we have greater knowledge of the system and the product than the actual users. It is thus harder to design interactions with mental models that fully match those of new users. However, adopting the activity-centred design philosophy can help in minimising bias, both from the designers and from the users’ preferences, which could also be considered a form of bias. By shifting the design focus to the activity, rather than purely the user and their experience, the design process becomes less reliant on the opinions of individuals and the design can consequently cater to a broader user group.
Adopting ACD was also important in the design process as the user groups in the tests were generally homogeneous, with similar self-estimated levels of technological ability. At the same time, even though the users could be considered a homogeneous group, they showed differences in their preferred way of interacting with the service. ACD was therefore useful for addressing both the issue of having homogeneous test groups and that of having users with broad preferences for how to interact with the service.
A unique and interesting part of the thesis process was the use of the method presented by our supervisor Chu Wanjun, where we combined the Activity-Centered Design philosophy with the Human-Centered AI framework. It introduced a new and interesting way of analysing and exploring interactions in collaborations between agents, identifying conflicts and finding design alternatives. The method being experimental and novel naturally meant that it was not as clearly defined as the existing, well-tested methods otherwise used in the project, and thus it had its limitations. One example is the lack of a clear definition of what active and inactive meant. Of course, as with all methods, it needs to be adapted to the case it is used for, and the same principle clearly applied to this project as well. We had to outline a suitable definition of an active/inactive agent for our case, and a binary definition was chosen for this project. A more nuanced definition ranging between the two states, similar to the original framework, was also discussed but was not chosen, for simplicity.
The way we visualised the method also had its limitations. When exploring the operations in the HCAI framework, moving between the four quadrants and exploring alternative design implementations, new types of collaborations were constantly identified; for example, collaborations where the tools could be different or where there was no collaboration at all. How to address these new collaborations, and where to place them, was not considered in the visualisation of the method that we shaped, and it did not naturally support adding alternative collaborations. Similarly, as presented in 6.4.1, when using the method we also saw the possibility of combining operations by chaining them together, which there was no natural way of doing either. Using the method was therefore sometimes difficult, but it was nonetheless a useful tool for identifying potential conflicts and for designing interactions for the operations. The method was also used in this case more as an exploration tool, as the system that was analysed was the design from the first iteration, which was very rough. Using the method to analyse an existing service of higher, or even production, quality could therefore have given a different experience.
Lastly, in the final design proposal, the relationship between interactions made directly with the autonomous agent and the automatic state updates made in the web app presents us with an interesting topic. In this project, the web application has been seen as a tool through which the user interacts with the autonomous agent. However, as the autonomous agent in the proposed design updates the state of the app when the user interacts with the agent directly, it could be argued that the web app is in fact a tool for the autonomous agent as well. This implies that the human user and the autonomous agent are, in some collaborative interactions, not only sharing the same goal but also sharing tools. This arguably only applies when the user and the autonomous agent are both active, as in the other cases one of the agents is not taking an active role in reaching the goal and is thus not using any tool. This is, however, up for debate and depends on how the perspective on active/inactive is framed, as well as on what counts as a collaboration for the specific use case. Furthermore, depending on the technical implementation of the phone and the role it plays in a collaboration, the phone could in some cases arguably be considered an agent as well and not merely a tool.
It is evident in the analysis of the different business cases that even though they differ in
multiple ways, both in the context in which they operate as well as the goal of the user, they
still share similarities, specifically in what tasks the user needs to perform when directly
interacting with the autonomous agent. These tasks are present in all the business cases
analysed and could therefore be argued to highlight the essential interactions in the flow of
the service, when interacting with an autonomous delivery robot such as HUGO.
The suggested essential interactions begin when the user has a motive to start interacting with the physical robot and end when the robot leaves. Of course, when looking at the whole service there is an earlier starting action, since the service requires a setup in which the user needs a delivery in the first place, but these actions vary, both in number and in type, depending on the case and are not strictly connected to the autonomous agent. These actions are also often relatively similar to interactions in existing services, such as ordering mail delivery, and they do not involve the autonomous agent in the same way as the suggested interaction sequence. The autonomous agent could simply not be a part of the service at all and instead be replaced by a person delivering the package. They can therefore not be seen as essential and are not as interesting for the thesis as the interactions involving the physical robot.
The start of the interaction between the user and HUGO is especially interesting as this stage
signals the start of the collaboration between the human and the autonomous agent. At this
point the user’s interaction with the service shifts from interacting with the service through
the phone, which is purely digital, to interacting both digitally and physically with the agent.
This mix of digital/physical interaction also indicates a switch in context for the user and
they need to understand when to change between interacting with the agent through the
phone and interacting with the agent physically.
One important design guideline when designing for AI, according to Google’s AI guidelines, is explainability, which means clearly presenting what the AI does and will do as a reaction to the user’s input (Google PAIR, 2021). This helps set expectations of the AI, building trust and keeping the user in a sense of control. It is crucial to present information on what type of action, digital or physical, is required from the user and what the autonomous agent does by itself to make the interaction sequence work. The start of the interaction also gives the user their first impression of what to expect in the continuing sequence and how they should act towards the agent. The start of the interaction sequence should also happen at the user’s initiative, since this signals that the user gains control of the AI and that they now have a say in what the AI does. According to Shneiderman (2020b), human control in combination with automation is desirable and is more likely to produce a reliable, trustworthy and safe application, which makes it important at the start of the interaction to make the user feel safe in using the design. This was notable in the user tests, as those expressing worries about interacting with the autonomous agent in the first interview changed their state of mind
when presented with clear information about what the interaction entailed before meeting
the agent. Afterwards they also expressed that they felt at ease during the interaction due
to clear information regarding what was going to happen next and what the agent would do
in response to their actions.
Similarly, the end of the interaction between the user and HUGO marks the end of the collaboration. Designing for this is particularly important, as the end of the collaboration as well as the transfer of control from the user to the autonomous agent needs to be properly signalled to the user. This is further motivated by the insight that the user’s incentive for the service changes at this point. In a food delivery situation, the user’s goal is to receive their package; when this is completed, the incentive to interact further with the autonomous agent disappears. The user has completed their goal and might therefore not see a point in interacting any further and might lose interest in completing the sequence. This indicates that the interaction ending the sequence needs to be simple and natural, ensuring that the user either completes the sequence or that it can be seen as completed at the stage of taking their package. This is important when trying to prevent faults from happening in the interaction. Noteworthy is that the incentive can be assumed to be reversed in the PostNord case, as the user’s goal is to send a delivery rather than receive one. This means that the design for the end of the sequence is not at the same risk of losing the user’s incentive as in the food delivery case.
From the findings of this thesis, an interaction sequence can be derived that specifies the essential interactions in the case of the autonomous delivery robot HUGO. The specified essential interactions are:
• Locate.
In order to interact with the robot, the user needs to be at its location or have some
way of knowing where it can be found.
This interaction sequence was found when analysing the business cases given by the company, and it was confirmed during user testing that these actions were important and necessary to reach the end goal of the user.
The actions are not strictly separated in the sequence and can be combined, for example by
combining closing the lid with ending the interaction sequence. The different actions are
also not necessarily bound to the user or the robot, e.g., the lid could be opened manually
by the user or automatically by the robot, which allows for assigning actions to be carried
out by either one of them when designing the interaction sequence.
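As a small illustration of this flexibility, a candidate interaction sequence could be described as a mapping from each action to the agent that performs it. The sketch below is a hypothetical example in TypeScript; the action names are illustrative stand-ins and not an exact reproduction of the essential interactions listed above.

```typescript
// Illustrative sketch only: each action in the sequence can be assigned to
// either agent when designing the interaction sequence. Names are hypothetical.

type Agent = 'user' | 'robot';

interface InteractionSequenceDesign {
  locate: Agent;        // the user finds the robot, or the robot signals/approaches
  openLid: Agent;       // opened manually by the user or automatically by the robot
  retrieveGoods: Agent;
  closeLid: Agent;
  endSequence: Agent;   // can be combined with closing the lid
}

// One possible assignment, roughly matching the design proposal in this thesis:
const proposal: InteractionSequenceDesign = {
  locate: 'user',
  openLid: 'user',
  retrieveGoods: 'user',
  closeLid: 'user',
  endSequence: 'robot', // HUGO treats the closed lid as the end and leaves
};
```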
When asked about their expectations of using an autonomous delivery service during user testing, multiple participants had concerns about the efficiency of the service and often compared it to traditional delivery services. Similarly, the experience and customer service that a human provides in traditional delivery services was also sometimes mentioned as desired when interacting with the autonomous agent. These expectations raise a discussion about the level of automation that could be implemented in the web app design and when it should be used. In the context of this thesis, the level of automation and human control in Shneiderman’s (2020a, 2020b, 2022) framework is, as stated in 6.4.1.1, modified and refers to which of the agents are active and inactive in an interaction. Some users expect a high level of customer service, which could imply that they want to minimise the number of manual tasks required of them, as that would be more like what traditional delivery services by humans provide. But despite the expressed need for efficiency and service, the users presented different preferences regarding the level and number of manual tasks they needed to perform: some wanted more manual steps in the app and others preferred few to none. This indicates that some users desired more control when interacting with the agent, while others desired a greater extent of autonomy from the agent.
While the users and their preferences are important when doing Human-Centered Design, Norman (2013; 2005) presents the drawbacks of focusing on individual persons and therefore states that designers should design for the activity. The responses from the user tests show that this applies to the thesis case as well: designing strictly for either one of the preferences will always leave some users unsatisfied. Thus, designing around the user’s task was chosen, as that avoids designing specifically to please a subset of the users’ wants and needs. It instead focuses on solving the design for the essential tasks that need to be completed by the user, which Norman (2013) argues users are more willing to learn than things that are not essential for an activity. Therefore, the essential tasks identified in 7.2 are focus points in designing the interaction sequence.
Moreover, the non-essential tasks are still present in the interaction sequence; however, by leveraging the autonomous agent to perform these tasks, the sequence becomes less demanding on the user, making the service potentially more efficient as well as easier to use for first-time users.
Thus, the argument can be made that the autonomous agent should be active and thereby work proactively, as often as it can be deemed suitable, to support the user in completing the essential tasks. This not only works towards Shneiderman’s (2020a, 2020b, 2022) goal of enhancing humans by removing additional tasks that would otherwise need to be performed by them; it is also supported by his philosophy that humans and AI should work together with a high level of automation in combination with a high level of human control, which according to him is desirable in HCAI design to achieve reliable, safe, and trustworthy systems. In this thesis, this is mainly implemented in the design by automatically updating the state of the web app based on the user’s interactions with the autonomous agent, where the state of the web app refers to what is presented on the screen. An example presented in 6.6 illustrating this implementation is that when the user opens the lid on the agent, the instructions presented in the web app automatically update, displaying the information relevant to the next step. The opposite would be to require the user to manually confirm for the web app to move to the next step. Additionally, introducing automation in updating the state of the web app created a clear and intuitive feedback system for the user when performing interactions on the autonomous agent, whilst at the same time establishing a connection between the physical and digital interfaces, clearly indicating the effects on one when interacting with the other.
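A minimal sketch of this automatic state update is given below, assuming that the robot pushes its status changes to the web app, for instance over a WebSocket. The endpoint, message format and function names are assumptions made for illustration only; the prototype in this thesis was never connected to the robot.

```typescript
// Sketch of automatically updating the web app state from the robot's physical
// events, assuming status changes are pushed to the browser. The endpoint and
// message format are assumptions; the Figma prototype was not connected to HUGO.

type RobotEvent = 'lid-opened' | 'lid-closed' | 'driving-started';

declare function showFrame(frame: string): void; // rendering is left out of the sketch

function listenToRobot(onEvent: (e: RobotEvent) => void): void {
  const socket = new WebSocket('wss://example.invalid/hugo/status'); // assumed endpoint
  socket.onmessage = (msg) => onEvent(JSON.parse(msg.data) as RobotEvent);
}

// Physical interactions advance the screen without any tap in the app,
// which is the feedback link between the physical and digital interfaces.
listenToRobot((event) => {
  if (event === 'lid-opened') showFrame('take-goods');   // frame 4 -> frame 5
  if (event === 'lid-closed') showFrame('departing');    // frame 6 -> frame 7
  if (event === 'driving-started') showFrame('done');    // frame 7 -> frame 8
});
```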
Despite this automation in the interaction sequence, there are still requirements for manual confirmation by the user, both that they are at the autonomous agent and that they have retrieved their delivery. This gives the user control over the most critical points in the interaction, the start and the end of the interaction sequence, as found in 7.2. With the user confirming these interactions, the more critical actions by the autonomous agent, such as unlocking the lid and leaving once the user has closed the lid, can be performed automatically. This ensures that the control is still in the user’s hands whilst allowing the autonomous agent to be active at the same time. This design rationale is supported both by the eight golden rules by Shneiderman (Shneiderman, n.d.) and by the HCAI literature (Google PAIR, 2021; Riedl, 2019; Shneiderman, 2020a, 2020b, 2022; Xu et al., 2021).
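The following sketch illustrates this gating pattern: the critical automated actions, unlocking and departure, are only triggered once the user has given the corresponding manual confirmation. The robot API, the delay value and all names are assumptions made for illustration only.

```typescript
// Sketch of user confirmations gating the critical automated actions.
// The robot API and the departure delay below are assumptions for illustration.

interface RobotApi {
  unlockLid(): Promise<void>;
  scheduleDeparture(delaySeconds: number): Promise<void>;
  cancelDeparture(): Promise<void>;
}

class DeliveryController {
  private goodsTakenConfirmed = false;

  constructor(private robot: RobotApi) {}

  // Start of the sequence: the lid is only unlocked after the user manually
  // confirms that they are standing at the robot (frame 3 -> frame 4).
  async confirmAtRobot(): Promise<void> {
    await this.robot.unlockLid();
  }

  // End of the sequence: the user confirms that they have taken their goods
  // (frame 5), so closing the lid can safely be treated as delivery completed.
  confirmGoodsTaken(): void {
    this.goodsTakenConfirmed = true;
  }

  // Closing the lid only triggers departure if retrieval was confirmed;
  // the countdown gives the user time to react, e.g. by reopening the lid.
  async onLidClosed(): Promise<void> {
    if (this.goodsTakenConfirmed) {
      await this.robot.scheduleDeparture(30); // delay in seconds is an assumption
    }
  }

  async onLidReopened(): Promise<void> {
    this.goodsTakenConfirmed = false; // back to frame 5, confirmation needed again
    await this.robot.cancelDeparture();
  }
}
```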
When designing for collaboration between a user and an autonomous agent, conflicts will inevitably arise that cause failures. According to Riedl (2019), autonomous agents will frequently make mistakes, cause failures, violate the user’s expectations or simply do actions that confuse them. Riedl further explains that when the autonomous agent defies the user’s expectations or confuses them, the action can still be accurate given the situation, but the user perceives it as a failure. This further motivates the importance of providing users with information and calibrating their expectations and mental model.
Nonetheless, some failures can be addressed by implementing prevention measures in the design of the product or system to stop the failures from occurring. Implementing measures and fail-safes in the design can, however, add unwanted and unnecessary complexity to the system, leading to development costs as well as potentially affecting the user experience negatively. At the same time, the severity of a failure’s effect on the user experience can vary: some have a critical negative effect whereas others are merely annoying to the user. Similarly, the probability of a failure occurring is also worth discussing. In this thesis, the scenario of the autonomous agent leaving when a user has misplaced or forgotten one of their items in the transport compartment was discussed. Even though that would have a negative effect on the user experience, the probability of it happening was deemed too low to be worth implementing a fail-safe for; instead, the customer support service would handle that situation. Thus, the cost of implementing prevention measures and fail-safes in the design can sometimes outweigh the cost of the consequences of the failure, implying that the risk and consequences of some failures are worth accepting over complex and costly solutions.
One example of costly complexity found in this design process was the scenario in which the user forgets to close the lid and leaves the autonomous agent. Despite being informed to confirm retrieval of the delivery and to close the lid afterwards, the user could still forget to do so after taking their package. Solutions were suggested, such as sensors that can determine when the user has retrieved their package and motorised pistons that then automatically close the lid. But due to the complexity of implementing such a solution and the estimated probability of the failure, it was deemed less costly to accept the risk and to inform the user that they forgot to close the lid, or to handle the situation manually through a technician. Yet another example, though more technical, is when the web app is unable to communicate with the autonomous agent; the error can have any of multiple causes and can even originate on the user’s side, such as a lost internet connection. Despite that, testing error scenarios in the user tests showed
that users are prone to blame the service for not working even when the problem is on their side. This again refers back to Riedl’s (2019) statement on users perceiving failure despite correct behaviour from the AI. Thus, in situations where the same error can be caused by many different things, it could be argued that trying to prevent failure is difficult, and addressing the failure after it occurs is therefore a more appropriate approach. Providing the user with information about the error to help guide them to the appropriate action is one such example presented by Riedl (2019).
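As a small sketch of this approach, the error handler below explains what may have gone wrong and what the user can do, instead of attempting to prevent every possible cause. The function names and the message wording are illustrative assumptions, not part of the actual design proposal.

```typescript
// Sketch of addressing a failure after it occurs: when the web app cannot reach
// the robot, the cause is often unknown (it may even be the user's own lost
// connection), so the app explains the error and guides the user.

declare function showMessage(text: string): void; // rendering is left out of the sketch

async function unlockWithGuidance(robot: { unlockLid(): Promise<void> }): Promise<void> {
  try {
    await robot.unlockLid();
  } catch {
    // The same error can have many causes; say what happened, what to try,
    // and where to get help, rather than only showing a generic error.
    showMessage(
      'HUGO could not be unlocked. This can happen if your phone has lost its internet ' +
      'connection. Check your connection and try again, or contact the help centre from the menu.'
    );
  }
}
```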
To summarise the above discussion, the following are suggestions for how HCAI could be applied when designing a user interface for a food delivery service with an autonomous delivery robot.
• Where suitable, designers should strive to design for collaborations where the user and the autonomous agent are both active and work together towards a common goal: the user focuses on the essential tasks, while the autonomous agent supports and enhances the user by automating the non-essential tasks.
• Setting the right expectations for the users is highlighted in the HCAI literature as important, and the development phase showed this to be especially true for a new and novel product like the one examined in this thesis. Thus, it can be argued that designers should present users with information upfront to tune their expectations and mental model.
• To ensure that users always stay in control whilst the autonomous agent is active at the same time, designs should require manual confirmation before automated actions with critical consequences are performed.
Future research
The thesis work has resulted in some interesting discussion points regarding the design of interactions for the HUGO delivery service, and from these conclusions there are areas that could be worth researching further in the future.
• This thesis has focused on the interaction design and the user experience of the service through the web app. This means that even though some thought has gone into the UI elements of the web app, they have not been a major focus. Developing the UI elements by researching how to present information and communicate with the user for a novel interaction like this could be interesting.
• The ACD/HCAI method used in the thesis, presented in chapter 6.4.1.1, produced design possibilities and helped identify failures. The method can be of help to designers working with design for AI interaction, but it is an experimental and not yet fully specified method. By developing, specifying, and evaluating the ACD/HCAI method, it could become an even better addition for designers working with autonomous agents.
Company suggestions
These are suggestions directed at HUGO delivery. They are outside the scope of the report but are still possible improvements, based on the findings in the thesis.
• During the thesis work, ideas arose regarding how physical features and functions could be improved.
o Many of the users assumed or wished that the lid would open and close
automatically, possibly because a lot of other actions done by the HUGO
robot happened automatically, which could suggest that this would be a
feature worth implementing into the robot’s design to enhance the
experience for the user.
o In the last test, using the actual HUGO robot, users pointed out that the lid was missing a handle. This made it harder to understand where to open it and to lift the lid. A clearly visible handle on the lid could be a helpful addition to the design.
• The design proposal in 6.6 is an example of how a web app for this kind of interaction
could function but should not be seen as the only way to design it. There are other
ways to incorporate the essential interactions into a web app design and as the
HUGO delivery robot is developed further, new functions or interaction sequences
might be needed, requiring a redesign of the design proposal. Technical limitations
might also apply when building the web app that could affect the design.
8. Conclusion
The use of autonomous agents is an ever-growing possibility in our day-to-day life and, in
some cases, already a reality. One future use might be autonomous robots performing last
mile deliveries, a service the company HUGO delivery is currently developing. The goal of
developing their autonomous delivery robot HUGO is to reduce the emissions from
deliveries in the last mile by replacing delivery trucks with emission free autonomous
robots. However, this new way of receiving deliveries introduces new design challenges
since most people have little to no prior experience of interacting with autonomous agents.
The user interface is therefore of great importance in making the user understand and be
able to interact comfortably with the autonomous agent, thus also a key aspect in reaching
user adoption.
The following interaction sequences were found during the thesis work; they specify the essential interactions in the case of an autonomous food delivery robot.
As an answer to RQ1, ‘What interaction sequences are essential for end users in the case of interacting with an autonomous delivery robot through a phone interface?’, the specified interaction sequences are:
These interaction sequences were found when analysing the business cases given by the company and partly when researching the flow of other delivery services in the future analysis. It was also confirmed during design and testing that these actions were important and necessary to reach the end goal of the user, and that other tasks could be automated. The start and end of the interaction were especially interesting since they signal control being moved to or from the user or the autonomous agent. They proved to be important in the design and should be handled with extra care when designing products involving AI.
In exploring how to apply HCAI principles when developing the phone interface for the service, multiple findings were made regarding important points to keep in mind when applying HCAI to designing interactions with autonomous agents. These findings are:
5. Weigh the cost of preventing conflict against the consequences and risk of failures
occurring.
These recommendations are based on our findings and observations made when applying HCAI in designing the user interface for the service. Whilst produced for this specific type of case, the hope is for them to be generally applicable to other cases involving a user and an autonomous agent.
Autonomous agents are becoming an ever-growing part of our everyday lives and we believe
that Human-Centered AI will play an important role in helping designers create the future of
autonomous systems, with a focus on the human experience. There is still new knowledge
to be found within this area and hopefully this thesis can, in some way, contribute to new
research and inspire more people to learn about it.
References
1. Arnowitz, J., Arent, M., & Berger, N. (2007). Effective prototyping for software makers. Elsevier
Morgan Kaufmann.
2. Arvola, M. (2010). Interaction Designers’ Conceptions of Design Quality for Interactive
Artifacts. 9.
3. ASDA. (n.d.). Company Facts. Corporate - ASDA. Retrieved 5 April 2022, from
https://corporate.asda.com/our-story/company-facts
4. Benyon, D. (2019). Designing user experience: a guide to HCI, UX and interaction design
(Fourth edition). Pearson.
5. Borealis. (n.d.). Anläggningar i Sverige - Borealis i Sverige - Stenungsund - Borealis.
Borealisgroup (en-GB). Retrieved 5 April 2022, from
https://www.borealisgroup.com/stenungsund/borealis-i-sverige/anl%C3%A4ggningar-i-
sverige
6. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in
Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
7. Buchenau, M., & Suri, J. F. (2000). Experience prototyping. Proceedings of the Conference on
Designing Interactive Systems Processes, Practices, Methods, and Techniques - DIS ’00, 424–
433. https://doi.org/10.1145/347642.347802
8. Chapin, N. (2003). Flowchart. In Encyclopedia of Computer Science (pp. 714–716). John Wiley
and Sons Ltd.
9. Cooper, A., Reimann, R., Cronin, D., & Cooper, A. (2014). About face: the essentials of
interaction design (Fourth edition). John Wiley and Sons.
10. Delft University of Technology. (2020). Delft design guide: perspectives, models, approaches,
methods (A. van Boeijen, J. Daalhuizen, & J. Zijlstra, Eds.; Revised edition). BIS Publishers.
11. Dolan, S. (2022, January 11). The challenges of last mile delivery logistics and the tech
solutions cutting costs in the final mile. Business Insider.
https://www.businessinsider.com/last-mile-delivery-shipping-explained
12. Domino’s Pizza. (2021). 2021 Annual Report [Annual Report].
13. Doncieux, S., Chatila, R., Straube, S., & Kirchner, F. (2022). Human-centered AI and robotics.
AI Perspectives, 4(1), 1. https://doi.org/10.1186/s42467-021-00014-x
14. Frayling, C. & Royal College of Art. (1993). Research in art and design. Royal College of Art.
15. Gibbons, S. (2017, August 27). Service Blueprints: Definition. Nielsen Norman Group.
https://www.nngroup.com/articles/service-blueprints-definition/
16. Google PAIR. (2021, May 18). People + AI Guidebook. https://pair.withgoogle.com/guidebook
17. Hallnäs, L., & Redström, J. (2006). Interaction design foundations, experiments. University
College of Borås. The Swedish School of Textiles. The Textile Research Centre.
18. HUGO Delivery AB. (n.d.). Last mile delivery. Last Mile - Autonomy. Retrieved 23 February
2022, from https://hugodelivery.com/
19. Iacucci, G., Iacucci, C., & Kuutti, K. (2002). Imagining and experiencing in design, the role of
https://www.cs.umd.edu/~ben/goldenrules.html
39. Shneiderman, B. (2020a). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS
Transactions on Human-Computer Interaction, 109–124.
https://doi.org/10.17705/1thci.00131
40. Shneiderman, B. (2020b). Human-Centered Artificial Intelligence: Reliable, Safe &
Trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
https://doi.org/10.1080/10447318.2020.1741118
41. Shneiderman, B. (2022). Human-centered AI. Oxford University Press.
42. Stickdorn, M., Hormess, M., Lawrence, A., & Schneider, J. (Eds.). (2018). This is service design
doing (First edition). O’Reilly.
43. Wikberg Nilsson, Å., Ericson, Å., & Törlind, P. (2015). Design: process och metod.
Studentlitteratur.
44. Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2021). Transitioning to human interaction with AI
systems: New challenges and opportunities for HCI professionals to enable human-centered
AI. https://doi.org/10.48550/ARXIV.2105.05424
45. Zimmerman, J., & Forlizzi, J. (2014). Research Through Design in HCI. In J. S. Olson & W. A.
Kellogg (Eds.), Ways of Knowing in HCI (pp. 167–189). Springer New York.
https://doi.org/10.1007/978-1-4939-0378-8_8
46. Zimmerman, J., Forlizzi, J., & Evenson, S. (2007). Research through design as a method for
interaction design research in HCI. Proceedings of the SIGCHI Conference on Human Factors
in Computing Systems, 493–502. https://doi.org/10.1145/1240624.1240704
Appendix A
Structure and questions of test 1
Initial Questions
1. Name of test user (anonymous in report)
2. Age
3. What is your estimated level of experience with IT and technology? 1-5
Vad är din nivå på teknikvana, enligt dig själv?
1- Väldigt ovan och känns jobbigt när jag ska interagera med digitala system.
5- Väldigt van och har inga problem med att interagera med nya digitala system.
4. What are the first thoughts and feelings that come to mind when you imagine what
it's like to use an autonomous delivery robot?
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?
• When testing people without knowledge of HUGO, show a picture of the robot and explain the flow of the service.
The main task is to get your package from the robot by using the app.
If they need guidance, use the sentences within the parentheses.
1. You have just ordered from the restaurant and received a text with a link. You press
the link and find yourself on this page.
(Can you find your delivery information?)
2. Some time passes while you wait for your delivery; you receive a new text with a link
and open it. This is the page you land on.
(What do you want to do in this step to complete your end goal?)
• I think that I would need the support of a technical person to be able to use this
system.
Jag tror att jag hade behövt hjälp av en teknisk person eller liknande för att kunna
använda det här systemet.
• I would imagine that most people would learn to use this system very quickly.
Jag tror att de flesta hade lärt sig använda det här systemet väldigt snabbt
• I needed to learn a lot of things before I could get going with this system.
Jag behövde lära mig en hel del innan jag kunde börja använda systemet
Open questions
1. How did you feel during the interaction? What types of thoughts came to mind?
Vilka känslor kom till dig under interaktionen? Vilka tankar dök upp?
2. Was there anything specific you reacted to in the app, both negative and positive?
Var det något specifikt du reagerade på i appen, både negativt och positivt?
3. Was there anything missing for you to be able to complete the tasks?
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?
4. What did you think about opening the box through the phone screen? Would you
have liked to do it in another way?
Vad tyckte du om att öppna en låda via telefonen? Hade du velat göra det på ett annat sätt?
Appendix B
Protocol from test 1
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leveransrobot?
• Bör inte märkas helst. I bästa fall är det en snabbare/flexiblare leverans, vill knappt
komma ihåg vad den heter för att leveransen var så snabb. Känslor av
ifrågasättande, kommer det funka? säkerhet? Hur gör jag om det går fel?
Osäkerhet.
• Bekvämlighet, bekvämt för att man får paketet närmar sig än ett ombud. Till
dörren/porten är första tanken. Kan öppna upp möjligheter för snabba leveranser.
• En viss oro till stöld av paket. Känns som att hela roboten kan bli stulen. Lite orolig
för generationer/mindre teknikvana användare som inte är lika vana att använda
teknik.
• Man har ju ingen direkt referens till hur den ser ut men man fattar ju konceptet,
Taggad att få se och testa teknologi, nyfiken på hur den funkar.
• Lite spännande, tror också jag skulle ha höga förväntningar att den skulle funka
smidigare än typ instabox, annars känns det inte värt. Det ska kännas intuitivt att
använda
• Känns jävligt läskigt, känns som att det inte kommer funka. Varför ska man ha
robotar till allt. Ganska onödigt.
• Tänker på jobbet ( han jobbar med det), tänker på starship, coolt företag. Har sett
dem användas på riktigt. Känns coolt att få använda. Imponerad. Sugen på att testa
gränserna
Vilka känslor kom till dig under interaktionen? Vilka tankar dök upp?
• Flow 7 var en bergodalbana, var inte uppenbart att den skulle hoppa vidare. Trodde
det var knappar men verkar inte vara det. Inte uppenbart att man inte kan
interagera med den. Schysst att ha en sak att göra per skärm, när man är klar med
den är det lätt att fatta nästa steg. Känns som det finns inbyggd multitasking i det
hela, mycket saker att göra på lås och avslutningsskärmar. Minimerat antal
interaktioner per skärmbild.
• Kändes rätt enkelt i alla flöden, nice, lätt att klicka igenom. HUGO färgen är ful.
• De flesta kändes enkla att använda. kommer min telefon bli full av notiser om jag
använder den? Många som slåss om uppmärksamhet på skärmen. Hur agerar
roboten egentligen med reaktionstid och avstånd? Vad händer om jag låser upp
den för tidigt? Vad händer när jag låser den? Åker roboten iväg? Kan upplevas som
lagg om det inte ger direkt feedback. Måste den köra hela vägen hem till mig eller
kan jag möta HUGO?
Var det något specifikt du reagerade på i appen, både negativt och positivt?
• Funderar på hjälpknappar, kopplar till oro, skönt att ha på alla ställen. Vill gärna ha
en viss närhet till en faktisk person via dem, typ att kunna få tag på dem om något
går fel. Nice med tydligt gröna signalerade hjälpknappar. Finns ingen undo knapp,
kanske vore bra, undvika feltryckpaniken.
• Konstigt att kunna ändra info mitt under körning, kanske inte en bra grej. Kanske
borde bekräfta adressen innan den kör. Om det är för långt att köra vad sker då?
Behövs antagligen steg för det också. Gillade progressbaren, Flöden med dem
kändes mest nice. Slides är kul.
• Bilder på stänga och öppna var bra och tydligt. Positivt att man kan följa på kartan
och få tidsangivelse, bra för att kunna anpassa sig. Negativt, kanske onödigt att visa
all kontaktinfo på första sidan. borde kunna fällas ihop eller så.
• Varningsdelen i slutet var nice men kan vara en confirm interaktion ist kanske.
Läskigt med varningar potentiellt. Slide är bra för man kommer inte åt den hur som
helst, mindre risk för felklick. Hugo färgen med vitt är trevligt och välkomnande.
Svart text syns bra så borde nog användas för att ge ordentlig kontrast. Tycker om
att man ser stegen i interaktionen, spelar ingen roll om de är horisontella eller
vertikala bara de är där, känns betryggande. Man tycker om att känna att man har
kontroll som användare
• Nej knappen för att bekräfta att HUGO är vid en är onödig. Är det skillnad på att få
sms och en app? Kanske lättare med app? App blir mer streamlineat. Gillade skicka
iväg knapp som är en slide. Skön känsla med slide. Flow 7, kan man trycka på
stegen innan de ska användas eller är de låsta då? Känns konstigt om man inte kan
det. Inte supertydligt att de är knappar iof. Är de knappar?
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?
No user said that a step was missing for them to be able to complete the task.
Vad tyckte du om att öppna en låda via telefonen? Hade du velat göra det på ett annat
sätt?
• Vill helst inte hålla i telefonen när man öppnat lådan. Vore skönt att ha interaktion
på lådan, kanske QR, kanske keypad. Vore coolt om Hugo kan känna av närheten av
telefonen.
• Jag vill öppna den via telefonen. Att låsa roboten kanske inte behöver göras via
telefonen. Kanske roboten skulle kunna öppna sig vid sin destination, då behöver
man ingen telefon. Blipp förknippas med betalningen, känns inte nice. Kanske bra
med QR för att försäkra oss om att kunden är vid roboten.
• Telefon kändes okej, så länge jag kan se på lådan att det är min Hugo så jag vet
vilken som är min.
• Tycker om att man bara använder telefonen. Känns tryggt att ha alla steg i
telefonen. Robotar kan vara lite läskiga att ha att göra med. Ska vara många steg så
man vet vad som försiggår
• Vill nog inte bara ha telefonen, vore nice att bekräfta att man är nära. Vill nog inte
ha körkorts-lösning på lådan. Blipp hade varit coolt, löst många problem, känns
smooth. Scanna QR är också ganska smooth. Knappsats känns som det kan bli
sunkigt iom att alla ska ta på det.
• Nice med progressbar uppe på skärmen, Nice med bilder som förklarar, bör inte
vara för mycket text. Mer förklarande bilder. Beror nog mycket på hur roboten
agerar för om saker känns smidigt. Förklaringsknappar kanske inte behöver vara i
mitten. Kanske vore bra om användaren kan bestämma placeringen av HUGOs
leverans på en karta med en markör. Jag kanske är lite biased iom att jag jobbar
med det.
Appendix C
Structure of test 2
Area of testing
• interaction with robot
• User understanding robot’s signals
• Opening box
• Understanding when in control or not
• Information flow from app to human
Hypothesis:
The user wants to both interact digitally and physically with the robot as well as receive
digital and physical feedback when interacting.
Initial Questions
Name of test user (anonymous in report)
Age
What is your estimated level of experience with IT and technology? 1-5
Vad är din nivå på teknikvana, enligt dig själv?
1- Väldigt ovan och känns jobbigt när jag ska interagera med digitala system.
5- Väldigt van och har inga problem med att interagera med nya digitala system.
What are the first thoughts and feelings that come to mind when you imagine what it's like
to use an autonomous delivery robot?
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?
1. Normal scenario
The web app and service works as intended
2. Fail-scenarios
a. The wrong address is listed as delivery address + The user can’t open the lock
due to bad connection.
b. The user can’t see Hugo when they go out to meet it + The user forgets to close
the lid.
Scenario test 1.
Normal test where the participant follows the planned interaction for the app and robot.
• The user starts with answering the initial questions and then gets the scenario
explained to them.
• Scenario: You have called the restaurant and ordered to have your food delivered
with HUGO to your address. You want to acquire your food and complete the delivery
with HUGO to finish the test.
• The user will receive a text with a link to the Figma prototype. The user takes out the phone and looks at the Figma prototype with the first screen.
• The next step is starting the interaction with the robot by finding it and completing the delivery. The user receives a new text with a Figma prototype link.
• The user can be directed to HUGO by the facilitator of the test since actual GPS
tracking is not available.
• The user starts the next interaction with HUGO where the goal is to receive the
package and end the interaction so that HUGO can leave.
• The test is finished, and an open discussion is held with the user to find out more about how they experienced the robot and whether they have any ideas of their own on how to improve the experience.
Scenario test 2.
A test where the user experiences errors when executing the planned interaction with the app and robot. The user will not test the whole flow but will instead be given the fault scenario directly. For example, the user tests the address-change scenario and is finished with that specific test scenario after they have changed the address; they do not need to complete the flow fully.
The user starts the scenario and tests the following errors one at a time. The facilitator will make sure that the user is presented with the right screens and an explanation of what they will be doing.
• The app states the wrong address; you wish to change it
• The box does not unlock due to bad connection
Restart scenario.
• You can't see HUGO and wonder where it has parked
• You forget to close the lid and walk away from HUGO
The test is finished, and an open discussion is held with the user to find out more about how they experienced the robot and whether they have any ideas of their own on how to improve the experience.
Scenario 1:
Did you feel like you could understand the robot's intentions and signals?
Kändes det som du kunde förstå robotens signaler och avsikt?
How did it feel and what type of thoughts came to you when interacting with the robot?
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?
Was there anything specific you reacted to during the test, both negative and positive?
Var det något specifikt du reagerade på i appen, både negativt och positivt?
Was there anything missing for you to be able to complete the tasks?
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?
Scenario 2:
How did it feel and what type of thoughts came to you when interacting with the robot?
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?
Appendix D
Protocol from user test 2
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?
• Spännande men också långsamt, tänker att autonomt går långsamt, tänker på de
autonoma bussarna som rullar på campus. Men ändå spännande.
• Det känns lite onödigt. Hemleveranser i allmänhet är onödiga och som han inte känner är
nödvändigt. Onödigt med utkörning från leverans utlämning till dörr. Foodora, använder
inte, tar lång tid och kan bli kallt, men finns ändå ett annat syfte med det. Hellre att inte
behöva kommunicera med någon, så länge det är lika snabbt så är det skönt att slippa den
interaktionen med budbärare.
• Förväntar sig att det inte borde vara mindre smidigt än vad det är med Fodoora idag, man
ska kunna följa vart den är och så. Det ska vara smidigt att öppna den interagera med den.
Lika smidigt eller smidigare än att använda fodoora. Man ska inte behöva fundera över
något i processen.
• Det finns stor potential för stöld. Smidigt, personal effektivisering. ‘Cool grej’
• Spännande. Lite oklart hur det ska fungera, hur han ska få sin mat och all logistik bakom
det. Futuristiskt men också många frågetecken hur det ska fungera.
• Märkte inte de så tydligt lamporna, men bra att spela upp ljuden.
• Ljud och ljus är bra att ha på HUGO. Hade tolkat röd som att gå inte nära eller interagera
med. Blå exempelvis är neutralt, skulle man kunna använda. Ta inspiration från hur bilar
med ljus.
• Lite oklart om den öppnar sig själv eller inte, man vill inte pilla för mycket på den eftersom
det är en robot.
• Ja, det kändes bra.
• Svårt att säga hur det är i vanliga fall. Ska blinka när man trycker på signalknappen. låter
bra men svårt att avgöra när det inte är med i testet
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?
• Inga problem med roboten, kändes enkelt. Inte läskigt eller komplicerat, straight forward.
Behövs ingen display, tyckte det var skönt att inte behöva interagera med en display på
HUGO.
• Kände sig låst i telefonen, behövde lägga mycket tid på att läsa och interagera med appen.
Hade hellre velat lägga mindre tid i appen och mer på att interagera direkt med HUGO.
Automatiskt låsa och låsa upp HUGO. Inte användarens uppgift att låsa och skicka iväg
HUGO, vill att HUGO ska göra det själv. Känner att användaren är klar när hen har tagit sina
varor. Men tycker att man ska ha kontrollen att kunna stoppa HUGO om han åker iväg. När
man stänger locket så är det bekräftelsen på att man är klar. Skulle kunna finnas en knapp
i appen för att avbryta eller meddela service folket att något inte stämmer.
• Kändes bra, smidigt. Man ifrågasätter varför saker fungerar på vissa sätt. Exempelvis hur
fungerar säkerheten och det känns som att det finns mycket som kan gå fel. Känner en viss
oro för att det inte alltid kommer att fungera. Lite oklart hur vissa saker fungerar, vad som
händer när man klickar skicka iväg, om locket öppnas av sig själv.
• Flödet känns rimligt, gillar att det är steg för steg. Vill nästan ha flera steg, lås upp -> öppna
-> osv.
• Överlag väldigt bra, appen bra gränssnitt och intuitiv. Roboten känns väldigt prototyping när
det är en kartong, svårt att säga hur det skulle vara på riktigt. Man får inte så mycket support
av appen, man får göra mycket själv som användare, känns mer som instabox. Van att får
att får mer service, likt när man får leverans av människor. känns ovant men inte
nödvändigtvis ett problem skönt så länge det fungerar, mindre människokontakt är skönt.
Var det något specifikt du reagerade på i appen, både negativt och positivt?
• Otydligt om man ska låsa upp i appen eller om det ska ske fysiskt, förslag att separera en
sida för att låsa upp och en som berättar att öppna lådan. Skriva om från ‘HUGO är öppen’
till ‘HUGO är upplåst’. Tryck kan vara lite förvirrande. Info om att HUGO lämnar var tydligt
och bra. Bra att information om hur leveransen går till kommer två gånger. Ha korta
meningar i informationen är bra, men texten som finns är i bra längd.
• Många steg, kände att många steg var i appen som skulle kunna vara fysisk. Exempelvis
hur man låser upp HUGO, ju mer man interagerar direkt med HUGO ju närmre känner man
sig den.Lång text, mycket att läsa i första meddelandet. Flytande text, vill veta vad man ska
göra tydligt och kort.
• Dra för ‘att lås upp’ istället för ‘att öppna’ allmänt mycket information i varje steg. En aning
övertydligt i varje steg. Förvirrande i stegen, där man ska låsa upp, slida, öppna, många steg
som skulle kunna slås ihop även att hur information formuleras kan vara förvirrande. Kan
skippa lås och lås upp stegen. låsa upp och låsa sig automatiskt. Minska stegen för låsa
och öppna/stänga med andra ord.
• Otydligt med slidern som säger dra för att öppna, medan det står ovan lås upp och öppna
lådan. otydligt om det är användaren som låser upp locket. Dubbla instruktioner. Om något
ska hända av en handling så borde det vara på separat sidor.
• Skönt att det finns hjälpcenter, att man kan få hjälp om man behöver. Känns som att man
ska kunna trycka på alla rutor, dvs exempelvis den som visar tiden och bilden på HUGO, vet
inte vad det skulle göra men kändes som att det skulle hända något. Skönt med drop-down
menyer. Hade föredragit att det hade varit med ikoner också i info drop-downen. Gillade att
infomenyn flyttade med till andra sidan. Kartan är trevlig, hade föredragit att man kan se
sig själv på kartan också. Otydligt om ‘dra för att öppna’ kommer öppna locket automatiskt
eller om man låser upp. Upplever att man är kvar på samma steg även om stegen byter, de
är lika. Det är lagomt antal steg men föredra hellre färre än flera. Om något steg kändes
onödigt så var det steg 2 dvs har du tagit dina varor (notera: avslutningssteget var också
onödigt enligt användaren) Färgerna var fina
Var det något specifikt du reagerade på när du interagerade med roboten, både
negativt och positivt?
• Vilken färg som används som signal är viktigt för att visa på avsikt. En lampa där man ska
trycka på HUGO, tydligt vart man ska ta tag på locket.
• Det var enkelt att interagera med. Otydligt vem som ska stänga locket, användaren eller
HUGO själv
• Det förklarades i appen att man skulle öppna den så han förväntade sig inte att den skulle
öppna sig av sig själv
• Otydligt om locket kommer att öppnas automatiskt eller om man ska öppna manuellt.
Förväntar sig att den skulle kunna ha förmågan att göra det.
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?
Övriga kommentarer?
• Kollar informationen, särskrivning i informationen :) Andra sms:et, öppnar länken innan han
går ut. Tror han ska låsa upp fysiskt på lådan. Förvirring hur han ska låsa upp. Undrar om
han skulle få ett sms efteråt. van vid att postnord skickar ut sms efter hämtad leverans,
inget som behövs men bra att ha om någon annan hämtar paketet åt någon.
• Första sidan: behövde tänka till på första rutan med frågan om adressen är rätt. Andra
sidan: Bilden, ska stämma överens med HUGO som kommer att komma. Osäker om man
ska klicka vidare på något, skulle vilja ha någon feedback på statusen utöver tiden till
leverans.
Den borde förstå att efter en viss tid eller att användare går utanför en radie så ska hugo
kunna åka iväg, men man ska kunna stoppa hugo om den åker iväg. Ljus indikationer
och/eller ljud när den är påväg att åka iväg.
• Känner att lösningen är anpassad för förstagångsanvändare och även där lite för
utvecklande.
Ta bort ansvaret från användaren för att låsa, det är inte användarens incitament att låsa
HUGO när användaren har tagit sina varor.
Anser att det är bekräftelse på att man är klar med att ta sina varor när man har stängt
locket. Men ha möjligheten att låsa upp HUGO igen.
• First SMS:
Clicked in and looked at the information.
Second SMS:
Pressed the sound button.
Thought HUGO would open when it says it is open. Clicked lock before the lid was closed.
Second SMS:
Tested the beep and the light.
Was about to open before unlocking.
Appendix E
Structure of test 3
Area of testing
Final testing to confirm the design choices and see if there are any final design
suggestions.
Material
HUGO box
Computer to take notes
Phone for user
Fake package
Camera
Initial Questions
• Name of test user (anonymous in the report, of course)
• Age
• What is your estimated level of experience with IT and technology? (1-5)
1 - Very inexperienced; it feels uncomfortable when I have to interact with digital systems.
5 - Very experienced; I have no problems interacting with new digital systems.
• What are the first thoughts and feelings that come to mind when you imagine what it is like to use an autonomous delivery robot?
• The user starts by answering the initial questions and then has the scenario explained to them.
• Scenario: You have called the restaurant and ordered to have your food delivered
with HUGO to your address. Your goal is to acquire your food and complete the
delivery with HUGO.
• The user will receive a text with a link to the Figma prototype, either on their own phone or on a phone that is lent to them. The user takes out the phone and looks at the first screen of the Figma prototype.
• The next step is starting the interaction with the robot by finding it and completing the delivery. The user receives a new text with a Figma prototype link that shows them where HUGO is. The user can be directed to HUGO by the test facilitator, since actual GPS tracking is not available.
• The user starts the next interaction with HUGO where the goal is to receive the
package and end the interaction so that HUGO can leave.
• The test is finished and an open discussion is held with the user to find out more about how they experienced the robot and whether they have any ideas of their own on how to improve the experience.
• How did it feel and what kind of thoughts came to mind when interacting with the robot?
• Was there anything specific you reacted to during the test, both negative and positive?
• Was there anything missing for you to be able to complete the tasks?
Appendix F
Protocol from user test 3
What is the first thing you think and feel when you imagine using an autonomous delivery robot?
It should be on time; hopes it will not take too much time. Puts it in relation to what is being ordered. Thinks it should come to me, not to a parcel pick-up point, and should not take as long as bicycle couriers. Transparency about where it is matters more than a stated time that may change. Wants to be able to see where it is.
It can be nice that the food will be ready when I go down to collect it; no need to queue at the restaurant.
Exciting and interesting: how will it work? How will no one else be able to take my things? But mostly exciting.
Curiosity: how does this work, how long will it take, how will it get into the lift? Has seen HUGO now, but would otherwise have wondered how it worked and been curious about it.
Cool; the second thought is whether it is reliable, whether it is safe both for the public and in terms of the security of the products being delivered to me. Is the delivery as reliable as when a human delivers? Sceptical at first.
How did it feel and what thoughts came up when you interacted with the robot?
Would have liked more response from the robot; would have liked some light signalling when it is open/closed and so on. Sound would have been good too.
A bit too much text in the app; trusts that the technology works, so that much description is not needed.
Repetition at the end about HUGO driving off.
A map showing the address when you confirm, so you can see where HUGO thinks you live.
It felt good and simple to get an SMS, and nice not to need an app. HUGO showed up outside the door, which was convenient. Fun, felt new and exciting, and quite easy to click through all the steps. The instructions were very easy to follow. Once HUGO was there it was very straightforward.
Felt very smooth. Will it only deliver one thing at a time, will only my items be inside, or will I have to worry that recipients of other, earlier deliveries will take my items?
Very easy-going, intuitive and clear instructions. Only yes or nothing, not many different choices.
Very positive, simple, smooth. Easy for a non-technical person to follow all the steps. 'Agda, 65, would manage it.'
Interaction-wise with the app: it was easy to understand, the information was visible, and only what needed to be done was shown on the page.
The information presented in the web app was clear, so there was no need to search for information.
Good to show a lot of information; rather too clear than too little.
Not always obvious to the user that you are supposed to close the lid.
Thought it was very good; was unsure how to find it. There were instructions at every step. Very easy to interact with the robot.
Worried about how to interact with HUGO, among other things how firmly to close the lid, but got feedback on when the lid was closed.
Was there anything specific you reacted to during the test, both negative and positive?
Opening HUGO is fine, but closing HUGO becomes harder with the items and the phone in your hands.
Good that you can click through the steps easily.
No verification that it was specifically his HUGO, but did not feel it was needed either.
Should not have to keep the phone out after confirming that the items have been taken; wants to be able to put the phone in the pocket and know that it is done after that.
Depending on where you are, there might be a risk of having to search for HUGO, but there was a map that makes it easier.
When a delivery is made by a person it is their job to find me, but with HUGO it is me who has to find HUGO.
Did not realise the sound button was there to help find HUGO. In the middle of the city it is not certain that you would hear the sound.
Very satisfying sound from the lid when opening and closing it.
Slightly unclear what the 'Jag är klar' ('I am done') button is for.
Interesting that it is a new way of delivering things. Became curious because I am technical and like interacting with technology, so it felt good to interact with.
A negative is that the human factor is excluded and the social part is left out. The personal encounter disappears, which can be both positive and negative.
Is there anything HUGO could do to be more personal?
A pair of happy eyes could be painted on; there are robots in office environments that give a more personal interaction.
A lot of information at once on the second page; you are afraid of missing something. Emphasise what is important to read on that page and separate the text more clearly.
Otherwise thought it was good.
Was anything missing for you to be able to complete the tasks or use the app?
No, but when you press to confirm that you have collected your items you should get an SMS saying that the delivery is complete.
No, does not feel that anything is missing. The task was straightforward. The app was clear and the interactions with the service were clear.
No, thought the instructions were good and divided into the tasks to be done.
Any other comments?
Had trouble holding the phone and the package and closing the lid at the same time.
It was positive, it was fun with a robot, and it was convenient that I did not have to go out or anything like that; HUGO came to me.
Would you use the app again? Yes, on some special occasion, to order food when you do not want to go and buy it yourself. Would never order a bicycle courier here in this context.
Worked well; a convenient way of using the phone to click through. Good as long as it keeps to the delivery time.
First SMS:
Reads the information on the second page.
Is it an app or is it a website?
Second SMS:
Observation: had no problems holding the phone and the delivery. Stayed and watched for HUGO to leave; did not press 'Jag är klar' ('I am done') right away.
First SMS:
Reads the information.
Nice and easy to understand that the app and HUGO share the same colour theme.
Noticed that it was the HUGO green colour.
Thinks that three steps indicating what to do is clear. Things could always be automated further and built more on the technical aspects, for example a weight sensor that senses when the items have been taken. But thinks that the way it is done now is logical and easy to understand for completing the task.
Second SMS:
Put the items on the ground.