
Linköpings universitet/Linköping University | Department of Computer and Information Science

Master Thesis, 30 hp | Human-Centered Systems (HCS)


Spring 2022 | LIU-IDA/LITH-EX-A--22/037--SE

Exploring Human-Centered AI:


Designing The User Interface
for an Autonomous Last Mile
Delivery Robot

Veronica Nedar
Simon Proper

Supervisor: Chu Wanjun


Examiner: Stefan Holmlid
Copyright
The publishers will keep this document online on the Internet – or its possible replacement – for a
period of 25 years starting from the date of publication barring exceptional circumstances.
The online availability of the document implies permanent permission for anyone to read, to
download, or to print out single copies for his/hers own use and to use it unchanged for non-
commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this
permission. All other uses of the document are conditional upon the consent of the copyright owner.
The publisher has taken technical and administrative measures to assure authenticity, security and
accessibility.
According to intellectual property law the author has the right to be mentioned when his/her work is
accessed as described above and to be protected against infringement.
For additional information about the Linköping University Electronic Press and its procedures for
publication and for assurance of document integrity, please refer to its www home page:
https://ep.liu.se/.

© 2022 Veronica Nedar & Simon Proper


This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International
License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/4.0/

Nomenclature and abbreviations
ACD Activity-Centered Design
Agent Any participant within an interaction, like a user or a robot.
App Application, a downloadable program usually for phone or tablet.
Autonomous agent A product or system with AI, such as the robot HUGO
HCAI Human-Centered Artificial Intelligence
HCI Human-Computer Interaction
HUGO The autonomous delivery robot developed by the company
Interaction sequence A set of interactions following each other in a predetermined pattern.
RtD Research through Design
UI User interface
User test In this thesis a combination of user and usability testing was done. The
word ‘user test’ will refer to all tests done with users even though
usability testing is also part of it.
UX User experience
Web app Web application, an application in a browser.

Abstract
The use of autonomous agents is an ever-growing possibility in our day-to-day life and, in some cases,
already a reality. One future use might be autonomous robots performing last mile deliveries, a service
the company HUGO delivery is currently developing. The goal of developing their autonomous delivery
robot HUGO is to reduce the emissions from deliveries in the last mile by replacing delivery trucks with
emission-free autonomous robots. However, this new way of receiving deliveries introduces new design
challenges since most people have little to no prior experience of interacting with autonomous agents.
The user interface is therefore of great importance in making the user understand and be able to
interact comfortably with the autonomous agent, thus also a key aspect in reaching user adoption. This
thesis work examines how an interface for an autonomous food delivery service, such as the HUGO
delivery, could be designed by applying a Human-Centered Artificial Intelligence and Activity-Centered
Design focus in the design process, resulting in a design proposal for a web app. The conclusion of the
thesis includes identification of the six essential interactions present in an autonomous food delivery
service, as well as how HCAI and which of its guidelines can be applied when designing an interface for
the interaction with an autonomous delivery robot.

Keywords: Human-Centered AI, Last-Mile Delivery, Human-Centered Design, Activity-Centered Design, Autonomous Delivery, Autonomous Robot, Artificial Intelligence, UX-design, Interaction Design, User Interface, Research through Design, Prototyping, Service Design

Acknowledgement
We have worked on this thesis project throughout the spring semester of 2022 to explore the concept of Human-Centered AI applied to the interaction design of an autonomous robot. We hope that our work can contribute to future research in the area of Human-Centered AI and interaction design for autonomous agents. We are very grateful and would like to thank Berge and HUGO Delivery for the opportunity to work on this project. We could not have done this without help, and we would therefore like to express our gratitude to those who have been involved in the thesis work.

The HUGO delivery team has given us an exciting project and a lot of help when researching their product. We are grateful that they have chosen to give us their time explaining, user testing and sitting through interviews for our work. We would like to thank our examiner Stefan Holmlid for his feedback and guidance. We also thank our opponents for accompanying us during the project, giving us feedback and support, especially on our report.

Lastly, we would like to extend our gratitude to our supervisor Chu Wanjun who has given us great
support during the thesis work and guided us through all our questions. Thanks to him we were
introduced to the theory of Human-Centered AI and Activity-Centered Design. He helped us shift the
focus of our thesis towards these areas, elevating our work and making it more interesting and
innovative.

Thank you all for the insights, inspiration and help during our thesis.

Linköping, June 2022


Veronica Nedar & Simon Proper

Table of Contents
1. INTRODUCTION ...............................................................................................................................1
1.1 PURPOSE AND GOAL ................................................................................................................................................. 2
1.2 RESEARCH QUESTIONS ............................................................................................................................................. 2
1.3 SCOPE .................................................................................................................................................................... 3
2. THEORETICAL FRAMEWORK ..............................................................................................................4
2.1 INTERACTION DESIGN ............................................................................................................................................... 4
2.2 LAST MILE DELIVERY (LMD) ...................................................................................................................................... 5
2.3 LEAN STARTUP ........................................................................................................................................................ 5
2.4 RESEARCH THROUGH DESIGN (RTD) ........................................................................................................................... 6
2.5 ACTIVITY CENTERED DESIGN (ACD) ............................................................................................................................ 7
2.6 HUMAN-CENTRED ARTIFICIAL INTELLIGENCE (HCAI) ..................................................................................................... 8
2.7 THEORY OF EMPLOYED METHOD ................................................................................................................................ 11
2.7.1 Future analysis ........................................................................................................................................... 11
2.7.2 Prototyping ................................................................................................................................................ 11
2.7.3 Interviews ................................................................................................................................................... 12
2.7.4 Service blueprint ........................................................................................................................................ 13
2.7.5 Flowchart ................................................................................................................................................... 14
2.7.6 User and usability testing .......................................................................................................................... 15
2.7.7 Task analysis .............................................................................................................................................. 17
2.7.8 Wireframing................................................................................................................................................ 18
2.7.9 Bodystorming ............................................................................................................................................. 18
3. PROJECT PROCESS ........................................................................................................................ 19
4. LITERATURE STUDY ....................................................................................................................... 20
5. PRE-STUDY ................................................................................................................................... 21
5.1 METHOD ............................................................................................................................................................... 21
5.1.1 Observations and interviews...................................................................................................................... 21
5.1.2 Future analysis ........................................................................................................................................... 21
5.1.3 Flowchart ................................................................................................................................................... 23
5.1.4 Service blueprint ........................................................................................................................................ 24
5.2 FINDINGS .............................................................................................................................................................. 25
5.2.1 Business cases ............................................................................................................................................ 25
5.2.2 Future analysis ........................................................................................................................................... 30
5.2.3 Service blueprint ........................................................................................................................................ 32
6. DEVELOPING INTERACTIONS FOR THE ERICSSON CASE ..................................................................... 35
6.1 PROTOTYPING ....................................................................................................................................................... 35
6.2 USER TESTING ....................................................................................................................................................... 37
6.3 ITERATION 1 .......................................................................................................................................................... 37
6.3.1 Method ....................................................................................................................................................... 38
6.3.2 Findings ...................................................................................................................................................... 42
6.4 ITERATION 2 .......................................................................................................................................................... 48
6.4.1 Method ....................................................................................................................................................... 48
6.4.2 Findings ...................................................................................................................................................... 55
6.5 ITERATION 3 .......................................................................................................................................................... 65
6.5.1 Method ....................................................................................................................................................... 65
6.5.2 Findings ...................................................................................................................................................... 67
6.6 DESIGN PROPOSAL ................................................................................................................................................. 69
7. DISCUSSION .................................................................................................................................. 72

7.1 METHOD DISCUSSION ............................................................................................................................................. 72
7.2 RESEARCH QUESTION 1 ........................................................................................................................................... 74
7.3 RESEARCH QUESTION 2............................................................................................................................................ 76
7.4 FUTURE RESEARCH & SUGGESTIONS TO THE COMPANY .................................................................................................. 79
8. CONCLUSION ................................................................................................................................ 82
REFERENCES .................................................................................................................................... 84
APPENDIX A...................................................................................................................................... 87
APPENDIX B ..................................................................................................................................... 90
APPENDIX C ..................................................................................................................................... 93
APPENDIX D ..................................................................................................................................... 96
APPENDIX E.....................................................................................................................................100
APPENDIX F .....................................................................................................................................102

Table of figures
Figure 1 The delivery robot HUGO. Copyright 2022 HUGO Delivery AB ....................................................................... 1
Figure 2 The Human-Centered AI framework presented. From Human-Centered AI, by Ben Shneiderman (2022,
Chapter 8: Two-Dimensional HCAI Framework). Reprinted with permission. ............................................................ 9
Figure 3 An example of a service blueprint showing how the categories and actions can be mapped. From Service
Blueprints: Definition by Sarah Gibbons (2017). Reprinted with permission. .......................................................... 14
Figure 4 Example of a flow chart and its components. From Wireflows: A UX Deliverable for Workflows and Apps
by Page Laubheimer (2016). Reprinted with permission. .......................................................................................... 15
Figure 5 An example of a hierarchical task analysis showing how the goal is broken down into tasks and subtasks.
From Task Analysis: Support Users in Achieving Their Goals by Maria Rosala (2020). Reprinted with permission.
...................................................................................................................................................................................... 17
Figure 6 Depicting the thesis five phases of the project process ............................................................................... 19
Figure 7 A mindmap of two web app interfaces for delivery services ....................................................................... 22
Figure 8 A mind map of the inspirational sources with notes of findings and remarks............................................ 23
Figure 9 The building blocks for the flowcharts ......................................................................................................... 24
Figure 10 Flowchart visualisation of the ASDA case ................................................................................................... 26
Figure 11 Flowchart visualisation of the Borealis case .............................................................................................. 27
Figure 12 Flowchart visualisation of the PostNord case ............................................................................................ 28
Figure 13 Flowchart visualisation of the Domino's Pizza case ................................................................ 29
Figure 14 Flowchart visualisation of the Ericsson case .............................................................................................. 29
Figure 15 Moodboard with results from the future analysis ...................................................................................... 31
Figure 16 Service blueprint of how the HUGO delivery service will operate ............................................................. 33
Figure 17 Prototype 1 of user test 1 depicting a flow where a curtain design is used. ............................................. 40
Figure 18 Prototype 2 of user test 1 depicting a flow with a card design and steps of the interaction in the top. .. 40
Figure 19 Prototype 3 of user test 1 depicting a flow with fewer interactions. ......................................................... 41
Figure 20 Prototype 4 in user test 1 depicting a similar flow as prototype 1 but with another design and different
information placement. ............................................................................................................................................... 41
Figure 21 Prototype 5 in user test 1 depicting a similar flow as prototype 3 but with some UI inspiration from
prototype 2. .................................................................................................................................................................. 42
Figure 22 Sketch wireframes from iteration 1 showing a concept involving lock and unlock buttons. .................. 42
Figure 23, Sketch wireframes from iteration 1 showing a concept involving fold out actions ................................ 43

Figure 24, Example of wireframes produced in iteration 1 ........................................................................................ 43
Figure 25 The three different prototypes produced in iteration 1 ............................................................................ 46
Figure 26 Combining ACD with the HCAI framework ................................................................................................. 50
Figure 27 A mock up HUGO robot ............................................................................................................................... 51
Figure 28 A user opening the mock up HUGO robot. ................................................................................................. 52
Figure 29 Web app frame showing the user’s address and an alternative to change it ........................................... 53
Figure 30 Web app frame showing where HUGO is on the map and the button making HUGO play a sound to find
it.................................................................................................................................................................................... 53
Figure 31 Web app frame showing the loss of connection. Loss of connection message is at the bottom of the
frame. ........................................................................................................................................................................... 54
Figure 32 Prototype produced in iteration 2 .............................................................................................................. 54
Figure 33 Visualisation of the task analysis, showing the tasks involved for the Ericsson case .............................. 55
Figure 34 Activity-Centered Design result (zoomable) .............................................................................................. 56
Figure 35 First section of the ACD chart ...................................................................................................................... 58
Figure 36 second section of the ACD chart ................................................................................................................. 60
Figure 37 third section of the ACD chart ..................................................................................................................... 62
Figure 38 Prototype produced in iteration 2 .............................................................................................................. 63
Figure 39 The HUGO delivery robot used in user test 3 ............................................................................................. 66
Figure 40 The tested prototype in user test 3 ............................................................................................................. 66
Figure 41 Examples of prototypes in iteration 3 ........................................................................................................ 67
Figure 42 Frame 1,2, and 3 of the web app design proposal ..................................................................................... 69
Figure 43 Frame 4,5, and 6 of the web app design proposal ..................................................................................... 70
Figure 44 Frame 7 and 8 in the web app design proposal ......................................................................................... 71

Table of tables
Table 1 SUS score results ............................................................................................................................................ 46
Table 2 Design alternatives for the localise operation in the ACD mapping............................................................. 57
Table 3 Design alternatives for the unlocking operation in the ACD mapping ......................................................... 59
Table 4 Design alternatives for the opening of the lid operation in the ACD mapping ............................................ 59
Table 5 Design alternatives for the closing and locking operation in the third sub-section of the ACD mapping.. 61


1. Introduction
In an ever-expanding and globalized world, e-commerce is becoming more common and
sales have grown by 20% each year since 2009 (International Post Corporation, 2021). The last
part of the delivery process, bringing the package to the end customer, is called the ‘last
mile’ (Dolan, 2022). The last mile is also the part of the delivery process that is both
the most inefficient and the most costly. This is due to multiple reasons, one of them being
that individual deliveries can be small. Additionally, the distances between delivery points
can span several miles in rural areas, while in cities deliveries are delayed by traffic.

To address this issue, the startup company HUGO Delivery, a subsidiary of Berge Group, is
developing a last mile delivery robot, see Figure 1, to help build a sustainable future (HUGO
Delivery AB, n.d.). The robot is autonomous and will transport packages from A to B. At the
time of the report, there are five different business cases that the robot is being developed
for, and they will serve as the core of this thesis report. The business cases all involve
transportation of goods, but each is unique, operating in a different environment with
different prerequisites.

Figure 1 The delivery robot HUGO. Copyright 2022 HUGO Delivery AB


The companies represented in the business cases are:
• PostNord – Customer to Business
• Ericsson – Food delivery
• Domino's Pizza – Food delivery
• Asda – Groceries
• Borealis – Industrial application

HUGO Delivery wishes for the Ericsson food case to be developed further and has also
expressed that a phone web app design for the delivery would be a preferred solution to
explore; they are, however, open to other solutions as well. This was taken into consideration,
and it slightly narrowed the scope, research questions and aim.

The science and application of autonomous vehicles is rapidly evolving, and even though it
has not yet reached regular consumers at large it is often viewed as a future reality. This,
however, raises questions about how users should interact with the new AI technology, and
new fields of research, such as Human-Centered AI (Shneiderman, 2020b), have emerged as a
consequence. An autonomous delivery robot solving the last mile delivery issue could
have a positive effect on the climate crisis, but in order for this to work and make a difference
it also needs users to be ready to adopt it. This is where HCAI becomes an interesting
approach, since it specifically targets autonomous agents and the design of safe, reliable, and
trustworthy AI interaction. If the service can be designed to appeal to users and leave a
positive impression, it is possible that a larger number of people would use it, resulting in a
greater environmental benefit.

Designing for autonomous agents requires insight into how users act towards a product like
the HUGO robot, which can move on its own and perform tasks without specific instructions
from the user. There are few products like this in people's day-to-day lives, which makes
it a very novel interaction pattern that is interesting to explore further.

1.1 Purpose and Goal


The purpose of this report is to investigate what fundamental functions and interaction
sequences are required as well as expected when interacting with a service utilizing an
AI-driven delivery robot.

The aims of this report are:


• Identifying the essential interactions for an autonomous delivery service.
• Exploring how these interactions can be designed for a digital interface.
• Exploring how to apply HCAI and design principles when designing an autonomous
delivery robot service.
• Presenting design proposals for an interface based on literature, the pre-study and user
tests.
• Building a final digital prototype of the interface for the autonomous delivery service.

1.2 Research Questions


The research questions the thesis aims to answer are:
• What interaction sequences are essential for end users in the case of interacting with
an autonomous delivery robot through a phone interface?
• How can HCAI principles be applied when designing a user interface for the identified
interaction sequences in an autonomous food delivery service?


1.3 Scope
The report will focus on investigating interactions between the autonomous agent (HUGO)
and the end user (the customer). Thus, interactions between the robot and other humans or
other systems, for example the service providers, pedestrians, and backstage support
systems, will not be considered. The parts of the service in which the user does not interact
with the autonomous agent but still with the service will be taken into consideration,
although they will not be the main focus of the thesis. In designing the user interface, the
focus will be on the functionality of the interface rather than the visual design. Although the
aesthetics of the phone interface will be explored when designing, they will not be the main
focus of the product.

The thesis will be restricted to investigating the five given cases presented in the introduction.
As per request by the company, the design proposal will be developed for the Ericsson food
delivery service case. Additionally, since the company has identified web apps as its platform
for communication between the user and the service, the thesis will primarily investigate
implementing a web app design solution for the service. Moreover, when discussing HUGO
and the autonomous agent, this refers to the current state and design implementation of the
robot being developed at the time of the thesis.


2. Theoretical framework
This chapter explains the theoretical framework that the thesis builds upon. The thesis case
involves an autonomous agent and a digital system, both built on computer technology, that
together guide the human's interaction with them. This requires researching several areas of
theory, approaches and methods that can be applied to one or several of the components in
this interaction.

2.1 Interaction Design


This project and the resulting design are based in the field of interaction design. The book
Interaction design: foundations, experiments by Hallnäs and Redström (2006) defines
interaction design as follows:

‘Interaction design is design of the acts that define intended use of things. “Intended use” does not
refer to function in a more general sense, i.e. what a given thing does as we use it; a corkscrew
opening a bottle of wine for example. It is about acts that define use of this particular corkscrew,
i.e. it refers to a particular act interpretation of a given thing as a corkscrew.’ - (Hallnäs & Redström,
2006, p. 23)

According to this, to design interactions is to design how a user should interact with a
product or interface, which means that the designer creates the conceptual context of the
intended use, but without the expectation that a user interacts with the product step by step
in a strict or given way. There is always a chance that a user will not re-enact the intended
use, which makes it unnecessary to stage exactly how an interaction should unfold. Instead,
the designer focuses on how a product can increase the chance of intended use through logic
and by nudging the user in the right direction.

Interaction design is a broad term and covers many different disciplines. The most relevant
and interesting area of interaction design for this thesis is Human-Centered Artificial
Intelligence (HCAI), a discipline deriving from human-computer interaction (HCI)
(Shneiderman, 2020a). The design of human-computer interaction is a big part of
interaction design when designing digital interfaces, and Arvola (2010) describes the word
interaction in human-computer interaction as follows:

‘The word interaction in human-computer interaction design, can be defined as a mutually and
reciprocally performed action between several parties in close contact, where at least one party is
human and at least one is computer-based’ - (Arvola, 2010, p. 1)

The interaction is not necessarily only between a computer-based party, e.g. a phone, and
a human but can also involve other parties, such as an autonomous agent. The definition of
an interaction is therefore not bound to a computer and a user but can involve other parties
as well.


There are broader interaction design principles that are used as guidelines to design
interaction (Cooper et al., 2014). These are general principles that can apply to a variety of
interactions but can also be further developed to specific interactions, like that of an
autonomous robot. There are also UX/UI design guidelines based in cognition and human
psychology, like how humans interpret visual signals and patterns. As an example, one
prevalent such guideline is that a user interface should mimic how people expect the system
to look and work, taking inspiration from industry standards, cognition, and societal codes.
This helps users navigate the system with more ease: the system is recognisable to them,
and they quickly pick up on what to do because they have seen it before or are
neurologically coded to understand it (Johnson, 2014).

2.2 Last Mile Delivery (LMD)


In a study of the definition of last mile in literature, Motavallian (2019) defines the last mile
as ‘LMD is the last transportation of a consignment in a supply chain from the last dispatch
point to the delivery point where the consignee receives the consignment.’ (p.106)
Motavallian explains that ‘consignment’ in the definition is used for what is commonly called
a product; in other words, the consignment is the item being delivered. The party it is
delivered to is referred to as the ‘consignee’. Therefore, the ‘delivery point’ is the location
where the item is delivered to the consignee. The ‘last dispatch point’, on the other hand,
refers to the point where the last step in the supply chain begins; this can be one of various
locations, with ports, consolidation centres and warehouses being some examples.

2.3 Lean Startup


Working iteratively in loops of build, measure and learn is the core of the Lean Startup
methodology (Ries, 2019). The goal is to move fast by building so-called minimum viable
products (MVPs), which are quick and cheap products used to learn from testing against the
users. The lessons learned from MVPs fuel the next iteration of building and testing. The
basis for the iteration process is assumptions in the form of statements turned into
hypotheses. The hypotheses carrying the most risk are the most important ones to prioritise,
as they are the most vital to the success of the startup and determine whether the company
needs to pivot or persevere. The hypothesis itself is in turn what decides what to build and
how to build the MVP, since the MVP should be designed around testing the hypothesis.
Therefore, according to Ries, the actual planning of the build-measure-learn feedback loop
is in practice performed in reverse order. This means that the planning process starts by
understanding what to learn, then what to measure and how, and lastly deciding what
product to build to enable the experiments that provide the needed measurements.
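As a minimal illustration of this reversed planning order, the Python sketch below walks backwards from what to learn, to what to measure, to what MVP to build; the hypothesis, metric and MVP named in it are hypothetical examples for a delivery-robot service, not experiments from the Lean Startup literature or from this thesis.

```python
# A minimal sketch (not from Ries) of planning a build-measure-learn loop in
# reverse: decide what to learn, then what to measure, then what MVP to build.
# The hypothesis, metric and MVP below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Experiment:
    learn: str     # the riskiest assumption to validate (value or growth hypothesis)
    measure: str   # the metric that would confirm or refute it
    build: str     # the cheapest MVP that can produce that measurement

def plan_loop() -> Experiment:
    # Planned in reverse order: learn -> measure -> build.
    learn = "Users trust an unattended robot enough to collect food from it"
    measure = "Share of test users who complete pickup without assistance"
    build = "Clickable web-app prototype plus a mock-up robot for staged pickups"
    return Experiment(learn=learn, measure=measure, build=build)

if __name__ == "__main__":
    exp = plan_loop()
    # Executed forwards as build -> measure -> learn.
    print("Build:  ", exp.build)
    print("Measure:", exp.measure)
    print("Learn:  ", exp.learn)
```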

Ries further stresses the importance of how experiments are conducted and, mainly, why. He
states that simply planning to ship something, be it a product or a service, in order to see what
happens as an experiment will always guarantee success at doing just that: seeing what
happens. However, doing it this way won't necessarily provide any validated learning, which
is one of the core parts of the Lean Startup feedback loop. As Ries states, ‘if you cannot fail,
you cannot learn’ (2019).


How hypotheses are formulated is therefore important to allow for learning. According to
Ries, there are two types of hypotheses that are the most important for entrepreneurs to
test: the value hypothesis and the growth hypothesis. The value hypothesis relates to whether
the product or service provides value to the customer. The growth hypothesis, on the other
hand, focuses on testing how the product or service spreads to new customers and, in turn,
how new users discover it.

2.4 Research through Design (RtD)


Research through art and design, first proposed by Frayling (1993), has been adopted by
multiple fields and is more commonly referred to as Research through Design (RtD).
Zimmerman et al. (2007) state that RtD in human-computer interaction (HCI) helps in
addressing complex problems that traditional approaches of science and engineering
struggle to model. Leveraging methods unique to design and design processes, RtD
contributes complementary knowledge. In that way, RtD differentiates itself from traditional
science in that the theoretical contributions can come in the form of artifacts produced in the
design process. By delivering artifacts as research contributions, designers can utilize their
own expertise in the design field when conducting research.

Zimmerman et al. however emphasize the difference between the artifacts produced in
design practice and in design research. In design research the intent lies in producing
knowledge, whereas design practice aims at creating commercially viable products.
Therefore, the focus in design research is on making the right things rather than what would
be commercially successful. The contribution should therefore, according to Zimmerman et
al. (2007), demonstrate significant invention and not merely update an existing product.

Zimmerman & Forlizzi propose a five-step process for RtD projects (2014). The five steps
are:

1. Select
2. Design
3. Evaluate
4. Reflect & disseminate
5. Repeat

Select
Zimmerman & Forlizzi state that the first step in the process is selecting what to investigate,
be it a problem or a design opportunity. Besides what to investigate, how to investigate it
is also part of the first step, and in turn selecting which of the three RtD practices, lab, field
or showroom, to follow. To finish the selection step, the last part is finding and selecting
exemplars of RtD projects that can serve as guidelines for the project in question.

Design
When the focus and the approach of the project are selected, the next step is starting the
design process by setting the initial framing of the project. This includes doing literature
studies in the given field, both to understand the state of the art and to understand the
uncertainties and problems found in other research. Setting the frame can also involve
conducting fieldwork, according to Zimmerman & Forlizzi. When the initial frame is set, the
exploration and creation of products can start. They propose that the process should be
iterative, where the products as well as the framing evolve and are tweaked within the
process. Furthermore, Zimmerman & Forlizzi stress the importance of documenting the
process, which steps are taken, and the rationale behind those decisions.

Evaluate
Artifacts generated in the design step of the process are the input for this step. The
evaluation is performed in relation to the decisions made in the first step, the research
questions and the chosen RtD practice.

Reflect & Disseminate


After evaluating the artifacts produced, the next step is reflecting on learnings from the
process and consolidating them into research in order to spread them further. The form of
consolidation can vary from publications and videos to products staying in use after the
research.

Repeat
The last and final step of Zimmerman's and Forlizzi's process is to investigate the problem
again, preferably multiple times, to gain the best results.

2.5 Activity Centered Design (ACD)


Activity Centered Design (ACD) is presented by Norman (2013; 2006) as an enhancement of
the well-established Human Centered Design (HCD), focusing on designing for the activities
performed by users. Norman emphasises that ACD does not mean excluding the user from
the design process, but rather shifts the focus from designing for the user to designing for
the activities performed by the user. A deep understanding of the user is therefore still a part
of ACD; however, ACD expands further and involves a deeper understanding of the
technology, the tools, and the activities. By shifting focus to the activity, ACD allows for
designing products that work on a wider, global scale, as products are developed for
accomplishing tasks. Thus, it minimizes the risk of developing for a specific group of users.
In turn, ACD is therefore better suited for developing products aimed at larger,
nonhomogeneous target groups. One of the main critiques of traditional HCD by Norman
(2005) is the fact that individuals, and therein users, are constantly changing, meaning that
the design suited for the user needs of today might not be the same in the future. Moreover,
Norman (2013) argues that users are willing to learn things that appear to be essential for the
activity, as opposed to systems whose requirements don't appear to be necessary for or to
support the activity.

Norman (2013; 2005) bases ACD on his own adaptation of Activity Theory, where he separates
it into three abstraction layers: activities, tasks and operations. Activities, being the highest
level in the abstraction layers, operate within the largest scope and work towards a high-level
goal; an example by Norman describing the activity layer is ‘go shopping’. Activities
themselves consist of multiple lower-level components, tasks, that together make up an
activity. These tasks are separate, with their own low-level goals, but together as a collection
they operate towards the same high-level goal, the activity. In the case of going shopping,
examples of tasks are ‘drive to the market’ or ‘find a shopping basket’. The third abstraction
layer, operations, follows the same principle, where a task is executed through multiple
operations.
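These three layers can be read as a simple hierarchy. The Python sketch below encodes Norman's 'go shopping' example; the activity and the two tasks come from Norman's example, while the operations listed are hypothetical fillers added here for illustration.

```python
# A minimal sketch of Norman's three abstraction layers as a nested structure.
# The activity and tasks come from Norman's 'go shopping' example; the
# operations under each task are hypothetical fillers added for illustration.

activity = {
    "activity": "go shopping",              # highest level, high-level goal
    "tasks": [
        {
            "task": "drive to the market",  # low-level goal contributing to the activity
            "operations": ["start car", "follow route", "park"],
        },
        {
            "task": "find a shopping basket",
            "operations": ["enter store", "locate basket stack", "pick up basket"],
        },
    ],
}

for task in activity["tasks"]:
    print(f"{activity['activity']} -> {task['task']} -> {', '.join(task['operations'])}")
```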

2.6 Human-Centred Artificial Intelligence (HCAI)


Shneiderman (2020a, 2020b, 2022) introduces HCAI as a way to challenge the existing view
on developing AI and the interaction between humans and AI agents, going beyond the novel
view of the future of AI. He states that the traditional view on development of AI has its focus
on the AI itself and its algorithms, and argues that HCAI can aid designers and researchers
in shifting the focus and reframing the view on AI development to focus on the human, and
therein the user. By placing the user at the centre, focusing on the user experience,
designers and researchers can develop theory and AI systems that are safe, reliable and
trustworthy, with an emphasis on serving the user and their needs, aiming to empower the
users instead of replacing them.

Shneiderman (2020a, 2020b, 2022) proposes a two-dimensional framework for HCAI work
with the goal of aiding in creating reliable, safe and trustworthy applications. By exploring
the degree of automation in relation to the level of human control, the framework aims at
challenging the current notion that increased automation implies that the amount of
human control needs to decrease. Shneiderman therefore states that, with the framework,
automation can in fact be increased and the amount of human control can not only be
retained but also increased. The framework is illustrated as a two-axis graph divided into
four different fields, as can be seen in Figure 2. Each axis ranges from low to high, where the
vertical axis is the level of human control and the horizontal axis is the level of computer
automation. Shneiderman (2022, Chapter 8: Two-Dimensional HCAI Framework) explains
that most reliable, safe and trustworthy systems are located on the right side of the
framework. At the same time, he argues that the desired goal is often, but not always, for a
design to be placed in the upper right quadrant, where the level of human control as well as
the level of computer automation are both high at the same time. Systems can still be placed
in the lower right quadrant and be considered reliable, safe and trustworthy, but as
Shneiderman states, ‘The lower right quadrant is home to relatively mature, well-understood
systems for predictable tasks’, whereas the upper right quadrant is for more complex and
less understood tasks where the context of use may vary.
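As a rough reading aid for Figure 2, the Python sketch below maps a pair of human-control and automation levels onto the four quadrants; the 0-1 scale and the 0.5 threshold are simplifications introduced here for illustration and are not part of Shneiderman's framework.

```python
# A minimal sketch mapping levels of human control and computer automation onto
# the four quadrants of Shneiderman's framework (Figure 2). The 0-1 scale and
# the 0.5 threshold are simplifications made here for illustration only.

def hcai_quadrant(human_control: float, automation: float) -> str:
    high_control = human_control >= 0.5
    high_automation = automation >= 0.5
    if high_control and high_automation:
        return "upper right: high control, high automation (often the desired goal)"
    if not high_control and high_automation:
        return "lower right: mature, well-understood systems for predictable tasks"
    if high_control and not high_automation:
        return "upper left: high human control, low automation"
    return "lower left: low human control, low automation"

# Example: a delivery robot whose user controls key steps (locate, unlock, open)
# while the robot navigates autonomously would aim for the upper right quadrant.
print(hcai_quadrant(human_control=0.8, automation=0.9))
```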

However, even though HCAI is an extension of HCI, Xu et al. (2021) state that traditional HCI
methodology is not sufficient for HCI practitioners to develop HCAI systems, seeing as HCI
has previously only involved developing interactions with non-AI systems, where the
behaviour of automation is well defined and the result can be anticipated. According to Xu
et al., AI has introduced a characteristic of autonomous machine behaviour that is less
predictable. Additionally, the characteristics and implications that it brings need to be fully
understood by designers to create HCAI systems where these behaviours are managed.
Therefore, Xu et al. emphasise that there needs to be a transition in the HCI community
towards developing for AI systems to enable adoption of the HCAI approach. To further
evolve the HCAI framework they propose five HCAI design goals to help guide HCI
practitioners in developing HCAI systems.


Figure 2 The Human-Centered AI framework presented. From Human-Centered AI, by Ben
Shneiderman (2022, Chapter 8: Two-Dimensional HCAI Framework). Reprinted with
permission.
Designing for…
• Usable & explainable AI
• Human-driven decision-making
• Ethical & Responsible AI
• Augmenting human ability
• Human-controlled AI

The goal that Xu et al. refers to as Human-controlled AI means that humans, not necessarily
the users, are kept as the ultimate decision makers. This goal also aligns with Shneiderman’s
statements on Human-controlled AI (2020a, 2020b, 2022). Giving the user access to controls
for activation, operation and override can aid in the goal of achieving safe, reliable, and
trustworthy systems. This in turn also helps in fulfilling the goal of achieving decision-
making that is human-driven.

Designing AI that provides output that is understandable to humans is Xu et al.'s (2021)
definition of explainable AI. Riedl (2019) emphasises the importance of designing for what
he refers to as non-expert users to achieve adoption of autonomous agents. He argues that in
case of failure, or perceived failure by the user, the AI should provide the user with
information about the AI's choice in order for the user to understand the reason behind the
agent's behaviour. He also states that in some cases information itself can be enough, as it
helps the user understand the AI and adjust their expectations of the agent accordingly. In
other cases the information needs to be sufficient for the user to take appropriate action in
response to the failure. Discussing AI in the form of autonomous robots, Doncieux et al.
(2022) add to this, also highlighting the importance of the AI's actions and intent being
understandable to the users. They propose that communicating the internal state and
intent of the AI, for example its current goal or acknowledgement of a command, should be
done using subtle approaches as opposed to more explicit verbal communication between
the user and the AI, for example by defining fixed signals for interactions to communicate
the internal state of the robot AI.
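To make the idea of fixed, subtle signals more concrete, the Python sketch below pairs a few internal states of a delivery robot with non-verbal cues; all states and signals are hypothetical illustrations and are not taken from Doncieux et al. or from the actual HUGO design.

```python
# A minimal sketch of 'fixed signals': each internal state of the robot maps to
# one subtle, non-verbal cue. All states and signals here are hypothetical
# illustrations, not the HUGO robot's actual behaviour.

STATE_SIGNALS = {
    "navigating":        "slow pulsing light strip",
    "arrived_waiting":   "steady green light and short chime",
    "command_received":  "two quick blinks (acknowledgement)",
    "lid_unlocked":      "lid indicator turns white",
    "error_needs_help":  "amber light and status message in the web app",
}

def signal_for(state: str) -> str:
    # Fall back to an explicit in-app message if no fixed signal is defined.
    return STATE_SIGNALS.get(state, "show status text in the web app")

print(signal_for("command_received"))
```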

Additionally, Google PAIR (2021, Chapter Explainability + Trust) connects explainability
with user trust, proposing that explanations of the AI be used for building and calibrating
the user's trust in the system. Similar to Riedl's (2019) argument on explaining the AI's
behaviour, Google PAIR (2021, Chapter Explainability + Trust) also emphasise explaining
the output of the AI and connecting it to the user's actions. They argue that receiving
feedback and explanation after an action makes it easier to understand cause and effect,
which in turn facilitates faster learning for users. Google PAIR further emphasise establishing
the user's trust from the beginning by presenting information regarding the AI, its
capabilities and its limitations when onboarding the application, to set the user's
expectations.

In addition to the five design goals presented by Xu et al. (2021), Shneiderman (2022) states
that the eight golden rules he created for HCI (Shneiderman, n.d.) are still valid for designing
HCAI systems.

The Eight Golden Rules for interface design:


1. Strive for consistency
2. Seek universal usability
3. Offer informative feedback
4. Design dialogs to yield closure
5. Prevent errors
6. Permit easy reversal of actions
7. Keep users in control
8. Reduce short-term memory load

Building upon the Eight Golden Rules, Shneiderman (2022) presents an HCAI pattern
language to be used when designing HCAI systems, which according to Shneiderman
addresses common design problems through shorter expressions of important ideas.

HCAI pattern language:


1. Overview first, zoom and filter, then details-on-demand
2. Preview first, select and initiate, then manage execution
3. Steer by way of interactive control panels
4. Capture history and audit trails from powerful sensors
5. People thrive on human-to-human communication
6. Be cautious when outcomes are consequential
7. Prevent adversarial attacks
8. Incident reporting websites accelerate refinement


2.7 Theory of employed method


This sub-chapter presents the theory and background of the methods that was used during
this thesis project.

2.7.1 Future analysis


The method future analysis, as described by Wikberg Nilsson (2015), is used to identify
trends, patterns, information, and opinions that can be used to inspire and provide a
foundation for the product development. The first step is to look at trends and find the
needs and desires of the public, or a certain focus group. This could be done by interviews,
discussions within a project group/focus group or searching for popular topics and trends
online, to name a few examples.

The second step is to find key information regarding the topic. The information in this step
should preferably come from reliable and scientific sources, since it is used to compile
statistics and find expert opinions on the matter. This step can be used to back up the trends
found in the previous step with actual data and to better understand the product.

In the third step of the method, success factors from other companies, organizations or
services are analysed to understand what they are doing right and how they became so
popular. Success factors are, according to the method, defined as attributes and factors that
can be seen as integral to popular companies' success (Wikberg Nilsson et al., 2015). By
compiling the attributes of different successful products, a pattern of the factors they have
in common can be found and used as input and ideas for the new product.

In the end, a collection of ideas, industry standards and inspirational examples is gathered
as a basis for brainstorming new products or improving already existing products.

2.7.2 Prototyping
A prototype is a model built to test a concept and can take many shapes, for example a
sketch on paper, a cardboard model, a staged interaction, or a digital interactive model
(Stickdorn et al., 2018; Wikberg Nilsson et al., 2015). The type of prototype used varies
depending on what is of interest to research and what the end product should be. According
to Martin and Hanington (2012, p. 138) prototyping is a critical part of the design process
and essential for testing concepts against designers, clients and end customers.

The prototype can be used to find out more about the product, allowing a designer to
develop the product further. In contrast to a finished or ready-to-launch product, a prototype
is usually not as detailed, and the quality can range from paper sketches to almost finished
physical/digital models (Wikberg Nilsson et al., 2015). This means that a prototype can be
simpler, built only to test specific functions, and production technicalities and costs can be
ignored. In the end it should give an idea of how the finished product could look and work,
and how it shouldn't work, informing the next design decisions in the process.

When prototyping is applied to a product design process it is usually done in several stages
of the process in order to confirm and test things like different aesthetic choices, user
friendliness and the need for different functions. This allows designers to build upon their
discoveries. The purposes for prototyping described in Design: process och metod (Wikberg
Nilsson et al., 2015) are:

• To better understand the design problem.


• To explore and develop different innovative solutions.
• To explore and experience different possibilities for the shape, form and design.
• To examine and experience how humans interact with the different solutions.
• To test functionality, experience and understanding of the design.

There are some specific prototype types that are very common within UX and UI design,
such as the sketch prototype and the usability prototype. A sketch prototype is usually
sketched on paper, or made as a simple paper/cardboard model, and is used early in the
design process to explore different solutions. It is a way to make a visual manifestation of
the designer's thoughts in order to structure and explore different ideas. The usability
prototype is all about testing solutions with the users of the product. The prototype is built
to be interacted with. User tests, evaluations and measurements are applied to gather data
and strategically find important information about the product's usability. (Wikberg Nilsson
et al., 2015)

A product can also give the user a certain experience while using it and this is explored
through experience prototypes. The purpose of this kind of prototype is to test what kind of
emotional response a product/service might give the user. The experience prototype is
based on the concept that people understand the product better by experiencing it instead
of just reading or hearing about it (Buchenau & Suri, 2000). This prototype requires the
context of how the product should be used to be clear to the user, through for example
roleplay or simulations, since this prototyping method tries to pinpoint how a user reacts to
the product or what kind of opinions they form (Wikberg Nilsson et al., 2015).

2.7.3 Interviews
By interviewing, a lot of information can be gathered, and it is a fundamental method used
to be able to understand context, opinions, behaviour, facts and more about the area being
researched (Delft University of Technology, 2020).

Interviews usually provide less measurable data but give a lot of qualitative data (Rev, 2022).
Interviews can be used in several stages of the design process to discover different things.
In the beginning it can give context to a problem or situation, it can also be used at a later
stage to evaluate a product or gain detailed customer insight. Interviews can be time
consuming when compared to methods like focus groups, but in return they provide deeper
insight due to the ability to probe further into interesting areas.

An interview can be structured, semi-structured or unstructured (Delft University of
Technology, 2020). This indicates how strictly the interviewer follows the decided-upon plan
of questions. For example, a questionnaire is strict and does not allow discussions and
follow-up questions, while a less structured interview in person can have a few questions as
a starting point but then allow the interviewer to go off script when the subject changes
direction. An unstructured interview might not even have questions but instead be an open
discussion of a certain subject.

When conducting interviews, they are usually recorded, noted or in other ways saved to later
be analysed. In this thesis, Thematic Content Analysis is used to find patterns and
understand the opinions of the interview subjects, mostly when analysing notes from user
tests. Thematic content analysis, shortened to TCA, is a type of descriptive transcript or note
analysis that finds common themes in the notes by scanning all the text, finding patterns
and sorting the text into different categories, often by colour coding the text (Braun & Clarke,
2006). The researcher's stance should be objective when grouping and distilling the themes,
and the analysis of the themes is performed later, when all the data has been looked over
and categorized.
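As a small illustration of how such coding can be structured, the Python sketch below tags note excerpts with keyword-based themes and counts their occurrences; the themes, keywords and notes are invented for illustration, and the TCA in this thesis was performed manually.

```python
# A minimal sketch of thematic coding: tag each note with themes based on
# keywords, then count how often each theme occurs. Keywords, themes and
# notes are hypothetical; the thesis's actual TCA was done by hand.

from collections import Counter

THEME_KEYWORDS = {
    "trust":       ["trust", "safe", "reliable"],
    "findability": ["find", "locate", "map"],
    "unlocking":   ["unlock", "open", "lid"],
}

notes = [
    "Participant could not find the robot on the map",
    "Felt safe once the lid unlocked automatically",
    "Wanted a sound to locate the robot",
]

theme_counts = Counter()
for note in notes:
    lowered = note.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            theme_counts[theme] += 1

print(theme_counts.most_common())
```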

2.7.4 Service blueprint


Mapping the activities in a service can aid the designer in finding interesting activities,
examining the flow of interactions between different parts of the service, and discovering
how they could be improved and optimised. A service blueprint is instrumental in mapping
services that involve several touchpoints or require effort in several different stages at the
same time. (Gibbons, 2017)

A service blueprint usually corresponds to a customer journey map and follows all the
different processes needed to let the user achieve their goal. By dividing the different
processes into categories, the customer’s journey can be followed throughout all the
divisions of the service (Gibbons, 2017). The most prominent division is between the frontstage
and backstage categories, which differentiate what processes and actions the user can and cannot
see. For example, the user can see and interact with a cashier at the store, but they won't be
able to see the process their purchase puts into motion that updates the inventory when a
product is bought.

With the service blueprint all the processes necessary for the flow of the service can be
mapped to gain a better understanding of what is needed to make a certain customer
journey work in the end (Gibbons, 2017). An example of a service blueprint can be seen in
Figure 3.


Figure 3 An example of a service blueprint showing how the categories and actions can be
mapped. From Service Blueprints: Definition by Sarah Gibbons (2017). Reprinted with
permission.

2.7.5 Flowchart
Flowcharts are graphic charts used to analyse, design, or document a sequence of
operations in order to visualise the whole process (Chapin, 2003). This can clarify which
steps are present in the sequence or if there are any lose ends. It is often used to track how
digital interfaces work or should work but can also be used to chart a flow of actions of a
service or the actions in a process for a digital system. An example of a flowchart can be seen
in Figure 4.


Figure 4 Example of a flow chart and its components. From Wireflows: A UX Deliverable for
Workflows and Apps by Page Laubheimer (2016). Reprinted with permission.

The chart consists of a set of text boxes, each defined as a certain type of action or
interaction needed to take the next step in the process, such as an option, indicating that
the user can make a choice at that step, or a process, indicating that the user's input has
influenced the system (Chapin, 2003). The blocks are then connected with arrows showing
how the blocks depend on each other and in what order they should come. Standards exist
for the symbols, but there is also room to create specific building blocks to fit the process at
hand.
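
To make the block-and-arrow structure concrete, the following minimal Python sketch (not part of the thesis) models flowchart blocks and their connections; the block names are hypothetical examples loosely based on the delivery flow discussed later.

from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    kind: str  # e.g. "start", "process", "option" or "end"
    next_blocks: list["Block"] = field(default_factory=list)

# Hypothetical example flow: notification -> unlock -> choice -> close.
start = Block("Delivery notification received", "start")
unlock = Block("Unlock box", "process")
choice = Block("Take goods or leave them?", "option")
end = Block("Close and lock box", "end")
start.next_blocks = [unlock]
unlock.next_blocks = [choice]
choice.next_blocks = [end]

def walk(block: Block, depth: int = 0) -> None:
    # Print the flow from a given block, one step per line, indented by depth.
    print("  " * depth + f"[{block.kind}] {block.name}")
    for nxt in block.next_blocks:
        walk(nxt, depth + 1)

walk(start)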

2.7.6 User and usability testing


Presenting a product to potential users and gathering their reactions to it, actions with it and
opinions of it can be very beneficial for designers. This allows the design process to be
influenced by the actual end-user of the product and can help designers see problems or
unintended consequences that they themselves had not thought about. It can give a great
amount of information, such as the potential reception of the product on the market or risks
and exploits of the product. (Delft University of Technology, 2020)

Usability testing is very similar to user testing, and the terms are often used as synonyms, but the
aims are different (Moran, 2019). User tests aim to introduce new products to users and find
out how users receive them, often for commercial purposes, while usability tests aim to find out
how the product functions in the hands of a user and whether they understand how to interact with
it. A test with users can however involve both aims depending on the plan for the test, which
means that a test's aim isn't necessarily only one of them.

There are many ways to do usability tests, and which method is used can depend on the
product in question and what is of interest to find out. There are three components
commonly present in user/usability testing: the facilitator, the user and the
tasks the user performs to test the product (Moran, 2019). The facilitator guides the test and
the user to make sure the right things are tested and that the user knows what to do. A test
can involve more than one facilitator, and it is encouraged to have other roles present at a
test as well, so that different roles can focus on their responsibilities and so that there is more than
one observer (Rettig, 1994). These roles could involve taking notes, managing props, or
taking photos, depending on what type of test is conducted and what needs to be done. The
user is the one participating in testing the product. User testers can be selected according
to a certain profile, like demographics or interests, but can also be picked at random. There
is also the expert evaluation method, where a person with relevant knowledge, for example
a usability expert, becomes the user tester (Benyon, 2019, pp. 246–247). The expert user has
the prior experience to pick up on common problems and flaws in the product while testing
it, and they can be particularly useful in the early design process. This is because they often
see if the system has any major flaws that could otherwise interfere when regular users test
it, which makes the expert user tester fitting for early design exploration. The task the user
performs is simply a stated question or an objective that aims to focus the user's
attention on a certain part of the interaction. This helps when structuring the test and when
gathering feedback on interesting areas of the product.

In interaction design, methods like interviewing, observing, and roleplaying in a scenario are
commonly incorporated in the testing. Usually, the product or service being tested is not yet
fully finished or does not have every component in place for being properly used; this is when
facilitators use the Wizard of Oz approach. This means that the facilitators find ways to show
the user what is supposed to happen by creating the illusion of it happening by other means.
For example, if a product should be able to play a sound but has no such capability, a speaker
could be placed nearby and controlled by a facilitator, giving the illusion that the product
can play the sound. Having the user play out the whole scenario of using a service/product,
and having them more immersed in it, can give great insight. Having several users do the test
can give a greater indication of patterns or behaviours. User testing does not, however, necessarily
require a lot of participants. A recommended number of user testers according to Jakob
Nielsen is about five participants (Nielsen, 2012). This depends on the tests being conducted,
but having more participants usually does not give any new insights since the patterns
repeat themselves. Exceptions to this rule are tests like quantitative studies and eye
tracking.

There are also more measurable methods in user testing, like regular scoring questions or
the SUS method. The System Usability Scale, shortened to SUS, is a common tool regarded
as an industry standard to evaluate, through a questionnaire the user answers, how
measurably user-friendly a system is perceived to be (Sauro, 2011). SUS is used by asking
the user tester ten standard questions and having them rate each on a scale from 1 to 5. The scores
are then calculated and converted to a score of 0-100 instead of the 0-40 score that is gained from
the 1-5 scale. The average SUS score is 68, which is seen as a benchmark: above this is an
acceptable usability score and below is lower than acceptable (Sauro, 2011).
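
The conversion from the 1-5 answers to a 0-100 score follows the standard SUS scoring rule (odd-numbered, positively worded items contribute the answer minus 1; even-numbered, negatively worded items contribute 5 minus the answer; the 0-40 sum is multiplied by 2.5). The thesis text does not spell this rule out, so the following Python sketch is only an illustration of that standard calculation, with a hypothetical set of answers.

def sus_score(answers: list[int]) -> float:
    # Compute a SUS score from ten answers rated 1-5.
    # Odd-numbered questions (1, 3, 5, ...) are positively worded: contribution = answer - 1.
    # Even-numbered questions are negatively worded: contribution = 5 - answer.
    assert len(answers) == 10 and all(1 <= a <= 5 for a in answers)
    contributions = [
        (a - 1) if i % 2 == 0 else (5 - a)  # i is 0-based, so even i means an odd-numbered question
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 raw sum to 0-100

# Hypothetical example: a fairly positive response pattern gives 85.0.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))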

2.7.7 Task analysis


A task analysis is a tool used to study how users perform tasks and what types of tasks are
needed to achieve the user's goal. Maria Rosala states, in the article Task Analysis: Support
Users in Achieving Their Goals (Rosala, 2020):

'Task analysis is crucial for user experience, because a design that solves the wrong problem (i.e.,
doesn't support users' tasks) will fail, no matter how good its UI' (Rosala, 2020)

A task analysis allows a designer to learn about how users work or do certain actions. The
first step is to gather information on goals and tasks by observing, interviewing, and
studying the user's journey to achieve their goal. Next, the information is structured in a
hierarchical diagram with the user's goal on top. Underneath the goal, the tasks needed to
achieve it are placed, and below each of the tasks, the subtasks needed to achieve the task
above. An example of this can be seen in Figure 5. This information is later used to define
what overarching goals a user has and what tasks are needed to achieve them, giving ideas
and structure to the design. From the analysis a designer can meet users' expectations and
help them achieve their goals in an efficient way (Rosala, 2020).

Figure 5 An example of a hierarchical task analysis showing how the goal is broken down into
tasks and subtasks. From Task Analysis: Support Users in Achieving Their Goals by Maria
Rosala (2020). Reprinted with permission.
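
As an illustration of the goal/task/subtask hierarchy described above, the following minimal Python sketch (not part of the thesis) represents a hierarchical task analysis as nested data; the goal, tasks and subtasks are hypothetical examples for the delivery scenario.

# Hypothetical hierarchical task analysis: the goal on top, tasks below it,
# and subtasks below each task.
hierarchical_task_analysis = {
    "goal": "Receive a food delivery from the robot",
    "tasks": [
        {
            "task": "Meet the robot at the delivery location",
            "subtasks": ["Read the arrival notification", "Walk to the robot"],
        },
        {
            "task": "Retrieve the package",
            "subtasks": ["Unlock the box in the web app", "Open the lid", "Take the food"],
        },
        {
            "task": "End the delivery",
            "subtasks": ["Close the lid", "Confirm pickup so the robot can leave"],
        },
    ],
}

def print_hta(hta: dict) -> None:
    # Print the hierarchy with indentation mirroring the diagram levels.
    print(hta["goal"])
    for task in hta["tasks"]:
        print("  " + task["task"])
        for subtask in task["subtasks"]:
            print("    " + subtask)

print_hta(hierarchical_task_analysis)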


2.7.8 Wireframing
A wireframe prototype is often produced by design teams early in the design process with
the primary purpose of creating basic concepts and design directions for the design team.
Wireframe prototypes are further described as high-level sketches, usually without any
visual design, as the focus of the prototype lies in exploring and visualising the interaction
flow and navigation model. Additionally, wireframes describing the software can be
produced and presented in multiple ways, ranging from sketches on paper to visualisations
created with graphic design software. (Arnowitz et al., 2007, pp. 138–139)

2.7.9 Bodystorming
Bodystorming is mentioned in Imagining and experiencing in design, the role of performances
(Iacucci et al., 2002) as a technique that visualises a user experience through a scenario by using
props and interactive environments. The method is of an explorative nature and inspired by
improvisational theatre. By immersing the designers in the scenario of acting like a regular
user using a product or reacting to a situation, bodystorming helps designers generate ideas
and understand context through exploration.

The method can be adapted to fit different products and the type of user or persona using the
product, and can differ in how accurately it reproduces the real scenario. A bodystorming session
can for example be only roleplaying and talking, or it can involve sets and props closely resembling
the intended scenario. It is important to note that bodystorming is not like user testing, since
the aim is to explore ideas and see potential issues, not to evaluate the product (Iacucci et
al., 2002).


3. Project Process
In this chapter the overall process of the master thesis will be presented. A visual
representation of the process can be seen in Figure 6. The project started with a literature study,
followed by a pre-study, to gain knowledge within the researched area, examine the cases
given by the company and find interesting ideas for the design. When a base of knowledge
regarding the theory and the case had been achieved, the thesis moved forward to developing
the Ericsson case, simplified to food delivery to allow a more general approach. This was
done in three iterations, all focusing on prototyping and user testing to constantly improve
the design. After the final test, all data and new information were summarized and analysed
to discuss the findings and come to conclusions.

Figure 6 Depicting the five phases of the thesis project process


4. Literature study
The initial phase of the thesis project was dedicated to establishing the theoretical
framework which would later serve as the foundation for the thesis research and design
work. The literature study focused mostly on methodology within the design field, and more
specifically within Human-Centered Design, as well as approaches for conducting
scientific studies within the design practice. Additionally, a great focus was placed on
gathering information on the preconditions for the project, mainly last mile delivery.
Even though the literature study phase was placed in the beginning of the thesis and a large
period was allocated for performing it, collecting information and theory was an ongoing
continuous process that continued after the initial phase and was conducted throughout
the whole project.

Most of the sources used in the project are scientific papers and reports found at well-known
publishers such as SpringerLink, IEEE and ACM. Moreover, due to the nature of the project,
being both in the design field and mainly at the forefront of the development of autonomous
delivery, additional sources generated from industrial practice, such as Google's People + AI
Research (PAIR) and the Nielsen Norman Group (NNG), were used to support where academia is
lacking and to provide state-of-the-art examples from the industry.


5. Pre-study
After performing the literature study, a pre-study was conducted with a focus on
understanding the current status of the product as well as the business cases provided by
the company. This was done by utilizing different methods established in the theoretical
framework to explore the state of the art in the delivery industry, as well as what other
companies were doing in similar industries and how their solutions were implemented.

5.1 Method
In this chapter, the process and methods used for the pre-study will be presented.

5.1.1 Observations and interviews


Interviews were used throughout the thesis project to gain knowledge about the HUGO
project and to gain knowledge and opinions from test users. The thesis project was partly
conducted at the office of the HUGO project, and information from unstructured
interviews and observations was gathered at the office throughout the project. The
interviews were mostly unstructured or semi-structured, since this allowed follow-up
questions and a wider discussion giving more information on specific topics. The
information from observations and small talk with the staff at the HUGO project was used
in the report, but no protocol or transcription exists for it.

5.1.2 Future analysis


The method future analysis was used to identify common patterns in similar services by
other companies in other business areas. The areas of interest were delivery robots, delivery
services and human interaction with AI/computers, in order to research what types of trends,
new styles, interactions, and information were relevant for the area.

The analysis was carried out by researching different services with similarities to the HUGO
project, seeing what they had in common and how people were using the products. To
structure the analysis, a mind map of services and products was created to gather
screenshots and notes on the different services, see Figure 7 as an example.


Figure 7 A mindmap of two web app interfaces for delivery services


The services and products that were examined were:
• Foodora
• PostNord (personal delivery)
• Instabox
• Starship robot delivery
• E-scooters like Lime and Bird
• Smart home appliances

Services and products were selected based on the similarity of their interactions to the
delivery robot concept. The focus during the analysis was on delivery solutions, digital-to-physical
interaction, app UI design and process design of the service. The results were
then used to identify what these services were doing that held them in high regard
amongst consumers, i.e. their success factors. They were also used to benchmark how a web
app of today could look and function for this kind of service. In other cases, such as the
e-scooters, the interaction with physical objects through a phone was the focus. This was in
order to understand what types of technology were available, how they worked and whether they
were commonly used by consumers.

After finding and mapping the inspirational pieces, a thorough brainstorming session was held by
discussing and putting up notes with ideas and remarks, as can be seen in Figure 8. These
notes with remarks, as well as the results from the discussion, would later be used as
inspiration for the development phase of the thesis project.


Figure 8 A mind map of the inspirational sources with notes of findings and remarks

5.1.3 Flowchart
After identifying success factors and patterns in other services, the processes of the five given
cases for HUGO were each individually mapped with the flowchart method, using a template
found in Figma; the building block template can be seen in Figure 9. The goal of the process
was to analyse the current state of the services that were being developed by the HUGO
team and to find similarities between the different cases and interesting areas of interaction.
Therefore, the focus of the mapping was to highlight the different interaction points
between the robot and the user as well as the flow of the process, meaning how different
phases of the process occur and what type of data is present. The processes were then
cross-examined to identify similarities and interactions of interest to all the cases.

Exploring the different cases and creating the flowcharts was an iterative process based on
the interviews held with the design team behind HUGO. Each iteration of the flowcharts was
reviewed by the team member in charge of that specific company, and based on the
feedback from those meetings the chart was revised and improved to be more accurate to
the given case. In addition to making the flowcharts more accurate, the feedback meetings
also created a better understanding of the different cases and their challenges.


Figure 9 The building blocks for the flowcharts

5.1.4 Service blueprint


Based on the findings from the cross-examination of the flowcharts and the mapping of the cases,
a service blueprint was created for the main case, the Ericsson case, displaying the expected
user journey. In addition to the user's journey, the service blueprint also presented what
other processes, backstage and supporting processes, were required for the service to
fulfil the goal of the user. Since the service in the case was only a concept in its early stages,
multiple decisions had to be made to fill in the gaps in the service blueprint as the exploration
was conducted. These decisions were made based on knowledge acquired from interviews
with the designers, the future analysis, as well as the authors' own experience as designers.

The service blueprint included the customer journey and could be used to better understand
the user and their experience. The service blueprint was used at this stage of the project to
map out the system to get a better understanding for what happens in all the steps and how
the user interacts with the service. In turn, this also provides the requirements of the service,
e.g., what support systems are needed to provide the service to the user. For example, when,
and in what format, to present the user with information. The goal of producing a service
blueprint was to lay a foundation for a task analysis and to further explore essential
interactions.

The blueprint showed what kind of communication and processes were vital in both
frontstage and backstage to make the interaction with a delivery robot work. The focus of
the thesis was, however, on the frontstage actions, but the backstage actions were mapped as
well to better understand the needs of the system. A service blueprint has different sections
of processes but can be adapted depending on the service. The service blueprint made for
this thesis contains the following sections (a minimal data-structure sketch of the sections follows
the list):


• Evidence – The physical or digital evidence the user sees of the interaction or
process, like a text message or a receipt.

• Customer journey – The user's actions during the service that lead to their end goal.

• Line of interaction – This line is drawn to visualise where the user's actions and the
frontstage actions meet.

• Frontstage – The actions from the service provider that the user can see and
interact with.
o Robot actions – The HUGO robot's interactions during the service
process.
o App – The actions of the web application on the user's phone.
o Technology – Actions of other relevant technology, like GPS.

• Line of visibility – The line that symbolises what a user can see of the service and
what is hidden.

• Backstage actions – Actions performed by the service provider that cannot be seen by
the user.

• Line of internal interaction – The line that differentiates actions made by the service
provider from actors outside of the service provider's ownership.

• Support processes – Processes not owned by the service provider but needed in
the service, such as fetching data or processing money transfers.
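
The sketch below (not part of the thesis) shows how one step of the delivery service could be captured using the sections listed above; all step descriptions are hypothetical examples.

# Hypothetical single step of the service blueprint, organised by the sections above.
service_blueprint_step = {
    "evidence": "SMS with a link to the web app",
    "customer_journey": "User walks to the robot to pick up the food",
    # --- line of interaction ---
    "frontstage": {
        "robot_actions": "HUGO arrives and waits at the delivery spot",
        "app": "Web app shows 'Your delivery has arrived' and an unlock control",
        "technology": "GPS position of the robot shown on the map",
    },
    # --- line of visibility ---
    "backstage_actions": "Operator monitors the delivery status",
    # --- line of internal interaction ---
    "support_processes": ["Fetch order data", "Process payment"],
}

# Printing the step gives a quick overview of what each section contains.
for section, content in service_blueprint_step.items():
    print(f"{section}: {content}")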

5.2 Findings
In this chapter the findings from the pre-study phase will be presented.

5.2.1 Business cases


At the time of the thesis work, the company was developing five different cases involving
delivery applications. In this chapter each case is explained and the flowchart for each case
is described, based on the results from researching the different companies and exploring the
cases. Since the product was still under development at the time, the findings in this
chapter were based on multiple discussions and informal interviews with the development
team behind HUGO and were in some parts hypothetical regarding the future development of the
services.

5.2.1.1 ASDA
ASDA is a supermarket chain located in the UK with stores across the country in multiple
sizes and formats (ASDA, n.d.). According to their website, they serve 18 million customers in
their stores weekly. In the case of HUGO, a developer on the team explains that the use case
will target short-distance delivery of groceries from the store to the customer. More
specifically, orders will be made by the customers beforehand for pickup and will be packaged
by the store's employees. The groceries will then be loaded onto HUGO in crates and
delivered from the store to the customer's car in the parking lot. There will be specific
parking slots for pickup; however, one designer states that in order to minimize human error
the customer won't get an assigned slot to park at but will rather have to choose one
of the specified slots and inform the store when and in which slot they have parked.

The design for the crates holding the groceries was not finalized at the time of the thesis
and was still in development. However, the designers stated that the intention for the design
was for the crates to include some sort of locking mechanism to ensure the safety of the crates and
groceries during travel. This affected the mapping of the service, as the flow
therefore had to include a step for unlocking the crates/box to retrieve the crates with
groceries from HUGO, as can be seen in Figure 10. Another unique aspect of the case was
that the current design for the service required the user to alert the store both that they
had arrived at the store and in what slot they had parked. This implies that there
needs to be an additional interaction, and that the user starts the interaction by informing the
store of their arrival, after which the store loads the crates onto HUGO and sends it out
to the user's car.

Figure 10 Flowchart visualisation of the ASDA case

5.2.1.2 Borealis
Borealis is one of the leading manufacturers of polyolefin solutions in the world and has
Sweden’s only manufacturing plant in Stenungsund (Borealis, n.d.). The lead developer on
the team for Borealis states that the intend use for HUGO in this case is transporting samples
of their produced material in the factory. This means that HUGO will be retrieving material
samples from the production line and transport them to a lab for analysis. The factory has
multiple production lines meaning that there can be multiple stops on the route to the lab.


Borealis presents a unique aspect in that the case has two types of flows for interacting
with HUGO, as can be seen in Figure 11: one where the users send samples via HUGO, this
being out in production, and the other where the user receives the samples in the lab.
Despite the difference in tasks at the two stations, it was evident that the two stations shared
similar flows, with the only difference being placing a sample in HUGO in one as opposed to
retrieving a sample in the other.

As the factory is a closed-off area, the developer explains that there is no need for the lid to
be locked during transportation as there is in the other cases. This simplifies the
interaction when sending and receiving samples with HUGO, since the locking and unlocking
steps can be taken out of the design. Despite that, there is still a need for an interaction from
the user at the end of the delivery, signalling the end of the interaction and that the user has either
received a package or placed a new package in HUGO, ready to go to the next stop.

Figure 11 Flowchart visualisation of the Borealis case

5.2.1.3 PostNord
With a unique distribution network spanning the Nordic countries, PostNord
provides solutions in communications, logistics, e-commerce and distribution (PostNord
Group AB, n.d.). The goal for the future is to integrate HUGO into PostNord's daily operations
to assist them in moving towards a more sustainable and fossil-free future (PostNord Group
AB, 2022). One designer on the HUGO team explains that the intended business case that
they are developing for is Customer to Business (C2B), meaning that the customer will be
sending packages with HUGO as opposed to retrieving them, which is common to the other cases.


Since the PostNord case was intended to be C2B, meaning that the user uses the service to
send parcels to a company, the goal of the user differs from the majority of the
other cases explored. However, even though the user's goal differs from the other cases, the
flow of the service in Figure 12 still shares similarities with the other cases, and for some, such as
the Ericsson and Domino's Pizza cases, it is almost identical. The main
difference found in sending packages with HUGO is the importance of
confirming to the user that the delivery was a success, which in this case implies that the
package has successfully been delivered to its end destination.

Figure 12 Flowchart visualisation of the PostNord case

5.2.1.4 Domino’s Pizza


The global pizza chain Domino's Pizza is the largest pizza company in the world, with over
18,800 locations in more than 90 markets as of the beginning of 2022 (Domino's Pizza,
2021). Domino's provides a pickup service to their customers, but their main service is delivery
of pizza directly to them. This is what HUGO is intended to be used for in this case. A designer
on the team states that HUGO will be operating in a smaller area around a Domino's Pizza
store and manage the delivery of the pizza from the store to the end user's location. See
Figure 13.


Figure 13 Flowchart visualisation of the Dominos´s Pizza case

5.2.1.5 Ericsson
The Ericsson office in Kista Science Park wants to test their traffic sensor technology with
the help of HUGO. A developer on the HUGO team explains that in order for Ericsson to be
able to test their sensors and collect data, they need a service in place that lets HUGO
operate in the environment intended for their sensors. Therefore, the
context of the Ericsson case is the joint area of Kista Science Park and the Kista
gallery. The operation performed by HUGO, seen in Figure 14, will be delivering food from
a restaurant in the Kista gallery food court to employees at the Ericsson office in Kista Science
Park.

Figure 14 Flowchart visualisation of the Ericsson case


5.2.1.6 Findings in flow charts


Despite the differences in the processes themselves and the contexts in which they operate,
when analysing the five flowcharts and cross-examining them against each other, it could be
identified that all the flows included similarities in how the user interacts with
the robot when either retrieving or sending the parcel. Tasks identified as common to all
cases were:

• Unlock/open box
• Take/drop off goods
• Close/lock box

In the process of creating and analysing the flows, some questions were raised surrounding
the design of the user interaction. One of the biggest questions was how and what
information to present to the user. When to present the user with this information also
became important to take into consideration. Besides informing the user, communicating
to the user when the start and end of the interaction with the robot was happening was
found to be another area that needed to be investigated further. Furthermore, this relates
to the user's understanding of who is in control at a particular point of the process and the
importance of making the transfer of control between the user, the phone and the robot evident
to the user.

Another insight from analysing the service is the lack of design for when the user fails an
interaction. What should happen when a user makes an action that is not in line with the
intended actions in the design: should the user correct the action, or should the system fix
the error? As can be seen in Figure 10, Figure 12, Figure 13 and Figure 14, when the user
makes a choice different from the intended action, the flow goes out of scope and leads
nowhere. These are points of interest and were therefore identified to be important in the
process of designing the flow of the service, where the implementation of the web app could
support the user in the case of an unintended failure in the interaction.

5.2.2 Future analysis


The future analysis concluded in a mood board presenting several images and notes from
the explored subjects, see Figure 15. The analysis resulted both in design ideas for how the
UI should look and in an understanding of how a delivery service with a digital interface operates
from the customer's point of view. Examples of interactions with smart objects through a phone
interface, like e-scooters and Instabox deliveries, were found to be inspiring in areas like HCI,
information to the user and mechanics controlled through a phone.


Figure 15 Moodboard with results from the future analysis

From the analysis and following discussion a few important ideas and patterns were found:

• It was found that an app or web app was the most common way to make the service
accessible to customers. According to the HUGO team there are also some negative
feelings towards traditional downloadable mobile apps among customers, who see them
as an unnecessary extra step in using the service, supporting the idea that a web app
would be what users expect from a service such as HUGO delivery.

• Interaction with a physical object through the phone, like opening the Instabox or
unlocking the e-scooter, was usually done by having the user interact through the
phone but with something physical on the object, like buttons or a QR-code.

• Starship is an experimental robot delivery service at a campus in the US that offers
the same type of service that HUGO delivery does in the Ericsson case. It shows
that this type of idea already exists and is possible to implement.

• The app used for Starship has a few interesting features specific to an autonomous
delivery robot that were interesting to explore. Especially how they convey
information to the user about what the user needs to do and what the robot does
automatically.

• The Starship app is also an example of what types of interactions are controlled by
the user and what is done automatically. For example, the box unlocks automatically
when the user states that they are next to the robot and locks again when the user
states that they want the robot to leave.


• The analysis also resulted in pointers on how the UI for a delivery app could be designed.
Creating an interface with attributes that are common in other apps could make it
more self-explanatory and let the user know what to expect. There were patterns in
how and where information was presented and what kind of information the user
received, like map location and time remaining until delivery.

• The interactive steps of the delivery apps provided a framework for what interactions
were necessary in a delivery service, like tracking and confirming an order.

• The delivery service Instabox uses stationary boxes that the user unlocks with their
phone, and this was especially interesting for researching how the user could open
the box through a phone interface. It sparked interesting ideas on how to design the
unlocking interaction and indicated that users are accepting of receiving a
delivery service through their phone.

5.2.3 Service blueprint


To map the interactions for the Ericsson case and find what type of actions all different
actors in the service do during the user’s journey to the end goal, a service blueprint was
made. The service blueprint created for the given case can be seen in Figure 16.

The chart represents the service with a focus on the customer's journey and is less
thorough in the backstage section. An interesting finding from the blueprint was that the
section of interest to the thesis stretched from the step of finding the autonomous agent to it
driving off. This part was unique since the user interacts with the AI in this section, making it
distinctly different from regular delivery services in that sense. The blueprint also showed
what types of actions were necessary for the service to function, which created a
comprehensive layout of what actions should be present in the web app.


Figure 16 Service blueprint of how the HUGO delivery service will operate

6. Developing interactions for the Ericsson case
To test and evaluate the identified interactions of the five cases, a concept needed to be
developed. The Ericsson case was selected to be further developed in the project,
as it was relevant at that point in time as well as requested by the company. Since the
company was a startup working with an agile and iterative process, the choice was made to
adopt the Lean Startup methodology, working in fast feedback loops consisting of build-
measure-learn loops. These loops were performed in sprints, where one sprint equalled one
loop. The sprints lasted two to three weeks each, and the average time for building the
prototype was one to two weeks. The last week of the iteration was used for testing, in other
words the measure and learn parts of the feedback loop.

The service blueprint and flowchart from the pre-study phase were used as prerequisites at
the start of the development phase. The first iteration served as a baseline for the whole
design iteration process, where the initial assumptions made in the pre-study phase were
tested against expert users, meaning the developers of HUGO. By creating quick and
unpolished wireframe prototypes that could be tested against expert users, some initial user
feedback could be collected and used to develop the next iteration.

In accordance with the Lean Startup methodology, each iteration had one hypothesis as its
starting point, deciding what to test, which in turn affected what to build in the MVP (minimum
viable product) for that iteration. The hypotheses themselves were based on the assumptions
made from the findings of the previous iteration or, in the case of the first iteration, only on the
pre-study. The hypothesis additionally assisted in keeping the development of the MVP
focused on what was going to be tested.

6.1 Prototyping
Prototyping was consistently used throughout the design process in the project and was
performed with multiple levels of detail and intentions. The purpose nonetheless stayed
the same throughout the project: to explore different design ideas through designing and
testing with users. As with the whole development phase itself, the Lean Startup approach was
adopted for the prototyping as well. The prototyping was therefore performed according to the
process proposed by Ries (2019) for creating MVPs, which meant planning in reverse order of
execution by first deciding what and how to test the iteration's hypothesis before designing the
actual prototype. Moreover, the prototypes lacked many of the interactive implementations for
the components that were deemed not relevant for testing the hypothesis. This helped keep the
prototypes leaner by avoiding overly advanced prototypes which would only add unnecessary
complexity irrelevant to testing the hypothesis, saving time in developing the prototypes and
ensuring focus on the right components, whilst also making it easier and quicker to pivot to
other ideas as the complexity of the prototypes was kept rather low and the focus narrow.

Whilst taking many different forms during the project, the produced prototypes were
essential to the whole design process. To get the most truthful feedback from the users, all
the prototypes were experience prototypes. This was motivated by Buchenau and Suri's (2000)
statement that experiencing the product gives the user a better understanding of it than
reading or hearing about it. Creating these experience prototypes can be accomplished in
multiple ways, both analogue and digital, and for creating more advanced high-fidelity
digital prototypes there are also multiple alternative tools to use, each with their own
advantages and disadvantages. In this thesis project the web-based design tool Figma was
chosen for designing the prototypes. The choice was based on the tool's ease of use, previous
experience working with it, as well as the abilities for collaboration and testing that it offers.
Additionally, the company also uses the tool internally, which further motivated the choice as
it allowed for easier handover of design material to the company at the end of the project.

To start the design process of the user interface in the first iteration, the method
wireframing, see 2.7.8, was chosen because it allowed for rapid and iterative sketching of
low-fidelity designs to explore different concept ideas. The process of creating wireframes
started with quick sketches with pen and paper, to explore ideas in a swift manner and to
get up and running easily. To facilitate the exploration, blank templates of phones were
printed on paper to avoid needing to draw the framing of the phone when sketching.
Additionally, when using the templates, the results also became more consistent as the
frame gave a reference point for size, which also led to the proportions being more accurate
and even across all designs. This made them more realistic in terms of sizing and therefore also
easier to reuse in later stages.

After producing wireframes on paper, the design process moved over to creating higher-
quality wireframes in Figma. The reason for switching to a graphical design tool like
Figma was that one of the main benefits of working in such software, as opposed to on
paper, is that iterating over ideas becomes easier and quicker once some elements are
created and composed into basic designs. The trade-off of working with software-based tools
is the increased time required initially to get set up and create the first designs, but once that
is done, the process of iterating over design ideas becomes more efficient. The digital wireframes
were later used in the first iteration as a foundation for creating prototypes. The focus was to
combine different ideas into single concepts and to make them interactive; by doing so the level
of fidelity was also raised, making the prototypes more suitable for user testing.

In the second iteration, the prototyping process mostly involved refining the designs from
the first iteration as well as combining the different concepts into one. Specific to the
prototyping in the second iteration was also designing for the alternative cases that the
iteration aimed to explore. This meant creating multiple variations of parts of the design,
where some elements were either added or tweaked in order to introduce the intended
failure, creating the scenario to be tested. For example, error messages were
implemented to simulate and test the experience of when the user loses connection to
HUGO and can't interact with it.

For the third and last iteration, the prototyping process focused on combining all the
insights from the first and second iteration into one elaborate high-fidelity prototype. The
prototype was created to be used and validated in the last user testing. Therefore, additional
interactions that were previously not fully designed for were added and made usable to
simulate as close to the full experience as possible. This meant adding additional menus,
information overlays and other features that had not been relevant to the previous tests.

6.2 User testing


A plethora of methods was combined to create three separate user tests, and the structure
and complexity of the user testing differed at each stage of testing. All tests involved testing
with users and asking questions, both open-ended discussion questions and structured
questions. The most obvious difference between the tests was the use of scoring methods. The
first test involved a SUS score to ensure that the first prototypes reached the proper
benchmark, a score of 68 or above, for usability before evolving them further. The second
and third tests focused more on the experience of the users through scenario roleplay and
discussion, but the third test also involved a scoring of the overall experience from 1-5 to
get an indication of how well the final design worked. Another difference was the use
of expert evaluation in the first user test, where experienced members of the HUGO team
tested the product. This was to ensure that the prototypes were in line with what the team
had done so far. The tests were conducted after each iteration and tested different
hypotheses. The results from each test guided the design of the next iteration and the final
design proposal. All tests were conducted separately so that the users could not influence
each other.

The analysis of the user tests was done after each test by reading the protocols/transcripts
from the interviews using the TCA method and discussing the findings. This involved reading the
transcripts/protocols and marking interesting or relevant text by colour categorisation.
Categories differed depending on the questions asked but could, for example, be that
several of the users mentioned a specific improvement or that they reacted negatively to
something. When all the text had been categorised, a summary of the categories was created and
used as a basis for discussion and brainstorming.

6.3 Iteration 1
For the first iteration the goal was to design quick and unpolished wireframes in order to
test the hypothesis, which was based on the information gathered in the pre-study. Due to
the nature of the project, a startup company developing a product that has not existed
before, there were no actual users to interview and test with, only future users.
Therefore, the hypothesis of the first iteration was merely based on assumptions made from
the material of the pre-study, and the goal was thereby to test whether these assumptions
were accurate or not.


The assumption was that since this is a new type of interaction for a large portion of the
future users, the service will need to provide the user with information regarding the
delivery process and give clear instructions on what is expected of the user to reach their
goal. Therefore, the hypothesis for the first MVP was formulated as follows:

• The user wants information available and presented to them about how and when to
interact with the autonomous agent.

In addition, when creating the MVP, the goal was also to experiment and test different forms
for how information could be presented to the user.

6.3.1 Method
In this chapter, the process and methods used for the first iteration will be presented.

6.3.1.1 Bodystorming
Bodystorming and roleplaying were used to further analyse and explore interacting with
HUGO. The delivery box from an old HUGO model was used as a representation of the robot
in the roleplaying, allowing for direct interaction with the lid when opening and closing the
box. In the bodystorming session three scenarios were tested to explore different ways of
interacting with HUGO with the phone as a tool. With the exception of the first scenario,
which was the intended case, the scenarios were created while performing the session in a
"what if" manner.

The first scenario – Web app


This was the intended scenario based on the findings from the pre-study, the task analysis and
the initial wireframing. The scenario focused on exploring the user experience when
interacting with HUGO through a web app as the interface. The initiation of the interaction is the
user receiving an SMS message with a link to the web app. From that point forward the user
only interacts with the service through the app and with HUGO itself physically.

The second scenario – SMS messages


Taking a more minimalistic approach to the interaction and the use of technological
implementation, the second scenario explored interacting with HUGO without a web app as an
interface to the service. Instead, the only user interaction with the service, except with HUGO
directly, is through receiving SMS messages with information. The start of the interaction is the
same as in the first scenario, via an SMS text but with more elaborate information on the process
and the steps involved; thereafter the user only interacts physically with HUGO.

The third scenario – Two-way SMS communication


The third and last scenario took inspiration from the second scenario and its minimalistic view
on the use of technology. Similarly to the second scenario, the service communicates
information to the user over SMS messages; however, the third scenario introduced SMS
messaging as two-way communication, meaning that it explored the user experience of
interacting with HUGO by sending commands back to HUGO via SMS.


The goal of using these three scenarios was to explore the initial idea of using a web app as
an interface between the user agent and HUGO, the autonomous agent, as well as exploring
alternative scenarios in a "what if" manner. In doing so, it opened up the possibility of
identifying aspects of the scenarios that work without a web app and could in that case be used
to simplify the interaction. The other scenarios were therefore compared to the
web app case, which served as a reference point. At this point in the design process, no
actual interactive prototypes of the web app had been produced, and therefore the interaction
with the web app in this case was based on the wireframe drawings produced as well as
imagining the interaction with a web app. The focus of this stage was therefore not to
explore the design of the user interface for the app or the SMS messages, but rather their
roles as interfaces between the user and the autonomous agent.

6.3.1.2 User test: Iteration 1


For the first iteration the wireframes of the web app were tested with six users. The
prototypes were shown on a computer screen and no autonomous agent was used. In this
testing round expert evaluation of the prototypes was in focus, and the testing involved user
testing and discussing design ideas with the expert testers and design students. Experts in
this case are those who have worked in the HUGO project and as senior UX designers. The
other users that tested the prototypes had some knowledge of UX and design since they
were all students in relevant academic areas. Four users were from the HUGO team and
two users were design students. However, one of the students could not be present for the
whole test and only did the test up until the SUS portion was done. Therefore, few
comments on the design exist from this user.

Design students were fitting user testers and tested the design to give insight into how users
without a connection to HUGO would experience the prototypes, while being experienced
enough to give relevant feedback and design ideas. This form of testing at an early stage
allows a type of co-design, where the users are not only testing but can also express ideas
and opinions, giving more depth and insight into the design.

The structure of the first test:


• The test started with gathering demographic data like age and the user's perceived
experience level with technology, e.g., their usage of digital devices and how quickly
and comfortably they felt they could navigate new software. They were also
asked how they perceived using an autonomous delivery service.
• Then three different prototypes were shown by giving the user tester a scenario
connected to the prototypes. The facilitator explained the scenario and what was
happening in each step to the user tester and gave them the task: receive your
package from the robot by using the app. The facilitator could also give the user
tester a specific task for each step if the user didn't instinctively know what to do, for
example asking them if they could find how to unlock the box. After each prototype
a SUS evaluation was done by the users.
• The user then got to see the two other prototypes that were not tested with SUS and
gave opinions on all five prototypes by answering a few discussion questions
to find out what thoughts and opinions arose from them. Follow-up questions also
occurred if a user tester reacted to something specific in the prototypes or said
something that could be elaborated upon.

The reason that two of the prototypes were not tested with SUS was that they were similar,
in the number of interactions and how the flow was built, to prototypes that were tested. They
were mostly discussed to examine the placement of information and UI elements. The whole
description of the test and the questions asked can be found in Appendix A.

The tested prototypes that were evaluated with SUS can be seen, in order of testing, in
Figure 17, Figure 18 and Figure 19.

Figure 17 Prototype 1 of user test 1 depicting a flow where a curtain design is used.

Figure 18 Prototype 2 of user test 1 depicting a flow with a card design and steps of the
interaction in the top.


Figure 19 Prototype 3 of user test 1 depicting a flow with fewer interactions.

The prototypes that were not tested but shown and explained to the user testers after
prototype 1-3 can be seen in Figure 20 and Figure 21.

Figure 20 Prototype 4 in user test 1 depicting a similar flow as prototype 1 but with another
design and different information placement.


Figure 21 Prototype 5 in user test 1 depicting a similar flow as prototype 3 but with some UI
inspiration from prototype 2.

6.3.2 Findings
In this chapter the findings from the first iteration are presented.

6.3.2.1 Wireframing
By analysing the interaction points in the service blueprint and the flowchart for the Ericsson
case, wireframes of different screen views could be designed. These wireframes aimed at
exploring different design concepts for the assumptions and the hypothesis.

The results from the initial wireframing can be seen in Figure 22, Figure 23 and Figure 24.
There were other wireframes as well, but these were determined not to be clear
enough to be presented as they were only quick sketches.

Figure 22 Sketch wireframes from iteration 1 showing a concept involving lock and unlock
buttons.


Figure 23 Sketch wireframes from iteration 1 showing a concept involving fold-out actions

Figure 24 Example of wireframes produced in iteration 1


6.3.2.2 Bodystorming
The results from the bodystorming session consist mostly of insights found when performing
the roleplaying and from discussions that followed during and after performing the different
scenarios. The results are therefore in the form of insights found when summarising the
discussion.

First scenario – Web app


One of the key findings in the first scenario was the ambiguous state of HUGO when the user
tries to unlock or lock HUGO in the web app, meaning that the current product lacks clear
feedback for when the state changes; as of now the only indication is the sound of the lock
opening, which is not guaranteed to be audible in all environments. One suggestion presented
for this issue was to add a physical indicator to the robot to visualise the lock's state, i.e.,
whether the box is locked or not.

Second scenario – SMS message


The lack of the accessible information about the service and the status of the delivery that the
web app can provide was evident in this case; the users must rely more on their own
intuition. In this case the service can also only provide the needed information in the initial
text, whilst with the app the user can be guided step by step by the service. The issue
presented in the first case is also present here, however more evident, as the user has no way
of accessing the current state of the autonomous agent. Moreover, this in turn also affects the
end of the interaction, as the workshop showed that there was no clear ending to the
interaction and no channel to present the information to the user either.

Third scenario – Two-way SMS communication


The main benefit to this solution is that SMS is a less technically advanced solution whilst
also being more accessible than a web app, only demanding that the user have access to a
phone that can receive and send SMS messages. The evident downside however was the
fact that the actual input from the user became more complex as pushing or sliding a button
in the web app is simpler in its form than typing and sending commands. The interaction
shares similarities with interacting through a terminal on a computer, which one could
argue is less common amongst phone users than interaction patterns common to mobile
apps.

Findings
The bodystorming session also presented other interesting insights not specific to only one
case. Both the first and third case presented an interesting aspect in the interaction where
the user pick up the package from the box. At that point, holding the package in one hand
and closing the lid with the other could prevent the user from accessing the phone. Meaning
that at that point it might not be possible to provide any new information or instruction to
the user and at the same time not possible for the user to give input back to the service
through the app, creating a short interval where the web app is out of scope for the service.
Relevant for when designing the information flow of the web app, taking into consideration
when the user will and can have the phone in their hand will affect what and when to present
information.


During the session a question was raised related to all three cases, about when to present
information to the user about the delivery as well as how to address the situation if there is an
issue with the delivery address. Should there be an SMS text sent when
the robot has arrived or right before it arrives? How should the service handle it if the
address for the delivery is not correct? These questions were found to need addressing later in
the design of both the service and the web app.

6.3.2.3 Prototyping
Utilizing the results from the task analysis together with the produced wireframes allowed for
pairing the individual wireframe designs into flows of interactions. This resulted in five
prototypes; however, as four of the prototypes shared many similarities with one another,
only three of them were turned into interactive prototypes. These three can be seen
in Figure 25. The three prototypes were designed to test different design concepts and
principles, where each has its own focus with different design suggestions for layout and
service flow.

The first and second prototypes present the user with information on the process: what step
they are on and what steps are left to complete the process and thereby reach their goal.
The two prototypes test different design concepts for presenting the information, where the
first has a status bar at the top of the screen showing the steps. The second one, however,
doesn't change screen but rather opens a different section, which can be compared to a dresser
where each step is a drawer. As you progress through the process you open the next drawer
and close the previous one.

The third prototype adopts a minimalistic design where the goal is to test the user
interaction with as few screens, and thereby as few interactions, as possible. The prototype
explores the assumption that there should be a step before unlocking the robot where the
user must verify that they are at the robot. By removing this verification step in the
prototype, the interaction as experienced by test users can be compared to the other two
prototypes and used to determine whether the assumption is correct.

Figure 25 The three different prototypes produced in iteration 1

6.3.2.4 User testing


User tests were used to confirm or reject hypotheses about the design. The results from the
different tests were used to further develop the design for the next iteration.

Test 1, the first iteration


The findings from the first test consisted of a SUS score and opinions and feelings towards
the app and the concept. Observations and design input were also important when
evaluating the prototypes and were a part of the testing. One user did not leave any
discussion feedback but answered the SUS questionnaire. The complete protocol from test 1
can be found in Appendix B. When summarising the results, the SUS scores, as seen in
Table 1, were the following:
Table 1 SUS score results

User           Prototype 1   Prototype 2   Prototype 3
Expert 1       75            72,5          82,5
Expert 2       75            62,5          72,5
Expert 3       77,5          82,5          77,5
Student 1      85            100           90
Student 2      85            97,5          95
Student 3      92,5          85            92,5
Average score  81,67         83,33         85

From this information the following findings could be extracted:

• All flows evaluated with SUS scored above the required threshold of 68; the only
  exception was one user scoring prototype 2 at 62,5, making the average score
  for prototype 2 a bit lower than expected.
• On average the students scored the different app flows higher than the expert users.
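
As a reference for how the numbers in Table 1 are read, the sketch below shows the standard SUS calculation and the averaging used above. The raw questionnaire answers are not reproduced here; the example array is a hypothetical illustration.

```typescript
// Standard SUS scoring: ten items answered on a 1–5 scale.
// Odd items contribute (answer - 1), even items contribute (5 - answer);
// the sum is multiplied by 2.5 to give a score between 0 and 100.
function susScore(answers: number[]): number {
  if (answers.length !== 10) throw new Error("SUS requires exactly 10 answers");
  const sum = answers.reduce(
    (acc, answer, i) => acc + (i % 2 === 0 ? answer - 1 : 5 - answer),
    0
  );
  return sum * 2.5;
}

// Averaging per prototype, as in Table 1 (the per-user scores shown there are the inputs).
const prototype1Scores = [75, 75, 77.5, 85, 85, 92.5];
const average = prototype1Scores.reduce((a, b) => a + b, 0) / prototype1Scores.length;
console.log(average.toFixed(2)); // 81.67, matching the reported average for prototype 1
```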

From the interviews and discussion following points were raised by users:

• Most of the users displayed a mix of anxiety and some excitement about using HUGO
  before testing the flows, e.g., uncertainty about the safety of the package and about
  usability for people with less tech experience. Expert users expressed more
  excitement about the project, thinking it will be efficient and fast and liking the idea
  of robots, but still expressed some uncertainty as well.

• Users showed a preference for having only one thing to do on each screen, and clear,
  short instructions were important for the majority.

• All flows were deemed rather easy to use by 4/5 users.

• Help buttons and undo buttons were wanted to make users feel less anxious.

• Users showed a preference for having a progress bar to be able to tell what the
  next interaction would be or how many were left. One user also mentioned that it gave
  them a feeling of control.

• 3/5 liked that there were pictures showing how to open HUGO. One user
  commented that a picture of HUGO should be shown before the interaction so that
  users who have not seen the robot before know what to look for.

• Two users mentioned that a slide bar for opening and closing the box's lock is
  appreciated since it would be harder to accidentally click the opening/closing
  button.

• There were some opinions about the contact info being unnecessarily large and that
  it could be folded out when clicked on instead.

• Confirmation of being next to HUGO was deemed unnecessary by one user but the
rest didn’t comment on it.

• Opinions differed on how to open the box: 2/5 liked using only the phone, 2/5 would
  have wanted a physical interaction with the box, and 1/5 was indifferent but didn't
  mind using only their phone.

• The one solution for opening the box with a physical interaction that received overall
  positive reactions was the QR code option.

• One opinion was stated about the need to be able to see that the robot standing by
  them is the HUGO assigned to them in the case of several HUGOs being present,
  either by a colour or number on the robot or by a confirmation from the robot on the
  phone.

• A function to confirm the delivery address before HUGO starts to drive was suggested.
  In connection to this, a function where the user pins their location on a map came up
  as an alternative to confirming the address.

• The warning that HUGO will drive away was appreciated by most users, but one user
  pointed out that a warning symbol could be unnerving to some users since it could
  make them think they have done something wrong.

6.4 Iteration 2
Iteration 2 had a focus on Activity-Centered Design, exploring how to combine ACD with the
HCAI framework to identify potential collaborations and issues. The ideas and feedback from
user test 1 were incorporated into the prototypes, as well as ideas from ACD/HCAI. Signals
and feedback from the autonomous agent and the web app were of great interest and the
hypothesis for this iteration was:

• The user wants to both interact digitally and physically with the robot as well as
receive digital and physical feedback when interacting.

6.4.1 Method
In this chapter the process and methods used for the second iteration are presented.

6.4.1.1 Task analysis


The goal of the initial task analysis was to map out all the necessary tasks needed to be
performed by the user in reaching their end goal of receiving their ordered food. The service
blueprint, particularly the customer journey section, was used as a starting point to
understand how the user interacts with the service, which assisted in mapping the tasks
performed by the user. As with the delimitation for the project, the same applied for the task
analysis, meaning that it only took into consideration the user’s interaction with the service
from the point of having ordered food to when HUGO leaves the user. Everything that
happens before and after that period was not considered when performing the analysis.
Additionally, the specifics of the service design were not fully decided at the time and
there was also no actual user or use case to fully base the mapping on. Thus, there was a
need for making both assumptions and decisions on parts of the service's design, which
ultimately affected what user tasks would be included in the visualisation of the analysis.

However, seeing as this was a method used to further explore the case, making decisions based
on assumptions was justified as they were supported by the collected insights from the first
iteration of user testing. The visualisation from the task analysis was, like the service
blueprint, also a living document that was later revised with corrections and changes as a
better understanding of the tasks was attained. Since it was later used with other methods,
both new information appeared and changes to the design of the service were made. This
had an impact on the flow of tasks for the user as well as on the understanding of it,
meaning that changes also had to be made to the visualisation of the task analysis.

6.4.1.2 Activity Centered Design


For the second iteration of the design process, when the foundation for the web app was
laid, the supervisor of the thesis project, Chu Wanjun, suggested adopting Activity-Centered
Design. He proposed combining theory on ACD with the Human-Centered AI perspective
and framework from Shneiderman (2020a, 2020b, 2022). The result from the task analysis
would then be used as the scaffolding for this process. In this chapter the process for this
step and how the methods and framework were combined is presented, and an illustration
of the structure can be seen in Figure 26.

The first step in the process was to use the result from the task analysis and to restructure
and transform it in accordance with the different abstraction layers in ACD presented
by Norman (2013; 2005). The highest layer in the abstraction, the activity layer, mapped
directly to the top task in the task analysis, also referred to as the user goal. Next, the first
level of tasks in the analysis corresponded to the action layer. Lastly, the associated
subtasks were placed in the operation layer with connecting lines to their parent tasks in the
action layer above. The mapping was therefore as follows:

• Level 0 task (User goal) → Activity layer
• Level 1 task → Action layer
• Level 2 task (subtasks) → Operation layer
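
As an illustration of this restructuring, the sketch below models the three abstraction layers and the mapping from the task analysis levels. The type names and the example strings are assumptions made for illustration and are not a literal transcription of the chart in Figure 26.

```typescript
// The three ACD abstraction layers used in the mapping (activity → action → operation).
interface Operation {            // Level 2 task (subtask)
  description: string;
  agents: string[];              // e.g. ["user"] or ["user", "autonomous agent"]
}

interface Action {               // Level 1 task
  description: string;
  operations: Operation[];
}

interface Activity {             // Level 0 task, the user goal
  goal: string;
  actions: Action[];
}

// Hypothetical example following the Ericsson food delivery case.
const receiveFood: Activity = {
  goal: "Receive the ordered food",
  actions: [
    {
      description: "Localise HUGO",
      operations: [
        { description: "Find each other", agents: ["user", "autonomous agent"] },
      ],
    },
  ],
};
```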

After placing all the tasks from the task analysis in the different layers, the next step in the
process was located at the operation layer. Each operation was analysed to identify the
agent, its goal and the tool used to perform the operation. If the operation affected or
involved more than one agent, the correlation between them was of interest. Additionally,
if two or more agents shared a goal, that indicated that there was a collaboration between
the agents.

When these collaborations were identified the next step was to both identify potential errors
caused by the collaboration of the agents as well as to map their collaboration in
Shneiderman’s (2020a, 2020b, 2022) HCAI framework. The interest of HCAI in this project
however was on the relation of control between the agents and how it is transferred or
shared in operations. Shneiderman’s framework on the other hand explores the relation
between high-low human control and high-low AI automation. To better align with the
interests of the report, the framework was thus modified to instead explore the dynamics of
when agents are either active or inactive in collaborations. This meant that both the axis in
the framework was changed to range from inactive to active making it a binary metric. Active
agent was defined as an agent taking an active role in reaching the common goal in a
collaboration. Inactive were on the other hand defined as taking a passive role in reaching
the goal, still able to participate in the collaboration with actions but only when explicitly
requested by the other agent. When being active, agents could take actions in response to
the other agent’s activities.
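
A minimal sketch of this modified framework is given below: each design alternative for an operation is classified by whether the user and the autonomous agent are active or inactive, giving the four quadrants that were explored. The type names and the example alternative are illustrative assumptions rather than a literal transcription of the charts.

```typescript
// Binary activity metric used instead of the original high/low control-automation axes.
type AgentActivity = "active" | "inactive";

interface DesignAlternative {
  user: AgentActivity;        // role the human user takes in the collaboration
  agent: AgentActivity;       // role the autonomous agent takes
  description: string;
  conflicts?: string[];       // potential failures identified for this quadrant
}

// Hypothetical example from the "Localise HUGO" operation (cf. Table 2, alternative 3).
const bothActive: DesignAlternative = {
  user: "active",
  agent: "active",
  description: "Both the user and the agent are actively searching for the other.",
  conflicts: ["Risk of the user and the agent circling each other"],
};
```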

Figure 26 Combining ACD with the HCAI framework

Identifying collaborations in the operation layer and exploring them in the framework was
performed iteratively, where the two parts provided each other with input for the next
iteration. In the process of exploring collaborations in the framework, alternative
collaborations with different tools or goals could be identified. This further opened up the
possibility of connecting operations and actions together, changing them into larger
collaborations. The exploration of a collaboration was performed by moving between the
different quadrants, developing different design alternatives for the operation. Identifying
alternative collaborations for operations by moving between the four fields in the
framework not only gave new perspectives on the collaboration but also generated solutions
to conflicts by testing and shifting the level of activity of the actors.

The purpose of using this method was to identify potential failures in the design, both from
the user and the autonomous agent separately, but mainly to identify potential conflicts
caused when they actively collaborate at the same time. Identifying these potential conflicts
opened the possibility of finding solutions by shifting the levels of activity between the
agents to find ways of preventing the conflicts from occurring. Naturally there were cases
where conflicts could not be solved by tuning the relation of activity between the agents, in
which case contingencies were implemented in the design of the web app to address these
conflicts when they occur. Exploring these conflicts also raised questions about the risks of
conflicts and their impact on the user experience, which led to discussions on whether the
probability of them occurring and their consequences for the user experience are severe
enough to be worth addressing, or if doing so adds unnecessary complexity, making the risk
worth accepting over the added complexity.

6.4.1.3 User test: Iteration 2


The second test focused on interactions between the human and the robot: how a user
understands the signals of the autonomous agent, how they respond to the solutions for fail
scenarios, and how they perceive their control of the autonomous agent. Five user testers
participated in this user test. All of the users were university students aged 20-26 and rated
their level of technical ability as high. None of the users had any affiliation with or previous
knowledge about the HUGO robot. The test heavily relied on a 'Wizard of Oz' set-up, where
functions and props were faked to give the impression of testing the real product.

The test was done using a mock-up version of HUGO (Figure 27), since the real HUGO was
not available for testing. This also allowed for quicker and more efficient testing since the
technical setup for HUGO wasn't needed. The robot used was a smaller type of radio-
controlled car (RC car) with a box mounted on top of it. The RC car was controlled by an app,
and it also had controls for light and sound, making it possible to test light and sound signals
towards the user. Other functions, like the sound of the box clicking or moving the lid, were
performed by one of the facilitators.

Figure 27 A mock up HUGO robot

Figure 28 A user opening the mock up HUGO robot.

The user participated in a type of roleplaying session where they interacted with a mock-up
HUGO robot and a Figma prototype of the app on a phone given to them during the test,
allowing for more control of the prototype from the facilitators' point of view. There were
two different sections of the test: the first had the user go through the whole interaction
with HUGO, from receiving the order confirmation to sending HUGO off, and the second
focused on four potential fail scenarios to see if the user understood the solutions built into
the app. The user was encouraged to talk out loud during the session to help the two
facilitators understand what impressions, feelings and thoughts came to mind. The
facilitators in turn explained the scenarios to the users, managed notes, and asked interview
questions. At the end of each test section a discussion was held with the participant to
discuss the experience and whether the participant had any ideas for improvement. The
questions from the test can be found in Appendix C.

The test was conducted as follows:

• The user started by answering the initial questions and the facilitators explained the
scenario and what the HUGO delivery service is, showing a picture of the HUGO robot
for clarity. The user was handed the phone with the Figma prototype with an
explanation on how to use it during the test.

• The test started with the first scenario where a normal interaction with the HUGO
  delivery service takes place. They were given the premise: You have called the
  restaurant and ordered to have your food delivered with HUGO to your address. You
  want to acquire your food and complete the delivery with HUGO.

• The user played out the scenario with the mock-up HUGO robot by following the
  instructions on the prototype. Texts with links to the prototype were sent to the
  phone to mimic how a real service contacts its customers. They received a text
  telling them that their order was confirmed and one telling them that HUGO had
  arrived at their address. The scenario ended when the user managed to tell HUGO to
  leave.

• Then the second scenario was presented, which consisted of four different and
  shorter scenarios where something in the interaction failed and the solutions to these
  failures were tested. These were:

o The wrong address was shown. The user was asked to change it. See Figure
29.
o The user was not able to see the HUGO robot and had to use the sound button
to find it. Seen in the bottom of Figure 30.
o Loss of internet connection making it impossible to open the box. Seen in the
bottom of Figure 31.
o The user not closing the lid. This resulted in a text telling the user that they
had forgotten to close it and needed to finish the interaction.

• When the test was finished the user answered questions and discussed how the
  interaction felt. This was done to find out more about how they experienced the
  robot and whether they had any suggestions for improvement.

Figure 29 Web app frame showing the user's address and an alternative to change it
Figure 30 Web app frame showing where HUGO is on the map and the button making HUGO play a sound to find it

Figure 31 Web app frame showing the loss of connection. Loss of connection message is at
the bottom of the frame.

The frames seen in Figure 32 were the ones tested in the normal scenario by users together
with the mock up version of HUGO. They are presented in chronological order.

Figure 32 Prototype produced in iteration 2

6.4.2 Findings
In this chapter the findings from the second iteration are presented.

6.4.2.1 Task analysis


From analysing the service blueprint and flow chart produced for the Ericsson case in the
pre-study, a task analysis was performed which produced a mapping of the user tasks
presented in Figure 33. This allowed the flow to be examined in detail and the tasks and
subtasks of the service flow to be identified.

Figure 33 Visualisation of the task analysis, showing the tasks involved for the Ericsson case

6.4.2.2 Activity Centered Design


The chart presented in Figure 34 is the result from the ACD method. The chart is divided into
three separate figures that highlight the sub-sections of the chart in detail. See Figure 35,
Figure 36 and Figure 37.

Figure 34 Activity-Centered Design result (zoomable)


In the first sub-section, see Figure 35, the first three items in the action layer are examined.
The first operation represents both of the first two actions, check and confirm delivery order
and view updated status. The first action takes place in the beginning of the interaction with
the service, when the user has received an SMS with a link to the web app. Here one potential
issue was identified: the address specified in the delivery could potentially be wrong. To
address this issue and avoid the delivery robot driving to the wrong address, the user had
to confirm the address before it departed from its pickup point, in this case the restaurant.
However, when the address was indeed wrong, the user needs the option to correct the
delivery address, and thus the option to change address was added in the prototype. The
view updated status action had a similar operation as the first action and therefore shared
its operation; however, the user doesn't confirm the information and there was also no
identified failure or conflict for the action. Since the autonomous agent is not present in
either of the actions, the mapping of the HCAI relation was not explored extensively and thus
visualised in a simple way.

For the third action, Localise HUGO, the first interaction with the autonomous agent is
introduced. In this operation both the user and the autonomous agent share a similar goal of
localising the other; therefore they have a common goal of finding each other. When
exploring this collaboration in the operation layer and in the HCAI mapping process,
multiple design alternatives were suggested, see Table 2, and potential conflicts and failures
were identified. When both the user and the autonomous agent are active at the same time,
searching for one another, a potential risk of them circling each other was identified.
Similarly, if the user expects the autonomous agent to locate them and the agent is designed
to be stationed waiting for the user, then there is a risk of them both waiting for each other.
When the user is active and searches for the autonomous agent, it was found that there was
a risk of the autonomous agent being obscured by some object, for example standing
behind a car in the street, hiding it from the user's field of view. To address this conflict,
sound and light on the agent were used. There were also two alternatives for how these
two elements would be used, one where the agent is active and one where it is inactive. For
the alternative where the agent is active, it would be stationary at the given address and
react to the user's movement by lighting up and/or making a sound as the user approaches
the agent. In the second alternative, where the agent is inactive, the user uses the web app
to make the agent play a sound and blink to make it easier to locate; this is also
the alternative that was chosen to be implemented in the prototype as it gave the user more
control in locating the autonomous agent.

Table 2 Design alternatives for the localise operation in the ACD mapping

Nr. | User     | Autonomous agent | Description
1.  | Active   | Inactive         | The user is actively searching for the agent and prompts the agent to blink its lights and make a sound to locate it.
2.  | Active   | Active           | The user is actively searching for the agent; the agent lights up and/or makes a sound when the user gets near the autonomous agent.
3.  | Active   | Active           | Both the user and the agent are actively searching for the other.
4.  | Inactive | Active           | The agent is actively searching for the user.
5.  | Inactive | Inactive         | Neither the user nor the agent is actively searching for the other.
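
The chosen alternative (Nr. 1) could be realised with a simple request from the web app to the service backend, as sketched below. The endpoint name and payload are hypothetical assumptions; the sketch only illustrates the idea of the user prompting the inactive agent to signal its position.

```typescript
// Hypothetical web app handler for the "play sound / blink" button (Table 2, alternative 1).
async function requestLocateSignal(robotId: string): Promise<void> {
  // The endpoint below is an assumption for illustration, not HUGO's actual API.
  const response = await fetch(`/api/robots/${robotId}/locate-signal`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sound: true, lights: true }),
  });
  if (!response.ok) {
    // Surface the failure to the user, e.g. when the connection to the robot is lost.
    throw new Error("Could not reach HUGO – please try again or contact support.");
  }
}
```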

Figure 35 First section of the ACD chart


The second sub-section, see Figure 36, focuses on the fourth action, retrieve food delivery,
where three out of four of its operations are presented. The first operation concerns the
unlocking of the lid on the autonomous agent, which was identified as a collaboration with
the common goal of unlocking the box. Four design alternatives were found that explore
the four different combinations of the active/inactive relationship between user and agent.
See Table 3.

Table 3 Design alternatives for the unlocking operation in the ACD mapping

Nr. | User     | Autonomous agent | Description
1.  | Active   | Inactive         | The user unlocks the lid through the web app.
2.  | Active   | Active           | The autonomous agent automatically unlocks the lid when the user confirms that they are at the autonomous agent.
3.  | Inactive | Active           | The autonomous agent automatically unlocks the lid when the user gets in close proximity to the agent, using the location of the phone through the web app.
4.  | Inactive | Inactive         | The lid is not locked.

As all alternatives except the one where the box was not locked relied on the connection
between the autonomous agent and the web app, the risk of that connection breaking was
identified as a potential conflict leading to a failure. It was also deemed impossible to fully
prevent, as the loss of connection between the web app and the agent could have multiple
causes. Thus, the decision was made to address the issue after the failure occurred, which
resulted in the error message in the prototype.
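
As a sketch of how such a failure could be surfaced after the fact, the unlock request below times out and falls back to the error message shown in the prototype. The endpoint, timeout value and message wording are illustrative assumptions.

```typescript
// Assumed UI helper; in the prototype this corresponds to the message at the bottom of the frame (cf. Figure 31).
declare function showBanner(message: string): void;

// Hypothetical unlock request that reports a connection failure instead of trying to prevent it.
async function unlockLid(robotId: string): Promise<boolean> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 5000); // assumed 5 second timeout
  try {
    const response = await fetch(`/api/robots/${robotId}/unlock`, {
      method: "POST",
      signal: controller.signal,
    });
    return response.ok;
  } catch {
    // Connection to HUGO lost: show the error message from the prototype.
    showBanner("Connection to HUGO was lost. Please check your connection and try again.");
    return false;
  } finally {
    clearTimeout(timeout);
  }
}
```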

For the second operation, open the lid, no initial collaboration was identified, as the
operation was initially determined to be performed by the user manually without any goal
from the autonomous agent. However, using the HCAI framework to explore other design
alternatives introduced collaborations where the user and agent work towards opening the
lid together, resulting in what can be seen in Table 4.

Table 4 Design alternatives for the opening of the lid operation in the ACD mapping

Nr. | User     | Autonomous agent | Description
1.  | Active   | Inactive         | The user opens the lid by hand.
2.  | Active   | Inactive         | The user prompts the autonomous agent to open the lid through the web app.
3.  | Active   | Active           | The agent actively assists the user when they open the lid by hand, using motors to support the user's movement.
4.  | Inactive | Active           | The agent automatically opens the lid without any interaction from the user.

No conflict was identified for this operation, but rather the insight that alternatives that
involve the autonomous agent performing operations needed motorized mechanics of sorts
and thus added complexity to the product itself. Therefore, based on the current technical
possibilities of the autonomous agent the decision was made to seek simplicity and
implement the manual solution in the prototype. Similarly, for the last operation in the
second sub-section, pickup food, a collaboration between the user and the autonomous
agent was deemed unnecessary as it would add unwanted complexity to the product.
Thus, the user is the only agent in that operation in the prototype.

Figure 36 Second section of the ACD chart

The third and last sub-section of the chart, see Figure 37, presents both the last of the four
operations linked to the retrieve food delivery action and the operation related to the action
of sending the autonomous agent off. When exploring the operations related to the action
of retrieving the food, it became evident that the operations of closing and locking the lid
were closely related to each other and could be combined. Thus, the decision was made to
explore both operations together in the HCAI framework. In exploring both operations,
multiple alternatives were identified, which are presented in Table 5. The main conflict that
was identified, highlighted in red in Table 5, is when the user forgets to close the lid when
leaving. This is present in all alternatives except for when the autonomous agent
automatically closes the lid when the user has retrieved the package and walked away from
the autonomous agent. To address this conflict in the prototype, the service sends the user
an SMS reminding them to close the lid after a given time. On the other hand, for the
alternative where the autonomous agent leaves automatically, a potential conflict was
instead found where there could be uncertainty for the user about who should close the lid,
and the user might feel uncomfortable leaving the autonomous agent without closing the lid.
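
The chosen mitigation, an SMS reminder after a given time, could be sketched as below. The delay, message wording and the callback functions are assumptions for illustration.

```typescript
// Hypothetical reminder: if the lid is still open after a set delay, text the user.
const REMINDER_DELAY_MS = 2 * 60 * 1000; // assumed two minutes

function scheduleLidReminder(
  isLidOpen: () => boolean,
  sendSms: (text: string) => void
): void {
  setTimeout(() => {
    if (isLidOpen()) {
      sendSms(
        "You seem to have forgotten to close HUGO's lid. Please close it to finish your delivery."
      );
    }
  }, REMINDER_DELAY_MS);
}
```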

Additionally, when exploring the alternatives for closing and locking the lid, ideas involving
the last action of sending off the autonomous agent were also presented. Thus, the decision
was made to also explore that action in combination with the locking operation from the
action before. In doing so, alternative 2 in Table 5 was proposed, and it was also the
alternative that was chosen for the prototype. Since the operations of closing and locking
the lid were explored before the operation in the last action, the alternatives combining all
three operations were placed in the first action in Figure 37. Besides those design
alternatives, the only alternative identified for the operation in the last action was that the
user would manually prompt the autonomous agent to leave through the web app.

Common to all the alternatives in the last sub-section was a potential conflict where the
user's expectations of the autonomous agent's behaviour do not correspond to the actual
behaviour. According to HCAI literature, this is considered a failure despite the autonomous
agent behaving according to design. Therefore, information on the consequences of certain
user actions was added to prepare the user and tune their expectations.

Table 5 Design alternatives for the closing and locking operation in the third sub-section of the ACD mapping

Nr. | User     | Autonomous agent | Description
1.  | Active   | Inactive         | The user closes the lid and locks it through the web app.
2.  | Active   | Active           | The user closes the lid and locks it through the web app. The autonomous agent automatically leaves after being locked.
3.  | Active   | Active           | Automatic locking and driving off when the user closes the lid.
4.  | Active   | Active           | Automatic locking when the user closes the lid.
5.  | Active   | Active           | The autonomous agent automatically closes the lid, locks, and drives off when the user walks away from the agent.
6.  | Inactive | Active           | The user doesn't close the lid; the autonomous agent sends a reminder text to close the lid.
7.  | Inactive | Inactive         | The user doesn't close the lid.

Figure 37 Third section of the ACD chart

6.4.2.3 Prototyping
The prototyping in iteration 2 resulted in a card design combining many of the UI elements
and interactions from the first iteration prototypes. The interaction sequence was also
modified in accordance with the findings from the ACD/HCAI analysis. The prototype can be
seen in Figure 38.

Figure 38 Prototype produced in iteration 2

6.4.2.4 User testing


From the second iteration user test the following opinions, design suggestions and
observations could be gathered.

• 3/5 users were interested and excited to try a delivery robot. 2/5 were sceptical of its
  efficiency but not entirely negative towards the product.

• Two users explicitly liked that they did not have to interact with humans at all when
using the service.

• Light and sound feedback from HUGO was brought up by many users as a positive
thing and something they would need to clearly understand the robot’s intentions.

• One user suggested that the light and sound for locking and unlocking could mimic
  those of a car.

• Two users expressed worries regarding not knowing what the robot actually does
automatically, for example if it opens the lid by itself. The users were somewhat
nervous to touch or be too close to the robot because of it.

• 4/5 users were positive about the experience of using the robot and thought it was
  simple to use. The robot mock-up was however a simple build and not very realistic,
  which some users mentioned could influence how they felt about it.

• Opinions about the amount of automation and tasks in the interaction:


o 2 users expressed that they would like almost the whole flow to be automatic.
o 1 user thought more steps would be preferable since they felt a sense of
having more control when there were more steps to confirm in the app.
o 2 users liked the number of tasks in the app.

• Opinions were divided on how automatic the robot and app should be; the step least
  often commented on as unnecessary was the opening interaction, indicating that
  users think this step is less of a hassle than the rest.

• The words used in the app were not always clear to the users, and all of them pointed
  out that they could not connect 'slide to open' with unlocking the lid. This set an
  expectation among the users that the lid would open automatically, setting their
  expectations of the AI to something that was not necessarily correct.

• Users expressed that the curtain menu explaining how the interaction works needed
pictures and shorter text, but it was very appreciated to be able to see beforehand
what to expect.

• The majority of users thought that interacting with HUGO was simple and not too
  complicated. However, 4/5 users expected the lid to open and close automatically.

• No user thought anything was missing or hindered them in doing the task and all
users could reach their goal of completing the delivery through the app.

• One user tried to lock the lid before closing it. A message telling them that they need
to close the lid before they lock could solve this according to the user.

The protocol from user test 2 can be found in Appendix D.

6.5 Iteration 3
In this iteration the focus was on implementing the changes suggested in iteration 2 into
the design and confirming that the final design was well received by users. The hypothesis
for this iteration was therefore not as specific as in earlier iterations and could be summarised
as confirming that users could understand and use the web app and seeing whether there
were any major final design suggestions.

6.5.1 Method
In this chapter the process and methods used for the third iteration are presented.

6.5.1.1 User test: Iteration 3


The aim of the last user test was to test the design and the whole flow of interacting with the
autonomous agent through the app. This meant not focusing on specific functions but on the
overall experience. It served as a confirmation that the web app design worked and could
be used to interact with the HUGO robot. In comparison to user tests 1 and 2, this test was
more of a high-fidelity test where as much as possible resembled the business case and how
the service would work in real life. There were however some factors that hindered the test
from being a complete copy of the real service, for example the location of the test being
inside when the real service would be used outside, and technical limitations of the Figma
prototypes. These limitations were compensated for by explaining how the product was
supposed to act in situations where it could not perform as planned and by thoroughly
explaining the scenario, making the user more immersed.

The test involved five user testers with an average age of 29,6 years, the oldest user being
36 years old and the youngest 22 years old. The users placed themselves at a high level of
technical ability, with the average being 4,8 on a 1 to 5 scale. The users were all working in
the tech industry at an office, which was preferable since the design for the interface was to
be used by the Ericsson staff in the Ericsson food delivery case. All this meant that the final
test users were similar to the average person working at Ericsson and that, hopefully, a result
closer to actually testing Ericsson employees could be achieved.

The test was conducted with the real HUGO delivery robot (Figure 39) to mimic, as closely as
possible, how the final service would look. The test however still had to rely on a Wizard of
Oz technique, since Figma prototypes were used and they could not be connected to the
actual robot's actions. The facilitator instead had to explain when the screen would
automatically move to the next page and ask the user tester to change it themselves. The
sequence that the users tested can be seen in Figure 40.

Figure 39 The HUGO delivery robot used in user test 3

Figure 40 The tested prototype in user test 3

The test was carried out as follows:

• Users were asked a few demographic questions and their feelings towards
autonomous delivery before the interaction with the app and autonomous agent.

• They then enacted the service flow as a customer using the Figma web app, on a
  phone given to them by the facilitators, to communicate with HUGO.

• Afterwards they were asked interview questions about their opinions, feelings and
  ideas regarding their interaction with the autonomous agent HUGO.

All questions asked in the interview portions of the test as well as the full description of the
test can be found in Appendix E.

6.5.2 Findings
In this chapter the findings from the third iteration are presented.

6.5.2.1 Prototype
The last round of prototyping used the feedback from iteration 2 and resulted in the final
design proposal shown in 6.6 Design proposal. There were a few other prototypes before the
final design, and examples of these can be seen in Figure 41. They mostly focused on different
visual designs but also explored shortening the interaction sequence.

Figure 41 Examples of prototypes in iteration 3


6.5.2.2 User testing


The third user test gathered opinions about the final concept of the app. The average test
score for the app when asking the question ‘How did you experience using the HUGO
delivery?’ was 4,8 on a scale from 1 (very negative) to 5 (very positive) which is a positive
result.

From the interviews and discussion following conclusions and opinions were found:

• Users expressed an overall positive and curious attitude. There were however some
worries about safety from two users, expressing some general anxiety towards the
concept and how it will be to interact with it. There were also concerns about how
reliable the robot delivery will be compared to a traditional delivery service.

• One user mentioned that they would like a map when waiting for the delivery.

• Light and sound signals were mentioned to be important for understanding the
  robot's signals.

• 5/5 users expressed that the app was easy to use and understand, the number of
  interactions was balanced and the information/instructions were easy to read.

• One user was still sceptical of the whole concept of automated delivery but had no
  objections against the app interaction.

• A majority of the users liked seeing more information in the beginning to feel more
  prepared for what to do when the robot arrived.

• Some users were a little anxious about using HUGO, both in finding it and how hard
they were supposed to close the lid.

• There were comments from one user about UI and presenting information in more
of a hierarchy.

• One user thought it was hard to close the lid while holding their package and the
  phone at the same time. Curiously, the other users didn't have any problem with it
  and just switched hands for the phone or put down the package. Only when asked
  about this did they reflect on it possibly being a problem.

• All users indicated that they could perform the tasks and that nothing was missing.

• One user would have liked a text confirming that the delivery was done.

The protocol from the test can be found in Appendix F.

6.6 Design proposal


The final design proposal created for the HUGO food delivery case is presented in this
chapter. It is a prototype of the web app design that was tested in user test 3 and is an
interactive prototype on a smartphone interface. The start of the interaction, frames 1-3,
can be found in Figure 42.
• The first frame, frame 1, appears when the user receives a link in a text message,
  and it asks the user to confirm their address to ensure that HUGO drives to the right
  place. This could also be accompanied by a map if further developed.

• Frame 2 is simply an informative screen that tells the user how long they will have to
  wait for their delivery and what stage the delivery is in, and gives detailed information
  on how the delivery works. It also has a picture of HUGO so that the user knows what
  to look for when it arrives.

• The user then receives a new text with a link telling them that HUGO has arrived.
  They are greeted by frame 3, which shows a map with the positions of HUGO and the
  user. The user can also make HUGO play a sound to find it. When the user is next to
  the robot, they can choose to confirm this by unlocking it.

Figure 42 Frame 1,2, and 3 of the web app design proposal

In Figure 43 the next step of the interaction can be seen. This is where the user has found
HUGO and starts interacting more physically with the robot.

• In frame 4 the user is told that the box is unlocked and that they can now lift the lid.
  In doing so, the app will automatically switch to the next screen. There is also a bar
  above the information that tells the user how close they are to completing the
  interaction with HUGO.

• Frame 5 tells the user to take their goods out of the box and there is also a manual
confirmation that the user needs to press to ensure that they don’t close the lid by
accident or forget any packages in the box. This will stop HUGO from leaving due to
any mistake since it will otherwise recognise the delivery as completed if the user
closes the lid.

• In the next step, frame 6, the user is told that they can finish the delivery by closing
  the lid. They are also warned that this will make HUGO leave and that it can
  potentially start moving, signalling that they are giving control back to the
  autonomous agent.

Figure 43 Frame 4,5, and 6 of the web app design proposal

The next step in the interaction is about confirming to the user that they have completed
the delivery. This can be seen in Figure 44.
• In frame 7 the user is told that they have successfully completed their delivery and
are also presented with the information that HUGO will leave in a certain amount of
time. This is to communicate to the user that HUGO will start moving soon and give
them time to act if needed, for example if they wish to step back. The user can also
open the lid to stop HUGO from leaving, taking the user back to frame 5. If this is not
needed, they can simply press the ‘I’m done’ button to make HUGO leave
immediately.

• When HUGO has started driving the user is presented with frame 8 which is the final
frame showing the user that they are completely finished with the interaction and
thanking them for using the service. There is also a button more clearly centred at
the bottom of the frame for the help centre. This is in case of the user having
problems or noticing that they have done something wrong in the previous steps.

Figure 44 Frame 7 and 8 in the web app design proposal

A help button is present in frames 4-8 to make it easy for the user to find information or
contact customer support when they are interacting physically with HUGO. All frames also
have the menu in the top right corner where the customer service and information on how
the delivery works can be found.
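
To summarise how the frames relate to user and robot events, the sketch below models the proposed flow as a simple state machine. The frame names follow Figures 42-44, but the event names are assumptions introduced for illustration; the real prototype is a Figma click-through rather than executable code.

```typescript
// Frames of the design proposal and the events that move the user between them.
type Frame =
  | "confirmAddress" // frame 1
  | "waitForHugo"    // frame 2
  | "locateHugo"     // frame 3
  | "liftLid"        // frame 4
  | "takeGoods"      // frame 5
  | "closeLid"       // frame 6
  | "hugoLeaving"    // frame 7
  | "done";          // frame 8

type DeliveryEvent =
  | "addressConfirmed"
  | "hugoArrived"    // triggered via the SMS link when HUGO is at the address
  | "unlocked"
  | "lidOpened"      // physical interaction; the app advances automatically
  | "goodsConfirmed"
  | "lidClosed"
  | "hugoDeparted";

const transitions: Record<Frame, Partial<Record<DeliveryEvent, Frame>>> = {
  confirmAddress: { addressConfirmed: "waitForHugo" },
  waitForHugo:    { hugoArrived: "locateHugo" },
  locateHugo:     { unlocked: "liftLid" },
  liftLid:        { lidOpened: "takeGoods" },
  takeGoods:      { goodsConfirmed: "closeLid" },
  closeLid:       { lidClosed: "hugoLeaving" },
  hugoLeaving:    { lidOpened: "takeGoods", hugoDeparted: "done" }, // reopening the lid returns to frame 5
  done:           {},
};

function next(frame: Frame, event: DeliveryEvent): Frame {
  return transitions[frame][event] ?? frame; // unknown events leave the frame unchanged
}
```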

7. Discussion
In this chapter the methods applied, and the results are discussed in relation to the two
research questions of the thesis. Additionally, the collaboration between a human and an
autonomous AI agent in a delivery service are discussed and compared to a traditional
delivery. Lastly, future research for the area are discussed as well as suggestions of future
implementations for the company.

7.1 Method Discussion


Working in iterative loops provided the ability to change and refine the design as we learned
more about the project, the field of HCAI and the design itself. Compared to a more
traditional waterfall approach, where most of the information is needed at the early stages
and at the beginning of the development phase, the iterative approach allowed for more
flexibility and exploration of new knowledge in the development phase of the thesis.
Similarly, due to the nature of the company being a startup, new information appeared
along the project that could be incorporated thanks to the iterative workflow. As there was
no production-quality web app in place to use as a basis for the project, creating a design
proposal in one linear process would most likely have been too extensive in terms of the
scope of the project. Instead, working iteratively allowed for progressively developing the
concept through MVPs, focusing on the most important aspects of the product as the Lean
Startup methodology suggests. This was good for staying nimble in the designing, being able
to pivot the design and explore multiple directions. At the same time, the process also
allowed for continuous feedback and gathering of knowledge from user testing, knowledge
which played a large role in the design of the web application.

However, despite user testing being performed at the end of each iteration, continuously
throughout the development phase, points can still be made about potential bias in the
design of the interactions. Even though the design was mainly based on the feedback from
the iterations, it was still partly rooted in our own mental model of the AI, and as designers
we have greater knowledge of the system and the product than the actual users. Thus, it is
harder to design interactions with mental models that fully match those of new users.
However, adopting the activity-centred design philosophy can help in minimizing bias, both
from the designers and from the users' preferences, which could also be considered a form
of bias. By shifting the design focus to the activity rather than purely the user and their
experience, the design process becomes less reliant on the opinions of individuals, and
consequently the design can cater to a broader user group. Adopting ACD was also important
in the design process as the user groups in the tests were generally homogenous with similar
self-estimated levels of technological ability. At the same time, even though the users could
be considered a homogenous group, they showed differences in their preferences for the
way to interact with the service. ACD was therefore useful for addressing both the issue of
having homogenous test groups and that of having users with broad preferences for how to
interact with the service.

A unique and interesting part of the thesis process was the use of the method presented by
our supervisor Chu Wanjun, where we combined the Activity-Centered Design philosophy
with the Human-Centered AI framework. It introduced a new and interesting way of
analysing and exploring interactions in collaborations between agents, identifying conflicts
and finding design alternatives. The method being experimental and novel naturally meant
that it was not as clearly defined as the existing, well-tested and developed methods
otherwise used in the project, and thus it had its limitations. One example is the lack of a
clear definition of what active and inactive meant. Of course, as with all methods, it needs
to be adapted for the case it will be used for, and the same principle clearly applied for this
project as well. We had to outline what a suitable definition of an active/inactive agent was
for our case, and a binary definition was chosen for this project. A more nuanced definition
ranging between the two states, similar to the original framework, was also discussed, but
for simplicity it was not chosen.

The way our visualisation of the method was done also had its limitations. When exploring
the operations in the HCAI framework, moving between the four quadrants, and exploring
alternative design implementations, new types of collaborations were constantly identified,
for example collaborations where the tools could be different or where there was no
collaboration at all. How to address these new collaborations and where to place them was
not considered in the visualisation of the method that we shaped, which did not naturally
support adding alternative collaborations. Similarly, as presented in 6.4.1, when using the
method we also saw the possibility of combining operations by chaining them together,
which there was no natural way of doing either. Using the method was therefore sometimes
difficult, but regardless of that it was still a useful tool for identifying potential conflicts and
for designing interactions for the operations. The method was also used in this case as more
of an exploration tool, as the system that was analysed was the design from the first
iteration, which was very rough. Using the method to analyse an already existing service of
higher quality, or even production quality, could therefore have given a different experience.

Lastly, in the final design proposal the relationship between interactions made directly with
the autonomous agent and the automatic state updates made in the web app presents us
with an interesting topic. In this project the web application has been seen as a tool through
which the user interacts with the autonomous agent. However, as the autonomous agent in
the proposed design updates the state of the app when the user interacts with the agent
directly, it could be argued that the web app is in fact a tool for the autonomous agent as
well. This implies that the human user and the autonomous agent are, in some collaborative
interactions, not only sharing the same goal but also sharing tools. This arguably only applies
when the user and the autonomous agent are both active, as in the other cases one of the
agents is not taking an active role in reaching the goal and is thus not using any tool. This is
however up for debate and depends on how the perspective on active/inactive is framed, as
well as what counts as a collaboration for the specific use case. Furthermore, depending on
the technical implementation of the phone and the role it plays in a collaboration, the phone
could arguably in some cases be considered an agent as well and not merely a tool.

7.2 Research Question 1


RQ1: Which interaction sequences are essential in the case of human interaction with an
autonomous delivery robot through a phone interface?

It is evident in the analysis of the different business cases that even though they differ in
multiple ways, both in the context in which they operate as well as the goal of the user, they
still share similarities, specifically in what tasks the user needs to perform when directly
interacting with the autonomous agent. These tasks are present in all the business cases
analysed and could therefore be argued to highlight the essential interactions in the flow of
the service, when interacting with an autonomous delivery robot such as HUGO.

The suggested essential interactions begin when the user has a motive to start interacting
with the physical robot and stop when the robot leaves. Of course, there is an earlier
starting action when looking at the whole service, since it requires a setup where the user
needs a delivery in the first place, but these actions vary, both in number and in type,
depending on the case and are not strictly connected to the autonomous agent.
These actions are also often relatively similar to interactions in already existing services,
such as ordering mail delivery, and they do not involve the autonomous agent in the same
way as the suggested interaction sequence. The autonomous agent could simply not be a
part of the service at all and instead be replaced by a person delivering the package. They
can therefore not be seen as essential and are not as interesting for the thesis as the
interactions involving the physical robot.

The start of the interaction between the user and HUGO is especially interesting as this stage
signals the start of the collaboration between the human and the autonomous agent. At this
point the user’s interaction with the service shifts from interacting with the service through
the phone, which is purely digital, to interacting both digitally and physically with the agent.
This mix of digital/physical interaction also indicates a switch in context for the user and
they need to understand when to change between interacting with the agent through the
phone and interacting with the agent physically.

One important design guideline when designing for AI, according to Google's AI guidelines, is
explainability, which means clearly presenting what the AI does and will do as a reaction to
the user's input (Google PAIR, 2021). This helps set expectations of the AI, building trust and
keeping the user in a sense of control. It is crucial to present information on what type of
action, digital or physical, is required from the user and what the autonomous agent does by
itself to make the interaction sequence work. The start of the interaction also gives the user
their first impression of what to expect in the continuing sequence and how they should act
towards the agent. The start of the interaction sequence should also happen at the user's
initiative, since it signals that the user gains control of the AI and that they now have a say in
what the AI does. According to Shneiderman (2020b), human control in combination with
automation is desirable and is more likely to produce a reliable, trustworthy and safe
application, which makes it important at the start of the interaction to make the user feel
safe in using the design. This was notable in the user tests, as those expressing worries about
interacting with the autonomous agent in the first interview changed their state of mind
when presented with clear information about what the interaction entailed before meeting
the agent. Afterwards they also expressed that they felt at ease during the interaction due
to clear information regarding what was going to happen next and what the agent would do
in response to their actions.

Similarly, the end of the interaction between the user and HUGO marks the end of the
collaboration. Designing for this is particularly important, as the end of the collaboration as
well as the transfer of control from the user to the autonomous agent needs to be properly
signalled to the user. This is further motivated by the insight that the user's incentive for the
service changes at this point. In a food delivery situation, the user's goal is to receive their
package; when this is completed, the incentive to further interact with the autonomous
agent disappears. The user has completed their goal and might therefore not see a point in
interacting any further and might lose interest in completing the sequence. This indicates
that the interaction of ending the sequence needs to be simple and natural, ensuring that
the user either completes the sequence or that it can be seen as completed at the stage of
taking their package. This is important when trying to eliminate faults from happening in
the interaction. Noteworthy is that the incentive can be assumed to be reversed in the
PostNord case, as the user goal is to send a delivery and not receive it. This means that the
design for the end of the sequence is not at the same risk of losing the user's incentive as in
the food delivery case.

From the findings of this thesis, an interaction sequence can be found that specifies the
essential interactions in the case of the autonomous delivery robot HUGO. The specified
essential interactions are:

• Start of the whole interaction sequence between user and robot.


The start of the interaction between human and robot. The action is unspecified
since there are several ways of starting the interaction depending on how the start is
designed.

• Locate.
In order to interact with the robot, the user needs to be at its location or have some
way of knowing where it can be found.

• Open the transport compartment


The user’s goal is to receive/send their package which means that they need to
access the transport compartment of the delivery robot, e.g. open and, before that,
possibly unlock the compartment.

• Take/drop off goods


The act of taking or leaving the package in the transport compartment.

• Close the transport compartment


The transport compartment will need to be closed somehow to keep the transport
safe. This could be an automatic action or a manual action.

• End interaction sequence


An interaction that signals to both the user and the robot that the interaction
sequence has ended. This could be an action like the user confirming that they want
the robot to leave or the user locking the transport compartment.

This interaction sequence was found when analysing the business cases given by the
company, and it was confirmed in user testing that these actions were important and
necessary for reaching the end goal of the user.

The actions are not strictly separated in the sequence and can be combined, for example by
combining closing the lid with ending the interaction sequence. The different actions are
also not necessarily bound to the user or the robot, e.g., the lid could be opened manually
by the user or automatically by the robot, which allows for assigning actions to be carried
out by either one of them when designing the interaction sequence.
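
As a compact summary, the essential sequence and the flexibility in assigning steps to either agent can be expressed as below. The step and agent names are illustrative assumptions; the point is that most steps can be assigned to either the user or the robot when designing the sequence.

```typescript
// The essential interaction steps identified for the HUGO case, in order.
type Step =
  | "startSequence"
  | "locate"
  | "openCompartment"
  | "takeOrDropOffGoods"
  | "closeCompartment"
  | "endSequence";

type Agent = "user" | "robot";

// One possible assignment of steps to agents (loosely following the final design proposal):
// the user performs the physical steps while the robot closes the loop once the lid is locked.
const exampleAssignment: Record<Step, Agent> = {
  startSequence: "user",      // the user opens the link and initiates the interaction
  locate: "user",             // the user finds the robot, optionally prompting sound/light
  openCompartment: "user",    // the user lifts the lid after it has been unlocked
  takeOrDropOffGoods: "user",
  closeCompartment: "user",   // could equally be assigned to the robot if motorised
  endSequence: "robot",       // the robot locks, signals completion and drives off
};
```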

7.3 Research question 2


RQ2: How can HCAI principles be applied when designing a user interface for the identified
interaction sequences in an autonomous food delivery service?

When asked during user testing about their expectations of using an autonomous delivery
service, multiple participants had concerns about the efficiency of the service and often
compared it to traditional delivery services. Similarly, the experience and customer service
that a human provides in traditional delivery services was also sometimes mentioned as
desired when interacting with the autonomous agent. These expectations raise a discussion
about the level of automation that could be implemented in the web app design and when
it should be used. In the context of this thesis, the level of automation and human control in
Shneiderman's (2020a, 2020b, 2022) framework is, as stated in 6.4.1.2, modified and refers
to which of the agents are active and inactive in an interaction. Some users expect a high
level of customer service, which could imply that they want to minimise the number of
manual tasks required of them, as that would be more like what the traditional delivery
services by humans provide. But despite the expressed need for efficiency and service, the
users presented different preferences on the level and number of manual tasks they needed
to perform: some wanted more manual steps in the app and others preferred few to none,
indicating that some users desired to have more control when interacting with the agent,
while some desired autonomy to a greater extent from the agent.

While the users and their preferences are important when doing Human-Centered Design,
Norman (2013; 2005) presents the drawbacks of focusing on individual persons and
therefore states that designers should design for the activity. The responses from the user
tests show that this applies to the thesis case as well: designing strictly for either one of the
preferences will always leave some users unsatisfied. Thus, designing around the user’s
task was chosen, as that avoids specifically designing to please a subset of the users’ wants
and needs. Instead, the focus is on solving the design for the essential tasks that the user
needs to complete, which Norman (2013) argues users are more willing to learn than things
that are not essential for an activity. Therefore, the essential tasks identified in 7.2 are focus
points in designing the interaction sequence.


Moreover, the non-essential tasks are still present in the interaction sequence; however, by
leveraging the autonomous agent to perform these tasks, the sequence becomes less
demanding on the user, making the service potentially more efficient as well as easier to
use for first-time users.

Thus, the argument can be made that the autonomous agent should be active and thereby
work proactively, as often as it can be deemed suitable, to support the user in completing
the essential tasks. This not only works towards Shneiderman’s (2020a, 2020b, 2022) goal of
enhancing humans by removing additional tasks they would otherwise need to perform; it
is also supported by his philosophy that humans and AI should work together with a high
level of automation in combination with a high level of human control, which according to
him is desirable in HCAI design to achieve reliable, safe, and trustworthy systems. In this
thesis, this is mainly implemented in the design by automatically updating the state of the
web app based on the user’s interactions with the autonomous agent, where the state of the
web app refers to what is presented on the screen. An example presented in 6.6 illustrating
this implementation is that when the user opens the lid on the agent, the instructions
presented in the web app automatically update to the next step, displaying the information
relevant to that step. The opposite would be to require the user to manually confirm before
the web app moves to the next step. Additionally, introducing automation in updating the
state of the web app created a clear and intuitive feedback system for the user when
performing interactions on the autonomous agent, whilst at the same time establishing a
connection between the physical and digital interfaces, clearly indicating the effects on
one when interacting with the other.
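
As a concrete illustration, the following minimal sketch (TypeScript, with hypothetical step and event names rather than the actual implementation) shows how the state of the web app could advance automatically when the robot reports a physical interaction, instead of requiring a manual confirmation from the user.

type AppStep = "meetRobot" | "openLid" | "takeGoods" | "closeLid" | "done";
type RobotEvent = "lidOpened" | "lidClosed";

// Which robot event moves the web app from one step to the next.
const transitions: Partial<Record<AppStep, Partial<Record<RobotEvent, AppStep>>>> = {
  openLid:  { lidOpened: "takeGoods" },
  closeLid: { lidClosed: "done" },
};

function nextStep(current: AppStep, event: RobotEvent): AppStep {
  // Events that are not relevant for the current step leave the state unchanged.
  return transitions[current]?.[event] ?? current;
}

// Example: the user lifts the lid on the robot and the instructions update by themselves,
// giving immediate feedback and linking the physical and digital interfaces.
let step: AppStep = "openLid";
step = nextStep(step, "lidOpened"); // -> "takeGoods"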

Explainable AI is presented as one of the core principles in designing Human-Centered AI
systems, with an emphasis on providing the user with information to understand the
capabilities of the AI system so that they can create and adjust their mental model of it
(Doncieux et al., 2022; Google PAIR, 2021; Riedl, 2019; Shneiderman, 2020a, 2020b, 2022;
Xu et al., 2021). It became evident that this principle also applies when designing for
autonomous delivery robots, as users need to understand the actions that the autonomous
agent performs and how they in turn correlate to the user’s own actions. In the web app it
was found to be important to present the user with information throughout the interaction
sequence, starting with information on how the whole service works to prepare new users
and to set their expectations for what the autonomous agent will do, which is further
supported by Google PAIR’s guidelines (2021, Chapters Mental Model and Explainability +
Trust). Likewise, during the interaction sequences it was found to be important to present
the user with information on the current step in the process and what is expected from
them. For interactions with more critical impact, information on the consequences is
presented to the user before the task is performed, ensuring that the user understands the
consequences of their actions as well as the autonomous agent’s response to them. This
provides explainability to the user and helps the user tune their expectations of the
capabilities of the autonomous agent, which results in a more accurate mental model of the
autonomous agent.
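
One way to organise this is to let every step in the web app carry a small bundle of explanatory content. The sketch below (TypeScript, with hypothetical field names and placeholder texts) illustrates such a structure: what the user is expected to do, what the agent will do in response, and a consequence warning that is only shown for the more critical steps.

interface StepInfo {
  instruction: string;         // what is expected from the user at this step
  robotResponse: string;       // what the autonomous agent will do in turn
  consequenceWarning?: string; // shown before tasks with more critical impact
}

const stepInfo: Record<string, StepInfo> = {
  unlock: {
    instruction: "Slide to unlock the lid when you are next to HUGO.",
    robotResponse: "HUGO unlocks the lid so that you can open it.",
  },
  confirmRetrieval: {
    instruction: "Confirm that you have taken your delivery and close the lid.",
    robotResponse: "HUGO locks the lid and drives off once it has been closed.",
    consequenceWarning: "After HUGO leaves you can no longer open the compartment.",
  },
};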

In addition to explainability, control is highlighted as another key concept in designing HCAI
systems. Even though the autonomous agent automates many of the steps in the interaction
sequence, the user is still required to manually confirm both that they are at the autonomous
agent and that they have retrieved their delivery. This gives the user control over the most
critical points in the interaction: the start and the end of the interaction sequence, as found
in 7.2. With the user confirming these interactions, the more critical actions by the
autonomous agent, such as unlocking the lid and leaving once the user has closed the lid,
can be performed automatically. This ensures that control remains in the user’s hands whilst
allowing the autonomous agent to be active at the same time. This design rationale is
supported both by Shneiderman’s eight golden rules (Shneiderman, n.d.) and by HCAI
literature (Google PAIR, 2021; Riedl, 2019; Shneiderman, 2020a, 2020b, 2022; Xu
et al., 2021).
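
The gating described above can be expressed compactly. The sketch below (a hypothetical TypeScript interface, not the actual HUGO API) performs the critical automated actions only after the corresponding explicit user confirmation.

interface RobotClient {
  unlockLid(): Promise<void>;
  leave(): Promise<void>;
}

interface SessionState {
  userConfirmedAtRobot: boolean;   // manual confirmation at the start of the sequence
  userConfirmedRetrieval: boolean; // manual confirmation at the end of the sequence
  lidClosed: boolean;
}

// The lid is only unlocked once the user has confirmed that they are at the robot.
async function maybeUnlock(robot: RobotClient, s: SessionState): Promise<void> {
  if (!s.userConfirmedAtRobot) return;
  await robot.unlockLid();
}

// The robot only leaves once the user has confirmed retrieval and the lid is closed.
async function maybeLeave(robot: RobotClient, s: SessionState): Promise<void> {
  if (!s.userConfirmedRetrieval || !s.lidClosed) return;
  await robot.leave();
}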

When designing for collaboration between a user and an autonomous agent, conflicts will
inevitably arise that cause failures. According to Riedl (2019), autonomous agents will
frequently make mistakes, cause failures, violate the user’s expectations or simply perform
actions that confuse them. Riedl further explains that when the autonomous agent defies
the user’s expectations or confuses them, the action can still be accurate given the
situation, but the user perceives it as a failure. This further motivates the importance of
providing users with information and calibrating their expectations and mental model.
Nonetheless, some failures can be addressed by implementing prevention measures in the
design of the product or system to stop the failures from occurring. Implementing such
measures and fail-safes can, however, add unwanted and unnecessary complexity to the
system, leading to costs in development as well as potentially affecting the user experience
negatively. At the same time, the severity of a failure’s effect on the user experience can
vary, where some failures have a critical negative effect whereas others are merely annoying
to the user. Similarly, the probability of a failure occurring is also worth discussing; in the
thesis, the scenario of the autonomous agent leaving when a user has misplaced or forgotten
one of their items in the transport compartment was discussed. Even though that would
have a negative effect on the user experience, the probability of it happening was deemed
too low to be worth implementing a fail-safe for; instead, the customer support service
would handle that situation. Thus, the cost of implementing prevention measures and fail-
safes in the design can sometimes outweigh the cost of the consequences of the failure,
implying that the risk and consequences of some failures are worth accepting over complex
and costly solutions.

One example of costly complexity found in this design process was the scenario in which the
user forgets to close the lid and leaves the autonomous agent. Despite informing the user to
confirm retrieval of the delivery and to close the lid afterwards, the user could potentially
forget to do so after taking their package. Solutions were suggested, such as sensors that
determine when the user has retrieved their package and motorised pistons that then
automatically close the lid. But due to the complexity of implementing such a solution and
the estimated probability of the failure, it was deemed less costly to accept the risk and
instead inform the user that they forgot to close the lid, or handle the situation manually
through a technician. Yet another example, though more technical, is when the web app is
unable to communicate with the autonomous agent; the error can have any of multiple
causes and can even lie on the user’s side, such as having lost connection to the internet.
Despite that, when scenarios with errors were tested in the user tests, it showed that users
are prone to blame the service for not working even when the problem is on their side, which
again refers back to Riedl’s (2019) statement on users perceiving failure despite correct
behaviour from the AI. Thus, in situations where the same error can be caused by many
different reasons, it could be argued that trying to prevent failure is difficult and that
addressing the failure after it occurs is therefore a more appropriate approach, where
providing the user with information about the error to help guide them to the
appropriate action is one example presented by Riedl (2019).
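
As an illustration of addressing a failure after it occurs, the sketch below (TypeScript, with hypothetical error codes and message texts) distinguishes a few possible causes of a failed communication attempt and maps each to information that guides the user to an appropriate action, in line with Riedl (2019).

type CommError = "noInternet" | "robotUnreachable" | "unknown";

// Map each cause to a message explaining what happened and what to do next.
function errorMessage(error: CommError): string {
  switch (error) {
    case "noInternet":
      return "Your phone appears to be offline. Check your connection and try again.";
    case "robotUnreachable":
      return "HUGO cannot be reached right now. Wait a moment and try again.";
    default:
      return "Something went wrong. Try again, or contact support via the help centre.";
  }
}

// Wrap a critical action (e.g. unlocking the lid) and return guidance on failure.
async function unlockWithFeedback(
  unlock: () => Promise<void>,
  online: boolean
): Promise<string | null> {
  if (!online) return errorMessage("noInternet");
  try {
    await unlock();
    return null; // success, no message needed
  } catch {
    return errorMessage("robotUnreachable");
  }
}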

To summarise the above discussion, the following are suggestions for how HCAI
could be applied when designing a user interface for a food delivery service with an
autonomous delivery robot.

• Where suitable, designers should strive to design for collaborations where the user
and the autonomous agent are both active and work together towards a common
goal, where the user focuses on the essential tasks and the autonomous agent
supports and enhances the user by automating the non-essential tasks.

• Setting the right expectations for the users is highlighted as important in HCAI
literature and was also found in the development phase to be especially true for new
and novel products like the one examined in this thesis. Thus, it can be argued that
designers should present the users with information upfront to tune their
expectations and mental model.

• To ensure that users always stay in control whilst having an active autonomous
agent at the same time, designs should require manual confirmation before
automation with critical consequences is performed.

• Conflicts and failures are inevitable, and implementing functionality in the design to
forestall a failure can add unwanted complexity to the system or product. Thus,
when designing with conflicts in mind, designers and developers should weigh the
risk and consequences of failures occurring, and how addressing them after they
occur affects the user experience, against the cost of adding complexity to the
system or product.

7.4 Future research & suggestions to the company


Here we present some suggestions for future research related to the findings of the thesis,
as well as what the HUGO delivery team could do to further enhance their service.

Future research
The thesis work has resulted in some interesting discussion points regarding the design of
interactions for the HUGO delivery service, and from these conclusions there are areas that
could be worth researching further in the future.

• This thesis has focused on the interaction design and the user experience of the
service through the web app. This means that even though some thought has gone
into the UI elements of the web app, they have not been a major focus. Developing the
UI elements by researching how to present information and communicate with the user
for a novel interaction like this could be interesting.

• The ACD/HCAI method used in the thesis, presented in chapter 6.4.1.1, produced
design possibilities and helped identify failures. The method can be of help to
designers working with AI interaction, but it is an experimental and not yet fully
specified method. By developing, specifying, and evaluating the ACD/HCAI method,
it could become an even better addition for designers working with autonomous
agents.

• An interesting future research area is the development of guidelines for designing
products and systems with AI, especially regarding autonomous agents. This
thesis has involved a lot of user testing, but mostly of the design of the interaction
sequence. It has not involved enough user testing or evaluation to clearly prove that
the suggestions on how to design interfaces for autonomous agents could be
implemented as guidelines. A more specific analysis of how these interactions could
set an example for future products could strengthen existing guidelines and possibly
create new ones.

Company suggestions
These are suggestions directed at HUGO delivery. They are outside the scope of the report
but are still possible improvements, based on the findings in the thesis.

• During the thesis work, ideas arose regarding how the robot’s physical features and
functions could be improved.

o To help users understand the robot’s signals, lights could be integrated into the
robot design to show users what state the robot is in. For example, when the
box unlocks the lights could turn green. Many users said that this would be
helpful and perhaps even necessary for them to understand the robot’s
intentions.

o In addition to light, sounds are also helpful in signalling actions and responses.
A sound for when you wish to find HUGO, or a sound for when the box unlocks,
would help users further understand that the robot is responding to their inputs.
The design of light and sound signals could mimic already established signals,
like a car that blinks when it locks, or the green and red lights used in traffic.

o Many of the users assumed or wished that the lid would open and close
automatically, possibly because a lot of other actions done by the HUGO
robot happened automatically, which could suggest that this would be a
feature worth implementing into the robot’s design to enhance the
experience for the user.


o In the last test, using the actual HUGO robot, users pointed out that the lid
was missing a handle. This made it harder to understand where to open it
and harder to lift the lid. A clearly constructed handle on the lid could be a
helpful addition to the design.

• The design proposal in 6.6 is an example of how a web app for this kind of interaction
could function but should not be seen as the only way to design it. There are other
ways to incorporate the essential interactions into a web app design, and as the
HUGO delivery robot is developed further, new functions or interaction sequences
might be needed, requiring a redesign of the design proposal. Technical limitations
that could affect the design might also apply when building the web app.

8. Conclusion
The use of autonomous agents is an ever-growing possibility in our day-to-day life and, in
some cases, already a reality. One future use might be autonomous robots performing last
mile deliveries, a service the company HUGO delivery is currently developing. The goal of
developing their autonomous delivery robot HUGO is to reduce the emissions from
deliveries in the last mile by replacing delivery trucks with emission free autonomous
robots. However, this new way of receiving deliveries introduces new design challenges
since most people have little to no prior experience of interacting with autonomous agents.
The user interface is therefore of great importance in making the user understand and be
able to interact comfortably with the autonomous agent, thus also a key aspect in reaching
user adoption.

The following interaction sequences were found during the thesis work, and they specify the
essential interactions in the case of an autonomous food delivery robot.
As an answer to RQ1, ‘What interaction sequences are essential for end users in the case of
interacting with an autonomous delivery robot through a phone interface?’, the specified
interaction sequences are:

1. Start of the whole interaction sequence between user and robot


2. Locate autonomous agent
3. Open the transport compartment
4. Take/drop off goods
5. Close the transport compartment
6. End of whole interaction sequence

These interaction sequences were found when analysing the business cases given by the
company and partly when researching the flow of other delivery services in the future
analysis. It was also confirmed during designing and testing that these actions were
important and necessary to reach the end goal of the user, while other tasks could be
automated. The start and end of the interaction were especially interesting since they signal
control being moved to or from the user or the autonomous agent. They proved to be
important in the design and should be handled with extra care when designing products
involving AI.

In exploring how to apply HCAI principles when developing the phone interface for the
service, multiple important points were found to keep in mind when applying HCAI to
designing interactions with autonomous agents. These findings are:

1. Where suitable, strive for an active user and an active autonomous agent.


2. Automate non-essential tasks to empower humans.
3. Provide the user with information upfront to tune their expectation and mental model.
4. Require manual confirmation before critical automation is executed.


5. Weigh the cost of preventing conflict against the consequences and risk of failures
occurring.

These recommendations are based on our findings and the observations made when applying
HCAI in designing the user interface for the service. Whilst produced for this specific
type of case, the hope is that they are generally applicable to other cases involving a user
and an autonomous agent.

Autonomous agents are becoming an ever-growing part of our everyday lives and we believe
that Human-Centered AI will play an important role in helping designers create the future of
autonomous systems, with a focus on the human experience. There is still new knowledge
to be found within this area and hopefully this thesis can, in some way, contribute to new
research and inspire more people to learn about it.

References
1. Arnowitz, J., Arent, M., & Berger, N. (2007). Effective prototyping for software makers. Elsevier
Morgan Kaufmann.
2. Arvola, M. (2010). Interaction Designers’ Conceptions of Design Quality for Interactive
Artifacts. 9.
3. ASDA. (n.d.). Company Facts. Corporate - ASDA. Retrieved 5 April 2022, from
https://corporate.asda.com/our-story/company-facts
4. Benyon, D. (2019). Designing user experience: a guide to HCI, UX and interaction design
(Fourth edition). Pearson.
5. Borealis. (n.d.). Anläggningar i Sverige - Borealis i Sverige - Stenungsund - Borealis.
Borealisgroup (en-GB). Retrieved 5 April 2022, from
https://www.borealisgroup.com/stenungsund/borealis-i-sverige/anl%C3%A4ggningar-i-
sverige
6. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in
Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
7. Buchenau, M., & Suri, J. F. (2000). Experience prototyping. Proceedings of the Conference on
Designing Interactive Systems Processes, Practices, Methods, and Techniques - DIS ’00, 424–
433. https://doi.org/10.1145/347642.347802
8. Chapin, N. (2003). Flowchart. In Encyclopedia of Computer Science (pp. 714–716). John Wiley
and Sons Ltd.
9. Cooper, A., Reimann, R., Cronin, D., & Cooper, A. (2014). About face: the essentials of
interaction design (Fourth edition). John Wiley and Sons.
10. Delft University of Technology. (2020). Delft design guide: perspectives, models, approaches,
methods (A. van Boeijen, J. Daalhuizen, & J. Zijlstra, Eds.; Revised edition). BIS Publishers.
11. Dolan, S. (2022, January 11). The challenges of last mile delivery logistics and the tech
solutions cutting costs in the final mile. Business Insider.
https://www.businessinsider.com/last-mile-delivery-shipping-explained
12. Domino’s Pizza. (2021). 2021 Annual Report [Annual Report].
13. Doncieux, S., Chatila, R., Straube, S., & Kirchner, F. (2022). Human-centered AI and robotics.
AI Perspectives, 4(1), 1. https://doi.org/10.1186/s42467-021-00014-x
14. Frayling, C. & Royal College of Art. (1993). Research in art and design. Royal College of Art.
15. Gibbons, S. (2017, August 27). Service Blueprints: Definition. Nielsen Norman Group.
https://www.nngroup.com/articles/service-blueprints-definition/
16. Google PAIR. (2021, May 18). People + AI Guidebook. https://pair.withgoogle.com/guidebook
17. Hallnäs, L., & Redström, J. (2006). Interaction design foundations, experiments. University
College of Borås. The Swedish School of Textiles. The Textile Research Centre.
18. HUGO Delivery AB. (n.d.). Last mile delivery. Last Mile - Autonomy. Retrieved 23 February
2022, from https://hugodelivery.com/
19. Iacucci, G., Iacucci, C., & Kuutti, K. (2002). Imagining and experiencing in design, the role of


performances. Proceedings of the Second Nordic Conference on Human-Computer Interaction


- NordiCHI ’02, 167. https://doi.org/10.1145/572020.572040
20. International Post Corporation. (2021). IPC Annual Review 2020 (p. 50).
https://www.ipc.be/sector-data/reports-library/ipc-reports-brochures/annual-review2020
21. Johnson, J. (2014). Designing with the Mind in Mind. Elsevier. https://doi.org/10.1016/C2012-
0-07128-1
22. Laubheimer, P. (2016, April 12). Wireflows: A UX Deliverable for Workflows and Apps. Nielsen
Norman Group. https://www.nngroup.com/articles/wireflows/
23. Martin, B., & Hanington, B. M. (2012). Universal methods of design: 100 ways to research
complex problems, develop innovative ideas, and design effective solutions (Digital ed).
Rockport Publishers.
24. Moran, K. (2019, January 12). Usability Testing 101. Nielsen Norman Group.
https://www.nngroup.com/articles/usability-testing-101/
25. Motavallian, J. (2019). Last mile delivery in the retail sector in an urban context [PhD Thesis].
RMIT University.
26. Nielsen, J. (2012, March 6). How Many Test Users in a Usability Study? Nielsen Norman Group.
https://www.nngroup.com/articles/how-many-test-users/
27. Norman, D. (2013). The Human-Centered Design Process. In The Design of Everyday Things :
Revised and Expanded Edition (pp. 221–237). Basic Books.
http://ebookcentral.proquest.com/lib/linkoping-ebooks/detail.action?docID=1167019
28. Norman, D. A. (2005). Human-centered design considered harmful. Interactions, 12(4), 14–
19. https://doi.org/10.1145/1070960.1070976
29. Norman, D. A. (2006). Logic versus usage: the case for activity-centered design. Interactions,
13(6), 45. https://doi.org/10.1145/1167948.1167978
30. PostNord Group AB. (n.d.). A brief summary of PostNord and its operations. Retrieved 6 April
2022, from https://www.postnord.com/about-us
31. PostNord Group AB. (2022, March 7). Nu testas självkörande leveransroboten Hugo.
https://www.postnord.se/foretagslosningar/artiklar/e-handel/har-kommer-sjalvkorande-
leveransroboten-hugo
32. Rettig, M. (1994). Prototyping for tiny fingers. Communications of the ACM, 37(4), 21–27.
https://doi.org/10.1145/175276.175288
33. Rev. (2022, March 30). How to Analyze Interview Transcripts in Qualitative Research. Rev.
https://www.rev.com/blog/analyze-interview-transcripts-in-qualitative-research
34. Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human
Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.1002/hbe2.117
35. Ries, E. (2019). The lean startup: how constant innovation creates radically successful
businesses. Penguin Business.
36. Rosala, M. (2020, September 20). Task Analysis: Support Users in Achieving Their Goals.
Nielsen Norman Group. https://www.nngroup.com/articles/task-analysis/
37. Sauro, J. (2011, February 3). Measuring Usability with the System Usability Scale (SUS) –
MeasuringU. https://measuringu.com/sus/
38. Shneiderman, B. (n.d.). The Eight Golden Rules of Interface Design. Retrieved 3 May 2022, from


https://www.cs.umd.edu/~ben/goldenrules.html
39. Shneiderman, B. (2020a). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS
Transactions on Human-Computer Interaction, 109–124.
https://doi.org/10.17705/1thci.00131
40. Shneiderman, B. (2020b). Human-Centered Artificial Intelligence: Reliable, Safe &
Trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
https://doi.org/10.1080/10447318.2020.1741118
41. Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
42. Stickdorn, M., Hormess, M., Lawrence, A., & Schneider, J. (Eds.). (2018). This is service design
doing (First edition). O’Reilly.
43. Wikberg Nilsson, Å., Ericson, Å., & Törlind, P. (2015). Design: process och metod.
Studentlitteratur.
44. Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2021). Transitioning to human interaction with AI
systems: New challenges and opportunities for HCI professionals to enable human-centered
AI. https://doi.org/10.48550/ARXIV.2105.05424
45. Zimmerman, J., & Forlizzi, J. (2014). Research Through Design in HCI. In J. S. Olson & W. A.
Kellogg (Eds.), Ways of Knowing in HCI (pp. 167–189). Springer New York.
https://doi.org/10.1007/978-1-4939-0378-8_8
46. Zimmerman, J., Forlizzi, J., & Evenson, S. (2007). Research through design as a method for
interaction design research in HCI. Proceedings of the SIGCHI Conference on Human Factors
in Computing Systems, 493–502. https://doi.org/10.1145/1240624.1240704

Appendix A
Structure and questions of test 1

Initial Questions
1. Name of test user (anonymous in report)
2. Age
3. What is your estimated level of experience with IT and technology? 1-5
Vad är din nivå på teknikvana, enligt dig själv?

1- Väldigt ovan och känns jobbigt när jag ska interagera med digitala system.
5- Väldigt van och har inga problem med att interagera med nya digitala system.

4. What are the first thoughts and feelings that come to mind when you imagine how
it's like to use an autonomous delivery robot?
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?

Tasks and flow of the test


• After the initial questions, confirm that the user understands the test and explain that
you will narrate what is happening in the scenario. The subtasks for the steps can be
used to push the user in the right direction, but they are not necessary for completing
the test; these tasks are marked by parentheses. If these tasks are used, this should
be noted.

• When testing people without the knowledge of HUGO show a picture of the robot
and explain the flow of the service.

The main task is to get your package from the robot by using the app.
If they need guidance, use the sentences within the parentheses.

1. You have just ordered from the restaurant and received a text with a link. You press
the link and find yourself on this page.
(Can you find your delivery information?)

2. Some time passes and you wait for your delivery, you receive a new text with a link
and open it. This is the page you land on.
(What do you want to do in this step to complete your end goal?)

3. You go out to HUGO and stand next to it.


(Can you unlock the box?)


4. You hear a click and the lid opens a bit.


(Can you get your package and what do you do next?)
(Can you lock the box again?)

5. HUGO drives off after you lock it.

Usability scale questions


Score 1-5, Repeat with each test, follow up questions for each stage. The scores will be
normalized afterwards.

• I think that I would like to use this system frequently.


Jag tror att jag skulle kunna använda det här systemet ofta

• I found the system unnecessarily complex.


Jag tyckte att systemet var onödigt komplext

• I thought the system was easy to use.


Jag tyckte systemet var lätt att använda

• I think that I would need the support of a technical person to be able to use this
system.
Jag tror att jag hade behövt hjälp av en teknisk person eller liknande för att kunna
använda det här systemet.

• I found the various functions in this system were well integrated.


Jag tyckte att de olika funktionerna i systemet var väl integrerade

• I thought there was too much inconsistency in this system


Jag tyckte systemet var inkonsekvent.

• I would imagine that most people would learn to use this system very quickly.
Jag tror att de flesta hade lärt sig använda det här systemet väldigt snabbt

• I found the system very cumbersome to use.


Jag tyckte systemet var besvärligt att använda

• I felt very confident using the system.


Jag kände mig väldigt trygg i att använda systemet

• I needed to learn a lot of things before I could get going with this system.
Jag behövde lära mig en hel del innan jag kunde börja använda systemet


Open questions
1. How did you feel during the interaction? What types of thoughts came to mind?
Vilka känslor kom till dig under interaktionen? Vilka tankar dök upp?

2. Was there anything specific you reacted to in the app, both negative and positive?
Var det något specifikt du reagerade på i appen, både negativt och positivt?

3. Was there anything missing for you to be able to complete the tasks?
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?

4. What did you think about opening the box through the phone screen? Would you
have liked to do it in another way?
Vad tyckte du om att öppna en låda via telefonen? Hade du velat göra det på ett annat sätt?

5. Any additional comments?


Några övriga kommentarer?

Appendix B
Protocol from test 1
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leveransrobot?

• Bör inte märkas helst. I bästa fall är det en snabbare/flexiblare leverans, vill knappt
komma ihåg vad den heter för att leveransen var så snabb. Känslor av
ifrågasättande, kommer det funka? säkerhet? Hur gör jag om det går fel?
Osäkerhet.

• Bekvämlighet, bekvämt för att man får paketet närmar sig än ett ombud. Till
dörren/porten är första tanken. Kan öppna upp möjligheter för snabba leveranser.

• En viss oro till stöld av paket. Känns som att hela roboten kan bli stulen. Lite orolig
för generationer/mindre teknikvana användare som inte är lika vana att använda
teknik.

• Man har ju ingen direkt referens till hur den ser ut men man fattar ju konceptet,
Taggad att få se och testa teknologi, nyfiken på hur den funkar.

• Lite spännande, tror också jag skulle ha höga förväntningar att den skulle funka
smidigare än typ instabox, annars känns det inte värt. Det ska kännas intuitivt att
använda

• Känns jävligt läskigt, känns som att det inte kommer funka. Varför ska man ha
robotar till allt. Ganska onödigt.

• Tänker på jobbet ( han jobbar med det), tänker på starship, coolt företag. Har sett
dem användas på riktigt. Känns coolt att få använda. Imponerad. Sugen på att testa
gränserna

Vilka känslor kom till dig under interaktionen? Vilka tankar dök upp?
• Flow 7 var en bergodalbana, var inte uppenbart att den skulle hoppa vidare. Trodde
det var knappar men verkar inte vara det. Inte uppenbart att man inte kan
interagera med den. Schysst att ha en sak att göra per skärm, när man är klar med
den är det lätt att fatta nästa steg. Känns som det finns inbyggd multitasking i det
hela, mycket saker att göra på lås och avslutningsskärmar. Minimerat antal
interaktioner per skärmbild.

• Kändes rätt enkelt i alla flöden, nice, lätt att klicka igenom. HUGO färgen är ful.

• Kändes smidigt. Bra med steg för steg på flow 4.


• Spännande, trevlig interaktion. Inga specifika tankar eller känslor egentligen.

• De flesta kändes enkla att använda. kommer min telefon bli full av notiser om jag
använder den? Många som slåss om uppmärksamhet på skärmen. Hur agerar
roboten egentligen med reaktionstid och avstånd? Vad händer om jag låser upp
den för tidigt? Vad händer när jag låser den? Åker roboten iväg? Kan upplevas som
lagg om det inte ger direkt feedback. Måste den köra hela vägen hem till mig eller
kan jag möta HUGO?

Var det något specifikt du reagerade på i appen, både negativt och positivt?
• Funderar på hjälpknappar, kopplar till oro, skönt att ha på alla ställen. Vill gärna ha
en viss närhet till en faktisk person via dem, typ att kunna få tag på dem om något
går fel. Nice med tydligt gröna signalerade hjälpknappar. Finns ingen undo knapp,
kanske vore bra, undvika feltryckpaniken.

• Konstigt att kunna ändra info mitt under körning, kanske inte en bra grej. Kanske
borde bekräfta adressen innan den kör. Om det är för långt att köra vad sker då?
Behövs antagligen steg för det också. Gillade progressbaren, Flöden med dem
kändes mest nice. Slides är kul.

• Bilder på stänga och öppna var bra och tydligt. Positivt att man kan följa på kartan
och få tidsangivelse, bra för att kunna anpassa sig. Negativt, kanske onödigt att visa
all kontaktinfo på första sidan. borde kunna fällas ihop eller så.

• Varningsdelen i slutet var nice men kan vara en confirm interaktion ist kanske.
Läskigt med varningar potentiellt. Slide är bra för man kommer inte åt den hur som
helst, mindre risk för felklick. Hugo färgen med vitt är trevligt och välkomnande.
Svart text syns bra så borde nog användas för att ge ordentlig kontrast. Tycker om
att man ser stegen i interaktionen, spelar ingen roll om de är horisontella eller
vertikala bara de är där, känns betryggande. Man tycker om att känna att man har
kontroll som användare

• Nej knappen för att bekräfta att HUGO är vid en är onödig. Är det skillnad på att få
sms och en app? Kanske lättare med app? App blir mer streamlineat. Gillade skicka
iväg knapp som är en slide. Skön känsla med slide. Flow 7, kan man trycka på
stegen innan de ska användas eller är de låsta då? Känns konstigt om man inte kan
det. Inte supertydligt att de är knappar iof. Är de knappar?

Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?

No user said that a step was missing for them to be able to complete the task.

Vad tyckte du om att öppna en låda via telefonen? Hade du velat göra det på ett annat
sätt?


• Vill helst inte hålla i telefonen när man öppnat lådan. Vore skönt att ha interaktion
på lådan, kanske QR, kanske keypad. Vore coolt om Hugo kan känna av närheten av
telefonen.

• Jag vill öppna den via telefonen. Att låsa roboten kanske inte behöver göras via
telefonen. Kanske roboten skulle kunna öppna sig vid sin destination, då behöver
man ingen telefon. Blipp förknippas med betalningen, känns inte nice. Kanske bra
med QR för att försäkra oss om att kunden är vid roboten.

• Telefon kändes okej, så länge jag kan se på lådan att det är min Hugo så jag vet
vilken som är min.

• Tycker om att man bara använder telefonen. Känns tryggt att ha alla steg i
telefonen. Robotar kan vara lite läskiga att ha att göra med. Ska vara många steg så
man vet vad som försiggår

• Vill nog inte bara ha telefonen, vore nice att bekräfta att man är nära. Vill nog inte
ha körkorts-lösning på lådan. Blipp hade varit coolt, löst många problem, känns
smooth. Scanna QR är också ganska smooth. Knappsats känns som det kan bli
sunkigt iom att alla ska ta på det.

Något övrigt du tänkte på?


• Varningen för att hugo åker iväg är bra att ha tror jag. Vore bra med mer ljud och
ljus bekräftelse på själva roboten som ger status när man gör saker via telefonen.

• Nice med progressbar uppe på skärmen, Nice med bilder som förklarar, bör inte
vara för mycket text. Mer förklarande bilder. Beror nog mycket på hur roboten
agerar för om saker känns smidigt. Förklaringsknappar kanske inte behöver vara i
mitten. Kanske vore bra om användaren kan bestämma placeringen av HUGOs
leverans på en karta med en markör. Jag kanske är lite biased iom att jag jobbar
med det.

• Nej inte direkt, kändes enkelt och kul

• Bättre än jag trodde de skulle vara i det här stadiet.

Appendix C
Structure of test 2

Area of testing
• interaction with robot
• User understanding robot’s signals
• Opening box
• Understanding when in control or not
• Information flow from app to human

Hypothesis:
The user wants to both interact digitally and physically with the robot as well as receive
digital and physical feedback when interacting.

Initial Questions
Name of test user (anonymous in report)
Age
What is your estimated level of experience with IT and technology? 1-5
Vad är din nivå på teknikvana, enligt dig själv?
1- Väldigt ovan och känns jobbigt när jag ska interagera med digitala system.
5- Väldigt van och har inga problem med att interagera med nya digitala system.

What are the first thoughts and feelings that come to mind when you imagine how it's like
to use an autonomous delivery robot?
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?

How the test is done

1. Normal scenario
The web app and service works as intended

2. Fail-scenarios

a. The wrong address is listed as delivery address + The user can’t open the lock
due to bad connection.
b. The user can’t see Hugo when they go out to meet it + The user forgets to close
the lid.


Scenario test 1.
Normal test where the participant follows the planned interaction for the app and robot.

• The user starts with answering the initial questions and then gets the scenario
explained to them.
• Scenario: You have called the restaurant and ordered to have your food delivered
with HUGO to your address. You want to acquire your food and complete the delivery
with HUGO to finish the test.

• The user will receive a text with a link to the figma prototype. The user takes out the
phone and looks at the figma prototype with the first screen.

• Next step is starting the interaction with the robot by finding it and completing the
delivery. The user receives a new text with a figma prototype link.

• The user can be directed to HUGO by the facilitator of the test since actual GPS
tracking is not available.

• The user starts the next interaction with HUGO where the goal is to receive the
package and end the interaction so that HUGO can leave.

• The test is finished, and an open discussion is held with the user to find out more
about how they experienced the robot and whether they have any ideas of their own
on how to improve the experience.

Scenario test 2.
A test where the user experiences errors when executing the planned interaction with the
app and robot. The user will not test the whole flow but will instead be given the fault
scenario directly. For example, the user tests the change-of-address scenario and is
finished with that specific test scenario after they have changed the address; they do not
need to complete the flow fully.

The user starts the scenario and tests the following errors one at a time. The facilitator will
make sure that the user is presented with the right screens and an explanation on what they
will be doing.
• The app states the wrong address; you wish to change it
• The box does not unlock due to bad connection

Restart scenario.
• You can't see HUGO and wonder where it has parked
• You forget to close the lid and walk away from HUGO

The test is finished, and an open discussion is held with the user to find out more about how
they experienced the robot and whether they have any ideas of their own on how to improve
the experience.


Questions to guide discussion

Scenario 1:
Did you feel like you could understand the robot’s intentions and signals?
Kändes det som du kunde förstå robotens signaler och avsikt?

How did it feel and what type of thoughts came to you when interacting with the robot?
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?

Was there anything specific you reacted to during the test, both negative and positive?
Var det något specifikt du reagerade på i appen, både negativt och positivt?

Was there anything missing for you to be able to complete the tasks?
Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?

Scenario 2:
How did it feel and what type of thoughts came to you when interacting with the robot?
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?

How did it feel when your interaction didn't go as planned?


Hur kändes det när din interaktion inte gick som planerat?

Appendix D
Protocol from user test 2
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?

• Spännande men också långsamt, tänker att autonomt går långsamt, tänker på de
autonoma bussarna som rullar på campus. Men ändå spännande.

• Det känns lite onödigt. Hemleveranser i allmänhet är onödiga och som han inte känner är
nödvändigt. Onödigt med utkörning från leverans utlämning till dörr. Foodora, använder
inte, tar lång tid och kan bli kallt, men finns ändå ett annat syfte med det. Hellre att inte
behöva kommunicera med någon, så länge det är lika snabbt så är det skönt att slippa den
interaktionen med budbärare.

• Förväntar sig att det inte borde vara mindre smidigt än vad det är med Fodoora idag, man
ska kunna följa vart den är och så. Det ska vara smidigt att öppna den interagera med den.
Lika smidigt eller smidigare än att använda fodoora. Man ska inte behöva fundera över
något i processen.

• Det finns stor potential för stöld. Smidigt, personal effektivisering. ‘Cool grej’

• Spännande. Lite oklart hur det ska fungera, hur han ska få sin mat och all logistik bakom
det. Futuristiskt men också många frågetecken hur det ska fungera.

Kändes det som du kunde förstå robotens signaler och avsikt?

• Märkte inte de så tydligt lamporna, men bra att spela upp ljuden.
• Ljud och ljus är bra att ha på HUGO. Hade tolkat röd som att gå inte nära eller interagera
med. Blå exempelvis är neutralt, skulle man kunna använda. Ta inspiration från hur bilar
med ljus.
• Lite oklart om den öppnar sig själv eller inte, man vill inte pilla för mycket på den eftersom
det är en robot.
• Ja, det kändes bra.
• Svårt att säga hur det är i vanliga fall. Ska blinka när man trycker på signalknappen. låter
bra men svårt att avgöra när det inte är med i testet

Hur kändes det och vilka tankar dök upp när du interagerade med roboten?

• Inga problem med roboten, kändes enkelt. Inte läskigt eller komplicerat, straight forward.
Behövs ingen display, tyckte det var skönt att inte behöva interagera med en display på
HUGO.


• Kände sig låst i telefonen, behövde lägga mycket tid på att läsa och interagera med appen.
Hade hellre velat lägga mindre tid i appen och mer på att interagera direkt med HUGO.
Automatiskt låsa och låsa upp HUGO. Inte användarens uppgift att låsa och skicka iväg
HUGO, vill att HUGO ska göra det själv. Känner att användaren är klar när hen har tagit sina
varor. Men tycker att man ska ha kontrollen att kunna stoppa HUGO om han åker iväg. När
man stänger locket så är det bekräftelsen på att man är klar. Skulle kunna finnas en knapp
i appen för att avbryta eller meddela service folket att något inte stämmer.
• Kändes bra, smidigt. Man ifrågasätter varför saker fungerar på vissa sätt. Exempelvis hur
fungerar säkerheten och det känns som att det finns mycket som kan gå fel. Känner en viss
oro för att det inte alltid kommer att fungera. Lite oklart hur vissa saker fungerar, vad som
händer när man klickar skicka iväg, om locket öppnas av sig själv.
• Flödet känns rimligt, gillar att det är steg för steg. Vill nästan ha flera steg, lås upp -> öppna
-> osv.
• Överlag väldigt bra, appen bra gränssnitt och intuitiv. Roboten känns väldigt prototyping när
det är en kartong, svårt att säga hur det skulle vara på riktigt. Man får inte så mycket support
av appen, man får göra mycket själv som användare, känns mer som instabox. Van att får
att får mer service, likt när man får leverans av människor. känns ovant men inte
nödvändigtvis ett problem skönt så länge det fungerar, mindre människokontakt är skönt.

Var det något specifikt du reagerade på i appen, både negativt och positivt?

• Otydligt om man ska låsa upp i appen eller om det ska ske fysiskt, förslag att separera en
sida för att låsa upp och en som berättar att öppna lådan. Skriva om från ‘HUGO är öppen’
till ‘HUGO är upplåst’. Tryck kan vara lite förvirrande. Info om att HUGO lämnar var tydligt
och bra. Bra att information om hur leveransen går till kommer två gånger. Ha korta
meningar i informationen är bra, men texten som finns är i bra längd.

• Många steg, kände att många steg var i appen som skulle kunna vara fysisk. Exempelvis
hur man låser upp HUGO, ju mer man interagerar direkt med HUGO ju närmre känner man
sig den.Lång text, mycket att läsa i första meddelandet. Flytande text, vill veta vad man ska
göra tydligt och kort.

• Dra för ‘att lås upp’ istället för ‘att öppna’ allmänt mycket information i varje steg. En aning
övertydligt i varje steg. Förvirrande i stegen, där man ska låsa upp, slida, öppna, många steg
som skulle kunna slås ihop även att hur information formuleras kan vara förvirrande. Kan
skippa lås och lås upp stegen. låsa upp och låsa sig automatiskt. Minska stegen för låsa
och öppna/stänga med andra ord.

• Otydligt med slidern som säger dra för att öppna, medan det står ovan lås upp och öppna
lådan. otydligt om det är användaren som låser upp locket. Dubbla instruktioner. Om något
ska hända av en handling så borde det vara på separat sidor.

• Skönt att det finns hjälpcenter, att man kan få hjälp om man behöver. Känns som att man
ska kunna trycka på alla rutor, dvs exempelvis den som visar tiden och bilden på HUGO, vet
inte vad det skulle göra men kändes som att det skulle hända något. Skönt med drop-down
menyer. Hade föredragit att det hade varit med ikoner också i info drop-downen. Gillade att
infomenyn flyttade med till andra sidan. Kartan är trevlig, hade föredragit att man kan se
sig själv på kartan också. Otydligt om ‘dra för att öppna’ kommer öppna locket automatiskt
eller om man låser upp. Upplever att man är kvar på samma steg även om stegen byter, de
är lika. Det är lagomt antal steg men föredra hellre färre än flera. Om något steg kändes
onödigt så var det steg 2 dvs har du tagit dina varor (notera: avslutningssteget var också
onödigt enligt användaren) Färgerna var fina


Var det något specifikt du reagerade på när du interagerade med roboten, både
negativt och positivt?

• Väldigt simpelt med interaktionen med roboten, inget som är komplicerat

• Vilken färg som används som signal är viktigt för att visa på avsikt. En lampa där man ska
trycka på HUGO, tydligt vart man ska ta tag på locket.

• Det var enkelt att interagera med. Otydligt vem som ska stänga locket, användaren eller
HUGO själv

• Det förklarades i appen att man skulle öppna den så han förväntade sig inte att den skulle
öppna sig av sig själv

• Otydligt om locket kommer att öppnas automatiskt eller om man ska öppna manuellt.
Förväntar sig att den skulle kunna ha förmågan att göra det.

Saknades något för att du skulle kunna genomföra uppgifterna eller använda appen?

• Nej det tycker han inte.


• Att man har möjligt att efter att man låst HUGO ska kunna avbryta HUGO från att lämna,
kundtjänsten som finns nu känns mer som en FAQ. En tydlig knapp på sista sidan som visar
att man kommer till en chat eller liknande med kundservice.
• Allting fanns.
• Informationen som behövdes fanns i appen.
• Hamburgermenyn funkar inte.

Övriga kommentarer?

• Kollar informationen, särskrivning i informationen :) Andra sms:et, öppnar länken innan han
går ut. Tror han ska låsa upp fysiskt på lådan. Förvirring hur han ska låsa upp. Undrar om
han skulle få ett sms efteråt. van vid att postnord skickar ut sms efter hämtad leverans,
inget som behövs men bra att ha om någon annan hämtar paketet åt någon.

• Första sidan: behövde tänka till på första rutan med frågan om adressen är rätt. Andra
sidan: Bilden, ska stämma överens med HUGO som kommer att komma. Osäker om man
ska klicka vidare på något, skulle vilja ha någon feedback på statusen utöver tiden till
leverans.

• Andra SMSet och framåt:


Lite långt meddelande, skulle kunna korta ned och mer koncist
Öppnar lådan innan han har låst upp den. Tror det är fysiskt på roboten, skulle vilja göra det
mer likt fysiskt på lådan.
Överflödigt att manuellt låsa lådan. Automatiskt låsning hade varit eftersökt. Ha någon
lampa eller liknande som ger indikation på att den har låst, men att man kan låsa upp den
igen.
Om man behöver låsa upp den igen så kan man göra det.


Den borde förstå att efter en viss tid eller att användare går utanför en radie så ska hugo
kunna åka iväg, men man ska kunna stoppa hugo om den åker iväg. Ljus indikationer
och/eller ljud när den är påväg att åka iväg.

• Känner att lösningen är anpassad för förstagångsanvändare och även där lite för
utvecklande.
Ta bort ansvaret från användaren för att låsa, det är inte användarens incitament att låsa
HUGO när användaren har tagit sina varor.
Anser att det är bekräftelse på att man är klar med att ta sina varor när man har stängt
locket. Men ha möjligheten att låsa upp HUGO igen.

• Tänker inte att HUGO är roboten


En instruktionsvideo alternativt animationer för hur leveransen går till. Korta ned
informationstexten i början och lägg till bilder/video.

• Första SMSet
Klickade in och kollade på information
Andra SMSet
Tryckte på ljudet.
Trodde HUGO skulle öppnas när det står att den är öppen. Klickade lås innan locket är
stängt.

Hade varit önskvärt att den kunde öppna sig själv.


Första SMS:
öppnar instruktionerna
Är det riktiga minuter eller ‘Fodoora minuter’
skönt att rutan för instruktioner finns kvar

Andra SMS:
Testade pip och ljus
Var påväg att öppna innan låsa upp.

Appendix E
Structure of test 3

Area of testing
Final testing to confirm the design choices and see if there are any final design
suggestions.

Material
HUGO box
Computer to take notes
Phone for user
Fake package
Camera

Tasks for facilitators during the test


Take notes
Act as HUGO and send texts
Take pictures and video
Guide the user

Initial Questions
• Name of test user (anonymous in report)
• Age
• What is your estimated level of experience with IT and technology? 1-5
Vad är din nivå på teknikvana, enligt dig själv?
1- Väldigt ovan och känns jobbigt när jag ska interagera med digitala system.
5- Väldigt van och har inga problem med att interagera med nya digitala
system.

• What are the first thoughts and feelings that come to mind when you imagine how
it's like to use an autonomous delivery robot?
Vad är det första du tänker och känner när du föreställer dig att använda en autonom
leverans-robot?

How the test is done


Test round
The participant follows the planned interaction for the app and robot.

• The user starts with answering the initial questions and then gets the scenario
explained to them.
• Scenario: You have called the restaurant and ordered to have your food delivered
with HUGO to your address. Your goal is to acquire your food and complete the
delivery with HUGO.


• The user will receive a text with a link to the figma prototype, either on their own
phone or on a phone that is lent to them. The user takes out the phone and looks at
the figma prototype with the first screen.
• Next step is starting the interaction with the robot by finding it and completing the
delivery. The user receives a new text with a figma prototype link that shows them
where HUGO is. The user can be directed to HUGO by the facilitator of the test since
actual GPS tracking is not available.
• The user starts the next interaction with HUGO where the goal is to receive the
package and end the interaction so that HUGO can leave.
• The test is finished, and an open discussion is held with the user to find out more
about how they experienced the robot and whether they have any ideas of their own
on how to improve the experience.

Questions and guide to discussion

• How was the experience of using HUGO delivery, rated 1-5


Hur upplevde du det att använda HUGO?
1 = Very negative/ väldigt negativ upplevelse
5 = Very positive/ väldigt positiv upplevelse
3 = Neutral

• How did it feel and what type of thoughts came to you when interacting with the
robot?
Hur kändes det och vilka tankar dök upp när du interagerade med roboten?

• Was there anything specific you reacted to during the test, both negative and
positive?
Var det något specifikt du reagerade på under testet, både negativt och positivt?

• Was there anything missing for you to be able to complete the tasks?

Appendix F
Protocol from user test 3

Vad är det första du tänker och känner när du föreställer dig att använda en autonom leverans-robot?

Det ska vara i tid, hoppas att det inte ska ta för mycket tid. Sätter det i relation till vad man beställer.

Tänker att det ska komma till mig inte till ett postombud, ska inte ta lång tid som cykelbuden. Viktigare
med transparensen vart den är än att det står en angiven tid som kan komma att ändras. Vill kunna se
vart den är.

Kan vara skönt att maten kommer vara klar när jag väl går ned för att hämta maten, slipper stå i kö på
restaurangen.

Spännande och intressant, hur kommer det att fungera. Hur kommer ingen annan att kunna ta mina
saker. Men mer spännande.

Nyfikenhet, hur fungerar det här, hur lång tid kommer det att ta, hur kommer den ta sig in i hissen. Har
sett HUGO nu men annars hade jag funderat hur det hade fungerat och varit nyfiken på det.

Coolt, andra tanken är det pålitligt, är det säkert både för allmänheten och hur är säkerheten för
produkterna som ska levereras till mig. Är leveransen lika pålitlig som när en människa levererar. Till
en början skeptisk.

Riktigt coolt, taggad.

Hur kändes det och vilka tankar dök upp när du interagerade med roboten?

Skulle velat ha mer svar från roboten, skulle velat ha något ljus som signalerar när den är öppen/stängd
osv. Ljud hade varit bra också

Lite mycket text i appen, litar på att tekniken fungerar så lika mycket beskrivet behövs inte.
Upprepningar i slutet med att hugo åker iväg

En karta som visar adressen när man bekräftar, så man kan se vart hugo tror att man bor.

Kändes bra och enkelt att få SMS, skönt att inte behöva ha en app. HUGO dök upp utanför dörren, det
var smidigt. Kul, kändes nytt och spännande, ganska lätt att klicka sig igenom alla stegen.

Instruktionerna var väldigt lätt att följa. När HUGO väl var där så var det väldigt ‘straight forward’.
Kändes väldigt smidigt. Kommer den bara leverera en sak åt gången, kommer det bara vara mina
saker i eller kommer man behöva oroa sig för att andra tidigare leveranser kommer att ta mina saker.


Väldigt lättsamt, intuitivt och tydliga instruktioner. Bara ja eller ingenting, inte många olika val.
Väldigt positivt, enkelt, smidigt. Lätt för en icke teknisk person att följa alla stegen. ‘Agda 65 skulle klara
av det’.

Interaktionsmässigt med appen: det var enkelt att förstå, informationen var synlig och enbart det som
behövde göras var synligt på sidan.
Informationen som presenterades i web appen var tydligt och kände därför att man inte behövde leta
efter information.
Bra att visa mycket information, hellre för tydligt med information
Inte alltid självklart för användaren att man ska stänga locket.

Tyckte det var väldigt bra, var osäker på hur man skulle hitta den. Gav instruktioner på varje steg.
Väldigt enkelt att interagera med roboten.
Orolig för hur man skulle interagera med HUGO, hur hårt man skulle stänga locket bland annat, men
fick feedback på när locket var stängt

Var det något specifikt du reagerade på under testet, både negativt och positivt?

Bra med SMS


The first SMS could have been clearer, 'we have now received your order and are starting to prepare the food'; did not know exactly what the first SMS meant.
A map would have been good.
Good that the address could be changed.
The colours were nice.
Could work more on the hierarchy to make it clearer what is important in the app.

Opening HUGO is fine, but closing HUGO becomes harder with the goods and the phone in your hands.
Good that you can click through the steps easily.

No verification that it was specifically his HUGO, but did not feel it was needed either.

Would prefer not having to keep the phone out after confirming that the goods have been taken; wants to be able to put the phone in the pocket and know it is done after that.

Whether steps 2 and 3 could be merged.


You confirm that you are done when you close the lid.
Many steps to carry out with the phone and the delivery in your hands.

Depending on where you are, there may be a risk of having to search for HUGO, but there was a map that makes it easier.
When a person delivers, it is their job to find me, but with HUGO it is me who has to find HUGO.
Did not realise the sound button was for finding HUGO. In the middle of the city it is not certain you would hear the sound.
Very satisfying sound from the lid when opening and closing it.

A bit unclear what the 'Jag är klar' ('I am done') button is for.


The UI/app was very good. Lights and sound would make it simpler.


Something to make it easier to open the lid; some kind of handle would have been good.

How would you have felt about automatic opening:


- You indicate that you are at HUGO, which could open it automatically
- The same for closing once you have confirmed that you have taken the goods
Feels a bit more high tech. You only need to pick up your delivery.
Feels like it will always act up; HUGO might not respond when you tap open, and then the user will try to open the lid anyway. But it would have been preferable.

What is interesting is that it is a new way of delivering things. Became curious since I am technical and like interacting with technology, so it felt good to interact with.

A negative is that the human factor is excluded, the social part is left out. The personal encounter disappears, which can be both positive and negative.

Is there anything HUGO could do to be more personal?:
A pair of happy eyes could be painted on; there are robots in office environments that give a more personal interaction.

A lot of information at once on the second page, you are afraid of missing something. Emphasise what is important to read on that page, separate the text more clearly.
Otherwise thought it was good.

Was anything missing for you to be able to complete the tasks or use the app?

Lights and sound on HUGO.


Some way of knowing that it is the right HUGO standing in front of me.
Closing the lid was a bit difficult with the phone and the delivery in hand.

No, what was needed was there.

No, but when you press that you have collected your goods, you should get an SMS that the delivery is complete.

No, does not feel that anything is missing. The task was straightforward. The app was clear and the interactions with the service were clear.

No, thought the instructions were good, divided into tasks to carry out.

Other comments?


Had difficulty holding the phone and the package and closing the lid at the same time.

It was positive, it was fun with a robot, and it was convenient that I did not have to go out or similar; HUGO came to me.

Would you use the app again? Yes, on some particular occasion, for example ordering food if you do not want to go and buy it yourself. Would never order a bicycle courier to this location in this context.

Worked well, a comfortable way to use the phone to click through. Good as long as it keeps to the stated time.
First SMS:
Reads the information on the second page.
Is it an app or is it a web page?

Second SMS:
Observation: had no problem holding the phone and the delivery. Stayed and watched for HUGO to drive off, did not immediately press 'Jag är klar' ('I am done').

Large instructions and only one thing per page, simple.


Thinks the colours are good.

First SMS:
Reads the information.

Nice and easy to understand that the app and HUGO share the same colour theme.
Noticed that it was the HUGO green colour.

When asked about holding everything in your hands:


If you order a lot, for example two bags, it becomes harder.
A small shelf to place the delivery on while checking the phone.
Can see that it may become a problem/challenge for the user.

When asked about the number of steps:


A good number of steps.
Good with only one option, confirming to move on.
Clear, easy to read and intuitive.
Second SMS:
Puts the goods down on the ground when continuing with the interaction.

Question about the number of interactions and automatic opening of the lid:


The advantage of the current design is that the visual elements make it easier for the user to understand. To make it better, animations could be used to make it even easier to understand. There could be a button on HUGO, like the boot button in a car, for closing the lid, with a visualisation on the button showing that it will close. The downside is that this would require more technical implementation.

Thinks that three steps indicating what to do is clear. Things can always be automated by building more on the technical aspects, for example a weight sensor that detects when the goods have been taken. But thinks the way it is done now is logical and easy to understand for carrying out the task.


Second SMS:
Put the goods down on the ground.

Opening and closing manually:


Thinks it is reasonable to ask of the user; thinks it works well.

On holding the phone in your hand:


Would have put the phone in the pocket if it had not been someone else's phone being used in the test.

Sound and light:


When it is dark outside, some form of lighting would have been good.

