
End-user Evaluation of a Mobile Application Prototype for Territorial Innovation

Eliza Oliveira a, André C. Branco b, Daniel Carvalho c, Eveline Sacramento d, Oksana Tymoshchuk e, Luis Pedro f, Maria J. Antunes g, Ana M. Almeida h and Fernando Ramos i
Digital Media and Interaction Research Centre, University of Aveiro, Aveiro, Portugal

Keywords: User Experience, Mobile Application Prototype, Usability, Design and Evaluation, Community-led Initiatives, Territorial-based Innovation.

Abstract: This study is part of a larger research effort taking place under the umbrella of the CeNTER Program, an interdisciplinary project that aims to promote the development of the Centro Region of Portugal. The general contribution of this paper is the evaluation of a mobile application prototype that promotes collaboration between the various agents involved in Tourism, Health and Wellbeing. For the evaluation of the prototype, different methods were employed, including the collection of quantitative and qualitative data. Quantitative data were obtained through the combination of two User Experience evaluation tools (SUS and AttrakDiff) and from the usability metrics of effectiveness and efficiency, which are key factors related to the usability of a product. Qualitative data were obtained using the Think-aloud protocol, which allowed immediate feedback from end-users on their experience of interacting with the prototype. Although several improvements remain to be addressed, the overall end-users’ opinions show that the CeNTER application is a sustainable and timely contribution, with interesting potential to help foster community-led initiatives. The article also offers a better understanding of how to evaluate mobile applications that address similar goals.

1 INTRODUCTION

Digital media promotes the communication between local regional agents and boosts the dissemination of information regarding local products and activities for an unlimited number of people online (Encalada et al., 2017). Thus, it can facilitate collaborative processes among local citizens, valuing endogenous resources and promoting assets associated with a specific territory (Bonomi, 2017). It also allows the recreation of a “virtual proximity” among the different agents involved in the territory’s development process (Saint-Onge et al., 2012). In this context, a digital platform (mobile application) is being designed, whose primary focus is to promote collaboration between the various agents (community-led initiatives, public and private entities, networks and citizens) involved in territorial-based innovation processes in the Centro Region of Portugal (Tymoshchuk et al., 2021).

The main goal of this paper is to present the assessment of a prototype of a mobile application, designed under the scope of the CeNTER Research Program, by end-users. Bearing in mind that continuous feedback from users in the early stages of development is crucial to detect possible problems that a system may present, an initial testing phase was

a https://orcid.org/0000-0002-3518-3447
b https://orcid.org/0000-0002-6493-6938
c https://orcid.org/0000-0003-0108-8887
d https://orcid.org/0000-0003-0839-4537
e https://orcid.org/0000-0001-8054-8014
f https://orcid.org/0000-0003-1763-8433
g https://orcid.org/0000-0002-7819-4103
h https://orcid.org/0000-0002-7349-457X
i https://orcid.org/0000-0003-3405-6953

Oliveira, E., Branco, A., Carvalho, D., Sacramento, E., Tymoshchuk, O., Pedro, L., Antunes, M., Almeida, A. and Ramos, F.
End-user Evaluation of a Mobile Application Prototype for Territorial Innovation.
DOI: 10.5220/0010479104950504
In Proceedings of the 23rd International Conference on Enterprise Information Systems (ICEIS 2021) - Volume 2, pages 495-504
ISBN: 978-989-758-509-8
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved
ICEIS 2021 - 23rd International Conference on Enterprise Information Systems

carried out, with the evaluation of the prototype by experts in a laboratory context. Such tests included the appreciation of the prototype at different stages of evolution, and from various perspectives, enabling a complete assessment.

In the first phase of evaluation, two groups of specialists carried out the heuristic evaluation of the prototype. The first panel consisted of five experts in the Digital Technologies field who have knowledge and experience in developing interfaces. The second panel consisted of five experts in the fields of Tourism, Health, and Well-being, who have knowledge of the domain and are involved in different community projects. This evaluation allowed us to identify and correct 50 usability problems, providing more engaging versions of the application (Branco et al., 2021, in press).

This article presents the second phase of assessment of the mobile application prototype, carried out with potential end-users. This study is framed by a “User-Centered Design” (UCD) approach, which defines the process necessary to develop products that are easier to use and better fulfil the objectives related to usability (Fonseca et al., 2012). It is also supported by a User Experience (UX) theoretical basis, which provided significant knowledge to elaborate the mobile application prototype evaluation in the CeNTER Program scope.

The paper is organized as follows. Section 2 briefly reviews some important concepts used in this research. Section 3 addresses the adopted methodology and Section 4 presents the mobile application prototype. Section 5 presents the quantitative and qualitative results collected from the end-users’ evaluation tests. Finally, Section 6 contains the main conclusions and presents future research.

2 THEORETICAL BACKGROUND

The purpose of UCD is to define the process necessary to develop products that are easy to use and better fulfill the objectives related to usability (Fonseca et al., 2012). It implies, therefore, the active engagement of users throughout the product or service development process, in order to prevent digital systems from failing due to lack of communication between developers and users (Still and Crane, 2017). For these authors, design professionals need to follow a set of guiding principles in the process of developing a product, so they can adapt it to conform to the needs of each user. Still according to these authors, compliance with these principles makes it possible to develop a product or service that is entirely user-centered. In order to understand users' desires and needs, it is necessary to gather as much observable data as possible throughout the design process and make a comparative analysis of these data to determine what similarities are found. To do that, different evaluation methods are used, which include the collection of qualitative and quantitative data.

User Experience refers to how the end-user feels about the products created. Experience is a construct formed in the mind itself, shaped by an infinity of other factors, and is a completely subjective issue (Knight, 2019). Bernhaupt and Pirker (2013) state that the concept of UX is related to positive emotions and emotional results, such as joy, fun and pride. For Knight (2019), creating an experience is not just about how the product is designed, which structures were implemented or whether state-of-the-art technology is used. It is about how the product helps users to accomplish their tasks, achieve their goals, and how they feel when they use and get involved with the product. In the case of digital solutions, for example, intentions are turned into products, which will be used by real people.

A mobile application's usability allows it to work as expected, enabling users to achieve their goals effectively, efficiently, and pleasantly (Rogers et al., 2011), and it has also been presented as a great educational mechanism (Welfer, Silva and Kazienko, 2014). As Jones and Pu (2007) mention, usability is not a purely one-dimensional property of an interface. It consists of a subset of user experiences associated with the effectiveness, efficiency, and satisfaction with which users can perform a specific set of tasks in a given environment. In fact, usability is one of the key factors that affect software quality (Dourado and Canedo, 2018).

In this context, efficiency is seen as "the quickness with which the user’s goal can be accomplished accurately and completely and is usually a measure of time" (Rubin and Chisnell, 2008, p. 4). Effectiveness refers to "the extent to which the product behaves in the way that users expect it to and the ease with which users can use it to do what they intend" (Rubin and Chisnell, 2008, p. 4), and is usually measured quantitatively with an error rate. According to these authors, satisfaction refers to "the user’s perceptions, feelings, and opinions of the product, usually captured through both written and oral questioning" (Rubin and Chisnell, 2008, p. 4).


Therefore, interfaces with good usability are characterized by their ability to offer a practical, easy, appreciable, and satisfying user experience (Rogers, Sharp and Preece, 2011). In this sense, to certify that a product has a satisfactory level of usability, it is essential to carry out tests that provide direct information about the problems that users encounter, allowing researchers to obtain precise recommendations on what should be modified in an interface (Carroll et al., 2002; Nielsen, 1994; Nielsen, 1997; Muchagata and Ferreira, 2019).

3 METHODOLOGY

With the intent to cover the largest number of usage scenarios for each group of regional actors, such as citizens, community-led initiatives, public and private entities and networks, four different hypothetical Use Cases were prototyped. These cases correspond to common scenarios elaborated with 10 ordered tasks to be performed by the three distinct participants that composed each group.

The collected data was based on qualitative and quantitative information. Quantitative data were obtained through the combination of two UX evaluation tools, and from metrics of effectiveness and efficiency, which are key factors related to the usability of a product. The evaluation instruments were the System Usability Scale (SUS) (Martins et al., 2015) and the AttrakDiff (Hassenzahl et al., 2003). SUS is a widely used instrument for identifying usability issues of a system, while AttrakDiff also comprises emotional and hedonic aspects of a product, embracing other important UX factors in the evaluation.

For each task performed by the users, it was identified whether they finished the task successfully or with assistance. The completed tasks are those in which users accomplished their objective without any help. Tasks that required help were marked as “Needed some help” and were not considered for the computation. Based on this result, a percentage of effectiveness is calculated for each use case. This indicator was based on the Nielsen (2001) success rate usability metric. The effectiveness metric is the number of completed tasks divided by the total number of tasks (a ratio).

The efficiency metric considered the time that each evaluator took to complete the tasks. According to Nielsen (2001) and Sauro and Lewis (2016), the evaluator with the best average time is considered the reference for the use case to which he belongs. The time was measured in seconds and was counted from the user's first touch on the screen. Then, the time obtained by the best evaluator was calculated and compared to the average of the two other evaluators for each task. Tasks with the highest difference ratio between the time, in seconds, of the best participant and the average are those that have usability problems, since they present a significant variation in their execution times and, therefore, need to be reviewed.

Qualitative data were obtained through a dialogue with the evaluators, which was captured on video throughout the test.

The test session began with a presentation of the CeNTER Program, the reading and collection of a free and informed consent document, and an explanation of the test. The evaluation started after that with a free exploration of the prototype by the evaluator, followed by the dictation of each task by one of the team members. A Guided Exploration Task Guide, or Cognitive Walkthrough (Wharton et al., 1994), was used, this being an inspection method based on performing a sequence of actions to complete a task. In addition, the Think-aloud protocol (Jaspers, 2009) was also employed, which encourages users to think out loud while exploring and/or performing a set of tasks.

Afterwards, the instruments (SUS and AttrakDiff) were presented to users, fulfilling the three phases of the test: introduction, task execution and application of the instruments. All tests were recorded for later analysis by the team, in order to obtain more qualitative data through the comments of the evaluators.

Considering the dynamic evaluation process presented, the tests encompassed the following goals:
▪ Measure indicative aspects of the prototype's usability, such as efficiency, effectiveness and satisfaction;
▪ Collect other important UX factors, such as hedonic qualities and an overall perception regarding the interface’s look and feel;
▪ Verify the acceptance of the CeNTER prototype concept;
▪ Gather suggestions for improvement.

The evaluation sessions occurred in October and November of 2020, in locations and at times that varied according to the preference of each evaluator. Some tests were carried out at the University of Aveiro, while others took place at the participants' institution or even at their residence.

The evaluations were carried out individually, with evaluators who met the inclusion criteria within the different agents in the territory. A total of 12 tests


were accomplished. The researchers defined four Use Cases (UC): UC1 - Community-led Initiatives - involved evaluators representing community-based initiatives in the Centro Region of Portugal; UC2 - Public Entities - tests were carried out with representatives of City Councils, Health Centers, and Parish Councils in the Centro Region of Portugal; UC3 - Citizens - people gathered as an individual participation; and UC4 - Networks - tests were performed with representatives of the Networks. This study's participants represented different profiles in terms of education, age, gender, and role performed in society, presenting distinct learning curves concerning the use of digital technologies.

Finally, after two months of testing, the results of the UX assessment instruments were verified for data analysis. In parallel, qualitative data obtained from the careful observation of the videos were gathered, collecting comments and suggestions from the evaluators during the test.

3.1 Use Cases

This section presents the use cases in detail. Each use case was composed of a sequence of 10 pre-established tasks proposed to the participants.

Use Case 1 (Community-led Initiatives) encompassed the following tasks: (i) See examples of higher-ranking events; (ii) Add a new event; (iii) Select a specific date in the register; (iv) Request a specific volunteer in the event register; (v) End registration (detailed event screen appears); (vi) Share the event on Facebook; (vii) See on the map the location of the event; (viii) Check on the map if there are events nearby; (ix) See settings/configurations; (x) Change user preferences.

Use Case 2 (Public Entities) implied the following tasks: (i) Search initiatives that are happening in a certain place; (ii) Read and participate in an initiative; (iii) Identify the organization that organizes this initiative; (iv) In this initiative, browse the existing events (identify the place, date and time of the event); (v) Browse partners for this event; (vi) Request to be an event partner; (vii) Go back to the home screen; (viii) Create a new resource offering; (ix) In the definitions, see initiatives created by you; (x) Open an initiative created by you and change its location.

Use Case 3 (Individual Participation) presented the following tasks to be accomplished: (i) Search events occurring in a certain place; (ii) Search the classification of an event; (iii) Participate in an event; (iv) Create a profile (choose the option to register yourself); (v) Save an event; (vi) On the home page, consult and delete an event that has already taken place; (vii) Browse the notifications; (viii) Contact the organizers of a given event to clarify a doubt by email; (ix) Ask to be a volunteer; and (x) Consult the ideas section and insert an idea.

Finally, the fourth and final Use Case (Networks) requested the realization of the following tasks: (i) Add an initiative; (ii) Request a resource; (iii) Request partners; (iv) Consult events on the agenda; (v) Change user preferences; (vi) See on the map the volunteers available in a geographic area; (vii) Consult information about a volunteer; (viii) Contact a volunteer; (ix) Comment on an idea; (x) Consult the participations of the user.

The Use Cases were elaborated by the CeNTER team, taking into consideration the results of previous research (Silva et al., 2020) that allowed the identification of the potential regional agents highly involved in territorial innovation. Therefore, the outcomes achieved in this study may help to identify whether the CeNTER application provides the relevant functionalities for territorial development.

4 PROTOTYPE

A mobile application is currently under development and its main objective is to encourage interactions among local agents, to facilitate communication and collaboration processes, to benefit from existing mediation strategies, and to encourage the joint creation of new ideas and activities. This effort is being developed using the Principle software, which allows the development of a medium-fidelity prototype (Oliveira et al., 2020).

As shown in Figure 1, the main screen of the application presents a grid with six primary tabs: initiatives, events, entities, volunteers, resources, and highlights, which act as starting points in the application. When opening a tab, the user finds the information displayed in a carousel mode, with cards representing the different units of content. These cards have essential information (e.g., image, date and time, location) and can be manipulated with gestures, such as swiping (e.g., discard or save as favorites). Further, when a card is presented, different actions are possible, such as viewing the element on the map, adding a new element and making specific searches within each tab.

The prototype header presents agenda features, search tools across the platform, and access to application settings. The menu in the footer includes other functionality options, such as accessing the user profile and ideas, visiting saved items, viewing


notifications and general exploration on the map. The navigation in the application is done with a minimum number of gestures.

Besides that, the CeNTER mobile application has a small tutorial that aims to help anyone easily understand how to interact with the platform.

Figure 1: Screen samples from the CeNTER Prototype: Main screen and map screen.

5 DISCUSSION OF RESULTS

This section presents the main results obtained through the tests carried out with potential end-users, which provided relevant quantitative and qualitative results regarding instrumental and non-instrumental characteristics of the medium-fidelity prototype.

5.1 Results from Effectiveness and Efficiency

This section presents the results of effectiveness and efficiency tests with potential end-users of the CeNTER prototype. The usability metrics of effectiveness (whether the user performed the task, with or without help, or did not perform it) and efficiency (time of execution of each task) provided cues on: how intuitive the design is; how frequently errors were committed while performing a specific task or action; and the required learning curve to use the platform. The effectiveness usability metric measured each user's success rate in performing 10 tasks, totalling 30 tasks performed in each use case. The results can be seen in Table 1.

On average, an effectiveness rate of 87% was obtained. However, it is important to highlight the lowest and highest effectiveness indexes obtained in the use cases, being 80% for use case 2 and 97% for use case 4. According to the metrics pointed out by Nielsen (2001), an index above 80% is considered good, and it is not necessary to reach a higher value at the prototype stage of a project (Nielsen, 2001; Sauro and Lewis, 2016). These values correlate with the average obtained in verbal help, so the use case with the highest effectiveness index had a lower average in verbal help and vice versa, i.e., when the evaluator needed assistance to perform a task, this contributed to a decrease in the effectiveness index. Henceforth, the total average of the four use cases was computed (Table 1), making it possible to understand that approximately every evaluator needed verbal help in at least one in ten tasks.

Table 1: Global results regarding Effectiveness and Efficiency.

Use Case | Effectiveness | Verbal Help (average per end-user) | Efficiency (average per task)
UC1      | 87%           | 6.66%                              | 16 sec.
UC2      | 80%           | 20%                                | 15 sec.
UC3      | 83%           | 16.66%                             | 18 sec.
UC4      | 97%           | 10%                                | 17 sec.
Average  | 87%           | 13.33%                             | 16.5 sec.

The average was made according to the number of times an end-user needed verbal help during the execution of the 10 tasks. Afterwards, in the same use case, the average obtained from all end-users was determined. Finally, the average across all use cases was calculated.

It is possible to conclude that the results obtained in the efficiency analysis were satisfactory. It is noteworthy that the efficiency metrics were obtained according to the time difference that the distinct evaluators took to perform the same task. It is also observed that the average time of execution of each task was around 16.5 seconds (Table 1), with low variation between the averages of each use case, which demonstrates high efficiency in terms of usability of the prototype.

In addition, three evaluators revealed some difficulties in carrying out tasks that required content creation (creating a profile or event with a date and time) and browsing tasks (such as finding the existing initiatives or reading the ideas’ screen and


subsequently creating a new idea). These outcomes were directly influenced by the learning curve of the users, as well as their experience in using similar mobile applications. Thus, the usability evaluation of the CeNTER application prototype provided good results in terms of learnability, effectiveness and efficiency.

5.2 Results from the SUS and AttrakDiff Instruments

The main results concerning the application of SUS and AttrakDiff in all Use Cases are shown in Table 2. The SUS results show that, in terms of usability characteristics, the prototype is at an excellent level according to the opinion of the evaluators of the first use case (85 points). According to Sauro (2011), the average System Usability Score is 68 points. In this sense, if the score is less than this value, the product probably faces usability problems, since it is under the average (Barbosa, 2019; Sauro, 2011). Therefore, a score between 80 and 90 in SUS corresponds to excellent usability (Barbosa, 2019), reflected in the case of the CeNTER mobile application prototype, with a global result of 85.83 points.

Table 2: Global results from SUS and AttrakDiff.

Use Cases | SUS (0 to 100) | PQ   | HQ-S | HQ-I | ATT
UC1       | 85             | 1.57 | 2.10 | 1.76 | 2.19
UC2       | 87.5           | 1.00 | 1.52 | 1.52 | 1.71
UC3       | 95             | 1.76 | 2.24 | 1.90 | 2.67
UC4       | 75.83          | 2.05 | 1.67 | 1.67 | 2.43
Average   | 85.83          | 1.60 | 1.88 | 1.71 | 2.23

Note: the AttrakDiff dimensions (PQ, HQ-S, HQ-I, ATT) range from -3 to 3; SUS and PQ cover instrumental qualities, while HQ-S, HQ-I and ATT cover non-instrumental qualities.

The results obtained through the SUS administration in all use cases show an overall agreement among the participants, reinforcing the value of excellence, which is between 80 and 90 points, relative to the usability criteria measured by this evaluation instrument within the CeNTER platform.

Although the value related to SUS reinforces a high usability index, the value of the Pragmatic Quality dimension (PQ), which encompasses aspects regarding usability and product functionality, obtained lower results (1.60), with oscillations between the Use Cases. The highest value was achieved in Use Case 4, while the lowest scores were given by the participants of Use Case 2. However, the global average value remained positive (on a scale from -3 to 3), so it is possible to consider that the prototype has a favourable index in the criteria of effectiveness, efficiency, satisfaction and ease of learning.

In regard to the results obtained from the AttrakDiff scale, the average values of the four dimensions were calculated, all of which had high scores, it being possible to achieve scores between -3 and 3. The highest general value is related to the prototype's aesthetics, "ATT" - Attractiveness (2.23), followed by the Hedonic Quality - Stimulation (HQ-S, 1.88), which is strictly related to the desire to understand and develop skills for using the product. Afterwards, the next highest score is from the Hedonic Quality - Identification (HQ-I, 1.71), which covers attributes alluding to the level of user identification with the system. Finally, as previously said, the lowest score corresponds to the Pragmatic Quality (PQ, 1.60), which is correlated with usability issues.

Figure 2 shows the average values obtained in the four dimensions, highlighting the aspect related to the prototype's aesthetics (ATT), which presented, in agreement with the previous results, a value significantly higher than the other dimensions.

Figure 2: Diagram of the global average of values of the four dimensions of AttrakDiff.

Also, in a coherent way with the rest of the results, HQ-S obtained a higher value than HQ-I, showing that the aspects referring to the desire to understand and develop skills for using the product are more

evident than those related to the level of user SUS, which refers to the ease of use, have shown to
identification with the system. be substantially high.
Figure 3 shows that the pair of words which Concerning Figure 4, the general results achieved
received the negative result in AttrakDiff was the from AttrakDiff positioned the confidence rectangle
topic “cheap - premium”, in QH-I dimension, with no in the “desirable” quadrant, assuming the perceptions
other negative average values among all items in the of PQ (1.60) and QH (1.80). According to the
other dimensions. Attrakdiff methodology, the smaller the difference
between the two rectangles, the greater is the
confidence level of the results, indicating that
participants maintained good affinity among their
responses. Moreover, in the CeNTER scope, the
confidence rectangle extends within the “desired” or
“desired” area. Therefore, it can be clearly classified
as a desirable product. This value, as well as all the
other graphs presented above, were generated
according to the AttrakDiff methodology.

Figure 3: Diagram of the description of word pairs. Global


average of measured items. Figure 4: Confidence rectangles of the evaluation with end-
users.
However, it is important to emphasise that, under
the CeNTER project, none of the opposites in “cheap An accurate analysis of the quantitative results in
- premium" has an essentially negative connotation. each use case, separately, shows that the participants
Thus, a quality of “cheap” might mean that the in UC2 had more difficulty in performing the tasks,
Platform is accessible to all social fringes, which considering that this was the group that most needed
consolidates the intention to democratize digital verbal help. In the meantime, the results obtained
technologies in all strata of the population. Likewise, from AttrakDiff showed lowest values scored by
“cheap” can refer to a low complexity of the platform, participants. In this sense, the global results indicate
indicating the desired ease of use within the scope of that the UC2 tasks (public entities) were challenging
CeNTER. This point of view is consistent with the for the local agents, reflecting the results of the
fact that the punctuation for the “simple - evaluation of AttrakDiff. Additionally, it is
complicated” opposites are significantly more noteworthy that the UC4 presented higher scores in
inclined towards the simple than for its reverse, and effectiveness, while the UC2 had better values in
with the fact that the usability score measured by terms of efficiency. Regarding SUS and AttrakDiff,
the higher average ponctuation was given by the end-

501
ICEIS 2021 - 23rd International Conference on Enterprise Information Systems

users of the UC3, showing a higher level of satisfaction concerning the CeNTER prototype.

5.3 Qualitative Results

The Think-aloud protocol was used to obtain immediate feedback from end-users about their experience of interacting with the prototype. This method allowed a qualitative evaluation of the prototype based on the users' verbal comments. The inputs were grouped by screen, so that user comments could be related to the main screens tested. Table 3 shows that, among the screens that received the largest number of inputs, the main screen stood out (13/46), as did the details screen (9/46).

Table 3: Inputs according to the prototype interface.

Interfaces (Nº of inputs):
Tutorial: 1
Main screen: 13
Profile: 5
Register of an initiative or event: 1
Ideas: 4
Maps: 4
Agenda: 2
Saved: 2
Notifications: 0
Details of an event / initiative / entity: 9
Others: 5
Total: 46

Forty-six (46) inputs were reported during free exploration by the end-users: 36 were considered by the team as suggestions for platform improvements, seven as prototype usability errors, and three as both suggestions for improvement and usability errors.

Usability errors correspond to inconsistencies in the use of the interface, such as the lack of feedback on actions, the need for more than three steps to return from one of the screens to the home screen, and the difficulty of moving the cards on the carousel. It should be noted that some of these problems were related to the limitations of the software used in the prototyping process, for example the difficulties that evaluators felt when moving the cards.

Improvement suggestions included the possibility of changing the main screen according to user preferences, applying search filters to the schedule, and replacing the title "Ideas" with a more dynamic one, such as "Get your idea moving". The largest group of improvement suggestions concerned new features (10/39), for example: "include supply/demand for an employee, in addition to a volunteer"; "generate a certificate of participation for volunteers"; "be able to invite participants who have taken part in previous events". These suggestions are valuable for the development of this mobile application and of future digital solutions aimed at community initiatives.

In addition to these inputs, 34 positive comments on the mobile application under development were collected. These comments showed that users fully understood the purpose and objectives of the platform. For example: "I liked the fact that I could cross similar initiatives, access it, either by map or by theme. I liked the possibility of creating a synergy between the partners"; "Many people want to help and often do not know how. Moreover, there are always entities that have initiatives and want to share"; "The synergies created within the application allow us to create new forms of interactions, which current applications would not yet allow".

6 CONCLUSIONS

Usability tests proved to be an effective way to acquire information that contributes significantly to improving the interface of a future mobile application, thus favouring the user experience. The user-centered design approach, used in all stages of the CeNTER prototype development, contributed strongly to the understanding of the users' needs.

The application of the Cognitive Walkthrough method and the Think-aloud protocol, together with the SUS and AttrakDiff questionnaires, allowed the integration of quantitative and qualitative assessment approaches in this study. The different methods of analysis, combined with usability metrics, provided a multifaceted understanding of what local agents expected and of how they intend to interact with the mobile application during their community and/or professional activities. The instrumental and non-instrumental characteristics of the prototype provided information beyond the usability data, yielding results on
aesthetic and emotional aspects related to the platform.

It is important to highlight that the number of usability problems identified in the end-user tests decreased significantly compared to the tests with experts. As previously mentioned, 50 problems were identified in the usability tests with experts, and the vast majority of those were corrected shortly afterwards; only seven usability problems were then identified across the 12 tests applied with end-users.

The analysis of the data collected indicates good usability and high levels of acceptance of, and satisfaction with, the developed prototype among the different local agents. This tends to demonstrate the relevance of the end-user-centered approach to the development of tools dedicated to territorial-based innovation. The sample was composed of three evaluators per use case, and the prototype showed a good efficiency index, obtaining a score of 80% or more in all use cases.

The difficulties that some evaluators had in carrying out tasks requiring content creation and consultation were influenced by their learning curve and by their experience with similar mobile applications. Although the design of this study does not allow us to generalize its results, they reflect the user experience of the previously selected regional agents and provide evidence of what matters in a mobile application for territorial-based innovation.

This study had limitations related to the relatively small sample size, which restricted the generalization of the results; the sample was, however, sufficient for the execution of usability tests. Another restriction is that the Principle software does not support some types of interaction, such as the pinch gesture (pinching to zoom in and out on the map of a mobile touchscreen application) or the insertion of personalized data by the user (the prototype only simulates the information entered by the user), and it limits gestures such as drag and drop (the same graphic object cannot be used to perform two different drag functions).

However, these limitations did not impair a good user experience evaluation. The main positive results from the evaluation tools are a good indicator of acceptance and of a pleasant experience in using the prototype. User tests positively highlighted several platform features, such as sharing resources and volunteers, collaboratively developing events, and sharing ideas and creating new initiatives based on those ideas. Many users also reported that these are innovative features, which increase the relevance of the CeNTER platform as an original and useful option.

As a final conclusion, it was possible to learn several important lessons throughout this collaborative process, which can be useful for other researchers developing digital solutions in the same subject area: i) include community initiatives in the entire design process, to better tailor the solution to their needs; ii) be flexible, to meet the preferences of the community and of the stakeholders; iii) incorporate mixed methods in design and assessment tests, as they provide valuable information for producing an acceptable and well-designed solution.

As future work, we aim to develop a fully functional platform, allowing experimentation and evaluation in the context of community-led initiatives. We also intend to study the adoption, use and impact of the application in promoting processes of articulation and approximation between local agents, as well as in the construction and diffusion of knowledge and innovations.

ACKNOWLEDGEMENTS

This article was developed under the support of the Research Program "CeNTER - Community-led Networks for Territorial Innovation" Integrated Research Program (CENTRO-01-0145-FEDER-000002), funded by Programa Operacional Regional do Centro (CENTRO 2020), PT2020.

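A note for readers replicating this evaluation set-up: the SUS scores discussed above follow Brooke's standard scoring rule, in which each odd-numbered item contributes (rating - 1), each even-numbered item contributes (5 - rating), and the 0-40 raw sum is multiplied by 2.5 to give a 0-100 score. The sketch below is illustrative only and is not part of the study's own tooling; the `sus_score` and `success_rate` names are ours, and the completion-rate helper is merely the kind of simple effectiveness calculation often used as an efficiency index.

```python
# Illustrative scoring helpers (not the study's own tooling).

def sus_score(ratings):
    """Standard SUS score for ten 1-5 ratings (item 1 first).

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The 0-40 raw sum is rescaled to 0-100 by multiplying by 2.5.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings between 1 and 5")
    raw = sum((r - 1) if i % 2 == 1 else (5 - r)
              for i, r in enumerate(ratings, start=1))
    return raw * 2.5


def success_rate(completed, attempted):
    """Percentage of tasks completed: a simple task-success index."""
    return 100.0 * completed / attempted


print(sus_score([3] * 10))   # neutral answers give the midpoint: 50.0
print(success_rate(10, 12))  # e.g. 10 of 12 tasks completed
```

In the SUS literature, a score above roughly 68 is commonly read as above-average usability, which is one way to interpret the acceptance levels reported for the CeNTER prototype.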

