Trends and Innovations in Information Systems and Technologies
Volume 3
Advances in Intelligent Systems and Computing
Volume 1161
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
** Indexing: The books of this series are submitted to ISI Proceedings,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **
Editors
Álvaro Rocha
Departamento de Engenharia Informática
Universidade de Coimbra
Coimbra, Portugal

Hojjat Adeli
College of Engineering
The Ohio State University
Columbus, OH, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book contains a selection of papers accepted for presentation and discussion at
the 2020 World Conference on Information Systems and Technologies
(WorldCIST’20). This conference had the support of the IEEE Systems, Man, and
Cybernetics Society (IEEE SMC), Iberian Association for Information Systems and
Technologies/Associação Ibérica de Sistemas e Tecnologias de Informação
(AISTI), Global Institute for IT Management (GIIM), University of Montenegro,
Mediterranean University and Faculty for Business in Tourism of Budva. It took
place at Budva, Montenegro, during 7–10 April 2020.
The World Conference on Information Systems and Technologies (WorldCIST)
is a global forum for researchers and practitioners to present and discuss recent
results and innovations, current trends, professional experiences and challenges of
modern information systems and technologies research, technological development
and applications. One of its main aims is to strengthen the drive towards a holistic
symbiosis between academy, society and industry. WorldCIST’20 built on the
successes of WorldCIST’13 held at Olhão, Algarve, Portugal; WorldCIST’14 held
at Funchal, Madeira, Portugal; WorldCIST’15 held at São Miguel, Azores,
Portugal; WorldCIST’16 held at Recife, Pernambuco, Brazil; WorldCIST’17 held
at Porto Santo, Madeira, Portugal; WorldCIST’18 held at Naples, Italy and
WorldCIST’19 which took place at La Toja, Spain.
The program committee of WorldCIST’20 was composed of a multidisciplinary
group of almost 300 experts who are intimately concerned with information
systems and technologies. They had the responsibility of evaluating, in a
‘blind review’ process, the papers received for each of the main themes proposed
for the conference: (A) Information and Knowledge Management;
(B) Organizational Models and Information Systems; (C) Software and Systems
Modelling; (D) Software Systems, Architectures, Applications and Tools;
(E) Multimedia Systems and Applications; (F) Computer Networks, Mobility and
Pervasive Systems; (G) Intelligent and Decision Support Systems; (H) Big Data
Analytics and Applications; (I) Human–Computer Interaction; (J) Ethics,
Computers and Security; (K) Health Informatics; (L) Information Technologies in
Conference
General Chair
Álvaro Rocha University of Coimbra, Portugal
Co-chairs
Hojjat Adeli The Ohio State University, USA
Luis Paulo Reis University of Porto, Portugal
Sandra Costanzo University of Calabria, Italy
Advisory Committee
Ana Maria Correia (Chair) University of Sheffield, UK
Benjamin Lev Drexel University, USA
Chatura Ranaweera Wilfrid Laurier University, Canada
Chris Kimble KEDGE Business School and MRM, UM2,
Montpellier, France
Erik Bohlin Chalmers University of Technology, Sweden
Eva Onaindia Polytechnical University of Valencia, Spain
Gintautas Dzemyda Vilnius University, Lithuania
Program Committee
Abdul Rauf RISE SICS, Sweden
Adnan Mahmood Waterford Institute of Technology, Ireland
Adriana Peña Pérez Negrón Universidad de Guadalajara, Mexico
Adriani Besimi South East European University, Macedonia
Agostinho Sousa Pinto Polytechnic of Porto, Portugal
Ahmed El Oualkadi Abdelmalek Essaadi University, Morocco
Ahmed Rafea American University in Cairo, Egypt
Alberto Freitas FMUP, University of Porto, Portugal
Aleksandra Labus University of Belgrade, Serbia
Alexandru Vulpe University Politehnica of Bucharest, Romania
Ali Idri ENSIAS, University Mohammed V, Morocco
Amélia Badica University of Craiova, Romania
Amélia Cristina Ferreira Silva Polytechnic of Porto, Portugal
Almir Souza Silva Neto IFMA, Brazil
Amit Shelef Sapir Academic College, Israel
Ana Isabel Martins University of Aveiro, Portugal
Ana Luis University of Coimbra, Portugal
Anabela Tereso University of Minho, Portugal
Anacleto Correia CINAV, Portugal
Anca Alexandra Purcarea University Politehnica of Bucharest, Romania
Andjela Draganic University of Montenegro, Montenegro
Aneta Polewko-Klim University of Białystok, Institute of Informatics,
Poland
Aneta Poniszewska-Maranda Lodz University of Technology, Poland
Angeles Quezada Instituto Tecnologico de Tijuana, Mexico
Health Informatics
A Product and Service Concept Proposal to Improve the Monitoring
of Citizens’ Health in Society at Large . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Luís Fonseca, João Barroso, Miguel Araújo, Rui Frazão,
and Manuel Au-Yong-Oliveira
Artificial Neural Networks Interpretation Using LIME for Breast
Cancer Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Hajar Hakkoum, Ali Idri, and Ibtissam Abnane
Energy Efficiency and Usability of Web-Based Personal
Health Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
José Alberto García-Berná, Sofia Ouhbi, José Luis Fernández-Alemán,
Juan Manuel Carrillo-de-Gea, and Joaquín Nicolás
A Complete Prenatal Solution for a Reproductive Health
Unit in Morocco . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Mariam Bachiri, Ali Idri, Taoufik Rachad, Hassan Alami,
and Leanne M. Redman
Machine Learning and Image Processing for Breast Cancer:
A Systematic Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Hasnae Zerouaoui, Ali Idri, and Khalid El Asnaoui
A Definition of a Coaching Plan to Guide Patients with Chronic
Obstructive Respiratory Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Diogo Martinho, Ana Vieira, João Carneiro, Constantino Martins,
Ana Almeida, and Goreti Marreiros
Reviewing Data Analytics Techniques in Breast Cancer Treatment . . . . 65
Mahmoud Ezzat and Ali Idri
Abstract. Nowadays wearable devices are very popular. The reason for that is
the sudden reduction in pricing and the increase in functionalities. Healthcare
services have been greatly benefiting from the emergence of these devices since
they can collect vital signs and help healthcare professionals to easily monitor
patients. Medical wellness, prevention, diagnosis, treatment and monitoring
services are the main focus of Healthcare applications. Some companies have
already invested in this market and we present some of them and their strategies.
Furthermore, we also conducted a group interview with Altice Labs in order to
better understand the critical points and challenges they encountered while
developing and maintaining their service. With the purpose of comprehending
users’ receptiveness to mHealth systems (mobile health systems which users
wear - wearables) and their opinion about sharing data, we also created a
questionnaire (which had 114 valid responses). Based on the research done we
propose a different approach. In our product and service concept solution, which
we share herein, we consider people of all ages to be targets for the
product/service and, beyond that, we consider the use of machine learning
techniques to extract knowledge from the information gathered. Finally, we
discuss the advantages and drawbacks of this kind of system, showing our
critical point of view.
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 3–14, 2020.
https://doi.org/10.1007/978-3-030-45697-9_1
2 Literature Review
In this section we shall describe what has already been researched about wearable
devices and medical data processing and we shall explain some mHealth systems
implemented by enterprises or organizations that are similar to our proposed system.
The literature review will be the basis of the development of a possible solution to
improve the quality of life of the population while at the same time reducing healthcare
costs.
applications so that medical staff, trusted relatives and even the patient might access
that data.
These devices provide high-density data [14] (e.g., 10–500 times per second) which
can be processed using algorithms emerging in the machine learning field.
Machine learning is one of the many areas of artificial intelligence oriented towards
the study, processing and analysis of large amounts of data with the objective of
developing computational models capable of automatic learning [15]. These models
are able to detect relationships in the data that would be difficult for humans to
perceive. Algorithms that make classification decisions depend heavily on the
quantity of data available for learning, so there is an opportunity to use data
from mHealth devices.
It is possible to draw conclusions about individuals such as type of physical
activity, level of stress, or intensity of pain [14]. For physical activity classification,
k-nearest neighbors (KNN) and Bayes techniques have been used with either a single
accelerometer or multiple types of sensors; artificial neural networks (ANN) and
decision tree modelling are used to recognize these activities by fusing data from
accelerometers and GPS [16]. As fall detection is a major concern for elderly people,
support vector machines (SVM) have been studied to detect those events, as well as
for gesture classification [16].
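As an illustration of the activity-classification approaches cited above (our own minimal sketch, not code from [16]), KNN can separate activities from simple accelerometer features. The feature values and class centroids below are synthetic, invented purely for demonstration:

```python
# Sketch: classifying activity windows by [mean, std] of accelerometer magnitude.
# All numbers are synthetic and illustrative, not real sensor data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic feature windows for three activities (20 windows each)
walking = rng.normal([1.1, 0.40], 0.05, size=(20, 2))
resting = rng.normal([1.0, 0.02], 0.01, size=(20, 2))
running = rng.normal([1.8, 0.90], 0.10, size=(20, 2))

X = np.vstack([walking, resting, running])
y = ["walking"] * 20 + ["resting"] * 20 + ["running"] * 20

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# Labels for a resting-like and a running-like window
print(clf.predict([[1.0, 0.02], [1.8, 0.9]]))
```

In a real system the features would be computed over sliding windows of the raw signal, and the labels would come from annotated recordings.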
By using different data sources, it is possible to improve the validity of the esti-
mates with data fusion. For example, the accelerometer information can improve the
interpretation of the raw electrocardiogram (ECG) data while a person is exercising,
because the ECG signal is strongly affected by motion. Improving the interpretation
of data by combining different signs can reduce incorrect clinical evaluations that
lead to false alarms. In practice, fusion can be really challenging because the spatial
and temporal resolutions of different data sources can differ [14].
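A toy sketch of this fusion idea: accelerometer context can qualify an ECG-derived heart rate before raising an alarm. The function name and thresholds here are our own illustrative assumptions, not taken from [14]:

```python
# Sketch of sensor fusion for alarm suppression: a high heart rate is only
# suspicious when the accelerometer says the person is at rest.
# The 120 bpm and 0.2 g thresholds are invented for illustration.
def tachycardia_alarm(heart_rate_bpm: float, accel_activity_g: float) -> bool:
    """Raise an alarm only when the ECG-derived heart rate is high while
    the accelerometer indicates the wearer is at rest."""
    return heart_rate_bpm > 120 and accel_activity_g < 0.2

print(tachycardia_alarm(140, 1.5))   # exercising: elevated HR is expected -> False
print(tachycardia_alarm(140, 0.05))  # at rest: elevated HR is suspicious -> True
```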
In response to a specific use case, in this case diabetes, Medtronic created
MiniMed, a device with the ability to automatically adjust basal insulin based on the patient’s
CGM readings. It also keeps record of the last 90 days of pump history and generates
reports [20].
In an organizational context, solutions are starting to be implemented, for example
in hospitals, clinics and private entities. One example occurred from 2009 to
2011 at London’s Chelsea and Westminster Hospital, which invested in an e-Health
pilot project consisting of recording and storing patients’ activities on a platform so
that doctors can easily access users’ data to analyze it and reach conclusions [13].
Altice developed a pilot project where the intended users are the elderly. Smart
Assisted Living (SmartAL) is a social and health support solution that includes the
telemonitoring of vital signs, such as weight, blood pressure, pulse rate, accessible via
TV with an interactive IPTV service, Android app or web browser. This system makes
it possible to configure threshold values of biometric data to emit alerts to healthcare
professionals, family and friends [3].
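The threshold-alert mechanism described for SmartAL can be sketched as follows; the vital-sign names and threshold values are illustrative assumptions on our part, not Altice Labs' actual configuration:

```python
# Sketch of configurable threshold alerts on biometric data, as described
# for SmartAL. Vital-sign names and [low, high] intervals are assumed.
THRESHOLDS = {
    "weight_kg": (40.0, 120.0),
    "systolic_mmHg": (90.0, 140.0),
    "pulse_bpm": (50.0, 100.0),
}

def check_vitals(reading: dict) -> list:
    """Return the vital signs in `reading` that fall outside their
    configured interval, so alerts can be sent to carers."""
    alerts = []
    for name, value in reading.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

print(check_vitals({"systolic_mmHg": 155.0, "pulse_bpm": 72.0}))  # -> ['systolic_mmHg']
```

In a deployed system the thresholds would be configured per patient, and an alert would trigger a notification to healthcare professionals, family or friends.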
On the one hand, most of these organizations take advantage of the
increasing ease of access to wearable devices, mostly for fitness and lifestyle purposes.
On the other hand, in many cases the integration of the elderly in this type of systems
has been a common concern.
3 Methodology
Based on the literature review section, the strategies of some organizations that develop
mHealth systems were analyzed. A group interview with Altice Labs was scheduled in
order to gain more knowledge about this kind of system.
Before the group interview with the company, we created an interview script
aimed at understanding the main challenges they encountered while developing
the product, what its weaknesses were, and whether the product had room to
improve as technology advances. In other words, we intended to do a
high-level SWOT analysis of their Assisted Living product. The group interview was
performed on the 17th of October 2019, for around thirty minutes, in Aveiro, using an
interview script, and involved two employees of the firm (both from the product
development department).
Notes were taken on the topics discussed. Authorization to use the material
discussed was addressed at the end of the group interview; some material was
deemed unusable by the research group. The interview led to important
conclusions on the product concept developed during this research study.
During the group interview, we discussed aspects such as their system’s features,
people’s receptivity to it and other issues that came up. Further to that, other
approaches and scenarios where their system could be integrated were discussed with
the objective of contributing to a better solution.
Ideally, such a system should be capable of answering all users’ needs.
We also performed a questionnaire with the purpose of perceiving citizens’
knowledge of this kind of device, their opinion about sharing personal data and their
receptiveness to this kind of system. In particular, we wanted to understand
whether the younger generation would be receptive to using an mHealth product,
thus expanding the market and increasing the target audience. In order to reach
a group of heterogeneous participants concerning age level, we distributed the ques-
tionnaire at a high school to receive feedback from younger people, in a nursing home
to get responses from older people, while we also visited a company to get responses
from middle-aged people.
In the survey, 114 people agreed to participate, whereby 51.8% were male and
48.2% female. With regards to the age group 14.9% were up to 17 years old, 40.4%
were between 18 and 35 years old, 28.9% were between 36 and 60 years old and the
remaining 15.8% were more than 60 years old. As concerns academic qualifications,
14.9% had attended primary school, 3.5% had attended the 2nd cycle of school, 2.6%
had attended the 3rd cycle of school, 24.6% had been to high school, 27.2% had a
licentiate degree, 26.3% had a Master’s degree and the remaining 0.9% had a PhD
degree. It was important to discriminate the age of the participants as well as their
academic qualifications in order to understand if they had an influence on their choices.
The questionnaire contained the following questions:
• Do you know any kind of device for measuring vital signs (example: bracelets and
smart watches, chest bands, etc.)?
• Would you be willing to use one of these devices to monitor your vital signs?
• Which biometric signs are you most interested in monitoring?
• Would you be willing to send your data to an outside entity and thereby benefit
from a closer monitoring of your health?
• Are you aware of the General Data Protection Regulation (GDPR)?
• Would you be willing to pay for a service that uses the information of your vital
signs, and thus enjoy a closer monitoring of your health?
4 Results
In this section, we shall present the conclusions obtained from the group interview with
Altice Labs as well as the results from the questionnaire.
4.1 Group Interview
According to Altice Labs’ research, vital signs alone are not enough to predict
medical conditions. For a precise diagnosis it is necessary to have complementary
exams, such as blood analyses and sonographies, among others.
Another point mentioned was that medical staff are usually not in favour of giving a
computer system the possibility of diagnosis, as they believe that the human factor is
extremely important to infer the final decision based on data semantics.
One of the main points that we gathered concerning their solution is that it is
necessary to manually insert users’ vital signs on the SmartAL platform. We believe
that a significant improvement would be the automatic collection of these signs using
mHealth devices.
4.2 Survey
In this section we shall present the conclusions from the questionnaire referred to
above (114 responses).
Considering the global results obtained, it was found that:
• Around 75% of the participants know about wearable devices;
• Although 25% do not know about wearable devices, only 16.7% would not use
them, i.e., 83.3% would accept to use these devices for monitoring purposes;
• 70.2% would be willing to share their data;
• 72.8% of the participants have knowledge about laws on data protection and pri-
vacy (GDPR);
• Only 21.1% would pay and around 49% of all participants are undecided;
• In general, people are mostly interested in measuring their heart rate (74.6%) and
their stress levels (64.9%).
Analyzing each age group, the following was observed:
• There is a tendency for the older generation to know less about the monitoring
devices (see Fig. 1);
• Although the elderly do not know about these devices, they would be willing to use
them. In the other younger age groups people would generally use them (85%–
95%);
• Few young and elderly people know about GDPR (less than 50%), however people
between 18 and 60 years are aware of it (around 85%);
• In general, those who know about GDPR would not be comfortable sharing their
data; on the contrary, those who do not know about data privacy regulations would
not mind sharing;
• No one under 17 years of age answered they would pay for the service. The interval
of people from 18–60 years of age had a similar relative percentage, where 30%
said they would not pay, while 20% would, and the remaining 50% are undecided.
In the elderly group, only 20% are undecided and 50% of them would definitely be
willing to pay.
5 Proposed Solution
In accordance with the questionnaire results, the literature review and the group
interview with Altice Labs, we present a possible solution.
According to the questionnaire results, it would be worthwhile to apply this kind of
system to the whole population, as all age groups, and not only the elderly, would be
willing to use wearable devices for monitoring. Although most systems are mainly
focused on elderly people, we noticed that most of this group answered that they are
not aware of this kind of device.
It is crucial that users have wearables that continuously send their biomedical
signs, such as heart rate (pulse), stress levels and body temperature, among
others, to a data management platform hosted in the cloud. The use of wearable
devices during long periods of time produces a huge volume of data. That data can be
used by algorithms to detect long term patterns and notice when the pattern changes,
raising alerts to healthcare entities.
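One simple way such pattern-change detection could work (our own assumption; the paper does not specify an algorithm) is a rolling z-score over the incoming readings, alerting when a new value deviates strongly from the recent baseline:

```python
# Sketch: flag vital-sign readings that break the long-term pattern.
# Window size and z-score limit are illustrative choices.
from collections import deque
from statistics import mean, stdev

def pattern_alerts(readings, window=30, z_limit=3.0):
    """Yield indices of readings that deviate from the rolling baseline
    by more than z_limit standard deviations."""
    baseline = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > z_limit:
                yield i  # candidate alert for healthcare entities
        baseline.append(value)

# 60 days of a stable resting heart rate, then a sustained jump:
series = [60.0 + (i % 3) for i in range(60)] + [95.0, 96.0, 97.0]
print(list(pattern_alerts(series)))  # -> [60, 61, 62]
```

A production system would of course use more robust statistics and per-signal models, but the principle of learning a long-term pattern and alerting on change is the same.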
To improve diagnostics and the quality of the data, all exams done by patients such
as blood collection, ultrasounds, among others, should be stored.
Hospital staff have the responsibility to analyze the received alert and to make a
decision. If they identify a possible problem, they should notify the patient to come
to the hospital for more precise exams and consequently verify whether something
is, in fact, wrong with the patient.
Briefly, the aim of this system is to transform data into information and that
information into knowledge, which may be used as a decision support system by the
medical community.
The ecosystem explained before is illustrated in Fig. 2.
Regarding the willingness to pay for this kind of service, around 49% of all
participants are undecided and 30% would not pay, so we believe more concrete
features that tangibly improve quality of life would be necessary for these people
to change their position.
This kind of system can have several drawbacks. As stated before, the collected
data is stored in a cloud platform, which normally replicates the data, to better protect
it, which can lead to considerable expenses, because it is necessary to have a big
infrastructure to store this data. Due to the sensitivity associated with this kind of data,
it is not possible to use existing cloud service providers, so it would be necessary to
build one from scratch.
The use of that data for other purposes is also a major concern because some
companies could take advantage of this information, for example by selling it or
customizing product advertisements. We further noticed that participants who know
about GDPR tend not to feel comfortable about sharing their data. Despite seeming
contradictory, this can be explained by their knowledge about related risks.
Despite the disadvantages, the information could also be relevant for research
purposes, provided that participants’ data are anonymized.
6 Conclusion
In this paper, we started with a review of the literature about devices, data processing
and existing healthcare systems. We had the opportunity to perform a group interview
with Altice Labs, which gave us insights into the challenges that must be overcome
before medical conditions can be predicted from biometric data alone. In their
perspective, medical staff would not accept full prediction by the system; rather,
they argue that these systems should only be used for
decision support. Furthermore, from the questionnaire, we can conclude that most
people would be willing to use healthcare systems. Based on these results, we dis-
cussed and proposed an approach that would be able to solve some of the existing
issues, although we also state that some disadvantages are not easily overcome.
In conclusion, we note that many solutions related to healthcare already exist,
especially for the elderly. However, some problems remain, such as the elderly’s
unawareness of wearable devices and the services related to them.
According to our research, we perceive that this solution could be applied to people
of all ages, taking advantage of the popularity of wearables since they are cheaper and
more robust than ever before. In general, people are receptive to this kind of system,
but it is evident that there are concerns about privacy issues.
In addition to the advantages for citizens, we believe that countries’ health systems
would reduce costs and improve health service quality. Furthermore, the scientific
community may benefit from this solution by using the data gathered for research in
various areas.
Acknowledgements. We would like to thank Telma Mota and Ricardo Machado from Altice
Labs for having agreed to be interviewed and for all the information provided during the group
interview. For the dissemination of the questionnaire, we would like to thank Patrícia Gonçalves
and Graça Ferraz, from the Recesinhos Social Center, for having helped in the gathering of data
from the elderly; and Ana Araújo for having helped in the gathering of data from the younger age
groups.
References
1. World Health Organization: mHealth: New horizons for health through mobile technologies.
Observatory 3, 66–71 (2011). http://www.webcitation.org/63mBxLED9
2. Machine Learning for Healthcare. https://www.mlforhc.org/. Accessed 19 Oct 2019
3. AlticeLabs: SmartAL – Smart Assisted Living. http://www.alticelabs.com/site/smartal/.
Accessed 16 Oct 2019
4. Wearable Devices: Wearable Technology and Wearable Devices: Everything You Need to
Know. http://www.wearabledevices.com/what-is-a-wearable-device/. Accessed 17 Oct 2019
5. Hänsel, K.: Wearable sensing approaches for stress recognition in everyday life. In:
Proceedings of the 2017 Workshop on MobiSys 2017 Ph.D. Forum - Ph.D. Forum 2017,
pp. 1–2 (2017). http://dl.acm.org/citation.cfm?doid=3086467.3086470. Accessed 19 Oct
2019
6. Qiu, H., Wang, X., Xie, F.: A survey on smart wearables in the application of fitness. In:
2017 IEEE 15th International Conference on Dependable, Autonomic and Secure
Computing, 15th International Conference on Pervasive Intelligence and Computing, 3rd
International Conference on Big Data Intelligence and Computing and Cyber Science and
Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 303–307 (2017). http://
ieeexplore.ieee.org/document/8328407/. Accessed 19 Oct 2019
7. Berglund, M.E., Duvall, J., Dunne, L.E.: A survey of the historical scope and current trends
of wearable technology applications. In: Proceedings of the 2016 ACM International
Symposium on Wearable Computers - ISWC 2016, pp. 40–43 (2016). http://dl.acm.org/
citation.cfm?doid=2971763.2971796. Accessed 19 Oct 2019
8. Sultan, N.: Reflective thoughts on the potential and challenges of wearable technology for
healthcare provision and medical education. Int. J. Inf. Manag. 35(5), 521–526 (2015).
https://www.sciencedirect.com/science/article/pii/S0268401215000468. Accessed 19 Oct
2019
9. Fletcher, R.R., Poh, M.-Z., Eydgahi, H.: Wearable sensors: opportunities and challenges for
low-cost health care. In: 2010 Annual International Conference of the IEEE Engineering in
Medicine and Biology, pp. 1763–1766 (2010). http://ieeexplore.ieee.org/document/5626734/.
Accessed 19 Oct 2019
10. Kolasinska, A., Quadrio, G., Gaggi, O., Palazzi, C.E.: Technology and aging: users’
preferences in wearable sensor networks. In: Proceedings of the 4th EAI International
Conference on Smart Objects and Technologies for Social Good - Goodtechs 2018, pp. 77–
81 (2018). http://dl.acm.org/citation.cfm?doid=3284869.3284884. Accessed 19 Oct 2019
11. Di Rienzo, M., Rizzo, F., Parati, G., Brambilla, G., Ferratini, M., Castiglioni, P.: MagIC
system: a new textile-based wearable device for biological signal monitoring. Applicability
in daily life and clinical setting. In: Annual International Conference of the IEEE
Engineering in Medicine and Biology - Proceedings, vol. 7, pp. 7167–7169 (2005)
12. Innovatemedtec: This smart bra can detect breast cancer much earlier than existing screening
tests - innovatemedtec content library. https://innovatemedtec.com/content/smart-bra?fbclid=
IwAR1IKHXd7ih9JnwVDQvL6wkUCpltayE-nsIHkv_9zADrYXhyI9bzKz56D5Q. Accessed 19 Oct 2019
13. Sultan, N.: Making use of cloud computing for healthcare provision: opportunities and
challenges. Int. J. Inf. Manag. 34(2), 177–184 (2014). https://www.sciencedirect.com/
science/article/pii/S0268401213001680. Accessed 19 Oct 2019
14. Kumar, S., et al.: Mobile health technology evaluation: the mHealth evidence workshop. Am.
J. Prev. Med. 45(2), 228–236 (2013). https://www.sciencedirect.com/science/article/pii/
S0749379713002778. Accessed 13 Oct 2019
14 L. Fonseca et al.
15. Géron, A.: Hands-on machine learning with Scikit-Learn and TensorFlow: concepts, tools,
and techniques to build intelligent systems. https://www.oreilly.com/library/view/hands-on-
machine-learning/9781492032632/. Accessed 15 Oct 2019
16. Qi, J., Yang, P., Newcombe, L., Peng, X., Yang, Y., Zhao, Z.: An overview of data fusion
techniques for Internet of Things enabled physical activity recognition and measure. Inf.
Fusion 55, 269–280 (2020). https://www.sciencedirect.com/science/article/pii/S1566253519302258?via%3Dihub. Accessed 05 Oct 2019
17. Apple: Efetuar um ECG com a app ECG no Apple Watch Series 4 ou posterior - Suporte
Apple. https://support.apple.com/pt-pt/HT208955. Accessed 14 Oct 2019
18. Apple: Apple Watch Series 5 - Apple. https://www.apple.com/apple-watch-series-5/.
Accessed 14 Oct 2019
19. IMTInnovation: IMT Innovation Digital Health Incubator Wearable Technology to
Minimise Injury Risk. https://imtinnovation.com/2018/11/10/wearable-technology-to-minimise-injury-risk/. Accessed 19 Oct 2019
20. Medtronic: MiniMed 670G Insulin Pump System—Medtronic Diabetes. https://www.
medtronicdiabetes.com/products/minimed-670g-insulin-pump-system. Accessed 19 Oct
2019
21. Medical Board of Australia: Building a professional performance framework. https://www.
racgp.org.au/getmedia/d810a609-c344-4e97-b615-1f83b6e504eb/Medical-Board-Report-Building-a-professional-performance-framework.PDF.aspx. Accessed 04 Jan 2020
Artificial Neural Networks Interpretation
Using LIME for Breast Cancer Diagnosis
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 15–24, 2020.
https://doi.org/10.1007/978-3-030-45697-9_2
16 H. Hakkoum et al.
A systematic mapping study by Idri et al. [3] of 403 articles on data mining (DM) techniques in breast cancer (BC) showed that most articles focused on the diagnosis task, with 78.63%, 7.63%, 9.16%, and 4.58% of DM techniques used for classification, regression, clustering, and association, respectively. They also showed that the use of black-box models such as Support Vector Machines or Neural Networks was very high, followed by Decision Trees, which are explainable models, and highlighted that the use of Neural Networks has decreased over time, very probably because of their inexplicable behavior [3].
Lack of interpretability is one of the most common reasons preventing artificial neural networks, and black-box models in general, from being accepted and used in critical domains such as medicine. Indeed, healthcare offers more challenges to machine learning (ML) by being more demanding of interpretability [7]. Model interpretability is thus often favored over accuracy. Understanding black-box models can therefore help assist and augment the provision of better care while doctors remain integral to their role. It could also improve human performance, extract insights, and produce new knowledge about the disease that may be used to generate hypotheses [7].
Skater, Oracle's unified framework for model interpretation [8], used LIME explanations to interpret a basic MLP, with 100 hidden nodes, trained on the Breast Cancer Wisconsin (Diagnostic) database, which has 30 attributes. This paper aims to apply and evaluate the local interpretability technique LIME on an MLP. The main contribution is the LIME interpretation of the better of two neural networks, a basic MLP and a deep MLP (four layers), trained on the Breast Cancer Wisconsin (Original) data-set, which has 9 attributes.
The rest of this paper is structured as follows: Sect. 2 presents some important
concepts related to this paper. Section 3 presents some related work dealing with
the use of interpretation techniques. Section 4 describes the database as well as
the performance measures used to select the best performing model. Section 5
presents the experimental design followed in this empirical evaluation. Section 6
discusses the obtained results. The threats to the validity of this paper are given
in Sect. 7. Section 8 presents conclusions and future work.
2 Background
This section presents an overview of the feed-forward neural networks that were constructed and evaluated in our experiments, as well as the interpretability techniques that were applied to their best variants.
Artificial neural networks are a set of algorithms designed to mimic the behavior of the brain [9]. There are multiple types of neural networks; the most basic is the feed-forward network, in which information travels in one direction from input to output. A popular example of this type is the MLP, which is composed of an input layer to receive the signal, an output layer that makes a decision, and in between, an
explanation is to the prediction of the original model f, while the model complexity Ω(g) is kept low (fewer features are preferred). G is the family of possible explanations, for example all possible linear regression models [15], and the proximity measure πx defines how large the neighborhood around instance x is that is considered for the explanation. In practice, LIME only optimizes the loss part; the user has to fix the complexity by selecting the maximum number of features that the linear regression model may use.
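The local-surrogate procedure just described (sample around an instance x, weight samples by the proximity measure, fit a simple linear model) can be sketched in plain Python. This is an illustrative toy, not the lime library: the black-box function, Gaussian sampling scheme, kernel width, and sample count below are our own assumptions for the sketch.

```python
# Minimal LIME-style local surrogate for a 2-feature black box (illustrative).
import math, random

def black_box(x1, x2):
    """A toy non-linear 'model' standing in for the MLP's probability output."""
    return 1.0 / (1.0 + math.exp(-(x1 * x1 + 0.5 * x2 - 1.0)))

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def lime_explain(x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate g(z) = w0 + w1*z1 + w2*z2 around x.

    The weights pi_x(z) = exp(-d(x, z)^2 / kernel_width^2) implement the
    proximity measure: samples far from x barely influence the fit.
    """
    rng = random.Random(seed)
    XtWX = [[0.0] * 3 for _ in range(3)]   # weighted normal equations
    XtWy = [0.0] * 3
    for _ in range(n_samples):
        z = [x[0] + rng.gauss(0, 1), x[1] + rng.gauss(0, 1)]
        d2 = (z[0] - x[0]) ** 2 + (z[1] - x[1]) ** 2
        w = math.exp(-d2 / kernel_width ** 2)
        feats = [1.0, z[0], z[1]]
        y = black_box(z[0], z[1])
        for i in range(3):
            XtWy[i] += w * feats[i] * y
            for j in range(3):
                XtWX[i][j] += w * feats[i] * feats[j]
    return solve3(XtWX, XtWy)  # [intercept, local effect of x1, of x2]

w0, w1, w2 = lime_explain([1.0, 0.0])
# Near x = (1, 0) the toy black box grows faster along x1 than x2,
# so the surrogate's coefficient for x1 should dominate.
print(w1 > w2 > 0)
```

The coefficients of the fitted surrogate play the role of LIME's feature weights: their signs say which class a feature "votes" for locally, and their magnitudes say how strongly.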
Although the explanation of a single prediction provides some understanding of the model, it is not sufficient to evaluate and assess trust in the model as a whole. Therefore, Ribeiro et al. [14] proposed explaining a judiciously picked set of individual instances using SubmodularPick, an algorithm that selects a representative set to approximate a global understanding of the model. For example, if an explanation A relied on two features x1 and x2, there is no need to show the end-user another explanation that focuses on the same features x1 and x2.
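The pick-a-diverse-set idea can be illustrated with a small greedy sketch in the spirit of SubmodularPick [14]. The explanation-weight matrix and budget below are toy stand-ins of ours, not the library's internals; we greedily add the instance whose explanation covers the most not-yet-covered feature importance.

```python
# Greedy selection of representative explanations (SubmodularPick-style sketch).
import math

# Toy |instances| x |features| matrix of absolute explanation weights W[i][j].
W = [
    [0.9, 0.8, 0.0, 0.0],   # relies on f0, f1
    [0.8, 0.9, 0.0, 0.0],   # tells the same story as instance 0
    [0.0, 0.0, 0.9, 0.1],   # relies on f2 (and a little on f3)
    [0.0, 0.0, 0.0, 0.9],   # relies on f3
]

def coverage(picked, importance):
    """Total importance of features used by at least one picked instance."""
    return sum(imp for j, imp in enumerate(importance)
               if any(W[i][j] > 0 for i in picked))

def submodular_pick(budget):
    # Global feature importance I_j = sqrt(sum_i W[i][j]).
    importance = [math.sqrt(sum(row[j] for row in W)) for j in range(len(W[0]))]
    picked = []
    for _ in range(budget):
        best = max((i for i in range(len(W)) if i not in picked),
                   key=lambda i: coverage(picked + [i], importance))
        picked.append(best)
    return picked

print(submodular_pick(2))  # → [0, 2]
```

Note how the redundant instance 1 is skipped in favor of instance 2, which covers features the first pick did not touch; this is exactly why the end-user is not shown two explanations built on the same features.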
3 Related Work
A variety of work has been done on the interpretation of ML models, since the ML community has noticed that explaining which features a model did or did not take into account matters more than which parameter increased its accuracy [16]. Interpretability aims at increasing model trustworthiness so that the model can be used to make high-stakes decisions, especially in domains such as medicine [17].
ML explainability is thus a major concern since it has the power to break even
the models with the highest accuracy. When it comes to using a deployed model
to make decisions, end-users often ask the almighty question: “Why should I
trust it?”.
In 2002, Idri et al. [11] asked a slightly different question, "Can neural networks be easily interpreted in software cost estimation?", and used the method of [18] to map an MLP to a fuzzy rule-based system that expresses the information encoded in the architecture of the network and can be easily interpreted, although they found the i-or operator connecting the rules inappropriate.
The LIME framework has been applied in different domains such as medicine and finance [16]. In particular, it was applied by the Skater team [8] in several fields, one of them breast cancer diagnosis, where they used the Breast Cancer Wisconsin (Diagnostic) database, available in the UCI repository, to train four models including an MLP. They discussed how sensitive each classifier is to the attributes, showing how interpretation techniques can help with model understanding and model selection.
Puri et al. [16] proposed an approach that can be thought of as an extension of LIME. It learns if-then rules that represent the global behavior of a model solving a classification problem. They validated their approach on different data-sets, in particular the Wisconsin Breast Cancer data-set, on which they trained a random forest classifier that achieved an accuracy of 98%. After running their technique, they compared the model's predictions to the predictions of the resulting if-then rules and computed a metric they introduced, Imitation@K, where K is the number of rules. Their approach generated rules able to imitate the model's behavior for a large fraction of the data-set.
4.2 Performance
The best model is not necessarily the one with the highest accuracy. Therefore, multiple classification performance measures were used to select the best MLP architecture:
– Accuracy: the most intuitive performance measure; simply the ratio of correctly predicted observations to the total observations.
– Precision: the ratio of correctly predicted positive observations to the total predicted positive observations.
– Recall (Sensitivity): the ratio of correctly predicted positive observations to all observations in the actual class ‘yes’.
– F1-Score: the weighted average of Precision and Recall; this score therefore takes both false positives and false negatives into account.
Accuracy = (TP + TN) / (TP + FP + FN + TN)    (2)

Precision = TP / (TP + FP)    (3)

Recall = TP / (TP + FN)    (4)

F1-Score = 2 * (Precision * Recall) / (Precision + Recall)    (5)
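Equations (2)–(5) can be computed directly from the confusion-matrix counts. A small sketch, with invented counts for illustration:

```python
# Classification performance measures from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # Eq. (2)
    precision = tp / (tp + fp)                   # Eq. (3)
    recall = tp / (tp + fn)                      # Eq. (4)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (5)
    return accuracy, precision, recall, f1

# Made-up counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
acc, prec, rec, f1 = metrics(tp=80, fp=10, fn=20, tn=90)
print(acc)    # 170/200 = 0.85
print(prec)   # 80/90 ≈ 0.889
print(rec)    # 80/100 = 0.8
```

With these counts accuracy looks strong, yet recall reveals that a fifth of the positives were missed, which is why a single measure is not enough to choose a model.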
5 Experimental Design
This section presents the experimental process followed in this empirical evaluation. It consists of choosing the parameters for model construction, selecting the best performing model, and fixing the number of LIME explanations to discuss.
Two MLP architectures were adopted, and since our aim was interpretability rather than performance, hyperparameters were chosen randomly. We constructed a basic MLP with 200 hidden nodes and a deep MLP in which another hidden layer of 128 nodes was added. To make training faster, the ReLU (Rectified Linear Unit) activation function was used, since it makes training several times faster than other functions such as tanh [21]. As for the output layer, we opted for a softmax activation function with two neurons, since we have two classes (Malignant/Benign).
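The ReLU and softmax stages just described can be sketched in pure Python. The hidden pre-activations and output weights below are made-up numbers for illustration, not values from the trained model:

```python
# ReLU hidden activations followed by a two-neuron softmax output.
import math

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)                         # subtract max for numerical stability
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

# Toy hidden pre-activations and a 2-class output layer (illustrative weights).
hidden = relu([0.7, -1.2, 2.0])                  # negatives are clipped to 0
w_out = [[0.5, -0.3, 0.8],                       # weight row for class 0
         [-0.5, 0.3, -0.8]]                      # weight row for class 1
logits = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
probs = softmax(logits)                          # e.g. P(Malignant), P(Benign)
print(round(sum(probs), 6))                      # softmax outputs sum to 1
```

Softmax is what turns the two output neurons into a proper probability distribution over the Malignant/Benign classes.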
The dropout technique was used twice in the deep MLP to avoid over-fitting: in the first hidden layer with a probability of 0.5 and in the second with a probability of 0.8. The neurons dropped out in this way contribute neither to the forward pass nor to back-propagation, so every time an input is presented the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons [21].
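The "samples a different architecture on every presentation" behavior can be illustrated with a short stdlib sketch. This uses the inverted-dropout variant (survivors are rescaled at training time so no test-time scaling is needed); the layer size, drop probability, and seed are our own toy choices, not the paper's settings:

```python
# Inverted dropout: each presentation samples a different "thinned" network.
import random

def dropout(activations, p_drop, rng):
    """Zero each unit with probability p_drop; scale survivors by 1/(1-p_drop)
    so the expected activation is unchanged (inverted-dropout variant)."""
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0
            for a in activations]

rng = random.Random(0)
layer = [1.0] * 10                    # toy layer of 10 active units
sampled = [dropout(layer, 0.5, rng) for _ in range(3)]

# Each pass zeroes a different random subset over the same shared weights.
masks = [tuple(a > 0 for a in s) for s in sampled]
print(len(set(masks)) > 1)
```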
After training for 500 epochs (training cycles) with a batch size of 128, Accuracy (Eq. 2) and F1-Score (Eq. 5) acted as voters to choose the best MLP, using the BordaCount voting system [22]. When each individual of a group ranks m candidates, Borda has each assign 0 to its last-ranked candidate, 1 to the second-to-last, and so on up to m − 1 for its top-ranked candidate; the group ranking is then determined by summing those numbers for each candidate [23].
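A minimal sketch of the Borda count as used here, with the two performance measures acting as voters over the two candidate architectures; the rankings below are invented for illustration:

```python
# Borda count: each voter gives m-1 points to its top choice, down to 0.
def borda(rankings):
    """rankings: one list per voter, candidates ordered best-first."""
    scores = {}
    for ranking in rankings:
        m = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
    return max(scores, key=scores.get)

# Toy election: both Accuracy and F1-Score happen to rank DMLP first.
winner = borda([["DMLP", "MLP"],    # ranking induced by Accuracy
                ["DMLP", "MLP"]])   # ranking induced by F1-Score
print(winner)  # DMLP
```

With more candidates or disagreeing voters the summed scores arbitrate, which is the point of using Borda rather than a single measure.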
K-fold cross-validation is a validation technique that ensures the model is low on bias and can work well on real unseen data. The data is divided into k subsets; one of the k subsets is used as the test/validation set and the other (k − 1) subsets form the training set [24]. After k iterations, the error and performance metrics are computed by averaging over all k folds. As a general rule backed by empirical evidence, 5 or 10 is generally preferred for k [24]. For the experiments of the present study, 10-fold cross-validation was performed.
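The splitting scheme can be sketched with stdlib Python; the data size below is a toy value of ours, only the k = 10 matches the study:

```python
# k-fold splitting: each fold serves once as validation, the rest as training.
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k (nearly) equal folds."""
    folds, start = [], 0
    base, extra = divmod(n, k)
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def kfold_splits(n, k):
    folds = kfold_indices(n, k)
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val

splits = list(kfold_splits(n=20, k=10))
print(len(splits))                            # 10 train/validation iterations
print(len(splits[0][0]), len(splits[0][1]))   # 18 training, 2 validation
```

The per-fold metrics would then be averaged over the 10 iterations, as described above.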
As the purpose of cross-validation is model checking rather than model building, after selecting the best model to interpret in terms of Accuracy and F1-Score, the chosen model was fitted with a normal split, feeding the maximum amount of data to training and leaving 0.05 for testing.
This phase aimed to interpret the selected model locally and determine which features it relies on the most and how each feature affects the final decision. A set of four instances was chosen using the SubmodularPick algorithm provided with LIME, and LIME explanations were then computed for these four instances.
This section shows the results of the empirical evaluations of this study and discusses the local explanations provided by LIME.
After 10-fold cross-validation training with 500 epochs, two models were constructed:
– MLP: a basic three-layer MLP with 200 nodes/neurons with the ReLU activation function in the hidden layer, and two output nodes with a softmax activation function.
– Deep MLP (DMLP): the basic MLP with another hidden layer of 128 nodes added. The dropout technique was used in both hidden layers.
Table 1 shows the accuracy and F1-Score values of the two models, MLP and DMLP. Note that F1-Score is the weighted average of precision and recall. DMLP did better, which was expected since the added layer helped in doing more computations and recognizing more useful patterns to distinguish between the two classes [25]. Both accuracy and F1-Score acted as voters for the two MLP candidates, and the BordaCount voting system was used to select the best architecture, which turned out to be the DMLP.
LIME was used for the local interpretability of the best MLP model. It focuses on training local surrogate models (interpretable models) to explain individual predictions, so that the decision-maker can understand why the model predicted a certain class for a particular instance. A set of four instances was thus chosen using the SubmodularPick algorithm for the best model, and the plotted explanations were further interpreted to understand how the best model uses the features.
In Fig. 1, we notice in the first upper explanations how cellSizeUniformity, bareNuclei, cellShapeUniformity, and normalNucleoli switched the prediction when they all increased, which shows that instances tend to be classified as benign when the uniformities are very low.
In the third explanation, bareNuclei being in the 2.9 bucket and the uniformities in the 3.6 bucket had a huge impact on classifying the instance as malignant; although blandChromatin was higher than 6, which voted for the benignity of the instance, the model's decision was affected more by the first three features.
In the fourth explanation, it was again the uniformity features as well as bareNuclei and normalNucleoli that affected the prediction the most, but differently: some “voted” malignant, such as normalNucleoli and cellShapeUniformity, since the first was high and the second average, while the others, bareNuclei and cellSizeUniformity, “voted” benign since they were very low.
7 Threats to Validity
In this work, parameter tuning was ignored since we focused on interpretability; using a parameter tuning technique could be interesting and might give better results [5].
The medical domain is a very critical one; using a single data-set is not enough to select the best classifier or to establish its trustworthiness. Moreover, to check the reliability of the explanations, applying interpretability techniques other than LIME would also help in understanding the model.
The interpretability techniques were applied to an MLP. Constructing and interpreting other types of black-box models, such as Support Vector Machines, would help understand these techniques further and generalize their effectiveness in interpretation.
References
1. Al-Hajj, M., Wicha, M.S., Benito-Hernandez, A., Morrison, S.J., Clarke, M.F.:
Prospective identification of tumorigenic breast cancer cells. Proc. Nat. Acad. Sci.
100(11), 6890 (2003). Correction to 100(7):3983
2. Solanki, K.: Application of data mining techniques in healthcare data, vol. 148,
no. 2, p. 1622 (2016)
3. Idri, A., Chlioui, I., El Ouassif, B.: A systematic map of data analytics in breast
cancer. In: ACM International Conference Proceeding Series. Association for Com-
puting Machinery (2018)
4. Hosni, M., Abnane, I., Idri, A., de Gea, J.M.C., Fernandez Aleman, J.L.: Review-
ing ensemble classification methods in breast cancer. Comput. Methods Programs
Biomed. 177, 89–112 (2019)
5. Idri, A., Hosni, M., Abnane, I., de Gea, J.M.C., Fernandez Aleman, J.L.: Impact
of parameter tuning on machine learning based breast cancer classification. In:
Advances in Intelligent Systems and Computing, vol. 932, pp. 115–125. Springer
(2019)
6. Chlioui, I., Idri, A., Abnane, I., de Gea, J.M.C., Fernandez Aleman, J.L.: Breast
cancer classification with missing data imputation. In: Advances in Intelligent Sys-
tems and Computing, vol. 932, pp. 13–23. Springer (2019)
7. Aurangzeb, A.M., Eckert, C., Teredesai, A.: Interpretable machine learning in
healthcare. In: Proceedings of the 2018 ACM International Conference on Bioinfor-
matics, Computational Biology, and Health Informatics, BCB 2018, pp. 559–560.
ACM Press, New York (2018)
8. Oracle’s unified framework for Model Interpretation. https://github.com/oracle/
Skater
9. Thomas, A.: An introduction to neural networks for beginners. Technical report in
Adventures in Machine Learning (2017)
10. Gardner, M.W., Dorling, S.R.: Artificial neural networks (the multilayer percep-
tron) - a review of applications in the atmospheric sciences. Atmos. Environ. 32(14–
15), 2627–2636 (1998)
11. Idri, A., Khoshgoftaar, T., Abran, A.: Can neural networks be easily interpreted
in software cost estimation? In: 2002 IEEE World Congress on Computational
Intelligence. IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2002.
Proceedings (Cat. No.02CH37291), vol. 2, pp. 1162–1167. IEEE (2002)
12. Miller, T.: Explanation in artificial intelligence: insights from the social sciences.
Artif. Intell. J. 267, 1–38 (2017)
13. Kim, B., Khanna, R., Koyejo, O.: Examples are not enough, learn to criticize! Crit-
icism for interpretability. In: Advances in Neural Information Processing Systems
(NIPS 2016), vol. 29 (2016)
14. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the
predictions of any classifier. In: Proceedings of the ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, 13–17 August 2016, pp.
1135–1144. Association for Computing Machinery (2016)
15. Molnar, C.: Interpretable Machine Learning. A Guide for Making Black Box Models
Explainable (2018). https://christophm.github.io/book/
16. Puri, N., Gupta, P., Agarwal, P., Verma, S., Krishnamurthy, B.: MAGIX: model
agnostic globally interpretable explanations (arXiv) (2017)
17. Lazzeri, F.: Automated and Interpretable Machine Learning - Microsoft Azure -
Medium (2019)
18. Benitez, J.M., Castro, J.L., Requena, I.: Are artificial neural networks black boxes?
IEEE Trans. Neural Netw. 8(5), 1156–1164 (1997)
19. https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original)
20. Chawla, N., Bowyer, K., Hall, L., Kegelmeyer, W.: SMOTE: synthetic minority
over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)
21. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep con-
volutional neural networks. In: Advances in Neural Information Processing Sys-
tems, vol. 25, no. 2 (2012)
22. de Borda, J.C.: Mémoire sur les élections au scrutin, Mémoire de l’Académie
Royale. Histoire de l’Académie des Sciences, Paris, pp. 657–665 (1781)
23. Risse, M.: Why the count de Borda cannot beat the Marquis de Condorcet. Soc.
Choice Welfare 25(1), 95–113 (2005)
24. Gupta, P.: Cross-Validation in Machine Learning - Towards Data Science (2017)
25. Reed, R., Marks II, R.J.: Neural Smithing: Supervised Learning in Feedforward
Artificial Neural Networks, p. 38 (1999)
Energy Efficiency and Usability
of Web-Based Personal Health Records
1 Introduction
Energy efficiency and energy consumption are considered fundamental sustainability characteristics in architectural design [29]. Whilst energy consumption is the amount of power used to operate a technology, energy efficiency refers to the use of as little energy as possible in a particular system. Software sustainability is attracting the attention of researchers [3,27,28], and sustainability in e-health technologies is an important challenge for the healthcare industry [20]. Energy efficiency will need improving due to the large amount of health data that will
This research is part of the BIZDEVOPS-GLOBAL-UMU (RTI2018-098309-B-C33)
project, and the Network of Excellence in Software Quality and Sustainability
(TIN2017-90689-REDT). Both projects are supported by the Spanish Ministry of
Science, Innovation and Universities and the European Regional Development Fund
(ERDF).
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 25–35, 2020.
https://doi.org/10.1007/978-3-030-45697-9_3
26 J. A. Garcı́a-Berná et al.
The PHRs were selected from a previous study [10]. The method proposed by the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) group [19] was employed to ensure the accuracy and impartiality of the selection. The search was performed at myPHR.com and in the ACM Digital Library, IEEE Digital Library, Medline, and ScienceDirect. Web-based format was the inclusion criterion (IC), with which a total of 19 PHRs were initially collected (see Fig. 1). The results were refined with the following exclusion criteria: non-available PHRs (EC1), non-free PHRs (EC2), registration not possible (EC3), malfunctioning (EC4), only available in the USA (EC5), and low-popularity PHRs (EC6). EC6 was applied through the Alexa website (alexa.com/siteinfo), an online ranking tool for checking visits to web portals.
Of the 19 PHRs selected, those that met any EC were discarded. In a first rejection round, HealthyCircles, Telemedical, Dr. I-Net, MedsFile.com, ZebraHealth, EMRySTICK and Dlife were dropped due to EC1, myMediConnect and Juniper Health because of EC2, RememberItNow! by EC3, WebMD HealthManager by EC4, and PatientPower by EC5. A final round discarded more PHRs: My Health Folders and My Doclopedia fulfilled EC6, as their Alexa ranking marks exposed a very low popularity of these portals (the higher the mark, the less popular a website is). In some cases the Alexa mark was not available, which was taken as an indication of low popularity of the portal. Finally, the PHRs selected were HealthVet, PatientsLikeMe, HealthVault, Health Companion, and NoMoreClipBoard. Together, these PHRs covered as many as possible of the functionalities provided by this type of tool.
The use of the PHRs was analyzed by carrying out a set of tasks identified from the common needs detected for better interaction among several usage profiles [4]. The recommendations for the development of a PHR from the American Health Information Management Association [2] were also taken into account when proposing the tasks to be performed in the PHRs. Table 1 depicts the list of the 20 common PHR tasks identified.
The power expenses during the performance of the tasks were measured with the Energy Efficient Tester (EET) [18]. This device is provided with sensors capable of measuring the instantaneous power consumption of the processor, hard disk drive, graphics card, and monitor, as well as the total power supplied to a host machine.
The experiment was carried out using the EET connected to a thin film transistor-liquid crystal display (TFT-LCD) monitor Philips 170S6FS and a PC provided with a GigaByte GA-8I945P-G motherboard, an Intel Pentium D @ 3.0 GHz processor, two modules of 1 GB DDR2 @ 533 MHz RAM, a Samsung SP2004C 200 GB 7200 rpm hard disk drive, an Nvidia GeForce 8600 GTS graphics card, and an AOpen Z350-08FC 350 W power supply.
The data were checked to ensure the absence of outliers. To this end, each task was carried out five times in order to average the results and smooth any peaks of power consumption that may have occurred. The operating system installed on the PC was Microsoft Windows 7 Professional, which made it possible to disable background processes and reduce the resources required by the operating system.
3 Results
3.1 Energy Consumption of the Selected Web-Based PHRs
The average energy consumption for each PHR and each sensor was calculated with the data available in this supplementary file (http://tiny.cc/9w72fz). In Table 2, the cells with the largest values are coloured red, whereas those with the smallest values are green. NoMoreClipBoard showed the lowest energy consumption in three sensors (monitor, processor and whole PC), HealthVault in one sensor (hard disk), and PatientsLikeMe in one sensor (graphics card). In contrast, Health Companion spent the highest amount of energy in three of the sensors (graphics card, hard disk and monitor) and PatientsLikeMe in two of them (processor and the whole PC).
in the experiment. Solid colors appeared in most of the GUIs of the PHRs. NoMoreClipBoard was the only one with gradient colors, which affected the switching characteristics of the screenshots. The TFT-LCD employed in this experiment generated a higher power consumption when dark tonalities were shown on the monitor, which could be related to the textures in NoMoreClipBoard [26].
Big widgets were relevant in terms of power needs when performing the tasks. The biggest widgets were shown in tasks 7 to 10 in Health Companion; in addition, the energy spent by the monitor in this PHR stood out. The solid color scheme and the dark tonalities of this PHR could explain a low energy need. In HealthVet big widgets also appeared and, moreover, they were closer to each other in the aforementioned tasks. This PHR revealed the lowest power consumption for the hard disk drive.
4 Discussion
4. Hick-Hyman Law. This law postulates that the human cognitive process of decision-making can be accelerated [12,13]. People divide the number of options into groups, eliminating around half of the remaining choices at each step. The law proposes a logarithmic relationship between reaction time and the number of choices available. In addition, when users must consider each option one at a time, the relationship between response time and the number of choices has been found to be linear [8]. Therefore, as few choices as possible should be offered in a GUI to take advantage of the Hick-Hyman Law; to this end, the most common functionality should be split into a smaller menu [24]. HealthVault, HealthVet and PatientsLikeMe complied with this law. Navigation to find information in these PHRs was divided into drop-down menus. In HealthVault there was a left column with the main options of the PHR and a second-level menu to view the medical information. In HealthVet and PatientsLikeMe there was a first-level menu with the main options in the headline of the web page; after clicking on this menu, a left column appeared to retrieve the medical information.
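The logarithmic relationship above is commonly written RT = a + b·log2(n + 1) for n equally likely choices. As a small worked illustration (the constants a and b below are placeholders of ours, not values from [12,13]):

```python
# Hick-Hyman law: reaction time grows logarithmically with the choice count.
import math

def hick_rt(n_choices, a=0.2, b=0.15):
    """RT = a + b * log2(n + 1); a, b are illustrative constants (seconds)."""
    return a + b * math.log2(n_choices + 1)

# Halving the visible options removes one "elimination step" of decision time:
print(round(hick_rt(7) - hick_rt(3), 3))   # log2(8) - log2(4) = 1 step → 0.15
```

This is the rationale for splitting a long menu into a smaller one holding the most common functionality: fewer simultaneous choices means fewer halving steps for the user.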
This paper investigated the relationship between usability and energy efficiency in five web-based PHRs. The findings showed that achieving both usability and low energy consumption is challenging, as it depends on factors related to the hardware employed and the users' manipulation of the system. The results allowed
us to suggest recommendations about energy efficiency of PHRs. However, the
inclusion of only five PHRs might have impacted the results. Further studies with
more PHRs and more tasks to be performed should be conducted to confirm our
results. For future work, we intend to propose a reusable requirement catalogue
for usable and energy efficient e-health applications and use energy management
systems during the performance of tasks to validate our catalogue.
References
1. ISO/IEC 25010 standard. Systems and Software Engineering – Systems and Soft-
ware Quality Requirements and Evaluation (SQuaRE) – System and Software
Quality Models (2011)
2. AHIMA: American Health Information Management Association (2019). Accessed
Dec 2019. http://www.ahima.org/
3. Ahmad, R., Hussain, A., Baharom, F.: A systematic review on characteristic and
sub-characteristic for software development towards software sustainability. Envi-
ronment 20, 34 (2015)
4. Archer, N., Fevrier-Thomas, U., Lokker, C., McKibbon, K.A., Straus, S.E.: Per-
sonal health records: a scoping review. J. Am. Med. Inform. Assoc. 18(4), 515–522
(2011)
5. Bhatt, C., Dey, N., Ashour, A.S.: Internet of Things and Big Data Technologies
for Next Generation Healthcare, vol. 23. Springer, Cham (2017)
6. Bidargaddi, N., van Kasteren, Y., Musiat, P., Kidd, M.: Developing a third-party
analytics application using Australia’s national personal health records system:
case study. JMIR Med. Inform. 6(2), e28 (2018)
7. Breuninger, J., Popova-Dlugosch, S., Bengler, K.: The safest way to scroll a list: a
usability study comparing different ways of scrolling through lists on touch screen
devices. IFAC Proc. Vol. 46(15), 44–51 (2013)
8. Cockburn, A., Gutwin, C.: A predictive model of human performance with scrolling
and hierarchical lists. Hum.-Comput. Interact. 24(3), 273–314 (2009)
9. Farzandipour, M., Meidani, Z., Riazi, H., Sadeqi Jabali, M.: Task-specific usability
requirements of electronic medical records systems: lessons learned from a national
survey of end-users. Inform. Health Soc. Care 43(3), 280–299 (2018)
10. Fernández-Alemán, J.L., Seva-Llor, C.L., Toval, A., Ouhbi, S., Fernández-Luque,
L.: Free web-based personal health records: an analysis of functionality. J. Med.
Syst. 37(6), 9990 (2013)
11. Helmer, A., Lipprandt, M., Frenken, T., Eichelberg, M., Hein, A.: Empowering
patients through personal health records: a survey of existing third-party web-
based PHR products. Electron. J. Health Inform. 6(3), 26 (2011)
12. Hick, W.E.: On the rate of gain of information. Q. J. Exp. Psychol. 4(1), 11–26
(1952)
13. Hyman, R.: Stimulus information as a determinant of reaction time. J. Exp. Psy-
chol. 45(3), 188 (1953)
14. Karampela, M., Ouhbi, S., Isomursu, M.: Exploring users’ willingness to share
their health and personal data under the prism of the new GDPR: implications in
healthcare. In: 41st Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), pp. 6509–6512. IEEE (2019)
15. Kelly, M.M., Coller, R.J., Hoonakker, P.: Inpatient portals for hospitalized patients
and caregivers: a systematic review. J. Hosp. Med. 13(6), 405–412 (2018)
16. Leung, R., MacLean, K., Bertelsen, M.B., Saubhasik, M.: Evaluation of haptically
augmented touchscreen GUI elements under cognitive load. In: 9th International
Conference on Multimodal Interfaces, pp. 374–381. ACM (2007)
17. Lyerla, F., Durbin, C.R., Henderson, R.: Development of a nursing electronic med-
ical record usability protocol. CIN: Comput. Inform. Nurs. 36(8), 393–397 (2018)
18. Mancebo, J., Arriaga, H.O., Garcı́a, F., Moraga, M., Garcı́a-Rodrı́guez de Guzmán,
I., Calero, C.: EET: a device to support the measurement of software consumption.
In: 6th International Workshop on Green and Sustainable Software (GREENS),
pp. 16–22 (2018)
19. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G.: Preferred reporting items
for systematic reviews and meta-analyses: the prisma statement. Ann. Int. Med.
151(4), 264–269 (2009)
20. Ouhbi, S.: Sustainability and internationalization requirements for connected
health services: method and applications. Proyecto de investigación (2018)
21. Ouhbi, S., Fernández-Alemán, J.L., Toval, A., Rivera Pozo, J., Idri, A.: Sustain-
ability requirements for connected health applications. J. Softw.: Evol. Process
30(7), e1922 (2018)
22. Rantanen, M.M., Koskinen, J.: PHR, we've had a problem here. In: IFIP Interna-
tional Conference on Human Choice and Computers, pp. 374–383. Springer (2018)
23. Savelyev, A., Brookes, E.: GenApp: extensible tool for rapid generation of web and
native GUI applications. Future Gener. Comput. Syst. 94, 929–936 (2017)
24. Sears, A., Shneiderman, B.: Split menus: effectively using selection frequency to
organize menus. ACM Trans. Comput.-Hum. Interact. (TOCHI) 1(1), 27–51 (1994)
25. Staccini, P., Lau, A.Y., et al.: Findings from 2017 on consumer health informatics
and education: health data access and sharing. Yearb. Med. Inform. 27(01), 163–
169 (2018)
26. Vallerio, K.S., Zhong, L., Jha, N.K.: Energy-efficient graphical user interface design.
IEEE Trans. Mob. Comput. 5(7), 846–859 (2006)
27. Venters, C., Lau, L., Griffiths, M., Holmes, V., Ward, R., Jay, C., Dibsdale, C., Xu,
J.: The blind men and the elephant: towards an empirical evaluation framework
for software sustainability. J. Open Res. Softw. 2(1), e8 (2014)
28. Venters, C.C., Capilla, R., Betz, S., Penzenstadler, B., Crick, T., Crouch, S., Naka-
gawa, E.Y., Becker, C., Carrillo, C.: Software sustainability: research and practice
from a software architecture viewpoint. J. Syst. Softw. 138, 174–188 (2018)
29. Villa, L., Cabezas, I., Lopez, M., Casas, O.: Towards a sustainable architectural
design by an adaptation of the architectural driven design method. In: International
Conference on Computational Science and Its Applications, pp. 71–86. Springer
(2016)
30. Yen, P.Y., Walker, D.M., Smith, J.M.G., Zhou, M.P., Menser, T.L., McAlearney,
A.S.: Usability evaluation of a commercial inpatient portal. Int. J. Med. Inform.
110, 10–18 (2018)
A Complete Prenatal Solution
for a Reproductive Health Unit in Morocco
1 Introduction
A pregnancy can be affected by disorders or medical conditions that might influence its
progress. These can be related to the obstetrical history, medical complications,
lifestyle choices, nutrition, cardiac diseases, diabetes or hypertensive disorders [1].
Such factors can produce health troubles before, during and after delivery. This implies
regular prenatal checkups for the pregnant woman with her obstetrician and gynecologist
in order to track her health and the baby’s health [2]. During these checkups, health
data related to the pregnant woman and her baby are registered in their health records.
The classical form of these health records is paper-based. Paper records allow pregnant
women to access their health data and meticulously follow the progress of their
pregnancy [3]. Hence, pregnant women who can access their health records are more
informed and aware of potential risks during pregnancy, which can help them take better
decisions about their health [4]. However, the use of paper-based health records remains
inadequate: they are exposed to loss, they cannot easily be shared with healthcare
providers, and it is difficult to locate specific information in them.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 36–43, 2020.
https://doi.org/10.1007/978-3-030-45697-9_4
Hence, since pregnant women should monitor their health away from the hospital,
there is a need to communicate remotely with healthcare providers when necessary [5].
Prenatal mobile Personal Health Records (mPHRs) are useful for these purposes. They are
mobile applications available on the app stores, which can be installed on smartphones
and allow pregnant women to consult, record and control their health data whenever
required [6]. Prenatal mPHRs can be connected to Electronic Health Records (EHRs),
which are implemented in hospitals to be used only by obstetricians, gynecologists or
their assistants. EHRs can be either web or desktop applications accessible from
computers.
Facilitating the access to health data, to information about the progress of pregnancy
and to communication with healthcare providers improves the health status of pregnant
women and their infants, and therefore indirectly lessens mortality rates.
As part of a collaboration with the maternity “Les Orangers” in Rabat, a prenatal
mPHR and an EHR were developed, based on specifications extracted during several
visits to the maternity. Afterwards, these applications will be evaluated in an
experiment conducted among selected pregnant women and obstetricians/gynecologists.
The remainder of this paper is structured as follows: An overview of prenatal
mobile personal health records is introduced in Sect. 2. The developed solution is
presented in Sect. 3. The implementation of the solution is explained in Sect. 4.
Section 5 describes the experiment design of the solution. Lastly, Sect. 6 provides
conclusions and future work.
Prenatal mPHRs are mobile health applications that allow a pregnant woman to access,
record and share her health data with healthcare professionals, in order to carry out
an accurate and consistent monitoring of her health and the baby’s health [7].
According to a previous study [6], these mobile apps generally include features
such as: Calendar and reminders for follow-ups and important appointments, infor-
mation about the progress of pregnancy regarding the mother and the baby’s health,
health habits to be followed during pregnancy, in addition to recorders and counters for
baby kicks and contractions. Furthermore, among the data that should be collected in
the prenatal mPHRs are the pregnant woman’s personal details, the physical body
information (e.g. weight, blood pressure, glucose, heart frequency), her medical
history (e.g. allergies or immunizations) and her obstetrical history (e.g. contraception
methods used, information about previous pregnancies and the status of her current
pregnancy) [7].
This section presents the purpose of the new prenatal solution as well as its
requirements specification.
3.1 Purpose
A prenatal mPHR was developed for pregnant women to follow up their pregnancy
and stay in touch with their doctors, while having a view of their personal health
records. This mobile application interacts with the EHR, a web application implemented
for the healthcare providers (doctors and assistants). Hence, it eases the interaction
between the pregnant woman and her doctor.
The doctor’s role is to fill out the health records of his patients during each
consultation, in order to follow their health state through the EHR. The management of
appointments, patients, doctors and their availability is also handled in the EHR.
As for pregnant women, they will be able to access and consult their personal health
records through the prenatal mPHR, book appointments according to the availability of
doctors, consult information about the progress of pregnancy regarding the mother and
the baby, and record contractions, baby kicks and their measured weight and blood
pressure, which will be accessible to doctors via the EHR.
Furthermore, the user can record baby kicks and contractions, using recorders that
save the history of the records in the application. For instance, in order to record
contractions, the user marks the start of a session and, every time a contraction
occurs, clicks a button to record it; when the contractions stop, she ends the session
and the records are automatically saved. Lastly, the user can enter her measured
weight or blood pressure, with a given date and time, and then visualize the progress
history of these variables as graphs or lists.
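As an illustration, the session-based contraction recorder described above can be sketched as follows. This is a minimal sketch only; the class and method names are hypothetical and not taken from the actual mPHR implementation.

```python
from datetime import datetime

class ContractionSession:
    """Hypothetical sketch of the session-based contraction recorder:
    the user starts a session, taps once per contraction, and ends the
    session, at which point the history is saved automatically."""

    def __init__(self):
        self.started_at = None
        self.events = []      # timestamps of recorded contractions
        self.history = []     # previously saved sessions

    def start(self):
        self.started_at = datetime.now()
        self.events = []

    def record_contraction(self):
        if self.started_at is None:
            raise RuntimeError("session not started")
        self.events.append(datetime.now())

    def end(self):
        # ending the session persists the records automatically,
        # as described in the text
        saved = {"start": self.started_at, "contractions": list(self.events)}
        self.history.append(saved)
        self.started_at = None
        return saved

session = ContractionSession()
session.start()
session.record_contraction()
session.record_contraction()
saved = session.end()
print(len(saved["contractions"]))  # prints 2
```

The same session pattern could back the weight and blood-pressure recorders, with each saved entry carrying the user-supplied date and time instead of tap timestamps.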
Figure 1 (a–j) in the Appendix shows some screenshots of the developed prenatal
mPHR. The Appendix is accessible via the following link:
https://www.um.es/giisw/prenatal/Appendix.pdf.
5 Experiment Design
This section presents the experimental design we will follow to evaluate the quality and
the usefulness of our prenatal solution. Firstly, we present the criteria we will use to
recruit participants (pregnant women and gynecologists). Secondly, we describe the
recruitment process.
A set of inclusion criteria (ICs) and exclusion criteria (ECs) was defined for this
selection. Hence, pregnant women are eligible for the experiment if they:
• are aged between 18 and 45 years.
• are resident in either Rabat or Casablanca.
• are currently pregnant.
• have a moderate level of experience with mobile applications.
• own a smartphone that runs a recent version of Android.
• are willing to comply with all study procedures.
Otherwise, pregnant women are ineligible if they:
• are planning to relocate from the study area within the next two years.
• are currently smoking.
• use alcohol and drugs.
• have a severe debilitating illness preventing their participation.
• are infertile.
• are sterilized.
• are in premature or normal menopause.
• are not planning to deliver at public hospitals in Casablanca or Rabat.
• are willing to terminate their pregnancy.
• have a history of three or more consecutive pregnancy complications.
As for obstetricians/gynecologists, they are eligible for the experiment only if they
are practicing in public hospitals in Rabat or Casablanca.
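The screening implied by the ICs and ECs above can be expressed as a simple predicate. The sketch below is illustrative only: the field names are hypothetical (not taken from the study protocol) and only a subset of the exclusion criteria is encoded.

```python
def is_eligible(woman):
    """Hypothetical pre-screening check for the study's inclusion and
    exclusion criteria. `woman` is a dict of self-reported attributes;
    all keys are illustrative, and only some exclusion criteria appear."""
    inclusion = (
        18 <= woman["age"] <= 45
        and woman["city"] in {"Rabat", "Casablanca"}
        and woman["currently_pregnant"]
        and woman["mobile_experience"] >= 1   # e.g. 0=none, 1=moderate, 2=high
        and woman["owns_recent_android"]
        and woman["willing_to_comply"]
    )
    exclusion = (
        woman["planning_to_relocate"]
        or woman["smokes"]
        or woman["uses_alcohol_or_drugs"]
        or woman["severe_illness"]
        or woman["plans_private_delivery"]
        or woman["consecutive_complications"] >= 3
    )
    return inclusion and not exclusion

candidate = {
    "age": 30, "city": "Rabat", "currently_pregnant": True,
    "mobile_experience": 1, "owns_recent_android": True,
    "willing_to_comply": True, "planning_to_relocate": False,
    "smokes": False, "uses_alcohol_or_drugs": False,
    "severe_illness": False, "plans_private_delivery": False,
    "consecutive_complications": 0,
}
print(is_eligible(candidate))  # prints True
```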
The experiment will then be carried out in two phases: (1) Before delivery: The
selected pregnant women will be assigned one of the selected obstetricians/
gynecologists and will be asked to use the developed prenatal mPHR to monitor
their pregnancy. Throughout their use of the prenatal mPHR, they will have to access
and record their own health data (weight, blood pressure, baby kicks and contractions)
and connect with their obstetrician/gynecologist in real-time. Moreover, they will have
to set appointments for consultations. The obstetrician/gynecologist will guide the
pregnant women all along the progress of their pregnancy until their due date, and will
be at their disposal if any complication occurs. (2) After delivery: After giving birth to the
baby and getting enough rest, the new mother will be asked to fill in questionnaires in
order to evaluate the potential and quality of the prenatal mPHR.
Acknowledgments. This work was conducted within the research project PEER 7-246 sup-
ported by the US Agency for International Development (USAID). The authors would like to
thank the NAS and USAID for their support.
References
1. Gu, B.D., Yang, J.J., Li, J.Q., Wang, Q., Niu, Y.: Using knowledge management and
mhealth in high-risk pregnancy care: a case for the floating population in China. In:
Proceedings of the IEEE 38th Annual International Computer Software and Applications
Conference Workshops (COMPSACW 2014), pp. 678–683 (2014)
2. Oh, S., Sheble, L., Choemprayong, S.: Personal pregnancy health records (PregHeR): facets
to interface design. Proc. Am. Soc. Inf. Sci. Technol. 43(1), 1–10 (2007)
3. Hoang, D.B., et al.: Assistive care loop with electronic maternity records. In: 2008 10th
IEEE International Conference on e-Health Networking, Applications and Services,
pp. 118–123 (2008)
4. Homer, C.S.E., Davis, G.K., Everitt, L.S.: The introduction of a woman-held record into a
hospital antenatal clinic: the bring your own records study. Aust. N. Z. J. Obstet. Gynaecol.
39(1), 54–57 (1999)
5. Shaw, E., et al.: Access to web-based personalized antenatal health records for pregnant
women: a randomized controlled trial. J. Obstet. Gynaecol. Can. 30(1), 38–43 (2008)
6. Bachiri, M., Idri, A., Fernández-Alemán, J.L., Toval, A.: Mobile personal health records for
pregnancy monitoring functionalities: analysis and potential. Comput. Methods Programs
Biomed. 134, 121–135 (2016)
7. Idri, A., Bachiri, M., Fernández-Alemán, J.L.: A framework for evaluating the software
product quality of pregnancy monitoring mobile personal health records. J. Med. Syst. 40(3),
50 (2016)
8. Bachiri, M., Idri, A., Redman, L.M., Fernández-Alemán, J.L., Toval, A.: A requirements
catalog of mobile personal health records for prenatal care. In: Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11622 LNCS, pp. 483–495 (2019)
9. Application Fundamentals. https://developer.android.com/guide/components/fundamentals.
Accessed 07 Nov 2019
10. Google Firebase. https://firebase.google.com/. Accessed 06 Nov 2019
11. Sardi, L., Idri, A., Redman, L.M., Alami, H., Bezad, R., Fernández-Alemán, J.L.: Mobile
health applications for postnatal care: review and analysis of functionalities and technical
features. Comput. Methods Programs Biomed. 184, 105114 (2020)
12. Bachiri, M., Idri, A., Abran, A., Redman, L.M., Fernández-Alemán, J.L.: Sizing prenatal
mPHRs using COSMIC measurement method. J. Med. Syst. 43(10), 319 (2019)
13. Bachiri, M., Idri, A., Redman, L.M., Abran, A., de Gea, J.M.C., Fernández-Alemán, J.L.:
COSMIC functional size measurement of mobile personal health records for pregnancy
monitoring. Adv. Intell. Syst. Comput. 932, 24–33 (2019)
14. Idri, A., Bachiri, M., Fernández-Alemán, J.L., Toval, A.: Experiment design of free
pregnancy monitoring mobile personal health records quality evaluation. In: 2016 IEEE 18th
International Conference on e-Health Networking, Applications and Services (Healthcom),
pp. 1–6 (2016)
15. Bachiri, M., Idri, A., Fernández-Alemán, J.L., Toval, A.: Evaluating the privacy policies of
mobile personal health records for pregnancy monitoring. J. Med. Syst. 42(8), 144 (2018)
16. Bachiri, M., Idri, A., Fernández-Alemán, J.L., Toval, A.: A preliminary study on the
evaluation of software product quality of pregnancy monitoring mPHRs. In: Proceedings of
2015 IEEE World Conference on Complex Systems WCCS 2015 (2016)
Machine Learning and Image Processing
for Breast Cancer: A Systematic Map
Abstract. Machine Learning (ML) combined with Image Processing (IP) provides
a powerful tool to help physicians and radiologists make more accurate
decisions. Breast cancer (BC) is a very common disease among women
worldwide, and it is one of the medical subfields experiencing an emergence
of the use of ML and IP techniques. This paper explores the use of ML and IP
techniques for BC in the form of a systematic mapping study. 530 papers
published between 2000 and August 2019 were selected and analyzed according
to six criteria: year and publication channel, empirical type, research type, medical
task, machine learning objectives and datasets used. The results show that
classification was the most used ML objective. As for the datasets, most of the
articles used private datasets belonging to hospitals, while papers using
public data mostly chose MIAS (Mammographic Image Analysis Society),
making it the most used public dataset.
1 Introduction
One of the most common cancers among women in the world is breast cancer. It occurs
when breast cell tissue grows abnormally and starts to divide rapidly [1].
The BC disease is distinguished by the overgrowth of a malignant tumor in the breast
[2]. The goal of BC screening is to achieve an early diagnosis, which aims to discern
malignant from benign tumors, while prognosis helps to establish a treatment plan. The
use of medical image processing and machine learning for breast cancer diagnosis,
prognosis and/or treatment is promising, since it can help physicians and experts
efficiently detect abnormalities [3].
To the extent of the authors’ knowledge, no Systematic Mapping Study (SMS) was
carried out to summarize the findings of primary studies dealing with the use of
machine learning and image processing techniques for any breast cancer medical tasks
such as diagnosis, prognosis and treatment. However, Idri et al. [4] have carried out a
SMS on the use of data mining techniques in BC, and Hosni et al. [5] conducted a SMS
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 44–53, 2020.
https://doi.org/10.1007/978-3-030-45697-9_5
on the use of ensemble techniques in breast cancer. The present SMS searches the
primary studies dealing with the application of machine learning and image processing
for BC published between 2000 and August 2019 in six libraries: ScienceDirect,
IEEE Xplore, PubMed, Springer, ACM and Google Scholar. It provides a synthesis
and a summary of 530 selected papers by means of four Research Questions (RQs):
(1) determine the year, publication channels and sources of the selected papers,
(2) identify the type of contributions and empirical methods, (3) examine the most used
machine learning objectives, (4) discover the datasets employed for ML and IP for BC.
The paper is structured as follows: Sect. 2 describes the research methodology
followed by this review. Section 3 reports the results of the four RQs. Section 4
discusses the results obtained. Section 5 concludes this SMS.
2 Research Methodology
study, Survey, and Historical based evaluation. RQ3: Identification of the machine
learning objectives such as: classification, clustering, prediction and others. RQ4:
Identification of the datasets employed [4].
3 Results
years is presented in Fig. 3. We note that solution proposals (SP) and evaluation
research (ER) appear in 2000 and rise over the years. Reviews also started to gain
more interest from 2017.
The selected papers were empirically evaluated using three types of empirical
evaluation: case study, historical based evaluation and survey [8]. As shown in Fig. 4,
most of the solution proposal articles used historical based evaluation by using publicly
available databases. For the evaluation research, most of the papers used a case study
based empirical evaluation, and the reviews used the survey empirical method. It is
noticed that researchers started to give more importance to reviews because of the
large number of articles published on the subject and the amount of information that
needs to be summarized, hence the importance of this research type.
3.5 RQ4: What are the Datasets Used for ML and IP in BC?
The aim of RQ4 is to identify the different datasets, the validation methods and the
performance measures used to evaluate the use of machine learning and image
processing in breast cancer. Table 3 shows the most used datasets in the 530 selected
papers. It can be noticed that 47% of the datasets are private; MIAS is used by
15% of the selected studies, followed by the Digital Database for Screening
Mammography (DDSM) (13%), Breast Cancer Histopathological (BREAKHIS) (5%),
Breast Cancer Digital Repository (BCDR), WISCONSIN and INBREAST (3% each),
Mytos (2%) and The Cancer Genome Atlas (TCGA) (1%). The remaining articles used
other databases such as IMAGENET, ICIAR, Camelyon challenge, BUS, IRMA, and AMIDA.
4 Discussion
explained by the fact that image processing steps include image preprocessing, seg-
mentation, feature extraction, feature selection and classification [12]. Therefore,
classification is an important step in IP for classifying properly the medical images to
detect the type of the tumor.
4.4 RQ4: What are the Datasets Used for ML and IP in BC?
Table 3 shows that 47% of the selected papers used private datasets collected from
hospitals; this is due to the privacy of medical images and the fact that not all patients
want to share them. Researchers are therefore encouraged to collaborate with clinics
and medical centers to collect the images required to evaluate their BC solutions.
Moreover, the most used public datasets are MIAS (28%) and DDSM (25%) for
mammographic images, due to the fact that mammographic images are still the most used
for BC diagnosis [13–15]; Breakhis (9%) for histopathological images; and Wisconsin
(5%), Inbreast (5%) and BCDR (6%) for other medical imaging types. We note that
some studies used several datasets to compare their results [10, 16–19].
The purpose of this SMS was to present an overview of the use of ML and IP in breast
cancer. 530 papers published from 2000 to August 2019 were selected and classified
according to: year and source of publication, research type and empirical type, BC
discipline, ML methods and techniques, validation techniques and performance
measures. This paper discussed the results of the four RQs. The findings per RQ are:
(RQ1) The use of ML and IP for BC has gained more interest from researchers in recent
years; the number of published articles has increased significantly since 2015 and the
majority of the papers (71%) were published in journals. (RQ2) Most of the relevant
papers were identified as solution proposals and evaluation research, and the majority
of the articles used historical based evaluation. (RQ3) Classification is the most
investigated objective in ML and IP for BC, which is explained by the fact that
classification is a component of any IP process. (RQ4) Private datasets are the most
frequently used to evaluate ML and IP for BC, followed by the two public datasets
MIAS and DDSM.
As future work, we aim to: (1) use the results of this SMS as the basis for a systematic
literature review concerning the use of ML and IP in breast cancer, and (2) conduct an
evaluation research using case study data collected from a Moroccan hospital to
investigate the performance of different ML and IP techniques.
References
1. Metelko, Z., et al.: The World Health Organization quality of life assessment. 41(10)
(1995)
2. Bish, A., Ramirez, A., Burgess, C., Hunter, M.: Understanding why women delay in seeking
help for breast cancer symptoms. J. Psychosom. Res. 58, 321–326 (2005)
3. Zhang, G., Wang, W., Moon, J., Pack, J.K., Jeon, S.I.: A review of breast tissue
classification in mammograms. In: Proceedings of the 2011 ACM Research in Applied
Computation Symposium, RACS 2011, pp. 232–237 (2011)
4. Idri, A., Chlioui, I., El Ouassif, B.: A systematic map of data analytics in breast cancer. In:
ACM International Conference. Proceeding Series (2018)
5. Hosni, M., Abnane, I., Idri, A., Carrillo de Gea, J.M., Fernández Alemán, J.L.: Reviewing
ensemble classification methods in breast cancer. Comput. Methods Programs Biomed. 177,
89–112 (2019)
6. Kofod-Petersen, A.: How to do a structured literature review in computer science.
ResearchGate, pp. 1–7 (2014)
7. Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., Linkman, S.:
Systematic literature reviews in software engineering - a systematic literature review. Inf.
Softw. Technol. 51(1), 7–15 (2009)
8. Tonella, P., Torchiano, M., Du Bois, B., Systä, T.: Empirical studies in reverse engineering:
state of the art and future trends. Empir. Softw. Eng. 12(5), 551–571 (2007)
9. Rampun, A., Wang, H., Scotney, B., Morrow, P., Zwiggelaar, R.: In: 2018 25th IEEE
International Conference on Image Processing (ICIP), pp. 2072–2076 (2018)
10. Agarap, A.F.M.: On breast cancer detection: an application of machine learning algorithms
on the Wisconsin diagnostic dataset. In: ACM International Conference. Proceeding Series,
no. 1, pp. 5–9 (2018)
11. Xiong, X., Kim, Y., Baek, Y., Rhee, D.W., Kim, S.H.: Analysis of breast cancer using data
mining & statistical techniques. In: Proceedings of the Sixth International Conference on
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Comput-
ing and First ACIS International Workshop on Self-assembling Wireless Network,
SNPD/SAWN 2005, vol. 2005, pp. 82–87 (2005)
12. Sadoughi, F., Kazemy, Z., Hamedan, F., Owji, L., Rahmanikatigari, M., Azadboni, T.T.:
Artificial intelligence methods for the diagnosis of breast cancer by image processing: a
review. Breast Cancer Targets Ther. 10, 219–230 (2018)
13. Wei, X., Ma, Y., Wang, R.: A new mammography lesion classification method based on
convolutional neural network. In: ACM International Conference. Proceeding Series,
pp. 39–43 (2019)
14. Ting, F.F., Tan, Y.J., Sim, K.S.: Convolutional neural network improvement for breast
cancer classification. Expert Syst. Appl. 120, 103–115 (2019)
15. Torrents-Barrena, J., Puig, D., Melendez, J., Valls, A.: Computer-aided diagnosis of breast
cancer via Gabor wavelet bank and binary-class SVM in mammographic images.
J. Exp. Theor. Artif. Intell. 28(1–2), 295–311 (2016)
16. Hu, Z., Tang, J., Wang, Z., Zhang, K., Zhang, L., Sun, Q.: Deep learning for image-based
cancer detection and diagnosis – a survey. Pattern Recogn. 83, 134–149 (2018)
17. Mini, M.G.: Neural network based classification of digitized mammograms. In: Proceedings
of the 2nd Kuwait Conference on e-Services e-Systems, KCESS 2011, pp. 1–5 (2011)
18. Hamidinekoo, A., Dagdia, Z.C., Suhail, Z., Zwiggelaar, R.: Distributed rough set based
feature selection approach to analyse deep and hand-crafted features for mammography mass
classification. In: Proceedings of the 2018 IEEE International Conference on Big Data, Big
Data 2018, pp. 2423–2432 (2019)
19. Mendel, K., Li, H., Sheth, D., Giger, M.: Transfer learning from convolutional neural
networks for computer-aided diagnosis: a comparison of digital breast tomosynthesis and
full-field digital mammography. Acad. Radiol. 26(6), 735–743 (2019)
A Definition of a Coaching Plan to Guide
Patients with Chronic Obstructive Respiratory
Diseases
Abstract. With the noticeable increase in the number of people with chronic
obstructive respiratory diseases, the effectiveness of traditional healthcare sys-
tems has worsened significantly over the last years. There is an opportunity to
develop low-cost and personalized solutions that can empower patients to self-
manage and self-monitor their health condition. In this context, the PHE project
is presented, whose main goal is to develop coaching solutions for the remote
monitoring of patients that can be provided through the exclusive use of the
smartphone. In this work we explore how patients with chronic obstructive
respiratory diseases can adopt healthier behaviors by following personalized
healthcare coaching plans throughout their daily lives. We explain how a
coaching plan can be defined to guide the patient and explore the mechanisms
necessary for it to operate automatically and adapt itself according to the
interactions between the patient and the system. As a result, we believe it
possible to enhance user experience and engagement with the developed system
and consequently improve the patient's health condition.
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 54–64, 2020.
https://doi.org/10.1007/978-3-030-45697-9_6
patient and the healthcare professional, separated by periods without structured support
or by the use of self-monitoring tools (such as flow meters, handheld spirometers,
oximeters) and self-management tools (such as symptom diaries, manuals, pamphlets
and web resources) between consultations. The reality, however, is that the constant
monitoring of patients’ condition has become a burden on the healthcare providers [4]
and traditional healthcare delivered through health professionals’ face-to-face interac-
tions becomes more difficult to achieve. As such, the necessity to develop cost-effective
solutions to monitor and treat patients with CORD has increased significantly in recent
years [5]. In this scope, concepts such as mobile health (mHealth) have emerged
towards the self-management of the patient’s disease, by providing mobile systems that
are capable of monitoring patients’ health status and giving customized feedback about
activities and behaviors that can be done to improve health and wellbeing [6, 7].
Furthermore, mobile devices now offer a wide set of features and embedded sensors.
Solutions that exploit these components with minimal or no access to external devices
other than the smartphone itself seem adequate and easy to integrate into the daily
lives of patients, in order to measure and monitor their current health condition and
support them in the management of their diseases [8].
Therefore, coaching solutions delivered through smartphones (mCoaching) that can
combine data gathering and processing, gamification elements for user engagement and
support to behaviors change seem to be an ideal platform to deliver both simple and
effective self-management interventions, while maintaining or improving quality of
care and reducing costs, especially in the context of CORD management [9–12].
The work here proposed is part of the PHE project1 which aims to empower people
to monitor and improve their health using personal data and technology assisted
coaching. To achieve this goal, PHE will apply innovative and intelligent measuring
and monitoring tools for preventive healthcare and allow cost-saving and self and
home-care solutions with increased patient involvement. Furthermore, the PHE project will
exclusively use the smartphone and its embedded sensors to acquire all the necessary
data to provide personalized support to the CORD patient. In this work we explore the
personalization given to the CORD patient by providing him/her a coaching plan to
follow and to adopt healthier behaviors throughout his/her daily life. A conceptual
definition of the coaching plan is presented which includes four different phases of
operation (initialization, execution, completion and post completion). We describe each
of these phases and explain how the coaching plan can enhance the personalized
healthcare provided to the patient and define a proactive mechanism which is not
completely dependent on user input but also capable of adapting itself based on the data
collected over time while the patient uses and interacts with the PHE system.
1 https://itea3.org/project/personal-health-empowerment.html.
2 Proposed Model
The work proposed here extends [13], in which an architecture for the coaching module
to support self-monitoring of CORD patients was defined. This coaching module is
responsible for processing patient data and generating recommendations to improve the
patient's health condition accordingly. Furthermore, as will be explained, the proposed
model can operate independently from the PHE system due to its generic structure.
Three main types of users interact with the PHE system: patient, healthcare
professional and health manager. The patient is the main user and will interact with
the developed system by inserting clinical information and receiving recommendations
to adopt healthier behaviors and improve health condition and wellbeing. The
healthcare professional can access patient clinical information and provide specific
guidelines (through coaching plans). The health manager can access and update the
available domain knowledge (which includes rules and associated variables,
recommendations, user profiles and non-specific coaching plans).
In this section, we first describe the architecture of the defined coaching module
considered for the CORD Management in the PHE system and associated components.
The Coaching Plans component is then discussed in more detail as it represents the
novel feature proposed in this work.
validation of different health variables, which in the case of this work correspond to
both patient demographic data and health state (for example, gender and weight, smoke
exposure, etc.). Besides that, each health variable has an associated periodicity and to
measure/collect its current value a mechanism has to also be defined to promote a
specific interaction between the patient and the system (for example, to know if an
exacerbation was detected within the last week, the associated health variable has to be
updated weekly using an interaction mechanism such as a visual notification). The
definition of each rule and corresponding recommendation is structured in a clinical
matrix format and are based on scientific evidence. Figure 2 shows an example of a
rule that was defined for a recommendation to send to the patient.
The User Profiles component specifies all the characteristics that identify a certain
profile, which is assigned to the patient. So far, two main groups have been identified
(Asthma and Rhinitis). The remaining groups will be defined using clustering
techniques to identify users sharing characteristics related to the patient’s
demographic data, context, etc. The Coaching Plans component includes the selected
recommendations to be provided to the patient in a given time frame. Furthermore,
each coaching plan is related to a specific health topic which has been identified
according to the
literature on clinical evidence and medical guidelines. This component will be dis-
cussed in the following subsection of this work. The Recommendations component
verifies and processes the received data according to the defined Rules, User Profiles
and Coaching Plans and selects suitable recommendations. The Data Access Layer
serves as a middle layer between the Business Logic Layer and the different data
sources and controls all the read, insert, update and delete operations on the database.
The database contains information regarding the patient’s clinical data, health variables
associated with recommendations, the history of provided recommendations and
respective feedback. It also contains knowledge provided by health professionals, rules
used for the generation of recommendations, the defined user profiles and respective
characteristics, and the coaching plans provided to each patient. The coaching module
has been developed using the JBoss Drools framework, as it provides an intuitive rule
language for non-developers, supports flexible and adaptive processes, enhances
intelligent process automation and complex event processing, and is easy to integrate
with web services.
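Although the module itself uses the JBoss Drools rule engine, the way the Business Logic Layer combines Rules, User Profiles and Coaching Plans to select suitable recommendations can be approximated with a plain-Python sketch. Every name and rule below is illustrative, not the system's actual API.

```python
# Sketch of recommendation selection: a rule contributes a recommendation only
# if it matches the patient's user profile, targets an active coaching-plan
# topic, and all its conditions hold for the current variable values.
def select_recommendations(rules, profile, plan_topics, values):
    selected = []
    for rule in rules:
        if rule["profile"] != profile:          # must match the user profile
            continue
        if rule["topic"] not in plan_topics:    # must match an active plan topic
            continue
        if all(cond(values.get(var)) for var, cond in rule["conditions"].items()):
            selected.append(rule["recommendation"])
    return selected

# Hypothetical rule base for the two identified profiles (Asthma, Rhinitis).
rules = [
    {"profile": "Asthma", "topic": "smoking habits",
     "conditions": {"cigarettes_per_day": lambda v: v is not None and v > 0},
     "recommendation": "Try to reduce the number of cigarettes smoked today."},
    {"profile": "Rhinitis", "topic": "medication",
     "conditions": {"missed_dose": lambda v: v is True},
     "recommendation": "Remember to take your medication."},
]

out = select_recommendations(rules, "Asthma", {"smoking habits"},
                             {"cigarettes_per_day": 5})
print(out)  # only the Asthma smoking-habits rule fires
```

In the actual module this matching is delegated to Drools' forward-chaining engine rather than an explicit loop.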
58 D. Martinho et al.
Four steps have been identified to define a coaching plan in the context of the PHE
system: Plan Initialization, Plan Execution, Plan Completion and Plan Post Completion.
A Definition of a Coaching Plan to Guide Patients 59
Plan Initialization. The coaching plan initialization is a process that can be configured
manually or automatically by the user. Manual coaching plans are defined either by the
healthcare professional or the health manager, and differ in the fact that they can target
a specific patient (coaching plans created by the healthcare professional) or not
(coaching plans created by the health manager). Automatic coaching plans are created
by the patient himself/herself and are based on the coaching plans defined for the
associated user profile.
As can be seen in Fig. 3, a coaching plan has an associated periodicity, which can be
weekly, monthly or non-repetitive. The user must then select the topics and intended
goals to be achieved with the coaching plan. We define a goal as a desired state
regarding a specific patient-related variable according to a certain topic. For example,
in the context of smoking habits, one objective could be to decrease the number of
cigarettes smoked per day. Furthermore, to achieve a certain goal a list of intermediate
goals can also be defined. Following the given example, intermediate goals which
would allow the patient to decrease the number of cigarettes smoked per day could be
to start the coaching plan and smoke a maximum of 3 cigarettes in the morning, 3
cigarettes in the afternoon and finally 3 cigarettes in the evening/night. This means that
when defining a goal and its associated intermediate goals, the user should also define a
deadline to achieve each identified goal. The flowchart presented in Fig. 4 shows the
coaching plan initialization process that was described.
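The initialization step just described (a periodicity, a topic, a goal as the desired state of a patient-related variable, and intermediate goals each with its own deadline) might be encoded as in the following sketch. The seven-day spacing between deadlines and all field names are assumptions for illustration.

```python
from datetime import date, timedelta

# Illustrative encoding of the initialization step: each intermediate goal and
# the final goal get their own deadline, spaced days_per_step apart.
def init_plan(start, periodicity, topic, goal, intermediate, days_per_step=7):
    assert periodicity in {"weekly", "monthly", "non-repetitive"}
    steps = []
    for i, g in enumerate(intermediate + [goal], start=1):
        steps.append({"goal": g,
                      "deadline": start + timedelta(days=i * days_per_step),
                      "achieved": False})
    return {"periodicity": periodicity, "topic": topic, "steps": steps}

# The example from the text: decrease cigarettes per day via intermediate goals.
plan = init_plan(
    start=date(2020, 1, 1),
    periodicity="weekly",
    topic="smoking habits",
    goal="decrease cigarettes smoked per day",
    intermediate=["max 3 cigarettes in the morning",
                  "max 3 in the afternoon",
                  "max 3 in the evening/night"],
)
print(len(plan["steps"]))            # 4: three intermediate goals plus the final goal
print(plan["steps"][0]["deadline"])  # 2020-01-08
```

The key point mirrored here is that defining a goal and its intermediate goals also forces the user to define a deadline for each of them.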
Plan Execution. In the second step, the coaching plan is put into practice and the
targeted patient is monitored according to the goals identified. As such, all patient-
related variables considered for the coaching plan are collected through patient inter-
action with the PHE system by having the patient insert new records and values for
those variables. The coaching framework will process those values and whenever a
recommendation is verified (if those values trigger all the conditions necessary to
activate a recommendation) it will be sent to the smartphone and provided to the patient
in different formats (such as an alert or a notification). In parallel, the coaching
framework will also verify if any goal established for the coaching plan was achieved
and update the coaching plan accordingly. The flowchart presented in Fig. 5 shows the
coaching plan execution process that was described.
The ideal time to provide recommendations to the patient will depend on the feedback
provided while using the developed system. Several feedback mechanisms are defined to
identify the best moments during the day to provide recommendations to the patient and
to filter positive recommendations among all the available recommendations:
• Recommendation Evaluation – Whenever a detected recommendation is provided
to the patient, he/she can rate it, indicating whether he/she liked or disliked it. This way,
unwanted recommendations can be filtered out in future similar scenarios.
• Goal Evaluation – Whenever patient data that can modify the current state of a
defined goal is inserted, the goal is evaluated to understand whether the patient was
capable of achieving the desired state configured in the coaching plan or if the state
associated with an already achieved goal has deteriorated into a previous state.
• Patient and System Interaction Evaluation – Different data can be obtained from the
interaction between the patient and the system. In this case, both the system
utilization rate (which corresponds to utilization times and frequency of use of the
system) and the response time (whether the patient answered a provided
recommendation or not, and the corresponding response time) are considered. This
information can then be used to readjust deadlines and understand the most adequate
times during the day to interact with the user.
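The three feedback mechanisms above can be illustrated with a small sketch that filters out net-disliked recommendations and estimates the best hour of the day to interact with the patient. Thresholds, field names and the log format are assumptions.

```python
from collections import defaultdict

# Recommendation Evaluation: keep only recommendations whose net rating
# (likes minus dislikes) is not negative.
def filter_recommendations(candidates, ratings):
    return [r for r in candidates if ratings.get(r, 0) >= 0]

# Patient and System Interaction Evaluation: from (hour, answered) log
# entries, pick the hour of day with the most answered recommendations.
def best_interaction_hour(interactions):
    answered = defaultdict(int)
    for hour, was_answered in interactions:
        if was_answered:
            answered[hour] += 1
    return max(answered, key=answered.get)

ratings = {"drink water": 3, "open a window": -2}   # net likes per recommendation
print(filter_recommendations(["drink water", "open a window"], ratings))

log = [(9, True), (9, True), (14, False), (21, True), (9, False)]
print(best_interaction_hour(log))  # 9
```

Goal Evaluation, the remaining mechanism, is the achievement/deterioration check already illustrated by the plan-execution step.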
This way it will be possible to avoid unnecessary and overly repetitive interactions
with the patient, which may tire him/her and increase his/her disinterest in continuing
to use the developed system. All the previous feedback mechanisms are considered in
the adaptive goal-setting procedure that is executed automatically every day to
evaluate and readjust goals based on user performance for that day. For this, we have
taken into account the model proposed by Cabrita and colleagues in [14], where they
defined an automated personalized goal-setting feature in the context of physical
activity coaching, in which they determined the goal line for an upcoming day by
combining the newly acquired data with either stored data from that day of the week
or default parameters defined by the healthcare professional. We have considered a
similar process, which updates the coaching plan goals automatically every single day
by comparing the data acquired that day with the historical data (or with the
default parameters in case no data has been provided by the user until then) for the
same day. We have considered both the goal completion rate and the average goal
difficulty as performance measures to identify whether the user improved or worsened,
and depending on the difference between both values the goals for the upcoming days
will be updated accordingly. After that, we consider the data obtained from patient and
system interaction to determine whether an established deadline to achieve a certain
goal could also be adapted, depending on the average utilization rate and response
time obtained.
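The daily adaptive goal-setting step can be sketched as follows: today's completion rate is compared against the history for the same weekday (or against a default parameter when no history exists), and the goal line is raised or lowered accordingly. The 10% adjustment step and the default rate are assumed parameters, not values taken from the paper or from [14].

```python
# Sketch of the daily adaptive goal-setting procedure. current_goal is the
# goal line (e.g. a target quantity), today_rate the goal completion rate for
# today, history_rates the stored rates for the same day of the week.
def adapt_goal(current_goal, today_rate, history_rates, default_rate=0.5, step=0.10):
    baseline = sum(history_rates) / len(history_rates) if history_rates else default_rate
    if today_rate > baseline:    # user improved: make the goal more demanding
        return round(current_goal * (1 + step), 2)
    if today_rate < baseline:    # user worsened: relax the goal
        return round(current_goal * (1 - step), 2)
    return current_goal         # unchanged performance keeps the goal as is

print(adapt_goal(10, today_rate=0.9, history_rates=[0.6, 0.7]))  # 11.0
print(adapt_goal(10, today_rate=0.4, history_rates=[0.6, 0.7]))  # 9.0
print(adapt_goal(10, today_rate=0.5, history_rates=[]))          # 10 (default baseline)
```

A full implementation would also weigh in the average goal difficulty and the interaction data (utilization rate, response time) when adapting deadlines, as described above.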
Plan Completion. The third step considered is the completion of the defined coaching
plan. As explained above, a defined plan is completed whenever
the defined goals (excluding all the intermediate goals) have been achieved.
After this the patient is provided with a report containing all the information on his/her
performance while executing the coaching plan which includes the total number of
goals achieved (including all the intermediate goals) and other metrics such as the time
needed to achieve those goals, the number of deteriorations verified, the number of
generated recommendations while following the coaching plan, the number of
approved and disapproved recommendations, among others.
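The completion report described above can be sketched as a simple aggregation. The metric names mirror the text (goals achieved, time needed, deteriorations, recommendations generated, approved and disapproved), but the record layout is an assumption.

```python
# Sketch of the report produced when a coaching plan is completed.
def completion_report(plan_log):
    recs = plan_log["recommendations"]
    return {
        "goals_achieved": sum(1 for g in plan_log["goals"] if g["achieved"]),
        "days_to_complete": plan_log["end_day"] - plan_log["start_day"],
        "deteriorations": plan_log["deteriorations"],
        "recommendations_generated": len(recs),
        "approved": sum(1 for r in recs if r["rating"] == "like"),
        "disapproved": sum(1 for r in recs if r["rating"] == "dislike"),
    }

# Hypothetical log of a finished plan (ratings may also be missing entirely).
log = {"goals": [{"achieved": True}, {"achieved": True}, {"achieved": False}],
       "start_day": 1, "end_day": 29, "deteriorations": 1,
       "recommendations": [{"rating": "like"}, {"rating": "like"},
                           {"rating": "dislike"}, {"rating": None}]}
report = completion_report(log)
print(report["goals_achieved"], report["approved"], report["disapproved"])  # 2 2 1
```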
Plan Post Completion. The last step is the coaching plan post completion, in which the
achieved results are verified after the plan has been completed. As such, whenever the
patient provides more clinical data after he/she has completed a coaching plan, that
information will be verified once again to understand whether the patient's health condition
has deteriorated and whether any achieved result has been compromised (for example, if the
patient completed a smoking cessation coaching plan successfully and then started
smoking again). As a result, the healthcare professional will be notified so that he/she
can set a new coaching plan for that patient.
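This post-completion check can be sketched as a comparison between the plan's achieved end state and newly arriving clinical data, with a regression flagging a notification for the healthcare professional. All names here are illustrative.

```python
# Sketch of the post-completion check: each achieved target is re-verified
# against new clinical data; a failed target yields a notification message.
def post_completion_check(achieved_state, new_data):
    notifications = []
    for variable, target in achieved_state.items():
        if variable in new_data and not target(new_data[variable]):
            notifications.append(
                f"Regression on '{variable}': consider a new coaching plan.")
    return notifications

# E.g. a completed smoking-cessation plan achieved zero cigarettes per day.
achieved = {"cigarettes_per_day": lambda v: v == 0}
print(post_completion_check(achieved, {"cigarettes_per_day": 4}))  # regression flagged
print(post_completion_check(achieved, {"cigarettes_per_day": 0}))  # []
```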
The increasing number of people suffering from CORD has led to an overload of
healthcare resources to monitor and support patients in the management of their
disease. Traditional methods of aiding these patients are no longer cost-effective nor
adequate, especially as new treatments combining technological developments become
more relevant and allow patients to better self-monitor and self-manage their health
condition. In this context, mobile coaching technologies can exploit the different
features and embedded sensors available on the smartphone and are now being considered
as an alternative option to directly monitor patients with CORD. The solution proposed
by the PHE system brings further advantages by providing a healthcare solution that
does not require any external devices other than the smartphone itself, and that is
therefore friendlier and more appealing cost-wise to the patient and can be easily
integrated in his/her daily life.
In this work we have presented the overall architecture of the coaching module
integrated in the PHE system, which includes, among several components, a coaching
plan used to guide patients with CORD to adopt better and healthier behaviors. We
have provided a conceptual definition of the different phases necessary for this
component to operate correctly and explained how it can automatically adapt itself to
the user's preferences and interactions with the PHE system.
As future work, we intend to integrate the defined coaching plan in the developed
prototype of the PHE system and study its effectiveness and usability in a real case
scenario. After that, as we collect more data from the interactions between the
patient and the PHE system, we will be able to apply more intelligent mechanisms
(predictive analytics) to enhance the interactions and recommendations provided to the
user and predict whether a certain interaction or recommendation is adequate at a given
moment in time or not.
Acknowledgments. The work presented in this paper has been developed under the EUREKA -
ITEA3 Project PHE (PHE-16040), and by National Funds through FCT (Fundação para a
Ciência e a Tecnologia) under the projects UID/EEA/00760/2019 and UID/CEC/00319/2019 and
by NORTE-01-0247-FEDER-033275 (AIRDOC - “Aplicação móvel Inteligente para suporte
individualizado e monitorização da função e sons Respiratórios de Doentes Obstrutivos Cróni-
cos”) by NORTE 2020 (Programa Operacional Regional do Norte).
References
1. GOLD: Pocket Guide to COPD Diagnosis, Management and Prevention. A guide for Health
Care Professionals. 2019 Report (2019)
2. Naghavi, M., Abajobir, A.A., Abbafati, C., Abbas, K.M., Abd-Allah, F., Abera, S.F.,
Aboyans, V., Adetokunboh, O., Afshin, A., Agrawal, A.: Global, regional, and national age-
sex specific mortality for 264 causes of death, 1980–2016: a systematic analysis for the
Global Burden of Disease Study 2016. Lancet 390, 1151–1210 (2017)
3. European Respiratory Society: The global impact of respiratory disease. Forum of
International Respiratory Societies (2017)
4. Gibson, G.J., Loddenkemper, R., Lundbäck, B., Sibille, Y.: Respiratory health and disease in
Europe: the new European Lung White Book. European Respiratory Society (2013)
5. Gobbi, C., Hsuan, J.: Collaborative purchasing of complex technologies in healthcare:
implications for alignment strategies. Int. J. Oper. Prod. Manag. 35, 430–455 (2015)
6. Steinhubl, S.R., Muse, E.D., Topol, E.J.: Can mobile health technologies transform health
care? JAMA 310, 2395–2396 (2013)
7. Luxton, D.D., McCann, R.A., Bush, N.E., Mishkind, M.C., Reger, G.M.: mHealth for
mental health: integrating smartphone technology in behavioral healthcare. Prof. Psychol.
Res. Pract. 42, 505 (2011)
8. Almeida, A., Amaral, R., Sá-Sousa, A., Martins, C., Jacinto, T., Pereira, M., Pinho, B.,
Rodrigues, P.P., Freitas, A., Marreiros, G.: FRASIS-Monitorização da função respiratória na
asma utilizando os sensores integrados do smartphone. Revista Portuguesa de Imunoaler-
gologia 26, 273–283 (2018)
9. Deterding, S., Sicart, M., Nacke, L., O’Hara, K., Dixon, D.: Gamification. Using game-
design elements in non-gaming contexts. In: Extended Abstracts on Human Factors in
Computing Systems, CHI 2011, pp. 2425–2428. ACM (2011)
10. Tinschert, P., Jakob, R., Barata, F., Kramer, J.-N., Kowatsch, T.: The potential of mobile
apps for improving asthma self-management: a review of publicly available and well-
adopted asthma apps. JMIR mHealth uHealth 5, e113 (2017)
11. Bashshur, R.L., Shannon, G.W., Smith, B.R., Alverson, D.C., Antoniotti, N., Barsan, W.G.,
Bashshur, N., Brown, E.M., Coye, M.J., Doarn, C.R.: The empirical foundations of
telemedicine interventions for chronic disease management. Telemed. e-Health 20, 769–800
(2014)
12. Watson, H.A., Tribe, R.M., Shennan, A.H.: The role of medical smartphone apps in clinical
decision-support: a literature review. Artif. Intell. Med. 101707 (2019)
13. Vieira, A., Martinho, D., Martins, C., Almeida, A., Marreiros, G.: Defining an architecture
for a coaching module to support self-monitoring of chronic obstructive respiratory diseases.
Stud. Health Technol. Inform. 262, 130–133 (2019)
14. Cabrita, M., op den Akker, H., Achterkamp, R., Hermens, H.J., Vollenbroek-Hutten, M.M.:
Automated personalized goal-setting in an activity coaching application. In: SENSORNETS,
pp. 389–396 (2014)
Reviewing Data Analytics Techniques in Breast
Cancer Treatment
Abstract. Data mining (DM) or Data Analytics is the process of extracting new
valuable information from large quantities of data; it is reshaping many indus-
tries including the medical one. Its contribution to medicine is very important
particularly in oncology. Breast cancer is the most common type of cancer in the
world and it occurs almost entirely in women, but men can get attacked too.
Researchers over the world are trying every day to improve, prevention,
detection and treatment of Breast Cancer (BC) in order to provide more effective
treatments to patients. In this vein, the present paper carried out a systematic
map of the use of data mining technique in breast cancer treatment. The aim was
to analyse and synthetize studies on DM applied to breast cancer treatment. In
this regard, 44 relevant articles published between 1991 and 2019 were selected
and classified according to three criteria: year and channel of publication,
research type through DM contribution in BC treatment and DM techniques. Of
course, there are not many articles for treatment, because the researchers have
been interested in the diagnosis with the different classification techniques, and
it may be because of the importance of early diagnosis to avoid danger. Results
show that papers were published in different channels (especially journals or
conferences), researchers follow the DM pipeline to deal with a BC treatment,
the challenge is to reduce the number of non-classified patients, and affect them
in the most appropriate group to follow the suitable treatment, and classification
was the most used task of DM applied to BC treatment.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 65–75, 2020.
https://doi.org/10.1007/978-3-030-45697-9_7
66 M. Ezzat and A. Idri
1 Introduction
Breast cancer is the most common cancer and cause of death among women every year
[1]; it often causes confusion as to the adequate treatment to adopt in different cases.
The field of BC treatment using DM techniques has seen important progress, and
researchers have become more interested in the topic, since medical decision-makers
need to be supported by DM techniques. Every day, the occurrence of BC is increasing
[2], and researchers should be aware of this, so studies in this field, especially on the
treatment task, should be richer. As the treatments for BC are
improving [3], patients could live longer even with the most advanced BC. Nowadays,
since it is possible to access BC medical data, and given the powerful DM techniques
supporting decision making in treatment, we could establish a strong strategy to
deal with BC treatment using either precision medicine and/or DM tools to draw the
roadmap through a robust decision-making framework [3].
The most common types of BC treatment are surgery, radiotherapy, chemotherapy,
hormone therapy and biological therapy [4, 5]. The appeal of DM in the medical field
is increasing, in particular in BC [4]. This is because DM provides a variety of
techniques and tools for dealing with complex problems [6]. In fact, DM can be defined
as the process of browsing data to extract useful knowledge. DM can take two forms:
either machine learning using artificial intelligence techniques, or statistical-based
techniques. BC treatment has taken advantage of the variety of DM objectives
(classification, regression, clustering and association) to provide useful solutions to
oncologists [6]. However, to the best of the authors' knowledge, no systematic
mapping study (SMS) has been carried out to synthesize and summarize the findings of
primary studies dealing with the use of DM techniques for BC treatment, which
motivates the present study. Thus, the present study conducts a systematic map of
primary studies published in SpringerLink, PubMed, ACM, Google Scholar, Science
Direct and IEEExplore between 1991 and 2019. A set of 44 papers was
selected, synthesized, and classified according to: year and channel of publication,
research type, and DM techniques used.
The paper is composed of 5 sections. Section 2 shows the methodology followed to
carry out the present SMS. Section 3 presents the results of research questions. Sec-
tion 4 discusses the results. Finally, conclusion and future work are presented in
Sect. 5.
2 Research Methodology
The goal of a SMS is to build a classification scheme to structure a field of interest [4].
Whilst SMS involves a horizontal approach to the published studies, Systematic Lit-
erature Review (SLR) discusses and analyses the processes and outcomes of previous
works vertically. The SMS process can be summed up in five steps: Defining the
research question, conducting a search, screening the papers, assigning keywords to
each paper using the abstract, and extracting data and mapping the results.
of search, and this was done by gathering terms and synonyms figured in the RQs. We
used OR to link the alternatives and AND to link the main terms. The resulting search
string was: (Breast OR “Mammary gland” OR “Chemotherapy” OR “mammography”)
AND (cancer* OR tumor OR malignancy OR masses)
AND (treatment OR cure OR medication OR Prognosis) OR (Identification OR
Analysis OR monitoring) AND (data mining* OR machine learning* OR analytics*
OR categorization* OR intelligent OR classificat* OR cluster* OR associat* OR
predict*) AND (model* OR algorithm* OR technique* OR rule* OR method* OR
tool* OR framework* OR recommend).
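The OR/AND assembly described above can be sketched programmatically: OR joins the synonyms inside each group, and AND links the groups. Only the first two term groups of the paper's string are shown, and straight quotes are used for simplicity.

```python
# Minimal sketch of building a boolean search string from synonym groups.
def or_group(terms):
    return "(" + " OR ".join(terms) + ")"

groups = [
    ["Breast", '"Mammary gland"', '"Chemotherapy"', '"mammography"'],
    ["cancer*", "tumor", "malignancy", "masses"],
]
query = " AND ".join(or_group(g) for g in groups)
print(query)
```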
3 Results
This section reports the results relative to the research questions in Table 1. To do
so, we first present an overview of the selection process, then the outcomes of RQs 1–3.
3.2 RQ1: What Are the Publication Sources, and in Which Years Were
the Selected Studies Related to Data-Mining Application for Breast
Cancer Treatment Published?
Table 2 shows that the 44 selected papers were published in different channels
(especially journals or conferences). 44.72% of studies were published in journals,
38.64% were found in conferences, while 28.36% had books or reports as a source.
We observe from Table 2 that Breast Cancer Research and Treatment (6.38%),
Journal of Medical Systems (6.38%) and Journal of Biomedical Informatics (4.26%)
are the most targeted journals, and the ACM-BCB conference has published only two
papers (4.55%).
We observe from Fig. 2 that researchers now tend to focus on BC treatment, in
contrast with the limited work on BC treatment in the past. Moreover, the
number of studies showed an increasing rate from 2016 to 2019: 40% of the
studies done in 2016 were published in conferences, in 2017, 66.6% were published in
conferences, and during 2019, 100% of studies were published in journals.
3.3 RQ2: How Far Data Mining Has Contributed in Making Decisions
on Breast Cancer Treatment?
The selected papers can be divided into four research types: Evaluation Research
(ER) [8], Solution Proposal (SP) [7], Experience Papers or empirical evaluation
(EP) [9] and Reviews (R) [10]: 46.51% of the selected papers were SP, proposing new
DM tools or techniques dealing with BC treatment, 14% were ER, 16% were considered
as R, and 23.26% of the papers were classified as EP [11]. Note that Experience
studies are in general difficult to carry out due to the difficulty of obtaining patient
data; moreover, they need medical expertise to evaluate and validate the outcome.
The nature of the treatment task requires experience, so we need solution proposals
and experience papers, which will be evaluated in ER papers, and we need studies
that gather all of this in reviews.
Fig. 3. Distribution of the selected studies per year (1991–2019) and research type (SP, ER, EP, R).
3.4 RQ3: What Are the Most Common Techniques and Methods to Deal
with BC Treatment?
Figure 4 shows the distribution of the DM techniques used according to each DM
objective. The four objectives were investigated as follows: Association: 21.4%,
Classification: 52.3%, Clustering: 9.5%, and Prediction: 16.6%. We observe that DT
[12–15] is the most used DM technique for classification with a percentage of 50%,
followed by fuzzy logic based models (18%), SVM [16] (18.1%) and association rules
[17] (18.1%); after that we found neural networks [18], GA [18] and BN [19], with a
percentage of 4.5% each. For the clustering objective, K-means [20] (75%) is the
most frequent, followed by association rules with a percentage of 25%. As for
prediction, neural networks and decision trees are the most used, with a percentage of
43% each, followed by association rules (14.3%). Association rules are the most
present when it comes to association, with a percentage of 55.6%, followed by fuzzy
logic-based models and Apriori with a percentage of 22.2% each.
Fig. 4. Distribution of DM techniques per objective (ARM: Association rules mining. FM:
Fuzzy methods. BN: Bayesian network. DT: Decision tree. GA: Genetic algorithm. NN: Neural
networks. SVM: Support vector machine).
4 Discussion
This section discusses the results of this systematic map of data analytics in BC
treatment. It also analyses the results obtained for each RQ.
4.1 RQ1: What Are Publication Sources, and in Which Years Were
the Selected Studies Related to Data-Mining Application for Breast
Cancer Treatment Published?
This study selected 44 relevant articles dealing with DM techniques for BC treatment.
The variety of sources can be explained by the variety of DM techniques and
objectives used to solve real-world problems. These sources are related to computer
science and to data analytics applied to BC treatment. Even though the number of
studies in BC treatment is still low, it has increased in recent years. Therefore, we
conclude that the coming years will bring very interesting outcomes in the treatment of
BC using DM techniques, based on the result of Fig. 3, where SP is the most frequent
type of study.
4.2 RQ2: How Far Data Mining Has Contributed in Making Decisions
on Breast Cancer Treatment?
Figure 3 shows that the number of SP is the highest, which can be explained by the fact
that DM-based solutions could bring an interesting push to BC treatment studies.
SP reached its top in 2017, whereas experience papers started to show up remarkably
in 2017; evaluation research is scarce due to the lack of empirical evaluations to
assess the treatment task. This shows that the use of DM techniques in BC
treatment is still not mature. EP studies are present, and this is very important in the
medical context, because it gives more credibility to any study guiding towards the
suitable treatment.
4.3 RQ3: What Are the Most Common DM Techniques to Deal with BC
Treatment?
Several DM techniques were evaluated, and the decision tree is the most frequent DM
technique used for the classification objective, which is the most recurrent task in BC
treatment. Moreover, association rules are the preferred choice of researchers for the
association task; we can also note that the famous k-means is still the most powerful
technique for clustering issues, whereas neural networks remain very accurate for
prediction problems.
We can explain that by the fact that decision trees are faster and easier to use for
classification, especially with the large choice of libraries offering the possibility to
take advantage of this technique by reusing its functionalities in a large choice of
languages. In addition, decision trees can be easily interpreted by the oncologist while
taking the decision, without any background in data mining. As for association,
association rules remain the most preferred technique among researchers for the
association task; they are easily understood by oncologists, who can therefore trust them
when deciding on BC treatments. For clustering, k-means is still the most powerful
and popular clustering technique, because it is easy to use and can provide accurate
clustering [20]. For prediction, neural networks were widely used due to their
robustness in modeling complex relationships and their flexibility to be adapted to
more complex situations [6–8].
Since there is no DM technique that can outperform all the others in all contexts,
many selected studies combined more than two DM techniques to deal with a specific
DM objective for BC treatment [21]. Also, combining more than two techniques allows
avoiding limitations and consolidating advantages of the used techniques [22].
5 Conclusion and Future Work
This study carried out a systematic map of data analytics in breast cancer treatment. It
summarized and analyzed 44 selected papers published between 1991 and 2019
according to three RQs. The findings per RQ are: (RQ1) The use of DM among
researchers has increased during the last years; the number of publications has
remarkably increased since 2016; most of the papers (44.72%) were published in
journals and 38.64% were found in conferences. (RQ2) This SMS found that the
contribution of DM to BC treatment is very low but has increased recently. Therefore,
researchers should devote more effort to the treatment task. Most of the selected papers
were classified as SP and EP. (RQ3) Classification is the most frequent DM objective,
because the problem is a classification one. For classification, the decision tree gained
more interest over the years, followed by fuzzy methods and SVM. As future work we
aim to: (1) take advantage of this SMS outcome to perform a systematic literature
review of the ML techniques investigated in BC treatment; and (2) implement a
solution for BC treatment by evaluating the different ML techniques.
References
1. Soria, D., Garibaldi, J.M., Green, A.R., Powe, D.G., Nolan, C.C., Lemetre, C., Ball, G.R.,
Ellis, I.O.: A quantifier-based fuzzy classification system for breast cancer patients. Artif.
Intell. Med. 58, 175–184 (2013). https://doi.org/10.1016/j.artmed.2013.04.006
2. Umesh, D.R., Ramachandra, B.: Association rule mining-based predicting breast cancer
recurrence on SEER breast cancer data. In: 2015 International Conference on Emerging
Research in Electronics, Computer Science and Technology, ICERECT 2015, pp. 376–380
(2016). https://doi.org/10.1109/ERECT.2015.7499044
3. Alford, S.H., Michal, O.-F., Ya’ara, G.: Harvesting population data to aid treatment
decisions in heavily pre-treated advanced breast cancer. Breast 36, S76 (2017). https://doi.
org/10.1016/s0960-9776(17)30764-6
4. Idri, A., Chlioui, I., Ouassif, B.E.: A systematic map of data analytics in breast cancer. In:
Proceedings of the Australasian Computer Science Week Multiconference on - ACSW 2018,
pp. 1–10. ACM Press, Brisband (2018)
5. Breast Cancer (female) - Treatment - NHS Choices. http://www.nhs.uk/Conditions/Cancer-
of-the-breast
6. Khrouch, S., Ezziyyani, M., Ezziyyani, M.: Decision System for the Selection of the Best
Therapeutic Protocol for Breast Cancer Based on Advanced Data Mining: A Survey.
Springer, Cham (2019)
7. Fan, Q., Zhu, C.J., Xiao, J.Y., Wang, B.H., Yin, L., Xu, X.L., Rong, F.: An application of
Apriori Algorithm in SEER breast cancer data. In: Proceedings - International Conference on
Artificial Intelligence and Computer Intelligence, AICI 2010, vol. 3, pp. 114–116 (2010).
https://doi.org/10.1109/AICI.2010.263
8. Tran, W.T., Jerzak, K., Lu, F.-I., Klein, J., Tabbarah, S., Lagree, A., Wu, T., Rosado-
Mendez, I., Law, E., Saednia, K., Sadeghi-Naini, A.: Personalized breast cancer treatments
using artificial intelligence in radiomics and pathomics. J. Med. Imaging Radiat. Sci. 50, 1–
10 (2019). https://doi.org/10.1016/j.jmir.2019.07.010
9. Shen, S., Wang, Y., Zheng, G., Jia, D., Lu, A., Jiang, M.: Exploring rules of traditional
Chinese medicine external therapy and food therapy in treatment of mammary gland
hyperplasia with text mining. In: Proceedings - 2014 IEEE International Conference on
Bioinformatics and Biomedicine, IEEE BIBM 2014, pp. 158–159 (2014). https://doi.org/10.
1109/BIBM.2014.6999347
10. Ondrouskova, E., Sommerova, L., Nenutil, R., Coufal, O., Bouchal, P., Vojtesek, B., Hrstka,
R.: AGR2 associates with HER2 expression predicting poor outcome in subset of estrogen
receptor negative breast cancer patients. Exp. Mol. Pathol. 102, 280–283 (2017). https://doi.
org/10.1016/j.yexmp.2017.02.016
11. Oskouei, R.J., Kor, N.M., Maleki, S.A.: Data mining and medical world: breast cancers’
diagnosis, treatment, prognosis and challenges. Am. J. Cancer Res. 7, 610–627 (2017)
12. Razavi, A.R., Gill, H., Ahlfeldt, H., Shahsavar, N.: Predicting metastasis in breast cancer:
comparing a decision tree with domain experts. J. Med. Syst. 31, 263–273 (2007). https://
doi.org/10.1007/s10916-007-9064-1
13. Chao, C.M., Yu, Y.W., Cheng, B.W., Kuo, Y.L.: Construction the model on the breast
cancer survival analysis use support vector machine, logistic regression and decision tree.
J. Med. Syst. 38, 1–7 (2014). https://doi.org/10.1007/s10916-014-0106-1
14. Kuo, W.J., Chang, R.F., Chen, D.R., Lee, C.C.: Data mining with decision trees for
diagnosis of breast tumor in medical ultrasonic images. Breast Cancer Res. Treat. 66, 51–57
(2001). https://doi.org/10.1023/A:1010676701382
15. Takada, M., Sugimoto, M., Ohno, S., Kuroi, K., Sato, N., Bando, H., Masuda, N., Iwata, H.,
Kondo, M., Sasano, H., Chow, L.W.C., Inamoto, T., Naito, Y., Tomita, M., Toi, M.:
Predictions of the pathological response to neoadjuvant chemotherapy in patients with
primary breast cancer using a data mining technique. Breast Cancer Res. Treat. 134, 661–
670 (2012). https://doi.org/10.1007/s10549-012-2109-2
16. Coelho, D., Sael, L.: Breast and prostate cancer expression similarity analysis by iterative
SVM based ensemble gene selection. In: Proceedings of International Conference on
Information and Knowledge Management, pp. 23–26 (2013). https://doi.org/10.1145/
2512089.2512099
17. He, Y., Zheng, X., Sit, C., Loo, W.T.Y., Wang, Z.Y., Xie, T., Jia, B., Ye, Q., Tsui, K.,
Chow, L.W.C., Chen, J.: Using association rules mining to explore pattern of Chinese
medicinal formulae (prescription) in treating and preventing breast cancer recurrence and
metastasis. J. Transl. Med. 10(Suppl 1), 1–8 (2012). https://doi.org/10.1186/1479-5876-10-
s1-s12
18. Hasan, M., Büyüktahtakın, E., Elamin, E.: A multi-criteria ranking algorithm (MCRA) for
determining breast cancer therapy. Omega U. K. 82, 83–101 (2019). https://doi.org/10.1016/
j.omega.2017.12.005
Reviewing Data Analytics Techniques in Breast Cancer Treatment 75
19. Turki, T., Wei, Z.: Learning approaches to improve prediction of drug sensitivity in breast
cancer patients. In: Proceedings of Annual International Conferences of the IEEE
Engineering in Medicine and Biology Society, EMBS, October 2016, pp. 3314–3320
(2016). https://doi.org/10.1109/EMBC.2016.7591437
20. Radha, R., Rajendiran, P.: Using K-means clustering technique to study of breast cancer. In:
Proceedings - 2014 World Congress on Computing and Communication Technologies,
WCCCT 2014, pp. 211–214 (2014). https://doi.org/10.1109/WCCCT.2014.64
21. Fahrudin, T.M., Syarif, I., Barakbah, A.R.: Feature selection algorithm using information
gain based clustering for supporting the treatment process of breast cancer. In: 2016
International Conference on Informatics and Computing, ICIC 2016, pp. 6–11 (2017).
https://doi.org/10.1109/IAC.2016.7905680
22. Çakır, A., Demirel, B.: A software tool for determination of breast cancer treatment methods
using data mining approach. J. Med. Syst. 35, 1503–1511 (2011). https://doi.org/10.1007/
s10916-009-9427-x
Enabling Smart Homes Through Health
Informatics and Internet of Things
for Enhanced Living Environments
Abstract. As people spend most of their time inside buildings, indoor envi-
ronment quality must be monitored in real-time for enhanced living environ-
ments and occupational health. Indoor environmental quality assessment is
based on the satisfaction of the thermal, sound, light and air quality conditions.
The indoor quality patterns can be directly used to promote health and well-
being. With the proliferation of the Internet of Things related technologies,
smart homes must incorporate monitoring solutions for data acquisition, trans-
mission, and microsensors for several real-time monitoring activities. This paper
presents a low-cost and scalable multi-sensor smart home solution based on
Internet of Things for enhanced indoor quality considering acoustic, thermal and
luminous comfort. The proposed system incorporates three sensor modules for
data collection and uses Wi-Fi communication technology for Internet access.
The system has been developed using open-source and mobile computing
technologies for real-time data visualization and analytics. The acquisition
modules incorporate light intensity and colour temperature, particulate matter,
formaldehyde, relative humidity, ambient temperature and sound sensor capa-
bilities. The results have successfully validated the scalability, reliability and
easy installation of the proposed system.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 76–85, 2020.
https://doi.org/10.1007/978-3-030-45697-9_8

1 Introduction
Enabling Smart Homes Through Health Informatics and IoT for ELE 77
The Environmental Protection Agency ranked indoor air quality (IAQ) among the top
five environmental risks to public health [18]. Therefore, IAQ monitoring must be a
requisite for all buildings. Reduced air quality levels are associated with numerous
health effects such as headaches, dizziness, restlessness, difficulty breathing, increased
heart rate, elevated blood pressure, coma and asphyxia [19–21].
Indoor light levels are also related to people's health and well-being [22, 23], and
daylight exposure in buildings is also related to energy costs [24]. Luminous comfort
corresponds to the individual's satisfaction regarding environmental light levels; like
thermal comfort, it depends on physical parameters that can be measured, such as light
intensity and colour, but also on personal conditions. Attention to this topic has
increased as it is now understood that light levels are directly related to people's
psychological health, performance and productivity [25].
IEQ assessment can reveal patterns in indoor living quality, which can be directly
used to plan interventions for ELE. Given the proliferation of IoT technologies, smart
homes must incorporate monitoring solutions that make use of open-source technologies
for data acquisition and transmission, and microsensors for monitoring activities such
as noise monitoring, activity recognition, and thermal and light comfort assessment
[26–33]. Therefore, this paper presents an integrated solution for IEQ which provides
thermal, acoustic and luminous comfort supervision. This solution incorporates
open-source and mobile computing technologies for data consulting and analysis.
The rest of the paper is structured as follows:
Sect. 2 presents the materials and methods used in the design of the proposed solution;
Sect. 3 presents the results and discussion, and the conclusion is presented in Sect. 4.
2 Materials and Methods

The system architecture of the proposed multi-sensor smart home solution is presented
in Fig. 1. The proposed method uses a native Wi-Fi compatible microcontroller for data
acquisition, processing and transmission. The data collected is stored in a SQL Server
database using a web application program interface (API) developed in .NET. This API
contains the web services to receive and manage the data collected by the microcon-
troller and also to provide the data output for webpage visualization and analytics
features.
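The microcontroller-to-API exchange described above can be sketched as a plain HTTP POST of a JSON document. The endpoint URL, field names, and the `build_reading` helper below are illustrative assumptions, not the authors' actual .NET API:

```python
import json
import time
from urllib import request

API_URL = "http://example.org/api/readings"  # hypothetical endpoint, not the authors' API


def build_reading(module_id, values, timestamp=None):
    """Package one acquisition cycle as the JSON document sent to the web API."""
    return {
        "module": module_id,
        "timestamp": time.time() if timestamp is None else timestamp,
        "values": values,
    }


def post_reading(reading):
    """POST a reading to the API and return the HTTP status code."""
    req = request.Request(
        API_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # network call; requires a live endpoint
        return resp.status
```

On the ESP8266 itself the equivalent logic would run as C/C++ firmware or MicroPython; the sketch only illustrates the shape of the exchange.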
The proposed method incorporates several sensing features such as light intensity
and colour temperature, particulate matter (PM), formaldehyde, relative humidity,
ambient temperature and sound level using three sensor modules. Each sensor module
is connected to an ESP8266 microcontroller. The sensor selection was conducted with
the primary goal of creating an ELE to promote occupational health and enhanced IEQ
(Fig. 2).
Fig. 2. Block diagram of the proposed multi-sensor system, representing the sensors'
components.
The PMS5003ST sensor (Beijing Plantower Co., Ltd., Beijing, China) has been
used for air quality and thermal comfort assessment. This sensor supports temperature,
humidity, PM and formaldehyde sensing features. It is a 5 V sensor with 100 mA active
and 200 µA standby current consumption and a response time of less than 10 s. The
particle counting efficiency is 98%, the PM2.5 measurement range is 0–2000 µg/m³,
and the maximum error is ±10% (at PM2.5 concentrations of 100–500 µg/m³). The
temperature range is from −10 °C to 50 °C, and the maximum error is ±0.5 °C. The
relative humidity range is 0–90%, and the maximum error is ±2%. Regarding the
formaldehyde sensing capabilities, the range is 0–2 mg/m3, and the maximum error is
less than ±5% of the output value. The PMS5003ST is connected using the I2C
interface.
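Readings outside the measurement ranges quoted above indicate a sensor or transmission fault and should be rejected before storage. A minimal range check, using only the PMS5003ST figures from the text (the function and key names are illustrative assumptions):

```python
# Plausibility limits taken from the PMS5003ST figures quoted above;
# readings outside these ranges are dropped rather than stored.
SENSOR_RANGES = {
    "pm25": (0.0, 2000.0),         # µg/m³
    "temperature": (-10.0, 50.0),  # °C
    "humidity": (0.0, 90.0),       # % RH
    "formaldehyde": (0.0, 2.0),    # mg/m³
}


def validate_reading(reading):
    """Return only the fields of a reading that fall inside the datasheet ranges."""
    valid = {}
    for key, value in reading.items():
        low, high = SENSOR_RANGES[key]
        if low <= value <= high:
            valid[key] = value
    return valid
```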
The acoustic comfort is monitored using the calibrated sound sensor (DFRobot,
Shanghai, China), which is connected using analogue communication. This sensor has
a measurement range of 30 dBA–130 dBA with a measurement error of ±1.5 dBA.
The frequency response is 31.5 Hz–8.5 kHz and the response time is 125 ms.
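Since the sound sensor is read over an analogue input, the firmware must convert raw ADC counts into dBA. DFRobot's analogue sound level meters output a voltage that maps linearly onto the dBA scale; the 50 dBA-per-volt factor below is an assumption based on that product family and should be checked against the specific sensor's datasheet:

```python
ADC_MAX = 1023         # 10-bit ADC, e.g. the ESP8266 A0 input
ADC_REF_VOLTAGE = 3.3  # volts at full scale (board dependent)


def adc_to_dba(adc_value, dba_per_volt=50.0):
    """Convert a raw ADC sample to a sound level in dBA.

    The 50 dBA/V factor is an assumption based on DFRobot's analogue
    sound level meters; verify against the actual sensor's datasheet.
    """
    voltage = adc_value / ADC_MAX * ADC_REF_VOLTAGE
    return voltage * dba_per_volt
```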
80 G. Marques and R. Pitarma
The TCS3472 sensor (Adafruit Industries, New York, United States) has been
selected to monitor luminous comfort. This sensor detects RGB light, colour
temperature and intensity levels. It offers high sensitivity and a wide dynamic
range, which allow a reliable assessment of lighting conditions. The sensor is
connected using the I2C interface.
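Colour temperature is not read directly from the TCS3472: the raw RGB channels are first converted to CIE XYZ and then passed through McCamy's approximation. A sketch of that computation follows; the matrix coefficients mirror those used in Adafruit's reference library for this sensor and should be treated as assumptions:

```python
def rgb_to_cct(r, g, b):
    """Approximate correlated colour temperature (K) from raw RGB counts.

    RGB -> CIE XYZ transform followed by McCamy's formula, mirroring the
    approach of Adafruit's TCS3472 library; the coefficients are assumptions
    copied from that reference implementation.
    """
    # RGB -> CIE XYZ (device-specific matrix)
    x = -0.14282 * r + 1.54924 * g + -0.95641 * b
    y = -0.32466 * r + 1.57837 * g + -0.73191 * b
    z = -0.68202 * r + 0.77073 * g + 0.56332 * b
    total = x + y + z
    if total == 0:
        raise ValueError("no light detected")
    # Chromaticity coordinates
    xc, yc = x / total, y / total
    # McCamy's cubic approximation of CCT
    n = (xc - 0.3320) / (0.1858 - yc)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
```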
The data collected by the proposed solution can be used not only to provide a
reliable IEQ assessment of the monitored space but also to support the energy
management of the building through the web portal, anywhere and anytime. The cost of
the system is presented in Table 1; the total system cost is below 175 USD.
The proposed multi-sensor system has been designed using the ESP8266, a low-
cost Wi-Fi microchip developed by Espressif Systems in Shanghai, China. This
microcontroller incorporates a 32-bit RISC microprocessor core based on the Tensilica
Xtensa Diamond Standard 106Micro with 80 MHz clock speed and supports 32 KiB
instruction RAM (Fig. 2). The modules are powered using a 230 V–5 V AC-DC 2 A
power supply. This smart home system is based on Wi-Fi connectivity for Internet
access to provide real-time IEQ data monitoring. Furthermore, the system supports easy
Wi-Fi configuration using a Wi-Fi compatible device with a web browser. When the
system is connected to a Wi-Fi network, the access credentials are saved on the
hardware memory for future access. If no saved network is available, the system enters
hotspot mode, and the user can access this hotspot to configure the Wi-Fi network
to which the system should connect. After initialization, the system performs data
acquisition, and the data is then processed. When the defined timer elapses, the
system transmits the collected data to the database for storage. The sensing
activities are performed every 15 s, but this timer can be
updated according to the user’s requirements. Figure 3 represents the flowchart of the
sensor modules used in the proposed multi-sensor smart home system.
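The acquisition-and-transmission cycle shown in the flowchart can be sketched as a timer-driven loop. `read_sensors`, `transmit`, and `clock` are injected placeholders for the hardware reads, the API upload, and the system clock; the real firmware runs on the ESP8266, so this is an illustration of the control flow, not the actual code:

```python
TRANSMIT_INTERVAL_S = 15  # default sensing period; user-configurable


def transmission_due(last_sent, now, interval=TRANSMIT_INTERVAL_S):
    """True when the configured timer has elapsed since the last upload."""
    return now - last_sent >= interval


def acquisition_loop(read_sensors, transmit, clock, iterations):
    """Skeleton of the sensor module's main loop (dependencies injected).

    read_sensors() -> dict of current sensor values
    transmit(dict) -> sends the values to the database API
    clock()        -> current time in seconds
    """
    last_sent = clock()
    for _ in range(iterations):
        values = read_sensors()          # acquire and process
        now = clock()
        if transmission_due(last_sent, now):
            transmit(values)             # store in the remote database
            last_sent = now
```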
Fig. 3. Flow diagram of the acquisition module used in the proposed multi-sensor system.
3 Results and Discussion

The proposed smart home solution supports IAQ, thermal comfort, luminous comfort
and acoustic comfort supervision for enhanced occupational health and well-being. The
proposed system has been tested in a laboratory of a Portuguese university (Fig. 4).
The monitored room is typically occupied by 15 persons, 4 h per day, five days per
week, and is used for teaching activities. The laboratory consists of two rooms; the
monitored room has an area of around 64 m² and was monitored in real-time for two months.
Fig. 4. Installation schema of the tests conducted. R – router; 1 – luminous comfort module, 2 –
IAQ and thermal comfort module, 3 – acoustic comfort module.
The tests performed had the primary goal of verifying the functional requirements
of the proposed monitoring system. The data collected confirms the operability and
performance of the proposed smart home system for real-time data collection and
visualization.
Table 2. Comparison of the proposed system with smart home monitoring solutions available in
the literature.
Microcontroller | Sensors | Connectivity | IAQ | Acoustic comfort | Luminous comfort | Thermal comfort
PIC 24F16KA102 [34] | Temperature, relative humidity and CO2 | nRF24L01 | √ | – | – | √
Arduino UNO [35] | CO2 | ZigBee | √ | – | – | –
Waspmote [36] | CO, CO2, PM, temperature and relative humidity | ZigBee | √ | – | – | √
STM32F103RC [37] | PM, temperature and relative humidity | IEEE 802.15.4k | √ | – | – | √
ESP8266 [proposed system] | Temperature, relative humidity, noise, PM, formaldehyde, light | Wi-Fi | √ | √ | √ | √
development of critical alerts and notifications to notify the building manager when the
thermal, acoustic and luminous comfort requirements are not met.
4 Conclusion
In this paper, a low-cost, open-source and scalable multi-sensor smart home solution
based on IoT for enhanced IEQ considering acoustic, thermal and luminous comfort, is
presented. The proposed method incorporates three sensor modules for data collection
and uses Wi-Fi communication technology for Internet access. The data collected is
available in real-time for data visualization and analytics through a web portal. This
smart home solution provides easy installation and easy Wi-Fi configuration methods.
Furthermore, the proposed solution was successfully tested and validated to verify the
functional architecture. The tests conducted present positive results towards an
essential contribution to enhanced occupational health and well-being. Moreover,
based on the data collected in the tests performed, we conclude that under certain
conditions the IEQ is significantly lower than the levels considered healthy for
people's health and well-being. Nevertheless, the proposed system needs further
experimental validation to ensure calibration and accuracy.
References
1. Wilson, C., Hargreaves, T., Hauxwell-Baldwin, R.: Smart homes and their users: a
systematic analysis and key challenges. Pers. Ubiquit. Comput. 19, 463–476 (2015)
2. Marques, G., Pitarma, R., Garcia, N.M., Pombo, N.: Internet of Things architectures,
technologies, applications, challenges, and future directions for enhanced living environ-
ments and healthcare systems: a review. Electronics 8, 1081 (2019). https://doi.org/10.3390/
electronics8101081
3. Ganchev, I., Garcia, N.M., Dobre, C., Mavromoustakis, C.X., Goleva, R. (eds.): Enhanced
Living Environments: Algorithms, Architectures, Platforms, and Systems. Springer, Cham
(2019). https://doi.org/10.1007/978-3-030-10752-9
4. Marques, G., Garcia, N., Pombo, N.: A survey on IoT: architectures, elements, applications,
QoS, platforms and security concepts. In: Mavromoustakis, C.X., Mastorakis, G., Dobre, C.
(eds.) Advances in Mobile Cloud Computing and Big Data in the 5G Era, pp. 115–130.
Springer, Cham (2017). https://doi.org/10.1007/978-3-319-45145-9_5
5. Marques, G.: Ambient assisted living and Internet of Things. In: Cardoso, P.J.S., Monteiro,
J., Semião, J., Rodrigues, J.M.F. (eds.) Harnessing the Internet of Everything (IoE) for
Accelerated Innovation Opportunities, pp. 100–115. IGI Global, Hershey (2019). https://doi.
org/10.4018/978-1-5225-7332-6.ch005
6. Dobre, C., Mavromoustakis, C.X., Garcia, N.M., Mastorakis, G., Goleva, R.I.: Introduction
to the AAL and ELE systems. In: Ambient Assisted Living and Enhanced Living
Environments, pp. 1–16. Elsevier (2017). https://doi.org/10.1016/B978-0-12-805195-5.
00001-6
7. Yang, L., Yan, H., Lam, J.C.: Thermal comfort and building energy consumption
implications – a review. Appl. Energy 115, 164–173 (2014). https://doi.org/10.1016/j.
apenergy.2013.10.062
8. Havenith, G., Holmér, I., Parsons, K.: Personal factors in thermal comfort assessment:
clothing properties and metabolic heat production. Energy Build. 34, 581–591 (2002).
https://doi.org/10.1016/S0378-7788(02)00008-7
9. Stansfeld, S.A., Matheson, M.P.: Noise pollution: non-auditory effects on health. Br. Med.
Bull. 68, 243–257 (2003). https://doi.org/10.1093/bmb/ldg033
10. Auger, N., Duplaix, M., Bilodeau-Bertrand, M., Lo, E., Smargiassi, A.: Environmental noise
pollution and risk of preeclampsia. Environ. Pollut. 239, 599–606 (2018). https://doi.org/10.
1016/j.envpol.2018.04.060
11. Foraster, M., Eze, I.C., Schaffner, E., Vienneau, D., Héritier, H., Endes, S., Rudzik, F.,
Thiesse, L., Pieren, R., Schindler, C., Schmidt-Trucksäss, A., Brink, M., Cajochen, C., Marc
Wunderli, J., Röösli, M., Probst-Hensch, N.: Exposure to road, railway, and aircraft noise
and arterial stiffness in the SAPALDIA study: annual average noise levels and temporal
noise characteristics. Environ. Health Perspect. 125, 097004 (2017). https://doi.org/10.1289/
EHP1136
12. Gupta, A., Gupta, A., Jain, K., Gupta, S.: Noise pollution and impact on children health.
Indian J. Pediatr. 85, 300–306 (2018). https://doi.org/10.1007/s12098-017-2579-7
13. Zanella, A., Bui, N., Castellani, A., Vangelista, L., Zorzi, M.: Internet of Things for smart
cities. IEEE Internet Things J. 1, 22–32 (2014). https://doi.org/10.1109/JIOT.2014.2306328
14. Murphy, E., King, E.A.: An assessment of residential exposure to environmental noise at a
shipping port. Environ. Int. 63, 207–215 (2014). https://doi.org/10.1016/j.envint.2013.11.
001
15. Murphy, E., King, E.A.: Environmental noise and health. In: Environmental Noise Pollution,
pp. 51–80. Elsevier (2014). https://doi.org/10.1016/B978-0-12-411595-8.00003-3
16. Stansfeld, S.: Noise effects on health in the context of air pollution exposure. Int. J. Environ.
Res. Public Health 12, 12735–12760 (2015). https://doi.org/10.3390/ijerph121012735
17. Morillas, J.M.B., Gozalo, G.R., González, D.M., Moraga, P.A., Vílchez-Gómez, R.: Noise
pollution and urban planning. Curr. Pollut. Rep. 4, 208–219 (2018). https://doi.org/10.1007/
s40726-018-0095-7
18. Seguel, J.M., Merrill, R., Seguel, D., Campagna, A.C.: Indoor air quality. Am. J. Lifestyle
Med. 11(4), 284–295 (2016). https://doi.org/10.1177/1559827616653343
19. Tsai, W.-T.: Overview of green building material (GBM) policies and guidelines with
relevance to indoor air quality management in Taiwan. Environments 5, 4 (2017). https://doi.
org/10.3390/environments5010004
20. Singleton, R., Salkoski, A.J., Bulkow, L., Fish, C., Dobson, J., Albertson, L., Skarada, J.,
Ritter, T., Kovesi, T., Hennessy, T.W.: Impact of home remediation and household
education on indoor air quality, respiratory visits and symptoms in Alaska native children.
Int. J. Circumpolar Health 77, 1422669 (2018). https://doi.org/10.1080/22423982.2017.
1422669
21. Bruce, N., Pope, D., Rehfuess, E., Balakrishnan, K., Adair-Rohani, H., Dora, C.: WHO
indoor air quality guidelines on household fuel combustion: strategy implications of new
evidence on interventions and exposure–risk functions. Atmos. Environ. 106, 451–457
(2015). https://doi.org/10.1016/j.atmosenv.2014.08.064
22. Azmoon, H., Dehghan, H., Akbari, J., Souri, S.: The relationship between thermal comfort
and light intensity with sleep quality and eye tiredness in shift work nurses. J. Environ.
Public Health 2013, 1–5 (2013). https://doi.org/10.1155/2013/639184
23. Gropper, E.I.: Promoting health by promoting comfort. Nurs. Forum 27, 5–8 (1992). https://
doi.org/10.1111/j.1744-6198.1992.tb00905.x
24. Xue, P., Mak, C.M., Cheung, H.D.: The effects of daylighting and human behavior on
luminous comfort in residential buildings: a questionnaire survey. Build. Environ. 81, 51–59
(2014). https://doi.org/10.1016/j.buildenv.2014.06.011
25. Hwang, T., Kim, J.T.: Effects of indoor lighting on occupants’ visual comfort and eye health
in a green building. Indoor Built Environ. 20, 75–90 (2011). https://doi.org/10.1177/
1420326X10392017
26. Marques, G., Roque Ferreira, C., Pitarma, R.: A system based on the Internet of Things for
real-time particle monitoring in buildings. Int. J. Environ. Res. Public Health 15, 821 (2018).
https://doi.org/10.3390/ijerph15040821
27. Feria, F., Salcedo Parra, O.J., Reyes Daza, B.S.: Design of an architecture for medical
applications in IoT. In: Luo, Y. (ed.) Cooperative Design, Visualization, and Engineering,
pp. 263–270. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46771-9_34
28. Marques, G., Pitarma, R.: A cost-effective air quality supervision solution for enhanced
living environments through the Internet of Things. Electronics 8, 170 (2019). https://doi.
org/10.3390/electronics8020170
29. Marques, G., Ferreira, C.R., Pitarma, R.: Indoor air quality assessment using a CO2
monitoring system based on Internet of Things. J. Med. Syst. 43, 67 (2019). https://doi.org/
10.1007/s10916-019-1184-x
30. Marques, G., Pitarma, R.: mHealth: indoor environmental quality measuring system for
enhanced health and well-being based on Internet of Things. JSAN 8, 43 (2019). https://doi.
org/10.3390/jsan8030043
31. Marques, G., Pitarma, R.: Noise monitoring for enhanced living environments based on
Internet of Things. In: Rocha, Á., Adeli, H., Reis, L.P., Costanzo, S. (eds.) New Knowledge
in Information Systems and Technologies, pp. 45–54. Springer, Cham (2019). https://doi.
org/10.1007/978-3-030-16187-3_5
32. Marques, G., Pitarma, R.: Noise mapping through mobile crowdsourcing for enhanced living
environments. In: Rodrigues, J.M.F., Cardoso, P.J.S., Monteiro, J., Lam, R., Krzhizha-
novskaya, V.V., Lees, M.H., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science –
ICCS 2019, pp. 670–679. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22744-
9_52
33. Marques, G., Pitarma, R.: Air quality through automated mobile sensing and wireless sensor
networks for enhanced living environments. In: 2019 14th Iberian Conference on
Information Systems and Technologies (CISTI), Coimbra, pp. 1–7. IEEE (2019). https://
doi.org/10.23919/CISTI.2019.8760641
34. Shah, J., Mishra, B.: IoT enabled environmental monitoring system for smart cities. In: 2016
International Conference on Internet of Things and Applications (IOTA), Pune, pp. 383–
388. IEEE (2016). https://doi.org/10.1109/IOTA.2016.7562757
35. Salamone, F., Belussi, L., Danza, L., Galanos, T., Ghellere, M., Meroni, I.: Design and
development of a nearable wireless system to control indoor air quality and indoor lighting
quality. Sensors 17, 1021 (2017). https://doi.org/10.3390/s17051021
36. Bhattacharya, S., Sridevi, S., Pitchiah, R.: Indoor air quality monitoring using wireless
sensor network. Presented at the December (2012). https://doi.org/10.1109/ICSensT.2012.
6461713
37. Zheng, K., Zhao, S., Yang, Z., Xiong, X., Xiang, W.: Design and implementation of LPWA-
based air quality monitoring system. IEEE Access 4, 3238–3245 (2016). https://doi.org/10.
1109/ACCESS.2016.2582153
38. Gao, Y., Dong, W., Guo, K., Liu, X., Chen, Y., Liu, X., Bu, J., Chen, C.: Mosaic: a low-cost
mobile sensing system for urban air quality monitoring. In: IEEE INFOCOM 2016 - The
35th Annual IEEE International Conference on Computer Communications, San Francisco,
pp. 1–9. IEEE (2016). https://doi.org/10.1109/INFOCOM.2016.7524478
MyContraception: An Evidence-Based
Contraception mPHR for Better Contraceptive
Fit
Abstract. The fulfillment of unmet needs for contraception can help women
reach their reproductive goals. It has been proven to have a significant impact on
reducing the rates of unintended pregnancies, thereby cutting the morbidity and
mortality resulting from these pregnancies and improving the lives of women and
children in general. Therefore, there is a growing concern worldwide about
contraception and about women's ability to make an informed choice about it. In
this respect, a growing number of apps now provide clinical resources, digital
guides, or educational information concerning contraception, whether natural or
modern. However, many of these apps contain inaccurate sexual health facts and
non-evidence-based information concerning contraception. On
this basis, and with respect to the needs of women to effectively prevent unintended
pregnancies while leading a stress-free, healthy lifestyle, the World
Health Organization (WHO) Medical Eligibility Criteria (MEC) for contracep-
tion’s recommendations, and the results and recommendations of a field study
conducted in the reproductive health center Les Oranges in Rabat to collect the
app’s requirements, we developed an Android app named ‘MyContraception’.
Our solution is an evidence-based patient-centered contraceptive app that has
been developed in an attempt to facilitate: (1) Seeking evidence-based infor-
mation along with recommendations concerning the best contraceptive fit (ac-
cording to one’s medical characteristics, preferences and priorities) helping
users make informed decisions about their contraceptive choices. (2) Monitoring
one’s own menstrual cycle, fertility window, contraceptive methods usage, and
the correlation between these different elements and everyday symptoms in one
app. (3) Keeping record of one’s family medical history, medical appointments,
analyses, diagnoses, procedures and notes within the same app. In future work,
an empirical evaluation of the MyContraception solution will be conducted to
exhaustively examine its effects in improving the quality of patient-centered
contraception care.
1 Introduction
Although the majority of women seeking out contraceptive measures are likely to be
young and healthy, with fewer medical challenges than women over 35 years old,
teenagers, or those with intercurrent diseases [1], health care providers often
prescribe contraceptives to women of reproductive age with core medical conditions as
well [2]. Despite the fact that contraceptive counseling can be a challenge overall, it can
get more complicated in the presence of concomitant diseases or risk factors [3]. In this
vein, women with comorbidities may not receive adequate counseling on contraceptive
methods [2].
The first contraception consultation is of such crucial importance that it requires a
minimum recommended time of 30 min [1]. Beyond sufficient time, offering a wide
range of contraceptive methods, evidence-based knowledge of the efficacy, risks, and
benefits of the different methods, as well as building a respectful and confidential
relationship between the doctor and the women, are key quality features of good
contraceptive counseling, allowing women to make informed decisions [3]. However,
many health care providers may find this protocol quite intimidating in practice.
Consequently, iatrogenic unintended pregnancies are a reality, since they result from
errors or omissions that can be avoided during the consultation, especially the omission
of sufficient time [1]. Moreover, obsolete clinical guidelines and lack of knowledge of
new evidence can limit both the quality of contraceptive counseling and the user’s
access to safe and effective contraception [3].
According to the World Health Organization (WHO), an estimated 33 million
unintended pregnancies worldwide result from contraceptive failure or incorrect
use [4]. Unintended pregnancy was, and still is, one of the most pressing
public health issues; it is considered the main sexual and reproductive health issue
associated with the highest risk of morbidity and mortality for women [5]. Women with
chronic conditions can face serious health consequences in the event of an unwanted
pregnancy, since pregnancy can aggravate certain diseases or associate them with
harmful consequences endangering the woman's life. In addition, drugs used to
treat many chronic diseases are potentially teratogenic [2]: they affect the development
of the embryo and fetus and, when a pregnant woman is exposed to them, may cause birth
defects, fetal loss or abnormal growth and development [6].
With the fast pace of medical advancement in the reproductive health sector,
especially contraception, quick, reliable, and accurate access to evidence-based infor-
mation is mandatory for health care providers to provide quality care to women based
on the most current available evidence [7]. In line with the expansion of
technology, the number of web and mobile applications (apps) available to assist
clinicians in providing care for women is increasing. It is also becoming increasingly
common for women to use technology in the form of websites and apps to monitor and
track their cycles for fertility purposes and to inquire about contraception [8]. However,
only a few are reliable and exhaustive sources of information [9]. In this light, and taking
advantage of new technologies, we have developed an evidence-based Mobile Personal
Health Record (mPHR) to provide interactive, individually tailored information and
decision support for contraceptive use. The app is meant to prepare women for their
88 M. Kharbouch et al.
3 MyContraception Solution
3.1 Purpose
The use of contraception has become so commonplace in modern society that nearly all
women use contraception at some point in their lifetime [17]. Thus, when seeking
contraception, women need justified, individualized contraceptive counseling in
which every decision about a contraceptive method, with its advantages and drawbacks,
is weighed and discussed individually [3]. Moreover, in order to achieve an optimal
contraceptive effect and a better adherence rate, women should be involved in a shared
decision-making process [3].
In this respect, the main purpose of MyContraception solution consists of giving
women the control and ability to make an informed choice over contraception and to
organize and inform many other aspects of their contraception use, all in a convenient,
easy and discreet way. According to previous research in the field of health apps,
these characteristics are recognized as valued by women when it comes to decisions
about their bodies [18].
3.2 Functional Requirements

• Login: Given that a user has registered, she should be able to log in to the
mobile application. The login information will be stored on the phone so that, in
the future, the user is logged in automatically.
• Retrieve password: A user should be able to retrieve her password by email.
• Consult ‘About Contraception’: The user should be able to consult the ‘About
Contraception’ section to learn more about contraceptive methods, eligibility cri-
teria, efficiency, risks and more.
• Enter Menstrual Cycle Information: The user should be able to enter the date of
her last menstrual cycle, its length, and duration of period among other information.
• Monitor Menstrual Period: The user should be able to track and predict her
period, ovulation and know about chances of falling pregnant on a specific day.
• Take ‘Eligibility Test’: The user should take an Eligibility Test based on WHO’s
MEC for contraception to obtain a list of her best-suited contraceptive methods.
• ‘Eligibility Test’ Result: Once the eligibility status is identified, the user should
obtain information about her recommended contraceptive methods.
• Choose a Contraceptive Method: The user should be able to choose one of her
recommended contraceptive methods, upon which the app will be adapted.
• View contraception history: The user should be able to visualize the dated list of
her past contraceptive methods.
• Receive reminders: The user should be reminded of her ovulation period, to take
her pill, schedule a medical checkup… based on her current contraceptive method.
• Receive notifications: The user should be notified when it is her predicted first/last
day of the period, when her menstrual cycle is abnormal and when she needs to
enter some information (symptoms, mood, weight, temperature…).
• Change reminders settings: The user should be able to choose how and when she
would like to receive reminders based on her current contraceptive method.
• Change notification settings: The user should be able to choose how and when she
would like to receive notifications.
• Archive Medical Notice: The user should be able to scan or upload pictures of her
medical notice from her gallery to her medical notice archive on the app and add
notes on them.
• Archive Medical analysis: The user should be able to scan or upload pictures of
her medical analysis from her gallery to her medical analysis archive on the app and
add notes on them.
• Consult Medical Notice Archive: The user should be able to consult her medical
notice archive on the app.
• Consult Medical analysis Archive: The user should be able to consult her medical
analysis archive on the app.
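The ‘Monitor Menstrual Period’ requirement above amounts to standard calendar-method arithmetic. A minimal sketch (illustrative only; the paper does not specify the app's actual prediction logic, and the 14-day luteal phase and 5-day fertile window are textbook approximations):

```python
from datetime import date, timedelta

def predict_cycle(last_period_start: date, cycle_length: int = 28,
                  period_duration: int = 5):
    """Calendar-method estimates: next period, ovulation, fertile window.

    Ovulation is approximated as 14 days before the next period; the
    fertile window spans roughly 5 days before ovulation to 1 day after.
    """
    next_period = last_period_start + timedelta(days=cycle_length)
    ovulation = next_period - timedelta(days=14)
    period_end = last_period_start + timedelta(days=period_duration - 1)
    return {
        "period_end": period_end,
        "next_period": next_period,
        "ovulation": ovulation,
        "fertile_window": (ovulation - timedelta(days=5),
                           ovulation + timedelta(days=1)),
    }

# Example: a 28-day cycle starting 1 March 2020
p = predict_cycle(date(2020, 3, 1))
print(p["next_period"])   # 2020-03-29
print(p["ovulation"])     # 2020-03-15
```

In a real tracker the fixed `cycle_length` would typically be replaced by an average over the user's logged cycles, which is why the requirements above also ask for menstruation history.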
Previous studies have applied the ISO/IEC 25010 standard [19] to health-related
software products. Ouhbi et al. applied this standard to Mobile Personal Health
Records (mPHRs) [20], while Idri et al. conducted an evaluation of free mobile personal
health records for pregnancy monitoring based on the aforementioned standard [21], and
a quality evaluation of gamified blood donation apps using the same standard [22].
Likewise, a set of non-functional requirements was deemed to improve the software
MyContraception: An Evidence-Based Contraception mPHR 91
4 Implementation
Our contraception software solution is developed as a native Android application,
while data is stored in the Firebase cloud service to enable data backup, log
sharing, and secure access to the application for privacy reasons. In the current phase of
92 M. Kharbouch et al.
development, the application is dedicated exclusively to patients and is not linked to any
kind of clinician-centered application. A few user interfaces are shown in the Appendix.
The Appendix is accessible at the following link: https://www.um.es/giisw/manal/
Appendix.pdf.
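A plausible shape for the per-user record such an app would keep in a cloud document store like Firebase can be sketched as follows; the field names here are invented for illustration and are not taken from the actual implementation:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical per-user record; field names are illustrative,
# not the app's actual Firebase schema.
@dataclass
class UserRecord:
    email: str
    cycle_length_days: int = 28
    period_duration_days: int = 5
    last_period_start: str = ""            # ISO date string
    current_method: str = ""               # chosen contraceptive method
    method_history: list = field(default_factory=list)   # dated past methods
    medical_archive: list = field(default_factory=list)  # notice/analysis scans + notes

record = UserRecord(email="user@example.com",
                    last_period_start="2020-03-01",
                    current_method="implant")
payload = json.dumps(asdict(record))       # what would be synced to the cloud
print(json.loads(payload)["current_method"])  # implant
```

Serialising to a plain JSON document mirrors how a document store keeps per-user state, which is what makes backup and multi-device sync straightforward.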
Once successfully authenticated with her email or an existing login system, as
depicted in Fig. 1 and Fig. 2 of the Appendix, the user enters information
concerning her menstrual cycle, as shown in Fig. 3 and Fig. 4 of the Appendix.
Then the user is redirected to the home page illustrated in Fig. 5 of the Appendix. From
there, the user can: (1) Monitor her period and fertility windows as in Fig. 6 of the
Appendix. (2) Log her specific symptoms, mood, measurements, analysis/notices
records, journaling and questions for her next obstetric appointment. See Fig. 7 and
Fig. 8 of the Appendix. (3) Take an eligibility quiz to obtain her best-fitted contraceptive
method based on her age, health condition, and medical history, to cite a few, as
referred to in Fig. 11, Fig. 12 and Fig. 13 of the Appendix. Once the user picks her
current contraceptive method or chooses one of the suggested methods according to her
eligibility test results, as in Fig. 14 of the Appendix, the whole application is
personalized to meet her selected contraceptive method. Moreover, the user can consult her
contraceptive history, medical archive, menstruation history, and past obstetric
appointments as can be seen in Fig. 9 of the Appendix, consult awareness section about
her contraceptive method as described in Fig. 10 of the Appendix, set a reminder for
future obstetric appointments as detailed in Fig. 15 of the Appendix, and change the
settings of the app. In the settings, as Fig. 16 of the Appendix shows, the user is
allowed to customize the content of the app to her liking, choose the language of the
app to have a fair understanding of its content, and manage how and when she would
like to receive reminders/notifications. The user can log out from the app at any time
and navigate smoothly between the different activities thanks to a material design-based
menu.
Acknowledgments. This work was conducted within the research project PEER, 7-246 sup-
ported by the US Agency for International Development. The authors would like to thank the
National Academy of Science, Engineering, and Medicine, and USAID for their support.
References
1. Guillebaud, J.: Contraception Today, 9th edn. CRC Press, Boca Raton (2019)
2. Bonnema, R.A., McNamara, M.C., Spencer, A.L.: Contraception choices in women with
underlying medical conditions. Am. Fam. Phys. 82, 621–628 (2010)
3. Moffat, R., Sartorius, G., Raggi, A., et al.: Consultation de contraception basée sur
l’évidence. Forum Médical Suisse – Swiss Med. Forum (2019). https://doi.org/10.4414/fms.
2019.08065
4. World Health Organization: Unsafe Abortion: Global and Regional Estimates of the
Incidence of Unsafe Abortion and Associated Mortality in 2008. WHO, Geneva (2014)
5. Kassahun, E.A., Zeleke, L.B., Dessie, A.A., et al.: Factors associated with unintended
pregnancy among women attending antenatal care in Maichew Town, Northern Ethiopia,
2017. BMC Res. Notes 12, 1–6 (2019). https://doi.org/10.1186/s13104-019-4419-5
6. Gweneth, L.: Pharmacovigilance in pregnancy. In: Doan, T., Renz, C., Bhattacharya, M.,
Lievano, F., Scarazzini, L. (eds.) Pharmacovigilance: A Practical Approach, 1st edn, p. 228.
Elsevier, Amsterdam (2019)
7. Arbour, M.W., Stec, M.A.: Mobile applications for women’s health and midwifery care: a
pocket reference for the 21st century. J. Midwifery Women’s Health (2018). https://doi.org/
10.1111/jmwh.12755
8. Mendes, A.: What’s new in the world of prescribing contraception? Nurse Prescr. 16, 410–
411 (2018). https://doi.org/10.12968/npre.2018.16.9.410
9. Rousseau, F., Da Silva Godineau, S.M., De Casabianca, C., et al.: State of knowledge on
smartphone applications concerning contraception: a systematic review. J. Gynecol. Obstet.
Hum. Reprod. 48, 83–89 (2019). https://doi.org/10.1016/j.jogoh.2018.11.001
10. Berglund Scherwitzl, E., Lundberg, O., Kopp Kallner, H., et al.: Perfect-use and typical-use
pearl index of a contraceptive mobile app. Contraception 96, 420–425 (2017). https://doi.
org/10.1016/j.contraception.2017.08.014
11. Berglund Scherwitzl, E., Gemzell Danielsson, K., Sellberg, J.A., Scherwitzl, R.: Fertility
awareness-based mobile application for contraception. Eur. J. Contracept. Reprod. Health
Care 21, 234–241 (2016). https://doi.org/10.3109/13625187.2016.1154143
12. Malarcher, S., Spieler, J., Fabic, M.S., et al.: Fertility awareness methods: distinctive modern
contraceptives. Glob. Health Sci. Pract. 4, 13–15 (2016)
13. Hubacher, D., Trussell, J.: A definition of modern contraceptive methods. Contraception 92,
420–421 (2015)
14. Chor, J., Rankin, K., Harwood, B., Handler, A.: Unintended pregnancy and postpartum
contraceptive use in women with and without chronic medical disease who experienced a live
birth. Contraception 84, 57–63 (2011). https://doi.org/10.1016/j.contraception.2010.11.018
15. Curtis, K.M., Tepper, N.K., Jatlaoui, T.C., et al.: U.S. medical eligibility criteria for
contraceptive use, 2016. MMWR Recomm. Rep. 65, 1–104 (2016). https://doi.org/10.1089/
jwh.2011.2851
16. WHO: Medical Eligibility Criteria for Contraceptive Use, 5th edn. WHO, Geneva (2015)
17. Daniels, K., Daugherty, J., Jones, J.: Current contraceptive status among women aged 15–
44: United States, 2011–2013. NCHS Data Brief 173, 1–8 (2014)
18. Newman, L.: Apps for health: what does the future hold? Br. J. Midwifery 26, 561 (2018).
https://doi.org/10.12968/bjom.2018.26.9.561
19. International Organization for Standardization: ISO/IEC 25010:2011, Systems and
Software Engineering - Systems and Software Quality Requirements and Evaluation
(SQuaRE) - System and Software Quality Models. ISO, Geneva (2011)
20. Ouhbi, S., Idri, A., Fern, L.: Applying ISO/IEC 25010 on mobile personal health records. In:
8th International Conference on Health Informatics, pp. 405–412 (2015)
21. Idri, A., Bachiri, M., Fernández-Alemán, J.L., Toval, A.: ISO/IEC 25010 based evaluation of
free mobile personal health records for pregnancy monitoring. In: IEEE 41st Annual
Computing Software Application Conference, pp. 262–267 (2017)
22. Idri, A., Sardi, L., Fernández-Alemán, J.: Quality evaluation of gamified blood donation apps
using ISO/IEC 25010 standard. In: 12th International Conference Health Informatics,
pp. 607–614 (2018)
23. WHO: Selected Practice Recommendations for Contraceptive Use, 3rd edn. WHO, Geneva
(2016)
24. Arbour, M.W., Stec, M.A.: Mobile applications for women’s health and midwifery care: a
pocket reference for the 21st century. J. Midwifery Women’s Health 63, 330–334 (2018).
https://doi.org/10.1111/jmwh.12755
25. Tebb, K.P., Trieu, S.L., Rico, R., et al.: A mobile health contraception decision support
intervention for Latina adolescents: Implementation evaluation for use in school-based
health centers. J. Med. Internet Res. 21 (2019). https://doi.org/10.2196/11163
26. Sardi, L., Idri, A., Readman, L.M., et al.: Mobile health applications for postnatal care:
review and analysis of functionalities and technical features. Comput. Methods Programs
Biomed. 184, 1–26 (2020). https://doi.org/10.1016/j.cmpb.2019.105114
27. Bachiri, M., Idri, A., Redman, L.M., et al.: A requirements catalog of mobile personal health
records for prenatal care. In: Lecture Notes in Computer Science (Including Subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 483–495.
Springer (2019)
28. Bachiri, M., Idri, A., Abran, A., et al.: Sizing prenatal mPHRs using COSMIC measurement
method. J. Med. Syst. 43, 1–11 (2019). https://doi.org/10.1007/s10916-019-1446-7
29. Bachiri, M., Idri, A., Redman, L., et al.: COSMIC functional size measurement of mobile
personal health records for pregnancy monitoring. In: Advances in Intelligent Systems and
Computing, pp. 24–33. Springer (2019)
30. Bachiri, M., Idri, A., Fernández-Alemán, J.L., Toval, A.: Evaluating the privacy policies of
mobile personal health records for pregnancy monitoring. J. Med. Syst. 42, 1–14 (2018).
https://doi.org/10.1007/s10916-018-1002-x
31. Idri, A., Bachiri, M., Fernández-Alemán, J.L., Toval, A.: Experiment design of free
pregnancy monitoring mobile personal health records quality evaluation. In: 2016 IEEE 18th
International Conference on e-Health Networking, Applications and Services, Healthcom
2016. Institute of Electrical and Electronics Engineers Inc., pp. 1–6 (2016)
32. Bachiri, M., Idri, A., Fernández-Alemán, J.L., Toval, A.: Mobile personal health records for
pregnancy monitoring functionalities: analysis and potential. Comput. Methods Programs
Biomed. 134, 121–135 (2016)
33. Bachiri, M., Idri, A., Fernandez-Aleman, J.L., Toval, A.: A preliminary study on the
evaluation of software product quality of pregnancy monitoring mPHRs. In: Proceedings of
2015 IEEE World Conference on Complex Systems, WCCS 2015. Institute of Electrical and
Electronics Engineers Inc., pp. 1–6 (2016)
34. Idri, A., Bachiri, M., Fernández-Alemán, J.L.: A framework for evaluating the software
product quality of pregnancy monitoring mobile personal health records. J. Med. Syst. 40, 1–
17 (2016). https://doi.org/10.1007/s10916-015-0415-z
Predictors of Acceptance and Rejection
of Online Peer Support Groups as a Digital
Wellbeing Tool
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 95–107, 2020.
https://doi.org/10.1007/978-3-030-45697-9_10
96 J. McAlaney et al.
1 Introduction
Digital media, including social networks, gaming and online shopping, have various
benefits and represent an integral part of modern society. Such media empower social
connectedness and the free exchange of information, introducing a new lifestyle and
concepts such as digital humanity and digital citizenship. However, some compulsive
and obsessive usage styles and an over-reliance on digital media can lead to negative
consequences such as reduced involvement in real-life communities and a lack of
sleep [1]. Some usage styles can be seen as addictive, meeting common criteria of
behavioural addiction such as salience, conflict, mood modification, and relapse [2, 3].
There is a limited number of preventative, control and recovery mechanisms
available for Digital Addiction (DA). Although this problematic relationship with
technology has been recognised in a wide range of literature, DA is still not classified
as a mental disorder in the 5th edition of the Diagnostic and Statistical Manual of
Mental Disorders (DSM-5). Recently, in 2018, the World Health Organization
recognised Gaming Disorder, which represents a significant step in the search for
preventative and recovery mechanisms. Most existing research on DA focuses on the
reasons why people become overly reliant on social media and the relationship of such
reliance with factors such as personality traits [4]. Few works have placed software design
at the centre of the DA problem, both in facilitating and in combatting DA, e.g.
digital addiction labels and requirements engineering for digital wellbeing
requirements in [6, 7].
With the advances in sensing and communication technology and internet
connectivity, there has been a proliferation of software and smartphone applications to assist
with behavioural change. It remains questionable whether these solutions are effective and
whether we understand their acceptance and rejection factors from the users’ perspective.
The perception of the role and trustworthiness of such solutions has changed
following some failures and the recognition of associated risks [8].
Linking the intention to change behaviour with the act of doing so is the main
purpose of behaviour change theories [5]. Peer support groups are one approach
to behaviour change which can be utilised to combat addictive behaviours by providing
support and helping in relapse prevention [9, 11]. Peer support groups consist of people
sharing similar interests, with a view to supporting and influencing each other’s
behaviour towards achieving common goals [10]. Alrobai et al. [13] focused on the processes
involved in running such a group, e.g. the roles involved in doing so and the steps to be
taken to prevent relapse. Aldhayan et al. [18] explored the acceptance and rejection
factors of online peer support groups by people with DA. This exploration was meant to
inform the strategies used to introduce such online peer group software, as well as the
configuration and governance processes of its online platform.
Hsiao, Shu and Huang [17] explored the relationships between personality traits and
compulsive usage of social media apps, and showed that extraversion, agreeableness,
and neuroticism have significant effects on such compulsive usage. Being an online
social technique for behaviour change itself, acceptance and rejection of peer support
groups could be in turn subject to such personal and environmental factors. In this
paper, we study the effect of personality traits, self-control, gender, and perception of
usefulness, willingness to join and culture (comparing UK to Middle Eastern users) on
the acceptance and rejection factors of online peer support groups. To achieve this
target, we designed a survey around the acceptance and rejection factors reported in
[18], which were derived from two focus groups and 16 interviews. The survey also
included various demographic questions and measures of personality [20] and self-control
[19]. We collected 215 completed responses. We report on the statistical analysis
results and discuss their implications for the design of future online peer support groups
to combat DA.
2 Research Method
behaviour, e.g., obsessive or compulsive use. The sample consisted of 8 males and 8
females, aged between 18 and 35. Each interview lasted between 30 and 40 min. The
interviews were transcribed and analysed via thematic analysis [12].
The factors which affect users’ acceptance and rejection of online peer support groups
to combat DA are presented in Tables 1 and 2, respectively. The elaborated descrip-
tions of themes A1 to A4 and R1 to R4 can be found in [18]. Further analysis of the
data revealed another theme, which is A5.
Table 1. Online peer support groups to combat digital addiction: acceptance factors

[A1] Accepting online peer groups as an entertainment auxiliary
• [A1.1] Provide awards: gamification of performance
• [A1.2] Peer comparison: to see how I and others do
• [A1.3] Goal achievement: rewards, information and graphs of my progress towards the goal

[A2] Accepting online peer groups as a DA awareness tool
• [A2.1] Self-monitoring: show actual usage and performance
• [A2.2] Peer comparison: benchmarking through others
• [A2.3] Goal achievement: awareness of how I am achieving goals

[A3] Accepting online peer support groups as an educational tool
• [A3.1] Peer learning: learning from others how to improve
• [A3.2] Moderator role: learning from the moderator, learning from acting as moderator
• [A3.3] Set up goals: learning how to set up SMART goals

[A4] Accepting online peer support groups as a prevention tool
• [A4.1] Peer feedback: alert/feedback through peer feedback
• [A4.2] Moderator feedback: alert/feedback by a moderator
• [A4.3] Authority: steps and restrictions set by a moderator

[A5] Accepting online peer support groups as a support tool
• [A5.1] Provide advice: by an experienced moderator; alternative lifestyles
• [A5.2] Emotional support: when struggling to avoid relapse
• [A5.3] Feedback: when performing well and underperforming, sending warnings
Table 2. Online peer support groups to combat digital addiction: rejection factors

[R1] Rejecting online peer support groups when seen as an intimidation tool
• [R1.1] Negative feedback: dismissive feedback when failing
• [R1.2] Harsh penalty, e.g. banning and locking out

[R2] Rejecting online peer support groups when seen as overly judgmental
• [R2.1] Being overly judged by a moderator
• [R2.2] Being judged by peers, known and unknown in person

[R3] Rejecting online peer support groups when hosting unmanaged interactions
• [R3.1] Weak management
• [R3.2] Large group size

[R4] Rejecting online peer groups due to unclear membership protocol
• [R4.1] Relatedness: group including relatives and friends
• [R4.2] Exit control: free and uncontrolled exit, as well as conditions on exiting the group without considering others
The survey questions around acceptance and rejection can be found in Appendix A.
A Likert scale indicating level of agreement was used for each of the statements under
each theme. A series of linear multiple regressions using the enter method were
wellbeing goals; [A4.3] Steps, restrictions and plans set by an authorised moderator,
e.g. game usage limit for compulsive gamers.
[A5] Accepting online peer support groups as a support tool. The first model, for [A5.1a]
Environment to provide experienced moderators who are able to provide advice and
guide members to manage the wellbeing issue, was significant (R2 = .12, F(10,159) =
2.01, p < .05), accounting for 12% of the variance. The only significant predictor was
neuroticism (b = .07), with an increase in this personality trait being associated with an
increase in acceptance of this statement. The rest of the regression models under this
category were not significant. These were [A5.1b] Environment to suggest alternative
activities to replace and distance myself from the negative behaviours and enhance
wellbeing; [A5.2] Environment to provide emotional support, e.g. when struggling to
follow the healthy behaviour; [A5.3a] Environment to get positive and motivational
feedback when performing well; [A5.3b] Environment to get positive and motivational
feedback even when failing to achieve targets; and [A5.3c] Environment to issue warning
feedback when members’ performance and interaction are not right. This again suggests
that influences are limited when peer groups are seen as a source of knowledge and advice.
14% of the variance. Within the model, the only significant predictor was gender
(b = .56). This meant that female participants were more likely to accept this statement.
[R3] Rejecting online peer support groups when hosting unmanaged interactions. The
model for [R3.1a] I reject a group with a weak moderator, e.g. unable to stop or ban
members who are not adhering to the group norms was significant (R2 = .12,
F(10,159) = 2.1, p < .05), accounting for 12% of the variance. Within the model,
conscientiousness was the only significant predictor (b = 0.14), with an increase in this
trait being associated with an increase in agreement with this statement. The model for
[R3.1b] I reject a group which allows loose and relaxed rules, e.g. accepting
conversations and interactions that are not related to the wellbeing issue, was also significant
(R2 = .13, F(10,159) = 2.4, p < .05), accounting for 13% of the variance. Within this
model the predictors of conscientiousness (b = 0.14) and openness (b = −.2) were
both significant, with an increase in conscientiousness being associated with an
increase in acceptance of this statement. In contrast, an increase in openness was
associated with a decrease in acceptance of this statement. The remaining model, for
[R3.2] I reject a group with a large size as it may not feel as a coherent group, was not
significant.
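The reported statistics are internally linked: with k = 10 predictors and denominator degrees of freedom df2 = n − k − 1 = 159, the models were fitted on n = 170 cases (presumably the subset of the 215 completed responses with complete data), and R2 can be recovered from F via the identity F = (R2/k) / ((1 − R2)/df2). A quick consistency check:

```python
def r2_from_f(f: float, k: int, df2: int) -> float:
    """Invert F = (R^2 / k) / ((1 - R^2) / df2) to recover R^2,
    where k = number of predictors and df2 = n - k - 1."""
    return (k * f) / (k * f + df2)

# [R3.1a]: F(10, 159) = 2.1 and reported R^2 = .12
# [R3.1b]: F(10, 159) = 2.4 and reported R^2 = .13
print(round(r2_from_f(2.1, 10, 159), 2))  # 0.12
print(round(r2_from_f(2.4, 10, 159), 2))  # 0.13
```

Both recovered values match the R2 figures reported in the text, which is a useful sanity check when reading (or reviewing) regression tables.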
[R4] Rejecting online peer groups due to unclear membership protocol. None of the
models under this category was significant. These were [R4.1a] I reject a group which
allows friends in real-life to join; [R4.1b] I reject a group which allows family
members to join; [R4.2a] I reject a group when members can leave the group anytime
without giving notice and explanation; and [R4.2b] I reject a group when there are
conditions to exit the group, e.g. to tell the moderator in advance.
4.3 Discussion
In terms of acceptance factors, the majority of regression models were not significant,
and those that were explained only a relatively small amount of the variance. The
significant predictors within such models were primarily personality traits, such as
extraversion and neuroticism. These occurred in the expected direction; for example,
an increase in extraversion was associated with acceptance of a peer group to increase
engagement in managing a wellbeing issue.
There were a greater number of significant regression models under the rejection
factors, although again, when these were significant, they accounted for only a relatively
low amount of the variance. The greater number of significant models and predictors
relating to rejection factors compared to acceptance may be a reflection of the reactance
effect [15], in which individuals respond negatively to being told that they are not
permitted to do something. As with the significant acceptance models, personality
traits tended to be amongst the significant predictors. Gender was a significant predictor
in several models relating to group judgement, with female participants being found to
be more likely to reject statements that involved the possibility of social judgement.
Research into gender and the use of peer groups has found that the relationship
between these can be complex [16]; however, this result could be argued to be
consistent with the broad finding that females make greater use of social support structures.
This is because a peer group situation that includes explicit and trackable judgement of
There are increasing societal concerns about the compulsive and excessive use of digital
technologies. These same technologies allow prevention and intervention strategies
to be delivered in a way that is faster and substantially less costly than traditional
strategies, but to make this meaningful we must better understand what factors
determine the acceptance and rejection of such approaches. In this paper, we studied
the effect of several personal and contextual factors on the acceptance and rejection of
online peer support groups as a mechanism to enhance wellbeing. We took digital
wellbeing as a case study where both the behaviour and the behaviour change share the
same medium, and where the behaviour and performance towards behavioural
goals and limits can be tracked and monitored in part, i.e. the digital usage. There were
fewer differences than we would have expected given similar research in the
context of social media. This would mean that online peer support groups, as a special
kind of social network, need to be thought of as a purpose-driven gathering. For
example, accepting such a technique as an awareness tool and as an education tool was
little affected by personal and cultural differences. We did, however, note that the
groups were rejected for various reasons, including being a medium of unmanaged and
loose interaction, with additional risks such as peer interaction tools being used for the
purpose of intimidation. This would again mean that people expect such groups to be
purpose-driven, and they reject their instantiation as ordinary social networks. As our
findings indicated, peer support groups are seen as both a motivational and an educational
tool, and hence game-based learning [14] can be a way to increase their acceptance. It is
important that further research is conducted within this emergent area, to ensure that
prevention and intervention strategies are informed by an evidence base.
– How do you see the usefulness of an online peer support group as a method to help
members in managing their wellbeing issues? Very useful; Useful; Moderately
useful; Slightly useful; Not at all useful.
– Would you like to join an online peer support group to help you manage a well-
being issue? Very likely; Likely; Unlikely; Extremely unlikely.
– 10 Personality questions [20]: How well do the following statements describe your
personality? I see myself as someone who: is reserved; is generally trusting; tends
to be lazy; is relaxed, handles stress well; has few artistic interests; is outgoing,
sociable; tends to find fault with others; does a thorough job; gets nervous easily; has
an active imagination.
– 13 Self-control questions [19]: Using the 1 to 5 scale below, please indicate how
much each of the following statements reflects how you typically are: I am good at
resisting temptation; I have a hard time breaking bad habits; I am lazy; I say
inappropriate things; I do certain things that are bad for me, if they are fun; I refuse
things that are bad for me; I wish I had more self-discipline; People would say that I
have iron self-discipline; Pleasure and fun sometimes keep me from getting work
done; I have trouble concentrating; I am able to work effectively toward long-term
goals; Sometimes I can’t stop myself from doing something, even if I know it is
wrong; I often act without thinking through all the alternatives.
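The ten personality items above follow the BFI-10 ordering, so scoring presumably follows Rammstedt and John's standard scheme, in which each Big Five trait is the mean of one regular and one reverse-coded item on the 1-5 scale. A sketch under that assumption (the paper itself does not spell out the scoring):

```python
# Assumed BFI-10 scoring (Rammstedt & John, 2007), taking the items in
# the order listed above; reverse-coded items score as 6 - x.
REVERSED = {1, 3, 4, 5, 7}   # reserved; lazy; relaxed; few artistic; finds fault
PAIRS = {                    # trait -> its two item numbers (1-based)
    "extraversion":      (1, 6),
    "agreeableness":     (2, 7),
    "conscientiousness": (3, 8),
    "neuroticism":       (4, 9),
    "openness":          (5, 10),
}

def score_bfi10(answers):
    """answers: dict mapping item number (1-10) to a response on a 1-5 scale."""
    def val(i):
        return 6 - answers[i] if i in REVERSED else answers[i]
    return {trait: (val(a) + val(b)) / 2 for trait, (a, b) in PAIRS.items()}

# Neutral (all-3) responses land every trait on the scale midpoint
print(score_bfi10({i: 3 for i in range(1, 11)}))  # every trait -> 3.0
```

With such trait scores (and an analogous sum for the 13-item self-control scale) as predictors, the regression models reported earlier can be fitted directly on the survey responses.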
Questions About Acceptance Factors (5-Point Likert Scale Reflecting Agreement
Degree)
[A1] Online peer support groups method is seen by some as an auxiliary mecha-
nism to ease and add more engagement to the management of the wellbeing issue.
Accordingly, the following features will increase my acceptance of them: [A1.1a]
Awards when achieving behavioural targets, e.g. points, badges, etc. [A1.1b] Awards
when making progress towards the behavioural target. [A1.2] Peer comparisons, i.e.
see how I and others are performing. [A1.3] Information and graphs showing how I am
progressing to keep me engaged.
[A2] Online peer groups method is seen by some as an awareness tool to help raise
awareness and knowledge about the wellbeing issue and level of the problem.
Accordingly, the following features will increase my acceptance of them: [A2.1] Self-
Monitoring, e.g. showing your hourly, daily and weekly performance and progress
indicator. [A2.2] Peer comparisons, e.g. comparing you to other members in the group
who have similar profile and level of problem. [A2.3] Awareness on goal setting, e.g.
how to set and achieve goals, and how to avoid deviation from the plan you set to
achieve them.
[A3] Online peer support group method is seen by some as an educational platform to
learn how to regulate the wellbeing issue and change behaviour. Accordingly, the
following features will increase my acceptance of them: [A3.1] Environment to learn from
peers, e.g. by sharing real-life stories and successful strategies around the wellbeing
issue. [A3.2a] Environment to learn from experienced moderators, e.g. best practice
around the wellbeing issue. [A3.2b] Environment where I can learn through acting as a
mentor, i.e. when advising other members and when having to moderate the group. [A3.3]
Environment to learn how to set up achievable and effective goals and their plans.
Predictors of Acceptance and Rejection of Online Peer Support Groups 105
[A4] Online peer support groups method is seen by some as a prevention and pre-
cautionary mechanism when the wellbeing issue starts to emerge. Accordingly, the
following features will increase my acceptance of them: [A4.1] Feedback messages
sent by peers about performance and wellbeing goals. [A4.2] Guidance, feedback and
information sent by moderators based on performance and achieving wellbeing goals.
[A4.3] Steps, restrictions and plans set by an authorised moderator, e.g. game usage
limit for compulsive gamers.
[A5] Online peer support groups method is seen by some as a support tool to guide,
motivate and encourage the recovery processes of the wellbeing issue. Accordingly, I
accept online peer groups as an: [A5.1a] Environment to provide experienced mod-
erators who are able to provide advice and guide members to manage the wellbeing
issue. [A5.1b] Environment to suggest alternative activities to replace and distance
myself from the negative behaviours and enhance wellbeing. [A5.2] Environment to
provide emotional support, e.g. when struggling to follow the healthy behaviour.
[A5.3a] Environment to get positive and motivational feedback when performing well.
[A5.3b] Environment to get positive and motivational feedback even when failing to
achieve targets. [A5.3c] Environment to issue warning feedback when members’
performance and interaction are not right.
Questions About Rejection Factors (5-Point Likert Scale Reflecting Agreement)
[R1] Online peer groups method is rejected by some as it can be intimidating if
used in certain modalities. [R1.1a] I reject a group with negative feedback, e.g. you
have repetitively failed in achieving your target, this is the 5th time this month. [R1.1b]
I reject a group with harsh feedback, e.g. Your interaction with peers shows anti-social
and disruptive patterns. You have been reported for annoying others. [R1.2] I reject a
group with harsh penalties e.g. banning from the group for a period of time if I
repetitively forget my target.
[R2] Online peer group method is rejected by some when seen as overly judgmental.
[R2.1] I reject a group if the group moderator judges my performance and interaction
frequently, even if this is for my benefit. [R2.2a] I reject a group if I am judged by
peers who are only online contacts, e.g. not real-life contacts. [R2.2b] I reject a group if
I am judged by online peers who are also real-world contacts. [R2.2c] I reject a group if
the judgment online expands to other life aspects by peers who are real-world contacts.
[R3] The peer group method is rejected when seen as a medium for loose and unmanaged
interaction. [R3.1a] I reject a group with a weak moderator, e.g. one unable to stop or ban
members who are not adhering to the group norms. [R3.1b] I reject a group which
allows loose and relaxed rules, e.g. accepting conversations and interactions that are
not related to the wellbeing issue. [R3.2] I reject a group with a large size as it may not
feel as a coherent group.
[R4] The online peer support group method is rejected when the membership protocol is unclear. Please indicate your opinion of the following: [R4.1a] I reject a group which allows friends in real life to join. [R4.1b] I reject a group which allows family members to join. [R4.2a] I reject a group when members can leave the group anytime without giving notice and explanation. [R4.2b] I reject a group when there are conditions to exit the group, e.g. to tell the moderator in advance.
106 J. McAlaney et al.
Predictors of Acceptance and Rejection of Online Peer Support Groups 107
Assessing Daily Activities Using a PPG Sensor
Embedded in a Wristband-Type Activity
Tracker
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 108–119, 2020.
https://doi.org/10.1007/978-3-030-45697-9_11
Assessing Daily Activities Using a PPG Sensor Embedded 109
1 Introduction
When the heart beats, capillaries expand and then contract based on blood volume
changes. These changes can be inferred from a photoplethysmogram (PPG), which is a
register over time of the amount of light absorbed and reflected by the tissues when
illuminated by a pulse oximeter [1]. Photoplethysmography is an optical method used
to determine a wide range of physiological processes and vital biosignals such as blood
glucose levels, blood pressure and heart rate (HR) [2–4].
Due to the technological evolution of mobile wearable devices, PPG-derived biosignals can be captured in a more continuous, non-invasive and non-obtrusive way, perfectly integrated into individuals' daily lives. Despite that, this new paradigm is creating scenarios where biosignals are collected without the scope and control imposed by clinical rules and environments [5]. Thus, new information can be introduced, but also noise and artifacts, in particular motion artifacts. Motion artifacts usually affect the normal PPG signal by deviating it from the baseline or by promoting large fluctuations [6]. The presence of artifacts can highly distort PPG-derived biosignals and, in particular, the HRV indices that are afterward used for assessing the mechanisms of the heart rhythm and the general health condition of individuals.
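As a toy illustration of these two artifact signatures, a simple threshold rule could flag suspect intervals; this is only a sketch with arbitrary assumed bounds, not the detection method of this study or of the cited works:

```python
def flag_rr_artifacts(rr, lo=0.3, hi=2.0, jump=0.3):
    """Flag RR intervals (in seconds) that fall outside a plausible range
    (baseline deviation) or jump sharply from the previous beat
    (large fluctuation). All three thresholds are illustrative assumptions."""
    flags = []
    prev = None
    for v in rr:
        out_of_range = not (lo <= v <= hi)
        big_jump = prev is not None and abs(v - prev) > jump
        flags.append(out_of_range or big_jump)
        prev = v
    return flags
```

For example, `flag_rr_artifacts([0.8, 0.82, 2.5, 0.8])` marks the last two intervals: the third for leaving the plausible range, the fourth for the large beat-to-beat jump it implies.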
When monitoring the heart, many wearable devices do not provide the original PPG signal, but rather a transformation of it, resulting in a signal composed of the time differences between consecutive heartbeats - the RR signal. Detecting the presence of motion artifacts, as well as characterizing them, is not only vital for real-time use of the data, but also non-trivial [5]. To the best of our knowledge, however, the presence of motion artifacts in RR signals derived from a processed PPG is not usually explored. Addressing this gap in the literature, this study simulates a natural environment where individuals are invited to perform low-impact activities for a short period of time while wearing a PPG wristband sensor, and then compares the resulting biosignals of each activity based on two quantitative metrics: location and relative variation, taking interindividual variability into consideration. The importance of interindividual variability lies in the fact that individuals have different physiological baselines as well as different reactions to the transillumination of the skin, resulting in different morphologies of the PPG signals and the derived RR signals. Moreover, we propose a practical protocol of daily human activities for defining ground-truth data in a natural environment.
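Given beat timestamps already extracted from the PPG, the RR signal described above is simply the sequence of successive differences; a minimal sketch (the helper name is illustrative, not from the paper):

```python
def rr_from_beats(beat_times):
    """Build the RR signal: time differences between consecutive
    heartbeat timestamps (in seconds)."""
    return [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]
```

For instance, beats at 0.0, 1.0, 1.75 and 2.5 s yield the RR signal [1.0, 0.75, 0.75].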
This paper first reviews, in Sect. 2, the state of the art in artifact detection on PPG signals. Section 3 presents the proposed protocol to assess RR sensorial data for meaningful daily activities, as well as the description of the statistical analysis. In Sect. 4 we present the main findings. Finally, we conclude and point to future work in Sect. 5.
110 A. Oliveira et al.
2 Related Work
In the literature, several studies have proposed different techniques to identify and reduce the presence of artifacts in PPG signals, such as time- and frequency-domain filtering, power spectrum analysis, blind source separation techniques, multipath analysis, wavelets, support vector machines (SVMs) and multichannel analysis [7–13].
In a recent study [8], Ban and Kwon proposed an algorithm to identify and mitigate the impact of mobility on PPG measurements using multipath signals and wavelets. They measured PPG signals at different locations on the body, but the type of movement and the definition of the noise are not clear. Dao et al. [9] studied an approach based on the time-frequency spectrum of the PPG to detect and characterize motion artifacts and establish the usability of each PPG segment. They compared several datasets, collected with different, strict and limited protocols, with devices mounted on the forehead and on one finger. Also, they used human visual inspection to identify motion and noise artifacts. Zhang et al. [10] developed a modular algorithm framework for motion artifact removal from signals captured with wrist-worn PPG sensors. They used a multichannel PPG and data collected from an accelerometer and a gyroscope in order to account for the variability in the signal. A small dataset was used, related to performing several macro and micro motions such as fist opening and closing. However, accelerometers consume too much energy to be useful in long recordings, and typical daily activities were not considered in the study. Vandecasteele et al. [11] used a multichannel approach and visual inspection of the records to determine the presence of motion artifacts in PPGs, and then used SVMs to classify PPG segments as normal or noisy. Cherif et al. [7] used a method based on waveform morphology to detect artifacts in PPG signals. They took into consideration interindividual and measurement-condition variabilities, but the type of induced motion artifact protocol used is not clear. Zhao et al. [12] collected data while individuals were running on a treadmill at a speed of 12 km/h and used a multichannel approach to detect motion artifacts. Since individuals were subjected to intensive exercise, the adaptive response of the heart to the external stimulus was not controlled, so it is difficult to analyze the corresponding RR signal. Tabei et al. [13] analyzed PPG signals derived from smartphones, accounted for interindividual variability using probabilistic neural networks with several extracted parameters, and compared their performance with other detection algorithms. Hand movement, fingertip misplacement, and lens-pressing motion artifacts were considered.
3 Methodology
The choice of activities encompassed by the protocol used in this study was performed in two steps. In the first, a literature review was conducted in order to identify an easy, reliable and practicable measure of daily activities. Following these criteria, the literature's findings suggested the use of a generic 36-item health survey focused on daily activities - the Short-Form Health Survey (SF-36) [14]. The SF-36 is a coherent and easily administered quality-of-life survey, widely used by health professionals and researchers around the world. It is composed of 36 items, each measured on a Likert scale that ranges from 3 to 6 points. Except for the single item of self-evaluated health transition (HT), the scores of the other 35 items are grouped into 8 multi-item scales, including physical functioning (PF), limitations due to physical health problems - role-physical (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), limitations due to emotional health problems (RE) and mental health (MH) [14]. In our study, the default 4-week recall form was used. This survey was also used for assessing the general health condition of the participants and for eliminating some possible confounding effects. The second step consisted of informal interviews with a subsample of our target group, with the main goal of understanding which activities were most common and crosscutting. In addition, the interviews focused on daily, often low-impact activities, to minimize the effects on heart rate during the performance.
Biometric data, namely RR signals, were measured using a commercial device - the Microsoft Band®. The interest in this device, besides allowing the measurement of biometrics, is associated with its portability and low weight, being non-invasive, discreet and comfortable for the user, and having a good quality/price ratio [15, 16]. Prior to the data collection process, all participants were provided with an informed consent form and the data protection norms. The smart band was securely mounted on the dominant hand. The protocol was formed by a sequence of activities, which can be grouped into activities involving body movement with or without changing position in the room (Fig. 1).
Fig. 1. Activities to perform: (a) walk 3 m at a comfortable speed, in a straight line from point A to point B; (b) lift from a chair (point A) after 30 s sitting and walk a distance of 3 m at a comfortable speed straight to point B; (c) from point A, walk 3 m at a comfortable speed in a straight line to point B and sit for 30 s; (d) tilting; (e) carry weights.
The first group of activities involved body movement such as: walking 3 m at a comfortable speed, in a straight line; walking 3 m at a comfortable speed, in a straight line, with the task of lifting, transporting and setting down a weight (a bag); lifting from a chair after 30 s sitting and walking 3 m in a straight line at a comfortable speed; and walking 3 m at a comfortable speed in a straight line and sitting for 30 s. The second group of activities involved body movement but no position change, such as: tilting, lowering/crouching,
coughing is fairly instantaneous and so the distribution of values is similar to the one observed in the standing position. Sneezing is also instantaneous but is usually a forceful expulsion of air resulting in more movement, in turn bringing a more heterogeneous distribution of average values. Talking on the phone, drinking, sneezing, reading, and raising the two hands in the air are the activities with more dispersion of average values and, therefore, more variability. It is also important to notice that walking, walking with weights and crouching have narrower boxes, indicating more homogeneity in the distribution of average RR values. Also, from Fig. 3 it can be seen that, when compared to the standing position, the activities of drinking, sneezing and reading have a lower median. On the contrary, eating, coughing, reading, and sitting have a higher median.
Clustering the activities according to the type of movement and considering the distribution of the average values of RR (see Fig. 5), it can be seen that movements involving the legs have a higher median and that the tilting activity has more heterogeneous observations. Comparing the average values of RR obtained in activities without movement, the activities involving only the arms have a lower median but higher heterogeneity, similar to the tilting activity. The average values of RR had a higher median and higher variability in the walking in a straight line activity.
When the movement involves the use of the arms, the median of the relative variation coefficient increases. Compared with the activities without movement, the heterogeneity of the distribution of the relative variation coefficient is higher in the walking in a straight line activity, in the activities involving only leg movements and in tilting (Fig. 6).
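Location and dispersion summaries of the kind reported here (median and IQR of per-activity average RR values) can be sketched as follows; the percentile convention used is an assumption, as conventions differ between statistical tools:

```python
from statistics import median

def median_iqr(values):
    """Median and interquartile range of a sample, using a
    linear-interpolation percentile (one of several common conventions)."""
    xs = sorted(values)
    n = len(xs)

    def pct(p):
        k = (n - 1) * p           # fractional rank
        f = int(k)                # lower neighbour
        c = min(f + 1, n - 1)     # upper neighbour
        return xs[f] + (xs[c] - xs[f]) * (k - f)

    return median(xs), pct(0.75) - pct(0.25)
```

Applied per activity group, the first component gives the location metric and the second the spread used alongside it.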
Considering only groups composed of more than one activity, the median (Interquartile Range - IQR) of the averaged RR had a minimum of 0.676 (0.099) in the activities with only arm movements, and maxima of 0.752 (0.116) and 0.776 (0.097) in the walking in a straight line activity and the activities involving only leg movements, respectively. Considering the relative variation coefficient of the RR signal, the minimum median (IQR) was observed in the activities with leg movements, 0.170 (0.100), and the maximum median in the activities with only arm movements, 0.224 (0.067). From the Friedman analysis, it was found that at least one group of activities was statistically significantly different from the others in terms of the average and in terms of the relative variation coefficient (χ²(3) = 23.554, p < 0.001; χ²(3) = 16.157, p = 0.001) (Table 2).
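A Friedman analysis of this kind can be reproduced in a few lines; below is a minimal pure-Python sketch (not the authors' code), where `data` is a hypothetical subjects × activity-groups table of per-subject values:

```python
def rank_within_subject(values):
    """Rank one subject's values across conditions (1-based),
    assigning midranks to ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    return ranks

def friedman_chi2(data):
    """Friedman chi-square statistic for a subjects x conditions table
    (one value per subject per group of activities)."""
    n = len(data)       # subjects
    k = len(data[0])    # conditions (activity groups)
    rank_sums = [0.0] * k
    for row in data:
        for j, r in enumerate(rank_within_subject(row)):
            rank_sums[j] += r
    return 12.0 * sum(R * R for R in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)
```

The statistic is 12/(nk(k+1)) · ΣR_j² − 3n(k+1), with R_j the per-condition rank sums; with k = 4 groups it is compared against a chi-square distribution with 3 degrees of freedom, as in the reported χ²(3) values.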
The post hoc analysis with Wilcoxon signed-rank tests was conducted with a Bonferroni correction to detect which groups differed on the two parameters. It
was found that there were significant differences between the activities involving arms
and activities involving legs (Z = −1.586, p < 0.001) and between activities involving
arms and moving in a straight line (Z = −1.121, p = 0.006). Despite an overall increase
in the median of the average RR in walking in a straight line and with only leg
movements versus the standing median, no significant differences were detected (see
Tables 2 and 3). Also, it was found that there were significant differences between the
activities involving arms and activities involving legs (Z = 1.25, p = 0.002) and
between activities involving arms and moving in a straight line (Z = 1.143, p = 0.006).
Despite an overall decrease in the median of the variation coefficient of the RR in
walking in a straight line and with only leg movements versus the standing median, no
significant differences were detected.
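The Bonferroni step can be expressed directly in code; this helper (hypothetical, not the authors' code) applies the corrected threshold α/m to a list of pairwise p-values:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Bonferroni correction: with m pairwise tests, reject the null
    hypothesis only when p < alpha / m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

With four activity groups there are six pairwise comparisons, so the corrected threshold is 0.05/6 ≈ 0.0083; a p-value of 0.006 therefore remains significant, while 0.02 would not.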
movements had a lower median. In terms of the relative variation parameter, the lower median was observed in the group of activities with leg movements. The results also showed that at least one group of activities was statistically significantly different from the others in terms of both analyzed criteria. The differences are more relevant when the motion involves only the arms, such as drinking and lifting the hands in the air, a kind of movement that increased the variability of the observed values. Describing the different kinds of motion artifacts present in biosignals can be beneficial for their detection in real time. Therefore, collecting ground-truth daily human activity data is essential for retrieving useful information. This is not trivial, since the rhythm of the heart is not a stationary signal, being instead highly influenced by external factors. Also, PPG morphology varies among individuals.
This paper may contribute to detecting unreliable data that should be discarded to prevent inaccurate decision making and false alarms. As future work, we propose to evaluate the impact of micromotions, such as writing on a computer, and to analyze the presence of artifacts in a non-controlled environment, since RR readings can be influenced by other factors.
Acknowledgements. This work was supported by the European Regional Development Fund
through the programme COMPETE by FCT (Portugal) in the scope of the project PEst-
UID/CEC/00027/2015 and QVida+: Estimação Contínua de Qualidade de Vida para Auxílio
Eficaz à Decisão Clínica, NORTE010247FEDER003446, supported by Norte Portugal Regional
Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement. It
was partially supported by LIACC (FCT/UID/CEC/0027/2020).
References
1. Shelley, K., Shelley, S., Lake, C.: Pulse oximeter waveform: photoelectric plethysmography.
In: Lake, C., Hines, R., Blitt, C. (eds.) Clinical Monitoring, pp. 420–428. WB Saunders
Company (2001)
2. Rapalis, A., Janušauskas, A., Marozas, V., Lukoševičius, A.: Estimation of blood pressure
variability during orthostatic test using instantaneous photoplethysmogram frequency and
pulse arrival time. Biomed. Sig. Process. Control 32, 82–89 (2017)
3. Gil, E., Orini, M., Bailon, R., Vergara, J.M., Mainardi, L., Laguna, P.: Photoplethysmog-
raphy pulse rate variability as a surrogate measurement of heart rate variability during non-
stationary conditions. Physiol. Measur. 31(9), 1271 (2010)
4. Höcht, C.: Blood pressure variability: prognostic value and therapeutic implications. ISRN
Hypertension (2013)
5. Rodrigues, J., Belo, D., Gamboa, H.: Noise detection on ECG based on agglomerative
clustering of morphological features. Comput. Biol. Med. 87, 322–334 (2017)
6. Sun, B., Wang, C., Chen, X., Zhang, Y., Shao, H.: PPG signal motion artifacts correction
algorithm based on feature estimation. Optik 176, 337–349 (2019)
7. Cherif, S., Pastor, D., Nguyen, Q.-T., L’Her, E.: Detection of artifacts on photoplethys-
mography signals using random distortion testing. In: 2016 38th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6214–
6217. IEEE (2016)
8. Ban, D., Kwon, S.: Movement noise cancellation in PPG signals. In: 2016 IEEE
International Conference on Consumer Electronics (ICCE), pp. 47–48. IEEE (2016)
Assessing Daily Activities Using a PPG Sensor Embedded 119
9. Dao, D., Salehizadeh, S.M., Noh, Y., Chong, J.W., Cho, C.H., McManus, D., Darling, C.E.,
Mendelson, Y., Chon, K.H.: A robust motion artifact detection algorithm for accurate
detection of heart rates from photoplethysmographic signals using time–frequency spectral
features. IEEE J. Biomed. Health Inf. 21, 1242–1253 (2017)
10. Zhang, Y., Song, S., Vullings, R., Biswas, D., Simões-Capela, N., et al.: Motion artifact reduction for wrist-worn photoplethysmograph sensors based on different wavelengths. Sensors 19, 673 (2019)
11. Vandecasteele, K., Lázaro, J., Cleeren, E., Claes, K., Van Paesschen, W., Van Huffel, S.,
Hunyadi, B.: Artifact detection of wrist photoplethysmograph signals. In: BIOSIGNALS,
pp. 182–189 (2018)
12. Zhao, J., Wang, G., Shi, C.: Adaptive motion artifact reducing algorithm for wrist
photoplethysmography application. In: Biophotonics: Photonic Solutions for Better Health
Care V. p. 98873H. International Society for Optics and Photonics (2016)
13. Tabei, F., Kumar, R., Phan, T.N., McManus, D.D., Chong, J.W.: A novel personalized
motion and noise artifact (MNA) detection method for smartphone photoplethysmograph
(PPG) signals. IEEE Access 6, 60498–60512 (2018)
14. Ware, J.E., Kosinski, M., Bjorner, J.B., Turner-Bowker, D.M., Gandek, B., Maruish, M.E., et al.: User's manual for the SF-36v2 health survey. QualityMetric, Lincoln, RI (2008)
15. Fan, S., Zhang, W., Hu, L., Chen, S., Xiong, J.: Research on the openness of microsoft band
and its application to human factors engineering. Proc. Eng. 174, 425–432 (2017)
16. Nogueira, P., Urbano, J., Reis, L.P., Cardoso, H.L., Silva, D.C., Rocha, A.P., Gonçalves, J.,
Faria, B.M.: A review of commercial and medical-grade physiological monitoring devices
for biofeedback-assisted quality of life improvement studies. J. Med. Syst. 42(6), 101 (2018)
Simulation of a Robotic Arm Controlled
by an LCD Touch Screen to Improve
the Movements of Physically Disabled People
Abstract. This research is focused on helping people who have problems moving their bodies, or who do not have enough strength to do so, supporting them in their daily lives with an easier way to reach objects through a simpler control that moves a robotic arm faster. To achieve this, this article presents a proposed algorithm that enables the design of a new mechanism for a robotic arm with three rotational joints, which makes it faster and adjustable. The proposed algorithm includes a new way to obtain the kinematics of an anthropomorphic robot using an LCD touch screen, and a new way to control a robotic arm with only one finger, with less effort, just by touching the screen. The algorithm was tested in Matlab to find the fastest way to reach a point, and was tested in Arduino to validate it with the pressure sensor of the LCD touch screen.
1 Introduction
In the last two decades, the applications of robotic arms have helped in many areas,
human has been supported by these applications, especially in the automation area and
therefore in the industry, as well as those have helped in the quotidian life, mainly for
physically disabled people. In the last two decades, automation and control have
become a topic of interest for researchers from different areas. Mainly, in the industrial
robotics [1, 2] and the robotic systems applied in the medical area, such as tele-operated
surgery [3], surgical pattern cutting [4]. According to Shademan et al., the precision of
an industrial robot offers great advantages in this area [5]. Currently, there are endless
technological developments and various applications such as prostheses [6], orthoses
[7], exoskeletons [8–10], and devices for teleoperation in order to improve human
capabilities [11–14]. According to Chung et al. conducted a review of the literature
about the different robotic assistance manipulators from 1970 to 2012, mentioning that
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 120–133, 2020.
https://doi.org/10.1007/978-3-030-45697-9_12
Simulation of a Robotic Arm Controlled by an LCD Touch Screen 121
2 Algorithm Description
It is common to use a joystick to control the movements of a robotic arm; in fact, the joystick is implemented in many sorts of electronic applications, and one of these is to control a robotic arm that assists physically disabled people. Nevertheless, a joystick can require more effort and waiting time from the patient, and it is possible that the patient is a diabetic person or has another disease which can cause sudden weakening or poor coordination. As is known, using a joystick can sometimes require more than two long movements, so the joystick is not an efficient control for specific problems. In their work, Chung et al. mention that the operation of the robotic arm is controlled by holding and pulling the joystick to reach the desired position; when using an LCD screen, however, it is only necessary to touch the screen [15]. Therefore, a new way to control a robotic arm has been conceived, inspired by cellphones, which have an intuitive touch control to manipulate their functions. The new way to control a robotic arm takes this idea, making the control an LCD touch screen with an easy and intuitive interface to operate and manipulate the robotic arm.
By implementing this control, physically disabled people can select a point in space by making a slight movement with one finger, covering all the necessities in the best way. Moreover, comfortable manipulation is very important for the patient, and this algorithm is programmed for the LCD touch screen and Arduino. It is also
122 Y. Quiñonez et al.
thought for extreme cases, for example, physically disabled people who have diseases such as diabetes, with hypoglycemia, or lupus, which can make the patient lose awareness, lose strength, or have poor coordination, according to background experience based on surveys of physically disabled people. Moreover, it is important to mention that people with diabetes or lupus can become confused during a shock; therefore, making one movement with the finger, touching a point on the LCD touch screen, can save lives, because this move will allow the robotic arm to move quickly in an emergency case.
The algorithm presented in the next section finds the best way to reach a point in space that is touched on the LCD touch screen by the physically disabled person. The algorithm reaches the position by moving with PWM pulses, moving a specific number of degrees depending on the pulse width and the coordinate that has been touched on the LCD touch screen. This makes for a natural way of control, because the robotic arm movement is based on the coordinate the finger has touched and does not require much concentration, unlike methods controlled through thought sensors. The algorithm has been tested in Arduino and simulated in Matlab; in addition, a mechanism has been designed for the algorithm to work best with the 2-dimensional system of the LCD touch screen and to be able to move the anthropomorphic robotic arm in a 3-dimensional space.
The LCD touch screen has two dimensions, which are referred to as x and y. Having the coordinate (x, y) is not a problem, because these values can be obtained from the LCD touch screen, but it is necessary to have another dimension. This dimension is calculated from the coordinates x and y, and represents the rotation needed to reach the coordinate (x, y); these two numbers (x and y) are divided and used as the argument of a trigonometric function. The first condition for using this algorithm is to take x and y as the two legs of a right triangle, in order to use the tangent function. Therefore, α can be obtained as follows:

α = tan⁻¹(y/x)   (1)

where x is a number different from zero. This opens the first real condition of the algorithm.
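Eq. (1) translates directly to code; a small illustrative helper (names are assumptions, not from the paper):

```python
import math

def touch_angle_deg(x, y):
    """Alpha from Eq. (1): the rotation implied by a touch at (x, y),
    with x non-zero. Returned in degrees."""
    if x == 0:
        raise ValueError("x must be non-zero")
    return math.degrees(math.atan(y / x))
```

For example, a touch at (1, 1) gives 45°, and for any touch with x ≥ y > 0 the angle is at most 45°, matching the first condition below.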
First Condition. To represent a coordinate touched on the LCD touch screen, we have a point on the screen, and to reach this point the anthropomorphic robotic arm's first joint must rotate some degrees. Then, when x ≥ y, the coordinate obtained is located at 45° or less, as shown below:

α ≤ 45° ⟺ tan⁻¹(y/x) ≤ 45° ⟺ y/x ≤ tan(45°) ⟺ y/x ≤ 1   (2)

therefore y ≤ x, which means that ny ≤ nx, where ny and nx represent the pulses to move on axis y and axis x, respectively. For programming, the pulse conditions of axes x and y when α is less than or equal to 45° are: if α = 45° then nx = ny, and if α < 45° then nx > ny.
Second Condition. As is well known, when α is equal to 45°, then y/x = 1; but what happens when y/x approaches 1? The following happens:

lim_{(y/x)→1} α = lim_{(y/x)→1} tan⁻¹(y/x) = tan⁻¹(1) = 45°   (3)
Fourth Condition. This condition is somewhat similar to the third condition, except that it has the feature |x| ≥ |y|, and therefore:
Figure 1 shows all the conditions, each drawing an area as a right triangle where it is fulfilled that x ≥ y in the first case, y ≥ x in the second case, |y| ≥ |x| in the third case and |x| ≥ |y| in the fourth case.
3 Algorithm Implementation
This algorithm is implemented on the LCD touch screen to move three different stepper motors, each with 1.8° of resolution, reached by a 10 ms PWM pulse; therefore the α motor reaches 180° in one second, plus the 5 ms of waiting after each pulse. The function which describes the trajectory of the α motor over the pulses domain is the following:

f(n) = 1.8n   (6)
To reach 180°, 100 pulses must be sent to function (6); therefore 180° can be reached in one second, and this will be the maximum theoretical working time. The discrete function (6) works well because it is very precise. However, programming this function in the microcontroller and using the LCD touch screen can cause some trouble, because the α motor is programmed differently from the motors which give the coordinate (x, y), and programming the α motor with just this function can produce a saturation of pulses, or rather a "traffic of pulses". Therefore, controlling the three motors this way could be a slow way to move the robotic arm; moreover, with the pulse traffic, the motors which give the point (x, y) might not reach the requested coordinate. Hence, another function has been proposed, which can reach large degree measures in less time with fewer pulses. This function is the following:
f(s) = Σ_{n=0}^{s} 1.8n   (7)
Although function (7) may not be precise for some degree measures, this discrete function is very easy to program, and mixing it with function (6) can be very useful for the work of the α motor. Function (7) needs an upper limit s ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13} to be joined with function (6).
Each pulse has a width of 10 ms; while n increases until it reaches the value of s, the pulse width is increased 10 ms at a time, as shown in Table 1.
Table 1. Pulse widths in the summation function f(s) as n increases up to each value of s.

f(s)    Pulses s   Time
1.8     1          10 ms
5.4     2          20 ms
10.8    3          30 ms
18      4          40 ms
27      5          50 ms
37.8    6          60 ms
50.4    7          70 ms
64.8    8          80 ms
81      9          90 ms
99      10         100 ms
118.8   11         110 ms
140.4   12         120 ms
163.8   13         130 ms
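The two discrete functions can be sketched as follows, assuming function (6) is the plain linear mapping of 1.8° per pulse implied by the 100-pulse/180° statement; `summed_degrees` reproduces the f(s) column of Table 1:

```python
def linear_degrees(n):
    # function (6) (assumed linear): n pulses of 10 ms, 1.8 degrees each
    return 1.8 * n

def summed_degrees(s):
    # function (7): f(s) = sum over n = 0..s of 1.8 * n
    return 1.8 * sum(range(s + 1))
```

For example, summed_degrees(4) gives 18°, matching the fourth row of the table (40 ms), while linear_degrees(100) gives the full 180°.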
a ≤ lim_{ts→b} (2^n / ts) ≤ a·b    (8)
The number a represents the desired angle measure; the number b provides an interval for finding the limit times (superior and inferior) and helps the inequality hold; and 2^n is the number of combinations to search. To develop the algorithm it is important to know that a < 2^n and that b can take values less than or equal to 2^n/a, or be rounded as long as the inequality is still fulfilled. Once the value of b is known, it equals the ratio of the superior limit time to the inferior limit time, as shown next:
b = tp / td    (9)
where tp is the superior limit time and td is the inferior limit time. Once the limit times are known, the time of the path taken can be found:
tp ≥ t ≥ td    (10)
Finding the number b can be difficult when the desired number is very large; nevertheless, attention must be paid to the limit times and to the behavior of the inequality when n repeats for the desired number, which means the superior limit time is near the middle or maximum time of the process. A clear and easy example takes a = 16.2; then:
16.2 ≤ lim_{ts→b} (2^n / ts) ≤ 16.2·b    (11)
Therefore n has to be equal to 5, since 16.2 < 2^5 = 32, and b ≤ 32/16.2 = 1.9753…, but b can take the value 2, because the inequality is fulfilled with b = 2; then:
16.2 ≤ lim_{ts→2} (2^5 / ts) ≤ 32.4    (12)
126 Y. Quiñonez et al.
As b = tp/td and n = 5, it can be deduced that the inferior limit time is 50 ms; then:

2 = tp / 50 ms  ⟹  tp = 100 ms    (13)
Therefore, for n = 5 the road can be found among 2^5 = 32 possible ways, with a time between 50 ms and 100 ms; Fig. 2 shows how the road is built.
Fig. 2. Algorithm development for a = 16.2; the red roads mark ways that are not selected and are refused.
The result is [1.8(5) + Σ_{n=1}^{2} 1.8n + 1.8(1)] = 16.2 in a time of 90 ms.
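The search for n, b and the limit times in Eqs. (8)–(13) can be sketched as follows (a Python illustration under our reading of the algorithm; the helper name `limit_times` is ours, not the paper's):

```python
# Sketch of the limit-time computation: find the smallest n with a < 2^n,
# round b = 2^n / a as in the paper's example, then derive the inferior
# limit time td = n * 10 ms and the superior limit tp = b * td (Eq. 9).
def limit_times(a: float, pulse_ms: float = 10.0):
    n = 1
    while 2 ** n <= a:          # smallest n such that a < 2^n
        n += 1
    b = round(2 ** n / a)       # e.g. 32 / 16.2 = 1.9753... -> 2
    td = n * pulse_ms           # inferior limit time
    tp = b * td                 # superior limit time, from b = tp / td
    return n, b, td, tp

print(limit_times(16.2))        # matches the worked example: (5, 2, 50.0, 100.0)
```

For a = 16.2 this reproduces the worked example: n = 5, b = 2, and a path time between 50 ms and 100 ms.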
4 Matlab Simulation
Matlab [22] was selected to control the anthropomorphic robotic arm using the algorithm and the mix of the two discrete functions presented above. A graphical user interface (GUI) was programmed (see Fig. 3), in which the user enters the degrees in the Edit Text box or with the Slider and then presses the push button named "Get Position". Once the positions are calculated, the program shows a representation of a robotic arm with three rotational joints performing the kinematics. It also displays the number of ways, the superior limit time, the superior limit time without the waiting time of each pulse, the inferior limit time, and the degrees reached at the superior and at the inferior limit time.
The kinematics is inspired by cubic polynomials: the degrees at the inferior limit time are calculated with cubic polynomials, taking the inferior limit time to approximate the degrees that would be reached if that time were chosen (in this case, the desired angle is 45°). In summary, the program computes the degrees corresponding to the superior and to the inferior limit time with the cubic-polynomial method and compares them (the degrees for the alpha joint) against the kinematics obtained. In Fig. 3, the cubic-polynomial method gives 54° with the superior limit time and 41.9126° with the inferior limit time; the kinematics, on the other hand, gives 0.7854 radians (equal to 45°) using the superior limit time and the algorithm. The ideas and behavior of the algorithm implemented in a robotic arm become clearer in simulation, since it provides a better context for designing the mechanical structures and how they must work; the velocity of the movements can also be appreciated, and the good functioning of the algorithm is demonstrated as a result.
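The cubic-polynomial trajectory the simulation relies on can be illustrated with the generic textbook form with zero boundary velocities (the paper does not list its exact coefficients, so this specific polynomial is an assumption):

```python
# Generic cubic joint trajectory: theta(t) = theta0 + (thetaf - theta0)(3*tau^2 - 2*tau^3),
# tau = t / T, which starts and ends with zero angular velocity.
def cubic_angle(t: float, T: float, theta0: float = 0.0, thetaf: float = 45.0) -> float:
    tau = t / T
    return theta0 + (thetaf - theta0) * (3 * tau ** 2 - 2 * tau ** 3)

# Evaluating a 45-degree target over a 100 ms motion:
print(cubic_angle(0.1, 0.1))   # at the full time the joint reaches 45 degrees
print(cubic_angle(0.05, 0.1))  # at half the time it is at about 22.5 degrees
```

Evaluating such a polynomial at the superior and inferior limit times is how the two candidate joint angles can be compared, as the GUI does.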
Fig. 3. Matlab simulation of the algorithm for an example 45° trajectory and the different characteristics shown by the interface. In the lower-left corner (command window), the final position q10 = [0.7854 1.9333 0.2657] is shown, represented in radians.
hands of the watch (counter-clockwise), and negative sense is the movement towards the hands of the watch (clockwise); this is the way the α motor reaches every position and returns to the 0-degree position. Figure 4(b) shows the piece that moves another piece with two vertical axes and generates the torque; this piece moves and remains in that position while the α motor and the first piece move until reaching the 0-degree position. Finally, the final assembly that makes it possible for the algorithm to perform and reach the position in a given time is shown in Fig. 4(c).
Fig. 4. Representation of the pieces assembled on the stepper motor: the structure that gives trajectory and movement to the first joint.
5 Results
The tests and results focus on the Matlab simulation and on the Arduino code with the LCD touch screen and the stepper motors. They are presented as the desired value of a together with the results of the algorithm in the Arduino code and in Matlab. The following results were obtained by touching the LCD touch screen when the first condition holds; Table 2 shows the results.
Table 2. The number of pulses that f(s) and f(n) need to approach a in the first condition.
a       y/x      n    f(n) = 1.8n    s    f(s) = Σ_{n=0}^{s} 1.8n
5.71 0.1 3 5.4 2 5.4
11.3 0.2 6 10.8 3 10.8
16.69 0.3 9 16.2 3 10.8
18 0.35 12 21.6 4 18
21.8 0.4 12 21.6 4 18
26.56 0.5 14 25.2 4 18
27 0.5095 14 25.2 5 27
30.96 0.6 14 25.2 5 27
34.99 0.7 17 30.6 5 27
37.8 0.7756 19 34.2 6 38.7
38.65 0.8 21 37.8 6 38.7
41.98 0.9 23 41.4 6 38.7
45 1 25 45 6 38.7
The functions that were the best options to program for this condition were combined, and the new function is:

f(s, n) := { Σ_{n=0}^{s} 1.8n   if 0 ≤ y/x ≤ 0.35 with s = {1, 2, 3, 4}, or if 0.51 ≤ y/x ≤ 0.79 with s = {5, 6}
             1.8n               if 0.4 ≤ y/x ≤ 0.5 or 0.6 ≤ y/x ≤ 0.7
             1.8n               if 0.8 ≤ y/x ≤ 1                                                        (14)
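The way Tables 2–5 are generated can be sketched as follows: for a desired angle a, f(n) uses the nearest multiple of 1.8°, while f(s) uses the largest cumulative sum not exceeding a. This is our reading of the tables (the helper `approximate` is ours); small discrepancies in the printed values, such as 38.7 versus the 37.8 of Table 1, suggest typos in the original.

```python
def f(s: int) -> float:
    """Cumulative function (7): sum of 1.8*n for n = 0..s."""
    return sum(1.8 * n for n in range(s + 1))

def approximate(a: float):
    """Pulse counts with which f(n) = 1.8n and f(s) approach the angle a."""
    n = round(a / 1.8)              # f(n): nearest multiple of 1.8 degrees
    s = 0
    while f(s + 1) <= a:            # f(s): largest cumulative sum <= a
        s += 1
    return n, 1.8 * n, s, f(s)

print(approximate(45))              # roughly (25, 45.0, 6, 37.8); cf. the a = 45 row of Table 2
```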
Table 3. The number of pulses that f(s) and f(n) need to approach a in the second condition.
a       y/x      n    f(n) = 1.8n    s    f(s) = Σ_{n=0}^{s} 1.8n
45 1 25 45 8 64.8
63.434 2 35 63 8 64.8
75.963 4 42 75.6 8 64.8
80.53 6 44 79.2 8 64.8
81 6.313 45 81 9 81
82.87 8 46 82.8 9 81
84.298 10 47 84.6 9 81
87.137 20 48 86.4 9 81
88.56 40 49 88.2 9 81
88.85 50 49 88.2 9 81
89.427 100 49 88.2 9 81
89.93 900 49 88.2 9 81
Table 4. The number of pulses that f(s), f(n), and f(s, n) need to approach a in the third condition.
a        y/x     n    f(n) = 1.8n    (s, n)    f(s, n) = f(s) + f(n)
91.145 −50 50 90 (9, 0) 81
91.6 −30 51 91.8 (9, 0) 81
92.29 −25 51 91.8 (9, 0) 81
92.86 −20 52 93.6 (9, 0) 81
93.814 −15 52 93.6 (9, 0) 81
95.71 −10 53 95.4 (9, 0) 81
96.34 −9 54 97.2 (10, 0) 99
97.125 −8 54 97.2 (10, 0) 99
98.13 −7 55 99 (10, 0) 99
99 −6.313 55 99 (10,0) 99
99.462 −6 56 100.8 (10, 1) 100.8
100.8 −5.242 56 100.8 (10, 1) 100.8
101.309 −5 56 100.8 (10, 1) 100.8
102.6 −4.0473 57 102.6 (10, 2) 102.6
104.036 −4 58 104.4 (10, 3) 104.4
108 −3.0077 60 108 (10, 5) 108
108.434 −3 60 108 (10, 5) 108
115.2 −2.125 64 115.2 (10, 9) 115.2
116.565 −2 65 117 (10, 10) 117
133.2 −1.064 74 133.2 (10, 19) 133.2
135 −1 75 135 (10, 20) 135
Table 5. The number of pulses that f(s), f(n), and f(s, n) need to approach a in the fourth condition.
y/x      a = tan⁻¹(y/x) + 180    n    f(n) = 1.8n    (s, n)    f(s, n) = f(s) + f(n)
−0.9999 135.01 75 135 (10, 20) 135
−0.99 135.28 75 135 (10, 20) 135
−0.9 138.012 77 138.6 (10, 20) 135
−0.85 139.012 78 140.4 (10, 20) 135
−0.8 141.34 79 142.2 (10, 20) 135
−0.75 143.036 80 144 (10, 20) 135
−0.7 145.007 81 145.8 (10, 20) 135
−0.6 149.036 93 149.4 (10, 20) 135
−0.5 153.43 85 153 (10, 20) 135
−0.4 158.198 87 156.6 (10, 20) 135
−0.3 163.3 90 162 (10, 20) 135
−0.2905 163.8 91 163.8 (13, 0) 168.8
−0.2 168.69 94 169.2 (13, 3) 169.2
−0.158 171 95 171 (13, 4) 171
−0.1 174.28 97 174.6 (13, 6) 174.6
−0.031 178.2 99 178.2 (13, 8) 178.2
−1 180 100 180 (13, 9) 180
The function that mixes f(s) and f(n) is very precise and also faster than f(s). In this case, f(s, n) stays at 135 while y/x is less than −0.2905. Figure 5 shows the results of each function and the approximations of f(n) and f(s); it can be observed how the combinations are built and which one is taken. The resulting function for the fourth condition is:
f(s, n) := { 1.8n                          if −0.999 ≤ y/x < −0.3
             Σ_{n=0}^{s} 1.8n              if −0.29 ≤ y/x ≤ −0.25
             Σ_{n=0}^{s} 1.8n + 1.8n′      if −0.25 < y/x, with s = 13 and 0 < n′ ≤ 9    (16)
Fig. 5. Representation of the kinematics results for each condition in the Matlab and Arduino simulation.
6 Conclusions
In this paper, an algorithm to control a robotic arm with an LCD touch screen has been presented and simulated. As shown, the desired touch point on the two-dimensional plane of the LCD touch screen can be represented as a robot kinematic in three-dimensional space and used by physically disabled people. Moreover, the functioning of the algorithm and the combinations needed to program a faster and more precise touch control were tested, building all the possible combinations.
The algorithm works, can be programmed as presented in this paper and, more importantly, can be used to help physically disabled people in an intuitive, easy and useful way with very little effort, since it was proved that the arm can be controlled with only one finger, which is what we wanted to achieve. Basically, this paper describes the beginning of a new way to create a kinematic robot arm using a two-dimensional control and to help physically disabled people. There is therefore much future work to improve this project: once a single general equation for the algorithm is found, it will be easier to use in different programming languages (here Matlab and Arduino were used), giving people better and faster tools for everyday life.
References
1. Grau, A., Indri, M., Bello, L.L., Sauter, T.: Industrial robotics in factory automation: from
the early stage to the Internet of Things. In: 43rd Annual Conference of the IEEE Industrial
Electronics Society, pp. 6159–6164. IEEE Press, Beijing (2017)
2. Yenorkar, R., Chaskar, U.M.: GUI based pick and place robotic arm for multipurpose
industrial applications. In: Second International Conference on Intelligent Computing and
Control Systems, Madurai, India, pp. 200–203 (2018)
3. Burgner-Kahrs, J., Rucker, D.C., Choset, H.: Continuum robots for medical applications: a
survey. IEEE Trans. Rob. 31(6), 1261–1280 (2015)
4. Murali, A., Sen, S., Kehoe, B., Garg, A., Mcfarland, S., Patil, S., Boyd, W.D., Lim, S.,
Abbeel, P., Goldberg, K.: Learning by observation for surgical subtasks: multilateral cutting
of 3D viscoelastic and 2D Orthotropic Tissue Phantoms. In: IEEE International Conference
on Robotics and Automation, pp. 1202–1209. IEEE Press, Seattle (2015)
5. Shademan, A., Decker, R.S., Opfermann, J.D., Leonard, S., Krieger, A., Kim, P.C.:
Supervised autonomous robotic soft tissue surgery. Sci. Trans. Med. 8(337), 337ra64 (2016)
6. Allen, S.: New prostheses and orthoses step up their game: motorized knees, robotic hands,
and exosuits mark advances in rehabilitation technology. IEEE Pulse 7(3), 6–11 (2016)
7. Niyetkaliyev, A.S., Hussain, S., Ghayesh, M.H., Alici, G.: Review on design and control
aspects of robotic shoulder rehabilitation orthoses. IEEE Trans. Hum. Mach. Sys. 47(6),
1134–1145 (2017)
8. Proietti, T., Crocher, V., Roby-Brami, A., Jarrassé, N.: Upper-limb robotic exoskeletons for
neurorehabilitation: a review on control strategies. IEEE Rev. Biom. Eng. 9, 4–14 (2016)
9. Rehmat, N., Zuo, J., Meng, W., Liu, Q., Xie, S.Q., Liang, H.: Upper limb rehabilitation using
robotic exoskeleton systems: a systematic review. Int. J. Int. Rob. App. 2(3), 283–295 (2018)
10. Young, A.J., Ferris, D.P.: State of the art and future directions for lower limb robotic
exoskeletons. IEEE Trans. Neu. Syst. Reh. Eng. 25(2), 171–182 (2017)
11. Makin, T., de Vignemont, F., Faisal, A.: Neurocognitive barriers to the embodiment of
technology. Nat. Biom. Eng. 1(0014), 1–3 (2017)
12. Beckerle, P., Kõiva, R., Kirchner, E.A., Bekrater-Bodmann, R., Dosen, S., Christ, O.,
Abbink, D.A., Castellini, C., Lenggenhager, B.: Feel-good robotics: requirements on touch
for embodiment in assistive robotics. Front. Neurorobot. 12, 1–84 (2018)
13. Jiang, H., Wachs, J.P., Duerstock, B.S.: Facilitated gesture recognition based interfaces for
people with upper extremity physical impairments. In: Alvarez, L., Mejail, M., Gomez, L.,
Jacobo, J. (eds.) CIARP 2012. LNCS, vol. 7441, pp. 228–235. Springer, Heidelberg (2012)
14. Kruthika, K., Kumar, B.M.K., Lakshminarayanan, S.: Design and development of a robotic
arm. In: IEEE International Conference on Circuits, Controls, Communications and
Computing, pp. 1–4. IEEE Press, Bangalore (2016)
15. Chung, C.S., Wang, H., Cooper, R.A.: Functional assessment and performance evaluation for
assistive robotic manipulators: Literature review. J. Spinal Cord Med. 36(4), 273–289 (2013)
16. Perez-Marcos, D., Chevalley, O., Schmidlin, T., Garipelli, G., Serino, A., Vuadens, P., Tadi,
T., Blanke, O., Millán, J.D.: Increasing upper limb training intensity in chronic stroke using
embodied virtual reality: a pilot study. J. Neu. Eng. Reh. 14(1), 119 (2017)
17. Levin, M.F., Weiss, P.L., Keshner, E.A.: Emergence of virtual reality as a tool for upper
limb rehabilitation: incorporation of motor control and motor learning principles. Phy. Ther.
95(3), 415–425 (2015)
18. Kokkinara, E., Slater, M., López-Moliner, J.: The effects of visuomotor calibration to the
perceived space and body through embodiment in immersive virtual reality. ACM Trans.
Appl. Percept. 13(1), 1–22 (2015)
19. Bovet, S., Debarba, H.G., Herbelin, B., Molla, E., Boulic, R.: The critical role of self-contact
for embodiment in virtual reality. IEEE Trans. Vis. Comput. Graph. 24(4), 1428–1436 (2018)
20. Atre, P., Bhagat, S., Pooniwala, N., Shah, P.: Efficient and feasible gesture controlled robotic
arm. In: IEEE Second International Conference on Intelligent Computing and Control
Systems, pp. 1–6. IEEE Press, Madurai (2018)
21. Badrinath, A.S., Vinay, P.B., Hegde, P.: Computer vision based semi-intuitive Robotic arm.
In: IEEE 2nd International Conference on Advances in Electrical, Electronics, Information,
Communication and Bio-Informatics, pp. 563–567. IEEE Press, Chennai (2016)
22. Matlab & Simulink. https://www.mathworks.com. Accessed 15 Oct 2019
Information Technologies in Education
Performance Indicator Based on Learning
Routes: Second Round
1 Introduction
Human behavior manifests in actions and reactions because it develops within the family, social and educational framework, where ancestral values, knowledge, skills, aptitudes and customs are acquired. However, despite technological globalization, education and learning methods have not advanced significantly; tradition is still followed even though tools exist to innovate in the teaching process and to avoid poor learning performance in students (Sorokin and Sotomayor 2016). In the province of El Oro, given the complexity of teaching practice and pedagogy, innovation and the search for new ways are needed to strengthen teaching with resources according to the scope, culture and idiosyncrasy of
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 137–141, 2020.
https://doi.org/10.1007/978-3-030-45697-9_13
138 F. Chamba et al.
different societies. From this perspective, teaching practice should develop competences framed in the ability to design permanent and meaningful learning experiences, in which students are the central point of the teaching-learning process, through the correct use of ICT and toward a digital culture able to face the new challenges. At the CEPWOL Altamira Private Educational Unit of the city of Santa Rosa, El Oro province, teachers of Basic General Education observe a lack of student interest in learning, showing the need to innovate in the teaching-learning process with learning routes that improve academic performance. Facing the difficulties of treating social human behavior, this research is framed in learning routes to improve performance, ensuring that the topics of greatest difficulty, such as dyslexia, dyscalculia and hyperactivity, are taught with technology and achieve meaningful learning.
2 State of the Art
Lacunza (2015), in her article on social skills and child prosocial behavior from positive psychology, concludes from her results that "the identification of skills favors socialization with peers and enables the promotion of salutogenic social skills and prosocial behaviors, basic resources for the positive development of the child" (p. 6). Yépez (2010), in his work on absenteeism and its influence on the learning route of the first year of basic education at the Alberto Albán Villamarín school in the community of Noetanda, concludes that children who have been absent mostly cannot catch up with the tasks of the learning route and find it difficult to discuss the topics covered in it (p. 55). Zambrano-Ríos (2017), in a research report on the factors in the socio-affective behavior of children aged 3 to 4, concludes that family problems are stressors that influence the development of the child, causing complications in sphincter control and in the child's self-image, aspects that alter the child's behavior socially and affectively and also disturb family well-being (p. 56). Jácome Mayorga (2017), in his thesis on learning routes and communication skills in children of the fourth and fifth years of basic education, concludes that the lack of institutional self-management does not allow teachers to be trained continuously and systematically in innovative learning strategies, so they are guided only by the guidelines of the Ministry of Education; there is also a lack of interest among teachers in innovating, so learning outcomes are not ideal and teacher profiles are not competitive at the local or provincial level, since they present many gaps throughout their own schooling (p. 142). Galarsi et al. (2011), in their article on behavior, history and evolution, conclude that for most of human history man has considered himself a superior being completely different from animals; however, the contributions of Charles Darwin suggested that throughout evolution we have maintained blood ties with other species, a relationship admitted today by many thinkers. The information society requires students to learn through the use of ICT and, in turn, to be protagonists of their own learning. Since human beings perform their activities consciously and unconsciously, the aim is to relate behavior to knowledge, with the student building his future through a critical paradigm, because the educational reality seeks solutions to the problem between learning and the social human behavior, values and personality that are currently being lost. The educational processes examined in different studies show that the tools offered by ICT can help effectively to find better learning paths for the construction of knowledge and objective experiences, transforming the student's thinking toward better human behavior in today's society. Education today faces specific and cultural problems that basically refer to the need to use the most modern computer technologies in order to meet quality standards and promote a digital culture representing the "Information Age", which has a decisive impact on the specific objectives of current education.
3 Methodology
4 Experimentation
Research is part of human behavior in general, and knowledge has therefore been defined as a process in which a cognitive subject (who knows) relates to an object of knowledge (that which is known), resulting in a new mental product called knowledge; the same term designates both the process and the result of that process, i.e. the subjective operation that produces it as well as the product itself (Rodríguez 2011). Social human behavior in education is addressed as a performance indicator based on learning routes for students of Basic General Education of the CEPWOL Altamira Private Educational Unit of the city of Santa Rosa, El Oro province. The chosen modality allows an approach to the study problem together with the actors of the educational community, through the collection of the knowledge, experiences and information that parents, teachers and authorities have about the communication strategies executed, evaluated through quantified indicators and analyzed with the participation of the study group (see Fig. 1). The research was conducted on a sample of subjects representative of a larger population, in the context of daily life, using standardized questioning procedures in order to obtain quantitative measurements of a wide variety of objective and subjective characteristics of the population. The survey was aimed at students, authorities and teachers of the CEPWOL Altamira Private Educational Unit; its instrument is a questionnaire prepared with closed questions that capture the study variable. The questionnaire gathered information with open and closed questions established beforehand, always posed in the same order, formulated in advance and strictly standardized. The research instruments were subjected to criteria of validity and reliability: validity through the "expert judgment" technique, and reliability through a pilot test applied to a group of students and teachers with characteristics similar to the established sample, which allowed errors in the understanding of the questions and in the selection of answers to be detected and corrected before application. Internal consistency was estimated with Cronbach's alpha, which measures the reliability of the instrument through a set of items in the same theoretical dimension, applying the SPSS software.
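Cronbach's alpha, which the authors computed with SPSS, can be reproduced in a few lines (an illustrative Python sketch with made-up pilot scores, not the study's data):

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)

    def var(xs):                                  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(total_scores))

# Hypothetical pilot data: 3 items answered by 5 respondents.
items = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [3, 5, 3, 4, 4]]
print(round(cronbach_alpha(items), 2))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency.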
Final Test (percent per row)
             Q1    Q2    Q3    Q4    Q5    Q6    Q7    Q8    Q9    Q10   Total
Low (1)      2.4   3.3   0     4.1   4.9   0     0     9.8   13.8  0     100
Nothing (2)  0.8   0     0     0     0     0     4.1   0.8   0     0     100
Normal (3)   15.9  7.3   18.7  13    17.1  6.5   7.3   14.6  13    0     100
High (4)     80.5  89.4  81.7  82.9  78    93.5  88.6  74.8  73.2  100   100
The verification survey (histogram) shows that significant scores favor the use of ClassFlow and the results of its application: 80.5% state that the changes favor learning, 89.4% that they change the social human behavior of students positively, and 81.7% that the exchange of skills and strengths improves the academic performance of students in a social, academic and cultural way.
5 Conclusions
Significant scores favor the use of ClassFlow and the results of its application: 80.5% state that the changes favor learning, 89.4% report a positive change in the social human behavior of students, and 81.7% that the exchange of skills and strengths improves students' academic performance in a social, academic and cultural way. In addition, 82.9% consider that the ClassFlow tool contributes to the learning competencies of students, 93.5% that it contributes to a positive improvement of social human behavior in students, and 88.6% of students and teachers consider that the use of this resource helps learning.
References
Lacunza, A.B.: Las habilidades sociales y el comportamiento prosocial infantil desde la
psicología positiva. Pequén, 12, 20. Recuperado el 26 de 12 de 2017, de (2015). http://
revistas.ubiobio.cl/index.php/RP/article/view/1831
Zambrano-Ríos, M.F.: Factores en el comportamiento socioafectivo de los niños de 3 a 4 años.
Informe de Investigación, Universidad Técnica Ambato, Facultad de Ciencias Humanas y de
la Educación, Ambato. Recuperado el 26 de 12 de 2017, de (2017). http://repositorio.uta.edu.
ec/jspui/handle/123456789/25880
Jácome Mayorga, M.E.: Las rutas de Aprendizaje y las habiliades de comunicación en los niños
de cuarto y quinto año de educación básica. Tesis, Universidad Técnica Ambato, Facultad de
Ciencias Humanas y de la Educación- Psicología Educativa, Ambato. Recuperado el 29 de 12
de 2017, de (2017). http://repositorio.uta.edu.ec/jspui/handle/123456789/26062
Galarsi, M.F., Medina, A., Ledezma, C.: Comportamiento, historia y evolución. Redalyc (24),
36, Recuperado el 26 de 11 de 2017, de (2011). http://www.redalyc.org/pdf/184/184269
20003.pdf
Rodríguez, J.M.: Métodos de investigación cualitativa. Revista de Investigación Silogismo 1(08),
108 (2011)
Yépez, A.A.: La inasistencia y su influencia en la ruta del Aprendizaje del primer año de
educación básica en la escuela Alberto Albán Villamarín de la comunidad de noetanda. Tesis,
Universidad Técnica de Ambato, Facultad de Ciencias Humanas y de la Educación, Ambato.
Recuperado el 21 de 12 de 2017, de (2010). http://repositorio.uta.edu.ec/handle/123456789/
3875
Sorokin, P., Sotomayor, M.A.: Ciencias Sociales Humanas y de Comportamiento: Dificulatades
regularias en países latinoamericanos. Revista de la Facultad de Ciencias Humanas de la
Universidad Autonoma de Colombia 14(1), 23 (2016)
Evaluating the Acceptance
of Blended-Learning Tools: A Case Study
Using SlideWiki Presentation Rooms
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 142–151, 2020.
https://doi.org/10.1007/978-3-030-45697-9_14
Evaluating the Acceptance of Blended-Learning Tools 143
chatting, as well as adaptation to the learner's individual needs [3]. These functions and possibilities aim to improve the perceived quality of the learning experience of its users [3]. However, the actual effect on the users' learning experience has not been tested so far. To evaluate the acceptance and usage of the tool, a teaching experiment was conducted at a University of Applied Sciences.
The paper is structured as follows: Sect. 2 presents the tool and the theoretical context, which are used to explain the methods and data collection in Sect. 3, followed by the results in Sect. 4. The study concludes with a summary in Sect. 5 and showcases further potential for research addressing this topic.
2 Related Work
Thus, the focus is on the individual with his or her learning needs and learning activities, which entails a constructivist understanding of learning [7]. Yet a purely constructivist perspective on learning with digital media is no longer sufficient. Learning is no longer viewed from the perspective of information processing (as in the paradigm of cognitivism) but is shaped by the dominance of the learning object in virtual space, with simultaneous networking with other learners (and also teachers) in a self-controlled way, adapted to one's own learning needs and in simultaneous connection with other users. The tool's added values are therefore identified by connectivist theory. The question is whether the benefit of these functions is also perceived by the users, who are likely used to other instruction methods and tools such as Moodle, seminars and lectures. How users accept SlideWiki PR as a teaching environment must therefore be explored.
The acceptance of a new technology is regarded as the basis for its success and use [10]; acceptance is thus appropriate for the evaluation of PR as a new technology. As PR is a webinar solution, it enables large numbers of people to access content simultaneously and collaboratively, using innovative approaches that seize several advantages and demands of learners and teachers. A proven model defining the factors that influence the acceptance of new technologies is Davis's technology acceptance model (TAM), which has been extensively empirically tested and widely approved for measuring the acceptance of a new technology [11,12]. The model was developed because valid predictors of the factors influencing technology acceptance were lacking [13]. In its first form, the model identified two factors that allow a prediction of the actual use of new technologies: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). PU refers to the probability that using the new technology increases the added value for performance in the occupational field; PEOU refers to the ease of use of the technology under assessment [14]. In the TAM, the two factors are recognized as a cognitive reaction to the use of the technology and lead to an increased acceptance in attitude (consisting of the attitude towards use and the intention to use) and ultimately to actual use; people with a positive attitude towards using a technology are highly likely to actually use it [14]. Since the probability of acceptance and usage of SlideWiki PR is a central concern of this study, the TAM scale was chosen as an appropriate instrument to examine the following research question (RQ): How do users perceive the acceptance of SlideWiki Presentation Rooms, based on the perceived usefulness and the perceived ease of use?
3 Methodology
A teaching experiment was conducted in order to evaluate and assess the use of SlideWiki PR.
3.1 Sample
The experiment was carried out in a 90-min lecture about artificial intelligence
as part of a business information systems class that was held in late June 2019.
A group of fifth semester undergraduate students in a degree featuring both
economics and computer science elements was chosen as a fitting sample for
this experiment, since they speak sufficient English as fifth semester students
who are mostly German natives and might be interested in a digitized teaching
environment including translation and are all ready familiar with other tools like
f.e. moodle.
146 A. Martin et al.
3.2 Execution
As the number of participants was 41, the mean values were chosen as the central criterion, and the median and the standard deviation of tendencies towards the mean were taken into account as critical values. Tendencies towards positive ("I totally agree") and negative ("I fully disagree") statements were detected by summing the percentages in two, or sometimes three, distinctive subcategories, with the median as a critical limit [17]. The conclusions derived from these figures were supplemented by the information provided by the students in the open question section, which was adopted uncoded.
1 Moodle is a Learning Management System, see: https://moodle.de/.
2 EvaSys evaluation system: https://en.evasys.de/main/add-ons-services/system-security-performance.html.
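The descriptive analysis just described (mean as the central criterion, median and standard deviation as critical values, agreement as summed percentages of the top subcategories) can be sketched as follows; the responses here are hypothetical, and the study itself used EvaSys, not this code:

```python
import statistics as st

# Hypothetical 5-point Likert answers (1 = "I fully disagree" ... 5 = "I totally agree")
responses = [5, 4, 4, 5, 3, 4, 2, 5, 4, 4]

mean = st.mean(responses)
median = st.median(responses)
sd = st.stdev(responses)
# Share of answers in the two positive subcategories (4 and 5):
agree_pct = 100 * sum(r >= 4 for r in responses) / len(responses)

print(mean, median, round(sd, 2), agree_pct)
```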
4 Results
A total of 41 participants answered the questions from Subsect. 3.2. A sample larger than 30 participants is usually recognized as leading to representative results that can be transferred to a more general population [18]. In general, the students answered in favour of the tool and liked the idea of the independent slide use as well as the chat and the simple usage. Despite an announcement, a written introduction document and a brief oral introduction before the experiment, the intended benefit of the tool was not clear to all participants, perhaps because it interrupted their habitual workflow, as the open answer section indicates. Table 1 provides an aggregated overview of the collected results.
Perceived Usefulness: The vast majority disagrees with the statement that
the tool allows them to complete tasks faster. The students are uncertain about
the improvement of their productivity in task completion, with 30.8% who
slightly disagree and 30.8% who are uncertain. This slightly negative
tendency is confirmed in the following question about the perceived increase in
productivity in learning, with 47.5% who disagree, 40% who are neutral and only
12.5% who agree.
Ease of Use: The perceived ease of use is the most positive aspect of this
evaluation. A majority of 72.5% of the students found the interaction with the
tool easy and understandable, and 80% agree that SlideWiki is easy to use,
with 50% who fully agree. A majority of 87.5% of the students also found
that the use of SlideWiki is easy to learn.
Attitude Towards Use: Question A1 divides the sample into two separate
camps, with small majorities either fully agreeing or fully disagreeing with the
statement that the use of SlideWiki for this task is a good idea. When it comes
to the preferred mode of working, the students would clearly prefer manual
methods over SlideWiki. Yet the students think that the general usage of the
tool is a good idea, as 67.5% agree.
Intention to Use: The general acceptance of the tool is slightly positive, with
a concentration of agreement to use the tool in the future if offered. When asked
if others should use SlideWiki as well, almost all participants neither agreed
nor disagreed, which can be interpreted as a rather neutral position. The third
question of the item set provides a clearer picture: the students do not expect
to increase their usage in the future.
148 A. Martin et al.
Table 1. The statements represent the learners' positions on the questionnaire and
are numbered by their item group, e.g. I = intention to use; A = attitude towards use;
P = perceived usefulness; E = ease of use. The results of the evaluation are based on a
7-point Likert scale, where 7 corresponds to “I totally agree” and 1 corresponds to “I
fully disagree”.
Open Question: Since the researchers had their own idea about the possible
benefit of the tool, the question of what students liked best about SlideWiki
Presentation Rooms was asked to capture their personal opinion on this subject.
While only 18 out of 41 students took the time to reply, their answers
reveal an interesting perspective that is in line with the rest of the results. The
ease of use, the opportunity to ask questions in the chat, and the synchronized
flow of presentation slides were each mentioned positively twice.
An interesting point of critique was that students were frustrated that
they could not take notes directly in the material, which probably stems from
their offline and PDF workflow. This indicates that the students expected a
personal and persistent environment, which the tool currently does not provide.
Three participants mentioned that they were not satisfied that they could not
use their own device, which was a result of the individual network restrictions
at the university where this experiment was conducted (see Sect. 4.1). An unex-
pected result is the uncertainty detected about the actual benefit of the tool.
While one participant understood the value especially for online lectures, five
had difficulty seeing the overall added value of the tool. These statements help
explain the mixed results above, where students tended to have an either
neutral or slightly negative attitude towards SlideWiki.
4.1 Limitations
SlideWiki PR was used once, in a sample of 41 students in one lecture. Part of
this sample probably did not read the introduction to the tool, as indicated by
their doubts about its overall purpose, so the reliability of their statements
and general judgment is disputable. Furthermore, the 7-point Likert scale allows
a tendency towards central scores, which can be seen in the results. This makes
it more difficult to draw valid conclusions from the data [17]. These factors
combined weaken the overall validity of the study. Therefore, the presented data
can be interpreted as a single case study with limited significance. Nonetheless, it
provides an insight into students' perception of and adaptation to e-learning
environments. Another aspect that needs to be considered critically is the
abandonment of the originally intended bring-your-own-device (BYOD) approach after a
pretest. This was decided because of technical issues caused by the restrictively
configured wireless network. Thus, the environment was replaced by a controlled
one in an official computer pool at the university. Students were asked to use a
certain browser for maximum comparability, even though SlideWiki supports the
BYOD approach and various browsers in general [3]. These adjustments represent a
difference from the intended use and must thus be regarded as a limitation of this
study.
References
1. López-Pérez, M.V., Pérez-López, M.C., Rodríguez-Ariza, L.: Blended learning in
higher education: students’ perceptions and their relation to outcomes. Comput.
Educ. 56(3), 818–826 (2011)
2. Drummer, J., Hambach, S., Kienle, A., Lucke, U., Martens, A., Müller, W., Rens-
ing, C., Schroeder, U., Schwill, A., Spannagel, C., et al.: Forschungsherausforderung
des E-Learnings. In: Rohland, H., Kienle, A., Friedrich, S. (eds.) DeLFI 2011 - Die
9. e-Learning Fachtagung Informatik, pp. 197–208. Gesellschaft für Informatik e.V,
Bonn (2011)
3. Meissner, R., Junghanns, K., Martin, M.: A decentralized and remote controlled
webinar approach, utilizing client-side capabilities: to increase participant limits
and reduce operating costs. In: Proceedings of the 14th International Conference
on Web Information Systems and Technologies - Volume 1: WEBIST, pp. 153–160.
INSTICC, SciTePress (2018)
4. Elias, M., James, A., Ruckhaus, E., Suarez-Figueroa, M.C., de Graaf, K.A., Khalili,
A., Wulff, B.M., Lohmann, S., Auer, S.: SlideWiki - towards a collaborative and
accessible slide presentations. In: EC–TEL 2018, 13th European Conference on
Technology Enhanced Learning. Practitioner Proceedings, Leeds, UK. Fraunhofer
(2018)
5. Mikroyannidis, A.: Collaborative authoring of open courseware with slidewiki: a
case study in open education. In: EDULEARN 2018 Proceedings, vol. 1, pp. 2000–
2007 (2018)
6. Raspopovic, M., Cvetanovic, S., Medan, I., Ljubojevic, D.: The effects of integrat-
ing social learning environment with online learning. Int. Rev. Res. Open Distrib.
Learn. 18, 141–160 (2017)
7. Kergel, D., Heidkamp, B.: Digitalisierung der Lehre – Chancen für eBologna, pp.
145–160. Springer Fachmedien Wiesbaden, Wiesbaden (2018)
8. Siemens, G.: Connectivism: a learning theory for the digital age. Int. J. Instr.
Technol. Distance Learn. 2(1), 3–10 (2005)
9. Dräger, J., Müller-Eiselt, R.: Die digitale Bildungsrevolution - Der radikale Wandel
des Lernens und wie wir ihn gestalten können, 4th edn. DVA, München (2015)
10. Prilla, M., Nolte, A.: Fostering self-direction in participatory process design. In:
Proceedings of the 11th Biennial Participatory Design Conference, PDC 2010, pp.
227–230. ACM, New York (2010)
11. King, W.R., He, J.: A meta-analysis of the technology acceptance model. Inf.
Manag. 43(6), 740–755 (2006)
12. Schepers, J., Wetzels, M.: A meta-analysis of the technology acceptance model:
investigating subjective norm and moderation effects. Inf. Manag. 44, 90–103
(2007)
13. Birken, T.: IT-basierte Innovation als Implementationsproblem. Evolution und
Grenzen des Technikakzeptanzmodell-Paradigmas, alternative Forschungsansätze
und Anknüpfungspunkte für eine praxistheoretische Perspektive auf Innovation-
sprozesse. ISF München (2014)
14. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technol-
ogy: a comparison of two theoretical models. Manag. Sci. 35(8), 982–1003 (1989)
15. Wilhelm, D.B.: Pre-Test eines Modells zur Erklärung der C.9 Nutzerakzeptanz von
web-basierten “sozialen” Unternehmensanwendungen. In: GeNeMe 2009 Gemein-
schaften in Neuen Medien, TU Dresden, 01./02.10.2009, Virtuelle Organisation
und Neue Medien 2009, pp. 203–214. TU Dresden (2009)
16. Venkatesh, V., Davis, F.: A theoretical extension of the technology acceptance
model: four longitudinal field studies. Manag. Sci. 46, 186–204 (2000)
17. Nadler, J.T., Weston, R., Voyles, E.C.: Stuck in the middle: the use and interpre-
tation of mid-points in items on questionnaires. J. Gen. Psychol. 142(2), 71–89
(2015)
18. Schöneck, N.M., Voß, W.: Das Forschungsprojekt: Planung, Durchführung und
Auswertung einer quantitativen Studie. Springer, Wiesbaden (2015)
Adaptivity: A Continual Adaptive Online
Knowledge Assessment System
Abstract. The main goal of this paper is to provide an insight into the imple-
mentation of a model for continual adaptive online knowledge assessment
through Adaptivity, a web-based application. Adaptivity enables a continual
and cumulative knowledge assessment process, which comprises a sequence
of at least two (but preferably more) interconnected tests, carried out over
a reasonably long period of time (e.g. one semester). It also provides personal-
ized post-assessment feedback, based on each student’s current results,
to guide each student in preparing for the upcoming tests. In this paper, we
describe the adaptation model, reveal the design of Adaptivity, and present the
results of testing the proposed model.
1 Introduction
Adaptive online education is strongly represented in current scientific and profes-
sional research. The main guiding principle of most research in this area is to provide a
complete e-learning system that is (i) capable of selecting and providing the
appropriate learning content for each individual, in order to (ii) improve the effects of
each individual’s education (see for example [1–3]).
When analyzing the research in the scope of adaptive online knowledge assess-
ment, it can be noted that most efforts are focused on studying various aspects of
adaptability within a single knowledge assessment, usually within a self-assessment
and/or formative assessment [4–8]. However, teachers should be able to continuously
monitor and evaluate students’ progress and to adapt their teaching to the needs of
different groups of students accordingly, as suggested by the Bologna Process.
With respect to the requirements identified in previous sections, this paper does not
consider modelling of an entire adaptive learning system but focuses only on the model
and an example of an adaptive online knowledge assessment system (hereafter referred
to as Adaptivity). Adaptivity is designed to guide the individual towards continuous
improvement in the achievement of learning goals, by announcing and applying types of
assessment that are appropriate for the learning content being assessed.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 152–161, 2020.
https://doi.org/10.1007/978-3-030-45697-9_15
Adaptivity: A Continual Adaptive Online Knowledge Assessment System 153
2 Research Background
The literature review revealed far fewer examples of research related to
adaptive knowledge assessment that spans a series of assessments (i.e.
continuous/continual adaptive knowledge assessment). Raman and Nedungadi [9]
describe continuous formative evaluation within the Amrita Learning ALS, where multiple
assessments are carried out in an adaptive way; but since each assessment covers
different learning topics, it cannot be classified as fully continual adaptive assessment,
where at least a portion of the subsequent assessment adaptively depends upon the
results of the previous assessment(s).
Grundspenkis and Anohina [10] describe an adaptive learning and assessment
system where concept maps are used as a more machine-friendly replacement for
essays. Course contents are introduced gradually in time, through multiple stages.
Adaptive knowledge assessments take place between stages, but although these
assessments encompass contents from all available stages (similarity with our
approach), the adaptivity is still limited to a single assessment. Also, there is no
evidence that an assessment that takes place during later stages takes into consideration
the results from the assessment conducted during earlier stages.
There are also examples of adaptive and continuous assessment within commercial
e-learning platforms, e.g. Khan Academy. Hu [11] describes the Academy’s approaches
to assessing students’ mastery of a topic. They use a different proficiency model,
which uses logistic regression to select the next task based on the user’s previously
solved tasks and current proficiency level. Despite being adaptive over a series of
assessments (tasks), these approaches also do not systematically include parts of
previous topics during subsequent assessments.
In a broader field of ALSs and intelligent tutoring systems (ITSs), there is a
common distinction between micro- and macro-adaptation. VanLehn [12] suggests
that, on ITS level, macro-adaptation focuses on a global task selection, while micro-
adaptation is concerned with in-task interactions. Within ITS, knowledge assessment is
usually placed on a micro-level and its results are used to update learner (user) models,
upon which the rest of the macro-adaptation is derived [13, 14].
Considering the above-mentioned findings, continual knowledge assessment systems
that include elements of adaptivity within a series of connected assessments have
not yet been sufficiently investigated. As one of its scientific contributions, this
research aims to provide additional insights about such systems.
Although the main aim of this paper is to propose a design for an online knowledge
assessment system, we first give a short overview of the underlying model (the
Adaptivity Model) on which the online system was built.
154 M. Zlatović and I. Balaban
[Figure: overview of the Adaptivity Model, connecting learning goals, questions,
test creation, test evaluation, and the levels of learning goals achievement.]
The learning objects represent the thematic units of learning content, to which
learning goals are connected.
The questions element represents the test questions database. Questions are
assigned to the learning goals. Multiple types of questions are supported: multiple-
choice questions with a single correct answer (SC) or multiple correct answers (MC),
matching questions (MATCH), fill-in-the-blanks questions (FILL), and free-answer
essay-type questions (ESSAY). Each question is also assigned a qualitative label
(i.e. level) indicating its difficulty in the context of assessing the related learning
goal [16]: difficulty level “1” (DL1) represents an easy question, DL2 a question of
medium difficulty, and DL3 a difficult question.
The test creation element is a central part of the system and takes into consider-
ation all the other main elements of the system except for feedback.
The learning goals achievement element is calculated during the test evaluation
activity. It represents the quantitative indicator of an individual’s success in
achieving a learning goal [16]. It is expressed on a percentage scale, with thresholds
set to mimic traditional grading systems: 0–49.99% (Fail, F or 1), 50–62.49%
(Sufficient, D or 2), 62.5–74.99% (Good, C or 3), 75–87.49% (Very good, B or 4),
and 87.5–100% (Excellent, A or 5).
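These thresholds can be expressed directly. The function name below is ours, but the interval boundaries are exactly those listed above:

```python
def achievement_grade(points_awarded, points_available):
    """Map a learning-goal achievement level (percentage of points awarded
    out of points available) to the traditional grade, using the interval
    thresholds listed in the text."""
    pct = points_awarded / points_available * 100
    if pct < 50:
        return "Fail (F, 1)"
    if pct < 62.5:
        return "Sufficient (D, 2)"
    if pct < 75:
        return "Good (C, 3)"
    if pct < 87.5:
        return "Very good (B, 4)"
    return "Excellent (A, 5)"

# e.g. 13 of 20 points awarded for a goal is 65%, i.e. "Good (C, 3)"
print(achievement_grade(13, 20))
```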
The feedback towards the students visualizes the individual achievement levels
related to the learning goals included in the assessment, and provides personalized
suggestions describing what type and difficulty of questions will be used
predominantly in the following adaptive iteration, during the repeated assessment
of old learning content. The feedback towards the teachers shows which questions
are most difficult to solve, etc.
During the adaptive iterations (see Fig. 2), the system automatically selects the
questions (number, type and difficulty) based on built-in adaptivity rules (R1–R5),
which consider the student’s previous level of learning goals achievement for that
object (for details see [17]):
(A) Rules R1–R3 select the difficulty of questions based on the previous iteration
(R1 if the student failed to reach a specific LG, R2 if the student was sufficient
or good, and R3 if the student was very good or excellent);
(B) Rule R4 decreases the number of questions used during the adaptive phase, to
avoid the inevitable question inflation caused by the repeated assessment of all
learning goals from all previous iterations;
(C) Rule R5 increases the number of questions used for individuals with poor
achievement (a modification of R4), but not beyond the total number of
questions determined by R4.
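A minimal sketch of rules R1–R3, assuming the achievement bands align with the grading thresholds above; the exact rule parameters are given in [17] and the band boundaries here are our assumption:

```python
def next_difficulty(previous_achievement_pct):
    """Sketch of rules R1-R3: choose the difficulty level to emphasize when
    a learning goal is re-assessed, based on the previous achievement level.
    Band boundaries are assumed from the grading thresholds in the text."""
    if previous_achievement_pct < 50:    # R1: goal not reached -> easy
        return "DL1"
    if previous_achievement_pct < 75:    # R2: sufficient or good -> medium
        return "DL2"
    return "DL3"                         # R3: very good or excellent -> hard

print(next_difficulty(40), next_difficulty(60), next_difficulty(90))
```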
[Figure: architecture of the Adaptive Online Assessment system. Teacher and
student web interfaces connect to the User Management Module, the Assessment
Structure Design Module, the Questions and Answers Database Module, the
Learning Objects and Learning Goals Design Module, the Test Instances
Generation Module, and the Test Instances Evaluation Module.]
The Test Instances Generation Module determines, among other things, which
learning goals are currently in the adaptive phase of the assessment (based on the
student’s prior levels of achievement at those LGs).
These steps are performed automatically by the system, per student, in real-time and
without teachers’ intervention.
The Test Instances Evaluation Module
The answers are automatically evaluated upon test submission, except for essay
questions, where manual scoring is required. Students immediately receive the
partial results related to the portion of the test that could be automatically
evaluated. If there were no essay questions in the test, the levels of achievement
for all the assessed goals are also determined and the student receives complete
feedback about his/her success.
During the evaluation phase, Adaptivity uses a pre-defined interval scale with
achievement thresholds, which mimics the usual academic grading system. The
achievement level for each learning goal is calculated as the percentage of total
points awarded out of the maximum points available for all the questions used to
assess that goal.
When all the achievement levels have been calculated, the student can see the
complete report of his/her success. In the context of adaptive assessment, the most
important part of the report is the detailed breakdown of success at the level of
individual learning goals. The achievement levels are displayed alongside each of
the assessed goals. At the end of the report, there are direct remarks describing
what type and difficulty of questions will be used in the next iteration, during the
repeated assessment of these learning goals. This information should encourage the
student to adjust his/her learning strategy for the next assessment [18], in order to
(i) improve knowledge levels for those goals with poor achievement levels or
(ii) maintain or improve high(er) levels of knowledge for those goals with
satisfactory or great achievement levels.
Adaptivity was used to test the effectiveness of the underlying Adaptivity model. The
research involved all information systems and technology students who regularly
attended classes in the undergraduate course Y held at institution X. The total
population of students, i.e. a convenience sample, was divided into two groups. The
experimental group (E) used Adaptivity for all assessments prescribed by the
curriculum of course Y. The control group (C) was given assessments in a traditional
paper-and-pencil form, consisting of several essay-type questions.
Both groups of students shared the same learning contents (the same learning
goals) in every assessment. Also, both groups were given a cumulative type of
assessment (see Fig. 2, Sect. 3.2), where each subsequent test included new
learning topics along with the old ones. This should ensure that the different type of
assessment process (i.e. Adaptivity vs. pen-and-paper) was the major and most
significant difference between the two groups.
To enable the comparison of points between the two groups, all results were
converted into percentages. The classic written tests used in the C-group always
allowed for a maximum of 21 points, while within Adaptivity the maximum points
available varied from student to student, because in the adaptive stages students no
longer received the same number of questions per learning goal.
Table 1 shows that students in the E-group achieved higher average results (in
percentage) in all three tests, with an increase of at least 11.05% (in the second test)
up to a maximum of 19.07% (in the third test). Overall, the E-group achieved on
average 15.89% better results than the C-group.
Table 1. Descriptive statistics of the assessment results for control (N = 104) and
experimental group (N = 78).

        Group  N    Avg. score (in %)  Std. dev.
Test1   C      104  45.42              20.365
        E      78   59.29              13.396
Test2   C      104  46.10              20.912
        E      78   57.15              14.997
Test3   C      104  36.21              23.854
        E      78   55.28              14.450
Total   C      104  42.58              18.385
        E      78   58.47              12.881
The statistical significance of the observed differences was checked using a t-test
for two independent samples (student groups): one that used the Adaptivity
application and the other that did not. The significance for all four observed
variables (Table 2) is below the threshold of 0.01, which indicates that the difference
in means between the groups is statistically significant (p < 0.01). Therefore, the
experimental group achieved significantly better results (by 15.89%) than the
control group.
Table 2. Results of t-test (independent samples) for the assessment results (control vs.
experimental group).

          Levene test     T-test
          F       Sig.    t       df       Sig. (2-tail)  Means diff.
Test1(a)  15.062  0.000   −5.534  177.226  0.000          −13.878
Test2(a)  7.210   0.008   −4.149  179.677  0.000          −11.046
Test3(a)  25.938  0.000   −6.681  173.037  0.000          −19.071
Total(a)  12.891  0.000   −6.855  179.242  0.000          −15.896

(a) Inequality of variances is assumed (based on Levene’s test)
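Since Table 2 assumes unequal variances, the test is Welch's t-test, and its statistic and degrees of freedom can be recomputed from the group summaries in Table 1. A sketch using only the published means, standard deviations, and group sizes:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom for two independent
    samples with unequal variances (as assumed in Table 2)."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# "Total" row of Table 1: control group vs. experimental group
t, df = welch_t(42.58, 18.385, 104, 58.47, 12.881, 78)
print(f"t = {t:.3f}, df = {df:.3f}")  # reproduces the Total row of Table 2
```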
The students’ prior marks from the prerequisite course Z were also analyzed, to
rule out a comparison between two groups of students whose academic capabilities
might be drastically unequal. The analysis of the grades showed that no significant
difference exists between the average marks of the C-group (2.183) and the
E-group (2.179).
5 Conclusion
The literature review has revealed that adaptive e-assessment systems are mostly
based on a single test, and adaptivity is thus applied within a single test, considering
the student’s answer after each question. In addition, most of the e-assessment types
mentioned in previous research are usually self-assessment or peer assessment,
sometimes mixed with traditional (scheduled) formative tests in a classroom.
However, similarity between the existing single-test adaptivity e-assessment sys-
tems and the Adaptivity model proposed in this paper is also evident: using Bloom’s
taxonomy and ontologies to create tests [4], generating tests based on students’ level of
knowledge [19], and testing feedback effects with multiple-tier tests [20].
Despite similarities, there are also several main differences that reflect the novelty
of our approach:
1. The Adaptivity model involves adaptation of every subsequent test considering
the results from previous iterations (tests) and learning goals. We therefore
consider adaptation between tests rather than within a single test, which means
we are dealing with continual e-assessment.
2. The model considers the realization of the learning goals throughout multiple
tests and adapts the types of questions accordingly.
3. The feedback given to the student is carefully crafted to facilitate the desired
learning strategies [18] and to improve the student’s success considering all
iterations (tests).
In this paper we demonstrated that the use of Adaptivity helped students to
achieve better results at the end of the semester. However, the practical
implementation of Adaptivity is adjusted to fit real academic practice within
blended online education and was piloted within such a course. The assessment
process is adjusted to fit one specific form of continuous monitoring of students’
activities in the context of higher-education classes. Various formal obstacles may
limit the application of this model of knowledge assessment in different types of
institutions. Therefore, caution is advised when applying it to environments that
practice fully online education or do not use continual assessments.
References
1. Graf, S., Kinshuk: Advanced adaptivity in learning management systems by considering
learning styles. In: WI-IAT 2009, IEEE/WIC/ACM International Joint Conferences on Web
Intelligence and Intelligent Agent Technologies 2009, Milan, Italy, vol. 3, pp. 235–238
(2009)
2. Hafidi, M., Bensebaa, T., Trigano, P.: Developing adaptive intelligent tutoring system based
on item response theory and metrics. Int. J. Adv. Sci. Technol. 43, 1–14 (2012)
3. Ahuja, N.J., Sille, R.: A critical review of development of intelligent tutoring systems:
retrospect, present and prospect. Int. J. Comput. Sci. Issues 10(2), 39–48 (2013)
4. Ying, M.H., Yang, H.L.: Computer-aided generation of item banks based on ontology and
bloom’s taxonomy. In: Li, F., et al. (eds.) Advances in Web Based Learning - ICWL 2008.
LNCS, vol. 5145, pp. 157–166. Springer, Heidelberg (2008)
5. Huang, Y.M., Lin, Y.T., Cheng, S.C.: An adaptive testing system for supporting versatile
educational assessment. Comput. Educ. 52(1), 53–67 (2009)
6. Chrysafiadi, K., Virvou, M.: Create dynamically adaptive test on the fly using fuzzy logic.
In: 2018 9th International Conference on Information, Intelligence, Systems and Applica-
tions (IISA), pp. 1–8. IEEE (2018)
7. Snytyuk, V., Suprun, O.: Adaptive technology for students’ knowledge assessment as a
prerequisite for effective education process management. In: ICTERI, pp. 346–356 (2018)
8. Mangaroska, K., Vesin, B., Giannakos, M.: Elo-rating method: towards adaptive assessment
in e-learning. In: 2019 IEEE 19th International Conference on Advanced Learning
Technologies (ICALT), vol. 2161, pp. 380–382. IEEE (2019)
9. Raman, R., Nedungadi, P.: Adaptive learning methodologies to support reforms in
continuous formative evaluation. In: 2010 International Conference on Educational and
Information Technology, vol. 2, pp. V2–429. IEEE (2010)
10. Grundspenkis, J., Anohina, A.: Evolution of the concept map based adaptive knowledge
assessment system: implementation and evaluation results. J. Riga Techn. Univ. 38, 13–24
(2009)
11. Hu, D.: How Khan Academy is using Machine Learning to Assess Student Mastery (2011).
http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-
student-mastery.html. Accessed 22 Feb 2019
12. VanLehn, K.: The behavior of tutoring systems. Int. J. Artif. Intell. Educ. 16(3), 227–265
(2006)
13. Rus, V., Baggett, W., Gire, E., Franceschetti, D., Conley, M., Graesser, A.: Towards learner
models based on learning progressions (LPs) in DeepTutor. In: Sottilare, R.A., et al. (eds.)
Design Recommendations for Intelligent Tutoring Systems: Volume 1 – Learner Modeling,
pp. 183–192. Army Research Laboratory, Orlando (2013)
14. Chrysafiadi, K., Troussas, C., Virvou, M.: A framework for creating automated online
adaptive tests using multiple-criteria decision analysis. In: 2018 IEEE International
Conference on Systems, Man, and Cybernetics (SMC), pp. 226–231. IEEE (2018)
15. Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W., Krathwohl, D.R.: Taxonomy of
Educational Objectives, The Classification of Educational Goals, Handbook I: Cognitive
Domain. McKay Press, Midland (1956)
16. Hatzilygeroudis, I., Koutsojannis, C., Papachristou, N.: Adding adaptive assessment
capabilities to an e-learning system. In: SMAP 2006, First International Workshop on
Semantic Media Adaptation and Personalization, Athens, Greece, pp. 68–73 (2006)
17. Zlatović, M., Balaban, I.: Personalizing questions using adaptive online knowledge
assessment. In: eLearning 2015-6th International Conference on e-Learning, Belgrade,
pp. 185–190 (2015)
18. Zlatović, M., Balaban, I., Kermek, D.: Using online assessments to stimulate learning
strategies and achievement of learning goals. Comput. Educ. 91, 32–45 (2015)
19. Conejo, R., Guzmán, E., Trella, M.: The SIETTE automatic assessment environment. Int.
J. Artif. Intell. Educ. 26(1), 270–292 (2016)
20. Maier, U., Wolf, N., Randler, C.: Effects of a computer-assisted formative assessment
intervention based on multiple-tier diagnostic items and different feedback types. Comput.
Educ. 95(1), 85–98 (2016)
The First Programming Language
and Freshman Year in Computer Science:
Characterization and Tips for Better
Decision Making
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 162–174, 2020.
https://doi.org/10.1007/978-3-030-45697-9_16
The First Programming Language and Freshman Year in Computer Science 163
Students often have the perception that the focus is on learning the syntax of the
programming language, leading them to focus on implementation activities rather
than on activities such as planning, designing, or testing [6].
The art of programming involves four steps [7]:
(a) To Think: the conceptualization and analysis phase, in which problems are
divided into small, easily intelligible processes or tasks (the modular structure),
whose organization must follow a descending, top-down programming logic [8];
(b) To Solve: translate the top-down structure into an algorithm [1], which
incorporates the solution rules using pseudo code;
(c) To Define: using variables and data structures, characterize the data model to
be used in the algorithm;
(d) To Formalize: translate the algorithm into a programming language, implement
it and execute it on the computer.
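The four steps above can be illustrated with a toy problem of our own choosing (computing a class average); the pseudo code of step (b) appears as comments:

```python
# (a) Think: split "compute a class average" into subtasks:
#     read the marks, sum them, divide by their count.
# (b) Solve (pseudo code): total <- 0; for each mark: total <- total + mark;
#     average <- total / number of marks.
# (c) Define: marks is a list of numbers; the average is a real number.
# (d) Formalize: translate the algorithm into a programming language.
def class_average(marks):
    total = 0
    for mark in marks:
        total += mark
    return total / len(marks)

print(class_average([12, 15, 18]))  # 15.0
```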
Then comes the most important phase, the true moment of truth: does the
program run, is it error-free, and does it give the correct result? And how can you
be sure that the result is “the” correct solution, or at least “probably” correct?
Table 1 shows how each of ten of the most well-known programming languages
writes the famous “Hello, World!” program.
Each of the ten programming languages presented in the table has a different
notation; however, they are quite similar for a basic program like “Hello, World!”.
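As a single illustrative entry of the kind Table 1 collects (our example, not taken from the table), the program in Python is one line:

```python
# The canonical first program, here in Python.
message = "Hello, World!"
print(message)
```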
Some say that programming is very difficult [9, 10] while for others it may be easy
[11].
Success is achieved through a good deal of study, research, planning, persistence
and preferably a passion for the activity.
This article is divided into five parts: this introduction; the second part,
Programming languages: concept and characterization; the third part, Evolution of
programming languages in undergraduate computer science studies; the fourth part,
Choosing the Initial Programming Language; and the last part, conclusions and
future work.

164 S. R. Sobral

2 Programming Languages: Concept and Characterization
A programming language is a system that allows interaction between human and
machine, being “understood” by both. It is a formal language that specifies a set of
instructions and rules. Programming languages are the medium of expression in the
art of computer programming. Program writing must be succinct and clear, because
programs are meant to be included, modified, and maintained throughout their life:
a good programming language should help others to read programs and to
understand how they work [12]. A program is a set of instructions that make up a
solution after being coded in a programming language [13].
There are several reasons why thousands of high-level programming languages
exist and new ones continue to emerge [14]:
– Evolution: the late 1960s and early 1970s saw a revolution in "structured programming," in which the goto-based flow control of languages such as FORTRAN, COBOL, and BASIC gave way to while loops and case (switch) statements. In the late 1980s, Algol, Pascal and Ada began to give way to object-oriented languages such as Smalltalk, C++ and Eiffel. And so on.
– Special Purposes: Some programming languages are designed for specific purposes.
C is good for low level system programming. Prolog is good for reasoning about
logical relationships between data. Each can be successfully used for a wide range
of tasks, but the emphasis is clearly on the specialty.
– Personal preference: Different people like different things. Some people love C
while others hate it, for example.
According to the Stack Overflow Annual Developer Survey [15], with over 90,000 responses from over 170 countries, in 2019 the most widely used programming language was JavaScript (Table 2).
Table 2. (continued)
PL %
TypeScript 21.2%
C 20.6%
Ruby 8.4%
Go 8.2%
Assembly 6.7%
Swift 6.6%
3 Evolution of Programming Languages in Undergraduate Computer Science Studies
Computer science became a recognized academic field in October 1962 with the creation of Purdue University's first department [18]. The first curriculum studies appeared in March 1968, when the Association for Computing Machinery (ACM) published an innovative and necessary document, Curriculum 68: Recommendations for academic programs in computer science [19], with early indications of curriculum models for programs in computer science and computer engineering.
Prerequisites, descriptions, detailed sketches, and annotated bibliographies were included for each of these courses. As the initial unit, it presented B1. Introduction to Computing (2-2-3), in which an algorithmic language was proposed, recommending that only one language be used, or two "in order to demonstrate the wide diversity of the computer languages available"; "Because of its elegance and novelty, SNOBOL can be used quite effectively for this purpose."
(Footnote 1: Wikipedia is not a reliable source of information because it has collaborative features!)
With the emergence of many new courses and departments, ACM published a new
report, Curriculum’78: recommendations for the undergraduate program in computer
science [20], updating Curriculum 68. It presented for the first time the denomination
CS1: Computer Programming I (2-2-3): “The emphasis of the course is on the tech-
niques of algorithm development and programming with style. Neither esoteric features
of a programming language nor other aspects of computers should be allowed to
interfere with that purpose.”
Despite the importance of Curriculum'78, there was much discussion, particularly regarding the sequence of CS1 and CS2. In 1984 a new report was published, "Recommended curriculum for CS1, 1984" [21], to detail a first computer science course that "emphasizes programming methodology and problem solving." This report refers to Pascal, PL/I and Ada: "These features are important for many reasons. For example, a student cannot reasonably practice procedural and data abstraction without using a programming language that supports a wide variety of structured control features and data structures". They said that "Although FORTRAN and BASIC are widely used, we do not regard either of these languages as suitable for CS1" and that ALGOL "does satisfy the requirements but is omitted from our list of recommended languages simply because it is no longer widely used or supported."
(Footnote 2: (2-2-3) means two hours of lectures and two hours of laboratory per week, for a total of three semester hours of credit.)
The First Programming Language and Freshman Year in Computer Science 167
In 1991 the IEEE (Institute of Electrical and Electronics Engineers) and the ACM joined forces to produce a new document [22]. It broke with some of the concepts of the previous documents, presenting a set of individual knowledge units, each corresponding to a topic that should be addressed at some point in the undergraduate program. In this way, institutions have considerable flexibility in setting up course structures that meet their particular needs.
In 2001 a new document was published [23]. This document questioned the programming-first approach of previous documents, since early programming approaches may lead students to believe that writing a program is the only viable approach to solving problems with a computer, and a focus only on programming reinforces the common misperception that "computer science" equals programming. They said, "In fact,
the problems of the programming-first approach can be exacerbated in the objects-first
model because many of the languages used for object-oriented programming in
industry—particularly C++, but to a certain extent Java as well—are significantly more
complex than classical languages. Unless instructors take special care to introduce the
material in a way that limits this complexity, such details can easily overwhelm
introductory students”.
In 2008 a new report was presented, "Computer Science Curriculum 2008: An Interim Revision of CS 2001" [24], which makes minor revisions to the 2001 document and strongly emphasizes security. Curriculum'2008 reinforces the idea that "Computer
science professionals frequently use different programming languages for different
purposes and must be able to learn new languages over their careers as the field
evolves. As a result, students must recognize the benefits of learning and applying new
programming languages. It is also important for students to recognize that the choice of
programming paradigm can significantly influence the way one thinks about problems
and expresses solutions of these problems. To this end, we believe that all students
must learn to program in more than one paradigm”.
When referring to languages and paradigms, the "Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science" [25] says that the choice of programming language seems to depend on the chosen paradigm, and "There does, however, appear to be a growing trend toward
“safer” or more managed languages (for example, moving from C to Java) as well as
the use of more dynamic languages, such as Python or JavaScript.” “Visual pro-
gramming languages, such as Alice and Scratch, have also become popular choices to
provide a “syntax-light” introduction to programming; these are often (although not
exclusively) used with non-majors or at the start of an introductory course”. And “some
introductory course sequences choose to provide a presentation of alternative pro-
gramming paradigms, such as scripting vs. procedural programming or functional vs.
object-oriented programming, to give students a greater appreciation of the diverse
perspectives in programming, to avoid language-feature fixation, and to disabuse them
of the notion that there is a single “correct” or “best” programming language”.
It is clear that the curriculum recommendations do not indicate which programming language to adopt. However, it is repeatedly stated that the chosen language should have the simplest possible usability and syntax for better learning. Language choice has always been a matter of concern to educators [26–30].
FORTRAN was selected as the high-level language for the first introductory courses, especially those linked to engineering departments; the less widely used COBOL was adopted by departments more closely linked to information systems [31]. At that time one could not talk about methodology: everything was just programming. The emergence of BASIC in 1964 [32] led some departments to use this language with introductory students. In 1972 almost all computer science degree programs used ALGOL, FORTRAN or LISP, while most data processing programs used COBOL. In Britain, BASIC was also important. In the late 1960s, some departments tried various languages such as PL/I [33].
With Dijkstra's manifesto [34], structured programming began to be discussed [35, 36]. With its emergence, the Pascal language [37] seems to have become an almost consensual choice [31]: a language written almost expressly for learning to program, with a very friendly development environment [38], and obviously helped by the proliferation of personal computers and the availability of Pascal compilers [39]. Pascal's decline began in the late 1980s and early 1990s with the rise of object-oriented programming, but also because of Pascal's difficulties with code reuse and because Pascal is not a "real world" language [39]. McCauley and Manaris [40] report that, as a first language, Pascal was used by 36% and C++ by 32% of programs in 1995–1996, but 22% intended to switch to C++, C, Ada or Java. There are
several studies that present the evolution of the languages adopted in initial pro-
gramming curricular units [41, 42] and even lists of programming languages taught in
various courses [43].
In Portugal [44], in the 2016–2017 school year, the most common first-year pro-
gramming language sequence in 46 courses analyzed was C (48%), followed by Java
(22%), C and Haskell (9%), C and Java (4%), and Scheme and Java (4%). There were also residual sequences: Excel and C; Python; Python, HTML and Java; Python and Java; Scheme and C++; and XML and Java. Regarding the ten Portuguese first cycle (or with
integrated master’s degree) courses in Computer Engineering considered most signif-
icant [45], it was found that the most common sequences were only Java or Python and
C (both with 30%), C (20%), Python and Java or Haskell and C (both with 10%).
According to the document “An Analysis of Introductory Programming Courses at
UK Universities” [46]:
– 73.8% use only one programming language; 21% reported using two.
– The most widely used language is Java (46%), followed by the "C family" (C, C++ and C#) (23.6%) and Python (13.2%). JavaScript and Haskell are much less adopted.
– The reason given by 82.7% of those who use Java was that it is object oriented, while 72.7% of those using Python cite pedagogical benefits.
According to the document “Introductory Programming Courses in Australasia in
2016” [46] referring to the Universities of Australia and New Zealand:
– 48 courses studied: 15 used Java, 15 Python, 8 C, 5 C#, 2 Visual Basic and 2
Processing. The remaining ten use another programming language.
– The reasons given for choosing Python and Java are quite different: pedagogical benefits for Python (67%), availability/cost (53%) and platform independence (40%). The reasons given for using Java are industry relevance (92%), object orientation (86%), and platform independence (62%).
"What language? - The choice of an introductory programming language" [47], a study of 496 four-year courses in the United States, reports that Java is used by 41.94%, Python by 26.45%, C++ by 19.35%, C by 4.52%, C# by 0.65% and another language by 7.10%. The reasons for the choice were: programming language features (26.19%), ease of learning (18.81%), job opportunities for students (14.76%), popularity in academia (13.10%), institutional tradition (8.57%), choice of an advisory board (5.95%), and availability of teachers or scheduling restrictions (5%).
A 2016 study [48] analysed 218 colleges and 143 universities in 35 European countries, indicating that the most commonly used programming language was C (30.6%), followed by C++ (21.9%) and Java (20.7%).
A document [10] covering 152 CS1 units from a number of different countries concludes that Java is by far the most common CS1 language, used in 74 (49%) of the 152 programs. The second most frequent is Python, with 36 (24%); C++ appears in 30 (20%), followed by C in 8 (5%). The most obvious change is the rise of Python, which "probably occurred at the expense of Java and C++".
Today, with few exceptions, the academy follows the “real world” and the “C
family” (C, C++, C#), Python, Java, and JavaScript are undoubtedly the programming
languages adopted in introductory programming units.
4 Choosing the Initial Programming Language
In 2004, Eric Roberts [49] commented that the languages, paradigms, and tools used to teach computer science had become increasingly complex, adding pressure to cover more material in an already overcrowded area. The problem of complexity is exacerbated by the fact that languages and tools change rapidly, leading to profound instability in the way computer science is taught. Roberts suggested that Java could be the way forward: "we must take responsibility for breaking this cycle of rapid obsolescence by developing a stable and effective collection of Java-based learning applications that meet the needs of the science education community".
Dijkstra [50] wrote about the importance of the chosen programming language:
“the tools we are trying to use and the language or notation we are using to express or
record our thoughts, are the major factors determining what we can think or express at
all! The analysis of the influence that programming languages have on the thinking
habits of its users, and the recognition that, by now, brainpower is by far our scarcest
resource, they together give us a new collection of yardsticks for comparing the relative
merits of various programming languages.”
When selecting the first programming language for introductory programming
courses, it is important to consider whether it is suitable for teaching and learning. Over
time various pseudo-code languages have been created in search of the perfect teaching
language but no definitive solution has been found [51].
The document "Introductory Programming Subject in European Higher Education" [48] discusses the need to teach introductory programming using educational programming languages. But in the past such languages have been discontinued, the Pascal language being the most visible case.
The programming language chosen for introductory programming courses often seems like a religious or football issue. In the reflection-teaser "The Programming Language Wars" [52] it is even said that "Programming language wars are a major social problem causing serious problems in our discipline", leading to "massively duplicating efforts" and "reinventing the wheel constantly." Choosing the best programming language is often an emotional issue, leading to major debates [53], but for Guerreiro [54]
“It is up to us to have an open, exploratory attitude and at the same time not dog-
matically accept what those who make the most noise say. In fact, I think we should
even pass this on to students too, to help them develop their critical thinking, and to be
able, sooner or later, to choose the languages and tools that can best respond to their
needs”.
In fact, two of the most important points are pedagogical issues and student preparation for the world of work. Parker and Davey [33] define them as pragmatic and pedagogical: industry acceptance and market penetration, as well as the employability of graduates.
Keep in mind that "small programming" needs to be mastered before "large programming" [55], since traditionally it is only "in the third or fourth year" that students "are faced with the problems that arise in the design of large programs." Collberg [55] said that choosing the initial language is not an easy task: the language must satisfy factors such as simplicity, expressiveness, suitability for the tasks, availability of accessible resources, and reliable compilers.
Programming languages are the fundamental basis of programming, but trends
change dramatically over time. Professionals will not use the same programming
language, or even the same programming model, for their entire professional career. In
addition, well-informed language choices can make a huge difference in programmer
productivity and program quality. Therefore, it is crucial that students master the
essential concepts of programming languages, so that they can choose and use lan-
guages based on a deep understanding of the abstractions they express and their ability
to solve programming problems [56].
Choosing the initial programming language to adopt should take several points into account: course objectives, teacher preferences, available implementations, and relationships with other course units, as well as the "real world": students are often more motivated to study a familiar language that is known to be requested by employers [57].
Howatt [58] uses an evaluation method for programming languages based on several items: language design and implementation (accuracy and speed), human factors (usability and ease), software engineering (portability, reliability and reuse) and application domain (mastery of specific applications).
The paradigm chosen can be very important [59], unless one adopts the approach of "exposing students to all major paradigms through the use of a multiparadigmatic language" and does not attempt to identify the "correct paradigm" [60].
The document "A Formal Language Selection Process" [61] presents a selection design based on a weighted multicriteria method, in which evaluation criteria are identified such as Reasonable Financial Cost, Academic/Student Version Availability, Academic
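The weighted multicriteria idea behind such a selection process can be sketched as follows; the criteria names, weights, and scores below are hypothetical placeholders for illustration, not values taken from [61]:

```python
# Hypothetical weighted multicriteria language selection (sketch after the idea in [61]).
# Weights sum to 1; each candidate language gets a 0-10 score per criterion.

weights = {
    "financial cost": 0.4,
    "student version availability": 0.3,
    "academic acceptance": 0.3,
}

candidates = {
    "Python": {"financial cost": 9, "student version availability": 9, "academic acceptance": 8},
    "Java":   {"financial cost": 8, "student version availability": 8, "academic acceptance": 9},
}

def weighted_score(scores, weights):
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidates by descending weighted score.
ranked = sorted(candidates,
                key=lambda lang: weighted_score(candidates[lang], weights),
                reverse=True)
print(ranked[0])  # the top-ranked candidate under these made-up numbers
```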
5 Conclusions
References
1. Knuth, D.: The Art of Computer Programming. Addison-Wesley, Reading (1968)
2. Gries, D.: The Science of Programming. Springer, New York (1981)
3. Dijkstra, E.W.: A Discipline of Programming. Prentice Hall, Englewood Cliffs (1976)
4. Aho, A., Ullman, J.D.: Foundations of Computer Science. Principles of Computer Science
Series, C edn. W. H. Freeman (1994)
5. Sobral, S.R.: B-learning em disciplinas introdutórias de programação. Universidade do
Minho, Guimarães (2008)
6. McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D., Kolikant, Y.B.-D., Laxer,
C., Thomas, L., Utting, I., Wilusz, T.: A multi-national, multi-institutional study of
assessment of programming skills of first-year CS students. In: ITiCSE on Innovation and
Technology in Computer Science Education (2001)
7. Sobral, S.R., Pimenta, P.: O ensino da programação: exercitar a distancia para combate às
dificuldades. In: 4ª Conferência Ibérica de Sistemas e Tecnologias de Informação (2009)
8. Lima, J.R.: Programação de computadores, Porto Editora, Porto (1991)
9. Bergin, S., Reilly, R.: Programming: factors that influence SuccessSusan. In: Proceedings of
the 36th SIGCSE Technical Symposium on Computer Science Education (2005)
10. Becker, B.A., Fitzpatrick, T.: What do CS1 syllabi reveal about our expectations of
introductory programming students? In: 50th ACM Technical Symposium on Computer
Science Education (2019)
11. Luxton-Reilly, A.: Learning to program is easy. In: ACM Conference on Innovation and
Technology in Computer Science Education (2016)
12. Mitchell, J.C.: Concepts in Programming Languages. Cambridge University Press,
Cambridge (2003)
13. Sprankle, M.: Problem Solving and Programming Concepts, 9th edn. Pearson, London (2011)
14. Scott, M.L.: Programming Language Pragmatics, 3rd edn. Elsevier, Amsterdam (2009)
15. Stackoverflow.com: Stackoverflow (2019). https://insights.stackoverflow.com/survey/2019
16. TIOBE Software BV: TIOBE, September 2019. https://www.tiobe.com/tiobe-index/
17. Wikipedia: Programming languages used in most popular websites, September 2019. https://en.wikipedia.org/wiki/Programming_languages_used_in_most_popular_websites
18. Rice, J.R., Rosen, S.: History of the Computer Sciences Department at Purdue University.
Department of Computer Science, Purdue University (1990)
19. Atchison, W.F., Conte, S.D., Hamblen, J.W., Hull, T.E., Keenan, T.A., Kehl, W.B.,
McCluskey, E.J., Navarro, S.O., Rheinboldt, W.C., Schweppe, E.J., Viavant, W., Young Jr.,
D.M.: Curriculum 68: recommendations for academic programs in computer science: a
report of the ACM curriculum committee on computer science. Commun. ACM 11(3), 151–
197 (1968)
20. Austing, R.H., Barnes, B.H., Bonnette, D.T., Engel, G.L., Stokes, G.: Curriculum’78:
recommendations for the undergraduate program in computer science—a report of the ACM
curriculum committee on computer science. Commun. ACM 22(3), 147–166 (1979)
21. Koffman, E.B., Miller, P.L., Wardle, C.E.: Recommended curriculum for CS1, 1984.
Commun. ACM 27(10), 998–1001 (1984)
22. Tucker, A.B., ACM/IEEE-CS Joint Curriculum Task Force: Computing curricula 1991:
report of the ACM/IEEE-CS Joint Curriculum Task Force, p. 154. ACM Press (1990)
23. The Joint Task Force IEEE and ACM: CC2001 Computer Science, Final Report (2001)
24. Cassel, L., Clements, A., Davies, G., Guzdial, M., McCauley, R.: Computer Science
Curriculum 2008: An Interim Revision of CS 2001. ACM (2008)
25. ACM and IEEE Task Force: Computer Science Curricula 2013. ACM and the IEEE Computer
Society (2013)
26. Smith, C., Rickman, J.: Selecting languages for pedagogical tools in the computer science
curriculum. In: Proceedings of the Sixth SIGCSE Technical Symposium on Computer
Science Education (1976)
27. Wexelblat, R.L.: First programming language: consequences (panel discussion) (1979)
28. Tharp, A.L.: Selecting the “right” programming language. In: SIGCSE 1982 Technical
Symposium on Computer Science Education, Indianapolis, Indiana, USA (1982)
29. Duke, R., Salzman, E., Burmeister, J., Poon, J., Murray, L.: Teaching programming to
beginners - choosing the language is just the first step. In: ACSE 2000 Proceedings of the
Australasian Conference on Computing Education (2000)
30. Mannila, L., de Raadt, M.: An objective comparison of languages for teaching introductory
programming. In: 6th Baltic Sea Conference on Computing Education Research: Koli
Calling 2006 (2006)
31. Giangrande Jr., E.: CS1 programming language options. J. Comput. Sci. Coll. 22(3), 153–
160 (2007)
32. Kemeny, J.G., Kurtz, T.E.: BASIC - A Manual for BASIC, the Elementary Algebraic
Language. Dartmouth College (1964)
33. Parker, K., Davey, B.: The history of computer language selection. In: IFIP Advances in
Information and Communication Technology, pp. 166–179 (2012)
34. Dijkstra, E.W.: Go to statement considered harmful. Commun. ACM 11(3), 147–148 (1968)
35. Knuth, D.: Structured programming with go to statements. Comput. Surv. 6(4), 261–301
(1974)
36. Dahl, O., Dijkstra, E., Hoare, C.: Structured Programming. Academic Press Ltd., London
(1972)
37. Wirth, N.: The programming language Pascal. In: Pioneers and Their Contributions to
Software Engineering. Springer (1971)
38. Gupta, D.: What is a good first programming language? Crossroads ACM Mag. Stud. 10(4),
7 (2004)
39. Levy, S.: Computer language usage in CS1: survey results. ACM SIGCSE Bull. 7(3), 21–26
(1995)
40. McCauley, R., Manaris, B.: Computer science degree programs: what do they look like? A
report on the annual survey of accredited programs. ACM SIGCSE Bull. 30(1), 15–19
(1998)
41. Farooq, M.S., Khan, S.A., Ahmad, F., Islam, S., Abid, A.: An evaluation framework and
comparative analysis of the widely used first programming languages. PLoS ONE (2014)
42. Sobral, S.R.: 30 years of CS1: programming languages evolution. In: 12th Annual
International Conference of Education, Research and Innovation (2019)
43. Siegfried, R.M., Siegfried, J., Alexandro, G.: A longitudinal analysis of the Reid list of first programming languages.
Inf. Syst. Educ. J. 10(4), 47–54 (2016)
44. Sobral, S.R.: Bachelor’s and master’s degrees integrated in Portugal in the area of
computing: a global vision with emphasis on programming UCS and programming
languages used. In: 11th Annual International Conference of Education, Research and
Innovation (2018)
45. Sobral, S.R.: Introduction to programming: portrait of higher education in computer science
in Portugal. In: 11th International Conference on Education and New Learning Technologies
(2019)
46. Murphy, E., Crick, T., Davenport, J.H.: An analysis of introductory programming courses at
UK universities. Art Sci. Eng. Programm. 1(2) (2017)
47. Ezenwoye, O.: What language? - the choice of an introductory programming language. In:
48th Frontiers in Education Conference, FIE 2018 (2018)
48. Aleksić, V., Ivanović, M.: Introductory programming subject in European higher education.
Inform. Educ. 15(2), 163–182 (2016)
49. Roberts, E.: The dream of a common language: the search for simplicity and stability in
computer science education. In: 35th SIGCSE Technical Symposium on Computer Science
Education (2004)
50. Dijkstra, E.W.: The humble programmer. Commun. ACM 15(10), 859–866 (1972)
51. Laakso, M., Kaila, E., Rajala, T., Salakoski, T.: Define and visualize your first programming
language. In: 8th IEEE International Conference on Advanced Learning (2008)
52. Stefik, A., Hanenberg, S.: The programming language wars: questions and responsibilities
for the programming language community. In: 2014 ACM International Symposium on New
Ideas, New Paradigms, and Reflections on Programming & Software (2014)
53. Goosen, L.: A brief history of choosing first programming languages. In: History of
Computing and Education 3 (2008)
54. Guerreiro, P.: A mesma velha questão: como ensinar Programação? In: Quinto Congreso
Iberoamericano de Educación Superior (1986)
55. Collberg, C.S.: Data structures, algorithms, and software engineering. In: Software
Engineering Education - SEI Conference 1989 (1989)
56. Bruce, K., Freund, S.N., Harper, R., Larus, J., Leavens, G.: What a programming languages
curriculum should include. In: SIGPLAN Workshop on Undergraduate Programming
Language Curricula (2008)
57. King, K.N.: The evolution of the programming languages course. ACM SIGCSE Bull. 24
(1), 213–219 (1992)
58. Howatt, J.: A project-based approach to programming language evaluation. ACM SIGPLAN
Not. 30(7), 37–40 (1995)
59. Luker, P.A.: Never mind the language, what about the paradigm? In: Twentieth SIGCSE
Technical Symposium on Computer Science Education (1989)
60. Budd, T.A., Pandey, R.K.: Never mind the paradigm, what about multiparadigm languages?
ACM SIGCSE Bull. 27(2), 25–30 (1995)
61. Parker, K.R., Chao, J.T., Ottaway, T.A., Chang, J.: A formal language selection process.
J. Inf. Technol. Educ. 5(1), 133–151 (2006)
62. Alzahrani, N., Vahid, F., Edgcomb, A., Nguyen, K., Lysecky, R.: Python versus C++: an
analysis of student struggle on small coding exercises in introductory programming courses.
In: 49th ACM Technical Symposium on Computer Science Education (2018)
63. Wainer, J., Xavier, E.: A controlled experiment on Python vs C for an introductory
programming course: students’ outcomes. ACM Trans. Comput. Educ. 18(3), 1–16 (2018)
64. McMaster, K., Sambasivam, S., Rague, R., Wolthuis, S.: Java vs. Python coverage of
introductory programming concepts: a textbook analysis. Inf. Syst. Educ. J. 15(3), 4–13 (2017)
65. Farag, W., Ali, S., Deb, D.: Does language choice influence the effectiveness of online
introductory programming courses? In: 14th Annual ACM SIGITE Conference on
Information Technology Education (2013)
66. Koffman, E.B., Stemple, D., Wardle, C.E.: Recommended curriculum for CS2, 1984: a
report of the ACM curriculum task force for CS2. Commun. ACM 28(8), 815–818 (1985)
67. The Joint Task Force for Computing Curricula 2005: Computing Curricula 2005: The
Overview Report. ACM (2005)
68. The Joint Task Force on Computing Curricula: Curriculum Guidelines for Undergraduate
Degree Programs in Software Engineering. ACM (2004)
69. The Joint Task Force on Computing Curricula: SE2004: Curriculum Guidelines for
Undergraduate Degree Programs in Software Engineering. ACM (2004)
Design of a Network Learning System
for the Usage of Surgical Instruments
Abstract. To improve the clinical training of nursing staff, this study combines e-learning methods with situated simulation and follows the six steps of the design science research process to construct a network learning system for surgical instruments. Based on lean management concepts, the number of surgical instruments is restructured and basic equipment packages are established, eliminating the excess wastes of overproduction, transportation, motion and over-processing. Then, the surgical instrument images are linked to a relational database to establish the network learning system. To evaluate the effectiveness of the system, learning outcomes and subjective learning satisfaction were measured. The results show that the system can improve the learning effectiveness of new nursing staff in the operating room and enhance their professional knowledge.
1 Introduction
With the development of information technology and the popularity of the network environment, many companies use e-learning technology and emphasize learner-centered teaching to diversify staff training. In addition, the innovative teaching mode of situated simulation is adopted to speed up staff members' competency development. Bertoncelj pointed out that competency refers to the capability required in a specific position in the workplace, which includes personal knowledge, skills, ability and attitude [1], so that the responsibilities and performance expectations of the job can be successfully met. Especially in the medical industry, the relevant personnel are required to have a high degree of expertise, so whether competencies are fully functional is very important for hospital management. In addition, nursing staff are an important human resource for the hospital; competency management of nursing staff therefore becomes one of the major tasks for medical institution operators. With the innovation of surgical instruments and in the face of the complicated equipment and procedures used during surgical operations, how to
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 175–181, 2020.
https://doi.org/10.1007/978-3-030-45697-9_17
176 T.-K. Hwang et al.
properly operate each instrument and make use of its main functions is the learning focus for scrubbing nurses. The correctness of instrument preparation most directly affects the operation process and the safety of patients. New nursing staff working as scrubbing nurses often feel a lot of pressure. Especially when they are not familiar with the surgical instruments, pass an instrument incorrectly or are insufficiently prepared, they often suffer the blame and complaints of the surgeon, which affects the teamwork atmosphere; the surgery may even be forced to extend the operation time. Thus, new nursing staff in the operating room endure a lot of frustration and may lose the intention to continue working in the position.
This study adopts a design science approach to develop a surgical instrument network learning system; the process includes empathy to identify problems, defining the objectives of the solution, design and development, prototype display, test and evaluation, and communication and correction. The needs were collected through in-depth interviews and comments in a symposium. Then, we modularized the surgical instruments and built the database. Through the situated simulation of interactive learning, new nursing staff in the operating room can learn effectively in the most economical way. This can strengthen the cultivation of nursing talent, enhance learning effectiveness and working efficiency, and maintain the quality of surgery that patients should receive.
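A minimal sketch of how instrument records and their images might be linked in a relational database, as described above; the table and column names are assumptions for illustration, not the study's actual schema:

```python
import sqlite3

# Minimal sketch: instruments grouped into basic equipment packages,
# each instrument row linked to an image file path.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE package (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE instrument (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    image_path TEXT NOT NULL,          -- link to the instrument photo
    package_id INTEGER REFERENCES package(id)
);
""")
con.execute("INSERT INTO package VALUES (1, 'basic laparotomy set')")
con.execute("INSERT INTO instrument VALUES (1, 'Mayo scissors', 'img/mayo.png', 1)")

# A learning page can then fetch every instrument of a package with its image:
rows = con.execute("""
    SELECT i.name, i.image_path FROM instrument i
    JOIN package p ON p.id = i.package_id
    WHERE p.name = 'basic laparotomy set'
""").fetchall()
print(rows)
```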
2 Literature
To achieve lean management of surgical instruments, this study adopts a design science
approach to develop a network learning system. We therefore review the related literature
on the design science approach, network learning theory, and lean management as follows.
that can strengthen knowledge and improve performance [9]. The research of
Bryant et al. showed that the effect of e-learning is better than that of traditional
learning [2]. Pedaste et al. indicated that in a web-based virtual learning environment,
users can explore an actual learning process [7]. Through appropriate learning
arrangements, users can even cultivate critical thinking and problem-solving abilities.
Many studies explore the willingness to use e-learning and the associated behavior. The
results show that an easier-to-use learning system, the ability to observe others' use, and
the opportunity to learn and try first all help to enhance the willingness to use [3, 4].
3 Research Methodology
3.1 Research Procedure
This study adopts a design science approach. The procedure includes the following six
steps.
1. Empathy to identify problems
The research problems should first be clearly defined, together with an illustration of
the value of the solution. Most current research related to surgical instruments focuses
on establishing instrument management standards and sterilization tracking systems; less
research explores the instrument learning needs of operating room scrub nurses. During
surgery, scrub nurses often face a lack of instrument-related education, and there is no
unified artifact from which nursing staff can learn.
2. Define the objectives of the solution
Following the problem definition, relevant knowledge is applied to assess the
availability of a solution and infer the goal. The goal of this study is to develop a
surgical instrument network learning system. It is therefore necessary to establish the
correspondence between the names of surgical instruments and the names of surgical
operations. In addition, the data required for subsequent system development should also
be considered.
3. Design and development
At this stage, the expected function of the output artifact and its conceptual framework
must be determined, and the actual artifact is then constructed. This study integrates
existing research concepts and methods to provide a surgical instrument network learning
system suitable for operating room nursing staff.
4. Prototype display
At this stage, it is necessary to show how the output artifact of the third step is used
to solve the learning problems.
5. Test evaluation
The objectives of the solution defined in the second step must be compared with the
system produced in the fourth step, observing and measuring whether the output proposed
in the third step helps to solve the problem well. At the end of this stage, researchers
can decide whether to return to the third step to improve the output and give subsequent
improvement suggestions.
This study selected satisfaction with system learning and with self-learning performance
as the evaluation criteria for feasibility. Finally, the method and implementation
process of the research are discussed with the informatics nurses responsible for system
development.
6. Communication and correction
Once the research has been proposed, it is essential to communicate with other related
people and professionals about the research issues, the importance, practicality, and
novelty of the output, and the rigor and efficiency of the design. The design prototype
is thereby improved.
Design of a Network Learning System 179
In accordance with the six steps of design science and the structure of this study, the
design and development of the surgical instrument network learning system are described as
follows.
content, online web learning, and overall appeal. Satisfaction with self-learning
effectiveness also reaches 99.2%, indicating that the established system fits the learning
pattern of new nursing staff.
5 Conclusions
The purpose of this study is to design and develop a surgical instrument network
learning system to optimize the surgical device management process and improve the
satisfaction of surgical instrument network learning. The system design is divided into
four blocks, which are instrument area, teaching video area, test area and satisfaction
survey area. The survey results show that the participants are satisfied with the
enhancement of their instrument knowledge and their ability to apply what they have learned
at work. There is also a positive correlation between perceived learning effectiveness and
overall appeal. Fundamentally, the attitudes of network learning system users are shaped by
their subjective perception of the system. Therefore, when planning a network learning
system, a simple and clear design is important; it should include an interface with clear
guidance to increase ease of use.
References
1. Bertoncelj, A.: Manager’s competencies framework: a study of conative component.
Ekonomska Istrazivanja 23(4), 91–101 (2010)
2. Bryant, K., Campbell, J., Kerr, D.: Impact of web based flexible learning on academic
performance in information systems. J. Inf. Syst. Educ. 14, 41–50 (2003)
3. Hsu, C.-N.: Combining innovation diffusion theory with technology acceptance model to
investigate business employees’ behavioral intentions to use e-learning system. Master’s
thesis. National Central University, Taoyuan City, Taiwan (2010)
4. Hsu, S.-C., Liu, C.-F., Weng, R.-H., Chen, C.-J.: Factors influencing nurses’ intentions
toward the use of mobile electronic medical records. Comput. Inform. Nurs. 31, 124–132
(2013)
5. Liker, J.K.: The Toyota Way: 14 Management Principles from the World’s Greatest
Manufacturer. McGraw Hill, New York (2004)
6. March, S., Smith, G.: Design and natural science research on information technology. Decis.
Support Syst. 15, 251–266 (1995)
7. Pedaste, M., Sarapuu, T.: Developing an effective support system for inquiry learning in a
Web-based environment. J. Comput. Assist. Learn. 22(1), 47–62 (2006)
8. Peffers, K., Tuunanen, T., Rothenberger, M.A., Chatterjee, S.: A design science research
methodology for information systems research. J. Manag. Inf. Syst. 24(3), 45–77 (2007)
9. Rosenberg, M.J.: E-learning: Strategies for Delivering Knowledge in the Digital Age.
McGraw-Hill, New York (2001)
10. Womack, J.P., Jones, D.T.: Lean Thinking: Banish Waste and Create Wealth in Your
Corporation. Free Press, New York (2003)
CS1 and CS2 Curriculum Recommendations:
Learning from the Past to Try
not to Rediscover the Wheel Again
1 Introduction
The initial programming curriculum units are of great importance to the academic life and
professional future of a computer science student. The content of these units, their
objectives, the programming languages, and the way everything is learned and taught all
need careful thought to be successful.
Since the 1960s, efforts have been made to provide curriculum recommendations for
universities around the world; the work of the Association for Computing Machinery (ACM),
later in conjunction with the Institute of Electrical and Electronics Engineers (IEEE), has
been of huge relevance to the computer science field. The lessons of those studies should
be taken on board, even when they seem outdated or no longer make sense. Of course, these
documents are closely linked to the emergence of new programming languages and paradigms,
but they all share the need to give directions to the
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 182–191, 2020.
https://doi.org/10.1007/978-3-030-45697-9_18
CS1 and CS2 Curriculum Recommendations 183
course directors and teachers of these curricular units. Evolution is very important, but
it is even more important not to follow trends and affirm a change merely for the sake of
modernity or because another university has made it.
At the beginning of this century, a distinction was made between Computer Engineering (CE),
Computer Science (CS), Information Systems (IS), Information Technology (IT), and Software
Engineering (SE). The new curriculum reports have thus come to focus on specific areas,
with core and elective areas set out for each of the specific cases.
This article describes each of the curriculum recommendation documents and identifies key
points for the initial programming units. It reflects on the path travelled so far, the
present, and the clues for the coming years.
The first curriculum studies for undergraduate programs in Computer Science appeared in
March 1968, when the Association for Computing Machinery (ACM) published an
innovative and necessary document, Curriculum 68: Recommendations for academic
programs in computer science [1], with early indications of curriculum models for
programs in computer science and computer engineering.
With the emergence of many new courses and departments, ACM published a new
report, Curriculum’78: recommendations for the undergraduate program in computer
science [2], updating Curriculum 68.
Despite the importance of Curriculum'78, there has been much discussion, particularly
regarding the CS1 and CS2 sequence¹. In 1984 a new report was published,
"Recommended curriculum for CS1, 1984" [3], and in 1985 a further document appeared,
"Recommended curriculum for CS2, 1984" [4].
In 1991, the IEEE (Institute of Electrical and Electronics Engineers) and ACM joined
forces for a new document [5]; this report does not contain a single prescription of
courses for all undergraduate programs, but a collection of subject matter modules called
knowledge units.
In 2005, the Computing Curricula 2005: The Overview Report [6] covered undergraduate
degree programs in Computer Engineering, Computer Science, Information Systems,
Information Technology, and Software Engineering, and provided undergraduate curriculum
guidelines for five defined sub-disciplines of computing:
(1) Computer Engineering (Curriculum Guidelines for Undergraduate Degree Programs in
Computer Engineering 2004 [7] and then 2016 [8]).
(2) Computer Science (Computing Curricula 2001 [9], then in 2008 [10] and 2013 [11])
(3) Information Systems (Association for Computing Machinery (ACM), Association
for Information Systems (AIS) and Association of Information Technology
¹ The terms CS1 and CS2 have been used since 1978 [2] to designate the first two courses in
the introductory sequence of a computer science program: introduction to programming
courses as CS1 and basic data structures courses as CS2, or Computer Programming I (CS1) as
the initial unit, prerequisite for Computer Programming II (CS2).
184 S. R. Sobral
Programming. Both follow the 2-2-3 model: two hours of lectures and two hours of laboratory
per week, for a total of three semester hours of credit.
Curriculum’78 [2] introduced a new core structure, as shown in the following
Fig. 2. Computer Programming I (CS1) is the initial unit, prerequisite for Computer
Programming II (CS2), which is prerequisite for Introduction to Computer Systems
(CS3), Introduction to Computer Organization (CS4), and Introduction to File Pro-
cessing (CS5). CS1 and CS2 have the same model as the initial course units of the
previous report: 2-2-3.
Curriculum'91 [5] made a change from the previous recommendations: this report does not
contain a single prescription of courses for all undergraduate programs. Each
knowledge unit corresponds to a topic that must be covered at some point during the
undergraduate curriculum. It contains “a set of curricular and pedagogical considera-
tions that govern the mapping of the common requirements and advanced/supplemental
material into a complete undergraduate degree program” and a collection of subject
matter modules called knowledge units that comprise the common requirements for all
undergraduate programs. Each individual institution has the flexibility to assemble the
knowledge units into course structures that fit its particular needs. The reason given
is that “a curriculum for a particular program depends on many factors, such as the
purpose of the program, the strengths of the faculty, the backgrounds and goals of the
students, instructional support resources, infrastructure support and, where desired,
accreditation criteria. Each curriculum will be site-specific, shaped by those responsible
for the program who must consider factors such as institutional goals, opportunities and
constraints, local resources, and the preparation of the students”. The appendix of the
full report contains the 12 sample curricula, showing how the knowledge units can be
combined to form courses and programs (see Table 1): Implementations A to K: a
Program in Computer Engineering, in Computer Engineering (Breadth-First), in
Computer Engineering (Minimal Number of Credit-Hours), in Computer Science, in
Computer Science (Breadth-First), in Computer Science (Theoretical Emphasis), in
Computer Science (Software Engineering Emphasis), a Liberal Arts Program in
Computer Science (Breadth-First), a Program in Computer Science and Engineering, a
Table 4. Three- and two-course sequences for each implementation strategy, CC2001.

Imperative first
  Three-course sequence: CS101I. Programming Fundamentals; CS102I. The Object-Oriented
  Paradigm; CS103I. Data Structures and Algorithms
  Two-course sequence: CS111I. Introduction to Programming; CS112I. Data Abstraction
Objects first
  Three-course sequence: CS101O. Introduction to Object-Oriented Programming; CS102O.
  Objects and Data Abstraction; CS103O. Algorithms and Data Structures
  Two-course sequence: CS111O. Object-Oriented Programming; CS112O. Object-Oriented Design
  and Methodology
Functional first
  Two-course sequence: CS111F. Introduction to Functional Programming; CS112F. Objects and
  Algorithms
Breadth first
  A one-semester course (CS100B) that serves as a prerequisite, and a preliminary
  implementation of a breadth-first introductory sequence (CS101B/102B/103B) that seeks to
  accomplish in three semesters what has proven to be so difficult in two
Algorithms first
  Two-course sequence: CS111A. Introduction to Algorithms and Applications; CS112A.
  Programming Methodology
Hardware first
  Two-course sequence: CS111H. Introduction to the Computer; CS112H. Object-Oriented
  Programming Techniques
CS2008 [10] "only" updates the CS2001 Body of Knowledge and adds commentary and advice in
the accompanying text.
The CS2013 report [11] includes examples of courses from a variety of universities and
colleges to illustrate how topics in the Knowledge Areas may be covered and combined in
diverse ways. A separate chapter discusses introductory courses, identifying factors and
tradeoffs that must be considered when deciding what should be covered early in a
curriculum: design dimensions, pathways through introductory courses, programming focus,
programming paradigm and choice of language, software development practices, parallel
processing, platform, and mapping to the Body of Knowledge. The examples of initial courses
include CS1101: Introduction to Program Design, WPI (Worcester, MA); COS 126: General
Computer Science (Princeton University, NJ); a background course, CS 106A: Programming
Methodology (Stanford University); the sequence CS 115 Introduction to Computer Programming
and CS 215 Introduction to Program Design, Abstraction and Problem Solving (Bluegrass
Community and Technical College); and CSCI 134 Introduction to Computer Science and CSCI
136 Data Structures and Advanced Programming (Williams College).
Table 5. Introductory computing sequence, starting software engineering in the first or
second year, SE2004.

A: Start software engineering in first year
  1st semester: SE101 Introduction to Software Engineering and Computing
  2nd semester: SE102 Software Engineering and Computing II
B: Introduction to software engineering in second year
  1st semester: CS101I Programming Fundamentals
  2nd semester: CS102I The Object-Oriented Paradigm
4 Conclusion
Since Curriculum'68 there have been six new general reports, plus two or three more for
each of the five specific areas defined in 2005 by CC2005: Computer Engineering, Computer
Science, Information Systems, Information Technology, and Software Engineering. While the
initial curricula included a great deal of information about unit content (including
programming languages and even an annotated bibliography), the per-area curricula are no
longer so complete.
The first curriculum recommendations were straightforward, whereas the current
recommendations are modular: it is now more important to look at examples of curricula that
can be considered successful and to implement them according to several factors, namely the
length of the degrees, the students' knowledge from secondary education, and whether the
university can impose prerequisites. In this way each university adopts the
recommendations, fitting them to its own reality.
The objectives of the course (and the intended career opportunities for undergraduate
students) are crucial for curriculum design. In the case of this article, looking only at
the initial programming courses, we see the differences between areas: the IS2010 model
curriculum removed application development from the prescribed core, while IT2008 and
IT2017 each presented one course.
In this context, and after listing the introductory programming curricular units for the
first year of undergraduate degrees, it would be interesting to list the current curricula
of the best universities in the world and to see what is being done on each continent, in
each country, and in each area.
References
1. Atchison, W.F., Conte, S.D., Hamblen, J.W., Hull, T.E., Keenan, T.A., Kehl, W.B.,
McCluskey, E.J., Navarro, S.O., Rheinboldt, W.C., Schweppe, E.J., Viavant, W., Young Jr.,
D.M.: Curriculum 68: recommendations for academic programs in computer science: a
report of the ACM curriculum committee on computer science. Commun. ACM 11(3), 151–
197 (1968)
2. Austing, R.H., Barnes, B.H., Bonnette, D.T., Engel, G.L., Stokes, G.: Curriculum ‘78:
recommendations for the undergraduate program in computer science—a report of the ACM
curriculum committee on computer science. Commun. ACM 22(3), 147–166 (1979)
3. Koffman, E.B., Miller, P.L., Wardle, C.E.: Recommended curriculum for CS1. Commun.
ACM 27(10), 998–1001 (1984)
4. Koffman, E.B., Stemple, D., Wardle, C.E.: Recommended curriculum for CS2, 1984: a
report of the ACM curriculum task force for CS2. Commun. ACM 28(8), 815–818 (1985)
5. Tucker, A.B.: ACM/IEEE-CS Joint Curriculum Task Force. Computing curricula 1991:
report of the ACM/IEEE-CS Joint Curriculum Task Force, p. 154. ACM Press (1990)
6. The Joint Task Force for Computing Curricula 2005, “Computing Curricula 2005: The
Overview Report”. ACM (2005)
7. The Joint Task Force on Computing Curricula, “Curriculum Guidelines for Undergraduate
Degree Programs in Software Engineering”. ACM (2004)
8. Joint Task Group on Computer Engineering Curricula, “CE2016: Computer Engineering
Curricula 2016”. ACM (2016)
9. The Joint Task Force IEEE and ACM, “CC2001 Computer Science, Final Report” (2001)
10. Cassel, L., Clements, A., Davies, G., Guzdial, M., McCauley, R.: Computer Science
Curriculum 2008: An Interim Revision of CS 2001. ACM (2008)
11. Task force ACM e IEEE, “Computer Science Curricula 2013,” ACM and the IEEE
Computer Society (2013)
12. Davis, G.B., Gorgone, J.T., Couger, J.D., Feinstein, D.L., Longenecker Jr., H.E.: Model
Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems.
ACM (1997)
13. Gorgone, J.T., Davis, G.B., Valacich, J.S., Topi, H., Feinstein, D.L., Longenecker Jr., H.E.:
IS2002: Curriculum Guidelines for Undergraduate Degree Programs in Information
Systems. ACM (2002)
14. Topi, H., Valacich, J.S., Wright, R.T., Kaiser, K.M., Nunamaker Jr., J., Sipior, J.C., de
Vreede, G.: IS2010 Curriculum Update: Curriculum Guidelines for Undergraduate Degree
Programs in Information Systems. ACM (2010)
15. Lunt, B.M., Ekstrom, J.J., Gorka, S., Hislop, G., Kamali, R., Lawson, E., LeBlanc, R.,
Miller, J., Reichgelt, H.: IT2008: Computing Curricula Information Technology Volume.
ACM (2008)
16. Task Group on Information Technology Curricula, “IT2017: Curriculum Guidelines for
Baccalaureate Degree Programs in Information Technology”. ACM (2017)
17. The Joint Task Force on Computing Curricula, “SE2004: Curriculum Guidelines for
Undergraduate Degree Programs in Software Engineering”. ACM (2004)
18. Joint Task Force on Computing Curricula, “SE2014: Curriculum Guidelines for Under-
graduate Degree Programs in Software Engineering”. ACM (2014)
On the Role of Python
in Programming-Related Courses
for Computer Science and Engineering
Academic Education
1 Introduction
Computer literacy, including computer programming, has received much stronger attention in
primary, secondary, and academic education during the last decade than in previous decades.
This trend spans virtually all application domains of science and engineering. While in the
past it was thought of as a specific computer science and engineering skill, computer
programming is now seen as playing a major role in teaching many academic disciplines with
a computation-oriented focus in engineering, as well as in the natural and social sciences.
At the same time, the list of programming languages has expanded considerably, with new
languages emerging to address the recent needs of developers on the various devices,
platforms, and networks in use nowadays.
These contexts and trends raise new challenges for computer science and engi-
neering teachers and educators in designing novel methods and approaches of
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 192–202, 2020.
https://doi.org/10.1007/978-3-030-45697-9_19
On the Role of Python 193
with updated data extracted from many relevant sources: Google Search, Google Trends,
Twitter, GitHub, Stack Overflow, Reddit, Hacker News, CareerBuilder, IEEE Job Site, and
IEEE Xplore. The selected data sources have very broad coverage, including both academic
and industry-relevant repositories. The application is configured to support four default
rankings (IEEE Spectrum, Trending, Jobs, Open), and the ranking is adjustable by manually
configuring the weights of the included data sources.
Table 1 presents Python's yearly rankings in the IEEE Top Programming Languages. The table
clearly shows that Python has held the highest rank for the last three years, independently
of the ranking criteria.
Project Euler serves a community of users interested in mastering problem solving on the
border between computer science and mathematics, using algorithms and computer programming;
the community thus has both educational/training and intellectually challenging goals.
There are currently 687 problems in Project Euler, and their number is constantly growing.
The community has 957,235 registered members, using 100 programming languages, who have
solved at least one problem [6]. Table 2 presents the five highest numbers of persons using
a given programming language in Project Euler, showing that Python holds the top position.
Table 2. The top 5 numbers of persons using a given programming language in Project Euler,
among 957,235 registered members actively using 100 programming languages (values recorded
on November 9, 2019)
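To give a flavor of the problems this community works on, Project Euler's first and easiest problem asks for the sum of all natural numbers below 1000 that are multiples of 3 or 5. A straightforward Python solution (our illustration, not part of the study):

```python
# Project Euler, Problem 1: sum of all natural numbers below 1000
# that are multiples of 3 or 5.
def solve(limit: int = 1000) -> int:
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(solve())  # 233168
```

Python's generator expressions make such number-theoretic problems almost as terse as their mathematical statements, which is part of its appeal in this community.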
The topics that we cover during our 14-week AD course are as follows: introduction to the
analysis and design of algorithms, divide and conquer, correctness and testing of
algorithms, sorting algorithms, abstract data types and lists, stacks and queues, graphs
and trees, dynamic programming, greedy algorithms, backtracking, and an introduction to
NP-completeness. A similar mandatory course is also part of the curriculum at the Faculty
of Sciences, Novi Sad. It is based on the Java programming language, as it immediately
follows the "Introduction to Programming in Java" course. However, bearing in mind the
popularity of Python in local ICT companies as well, we decided to offer a similar course
in Python as an elective. Some basic elements of Python are planned to be presented to
the students at the beginning of the course.
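To illustrate the flavor of the divide-and-conquer and sorting material listed above, a compact merge sort in Python (our own illustrative sketch, not actual course material):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split the list, recursively sort, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The recursion mirrors the O(n log n) analysis taught in class: log n levels of splitting, with O(n) merging work per level.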
Practical work is focused on implementation of and experiments with algorithms using
Standard C and the Python programming language. Students are encouraged to use online
resources [2] that we find appropriate for mastering introductory programming.
Students benefit from having previously been exposed to the basics of computer programming
using Standard C during their first semester. One of our goals is to guide students during
the AD course in using C for the efficient implementation of algorithms. Moreover, we
motivate the selection of Python as a second implementation language and then briefly
expose students to Python, with a focus on its basic elements, including program
structure, functions, modules, and higher-level data structures (lists and dictionaries).
As students were already
exposed to the basics of computer programming during the first semester, one
aspect that we encourage and emphasize is self-learning.
It is worth noting the key aspects that we emphasize while teaching the AD course:
– Correctly introducing low-level aspects of data structures: pointers and linked
structures;
– Accurately evaluating the performance of different ways of encoding the same
solution/algorithm, for example imperative versus declarative programming;
– Presenting two different programming languages (C and Python), outlining the
differences between low-level and high-level constructs, also in relation to the
previous two items;
– Issues of aliasing and of shallow versus deep copies of Python's complex objects were
better explained using pointer diagrams. This was easier to understand after students
had first been exposed to the low-level details of pointers and references, which in our
opinion are better introduced using C examples. This closed the gap between high-level
Python structures and the low-level details needed to understand their implementation
correctly;
– Python is a multi-paradigm high-level language, so the same solution can be realized in
different ways, raising issues such as verbosity versus conciseness and readability
versus efficiency. For example, using Python's high-level list and set comprehensions can
result in very compact representations of some algorithms, but such different solutions
do not all have the same efficiency. Very concise solutions can often be read almost as
mathematical specifications, while at the same time being very inefficient in running
time. We addressed this issue by discussing solution features such as readability,
conciseness, and theoretical time complexity, as well as
experimental evaluation of running time.
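The aliasing and shallow-versus-deep-copy semantics mentioned above can be demonstrated in a few lines of Python. This is a minimal sketch of the classroom point, not the course's actual examples:

```python
import copy

row = [0, 0]
grid = [row, row]            # aliasing: both entries point to the same list object
grid[0][0] = 1
print(grid)                  # [[1, 0], [1, 0]] -- the "other" row changed too

shallow = copy.copy([row, [0, 0]])   # new outer list, but inner lists are shared
deep = copy.deepcopy([row, [0, 0]])  # fully independent structure
row[1] = 9
print(shallow[0])            # [1, 9] -- still aliased to row
print(deep[0])               # [1, 0] -- unaffected by the mutation of row
```

A pointer diagram of `grid`, `shallow`, and `deep` makes it immediately clear why the first two track mutations of `row` while the deep copy does not, which is exactly the C-pointers intuition the course builds first.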
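The conciseness-versus-efficiency trade-off can likewise be made concrete. The first predicate below reads almost like the mathematical specification of "the list contains a duplicate", but runs in quadratic time, while the set-based version is linear; this is an illustrative sketch of the kind of comparison discussed, not taken from the course materials:

```python
import timeit

def has_dup_spec(xs):
    # Reads like the math (exists i < j with xs[i] == xs[j]) but is O(n^2).
    return any(xs[i] == xs[j]
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

def has_dup_fast(xs):
    # Same predicate via a set: O(n) expected time.
    return len(set(xs)) != len(xs)

data = list(range(1000))  # no duplicates: the worst case for the O(n^2) version
assert not has_dup_spec(data) and not has_dup_fast(data)
slow = timeit.timeit(lambda: has_dup_spec(data), number=1)
fast = timeit.timeit(lambda: has_dup_fast(data), number=1)
print(f"spec: {slow:.4f}s  fast: {fast:.4f}s")
```

Running both under `timeit` is exactly the experimental evaluation of running time mentioned above, and pairing it with the big-O argument shows students that the two analyses agree.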
The second aspect concerns students' practical work during the AD course. This was
organized as two lab assignments (LA1 and LA2) and one course assignment (CA), as follows.
From a total of 169 students enrolled in the AD course, there were 143 submissions of LA1
(21 also done with Python), 138 submissions of LA2 (10 also done with Python), and 138
submissions of CA (90 also done with Python). Some of the results obtained in the AD
assignments are presented in Table 3. Note that the use of Python in LA1 and LA2 was
optional and the use of C mandatory, while the use of both languages was mandatory in CA.
This explains why the highest Python usage figures were obtained in CA.
Table 3. Assignment results in the AD course. Each assignment was graded with a number of
points from 0 to 15.

           Python                 C
       0–4  5–9  10–15    0–4   5–9  10–15
LA1     3    6    12       1    16   126
LA2     1    7     2       3    55    80
CA      2   18    70       3    24   101
200 C. Bădică et al.
– The first lab assignment (LA1) required the use of Prolog (compulsory) to solve
logic-based reasoning and representation problems;
– The second lab assignment (LA2) required the use of Python (compulsory) to solve an
AI-based algorithmic problem;
– For the course assignment (CA), students were allowed to use a programming language of
their own choice and had to motivate their decision. They were provided with an initial
list of possible options;
– Assignment tasks required students to approach AI problems and algorithms addressing the
AI methods discussed during the lectures. For each solution, they had to prepare
non-trivial test cases and use them to experiment with the code. Finally, they had to
describe their achievements in a technical report.
From a total of 194 students enrolled in the AI course, there were 140 submissions of LA2
(all done with Python) and 165 submissions of CA (121 done with Python). Some of the
results obtained in the AI assignments are presented in Table 4. The results of LA1 are not
shown, as LA1 involved the use of Prolog and is not relevant here. Also, the use of Python
was mandatory for LA2, which explains the figures shown in the table. Finally, in CA the
choice of programming language was left to the students, and we can easily see that the
highest figures were obtained with Python by students with low, average, and high
performance alike.
Table 4. Assignment results in the AI course. Each assignment was graded with a number of
points from 0 to 100.

                     LA2                      CA
             0–49  50–79  80–100     0–49  50–79  80–100
Python        30     80     30        24     22     75
C/C++          −      −      −        15     11      4
Java           −      −      −         2     10      1
JavaScript     −      −      −         0      1      0
References
1. Bădică, C., Vidaković, M., Ilie, S., Ivanović, M., Vidaković, J.: Role of agent
middleware in teaching distributed systems and agent technologies. J. Comput. Inf.
Technol. 27(1) (2019). http://cit.fer.hr/index.php/CIT/article/view/4464
2. Becheru, A., Bădică, C.: Online resources for teaching programming to first year
students. In: Vlada, M., Albeanu, G., Adascalitei, A., Popovici, M. (eds.) Proceedings of
the 11th International Conference on Virtual Learning, pp. 138–144. Bucharest University
Press (2016). http://c3.icvl.eu/papers2016/icvl/documente/pdf/section1/section1_paper17.pdf
3. Brusilovsky, P., Malmi, L., Hosseini, R., Guerra, J., Sirkiä, T., Pollari-Malmi, K.: An
integrated practice system for learning programming in Python: design and evaluation. Res.
Pract. Technol. Enhanc. Learn. 13(1) (2018). https://doi.org/10.1186/s41039-018-0085-9
4. Cass, S.: The Top Programming Languages 2019. IEEE Spectrum, 6 September 2019.
https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019
5. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 3rd
edn. The MIT Press, Cambridge (2009)
6. Project Euler. https://projecteuler.net/
7. Fangohr, H.: A comparison of C, MATLAB, and Python as teaching languages in engineering.
In: Proceedings of the 4th International Conference on Computational Science – ICCS 2004
(Part IV). Lecture Notes in Computer Science, vol. 3039, pp. 1210–1217. Springer,
Heidelberg (2004). https://doi.org/10.1007/978-3-540-25944-2_157
8. Hromkovič, J., Kohn, T., Komm, D., Serafini, G.: Combining the power of Python with the
simplicity of Logo for a sustainable computer science education. In: Brodnik, A., Tort, F.
(eds.) Informatics in Schools: Improvement of Informatics Knowledge and Perception, ISSEP
2016. Lecture Notes in Computer Science, vol. 9973, pp. 155–166. Springer, Cham (2016).
https://doi.org/10.1007/978-3-319-46747-4_13
9. Ivanović, M., Budimac, Z., Radovanović, M., Savić, M.: Does the choice of the first
programming language influence students' grades? In: Proceedings of the 16th International
Conference on Computer Systems and Technologies, CompSysTech 2015, pp. 305–312. ACM (2015).
https://doi.org/10.1145/2812428.2812448
10. Ivanović, M., Xinogalos, S., Pitner, T., Savić, M.: Technology enhanced learning in
programming courses - international perspective. EAIT 22(6), 2981–3003 (2017).
https://doi.org/10.1007/s10639-016-9565-y
11. Joint Task Force on Computing Curricula, Association for Computing Machinery (ACM) and
IEEE Computer Society: Computer Science Curricula 2013: Curriculum Guidelines for
Undergraduate Degree Programs in Computer Science. ACM and IEEE Computer Society, 20
December 2013. https://doi.org/10.1145/2534860
12. Klimeková, E., Tomcsányiová, M.: Case study on the process of teachers transitioning to
teaching programming in Python. In: Informatics in Schools, Fundamentals of Computer
Science and Software Engineering, ISSEP 2018. Lecture Notes in Computer Science, vol.
11169, pp. 216–227. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02750-6_17
13. Martin, R.D., Cai, Q., Garrow, T., Kapahi, C.: QExpy: a Python-3 module to support
undergraduate physics laboratories. SoftwareX 10, 100273 (2019).
https://doi.org/10.1016/j.softx.2019.100273
14. Python. https://www.python.org/
15. Vergnaud, A., Fasquel, J.-B., Autrique, L.: Python based internet tools in control
education. IFAC-PapersOnLine 48(29), 43–48 (2015).
https://doi.org/10.1016/j.ifacol.2015.11.211
16. Xinogalos, S., Pitner, T., Ivanović, M., Savić, M.: Students' perspective on the first
programming language: C-like or Pascal-like languages? EAIT 23(1), 287–302 (2018).
https://doi.org/10.1007/s1063
Validating the Shared Understanding
Construction in Computer Supported
Collaborative Work in a Problem-Solving
Activity
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 203–214, 2020.
https://doi.org/10.1007/978-3-030-45697-9_20
204 V. Agredo-Delgado et al.
1 Introduction
We present a proposal for an initial process containing phases, stages, activities, and steps that allow collaborative work to be carried out in problem-solving activities and thus help achieve shared understanding. To develop the collaboration process, we followed the collaboration engineering design approach [18], which addresses the challenge of designing and deploying collaborative work practices for high-value recurring tasks and transferring them to practitioners to execute without the ongoing support of a professional collaboration expert [16]. To model the process, we use conventions based on the elements proposed by SPEM 2.0 [19].
In our proposed process, computer-supported collaborative work is divided into three phases, Pre-Process, Process, and Post-Process, which were taken from Collazos's work [7] and were improved and adapted to collaborative work. The first phase, Pre-Process, begins with the design and specification of the activity. In the Process phase, the collaboration activity is executed to achieve the objectives through interaction among the group members. At the end of the activity, in the Post-Process phase, the activity coordinator performs an individual and collective review to verify that the proposed objective was achieved.
For the first phase, Pre-Process, the activities were updated: each was assigned a description, a responsible person, and its inputs and outputs. This paper focuses mainly on the Process phase, since it is here that the collaborative work interactions take place and shared understanding can be obtained. For this phase, four stages were defined (see Fig. 1), each with activities, steps, roles, inputs, and outputs.
The Organization stage is the stage in which the coordinator organizes all the elements necessary to start the activity.
The Shared Understanding stage seeks to get the group members to agree on the problem to be solved before starting its development. This stage is formed by (see Fig. 2): the Tacit Pre-Understanding activity, which underlies people's ability to understand individually; the Construction activity, which happens when one of the group members inserts meaning by describing the problematic situation while the fellow teammates actively listen and try to grasp the explanation; the Collaborative Construction activity, a mutual activity of building meaning by refining, building on, or modifying the original offer; and finally the Constructive Conflict activity, in which differences of interpretation between the group members are treated through arguments and clarifications. These last three activities are based on the group cognition research from the learning sciences and organizational sciences of Van den Bossche et al. [15], who examined a model of team learning behaviors that we adapted in our research for use in collaborative work.
Considering these activities, we defined for each one a series of tasks that allow its objective to be achieved. The process is detailed only up to the Shared Understanding stage; in the next phases of the research we intend to continue improving, refining, and detailing the whole proposed process so that it can later be completely validated.
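To make the structure concrete, the phase/stage/activity hierarchy described above can be sketched as a small data structure. The dictionary layout and the wording of the entries are ours, paraphrasing the paper; only two of the four Process stages are detailed in the text.

```python
# A minimal sketch of the proposed process (phase -> stages -> activities).
# Names paraphrase the paper; the layout itself is only illustrative.
process = {
    "Pre-Process": ["Design and specification of the activity"],
    "Process": {
        "Organization": [
            "Coordinator organizes the elements needed to start the activity",
        ],
        "Shared Understanding": [
            "Tacit Pre-Understanding",
            "Construction",
            "Collaborative Construction",
            "Constructive Conflict",
        ],
        # Two further stages are defined in Fig. 1 but not detailed here.
    },
    "Post-Process": ["Individual and collective review by the coordinator"],
}

# The three activities adapted from Van den Bossche et al. [15] are the
# ones following Tacit Pre-Understanding.
team_learning_activities = process["Process"]["Shared Understanding"][1:]
```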
3 Related Work
Research has focused on the measurement of shared understanding, but not on its construction. Some examples follow. Smart [20] used a cultural model in which nodes represent concepts and links reflect the community's beliefs. Rosenman et al. [21] worked with interprofessional emergency medical teams, measuring shared understanding through team perception and a measure of team-leader effectiveness. White and Gunstone [22] describe a range of techniques, such as the use of concept maps, relational diagrams, and word association tests. Sieck et al. [23] determined that the similarity of mental models might provide a measure of shared understanding. Bates et al. [24] developed and validated the Patient Knowledge Assessment questionnaire, which measured shared clinical understanding of pediatric cardiology patients.
On the other hand, there are works on collaborative problem solving (CPS). Quashigah [25] examines the occurrences of CPS activities in the target group, as well as individual contributions. Roschelle and Teasley [26] focus on the processes involved in collaboration, concluding that students used language and action to overcome impasses in shared understanding and to coordinate their activity. Barron [27] identified three dimensions in the interactive processes of a group: the mutuality of exchanges, the achievement of joint attentional engagement, and the alignment of goals. Häkkinen et al. [28] present a pedagogical framework for twenty-first-century learning practices, among them collaborative problem-solving skills and strategic learning skills. Graesser et al. [29] developed an assessment of students' CPS skills and knowledge by crossing three major CPS competencies with four problem-solving processes; the CPS competencies are (1) establishing and maintaining shared understanding, (2) taking appropriate action, and (3) establishing and maintaining team organization.
Validating the Shared Understanding Construction in CSCW 207
4 Experiment
4.3 Analysis
The experiment produced several kinds of results. From the researchers' observation, the groups that obtained poor results (in terms of grades) were those that did not perform well in applying the process, did not generate internal discussions to resolve doubts, did not take on their assigned roles, and were not disposed to work as a group. It was also found that purely text-based collaboration is inconvenient for problem-solving tasks; the process should include additional ways for the participants to communicate. Finally, it was observed that following the process was exhausting for the participants and that its high cognitive load generated a lack of commitment to the rest of the activity.
On the other hand, to ensure that the differences found in the results are not only apparent but statistically significant, Student's t-distribution [30] was used, which allowed the specific hypotheses to be validated. Depending on the information to be analyzed, there are three types of test: (a) a t-test for the means of two paired samples, (b) a t-test for two samples with equal variances, and (c) a t-test for two samples with unequal variances.
For these t-tests, the following values were used in the calculations: confidence level = 95%, significance level = 5%, two-tailed critical value, observations or cases = 9 for t-tests of type (a) and 9 (UM) and 3 (UP) for t-tests of types (b) and (c), and degrees of freedom = 8 for t-tests of type (a) and 10 for t-tests of types (b) and (c).
For t-tests of types (b) and (c), it was first necessary to determine whether the variances of the values were equal or unequal; for this we used Fisher's test [31].
For all three test types, the null hypothesis was accepted or rejected as follows:
• If the P-value or F-value <= significance level, the null hypothesis is rejected
• If the P-value or F-value > significance level, the null hypothesis is accepted
Applying this statistical analysis to the values obtained produced the results shown in Table 2.
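As an illustration of this procedure (with made-up scores rather than the study's data, and using SciPy rather than whatever tooling the authors used), the variance check and the choice between test types (b) and (c) can be sketched as:

```python
from statistics import variance
from scipy import stats

# Hypothetical scores for the two conditions (illustrative only):
# UM = participants using the process, UP = participants without it.
um = [4.1, 4.3, 3.9, 4.5, 4.2, 4.0, 4.4, 4.1, 4.3]  # 9 observations
up = [3.2, 3.0, 3.4]                                 # 3 observations

# Step 1: Fisher's F-test for equality of variances decides between
# test type (b) (equal variances) and type (c) (unequal variances).
f_stat = variance(um) / variance(up)
df1, df2 = len(um) - 1, len(up) - 1
p_f = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))
equal_var = p_f > 0.05  # the null of equal variances is kept if P > 5%

# Step 2: two-sample t-test with the appropriate variance assumption.
t_stat, p_t = stats.ttest_ind(um, up, equal_var=equal_var)

# Decision rule from the paper: reject the null hypothesis if P <= 5%.
reject_null = p_t <= 0.05
```

With these sample values the variances are close (the F-test keeps the equal-variances assumption) and the group means differ clearly, so the null hypothesis is rejected.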
4.4 Discussion
Statistically, it was verified that the process used improves the participants' individual understanding, improves the group's understanding of the activity, generates a homogeneous understanding of the activity, and does not generate a discrepancy between each participant's understanding and that of the group. Likewise, with the use of the process the shared understanding activities produced better results and were better fulfilled among the participants. It was also found that participants have high clarity and understanding of their peers' descriptions, perhaps because at the beginning everyone may have the same doubts or make the same mistakes. From the final artifact generated, it was validated that the use of the process yields final products with better quality levels. With respect to the questions put to the activity coordinator, fewer were generated with the use of the process, since its activities allow most questions to be resolved internally. The process also improved the participants' satisfaction with the achievement of the objectives proposed by the activity. Conversely, it could not be determined that the elements of the process, or the outcomes of the activity, are satisfactory for the participants. Through observation, it was determined that the process generates a high cognitive load before the start of the activity, which prevents the participants from carrying out the activity with the necessary interest, since the process contains many steps.
4.5 Threats
Construct Validity: The construction of shared understanding was observed and measured through the perceptions of the participants, but the constructs underlying these behaviors are still unknown. To minimize subjectivity in the instruments that supported information collection, these were validated by expert personnel. Another threat is the introduction of new conceptual and language elements to the participants during the activity; to reduce it, an initial activity was assigned in which the participants were contextualized in the activity's theme.
Internal Validity: We analyzed the results of applying the guide but not the communication of the participants. To minimize this threat, the participants operated the process in the presence of an observer and, in addition, were encouraged to write down their questions and issues. Another validity threat may be the time invested, since the sessions are long and participants in the final stages may feel fatigue, which may influence the results. To try to mitigate it, in the middle of the experiment the participants took a break without communication among them.

Table 2. (continued)

Variable | Results | Hypothesis accepted
Improvement in the discrepancy in UM and UP | F-Value = 0.82; t(9,3) = 3.90; P = 0.002 | H1.3.8a = There is a statistically significant difference in the average of results obtained from differences in individual knowledge versus group knowledge, between the UM and UP groups
H1.4 Improvement in the Construction activity | F-Value = 0.97; t(9,3) = 2.79; P = 0.019 | H1.4.2a = There is a statistically significant difference in the average of results obtained from the Construction activities between the UM and UP groups
Improvement in the Co-Construction activity | F-Value = 0.70; t(9,3) = 2.32; P = 0.043 | H1.4.4a = There is a statistically significant difference in the average of results obtained from the Co-construction activities between the UM and UP groups
Improvement in the Constructive conflict activity | F-Value = 0.61; t(9,3) = 2.30; P = 0.044 | H1.4.6a = There is a statistically significant difference in the average of results obtained from the Constructive conflict activities between the UM and UP groups
H2.1 Improvement in the quality of the results | F-Value = 0.12; t(9,3) = 2.42; P = 0.036 | H2.1.2a = There is a statistically significant difference in the average of the grades of the results after applying the guide between the UP and UM groups
H2.2 Improvement in the number of questions | F-Value = 0.21; t(9,3) = 15.32; P = 0.000000028 | H2.2.2a = There is a statistically significant difference in the number of questions asked to the activity coordinator between the UM and UP groups
H2.3 Improvement in the perception about the achievement of the objectives | F-Value = 0.60; t(9,3) = 2.88; P = 0.016 | H2.3.2a = There is a statistically significant difference in the average of results obtained from the satisfaction perceived by the participants about the attainment of the objectives between the UM and UP groups
H2.4 Improvement in the perception about the satisfaction with the process elements | F-Value = 0.09; t(9,3) = 1.36; P = 0.204 | H2.4.1₀ = There is no statistically significant difference in the average of results obtained from the satisfaction perceived by the participants about the process items between the UM and UP groups
Improvement in the perception about the satisfaction with the activity outcome | F-Value = 0.13; t(9,3) = 0.68; P = 0.514 | H2.4.3₀ = There is no statistically significant difference in the average of results obtained from the satisfaction perceived by the participants about the activity outcomes between the UM and UP groups
External Validity: The guide that the participants had to follow concerned solving a problem about process lines, a topic that has been analyzed very little with university students. We tried to mitigate this effect by looking for groups with a higher level of experience with the subject.
From the experiment, we can conclude that the proposed initial process is feasible for the construction of shared understanding in a problem-solving activity and is useful for achieving its objectives. However, it cannot be determined that it improves the participants' perception of satisfaction with the achievement of the objectives set by the activity performed, with the process elements, or with the activity outcomes. The main contribution to collaboration engineering practice is a process proposal validated through an experimental research study, which can be used by designers of collaborative work practices to systematically and repeatedly induce the development of shared understanding in heterogeneous groups. As shared understanding has been identified as crucial for collaboration success in heterogeneous groups, the compound process presented may foster better group processes and better results.
While we used existing measurement items for shared understanding in our survey, combined with open exploration, a need is revealed for more advanced measurement instruments that allow all categories of shared understanding to be identified, as well as for monitoring and assistance mechanisms that allow shared understanding to be maintained during the development of the activity, since, once achieved, it can also be lost in the process. Likewise, although the results of this study are stable and promising, we identify as future work the need for further investigation of the mechanisms leading to shared understanding, aimed at better understanding this complex phenomenon, its antecedents, and its effects, thus generating more promising opportunities for developing techniques that leverage its benefits for effective group work. In particular, the process should become lighter so that a high cognitive load at the beginning of the activity is avoided.
References
1. Carstensen, P.H., Schmidt, K.: Computer supported cooperative work: new challenges to
systems design. In: Itoh, K. (ed.) Handbook of Human Factors, pp. 619–636. CiteSeer (1999)
2. Grudin, J.: Why CSCW applications fail: problems in the design and evaluation of
organizational interfaces. In: Proceedings of the 1988 ACM Conference on Computer-
Supported Cooperative Work, pp. 85–93 (1988)
3. Rummel, N., Spada, H.: Learning to collaborate: an instructional approach to promoting
collaborative problem solving in computer-mediated settings. J. Learn. Sci. 14(2), 201–241
(2005)
4. Persico, D., Pozzi, F., Sarti, L.: Design patterns for monitoring and evaluating CSCL
processes. Comput. Hum. Behav. 25(5), 1020–1027 (2009)
5. Scagnoli, N.: Estrategias para motivar el aprendizaje colaborativo en cursos a distancia
(2005)
6. Hughes, J., Randall, D., Shapiro, D.: CSCW: discipline or paradigm? In: Proceedings of the
Second European Conference on Computer-Supported Cooperative Work ECSCW 1991,
pp. 309–323 (1991)
7. Collazos, C.A., Muñoz Arteaga, J., Hernández, Y.: Aprendizaje colaborativo apoyado por
computador, LATIn Project (2014)
8. Agredo Delgado, V., Collazos, C.A., Paderewski, P.: Descripción formal de mecanismos
para evaluar, monitorear y mejorar el proceso de aprendizaje colaborativo en su etapa de
Proceso, Popayán (2016)
9. Agredo Delgado, V., Collazos, C.A., Fardoun, H., Safa, N.: Through monitoring and
evaluation mechanisms of the collaborative learning process. In: Meiselwitz, G., (ed.) Social
Computing and Social Media. Applications and Analytics, pp. 20–31. Springer, Vancouver
(2017)
10. Leeann, K.: A Practical Guide to Collaborative Working. Nicva, Belfast (2012)
11. Barker Scott, B.: Creating a Collaborative Workplace: Amplifying Teamwork in Your
Organization, pp. 1–9, Queen’s University IRC (2017)
12. DeFranco, J.F., Neill, C.J., Clariana, R.B.: A cognitive collaborative model to improve
performance in engineering teams—a study of team outcomes and mental model sharing.
Syst. Eng. 14(3), 267–278 (2011)
13. Oppl, S.: Supporting the collaborative construction of a shared understanding about work
with a guided conceptual modeling technique. Group Decis. Negot. 26(2), 247–283 (2017)
14. Bittner, E.A.C., Leimeister, J.M.: Why shared understanding matters – engineering a
collaboration process for shared understanding to improve collaboration effectiveness in
heterogeneous teams. In: 46th Hawaii International Conference on System Sciences
(HICSS), pp. 106–114 (2013)
15. Van den Bossche, P., Gijselaers, W., Segers, M., Woltjer, G., Kirschner, P.: Team learning:
building shared mental models. Instr. Sci. 39(3), 283–301 (2011)
16. de Vreede, G.-J., Briggs, R.O., Massey, A.P.: Collaboration engineering: foundations and
opportunities: editorial to the special issue on the journal of the association of information
systems. J. Assoc. Inf. Syst. 10(3), 7 (2009)
17. Mohammed, S., Ferzandi, L., Hamilton, K.: Metaphor no more: a 15-year review of the team
mental model construct. J. Manag. 36(4), 876–910 (2010)
18. Kolfschoten, G.L., De Vreede, G.-J.: The collaboration engineering approach for designing
collaboration processes. In: International Conference on Collaboration and Technology,
Heidelberg (2007)
19. Ruiz, F., Verdugo, J.: Guía de Uso de SPEM 2 con EPF Composer, Universidad de Castilla-
La Mancha (2008)
20. Smart, P.R.: Understanding and shared understanding in military coalitions, Web & Internet
Science, Southampton (2011)
21. Rosenman, E.D., Dixon, A.J., Webb, J.M., Brolliar, S., Golden, S.J., Jones, K.A., Shah, S.,
Grand, J.A., Kozlowski, S.W., Chao, G.T., Fernandez, R.: A simulation-based approach to
measuring team situational awareness in emergency medicine: a multicenter, observational
study. Acad. Emerg. Med. 25(2), 196–204 (2018)
22. White, R., Gunstone, R.: Probing Understanding. The Falmer Press, London (1992)
23. Sieck, W.R., Rasmussen, L.J., Smart, P.: Cultural network analysis: a cognitive approach to
cultural modeling. In: Network Science for Military Coalition Operations: Information
Exchange and Interaction, pp. 237–255 (2010)
24. Bates, K.E., Bird, G.L., Shea, J.A., Apkon, M., Shaddy, R.E., Metlay, J.P.: A tool to
measure shared clinical understanding following handoffs to help evaluate handoff quality.
J. Hosp. Med. 9(3), 142–147 (2014)
25. Quashigah, E.: Collaborative problem solving activities in natural learning situations: a
process oriented case study of teacher education students, Master’s thesis in Education, Oulu
(2017)
26. Roschelle, J., Teasley, S.D.: The construction of shared knowledge in collaborative problem
solving. In: Computer Supported Collaborative Learning, pp. 69–97, Heidelberg. Springer
(1995)
27. Barron, B.: Achieving coordination in collaborative problem-solving groups. J. Learn. Sci. 9
(4), 403–436 (2000)
28. Häkkinen, P., Järvelä, S., Mäkitalo-Siegl, K., Ahonen, A., Näykki, P., Valtonen, T.:
Preparing teacher-students for twenty-first-century learning practices (PREP 21): a
framework for enhancing collaborative problem-solving and strategic learning skills. Teach.
Teach.: Theory Pract. 23, 25–41 (2017)
29. Graesser, A.C., Foltz, P.W., Rosen, Y., Shaffer, D.W., Forsyth, C., Germany, M.-L.:
Challenges of assessing collaborative problem solving. In: Care, E., Griffin, P., Wilson, M.,
(eds.) Assessment and Teaching of 21st Century Skills, pp. 75–91. Springer (2018)
30. Neave, H.R.: Elementary Statistics Tables. Routledge, London (2002)
31. Freeman, J.V., Julious, S.A.: The analysis of categorical data. Scope 16(1), 18–21 (2007)
Improving Synchrony in Small Group
Asynchronous Online Discussions
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 215–224, 2020.
https://doi.org/10.1007/978-3-030-45697-9_21
216 S. Laato and M. Murtonen
2 Background
Historically, synchronous communication required participants to be in the same
place at the same time. When the term was adopted to describe online commu-
nication, the spatial requirement faded away, leaving only the temporal one, as
the internet allows communication over distance. Thus, synchronous online com-
munication is currently defined as conversations which take place in real time
[24], or as communication in an online setting that requires simultaneous par-
ticipation [29].
On the flip side of synchronous communication is asynchronous communication. In western society, people partake in asynchronous discussions every day: emails, text messages, voice messages, and discussion forums are just some examples. In e-learning and elsewhere, asynchronous discussions are widely used for their convenience: as participants do not need to be online at the same time, they can communicate at a time they find convenient [4,14,25]. For many, it has become preferable to synchronous alternatives. For example, the youth show a trend of preferring messaging over phone calls [3], and students have been found to prefer communicating with faculty asynchronously instead of through traditional or virtual office hours [21]. Moreover, before synchronous meetings can even be held, they are often first agreed upon asynchronously.
Asynchronous discussions are also criticized. They provide less diverse communication opportunities and lack the psychologically motivating effects of synchronous discussions, such as social arousal and an increased exchange of social support [14]. Asynchronous discussions have been shown to hinder the outcomes of cooperation in comparison to synchronous communication [28]. These drawbacks can mostly be attributed to the root cause that defines asynchronous discussions: delayed feedback [26]. Immediate feedback has been found to motivate humans and allow them to take their ideas further [18]. This can be because humans have limited cognitive capacity: working memory fills with other things as time progresses, hindering the ability to respond effectively when feedback is delayed [8]. On the other hand, asynchronous messages can be re-read over and over again, providing the opportunity to reflect on specific parts that require thought.
is counted as its own entity. E-mails are currently in the process of this disruption: some, perhaps more formal, communication still includes greetings, while increasingly the greetings are omitted. All this contributes to an increasing blur between synchronous and asynchronous communication and is a symptom of our society being “always online”.
As the temporal dimension plays a key role in defining whether a form of communication is synchronous or asynchronous, we observe when participants engage in discussion during an online course. With this focus, we seek to answer the following research question: What are the key temporal challenges in peer communication during online courses? By identifying these issues, we are then able to theorize solutions based on previous work.
3 Methods
To answer the research question, we use data from the UNIPS pedagogical online employee-training course Becoming a Teacher, which took place in autumn 2017. UNIPS is an open online repository of educational materials which can be self-studied or completed under the guidance of local universities for certificates or ECTS credits [17,19]. The course Becoming a Teacher is a micro-credential course worth one ECTS credit and has been shown to change conceptions of pedagogy, especially for young learners [31]. 42 students who gave permission to use their discussions for research participated in a two-week teamwork period in which they used Google Docs to comment on each other's essays on how they see themselves as teachers. Groups of 4–5 students were formed, and all students were either PhD students or faculty at the university. The teamwork period had loose instructions and minimal participation by the facilitator, and focused on peer interaction. Participants were given three deadlines during the period: (1) submit your essay and introduce yourself to the others; (2) write at least three comments on each other's essays and discuss the content with their authors; and (3) reply to all the comments you received and continue the discussion.
As we analyze the temporal dimension of the discussions, we sought to obtain the following information:
– How often do participants come online during a two-week discussion period?
– Are there students who are unable to discuss and develop their ideas further because their group members are not online often enough?
– Did the interaction change if two participants were online at the same time?
4 Results
During the two-week asynchronous teamwork period, we observed clear spikes in discussion activity right before deadlines. These spikes can be seen in Fig. 1. One crucial aspect of the success of asynchronous discussions is that students are online often enough for discussions to be able to occur, which we found was not the case. In fact, more than half of the students commented the bare minimum, while some did not do even that. No student managed to comment on more than half of the days the teamwork period was running. The number of days on which individual students came online to comment can be seen below:
– 0–1 days: 3 students
– 2 days: 23 students
– 3 days: 9 students
– 4 days: 7 students
– 5 or more days: 0 students
Fig. 1. Student activity was highest right before or during the deadline dates 5.11 and 9.11.
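The per-student tallies reported in this section can be computed from a timestamped comment log roughly as follows. The log format and the sample entries are hypothetical, not the course data.

```python
from collections import Counter
from datetime import date

# Hypothetical comment log: (student_id, date of the comment).
comments = [
    ("s1", date(2017, 11, 3)), ("s1", date(2017, 11, 5)),
    ("s2", date(2017, 11, 5)), ("s2", date(2017, 11, 5)),
    ("s2", date(2017, 11, 9)), ("s3", date(2017, 11, 9)),
]

# Number of distinct days on which each student came online to comment.
active_days = {}
for student, day in comments:
    active_days.setdefault(student, set()).add(day)
days_per_student = {s: len(d) for s, d in active_days.items()}

# Distribution of active days across students, the shape of the list above.
distribution = Counter(days_per_student.values())
```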
The median participation was two days, and the mean number of days on which a student came to write comments was 2.45. According to these findings, the majority of students write their comments and questions on one day in the middle of the teamwork period and return to reply to the comments they have received close to the deadline. This indicates that most students are unable to produce effective discussions during the teamwork period, as their teammates are statistically not likely to be online often enough.
Furthermore, we observed situations where student A came online to write comments and student B replied the next day, as visualized in Fig. 2. Student B then came online again the following day, but as student A had not yet replied, this time could not be used for discussion. There were also cases where both student A and student B were online at the same time but, due to the nature of the communication platform, were unable to use this simultaneous presence for more direct synchronous communication.
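The situation of Fig. 2 can be made precise with a small helper that counts “wasted” visits: return visits on which the partner has posted nothing new since one's previous visit. The visit dates below are invented for illustration.

```python
from datetime import date

def wasted_visits(own, partner):
    """Count return visits where the partner has not been online since
    our previous visit, so there is nothing new to respond to."""
    wasted = 0
    for prev, cur in zip(own, own[1:]):
        if not any(prev < p <= cur for p in partner):
            wasted += 1
    return wasted

# Student A posts, B replies the next day, then B returns before A has
# answered: B's second visit is wasted, while A's return visit is not.
a_days = [date(2017, 11, 3), date(2017, 11, 8)]
b_days = [date(2017, 11, 4), date(2017, 11, 5)]
```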
5 Discussion
To make better use of students' time, the presented data indicate that more synchronization is needed between students taking part in asynchronous discussions when the groups are small. An ideal situation to aim for would be one in which students take turns coming online and replying to each other, as visualized in Fig. 3. But how to get there?
Academia has come up with solutions to combat the issues described above, such as the copyrighted Intelligent Discussion Boards [16] and incremental deadlines [10]. Increasing the number of participants has also been suggested in the context of non-mandatory discussions [5]; however, it is unclear what impact it would have on mandatory communication. Simply forcing students to come online at specific times defeats the purpose of asynchronous communication, as one of the reasons projects such as UNIPS choose asynchronous technologies for their courses is that students are not able to come online at specific times [19]. The trend of being more and more online [12], and the influence it can have on asynchronous discussions, is an interesting avenue for future research.
We notice cases where it is difficult to explicitly define whether certain com-
munication is synchronous or asynchronous, such as instant messaging, where
people can drift in and out of synchronization constantly. It can be argued that
it is more fruitful to characterize communication based on the delay, or the possible
delay, between exchanges of information instead of using the binary categoriza-
tion. On an online message board, a comment can be replied to immediately,
in two days, or never. To truly synchronize asynchronous discussion, solutions
should be sought where this delay is minimized. This idea can be taken further
by placing different forms of communication on an axis based on how much delay
there is between exchanges of ideas. This axis is displayed in Fig. 4. If the delay
in feedback is used as the sole feature distinguishing asynchronous communica-
tion from synchronous, then we arrive at the conclusion that some activities are
"more synchronous" than others. Thus, we can increase the synchrony
of asynchronous discussions.
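To illustrate, the delay axis of Fig. 4 can be approximated by ordering communication forms by their typical feedback delay. The following sketch uses invented, order-of-magnitude delay values purely for illustration; the figures are our assumptions, not measurements from the study.

```python
# Illustrative sketch: a delay-based spectrum of communication forms
# instead of a binary synchronous/asynchronous split.
# All delay values below are rough assumptions for illustration only.
typical_delay_seconds = {
    "face-to-face talk": 0,
    "video call": 0.2,
    "instant messaging": 30,
    "discussion board": 6 * 3600,
    "email": 24 * 3600,
}

# Sorting by expected feedback delay yields a synchrony spectrum:
# the smaller the delay, the "more synchronous" the activity.
spectrum = sorted(typical_delay_seconds, key=typical_delay_seconds.get)
print(spectrum[0], "->", spectrum[-1])
```

Minimizing this delay, e.g. by nudging participants to reply while others are still online, moves an activity toward the synchronous end of the axis.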
5.3 Limitations
The empirical data collected in this study came from a specific course in a geo-
graphically limited area and used a specific technology (Google Docs) for organiz-
ing discussions. The instructions and behavior of the course facilitator influenced
the discussion activity. Furthermore, increasing the intrinsic motivation of par-
ticipants, for example by giving them a concrete common goal whose achievement
required cooperation, might have increased the discussion
activity.
With all these limitations in mind, the purpose of the empirical data was to iden-
tify challenges which might arise in purely asynchronous communication. It is
likely that the findings are present in other asynchronous online courses as well. Cur-
rently, UNIPS courses have been shown to have a positive impact on students' learning
despite the challenges in the teamwork period [31]. It is thus possible that par-
ticipants also learn simply by viewing discussions instead of contributing to them
themselves, as suggested by Chiu and Hew [6].
6 Conclusions
We used empirical data from group discussions during a UNIPS online pedagog-
ical course to identify three temporal issues in the asynchronous communication
that took place: (1) discussion activity peaked around deadlines; (2) students
reserved time to write comments on days when there was nothing for them
to do; and (3) students were unable to discuss synchronously even when they were
online at the same time. We theorize that these challenges could be mitigated if
participants synchronized their activities better with each other. As a solution,
the actions of the course facilitator, the instructions given to participants, and the cho-
sen communication technologies should be looked into. We also discussed what
follows if activities are characterized by the delay between exchanges of
ideas, and used this to place activities traditionally categorized as asynchronous
or synchronous on a spectrum. Future work will include empirically testing the
effects the proposed solutions have on the quality of the discussions and,
consequently, on students' learning.
References
1. An, H., Shin, S., Lim, K.: The effects of different instructor facilitation approaches
on students’ interactions during asynchronous online discussions. Comput. Educ.
53(3), 749–760 (2009)
Synchronizing Asynchronous Discussions 223
20. Latchman, H., Salzmann, C., Thottapilly, S., Bouzekri, H.: Hybrid asynchronous
and synchronous learning networks in distance education. In: International Con-
ference on Engineering Education (1998)
21. Li, L., Finley, J., Pitts, J., Guo, R.: Which is a better choice for student-faculty
interaction: synchronous or asynchronous communication? J. Technol. Res. 2, 1
(2011)
22. Mabrito, M.: A study of synchronous versus asynchronous collaboration in an
online business writing class. Am. J. Dist. Educ. 20(2), 93–107 (2006)
23. Madden, L., Jones, G., Childers, G.: Teacher education: modes of communication
within asynchronous and synchronous communication platforms. J. Classr. Inter-
act. 52(2), 16–30 (2017)
24. Murphy, E.: Recognising and promoting collaboration in an online asynchronous
discussion. Br. J. Educ. Technol. 35(4), 421–431 (2004)
25. Murphy, E., Rodríguez-Manzanares, M.A., Barbour, M.: Asynchronous and syn-
chronous online teaching: perspectives of Canadian high school distance education
teachers. Br. J. Educ. Technol. 42(4), 583–591 (2011)
26. Offir, B., Lev, Y., Bezalel, R.: Surface and deep learning processes in distance
education: synchronous versus asynchronous systems. Comput. Educ. 51(3), 1172–
1183 (2008)
27. Oztok, M., Zingaro, D., Brett, C., Hewitt, J.: Exploring asynchronous and syn-
chronous tool use in online courses. Comput. Educ. 60(1), 87–94 (2013)
28. Peterson, A.T., Beymer, P.N., Putnam, R.T.: Synchronous and asynchronous dis-
cussions: effects on cooperation, belonging, and affect. Online Learn. 22(4), 7–25
(2018)
29. Rosenberg, J., Akcaoglu, M., Willet, K.B.S., Greenhalgh, S., Koehler, M.: A tale of
two twitters: synchronous and asynchronous use of the same hashtag. In: Society
for Information Technology and Teacher Education International Conference, pp.
283–286. Association for the Advancement of Computing in Education (AACE)
(2017)
30. Swan, K.: Virtual interaction: design factors affecting student satisfaction and per-
ceived learning in asynchronous online courses. Dist. Educ. 22(2), 306–331 (2001)
31. Vilppu, H., Södervik, I., Postareff, L., et al.: The effect of short online pedagogi-
cal training on university teachers’ interpretations of teaching-learning situations.
Instr. Sci. 47, 679–709 (2019). https://doi.org/10.1007/s11251-019-09496-z
32. Watts, L.: Synchronous and asynchronous communication in distance learning: a
review of the literature. Q. Rev. Dist. Educ. 17(1), 23 (2016)
33. Yamagata-Lynch, L.C.: Blending online asynchronous and synchronous learning.
Int. Rev. Res. Open Distrib. Learn. 15(2), 189–212 (2014)
Academic Dishonesty Prevention in E-learning
University System
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 225–234, 2020.
https://doi.org/10.1007/978-3-030-45697-9_22
226 D. Bylieva et al.
The problem of cheating, common in the online environment, remains unresolved.
Some studies confirm that academic dishonesty in e-learning is higher than in face-to-
face training. Watson and Sottile examined students of various specialties and noted
that in all cases they were significantly more likely to obtain answers from other students
during an online test or quiz [10]. Others, on the contrary, indicate a lower
level of cheating compared to face-to-face learning [11] or no difference in the
indicators [12]. We may assume that the level of academic dishonesty is influenced
more by other factors than by the digital or traditional form of study.
It should be noted that online classes present a special academic temptation for
students, as simple forms of academic dishonesty are easy to practice unless special
measures are taken to prevent them. For instance, King, Guyette, and Piotrowski found
that 73.6% of surveyed students considered it easier to cheat online [13].
The decline of study in the library and the intensive use of online resources have made
"copy-paste" the most regular operation when students are doing assignments.
They do it without realizing that plagiarism is a violation of academic regulations and
copyright. Blau and Eshet-Alkalai's study indicates that school students perceive
digital plagiarism and digital facilitation as legitimate behaviors [14]. Moreover, digital
dishonesty entails less punishment: an analysis of Disciplinary Committee protocols
over a four-year period in Israel shows that it is perceived as less
harmful and therefore receives lighter penalties [15]. Researchers note that academic
dishonesty is being 'normalised' as students rationalise some levels of academic dis-
honesty [16, 17].
In its most general form, academic dishonesty is traditionally divided into three cate-
gories: cheating, plagiarism, and collusion [18, 19]. Although collusion may resemble
quite acceptable collaboration, such as an exchange of ideas, it implies joint actions
connected with an active, intentional and obvious act of cheating [19]. Pavela identifies
four main types of academic dishonesty:
– cheating – using learning materials, information, or other aids whose use was
explicitly banned;
– plagiarism – using content prepared by others and presenting it as one's own,
without giving a reference to the source;
– fabrication – inventing or citing non-existent information;
– facilitating academic dishonesty – intentionally helping someone else perpetrate
academic dishonesty [20].
Researchers indicate that plagiarism and helping others to conduct dishonest
activities were perceived as more legitimate in the digital setting [16].
We will study cheating during tests and examinations within online courses
in more detail. The most striking example of misconduct is impersonation, where
another person takes the test instead of the student. In some situations, students do
their assignments together, even when the tasks are individual, sometimes involving
an "expert". In the case of individual tests, using ready-made answers constitutes
cheating. Students may obtain such answers in different ways: directly, from those
who have already passed the test (in the current period, or in a previous one if the
test has not changed much), or indirectly by technical means, when a database of
answers is compiled, for example, in shared Google spreadsheets or social networks.
The data is then collected by an initiator in a form convenient for finding the right
answer and published online. However, there are even simpler methods, which do not
require even such an easy intellectual activity as finding and choosing the right
question and answer. At St. Petersburg Polytechnic University, students developed
PolyTestHelper, a special extension for Google Chrome that operates on a specific
list of sites containing tests used to control students' knowledge. Correct answers
are "highlighted" in such tests: the text of the question is sent to the add-on's server,
and the answer most frequently chosen by students is received in return. Thus, the
extension autonomously collects a database of questions and answers with weights:
the number of times each answer to a specific question was selected. Correct answers
are highlighted on the test page (Fig. 1) or the answer text is entered automatically.
In this case, the student need not be bothered with finding an answer in the database
or on the network, or even with reading the question.
Fig. 1. Illustration of highlighting the correct answer with the PolyTestHelper extension
(original and translation)
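The crowd-sourced answer database described above can be sketched as follows. This is a minimal illustration; the class and method names are our assumptions, not the actual PolyTestHelper implementation.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of a crowd-sourced answer store: each submitted
# (question, answer) pair increments a weight, and the answer most
# frequently chosen by students is returned to the client.
class AnswerStore:
    def __init__(self):
        # question text -> Counter of answers and how often each was chosen
        self._votes = defaultdict(Counter)

    def record(self, question, answer):
        """Register one student's choice for a question."""
        self._votes[question][answer] += 1

    def best_answer(self, question):
        """Return the most frequently chosen answer, or None if unknown."""
        counts = self._votes.get(question)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

store = AnswerStore()
for answer in ["4", "4", "5"]:
    store.record("2 + 2 = ?", answer)
assert store.best_answer("2 + 2 = ?") == "4"
assert store.best_answer("unseen question") is None
```

The majority vote is what makes the database self-correcting: wrong answers submitted by a few students are outweighed once enough correct submissions accumulate.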
Plagiarism in the university environment can be divided into two main types:
presenting someone else's results as one's own, or "ghost writing" (purchased, downloaded
from the network, or taken from other students), and "classical plagiarism" (using sources
without references). The common university practice of checking students' papers for
originality with the help of special services does not solve the problem,
since new ways of changing a text so that it is perceived as original constantly
appear. For example, by inserting invisible (written in white colour) additional letters
into words, it was possible to "trick" the anti-plagiarism service until 2016. Russian
letters in the text were successfully replaced with Latin letters of identical appearance.
When the anti-plagiarism system was programmed to track this replacement, students
began to use similar Greek or Arabic letters. Now none of these methods works. Today
more sophisticated techniques of working with text are used. To get a text similar in
meaning to an existing one, but not recognizable by the programs, students can translate
the text into another language and then translate it back into the original. A more
complicated method involves creative processing of the text using synonyms.
Automating this process with the help of synonymizers is not yet effective enough,
as the readability of the text decreases. In addition, there is the possibility of inserting
neutral adverbs, adjectives, interjections, and prepositions into the text, using automatic
hyphenation, etc., as well as exploiting the shingle rule used for verification.
assignments for which the ability of students to use electronic resources in the exam
will not be an obstacle that must be painfully overcome, but a brilliant opportunity for
implementing an innovative educational strategy. The exam in the digital environment
should become more "dynamic, interactive, immersive, intelligent, authentic and
ubiquitous" [27]. Today, there are examples of open-book, open-web exams, where
the student is offered a contemporary real-world problem, submitted as a mini-
case, that requires applying the skills, techniques and knowledge of the field
concerned [28].
However, such a solution is not often used for monitoring massive online courses.
In the vast majority of cases, such tasks exclude the possibility of automatic electronic
verification and require a lot of time for interacting with students, which is usually not
budgeted for the tutors and facilitators of online courses.
At the same time, traditional countermeasures such as randomly selecting questions from
a database and shuffling answers in multiple-choice tests are easily defeated by students
using the methods described in the previous paragraph.
Mass courses with a semester enrollment of several thousand students are espe-
cially vulnerable to "uncovering" the question base. Creating such a course that is
absolutely invulnerable to students searching for the answers together seems, today, to be a
technically difficult task.
New forms of cheating require strategic decisions from educational institutions. In
order to neutralize the activity of the PolyTestHelper Google Chrome extension, the
Centre of Open Education of St. Petersburg Polytechnic University took the fol-
lowing steps:
– A web application was developed for the distance-learning portals to block page
changes when the correct answer is being written. The launch of this application made it
possible to identify another class of "add-ons" useful for students: online trans-
lators (Google Translate, Yandex Translator), which can also be started when taking
a test in a foreign language to translate the tasks and thus simplify the test.
– Metrics of abnormal student behavior during the course were developed:
copying and pasting from the clipboard, switching between tabs while taking the
test, and launching add-ons for automatically passing the test or an online translator,
with this kind of information collected on a local analytics server similar to Yandex
Metrika or Google Analytics.
– Information is collected on how much time the student spent on the test and on a
specific element of the course (the browser tab being active, not just data from the log),
in order to block the test if the previous material was not viewed for a certain time.
– An analytics system was developed on the basis of the log of the student's actions in the
course: analysis of the learning trajectory of a particular student (the topics he/she
opened depending on time, etc.).
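The abnormal-behavior metrics listed above could be aggregated, for example, as follows. This is a minimal sketch: the event names and thresholds are our assumptions, not those of the SPbPU analytics server.

```python
# Illustrative sketch of turning raw client-side event counts into
# abnormal-behavior flags. Field names and thresholds are invented
# for illustration; a real system would calibrate them empirically.
def abnormal_behavior_flags(events, min_active_seconds=120, max_tab_switches=3):
    """Return a list of human-readable flags for one test session."""
    flags = []
    if events.get("paste_count", 0) > 0:
        flags.append("clipboard paste during test")
    if events.get("tab_switches", 0) > max_tab_switches:
        flags.append("frequent tab switching")
    if events.get("active_seconds", 0) < min_active_seconds:
        flags.append("suspiciously short active time")
    return flags

session = {"paste_count": 2, "tab_switches": 5, "active_seconds": 40}
assert len(abnormal_behavior_flags(session)) == 3

clean_session = {"paste_count": 0, "tab_switches": 1, "active_seconds": 600}
assert abnormal_behavior_flags(clean_session) == []
```

Flags like these do not prove cheating by themselves; they mark sessions for closer review, which is why they are collected on an analytics server rather than acted on automatically.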
Invasive technologies can also be used to prevent academic dishonesty, such as
blocking the use of other software, or blocking an IP address if the classroom has a
common external IP address. Testing in safe-browser mode, where the test is configured
to be taken by means of a specific browser with a specific key indicated in the browser
settings (https://docs.moodle.org/37/en/Safe_exam_browser), solves the
problem of one student starting the test in class (and interrupting the attempt) while
a second student on campus continues the attempt outside the class under the
credentials of the first and sends the result to the server (Fig. 2).
Fig. 2. Scheme of the main types of academic dishonesty ('ghost writing', impersonation,
and cheating: use of unauthorized materials or ready-made answers) and methods of prevention
(verification and identification, invasive technologies such as blocking other software and page
changes, and analysis of student behavior during the course via abnormal-behavior metrics,
logs and on-screen computer activity)
Berry, Thornton and Baker suggest preventing cheating by using sites like turn-it-in.com
and submitting the results with the assignment, which helps deter digital
cheating. Other software, such as LAN School, shows all computer
activity to determine whether digital cheating is occurring and is a major deterrent to
online cheating in the classroom [29]. Peter the Great St. Petersburg Polytechnic
University (SPbPU) uses the Danware Netop School program, which allows
monitoring the desktops of students in one or several classes, connecting to a specific
screen and driving the mouse instead of the student.
However, the ability to fully control all the activity occurring on the computer
screen does not guarantee that the student is taking the test independently. Leaving aside the
possibility of substituting an examinee or taking a test together, we know that each
student usually has more than one device connected to the Internet. When students'
awareness of "tracking" becomes high, they take a test on a com-
puter screen and at the same time get information from the database on a smartphone or
tablet screen.
Technically complicating the exam process eliminates the possibility of
impersonation and significantly reduces the possibility of using extra information
sources: for example, visual identification (webcam) and secure remote proctoring
software using biometric verification (uni-modal or bimodal), ranging from a common
webcam and microphone up to special devices such as digital fingerprint readers or high-
definition cameras for iris recognition. In addition, to control the actions of students,
web cameras can be used to view the workplace and the entire class through 360° before
starting the test, and the position of hands and direction of gaze during the whole test.
Proctoring brings the online exam close to traditional class-
room testing (regardless of whether it is electronic or happening in the classroom), with
all the usual methods of cheating, both digital (with communication tools) and traditional
(cheat sheets, tips, etc.). SPbPU uses offline proctoring for the national portal Openedu.
ru, with manual activation of student sessions when taking the test. The teacher
approves the launch of the test for those students who are in the classroom and rejects
attempts by students who are at home. This makes it possible to avoid long lists of
rules and restrictions when administering the test to a large number of groups over a long
session time.
Companies issuing professional certificates have long faced the need for
strict control over the taking of exams. Thus, Pearson VUE tests involve the
following set of preventive measures: (1) the student is identified by two documents;
(2) the student empties his or her pockets before the exam; (3) a camera is placed above
the student and monitors the hands (which should be above the table) and knees (there
should be no cheat sheets); (4) a test-center employee monitors the cameras, and both
the student's behavior and the operator's activity are recorded for VUE; (5) the student
takes the test on a PC that has only the testing software and no way to
install other software. At the same time, it is obvious that measures acceptable for
one-time testing at a certification center can be destructive for relations in a university
environment, which imply mutual respect and trust, and such measures contribute to a
significant increase in stress levels.
The TeSLA project (an Adaptive Trust-based e-assessment System for Learning),
designed specifically for the academic environment, uses a system for
student authentication and authorship checking integrated within an institutional Virtual
Learning Environment [24, 30]. A plug-in embedded in the learning management system
(for example, Moodle) and directly integrated into the most used assessment activities,
such as assignment, forum and quiz, helps the teacher choose the authentication forms
necessary for a particular case (Face Recognition, Voice Recognition and Keystroke
Dynamics, i.e. typing rhythm) and to check authorship, including Forensic Analysis
(of writing style) and Plagiarism Detection. However, while Face and Voice
Recognition are not yet completely reliable, plug-in innovations like Keystroke
Dynamics and Forensic Analysis raise the question of how much one can rely on
technical means for detecting cheating. False positives of anti-cheating programs can
seriously undermine the trust that is important for relations between students and
teachers. Today, formal reliance on anti-plagiarism indicators sometimes leads to
disappointing consequences, when smart cheaters get the highest score while inde-
pendently completed work is rejected.
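The idea behind keystroke-dynamics authentication mentioned above can be illustrated with a minimal sketch: a session's inter-key timing profile is compared against an enrolled one. The distance metric and threshold here are illustrative assumptions; TeSLA's actual classifier is considerably more sophisticated.

```python
# Illustrative sketch of keystroke-dynamics verification: compare a
# session's average inter-key delays against an enrolled profile.
# The metric (mean absolute difference) and threshold are assumptions.
def mean_abs_diff(profile, sample):
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

def same_typist(profile, sample, threshold=0.05):
    """True if the sample's typing rhythm is close to the enrolled profile."""
    return mean_abs_diff(profile, sample) < threshold

# Delays (in seconds) between successive key pairs of a fixed phrase.
enrolled = [0.12, 0.20, 0.15, 0.31]
session  = [0.13, 0.19, 0.16, 0.30]   # same person, slight variation
imposter = [0.25, 0.40, 0.08, 0.55]   # markedly different rhythm

assert same_typist(enrolled, session)
assert not same_typist(enrolled, imposter)
```

Precisely because such a simple distance check can misfire on an off day, false positives are a real risk, which is the trust problem the paragraph above raises.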
References
1. Shipunova, O., Evseeva, L., Pozdeeva, E., Evseev, V.V., Zhabenko, I.: Social and
educational environment modeling in future vision: infosphere tools. In: E3S Web of
Conference, vol. 110, p. 02011 (2019). https://doi.org/10.1051/e3sconf/201911002011
2. Shipunova, O.D., Berezovskaya, I.P., Mureyko, L.M., Evseeva, L.I., Evseev, V.V.: Personal
intellectual potential in the e-culture conditions. Espacios 39, 15 (2018)
20. Pavela, G.: Applying the power of association on campus: a model code of academic
integrity. J. Bus. Ethics 16, 97–118 (1997)
21. Du Plessis, I.: E-learning as the cornerstone to academic integrity in the 21st century. In:
Unpublished Paper Presented at the Council on Higher Education Quality Promotion
Conference: Promoting Academic Integrity in Higher Education, 26–28 February. CSIR
International Convention Centre, Pretoria (2019)
22. Waghid, Y., Davids, N.: On the polemic of academic integrity in higher education. S. Afr.
J. High. Educ. 33, 1–5 (2019). https://doi.org/10.20853/33-1-3402
23. Hong, J.-C., Ye, J.-H., Fan, J.-Y.: STEM in fashion design: the roles of creative self-efficacy
and epistemic curiosity in creative performance. Eurasia J. Math. Sci. Technol. Educ. 15
(2019). https://doi.org/10.29333/ejmste/108455
24. Mellar, H., Peytcheva-Forsyth, R., Kocdar, S., Karadeniz, A., Yovkova, B.: Addressing
cheating in e-assessment using student authentication and authorship checking systems:
teachers’ perspectives. Int. J. Educ. Integr. 14, 2 (2018). https://doi.org/10.1007/s40979-018-
0025-x
25. Berisha, E., Trindade, R.T., Bürgi, P.Y., Benkacem, O., Moccozet, L.: A versatile and
flexible e-assessment framework towards more authentic summative examinations in higher-
education. Int. J. Contin. Eng. Educ. Life-Long Learn. 29, 1 (2019). https://doi.org/10.1504/
IJCEELL.2019.10019538
26. Chuchalin, A.I.: Engineering education in the epoch of industrial revolution and digital
economy. High. Educ. Russ. 27, 47–62 (2018). https://doi.org/10.31992/0869-3617-2018-
27-10-47-62
27. Guàrdia, L., Crisp, G., Alsina, I.: Trends and challenges of e-assessment to enhance student
learning in higher education. In: Innovative Practices for Higher Education Assessment and
Measurement, pp. 36–56 (2017). https://doi.org/10.4018/978-1-5225-0531-0.ch003
28. Williams, J.B., Wong, A.: Closed book, invigilated exams versus open book, open web
exams: an empirical analysis. In: ASCILITE-Australian Society for Computers in Learning
in Tertiary Education Annual Conference, pp. 1079–1083. Australasian Society for
Computers in Learning in Tertiary Education (2007)
29. Berry, P., Thornton, B., Baker, R.K.: Demographics of digital cheating: who cheats, and
what we can do about it! In: Proceedings of the 2006 Southern Association for Information
Systems Conference, pp. 82–87 (2006)
30. Noguera, I., Guerrero-Roldán, A.-E., Rodríguez, M.E.: Assuring Authorship and Authen-
tication Across the e-Assessment Process (2017). https://doi.org/10.1007/978-3-319-57744-
9_8
Curriculum for Digital Culture at ITMO
University
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 235–244, 2020.
https://doi.org/10.1007/978-3-030-45697-9_23
236 E. Mikhailova et al.
management spheres. Big data has become a growth driver and a new resource in the
economy. The popular phrase "data is the new oil", formulated by Clive Humby, the
British mathematician and co-founder of dunnhumby, the company behind Tesco's
Clubcard, means that data is a raw material for economic growth. However, just like
oil, raw data needs to be processed before further application.
Information technologies have penetrated all activities of the modern human:
production, science, politics, commerce, everyday life, communications and culture. Our
future, and already our present, includes the Internet of Things, blockchain technology and
distributed networking, production automation and the robot economy, smart houses and
digital vision. Nowadays information technologies mean more than just using a
computer to perform tasks that have traditionally been done manually. Organizations
and individuals aim to execute their tasks better, faster and often differently
than in the past.
The digitalization of society is causing changes in the labour market as well. By the
year 2020, two million jobs will have been added to the global market, but at the same time
about seven million jobs will have disappeared. Jobs will open in the intellectual
and high-tech fields and will be reduced in the real economy and administrative sectors.
By 2020, Big Data technologies will increase employment, e.g. in the field of
mathematics and computer technology by 4.6%, in management by 1.3%, and in sales
by 1.25% per year, but will reduce the number of office staff workplaces by 6.06% [1].
The Russian government has adopted the programme "Digital Economy in
the Russian Federation", which is aimed at forming a full-fledged digital environment
in Russia [9]. Building a new economic structure based on the developing digital economy
places new demands on the vocational education system. Future specialists require
the skills to apply, and moreover to develop, modern and secure digital technologies and
platform solutions in the most important sectors of the economy and in the social
sphere.
Bachelor's students study the disciplines of the module for six terms, while for
master's students the module is shorter and takes two terms. The structure of the disci-
pline cluster divided by terms for the bachelor's degree programme is shown in Fig. 1.
Fig. 1. The structure of discipline’s cluster “Digital culture” for bachelor’s degree programme.
Obligatory courses are marked with the solid line, elective courses (i.e. students must select one
course each term) are marked with the dashed line. The term numbers are shown in the right
column.
Bachelor's students begin their studies with the discipline "Introduction to Digital Culture",
consisting of three sections. In the first, "fundamental", section, the following basic
concepts are covered: computer and operating system architecture, coding technolo-
gies, network technologies, information security, and Internet and web technologies.
The second section is devoted to personal information and the interaction of
humans with digital technologies. The questions of digital ethics, the culture of Internet
communications, personal security and blockchain technologies are considered: how to
communicate effectively with other users and organizations, how to present informa-
tion about yourself correctly, what data is public and what is private, how to ensure
information security, and what legislation exists in the field of data management in Russia
and other countries.
The lectures in the third section are devoted to modern achievements
in the field of information technology, such as virtual, augmented and
mixed reality, quantum technology, digital humanities, social networks and biblio-
graphic retrieval.
The next three terms of the bachelor's programme are devoted to the disciplines
forming the core of the cluster. They develop universal competences in data processing.
We need to know how to deal with the large amounts of information that we face every day;
it is easy to get lost in the data flow if you lack the skills to structure and visualize the data.
The goal of the first core discipline, "Data storage and processing", is to present practical
techniques and methods to store, process and analyze large amounts of data. The practical
assignments of this discipline are carried out by means of MS Excel, coding (not oblig-
atory), and relational and NoSQL database management systems. The discipline starts with
a description of data sources and data types, types and harmonization of measurement
scales, data cleaning and normalization methods, and time-series smoothing, then proceeds to
types of data visualization. Later, the storage and processing of structured and unstructured data by
means of relational and NoSQL database management systems are discussed, including
table processing and query building and optimization.
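A flavour of the relational part of the course can be given with a minimal example using Python's standard sqlite3 module; the table and data are invented for illustration.

```python
import sqlite3

# Illustrative example of a course topic: storing structured data in a
# relational DBMS and aggregating it with a query. Table and values
# are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, score REAL)")
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("Anna", 4.5), ("Boris", 3.0), ("Anna", 5.0)])

# Average score per student, the kind of grouping query practised
# in the assignments.
rows = conn.execute(
    "SELECT student, AVG(score) FROM grades "
    "GROUP BY student ORDER BY student"
).fetchall()
print(rows)  # [('Anna', 4.75), ('Boris', 3.0)]
conn.close()
```

The same aggregation done by hand in MS Excel (a pivot table) is the non-coding path through the assignment.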
The second discipline is called "Applied statistics". It explains the basic concepts of
statistics and applied statistical techniques in simple words. The goal of this discipline is
to teach students how to apply statistical methods to their vocational tasks and chal-
lenges and how to interpret the obtained results correctly. Among the discussed topics
are point and interval estimates, sample and distribution characteristics,
hypothesis testing and goodness-of-fit criteria.
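One of the listed topics, interval estimation, can be illustrated in a few lines; the sample data are invented, and the normal approximation with z = 1.96 is used for simplicity.

```python
import math
import statistics

# Illustrative example of a course topic: a 95% confidence interval
# for a sample mean (normal approximation, z = 1.96). Data invented.
data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(len(data))  # standard error
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(f"mean={mean:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

For small samples the course would instead use a t-distribution quantile, which widens the interval; the structure of the computation stays the same.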
The third discipline is closely related to statistics and examines various types of
machine learning and complex data analysis methods. The comparative advantages and
drawbacks of different methods, the underlying mathematics, and various applied
examples are given. Students learn the main approaches needed for big data
processing, such as modern regression and classification methods, searching for structure
in data, outcome analysis and the basics of Python programming. Two learning paths (basic and
advanced) are developed for both the "Applied statistics" and "Machine learning" disci-
plines, so that students can choose a path depending on their background. The dif-
ference between the two learning paths is in the software tools used to carry out the practical
assignments of the course (either Python, or MS Excel and MS Azure).
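As a taste of the discipline's content, ordinary least-squares regression, one of the methods mentioned above, can be written in a few lines of pure Python; the data points are invented for illustration.

```python
# Illustrative sketch of a course topic: ordinary least-squares
# fitting of y = a*x + b, written without external libraries.
def fit_line(xs, ys):
    """Return slope a and intercept b minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # lies exactly on y = 2x + 1
a, b = fit_line(xs, ys)
assert abs(a - 2) < 1e-9 and abs(b - 1) < 1e-9
```

In the basic learning path the same fit would be obtained with MS Excel's trendline tool; in the advanced path students would use Python libraries rather than this hand-rolled version.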
For the last year of the bachelor's degree programme, several electives are prepared,
focused on applying the studied methods to professional tasks in
different areas. Among these, the following topics are considered: queuing theory,
image processing, technical computing systems, the Internet of Things, etc.
As the initial background of master's degree students and their experience in the IT
field differ considerably, and they have only two terms to study, two learning paths
have been developed for them as well (see Fig. 2). The discipline of each term
consists of two courses for both learning paths. The first term starts with the course
"Initial data storage and processing", which is mandatory for all students regardless of
the chosen learning path. The course discusses initial data processing and data visualiza-
tion, as well as the storage and processing of large volumes of data by means of relational DBMS
and NoSQL systems. Afterwards, students can take a recommendation test to assess
their knowledge of statistics, which helps them choose their further learning path.
240 E. Mikhailova et al.
All students must take the course “Introduction to Machine Learning”; depending
on their knowledge of statistics, they can either choose the course “Advanced Machine
Learning” afterwards or take the course named “Elements of statistical data analysis”
prior to “Introduction to Machine Learning”, in this case skipping “Advanced Machine
Learning”.
Fig. 2. The structure of disciplines’ cluster “Digital culture” for master’s degree programme.
Obligatory courses are marked with the solid line, elective courses (i.e. students must select two
courses each term) are marked with the dashed line. The term numbers are shown in the right
column.
The course “Elements of statistical data analysis” describes the main statistical
methods, such as point and interval estimates, confidence intervals and statistical
hypothesis testing. The course opens with the necessary basic concepts of
probability theory.
The course “Introduction to Machine Learning” is devoted to the types of machine
learning and to applied tasks solved by means of machine learning methods. The
main attention is paid to regression types, classification techniques, data clustering
tasks, and to the comparative analysis of different approaches. Students who take
the basic learning path take this course in the second term.
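To give a flavour of the regression material such a course typically opens with — this sketch is not from the course itself, and the data points are invented — a one-variable least-squares fit can be written in a few lines of plain Python:

```python
# Toy one-variable least-squares fit y ≈ a*x + b, in plain Python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form OLS estimates: slope from centred cross- and auto-covariance
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)
```

In practice the courses would use library implementations (e.g. scikit-learn); the closed form is shown only to make the underlying mathematics concrete.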
In the second term, master’s students take the discipline named “Applied artificial
intelligence”. This discipline also consists of two courses. In the advanced learning
path, the discipline includes the course named “Advanced machine learning”.
This course discusses factor analysis methods, variable reduction problems, multiclass
regression and reinforcement learning methods.
Curriculum for Digital Culture at ITMO University 241
Three elective courses of the discipline “Applied artificial intelligence” give stu-
dents an overview of artificial intelligence applications in different scopes.
The course named “Artificial intelligence in science and business” shows the
application of IT achievements in information security, production automation,
speech synthesis and recognition, as well as knowledge graphs.
The course named “Text processing” is devoted to natural language processing
tasks. It discusses information retrieval, language modelling, thesauri and
ontologies, as well as machine translation.
The “Image processing” course considers computer vision and the basics of image
processing, the application of neural networks in computer vision tasks, face and
gesture recognition, and object detection.
Fig. 3. The implementation of the blended learning method in the disciplines’ cluster “Digital
Culture”.
1.5 Results
The cluster of disciplines “Digital Culture” was launched in September 2018. Almost
4000 students (both bachelor’s and master’s) enrol at ITMO University every
year. All of them were enrolled in the disciplines’ cluster “Digital Culture”.
At the end of the first term in 2018, the students who had studied the disciplines of
the cluster were surveyed. About 16% of master’s and 24% of bachelor’s students took
part in this survey. Most students believe that the courses will help them in their future
professional activity and rate the quality of the course materials as high or good.
Most of those students who contacted technical support were satisfied with the
outcome of their requests, while 40% of master’s and 62% of bachelor’s students had
no need to contact the support at all. 64% of the surveyed master’s and 72% of
bachelor’s students did not contact teachers for clarification, while the other students
mostly used e-mail to communicate with the teacher; the next most popular form of
contact was the online forum, where a student can get an answer to their questions
from both the teacher and fellow students.
Most undergraduates believe that the course is organized logically and consistently,
and that the information given is clear and well supported by examples. However, for
students specializing in less “technical” fields, whose focus area is management,
international relations or biotechnology, the material seemed rather difficult to
master. They found both the theoretical material and the practical tasks difficult. That is
why the basic learning path was introduced in the current (2019–2020) year of study. The
statistics of the last year showed that most students coped with the disciplines
excellently (i.e. got the grade “5”).
New topics and information are added to the courses every year, and new tasks are
developed. The Higher School of Digital Culture works closely with
the heads of educational programmes at ITMO University in order to understand the
needs of students from different specialties and to develop relevant and important
courses for the third year of the bachelor’s degree programme.
We also collaborate with other higher schools and companies in Russia and
abroad, as all the methods described in the courses are aimed at practical application,
and the cluster of disciplines can be offered to other learners, divided into
different courses depending on their initial interest. In particular, the course “Intro-
duction to Digital Culture” has already been provided to students of Ural Federal
University, as well as to secondary school teachers of Russian schools in the CIS
countries.
We see that the new educational approach is endorsed by the students’ attitude and
their results. We are planning to increase the number of elective courses to show
students how digital technology and data processing methods are used in different
areas of science and business. We also plan to make the learning paths more detailed,
allowing students to choose the path they need according to their experience and future
requirements.
1 Introduction
This article is framed within the Timonel R&D Excellence Project (Ref. EDU2016-
75892-P) and focuses on the elaboration of a Recommendation System based on the
orientation and mentoring needs of undergraduates and graduates, focusing on their
academic, personal, professional and ICT orientation. As objectives prior to its cre-
ation, the analysis of the needs of students and teachers in relation to the orientation and
tutorial function in European universities, the perception of the factors and elements
that determined the quality of the tutorial action plan was established according to the
teaching staff, as well as detecting good practices by tutorial action plan established
since the first decade of 2000 in European universities (Álvarez González 2012).
Tutoring is currently a potentially strategic factor for the quality of the educational
model of the EHEA, since its actions can improve the processes of student access and
adaptation, optimize the training process, prevent the abandonment of studies and
improve professional development processes (Álvarez 2012; Pantoja and Campoy
2005). Good tutoring practice at university implies a new learning approach, in which
tutorial practice becomes an element of teaching quality and an essential requirement
to respond to the demands of university students (Durán and Estay-Niculcar 2016;
Pantoja 2005).
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 245–252, 2020.
https://doi.org/10.1007/978-3-030-45697-9_24
246 A. Pantoja Vallejo et al.
2 Method
2.1 Design
This study adopts a mixed-methods perspective based on the triangulation of the
collected data. This type of method combines qualitative and quantitative approaches,
strengthening the research potential (Creswell and Zhang 2009). The qualitative
part analysed the results obtained in discussion groups. The quantitative phase
followed a descriptive-exploratory design through two ad hoc validated question-
naires for undergraduates.
2.2 Participants
A total of 2779 students completed the questionnaire. The students belonged to the
universities of Jaén and Granada in Spain (UJA, UGR), the Polytechnic of Coimbra in
Portugal (PIC) and Queen Mary of London (QML). These universities were selected
because they participated in the R&D project. The faculties and degrees that were
similar across the four universities were selected. Given the prior context analysis, the
conclusions could be generalized.
With regard to the quantitative research, the sample was obtained by proportional
stratified random sampling (except at PIC and QML, where sampling was purposive
due to the variability of the degrees and, in the case of QML, the difficulty of accessing
the subjects), with a calculated error of 5% and according to the variables included in
Table 1. In the cases of UJA and UGR, only the common degrees were taken into
consideration.
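The paper does not state which sample-size formula was used for the 5% calculated error; a common choice at that margin is Yamane's approximation n = N / (1 + N·e²), sketched here with an assumed population size for illustration only:

```python
import math

def yamane_sample_size(population: int, error: float = 0.05) -> int:
    """Yamane's approximation: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * error ** 2))

print(yamane_sample_size(10000))  # → 385
```

At a 5% margin the required sample plateaus around 400 regardless of population size, which is consistent with stratified samples of this scale.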
The sample selection for the participants of the discussion groups was purposive,
depending on the research needs (Table 2).
2.3 Instruments
The scale La práctica orientadora y tutorial en el alumnado y egresados universitarios
(POTAE-17) was designed for the quantitative analysis of the project. POTAE-17 has
four dimensions: academic, personal and professional orientation, and the use of ICT.
Only one of the four dimensions, the one referring to the use of ICT in university
tutoring, is taken into consideration here.
The scales have a Likert format with 5 response options (from totally disagree to
totally agree) and each scale has 61 items. In addition, the item «What overall score do
you grant to the use you have made of ICT in student orientation? (From 0 to 10)» was
included. The psychometric characteristics of both tests, based on content validity,
robustness and reliability, guarantee the degree of trust, replicability and internal
consistency. As a result, Cronbach’s alpha for POTAE-17 reached a value of .87, with
KMO = .853 and Bartlett’s sphericity test (χ² = 6701.698; p = .000). In addition, four
factors were extracted through the Kaiser criterion, which also match the theoretical
model proposed in the confirmatory factor analysis.
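For readers unfamiliar with the reported reliability coefficient, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below is generic, with toy Likert answers, not the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a matrix of respondents x item scores."""
    k = len(items[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*items)]  # per-item sample variance
    total_var = variance([sum(row) for row in items])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy 5-point Likert answers (4 respondents x 3 items), for illustration only
scores = [[4, 5, 4],
          [2, 3, 2],
          [5, 5, 4],
          [3, 3, 3]]
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

Values above roughly .80, such as the .87 reported for POTAE-17, are conventionally read as good internal consistency.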
For the qualitative analysis, discussion groups were held with the selected
students from different years. In the discussion groups, questions were raised about
the needs that students encountered in terms of academic, personal and professional
orientation, as well as the use of ICT.
2.4 Procedure
Firstly, the sample size was calculated for the different universities. Once the
number of students to whom the scale would be administered was established, the
professors concerned were asked for access to their classrooms. The scale was answered
in 10 min. Subsequently, students from the 2nd and 4th years, master’s, doctoral and
graduate levels were randomly chosen for the discussion groups. Finally, the quantitative
and qualitative data were analysed independently in order to reach common conclusions.
3 Results
3.1 Quantitative Study
First, a descriptive study was carried out, analysing the students’ academic year
against the frequency of having selected the option “Totally agree”.
The most frequent item is “On the university platform there is information
related to my subjects”, especially among 2nd-year students. The most frequent item
among 4th-year students is “I use email in tutoring”. In this sense, García (2010) states
that, for students, technology-based tools such as email and the virtual campus (and the
different tools it includes) are gaining ground over more traditional media such as the
telephone, as evidenced by the fact that the frequency of use of the former is much
higher than that of the latter. On the other hand, the item for which the option
“Totally agree” was selected least frequently is “I use videoconferencing
(Skype or similar) in my tutorials” among 4th-year students, followed by the item “In
general, teachers have a social network (Facebook or similar) with their supervised
students” among 2nd-year students (Table 3).
Table 3. Frequency and percentages in relation to the academic year of the response “Totally
agree” (value 5)

Item | Frequency (2°, 4°, Postgrad.) | Percentage (2°, 4°, Postgrad.)
Classes promote the mastery of ICT | 119, 106, 138 | 36.6, 32.0, 28.3
I know the job search online | 226, 188, 236 | 34.8, 28.9, 36.3
Teachers, in general, have a professional website | 214, 130, 124 | 45.7, 27.8, 26.5
The teacher’s website is up to date | 166, 98, 64 | 50.6, 29.9, 25.2
On the university platform there is information related to my subjects | 392, 287, 225 | 43.4, 31.7, 24.9
I use email in tutoring | 292, 294, 292 | 33.3, 33.5, 33.3
I use videoconferencing (Skype or similar) in my tutorials | 19, 18, 31 | 27.9, 26.5, 45.6
In general, teachers have a social network (Facebook or similar) with their supervised students | 24, 27, 29 | 30.0, 33.8, 36.3
The class group has a WhatsApp group in which teachers participate | 23, 20, 33 | 30.3, 26.3, 43.4
I have a specific forum in the university platform | 172, 125, 132 | 40.1, 29.1, 30.8
There is a repository of digital resources at our disposal | 156, 112, 134 | 38.8, 27.9, 33.3
I have a list of links to Web pages that help me as guidance in the subjects | 133, 88, 110 | 40.2, 26.6, 32.2
I know resources or digital networks about my studies | 138, 85, 130 | 39.1, 24.1, 36.8
I am informed of the possibilities of teleworking | 49, 34, 42 | 39.2, 27.2, 33.6
Taking the universities as variables, the item “On the university platform there is
information related to my subjects” is the most frequent for the option “Totally agree”
(UJA, UGR and QML). The item “I know the job search online” is the most valued at
PIC. The items least valued by students at the different universities are “I use
videoconferencing (Skype or similar) in my tutorials” (UJA, PIC and QML) and “In
general, teachers have a social network (Facebook or similar) with their supervised
students” (UGR) (Table 4).
Table 4. Frequency and percentages in relation to universities of the response “Totally agree”
(value 5)

Items | Frequency (UJA, UGR, PIC, QML) | Percentage (UJA, UGR, PIC, QML)
Classes promote the mastery of ICT | 179, 118, 62, 4 | 49.3, 32.5, 17.1, 1.1
I know the job search online | 271, 118, 134, 20 | 41.7, 34.6, 20.6, 3.1
Teachers, in general, have a professional website | 191, 165, 107, 5 | 38.8, 39.8, 22.9, 1.1
The teacher’s website is up to date | 140, 113, 72, 3 | 42.7, 34.5, 22.0, 0.8
On the university platform there is information related to my subjects | 435, 294, 101, 25 | 48.1, 32.5, 15.3, 5.2
I use email in tutoring | 402, 360, 97, 19 | 45.8, 41.0, 11.0, 2.2
I use videoconferencing (Skype or similar) in my tutorials | 20, 32, 15, 1 | 29.4, 47.1, 22.1, 1.5
In general, teachers have a social network (Facebook or similar) with their supervised students | 34, 29, 17, 0 | 42.5, 36.3, 21.3, 0.0
The class group has a WhatsApp group in which teachers participate | 37, 31, 25, 1 | 48.7, 40.8, 19.5, 0.8
I have a specific forum in the university platform | 204, 157, 58, 10 | 47.6, 36.6, 13.5, 2.3
There is a repository of digital resources at our disposal | 184, 142, 64, 12 | 45.8, 35.3, 15.9, 3.0
I have a list of links to Web pages that help me as guidance in the subjects | 146, 120, 54, 11 | 44.1, 36.3, 16.3, 3.3
I know resources or digital networks about my studies | 163, 124, 57, 9 | 46.2, 35.1, 16.1, 2.5
I am informed of the possibilities of teleworking | 54, 44, 25, 2 | 43.2, 35.2, 20.0, 1.6
ICT Impact in Orientation and University Tutoring 251
4 Conclusions
The quantitative analysis shows that students at the different universities require
greater contact with their tutor through videoconferencing, in order to receive
orientation at any place and time. In addition, students demand a link with their tutor
through social networks, as they consider these to be tools that need to be
incorporated into the academic context.
On the other hand, in the qualitative analysis, students report that virtual platforms
do adequately inform them about the content of the different areas of knowledge, in
addition to the guidance services that universities offer. Among communication
channels, email remains the tool that both tutors and students use most frequently.
It can be concluded that students wish to receive orientation through more
up-to-date communication channels, together with a greater development of
competences by the teaching staff, so that the students’ needs in terms of
technological development can be met.
As new approaches, it is proposed to establish guidelines so that university tutors
can offer better guidance services to students through the use of ICT.
References
Álvarez, P.: Los planes de tutoría de carrera: una estrategia para la orientación al estudiante en el
marco del EEES. Educar 48(2), 247–266 (2012)
Bustos López, M., Hernández Montes, A.J., Vásquez Ramírez, R., Alor Hernández, G., Zatarain
Cabada, R., Barrón Estrada, M.L.: EmoRemSys: Sistema de recomendación de recursos
educativos basado en detección de emociones. RISTI. Revista Ibérica de Sistemas y
Tecnologías de Información 17, 80–95 (2016). https://doi.org/10.17013/risti.17.80-95
Coll, C., Moreno, C.: Psicología de la Educación Virtual. Morata, Madrid (2008)
Creswell, J.W., Zhang, W.: The application of mixed methods designs to trauma research.
J. Trauma. Stress.: Off. Publ. Int. Soc. Trauma. Stress. Stud. 22(6), 612–621 (2009)
Durán, R., Estay-Niculcar, C.A.: Las buenas prácticas docentes en la educación virtual
universitaria. REDU. Revista de Docencia Universitaria 14(2), 159–186 (2016). https://doi.
org/10.4995/redu.2016.5905
Fernández-Salinero, C., González, M.R., Berlando, M.R.: Mentoría pedagógica para profesorado
universitario novel: estado de la cuestión y análisis de buenas prácticas. Estudios sobre
educación 33, 49–75 (2017)
García, B.: La tutoría en la universidad de Santiago de Compostela: percepción y valoración de
alumnado y profesorado. Tesis doctoral. Universidad de Santiago de Compostela. (2010)
Martínez Clares, P., Pérez Cusó, J., Martínez Juárez, M.: Las TICS y el entorno virtual para la
tutoría universitaria. Educación XX1 19(1), 287–310 (2016). https://doi.org/10.5944/
educxx1.13942
Pantoja, A.: La acción tutorial en la universidad: propuestas para el cambio. Cultura y Educación
17(1), 67–82 (2005)
Zambrano, D.L., Zambrano, M.S.: Las Tecnologías de la Información y las Comunicaciones
(TICs) en la educación superior: consideraciones teóricas. Revista Electrónica Formación y
Calidad Educativa (REFCalE) 213–228 (2019)
Blockchain Security and Privacy in Education:
A Systematic Mapping Study
1 Introduction
In the last decade, blockchain became famous, beginning in the financial sector with
Bitcoin, Ethereum and other cryptocurrencies; the technology has since spread to
other sectors such as education. Blockchain is a technology that secures the settlement
of transactions using cryptography, creating blocks that are broadcast to the blockchain
network.
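As a minimal illustration of this block-creation idea — a generic sketch, not any of the systems surveyed below, and the helper name `make_block` is hypothetical — each block can carry a SHA-256 hash of its own contents plus the hash of its predecessor:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Assemble a block and seal it with a SHA-256 hash of its contents."""
    block = {
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()  # canonical serialization
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
nxt = make_block(["Bob pays Carol 2"], prev_hash=genesis["hash"])
# Each block embeds its predecessor's hash, so altering an earlier
# block would invalidate every later link in the chain.
assert nxt["prev_hash"] == genesis["hash"]
```

Real blockchains add consensus (e.g. proof of work) and digital signatures on top of this hash-linking; the sketch shows only why altering stored content is detectable.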
The technology promotes security and privacy by making it difficult to alter the
content, but the risk of data leakage cannot be ruled out [5, 8]. A systematic review of
blockchain cyber security published in 2019 concluded that 45% of the studies
concerned IoT and only 7% data privacy [19]. We hope that this study will help to
join security and privacy with the application of blockchain in education.
Many studies have shown that blockchain is applicable to education: dAppER, an
automated decentralized application for examination review [15]; SmartCert, which
guarantees data security and confidentiality using cryptographically sealed records [13],
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 253–262, 2020.
https://doi.org/10.1007/978-3-030-45697-9_25
254 A. Nabil et al.
2 Method
In this section we use all the stored data about the relevant articles to answer our
research questions.
RQ1 - How Many Research Papers are Produced?
We found 452 papers in total: 66% from Science Direct, 15% from IEEE Xplore,
14% from Springer Link and only 5% from ACM Digital Lib, as shown in Fig. 1 and
Table 4. All the publications were found between 2014 and 2020.
Fig. 1. Distribution of retrieved papers by digital library (Science Direct 66%, IEEE Xplore
15%, Springer Link 14%, ACM Digital Lib 5%).
After the exclusion process with the criteria in Table 3, only the papers published
between 2018 and 2019 remained relevant for our research: 56% of them were in IEEE
Xplore, 19% in Science Direct, 13% in Springer Link and 12% in ACM Digital Lib.
We can see that concern about security and privacy in education began in 2018; the
first publication on blockchain in education addressing data privacy appeared in
January 2018.
Blockchain Security and Privacy in Education 257
Fig. 2. Evolution of publications per year.
Fig. 3. Libraries in which the selected articles have been published (IEEE Xplore 56%, Science
Direct 19%, Springer Link 13%, ACM Digital Lib 12%).
[Figure: distribution of selected papers by country — India 28%, USA 18%; Brazil, Germany,
United Kingdom, Oman, Jordan and Indonesia 9% each.]
• For journals: (+5) if the journal ranking is Q1, (+4) if the journal ranking is Q2, (+3)
if the journal ranking is Q3, (+2) if the journal ranking is Q4, and (+1) for others.
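The journal part of this scoring rule can be restated as a one-line lookup; `journal_score` is a hypothetical helper written only to make the rule explicit, not code from the study:

```python
# Hedged sketch of the stated journal-ranking score: Q1→5, Q2→4, Q3→3, Q4→2, others→1
def journal_score(ranking: str) -> int:
    return {"Q1": 5, "Q2": 4, "Q3": 3, "Q4": 2}.get(ranking, 1)

print(journal_score("Q1"), journal_score("unranked"))  # → 5 1
```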
[Figure: domains of blockchain security and privacy in education — Data Security 28%, Data
Privacy 23%, Academic Certificate 15%, Smart contract 10%, Authorization 10%, Degree
Verification 8%, Authentication 3%, Cryptography 3%.]
Table 7. Artifacts

Artifacts | Studies
Analysis | [3–5, 11, 16, 19]
Case study | [2, 6, 13, 14]
Framework | [8, 10, 17]
Proof of concept | [15, 18]
System design | [7]

[Figure: share of proposed artifacts — Analysis 37%, Case study 25%, Framework 19%, Proof
of concept 13%, System design 6%.]
Fig. 7. Coverage of domain of blockchain security and privacy in education by proposed artifacts
4 Discussion
This SMS was conducted to investigate blockchain security and privacy and its
application in Education field. We selected 16 relevant papers framed around three
major themes: Blockchain in Education, Security and Privacy. To the author’s
knowledge, this is the first Systematic Study around this topic. Most of the studies we
found were between 2018 and 2019 even if the Search String was done to retrieve all
papers from 2014 but most of them didn’t answer the questions of Blockchain Security
and Privacy in Education. Concerning the artifacts, 37% of the papers were analysis
and 25% case study but only 13% were Proof of concept which prove that we are in the
beginning of the application of data security and data privacy to Blockchain in
Education.
Blockchain technology is a great asset to education: it will help to move forward
with digitalizing schools and universities, to reduce costs and to secure students’
data. Many studies have addressed blockchain applications in education, but only a
few of them were concerned with data protection, as this systematic study has shown.
The data and information stored in a blockchain will be targeted by many malicious
actors and will be at risk. This systematic study was a first step towards an overview of
the state of the art in blockchain security and privacy and their application in edu-
cation; our future work will focus on blockchain vulnerabilities in data protection and
data rights management and on the possibilities of applying them to blockchain in
education.
References
1. Ali Alammary, A., Alhazmi, S., Almasri, M., Gillani, S.: Blockchain-based applications in
education: a systematic review. Appl. Sci. 9(12), 2400 (2019)
2. Arenas, R., Fernandez, P.: CredenceLedger: A Permissioned Blockchain for Verifiable
Academic Credentials (2018)
3. Chen, G., Xu, B., Lu, M., Chen, N.S.: Exploring blockchain technology and its potential
applications for education. Smart Learn. Environ. 5(1), 1 (2018)
4. Farah, J.C., Vozniuk, A., Rodríguez-Triana, M.J., Gillet, D.: A Blueprint for a Blockchain-
Based Architecture to Power a Distributed Network of Tamper-Evident Learning Trace
Repositories (2018)
5. Feng, Q., He, D., Zeadally, S., Khan, M.K., Kumar, N.: A survey on privacy protection in
blockchain system. J. Netw. Comput. Appl. 126, 45–58 (2019)
6. Franzoni, A.L., Cárdenas, C., Almazan, A.: Using Blockchain to Store Teachers’
Certification in Basic Education in Mexico (2019)
7. Ghaffar, A., Hussain, M.: BCEAP - A Blockchain Embedded Academic Paradigm to
Augment Legacy Education through Application (2019)
8. Gilda, S., Mehrotra, M.: Blockchain for Student Data Privacy and Consent (2018)
9. Yumna, H., Khan, M.M., Ikram, M., Ilyas, S.: Use of blockchain in education: a systematic
literature review. In: Intelligent Information and Database Systems, January 2019
10. Han, M., Li, Z., He, J., Wu, D., Xie, Y., Baba, A.: A Novel Blockchain-based Education
Records Verification Solution (2018)
11. Al Harthy, K., Al Shuhaimi, F., Al Ismaily, K.K.J.: The Upcoming Blockchain Adoption in
Higher-Education: Requirements and Process (2019)
12. Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies in software
engineering. In: Proceedings of the 12th International Conference on Evaluation and
Assessment in Software Engineering, June 2008
13. Kanan, T., Obaidat, A.T., Al-Lahham, M.: SmartCert BlockChain Imperative for Educa-
tional Certificates (2019)
14. Lizcano, D., Lara, J.A., White, B., Aljawarneh, S.: Blockchain-Based Approach to Create a
Model of Trust in Open and Ubiquitous Higher Education (2019)
15. Mitchell, I., Hara, S., Sheriff, M.: dAppER: Decentralised Application for Examination
Review (2019)
16. Mohanta, B.K., Jena, D., Panda, S.S., Sobhanayak, S.: Blockchain Technology: A Survey on
Applications and Security Privacy Challenges (2019)
17. Srivastava, A., Bhattacharya, P., Singh, A., Mathur, A., Prakash, O., Pradhan, R.: A
Distributed Credit Transfer Educational Framework based on Blockchain (2018)
18. Taufiq, R., Trisetyarso, A., Kosala, R., Ranti, B., Supangkat, S., Abdurachman, E.:
Robust Crypto-Governance Graduate Document Storage and Fraud Avoidance Certificate in
Indonesian Private University, August 2019
19. Taylor, P.J., Dargahi, T., Dehghantanha, A., Parizi, R.M., Choo, K.K.R.: A Systematic
Literature Review of Blockchain Cyber Security, February 2019
The Development of Pre-service Teacher’s
Reflection Skills Through Video-Based
Classroom Observation
Ana R. Luís
1 Background
In this section we survey background literature on initial teacher education (Sect. 1.1)
and discuss the pedagogical motivation underlying the use of video-based classroom
observation (Sect. 1.2).
gap between theory and practice [5]. Through micro-teaching, students are given the
opportunity to experiment with different teaching methods, explore pedagogical
strategies and receive constructive comments within a controlled teaching environment.
This article reports on a pedagogical strategy involving video-based classroom
observation, which is also aimed at preparing future language teachers for effective
classroom performance and which, in effect, may be combined with micro-teaching
practices (cf. Sect. 4).
In this section we lay out the pedagogical experiment by focusing on the goals, the
target audience and the methodology adopted to evaluate the students’ perception.
2.3 Evaluation
The degree of achievement of the experiment was assessed through an open-ended
questionnaire survey, which was completed by the students at the end of the year (cf.
Table 4). The research questions underlying the questionnaire are shown below:
• To what extent does video-based classroom observation enhance pre-service
teachers’ awareness of the multidimensional nature of teacher practice within an
English language context?
• What are pre-service teachers’ perceptions of the benefits of video-based
classroom observation for their ability to improve their future teaching practice?
The open-ended questionnaire (cf. Table 4) was organized as follows: Section 2,
containing questions 2.1 to 2.10, was aimed at finding out what pre-service teachers
The Development of Pre-service Teacher’s Reflection Skills 267
had learned from observing and discussing video-recorded English language classes.
Section 3 focused specifically on the students’ perception of the benefits of engaging in
classroom observation. Section 4 attempted to determine which pedagogical domains
had the greatest impact on pre-service teachers.
Table 4. Questionnaire
In what follows, we report on the results of our qualitative research study. We survey
the answers provided by pre-service teachers to our open-ended questionnaire.
As mentioned before, the goal of asking students “What did you learn about …?”
was to evaluate their awareness of the various teaching domains and their ability to
apply previously studied theoretical-didactic content to concrete teaching and learning
contexts. Table 5 contains two samples of the students’ answers, which reveal that the
students were able to link previous methodological course content to detailed and
specific aspects of the observed teacher performance.
In line with studies on the role of video in teacher professional development,
these results confirm that video-based classroom observation encourages inexperienced
teachers to engage confidently in the observation experience. What seems
evident is that the development of pedagogical thinking clearly benefits from the
flexibility provided by video, which gives them the opportunity to adjust the
observation, analysis and reflection to their own pace and rhythm [10].
268 A. R. Luís
Table 5. Sample of students’ answers to the question “What did you learn about…?”
In response to the question “What is your overall opinion about observing video-
taped English classes?”, the replies revealed that pre-service teachers had a very pos-
itive perception of engaging in video-based classroom observation. As the sample
provided in Table 6 shows, their receptive attitude was determined by their growing
awareness of the multiple layers of teacher performance, such as i) the pedagogical
strategies developed by the teachers to enhance learning; ii) the activities carried out by
teachers for the development of specific language skills; iii) the teacher feedback; iv)
the instructions; v) the management of time and space; vi) body language; vii) teacher-
student interaction, among others.
It is worth noting that pre-service teachers were able to share their analytical and
reflective skills during their joint observation and discussion. As has been noted in
previous research, video-enhanced teacher reflection does effectively stimulate col-
laborative learning and can contribute to the development of an emerging teacher
identity [10, 16].
Table 6. Sample of students’ answers to the question “What is your overall opinion about
observing video-recorded English classes?”
The last question of the questionnaire (“Is there anything you would like to apply to
your own practice …?”) was aimed at identifying the impact the experiment had on
students’ own practice as future language teachers. A small sample of the student’s
answers is given in Table 7. The responses further reveal a growing awareness of
fundamental classroom-specific issues such as careful lesson planning, enhancement of
oral skills through group and peer work, teacher-student interaction, a balance between
more controlled and more autonomous classroom activities, the carefully planned use
of the white/black-board, error correction, teacher body language, among others.
These findings are also consistent with previous studies which show that students
are willing to adjust their practice to the teaching strategies observed in the video-
recorded classes and that therefore classroom observation effectively improves stu-
dents’ own lesson planning and classroom performance [13]. Ultimately, video-based
classroom observation promotes sustained teacher reflection and enhances teacher
noticing [9, 14].
Table 7. Sample of students’ answers to the question “Is there anything you would like to apply
to your own practice …?”
4 Conclusion
Underlying this experiment was the observation that initial teacher education programs
need to be supplemented with opportunities that allow students to become familiar with
the dynamics of the classroom and teacher practice. Classroom observation raises
students’ awareness of the classroom dynamics and the interaction between the various
pedagogical domains. In addition, the use of video allows students to adjust the
observation experience to their own needs and feel more encouraged to train their
analytical and reflection skills. The sample answers have revealed a very positive
perception of video-enhanced classroom observation.
Based on what we have learned, a future line of work would include the video-recording
of the students’ own classes within a micro-teaching context [5]. Self-observation
through video has been shown to significantly raise teachers’ self-awareness, giving
them the opportunity to reflect in more detail on their own weaknesses and strengths
[16]. It also broadens the scope of classroom observation not
only by developing self-assessment skills but also because classroom reflection would
be directed at the students’ own teaching practice [13]. These and other research
questions may also be productively extended to other subject knowledge areas, espe-
cially within the context of MA programs in which theory and practice are still poorly
articulated [1, 2].
References
1. Korthagen, F.: How teacher education can make a difference. J. Educ. Teach. 36(4), 407–
423 (2010)
2. Korthagen, F., Kessels, J., Koster, B., Lagerwerf, B., Wubbels, T.: Linking Practice and
Theory: The Pedagogy of Realistic Teacher Education. Routledge, London (2001)
3. Wallace, M.: Training Foreign Language Teachers. Cambridge University Press, Cambridge
(1991)
4. Flores, M., Vieira, F., Ferreira, F.: Formação inicial de professores em Portugal: problemas,
desafios e o lugar da prática nos mestrados em ensino pós-Bolonha. In: Borges, M.C.,
Aquino, O.F. (eds.) A formação inicial de professores: olhares e perspectivas nacionais e
internacionais, pp. 61–96. EDUFU, Uberlândia (2014)
5. Allen, D., Eve, A.: Microteaching. Theory Into Pract. 7(5), 181–185 (1968)
6. Dewey, J.: How We Think: A Restatement of the Relation of Reflective Thinking to the
Educative Process. Heath, Boston (1933)
7. Ottesen, E.: Reflection in teacher education. Reflective Pract. Int. Multi. Perspect. 8(1), 31–
46 (2007)
8. Larrivee, B.: Transforming teaching practice: becoming the critically reflective teacher.
Reflective Pract. Int. Multi. Perspect. 1(3), 293–307 (2000)
9. Scida, E., Firdyiwek, Y.: Video reflection in foreign language teacher development. In:
Allen, H.W., Maxim, H.H. (eds.) Issues in Language Program Direction: Educating the
Future Foreign Language Professoriate for the 21st Century, pp. 231–237. Heinle, Boston
(2013)
10. Marsh, B., Mitchell, N.: The role of video in teacher professional development. Teach. Dev.
18(3), 403–417 (2014)
11. Richards, J., Farrell, T.: Classroom observation in teaching practice. In: Richards, J., Farrell,
T. (eds.) Practice teaching: A Reflective Approach, pp. 90–105. Cambridge University Press,
New York (2011)
12. Thornbury, S., Watkins, P.: The CELTA Course. Trainer’s Manual. Cambridge University
Press, Cambridge (2007)
13. Grant, T., Kline, K.: The impact of video-based lesson analysis on teachers’ thinking and
practice. Teach. Dev. 14(1), 69–83 (2010)
14. Sherin, M.: New perspectives on the role of video in teacher education. In: Brophy, J. (ed.)
Using video in teacher education, pp. 1–27. Elsevier, London (2004)
15. Sherin, M., Russ, R.: Teacher noticing via video. In: Calandra, B., Rich, P. (eds.) Digital
Video for Teacher Education: Research and Practice. Routledge, New York (2014)
16. Maclean, R., White, S.: Video reflection and the formation of teacher identity in a team of
pre-service and experienced teachers. Reflective Pract. Int. Multi. Perspect. 8(1), 47–60
(2007)
17. Welsch, R., Devlin, P.: Developing preservice teachers’ reflection: examining the use of
video. Action Teacher Educ. 28(4), 53–61 (2012)
Formative Assessment and Digital Tools
in a School Context
Abstract. Digital tools, particularly the so-called apps, are a current topic
whose potential in the school context is still little explored. Formative
assessment, in turn, is also little explored, yet it is of particular importance
in the context of inclusive education, understood in a broad sense in which
all students have specific needs. The literature review produced by cross-
referencing the two topics - formative assessment and the use of apps -
provides good indicators of how technology can complement formative
assessment, help root it more deeply in practice and support a performance
more aligned with inclusive education, taking into account the profile of the
student on leaving compulsory schooling. In this context, a descriptive and
analytical study was carried out, based on a survey about the use of digital
tools and formative assessment. The results obtained allowed us to conclude
that the school environment needs to evolve above all at the level of material
resources; this evolution is less pressing at the level of human resources and
of the attitudes that promote the use of formative assessment techniques
(FATs) and apps. There is thus an opportunity to improve existing applications
so that they better support formative assessment by attenuating its main
limitations. As applications evolve, the prospects of wider use of apps and
FATs broaden.
1 Introduction
In the school context, the use of digital tools, particularly those commonly called apps,
is a current topic whose potential is still little explored. Paradoxically, formative
assessment is a long-standing topic. However, it is also little used, even though it may
be of prime importance in the context of inclusive education, understood as a broad
view in which all students have specific needs. Pacheco [1] states that formative
assessment is absent from assessment
practices, although the normative framework since 2005 considers it the main mode of
assessment. Consequently, the pedagogical differentiation in school curricular prac-
tices, from elementary to secondary school, is also absent, with a prevailing curriculum
uniformity [1]. The assessment of subjects and organizations has been the central axis
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 271–283, 2020.
https://doi.org/10.1007/978-3-030-45697-9_27
272 S. Paiva et al.
in the various attempts to model education on markets, since there is no
competitiveness without assessment. Differentiation is performed through external
evaluations (exams, intermediate tests, PISA, etc.) and rankings, thereby replacing the
logic of internality and formative assessment (Machado, 2017, p. 337).
The diversity of existing education-focused apps, the panoply of Formative
Assessment Techniques (FATs), the limited knowledge about both and their reduced
use are the two realities analyzed in this study, with the aim of enhancing the synergies
at their intersection. The study also focuses on the reasons behind the limited uptake of
Formative Assessment Techniques and on the support that the school provides for the
use of Information and Communication Technologies (ICT). This focus extends to
teachers’ attitudes towards some of the changes that the digital environment brings to
the school environment.
The importance of deepening knowledge about the difficulties of establishing
formative assessment stems from the extensive scientific evidence of its potential to
support students and teachers in overcoming weaknesses and difficulties, with very
positive impacts on motivation, engagement, achievement and autonomy.
Spector et al. [2] highlight that Ecclestone [3], supported by Johnson et al. [4],
Narciss [5], Spector [6], Woolf [7], argued that formative assessment or assessment for
learning is now considered an integral component of good teaching, student motiva-
tion, engagement and higher levels of achievement [2].
Faber and Visscher [8], drawing on Konstantopoulos, Miller and van der Ploeg [9]
and on Kluger and DeNisi [10], note that feedback appears to be more effective when
assessments are more frequent. Feedback focused on the learning task can be effective,
whereas feedback directed at the self as a person is not [10]. They also conclude that, in
general, elaborated feedback is more effective for learning than simple feedback, in line
with the conclusions of Bangert-Drowns et al. [11] and Shute [12]. Elaborated feedback
is even more effective when combined with the setting of performance goals (Locke
and Latham [13]).
Bhagat and Spector [14], in the light of an extended retrospective analysis by
Narciss [5] and Spector and Yuen [15] of research conducted over the past 50 years or
more on learning, identified three main outcomes: (a) time on task predicts learning
outcomes, (b) formative feedback tends to improve learning, and (c) prior knowledge
influences the learning experience. The second outcome is of particular relevance to the
present study [14].
Spector et al. [2], referring to Ellis [16], identify as one of the main limitations of
formative assessment the difficulty of collecting learning-interaction data and outcomes,
and of analyzing formative feedback and assessment. With the use of 21st-century
technologies these limitations are removed, given easy access to and analysis of
performance and assessment data. New technologies can also support the development
of key 21st-century skills, namely critical thinking and problem solving. Technology-
supported formative assessment is also particularly valuable in challenging settings
(large and multi-class classrooms, problem-based learning) [2].
2 Methodology
The research follows a Mixed Methodology (MM), in which two methods coexist: the
qualitative and the quantitative. In the qualitative component, content-analysis
techniques are applied to the few scientific publications in the thematic scope of
formative assessment and new technologies. The quantitative component is defined
from the identification of patterns and trends in the results of previous studies; it
applies quantitative techniques to gather research evidence that corroborates or
contrasts with the results of the first component.
Day-to-day observation is the starting point of this component: an observation
focused on the contradictions and incongruities of everyday life that emerge from the
reality around formative assessment. It is aimed more specifically at the difficulties of
establishing formative assessment in a reality that is largely dominated by summative
assessment, even though that reality is legally regulated for an essentially formative
assessment [21]. This incongruity has persisted since at least 1992, when formative
assessment was given a clearer legal definition (Normative Order 98/A/92) [22].
The collection of documentary data provided information on this social phenomenon
and guided the quantitative component of the study.
According to Spector et al. [2], one of the main limitations of formative assessment
is the difficulty of collecting learning-interaction data and outcomes, as well as of
analyzing formative feedback and assessment.
According to Tsai, Tsai and Lin [23], individualized online learning is crucial for
formative assessment, because the feedback provided by online formative assessment is
immediate and because the computer allows students to self-assess and improve
immediately [23].
Day-to-day observation also revealed another incongruity: on the one hand, the
most popular apps do not cover problem solving and other complex learning; on the
other hand, the Student Profile Technical Report - 21st Century Competencies -
undervalues knowledge and values metacognitive knowledge and meta-competence
[25]. For Ferraz and Belhot [24], meta-knowledge is an equivalent concept: knowledge
used for problem solving and/or for choosing the best method, theory or structure;
strategic knowledge.
In order to understand this phenomenon, we proceeded to collect data, of which we
highlight the most relevant. From Spector et al. [2]: given the history of emphasis on
formative assessment and the potential of new technologies to extend formative
assessment into complex problem-solving areas, there is a high potential for a greater
impact of formative assessment on the development of higher-order learning skills [2].
From Bhagat and Spector [14]: there is not enough research on formative assessment in
support of learning simple tasks, with results focused on simple concepts and
procedures; the explosion of new technologies makes this support increasingly
effective. What needs further understanding is how best to support the learning of
complex and ill-structured tasks and how best to use new technologies [14].
The quantitative component was based on data collection through the use of a
questionnaire on the use of digital tools and formative assessment to a random sample
of active teachers from different levels of education (1st cycle to higher education).
It should be noted that no more specific criteria were defined according to the
project topic or research questions, i.e. the selection of teachers with experience in the
application of formative assessment or in the use of mobile devices and apps was not
defined as a requirement (Creswell [26]).
In the questionnaire we gathered, above all, information related to facts, opinions
and attitudes, organized in several sections: the profile of teachers using ICT; formative
and summative assessment; and apps used in education. The survey was
based on two relevant projects: the Acer-European Schoolnet Pilot Project (2013) [27]
and the Project: Teachers’ Perceptions of the Digital Transformation of the Classroom
through the Use of Tablets: A Study in Spain (2016) [28].
3 Questionnaire
3.1 Profile of ICT Teachers
The survey was randomly applied to forty-six teachers from different cycles and levels
of education and training. There was massive participation of teachers of the 1st cycle
Fig. 1. Teachers’ degree of agreement on the need to guide students in the use of technology for
deeper and more meaningful use.
agreeing, there are nonetheless 0% of teachers who fully agree. This optimistic view
leaves 11.4% of teachers indifferent. The change is seen as negative by 15.9% of
teachers and as ambivalent, i.e. both negative and positive, by 2.3%. Additionally,
2.3% of teachers doubt this change.
Fig. 2. Teachers’ degree of agreement on students’ cognitive, perceptive, sensory, and motor
transformations.
The level of support provided by the school in the use of ICT varies depending on
whether interactive whiteboards or mobile devices are concerned, in terms of both
provision and maintenance. These conclusions are expressed in Fig. 3.
Regarding the use of the interactive whiteboard, the percentages of teachers who
agree and who disagree with the support provided are close: 45% and 46%, respec-
tively. Only 9% of teachers are indifferent to this support. Regarding the use of mobile
devices, the distribution of opinions is less balanced: 61% of teachers disagree that the
support exists, 22% agree and only 17% show indifference on the subject.
Regarding the support provided through training workshops, only 33% of
respondents disagree that it exists, as opposed to 50% who agree; 17% of teachers
remain indifferent to this issue. The opinion of respondents on the availability of
debates is different: 52% disagree, 31% agree and 17% are indifferent.
Regarding the training received on ICT, whether on the use of the Internet and
general applications, on the pedagogical use of ICT or on the related devices and
equipment, 39% of respondents indicate that they receive it very often or often. This
percentage somewhat contrasts with the 50% of respondents who, in the previous
question, partially or fully agreed that their school offers training workshops. 24% of
teachers admit that they rarely or never receive this training and 37% receive it
occasionally. These data are shown in the graph of Fig. 4.
The answers concerning training received through debate communities were close
to those given to question 7: 57% of respondents indicated that they never or rarely
participated in debate communities, and 28% participated occasionally.
As for the use of ICT for different purposes, performing administrative tasks leads:
100% of respondents report using ICT for this purpose very often or often, followed by
class follow-up and assessment (87%), planning and teaching (85%) and, finally,
communicating with parents (66%). The remaining values, corresponding to occasional
use, rare use and total non-use, are low. This use of ICT is expressed in the graph of
Fig. 5.
Translating this use into years, for each purpose, we find that the three purposes
most indicated in question nine also correspond to the longest time intervals indicated
by the respondents. The use of ICT for planning and teaching is concentrated in the
intervals of 6 to 10 years (22%), 11 to 15 years (24%) and 16 to 20 years (46%). To
perform administrative tasks, ICT is most commonly used in the following time
frames: 6 to 10 years (24%), 11 to 15 years (26%) and 16 to 20 years (41%). For class
follow-up and assessment, the same intervals have the following percentage expres-
sion: 28%, 35% and 30%, respectively. Finally, for the purpose for which ICT is least
used, communicating with parents, the most marked time intervals are only the
following: 6 to 10 years (33%) and 11 to 15 years (28%). It is interesting to add that the
0-year interval was indicated only for this last purpose, with a weight of 11%.
Fig. 6. Teachers’ level of agreement with the evidence that formative assessment can sometimes
be under-emphasized owing to the excessive emphasis placed on summative assessment.
The survey covered twelve FATs and points to limited knowledge of them. For
eight of the techniques, 78% of the teachers report only some, reduced or no knowl-
edge. For the remaining four techniques, about 60% of teachers report better, i.e. high
or good, knowledge. Among these, the first two techniques, Constructive Mini-Tests
and Filling in Text Gaps, are better known than the last two, the Learning Portfolio and
the Student Logbook.
The number of reasons presented as determinants of the lesser emphasis given to
formative assessment was 60, as shown in the graph in Fig. 7. Of this universe, 20% of
respondents attribute the cause to the overvaluation of external evaluation, either by
Finally, our attention was directed to the use of apps in education and, first of all, to
the use of ICT in teaching activities, i.e. the use of the interactive whiteboard, mobile
devices, publisher software, apps and others.
From the graph data in Fig. 8, we find that there is a significant percentage of
respondents who never use ICT in teaching activities: Interactive whiteboard - 35%;
Mobile devices - 41%; publisher software - 20%; apps - 33%; others - 28%. Possibly
these values are related to the conclusions drawn from the analysis of the answers to
question 7. It therefore seems logical that the 35% non-use of interactive whiteboards is
close to the 46% of teachers who do not feel supported by the school in the provision
and maintenance of interactive whiteboards. It also seems consistent that mobile
devices show both the highest percentage of non-use (41%) and the highest percentage
of perceived lack of school support (61%).
We conclude that there is a balance between the number of teachers who use the
interactive whiteboard, mobile devices, apps and others very frequently or frequently,
and those who use them occasionally or rarely. The values are, respectively: 35% and
30% for the interactive whiteboard; 31% and 28% for mobile devices; 30% and 37%
for apps; 35% and 37% for others. The values for publisher software differ from this
balance, as 67% of respondents indicate that they use it very often or frequently and
only 13% use it occasionally or rarely.
Publisher software is therefore by far the most commonly used ICT in class,
compared with the interactive whiteboard, mobile devices, apps and others.
The use of mobile devices as a share of class time corresponds to the following
values: 0% of the time - 41.3%; 20% of the time - 41.3%; 40% of the time - 10.9%;
60% of the time - 2.2%; 80% of the time - 2.2%; 100% of the time - 0%; indefinite
time depending on planning - 2.2%. The evidence is that their use is low: the vast
majority of teachers (82.6%) never use mobile devices or use them only 20% of the
time.
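The distribution above can be tallied with a short sketch (the percentages are the survey figures reported in the text; the variable names are ours):

```python
# Reported share of class time in which teachers use mobile devices
# (percentage of respondents choosing each option, as in the survey above).
usage = {
    "0% of the time": 41.3,
    "20% of the time": 41.3,
    "40% of the time": 10.9,
    "60% of the time": 2.2,
    "80% of the time": 2.2,
    "100% of the time": 0.0,
    "depends on planning": 2.2,
}

# Teachers who never use mobile devices or use them at most 20% of the time.
low_use = usage["0% of the time"] + usage["20% of the time"]
print(f"low or no use: {low_use:.1f}%")   # 41.3 + 41.3 = 82.6
```

The category totals sum to roughly 100% (up to rounding of the published percentages), which confirms the 82.6% figure quoted above.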
This study shows that digital tools, with an emphasis on apps, are a current topic
whose potential in the school context is still little explored. In addition, formative
assessment is also little explored, yet it is of particular importance in the context of
inclusive education. The literature review produced by cross-referencing these two
topics shows good indicators of how technology can complement formative assess-
ment. The descriptive and analytical study made it possible to conclude that the school
environment needs to evolve, above all, at the level of material resources. This
evolution is less pressing in terms of human resources and of the attitudes that
promote the use of
References
1. Pacheco, J.: A avaliação das aprendizagens: para além dos resultados in Revista Portuguesa
de Pedagogia, p. 261 (2006)
2. Spector, J., et al.: Technology-enhanced formative assessment for 21st century learning.
Educ. Technol. Soc. 19(3), 58–71 (2016)
3. Ecclestone, K.: Transforming Formative Assessment in Lifelong Learning. McGraw-Hill
Education, Berkshire (2010)
4. Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., Hall, C.: NMC
Horizon Report: Higher Education Edition. New Media Consortium, Austin (2016)
5. Narciss, S.: Feedback strategies for interactive learning tasks. In: Spector, J.M., Merrill, M.
D., van Merriënboer, J.J.G., Driscoll, M.P. (eds.) Handbook of Research on Educational
Communications and Technology, 3rd edn., pp. 125–144 (2008)
6. Spector, J.M.: Foundations of Educational Technology: Integrative Approaches and
Interdisciplinary Perspectives, 2nd edn. Routledge, New York (2015)
7. Woolf, B.P.: A Roadmap for Education Technology. The National Science Foundation,
Washington, DC (2010). http://cra.org/ccc/wp-content/uploads/sites/2/2015/08/GROE-
Roadmap-for-Education-Technology-Final-Report.pdf
8. Faber, J., Visscher, A.: The effects of a digital formative assessment tool on spelling
achievement: results of a randomized experiment (2018). https://www.sciencedirect.com/
science/article/pii/S0360131518300617
9. Konstantopoulos, S., Miller, S.R., van der Ploeg, A.: The impact of Indiana’s system of
interim assessments on mathematics and reading achievement. Educ. Eval. Policy Anal. 35
(4), 481–499 (2013). https://doi.org/10.3102/0162373713498930
10. Kluger, A.N., DeNisi, A.: The effects of feedback interventions on performance: a historical
review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119
(2), 254–284 (1996). https://doi.org/10.1037/0033-2909.119.2.254
11. Bangert-Drowns, R.L., Kulik, C.-L.C., Kulik, J.A., Morgan, M.: The instructional effect of
feedback in test-like events. Rev. Educ. Res. 61(2), 213–238 (1991)
12. Shute, V.J.: Focus on formative feedback. Rev. Educ. Res. 78(1), 153–189 (2008). https://
doi.org/10.3102/0034654307313795
13. Locke, E.A., Latham, G.P.: Building a practically useful theory of goal setting and task
motivation: a 35-year odyssey. Am. Psychol. 57(9), 705–717 (2002). https://doi.org/10.
1037//0003-066X.57.9.705
14. Bhagat, K., Spector, J.: Formative assessment in complex problem-solving domains: the
emerging role of assessment technologies. Educ. Technol. Soc. 20(4), 312–317 (2017)
15. Spector, J.M., Yuen, H.K.: Educational Technology Program and Project Evaluation.
Routledge, New York (2016)
16. Ellis, C.: Broadening the scope and increasing the usefulness of learning analytics: the case
for assessment analytics. Br. J. Educ. Technol. 44(4), 662–664 (2013)
17. Bransford, J.D., Brown, A.L., Cocking, R.R.: How People Learn: Brain, Mind Experience,
and School (expanded edition). National Academies Press, Washington, DC (2000)
18. Clariana, R.B.: A comparison-until-correct feedback and knowledge-of-correct response
feedback under two conditions of contextualization. J. Comput.-Based Instr. 17(4), 125–129
(1990)
19. Epstein, M.L., et al.: Immediate feedback assessment technique promotes learning and
corrects inaccurate first responses. Psychol. Rec. 52, 187–201 (2002)
20. Hannafin, M.J.: The effects of systemized feedback on learning in natural classroom settings.
J. Educ. Res. 7(3), 22–29 (1982)
21. PT Decree-Law no. 17/2016 of 4 April gives an eminently formative dimension to the
evaluation
22. Normative Order 98/A/92
23. Tsai, F.-H., Tsai, C.-C., Lin, K.-Y.: The evaluation of different gaming modes and feedback
types on game-based formative assessment in an online learning environment, Elsevier,
p. 260 (2014)
24. Ferraz, A.P.C.M., Belhot, R.V.: Taxonomia de Bloom: revisão teórica e apresentação das
adequações do instrumento para definição de objetivos instrucionais, Scielo, p. 428 (2010)
25. Faria, E., Rodrigues, I., Perdigão, R., Ferreira, S.: Perfil do aluno - competências para o
século XXI, relatório técnico, Conselho Nacional de Educação, p. 7 (2017)
26. Creswell, J.: Projeto de pesquisa: métodos qualitativo, quantitativo e misto. Porto Alegre:
Artmed, p. 189 (2010)
27. Balanskat, A.: Introdução de Tablets nas Escolas: Avaliação do Projeto-Piloto de Tablets
Acer-European Schoolnet, pp. 1–8 (2013)
28. Suárez-Guerrero, C., Lloret-Catalá, C., Mengual-Andrés, S.: Teachers’ perceptions of the
digital transformation of the classroom through the use of tablets: a study in Spain.
Comunicar XXIV(49), 81–89 (2016)
Information Technologies in
Radiocommunications
Compact Slotted Planar Inverted-F Antenna:
Design Principle and Preliminary Results
1 Introduction
The Planar Inverted-F Antenna (PIFA) is one of the most popular configurations in
consumer electronics [1, 2]. It is widely used due to its compact size and desirable
radiation features: it provides a relatively high gain for electrically small antennas, and
it is able to comply with SAR regulations [3, 4]. However, with the never-ending
miniaturization of mobile devices, there is a need for ever smaller antennas [5].
PIFA is usually integrated on printed circuit boards, so the most straightforward
miniaturization method is to adopt a substrate material with high permittivity. How-
ever, this leads to higher losses, resulting in lower gain and reduced radiation efficiency
[6]. Loading the PIFA with a capacitive or resistive impedance can also reduce its size;
unfortunately, these techniques suffer from similar drawbacks [7]. Modern methods to
reduce the PIFA footprint include the use of metamaterial ground planes or superstrates
[8–10], but these specialized 3D structures can imply very high manufacturing costs, as
well as limiting the type of substrate materials which can be used.
In this paper, the authors propose a new slot configuration able to reduce the
resonant frequency of the PIFA while maintaining its physical size, which is equivalent
to a miniaturization [11]. The adoption of slots to modify the resonant wavelength of a
PIFA is not completely new [12]; however, the majority of existing designs can be
classified as meandered antennas [7], since they rely on increasing the length of the
current path between the short-circuit and the open-circuit ends of the radiating ele-
ment. Meandered designs often result in degraded radiation patterns and lower effi-
ciencies, due to the zig-zag current flow [13, 14]. The method presented in this
contribution does not depend on meandering. A parametric analysis of the design is
presented to explain its behavior. A side-by-side comparison of the standard
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 287–292, 2020.
https://doi.org/10.1007/978-3-030-45697-9_28
288 S. Costanzo and A. M. Qureshi
PIFA with an enhanced-bandwidth, ISM-band variant of the proposed design is also
presented. The proposed slotted antenna provides almost identical radiation features
with an electrically smaller size.
A conventional square PIFA [15, 16], shown in Fig. 1(a), is adopted as starting point in
the present slotted design. Two identical rectangular slots are introduced in the radi-
ating element (Fig. 1(b)) to create a ‘notch’ (Fig. 1(c)) along the diagonal of the square
patch. As a result of the above modifications, the slotted PIFA includes three new
design parameters, namely the width ‘W’ and the depth ‘D’ of the slots, as well as the
minimum width of the notch ‘B’. Each one of these parameters is varied to examine
their effect on the resonant frequency and the impedance bandwidth. The basic
parameters of the PIFA, such as the size of the radiating element, the ground plane and the shorting plate, are left unchanged in the parametric analysis. Since the input
impedance of a PIFA is highly sensitive to the feed position, it has to be optimized for
each variation. However, in order to limit the effect of the feed position on the resonant
frequency, the feed location is constrained to be within a range of ±1 mm around the
original location.
(Fig. 1: antenna geometry, showing the slot width 'W', labeled dimensions of 6 mm and 17.5 mm, and the ground plane.)
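For orientation, a first-order estimate of a PIFA's resonance is the classic quarter-wavelength relation (see, e.g., [16]): with a narrow shorting pin, resonance occurs roughly when L + W ≈ λ/4, where L and W are the patch dimensions. The sketch below applies this back-of-the-envelope formula under the assumption of a square 17.5 mm element; it deliberately ignores the shorting-plate width, the substrate and the slots, so it will not reproduce the simulated resonant frequencies exactly:

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def pifa_resonance_hz(length_m: float, width_m: float) -> float:
    """First-order PIFA estimate: resonance where L + W equals a quarter wavelength."""
    return C0 / (4.0 * (length_m + width_m))

# Assuming a square 17.5 mm x 17.5 mm radiating element (dimension labeled in Fig. 1):
f = pifa_resonance_hz(0.0175, 0.0175)
print(f"{f / 1e9:.2f} GHz")  # idealized estimate, in the low-GHz range
```

The gap between this idealized estimate and the simulated values is exactly why full-wave parametric studies such as the one below are needed.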
Figure 2 shows the simulated return loss for slots of width 'W' varying from 2.5 mm up to 6.5 mm. The resonant frequency varies from 2.35 GHz (at W = 2.5 mm) down to 2.315 GHz (at W = 3.5 mm), with all other values falling inside this range. Thus, the overall variation in the resonant frequency of the antenna is less than 1.5%, despite a nearly threefold change in the width of the slots. Furthermore, since the resonant frequency does not monotonically increase or decrease with the width, the corresponding relationship is not straightforward.
Compact Slotted Planar Inverted-F Antenna 289
Fig. 2. Simulated return loss of the slotted PIFA design for different slot widths
Figure 3 shows the behavior of the slotted PIFA as the slot depth 'D' is varied from 4 mm up to 9 mm. Again, the resonant frequency remains almost unchanged; the small variations that exist do not seem to follow a discernible pattern. The highest resonant frequency (2.35 GHz) is recorded for the smallest depth (D = 4 mm), while the lowest resonant frequency (2.315 GHz) is observed at a depth of 7 mm. It is clear from Figs. 2 and 3 that the size and position of the slots are not directly related to the miniaturization.
Fig. 3. Simulated return loss of the slotted PIFA design for different slot depths.
Figure 4 shows the return loss of the slotted antenna for different sizes of the notch.
The minimum width ‘B’ is strongly coupled with the resonant frequency of the
antenna. As the notch is constricted, the resonant frequency of the antenna is reduced,
thus giving a miniaturization effect. In particular, at a value B = 0.7 mm, the slotted
antenna is 25% smaller than the square PIFA design (Fig. 4).
Fig. 4. Simulated return loss of the slotted PIFA design for different notch widths, compared
with the simple square PIFA.
Based on the above parametric analysis, it is evident that the minimum width 'B' of the notch determines the resonant frequency of the slotted PIFA. The size and position of the slots are irrelevant, as long as the width of the notch is preserved. It may also be observed from Fig. 4 that miniaturization is achieved at the cost of bandwidth: as the resonant frequency is reduced, the usable bandwidth also becomes smaller. However, the loss of bandwidth can be easily compensated by specific enhancement methods, as demonstrated in the following section.
After the preliminary parametric analysis, a slotted PIFA design, optimized for operation in the Industrial, Scientific and Medical (ISM) band (2.4–2.5 GHz), is simulated.
The ISM band is commonly used by consumer electronics employing WiFi and
Bluetooth standards for communication. The overall size of the antenna is identical to
the square PIFA earlier described (Fig. 1). The square PIFA has a resonant frequency
equal to 2.82 GHz, whereas the slotted PIFA is resonant at 2.41 GHz. However, the
impedance bandwidth of the slotted PIFA is much smaller, less than 10%, while the
square PIFA has a bandwidth of 20% (Fig. 5). To improve the bandwidth of the proposed slotted PIFA, a T-shaped ground plane modification is introduced [17], with the resulting design exhibiting an impedance bandwidth of over 19.5% (Fig. 5).
A comparison of the radiation patterns of the two PIFA designs is shown in Fig. 6. The slotted PIFA, despite being 15% smaller in electrical size, has an almost identical radiation pattern. The antenna provides linearly polarized radiation with a peak gain of around 3.2 dBi.
Fig. 5. Return loss comparison of the slotted PIFA, slotted PIFA with T-shaped ground plane
(inset) and the Square PIFA.
Fig. 6. Co-polar (solid) and Cross-polar (dashed) radiation patterns of the slotted compact PIFA
and the classical square PIFA.
4 Conclusion
A new slot configuration for microstrip PIFA miniaturization has been demonstrated. The technique has been shown to reduce the resonant frequency of the PIFA, which is equivalent to a reduction in the antenna size. The gain and the impedance bandwidth of the compact slotted PIFA, designed for the ISM band, are comparable to those of the standard PIFA design. The proposed architecture is particularly useful for portable and wearable electronics.
References
1. Fujimoto, K. (ed.): Mobile Antenna Systems Handbook, 3rd edn. Artech House, Boston
(2008)
2. Young, P.R., Aanandan, C.K., Mathew, T., Krishna, D.D.: Wearable antennas and systems.
Int. J. Antennas Propag. 2012, 1–2 (2012)
3. Rais, N.H.M., Soh, P.J., Malek, F., Ahmad, S., Hashim, N.B.M., Hall, P.S.: A review of
wearable antenna. In: 2009 Loughborough Antennas & Propagation Conference, Lough-
borough, pp. 225–228. IEEE (2009)
4. Rogier, H.: Textile antenna systems: design, fabrication, and characterization. In: Tao, X.
(ed.) Handbook of Smart Textiles, pp. 1–21. Springer, Singapore (2015)
5. Nepa, P., Rogier, H.: Wearable antennas for off-body radio links at VHF and UHF Bands:
challenges, the state of the art, and future trends below 1 GHz. IEEE Antennas Propag. Mag.
57(5), 30–52 (2015)
6. Lo, T.K., Hwang, Y.: Bandwidth enhancement of PIFA loaded with very high permittivity material using FDTD. In: IEEE Antennas and Propagation Society International Symposium, 1998 Digest, held in conjunction with the USNC/URSI National Radio Science Meeting, vol. 2, pp. 798–801. IEEE (1998)
7. Waterhouse, R.B. (ed.): Printed Antennas for Wireless Communications. Wiley, Chichester
(2007)
8. Hall, P.S., Hao, Y. (eds.): Antennas and Propagation for Body-Centric Wireless Communications. Artech House Antennas and Propagation Library. Artech House, Boston (2006)
9. Gao, G., Hu, B., Wang, S., Yang, C.: Wearable planar inverted-F antenna with stable
characteristic and low specific absorption rate. Microwave Opt. Technol. Lett. 60(4), 876–
882 (2018)
10. Gao, G., Yang, C., Hu, B., Zhang, R., Wang, S.: A wearable PIFA with an all-textile
metasurface for 5 GHz WBAN applications. IEEE Antennas Wireless Propag. Lett. 18(2),
288–292 (2019)
11. Costanzo, S., Venneri, F.: Miniaturized fractal reflectarray element using fixed-size patch.
IEEE Antennas Wireless Propag. Lett. 13, 1437–1440 (2014)
12. Wong, K.L.: Planar Antennas for Wireless Communications. Wiley, Hoboken (2003)
13. Rothwell, E.J., Ouedraogo, R.O.: Antenna miniaturization: definitions, concepts, and a
review with emphasis on metamaterials. J. Electromagn. Waves Appl. 28, 2089–2123
(2014). https://doi.org/10.1080/09205071.2014.972470
14. Fallahpour, M., Zoughi, R.: Antenna miniaturization techniques: a review of topology- and
material-based methods. IEEE Antennas Propag. Mag. 60, 38–50 (2018). https://doi.org/10.
1109/MAP.2017.2774138
15. Taga, T., Tsunekawa, K.: Performance analysis of a built-in planar inverted-F antenna for
800 MHz band portable radio units. IEEE J. Sel. Areas Commun. 5, 921–929 (1987). https://
doi.org/10.1109/JSAC.1987.1146593
16. PIFA - Planar Inverted-F Antennas. http://www.antenna-theory.com/antennas/patches/pifa.
php
17. Wang, F., Du, Z., Wang, Q., Gong, K.: Enhanced-bandwidth PIFA with T-shaped ground
plane. Electron. Lett. 40, 1504–1505 (2004). https://doi.org/10.1049/el:20046055
Technologies for Biomedical
Applications
Statistical Analysis to Control Foot
Temperature for Diabetic People
1 Introduction
Diabetes, often referred to by doctors as diabetes mellitus (DM), describes a
group of metabolic diseases in which the person has high blood glucose (blood
sugar), either because insulin production is inadequate, or because the body’s
cells do not respond properly to insulin, or both. Patients with high blood sugar will typically experience polyuria (frequent urination) and become increasingly thirsty (polydipsia) and hungry (polyphagia). If left untreated, diabetes can
cause many complications. Acute complications can include diabetic ketoacido-
sis, hyperosmolar hyperglycemic state, or death. Serious long-term complications
include cardiovascular disease, stroke, chronic kidney disease, foot ulcers, and
damage to the eyes.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 295–306, 2020. https://doi.org/10.1007/978-3-030-45697-9_29
J. Torreblanca González et al.
Several studies report the number of people with diabetes: between 382 million (in 2013) and 422 million worldwide (in 2014, according to the World Health Organization). And, in 2017, there
were 425 million people with diabetes [3]. This represents 8.3–8.5% of the adult population [13] (in 1980 it was around 4.7%), with equal rates in both women and men [15]. As of 2014, trends suggested the rate would continue to rise [2]. Diabetes at least doubles a person's risk of early death. From 2012 to 2015, approximately 1.5 to 5.0 million deaths each year resulted from diabetes. The global economic cost of diabetes in 2014 was estimated to be US$612 billion [1], but in 2017 it was estimated at US$727 billion [3].
Therefore, this is a very important problem worldwide. In this paper, we focus on the so-called diabetic foot, one of its complications. A diabetic foot is a foot that exhibits any pathology resulting directly from diabetes mellitus or from any long-term (or "chronic") complication of diabetes mellitus. Characteristic diabetic foot pathologies include infection, diabetic foot ulcer and neuropathic osteoarthropathy. Its symptoms vary depending on the affected nerves; some patients show no signs at all. But these symptoms and problems worsen over time and may include: ulcers; numbness; tingling; loss of sensation and pain in the limbs; loss of muscle in the feet or hands; and changes in heart rate.
The evaluation of the temperature of the plantar surface of the foot is a useful tool to determine the possible risk of developing pathologies associated with diabetic foot. Certain asymmetries have been identified, such as an increase in temperature of 2.2 °C in an area with respect to its contralateral counterpart, indicating an underlying subclinical inflammation without apparent signs [11]. This could open a procedure to assess the risk of ulceration in the area. The determination of foot temperatures is usually done with thermal imaging cameras, with researchers choosing different regions of interest (ROIs), whose number and location vary widely. The choice of areas of interest can be of great importance, since it relates the increase or decrease in temperature to the risk of that area suffering an injury, such as a plantar ulcer. However, the literature offers numerous studies with disparities in the number, location and rationale for choosing ROIs. Recent studies analyze from 4 [5] to 12 [11] zones, with 5–6 being the most common.
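The 2.2 °C contralateral threshold reported in [11] lends itself to a very simple screening rule: flag any ROI whose temperature differs from the same ROI on the other foot by more than the threshold. A minimal sketch follows; the ROI names and readings are illustrative, not data from the study:

```python
ASYMMETRY_THRESHOLD_C = 2.2  # contralateral asymmetry threshold from [11]

def flag_asymmetries(left: dict, right: dict,
                     threshold: float = ASYMMETRY_THRESHOLD_C) -> dict:
    """Return ROIs whose left/right temperature difference exceeds the threshold."""
    return {roi: round(abs(left[roi] - right[roi]), 2)
            for roi in left
            if roi in right and abs(left[roi] - right[roi]) > threshold}

# Hypothetical readings (degrees C) for two ROIs on each foot:
left = {"heel": 30.1, "first_metatarsal_head": 28.0}
right = {"heel": 32.8, "first_metatarsal_head": 28.5}
print(flag_asymmetries(left, right))  # only the heel exceeds the 2.2 degC threshold
```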
In this way, Astasio-Picado et al. [5] propose to monitor the plantar surface of the foot in four zones: the heel, the first and fifth metatarsal heads, and the first finger.
Chatchawan et al. [7] and Bagavathiappan et al. [6] propose 6 areas of interest.
These studies coincide in analyzing the heel and the plantar surface of the first
finger, although Astasio-Picado et al. [5] analyze the first metatarsal head, while
Chatchawan et al. [7] and Bagavathiappan et al. [6] extend this area to the
medial forefoot (1st and 2nd metatarsal bones). Other studies [12] focus their
attention on 5 areas of the forefoot: 1st, 3rd and 5th metatarsal bones and first
and fourth fingers. In the same way, Gatt et al. carried out a study in 8 areas of
the forefoot: medial, lateral, central and the 5 fingers [9]. However, other studies
do not specify the exact number of regions nor their location [8,14].
Although there seems to be consensus on some areas chosen, such as the heel,
first metatarsal head and first finger, the criterion of choice of the areas is not
clear, since it was not specified in these studies. Thus, it seems that the choice of these areas could be related to the zones where ulcers frequently appear [5], although in other studies the criteria were simply not stated.
That is why the objective of this study is to provide sound reasons regarding the number of ROIs necessary to perform a good screening of the diabetic foot, and to optimize the analysis by adding or eliminating areas that offer redundant results.
The outline of the paper is as follows: In Sect. 2 we provide an overview of
the most common sensors employed to measure the temperature, and we analyze
their main features for placement in a sock. In Sect. 3, we briefly describe the survey conducted to study the most important variables affecting the temperature of both feet. In this paper, dendrograms are used to understand where sensors
for the temperature should be placed (Sect. 3.1). Finally, some conclusions and
goals are given in Sect. 4.
There are other sensors with which the temperature of the foot could be measured, but they would be uncomfortable to wear and very bulky to assemble.
– Programmable electronic devices: these sensors are very new and consist of integrated circuits in which the temperature variation is measured electronically, as in diodes, through the variation of voltage and current in the PN junction of semiconductors. Their great advantage is that they are already encapsulated in very small packages and communicate directly with a microprocessor to report the surrounding temperature. There is a great variety of models, typical examples being the MAX30205, the Si7006, the AD590, etc.
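As an illustration, digital sensors of this kind typically expose a raw temperature register over I²C that must be converted to degrees Celsius. The sketch below is a minimal, hypothetical example for the MAX30205, which reports temperature as a 16-bit two's-complement value at 1/256 °C per LSB; the bus number, address and register layout shown in the comment are assumptions that would need to be checked against the datasheet for a real deployment:

```python
def max30205_raw_to_celsius(raw: int) -> float:
    """Convert a 16-bit two's-complement MAX30205 reading to degrees C (1/256 degC/LSB)."""
    if raw & 0x8000:          # negative temperatures are two's-complement
        raw -= 0x10000
    return raw / 256.0

# On a real board the raw value would come from the I2C bus, e.g. (hypothetical):
#   import smbus2
#   bus = smbus2.SMBus(1)
#   msb, lsb = bus.read_i2c_block_data(0x48, 0x00, 2)  # temperature register
#   temp_c = max30205_raw_to_celsius((msb << 8) | lsb)

print(max30205_raw_to_celsius(0x1900))  # 25.0
```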
3 Statistical Analysis
The processing of the thermal images of the sole of the foot is a new topic and
has not been investigated deeply, so there is a lack of information on thermal
patterns of the behavior of the diabetic foot [10]. This reason, together with the main conclusions obtained in the previous section, motivates the statistical analysis of this work.
We took temperatures in both feet (on the sole and also the dorsum) before and after walking 100 m, at 9 points on the sole and 8 corresponding points on the dorsum. These points are shown in Fig. 1. They are the usual points considered in the scientific literature, and they are related to areas where a foot ulcer is very likely to occur according to some studies, see [4]. These areas are illustrated on the right side of Fig. 1. Detecting problems in these zones is of great interest. The smallest areas at risk are roughly circles of 1 cm in diameter; this characteristic is very important for systems built to detect problems in the diabetic foot.
Fig. 1. On the left, a scheme with the points where the temperature data is studied; the plantar and dorsal areas correspond to the same positions, except for number one, which is only on the sole: (1) heel, (2) medial midfoot, (3) lateral midfoot, (4) first metatarsal head, (5) central metatarsal heads, (6) fifth metatarsal head, (7) first finger, (8) central fingers, (9) fifth finger. On the right, an illustration with areas at risk on the foot, taken from [4].
Fig. 2. Pie chart with the data distribution: men 43% (diabetic men 23%), women 57% (diabetic women 25%).
Apart from the temperature at the 9 points of the sole and 8 of the dorsum, other variables, such as sex, weight, height, age, blood pressure, etc., have been recorded (Fig. 1).
This study is intended to determine whether there is any variation in the temperature of the foot when walking in diabetic patients. We focused on the sole of the foot, where there will be a greater variation in temperature, since it is the surface under most stress during the walk. In future studies we will examine the dorsum and the differences in temperature between the indices of the right and left feet, as well as before and after the walk.
A basic statistical analysis has been carried out including all indices (from 1 to 9) on the sole of the foot (denoted later as SOLE or S) and the dorsum (denoted with the letter D), left and right feet (denoted by L and R, respectively), before (denoted PRE) and after (POST) the short walk. This was done for diabetic and non-diabetic women and men.
In all cases (men and women, diabetic or not), the indices with the highest average value are those of point 2 on both the right and left foot, both before and after the walk, and those with the lowest average value are those of the fingers (7, 8 and 9), on both feet, before and after the walk. However, we found the opposite situation in terms of the variation of the data with respect to the mean (standard deviation): the indices with the most variation before and after the walk are 7, 8 and 9, and index 2 is the one with the least variation.
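The paper does not state which software produced these summaries; as an illustration, the per-index statistics reported in Tables 3 and 4 (mean, standard deviation, standard error, IQR, skewness, kurtosis) could be computed from a vector of temperature readings along these lines, e.g. with NumPy and SciPy:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def summarize(temps):
    """Basic statistics of a set of temperature readings (degrees C)."""
    t = np.asarray(temps, dtype=float)
    sd = t.std(ddof=1)                       # sample standard deviation
    return {
        "mean": t.mean(),
        "sd": sd,
        "se": sd / np.sqrt(t.size),          # standard error of the mean
        "IQR": np.percentile(t, 75) - np.percentile(t, 25),
        "skewness": skew(t),
        "kurtosis": kurtosis(t),             # excess (Fisher) kurtosis
    }
```

Note that SciPy's `kurtosis` defaults to the excess (Fisher) definition, which is consistent with values near zero indicating a roughly normal distribution, as discussed below for SLPRE1 and SRPRE1.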
Table 3. Variation, maximum and minimum values of some basic statistics, in diabetic and non-diabetic men (M_D and M_ND, respectively).
            M_D                      M_ND
            Var    Max     Min      Var    Max     Min
mean 4.18 30.42 26.25 2.98 32.02 29.03
sd 2.68 5.28 2.60 1.98 3.94 1.96
se (mean) 0.58 1.15 0.57 0.46 0.90 0.45
IQR 6.40 10.20 3.80 2.85 4.55 1.70
skewness 0.75 0.14 −0.61 2.53 0.38 −2.15
kurtosis 1.74 0.01 −1.72 5.72 5.74 0.02
Min 0% 6.00 24.90 18.90 7.10 26.70 19.60
1stQu 25% 7.20 28.50 21.30 3.80 31.15 27.35
Median 50% 5.20 30.70 25.50 3.20 31.80 28.60
3rdQu 75% 2.30 32.80 30.50 1.90 32.85 30.95
Max 100% 1.80 35.00 33.20 3.50 36.30 32.80
Table 4. Variation, maximum and minimum values of some basic statistics, in diabetic and non-diabetic women (W_D and W_ND, respectively).
            W_D                      W_ND
            Var    Max     Min      Var    Max     Min
mean 3.16 30.25 27.09 4.28 30.82 26.54
sd 2.24 4.22 1.98 2.42 4.48 2.05
se (mean) 0.47 0.88 0.41 0.44 0.82 0.38
IQR 4.85 7.05 2.20 4.55 7.10 2.55
skewness 0.88 0.52 −0.36 1.16 0.57 −0.59
kurtosis 2.91 1.69 −1.22 1.76 0.36 −1.40
Min 0% 7.80 27.10 19.30 7.00 27.10 20.10
1stQu 25% 4.85 28.70 23.85 5.98 29.43 23.45
Median 50% 3.60 30.30 26.70 5.20 30.75 25.55
3rdQu 75% 2.70 31.85 29.15 2.53 31.98 29.45
Max 100% 3.40 36.00 32.60 3.10 36.80 33.70
Table 3 shows that diabetic men reach the highest interquartile range (IQR)
and the lowest value is reached by non–diabetic men. There is a small differ-
ence between diabetic and non–diabetic women for this variable (see Table 4).
Fig. 3. Violin plot for non-diabetic men: SLPRE1 (sole, left, before the walk, point 1)
on the left and SRPRE1 on the right.
Comparing diabetic and non-diabetic men, we observe that the indices show greater asymmetry (skewness) and kurtosis in the case of non-diabetics. In this group (M_ND), the index with the greatest asymmetry in absolute value is -2.15, and the one with the greatest kurtosis, 5.74, is SRPOST5, which corresponds to the central part of the metatarsus of the right foot.
The kurtosis values closest to zero are those of SLPRE1 and SRPRE1, represented with a violin plot in Fig. 3; this graph corresponds to the heel of the left and right feet. The lowest skewness value occurs in SLPRE8 and SLPRE9 (-0.22). Moreover, the values with lower kurtosis also have a low skewness, less than 0.4. In SLPRE1 there is an outlier with a value of 35.4.
When we look at these distributions, we can see that the average value (red dot) matches the median (central line). We have also represented a heat map of diabetic men before and after the walk (right sole in this case). The range of values is the same, between 20 and 34 degrees, but if we look at the dendrograms (upper part of Fig. 4) we see that after the walk index 2 becomes the most important.
Fig. 4. Heat map (men) of the right sole before the walk (left), and heat map (men)
of the right sole after the walk (right).
Fig. 5. Heat map (women) of the right sole before the walk (left), and heat map
(women) of the right sole after the walk (right).
3.1 Dendrograms
The main goal of this paper is to provide some strong reasons about how many sensors we need, what they should measure and which types of sensors we can use. In this work, we found some results supporting that temperatures may be very important, but other factors, such as humidity and pressure, may also play a role. We need to obtain tools and procedures that allow us to reduce the number of necessary sensors.
For this reason, we also studied the dendrograms of the temperatures in
the control group (individuals without diabetes) and also for diabetic patients,
separately. We also developed a similar study before and after the walk.
Results are always quite similar: indices 7, 8 and 9 are usually strongly related (especially 7 and 8). Something similar happens with indices 4, 5 and 6 (especially 4 and 5). Index 2 is clearly separated from the others, and indices 1 and 3 are strongly related too. We consider that these results can be explained by the different postures of the feet. We did not find big differences when separating into different groups: people with or without diabetes, men or women.
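The groupings above can be reproduced with standard hierarchical clustering. The following sketch is not the authors' code: it builds an artificial correlation matrix that merely mimics the reported relationships (7-8-9 together, 4-5-6 together, 1 with 3, and 2 apart) and cuts the resulting dendrogram into four clusters with SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Assumed pairwise correlations mimicking the reported groupings of the
# nine sole indices; these numbers are illustrative, not measured data.
group_of = {1: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 2, 8: 2, 9: 2, 2: 3}
indices = sorted(group_of)                  # [1, 2, ..., 9]
n = len(indices)
corr = np.eye(n)
for a in range(n):
    for b in range(a + 1, n):
        ga, gb = group_of[indices[a]], group_of[indices[b]]
        if 3 in (ga, gb):      # index 2 is only weakly related to the rest
            c = 0.1
        elif ga == gb:         # same anatomical group (e.g. the fingers)
            c = 0.9
        else:                  # different groups
            c = 0.3
        corr[a, b] = corr[b, a] = c

dist = 1.0 - corr              # correlation distance, 0 on the diagonal
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
print(dict(zip(indices, labels)))
```

Cutting the dendrogram at four clusters recovers exactly the groups described in the text, which is the reasoning behind the four-sensor recommendation in the conclusions.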
Most of the ulcers appear on the sole; however, we also repeated the study on the dorsal part of the foot. Interestingly, in this case index 2 is more connected with zone 3. The rest of the areas are related in a similar way as described above for the sole: indices 7, 8 and 9 among themselves; indices 4, 5 and 6; and 2 with 3.
4 Conclusions
Nowadays, diabetes is one of the most important diseases in the world. The number of patients with diabetes is growing, as is its cost. In this paper, we analyzed the temperature of the feet as one of the main variables for controlling complications in patients with diabetic foot. In the future, we would like to develop a smart sock able to monitor measures such as temperature and humidity and warn the patient if any problem is appearing; with this new smart sock, more people could be monitored, and these factors could be tracked more continuously.
As one of the first steps, we studied the best sensors to measure the temperature of the feet, and we used dendrograms to draw conclusions about the best places where sensors should be placed. For example, if only 4 sensors could be employed, the best zones would be: one sensor at point 2, another at 1 or 3, another at one of the fingers, and another at the metatarsal heads (points 4, 5 or 6).
In the future, we would like to go deeper into the variables with the highest correlation with foot temperature, and to obtain linear regressions of the temperatures depending on these variables. In this way, we could detect in advance when an increase in temperature is not explained by the model and therefore may indicate a complication. At the same time, we continue monitoring our diabetic patients to observe which features can be used to forecast possible ulcers.
References
1. International diabetes federation. IDF diabetes atlas (2013). https://www.idf.org/
e-library/epidemiology-research/diabetes-atlas/19-atlas-6th-edition.html
2. International diabetes federation. IDF diabetes atlas (2015)
3. International diabetes federation. IDF diabetes atlas (2017). https://www.idf.
org/e-library/epidemiology-research/diabetes-atlas/134-idf-diabetes-atlas-8th-
edition.html
4. Apelqvist, J., Bakker, K., van Houtum, W., Schaper, N.: Practical guidelines on the management and prevention of the diabetic foot: based upon the International Consensus on the Diabetic Foot (2007). Prepared by the International Working Group on the Diabetic Foot. Diabetes Metab. Res. Rev. 24(Suppl 1), S181–S187 (2008)
5. Astasio-Picado, A., Martı́nez, E.E., Nova, A.M., Rodrı́guez, R.S., Gómez-Martı́n,
B.: Thermal map of the diabetic foot using infrared thermography. Infrared Phys.
Technol. 93, 59–62 (2018)
6. Bagavathiappan, S., Philip, J., Jayakumar, T., Raj, B., Rao, P.N.S., Varalakshmi,
M., Mohan, V.: Correlation between plantar foot temperature and diabetic neu-
ropathy: a case study by using an infrared thermal imaging technique. J. Diab.
Sci. Technol. 4(6), 1386–1392 (2010)
7. Chatchawan, U., Narkto, P., Damri, T., Yamauchi, J.: An exploration of the rela-
tionship between foot skin temperature and blood flow in type 2 diabetes mellitus
patients: a cross-sectional study. J. Phys. Ther. Sci. 30, 1359–1363 (2018)
8. Gatt, A., Falzon, O., Cassar, K., Camilleri, K.P., Gauci, J., Ellul, C., Mizzi, S.,
Mizzi, A., Papanas, N., Sturgeon, C., Chockalingam, N., Formosa, C.: The applica-
tion of medical thermography to discriminate neuroischemic toe ulceration in the
diabetic foot. Int. J. Lower Extremity Wounds 17(2), 102–105 (2018)
9. Gatt, A., Falzon, O., Cassar, K., Ellul, C., Camilleri, K.P., Gauci, J., Mizzi, S.,
Mizzi, A., Sturgeon, C., Camilleri, L., Chockalingam, N., Formosa, C.: Establishing
differences in thermographic patterns between the various complications in diabetic
foot disease. Int. J. Endocrinol. 2018, 1–7 (2018). Article ID 9808295
10. Kaabouch, N., Hu, W.-C., Chen, Y., Anderson, J.W., Ames, F., Paulson, R.: Pre-
dicting neuropathic ulceration: analysis of static temperature distributions in ther-
mal images. J. Biomed. Opt. 15(6), 1–6 (2010)
11. Macdonald, A., Petrova, N.L., Ainarkar, S., Allen, J., Plassmann, P., Whittam,
A., Bevans, J.T., Ring, F., Kluwe, B., Simpson, R.M., Rogers, L., Machin, G.,
Edmonds, M.: Thermal symmetry of healthy feet: a precursor to a thermal study
of diabetic feet prior to skin breakdown. Physiol. Meas. 38(1), 33–44 (2017)
12. Petrova, N.L., Whittam, A., MacDonald, A., Ainarkar, S., Donaldson, A.N.,
Bevans, J., Allen, J., Plassmann, P., Kluwe, B., Ring, F., Rogers, L., Simpson,
R., Machin, G., Edmonds, M.E.: Reliability of a novel thermal imaging system for
temperature assessment of healthy feet. J. Foot Ankle Res. 11(1), 1–22 (2018)
13. Shi, Y., Hu, F.B.: The global implications of diabetes and cancer. Lancet
383(9933), 1947–1948 (2014)
14. Skafjeld, A., Iversen, M., Holme, I., Ribu, L., Hvaal, K., Kilhovd, B.: A pilot study
testing the feasibility of skin temperature monitoring to reduce recurrent foot ulcers
in patients with diabetes - a randomized controlled trial. BMC Endocr. Disord. 15,
55 (2015)
15. Vos, T., Flaxman, A.D., Naghavi, M., Lozano, R., Michaud, C., Ezzati, M., Shibuya, K., et al.: Years lived with disability (YLDs) for 1160 sequelae of 289 diseases and injuries 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 380(9859), 2163–2196 (2012)
Sensitive Mannequin for Practicing
the Locomotor Apparatus Recovery
Techniques
Abstract. This paper presents the theoretical concepts and the practical
approaches involved in constructing a mannequin (dummy) used for teaching
and practicing the recovery techniques specific to different injuries that can
affect the human locomotor apparatus. The dummy consists of a hardware system that models the human upper and lower limbs. The bones, joints and muscular tissue are replicated so that the dummy's movements closely match the actual movements of the human body. The mannequin is equipped with
software-controlled movement sensors. A computer that monitors the data
received from the sensors registers the parameters of the correct recovery pro-
cedures performed by a recovery specialist doctor (trainer). The students who
want to learn the procedures can practice the same maneuvers on the dummy.
The control system analyses the movement parameters, compares them with the correct ones produced by the teacher, and immediately assists the trainees by providing automatic feedback reflecting the correctness of their actions. This controlled environment takes the pressure off the students and also spares injured patients the inherent mistakes made involuntarily while learning the recovery procedures.
1 Introduction
Nowadays, computer assisted systems are used in various fields, and healthcare is one
of them. For good practical training and in accordance with ethical principles, medical schools and healthcare providers use dummies, which help students acquire proper and sufficient practical skills, specific to each medical field.
In the recovery process of the human body segments, the physical therapist must know all the movement angles and the exact points where the necessary force must be applied to facilitate or regain the correct movements. Physical therapists treat patients with fractures, tissue reconstructions, wounds or burned tissue, segments without sensitivity or incapable of voluntary movement, and stiff joints.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 307–313, 2020.
https://doi.org/10.1007/978-3-030-45697-9_30
308 C. Strilețchi and I. D. Cădar
From an ethical point of view, all the recovery techniques and maneuvers that have to be learned by new practitioners cannot be taught directly on patients. Moreover, the majority of patients are reluctant to cooperate with students, because they are not confident in their abilities.
The mannequin described in this paper will assist physical therapy students in their learning process [1]. Currently, physical therapy students learn all the necessary maneuvers on themselves, but the physiology of a healthy body does not respond in the same manner as a damaged one.
The proposed dummy will also have a significant contribution in helping the
physical therapists maintain their abilities and in avoiding malpractice or further injury
of real patients [2, 3].
The dummies are designed to provide real-time feedback concerning the techniques used and to confer safety on the student or practitioner by eliminating the concern about injuring a living being, thus facilitating the learning process [4].
If a therapist is trained in a controlled environment and becomes aware of tissue
feedback, the risk of pathology aggravation due to incorrect joint manipulations is
eliminated. This can also lead to optimizing the recovery time [5, 6].
At the moment, there are devices for simulating laparoscopic surgeries; body segments made of different composite materials for simulating orthopedic, abdominal, chest and heart surgeries and endoscopic examinations; dummies for obstetrics, gynecology and pediatrics; samples of artificial tissue for learning surgical techniques; and dummies for learning intensive-care maneuvers [7–9].
Existing CPR dummies cover two training components: chest compressions and rescue breaths. For training purposes, a good CPR dummy should support both. Beyond this bare minimum, newer CPR dummies add audio and visual feedback to quickly teach trainees proper compression depth, hand placement, rescue breaths, etc.
Currently, there are no dummies able to reproduce the feedback of damaged tissues and joints. Our proposed mannequin will be able to signal errors that occur while the rehabilitation maneuvers are performed. The alarm thresholds will depend on the lesion type (wounds, burns, fractures, inflammations, stiff joints, etc.) and on the time elapsed since the lesion occurred.
The current dummies lack electro-hydraulic joints and also provide limited feedback to the practitioner [10]. The joint capsules have to be programmable so as to reproduce joint restrictions due to ligament, muscle and fascia tensions.
The current means of feedback (usually acoustic) have to be developed further to provide extensive information, in order to forewarn the therapist about the tissue tension that appears during joint and segment mobilizations. In addition, the feedback should provide specific information about the part of the procedure that was problematic (applied pressure, rotation degree, duration, etc.) [11].
Sensitive Mannequin for Practicing the Locomotor Apparatus Recovery Techniques 309
3 System Description
The implemented system is composed of physical, electronic and software modules that work together in a teaching environment specific to human locomotor system recovery procedures.
For each physical procedure type, the system stores several sets of valid information produced by the professor. This data is used for matching against the datasets obtained from students performing the same procedure.
Physical Dummy Components. The dummy body parts (arms, legs, shoulder or pelvic joints, etc.) are made of a metallic structure covered with rubber/plastic coatings; they model the human skeleton and the muscle/skin tissue that surrounds it. The joint capsules model the physical limb articulations.
Data Acquisition System. The dummy components have wireless electronic sensors inserted into special pockets located inside the rubber parts. The sensor system is responsible for registering and transmitting all the acquired data reflecting the physical movements of the dummy parts (Fig. 1).
Fig. 1. [Diagram: the acquired data passes through normalization and preprocessing, then is stored in the valid datasets DB]
Data Processing and Analysis System. During the training phase, the teacher performs a given procedure on the dummy. Before storage, the acquired signal samples are normalized and prepared for interpretation (preprocessed). A decision system eliminates irrelevant values and extracts the main characteristics of the acquired information, thus preparing it for future use.
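These two steps can be sketched in a minimal Python illustration; min-max scaling for normalization and a standard-deviation spike filter for the decision step are assumptions, as the paper does not specify the exact algorithms:

```python
import statistics

def normalize(samples):
    # Min-max normalization: scale raw sensor samples into the 0..1 range.
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return [0.0] * len(samples)
    return [(s - lo) / (hi - lo) for s in samples]

def preprocess(samples, spike_threshold=3.0):
    # Drop spike values further than `spike_threshold` standard deviations
    # from the mean, mimicking the elimination of irrelevant values.
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1.0
    return [s for s in samples if abs(s - mean) <= spike_threshold * stdev]
```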
The information acquired while the students perform the same procedure follows
the same route (normalizing and preprocessing) and the result is matched against the
valid datasets created by the tutor. The result is returned to the student as generic or
detailed feedback.
The generic feedback mechanisms display visual warnings and acoustic signals when a procedure is poorly performed, while the detailed feedback reports the parameters that were out of their specific range and thus triggered the warning signals (Fig. 2).
Fig. 2. The student’s procedures are verified against the valid datasets
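The matching of a student's acquisition against the tutor's valid dataset can be sketched as follows; the characteristic names and the 10% tolerance are illustrative assumptions:

```python
def match_procedure(student, valid, tolerance=0.10):
    # Compare each characteristic of the student's acquisition with the
    # tutor's valid dataset. `ok` drives the generic feedback; the list
    # of out-of-range parameters drives the detailed feedback.
    out_of_range = [name for name, expected in valid.items()
                    if abs(student.get(name, float("inf")) - expected)
                    > tolerance * abs(expected)]
    return not out_of_range, out_of_range
```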
The software modules that control all the dataflow are divided into several categories,
depending on the roles they play in the system. The Human Computer Interaction
(HCI) is performed using classical peripheral devices (mouse, keyboard, touch screen)
or voice commands.
Some of the modules described below are already implemented and some have yet to be developed, as the system for practicing the locomotor apparatus recovery techniques is still in development.
The system control software modules are responsible for handling the entire developed system. Once started, the Graphical User Interface (GUI) will allow the user to:
– select the functioning mode (tutor or student)
– create or specify a procedure identifier
– enter or exit the data acquisition mode
– start or stop the current acquisition
– confirm the acquisition, store or receive feedback for the current acquired data
The data acquisition software modules begin running once the user decides to use this facility. Because the person performing the physical procedures has both hands occupied maneuvering the dummy, these software modules can be driven by vocal commands. They perform the following tasks:
– interrogate the available sensors and open a specific channel for each one of them;
– wait for the spoken “start acquisition” command; once received, the data acquisition process begins;
– collect the data received from the registered sensors, until the “stop acquisition”
command is pronounced;
– ask the user to validate the current acquisition with “YES”/“NO” pronunciation;
These modules are already implemented, except for the vocal command system.
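The spoken-command protocol described above can be sketched as a small state machine; the event encoding below is an assumption made for illustration:

```python
def acquisition_session(events):
    # Replay a mixed stream of ("cmd", text) and ("sample", value) events
    # according to the spoken-command protocol described above.
    recording, samples = False, []
    for kind, value in events:
        if kind == "cmd" and value == "start acquisition":
            recording = True
        elif kind == "cmd" and value == "stop acquisition":
            recording = False
        elif kind == "cmd" and value == "NO":
            samples = []  # the user rejected the acquisition
        elif kind == "sample" and recording:
            samples.append(value)
    return samples
```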
The data normalization and preprocessing software modules work with acquired
data. This process is not controlled by the user and performs its tasks once the current
acquisition is finished.
– the leading and trailing noise is eliminated;
– spikes in the sensor data are eliminated;
– a few signal characteristics are computed: maximum and minimum amplitude, duration in milliseconds, general average value and specific average values over signal sequences;
These modules are already implemented.
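The listed characteristics can be computed directly from one acquired signal; the sample rate and the number of sequences below are illustrative parameters, not values from the system:

```python
def characteristics(samples, sample_rate_hz=100, sequences=4):
    # Compute the characteristics listed above: max/min amplitude,
    # duration in milliseconds, general average and per-sequence averages.
    n = len(samples)
    seq_len = max(1, n // sequences)
    seq_avgs = [sum(samples[i:i + seq_len]) / len(samples[i:i + seq_len])
                for i in range(0, n, seq_len)]
    return {"max": max(samples), "min": min(samples),
            "duration_ms": n * 1000 / sample_rate_hz,
            "average": sum(samples) / n,
            "sequence_averages": seq_avgs}
```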
The database communication software modules store/retrieve data in/from the database and are currently implemented and functional.
References
1. Ishikawa, S., Okamoto, S., et al.: Assessment of robotic patient simulators for training in
manual physical therapy examination techniques. PLoS ONE 10, e0126392 (2015)
2. Silberman, N.J., Panzarella, K.J., Melzer, B.A.: Using human simulation to prepare physical
therapy students for acute care clinical practice. J. Appl. Health 42, 25–32 (2013)
3. Thomas, E.M., Rybski, M.F., Apke, T.L., Kegelmeyer, D.A., Kloos, A.D.: An acute
interprofessional simulation experience for occupational and physical therapy students: key
findings from a survey study. J. Interprof. Care 31, 317–324 (2017)
4. Boykin, G.L.: Low fidelity simulation versus live human arms for intravenous cannulation
training: a qualitative assessment. In: Duffy, V., Lightner, N. (eds.) Advances in Human
Factors and Ergonomics in Healthcare. Advances in Intelligent Systems and Computing, vol.
482. Springer, Cham (2016)
5. Wells, J.: Development of a high fidelity human patient simulation curriculum to improve
resident’s critical assessment. Ann. Behav. Sci. Med. Educ. 29, 10–13 (2014)
6. Friedrich, U., Backhaus, J., et al.: Validation and educational impact study of the NANEP
high-fidelity simulation model for open preperitoneal mesh repair of umbilical hernia (2019)
7. Shoemaker, M.J., Riemersma, L., Perkins, R.: Use of high fidelity human simulation to teach
physical therapist decision making skills for the intensive care setting. Cardiopulm. Phys.
Ther. J. 20, 13 (2009)
8. Leocádio, R.R.V., Segundo, A.K.R., Louzada, C.F.: A sensor for spirometric feedback in
ventilation maneuvers during cardiopulmonary resuscitation training. Sensors (Basel) 19,
5095 (2019)
9. Heraganahally, S., Mehra, S.: New cost-effective pleural procedure training: manikin-based
model to increase the confidence and competency in trainee medical officers. Postgrad. Med.
J. 95, 245–250 (2019)
10. Anatomical Models and Educational Supplies. http://www.mentone-educational.com.au.
Accessed 04 Nov 2019
11. Kim, Y., Jeong, H.: Virtual-reality cataract surgery simulator using haptic sensory
substitution in continuous circular capsulorhexis. In: 2018 Conference Proceedings IEEE
Engineering in Medicine and Biology Society, pp. 1887–1890 (2018)
12. Monnit. https://www.monnit.com/. Accessed 06 Nov 2019
13. Althen Sensors and Controls. https://www.althensensors.com/. Accessed 06 Nov 2019
14. Java. https://www.java.com/. Accessed 06 Nov 2019
15. MariaDB. https://mariadb.org/. Accessed 06 Nov 2019
Pervasive Information Systems
Data Intelligence Using PDME for Predicting
Cardiovascular Predictive Failures
1 Introduction
As reported by the World Health Organization, Cardiovascular Diseases (CVD) are the prime cause of death worldwide. In 2016, approximately 29% of deaths were due to ischaemic heart disease and approximately 10% to stroke; both diseases are deeply connected to CVD, which accounted for 39% of all 56.9 million deaths in 2016 [1]. The most used heart disease treatment and CVD prevention protocols are costly and require continuous visits to a healthcare facility, which is a big roadblock for the elderly. These visits can become a big challenge as their health continues to decline, especially for those who suffer from chronic heart failure.
In 2009, the direct and indirect costs of CVD and stroke exceeded $475 billion in the US alone; direct costs include healthcare, hospital and nursing home expenses, while the indirect costs are associated with lost productivity, caregiver burden, disability, and mortality [2].
Studies showed that in 2005, for at least 82% of adults aged above 65, the principal causes of death were all related to CVD, with the values spiking even more as people get older; CVD spares no race or gender, and at 70 years old the
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 317–326, 2020.
https://doi.org/10.1007/978-3-030-45697-9_31
318 F. Freitas et al.
lifetime risk of having a first Coronary Heart Disease is 34.9% in men and 24.2% in
women [2]. According to data from World Population Prospects: the 2019 Revision, by
2050, one in six people in the world will be over age 65 (16%), up from one in 11 in
2019 (9%). By 2050, one in four persons living in Europe and Northern America could
be aged 65 or over. In 2018, for the first time in history, persons aged 65 or above
outnumbered children under five years of age globally. The number of persons aged 80
years or over is projected to triple, from 143 million in 2019 to 426 million in 2050 [3].
Therefore, with the growth of the elderly population worldwide and the continuing costs of CVD treatment and prevention, there is a need to slow these tendencies, and data mining makes this possible. With the improvement of technology, vital biological parameters such as the Electrocardiogram (ECG), heart rate, systolic/diastolic pressure and temperature can be measured accurately and in real time by wearable and mobile sensors and transmitted wirelessly to a gateway device (e.g. smartphone, tablet) [4].
With the collection of this information, it is possible to find tendencies and develop prediction models with the help of analysis from medical experts, supporting decisions in a faster and more autonomous way. There are already some Data Mining Engines being used in medical or health care centers, but most of them cannot be used without a DM specialist [5]. The objective of this project is to determine whether the PDME is a reliable tool in a medical study, mainly from a CVD data analysis standpoint, so that it can be introduced to people with less DM knowledge.
2 Background
2.1 Data Mining
The constant evolution of Information Technology (IT) has created a huge number of databases and ever larger amounts of data in various areas. A new approach started to form: the usage and manipulation of this data for further decision making [6].
Data mining is the analysis of observational datasets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner [7]. In a DM project the objective is to make discoveries from data; the main goal is to be as confident as possible about our results, which may still lead to conclusions other than the intended ones, mainly because of the uncertainty present in the data. In a DM project we work with samples of data from which we draw conclusions that apply to the universe of data we gathered. DM is known as an undisputed approach for dealing with data ambiguity [7].
With the development of newer and even more accurate technologies, Fraunhofer developed its own NIHT system, called SmartBEAT, with Fraunhofer Portugal Association as the Coordinator of the project [9].
SmartBEAT is a smartphone-based HF NIHT system designed to detect congestion
through daily monitoring of HF symptoms, weight, peripheral blood oxygen saturation,
blood pressure, heart rate, physical activity, and therapy adherence. This solution
comprises a monitoring device and a system to collect, analyze, store, manage and
transmit data [9].
2.3 PDME
In the present day, the growth of data mining engines and of data mining in general has created a new era of data in which everything can be used to obtain information, tendencies and patterns, but all of these engines require knowledge of DM concepts [5].
The Pervasive Data Mining Engine (PDME) [5] simplifies this by connecting the general characteristics of a data mining engine with pervasive computing, which means putting all the technological needs in the “background” so that users only have to provide the data. Since PDME offers both a fully automatic configuration and manual data mining services, the user can choose whichever one they want; both are simple to use and notably logical and intuitive for the user [5]. Providing all the DM tools and their outcomes, mainly as dashboards and probabilities, in real time at any place and time makes PDME a truly powerful tool in data mining in general and among data mining engines in particular [5].
Obesity - Obesity increases the probability of having a CVD even if there are no other risk factors attached; the excess weight increases the strain on the heart, raising blood pressure and cholesterol and triglyceride levels. These factors can increase the risk of atherosclerosis and thrombolytic embolism, which are CVDs [12];
Blood pressure - Mainly known as hypertension, it is deeply linked to cardiac diseases; high blood pressure affects the heart by thickening and stiffening it, making it harder for the blood to flow, which can lead to heart attacks and strokes [12].
All these CVD risk factors have symptoms or pathologies that are deeply inherent to the disease, causing conditions in the people affected by them, and it is important to define them to make diagnosis easier or to associate them with a specific branch of CVDs:
Syncope - a self-limited loss of consciousness with the inability to maintain postural tone, followed by spontaneous recovery [13].
Orthopnea - the sensation of breathlessness that occurs when lying flat [13].
Dyspnea - shortness of breath [13].
Edema - swelling caused by excess fluid trapped in the body's tissues [13].
By relating the symptoms and the results, it is possible to derive a classification system that facilitates a diagnosis; in our case this is the NYHA classification.
• PDME (Pervasive Data Mining Engine) - the main tool used in this project, to produce analyses, predictions and data models.
4 Data Study
4.3 Modeling
After separating all the tables for the data transformations, we first transformed the data to a DATE-type syntax (dd/mm/yyyy). The timeline of the data starts on 15 February 2019 and ends on 28 March 2019. The main point was to define the data granularity, because there are various types of data granularity.
After all the transformations we obtained a dataset with 691 rows at a daily granularity; Tables 3 and 4 show the patient data and the dyspnea level records by frequency.
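The reduction to daily granularity can be sketched as follows; keeping the worst dyspnea level reported each day is an illustrative aggregation choice, not a rule stated here:

```python
from datetime import datetime

def to_daily(records):
    # Group raw (timestamp, dyspnea level) records into the dd/mm/yyyy
    # daily granularity, keeping the highest level reported each day.
    daily = {}
    for ts, level in records:
        day = datetime.strptime(ts, "%d/%m/%Y %H:%M").strftime("%d/%m/%Y")
        daily[day] = max(daily.get(day, level), level)
    return daily
```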
5 Evaluation
The strategy used was essentially to first show the ability of the prediction models to accurately predict the severity levels of dyspnea by showing the clear relationship between the risk factors, reporting the minimum, maximum and average percentage accuracy over a 10-fold run (10 sets of model training and testing).
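The 10-fold reporting used in every scenario can be sketched as follows; the contiguous split is a simple stand-in, as the actual fold construction is not described here:

```python
def kfold_indices(n, k=10):
    # Split n sample indices into k contiguous folds for train/test runs.
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def summarize(fold_accuracies):
    # Report a fold run the way the scenarios do: min, average, max.
    return {"min": min(fold_accuracies),
            "avg": sum(fold_accuracies) / len(fold_accuracies),
            "max": max(fold_accuracies)}
```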
Scenario 1 - In this scenario we used metrics that are prevalent in heart failure patients.
Attributes used: q3 (target), age, fragility, BMI.
Technique used: Caret_C50 / Results: min: 83.1%, avg: 91.4%, max: 97.6%.
In Table 5 we can see an average accuracy of 91%. We can conclude that, since the data comes from a universe of patients who are already ill, using attributes that are characteristic of people with this disease naturally produces great results, not because of the ability of the model but because of the data.
Scenario 2 - In this scenario we decided that it would be important to test all the variables in our dataset, not to draw big conclusions from a medical standpoint but to see how the models behave with many variables and which ones stand out.
Attributes used: q3 (t), age, avgHR, diabetes, diastolicBP, fragility, gender, BMI, medicine, q1, q2, q4, vmaxHRminHR, vsystodias.
Technique used: randomUnionForest / Results: min: 82.7%, avg: 91.7%, max: 96.6%.
Table 6 shows that, with all the attributes, the models had no problem predicting, as they also reached about 92%. We can take from this that certain attributes are important to the data and the models discard the rest; one solution is to examine the weight or importance of each attribute.
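One way to inspect the weight of each attribute, as suggested above, is to rank attributes by their absolute correlation with the target; this is a generic stand-in, not PDME's internal importance measure:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equally long columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def attribute_importance(dataset, target):
    # Rank every attribute by |correlation| with the target column.
    scores = {name: abs(pearson(col, dataset[target]))
              for name, col in dataset.items() if name != target}
    return sorted(scores, key=scores.get, reverse=True)
```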
Table 7 reveals how the models focus on BMI and use this parameter as the main value to predict the level of dyspnea, which demonstrates the relationship of BMI with heart failure.
Scenario 3 - After the first two scenarios, we decided to explore variables that are common to all people and can be tracked easily; to make this possible we removed the variables that are hard to obtain and also the variables with greater importance, such as BMI.
Attributes used: q3 (t), age, avgHR, diastolicBP, gender, vmaxHRminHR, vsystodias.
Technique used: Caret_C50 / Results: min: 75.6%, avg: 85.1%, max: 92.8%.
In Table 8 we can see that from the moment we remove the most important variables, the accuracy levels start to decrease; in this scenario we have all the data concerning the users' hearts and only their age and gender. The models are still able to correctly demonstrate PDME's capacity and its data versatility; in our opinion this is the best model. Table 9 gives the set of rules created by the data model.
Scenario 4 - In this scenario only data about the heart is used; we removed most of the variables to show the volatility of the data when the heavier variables are excluded.
Attributes used: q3 (t), avgHR, diastolicBP, vmaxHRminHR, v_systodias.
Technique used: randomUnionForest / Results: min: 14.9%, avg: 31.1%, max: 40.6%.
Scenario 4, exhibited in Table 10, has the lowest accuracy of all the scenarios; it exposes the inability of the models to obtain good results, because this data depends on the patient's age and gender, and the cardiovascular parameters vary with both.
6 Deployment
The data that we obtained was from a group of people with some level of heart failure,
so it was natural to see the scenarios with great levels of accuracy, because the data is
deeply connected to all the cardiovascular risk factors.
Initially, with the results of the first scenarios, PDME showed its capacity to identify the most important variables and how they were connected to the NYHA classification; it shows the relationship between them and how they affect the data models via their weight importance in the data mining models. Secondly, the way the models got worse as soon as we created scenarios with fewer CVD risk attributes continues to show how important some variables are for the prediction levels. Most importantly, the data models corroborate the science behind this disease, showing that the tendencies and patterns were deeply rooted in the medical concepts. When BMI or the other main risk factors were present in the scenarios, the accuracy levels were truly high, but as soon as we removed these factors the values started to decline; without them it is impossible to obtain any sort of good analysis or prediction. Finally, we can say that PDME is a data mining engine competent in this field of study and with this type of dataset.
We can say that PDME can be used from a cardiovascular disease prediction standpoint, uncovering patterns and tendencies in this area. There were some difficulties in this work, most of them connected to the complexity of CVD; it is important to have some knowledge in this area in order to capitalize on all the potential that PDME offers.
Surely, with better knowledge of this disease it is possible to obtain improved clinical significance and even breakthroughs in this area of medicine, with the possibility of use in a real-life environment. This work opens a new research field in medicine applied to the early detection of CVD. Future work will focus on exploring more complex and extensive datasets. A deeper analysis is needed to apply the same concepts using more data and attributes, and then to create prediction models that can help doctors automatically predict a patient's heart failure level or that can be used as a prevention tool for this type of disease.
References
1. World Health Organization: Global Health Estimates 2016: Deaths by Cause, Age, Sex, by
Country and by Region, 2000–2016. WHO, Geneva (2018)
2. Yazdanyar, A., Newman, A.B.: The burden of cardiovascular disease in the elderly:
morbidity, mortality, and costs. Clin. Geriatr. Med. 25, 563–577 (2009)
3. United Nations: World Population Ageing 2019 - Highlights. United Nations (2019)
4. Lappa, A., Goumopoulos, C.: A home-based early risk detection system for congestive heart,
Patras, Greece (2019)
5. Peixoto, R.D.F.: Pervasive data mining engine, Guimarães (2015)
6. Ramageri, B.M.: Data mining techniques and applications. Indian J. Comput. Sci. Eng. 1,
301–305 (2010)
7. Mannila, H., Smyth, P., Hand, D.: Principles of Data Mining. The MIT Press, Cambridge
(2001)
8. Koudstaal, S., Asselbergs, W., Brons, M.: Algorithms used in telemonitoring programmes
for patients with chronic heart failure: a systematic review. Eur. J. Cardiovasc. Nurs. 17,
580–588 (2018)
9. Cardoso, J., Moreira, E., Lopes, I.: SmartBEAT: a smartphone-based heart, Porto (2016)
10. Sullivan, P.L.: Correlation and Linear Regression. Boston University School of Public
Health. http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Correlation-Regression/
BS704_Correlation-Regression_print.html
11. INS Português Doutor Ricardo Jorge: Doenças Cardiovasculares (2016)
12. Mukerji, V.: Clinical Methods: The History, Physical, and Laboratory Examinations.
Butterworth-Heinemann, Boston (1990)
13. Nason, E.: An overview of cardiovascular disease and research (2007)
14. Su, J.: Developing an early warning system for congestive heart failure using a Bayesian
reasoning network. Doctoral dissertation, Massachusetts Institute of Technology (2001)
15. Auble, T.E.: A prediction rule to identify low-risk patients with heart failure. Acad. Emerg.
Med. 12, 514–521 (2005)
16. Visweswaran, S., Angus, D.C., Cooper, G.F.: Learning patient-specific predictive models
from clinical data, University of Pittsburgh (2010)
17. Varma, D., Shete, V., Somani, S.B.: Development of home health care self. Int. J. Adv. Res. Comput. Commun. Eng. (2015). https://www.ijarcce.com/upload/2015/june-15/IJARCCE%252054.pdf
18. Alturki, A., Bandara, W., Gable, G.: DSR and the core of information systems (2012)
19. Hevner, A.: Design science in information systems research (2004)
20. Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C., Wirth, R.:
CRISP-DM 1.0. CRISP-DM Consortium, p. 76 (2000)
21. New York Heart Association: Specifications Manual for Joint Commission National Quality
Measures (2016)
Design of a Microservices Chaining
Gamification Framework
Ricardo Queirós(B)
Abstract. With the advent of cloud platforms and the IoT paradigm, the concept of micro-services has gained even more strength, making the processes of selection, manipulation, and deployment crucial. However, this whole process is time-consuming and error-prone. In this paper, we present the design of a framework that allows the chaining of several microservices into a composite service in order to solve a single problem. The framework includes a client that will allow the orchestration of the composite service based on a straightforward API. The framework also includes a gamification engine to engage users not only to use the framework but also to contribute new microservices. We expect to have a functional prototype of the framework shortly so we can prove this concept.
1 Introduction
Nowadays, enterprise systems are mostly based on loosely coupled interoperable services – small units of software that perform discrete tasks – from separate systems across different business domains [1]. A crucial aspect in this context is
service composition. Service composition is the process of creating a composite
service using a set of available Web services in order to satisfy a user request
or a problem that cannot be satisfied by any individual Web service [1]. The
service composition can be defined from a global perspective (choreography) or
using a central component that coordinates the entire process (orchestration).
However, despite all the software that implements those concepts, there are few
that can be used, in a very simple way, and focused on the new paradigm of
cloud services, where the JSON specification increasingly assumes a prominent
role in the data exchange formalization within the client-server model.
This paper focuses on the design of a framework that aims to integrate a set
of micro-services to solve a particular problem. In order to engage users to use
the platform based on the framework, a gamification engine is injected which
will perform tasks such as grading micro-services and their aggregation.
The framework is composed of three main components: an authoring tool, a gamification engine, and a Web client engine. The first allows the submission, chaining, sharing and grading of the micro-services. The gamification engine allows
c The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 327–333, 2020.
https://doi.org/10.1007/978-3-030-45697-9_32
328 R. Queirós
the user to grade micro-services and their aggregation as a Web Service Container (WSC). The latter allows developers to iterate over all the micro-services in a service container through a simple API. The main advantage of this approach over the existing approaches is its simplicity and separation of concerns. Firstly, a service container is formalized with a simple JSON schema that can be loaded from the cloud; then, through a simple API, developers can manage the execution flow of the process without worrying about HTTP clients and messages. Secondly, the framework fosters the separation of concerns by giving the developer the mission of formalizing and submitting micro-services and interacting with the engine, and giving business analysts the chance to chain the micro-services and generate containers that address a particular problem. The chaining process will be very simple, using drag-and-drop techniques, and will automatically flag invalid pairings based on the matching of the services' response/request types.
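The pairing check can be sketched as follows; the descriptor fields (`request_type`, `response_type`) are illustrative assumptions about the service metadata:

```python
def validate_chain(services):
    # Flag every adjacent pairing whose producer response type does not
    # match the consumer request type, as the editor would do.
    return [(a["name"], b["name"])
            for a, b in zip(services, services[1:])
            if a["response_type"] != b["request_type"]]
```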
This work is organized as follows. Section 2 discusses some key concepts
regarding service composition, namely, choreography and orchestration. In
Sect. 3 we present the framework that was designed to help developers/analysts compose and interact with Web services. Section 4 evaluates the proposed framework through the creation of a prototype for a healthcare case study.
Finally, we enumerate the main contributions of this work and future directions.
2 State of Art
One of the hot topics in this paper is service composition. This concept encourages the design and aggregation of services that can be reused in several scenarios. The next section focuses on the two most used techniques: orchestration and choreography.
The framework proposed in this paper shares some aspects with several automation pipeline tools that exist on the Web. For instance, in the field of service automation tools, several tools have appeared in recent years. The best examples are IFTTT, Pipes, Node-RED and SOS [4].
IFTTT is a free web-based service which is mostly used to create chains of simple conditional statements, called applets.
Pipes is a visual programming editor specialized in feeds; it provides a UI with blocks that can fetch and create feeds and manipulate them in various ways, such as filtering, extracting, merging and sorting. The user only needs to connect those
1 http://www.bpmn.org/.
2 https://www.w3.org/TR/ws-cdl-10/.
3 https://www.w3.org/TR/wsci/.
4 http://www.chor-lang.org/.
5 IFTTT: https://ifttt.com/.
6 Pipes: https://www.pipes.digital/.
7 Node-RED: https://nodered.org/.
blocks with each other so that data can flow through the pipe from block to block. The final output is a news feed, which can be served to other programs that support open web standards. As input formats, Pipes supports RSS, Atom, and JSON feeds; it can scrape HTML documents, and it can work with regular text files.
Node-RED is a flow-based development tool for visual programming, developed originally by IBM for wiring together hardware devices, APIs and online services as part of the Internet of Things. Node-RED provides a web browser-based flow editor, which can be used to create JavaScript functions. Elements of applications can be saved or shared for re-use. The runtime is built on Node.js. The flows created in Node-RED are stored using JSON.
Simple Orchestration of Services (SOS) is a pipeline service environment for which only a logical architecture has been defined (without any functional prototype). The goals of SOS are to relieve developers of the burden of dealing with bureaucratic HTTP aspects and to centralize a set of services as tasks, allowing their composition into a bigger service that can be used by a Web client.
3.1 Architecture
Fig. 2 presents the architecture of the proposed framework.
The architecture is composed of the following components:
– The editor - Web-based component with a GUI for the submission, chaining,
and generation of composite services;
– The manifest builder - acts as a component responsible for the conversion
and serialization of the final chain in a downloadable format;
– The gamification engine - a component which will retain data related to productivity and challenges. The engine should be able to communicate with a GBaaS (Gamification Backend as a Service);
– The API - interface exposing all the actions which users can do to interact
with a particular instance of the framework;
– The client - a client component responsible for the use and potential orchestration of the composite service.
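A minimal sketch of how the client component might orchestrate a composite service from its manifest; the manifest fields and service names are hypothetical, and the HTTP call is injected so the flow logic stays self-contained:

```python
# A hypothetical composite-service manifest, as the manifest builder
# might serialize it (all field names and URLs are illustrative).
MANIFEST = {
    "name": "triage",
    "services": [
        {"name": "parse-symptoms", "url": "https://example.org/parse"},
        {"name": "score-risk", "url": "https://example.org/score"},
    ],
}

def run_chain(manifest, payload, call):
    # Orchestrate the composite service: each microservice's output
    # becomes the next one's input. `call(service, payload)` performs
    # the actual HTTP request and is passed in so it can be stubbed.
    for service in manifest["services"]:
        payload = call(service, payload)
    return payload
```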
3.2 Editor
The editor is a Web-based component that will help users in the submission and aggregation of microservices. The final result is a new composite service as a Web manifest that can be stored in the cloud or saved on the user's computer.
A user can perform the following operations in the editor:
Design of a Microservices Chaining Gamification Framework 331
4 Conclusion
In this paper, we present the design of a framework that serves as a tool for
service composition. The main idea is to use a Web editor to aggregate small
microservices and chain them into a composite service. These composite services
can be loaded into a client library that manages the execution flow of the
composite service and shields the developer from all the bureaucratic aspects of
HTTP messaging management.
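The client library's role of managing the execution flow can be sketched as a simple fold over the chain's steps, with each step's output becoming the next step's input. In the real framework each step would be an HTTP call to a microservice; plain functions stand in here so the control flow is visible, and all names are illustrative:

```javascript
// Execute a composite service by piping each step's output
// into the next step's input (functions stand in for HTTP calls).
function runChain(steps, input) {
  return steps.reduce((data, step) => step(data), input);
}

// Two stand-in "microservices":
const toUpper = (s) => s.toUpperCase();
const exclaim = (s) => s + "!";

const result = runChain([toUpper, exclaim], "hello"); // "HELLO!"
```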
The main contribution of this work is the design of a services chaining frame-
work which includes the interaction of several components.
As future work we intend to:
– Define the formats for the microservices, the composite services, the mani-
fests, and the API;
– Create a prototype by choosing a specific domain and by implementing all
these components;
– Include visual programming constructs in the editor, such as conditional
and loop blocks.
PWA and Pervasive Information
System – A New Era
Abstract. Nowadays, users increasingly demand applications that are more
flexible, adaptable, and capable of being executed on different operating
systems. This stems from the need to access those applications regardless of
where, when, and with which device they do so.
This is the basis of the concept of Pervasive Information Systems (PIS).
But how can such a complex topic be handled? How can an application be
developed for global usage? A new development methodology has been
emerging, the Progressive Web Application (PWA), which mixes web pages with
the world of mobile applications. So, in a nutshell, PWAs appear as a concretization
of the PIS concept. This article aims to explore this topic and provide a few
insights into what a PWA is and what its strengths, weaknesses, opportunities,
and threats are.
1 Introduction
For a long time now, the variety of devices that people use on a daily basis has
been growing. This became even more intense with the implementation of the
Internet of Things (IoT) concept, described as a model that allows different
devices (the “things”) to be connected as part of the Internet [1]. These “things” are
used to capture several kinds of data related to the environment they are in or to
the human beings that use them. In 2018, around 23.14 billion devices were part of
this IoT universe worldwide, and it is estimated that by 2025 this value may triple,
reaching 75.44 billion [2]. Taking this as the starting point for this article, the
enormous variety of user interfaces for which many applications must be prepared
becomes clear. Given this, and the paired technology evolution, context-awareness
becomes necessary [3], along with the need to develop solutions that can be
combined with other IoT solutions, which requires them to be flexible and modular
while still having a strong core architecture [4].
All of this meets the Pervasive Information Systems (PIS) concept, since it is based
on “non-traditional computing devices that merge seamlessly into the physical
environment” [5].
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 334–343, 2020.
https://doi.org/10.1007/978-3-030-45697-9_33
So, by now, it is understandable that cross-platform development might be the
most efficient way of dealing with such variety. Therefore, intersecting mobile
apps and the web becomes a good way of achieving that [4]. With this paper, we
intend to show how mobile computing, intersected with software engineering,
helps to materialize the PIS concept, in this case using PWAs, demystifying and
clarifying the differences between the usual Desktop Information Systems (DIS)
and Pervasive Information Systems (PIS). Further in the article, a SWOT analysis
of PWA towards PIS is included.
The present paper is subdivided into five main sections, starting with this
Introduction, which provides a few insights on the field of study along with an
overview of what is expected from this paper. Next comes the Background section,
which presents the concepts needed to understand the topic discussed here
(Information Systems and Pervasive Computing), followed by a section that aims to
cross these two concepts. Then comes a section entirely dedicated to explaining
Progressive Web Applications (the idea, and why to use this approach based on its
characteristics). At the end of this paper, a set of conclusions on the studied topic
is presented (in the Conclusions section).
2 Background
Before getting deeper into this topic, there are some basic concepts that must be
fully understood: Information Systems, Pervasive Computing, and their merged
result, Pervasive Information Systems. The Information Systems concept can be
easily understood if thought of as the result of four components: Input, Process,
Output, and Feedback, as shown in Fig. 1.
associated with this research field, such as minimizing the impact that these
systems might have on users' perception and making a system invisibly built into
the environment.
Regarding Pervasive Information Systems, it is crucial to define them clearly
before proceeding. According to Kourouthanassis, P. and Giaglis, G. [8], a pervasive
information system can be defined as “Interconnected technological artefacts diffused
in their surrounding environment, which work together to sense, process, store, and
communicate information to ubiquitously and unobtrusively support their users'
objectives and tasks in a context-aware manner”.
In other words, a PIS is a highly embedded system, to the point that users might
not notice they are using it; it has a constant presence through different types of
devices (which have no formal obligation to be participants in a concrete network,
requiring the system to support spontaneous networks); and it receives stimuli from
the environment, not necessarily from the user (context-awareness) [5, 9].
Table 1 is expected to clarify some of the differences between Information
Systems and Pervasive Information Systems. This way, it becomes easier to
understand the additional complexity of PIS when compared to IS, and the
reason for some of the problems and questions it raises.
3 PIS vs DIS
Now that it is clear what a pervasive information system consists of, it is time
to understand how it can be concretized, that is, how the concept can be turned
into an actual usable system. To address this question, a new way of dealing with
the web world together with the applications' world appeared, trying to combine
them in the best possible way: the Progressive Web Application (PWA) [11].
338 G. Fernandes et al.
Looking at the fragments of this approach's name makes it easier to understand its
goals and coverage. By being progressive, it is safe to assume that these applications
evolve over time according to their usage; the web part indicates that they are built
using web models; and the app part indicates that they have the typical app features [11].
So, PWAs are optimized, reliable web apps accessible on the web, counting on its
best parts, such as wide reach, instant access and updates, and easy shareability.
On the other hand, they also include offline storage and access to native features,
making them indistinguishable from native apps most of the time [12].
In a nutshell, the main goal of PWAs is to provide users a similar experience when
using a web app, whether through the browser or as a mobile application [13].
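On the technical side, the installable "app" behavior of a PWA hinges on a web app manifest, a small JSON file the browser reads when offering installation. The sketch below uses field names from the W3C Web App Manifest specification; the values are illustrative:

```javascript
// Minimal web app manifest (standard field names, illustrative values),
// normally served as manifest.json and referenced from the page with:
//   <link rel="manifest" href="/manifest.json">
const manifest = {
  name: "Example PWA",
  short_name: "Example",
  start_url: "/",
  display: "standalone",        // app-like window without browser chrome
  background_color: "#ffffff",
  theme_color: "#222222",
  icons: [{ src: "/icon-192.png", sizes: "192x192", type: "image/png" }]
};

const manifestJson = JSON.stringify(manifest);
```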
Comparison. Having presented the PWA concept along with its main characteristics
and benefits, it is plausible to compare it with native applications and standard web
applications, since it aims to merge them towards a better solution leveraging cross-
platform technology.
Before going deeper into this topic, it is important to clarify what a cross-platform
app is: simply put, an app whose main goal is to support multiple platforms using
just one code base. Cross-platform approaches can be divided into two paradigms:
runtime environments (which imply providing a native app for each supported
platform) and generative approaches (which imply generating the app from a single
code base) [16].
Hereupon, Table 3 compares the different types of applications against a set of
seven relevant parameters: installation, updates, size, offline access, user
experience, push notifications, and discoverability.
Back in 2017, at Google's developer conference, Google I/O, PWAs were introduced
and discussed in areas ranging from user experience and technical frameworks to
performance testing and migration. They are being pushed as the new era of user
experience for both the mobile and web worlds. Consequently, the approach started
being implemented by several reputed companies, such as Forbes, Financial Times,
Lyft, Expedia, AliExpress, Tinder, Flipkart, Housing.com, Twitter, and OLA. As can
be inferred, this approach has extensive coverage across very different niches and
markets, denoting that PWAs can fit no matter the context [16].
As an example of why such companies have been satisfied with the adoption of this
approach, and underlining the stated benefits of PWA adoption, two particular
cases are considered here: Twitter (a social network used worldwide) and OLA
(India's largest ride-hailing app). Both applications saw their size abruptly reduced
when compared with their native versions (Table 4).
Regarding OLA, another aspect deeply affected by PWA adoption relates to the fact
that PWAs do not need an internet connection to work properly. This is important
to them because they operate over three areas with distinct levels of internet
connectivity (the first tier is the one with the best connection, and the third is
associated with low connectivity). After implementing PWAs, they saw a 68% usage
growth in the third tier, and the conversion rate in this tier increased 30%. In the
other tiers, the values were similar to those of the native applications [16].
Strengths
• Better and Faster Performance: this type of application is optimized for speed
and on-demand content delivery, in line with its web orientation. Furthermore,
the client-side caching system, which allows offline interaction, puts PWAs ahead
of both native applications and web applications;
• Increased Conversion Rates: meaning an increasing number of subscriptions,
bookings, engagement, retention, loyalty, and so on. This is because PWAs improve
the user experience (faster page loads, easier installs, instant updates, smaller
websites), and that is readily converted into profit.
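The client-side caching mentioned above is usually implemented in a service worker with a cache-first strategy. The sketch below isolates that decision logic with synchronous stand-ins (a Map for the cache, a plain function for the network) so it can run outside a browser; in a real PWA this logic lives in the service worker's fetch handler, and all names here are illustrative:

```javascript
// Cache-first: serve from the cache when possible (fast, works offline),
// otherwise fetch from the network and cache the response for next time.
function cacheFirst(request, cache, network) {
  if (cache.has(request)) return cache.get(request); // cache hit
  const response = network(request);                 // cache miss: go to network
  cache.set(request, response);                      // remember for next time
  return response;
}

// Stand-ins for the browser's Cache and fetch:
const cache = new Map();
let networkCalls = 0;
const network = (url) => { networkCalls += 1; return "body of " + url; };

cacheFirst("/home", cache, network); // miss: one network call
cacheFirst("/home", cache, network); // hit: served from cache, no new call
```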
Weaknesses
• iOS Implementation: access to native functions on iOS is quite limited;
• Battery Usage: since PWAs are based on high-level programming languages, CPU
usage increases, which leads to higher battery consumption;
• Mobile Technical Requirements: specific technical requirements, like fingerprint
authentication, are available only on native platforms (not on the web).
Opportunities
• Everything is Discoverable, Shareable, Linkable, and Rankable: users can easily
share links with each other, which helps grow the user base and makes things more
convenient for users;
• App Stores are Optional: since the download is made directly from the browser,
all the bureaucracy associated with uploading the application to an app store, and
updating it there, is avoided;
• Reduced Development Costs: both resources and time are reduced, because just
one application is developed and maintained instead of four (web, Android, iOS,
and Windows);
• Better User Adoption: PWAs are much easier to install, and they give users the
possibility of trying the application (in the browser) before installing it.
Threats
• New Approaches: the appearance of new approaches that might solve today's PWA
problems, such as the limitations related to iOS and certain technical requirements;
• Legacy Systems: some older, less-used systems and browsers do not handle this
new approach as well as they should, which can lead some users to abandon it;
• Accessibility: once again, features that JavaScript cannot access reduce the
possibilities of connecting with devices such as sensors, which are widely used in
the emerging Internet of Things world.
5 Conclusions
This paper served the purpose of contextualizing the problem at hand, the
concretization of Pervasive Information Systems along with its problems, and of
presenting a recent approach, Progressive Web Applications, that aims to solve, at
least in some cases, those same problems.
To do so, a substantiated explanation of what Pervasive Information Systems are
was provided, making clear their coverage, complexity, and imminence in our lives.
With that, the problems involved in trying to implement such a system were exposed.
Therefore, towards the concretization of the Pervasive Information System, it is
possible to see, through this paper, that Progressive Web Apps address its questions,
at least regarding the most imminent problem associated with this matter:
uniformizing the experience across the different devices used by common users.
This means that the PWA approach is not specialized to solve every problem that
comes along with PIS, but it does boost the possibilities around user experience
regardless of the users' environment conditions.
In a nutshell, this article aims to be an easy explanation of how PWAs appear as a
concretization of the premises involving PIS, after explaining the concepts
necessary to fully understand the real need at hand, along with a proper solution.
So, by the end of this paper, it is expected that the goal has been achieved of
alerting the community to a new way of development, one that accelerates
development and solves different problems concerning PIS implementation by
building just one application. Moreover, the main contribution of this article is the
SWOT analysis of PWAs, which can be seen as a starting point for whoever wants to
develop different and transverse solutions.
In a future article, we expect to take this matter to another level, giving it a
stronger background, allowing the question to be explored more deeply, along with
an easy-to-follow guide towards simple PWA development. In other words, an
extended version of the present paper will be produced with the goal of providing
some guidelines for developing a PWA.
Acknowledgements. The work has been supported by FCT – Fundação para a Ciência e a
Tecnologia within the Project Scope: UID/CEC/00319/2019.
References
1. Simmhan, Y., Perera, S.: Big data analytics platforms for real-time applications in IoT. In:
Big Data Analytics: Methods and Applications, India. Springer, Heidelberg (2016)
2. Columbus, L.: https://www.forbes.com/sites/louiscolumbus/2016/11/27/roundup-of-internet-
of-things-forecasts-and-market-estimates-2016/#130f4040292d, 27 November 2016
3. Majchrzak, T.A., Schulte, M.: Context-dependent testing of applications for mobile devices.
J. Web Technol. 2(1), 27–39 (2015)
4. Gronli, T.-M., Biorn-Hansen, A., Majchrzak, T.A.: Software development for mobile
computing, the internet of things and wearable devices: inspecting the past to understand the
future. In: Proceedings of the 52nd HICSS Hawaii (2019)
5. Kourouthanassis, P.E.: A Design Theory for Pervasive Information Systems, pp. 1–2 (2006)
6. Stair, R., Reynolds, G.: Principles of Information Systems. Cengage Learning, Boston
(2014)
7. Kurkovsky, S.A.: Pervasive computing: past, present and future, January 2008
8. Kourouthanassis, P.E., Giaglis, G.M.: Pervasive Information Systems. M.E. Sharpe, New
York (2008)
9. Fernandes, J.E., Machado, R.J., Carvalho, J.Á.: Model-Driven Methodologies for Pervasive
Information Systems Development, 9 May 2004
10. Henricksen, K., Indulska, J., Rakotonirainy, A.: Modeling Context Information in Pervasive
Computing Systems, 21 August 2002
11. Upplication: Progressive Web Applications (2018). www.upplication.com
12. Ionic. The Architect’s Guide to PWAs. Whitepaper (2018)
13. Walker, H.: 10 reasons why you should consider Progressive Web apps (2018)
14. Wong, J.: Gartner Blog Network, Gartner, 24 March 2017. https://blogs.gartner.com/jason-
wong/pwas-will-impact-your-mobile-app-strategy/. Accessed 4 Nov 2019
15. Tandel, S.S., Jamadar, A.: Impact of progressive web apps on web app development.
IJIRSET 7(9), 9439–9444 (2018)
16. Gronli, T.-M., Biorn-Hansen, A., Majchrzak, T.A.: Progressive web apps: the definite
approach to cross-platform development? In: Proceedings of the 51st Hawaii International
Conference on System Sciences (2018)
17. Warcholinski, M.: What Are the Advantages and Disadvantages of Progressive Web Apps?
Brainhub. https://brainhub.eu/blog/advantages-disadvantages-progressive-web-apps/.
Accessed 17 Nov 2019
Inclusive Education through ICT
Young People Participation in the Digital
Society: A Case Study in Brazil
Abstract. Young people are key drivers of new behaviors and understandings.
Their participation in society allows the integration of their ideas and con-
structive analysis to foster policies and innovative solutions in which technology
is an intrinsic element. Citizen science can be used to give voice to children
and young people through the development of citizenship and engagement with
scientific debate. In the European context, the WYRED project has developed a
methodological framework to support the participation of young people in the
Digital Society through social dialogues and the support of a technological
ecosystem that enables internationalization. This work aims to lay the groundwork
for transferring the WYRED framework to the Brazilian context through a case study
conducted at the Universidade Presbiteriana Mackenzie. The study has allowed
the identification of the key topics that concern Brazilian young people in relation
to desired social change: tolerance of different cultures/opinions; mental wellbeing;
necessary changes in education (e.g. future-oriented education); self-image and self-
confidence; and Internet safety & privacy.
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 347–356, 2020.
https://doi.org/10.1007/978-3-030-45697-9_34
348 E. Knihs and A. García-Holgado
Table 1. (continued)
Title netWorked Youth Research for Empowerment in the Digital Society
Budget 993.662,50€
Start date 01/11/2016
End date 31/10/2019
Web https://wyredproject.eu
The nine partners have guided over 1500 children and young people between 7 and
30 years old over three years, asking questions and carrying out research about
themes and ideas that affect and shape their interactive, performative, and
communicative worlds. The framework is composed of the methodology and the
WYRED Ecosystem [16–18], a technological ecosystem that facilitates not only
interaction among young people from the participant countries but also interaction
of children and young people with stakeholders and decision-makers. The ecosystem
works as a catalyst to give voice to children and young people, so their ideas and
projects can have an impact on the decision-making processes related to the
Digital Society.
The WYRED Project was applied only in the participant countries, although it is
possible to apply the framework in other countries, because the methodology and
the main software components of the WYRED Ecosystem are available in several
languages (English, Spanish, Hebrew, Italian, and Turkish).
In this context, this work aims to lay the groundwork for transferring the WYRED
framework to the Brazilian context. The first phase of the WYRED methodology is a
set of social dialogues among children and young people. To initiate the dialogue
process, it is necessary to identify the key themes that concern children and young
people in relation to the desired social change. For this reason, the case study
described in this work focuses on identifying the key themes for Brazilian youth
and on conducting a set of dialogues, both face-to-face and through the WYRED
Ecosystem.
The case study was conducted at the Universidade Presbiteriana Mackenzie (São
Paulo, Brazil) in the bachelor's degree courses in Information Systems and Computer
Science, as an activity within the course Science, Technology, and Society in
Mathematics and Computing. Four different groups were involved, with a total of 95
students between 18 and 33 years old.
The rest of the paper is organized as follows. Section 2 describes the methodology
used to conduct the study. Section 3 presents the case study. Section 4 describes
the main results of the survey. Finally, Sect. 5 discusses the results and
summarizes the main conclusions of this work.
2 Methodology
Regarding the methodology used in this work, it began with a process of reflection
on the importance that young people may have on certain subjects and on their
concerns about some important themes in the Digital Society. The reading, analysis,
and discussion among the authors on the subject led to the research of
3 Case Study
The case study was based on young people's perception of the main topics of
interest in the Digital Society. The goal is to give young people a voice so that
their opinions are taken into account when technology-related decisions are made.
3.3 Participants
Four different groups were involved, with a total of 95 students between 18 and 33
years old: 13 women (13.68%), 78 men (82.11%), and 4 others (4.21%). The survey
was answered by 88 students: 12 women (13.64%), 73 men (82.95%), 2 who identified
as non-binary (2.27%), and 1 who preferred not to answer (1.14%). Regarding the
skin color/race of the sample, 66 are white (75%), 14 are pardo (15.91%), 7 are
Asian (7.95%), and 1 is black (1.14%).
Through the answers obtained in the survey (Table 2), it was possible to divide the
students according to their interests. The second phase was conducted with the
whole population. The students held dialogues on the selected topics through a set
of communities in the WYRED Platform and during a face-to-face session in each class.
4 Results
As a first step in the analysis process, descriptive statistics of the students'
answers were calculated (Table 3). Furthermore, the results were calculated per
student group.
The results indicate five topics that were of most significant interest, namely:
• Mental wellbeing.
• Tolerance to different cultures/opinions.
• Necessary changes in education.
• Self-image, self-confidence.
• Internet safety & privacy.
There are some differences among the most valued topics in the per-group results.
In class 1G, “Crime” and “Environmental problems” appear among the five most
valued topics. In class 1J, “Gender stereotypes/discrimination” appears instead of
“Internet safety & privacy.” In class 1N, “Reliability of information on the
Internet and social media” appears instead of “Self-image, self-confidence.”
Finally, in class 1X, “Roles of parents, friends and peer groups” emerges instead
of “Internet safety & privacy.”
The need for youth participation in discussion and analysis promotes a contemporary
format of transformative involvement in addressing key issues of the Digital Society.
Including young people in the process of turning skills and discussions of creative
initiatives into knowledge and competences can foster protagonism and drive positive
conclusions for the future. This paper aims to promote youth protagonism, enable
discussion and interaction on relevant aspects of the Digital Society, and present
the importance of implementing and consolidating participatory and analytical
activities directed at young people, increasing the importance of their opinions
and of the debates in which they participate.
The case study transfers an experience based on the results of a project funded by
the European Union to the Brazilian context. In particular, it adapted the WYRED
methodological framework in order to identify the main topics that concern
Brazilian young people regarding desired social change.
Regarding the most important topics rated by young people in Brazil, the results
are similar to those obtained in Europe (Austria, Belgium, Israel, Italy, Spain,
Turkey, United Kingdom). In the European survey, 355 children and young people
answered the whole survey, although 632 respondents submitted answers to part of
the questions, namely (in most cases) full answers to the topic ratings [21]. The
European sample was composed of young people between 15 and 30 years old, 48.7%
women and 51.3% men. The most valued topics in Europe were: necessary changes in
education; tolerance of different cultures and opinions; mental wellbeing;
self-image, self-confidence; and gender stereotypes/discrimination.
Even though the samples differ in size and gender balance, the results show a high
degree of similarity. Four of the five topics are the same in Brazil and Europe. In
Brazil, “Internet safety & privacy” was rated higher, while in Europe there is a
particular interest in “gender stereotypes/discrimination”.
It is also important to highlight that the topics were rated higher in Brazil than
in Europe. However, this difference may be related to the fact that the survey in
Brazil was applied in a single socio-economic and cultural context, whereas in
Europe it was applied in heterogeneous contexts across different countries and
regions.
Acknowledgments. With the support of the EU Horizon 2020 Programme in its “Europe in a
changing world – inclusive, innovative and reflective Societies (HORIZON 2020: REV-
INEQUAL-10-2016: Multi-stakeholder Platform for enhancing youth digital opportunities)”
Call. Project WYRED (netWorked Youth Research for Empowerment in the Digital society)
(Grant agreement No. 727066). The sole responsibility for the content of this webpage lies with
the authors. It does not necessarily reflect the opinion of the European Union. The European
Commission is not responsible for any use that may be made of the information contained
therein.
References
1. Kunsch, M.M.K., Kunsch, W.L.: Relações Públicas Comunitárias: A comunicação numa
perspectiva dialógica e transformadora. Summus, São Paulo, Brazil (2007)
2. Felice, M.: As formas digitais do social e os novos dinamismos da sociedade contemporânea.
Relações Públicas Comunitárias: A comunicação numa perspectiva dialógica e transfor-
madora. Summus, São Paulo, Brazil (2007)
3. Santos, M.E.V.M.: Cidadania, conhecimento, ciência e educação CTS. Rumo a “novas”
dimensões epistemológicas. Revista Iberoamericana de Ciencia, Tecnología y Sociedad 6,
137–157 (2005)
4. Mamede, S., Benites, M., Alho, C.J.R.: Ciência Cidadã e sua Contribuição na Proteção e
Conservação da Biodiversidade na Reserva da Biosfera do Pantanal. Revbea, São Paulo, V.
Revista Brasileira de Educação Ambiental (RevBEA) 12, 153–164 (2017)
5. Bueno Campos, E., Casani, F.: La tercera misión de la Universidad. Enfoques e indicadores
básicos para su evaluación. Econ. Ind. 366, 43–59 (2007)
6. García-Peñalvo, F.J.: La tercera misión. Educ. Knowl. Soc. 17, 7–18 (2016)
7. Vilalta, J.M.: La tercera misión universitaria. Innovación y transferencia de conocimientos
en las universidades españolas. Studia XXI. Fundación Europea Sociedad y Educación,
Madrid (2013)
8. García-Peñalvo, F.J., Conde, M.Á., Johnson, M., Alier, M.: Knowledge co-creation process
based on informal learning competences tagging and recognition. Int. J. Hum. Cap. Inf.
Technol. Prof. (IJHCITP) 4, 18–30 (2013)
9. Ramírez-Montoya, M.S., García-Peñalvo, F.J.: Co-creation and open innovation: systematic
literature review. Comunicar 26, 9–18 (2018)
10. Etzkowitz, H., Leydesdorff, L.: Universities and the Global Knowledge Economy. A Triple
Helix of University-Industry-Government Relations. Pinter, London (1997)
11. García-Peñalvo, F.J., Kearney, N.A.: Networked youth research for empowerment in digital
society: the WYRED project. In: García-Peñalvo, F.J. (ed.) Proceedings of the Fourth
International Conference on Technological Ecosystems for Enhancing Multiculturality
(TEEM 2016), Salamanca, Spain, 2–4 November 2016, pp. 3–9. ACM, New York (2016)
12. García-Peñalvo, F.J.: WYRED project. Educ. Knowl. Soc. 18, 7–14 (2017)
13. García-Peñalvo, F.J., García-Holgado, A.: WYRED, a platform to give young people the
voice on the influence of technology in today’s society. A citizen science approach. In:
Villalba-Condori, K.O., García-Peñalvo, F.J., Lavonen, J., Zapata-Ros, M. (eds.) Proceed-
ings of the II Congreso Internacional de Tendencias e Innovación Educativa – CITIE 2018,
Arequipa, Perú, 26–30 November 2018, pp. 128–141. CEUR-WS.org, Aachen (2019)
14. Grupo GRIAL: Producción Científica del Grupo GRIAL de 2011 a 2019. Grupo GRIAL,
Universidad de Salamanca (2019)
15. GRIAL Group: GRIAL Research Group Scientific Production Report (2011–2017). Version
2.0. GRIAL Research Group, University of Salamanca (2018)
16. Durán-Escudero, J., García-Peñalvo, F.J., Therón-Sánchez, R.: An architectural proposal to
explore the data of a private community through visual analytic. In: Dodero, J.M., Ibarra
Sáiz, M.S., Ruiz Rube, I. (eds.) Proceedings of the 5th International Conference on
Technological Ecosystems for Enhancing Multiculturality (TEEM 2017), Cádiz, Spain, 18–
20 October 2017, Article 48. ACM, New York (2017)
17. García-Peñalvo, F.J., Vázquez-Ingelmo, A., García-Holgado, A.: Study of the usability of
the WYRED Ecosystem using heuristic evaluation. In: Zaphiris, P., Ioannou, A. (eds.)
Proceedings of 6th International Conference on Learning and Collaboration Technologies.
Designing Learning Experiences, LCT 2019, Held as Part of the 21st HCI International
Conference, HCII 2019, Orlando, FL, USA, 26–31 July 2019, Part I, pp. 50–63. Springer,
Cham (2019)
18. García-Peñalvo, F.J., Vázquez-Ingelmo, A., García-Holgado, A., Seoane-Pardo, A.M.:
Analyzing the usability of the WYRED Platform with undergraduate students to improve its
features. Univers. Access Inf. Soc. 18(3), 455–468 (2019)
19. WYRED Consortium: WYRED Research Cycle Infographic. WYRED Consortium (2017)
20. Hauptman, A., Soffer, T.: WYRED Delphi Study. Results Report (2017)
21. Hauptman, A., Kearney, N.A., Raban, Y., Soffer, T.: WYRED Second Delphi Study Results
Report (2018)
Blockchain Technology to Support Smart
Learning and Inclusion: Pre-service Teachers
and Software Developers Viewpoints
Abstract. In support of an open ecosystem for lifelong and smart learning, this study evaluates the perceptions of educational stakeholders, namely pre-service teachers and blockchain developers, about the feasibility of blockchain technology in addressing the numerous gaps in the implementation of smart learning environments. The research was designed within the international project Smart Ecosystem for Learning and Inclusion (SELI). A total of 491 pre-service teachers and 3 blockchain developers from the participating countries took part in the study. Data were collected through a questionnaire and interviews, and descriptive statistics and content analysis were performed on the collected data. The results indicate that blockchain technology is little known in the educational field and that its frequency of use is quite low. The pre-service teachers surveyed are, for the most part, unaware of the degree of effectiveness of blockchain technology in education. The blockchain developers are of the opinion that blockchain is still new to many people, that resources for education-based applications are very scarce, and that few of those that exist are open-source.
1 Introduction
2 Literature Review
Nowadays, some universities and institutes have applied blockchain technology in education, and most of them use it to support academic degree management and summative evaluation of learning outcomes [1]. Many applications and developments in the technical industry integrate blockchain technology with the aim of strengthening an open ecosystem for learning: securing collaborative learning environments, protecting learning objects, identifying the necessary technologies and tools, enhancing students' interactions with educational activities, and providing pedagogical support for lifelong learning [2]. A blockchain is thus a public transaction log or ledger, shared by all nodes in the network [16]. Blockcerts eliminates the risk of the falsified certifications available on the market from so many unlicensed issuers [3]. To integrate Blockcerts, MIT used a blockchain wallet, which solves the problem of managing the public and private keys that secure Bitcoin blockchain transactions, although the growing Bitcoin network raises the question of additional fees for stakeholders. Building on the success of Blockcerts, the University of Nicosia became the first higher education institution to distribute academic certificates through the Bitcoin blockchain [4, 5], with Malta the first European country to follow its lead [6].
In the wave of new technological needs in the education system, a decentralized autonomous credit system is an ideal approach to digitizing the sector [7]. EduCTX, a blockchain-based higher education credit platform, was invented to fill this gap, especially in Europe, where the European Credit Transfer and Accumulation System (ECTS) is used as a common academic credit standard [7]. Turkanović, Hölbl, Košič, Heričko and Kamišalić's proposed global blockchain-based higher education credit platform took advantage of ARK [8], an open-source blockchain platform, to build a unified, simplified and globally ubiquitous higher education credit and grading system that supports various stakeholders, including HEIs, in their activities related to students and organizations, and provides a gateway for fraud detection and early prevention. It also enables future employers to track students' academic achievements in a transparent way through a peer-to-peer network and proof of work [9, 10].
Building digital trust in cyberspace is a risk judgement among stakeholders. Blockchain came along to provide a safe environment for many financial institutions, and it is now ready to bring trust to the education sector as well, for example in validating documents such as certificates, course assessments and evaluations of student competencies. Bandara, Ioras and Arraiza [11] argued this point well and proposed a blockchain-secured digital syllabus. Their infrastructure reduces interdependency and allows the digital syllabus to be stored on a public database (the blockchain network) through a hash function before being validated, producing a validated syllabus [11]. The overall process encourages more openness in education and sets a good example with the potential to influence society.
This research was designed within the international project, Smart Ecosystem for
Learning and Inclusion (SELI) [15]. The main objective of this study was to investigate
the conditions related to the integration of Blockchain technology in ICT-supported
learning, teaching and educational inclusion. These goals are primarily diagnostic but
they will also enable comparative analyses of the selected European and Latin
American countries. While conducting the research among pre-service teachers, we
answer the following questions:
How often is blockchain used in the school environment and among pre-service teachers?
360 S. S. Oyelere et al.
4 Results
The pre-service teachers surveyed, for the most part, are unaware of the degree of
effectiveness of blockchain technology in education. In Uruguay and Bolivia, lack of
knowledge of the degree of effectiveness is high; 75.86% in Uruguay and 77.92% in
Bolivia. In Poland, 40.67% of respondents and 45.1% of them in Turkey state that they
do not know the degree of effectiveness of this technology (see Fig. 2).
Considering only the pre-service teachers who have a perception of the degree of effectiveness of blockchain in education, there is a tendency, among approximately one-third of the respondents, to evaluate it as acceptable, with Poland reaching almost half of those respondents (47.19%) rating the technology acceptable. The highest percentages of respondents who consider it of low effectiveness are in Bolivia and Turkey; Turkey has a high percentage of respondents who have used this technology (54.9%) and a correspondingly high percentage who are disenchanted with the experience of using it. Poland is a compelling case: among the group with experience of using this technology, only 14.61% consider it ineffective (see Fig. 3).
Interest in new training was led by Bolivia (51.3%), followed by Uruguay (44.02%), Poland (37.33%) and Turkey (36.7%) (see Fig. 4). In the case of Uruguay and Turkey, neutrals (presumed not to have an interest but open to developing one) make up approximately one-fifth of the respondents: 17.24% in the Uruguayan case and 22.5% in the Turkish case. Bolivia and Poland have the lowest neutral rates.
Regarding blockchain in the education sector, the developers reiterated privacy and security concerns. Such technology allows in-depth verification without dependence on third parties, and the data structure of a blockchain is append-only, so data cannot easily be altered or deleted. In addition, it can establish a token of education: a coin with no monetary value but an educational value, which can be used to subscribe to new courses or to receive job offers.
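The append-only "token of education" idea the developers describe can be sketched as an event log from which balances are derived. All names and fields here are hypothetical illustrations, not the SELI implementation.

```javascript
// Sketch of an append-only log of educational events from which
// "education token" balances are derived, as the developers describe.
// Event names and fields are hypothetical.
const log = [];

// Append-only: events are only ever pushed, never edited or removed.
function record(event) {
  log.push(Object.freeze({ ...event, at: log.length }));
}

// A learner's token balance is a pure function of the log.
function balance(learner) {
  return log.reduce((sum, e) =>
    e.learner === learner ? sum + (e.tokens || 0) : sum, 0);
}

// Spending tokens (e.g. subscribing to a new course) is just another
// appended event with a negative amount; history is never rewritten.
function subscribe(learner, course, cost) {
  if (balance(learner) < cost) throw new Error('insufficient tokens');
  record({ learner, course, tokens: -cost });
}
```

Deriving state from an immutable event history, rather than mutating a balance in place, is the property that makes such records auditable.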
In order to observe their experiences so far in developing blockchain for the SELI project, we asked the developers about their understanding of this project model and what it required of them. The developers described it as challenging at first, becoming clearer day by day as its structure formed. Moreover, each developer has a different approach to the development environment regardless of the project and process: Mateo used Scrum, while Andres chose the NextJS framework to deploy the client part and go-Ethereum to implement the network through RPC and the Web3 JavaScript library. Furthermore, the developers identified a few key things to keep in mind for anyone who wants to be a blockchain developer: knowledge of how blockchain works; tools and languages for deploying a blockchain network, such as go-Ethereum, or Solidity for contracts; and advanced knowledge of nodes, security, and the logic of smart contracts. The developers also shared mixed experiences: for Mateo it was simple, as he focused on connecting the platform with web services such as REST; Alvaro, on the other hand, highlighted challenges, especially communication and the unstructured methodology of building the application with the available technologies, which required experimentation before execution. Overall, the developers said that working on the SELI project provided new experience and skills that can be used to build more educational applications in the future. Moreover, the SELI system offers a very good alternative to platforms like Moodle, since it involves the use of blockchain for the issuance of certificates, which was not available on any other platform at the time.
5 Discussion
Although a growing number of studies exist on blockchain use in education, there is a lack of empirical studies that gather data from actual target users about their use of blockchain. In this study we collected data from target users as well as developers in order to reveal end-user perceptions of blockchain technology. The first research question aimed to explore how often blockchain is used in the school environment and among students of teaching degrees. According to the results, Turkey is an exception, with about half of the respondents having used this technology, while the share of pre-service teachers who have never used blockchain exceeds 75% in Uruguay, Poland, and Bolivia. This shows that although many institutions across the globe have started initiatives to develop blockchain-based solutions that address pedagogical gaps (Grech and Camilleri [17]), a very small percentage of potential users have actually used blockchain in educational settings.
The second research question aimed to explore pre-service teachers’ subjective
evaluation of the blockchain used to support learning, teaching and digital inclusion.
Findings show that the pre-service teachers surveyed are, for the most part, unaware of the degree of effectiveness of blockchain technology in education. In Uruguay and Bolivia, lack of knowledge of the degree of effectiveness is high: 75.86% in Uruguay and 77.92% in Bolivia. In Poland, 40.67% of respondents, and 45.1% in Turkey, state that they do not know the degree of effectiveness of this technology. Considering the research project carried out by researchers from the University of New England, where one of the key problems identified in education was the lack of pedagogical responses to the needs of students [12], it is important to investigate pre-service teachers' understanding of the technology in relation to pedagogy, because today's pre-service teachers are the first generation who can actually use it in their future classes, as the technology is currently mostly limited to university use cases. Unfortunately, the study reveals that only a few pre-service teachers perceive blockchain as useful, despite the many applications and developments in the technical industry that integrate blockchain technology to strengthen an open ecosystem for learning [2]. Perhaps as university use cases multiply, such as Blockcerts at the University of Nicosia, the first higher education institution to distribute academic certificates through the Bitcoin blockchain [4, 5], and in Malta, the first European country to follow its lead [6], pre-service teachers' awareness of the use of blockchain in education may increase. Their awareness could be further enlarged by a blockchain-based higher education credit platform built around the European Credit Transfer and Accumulation System (ECTS) [8], or, similarly, by a blockchain-based approach for connecting learning data across several learning platforms, institutions and organizations, as studied by [20]. In this way, blockchain may have an inevitable influence on teachers' careers, as it enables future employers to track students' academic achievements in a transparent way through a peer-to-peer network and proof of work [10].
The third research question aimed to explore interest in new online trainings focused on the development of blockchain in learning, teaching, development support and digital inclusion. According to the results, half of the respondents (51.3%) in Bolivia showed interest in learning about blockchain for education, followed by Uruguay (44.02%), Poland (37.33%) and Turkey (36.7%). Blockchain is considered a potential technology to support the pedagogy of professional education, such as nursing and health care, through decentralized academic degree management and secure evaluation tools for learning outcomes [1, 14]. However, we can say that users are still not ready to accept the technology. Although some universities and institutes have applied blockchain technology in education, mostly to support academic degree management and summative evaluation of learning outcomes [1], our findings reveal a relatively low level of blockchain technology awareness and use.
Blockchain technology has created a new paradigm in the information society. More applications appear every day, including in the education sector. The use of blockchain in education presents a great opportunity to increase agility and transparency in the academic process. However, the use of this technology for education is at an incipient stage, especially in Latin American countries. This situation is an excellent opportunity to revolutionize the way education services are conceived, in terms of academic information systems, the recording of academic achievements, information security, collaborative learning environments, learning management systems, and the reliability of online education. Analysis of blockchain use among pre-service teachers in three of the four studied countries shows that the use of blockchain is very low. Findings suggest that they are not aware of the effectiveness of the technology; nevertheless, more than one-third of the respondents in all the countries represented are interested in acquiring competencies in the new technology. These findings allow us to state, as a starting point, the following recommendations:
(i) Promote the inclusion of blockchain technology in the different aspects of the education sector.
(ii) Develop a capacity-building plan for teachers to use the available technology, such as the SELI platform, to improve educational experiences.
(iii) Promote and establish synergies between regulatory institutions and the private institutions that provide education services, to promote the implementation of blockchain technology.
(iv) Create a showcase environment of the possible uses of blockchain technology in education.
(v) Promote a legal framework to support and enable the use of blockchain technology in the academic process.
Acknowledgement. This work was supported by the ERANET-LAC project which has
received funding from the European Union’s Seventh Framework Programme. Project Smart
Ecosystem for Learning and Inclusion, ERANet17/ICT-0076SELI.
References
1. Sharples, M., Domingue, J.: The blockchain and kudos: a distributed system for educational
record, reputation and reward. In: European Conference on Technology Enhanced Learning,
pp. 490–496. Springer, Cham (2016)
2. Alammary, A., Alhazmi, S., Almasri, M., Gillani, S.: Blockchain-based applications in
education: a systematic review. Appl. Sci. 9(12), 2400 (2019)
3. Huynh, T.T., Huynh, T.T., Pham, D.K., Ngo, A.K.: Issuing and verifying digital certificates
with blockchain. In: 2018 International Conference on Advanced Technologies for
Communications (ATC), pp. 332–336. IEEE (2018)
4. BlockCerts to be developed in Malta. http://www.educationmalta.org/blockcerts-to-
bedeveloped-in-malta/
5. Sharples, M., Roock, R., Ferguson, R., Gaved, M., Herodotou, C., Koh, E., Kukulska-
Hulme, A., Looi, C.-K., McAndrew, P., Rienties, B., Weller, M., Wong, L.H.: Innovating
pedagogy 2016. Open University innovation report 5 (2016)
6. Case Study Malta Learning Machine. https://www.learningmachine.com/casestudies-malta
366 S. S. Oyelere et al.
7. Li, Y., Liang, X., Zhu, X., Wu, B.: A blockchain-based autonomous credit system. In: 15th
International Conference on e-Business Engineering (ICEBE), pp. 178–186. IEEE (2018)
8. Turkanović, M., Hölbl, M., Košič, K., Heričko, M., Kamišalić, A.: EduCTX: a blockchain-
based higher education credit platform. IEEE Access 6, 5112–5127 (2018)
9. Ark: All-in-One Blockchain Solutions. http://www.ark.io
10. Lizcano, D., Lara, J.A., White, B., Aljawarneh, S.: Blockchain-based approach to create a
model of trust in open and ubiquitous higher education. J. Comput. High. Educ. 32(1), 109–
134 (2019)
11. Bandara, I.B., Ioras, F., Arraiza, M.P.: The emerging trend of blockchain for validating
degree apprenticeship certification in cybersecurity education (2018)
12. Green, N.C., Edwards, H., Wolodko, B., Stewart, C., Brooks, M., Littledyke, R.:
Reconceptualising higher education pedagogy in online learning. Dist. Educ. 31(3), 257–
273 (2010)
13. Jirgensons, M., Kapenieks, J.: Blockchain and the future of digital learning credential
assessment and management. J. Teach. Educ. Sustain. 20(1), 145–156 (2018)
14. Skiba, D.J.: The potential of blockchain in education and health care. Nurs. Educ. Perspect.
38(4), 220–221 (2017)
15. Martins, V., Oyelere, S.S., Tomczyk, L., Barros, G., Akyar, O., Eliseo, M.A., Amato, C.,
Silveira, I.F.: The microsites-based blockchain ecosystem for learning and inclusion. In:
Brazilian Symposium on Computers in Education (SBIE), pp. 229–238 (2019)
16. Oyelere, S.S., Tomczyk, L., Bouali, N., Agbo, F.J.: Blockchain technology and gamification
– conditions and opportunities for education. In: Veteška, J. (ed.) Adult Education –
Transformation in the Era of Digitization and Artificial Intelligence. Andragogy Society,
Prague (2019)
17. Grech, A., Camilleri, A.F.: Blockchain in education. Publications Office of the European
Union, Joint Research Centre (2017)
18. Arenas, R., Fernandez, P.: CredenceLedger: a permissioned blockchain for verifiable
academic credentials. In: IEEE International Conference on Engineering, Technology and
Innovation, pp. 1–6. IEEE (2018)
19. Ocheja, P., Flanagan, B., Ueda, H., Ogata, H.: Managing lifelong learning records through
blockchain. Res. Pract. Technol. Enhanc. Learn. 14(1), 1–19 (2019)
20. Tomczyk, L., Oyelere, S.S., Puentes, A., Sanchez-Castillo, G., Muñoz, D., Simsek, B.,
Akyar, O.Y., Demirhan, G.: Flipped learning, digital storytelling as the new solutions in
adult education and school pedagogy. In: Veteška, J. (ed.) Adult Education – Transformation
in the Era of Digitization and Artificial Intelligence. Czech Andragogy Society, Prague
(2019)
Digital Storytelling in Teacher Education
for Inclusion
1 Introduction
There are various ways of using Digital Storytelling (DST) as an educational tool in the field of education, including pre-school, K-12, higher education and non-formal education. As we discussed in another paper, digital stories can be created both by teachers and by students in formal education [1]. Similar research in the field of higher education therefore provides evidence for the expected results of our study. For example, a study with college students from an Industrial Design program highlighted the benefits of digital storytelling: authentic learning, polished end products, students' engagement with the material, decidedly independent learning, and collaborative practice [2]. In another study, researchers developed a digital storytelling system called Digital Storytelling Teaching System-University (DSTS-U) to help college students quickly create stories
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 367–376, 2020.
https://doi.org/10.1007/978-3-030-45697-9_36
368 Ö. Y. Akyar et al.
with a structural architecture and to enhance the variety of story contents through different story structures. In this study, the researchers see DST not only as useful for skill development but also as providing learning from experience, since it allows listening and sharing together.
Burgess argues that debates about the digital divide, once based on the difficulty of access to ICT, have shifted towards concerns about social inclusion and inequality in access to “voice” [3]. We cannot simply expect disadvantaged groups to achieve inclusion on their own. The institutions that determine educational policy, and the teachers who directly implement that policy, have an important role. In particular, institutions need to ensure that teacher education prepares teachers to work with disadvantaged groups.
Hargreaves and Fullan remind us that teachers not only need knowledge and skills but must also be able to create trust-based relationships with others and to exercise sound judgement [4]. They call the combination of these three capitals professional capital. In order to contribute to the continuing professional development of this multi-faceted capital, the opportunities that new technologies provide for creating participatory and inclusive learning communities can be exploited, taking into consideration today's rapid changes and developments in teacher education. They also state in [4] that, in order to improve teachers and teaching, the conditions in which teachers work and the communities and cultures of which they are part should be improved.
Therefore, it can be foreseen that workshop-based DST may contribute to improving the cultures in which prospective teachers and teachers work, by creating a climate of trust and by enabling the telling and sharing of experiences with the wider ecosystem. Using DST in teacher education can make a positive contribution to prospective teachers' learning and active participation. DST can thus be used as a means of empowering prospective teachers to build stories based on their specific contexts, so that they can reflect on their own experiences and engage in constructive actions for educational transformation. This corresponds to the autobiographical learning described by Rossiter and Garcia [5] as the third use of DST. In particular, as change agents, prospective teachers may be given the opportunity to experience active participation in which the main source of motivation is not the satisfaction of the school principal but directly contributing to students' lives. In this context, DST can provide an empowering resource to enhance the professional capital of the teacher through the sharing of experiences. In this regard, we can say that DST has great potential as an educational empowerment tool supporting the learning of teachers and prospective teachers.
Randall clarifies the strong relationship between story and life by saying that life is never given; it is always partially created, built, and re-created, just like a story [6]. Combining this with John Dewey's statement that “education is life itself”, we can posit that education is an art, that the teacher is a designer and artist, and that he or she should use his or her creativity continuously. Teachers therefore need to be able to produce innovative and creative educational activities in order to create inclusive learning environments that meet the needs of students with different characteristics. As Randall notes, researchers such as John Dixon and Leslie Stratta, who followed John Dewey's philosophy in the field of education, describe narrative as the main action of the mind and telling as “a basic human trait”, an indispensable way of making human experience meaningful [6]. His writing on the poetics of learning provides a very important resource for a researcher interested in the professional development of teachers. He argues that the mentoring approach primarily uses a version of the “story” model. The basic assumption here is that, by sharing personal and public stories, consciousness rises, knowledge is created, community is built, and a perspective with transformative powers is established. We need to highlight that not only the story but also the process of story formation is necessary. Therefore, our basic assumption is that teachers and prospective teachers have an active role in making the education system more inclusive in terms of creating learning opportunities, and that their stories will contribute to improving individuals' quality of life. In addition, digital stories need to be used beyond self-expression and communication; as Hartley reminds us, digital media should be used to create a new target, definition and imagination [7]. It is precisely at this point that the use of DST by physical education teachers and students, in the context of active quality life research, offers the goal of creating a new world design, because education starts with education of the body. Moreover, the majority of physical education and sports activities are based on learning by doing, and each learning process creates a story of its own. It can likewise be said that human life is based on movement, although the cognitive and affective domains cannot be denied. This perspective leads us to serve holistic development, through which the main components of an active and quality life can be achieved.
The Quechua and Aymara peoples build a collective memory through stories of oral tradition. These peoples do not have a written tradition; only in the twentieth century did they begin, with greater intensity, to recover the stories in writing. These stories are considered not as memories but as the history and thought of the people. They are conceived as transmitting the values and teachings that form the worldview and the philosophical, religious, economic, artistic, technological, and political knowledge of an entire culture. These oral stories also shape the social order in the town. A story that is part of the oral tradition is a complex construction of language and does not require writing. The story demands a skilled narrator, who knows the tradition and guarantees the transmission of the most in-depth ideas present in the story. In these stories, the native peoples create a link
between the past and the future. The past is interpreted and chained to the interpretation of current actions (the present) to project into the future. This projection into the future belongs to the people as a whole. For these cultures the story is a circular experience, transforming the past into a present continuum. Tradition and its experiences are always current according to the social life of the people. In the narration, the opening and ending of a story break the temporal boundary between the time when the story happened and the time it is narrated. These two moments have no distance; on the contrary, the events can be going on in the present at the moment of the telling. According to [8], coloniality refers to the unique patriarchal power of western expansion against the original peoples. This coloniality points to the idea of differentiating races: a superior one and an inferior one. Superiority is transferred to all areas, such as knowledge, society, and work. The result is the hegemony of thinking and building from the cultural approach of those who conquered the indigenous peoples. According to [8], “The construction of knowledge is a complex situation that requires the rupture of the dominant culture”. At this point, the traditional storytelling of the Aymara and Quechua peoples, in its conception of collective memory, has maintained a space of rupture with the forms of Western thought. According to [9], the native peoples of Bolivia (Quechua and Aymara, among others) have been victims of non-national and non-democratic states, victims in the sense of the freedom to develop a culture. They seek democratization by creating another state approach that includes their history, which flows as oral narration in their villages, so far closed off in written sources of the western tradition. According to [10], in the 2001 census 62.2% of the Bolivian population declared that they belong to one of the original peoples: Aymara, Quechua, among others. In the 2012 census, referred to in [11], the population identified as original inhabitants was 40.6%, a reduction of nearly 20 points since the 2001 census. Some people did not declare belonging to the original peoples owing to the omission of the “mestizo” option (eliminated because it had a pejorative connotation in Bolivia). The population of Bolivia was approximately 10,896,000 inhabitants in the 2015 Household Survey [12], and 31.5% of Bolivians live in rural areas. The inhabitants of rural areas, for the most part, belong to a native people, and native languages are the most spoken there (46.4% of rural inhabitants declare that they speak Spanish). Quechuas and Aymaras can share their stories more readily in a digital format than through technologies that favor writing. This empowerment will improve the chances that the collective oral memory endures over time. Digital storytelling is the technological help this culture is most likely to accept, because its expressiveness is close to their oral experience and they have a preference for the oral transfer of knowledge and history. Oral narrative supported by DST will promote the re-discovery of roots for people living in Bolivia and alleviate the misunderstanding between indigenous people and city dwellers. It will also help the second generation of rural inhabitants who migrate to cities to know about their roots. The empowerment of digital narration as a mechanism of education and inclusion for the original cultures of Bolivia relies on the PRONTIS Program (Programa Nacional de Telecomunicaciones de Inclusión Social) [13], which in its first stage aims to reduce the digital divide in connectivity.
The SELI (Smart Ecosystem for Learning and Inclusion) project draws on situative and sociocultural perspectives [14–17] to understand teacher learning in a digital-storytelling-embedded learning ecosystem. Instead of conceptualizing learning as changes in an individual's mental structure, we consider “learning by individual in a community as a trajectory of that person's participation in the community a path with a past and present, shaping possibilities for future participation” [16], drawing on [18]. Therefore, we prefer a tool grounded in workshop-based digital storytelling as a process, rather than the various examples that use digital storytelling as a product or tool. Workshop-based digital storytelling practices are used in higher education ecologies as a co-creative process whose six main stages follow the phases defined by Lambert [19] (see Fig. 1):
The class diagram for the Digital Storytelling design (see Fig. 3) shows a Story
composed of a sequence of Scenes. Each Scene has two media-type resources and a text
description to represent the author’s socio-cultural expression in the story. The
concrete implementation uses the Meteor.js framework with Material-UI and React.js as
client-side components. On the server side, the Story and each Story-Scene is a
JavaScript object persisted as a MongoDB compound document.
The classes shown in Fig. 3 represent the design of the Storytelling Tool component in
the server-side Meteor.js implementation. The Story is an activity in the platform; as
an activity, it is part of a Course.
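The Story/Scene structure described above can be sketched as plain JavaScript objects ready to be persisted as a single MongoDB compound document. This is a minimal illustration, not the actual SELI code: the field names (`scenes`, `imageUrl`, `audioUrl`, `published`) are assumptions.

```javascript
// Illustrative sketch of the Story/Scene compound document described in the text.
// Field names are assumptions, not the actual SELI schema.
function makeStory(title, courseId) {
  return {
    title,            // story title shown in the course
    courseId,         // the Story is an activity belonging to a Course
    published: false, // publishing later lets other users view and play the story
    scenes: [],       // ordered sequence of Scenes
  };
}

// Each Scene carries two media resources (an image and a recorded voice) plus a
// text description expressing the author's socio-cultural context.
function addScene(story, imageUrl, audioUrl, description) {
  story.scenes.push({ imageUrl, audioUrl, description });
  return story;
}

const story = addScene(
  makeStory("Discovering Roots", "course-01"),
  "/img/scene1.png", "/audio/scene1.mp3", "My grandmother's village"
);
```

Keeping the scenes embedded in the story object matches the compound-document persistence described above: a story and its scenes are read and written as one unit.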
This implementation does not support the upper story-circle stages (story circle, text,
and in-group screening). These three upper stages are highly collaborative activities
usually carried out face-to-face, while the lower three stages (voice recording,
images, and images-and-voice) can be done by a single student/editor. Recording voice,
uploading images, linking image-voice scenes, and finally publishing the story are
handled by the implemented tool; the publishing action lets other users (students and
teachers) view and play the story. Collaborative screening with feedback is missing in
this naive approach to digital storytelling in the platform, but nothing prevents a
face-to-face meeting for the screening and feedback activities.
The design allows several storytellers to reuse and add stories, each one contributing
to the stories collaboratively in a workshop story circle. An Active Repository
manages persistence and implements the transactional flow behavior (Fig. 5).
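The text does not specify the Active Repository's interface; the following is a minimal sketch under stated assumptions: the class name `StoryRepository` and its `save`/`publish` methods are illustrative, and an in-memory Map stands in for the MongoDB collection used by the real implementation.

```javascript
// Hypothetical sketch of an "Active Repository" that owns persistence and the
// publish transaction for stories. Names and the Map-backed store are
// illustrative assumptions, not the SELI code.
class StoryRepository {
  constructor() {
    this.store = new Map(); // storyId -> story document
  }

  save(id, story) {
    this.store.set(id, { ...story });
    return id;
  }

  // Publishing is treated as a small transaction: the story must exist and
  // have at least one scene before it becomes visible to other users.
  publish(id) {
    const story = this.store.get(id);
    if (!story || story.scenes.length === 0) {
      throw new Error("cannot publish a missing or empty story");
    }
    story.published = true;
    return story;
  }
}

const repo = new StoryRepository();
repo.save("s1", {
  title: "Roots",
  published: false,
  scenes: [{ imageUrl: "a.png", audioUrl: "a.mp3", description: "scene" }],
});
const publishedStory = repo.publish("s1");
```

Centralizing the publish rule in the repository is one way to realize the "transaction flow behavior" the text attributes to the Active Repository.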
374 Ö. Y. Akyar et al.
6 Conclusion
In conclusion, we discussed the use of DST in the educational context and the use of
DST as both a pedagogical strategy and a research method for the training of teachers,
who play a very important role in the education system.
The novelty we add to the literature on the training of prospective teachers is the
creation of story worlds through workshop-based DST. The first story world, which we
call Active Quality Life Research Guidance, allows teachers and learners to share
stories about active life, a relevant learning area of physical education. The second
story world, Discovering Roots, offers digital storytelling that resembles oral
narrative. It will allow Quechua people to share stories about their roots and
cultures, promoting intercultural learning and moving toward the inclusion of their
history and thought in the Bolivian state, alongside many other cultures around the
world.
Secondly, we share an architectural view of DST to explain how we aim to handle the
digital aspect of the DST process together with its conceptualization and intercultural
context. DST has many variants of conceptualization and many technological approaches,
and different conceptualizations lead to different approaches.
The analysis of different approaches, architectures and situations shows that it is
important first to define the conceptualization and then work on the design. A DST
tool is not only interesting but also inspiring for teachers to use in their
classrooms. Researchers who used DST for teaching literacy to students found that
technology can be a game changer in the classroom, as it changed the mood of the
students simply by switching from physical cards to digital ones [21].
This analysis also shows that the conceptualization and cultural approach must be
defined before the design. The SELI project, as a trans-national project, contributes
to this process by exploring distinct approaches to a digital storytelling tool in
order to enhance inclusion in education. A strong communication architecture and
knowledge exchange between the owners of the conceptualization and the developers was
key during the development of the SELI DST tool.
Among the SELI project’s initiatives in the digital storytelling approach to education
and inclusion, the presentation of the approach to pre-service teachers and
Quechua-speaking teachers validates the acceptance and appropriation of DST in both
education and indigenous cultural empowerment.
Acknowledgement. This work was supported by the ERANET-LAC project which has
received funding from the European Union’s Seventh Framework Program. Project Smart
Ecosystem for Learning and Inclusion – ERANet17/ICT-0076SELI.
References
1. Tomczyk, L., Oyelere, S.S., Puentes, A., Sanchez-Castillo, G., Muñoz, D., Simsek, B.,
Akyar, O.Y., Demirhan, G.: Flipped learning, digital storytelling as the new solutions in
adult education and school pedagogy. In: Jaroslav, V. (ed.) Adult Education (2018) –
Transformation in the Era of Digitization and Artificial Intelligence. Česká
andragogická společnost/Czech Andragogy Society, Prague (2019). ISBN 978-80-906894-4-2
2. Barnes, V.: Telling timber tales in higher education: a reflection on my journey with digital
storytelling. J. Pedag. Dev. 5(1), 72–83 (2015)
3. Burgess, J.: Hearing ordinary voices: cultural studies, vernacular creativity and digital
storytelling. Continuum 20(2), 201–214 (2006)
4. Hargreaves, A., Fullan, M.: Professional Capital: Transforming Teaching in Every School.
Teachers College Press, New York (2012)
5. Rossiter, M., Garcia, P.A.: Digital storytelling: a new player on the narrative field. New Dir.
Adult Continuing Educ. 126, 37–48 (2010)
6. Randall, W.: Bizi Biz Yapan Hikayeler. Ayrıntı Yayınları, Istanbul (2014)
7. Hartley, J., McWilliam, K.: Story circle. Wiley-Blackwell, Chichester (2009)
8. Roque, P.: Relato oral en la construccion de saberes y conocimientos de la cultura Aymara.
Escuela Superior de Formacion de Maestros - THEA. http://unefco.minedu.gob.bo/app/
dgfmPortal/file/publicaciones/articulos/ae2465defbefd1aa87d17dd4d146b966.pdf. Accessed
4 Dec 2019
9. Estudios Latinoamericanos: La tradicion oral, estudio comparativo indigena Mexico-Bolivia
(2010). https://cidesespacio.blogspot.com/2010/12/la-tradicion-oral-estudio-comparativo.ht
ml. Accessed 4 Dec 2019
10. Comision Economica para America Latina y el Caribe: Porcentaje de Poblacion Indigena.
https://celade.cepal.org/redatam/PRYESP/SISPPI/Webhelp/helpsispi.htm#porcentaje_de_
poblacionindig.htm. Accessed 3 Dec 2019
11. Centro de Estudios Juridicos e Investigacion Social: Bolivia Censo 2012: Algunas claves
para entender la variable indígena (2013). http://www.cejis.org/bolivia-censo-2012-algunas-
claves-para-entender-la-variable-indigena/. Accessed 3 Dec 2019
12. Instituto Nacional de Estadística, INE: Censo de Poblacion y Vivienda 2012 Bolivia,
Características de la Población (2015). https://www.ine.gob.bo/pdf/Publicaciones/CENSOP
OBLACIONFINAL.pdf. Accessed 30 Nov 2019
13. Ministerio de Obras Públicas, Servicios y Vivienda, PRONTIS, Bolivia: “Plan Estratégico de
telecomunicaciones y TIC de inclusión social 2015–2025” (2014). http://prontis.gob.bo/
infor/PlanEstrategicodelPRONTIS.pdf. Accessed 21 Nov 2019
14. Simsek, B., Usluel, Y.K., Sarıca, H.C., Tekeli, P.: Türkiye’de Egitsel Baglamda Dijital
Hikaye Anlatımı Konusuna Eleştirel Bir Yaklaşım. Egitim Teknolojisi Kuram ve Uygulama
8(1), 158–186 (2018)
15. Greeno, J.G.: Learning in activity. In: Sawyer, R.K. (ed.) The Cambridge Handbook of the
Learning Sciences, pp. 79–96. Cambridge University Press, New York (2006)
16. Greeno, J.G., Gresalfi, M.S.: Opportunities to learn in practice and identity. In: Assessment,
Equity, and Opportunity to Learn, pp. 170–199 (2008)
17. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge
University Press, Cambridge (1991)
18. Kang, H.: Preservice teachers’ learning to plan intellectually challenging tasks. J. Teach.
Educ. 68(1), 55–68 (2017)
19. Lambert, J.: Digital Storytelling: Capturing Lives. Creating Community. Routledge,
Abingdon (2013)
20. Martins, V., Oyelere, S.S., Tomczyk, L., Barros, G., Akyar, O.Y., Eliseo, M.A., Amato, C.,
Silveira, I.F.: The microsites-based blockchain ecosystem for learning and inclusion. In:
Brazilian Symposium on Computers in Education (SBIE), pp. 229–238 (2019). ISSN 2316-
6533. https://br-ie.org/pub/index.php/sbie/article/view/8727. http://dx.doi.org/10.5753/cbie.
sbie.2019.229
21. Flórez-Aristizábal, L., Cano, S., Collazos, C.A., Benavides, F., Moreira, F., Fardoun, H.M.:
Digital transformation to support literacy teaching to deaf Children: from storytelling to
digital interactive storytelling. Telematics Inform. 38, 87–99 (2019)
In Search of Active Life Through Digital
Storytelling: Inclusion in Theory and Practice
for the Physical Education Teachers
1 Introduction
The digital storytelling “movement” has been around for a long time [3], and digital
storytelling workshops have been used in higher education contexts for teaching,
learning and research purposes worldwide. This paper discusses the potential of using
workshop-based digital storytelling for developing an understanding of inclusion in
the context of physical education teacher education. First, we give a brief overview
of the uses of digital storytelling workshops in higher education settings. Then we
provide details about the digital storytelling workshop that we facilitated with
physical education teachers. We then suggest using digital storytelling workshops in
the curriculum of the Sport Sciences undergraduate program that trains physical
education teachers. Here we connect the discussion to the presence of the inclusion
topic in course content, relating it to the program competencies matrix that all
higher education programs in Turkey are supposed to meet in line with the Bologna
process. In doing so, we take a close look at the current courses that might relate to
inclusion issues in particular. In this attempt, we try to draw attention to the
importance of inclusion in practice through the circulation of experiences, in this
case those of in-service physical education students.
Digital storytelling is used as an educational tool in various fields of education
including pre-school, K-12, higher education and non-formal education. Digital stories
can be created both by teachers and by students in formal education using various
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 377–386, 2020.
https://doi.org/10.1007/978-3-030-45697-9_37
378 B. Şimşek and Ö. Y. Akyar
This paper is grounded in an interdisciplinary approach, as its researchers come from
social sciences and education sciences backgrounds. We take “learning by individual in
a community as a trajectory of that person’s participation in the community—a path
with a past and present, shaping possibilities for future participation” [1].
Therefore, we value the digital storytelling workshop process as an opportunity for
informal learning from one another’s experiences and frames of reference, as well as
the digital stories that come out of this co-creative process.
Workshop-based digital storytelling practices are used in higher education ecologies
as a co-creative process in which the six main stages of the workshop process are
practiced and tailored according to the theme and purpose of the practice [17]. The
digital storytelling workshop is facilitated by a trained facilitator team with
participants willing to share their experiences and produce a digital story from these
first-person narratives (Fig. 1).
In Search of Active Life Through Digital Storytelling 379
Fig. 1. The digital story circle: the phases of a digital storytelling workshop [14].
A digital storytelling workshop starts with the story circle, a dialogic phase in
which the participants share their stories in a setting facilitated by trained and
experienced facilitators, who open up the circle by sharing their own stories. In the
story circle, the participants are encouraged to tell a fragment of a personal
experience that will later be turned into a digital story, forming its foundations. In
this dialogic stage, no digital aspect is mentioned; rather, archaic storytelling
practices are called in and practiced by the participants through the facilitation
process. Inclusion is at the core of digital storytelling workshops, and the
facilitators are the moderators of the equal-say principle. The role of the
facilitators here is crucial, as some voices are more dominant and willing to speak,
whereas others cannot create an opportunity to share their stories and need
encouragement to share their ideas. This core element in the facilitation process of
workshop-based digital storytelling provides the grounds for using the practice both
for exercising inclusion and for sharing stories and learning from each other’s
experiences and lives. The exercise also gives us the opportunity to listen to various
experiences reflected in various forms of narrative, rather than fitted into one form.
Once such a circle of listening and sharing is formed, creating an individual digital
story through the digitalization stages becomes an experience of collaboration, in
which trust and understanding are cultivated, rather than competition. The
digitalization of each individual story through the technical phases of the workshop
also provides the facilitators the ground to encourage the participants to collaborate
and share feedback on the digital stories of fellow participants. Then, during the
in-group screening process, the final stage of a workshop, the story circle is
completed by watching the final versions of the individual digital stories and sharing
thoughts and ideas about each other’s digital stories. This process also contributes
to building up a public conversation ecology. Digital storytelling workshops also
provide social scientists with a rich data set that can be collected with multiple
methodologies.
Digital storytelling workshops have been used by the Hacettepe University Faculty of
Communication Digital Storytelling Unit in its projects and courses. Through the
research linked to digital storytelling workshops, the unit has been connecting the
academic sphere with various communities on and off campus, such as LGBTI communities,
refugees, and NGOs focusing on gender equality and violence. Interdisciplinary
collaboration has arisen through the Digital Storytelling MA course, which also serves
as the facilitator training module for interested parties. With such a combination,
the Unit connects the theory behind digital storytelling for inclusion with
communities, in other words the field. Close collaboration with educational sciences
started with a PhD thesis recently completed by Çıralı, titled “Teachers’ Professional
Self-understanding and its reception by prospective teachers through digital
storytelling”. In this thesis, Çıralı focused on the reflections of teachers’
professional self-understanding and the reception of these reflections by prospective
teachers [18]. It is important to point out that sharing experiences empowers the
participants of digital storytelling workshops, as they get the chance to reflect on
their own personal experiences and to listen to others in a setting where the aim is
to complete the task of creating a digital story. Digital storytelling workshops are
processes in which the participants are active members of a small community of
practice for a day or two, depending on the length of the workshop. This brings us to
the connection between inclusion and being active members of a community.
This study is part of the ERANET-LAC project titled “Smart Ecosystem for Learning and
Inclusion” funded by the European Union. The project lays emphasis on digital
exclusion and the inaccessibility of education for disadvantaged groups. These
challenges offer the potential for improving the digital competences of teachers in
the LAC and EU regions, and for extending participation, through ICT-based education,
training and inclusion, to citizens who have relatively poor access to innovative
technologies. The current research aims at empowering physical education teachers
through workshop-based digital storytelling, one of the inclusive approaches
identified in the project. In the Digital Storytelling Unit at Hacettepe University in
October 2019, seven participants joined a digital storytelling workshop whose
facilitation team was formed of Burcu Şimşek, Özgür Yaşar Akyar, Şengül İnce and Çağrı
Çakın. The participants were physical education and sports pre-service and in-service
teachers interested in taking part in the digital storytelling workshop: three
pre-service teachers (Göktuğ, Reyhan, Zeynep), two Ph.D. students (Emre, Nehir), one
graduate student (Eren) studying in the Physical Education and Sports Teaching MA
Program, and one experienced physical education teacher (Evren). The participants
joined the workshop after filling in participation consent forms.
This digital storytelling workshop was also the first facilitation experience of Özgür
Yaşar Akyar (the second author of this paper), who took the Digital Storytelling
course given by Burcu Şimşek (the first author of this paper) in the Communication
Sciences MA Program. The idea for this digital storytelling workshop was developed by
Akyar, who holds a master’s degree in Sports Technology and continues his Ph.D. studies as a
1 https://vimeo.com/376768915.
2 https://vimeo.com/376770075.
3 https://vimeo.com/376770577.
4 https://vimeo.com/376773905.
5 https://vimeo.com/376772917.
“I believe a hectic life reduces the quality. For this reason, I always try to be
prepared for the places where I’m going. We can also call it the responsibility of being
an athlete.”(Zeynep)6.
3.5 Self-determination
In the stories of Evren, Nehir and Reyhan, we realised that the distance to
self-determination varied depending on the challenges they faced.
“I had lost my independence like a child whose toys were taken away. The com-
ments on my profession like “You are a Physical Education teacher, you are always
moving, you are playing sports every day” did not reflect the truth.”(Evren).
“For me, active life means satisfying and clearing my mind, and participating in
environments where I feel free.”(Nehir)7.
“When I was at High School, I started to have problems with my back pain, and it
caused sensitivity particularly in my back. It also had a negative effect on my waist
flexibility.
Right now, I do Pilates for myself, and I’ve been teaching and included it in my
life.
I had difficulties at first, but in time, I started to feel positive effects on my body.
My Scoliosis levels improved.” (Reyhan).
3.6 Rights
Emre’s story pointed strongly to the rights and the conditions of living together. In
other words, Emre defined active life as being a responsible citizen, taking action once
faced with a public issue. In this respect, Emre’s story contributed to our argument
that active life cannot be defined only as being physically active, but also has close
connections to social aspects.
“When I go jogging in the park, I immediately report the problems I see around me
to the Hello Blue Desk. Now the Blue Desk recognizes me, and they answer my phone
as “Emre Bey”. When I walk down a street and see a blown water valve or some
6 https://vimeo.com/376773905.
7 https://vimeo.com/376771327.
garbage, and when you report these, it is active citizenship, active life. Obviously, it is
our duty.”(Emre).
“In my opinion, active life equals active citizenship.”(Emre).
The United Nations Sustainable Development Goals10 aim to provide member states a frame
for improving the lives of their citizens. Goal 3 - Good health and well-being, goal
4 - Quality education, goal 5 - Gender equality and goal 10 - Reduced inequalities are
directly connected to the wider issue of inclusion.
Once we focus on the meanings of the active life that are provided to us by our digital
storytellers in our Active Living Digital Storytelling Workshop, we realize that the
8 https://vimeo.com/376770986.
9 https://vimeo.com/376773905.
10 https://sustainabledevelopment.un.org/?menu=1300.
gender inequalities might affect women’s engagement with being physically active for
themselves, as their lives might be occupied with duties delegated to them due to sex
roles. Quality education seems to be the other important connection, as all of our
participants see personal development as an important part of their active life.
Overall, good health and well-being in relation to active life is not seen only as
being physically well but also as being well emotionally. Here it is important to note
that well-being is not a personal matter but a social one. Şimşek [19] points to
women’s well-being in relation to political participation and social inclusion. In our
case, being a responsible and responsive citizen is one of the significant meanings of
an active life.
This research on active life can help physical education teacher education program
developers design inclusive learning settings, drawing inspiration from the diverse
views of pre-service and in-service teachers. When we closely examined the Sport
Sciences Undergraduate Program at Hacettepe University, of which most of our
participants are either graduates or current students, we found that the courses that
can contribute to students’ understanding of inclusion, in terms of their contribution
to the National Qualifications Framework of the Higher Education Council, which
regulates higher education programs according to learning outcomes, are not directly
about inclusion but about communication, social competences, work-related competences
and learning competences. The courses that seem to contribute to inclusion in the
program are: Introduction to Education, Instructional Principles and Methods, Training
Theory, Outdoor Sports, Drama, Critical Thinking, School Experience, Teaching Practice
and Community Service. However, in none of these courses are digital storytelling
workshops used as a tool to ignite an inclusive ecology for sharing learning and
teaching experiences that shortens the distance between the parties of education. As
Şimşek et al. pointed out in an earlier study, there needs to be more critical
research on educational sciences in relation to digital storytelling, where
communication sciences can provide alternatives [5].
Acknowledgement. This work was supported by the ERANET-LAC project which has
received funding from the European Union’s Seventh Framework Program. Project Smart
Ecosystem for Learning and Inclusion – ERANet17/ICT-0076SELI.
References
1. Greeno, J.G., Gresalfi, M.S.: Opportunities to learn in practice and identity. In: Assessment,
Equity, and Opportunity to Learn, pp. 170–199 (2008)
2. Gu, X., Chang, M., Solmon, M.A.: Physical activity, physical fitness, and health-related
quality of life in school-aged children. J. Teach. Phys. Educ. 35(2), 117–126 (2016)
3. Hartley, J., McWilliam, K.: Story Circle. Wiley-Blackwell, Chichester (2009)
4. Heesch, K.C., van Gellecum, Y.R., Burton, N.W., van Uffelen, J.G., Brown, W.J.: Physical
activity and quality of life in older women with a history of depressive symptoms. Prev.
Med. 91, 299–305 (2016)
5. Şimşek, B., Usluel, Y.K., Sarıca, H.Ç., Tekeli, P.: Türkiye’de eğitsel bağlamda dijital hikâye
anlatımı konusuna eleştirel bir yaklaşım. Eğitim Teknolojisi Kuram ve Uygulama 8(1), 158–
186 (2018)
6. Kotluk, N., Kocakaya, S.: Researching and evaluating digital storytelling as a distance
education tool in physics instruction: an application with pre-service physics teachers.
Turkish Online J. Distance Educ. 17(1), 87–99 (2016)
7. Lok, N., Lok, S., Canbaz, M.: The effect of physical activity on depressive symptoms and
quality of life among elderly nursing home residents: Randomized controlled trial. Arch.
Gerontol. Geriatr. 70, 92–98 (2017)
8. MoNE program. http://mufredat.meb.gov.tr/Dosyalar/2018120201950145-
BEDENEGITIMIVESPOROGRETIMPROGRAM2018.pdf. Accessed 5 Dec 2019
9. Rödjer, L., Jonsdottir, I.H., Börjesson, M.: Physical activity on prescription (PAP): self-
reported physical activity and quality of life in a Swedish primary care population, 2-year
follow-up. Scand. J. Primary Health Care 34(4), 443–452 (2016)
10. Schalock, R.L., Verdugo, M.A., Gomez, L.E., Reinders, H.S.: Moving us toward a theory of
individual quality of life. Am. J. Intellect. Dev. Disabil. 121(1), 1–12 (2016)
11. Şimşek, B.: Enchancing women’s participation in Turkey through digital storytelling.
J. Cult. Sci. 5(2), 28–46 (2012)
12. Jamissen, G., Hardy, P., Nordkvelle, Y., Pleasants, H.: Digital Storytelling in Higher
Education - International Perspectives. Springer, Cham (2017)
13. Schalock, R.L., Verdugo, M.A.: Handbook on Quality of Life for Human Service
Practitioners. American Association on Mental Retardation, Washington, DC (2002)
14. Şimşek, B.: Hikâye anlattıran, Hikâyemi Anlatan, Kendi Hikâyesini Yaratan Çember. In:
Ergül, H. (der.) Sahanın Sesleri. İstanbul Bilgi Üniversitesi Yayınları, İstanbul (2013)
15. Şimşek, B.: İletişim çalışmaları bağlamında dijital hikâye anlatımı: Kavramlar ve Türkiye
deneyimi. Alternatif Bilişim, İstanbul (2018)
16. Vescio, V., Ross, D., Adams, A.: A review of research on the impact of professional learning
communities on teaching practices and student learning. Teach. Teach. Educ. 24, 80–91
(2008)
17. Lambert, J.: Digital Storytelling: Capturing Lives. Creating Community. Routledge,
Abingdon (2013)
18. Çıralı Sarıca, H.: Öğretmenlerin Dijital Hikâye Anlatımı Üzerinden Mesleki Kendini
Anlayışları ve Öğretmen Adaylarınca Alımlanması (2019). http://openaccess.hacettepe.edu.
tr:8080/xmlui/handle/11655/8043. Accessed 5 Dec 2019
19. Şimşek, B.: Digital storytelling for women’s well-being in Turkey. In: Dunford, M., Jenkins,
T. (eds.) Digital Storytelling: Form and Content. Palgrave Macmillan, London (2018)
Accessibility Recommendations for Open
Educational Resources for People
with Learning Disabilities
1 Introduction
From the earliest years of life, the human being acquires knowledge through learning.
According to [1], learning is a necessary and universal process for the development of
culturally organized and particularly human psychological functions. Regarding formal
education, it has to be pointed out that access to learning is a right for all, regardless of
ones disabilities.
On the other hand, learning disabilities are related to significant difficulties in the
acquisition and use of writing, speaking, listening, reading and mathematical problem
solving skills [2, 3]. Despite concerns about improving the theoretical foundation and
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 387–396, 2020.
https://doi.org/10.1007/978-3-030-45697-9_38
388 V. F. Martins et al.
attempts to increase the quality of teacher education, there are still high rates of
unattended children with learning disabilities.
Many children present specific learning disabilities, such as dyslexia, dysgraphia
and dyscalculia. Research by the National Center for Education Statistics (NCES) in
the US indicates that 34% of students aged 3 to 21 who receive special education
services have a specific learning disability [4]. Schools have the mission of bringing
knowledge to each child, each with a unique cognitive and genetic profile, maximizing
their skills and knowledge. Thus, children with learning disabilities should receive
attention that minimizes their disabilities. Therefore, using Universal Design for
Learning [5] combined with Information and Communication Technology seems to be a way
to address the exclusion of people with disabilities.
The different modes of learning show that students have specific needs if learning is
to be effective. Universal Design for Learning intends to make the school curriculum
flexible enough to meet the specific learning needs of students, i.e. their skills and
knowledge, as well as their experiences. It prioritizes a set of principles intended
to provide students with the same opportunities to learn while focusing on individual
differences in skills, needs and interests [5]. With technology as an ally, it sets
out to adopt the most efficient and appropriate materials and methods to reach all
students. Combining different media in content transmission supports the development
of flexible learning content that can meet students’ different learning needs.
Besides, the adoption of Open Educational Resources (OER) brings a whole new scenario
of possibilities for adapting existing content to meet specific requirements [6]. By
relying on open licenses and formats, OER makes it possible to reduce adoption costs
and makes the process of designing and delivering courses that comply with specific
accessibility needs more feasible.
In this context, the aim of this paper is to present recommendations for the
construction of accessible OER for people with learning disabilities, aimed at
teachers with or without previous ICT knowledge. When designing accessible OER, it is
important to know the students’ profiles and to establish the limitations arising from
their learning difficulties.
This paper is structured as follows. Section 2 provides the background necessary for
understanding this paper’s context: developmental disorders, universal design for
learning, accessibility guidelines and related work. Section 3 presents the materials
and methods. Section 4 provides the accessibility recommendations. Finally, in
Sect. 5, conclusions are drawn.
2 Background
2.1 Learning Disabilities
Learning disabilities are relatively common conditions and refer to a heterogeneous group of disorders that manifest as significant difficulties in the acquisition and use of writing, speaking, listening, reading and mathematical problem-solving skills [2, 3]. According to the International Classification of Diseases (ICD), learning disability is
Accessibility Recommendations for Open Educational Resources 389
interferes with the level of compliance achieved by the website [17]. Priority levels are numbered from 1 to 3: Priority 1 requirements must be met, otherwise it will be impossible for one or more groups to access web content; Priority 2 requirements should be met, otherwise some groups will have difficulty accessing the content; and Priority 3 requirements may be met, making it easier for some groups to gain access [16].
WCAG 1.0 was updated in 2008, resulting in the publication of WCAG 2.0, which complements it and was designed to apply broadly to Web technologies. These guidelines are divided into four main topics: perceivable (information and interface components must be presented so that users can perceive them); operable (interface and navigation components must be operable); understandable (information and the use of the interface must be understandable); and robust (content must be robust enough to be interpreted by a wide variety of users) [18].
The last W3C accessibility guidelines update took place in 2018, with the publication of WCAG 2.1, which likewise does not nullify WCAG 2.0 but complements it. WCAG 2.1 aims to improve accessibility guidance for three major groups: users with cognitive or learning disabilities, users with low vision, and users with disabilities using mobile devices [19].
From the UDL principles, the W3C guidelines, the OER recommendations, the features of people with learning disabilities and the authors' expertise in building accessible material, we propose some recommendations for educators as authors of accessible OER. A methodological cut was made, taking as the target audience for the OER only people with some learning disability or barrier.
We then generated a list of recommendations for OER authors addressing this audience, regarding the care they should take when creating or using text, video, images, sounds and other resources. This list of recommendations was created from studies and from the authors' know-how in producing accessible material. These resources should be used with the support of an authoring tool for generating accessible teaching courses.
The concept of UDL is closely associated with the use of technology; however, UDL is not just the use of technology in education [32]. It is also about the pedagogical and instructional practices used with students with or without disabilities.
Thus, to build accessible OER, we can think of two complementary scenarios: the use of technologies that provide facilitators for students (such as screen readers, increased font size, calculators, speech recognition, speech synthesizers, etc.) and the instructional and pedagogical practices that teachers should consider to meet the conditions of their students. As a background, all aspects of openness brought by OER must be considered in the authoring process [31].
The work presented by [33] already pointed to technologies that could help people with learning difficulties. The author cites, for example, word processing, spell checking, proofreading programs and speech recognition to minimize written-language problems; speech synthesis and optical character recognition systems for reading problems; personal data managers and free-form databases for organization and memory issues; and talking calculators for math problems.
Table 1 presents a summary of the main difficulties presented by people with
learning disabilities and how this is minimized by technological and/or pedagogical
resources, based on [32, 33] and in the authors’ practice in developing accessible digital
material.
Table 1. Relationship between the difficulties presented by people with learning difficulties and the computational resources (source: authors)

Difficulties – Technological and pedagogical resources
Reading – screen reader; small and simpler text; use auxiliary vocabulary; do not use abbreviations
Writing – typed text; spell checker
Calculating – calculator; numeric ruler
Attention – use more than one media resource (image, video, text, sound); use feedback frequently
Time planning – remove time limits from activities, or give more time to do them
Long- or short-term memory – videos/images/links
Organization – index of contents
Cognitive problems – use of alternative texts in images and links; use of simpler texts; use of videos and other multimedia resources to complement the understanding of texts; tips and glossary for less common words
Some of the features presented in Table 1 may be provided through digital technologies made available to students; others are strategies to be implemented by the authors of teaching materials. To guide educators in building accessible OER for people with learning disabilities, the following recommendations were generated, divided into general content, non-textual content (video, image, animation, audio) and exercises/activities. These recommendations can be built into an authoring tool for creating accessible digital material.
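To make the mapping concrete, an authoring tool could encode Table 1 as a simple lookup from a student's difficulties to suggested resources. The sketch below is purely illustrative; the object keys and the `suggestResources` helper are our assumptions, not part of any SELI tooling.

```javascript
// Hypothetical encoding of Table 1 as a lookup an authoring tool could
// use to suggest resources; keys and values follow the table above.
const difficultyResources = {
  reading: ["screen reader", "short and simple text", "auxiliary vocabulary", "no abbreviations"],
  writing: ["typed text", "spell checker"],
  calculating: ["calculator", "numeric ruler"],
  attention: ["multiple media resources", "frequent feedback"],
  timePlanning: ["no time limits", "extra time"],
  memory: ["videos", "images", "links"],
  organization: ["index of contents"],
  cognitive: ["alternative text", "simpler texts", "multimedia", "tips and glossary"],
};

// Suggest resources for a student profile given as a list of difficulties.
function suggestResources(difficulties) {
  return difficulties.flatMap((d) => difficultyResources[d] ?? []);
}
```

For a profile with several difficulties, the union of the suggested resources can then be offered to the material's author.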
394 V. F. Martins et al.
• Use words from the students' daily vocabulary. If you need to use unusual words, create a glossary with their meanings.
• Do not use color, sound or shape as the sole resource for understanding content or for giving feedback.
• Avoid using text in images unless it is essential (examples: trademarks and logos) or can be customized by the user.
• Do not insert animations of more than 5 s unless they are essential.
• Create an index of content that will be displayed.
• Avoid using abbreviations.
• Maintain a consistent pattern for the objects that make up the material, such as titles, content, feedback and image descriptions.
• Contents (textual, video, sound, etc.) should not be too long: use less content than you would for typical students, and make it more direct and clear.
• If possible, create materials at different levels of depth: use the shallowest level to present the context, and a deeper level, such as a “read more”, for details.
• Use images, graphics and videos to help in understanding the content.
• Provide information about the meaning of the content, its purpose (what it is for), and an accessible description of it.
• It is desirable that videos have subtitles in the viewers’ native language.
• If the video has no subtitles, subtitling software can be used, such as Movavi Clips (https://www.movavi.com/), Wave.video (https://wave.video/), InShot (https://inshoteditor.br.uptodown.com/android) or Clipomatic (https://www.apalon.com/clipomatic.html).
• Images should not have many visual elements, so as not to confuse the reader.
• Avoid long audio content that mixes too many different pieces of information.
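Some of the recommendations above are mechanically checkable, so an authoring tool could flag violations while the author works. The following sketch is a hypothetical illustration covering only two rules (animation length and abbreviations); the `checkContent` function, its field names and its rule set are assumptions, not an existing tool.

```javascript
// Hypothetical content checker for two recommendations: animations
// longer than 5 s that are not essential, and common abbreviations.
const ABBREVIATIONS = /\b(e\.g\.|i\.e\.|etc\.)/;

function checkContent(item) {
  const issues = [];
  if (item.type === "animation" && item.durationSeconds > 5 && !item.essential) {
    issues.push("animation longer than 5 s and not essential");
  }
  if (item.type === "text" && ABBREVIATIONS.test(item.body)) {
    issues.push("avoid abbreviations; spell words out or add a glossary entry");
  }
  return issues;
}
```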
4.3 Exercises/Activities
• Establish different difficulty levels. Start with the least complex exercises/activities.
• Give feedback on the response to exercises/activities.
• If the exercise/assessment has a time limit, the teacher may grant extra time or disable the time limit.
5 Conclusion
This paper has presented recommendations for creating accessible OER for people with learning disabilities. These recommendations were based on the W3C guidelines, the Universal Design for Learning principles, the openness principles and also on the authors'
Acknowledgment. This work was supported by the ERANET-LAC project which has received
funding from the European Union’s Seventh Framework Programme. Project Smart Ecosystem
for Learning and Inclusion – ERANet17/ICT-0076SELI. The work was also supported by the
Coordenação de Aperfeiçoamento de Pessoal de nível superior - Brazil (CAPES) - Programa de
Excelência - Proex 1133/2019 and Fundação de Amparo à Pesquisa do Estado de São Paulo
(FAPESP) 2018/04085-4.
References
1. Vygotsky, L.S., Luria, A.R., Leontiev, A.: Linguagem, desenvolvimento e aprendizagem
[Language, Development and Learning]. Ícone, São Paulo (1991)
2. National Joint Committee on Learning Disabilities: Operationalizing the NJCLD definition
of learning disabilities for ongoing assessment in schools. Learn. Disabil. Q. 21, 186–193
(1998)
3. Lagae, L.: Learning disabilities: definitions, epidemiology, diagnosis, and intervention
strategies. Pediatr. Clin. North Am. 55(6), 1259–1268 (2008)
4. National Center for Education Statistics (NCES). https://nces.ed.gov/programs/coe/
indicator_cgg.asp. Accessed 21 Nov 2019
5. Rose, D.: Universal design for learning. J. Spec. Educ. Technol. 15(3), 45–49 (2000)
6. Baldiris, N., Margarita, S., et al.: A technological infrastructure to create, publish and
recommend accessible open educational resources. Revista Observatório 4(3), 239–282
(2018)
7. National Collaborating Centre for Mental Health UK: Challenging behaviour and learning
disabilities: prevention and interventions for people with learning disabilities whose
behaviour challenges (2015)
8. American Psychiatric Association: Diagnostic and statistical manual of mental disorders.
BMC Med. 17, 133–137 (2013)
9. García, T., Rodríguez, C., et al.: Executive functioning in children and adolescents with
attention deficit hyperactivity disorder and reading disabilities. Int. J. Psychol. Psychol. Ther.
13(2), 179–194 (2013)
10. Moreau, D., Waldie, K.E.: Developmental learning disorders: from generic interventions to
individualized remediation. Front. Psychol. 6, 2053 (2016)
11. Davis, S., Laroche, S.: Mitogen-activated protein kinase/extracellular regulated kinase
signalling and memory stabilization: a review. Genes, Brain Behav. 5, 61–72 (2006)
12. Samuels, I.S., Saitta, S.C., Landreth, G.E.: MAP’ing CNS development and cognition: an
ERK some process. Neuron 61(2), 160–167 (2009)
13. González-Valenzuela, M.J., Soriano-Ferrer, M., Delgado-Ríos, M.: How are reading disabilities operationalized in Spain? A study of practicing school psychologists. J. Child Dev. Disord. 2, 3 (2016)
14. Mace, R.L.: Universal design in housing. Assist. Technol. 10(1), 21–28 (1998)
15. Alnahdi, G.: Assistive technology in special education and the universal design for learning.
Turk. Online J. Educ. Technol.-TOJET 13(2), 18–23 (2014)
16. W3C: Web content accessibility guidelines 1.0 (1999). https://www.w3.org/TR/WAI-
WEBCONTENT/. Accessed 11 Oct 2019
17. Rocha, J.A.P., Duarte, A.B.S.: Diretrizes de acessibilidade web: um estudo comparativo
entre as WCAG 2.0 e o e-MAG 3.0. Inclusão Soc. 5(2), 75–78 (2012)
18. W3C: Web content accessibility guidelines 2.0 (2008). https://www.w3.org/TR/WCAG20/.
Accessed 11 Oct 2019
19. W3C: Web content accessibility guidelines 2.1 (2018). https://www.w3.org/TR/WCAG21//.
Accessed 11 Oct 2019
20. Hall, T.E., et al.: Addressing learning disabilities with UDL and technology: Strategic
reader. Learn. Disabil. Q. 38(2), 72–83 (2015)
21. Kuzmanovic, J., Labrovic, A.J., Nikodijevic, A.: Designing e-learning environment based on
student preferences: conjoint analysis approach. Int. J. Cognit. Res. Sci. Eng. Educ.
(IJCRSEE) 7(3), 37–47 (2019)
22. Hollingshead, A.: Designing engaging online environments: universal design for learning
principles. In: Cultivating Diverse Online Classrooms Through Effective Instructional
Design, pp. 280–298. IGI Global (2018)
23. Martin, N., et al.: Implementing inclusive teaching and learning in UK higher education–
Utilising Universal Design for Learning (UDL) as a route to excellence (2019)
24. Courtad, C.A.: Making your classroom smart: universal design for learning and technology.
In: Smart Education and e-Learning, pp. 501–510. Springer, Singapore (2019)
25. McKeown, C., McKeown, J.: Accessibility in online courses: understanding the deaf learner.
TechTrends 63, 506–513 (2019)
26. Menke, K., Beckmann, J., Weber, P.: Universal design for learning in augmented and virtual
reality trainings. In: Universal Access Through Inclusive Instructional Design: International
Perspectives on UDL, p. 294 (2019)
27. Armstrong, A.M., Franetovic, M.: UX and instructional design guidelines for m-learning. In:
Society for Information Technology & Teacher Education International Conference.
Association for the Advancement of Computing in Education (AACE) (2019)
28. Hockings, C., Brett, P., Terentjevs, M.: Making a difference—inclusive learning and
teaching in higher education through open educational resources. Distance Educ. 33(2), 237–
252 (2012)
29. Teixeira, A., et al.: Inclusive open educational practices: how the use and reuse of OER can
support virtual higher education for all. Eur. J. Open Distance E-Learn. 16(2) (2013). https://
www.eurodl.org/?p=special&sp=articles&inum=5&abstract=632&article=632
30. Navarrete, R., Luján-Mora, S.: Improving OER websites for learners with disabilities. In:
Proceedings of the 13th Web for All Conference. ACM (2016)
31. Silveira, I.F.: OER and MOOC: the need for openness. Issues Inf. Sci. Inf. Technol. 13, 209–
223 (2016)
32. King-Sears, M.: Universal design for learning: technology and pedagogy. Learn. Disabil. Q.
32(4), 199–201 (2009)
33. Raskind, M.H.: Assistive technology for adults with learning disabilities: a rationale for use.
In: Gerger, P.J., Reiff, H.B. (eds.) Learning Disabilities: Persisting Problems and Evolving
Issues, pp. 152–162. Andover Medical Publishers, Boston (1994)
Digital Storytelling and Blockchain
as Pedagogy and Technology to Support
the Development of an Inclusive Smart
Learning Ecosystem
1 Introduction
In recent times, there has been massive interest in revamping the educational environment to be open, accessible, trustworthy, and able to meet the expectations of all stakeholders, including teachers, students, parents, regions, and governments. These growing requests led to the birth of a joint project, Smart Ecosystem for Learning and
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 397–408, 2020.
https://doi.org/10.1007/978-3-030-45697-9_39
398 S. S. Oyelere et al.
Inclusion (SELI, seliproject.org) supported by the European Union, Latin America and
Caribbean [1, 16]. SELI addresses the crucial gap in 21st-century educational goals through the design science research framework [2]: identifying the needs and requirements of different regions; outlining the learning ecosystem and defining its requirements; designing and developing the ecosystem; and, finally, validating and evaluating the solution. Emerging pedagogies, methods, strategies and technologies capable of supporting the seamless implementation of the learning ecosystem were identified and developed according to universal accessibility standards [3–5]. The main aspects of the SELI ecosystem include: authoring services, microsites (a small cluster of web
pages that presents the course with all didactic contents, independent of the authoring
tool), learning management system (LMS) and content management system
(CMS) services, digital storytelling pedagogy, learning analytics services, and block-
chain support. An ecosystem can be defined as “a community of organisms in con-
junction with environmental components interacting as a (semi-) closed system” [6].
Briscoe and De Wilde [7] define a digital ecosystem as “an artificial system that aims to
harness the dynamics that underlie the complex and diverse adaptations of living
organisms in biological ecosystems”. Boley and Chang [8] offered the following definition: “an open, loosely coupled, domain clustered, demand-driven, self-organizing agent environment, where each agent of each species is proactive and
responsive regarding its own benefit/profit but is also responsible to its system.” Many
researchers have been developing digital ecosystems-based solutions to address dif-
ferent problems in our society. As an example, Mendoza et al. [9] developed a digital
ecosystem for the digital literacy gap. Before them, Silveira et al. [6] proposed LATIn,
a digital ecosystem for open textbooks in the context of Latin American Higher
Education. More recently, Burns and Dolan [10] proposed a set of policies, platforms,
and systems as an ecosystem to help to include people as participants of the so-called
“digital economy”. The SELI ecosystem, in turn, addresses the problem of inclusive education using new technologies and pedagogy.
achieved. This means that, when we think of an inclusive learning environment, we are
intrinsically thinking of a learning environment designed for diversity. According to
Bourke et al. [19], traditional diversity is defined by gender, race, nationality, age, and
demographic differences, but from a new perspective, diversity is defined in a broader
context, including concepts of “diversity of thought” also addressing people with
autism and other cognitive differences.
In a collaborative, diverse environment, inclusion can then be defined as follows: the individual is treated as an insider while also being allowed and encouraged to retain uniqueness within the work group. Inside the SELI Project, we see the learning environment as an “ecosystem”; in other words, an ecosystem is the union of individuals or services with an environment in which different interactions occur. From a technical perspective, Blockchain is the platform that supports these interactions so that they occur in a transparent and secure way. From a social perspective, Blockchain is the environment that allows inclusion while preserving individuality (without intermediaries). Therefore, inside the SELI project, using blockchain as an inclusive digital ecosystem is seen from two perspectives [11]: (1) from an infrastructure perspective, Blockchain is a useful tool to support
the ecosystem of services (for example, content authoring tools services, LMS, CMS,
recommendation services, learning analytics services). Blockchain provides a distributed platform, with transactions between these services under secure identities. This is the more traditional use of Blockchain, as a secure environment for transactions. Several projects in education follow this line, where Blockchain is used, for example, for certificate issuance. (2) From the social perspective of inclusion, Blockchain democratizes education, giving possibilities, voice and value to each student and teacher. Our contribution in this direction is to encourage the use of
Blockchain by supporting storytelling as a tool for social interaction. Perret-Clermont [20], based on Piaget’s work, focused on the influence of social interactions on cognitive development, assuming that learning takes place within each individual but depends on social exchanges, and assigning interactions a major role in the cognitive development of the subject. We are of the opinion that digital learning
ecosystems must tear down the boundaries of current education, one of them being the
physical limits that are imposed on possible interactions. In this sense, a distributed
environment like Blockchain can be a great solution.
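To picture the infrastructure perspective in miniature, a blockchain can be reduced to records chained by hashes, so that tampering with an earlier record (for example, an issued certificate) invalidates every later one. This is a didactic sketch under our own assumptions, not the SELI implementation; a toy hash stands in for a real one such as SHA-256.

```javascript
// Toy djb2-style hash; a real chain would use a cryptographic hash.
function toyHash(s) {
  let h = 5381;
  for (let i = 0; i < s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  return h.toString(16);
}

// Append a record whose hash covers both its payload and the previous
// record's hash, forming the chain.
function appendRecord(chain, payload) {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "0";
  const hash = toyHash(JSON.stringify(payload) + prevHash);
  chain.push({ payload, prevHash, hash });
  return chain;
}

// Recompute every link; any tampered payload breaks verification.
function verifyChain(chain) {
  return chain.every((rec, i) => {
    const prevHash = i === 0 ? "0" : chain[i - 1].hash;
    return rec.prevHash === prevHash &&
      rec.hash === toyHash(JSON.stringify(rec.payload) + prevHash);
  });
}
```

A certificate recorded this way can be checked later without trusting any single intermediary, which is the property the social perspective builds on.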
We use workshop-based digital storytelling to enhance teacher education with inclusion in mind, as the process provides opportunities for teachers to reflect on their practices in handling diversity in the classroom. The SELI team supports teachers’ professional development through workshops, allowing teachers to tell stories based on their experiences with those who share an interest in working with disadvantaged groups. The SELI ecosystem allows the creation of a community of practice with a shared interest in inclusion, with relationships built through discussions as well as stories of practice. The SELI ecosystem also provides a storytelling tool for students, allowing teachers to transfer their experience of workshop-based digital storytelling into their classrooms.
Design science research addresses real-world problems in holistic and innovative ways. According to Johannesson and Perjons [2], the design science framework follows a feedback-loop process comprising: explicate the problem; outline the artifact and define requirements; design and develop the artifact; validate the artifact; and evaluate the artifact (Fig. 2).
SELI’s design and creation of the learning ecosystem started by bringing together diverse stakeholders from the EU and LAC to explicate the challenges of digital exclusion and the inaccessibility of education for disadvantaged groups. As part of the requirements definition, SELI discovered the needs and requirements of implementing and integrating emerging pedagogies, methods and technologies such as blockchain, global sharing pedagogy, digital storytelling, flipped learning, and educational games, through workshops and focus group sessions with stakeholders and target groups across the regions. The design and development of the smart learning ecosystem follows an integrative process of agile, open and co-design approaches, in which researchers, software developers, students and business experts collaborate through several online meetings.
Digital Storytelling and Blockchain as Pedagogy and Technology 401
At the moment, we are in the validation phase of the design science research, validating the learning ecosystem through workshops with teachers in different forums such as conferences, seminars, and other strategic events.
In the following sections we present the main aspects of the SELI ecosystem.
this course. The microsite is a web page that presents the course proposed by the teacher with all didactic content, independent of the authoring tool; it should execute in the same way on different architectures and display correctly on diverse devices. It provides the student with content presentation, such as materials selected for the class (previously selected text readings, video lessons and/or podcasts) about the concepts to be learned. In addition, it may provide activities that explore skills acquisition, such as practical activities, letting the student verify how much he/she has understood of the subjects presented so far. Seizing the dynamic features offered by microsites, the content presented will encourage discussion and collaboration among students through collaborative tools. With the microsite, the teacher can verify the acquired skills by allowing students to demonstrate them, for example by asking the students to make a video with a storytelling about the concepts learned.
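The authoring-tool independence of the microsite can be pictured by treating the course as plain data that any renderer turns into HTML. The sketch below is illustrative only; the field names (`title`, `contents`, `kind`) are our assumptions, not the actual SELI course format.

```javascript
// Hypothetical microsite renderer: it consumes a course exported as
// plain JSON and emits HTML, without knowing which authoring tool
// produced the course.
function renderMicrosite(course) {
  const blocks = course.contents.map((c) => {
    switch (c.kind) {
      case "text":  return `<section><p>${c.body}</p></section>`;
      case "video": return `<video src="${c.src}" controls></video>`;
      case "audio": return `<audio src="${c.src}" controls></audio>`;
      default:      return "";
    }
  });
  return `<article><h1>${course.title}</h1>${blocks.join("\n")}</article>`;
}
```

Because the data format, not the tool, is the contract, the same course could be displayed by different renderers on different devices.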
In Fig. 6, we present the general architecture view of the learning analytics component. The data is collected from the Service Bus view components. These components follow the microsite infrastructure, where events are trapped to feed the Ecosystem Memory with raw event data related to each indicator. The Ecosystem Memory concept encompasses the databases across the Ecosystem services and tools. Event capture must be implemented inside each service in order to feed the memory with data on the user events and behavior detected for each indicator. The capturer is a JavaScript event handler on the client side; it follows the W3C standards. The data collector interface gathers all data sent by the service component side (Tool) and feeds the corresponding part of the Ecosystem Memory. The Ecosystem Memory is not as simple as depicted in Fig. 6; it is evolving during the development and testing process (the current stage of the project). Our memory repository consists of MongoDB databases and the file system, but it is open to other technologies such as PostgreSQL in the future. The ETL is implemented with ToroDB, which produces a database living in PostgreSQL. After the ETL guided by ToroDB, the SELI team cleans the data manually; however, we are working on automating this task with scripts. The automation requires maturity in understanding the raw data gathered and the way ToroDB builds the PostgreSQL database. The techniques for analysis will be statistics and information visualization. Techniques related to classification and clustering will be discussed and implemented in the future, when the Ecosystem has gathered a large amount of data.
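The client-side capture described above can be sketched as a standard DOM event handler that builds one raw record per user event and posts it to a collector endpoint. The `buildRecord` function, the record fields and the endpoint are assumptions for illustration, not the SELI capturer itself.

```javascript
// Builds the raw event record for one indicator; pure, so it is testable.
function buildRecord(indicator, event, now) {
  return {
    indicator,                               // which analytics indicator this feeds
    type: event.type,                        // e.g. "click", "play"
    target: event.target ? event.target.id : null,
    timestamp: now,
  };
}

// Wires the capturer: standard DOM listeners trap user events and POST
// each record to a hypothetical collector endpoint that feeds the
// Ecosystem Memory.
function startCapture(indicator, endpoint) {
  const send = (event) =>
    fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(buildRecord(indicator, event, Date.now())),
    });
  ["click", "play", "pause", "submit"].forEach((t) =>
    document.addEventListener(t, send, true));
}
```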
New media have been permanently integrated into the learning and teaching process.
However, this simple statement has many important implications. The transformation
concerns mainly opportunities related to increasing the effectiveness of learning and
social inclusion [17]. Undoubtedly, new technologies make it possible to cross many
borders: not only territorial restrictions, but also those resulting from disability or from belonging to disadvantaged groups. Pedagogy, as a science of educational ideals, may currently harness the potential of new technologies, thus pursuing the highest objectives related to social inclusion. The SELI platform is such an example of ideal synergy between the social sciences and new technologies.
The digital storytelling and flipped learning used in the SELI ecosystem show the possibility of symmetry in the transfer of knowledge and skills. The openness of the platform creates an opportunity to combine fragmented content, both from professional sources and from sources outside higher education. Flipped learning also involves activation methods that use new technologies to allow effective interaction with everyday content: classical didactic methods transferred into the digital space to exchange experiences. This is especially important when we consider that participation in SELI brings together people with different cultural and organizational experiences. Therefore, in the text, the authors repeatedly refer to the concept of “smart”. This keyword reflects the flexibility of education, which is manifested in openness (not only to integrate different contents into a whole), in the lack of borders for people with disabilities, and in the possibilities offered by the mix of technology and pedagogy.
The SELI platform is a learning environment that exploits the potential of fast data collection and transfer. The pedagogy of sharing is also exemplified in the effective use of digital storytelling. This inconspicuous, rarely used technique has extraordinary potential. The SELI platform has implemented the possibility of collecting valuable research and teaching material for almost every course, drawing on the shared experiences of the platform's users. Based on the collected stories relating to courses such as the prevention of cyberbullying or preparation for being an educator of excluded people, a powerful database of cases is built up [18]. Based on the experiences and stories of cyberbullying, it is possible to redefine the content of online courses or to use archived cases (in the form of digitally written stories or recordings) to learn from other people's biographies. Besides, digital inclusion cases (e.g. didactic failures of trainers) provide an opportunity to combine digital storytelling with the flipped classroom method.
The presented SELI ecosystem has several important perspectives. It offers a perspective of knowledge, skills and biographical-experience transfer between selected European and Latin American countries. The SELI ecosystem also makes it possible to quickly connect and refer to distributed data and to authenticate the effects of didactic activities (certification through blockchain). The wisdom of the described solution lies primarily in broadly understood inclusiveness, i.e. inclusion regardless of physical, linguistic, state or age restrictions. Within the SELI ecosystem, there is also a perspective of scientific research, didactic activities, exchange of experiences, transfer of values and, above all, construction of wise solutions, i.e. solutions that keep up with the universal needs of learning subjects.
Acknowledgement. This work was supported by the ERANET-LAC project which has
received funding from the European Union’s Seventh Framework Programme. Project Smart
Ecosystem for Learning and Inclusion - ERANet17/ICT-0076SELI, including funding from
FAPESP, 2018/04085-4.
References
1. Martins, V., Oyelere, S.S., Tomczyk, L., Barros, G., Akyar, O., Eliseo, M.A., Amato, C.A.
H., Silveira, I.F.: A blockchain microsites-based ecosystem for learning and inclusion. In:
Brazilian Symposium on Computers in Education (Simpósio Brasileiro de Informática na
Educação-SBIE), Brazil, pp. 229–238 (2019)
2. Johannesson, P., Perjons, E.: A Design Science Primer. Springer, Heidelberg (2014)
3. Martins, V.F., Amato, C.A.H., Eliseo, M.A., Silva, C., Herscovici, M.C., Oyelere, S.S.,
Silveira, I.F.: Accessibility recommendations for creating digital learning material for
elderly. In: 2019 XIV Latin American Conference on Learning Objects and Technology
(LACLO). IEEE (2019)
4. Martins, V.F., Amato, C.A.H., Ribeiro, G.R., Eliseo, M.A.: Desenvolvimento de Aplicações
Acessíveis no Contexto de Sala de Aula da Disciplina de Interação Humano-Computador.
Revista Ibérica de Sistemas e Tecnologias de Informação E17, 729–741 (2019)
5. Martins, V.F., Amato, Souza, A.G., Sette, G.A., Ribeiro, G.R., Amato, C.A.H.: Material
digital Acessível Adaptado a partir de um Livro Didático Físico: Relato de Experiência,
Revista Ibérica de Sistemas e Tecnologias de Informação (2020, in Press)
6. Silveira, I.F., Ochoa, X., Cuadros-Vargas, A.J., Casas, A.H.P., Casali, A., Ortega, A., Sprock, A.S., Silva, C.H.A., Ordoñez, C.A.C., Deco, C., Cuadros-Vargas, E., Knihs, E., Parra, G., Muñoz-Arteaga, J., Santos, J.G., Broisin, J., Omar, N., Motz, R., Rodés, V., Bieliuskas, Y.C.H.: A digital ecosystem for the collaborative production of open textbooks: the LATIn methodology. J. Inf. Technol. Educ.: Res. 12(1), 225–249 (2013)
7. Briscoe, G., De Wilde, P.: Digital ecosystems: evolving service-orientated architectures. In:
Proceedings of BIONETICS 2006, the 1st International Conference on Bio Inspired Models of Network, Information and Computing Systems. ACM, New York (2006)
8. Boley, H., Chang, E.: Digital ecosystems: principles and semantics. In: Proceedings of the
2007 Inaugural IEEE Conference on Digital Ecosystems and Technologies, pp. 1–6 (2007)
9. Mendoza, J.E.G., Arteaga, J.M., Rodriguez, F.J.A.: An architecture oriented to digital
literacy services: an ecosystem approach. IEEE Lat. Am. Trans. 14(5), 2355–2364 (2016)
10. Burns, C., Dolan, J.: Building a foundation for digital inclusion: a coordinated local content
ecosystem. Innov.: Technol. Gov. Global. 9(3–4), 33–42 (2014)
11. Oyelere, S.S., Tomczyk, L., Bouali, N., Agbo, F.J.: Blockchain technology and gamification
– conditions and opportunities for education. In: Veteška, J. (ed.) Adult Education 2018 –
Transformation in the Era of Digitization and Artificial Intelligence. Andragogy Society,
Prague, ISBN 978-80-906894-4-2 (2019)
12. Tomczyk, L., Oyelere, S.S., Puentes, A., Sanchez-Castillo, G., Muñoz, D., Simsek, B.,
Akyar, O.Y., Demirhan, G.: Flipped learning, digital storytelling as the new solutions in
adult education and school pedagogy. In: Veteška, J. (ed.) Adult Education 2018 –
Transformation in the Era of Digitization and Artificial Intelligence. Czech Andragogy
Society, Prague, ISBN 978-80-906894-4-2 (2019)
13. Lambert, J.: Digital Storytelling: Capturing Lives, Creating Community. Routledge,
Abingdon (2013)
14. W3C: Web content accessibility guidelines 2.1 (2018). https://www.w3.org/TR/WCAG21/.
Accessed 11 Oct 2019
15. Chatti, M.A., Dyckhoff, A.L., Schroeder, U., Thüs, H.: A reference model for learning
analytics. Int. J. Technol. Enhanced Learn. (IJTEL) 4(5/6), 318–331 (2012)
16. CAST: Universal design for learning. http://www.cast.org. Accessed 20 Dec 2019
408 S. S. Oyelere et al.
17. Tomczyk, Ł., Eliseo, M.A., Costas, V., Sánchez, G., Silveira, I.F., Barros, M.J., Amado-
Salvatierra, H.R., Oyelere, S.S.: Digital divide in Latin America and Europe: main
characteristics in selected countries. In: 14th Iberian Conference on Information Systems and
Technologies (CISTI), pp. 1–6. IEEE (2019)
18. Tomczyk, Ł., Włoch, A.: Cyberbullying in the light of challenges of school-based
prevention. Int. J. Cognit. Res. Sci. Eng. Educ. (IJCRSEE) 7(3), 13–26 (2019)
19. Bourke, J., Garr, S., Berkel, A., Wong, J.: Diversity and inclusion: the reality gap-2017
Global Human Capital Trends (2017)
20. Perret-Clermont, A.-N., et al.: La construction de l'intelligence dans l'interaction sociale
(1996)
Aggregation Bias: A Proposal to Raise
Awareness Regarding Inclusion
in Visual Analytics
1 Introduction
Information has grown in volume and relevance over recent years; technology has not
only increased the generation of data but also their accessibility. Anyone with an Internet
connection can consult a wide range of datasets on almost any topic: crime data,
healthcare data, weather data, financial data, etc.
These data can be employed to make informed decisions in different
domains. For example, businesses can use demographic data to create personalized
advertisements or to segment the market. Governments can use their data to design
new policies. People regularly use data to make informed decisions. A simple
question like “should I get a coat to go out today?” can be answered through data
(made available by weather services) to make an informed decision that, in the end,
seeks some kind of benefit (in this case, avoiding hypothermia).
However, delegating decisions solely to data can be a double-edged
sword. Data can be wrong or false, but they can also be incomplete, and making
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 409–417, 2020.
https://doi.org/10.1007/978-3-030-45697-9_40
410 A. Vázquez-Ingelmo et al.
decisions using wrong data leads to wrong decisions. There are several cases in which
relying on the wrong data has produced undesired results, mostly because of data bias
or even algorithmic bias [1–3].
It seems clear, then, that if the data used to make decisions are not well suited to the
problem, the resulting decisions may not be well suited to it either. But how can
people avoid such pitfalls? Bias is generally
introduced unconsciously, and it can be hard to detect our own biases and remain aware of
them while collecting data. For these reasons, data should be thoroughly examined to
identify gaps or inconsistencies before using them in decision-making processes.
One of the most widely used approaches to ease the analysis and exploration of datasets is visual
analytics [4, 5]; through information visualizations, users can interact with and explore datasets
via visual marks that encode certain information [6]. However, visualizations can
hide data issues by shifting attention from the analysis of the raw
data to the discovered patterns. Patterns can be seen as shortcuts that tell us properties
of the data, for example, whether there are correlations among the visualized variables [7].
But visual analysis should not be reduced to identifying patterns and trusting
them blindly, because patterns can likewise lead to wrong conclusions [8].
This work describes a proposal for raising awareness during visual analysis,
helping users make informed decisions that take into account the flaws or potential
issues of their datasets, specifically issues related to data aggregation, which can be
very harmful in data-driven decision-making processes. The main goal is not only to
improve decision-making but also to address inclusion problems when dealing with data, as
data biases can lead to decisions that (involuntarily or not) discriminate against individuals.
The rest of this paper is organized as follows. Section 2 introduces some issues
related to data analysis and data aggregation. Section 3 describes the methodology
followed to design the proposal. Section 4 presents a proposal to raise awareness
during visual data analysis. Section 5 discusses the proposal, followed by Sect. 6, in
which the conclusions derived from this work are outlined.
2 Background
The outcomes of decision-making processes are actions that affect the context in which
decisions are being made. When deciding which action to take, the decision-maker will
hold an assumption about how the action will affect the context, looking for a
benefit or a pursued result. The critical fact, however, is that assumptions can be very
personal and vary depending on the person's beliefs, background, domain
knowledge, etc.
Even when decision-makers support their decisions with data (embracing data-driven
decision-making [9]), problems remain. As introduced before, data is not the holy
grail of decision-making: just as personal traits can influence the decision-maker,
the collected data and performed analyses can be influenced by other harmful
factors such as data biases [10] or poor analysis.
There are specific fields of study, like uncertainty visualization, that try to find
methods to visualize uncertain data, thus warning users regarding the uncertain nature
of the results they are consuming through their displays [11, 12]. However, uncertainty
Fig. 1. The Anscombe’s Quartet. The four datasets have the same mean and variance values on
both variables represented on the X and Y axes.
Aggregated data thus ease the analysis process, but they can lead to a loss of
information. Aggregated data are also vulnerable to phenomena such as the ecological
fallacy and Simpson's paradox [15].
Inferring individual behavior from aggregated data is a common extrapolation
mistake, where analysts may conclude that the behavior of a group also explains
the behavior of the individuals within that group [16, 17].
Simpson's paradox is also related to the data aggregation level. In this case, there
may exist lurking variables that can entirely “change” the conclusions derived from
aggregated data [18, 19].
These aggregation-related issues can be very harmful if not taken into account [20],
especially if the audience is biased or not statistically trained (or both).
Some works have tried to address these aggregation drawbacks through detection
algorithms [21, 22], but few have addressed them during visual exploration [23].
3 Methodology
The proposal focuses on how to draw attention to potential aggregation biases and
fallacies during visual analysis. A simple workflow has been considered to automatically
search for aggregation issues in the data being presented to the user,
specifically issues involving Simpson's paradox and the underrepresentation of
categories.
Each categorical variable is considered a potentially influencing variable. Of
course, as will be discussed, this methodology is limited to the variables available
within the dataset. If the dataset has only a small set of categories, the results will
not be as useful as they would be with a richer dataset.
The workflow follows a naïve approach to detect Simpson's paradoxes [23]:
1. Every possible grouping at any possible level is computed on the categorical
variables to obtain a set of potential disaggregation variables.
2. When the user visualizes data, the current aggregation level is retrieved (i.e., the
categorical columns used to group the data).
3. These data are then grouped by the variables identified in the first step.
4. The results of the performed disaggregation are sorted and compared with the
trend of the original scenario (i.e., the aggregated data values).
5. If the disaggregation results differ from the originally aggregated results (a threshold
can be defined to specify what proportion of values must differ from the original
trend for the paradox to be considered), Simpson's paradox is flagged for the
disaggregated attributes.
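As a rough illustration, the detection workflow above can be sketched in a few lines of Python. This is not the authors' implementation: the data structures, the admissions-style numbers, and the 0.5 default threshold are assumptions made here purely for illustration.

```python
# Sketch of the naive Simpson's paradox check (illustrative only).
from collections import defaultdict

def mean_by(rows, key, value):
    """Mean of rows[value] grouped by rows[key]."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in rows:
        sums[r[key]] += r[value]
        counts[r[key]] += 1
    return {k: sums[k] / counts[k] for k in sums}

def detect_simpson(rows, group, candidate, value, threshold=0.5):
    """Steps 3-5: True if grouping by `candidate` reverses the ordering of
    the `value` means between the levels of `group` in at least
    `threshold` of the subgroups."""
    overall = mean_by(rows, group, value)
    top = max(overall, key=overall.get)          # group that leads overall
    subgroups = {r[candidate] for r in rows}
    reversals = 0
    for sub in subgroups:
        part = mean_by([r for r in rows if r[candidate] == sub], group, value)
        if len(part) > 1 and max(part, key=part.get) != top:
            reversals += 1
    return reversals / len(subgroups) >= threshold

def records(gender, dept, admitted, rejected):
    """Expand counts into one row per application (hypothetical numbers)."""
    row = {"gender": gender, "dept": dept}
    return [dict(row, outcome=1)] * admitted + [dict(row, outcome=0)] * rejected

data = (records("M", "A", 80, 20) + records("F", "A", 9, 1)
        + records("M", "B", 2, 8) + records("F", "B", 27, 63))

print(mean_by(data, "gender", "outcome"))                 # aggregate favors M
print(detect_simpson(data, "gender", "dept", "outcome"))  # True: each department favors F
```

With these hypothetical counts, the aggregate admission rate favors men even though women have the higher rate within each department, so the check reports a paradox.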
However, even when visualizing the data disaggregated by the attributes identified in
the fifth step, aggregation issues may persist if the data are in turn summarized by a
function such as the mean, the mode, ratios, etc. These functions can likewise distort the
reality of the data.
To avoid relying on aggregation functions, when the detected Simpson’s paradoxes
are inspected, a sunburst diagram complements the display to give information about
the raw data sample sizes regarding the disaggregated values.
Sunburst diagrams are usually employed to represent hierarchies; in this context,
they are useful to display how the number of observations of the variable being
inspected varies across the different disaggregation levels.
The primary purpose is to have another perspective of data, drawing attention over
potential underrepresentation or overrepresentation in datasets.
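The quantities such a sunburst encodes are simply the raw observation counts at each level of the hierarchy. A minimal sketch (with hypothetical counts, not data from the paper) of the numbers the inner and outer rings would display:

```python
# Counts behind a two-ring sunburst: gender (inner) -> department (outer).
from collections import Counter

# One string per observation, encoding the hierarchy path (hypothetical):
observations = ["M/A"] * 100 + ["F/A"] * 10 + ["M/B"] * 10 + ["F/B"] * 90

inner_ring = Counter(path.split("/")[0] for path in observations)  # by gender
outer_ring = Counter(observations)                                 # by gender+dept

print(inner_ring)  # Counter({'M': 110, 'F': 100})
print(outer_ring)  # women are underrepresented in department A
```

A charting library would draw the rings from exactly these counts, making under- or overrepresented segments visible at a glance.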
4 Proposal
A simple proof of concept has been developed to illustrate the proposal. The test
data employed come from one of the most famous cases involving Simpson's paradox: the
student admissions at UC Berkeley analyzed in 1975 [24]. This dataset holds the following
information about each student: gender, the department to which the application was
submitted, and the result of the application (admitted or rejected).
If these data are aggregated by the gender variable, the results show a significant gender bias
against women: only 35% of women were admitted, in contrast with 44% of
male applicants. Such results could lead decision-makers to design new policies
to address the discovered gender bias.
However, this high-level aggregation hides part of the picture. If the data are, in
turn, disaggregated by the department to which the application was submitted, we see a
different scenario: the majority of the departments showed higher admission rates for
women than for men. What happened was that women applied to more competitive
departments, while men submitted the majority of their applications to departments with
high admission rates (resulting in higher overall admission rates among male students).
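The arithmetic behind this reversal is a weighted average: each gender's overall rate is its per-department rates weighted by how many applications it issued to each department. A small sketch with hypothetical numbers (not the real Berkeley figures) makes the mechanism explicit:

```python
# (applications issued, admission rate) per department, hypothetical values:
men   = {"less competitive": (800, 0.62), "competitive": (100, 0.06)}
women = {"less competitive": (100, 0.82), "competitive": (800, 0.24)}

def aggregate_rate(depts):
    """Overall admission rate: application-weighted mean of the rates."""
    total = sum(n for n, _ in depts.values())
    return sum(n * rate for n, rate in depts.values()) / total

print(aggregate_rate(men))    # ~0.56
print(aggregate_rate(women))  # ~0.30, despite higher rates in every department
```

Because women concentrate their applications in the competitive department, the low-rate term dominates their weighted average, reversing the per-department picture.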
This case is a famous example of Simpson’s paradox, but misleading conclusions
can be present in any context if these potential issues in data analysis are not accounted
for. For this reason, the interface presented in Fig. 2 is proposed.
When the user is exploring her dataset, the Simpson's paradox detector searches
for potentially influential groupings that change the trend of the currently displayed
variables. If any grouping changes the trend, the categorical variables identified are
displayed (top section of Fig. 2).
The user can then click on each detected grouping to explore how the disaggregation
affects the value she was examining, alongside a sunburst diagram that
shows the distribution of occurrences of each observation under the selected grouping
(bottom section of Fig. 2). In this specific example, the user can observe that
women apply less to departments with high admission rates (such as department A)
and submit more applications to more competitive departments, obtaining a
complete view of the examined data.
5 Discussion
The proposal focuses on raising awareness of how disaggregating
data can change the patterns identified during the analysis of aggregated data. It
could also be used as an educational tool, offering a friendly interface to teach people
about the underlying issues of data aggregation and their dangerous effects on
decision-making processes.
Educating people in data skepticism and potential biases is important
because data visualizations can be very persuasive and can influence people's beliefs.
Relying on data visualization tools to raise awareness can be powerful thanks to the
possibility of presenting information in understandable ways and of
enabling individuals to freely interact with data [26, 27].
The methodology searches for subgroups that “change” the original scenario (i.e., the
trends identified on aggregated data). It is important to mention that, in this case,
statistical significance has not been considered, because the main goal was to draw
attention to changes in visual patterns, no matter how small. However, complementing
this methodology with the computation of statistical significance could be more
powerful in some contexts [23].
Statistically trained audiences may be aware of these issues. However, other
audiences could reach wrong insights about the data if attention is not drawn to
potential issues, thus distorting the decision-making process without even noticing it.
For example, when dealing with policies that affect individuals, it is crucial to rely
on disaggregated data to avoid ignoring the needs of minorities [28–30].
But when talking about disaggregated data, there are some limitations to take into
account. Demographic variables are meaningful for inclusion-related research contexts,
but also sensitive. Some of these variables can be difficult to collect because of privacy
policies or privacy concerns.
In fact, for some activities, such as hiring, having such data available
could introduce the risk of biasing the decisions made during some phases of the
process [31, 32]. Analysts and decision-makers must therefore understand the level of
analysis and its goals in order to anonymize or omit these attributes accordingly.
To sum up, it is important to foster critical thinking and some skepticism toward
data. When dealing with information about individuals, accounting for data gaps is a
responsibility, because the decisions made could have a high impact in the context of
application, and sometimes, this impact is not beneficial for everyone.
6 Conclusions
Future work will involve the evaluation and refinement of the proposal to improve
its effectiveness, with the aim of obtaining a tool that raises awareness about inclusion in different fields.
Acknowledgments. This research work has been supported by the Spanish Ministry of Edu-
cation and Vocational Training under an FPU fellowship (FPU17/03276). This work has been
partially funded by the Spanish Government Ministry of Economy and Competitiveness
throughout the DEFINES project (Ref. TIN2016-80172-R) and the Ministry of Education of the
Junta de Castilla y León (Spain) throughout the T-CUIDA project (Ref. SA061P17).
References
1. Sweeney, L.: Discrimination in online ad delivery. arXiv preprint arXiv:1301.6822 (2013)
2. Garcia, M.: Racist in the machine: the disturbing implications of algorithmic bias. World
Policy J. 33, 111–117 (2016)
3. Hajian, S., Bonchi, F., Castillo, C.: Algorithmic bias: from discrimination discovery to
fairness-aware data mining. In: Proceedings of the 22nd ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, pp. 2125–2126. ACM (2016)
4. Keim, D.A., Andrienko, G., Fekete, J., Görg, C., Kohlhammer, J., Melançon, G.: Visual
analytics: definition, process, and challenges. In: Kerren, A., Stasko, J., Fekete, J., North, C.
(eds.) Information Visualization, pp. 154–175. Springer, Heidelberg (2008)
5. Thomas, J.J., Cook, K.A.: Illuminating the path: the research and development agenda for
visual analytics. National Visualization and Analytics Center, USA (2005)
6. Munzner, T.: Visualization Analysis and Design. AK Peters/CRC Press, Boca Raton (2014)
7. Harrison, L., Yang, F., Franconeri, S., Chang, R.: Ranking visualizations of correlation using
weber’s law. IEEE Trans. Visual Comput. Graph. 20, 1943–1952 (2014)
8. O’Neil, C.: On Being a Data Skeptic. O’Reilly Media, Inc., Newton (2013)
9. Patil, D., Mason, H.: Data Driven. O’Reilly Media Inc, Newton (2015)
10. Shah, S., Horne, A., Capellá, J.: Good data won’t guarantee good decisions. Harvard Bus.
Rev. 90, 23–25 (2012)
11. Bonneau, G.-P., Hege, H.-C., Johnson, C.R., Oliveira, M.M., Potter, K., Rheingans, P.,
Schultz, T.: Overview and state-of-the-art of uncertainty visualization. In: Scientific
Visualization, pp. 3–27. Springer, Heidelberg (2014)
12. Brodlie, K., Osorio, R.A., Lopes, A.: A review of uncertainty in data visualization. In:
Expanding the Frontiers of Visual Analytics and Visualization, pp. 81–109. Springer,
Heidelberg (2012)
13. https://medium.com/multiple-views-visualization-research-explained/uncertainty-visualization-explained-67e7a73f031b
14. Anscombe, F.J.: Graphs in statistical analysis. Am. Stat. 27, 17–21 (1973)
15. Pollet, T.V., Stulp, G., Henzi, S.P., Barrett, L.: Taking the aggravation out of data
aggregation: a conceptual guide to dealing with statistical issues related to the pooling of
individual-level observational data. Am. J. Primatol. 77, 727–740 (2015)
16. Kramer, G.H.: The ecological fallacy revisited: aggregate-versus individual-level findings on
economics and elections, and sociotropic voting. Am. Polit. Sci. Rev. 77, 92–111 (1983)
17. Piantadosi, S., Byar, D.P., Green, S.B.: The ecological fallacy. Am. J. Epidemiol. 127, 893–
904 (1988)
18. Blyth, C.R.: On Simpson’s paradox and the sure-thing principle. J. Am. Stat. Assoc. 67,
364–366 (1972)
19. Wagner, C.H.: Simpson’s paradox in real life. Am. Stat. 36, 46–48 (1982)
20. Perez, C.C.: Invisible Women: Exposing Data Bias in a World Designed for Men. Random
House, New York (2019)
21. Alipourfard, N., Fennell, P.G., Lerman, K.: Can you trust the trend?: discovering Simpson’s
paradoxes in social data. In: Proceedings of the Eleventh ACM International Conference on
Web Search and Data Mining, pp. 19–27. ACM (2018)
22. Xu, C., Brown, S.M., Grant, C.: Detecting Simpson’s paradox. In: The Thirty-First
International Flairs Conference (2018)
23. Guo, Y., Binnig, C., Kraska, T.: What you see is not what you get!: detecting Simpson's
paradoxes during data exploration. In: Proceedings of the 2nd Workshop on Human-In-the-
Loop Data Analytics, p. 2. ACM (2017)
24. Bickel, P.J., Hammel, E.A., O’Connell, J.W.: Sex bias in graduate admissions: data from
Berkeley. Science 187, 398–404 (1975)
25. Nickerson, R.S.: Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen.
Psychol. 2, 175–220 (1998)
26. Hullman, J., Adar, E., Shah, P.: The impact of social information on visual judgments. In:
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
pp. 1461–1470. ACM (2011)
27. Kim, Y.-S., Reinecke, K., Hullman, J.: Data through others’ eyes: the impact of visualizing
others’ expectations on visualization interpretation. IEEE Trans. Visual Comput. Graph. 24,
760–769 (2018)
28. Mills, E.: ‘Leave No One Behind’: Gender, Sexuality and the Sustainable Development
Goals. IDS (2015)
29. Stuart, E., Samman, E.: Defining “leave no one behind”. ODI Briefing Note. London: ODI
(www.odi.org/sites/odi.org.uk/files/resource-documents/11809.pdf) (2017)
30. Abualghaib, O., Groce, N., Simeu, N., Carew, M.T., Mont, D.: Making visible the invisible:
why disability-disaggregated data is vital to “leave no-one behind”. Sustainability 11, 3091
(2019)
31. Rice, L., Barth, J.M.: Hiring decisions: the effect of evaluator gender and gender stereotype
characteristics on the evaluation of job applicants. Gend. Issues 33, 1–21 (2016)
32. Alford, H.L.: Gender bias in IT hiring practices: an ethical analysis (2016)
A Concrete Action Towards Inclusive
Education: An Implementation
of Marrakesh Treaty
1 Introduction
As stated in its declaration, the goal of the Marrakesh Treaty (MT) is to “Facilitate
Access to Published Works for Persons Who Are Blind, Visually Impaired or Other-
wise Print Disabled”. The treaty was adopted on June 27, 2013 in
Marrakesh and forms part of the body of international copyright treaties administered
by WIPO [1, 3]. It allows for copyright exceptions to facilitate the creation of
accessible versions of books and other copyrighted works for visually impaired
persons. It sets a norm for ratifying countries to have a domestic copyright
exception covering these activities and allowing for the import and export of such
materials.
Sixty-three countries had signed the treaty as of the close of the diplomatic conference
in Marrakesh. Ratification by 20 states was required for the treaty to enter into
effect; the 20th ratification was received on 30 June 2016, and the treaty entered into
force on 30 September 2016. The European Union ratified the treaty for all 28 members
on October 1, 2018. The MT is not an automatically applicable instrument but a norm
that obliges the ratifying states to adapt their laws, so each country must
implement it for it to start functioning within its jurisdiction. The International Federation
of Library Associations and Institutions (IFLA) periodically reviews whether govern-
ments have passed the national laws necessary to make Marrakesh a reality; the
monitoring reports can be found at https://www.ifla.org/publications/node/81925.
One of the key aspects of the MT is that it depends on the creation of international
networks with agile processes for producing and exchanging accessible copies, trying
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 418–425, 2020.
https://doi.org/10.1007/978-3-030-45697-9_41
The need for institutional OER repositories is mainly based on the objective of
achieving the sustainability of resources: managing the necessary changes so that a
resource remains usable and can be located and referenced easily and precisely.
These are good reasons for institutions to invest in OER repositories, in addition
to the reputational benefit an institution gains from the number of times an open
resource created by its teachers is reused and from where it is accessed, which also
measures the national, regional, or international reach of the institution.
420 V. Rodés and R. Motz
A digital library is an information system that allows the access and transfer of digital
information, structured around collections of digital documents on which services are
offered to users [5–8]. Digital libraries are the product of a deliberate collection-
development strategy by library professionals, and they often hold content
beyond institutional ownership. Institutional repositories may offer limited services to
users compared with digital libraries, which include important service aspects such as
reference, assistance, and the interpretation of contents, that is, staff support in the
search for additional information.
However, according to Xia and Opperman [8], institutional repositories and digital
libraries currently offer similar services and the use of each term depends on the scope
where it is applied and therefore on the resources with which they wish to work.
In the case of our institutional repository, the differentiation of the Digital and
Accessible Library collection (BIDYA [4]) from the rest of the collections is based on:
(i) access to the BIDYA collection requires user authentication, since access is
open only to people covered by the Marrakesh Treaty, and (ii) the materials
available in accessible versions in the BIDYA collection are selected texts, in this first
phase primary- and secondary-school textbooks. Figure 1 describes this process.
Unlike most institutional repositories, where all users can access the material
without a username and password, due to the Marrakesh Treaty the
material can only be accessed by registered users. A button on the accessible
portal of BIDYA gives direct access to the repository's user login.
The limited availability of study material in Braille, audio, electronic formats, or
large print is one of the greatest difficulties encountered by students with
visual disabilities enrolled in the education system. This proposal consists of the
creation of a book digitization system available online through a repository of
books and other materials in accessible formats. The Accessible Digital Library
allows universal access without geographic distinction, physical barriers, or travel
restrictions.
In order to efficiently retrieve materials from the BIDYA library, it is essential
to have a set of metadata for cataloging and subsequently retrieving the informa-
tion. It was decided to use the extended Dublin Core metadata set due to its com-
patibility with other platforms, its adaptability to the materials, and its simplicity of
use. The metadata schema was extended to address two aspects: one corre-
sponding to the metadata needed to account for the digitization process
performed on the original material, and the other corresponding to the metadata
describing the level of accessibility achieved in the available material. This process
was carried out through focus meetings between representatives of the National Union
of the Blind of Uruguay, librarians, and computer scientists.
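A record in such an extended schema might look as follows. This is only a sketch: the standard elements use real Dublin Core terms, but the prefixed extension fields are invented here to illustrate the two extension groups and are not the project's actual schema.

```python
# Hypothetical extended Dublin Core record (extension field names invented).
record = {
    # standard Dublin Core elements
    "dc:title": "Example accessible textbook",
    "dc:format": "application/vnd.openxmlformats-officedocument"
                 ".wordprocessingml.document",
    "dc:rights": "Restricted to users covered by the Marrakesh Treaty",
    # extension 1: digitization process applied to the original material
    "bidya:source_format": "print",
    "bidya:digitization_method": "scan + OCR + manual correction",
    # extension 2: accessibility level achieved
    "bidya:alt_text_complete": True,
    "bidya:wcag_level": "AA",
}

extension_fields = sorted(k for k in record if k.startswith("bidya:"))
print(extension_fields)
```

Keeping the extensions in a separate namespace, as sketched here, preserves interoperability with harvesters that only understand plain Dublin Core.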
cells or combined cells, columns and blank rows), alternative text is added to the
images and a description when they provide relevant information.
With regard to the format of the documents included in the Accessible Library, the
docx format is used because it has an open specification based on the XML markup
language, in compliance with the repository ordinance that only
allows the deposit of digital resources in open formats. It also meets the
requirements of the screen readers used by the end users of the Accessible Library.
With respect to the workflow, the line established for the repository was followed,
respecting the different roles associated with users (adding, editing, publication,
administration) and preserving the final review by the administrators of the repository.
The latter ensures the quality of the descriptions.
The registration protocol for authorized users was also defined, taking into account
the decree that regulates this exception. The decree (N° 295/017, https://www.impo.
com.uy/bases/decretos/295-2017) designates the organizations authorized to
register users, and these organizations are the only ones that send the data necessary for
the creation and authentication of the users of these collections.
This Accessible Digital Library is operational and available to elementary and high
school students, and in some cases to parents, tutors, and teachers. As of December
2019, it hosted about 1000 resources, a number expected to increase soon. New
collections will continue to be incorporated according to the needs of the target pop-
ulation, particularly those that favor access to and permanence in higher education.
Finally, it must be ensured that web access to the repository is itself accessible. For
accessibility, we apply an evaluation methodology that follows the steps of the WCAG-
EM proposed by the W3C: 1) define the scope of the evaluation,
2) explore the website, 3) select a representative sample, 4) evaluate the selected
sample, and, finally, 5) report the findings of the evaluation.
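Step 4 can be partly automated. As a minimal illustration (using only the Python standard library, and far simpler than a tool like TAW), the following sketch flags `<img>` elements that lack an `alt` attribute, a basic check for WCAG 2.1 success criterion 1.1.1 (non-text content); the sample markup is hypothetical.

```python
# Minimal WCAG 1.1.1 check: find images with no alt attribute at all.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely.
    (alt="" is left alone: empty alt is legitimate for decorative images.)"""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "?"))

page = '<img src="logo.png" alt="Repository logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # ['chart.png']
```

An automated pass like this only finds missing alternatives; judging whether the provided text actually "serves the same purpose" as the image still requires a manual review, as done here.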
In step 1 we define the scope at level AA, taking into account that several inter-
national regulations recommend this level as the minimum level of accessibility required.
In step 2 we explore the repository's website, and in step 3 we select the initial page of
the repository as the representative sample. This selection is based on the consideration
that the page where the user searches for resources is the most relevant for the recovery
of and encounter with OER, the fundamental objective of the repository. While a single
page may seem unrepresentative of the overall accessibility of the repository, the
selected page is the gateway to finding and using resources, so its level of accessibility is
crucial for the user experience. Finally, we evaluated this page manually and also using
the TAW tool. This analysis revealed a series of accessibility errors that were resolved.
One of the problems encountered concerned the conformance criterion Non-text content
(numbered 1.1.1 in the WCAG Level A specification), which requires that all non-textual
content presented to the user have a textual alternative that serves the same
purpose. The benefit of this criterion is that the information can be interpreted through
any sensory modality, for example via a screen reader. Its absence creates access barriers
for blind users of screen readers and for people with difficulties understanding the
meaning of some image, among others.
some image, among others. Despite being probably the best known example of
TAW: http://www.tawdis.net/.
The BIDYA Project is one of the first initiatives to develop an institutional repository
of OER in the framework of the implementation of the Marrakesh Treaty, facilitating
access to published works for people who are blind, visually impaired, or otherwise
unable to access printed text in Latin America. As a result of the activities carried out
for the implementation of the accessible digital library BIDYA, guides on the process of
digitizing and correcting materials to make them accessible were written and published
as open access, and the team is currently working with librarians on a Digital Literacy
campaign for blind people.
Acknowledgement. This work was supported by the Innovation Sector Fund of the National
Agency for Research and Innovation (ANII) through the project ININ_1_2017_1_137280.
A Concrete Action Towards Inclusive Education 425
Intelligent Systems and Machines
Cloud Computing Customer
Communication Center
1 Introduction
The scope of the communication system is customer interaction in call centers, where
agents of different organizations provide customer service over the phone. Call centers
require centralized headquarters for receiving and transmitting a large volume of requests.
The “Cloud Computing Customer Communication Center” system automates the call
center, with organizations using the infrastructure it provides. A “Voice Interaction
Module” has been developed, an intelligent voice communication system to be integrated
into a Unified Communications Platform. By using this solution, the client communicates
with the Communication Center system in
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 429–438, 2020.
https://doi.org/10.1007/978-3-030-45697-9_42
430 G. Suciu et al.
Romanian, through natural language that is recognized and analyzed semantically. The
customer is directly connected to the requested entity so that transactions take place
as soon as possible. Nowadays, there is a tendency to move from “Call Center”
solutions to “Cloud Client” solutions that, in addition to classic call-center
functionality, integrate unified channel management solutions, communications
applications, and text-to-speech (TTS) and speech-to-text (ASR) applications.
The simplified scenario of an inbound call center (one that processes incoming calls)
is the following: a client calls the number assigned to that call center and, once
connected, accesses the IVR (Interactive Voice Response) system and identifies himself.
Through the IVR, the client can access or even complete certain simple transactions
automatically; if the accessed service was not completed and an operator is available,
the client is redirected to that operator.
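The inbound-call flow described above can be sketched as simple routing logic; the function and state names below are illustrative stand-ins, not part of the system itself:

```python
def route_call(identified: bool, service_completed: bool,
               operator_available: bool) -> str:
    """Route an inbound call following the simplified IVR scenario."""
    if not identified:
        return "ivr_identification"  # caller must identify via the IVR first
    if service_completed:
        return "hangup"              # transaction finished automatically in the IVR
    if operator_available:
        return "operator"            # unfinished request is redirected to an agent
    return "queue"                   # wait until an operator becomes available
```

For example, an identified caller whose request the IVR could not complete is sent to an available operator: `route_call(True, False, True)` returns `"operator"`.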
The main trends and technologies involved in today's call center configuration
are natural language processing and artificial intelligence (AI) techniques such as deep
learning (DL). Voice-based interfaces are included in order to facilitate communication
with users and improve the quality of the supplied services. This trend is illustrated in [1],
which points out that there are tens of millions of devices in the world whose main
interface is vocal. Such devices are operated through platforms that include speech
recognition (ASR, speaker recognition) and speech synthesis (TTS) software. An important
role of the devices that use voice as the main interaction method (voice first) is to act
as an intelligent agent or assistant for their users. Among the most popular applications
are those for voice banking or voice payments, where companies such as Nuance and
Personetics are leaders in implementing voice support solutions.
Using state-of-the-art techniques can solve recognition problems, such as speech
recognition, with high accuracy. For example, DNNs (Deep Neural Networks) are now an
everyday technique for speech recognition, and various open-source applications for
developing voice-based systems have already implemented them, including Kaldi [1],
Nuance [2] and PocketSphinx [3]. Since this is an active research area, it is difficult
to keep such applications up to date with constant technological changes that may involve
source code modifications or even rethinking the application architecture; for example,
Kaldi [4, 5] has three different source codes for DNNs. The adoption of “Cloud Computing”
technology allows immediate, permanent, convenient and on-demand network access to a
shared resource base (server networks, storage resources, applications and services)
that can be used or released with minimal management effort or service provider
interaction.
The rest of the paper is organized as follows: Sect. 2 analyzes related work;
Sect. 3 presents the developed system, including requirements and key performance
indicators; Sect. 4 presents the testing of the functionality of the Cloud Computing
Customer Communication Center; and Sect. 5 draws the conclusions and envisions
future work.
Cloud Computing Customer Communication Center 431
2 Related Work
The advantages of “Cloud” Call Center technology, according to [6], are not just
about costs. This technology brings many other benefits, such as scalability, flexibility,
ease of use and development, and reliability. Among the advantages reported by the
enterprises that have adopted the “Cloud Computing” model as a call center solution are
ease of use, reliability in case of disaster, and an increased security level.
The uses of speech processing in this context are therefore very varied: for example,
calibrating the system according to the customer's needs and emotions, thereby improving
staff training, recording and processing complaints, and ultimately improving the
performance of the organization. This domain, also called audio mining, includes
elements of speech recognition, semantic speech analysis and identification of voice
characteristics [7].
Among the elements to be adopted by a successful call-center are [8]:
1. Real-time speech analysis (Speech Analytics), which can refer to several aspects:
(a) identifying vocabulary elements, (b) detecting emotions or prosody, (c) recognition
of the voice tag.
2. Text Analysis (Text Analytics) of organizational interactions with customers
through SMS, email, fax, chat, and messenger. This element processes and identifies
keywords, allowing detection of words or phrases related to a particular topic.
Automated alerts related to the content of the processed text may also be generated.
3. Chatbots: 2017 was probably the year when more and more centers adopted the
chatbot technology to handle a large volume of simple calls or requests. Chatbots
will not replace the human operators, but they will relieve them of some of their
activities to allow them to focus on more complex issues.
4. MRCP (Media Resource Control Protocol) implementation, which is a standard for
the developers of telephony and voice technologies based applications.
5. The adoption of “Cloud Computing” technology that allows immediate, permanent,
convenient and on-demand access over the network to a shared resource base.
Although a high degree of interest in robotized voice communication with customers
has been observed in different fields, such as banking (ING), GSM (Vodafone), and
utility service providers (Engie), an analysis of the current global and local
status suggests that these applications are still limited to a restricted set of
operations. The estimated performance of these applications is below 50%. However,
it is estimated that over the next ten years, 50% of banking operations will be
performed through Voice First solutions.
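A minimal sketch of the keyword-based text analytics described in element 2 above; the topic lexicon and alert labels are made up for illustration and are not taken from any cited system:

```python
def scan_interaction(text, topic_keywords):
    """Return alert labels for every topic whose keywords occur in the text.

    topic_keywords maps a topic name to a set of trigger words.
    """
    words = set(text.lower().split())
    return sorted(topic for topic, kws in topic_keywords.items()
                  if words & kws)  # non-empty intersection triggers an alert

# Hypothetical lexicon for two monitored topics
topics = {"billing": {"invoice", "charge", "refund"},
          "outage":  {"down", "offline", "outage"}}
```

A real deployment would normalize punctuation and use stemming or embeddings rather than exact word matches; this sketch only shows the alert-generation principle.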
The methods mentioned above have several disadvantages: they do not offer a unified
communications center that allows the unification of communications (fixed and mobile
telephony, fax, e-mail, Internet access, SMS, etc.) using TTS and ASR technologies, and
access to the processed data cannot be performed anytime, anywhere, from any terminal
that can access the Internet. In the analyzed methods used for verbal interaction with
customers, the information extracted from the user's dialogue is not stored in specific
data structures for later processing (Speech Analysis) [8].
This section presents the main requirements, architecture, implemented applications
and key performance indicators of the “Cloud Computing Customer Communication
Center” system.
A. Requirements
The main functional and technical requirements based on the analysis presented in
the related work section are:
– Reducing personnel costs as a result of automating interaction with customers
(direct voice);
– The integration of a voice interaction module with clients of the beneficiary orga-
nization. The module can be configured according to the scheme and the type of
dialogue adapted to the type of application;
– “Cloud Computing” technology utilization will introduce several advantages:
scalability, flexibility, simplicity, ease of use/development, reliability in case of
disasters and security;
– Increasing the Service Level value, or maintaining it with a smaller number of call
center agents;
– Decreasing the number of abandoned calls;
– Enhancing the quality of customer service by reducing the waiting time;
– Real-time visualization and adjustment of the KPIs (Key Performance Indicators);
– Optimization of agent monitoring reports;
– Increasing the level of satisfaction of the agents by offering them the opportunity to
work in their available time slots;
– Increasing retention;
– Reducing the work schedule;
– Performance monitoring.
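To illustrate the Service Level and abandoned-call KPIs listed above, the helpers below use the common call-center definitions; they are an illustrative sketch, not formulas taken from the paper:

```python
def service_level(answered_within_threshold, offered):
    """Fraction of offered calls answered within the target time."""
    return answered_within_threshold / offered if offered else 0.0

def abandonment_rate(abandoned, offered):
    """Fraction of offered calls dropped before reaching an agent."""
    return abandoned / offered if offered else 0.0
```

For example, 80 of 100 offered calls answered within the threshold gives a service level of 0.8; real-time visualization of such KPIs is one of the requirements above.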
B. General architecture
Figure 1 illustrates the general architecture of the “Cloud Computing Customer
Communication Center” system and its functional components. This architecture
allows:
a) implementation exclusively in the Cloud, for reduced costs;
b) hybrid implementation, where end customers purchase hardware such as PBXs
(Private Branch Exchange), IP phones, DECT phones, etc.
The figure above illustrates the case of implementation exclusively in the Cloud (a),
in blue. The caller agent (within a company/call center) benefits from 5G facilities
through a SaaS (Software as a Service) service and/or a VPN (Virtual Private Network),
and customers interact with the agent via the Internet or 2G/3G/4G mobile phone
services (via a GSM-to-VoIP trunk gateway). The beneficiary can install and configure
several pieces of equipment in the “Cloud”, for example: the Unified Communications
System, the Data Processing Server, the Voice Interaction Module, IP phones, DECT
(Digital Enhanced Cordless Telecommunications) phones, etc.
In the case of the hybrid implementation (b), the components framed in green were
added. Customers can make phone calls directly through their local PBX, reducing
the Internet traffic between the Cloud and the headquarters. In this way, various
problems, such as QoS (Quality of Service) issues in large-scale call centers, are
avoided. The first implementation category, exclusively in the “Cloud” (a), is suited
to small companies that cannot afford to purchase additional hardware and only pay
a monthly subscription. The second variant (b), the hybrid implementation, can be
used in large-scale call centers, where the quality of the conversation is of great
importance for voice recognition (MIV mode).
An intelligent voice communication system has been developed to be integrated
into a Unified Communications Platform (UCP). By using this solution, the client
communicates with the Communication Center System, in Romanian using natural
language that is recognized and analyzed semantically [9]. The customer is directly
connected to the requested entity so that the transactions take place as soon as possible.
Figure 2 illustrates the system architecture of the patent-pending system. The patent
was published in the “Buletinul de Proprietate Industrială” (Industrial Property
Bulletin), section “Brevete de invenție” (Invention Patents), no. 11/2018.
The figure presented above integrates additional modules along with the “Voice
Interaction System”; these modules are described in the next subsection.
C. Voice processing applications implemented
The realization of the Communication Center and 5C Platform consisted of the
implementation of several voice processing applications (Speech-to-Text ASR, Text-
to-Speech TTS) and of a Dialogue Management Module (M-DIAG) in Romanian,
connected to a Unified Communication Platform dedicated to several types of
applications (fixed and mobile telephony, fax, e-mail, messaging, Internet
communications, etc.) in various fields of activity (banks, local or central public
administration, media agencies, retail chains, etc.). The resulting natural voice
dialogue solution is an interactive system with a voice-based interface that facilitates
communication with users and improves the quality of the provided services.
The “Voice Interaction System” module consists of the following elements, as
presented in Fig. 2:
– Speech Recognition Module, which translates what the user speaks. Its input is the
voice of the user and the output is represented by the transcription of the speech;
– Voice Synthesis Module has the role of generating the voice signal corresponding
to the system response to the user. Inputs are represented by the generated responses
in the Dialogue Management Module;
– The Dialogue Management Module is designed to generate appropriate responses,
extract the information needed to transmit user requests, and connect the user with
the human operator where applicable;
– The back-end adapter (which contains the database and the data processing
resources of the beneficiary) and the MRCP Interface form the module that
implements the communication protocols for transmitting messages to the systems
performing specific functions;
– Call Center agent (human operator) intervenes when the system cannot fulfill the
user request.
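The module chain above can be sketched as a simple pipeline; the class and method names are illustrative stand-ins, not the system's actual interfaces, and the intent list is invented for the example:

```python
class DialogueManager:
    """Generates a response, or signals a handoff to a human agent."""
    KNOWN_INTENTS = {"balance", "schedule"}  # hypothetical supported intents

    def handle(self, transcript):
        intent = transcript.lower().split()[0] if transcript else ""
        if intent in self.KNOWN_INTENTS:
            return f"Handling request: {intent}", False
        # system cannot fulfill the request -> call center agent intervenes
        return "Transferring you to an operator.", True

def process_turn(asr_transcript, dm):
    """One user turn: ASR output -> dialogue manager -> TTS text or handoff."""
    response, handoff = dm.handle(asr_transcript)
    return {"tts_text": response, "to_operator": handoff}
```

In the real system the transcript would come from the Speech Recognition Module and the `tts_text` would be passed to the Voice Synthesis Module over the MRCP interface.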
Additional modules can be integrated in the system, such as:
– Power supply system, provided by renewable energy sources that can come from
photovoltaic panels and/or wind farms, the system endurance being ensured by the
presence of an electric generator system;
– The transmission environment can be provided by a LAN network and Wi-Fi
components. The equipment mentioned above can be managed locally or through a
Cloud management module. This environment will have to accept external Internet
connections (VPNs);
– Multimedia Connectors should accept audio, video and text sources that can come
from the GSM environment (4G/5G), VoIP (Voice Over IP) and multimedia
channels from web pages;
– Online management (parameters from the transmission environment);
– Reports, Decision Support Management System (ML – Machine Learning);
– Agents/Customers will interact with fixed or wireless phones, smart devices
(smartphones, tablets, PC, etc.).
5 Conclusions
This paper provides an analysis of related work and presents the developed system,
including requirements and key performance indicators, taking into account the global
market's need for new ways of ensuring the most efficient communication channels.
The paper was derived from the POC-5C project and the resulting patent “Cloud
Computing Customer Communication Center” for the improvement of such
communication technologies. As future work, we envision providing measurement
results from a practical implementation.
Acknowledgements. This work has been supported by a grant of the Ministry of Innovation and
Research, POC-5C project.
References
1. Ravanelli, M., Parcollet, T., Bengio, Y.: The PyTorch-Kaldi speech recognition toolkit. In:
ICASSP IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pp. 6465–6469 (2019)
2. Këpuska, V., Bohouta, G.: Comparing speech recognition systems (Microsoft API,
Google API and CMU Sphinx). Int. J. Eng. Res. Appl. 7(03), 20–24 (2017)
3. Huggins-Daines, D., Kumar, M., Chan, A., Black, A.W., Ravishankar, M., Rudnicky, A.I.:
PocketSphinx: a free, real-time continuous speech recognition system for hand-held devices.
In: Proceedings of ICASSP, pp. 1–8 (2006)
4. Plátek, O.: Speech recognition using Kaldi. Master's thesis, Charles University (2014)
5. Kaldi documentation. http://kaldi-asr.org/doc/
6. Suciu, G., Toma, Ş.A., Cheveresan, R.: Towards a continuous speech corpus for banking
domain automatic speech recognition. In: 2017 International Conference on Speech
Technology and Human-Computer Dialogue (SpeD), Bucharest, pp. 1–6 (2017)
7. Gergely, T., Halmay, E., Szőts, M., Suciu, G., Cheveresan, R.: Semantics driven intelligent
front-end. In: 2017 International Conference on Speech Technology and Human-Computer
Dialogue (SpeD), Bucharest, pp. 1–6 (2017)
8. Jurafsky, D., Martin, J.H.: Speech and Language Processing, 2nd edn. Pearson Education
Inc., Prentice Hall, London (2009)
9. Toma, S.A., Stan, A., Pura, M.L., Bârsan, T.: MaRePhoR – an open access machine-
readable phonetic dictionary for Romanian. In: SpeD (2017)
10. Mustafa, M., Allen, T., Appiah, K.: A comparative review of dynamic neural networks and
hidden Markov model methods for mobile on-device speech recognition. Neural Comput.
Appl. 31(2), 891–899 (2019)
11. Shah, N.B., Thakkar, T.C., Raval, S.M., Trivedi, H.: Adaptive live task migration in cloud
environment for significant disaster prevention and cost reduction. In: Information and
Communication Technology for Intelligent Systems, pp. 639–654 (2019)
International Workshop on Healthcare
Information Systems Interoperability,
Security and Efficiency
A Study on CNN Architectures for Chest
X-Rays Multiclass Computer-Aided Diagnosis
Abstract. X-rays are the most commonly used medical images and are
involved in all areas of healthcare because they are relatively inexpensive
compared to other modalities and can provide sensitive results. The interpretation
by the radiologist, however, can be challenging because it depends on the
radiologist's experience and a clear mind. There is also a lack of specialized
physicians, mainly in the least developed areas, which increases the need for
alternatives to human X-ray analysis. Recent research shows that the development
of Deep Learning based methods for chest X-ray analysis has the potential to
replace the radiologist's analysis in the future. However, most of the published
DL algorithms were developed to classify a single disease. We propose an ensemble
of Deep Neural Networks that can classify several classes. In this work, the network
was used to classify five chest diseases: Atelectasis, Cardiomegaly, Consolidation,
Edema, and Pleural Effusion. An AUC of 0.96 was achieved with the training
data and 0.74 with the test data.
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 441–451, 2020.
https://doi.org/10.1007/978-3-030-45697-9_43
442 A. Ramos and V. Alves
of skin lesions [15, 16]. Recent approaches use Deep Neural Networks (DNNs)
pre-trained with nonmedical images for medical applications, a technique known as
Transfer Learning. Lakhani et al. studied the use of DNNs, namely AlexNet and
GoogLeNet, pre-trained with the ImageNet dataset, to detect pulmonary tuberculosis
in chest X-rays, and achieved satisfactory results, with an AUC of 0.99 [6].
The analysis of X-ray images is a crucial task for radiology experts. Most DL
classification approaches perform a binary classification, i.e., detect a single label.
Stephen et al. [7] investigated the use of a DL approach to detect pneumonia in chest
X-rays and achieved an accuracy of 95.31%. Liu et al. [8] used chest X-ray images to
detect tuberculosis and achieved an accuracy of 85.68%. Also, Yates et al. [9] studied
an approach to detect chest anomalies in X-rays and achieved an accuracy of 94.60%.
However, multiclass classification can also be performed; e.g., Yaniv et al. [10] used a
CNN pre-trained with a non-medical dataset (ImageNet) to detect pleural effusion,
cardiomegaly, and mediastinal enlargement in chest X-rays, and achieved an AUC of
0.86.
According to a study published in JAMA Network Open in March 2019, an
Artificial Intelligence (AI) algorithm was able to analyze chest X-rays and classify the
diseases even better than radiologists. The algorithm used was Lunit Insight for
Chest Radiography, trained using 54,221 chest X-rays with normal findings
and 35,613 chest X-rays with abnormal findings. The validation dataset contained 486
normal chest X-rays and 529 chest radiographs with abnormal results. To compare the
performance of the AI algorithm with that of radiologists, five non-radiology
physicians, five board-certified radiologists, and five radiologists examined
a subset of the validation dataset [17].
Table 1 shows that the AI algorithm performed significantly better than the
radiologists. The emergence of publicly available annotated medical image datasets has
been a powerful mechanism for the development of DL based diagnostic models, since
one of the main limitations has been the lack of labelled data. The public dataset used in
this work was the CheXpert dataset [18]. This dataset is also used for the CheXpert
Challenge, which aims to classify five chest disorders. Our proposed ensemble of Deep
Neural Networks also performs the classification of five labels, but the number of
classes can easily be extended.
A Study on CNN Architectures for Chest X-Rays 443
2 Materials
The dataset used was the CheXpert (Chest eXpert), which contains 224,316 chest X-
ray images of 65,240 patients. The images were collected from October 2002 to July
2017 in Stanford Hospital, along with medical reports from radiologists. The dataset
contains the already separated train and validation data with 224,113 and 203 X-ray
images, respectively, each with frontal and lateral views [18] (Fig. 1).
The label extraction from the radiological reports was done in three steps: “Mention
Extraction”, “Mention Classification” and “Mention Aggregation”. The first, “Mention
Extraction”, summarizes the main findings in the Impression section of the reports [18].
The Impression is one of the most important sections in radiologists' reports [19].
There, the professionals summarize the clinical impression, relevant clinical
information, and laboratory findings supported by all the image features, and make
conclusions and suggestions about the patient's health [18, 20]. The second step,
“Mention Classification”, uses the synopsis sentences from “Mention Extraction” and
classifies each mention as Positive, if the observation showed evidence of a pathology;
Negative, if there are no pathological findings; or Uncertain, expressing the uncertainty
and ambiguity of the report. Finally, “Mention Aggregation” groups the mentions into
14 labels consisting of “No Finding”, “Support Devices” and 12 pathologies. Each of
the 14 labels can be assigned one of the following values:
• Positive, 1, if it had at least one positive mention.
• Negative, 0, if it had at least one negative mention.
• Uncertain, u, if it had not positive mentions and at least one uncertain mention.
• Blank, if it had no mention on the observation.
The label “No Finding” was positive when no disorder was classified as
Positive or Uncertain. Only 5 of the 14 labels were used in this work: Atelectasis,
Cardiomegaly, Consolidation, Edema, and Pleural Effusion. Each one was assigned as
Positive, Negative, or Uncertain. This is also the labelling used in the CheXpert
Challenge [18, 21].
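The assignment rules above can be sketched as follows; the function name and the string values for mentions are our own, while the output values 1, 0, u, and blank follow the rules stated in the text (with Uncertain taking precedence over Negative, as the rules imply):

```python
def aggregate(mentions):
    """Collapse the mentions of one observation into a single label value.

    mentions: list of classifications for one observation, each
    'positive', 'negative' or 'uncertain'.
    """
    if not mentions:
        return ""    # blank: the observation was not mentioned at all
    if "positive" in mentions:
        return "1"   # at least one positive mention
    if "uncertain" in mentions:
        return "u"   # no positive mention, at least one uncertain one
    return "0"       # only negative mentions remain
```

For example, a report mentioning a pathology both negatively and with uncertainty yields `u` for that label.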
3 Methodology
In this work, various experiments were performed to classify chest diseases. The tested
neural networks were trained using the CheXpert dataset to detect five different chest
disorders. Since the images are not all the same size, they were all resized to
320 × 320 pixels. The frontal and lateral X-ray views were processed separately, i.e.,
each network's inputs come only from a specific view. In the first experiments, only the
frontal view was considered, as it contains more information than the lateral view.
Besides, in the frontal view both lungs are visible, and there are more cases available
than for the lateral view.
Fig. 2. Distribution of Positive cases of each class for frontal and lateral views.
Both the training and validation data are unbalanced. To mitigate this problem, an
asymmetric loss function was used, which maps each class index to a weighting
value based on the class frequency, Eq. (1).
Weighti = (Nc · Ni) / Ntotal    (1)
where i denotes the index of the class, Nc denotes the number of classes, Ni the number
of cases of a specific class, and Ntotal the total number of cases.
extracted features. The last FCL has five nodes with a sigmoid activation function,
corresponding to the five chest X-ray classes to be classified.
Fig. 3. Model 1 and Model 3. conv stands for a convolutional layer and FC stands for a fully
connected layer.
3.3 Evaluation
The CheXpert paper [18] describes various approaches to using Uncertain instances
during the training phase. Our study used the binary mapping approach, with which
the Uncertain values can be mapped to Negative (U-zeros) or Positive (U-ones). It
also introduces the Mixed approach, which uses U-ones for Atelectasis, Edema and
Pleural Effusion, and U-zeros for Cardiomegaly and Consolidation.
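The three uncertainty-mapping policies can be expressed compactly; the class assignment for the Mixed policy comes from the text above, while the helper name and string encoding are our own:

```python
# Classes whose Uncertain labels become Positive under the Mixed policy
U_ONES_CLASSES = {"Atelectasis", "Edema", "Pleural Effusion"}

def map_uncertain(label, cls, policy):
    """Map an Uncertain label 'u' to '0' or '1' under a given policy."""
    if label != "u":
        return label           # Positive/Negative labels pass through
    if policy == "U-zeros":
        return "0"
    if policy == "U-ones":
        return "1"
    if policy == "Mixed":
        return "1" if cls in U_ONES_CLASSES else "0"
    raise ValueError(f"unknown policy: {policy}")
```

Under Mixed, an Uncertain Edema label becomes Positive while an Uncertain Cardiomegaly label becomes Negative.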
Since the loss function used is asymmetric and the different approaches lead to a
different distribution of cases, Table 2 presents the weights of each class for frontal and
lateral views, calculated using Eq. 1.
Mixed was the first binary mapping approach used in this work, and the results
obtained were compared to the best AUC values of the CheXpert paper [18]. Several
experiments were evaluated and analyzed, testing different network architectures, the
binary mapping approaches, and the effect of using the Leaky ReLU function instead
of the ReLU function.
4 Results
All performed experiments used the SGD optimizer and the Categorical Cross-entropy
loss function. A callback was used to trigger an early stop when three consecutive
epochs achieved the same loss result.
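The early-stopping trigger described above (stop once three consecutive epochs report the same loss) can be expressed framework-independently; the helper below is a sketch of that criterion, not the callback actually used:

```python
def should_stop(epoch_losses, patience=3):
    """True when the last `patience` epochs all produced the same loss."""
    if len(epoch_losses) < patience:
        return False
    tail = epoch_losses[-patience:]
    return all(loss == tail[0] for loss in tail)
```

In a framework such as Keras this criterion roughly corresponds to an early-stopping callback monitoring the loss with zero minimum improvement.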
Table 3 presents the parameters used in the various tests. The results obtained by
training the network with frontal data and the Mixed approach are given in Table 4,
where ACC stands for accuracy and Diff stands for the difference between the mean of
the AUC scores obtained in this study and the mean of the AUC scores published in
the CheXpert paper.

Table 3. Parameters used in the tests.
Parameter            Value
Batch size           22
Learning rate        1 × 10−3
Learning rate decay  1 × 10−6
Number of epochs     120
When analyzing the AUC score, Model 3 performed best, followed by Model 4. For
this reason, these two models were also tested with the U-zeros and U-ones
approaches (Table 5).
It can be seen from Table 5 that a better AUC value was obtained for Model 3
using the U-zeros approach. The results of training with Leaky ReLU, using
α = 0.3, on frontal data are shown in Table 6. The Mixed approach was
considered, as it performed best in the previous tests.
Table 6. Results obtained using the Leaky ReLU activation function and frontal data.
Binary mapping  Model    Loss    AUC     ACC     Diff
Mixed           Model 3  1.1248  0.9558  0.6908  0.2170
Mixed           Model 4  1.0821  0.9561  0.6960  0.1810
The results obtained when testing the different binary mapping approaches using
Model 3 and Model 4 with lateral images are shown in Table 7. Since the best overall
performance in frontal images was achieved with ReLU activation, it was also used for
the lateral images.
Table 8 presents our and the CheXpert paper AUC results for each class, when
predicted using the validation data while considering the lateral and frontal views. For
our results Model 3 was used.
5 Discussion
this will deliver false negative information to the network. Considering a sick person as
healthy is more serious than the opposite.
For the lateral view images, just as for the frontal view ones, Model 3 and the
Mixed approach provided the best results. The lateral view images achieved a better
AUC than the frontal view, which was not expected, since there were fewer samples.
Moreover, the lateral view of X-rays does not allow as good a visualization as the
frontal view, since it shows a smaller area of the chest and the lungs overlap. A very
similar study on the detection of chest diseases, by Rubin et al. [30], also found a
better AUC in lateral chest X-rays than in frontal chest X-rays.
Table 9. Average of the best overall results for training and validation. The same
CheXpert AUC was used for train and validation because the training values were not specified.
Phase       Average AUC  Average CheXpert AUC  Diff
Train       0.9662       0.9056                +0.0610
Validation  0.7429       0.9056                −0.1627
Comparing our training results with the CheXpert paper yielded nearly the same
results, but the validation results are quite divergent (Table 9). On October 16, 2019,
the final position in the leaderboard of the CheXpert Challenge achieved an AUC of
72.70%, while our best results achieved 74.29%. The results of the CheXpert paper
were used as a benchmark; it was not the goal of this work to make an absolute
comparison of our results with theirs, but to compare how different DL techniques
perform in classifying chest X-rays.
6 Conclusions
Several CNN architectures, hyperparameters and labelling metrics were tested. The
best performing architecture was achieved using a transfer learning technique. The first
two convolution layers of the CNN were initialized with weights from a VGGNet
model pre-trained on ImageNet. Transfer learning has proven to be a good choice, as
the CheXpert dataset has a small dimension. It allowed the network to start the training
process with prior knowledge of important features, even coming from another domain
(i.e. ImageNet dataset).
Artificial intelligence has the potential to facilitate or even replace the diagnosis of
patients by performing tasks such as detection, qualification and quantification. The use
of artificial intelligence in medical imaging will allow professionals to spend more
time communicating with patients or deliberating with colleagues, instead of being
overloaded with the number of medical exams to be analysed. This work intends to be
a contribution to using DL techniques for multilabel classification in medical imaging
analysis, as they make it possible to identify findings and detect patterns efficiently.
Medical professionals should be allowed to use their time to treat patients, not to
spend it treating medical images.
Acknowledgements. This work has been supported by FCT – Foundation for Science and
Technology within the Project Scope: UIDB/00319/2020. We gratefully acknowledge the sup-
port of the NVIDIA Corporation with their donation of a Quadro P6000 board used in this
research.
References
1. Hill, D.L., Batchelor, P.G., Holden, M., Hawkes, D.J.: Medical image registration. Phys.
Med. Biol. 46, 1–45 (2001)
2. NHS England. Diagnostic Imaging Dataset. https://www.england.nhs.uk/statistics/statistical-
work-areas/diagnostic-imaging-dataset/. Accessed 17 Apr 2019
3. Benseler, J.: A pocket guide to medical imaging. In: The Radiology Handbook. Ohio
University Press, Ohio (2006)
4. Waseda, Y., Matsubara, E., Shinoda, K.: X-ray Diffraction Crystallography: Introduction,
Examples and Solved Problems. Springer, Heidelberg (2011)
5. Ballard, D., Sklansky, J.: Tumor detection in radiographs. Comput. Biomed. Res. 6, 299–
321 (1973)
6. Lakhani, P., Sundaram, B.: Deep learning at chest radiography: automated classification of
pulmonary tuberculosis by using convolutional neural networks. Radiology 284, 574–582
(2017)
7. Stephen, O., et al.: An efficient deep learning approach to pneumonia classification in
healthcare. J. Healthcare Eng. 2019, 7 (2019)
8. Liu, C., et al.: TX-CNN: detecting tuberculosis in chest X-ray images using convolutional
neural network. In: 2017 IEEE International Conference on Image Processing (ICIP). IEEE
(2017)
9. Yates, E.J., Yates, L.C., Harvey, H.: Machine learning “red dot”: open-source, cloud, deep
convolutional neural networks in chest radiograph binary normality classification. Clin.
Radiol. 73(9), 827–831 (2018)
10. Yaniv, B., Diamant, I., Wolf, L., Lieberman, S., Konen, E., Greenspan, H.: Chest pathology
detection using deep learning with non-medical training. In: IEEE 12th International
Symposium on Biomedical Imaging (ISBI), New York (2015)
11. Pan, I., Agarwal, S., Merck, D.: Generalizable inter-institutional classification of abnormal
chest radiographs using efficient convolutional neural networks. J. Digit. Imaging 32, 888–
896 (2019)
12. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with
region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39, 1137–1149 (2017)
13. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmen-
tation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 640–651 (2017)
14. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image
recognition. In: ICLR, vol. 6 (2015)
15. Ting, D., Cheung, C., et al.: Development and validation of a deep learning system for
diabetic retinopathy and related eye diseases using retinal images from multiethnic
populations with diabetes. JAMA 318, 2211–2223 (2017)
16. Esteva, A., Kuprel, B., Novoa, R., Ko, J., Swetter, S., Blau, H., Thrun, S.: Dermatologist-
level classification of skin cancer with deep neural networks. Nature 542, 115–118 (2017)
17. Ridley, E.: AI outperforms physicians for interpreting chest x-rays. Aunt Minnie (2019)
A Study on CNN Architectures for Chest X-Rays 451
18. Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo,
B., Ball, R., Shpanskaya, K., Seekins, J., Mong, D., Halabi, S., Sandberg, J., Jones, R.,
Larson, D., Langlotz, C., Patel, B., Lungren, M., Ng, A.: CheXpert: a large chest radiograph
dataset with uncertainty labels and expert comparison. Association for the Advancement of
Artificial Intelligence (2019)
19. Clinger, N., Hunter, T., Hillman, B.: Radiology reporting: attitudes of referring physicians.
In: RSNA 1988 Annual Meeting (1988)
20. European Society of Radiology (ESR): Good practice for radiological reporting. Guidelines
from the European Society of Radiology (ESR). Insights Imaging 2, 93–96 (2011)
21. Stanford ML Group: CheXpert: A Large Chest X-Ray Dataset and Competition,
Stanford ML Group. https://stanfordmlgroup.github.io/competitions/chexpert/. Accessed
16 Oct 2019
22. Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
23. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M.,
Adam, H.: MobileNets: efficient convolutional neural networks for mobile vision
applications, arXiv preprint arXiv:1704.04861 (2017)
24. Nain, A.: Beating everything with Depthwise Convolution. https://www.kaggle.com/
aakashnain/beating-everything-with-depthwise-convolution. Accessed 02 July 2019
25. ImageNet, Large Scale Visual Recognition Challenge (ILSVRC). http://www.image-net.org/
challenges/LSVRC/. Accessed 18 May 2019
26. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception
architecture for computer vision. In: The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 2818–2826 (2016)
27. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition
and clustering. In: The IEEE Conference on Computer Vision and Pattern Recognition
(2015)
28. Chen, L.-C., et al.: Encoder-decoder with atrous separable convolution for semantic image
segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV)
(2018)
29. Wang, S.-H., et al.: Classification of Alzheimer’s disease based on eight-layer convolutional
neural network with leaky rectified linear unit and max pooling. J. Med. Syst. 42(5), 85
(2018)
30. Rubin, J., et al.: Large scale automated reading of frontal and lateral chest x-rays using dual
convolutional neural networks. arXiv preprint arXiv:1804.07839 (2018)
A Thermodynamic Assessment of the Cyber
Security Risk in Healthcare Facilities
Abstract. Over the last decades, a number of guidelines have been proposed for
best practices, frameworks, and cyber risk assessment in present computational
environments. In order to improve cyber security, this work proposes and
characterizes a feasible problem-solving methodology that allows for the
evaluation of cyber security in terms of an estimation of its entropic state,
i.e., a predictive evaluation of its risk and vulnerabilities, or, in other
words, the cyber security level of such an ecosystem. The analysis and
development of such a model is based on a line of logical formalisms for
Knowledge Representation and Reasoning, consistent with an Artificial Neural
Networks approach to computing, a model that considers the cause behind the
action.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 452–465, 2020.
https://doi.org/10.1007/978-3-030-45697-9_44
1 Introduction
Security and privacy are becoming ever more challenging issues, despite the
enormous efforts to combat cybercrime and cyber terrorism. Cybersecurity is concerned
with the security of data and of the applications and infrastructure used to store, process
and transfer them. It is understood as a process of protecting data and information by
preventing, detecting, and responding to cybersecurity events. Such events, which
include intentional attacks and accidents, are changes that may affect organizational
processes [1–3]. Since many of these technologies are wireless and therefore depend on
custom protocols and encryption platforms, it is all the more important that action plans
be developed to address responses to potential cyber-attacks on Organizational
Services, Infrastructure, and Information and Communication Technology (ICT), i.e.,
structures that evolve as a set-up in terms of hardware, software, data and the people
who use them [1, 4].
Several guidelines have been issued for the design, management and monitoring of
Technological Security Infrastructures, including ISO 27001 [5], the COBIT Information
and Technology Control Goals [6], and directives such as ITIL (Information
Technology Infrastructure Library) [7, 8]. In terms of best practices, frameworks, and
cyber risk assessment, one may consider those of the Financial Industry Regulatory
Authority [9], the Cybersecurity Framework of the USA National Institute of Standards
and Technology (NIST) [10], the SANS Critical Security Controls for Effective Cyber
Defense [11], ISO 27032 (Security Techniques - Cybersecurity Guidelines) [12], or the
Cyber Security Risk Assessment (CSRA) [13–18]. In addition, a risk assessment
framework defines the rules for evaluating the persons to be included, the terminology
for discussing the risk, the criteria for quantifying, qualifying, and comparing the risk
levels, and the required documentation based on assessments and follow-up activities.
A framework is designed to establish an objective measure of risk that enables a
business to understand the business risk for critical information and assets both qual-
itatively and quantitatively. Finally, the risk assessment framework provides the nec-
essary tools to make business decisions regarding investment in people, processes and
technologies to bring the risk to an acceptable level.
Among the most prevalent risk frameworks in use today are OCTAVE (Operationally
Critical Threat, Asset, and Vulnerability Evaluation) [19] and the NIST risk
assessment [10]. Other frameworks that have a substantial following are ISACA’s
RISK IT (part of COBIT) [6], and ISO 27005:2008 (part of the ISO 27000 series that
includes ISO 27001 and 27002) [5, 12]. All these frameworks take similar approaches
but differ in their high-level goals. OCTAVE, NIST, and ISO 270xx focus on security
risk assessments, while RISK IT applies to the broader IT risk management space.
The case study considers the use of Logic Programming (LP) for Knowledge Rep-
resentation and Reasoning (KRR) [20], and Artificial Neural Networks (ANNs) as a
natural way of computing [21, 22]. On the other hand, data is embedded and transformed
according to the Laws of Thermodynamics [23, 24] to capture both the key components
affecting cyber security environments and human behavior such as attitudes, motivations
454 F. Fernandes et al.
and habits, i.e., the human factors that characterize each CSRA. Finally, in the last
section, the main conclusions are drawn and the future work is outlined.
2 Fundamentals
This paper uses the references to the Open Web Application Security Project (OWASP)
on improving the security of software [25, 26], which evolves according to the steps,
viz.
• Step 1 – Identifying the risk. It should identify a security risk that needs to be
evaluated. The tester must collect information about the threat agent involved, the
attack used, the vulnerability, and the impact of a successful exploit on the business;
• Step 2 – Factors for estimating likelihood. There are a number of factors that can
determine the likelihood. The first set is related to the threat agent involved; the goal
is to estimate the likelihood of a successful attack from a group of potential
attackers. The second set stands for the vulnerability factors and is related to the
weakness; the aim is to estimate the likelihood that the respective weakness
is discovered and exploited;
• Step 3 – Factors for estimating impact. When considering the effects of a successful
attack, it is important to realize that there are two types of impact. The former stands
for the “technical impact” on the application, the data used and the functions
provided. The latter refers to the “business impact” on the company operating the
application;
• Step 4 – Determining the severity of the risk. There are typically two methods, the
informal and the repeatable one. In the former, and in many environments, there is
nothing wrong with checking the factors and simply grasping the answers. In the
latter, the tester should think through the factors and identify the key “driving”
factors that control the outcome; and
• Step 5 – Studying the changes in temperature, pressure, and volume on physical
systems on the macroscopic scale by analyzing the collective motion of their par-
ticles through observation and statistics.
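Steps 2 to 4 above can be sketched as a small scoring routine. The factor names and the 0–9 scale follow the OWASP Risk Rating Methodology [26]; the concrete scores are hypothetical, and the final severity matrix is only hinted at in a comment.

```python
from statistics import mean

def level(score: float) -> str:
    """Map a 0-9 OWASP score to a qualitative likelihood/impact level."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Step 2 - likelihood factors (threat agent and vulnerability), each scored 0-9.
likelihood_factors = {
    "skill_level": 6, "motive": 4, "opportunity": 7, "size": 9,   # threat agent
    "ease_of_discovery": 3, "ease_of_exploit": 5,                 # vulnerability
    "awareness": 4, "intrusion_detection": 8,
}
# Step 3 - impact factors (technical and business), each scored 0-9.
impact_factors = {
    "loss_of_confidentiality": 7, "loss_of_integrity": 5,         # technical
    "loss_of_availability": 5, "loss_of_accountability": 8,
    "financial_damage": 3, "reputation_damage": 4,                # business
    "non_compliance": 2, "privacy_violation": 5,
}

likelihood = mean(likelihood_factors.values())  # 5.75
impact = mean(impact_factors.values())          # 4.875
# Step 4 - overall severity combines the two levels via the OWASP matrix
# (MEDIUM likelihood x MEDIUM impact -> MEDIUM severity).
print(level(likelihood), level(impact))  # → MEDIUM MEDIUM
```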
The next lines describe the designed approach, which draws on Thermodynamics
to describe Knowledge Representation and Reasoning (KRR) practices as a process of
energy devaluation [24, 25]. To understand the basics of the proposed approach,
consider the first two laws of Thermodynamics. The first describes energy
conservation, i.e., for an isolated system the total amount of energy is constant; energy
can be converted, but neither generated nor destroyed. The second describes entropy, a
property that quantifies the state of order of a system and its development. These
characteristics fit our vision when Knowledge Representation and Reasoning (KRR)
practices are understood as a process of energy devaluation. Indeed, a data element is
understood to be in an entropic state, the energy of which can be broken down and used
in the sense of devaluation, but never in the sense of destruction, viz.
which allows one to capture the entropic variations that occur in the system. Moreover,
a neutral term is included, neither agree nor disagree with the IT security appraisal,
which stands for uncertain or vague. The individual’s answers are given in relation
to the query, viz.
As an individual, how much would you agree with the valuation of each individual
answer to the TAFQ – 4 referred to above?
In order to create a comprehensible process, the related energy properties are
graphically displayed. For purposes of illustration and simplicity, full calculation
details are provided for TAFQ – 4’s answers. Therefore, consider Table 1 as the result
of an individual answer to TAFQ – 4. For example, the answer to Q1 was Advanced
computer users and Some technical skills, in that order, i.e., it is stated that the person’s
answer was Advanced computer users, but he/she does not reject the possibility that the
answer may be Some technical skills in certain situations. It shows a trend in the
development of the system with an added entropy, i.e., there is a degradation of system
performance. On the other hand, the answer to Q2, Advanced computer users and
Network and programming skills, shows a trend in the development of the system with
a decrease in entropy, i.e., there is an increase in system performance. Figure 1
describes such answers regarding the different forms of energy, i.e., exergy, vagueness
and anergy. Considering that the markers on the axis correspond to one of the possible
scale options, each system behaves better as entropy decreases, which is the case with
respect to Q2, whose entropic states are evaluated as untainted energy (in the form of
Best/Worst case scenarios) as shown in Table 2.
Table 1. [An individual’s answers to the TAFQ – 4 questions Q1–Q4, marked on the
nine-point scale (5) (4) (3) (2) (1) (2) (3) (4) (5) with an additional vagueness column.]
Fig. 1. [A graphical representation of the exergy, vagueness and anergy associated with
each answer to the TAFQ – 4 questions.]
Fig. 2. A graphical representation of the energy consumed in terms of a single answer to the
TAFQ – 4 questions.
The data collected above may now be structured in terms of the extent of the predicate
threat agent factors questionnaire (tafq – 4) in the form, viz.
a construct that speaks for itself, whose extent and formal description follow (Table 3
and Program 1).
Program 1. The extent of the tafq – 4 predicate for the best case scenario.
The evaluation of CSRA and QoI for the different items that make up the TAFQ – 4
is now given in the form, viz.
• CSRA is figured out using CSRA = √(1 − ES²) (Fig. 3), where ES stands for the
exergy that may have been consumed in the Best-case scenario (i.e.,
ES = exergy + vagueness), a value that ranges in the interval 0…1.
Table 2. Evaluation of the Best and Worst-case scenarios for the TAFQ – 4 questions regarding
their entropic states.

Table 3. The extent of the tafq – 4’s predicate from a person’s answers to TAFQ – 4.

Questionnaire | Ex BCS | VA BCS | CSRA BCS | QoI BCS | Ex WCS | VA WCS | CSRA WCS | QoI WCS
TAFQ – 4     | 0.29   | 0.36   | 0.76     | 0.35    | 0.66   | 0      | 0.75     | 0.33

CSRA = √(1 − (0.29 + 0.36)²) = 0.76

Fig. 3. [CSRA plotted against ES, both ranging over the interval 0…1.]
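As a minimal sketch, the CSRA expression above can be reproduced in a few lines of Python; the helper name `csra` is ours, and the inputs are the TAFQ – 4 best- and worst-case values from Table 3.

```python
from math import sqrt

def csra(exergy: float, vagueness: float) -> float:
    """CSRA = sqrt(1 - ES^2), where ES = exergy + vagueness (all in 0...1)."""
    es = exergy + vagueness
    return sqrt(1 - es ** 2)

# Best-case scenario for TAFQ-4 (exergy = 0.29, vagueness = 0.36).
print(round(csra(0.29, 0.36), 2))  # → 0.76
# Worst-case scenario (exergy = 0.66, vagueness = 0).
print(round(csra(0.66, 0.00), 2))  # → 0.75
```

Both values match the CSRA columns of Table 3.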
3 Case Study
To complement the process of data collection, and to allow the CSRA factors to be
assessed on a more comprehensive basis, three more questionnaires were considered.
The first is entitled the Cyber Security Questionnaire-Four-Item (CSQ – 4), which was
designed to assess individual differences in the proneness to take risks when using the
information technology infrastructure. It is given in the form, viz.
Q1 – How may an exploit be detected?
Q2 – How does the organization take countermeasures to block these attempts?
Q3 – How aware are you of the difference between Symmetric and Asymmetric
encryption? and
Q4 – How aware are you of how to protect your home Wireless Access Point?
For this questionnaire the answer scale was confined to the following options, viz.
Very effectively (4), effectively (3), ineffectively (2), not effectively at all (1), ineffectively (2),
effectively (3), very effectively (4)
Moreover, a neutral term is included, neither agree nor disagree with the IT
security appraisal, which stands for uncertain or vague. The individual’s
answers are given in relation to the query, viz.
As an individual, how much would you agree with the valuation of each individual
answer to the CSQ – 4 referred to above?
Another questionnaire, entitled the Self-Assessment and Identity Questionnaire-
Three-Item (SAIQ – 3), is set as follows, viz.
Q1 – How much data could be disclosed and how sensitive is it?
Q2 – How conscientious are you about preventing unauthorized access and use? and
Q3 – How many encrypted backups have you made to store the information?
For these two questionnaires the answer scale was confined to the following
options, viz.
Very much indeed (4), much (3), not too much (2), not much at all (1), not too much (2), much
(3), Very much indeed (4)
Moreover, a neutral term is included, neither agree nor disagree with the IT
security appraisal, which stands for uncertain or vague. The individual’s
answers are given in relation to the query, viz.
As an individual, how much would you agree with the valuation of each individual
answer to the SAIQ – 3 and BIFQ – 6 referred to above?
[Table: an individual’s answers to the CSQ – 4 (Q1–Q4), SAIQ – 3 (Q1–Q3) and
BIFQ – 6 (Q1–Q6) questions, marked on the seven-point scale (4) (3) (2) (1) (2) (3) (4)
with an additional vagueness column.]
Table 5. The threat agent factors questionnaire (tafq – 4), cyber security questionnaire (csq –
4), self-assessment and identity questionnaire (saiq – 3) and business impact factors
questionnaire (bifq – 6) predicates’ scopes obtained according to the individual answers to the
TAFQ – 4, CSQ – 4, SAIQ – 3, and BIFQ – 6 questionnaires.

Questionnaire | Exergy BCS | Vague BCS | CSRA BCS | QoI BCS | Exergy WCS | Vague WCS | CSRA WCS | QoI WCS
TAFQ – 4     | 0.29       | 0.36      | 0.76     | 0.35    | 0.66       | 0         | 0.75     | 0.33
CSQ – 4      | 0.28       | 0.42      | 0.71     | 0.30    | 0.75       | 0         | 0.66     | 0.25
SAIQ – 3     | 0.22       | 0.37      | 0.81     | 0.41    | 0.65       | 0         | 0.47     | 0.12
BIFQ – 6     | 0.34       | 0.12      | 0.93     | 0.54    | 0.78       | 0         | 0.68     | 0.27
4 Computational Make-Up
The following describes a mathematical logic program that, through insights subject to
formal proof, allows one to understand and even adapt the actions and attitudes of
individuals or groups, and through them the organization as a whole, i.e., to assess the
impact on the functioning and performance of the organization through logical
inference. Such a system is not programmed for specific tasks; rather, it is told what it
needs to know and is expected to infer the rest.
Program 2. The make-up of the logic program or knowledge base for a user answer.
where ¬ denotes strong negation and not stands for negation-by-failure. It is now
possible to use this data to train an Artificial Neural Network (ANN) [22, 23] (Fig. 4) in
order to obtain on-the-fly evaluations of the Cyber Security Risk Assessment (CSRA),
plus a measure of its Sustainability; indeed, the ANN approach to data processing
enables one to process the data in relation to a system context. For example, for an
enterprise with 30 (thirty) users, the training set may be obtained by proving the
theorem, viz.
in every way possible, i.e., generating all the different possible sequences that combine
the extents of the predicates tafq – 4, csq – 4, saiq – 3 and bifq – 6, viz.
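This combinatorial generation can be sketched with the standard library. As a simplifying assumption for illustration, each predicate contributes only its best- and worst-case extents (the Table 5 values), giving 2⁴ combinations per user; the dictionary layout is ours.

```python
from itertools import product

# (exergy, vagueness, CSRA, QoI) extents for the best- and worst-case
# scenarios of each predicate (values taken from Table 5).
extents = {
    "tafq-4": [(0.29, 0.36, 0.76, 0.35), (0.66, 0.00, 0.75, 0.33)],
    "csq-4":  [(0.28, 0.42, 0.71, 0.30), (0.75, 0.00, 0.66, 0.25)],
    "saiq-3": [(0.22, 0.37, 0.81, 0.41), (0.65, 0.00, 0.47, 0.12)],
    "bifq-6": [(0.34, 0.12, 0.93, 0.54), (0.78, 0.00, 0.68, 0.27)],
}

# All possible sequences combining one extent per predicate: 2^4 = 16 cases,
# each a candidate training example for the ANN.
training_set = list(product(*extents.values()))
print(len(training_set))  # → 16
```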
In terms of the output of the ANN, the evaluation of the CSRA is considered, i.e.,
its implication on System Performance, which may be weighed up in the form, viz.
Fig. 4. An abstract view of the topology of the ANN for assessing Cyber Security Risk.
CSRA = (CSRA_tafq-4 + CSRA_csq-4 + CSRA_saiq-3 + CSRA_bifq-6) / 4
     = (0.76 + 0.71 + 0.81 + 0.93) / 4 = 0.80
and, viz.
QoI = (QoI_tafq-4 + QoI_csq-4 + QoI_saiq-3 + QoI_bifq-6) / 4
    = (0.35 + 0.30 + 0.41 + 0.54) / 4 = 0.40
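These two averages can be checked directly; the per-questionnaire CSRA and QoI values are those of Table 5.

```python
from statistics import mean

# Best-case CSRA and QoI per questionnaire (Table 5).
csra_scores = {"tafq-4": 0.76, "csq-4": 0.71, "saiq-3": 0.81, "bifq-6": 0.93}
qoi_scores = {"tafq-4": 0.35, "csq-4": 0.30, "saiq-3": 0.41, "bifq-6": 0.54}

print(round(mean(csra_scores.values()), 2))  # → 0.8
print(round(mean(qoi_scores.values()), 2))   # → 0.4
```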
5 Conclusions
This study focused on the human factors that characterize each CSRA and on how the
perception of such factors contributes to CSRA vulnerability. As future work, and
considering how social factors may shape the perception of the cyber security
environment, we intend to look at ways to characterize the environment according to
the cyber security modes. We will also consider the implementation of new
questionnaires and the process of collecting data from a wider audience.
Acknowledgments. This work has been supported by FCT – Fundação para a Ciência e Tec-
nologia within the R&D Units Project Scope: UIDB/00319/2020.
References
1. Zhang, K., Ni, J., Yang, K., Liang, X., Ren, J., Shen, X.: Security and privacy in smart city
applications: challenges and solutions. IEEE Commun. Mag. 55(1), 122–129 (2017)
2. Khatoun, R., Zeadally, S.: Cybersecurity and privacy solutions in smart cities. IEEE
Commun. Mag. 55(3), 51–59 (2017)
3. Gaur, A., Scotney, B., Parr, G., McClean, S.: Smart city architecture and its applications
based on IoT. Procedia Comput. Sci. 52, 1089–1094 (2015)
4. Ijaz, S., Shah, M., Khan, A., Mansoor, A.: Smart cities: a survey on security concerns. Int.
J. Adv. Comput. Sci. Appl. 7(2), 612–625 (2016)
5. ISO/IEC 27001 Information security management. https://www.iso.org/isoiec-27001-
information-security.html. Accessed 19 Nov 2019
6. COBIT: Information Systems Audit and Control Association, Control Objectives for
Information and Related Technology, 5th edn. IT Governance Institute (2019)
7. OGC: Official Introduction to the ITIL Service Lifecycle, Stationery Office, Office of
Government Commerce. https://www.itgovernance.co.uk. Accessed 23 Nov 2019
8. Armin, A., Junaibi, R., Aung, Z., Woon, W., Omar, M.: Cybersecurity for smart cities: a
brief review. Lecture Notes in Computer Science, vol. 10097, pp. 22–30 (2017)
9. Financial Industry Regulatory Authority: Financial Industry Regulatory Practices. https://
www.finra.org/file/report-cybersecurity-practices. Accessed 22 Nov 2019
10. National Institute of Standards and Technology: Cybersecurity Framework. https://www.
nist.gov/sites/default/files/documents/cyberframework/cybersecurity-framework-021214.pdf
. Accessed 22 Nov 2019
11. SANS Institute: Critical Security Controls for Effective Cyber Defense. https://www.sans.
org/critical-security-controls. Accessed 22 Nov 2019
12. ISO 27032 - Information technology – Security techniques – Guidelines for cybersecurity.
https://www.iso.org/standard/44375.html. Accessed 22 Nov 2019
13. Liu, C., Tan, C.-K., Fang, Y.-S., Lok, T.-S.: The security risk assessment methodology.
Procedia Eng. 43, 600–609 (2012)
14. Lanz, J.: Conducting information technology risk assessments. CPA J. 85(5), 6–9 (2015)
15. Tymchuk, O., Iepik, M., Sivyakov, A.: Information security risk assessment model based on
computing with words. MENDEL Soft Comput. J. 23, 119–124 (2017)
16. Amini, A., Norziana, J.: A comprehensive review of existing risk assessment models in
cloud computing. J. Phys: Conf. Ser. 1018, 012004 (2018)
17. European Union Agency for Network and Information Security (ENISA). https://www.
smesec.eu. Accessed 22 Nov 2019
18. Ribeiro, J., Alves, V., Vicente, H., Neves, J.: Planning, managing and monitoring
technological security infrastructures. In: Machado, J., Soares, F., Veiga, G. (eds.)
Innovation, Engineering and Entrepreneurship. Lecture Notes in Electrical Engineering,
vol. 505, pp. 10–16. Springer, Cham (2019)
19. Caralli, R.A., Stevens, J.F., Young, L.R., Wilson, W.R.: Introducing OCTAVE Allegro:
improving the information security risk assessment process. Technical report CMU.
Software Engineering Institute (2007)
20. Neves, J.: A logic interpreter to handle time and negation in logic databases. In: Muller, R.,
Pottmyer, J. (eds.) Proceedings of the 1984 Annual Conference of the ACM on the 5th
Generation Challenge, pp. 50–54. Association for Computing Machinery, New York (1984)
21. Cortez, P., Rocha, M., Neves, J.: Evolving time series forecasting ARMA models.
J. Heuristics 10, 415–429 (2004)
22. Fernández-Delgado, M., Cernadas, E., Barro, S., Ribeiro, J., Neves, J.: Direct Kernel
Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form
weight calculation. J. Neural Netw. 50, 60–71 (2014)
23. Wenterodt, T., Herwig, H.: The entropic potential concept: a new way to look at energy
transfer operations. Entropy 16, 2071–2084 (2014)
24. Neves, J., Maia, N., Marreiros, G., Neves, M., Fernandes, A., Ribeiro, J., Araújo, I., Araújo,
N., Ávidos, L., Ferraz, F., Capita, A., Lori, N., Alves, V., Vicente, N.: Entropy and
organizational performance. In: Pérez García, H., Sánchez González, L., Castejón Limas,
M., Quintián Pardo, H., Corchado Rodríguez, E. (eds.) Hybrid Artificial Intelligent Systems.
Lecture Notes in Computer Science, vol. 11734, pp. 206–217. Springer, Cham (2019)
25. OWASP Open Cyber Security Framework Project. https://www.owasp.org/index.php/
OWASP_Open_Cyber_Security_Framework_Project. Accessed 21 Nov 2019
26. OWASP Risk Rating Methodology. https://www.owasp.org/index.php/OWASP_Risk_
Rating_Methodology. Accessed 21 Nov 2019
How to Assess the Acceptance of an Electronic
Health Record System?
Abstract. Being able to access a patient’s clinical data in due time is critical to
any medical setting. Clinical data is very diverse both in content and in terms of
which system produces it. The Electronic Health Record (EHR) aggregates a
patient’s clinical data and makes it available across different systems. Considering
that users’ resistance is a critical factor in system implementation failure,
the understanding of user behavior remains a relevant object of investigation.
The purpose of this paper is to outline how we can assess the technology
acceptance of an EHR using the Technology Acceptance Model 3 (TAM3) and
the Delphi methodology. An assessment model is proposed in which findings
are based on the results of a questionnaire answered by health professionals
whose activities are supported by the EHR technology. In the case study sim-
ulated in this paper, the results obtained showed an average of 3 points and
modes of 4 and 5, which translates to a good level of acceptance.
1 Introduction
Health information technologies, such as the Electronic Health Record (EHR), and
information management are fundamental in transforming the health care industry [4].
The flow of information in any hospital environment can be characterized as highly
complex and heterogeneous. Its availability across systems in due time is critical to the
success of clinical processes. Thus, the implementation and use of information systems
that aggregate patient data can facilitate the work of health professionals and maximize
their productivity. However, this is only possible if the system is fully accepted by its
users.
This paper aims to outline how the level of acceptance of an EHR can be assessed
through the combination of the Technology Acceptance Model (TAM) and the Delphi
methodology. A simulation was performed through the application of these method-
ologies in a case study that evaluates the level of acceptance of the EHR used in the
Intensive Care Unit (ICU) of Centro Hospitalar do Porto (CHP). The assessment is
based on the application of a questionnaire and subsequent statistical analysis of the
results. The results were produced by an algorithm that generated responses to the
questionnaire according to the characteristics of the questions. The simulation was
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 466–475, 2020.
https://doi.org/10.1007/978-3-030-45697-9_45
designed to represent various possible results and outline how its analysis can be
performed. The use of a simulated environment also ensured data integrity and
anonymity. The analysis process was optimized to facilitate its replication in a realistic
scenario, where the questionnaire should be answered by health professionals whose
activities are supported by the EHR technology. The replication of the proposed
assessment model will make it possible to evaluate the level of user acceptance, to
identify the factors that influence health professionals’ resistance to the EHR, and to
put forward a set of improvements which will increase user acceptance.
This paper is composed of five sections. The first section introduces the study. The
second section defines relevant concepts. The third section presents the assessment
model proposed. The fourth section describes the application of the model in a case
study. Finally, the conclusions are presented in the fifth section.
2 Background
2.1 Intensive Medicine
Intensive Medicine is a multidisciplinary field in health care focused on the prevention,
diagnosis and treatment of patients with dysfunction or failure of one or more
organs, particularly respiratory and cardiovascular systems [1, 9]. These patients are
admitted to Intensive Care Units (ICU), which are specially prepared to continuously
monitor vital functions and offer mechanical or pharmacological support [1, 3]. Due to
the high complexity and severity of the cases handled in the ICU, it is essential that
health professionals make the right decisions in a timely manner. However, the
decision-making process can be hindered by the extensive amount of data generated
across different systems and hospital services.
The Technology Acceptance Model (TAM) rests on two main constructs: Perceived
Usefulness (PU) and Perceived Ease of Use (PEOU). PU
is “the degree to which a person believes that using a particular system would enhance
his or her job performance”, while PEOU can be defined as “the degree to which a
person believes that using a particular system would be free of effort” [5]. A second
version of this model was proposed to further specify the external variables that
determine the PU. These can be categorized in terms of social influence (subjective
norms) and cognitive instrumental processes (image, job relevance, output quality and
results demonstrability) [17]. Another version was proposed in the same year that
defines the variables that determine the PEOU. These can be divided into anchors
(computer self-efficacy, perceptions of external control, computer anxiety and com-
puter playfulness) and adjustments (perceived enjoyment and objective usability) [15].
More recently, the Technology Acceptance Model 3 (TAM 3) combines all the vari-
ables that determine both constructs (PU and PEOU) and presents new relationships
regarding user experience [16]. TAM 3 is comprised of four constructs: PU, PEOU,
Behavioral Intention (BI) and Use Behavior (UB).
The Delphi methodology is an iterative process of application of questionnaires
used to obtain a consensus regarding a specific matter [11]. This method consists of
collecting and analyzing the results of each questionnaire and, subsequently, creating a
new round of questionnaires based on those results. The process ends once all parties
come to a satisfactory agreement. The participant pool should include field experts with
similar cultural and cognitive levels while also representing different points of view
within the study area [18]. By using this methodology, we can determine and predict a
group’s behaviors, needs and priorities [11].
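The iterative Delphi loop described above can be sketched in a few lines. The agreement measure and the 0.7 threshold below are illustrative assumptions for the sketch, not values taken from the paper.

```python
from statistics import pstdev

def agreement(answers):
    """Crude agreement proxy: 1 minus the normalised spread of the answers."""
    spread = pstdev(answers)            # 0 when every participant answers alike
    return 1.0 - min(spread / 2.0, 1.0)

def delphi(run_round, threshold=0.7, max_rounds=5):
    """Repeat questionnaire rounds until the panel converges."""
    history = []
    for round_no in range(1, max_rounds + 1):
        answers = run_round(round_no, history)   # collect this round's answers
        history.append(answers)
        if agreement(answers) >= threshold:      # consensus reached
            return round_no, answers
    return max_rounds, history[-1]

# Example: a panel that converges on "agree" by the second round.
panel = {1: [2, 4, 5, 3, 4], 2: [4, 4, 4, 4, 5]}
rounds, final = delphi(lambda r, h: panel[r])
```

Each round's answers feed the design of the next round's questionnaire, which is what `history` stands for here.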
The assessment of TAM 3 constructs can be achieved through the application of
questionnaires. The combination of this model (quantitative method) with the Delphi
methodology (qualitative method) makes it possible to evaluate the acceptance of a certain
technology while reducing the level of uncertainty and ensuring the presence of
complementary views, which will increase the quality of the results [11].
3 Assessment Model
To assess the level of acceptance of an EHR through the combination of TAM and
Delphi, a questionnaire must be designed based on both methodologies. The first step is
to structure the questionnaire in sections. Table 1 shows how sections should be
structured, the motivation behind each group of items and how these should be
evaluated.
A 5-point Likert scale [6] is applied for items designed to evaluate the TAM
constructs PU, PEOU, BI and UB. This scale allows the participant to specify their
level of agreement with a certain statement [12]. The use of a short 5-point scale, with
two negative values (1, 2), two positive values (4, 5) and a neutral value (3), narrows
the results, avoiding their dispersion and reducing inaccuracy [10].
Table 1. Questionnaire structure.
Section | Goal | Evaluation
Level of technological experience | Understand system user types and assess their level of experience regarding computer use in day-to-day activities | Answer options depend on the type of question
Overall system functioning | Provide an overall view of the system by assessing global characteristics and functionalities | Likert scale
Technical and functional characteristics | Evaluate technical and functional characteristics of specific system panels/sections | Likert scale
Additional comments | Promote further comments from the participants | Free text field
Considering the structure proposed, a sample of items for each questionnaire sec-
tion is offered in Table 2.
Table 2. (continued)
Section | Item | Answer Options
Technical and functional characteristics | Does the image enhance the registration/consultation of procedures? | 1 – Strongly disagree; 2 – Disagree; 3 – Neither agree nor disagree; 4 – Agree; 5 – Strongly agree
Additional comments | In your opinion, what are the major issues in the system? | Free text field
To ensure that each TAM construct is evaluated by at least one item, it is necessary
to show the relationship between questions and constructs. Table 3 shows an example
of how these relations can be represented through a matrix. Each table row should be
read as “Item A evaluates constructs PU and PEOU”.
After obtaining answers to the questionnaire, the results must be analyzed. The
analysis process is divided into two phases: technological experience analysis and
univariate statistical analysis. The first aims to better understand system user types
regarding experience in technology. The second phase consists of several statistical
analyses by participant, item, TAM construct and questionnaire section. Table 4 shows
examples of indicators and metrics that can be used in the analysis.
The coefficient selected to analyze the level of agreement between answers was
Kendall’s tau [2]. This is a non-parametric correlation coefficient which evaluates the
correlation between two ordinal variables. Negative values (closer to −1) represent a
greater divergence between answers while positive values (closer to 1) mean a greater
level of agreement.
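A minimal pure-Python sketch of this agreement analysis, using the tau-a variant of Kendall's coefficient (no tie correction), can look like this:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a between two equal-length ordinal answer vectors."""
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1     # pair ordered the same way in both vectors
        elif s < 0:
            discordant += 1     # pair ordered in opposite ways
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)

# Identical answer orderings agree perfectly (tau = 1);
# fully reversed orderings diverge completely (tau = -1).
identical = kendall_tau([1, 2, 3, 4], [1, 2, 3, 4])
reversed_ = kendall_tau([1, 2, 3, 4], [4, 3, 2, 1])
```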
The application of TAM to assess the EHR can also result in a SWOT analysis.
This technique can be used to help identify strengths and weaknesses of the EHR
system, factors/threats that influence user resistance and, subsequently, to put forward a
set of improvements/opportunities which will increase acceptance.
4 Case Study
The evaluation model presented in the previous section was applied to a case study.
The goal was to assess the level of acceptance of the EHR used in the ICU of CHP.
The questionnaire created is composed of 41 items divided into 12 sections. The
first section assesses the level of technological experience of the participants. Section 2
evaluates global characteristics and functionalities of the system. Sections 3 through 11
assess functional and technical characteristics of different panels within the EHR
system, such as: Header, Explorer, Discharge Notes, Problems, Daily Round Checklist,
Procedures, Requests, Appointments and Clinical Research. These sections are eval-
uated by a 5-point Likert scale. Finally, a free text field was provided in the last section
to accommodate additional comments. The relationships between items and TAM
constructs are presented in Table 5.
Table 5. (continued)
Item PU PEOU BI UB
2.10. Increases productivity? X X X X
2.11. Facilitates decision-making? X X X X
2.12. Are section/panel titles correct? X X – –
3. Header
3.1. Is MCDT information (upper left corner) relevant? X – – –
3.2. Is patient data enough? X – – –
3.3. Are the hospitalization details (upper right corner) enough? X – – –
3.4. Does the information layout facilitate system use? – X – X
3.5. Is the position of the “Sair” and “Actualizar” buttons adequate? – X – –
3.6. Are all tabs (Alertas, Mensagens, etc.) necessary and relevant? X – – –
4. Explorer
4.1. Allows to efficiently consult information? X X – X
4.2. Is all information necessary and relevant? X – – –
5. Discharge Notes
5.1. Allows to efficiently register information? X X – X
5.2. Allows to efficiently consult information? X X – X
5.3. Is the number of fields adequate for decision-making? X – – –
5.4. Are all fields necessary and relevant? X – – –
6. Problems
6.1. Allows to efficiently register information? X X – X
6.2. Is the number of fields adequate for decision-making? X – – –
6.3. Are all fields necessary and relevant? X – – –
7. Daily Round Checklist
7.1. Allows to efficiently register information? X X – X
7.2. Is the number of fields adequate for decision-making? X – – –
7.3. Are all fields necessary and relevant? X – – –
8. Procedures
8.1. Allows to efficiently consult information? X X – X
8.2. Allows to efficiently register information? X X – X
8.3. Does the image enhance the registration/consultation of X X X –
procedures?
8.4. Does the information layout facilitate decision-making? – X – X
9. Requests
9.1. Allows to efficiently register information? X X – X
10. Appointments
10.1. Allows to efficiently register information? X X – X
11. Clinical Research
11.1. Allows to efficiently register information? X X – X
12. Closing Remarks
12.1. What are your main issues with the system? What – – – –
improvements would you like to see implemented?
Number of items 31 23 5 20
Percentage of total (%) 75.6 56.1 12.2 48.8
How to Assess the Acceptance of an Electronic Health Record System? 473
5 Results
One hundred answers to the questionnaire were generated by an algorithm that
produced responses consistent with the characteristics of each question. The
results were then analyzed in two phases: technological experience analysis and univariate
statistical analysis. The first aims to understand the participants' level of experience
with the use of a computer in daily activities. An example is presented in
Table 6. The percentage of autonomous users in this case is 68%, which means the
participants had an acceptable level of experience with the use of a computer. Thus,
any issues with the system would not be the result of technological inexperience by its
users.
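The simulation step can be sketched as follows. The number of Likert items and the uniform answer distribution are illustrative assumptions, not the generator actually used in the study.

```python
import random
from statistics import mean, mode, pstdev

random.seed(42)                            # reproducible simulation
N_PARTICIPANTS, N_LIKERT_ITEMS = 100, 39   # item count is an assumption

# Generate one vector of 5-point Likert answers per simulated participant.
answers = [[random.randint(1, 5) for _ in range(N_LIKERT_ITEMS)]
           for _ in range(N_PARTICIPANTS)]

# Univariate statistics per participant, as in the second analysis phase.
per_participant = [(mean(a), mode(a), pstdev(a)) for a in answers]
overall_mean = mean(m for m, _, _ in per_participant)
```

The same summary can be repeated per item, per TAM construct and per questionnaire section by grouping the columns instead of the rows.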
In the second phase of analysis, different statistical properties were used: mean,
mode, standard deviation and correlation coefficient. A global analysis was performed
by participant and by item. Both analyses showed similar results with an overall
average of 3 points and standard deviation values close to 0. The correlation values in
this analysis were mostly positive, which indicates a good level of agreement among
the participants. Results were also analyzed by construct and section. The global results
from both analyses are aggregated in Table 7. It can be observed that:
• Mean values are close to 3 points;
• Mode values are mostly of 4 and 5 points;
• All TAM constructs have similar results, but the best evaluated was BI, with a mean of
3.08 and a mode of 5;
• Section 4 obtained the best results, with a mean of 3.98 and a mode of 5;
• Section 5 had the lowest level of acceptance, with a mean of 2.93 and a mode of 2.
474 C. Fernandes et al.
6 Conclusion
The assessment model presented in this paper successfully combines the constructs of
TAM3 and the Delphi methodology to evaluate the acceptance of an EHR system.
A structure for the questionnaires is proposed along with examples of possible items
per section and the evaluation scale to be used.
This paper also suggests the type of results analysis that should be performed, with its
indicators and metrics. The model is then applied to a case study to assess the EHR in
the ICU of CHP. The results obtained by this simulation showed an average of 3 points
and modes of 4 and 5, which translates to a good level of acceptance. The application
of the model in a real-life scenario will help in identifying the factors that influence the
user’s resistance to the system and, then, to put forward a set of improvements which
will increase acceptance.
In the future, the model proposed can be improved and extended as more accep-
tance assessments are performed.
Acknowledgments. This work has been supported by FCT – Fundação para a Ciência e
Tecnologia within the Project Scopes UID/CEC/00319/2019 and DSAIPA/DS/0084/2018.
References
1. Bennett, D., Bion, J.: ABC of intensive care: organisation of intensive care. BMJ 318(7196),
1468–1470 (1999). https://doi.org/10.1136/bmj.318.7196.1468
2. Bolboaca, S.-D., Jäntschi, L.: Pearson versus Spearman, Kendall’s tau correlation analysis
on structure-activity relationships of biologic active compounds. Leonardo J. Sci. 5(9), 179–
200 (2006)
3. Braga, A., Portela, F., Santos, M.F., Machado, J., Abelha, A., Silva, Á., Rua, F.: Step
towards a patient timeline in intensive care units. Procedia Comput. Sci. 64, 618–625 (2015).
https://doi.org/10.1016/j.procs.2015.08.575
4. Chaudhry, B., Wang, J., Wu, S., Maglione, M., Mojica, W., Roth, E., Shekelle, P.G.:
Systematic review impact of health information technology on quality, efficiency, and costs
of medical care. Ann. Intern. Med. 144(10), 742–752 (2006)
5. Davis, F.D.: A technology acceptance model for empirically testing new end-user
information systems: Theory and results. Management, Ph.D. (April), 291 (1986)
6. Johns, R.: Survey question bank: methods fact sheet 1 - Likert items and scales. Univ.
Strathclyde 1(March), 1–11 (2010). https://doi.org/10.1108/eb027216
7. Marinho, R., Machado, J., Abelha, A.: Processo Clínico Electrónico Visual. In: INForum
2010 : Actas Do II Simposio de Informatica, May 2014, pp. 767–778 (2010)
8. Novo, A., Duarte, J., Portela, F., Abelha, A., Santos, M.F., Machado, J.: Information systems
assessment in pathologic anatomy service. Adv. Intell. Syst. Comput. 354, 199–209 (2015).
https://doi.org/10.1007/978-3-319-16528-8_19
9. Paiva, J., Fernandes, A., Granja, C., Esteves, F., Ribeiro, J., Nóbrega, J., Coutinho, P.: Rede
de referenciação de medicina intensiva. Redes de Referenciação Hospitalar, 1–87 (2016).
https://bit.ly/2UqG7SY
10. Portela, F., Aguiar, J., Santos, M.F., Abelha, A., Machado, J., Rua, F.: Assessment of
technology acceptance in Intensive Care Units. Adv. Intell. Syst. Comput. 279–292 (2013)
https://doi.org/10.4018/ijssoe.2014070102
11. Santos, L.D.D., Amaral, L.: Estudos Delphi com Q-Sort sobre a web – A sua utilização em
Sistemas de Informação. In: Associação Portuguesa de Sistemas de Informação, December,
vol. 13 (2004)
12. Silva, P.M.D.: Modelo De Aceitação De Tecnologia (Tam) Aplicado Ao Sistema De
Informação Da Biblioteca Virtual Em Saúde (Bvs) Nas Escolas De Medicina Da Região
Metropolitana Do Recife (2008)
13. Surendran, P.: Technology acceptance model: a survey of literature. Int. J. Bus. Soc. Res. 2
(4), 175–178 (2012)
14. Tan, J.: E-health Care Information Systems: An Introduction for Students and Professionals.
Wiley, Hoboken (2005)
15. Venkatesh, V.: Determinants of perceived ease of use: integrating control, intrinsic
motivation, and emotion into the technology acceptance model. Inf. Syst. Res. 11(4), 342–
365 (2000). https://doi.org/10.1287/isre.11.4.342.11872
16. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on
interventions subject areas: design characteristics, interventions, management support,
organizational support, peer support, technology acceptance model (TAM), technology
adoption, training, User A. Decis. Sci. 39(2), 273–315 (2008)
17. Venkatesh, V., Davis, F.D.: A theoretical extension of the technology acceptance model:
four longitudinal field studies. Manag. Sci. 46, 186–204 (2000)
18. Zackiewicz, M., Salles-Filho, S.: Technological Foresight – Um instrumento para política
científica e tecnológica. Parcerias Estratégicas 6(10), 144–161 (2010)
An Exploratory Study of a NoSQL Database
for a Clinical Data Repository
1 Introduction
Since 1970, Relational Database Management Systems (RDBMS) have been the dominant
model for database management, used in most applications to store and retrieve data.
However, with the advancement of the Internet and the emergence of distributed
computing, new applications have come to require fast storage of large amounts of
data [1]. Thus, a new type of database, called NoSQL, has emerged to try to meet
these new challenges.
NoSQL appeared when organizations realized that RDBMS had a shortcoming in
terms of scalability, i.e., the adaptability of the system to the growth of resources and
users. RDBMS adopt “scaling up” techniques (vertical scalability), focusing only on
increasing the capabilities of a single machine, such as memory or CPU. Instead, NoSQL
databases adopt “scaling out” methods (horizontal scalability), focusing on increasing
the number of machines for better performance.
This new data storage technique has caused many properties and processes of a
traditional database system to undergo change. For example, transactional
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 476–483, 2020.
https://doi.org/10.1007/978-3-030-45697-9_46
2 Background
Following a key-value store, data is hashed into partitions through the primary key,
building replication groups, also called shards. Consequently, distributed data is
stored in Storage Nodes (SN), each representing a physical machine with its own memory,
storage and IP address. Each SN contains Replica Nodes (R) that perform writing and
reading functions [10]. Thus, the greater the number of SNs, the better the system
will perform.
4 Proposed Architecture
Nowadays, data production and the need to achieve results have been growing exponentially
worldwide, especially in healthcare. This advancement has been extremely
remarkable in recent years, resulting in improved patient care and a more robust
decision-making support.
After clinical data is registered by a health professional, each record is stored
in a different source, depending on its context and type. In line with approaches
adopted in previous studies, the open data model makes it possible to combine knowledge with
clinical information, following guidelines and clinically coded terms in a structured way.
480 F. Hak et al.
This required in-depth research on how to manipulate the data produced by the
healthcare organization and turn it into intelligent and optimized solutions. As
shown in Fig. 2, the new CDR requires a solution that can support large amounts of
data with flexibility, providing decision support.
Through the exploration of the Oracle NoSQL Database, some important concepts for
its correct performance have been identified. Data storage, represented by the last tier of
the architecture, is divided into Zones, which correspond to physical locations and,
according to the capacity of the system, can be of primary or secondary type. Storage
Nodes (SN) are represented within a Zone, corresponding to machines that perform both
data writing and reading functions [9].
Increasing the number of SNs in the system enables better performance and decreases
storage latency, as formalized by horizontal scalability. In addition,
for effective communication between the SNs, it is necessary to activate the agent
responsible for this function, the Storage Node Agent (SNA), and to verify its correct
operation [8].
According to the proposed architecture, based on N Storage Nodes and X Zones, only
one SN in one Zone was deployed for this case study, represented in grey by SN1 in
Fig. 3. For the correct activity of SN1, some configuration parameters were established,
such as the IP address of the respective machine and the communication ports, as well
as the system capacity and the administrative security system.
Regarding the distribution of data in the cluster, this is done by the sharding
technique, which distributes the data uniformly across the Shards in a set of Partitions. This is
a fundamental NoSQL method that aims to organize and distribute data between
machines through the primary key of each record, according to the key-value model, in
order not to overload the system.
For effective replication of the data, each Shard comprises a group of Replication
Nodes (R) that perform read functions, with the Master Node (M) being responsible for
writing. The master node always has the most up-to-date value for a given key, as
opposed to read Replicas, which can have slightly older versions [9]. Accordingly, the set
of Replication Nodes is called the Replication Factor (RF). In the case of the single-node
implementation applied here, the formula used to calculate the number of partitions
required was as follows [8]:
Partitions = 10 × (Capacity / Replication Factor) = 10 × (1 / 1) = 10
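The partition calculation can be restated in code for any store layout. The function below is a sketch of the formula from [8], not Oracle's implementation.

```python
def partitions(capacity: int, replication_factor: int) -> int:
    """Suggested number of partitions for a given store layout."""
    return 10 * capacity // replication_factor

# Single-node case study: Capacity = 1, Replication Factor = 1.
n_partitions = partitions(1, 1)   # 10 partitions
```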
5 Discussion
This article aimed to explore a solution and propose an architecture for the new
Clinical Data Repository. It must be qualified in volume, velocity, scalability and
elasticity, which matches NoSQL concepts. Thus, the Oracle NoSQL Database was
the technology chosen for the proposed architecture, with a single-node
deployment.
Furthermore, one of the main features that sparked interest in Oracle's NoSQL
database was the key-value storage. Being the simplest type of NoSQL data model,
the key-value model is comparable to arrays, dictionaries and hash tables,
mapping a key to a value. A key is a unique identifier and a value is the data identified,
a string of bytes of arbitrary length.
The data model is characterized as schema-free, since each record can have its own
structure, as opposed to relational models, giving flexibility to the database.
Key-value pairs are located in a Distributed Hash Table (DHT), allowing a node to
efficiently access a value through its key while scaling resources [6].
As mentioned before, data distribution is performed by Shards containing a hashed
set of records, or partitions, stored based on the primary key. Both the key and the value
are application-defined, given some loose restrictions imposed by the NoSQL Driver
[9]. In this way, the records inserted in the store are uniformly organized as key-value
pairs in partitions.
In Oracle NoSQL, data is stored in a particular shard depending on the hashed value
of the primary key of the table. The primary key is a combination of a major and a
minor key: the major component identifies the partition that contains a record, and
therefore the shard where it is stored, so all records with the same major key will be
co-located on the same server [10].
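This co-location behaviour can be illustrated with a toy hash. The md5 choice and the key names are assumptions for illustration, not Oracle NoSQL's actual hashing scheme.

```python
import hashlib

N_PARTITIONS = 10

def partition_of(major_key: str) -> int:
    """Only the major component of the primary key selects the partition."""
    digest = hashlib.md5(major_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % N_PARTITIONS

# Two records for the same patient share the major path "patients/p123",
# so they hash to the same partition and live on the same shard/server.
vitals_partition = partition_of("patients/p123")   # minor path: "vitals"
labs_partition = partition_of("patients/p123")     # minor path: "labs"
```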
With all of this data storage and management machinery, it is important that the
database is configured to achieve the desired performance. This requires that the records
are not concentrated on the same major key; otherwise the system will suffer performance
issues as data is entered.
The need to explore new solutions capable of supporting large amounts of heterogeneous
data led to the characterization of the NoSQL concept. NoSQL and Big Data
are also directly linked when it comes to large amounts of data: NoSQL
meets the requirements posed by the Big Data characteristics of Volume, Variety
and Velocity, the 3Vs that characterize Big Data.
Together, these represent the capacity to handle large amounts of data of various
types and structures, generating and querying data quickly in the store. The
article was thus developed to address the lack of scalability and speed of relational
database systems, leading to the exploration of the NoSQL concept as one of the
requirements imposed on the work developed.
As a result, the Oracle NoSQL database was the technology chosen for an in-depth
study of its functions and of data manipulation with the key-value store. The proposed
architecture for the Clinical Data Repository (CDR) follows the structure of this
technology.
The study concluded that Oracle's NoSQL tool has adequate functionality for the
required implementation, particularly in resource allocation and easier troubleshooting.
The key-value data schema is also attractive for future implementation, as it offers simple
and efficient data manipulation. Although there are some restrictions on its
installation, Oracle NoSQL raises high expectations for the implementation of
the new Clinical Data Repository.
Future work focuses on building an Oracle NoSQL Database application for the
CDR in a multi-node deployment for better system performance. This aims at
deepening clinical knowledge, improving the care service and supporting the
decision-making processes. Business Intelligence techniques for NoSQL databases will
also be explored as focal points of future work.
Acknowledgments. The work has been supported by FCT – Fundação para a Ciência e Tec-
nologia within the Project Scope UID/CEC/00319/2019 and DSAIPA/DS/0084/2018.
References
1. Shertil, M., Jowan, S., Swese, R., Aldabrzi, A.: Traditional RDBMS to NoSQL database:
new era of databases for big data. J. Humanit. Appl. Sci. 29, 83–102 (2016)
2. Costa, C., Santos, M.Y.: Big Data: state-of-the-art concepts, techniques, technologies,
modeling approaches and research challenges. IAENG Int. J. Comput. Sci. 43(3), 285–301
(2017)
3. Madison, M., Barnhill, M., Napier, C., Godin, J.: NoSQL database technologies. J. Int.
Technol. Inf. Manag. 24(1), 1–14 (2015)
4. Moniruzzaman, A.B.M., Hossain, S.A.: NoSQL database: new era of databases for big data
analytics - classification, characteristics and comparison. Int. J. Database Theor. Appl. 216
(2895), 43–45 (2013)
5. Anand, V., Rao, C.M.: MongoDB and Oracle NoSQL: a technical critique for design
decisions. In: Proceedings of the International Conference on Emerging Trends in
Engineering, Technology and Science (ICETETS 2016) (2016)
6. Abramova, V., Bernardino, J., Furtado, P.: Experimental evaluation of NoSQL databases.
Int. J. Database Manag. Syst. 6(3), 01–16 (2014)
7. Han, J., Haihong, E., Le, G., Du, J.: Survey on NoSQL database. In: 2011 6th International
Conference on Pervasive Computing and Applications, pp. 363–366. IEEE (2011)
8. Oracle: Oracle NoSQL Database: Fast, Reliable, Predictable, pp. 1–38, November 2018
9. Oracle: Oracle® NoSQL Database: Concepts Manual, April 2018
10. Oracle: Oracle® NoSQL Database: Getting Started with Oracle NoSQL Database Key/Value
API, August 2019
11. Einbinder, J.S., Scully, K.W., Pates, R.D., Schubart, J.R., Reynolds, R.E.: Case study: a data
warehouse for an academic medical center. J. Heal. Inf. Manag. 15(2), 165–175 (2001)
12. Gartner: Information Technology: Clinical Data Repository (2018)
13. Hamoud, A.K., Hashim, A.S., Awadh, W.A.: Clinical data warehouse: a review.
Iraqi J. Comput. Inform. 44(2), 1–11 (2018)
14. Collins, A., Joseph, D., Bielaczc, K.: Design research: theoretical and methodological issues.
Am. Heal. Drug Benefits 3(3), 171–178 (2004)
15. Kunda, D., Phiri, H.: A comparative study of NoSQL and relational database. Zambia ICT
J. 1(1), 1 (2017)
Clinical Decision Support Using Open Data
1 Introduction
When a patient goes to a healthcare unit, a set of data about their health is stored in an
Electronic Health Record (EHR) system. This practice aims primarily to eliminate the
use of paper and has been spreading on a large scale.
Commonly, a health facility integrates several heterogeneous systems that, in a sense,
speak different languages [1]. These systems must be interoperable, i.e., able to
communicate in a noticeable and effective manner. This effort is based on building
communication without data loss and on preserving the meaning of the content,
specifically referred to as semantic interoperability.
Thereby, the OpenEHR approach and clinical terminologies aim to achieve
universal interoperability between EHR systems [2]. The first structures archetypes to
represent clinical concepts, while the second is based on the use of structured vocabularies
corresponding to clinical terms. Thus, data exchange between systems does not
compromise the quality of the received information.
Nevertheless, a simple EHR system has become incapable of supporting decision-making
on a daily basis, because its primary role was merely to store and consult clinical records
[3]. Hence, this inefficiency propelled the need to explore new techniques for gaining
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 484–492, 2020.
https://doi.org/10.1007/978-3-030-45697-9_47
2 Background
2.1 Open Data
The Open Knowledge International foundation [4], the most important international
reference for open data, provided the first definition of open data as “data that can be used in
a free way, shared and built on by anyone, anywhere and for anything”. Consequently,
open knowledge is defined as “any content, information or data that society is free to
use, reuse and redistribute, without any legal, social or technological restriction”.
To summarize, the term “open data” has gained popularity in the transparency and
open-government movements around the world: it treats access to public
information as the rule, so that it can be freely used, modified and shared by anyone,
without financial or other restrictions, and it also applies to the health domain [5].
2.2 OpenEHR
The OpenEHR approach follows the open data and free access standard for health
information specifications and is used in the management, storage and querying of
electronic clinical data. The use of this model provides an interoperable framework that
organizes clinical content with patient information, thus enabling integration with
different health information systems [6].
The OpenEHR Foundation states that OpenEHR offers “multilevel single-source
modelling within a service-oriented software architecture where models built by
domain experts are in their own layer”. In this sense, the OpenEHR architecture
consists of two levels, in which information and knowledge are separated.
At the first level, the information model groups and defines the information processed
in the system for each patient, together with information components such as
quantity, text or date concepts. The second level holds the clinical knowledge, applied
in a structured and archetype-oriented manner according to the Archetype Definition
Language (ADL), promoting semantic interoperability [7].
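The two-level separation can be sketched in a few lines. The `Quantity` type, the blood-pressure "archetype" and its ranges below are illustrative assumptions for the sketch, not a real OpenEHR reference model or ADL archetype.

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    """Level 1: a generic information-model component (value + units)."""
    value: float
    units: str

# Level 2: knowledge, expressed as constraints over the information model.
# This dict is a simplified stand-in for a real ADL archetype definition.
systolic_archetype = {"units": "mm[Hg]", "min": 0.0, "max": 300.0}

def conforms(datum: Quantity, archetype: dict) -> bool:
    """Check a reference-model instance against the archetype constraints."""
    return (datum.units == archetype["units"]
            and archetype["min"] <= datum.value <= archetype["max"])

reading = Quantity(value=120.0, units="mm[Hg]")
```

The point of the separation is that the constraints (knowledge) can evolve independently of the generic data structures (information) that every patient record shares.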
3 Knowledge Acquisition
3.1 Identification
The need to acquire knowledge through the daily production of electronic health records
(EHR) has spread. However, it is essential to distinguish three concepts for a better
understanding of what is meant here by clinical knowledge.
According to Zins [18], data are unstructured facts resulting in a qualified or
quantified interpretation without context. Information is the contextualized and struc-
tured data, following a purpose. In this sense, knowledge is the information acquired
with theoretical and practical understanding based on experience, involving informed
individuals, practices and organizational norms in a given domain.
As the knowledge encompasses several areas of learning, the clinical scope is also
covered. Therefore, the clinical knowledge is defined by Winters-Miner et al. [19] as
the “cognitive understanding of a set of known clinical rules and principles based on
the medical literature, that guide decision-making processes”.
3.2 Conceptualization
In order to complement EHR systems in decision support, clinical knowledge
acquisition through clinical guidelines and practices is crucial for the validation of this
new process. For this purpose, it is necessary to define the essential components to
acquire the desired clinical knowledge.
Thereupon, the new process is based on the implementation of the OpenEHR
approach and clinical terminologies for the construction of the new open knowledge
model. Following the essence of the OpenEHR approach, clinical records inserted in a
given system are modelled on two levels [20].
OpenEHR architecture models data in two levels, sorting out information and
knowledge (Fig. 1). In this sense, information is defined as quantified and qualified
patient-oriented data, as well as their respective demographic data, grouped in the
OpenEHR reference model.
On the other hand, knowledge aggregates the clinical content based on medical
guidelines that are represented by archetypes, in order to form templates that document
such knowledge. For proper implementation, archetypes and templates must follow a
set of rules encoded in Archetype Definition Language (ADL).
The ADL defines the structure of the document that embodies medical knowledge,
represented by the templates. In order to code clinical terms, the terminologies are also
implemented on the structured template and applied directly during template
modelling.
To summarize, this mechanism separates these two concepts to allow the manip-
ulation of data in an organized manner, relating them by the need to fill the template
components with patient information.
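The terminology binding described above can be sketched as a small structure where each template component pairs a coded clinical term with a slot for patient information. The field name, the terminology label and the code below are illustrative placeholders, not values from the paper or real terminology codes.

```python
# A template component: coded clinical term plus an empty value slot.
template = {
    "systolic_blood_pressure": {
        "terminology": "SNOMED-CT",   # illustrative terminology name
        "code": "<code>",             # hypothetical placeholder, not a real code
        "value": None,                # filled with patient information later
    },
}

def fill(template: dict, field: str, value):
    """Fill a copy of a template component with patient information."""
    entry = dict(template[field])     # copy so the template stays reusable
    entry["value"] = value
    return entry

record = fill(template, "systolic_blood_pressure", 120)
```

Keeping the template separate from the filled record mirrors the separation of knowledge (the coded structure) from information (the patient data) described above.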
3.3 Formalization
The two-level modelling approach defines an archetype-based architecture which
provides the desired information of a given patient, combining clinical knowledge
without losing the clinical meaning of the content and preserving the confidentiality of
each data as intended by the patient [21].
OpenEHR methodology focuses on interoperability between systems, with the
adoption of open specifications and clinical content. Thus, the main purpose of such
adoption is to transform clinical records into a structured and interpretable model.
The formalization of this new process (Fig. 2) is initiated by the representation of the health professionals, who represent all entities in a health facility that have the authority to record both administrative and patient data in an EHR system.
Clinical Decision Support Using Open Data 489
Thus, the CDR is defined as a distributed real-time database that stores data originally entered in other clinical data sources [22]. Its function is to allow data to be queried in an easy and arbitrary way for the analysis of reports and results.
In order to represent clinical decision support practices, an architecture was pro-
posed that represents such activity using open clinical knowledge techniques (Fig. 3). It
is crucial to emphasize the need to distinguish activity and technology. Thus, the CDR
was also highlighted and was developed in parallel with another case study.
The final layer of decision support activity integrates the computer system that will
represent such acquired knowledge. A CDSS is a technology or tool that integrates
patient knowledge and clinical information in a structured way to support decisions and
actions in health care delivery. For that purpose, other analytical tools are also applied,
representing data analysis.
This case study aimed to explore components that provide clinical knowledge in order
to build a Clinical Decision Support (CDS) module to improve EHR systems and
quality in healthcare. Thus, this case study integrates knowledge acquisition methods
such as OpenEHR and clinical terminologies, providing a two-level modelling.
As a result of separating clinical records into information and knowledge, a set of
archetypes and a common controlled vocabulary are modelled to represent clinical
concepts and terms in a structured form. This set of activities characterizes the open knowledge model, which aggregates templates and visual forms following a clinical purpose.
Thereby, a set of rules for the decision support activity was applied, capable of ensuring the consistency and coherence of the information that arose from the use of such techniques, providing clinical knowledge in an organized and standardized way.
Overall, the clinical decision support activity is characterized by a knowledge base
through the open knowledge model. As a result, these interactions allow the system to
be faster, more interoperable, better organized and easier to use. In addition, the newly applied process allows professionals with no clinical expertise to intervene and contribute to the conceptualization and structuring of health information systems.
To sum up, the CDS activity complements a simple EHR system, providing clinical
guidelines and documented models, to find suitable standards for representing clinical
data in order to achieve decision support benefits in healthcare.
Future work is sustained by two approaches. Firstly, the use of open knowledge models will be continued in order to build all the necessary templates for the structured visualization of clinical information through forms. In the second instance,
the Clinical Data Repository will be explored and implemented, in real time, incor-
porating information and knowledge models.
Acknowledgments. The work has been supported by FCT – Fundação para a Ciência e Tec-
nologia within the Project Scope UID/CEC/00319/2019 and DSAIPA/DS/0084/2018.
References
1. Peixoto, H., Machado, J., Abelha, A.: Interoperabilidade e o Processo Clínico Semântico, no.
513, p. 8846 (2010)
2. Min, L., Tian, Q., Lu, X., Duan, H.: Modelling EHR with the openEHR approach: an
exploratory study in China. BMC Med. Inform. Decis. Mak. 18(1), 1–15 (2018)
3. Ribeiro, T., Oliveira, S., Portela, C., Santos, M.: Clinical workflows based on OpenEHR
using BPM (2019)
4. Open Knowledge International’s Foundation: Open Knowledge International Foundation
(2005)
5. Pires, M.T.: Guia de dados abertos. J. Chem. Inf. Model. 53(9), 1689–1699 (2015)
6. Filho, C.H.P., de Freitas Dias, T.F., Alves, D.: Arquétipos OpenEHR nas fichas do fluxo do
controle da tuberculose. Rev. da Fac. Med. Ribeirão Preto e do Hosp. das Clínicas da FMRP,
January 2014
7. César, H., Bacelar-Silva, G.M., Braga, P., Guimaraes, R.: OpenEHR-based pervasive health
information system for primary care: first Brazilian experience for public care. In:
Proceedings of the CBMS 2013 - 26th IEEE International Symposium on Computer-Based
Medical Systems, pp. 572–573 (2013)
8. Heiler, S.: Semantic interoperability. Encycl. Libr. Inf. Sci. Third Ed. 27(2), 4645–4662
(1995)
9. Park, H.-A., Hardiker, N.: Clinical terminologies: a solution for semantic interoperability.
J. Korean Soc. Med. Inform. 15(1), 1–11 (2009)
10. Rector, A.L.: Clinical terminology: why is it so hard? Methods Inf. Med. 38(4–5), 239–252
(2000)
11. Breant, C., Borst, F., Campi, D., Griesser, V., Momjian, S.: A hospital-wide clinical findings
dictionary based on an extension of the International Classification of Diseases (ICD). In:
Proceedings of the AMIA Symposium on ICD, pp. 706–710 (1999)
12. Cornet, R., Schulz, S.: Relationship groups in SNOMED CT. J. Sci. Islam. Repub. Iran
26(3), 265–272 (2009)
13. Friedman, C.P.: A ‘fundamental theorem’ of biomedical informatics. J. Am. Med. Inform.
Assoc. 16(2), 169–170 (2009)
14. International Health Terminology Standards Organisation (IHTSDO): Decision Support with
SNOMED CT. SNOMED CT Document Library (2018)
15. HIMSS: What is Clinical Decision Support System? (2016)
16. Collins, A., Joseph, D., Bielaczyc, K.: Design research: theoretical and methodological issues.
Am. Health Drug Benefits 3(3), 171–178 (2004)
17. Liou, Y.I.: Knowledge acquisition: issues, techniques and methodology, pp. 59–64 (1985)
18. Zins, C.: Conceptual approaches for defining data, information and knowledge. J. Am. Soc.
Inf. Sci. Technol. 58, 479–493 (2007)
19. Winters-Miner, L.A., et al.: Biomedical informatics. Pract. Predict. Anal. Decis. Syst. Med.
42–59 (2015)
20. Pereira, V.A.A.: Governance of an OpenEHR based local repository compliant with
OpenEHR International Clinical Knowledge Manager. J. Chem. Inf. Model. 53(9), 1689–
1699 (2018)
21. Santos, M.R., Bax, M.P., Kalra, D.: Building a logical EHR architecture based on ISO 13606
standard and semantic web technologies. Stud. Health Technol. Inform. 160(Part 1), 161–
165 (2010)
22. Nadkarni, P.: Clinical data repositories: warehouses, registries, and the use of standards.
Clin. Res. Comput. 173–185 (2016)
Spatial Normalization of MRI Brain Studies
Using a U-Net Based Neural Network
1 Introduction
Medical imaging, as the name implies, is an area that deals with the process of visu-
alizing the interior of the human body for medical purposes. There are several types of
medical imaging modalities, e.g. Magnetic Resonance Imaging (MRI), Computed
Tomography (CT), Positron Emission Tomography (PET) whose images may be
structural, functional or molecular depending on the study objective. Some of these
modalities can also be combined by using medical image fusion to take advantage of
the best parts of each modality [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 493–502, 2020.
https://doi.org/10.1007/978-3-030-45697-9_48
494 T. Jesus et al.
2 Materials
The dataset used, stored on a XNAT server, contained a total of 213 MPRAGE [17]
images that were properly anonymized to ensure the privacy of the subject. They were
divided into three groups of 71 in NIfTI format, with each image in a group having a
match in the other two. The groups were as follows: a group of original studies (the
raw data obtained on MRI without spatial normalization), studies with the skull
extracted from the brain (obtained from the corresponding original study) and finally
normalization of the first one (result of the normalization of the first group by an
already existing tool) which is used as the ground truth. From these groups, the
‘original’ was used as input for the deep learning model and the ‘normalized’ as output.
The brain extracted group was only used to preprocess the input for the model.
In order to build and train this model, a programming language, as well as a suitable working environment, was needed. The programming language chosen was Python,
which has very comprehensive imaging and DL libraries. Jupyter Notebooks was used
as the environment in which the tools described in this work were created. To create the
model architecture itself, Keras and TensorFlow were used. Keras is a high-level yet user-friendly library that provides high-level building blocks for developing Deep Artificial Neural Networks [8]. TensorFlow offers multiple levels of abstraction and extremely powerful structural bases for the network, especially when used with Keras, which makes it easy to use and allows fast prototyping. One main advantage of
these tools is the possibility of using the GPU as well as the CPU to train the model.
The computer on which this work was performed ran Ubuntu 16 as the operating system, with a 12-core Intel Xeon processor (with 64 GB of RAM) and an NVIDIA Quadro P6000 GPU (with 24 GB of GDDR5X dedicated memory).
3 Methods
to be re-trained. From all scans, 50 were used for training (about 70% of the dataset)
and the remaining 21 (about 30%) were used for testing. The train set is used to train the model and the test set evaluates the model's performance during training.
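The split described above (50 training and 21 test scans out of 71) can be reproduced with a short Python sketch; the scan identifiers and the shuffling seed are illustrative, not taken from the paper.

```python
import random

scan_ids = list(range(71))   # one id per MPRAGE study in the dataset
random.seed(42)              # arbitrary seed, for reproducibility only
random.shuffle(scan_ids)     # shuffle before splitting to avoid ordering bias

train_ids = scan_ids[:50]    # 50 scans, about 70% of the dataset
test_ids = scan_ids[50:]     # remaining 21 scans, about 30%
```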
After successfully loading the data from the files, the next step was to create and
compile the models and then train them.
The model is then compiled using the Dice Similarity Coefficient (DSC) as the basis of the loss function and Adam (Adaptive Moment Estimation), with a learning rate of 1 × 10⁻⁶, as the optimizer. As activation functions, ReLU and Leaky ReLU were strategically used.
Several learning rates were tried, but the one given above yielded the best results.
This module also contains helpful parameters such as the input size and callbacks
that can be used when training the model. The callbacks used were: ModelCheckpoint,
which stores the weights of the model at defined points of training, in this case, stores
the best possible weights of training; EarlyStopping which monitors the model training
and automatically terminates the training if the model does not learn as desired;
ReduceLROnPlateau which changes the learning rate (lr) of the model to adapt its
learning and hopefully get better results; Finally, PlotLossesKeras which implements a
plot that updates each epoch to display the model history, i.e. accuracy and loss at each
epoch. All of these callbacks are very helpful and improve the chances of getting good
results with the model.
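The EarlyStopping behaviour described above can be sketched in plain Python, independently of the actual Keras callback API: training stops once the monitored loss has failed to improve for `patience` consecutive epochs. The loss sequence and patience value below are illustrative.

```python
def train_with_early_stopping(losses_per_epoch, patience=3):
    """Return the number of epochs actually run before early stopping.
    `losses_per_epoch` stands in for the loss the model would report
    at the end of each training epoch."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses_per_epoch, start=1):
        if loss < best:        # improvement: remember it, reset patience
            best = loss
            wait = 0
        else:                  # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch   # stop training early
    return len(losses_per_epoch)
```

For example, with losses [1.0, 0.8, 0.9, 0.9, 0.9] and patience 3, training stops at epoch 5, three epochs after the last improvement.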
First, the model is created and compiled with the previously explained function.
Next, some important settings will be set, such as the number of epochs for training the
model as well as the batch size and the callbacks to use. As mentioned earlier, the
functions that are present in the callback part of the settings are specified in the model
module.
Both the number of epochs and the batch size have been changed to influence the
training of the model to improve model performance. Where the number of epochs
represents the number of passes through the training samples during training. The batch
size essentially indicates the number of samples submitted to the model at each time
point. As expected, a larger batch size means a higher number of images presented to
the model and, therefore, more memory is needed to hold all the data.
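The relation between dataset size, batch size and the number of parameter updates per epoch can be made concrete with a small sketch (the values are illustrative):

```python
import math

def batches_per_epoch(n_samples, batch_size):
    """Number of batches the model sees in one pass over the data;
    the last batch may be smaller than batch_size."""
    return math.ceil(n_samples / batch_size)

def iter_batches(samples, batch_size):
    """Yield successive batches of at most batch_size samples each."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]
```

With 50 training scans and a batch size of 8, for instance, each epoch consists of 7 batches: six full batches of 8 and a final batch of 2.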
After training with the GPU, the model is evaluated to see if it performs as
expected. Its structure, in “.json” format, and its weights, in “.h5” format (using the h5py package), are then saved to disk for further reference. The weights of the model at
the best stage were saved. At this point, the charts of loss and accuracy of the model are
displayed. Thanks to the PlotLossesKeras callback (Fig. 2) they can also be displayed
in constantly updated charts throughout the training process. These are then used to
measure the performance of the model.
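Persisting the structure for later reuse can be sketched with a stdlib-only stand-in. The real workflow uses Keras's `to_json()` and `save_weights()` (with the h5py package) rather than a plain dict; the architecture dict here is purely illustrative.

```python
import json
import os
import tempfile

# Illustrative stand-in for the architecture description that
# Keras's model.to_json() would produce.
architecture = {"layers": [{"type": "Conv2D", "filters": 64},
                           {"type": "MaxPooling2D"}]}

path = os.path.join(tempfile.mkdtemp(), "model.json")
with open(path, "w") as f:
    json.dump(architecture, f)    # persist the structure for later reference

with open(path) as f:
    restored = json.load(f)       # reload it when the model is needed again
```

Separating structure (JSON) from weights (HDF5) means the trained model can be rebuilt and reloaded without repeating the six-hour training run.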
3.3 Evaluation
After training with the given dataset, the model was evaluated to understand its per-
formance. This was done using the module for evaluation and prediction. The evalu-
ation evaluates the model and outputs its scores, loss and accuracy, after the training.
The prediction uses the model and predicts an output for the given input.
If the values differ greatly from the target values, the image is mispredicted, but a
range of values that approximate the expected ones could be accepted as a good
prediction. An accuracy of, for example, 90% or more would probably mean that the
model is working as expected, as it will in fact normally get the predicted output
correctly to achieve such high accuracy. However, low accuracy does not necessarily
mean that the model does not predict correctly as already shown.
In order to overcome this issue, an alternative method of evaluating the results was
needed. To correctly evaluate the model performance, the dice loss was used in the U-
Net based model. This loss function is based on the Dice Similarity Coefficient
(DSC) [18]. The Sørensen–Dice Coefficient or Dice Similarity Coefficient is used to
measure the overlap of two areas. If the overlap is perfect, the DSC is 1, meaning a 100% overlap. On the other hand, a DSC of 0 would mean areas that are completely disjoint, i.e. an overlap of 0%. Equation 1 describes the DSC, where
X and Y denote the two different regions for which the DSC is to be calculated. In the
case of the model, X represents the expected image and Y the predicted output.
DSC = 2|X ∩ Y| / (|X| + |Y|)   (1)
The Dice Loss (Eq. 2), in contrast to the DSC on which it is based, tends to 0 as the
overlap improves. A Dice Loss of 0 would mean a perfect overlap and consequently a
model predicting correctly the output.
Dice Loss = 1 − DSC = 1 − 2|X ∩ Y| / (|X| + |Y|)   (2)
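The DSC and Dice Loss defined in Eqs. 1 and 2 can be checked with a short pure-Python sketch over binary masks (the example masks are illustrative):

```python
def dsc(x, y):
    """Dice Similarity Coefficient for two binary masks (Eq. 1):
    twice the intersection over the sum of the mask sizes."""
    intersection = sum(a and b for a, b in zip(x, y))
    return 2 * intersection / (sum(x) + sum(y))

def dice_loss(x, y):
    """Dice Loss (Eq. 2): 0 for a perfect overlap, 1 for none."""
    return 1 - dsc(x, y)
```

For masks [1, 1, 0, 0] and [1, 0, 1, 0] the intersection is 1 and each mask has size 2, so DSC = 2·1/4 = 0.5; identical masks give DSC = 1 and a Dice Loss of 0, the perfect-overlap case described in the text.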
4 Results
The output predicted by the model for a random input case is shown in Fig. 4. In the
figure the input is the column ‘FSL in’, the output by the FSL, or the considered ground
truth, is ‘FSL out’ and the output predicted by the model is ‘DL out’. A visual rep-
resentation of the data is required because the metrics used to evaluate the model are
not always able to properly evaluate them in every situation.
The resulting Dice Loss value obtained with the U-Net based architecture was
0.00313, which means that the overlap was near perfect as 0 would mean a perfect
result. The accuracy metric did not evaluate the model as it should, as it was a constant
at 16.69% throughout the training of the model which took about 6 h with the high-
performance GPU. Although the training takes quite a long time to complete, the trained model outperforms the FSL tools in the time taken to normalize MRI studies: it performed the process in an average of 8 s instead of the more than an hour required by FSL.
Fig. 4. Visual comparison between the original image, the image obtained by FSL and the
prediction of the trained model.
For a perfect result, the images of the third column should match with images of the
second, since the second is the ground truth obtained with FSL and the third is the
output of the trained DL model. As can be seen, the results are not perfect, but they are
close to the target image as the shape of the output coincides almost perfectly to what
was expected. In particular, when examining the top row, shown in more detail in
Fig. 5, we can see that the model has correctly predicted the features. Hence, good
results were achieved, but not good enough to compete with existing tools.
Fig. 5. Comparison between the expected (Left) and the obtained result by the U-Net based
(Right).
By analyzing the model’s dice loss function, which measures the overlapping areas
of the images, its value gets very close to 0, which means that the model performed
well while deforming the original brain to get the final shape, even though the image as
a whole does not look as expected. The accuracy graph, however, is not shown because accuracy is not a good measure of how well the model has learned in this particular case: even when two corresponding pixels have very similar but not identical values, accuracy counts this as a misprediction, although such a prediction might be acceptable.
5 Conclusions
To conclude, the results achieved in this work open the path to a yet unexplored
possibility for spatial normalization of brain MRI studies. Although the model is not
yet able to compete with the already existing tools while performing the full normal-
ization, the shape was accomplished correctly having a dice loss value of 0.00313 at the
final stage of training. This means 99.687% of the predicted output was overlapped
with the expected image, i.e. an almost perfect shape was predicted. The model was
also able to outperform existing tools in time spent to normalize. Although the training
process took about 6 h with the high-performance GPU, the model performed the
prediction in an average of 8 s instead of more than an hour as FSL did with the same
MRI study. This is an advantage of the Deep Learning approach as the slow training
needs to be made only once and the prediction can be made quickly as many times as
needed instead of the existing tools where a lot of time is always used to perform the
process. With some more modifications to the model it could achieve even better
results. For example, adapting the model to predict the warp matrix (as the one gen-
erated by FSL) instead of a full normalized image. The matrix could then be used in the
FSL to quickly perform the normalization as the matrix contains all the information
needed to distort the original image to obtain the normalization. Another possible strategy would be to use a 3-dimensional model instead of the 2-dimensional one used here. This could be done by adapting the convolutional layers of the model to perform 3-dimensional convolutions. Although computationally harder, this approach would probably achieve better results.
Acknowledgements. This work has been supported by FCT – Fundação para a Ciência e
Tecnologia within the R&D Units Project Scope: UIDB/00319/2020. We gratefully acknowledge
the support of the NVIDIA Corporation with their donation of a Quadro P6000 board used in this
research.
References
1. James, A.P., Dasarathy, B.V.: Medical image fusion: a survey of the state of the art. Inf.
Fusion 19(1), 4–19 (2014)
2. Poldrack, R., Mumford, J., Nichols, T.: Spatial normalization. In: Handbook of
Functional MRI Data Analysis, pp. 53–69. Cambridge University Press (2011)
3. FSL Wiki page. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki. Accessed 18 Nov 2019
4. BET. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET. Accessed 18 Nov 2019
5. Smith, S.M.: Fast robust automated brain extraction. Hum. Brain Mapp. 17(3), 143–155
(2002)
6. FLIRT. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT. Accessed 18 Nov 2019
7. FNIRT. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FNIRT. Accessed 18 Nov 2019
8. Chollet, F.: Deep Learning with Python. Manning Publications Co. (2018)
9. Buduma, N.: Fundamentals of Deep Learning: Designing Next-Generation Machine
Intelligence Algorithms, vol. 44, no. 5 (2017)
10. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
11. Altaf, F., Islam, S.M.S., Akhtar, N., Janjua, N.K.: Going deep in medical image analysis:
concepts, methods, challenges and future directions (2019)
12. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with
region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39, 1137–1149 (2017)
13. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmen-
tation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 640–651 (2017)
14. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image
recognition. In: ICLR, vol. 6 (2015)
15. Park, S., Lee, S.M., Lee, K.H., et al.: Deep learning-based detection system for multiclass
lesions on chest radiographs: comparison with observer readings. In: European Radiology,
pp. 1–10 (2019)
16. Nam, J., Park, S., Hwang, E., Lee, J., Jin, K., Lim, K., Vu, T., Sohn, J., Hwang, S., Goo, J.,
Park, C.: Development and validation of deep learning-based automatic detection algorithm
for malignant pulmonary nodules on chest radiographs. Radiology 290(1), 218–228 (2019)
17. Brant-Zawadzki, M., Gillan, G., Nitz, W.: MP RAGE: a three-dimensional, T1-weighted,
gradient-echo sequence–initial experience in the brain. Radiology 182(3), 769–775 (1992)
18. Liu, Q., Fu, M.: Dice loss function for image segmentation using dense dilation spatial
pooling network (2018)
Business Analytics for Social Healthcare
Institution
1 Introduction
Nowadays, organizations all over the world are increasingly subject to handling large
amounts of data. Their understanding reflects directly on the organizations’ success, as
a good understanding of data turns it into useful information that allows the organi-
zations to achieve improvements in their business process such as reduced waiting
times and increased service efficiency. Since they are dealing with large amounts of
data, it is hard to complete the task of understanding it in an efficient way without using
technology. This is where the Business Analytics (BA) components come into action.
The preference of a BA component for this job has to do with the efficient process
of achieving knowledge through data understanding that this kind of component
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 503–509, 2020.
https://doi.org/10.1007/978-3-030-45697-9_49
504 M. Quintal et al.
performs, transforming raw data into useful information. Subsequently, this kind of
component also provides an interactive and intuitive visualization of the most relevant
information, making it much easier to analyze and understand it. This leads to better
decision-making processes, more thoughtful decisions, and efficiency on health care
provision.
In general, the literature about BA components focuses more on their usefulness in
industrial organizations. However, healthcare institutions are not far behind when it
comes to reaping the benefits of this technology to improve their business process.
Therefore, this article aims to clarify and justify the usefulness of a BA component to a
Social Healthcare Institution.
2 Background
Through the presented quotation, we can then perceive the difficulty in distin-
guishing these four concepts that revolve around the same “industry”, since Eckerson,
in different chronological phases, used the same definition to describe each of these
concepts, concluding that they all have the same purpose.
Therefore, to clarify and understand the definition of BA, there will be presented
alternative definitions of the same concept presented by different authors.
From the perspective of Turban et al. (2008), BA is a component that provides
analysis models and procedures for analyzing DW information to gain a competitive
advantage over other organizations on the same business.
According to Schniederjans et al. (2014), BA is the process that begins with the
collection of business data from an organization, to which the main types of existing
analysis are sequentially applied, namely descriptive, predictive and prescriptive
analysis, achieving a result that supports and demonstrates business decision-making as
well as organizational performance.
As we can see, the definition of BA, as well as the definition of the other concepts
referred to, depends on the consulted literature. Even so, we can say that the consulted literature agrees on the definition of the BA concept: it consists of transforming the collected data into useful information capable of supporting an organization's decision-making process, in order to boost business performance and create a competitive advantage for it.
2.2 Visualization
The concept of Information Visualization is defined as “The use of computer-
supported, interactive, visual representations of abstract data to amplify cognition”
(Card et al. 1999). The main purpose of information visualization is to improve
understanding of the data with graphical presentations availing the powerful image
processing capabilities of the human brain. This technique extends working memory, reduces the search for information and enhances the recognition of patterns, increasing human cognitive resources (Järvinen et al. 2009).
As previously mentioned, today, organizations are increasingly subject to handling
large amounts of data. Their understanding reflects directly on the organizations’
success, as a good understanding of data turns it into useful information, capable of
generating competitive advantage over other organizations on the same business.
However, a good understanding of the data depends on how it is presented. This is
precisely where the concept of Visualization comes in, which bridges the gap between
data and knowledge. Human vision contains millions of photoreceptors capable of
recognizing patterns (Ware 2004), so the visual representation of data is the most
favourable technique for understanding data and acquiring knowledge.
In short, Visualization aims to improve data comprehension through the visual
representation of data supported by visual analytics technological tools, such as
dashboards, and the visual and intellectual abilities of the human being.
The dashboard is a visual analytics tool that allows the user to visualize the most
important information needed to achieve one or more objectives, consolidated and
organized in a single screen, allowing the quick monitoring of it (Few 2004).
In order to allow easy visualization and a good understanding of the data, Few
(2004) states that firstly a dashboard must present a high-level, general, simple and in-
depth view focused only on the exceptions to be reported immediately to inform the
user of what is happening without specifying why it is happening. Secondly, this high-
level view should emphasize the aspects and variables that, through its visualization,
communicate useful information for decision-making, to further analyze it in more
detail. Finally, Few (2004) considers that the dashboard should also allow easy drill
down within the dimensions and metrics underlying the useful information from the
previous point, allowing a more detailed analysis of it.
3 Case Study
Within the scope of the nature of a Social Healthcare Institution and to certify its
Quality Management System with the ISO 9001 standard, the need arose to develop a
Business Analytics (BA) solution focused on quality management that allows effi-
ciency on health care provision in order to strengthen a set of organizational principles
required by ISO 9001. We can identify as examples of these principles the customer
focus and the use of tools that allow top managers to execute the organization’s
processes with efficiency in order to make appropriate decisions that promote the
continuous improvement of the organization.
This information helps the institution to analyze the efficiency of their medical
services by monitoring the number of patients waiting for their surgery to be accom-
plished, and also to examine which of the medical specialities has the longest queues.
This allows the institution to manage speciality queues by, for example, increasing the
number of doctors in a speciality with a large number of patients on the waiting list,
which contributes to satisfying the three identified critical success factors, and con-
sequently the fulfilment of business objectives.
The second and last dashboard to be presented in this article offers a view over the
performed surgeries. Its consultation allows us to observe the number of operated
patients by gender, by age group and per day of the week. This same data is also
subject to filters related to the time frame, medical speciality and surgery type (Fig. 2).
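The kind of filtered aggregation behind such a dashboard view can be sketched in a few lines of Python; the surgery records and field names below are hypothetical, standing in for the institution's actual data model.

```python
from collections import Counter

# Hypothetical surgery records, standing in for the dashboard's data.
surgeries = [
    {"gender": "F", "age": 34, "weekday": "Mon", "speciality": "Ortho"},
    {"gender": "M", "age": 67, "weekday": "Tue", "speciality": "Cardio"},
    {"gender": "F", "age": 45, "weekday": "Mon", "speciality": "Ortho"},
]

def count_by(records, field, **filters):
    """Count records per value of `field`, after applying equality
    filters (mirroring the dashboard's speciality and type filters)."""
    selected = [r for r in records
                if all(r[k] == v for k, v in filters.items())]
    return Counter(r[field] for r in selected)
```

A call such as `count_by(surgeries, "weekday")` yields the per-day counts the dashboard charts, while `count_by(surgeries, "gender", speciality="Ortho")` reproduces a filtered view.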
Having prompt access to this information allows the institution to monitor the
number of surgeries performed on a time gap and to identify the day of the week where
most of the surgeries are performed in order to support the schedule of surgeries and to
reduce the number of patients that are waiting for their surgeries to be accomplished.
Therefore, this dashboard also helps the institution to achieve its goals.
4 Conclusion
As stated before, this institution’s business goals are to improve user satisfaction,
improve the quality of the provided services (continuous improvement), and
improve/optimize organizational activities. Through the Key Performance Indicators’
Methodology, the respective critical success factors were exposed, such as reducing waiting days for services, ensuring sufficient clinical professionals and increasing the number of performed surgeries.
These factors led to the definition of key performance indicators that monitor their
success and, therefore, the attainment of business goals. Once understood, these indicators support the conception of the dashboards, which later support the monitoring of the same indicators in a much more interactive and intuitive way, namely through dashboard visualization. This helps the institution to make better decisions promptly, decisions that lead to the achievement of its goals. Once the institution achieves its goals, it is ready to be certified by ISO 9001.
Acknowledgements. The work has been supported by FCT – Fundação para a Ciência e
Tecnologia within the Project Scope UID/CEC/00319/2019.
References
DATASUS: Notebook Presentation (2004). http://datasus.saude.gov.br/apresentacao-caderno
DGS: Healthcare Dashboards: Past, Present and Future. A perspective of evolution in Portugal
(2017)
Eckerson, W.: A Practical Guide to Advanced Analytics – Ebook (2011)
Few, S.: Dashboard Confusion (2004)
Järvinen, P., Puolamäki, K., Siltanen, P., Ylikerälä, M.: Visual analytics (2009)
Schniederjans, M.J., Schniederjans, D.G., Starkey, C.M.: Business Analytics – Principles,
Concepts and Applications. Pearson Education, New York (2014)
Quintela, H.: Magazine dos Sistemas de Informação na Saúde (2013)
Turban, E., Sharda, R., Aronson, J., King, D.: Business intelligence - a managerial approach
(2008)
Ware, C.: Information Visualization: Perception for Design. Morgan Kaufmann, Burlington
(2004)
Step Towards Monitoring Intelligent
Agents in Healthcare Information
Systems
1 Introduction
In today's society, almost everything involves technology. The idea that human
beings have changed with technological evolution is somewhat frightening, but it is
perhaps the most realistic view of the present. Over the years, Information
Technology (IT) has become so profoundly embedded in society that it has
dramatically altered how people think and live, to the point that every day-to-day
activity depends on the proper functioning of technologies.
In recent years, IT has emerged in several areas, and healthcare is no exception.
IT is a very broad concept with applicability in many industries; thus, clarifying
the term is important for the acceptance of its use by institutions and their
professionals. IT can be defined as the set of all activities, solutions, and human
and/or computational resources that allow the access, consultation, management,
and use of information. Part of the success of IT in the health area has to do with
the correct functioning of Health Information Systems (HIS), which are
responsible for the acquisition, processing, and presentation of all information
about the institution and its services, and which provide tools to improve care
delivery in an efficient and sustainable way [24].
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 510–519, 2020.
https://doi.org/10.1007/978-3-030-45697-9_50
Health Information Systems 511
In the healthcare area, the treatment and processing of information has changed
considerably, from the traditional recording of all information on paper to
electronic records. Nowadays, HIS are intrinsic to, and even determinant in, the
success of hospital care delivery. The use of such systems benefits not only patients
but also health professionals throughout the healthcare community, since it eases
several of the everyday responsibilities involved in their work [10].
It is imperative to make quick and quality decisions in the health sector
as these are almost always related to human life. Therefore, medical decision-
making needs to integrate the best available evidence with the experience of
clinical professionals and the specific values regarding the patient health status
[9,13]. Often, in the absence of timely access to high-quality information, or
when facing difficulties in reconstructing a functional clinical history of the
patient in question, health professionals are forced to make decisions based
solely on their experience and intuition, without considering the required facts
and information [1,9]. Without appropriate access to relevant data, practically
no decisions on procedures, diagnoses, therapies, and so on can be made without
medical errors or other problems occurring, which may have fatal consequences
for patients. This difficulty can be overcome through the
implementation of Clinical Decision Support Systems (CDSS), which are based
on medical knowledge to assist clinicians in the elaboration of diagnoses and in
the decision-making of therapies through the analysis of patient specific clinical
variables [23].
From a more technical perspective, CDSS can retrieve relevant documents, create
and send alerts or recommendations to patients or healthcare professionals,
organize and present relevant information on dashboards, diagrams, documents,
and reports, in order to ease, speed up and improve the clinical decision-making
[7,19]. Accordingly, these systems should consider information from the various
systems and platforms implemented in the health institution, whose diversity
constitutes another weakness. Thus, the primary step towards solving all these
problems is the effective implementation of interoperability platforms. These
platforms should be based on intelligent agents that interact with each other and
are organized in robust and efficient architectures, so that access to and
interpretation of the information is almost immediate.
The remainder of the paper is organized as follows: Sect. 2 gives a brief
description of intelligent agents, from their definition to the advantages of their
individual or multi-agent use. Sect. 3 describes the AIDA platform, its main
characteristics, operation, architecture, and vulnerabilities. Subsequently, Sect. 4
explains the worth, significance, and impact of monitoring computational
applications, focusing on the description of the proposed solution, AIDAMonit, a
platform for efficiently monitoring the behavior of the intelligent agents that make
up the AIDA platform. Finally, Sects. 5 and 6 present the proof of concept and the
main conclusions, as well as some perspectives for future work.
512 R. Sousa et al.
3 AIDA
In healthcare, access to information in a fast and effective way is a determinant
factor in the reduction of medical errors and the consequent improvement of the
care provided. However desirable this goal is, it has not yet been achieved, largely
due to the individuality and heterogeneity of the different health information
systems. Although these systems increase the quality of health services, they are
developed in isolation, failing in the capacity to interact with one another
effectively.
IEEE defines interoperability in healthcare as the “ability to communicate
and exchange data accurately, effectively, securely and consistently with dif-
ferent information technology systems, software applications, and networks in
various settings and exchange data such that clinical or operational purpose and
meaning of the data are preserved and unaltered” [25]. The benefits of
implementing interoperability in healthcare facilities, and the consequent
homogeneity among HIS, are countless. Such benefits include better information
quality through single patient identification; reduced time for diagnostics and
appointments, since physicians have access to relevant information whenever and
wherever they need it; a correct association between all the information systems;
and, consequently, collaboration at the local, regional, national, and international
levels.
The process of implementing interoperability in health organizations is even
more difficult because each specialty has its own particularities as well as different
methods. Interoperability among systems is a common and comprehensive
interest within the entire scientific community. In recent years, the Artificial
Intelligence (AI) group of the University of Minho dedicated itself to building
a platform to answer all these needs: AIDA.
The Agency for Integration, Dissemination and Archiving of medical information
(AIDA) is the result of many research partnerships between the University of
Minho and several Portuguese health units, including the Centro Hospitalar
Universitário do Porto (CHUP). AIDA is a complex system consisting of
specialized and straightforward intelligent agents that seek the integration,
dissemination, and archiving of large volumes of data from heterogeneous sources
(e.g. computers and medical devices), as well as the uniformity of the clinical
systems [8,16,17,21].
This platform was designed to aid medical applications, their main subsystems,
and their functional roles, as well as to control the entire flow of information
through a network of intelligent information processing systems. AIDA uses a
multi-agent, service-oriented architecture (SOA) to ensure interoperability
between the various information systems [3,4,15,20]. AIDA is installed and in use
in five health institutions throughout Portugal and has a paramount influence on
the quality of the services provided by healthcare professionals. Accordingly, all
its components must have a form of monitoring and failure prevention, so that
AIDA is available 24 hours a day, every day of the year, to ensure efficient
healthcare delivery. This makes it possible to implement interoperability in a
distributed environment through different types of agents with very distinct
scopes and functions inside the platform.
The systems that constitute the platform are:
The structure of the messages and the types of fields contained in them are not,
by themselves, sufficient for the complete understanding of a message: to avoid
ambiguity, the meanings, the context, and the relations between the different
terms must also be known and used by both parties in the communication. In
health institutions, standards are considered the main means of ensuring
interoperability between HIS. The HL7 protocol is perhaps the most
internationally recognized and is a major contributor to interoperability in health
facilities. HL7 is a set of standard formats that define a message structure for
exchanging information between different heterogeneous hospital applications
[5,15,22]. In short, it enables application-to-application communication through
well-established messages. There are several message templates, each with its
own structure and fields: each message consists of multiple segments that
represent a logical grouping of data fields.
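To make this structure concrete, the following minimal Python sketch parses an HL7 v2-style message into its segments and fields; the message content (applications, patient data) is invented for illustration:

```python
# An HL7 v2 message is an ordered set of segments (logical groupings of
# fields), one per line, with '|' separating fields and '^' separating
# components inside a field. The ADT message below is a toy example.
message = "\r".join([
    "MSH|^~\\&|SendingApp|HospitalA|ReceivingApp|HospitalB|20201101||ADT^A01|123|P|2.5",
    "PID|1||555-44-3333||Doe^John||19800101|M",
])

segments = {}
for raw in message.split("\r"):
    fields = raw.split("|")
    segments[fields[0]] = fields  # segment name (MSH, PID, ...) -> its fields

print(segments["PID"][5])  # Doe^John
```

Both parties must agree not only on this syntax but also on the meaning of each field, which is exactly the semantic side of interoperability discussed above.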
The security of the AIDA platform is fundamental because it is a platform
associated with healthcare and, consequently, must be available 24 hours a day,
every day of the year. Currently, it is installed in five Portuguese hospitals,
including the CHUP, and even a short period of downtime can have serious and
devastating consequences for the health organization, either directly in the
management of resources or indirectly in the quality of the services provided and,
consequently, in the health status of the different patients. Therefore, failure
prevention, as well as the monitoring of the AIDA platform, is indispensable and
of extreme value to health institutions.
of the services is divided into four sub-modules - Panels, Tables, Agents and
Servers. Each module is essential for the proper monitoring, detection, and
correction of any errors that may occur in the agents' software, improving the
provision of hospital services and facilitating the daily work of the professionals
tasked with monitoring the continuous behavior of AIDA's intelligent agents.
5 Proof of Concept
Any research project aimed at implementation must pass a proof of concept,
where questions such as “Is this technology needed?” and “Who will use this
technology?” are paramount to its success.
Strengths:
- Centralization of information
- Fast error detection
- Centralized activity history
- High usability
- Easy maintenance
- High scalability
- Ease of adaptation and evolution

Weaknesses:
- Dependency on CHUP's internal network
- Complexity in historical research, namely in dates
- Delay in executing complex requests

Opportunities:
- Construction of indicators that allow detection of error patterns
- Direct connection to agent software for possible error correction
- Imminent need for smart agent monitoring
- Improvement of the quality and effectiveness of CHUP services

Threats:
- Modification of the structures or databases that feed the application
- User rejection of new technologies
- Internet network connectivity issues
- Competition with new technological innovations that may appear
6 Conclusions
to predict and avoid failures as well as to monitor the activity of the intelligent
agents.
In this sense, the developed platform monitors the behavior of the intelligent
agents that constitute the AIDA platform. The monitoring platform responds to
several requirements, such as:
– Real-time monitoring of intelligent agents (individually and collectively);
– Exhibition of statistical metrics for consultation and knowledge construction;
– Consultation of past events using date filters;
– Extraction of relevant insights about the agents’ behavior through charts and
dashboards;
– Identification of root causes of poor performance, errors and inconsistencies.
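As an illustration only (the paper does not describe AIDA's implementation at this level of detail), the first requirement, real-time detection of inactive agents, could be sketched as follows; agent names, field layout, and the idle threshold are all hypothetical:

```python
# Illustrative sketch: flag an agent as failing when its last recorded
# activity is older than an idle threshold. The agent names, timestamp
# representation, and 300-second threshold are invented, not AIDA's schema.

def stale_agents(last_seen, now, max_idle_seconds=300):
    """Return, sorted, the agents whose last activity exceeds the threshold."""
    return sorted(a for a, t in last_seen.items() if now - t > max_idle_seconds)

now = 1_000_000  # current time in seconds (toy value)
last_seen = {"hl7_router": now - 10, "archiver": now - 900, "scheduler": now - 50}
print(stale_agents(last_seen, now))  # ['archiver']
```

A dashboard module would then surface such flags, feeding the root-cause analysis mentioned in the last requirement.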
With this platform, managers will be able to ensure the proper functioning of
the intelligent agents that make up AIDA and, consequently, ensure excellence
in the provision of healthcare to the patient. ReactJS, a JavaScript library for
building user interfaces, was chosen to give body and shape to the platform, as
it is a modern and powerful tool that is taking over frontend development thanks
to its fast rendering, due to the existence of a virtual DOM, and its ability to
reuse and combine components. The backend of the platform is in NodeJS and
ensures the connection between the Oracle database and the interface.
References
1. Brandão, A., Pereira, E., Esteves, M., Portela, F., Santos, M., Abelha, A.,
Machado, J.: A benchmarking analysis of open-source business intelligence tools
in healthcare environments. Information 7, 57 (2016)
2. Cardoso, L.: Desenvolvimento de uma Plataforma baseada em Agentes para a
Interoperabilidade (2013)
3. Cardoso, L., Martins, F., Portela, F., Santos, M., Abelha, A., Machado, J.: A multi-
agent platform for hospital interoperability. In: Ambient Intelligence - Software and
Applications Advances in Intelligent Systems and Computing, pp. 127–134 (2014)
4. Cardoso, L., Martins, F., Portela, F., Santos, M., Abelha, A., Machado, J.: The
next generation of interoperability agents in healthcare. Int. J. Environ. Res. Public
Health 11, 5349–5371 (2014)
5. Cardoso, L., Martins, F., Quintas, C., Portela, F., Santos, M., Abelha, A.,
Machado, J.: Interoperability in healthcare. In: Cloud Computing Applications
for Quality Health Care Delivery Advances in Healthcare Information Systems
and Administration, pp. 689–714 (2014)
6. Cardoso, L., Martins, F., Quintas, C., Portela, F., Santos, M., Abelha, A.,
Machado, J.: Interoperability in healthcare. In: Health Care Delivery and Clin-
ical Science, pp. 689–714 (2018)
7. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A.,
Goy, A., Suh, K.S.: Clinical decision support systems for improving diagnostic
accuracy and achieving precision medicine. J. Clin. Bioinform. 5, 4 (2015)
8. Duarte, J., Salazar, M., Quintas, C., Santos, M., Neves, J., Abelha, A., Machado,
J.: Data quality evaluation of electronic health records in the hospital admission
process. In: 2010 IEEE/ACIS 9th International Conference on Computer and Infor-
mation Science (2010)
9. Foshay, N., Kuziemsky, C.: Towards an implementation framework for business
intelligence in healthcare. Int. J. Inf. Manag. 34, 20–27 (2014)
10. Haux, R.: Health information systems - past, present, future. Int. J. Med. Inform.
75, 268–281 (2006)
11. Isern, D., Sanchez, D., Moreno, A.: Agents applied in health care: a review. Int. J.
Med. Inform. 79, 145–166 (2010)
12. Jennings, N.R., Wooldridge, M.: Applications of intelligent agents. In: Jennings,
N.R., Wooldridge, M.J. (eds.) Agent Technology. Springer, Heidelberg (1998)
13. Lenz, R., Reichert, M.: IT support for healthcare processes - premises, challenges,
perspectives. Data Knowl. Eng. 61, 39–58 (2007)
14. Machado, J., Abelha, A., Neves, J., Santos, M.: Ambient intelligence in medicine.
In: 2006 IEEE Biomedical Circuits and Systems Conference, pp. 95-97 (2006)
15. Machado, J., Abelha, A., Novais, P., Neves, J., Neves, J.: Quality of service in
healthcare units. Int. J. Comput. Aided Eng. Technol. 2, 436 (2010)
16. Machado, J.M., Miranda, M., Gonçalves, P., Abelha, A., Neves, J., Marques, J.A.:
AIDATrace - Interoperation Platform for Active Monitoring in Healthcare Envi-
ronments. ISC, Eurosis (2010)
17. Martins, F., Cardoso, L., Esteves, M., Machado, J., Abelha, A.: An agent-based
RFID monitoring system for healthcare. In: Advances in Intelligent Systems and
Computing, pp. 407–416 (2017)
18. Miranda, M., Pontes, G., Abelha, A., Neves, J., Machado, J.: Agent based inter-
operability in hospital information systems. In: 2012 5th International Conference
on Biomedical Engineering and Informatics (2012)
19. Musen, M.A., Middleton, B., Greenes, R.A.: Clinical decision-support systems. In:
Biomedical Informatics, pp. 643–674 (2014)
20. Peixoto, H., Santos, M., Abelha, A., Machado, J.: Intelligence in interoperability
with AIDA. In: Lecture Notes in Computer Science, pp. 264–273 (2012)
21. Pontes, G., Portela, C., Rodrigues, R., Santos, M., Neves, J., Abelha, A., Machado,
J.: Modeling intelligent agents to integrate a patient monitoring system. In: Trends
in Practical Applications of Agents and Multiagent Systems, pp. 139–146 (2013)
22. Rodrigues, R., Gonçalves, P., Miranda, M., Portela, F., Santos, M., Neves, J.,
Abelha, A., Machado, J.: Monitoring intelligent system for the Intensive Care Unit
using RFID and multi-agent systems. In: 2012 IEEE International Conference on
Industrial Engineering and Engineering Management (2012)
23. Shojania, K.G., Duncan, B.W., McDonald, K.M., Wachter, R.M., Markowitz, A.J.:
Making health care safer: a critical analysis of patient safety practices. Evid. Rep.
Technol. Assess. (Summ.) 43, 668 (2001)
24. Taylor, S., Todd, P.A.: Understanding information technology usage: a test of
competing models. Inf. Syst. Res. 6, 144–176 (1995)
25. Tolk, A.: Interoperability, composability, and their implications for distributed sim-
ulation: towards mathematical foundations of simulation interoperability. In: 2013
IEEE/ACM 17th International Symposium on Distributed Simulation and Real
Time Applications (2013)
Network Modeling, Learning and
Analysis
A Comparative Study of Representation
Learning Techniques for Dynamic
Networks
1 Introduction
Network analysis has increasingly gained attention both in academia and industry
because it offers a framework for analyzing interrelationships within natural
structures: applications are found in churn prediction [7,20], crime detection
[26,27], and recommendation systems [14]. However, network analysis traditionally
requires extensive preprocessing: data analysts have relied on handmade feature
engineering based on expert knowledge or summary statistics (e.g. clustering
coefficients) [17]. Despite its popularity, ad-hoc feature engineering lacks
flexibility and requires extensive domain knowledge [12,14,20].
One response to traditional feature engineering is representation learning
(RL), sometimes referred to as feature learning. RL aims at finding a low-
dimensional representation, or embedding, of the data so that further downstream
tasks become more automatic [3]. However, most early RL techniques can only
handle static networks [12,16,22,28]. In contrast, real-world networks display
dynamic processes that change their topological structure [25]. Recent techniques
for RL in dynamic graphs have relied on random walks [8,21,24], autoencoder
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 523–530, 2020.
https://doi.org/10.1007/978-3-030-45697-9_51
524 C. Ortega Vázquez et al.
2 Related Work
The literature on RL for graphs has diversified into several lines of research
[6,29]: for transductive network tasks, dynamic RL exploits the topological
evolution [10,20,24], while inductive RL leverages extra information for unseen
nodes [13,27]. RL techniques for dynamic networks can be categorized by time
granularity [9]: some methods handle discrete time, others continuous-time
evolution [21]. We focus on the former, as more work has been developed in that
line of research.
We consider two main types of RL techniques for dynamic networks that are
relevant for this study: random-walk and graph-autoencoder approaches. On the
one hand, random-walk techniques, related to shallow embedding methods [13],
learn the network embedding based on the nodes that co-occur on random walks.
One strong merit of random-walk techniques is the use of a stochastic similarity
measure (e.g. co-occurrence in random walks), which leads to lower complexity
compared to deep learning approaches. However, these techniques require
fine-tuning of the random walks. On the other hand, graph-autoencoder
techniques leverage the adjacency matrix, which captures non-linear relationships
in the node neighbourhood [28]: similar neighbourhoods lead to similar
embeddings. Compared to the random-walk approach, graph-autoencoders can
reconstruct the whole graph since they learn from its adjacency representation;
however, this dependence on the adjacency matrix also constrains their
applicability in large real-world networks [20].
Both approaches originally handle static networks [12,28], so recent works have
developed extensions for dynamic networks. Most of these techniques depend
either on reusing parameters in each time-step [10,11,19] or on aligning the static
embeddings [8,24]; however, these efforts lack theoretical foundation. The
Bayesian framework considers a prior probability distribution that can allow
a smoother drift of the network embedding across time. Bayesian word
embeddings have been explored in NLP [1,2,5]: they offer robustness to noise
across time slices and uncertainty measurement via density. Despite its
Representation Learning in Dynamic Networks 525
networks given their low density. The Enron dataset is also a dense network,
but it contains substantially fewer nodes and edges. However, it shows a higher
average clustering coefficient.
First, we need to split the datasets into different snapshots. The choice of the
time-frame size follows [8]. For each snapshot, we have an edge list that
represents a network. The test snapshot in both the interpolation and
extrapolation settings derives from the last time-step GT . For the interpolation
approach, we randomly divide the edges into two sets for the subsequent
downstream task: 70% of the edges are used for training and 30% for testing. For
the extrapolation approach, we use the whole GT for evaluation. Additionally, we
sample as many non-edges as edges so that the datasets are balanced in all
snapshots. We only use previously seen training nodes, so we extract a subgraph
from GT that complies with this requirement. All training snapshots share
information on the training nodes even if they are not active in a particular
snapshot (i.e. a node without any link to others). We follow the approach in [12]
to obtain edge features from the node embeddings: the Hadamard operator is
used to combine a pair of node embeddings into one vector representing the edge.
At the end, we obtain a matrix of edge features: in the extrapolation setting,
embeddings for each snapshot are stacked vertically, while in the interpolation
setting they are stacked horizontally. Subsequently, a classifier can learn from the
edge features to predict, in the corresponding test set, whether two nodes hold an
edge. Three well-known classifiers in link prediction are used: Logistic Regression,
Random Forest, and Gradient Boosting.
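The edge-feature construction described above can be sketched in a few lines of Python; the embedding values, edges, and sampled non-edges below are toy data, not from the actual datasets:

```python
def hadamard(u, v):
    """Element-wise (Hadamard) product of two node embeddings,
    producing a single edge-feature vector, as in node2vec's setup [12]."""
    return [a * b for a, b in zip(u, v)]

# Toy 3-dimensional embeddings for four nodes (illustrative values).
emb = {0: [0.1, -0.2, 0.3], 1: [0.4, 0.5, -0.6],
       2: [-0.7, 0.8, 0.9], 3: [1.0, -1.1, 1.2]}
edges = [(0, 1), (2, 3)]        # positive examples (observed links)
non_edges = [(0, 3), (1, 2)]    # sampled negatives, one per positive edge
X = [hadamard(emb[u], emb[v]) for u, v in edges + non_edges]
y = [1, 1, 0, 0]                # labels for the downstream classifier
print(len(X), len(X[0]))  # 4 3
```

The resulting matrix X and label vector y are what the Logistic Regression, Random Forest, and Gradient Boosting classifiers are trained on.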
We perform a hyperparameter tuning on the training snapshots for the
RL techniques based on the link prediction performance. For the random-
walk techniques, the grid search is as follows: p ∈ {0.25, 0.5, 0.75, 1}, q ∈
{0.1, 0.5, 1, 2, 5, 10, 100}. For the graph-autoencoder, the grid search is as fol-
lows: α ∈ {10−6 , 10−5 }, and β ∈ {2, 5}. All experiments, including the data, can
be found in https://github.com/CarlosOrtegaV/dyn-bae.
4 Results
Tables 2 and 3 contain, for each classifier and RL technique, the highest Area
Under the Receiver Operating Characteristic Curve (AUC) scores across
hyperparameter combinations, with the corresponding Average Precision (AP).
We can observe that the extrapolation setting poses a more challenging task,
since the RL techniques obtain lower AUC scores than in the interpolation
setting; the Facebook forum dataset also obtained lower AUC scores because of
its higher number of nodes and edges. Node2vec consistently outperforms all
other RL techniques in both the interpolation and extrapolation settings. Despite
its simpler structure compared to dyngraph2vecAE (dynae), dynGEM reaches
second place among the RL techniques. The dynamic Bayesian node2vec (dynbae)
scores low compared to the standard node2vec. Figures 1 and 2 display the
variability of the AUC scores across hyperparameters for two classifiers on the
Facebook forum dataset. Interestingly, the graph-autoencoder techniques show
higher variability in the extrapolation setting than in the interpolation setting.
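AUC, the evaluation metric used here, can be computed directly from its rank-statistic (Mann-Whitney) definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch, with invented scores:

```python
def auc_score(y_true, y_score):
    """AUC as the probability that a random positive example is scored
    above a random negative one (Mann-Whitney U formulation); ties count 0.5."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking of two positive and two negative edges gives AUC = 1.0.
print(auc_score([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))  # 1.0
```

This pairwise formulation also clarifies why AUC is insensitive to the class balance enforced by the non-edge sampling described in the setup.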
5 Conclusions
References
1. Bamler, R., Mandt, S.: Dynamic word embeddings. In: Proceedings of the 34th
International Conference on Machine Learning, ICML 2017, pp. 380–389. PMLR
(2017)
2. Barkan, O.: Bayesian neural word embedding. In: Proceedings of the Thirty-First
AAAI Conference on Artificial Intelligence, AAAI 2017, pp. 3135–3143. AAAI
Press (2017)
3. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new
perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)
4. Bishop, C.M.: Pattern Recognition and Machine Learning. Information Science
and Statistics. Springer, New York (2006)
5. Bražinskas, A., Havrylov, S., Titov, I.: Embedding words as distributions with a
Bayesian skip-gram model. In: Proceedings of the 27th International Conference
on Computational Linguistics. Association for Computational Linguistics (2018)
6. Cai, H., Zheng, V.W., Chang, K.C.C.: A comprehensive survey of graph embed-
ding: problems, techniques, and applications. IEEE Trans. Knowl. Data Eng. 30(9),
1616–1637 (2018)
7. Dasgupta, K., Singh, R., Viswanathan, B., Chakraborty, D., Mukherjea, S., Nana-
vati, A.A., Joshi, A.: Social ties and their relevance to churn in mobile telecom net-
works. In: Proceedings of the 11th International Conference on Extending Database
Technology: Advances in Database Technology, EDBT 2008, pp. 668–677. ACM,
New York (2008)
8. De Winter, S., Decuypere, T., Mitrović, S., Baesens, B., De Weerdt, J.: Combining
temporal aspects of dynamic networks with Node2Vec for a more efficient dynamic
link prediction. In: 2018 IEEE/ACM International Conference on Advances in
Social Analysis and Mining (ASONAM), pp. 1234–1241. IEEE (2018)
9. Goel, R., Jain, K., Kobyzev, I., Sethi, A., Forsyth, P., Poupart, P.: Relational rep-
resentation learning for dynamic (knowledge) graphs: a survey. arXiv.org (2019).
http://search.proquest.com/docview/2231646581/
10. Goyal, P., Chhetri, S.R., Canedo, A.: dyngraph2vec: Capturing network dynam-
ics using dynamic graph representation learning. Knowl.-Based Syst. 187, 104816
(2020)
11. Goyal, P., Kamra, N., He, X., Liu, Y.: DynGEM: deep embedding method for
dynamic graphs. arXiv preprint arXiv:1805.11273 (2018)
12. Grover, A., Leskovec, J.: Node2Vec: scalable feature learning for networks. In:
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, pp. 855–864. ACM (2016)
13. Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large
graphs. In: Advances in Neural Information Processing Systems, pp. 1024–1034
(2017)
14. Hamilton, W.L., Ying, R., Leskovec, J.: Representation learning on graphs: meth-
ods and applications. arXiv preprint arXiv:1709.05584 (2017)
15. Kalman, R.E.: A new approach to linear filtering and prediction problems. J. Basic
Eng. 82(1), 35–45 (1960)
16. Kipf, T., Welling, M.: Variational graph auto-encoders (2016). arXiv.org
17. Liben-Nowell, D., Kleinberg, J.: The link-prediction problem for social networks.
J. Am. Soc. Inform. Sci. Technol. 58(7), 1019–1031 (2007)
18. Ma, X., Sun, P., Wang, Y.: Graph regularized nonnegative matrix factorization for
temporal link prediction in dynamic networks. Phys. A 496, 121–136 (2018)
19. Mahdavi, S., Khoshraftar, S., An, A.: dynnode2vec: scalable dynamic network
embedding. In: 2018 IEEE International Conference on Big Data (Big Data), pp.
3762–3765. IEEE (2018)
20. Mitrović, S., Baesens, B., Lemahieu, W., Weerdt, J.D.: tcc2vec: RFM-informed
representation learning on call graphs for churn prediction. Inf. Sci. (2019)
21. Nguyen, G.H., Lee, J.B., Rossi, R.A., Ahmed, N.K., Koh, E., Kim, S.: Continuous-
time dynamic network embeddings. In: Companion Proceedings of the The Web
Conference 2018, WWW 2018, pp. 969–976. International World Wide Web Con-
ferences Steering Committee (2018)
22. Perozzi, B., Al-Rfou, R., Skiena, S.: DeepWalk: online learning of social represen-
tations. In: Proceedings of the 20th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pp. 701–710. ACM (2014)
23. Rossi, R.A., Ahmed, N.K.: The network data repository with interactive graph
analytics and visualization. In: AAAI (2015). URL http://networkrepository.com
24. Singer, U., Guy, I., Radinsky, K.: Node embedding over temporal graphs. arXiv
preprint arXiv:1903.08889 (2019)
25. Trivedi, R., Farajtabar, M., Biswal, P., Zha, H.: Representation learning over
dynamic graphs. arXiv preprint arXiv:1803.04051 (2018)
26. Troncoso, F., Weber, R.: A novel approach to detect associations in criminal net-
works. Decis. Support Syst. 128, 113–159 (2019)
27. Van Belle, R., Mitrović, S., De Weerdt, J.: Representation learning in graphs for
credit card fraud detection. In: ECML PKDD 2019 Workshops. Springer (2019)
28. Wang, D., Cui, P., Zhu, W.: Structural deep network embedding. In: Proceedings
of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining, KDD 2016, 13–17 August 2016, pp. 1225–1234. ACM (2016)
29. Wang, Y., Yao, Y.: A brief review of network embedding. Big Data Min. Anal.
2(1), 35–47 (2019)
30. Yang, Y., Ren, X., Wu, F., Zhuang, Y.: Dynamic network embedding by modeling
triadic closure process. In: Thirty-Second AAAI Conference On Artificial Intelli-
gence, pp. 571–578. AAAI (2018)
Metadata Action Network Model for Cloud
Based Development Environment
1 Introduction
Platform as a Service (PaaS) has been promoted as a panacea for the long-standing
software development problem of delivering successful solutions with high-performing
or collaborative actors, including developers and users. The premise behind PaaS is that it
can foster user performance (novice developers) in terms of delivering better, faster,
cheaper, and higher-quality enterprise software solutions. Global enterprise solution
providers such as SalesForce and Mendix have adopted PaaS solutions and strived for
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 531–543, 2020.
https://doi.org/10.1007/978-3-030-45697-9_52
532 M. N. Aydin et al.
2 Background
platforms are evolving as more experimental and real-life, platform-use-experience-
driven studies are carried out.
One of the challenges with DTD is validating the digital trace data found. The researchers
and the company worked together to fix several issues with the digital traces resulting
from user actions. For instance, creating a Plain Menu Item produces:
82494,[email protected],bb1ecc8c-9473-4322-8fb2-221a6ea2d41c,CREATE_NEW_MENU,null,null,null,null,2018-11-16 10:06:09.0,null,null,null,null,null,null,null,null,null,null,null,null,Mars,null,null,null
82495,[email protected],8cdadf96-e81a-4c8e-87c8-d1399f4aede2,ADD_TRANSIENT_ENTITY_TO_MENU,null,null,null,null,2018-11-16 10:06:09.0,null,null,null,null,null,null,null,null,null,null,null,null,Mars,null,null,null
The aPaaS platform records both the digital trace data of user actions, as metadata
actions that occur at particular points in time (longitudinal data), and attributes of the
software artifacts created by some of the actions, such as the kind of abstract data type
the action creates. One converts digital trace data into a network by determining what
corresponds to a node and an edge, as well as what corresponds to a node or edge
attribute. We consider that there is only one edge type: edges represent bindings
between two software artifacts. Nodes, on the other hand, are classified as Model (M),
View (V), or Controller (C) (see the Appendix). These node classifications are
incorporated into the network model as node attributes. Figure 1 depicts the overall
structure of the services provided to users and furthermore shows that the generated
digital trace data can be mapped to an architectural view, namely Model-View-
Controller (MVC). MVC is a useful pattern for separation of concerns in software
engineering: it helps developers partition the application as early as the design phase
and is especially applicable to web-based applications.
It is this sparseness [18] that calls for a Network Science approach to the analysis of
metadata actions on aPaaS. Many systems can be regarded as networks: sets of things
and their interactions. In a graphic representation of a network, nodes (or vertices) are
the things of interest, and interacting nodes are joined in pairs by arcs (or links).
A network is a mathematical object specifically designed to represent sparse data, and
network science is concerned with analyzing and modeling such systems. Figure 3
depicts the overall network construction process, from raw data creation (users'
actions) to network visualization and analytics.
Metadata Action Network Model for Cloud Based Development Environment 537
We can regard the sparse data of recorded metadata actions in the aPaaS database
as a network. To emphasize this, from now on we will refer to the network produced
by the developers who develop applications as the "metadata action network" (MAN).
In a MAN the objects of interest are the software artifacts created by metadata actions,
and the artifacts are joined in pairs by arcs if the developer opts to link them.
The digital trace data under examination are found on imona.com, an application
development platform (Application Platform as a Service, aPaaS) where developers
can not only create new applications but also extend the functionality of any
application already placed in its marketplace [4]. Imona.com is one type of aPaaS,
called metadata aPaaS [4]. Metadata aPaaS provides visual tools to customize data
models, application logic, workflow, and user interface. The underlying metadata
model of this aPaaS is essential to this research, as it provides us with a metamodel [9]
for reflecting on the network models of digital trace data discussed later on.
The dataset includes the list of metadata actions created by seven developers
developing a tutorial app. This tutorial app was chosen because, as the name suggests,
it is used for training purposes, it is simple enough to allow monitoring of all
development activities, and all steps are provided with visual guidance in a 42-minute
video. It should be noted that
even though the tutorial video provides clear guidance on how to develop the app, there is no
single pathway to follow while developing it. This gives us an opportunity to
compare development activities through the proposed network model. Another interesting
point is that only a brief verbal indication of where and what to develop is given in
advance, and no conceptual support (such as use cases or any documents) is provided
during the development activities. The application is based on three conceptual entities:
User, Post, and Comment. It is similar to typical online user-content-sharing apps, where
a new user is created so that the user can create a post and comment on a post.
Additionally, a user can search posts or comments. This app essentially consists of
three distinct pages, which are referred to as transient entities in the metadata action
descriptions.
While presenting the MAN, we use the tutorial app and refer to MVC to describe
metadata actions. We distinguish metadata actions that create nodes, actions that create
edges between nodes, and actions that create both nodes and edges. For each node we
use an MVC label as metadata; that is, a node can be a Model, a View, or a Controller.
An application is in general composed of several screens. For example, a developer
can add a new screen to an application by using a given service (in our case, an add
button labeled "add"). This act invokes the metadata action "CREATE_ENTITY",
which we model as the creation of a new network node of type M (Model). An
instantiation of this metadata can be "user". Another example is
"ADD_TERM_TO_ENTITY", which we model as the creation of a new network
edge. An instantiation of this metadata can be a form field, such as the "name" of the
"user" entity. Descriptions of the remaining metadata actions can be found in the
MAN Catalog in the Appendix.
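The mapping from metadata actions to network elements can be sketched as follows. The action-to-type entries are taken from the MAN Catalog excerpts; the dictionary-based graph representation and the `apply_action` helper are our own simplification:

```python
# Action-to-element mapping taken from the MAN Catalog excerpts; the
# dict/set graph representation is our own simplification.
NODE_ACTIONS = {               # actions creating a node, with its MVC type
    "CREATE_ENTITY": "M",
    "CREATE_TRANSIENT_ENTITY": "V",
    "CREATE_SCRIPT": "C",
}
EDGE_ACTIONS = {"ADD_TERM_TO_ENTITY"}  # actions creating an edge

nodes, edges = {}, set()

def apply_action(action, artifact, target=None):
    if action in NODE_ACTIONS:
        nodes[artifact] = NODE_ACTIONS[action]    # node labeled M, V, or C
    elif action in EDGE_ACTIONS and target is not None:
        edges.add(frozenset((artifact, target)))  # undirected binding

apply_action("CREATE_ENTITY", "user")                      # new M node
apply_action("ADD_TERM_TO_ENTITY", "name", target="user")  # new edge
print(nodes, edges)
```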
Table 1 summarizes basic network statistics of the cases under investigation. The
research question was to decide what should constitute the elements of a graph (what is
a node and what is an edge) and the graph representation itself (directed or undirected,
multigraph, weighted, bipartite). We have demonstrated that the proposed MAN is a
viable model of applications developed in an aPaaS environment, with promising outcomes.
Regarding the viability of the proposed network, the MAN of an application is able to
reveal the underlying MVC architectural paradigm [15]. That is, the three network layers
we observed (colored black, white, and grey) are in accordance with the underlying
architecture the platform provides (Fig. 4). The outermost layer has to be the View,
because this is what an end user interacts with. The middle layer corresponds to the
Controller, because this is the layer that bridges the gap between the Model and the
View layers. The innermost layer indicates the Model, where the data reside.
Fig. 4. MAN of the application developed by each group (panels C1-C7). For the graph layout,
ForceAtlas2 is used in Gephi [14]
Table 1. Comparison of different cases based on connected components and network diameter.

Case   # of CC   Diameter   Radius   Status of release candidate
C1     1         8          5        Beta
C2     1         8          5        Beta
C3     1         11         6        Alpha
C4     1         7          4        Alpha
C5     4         4          0        Premature
C6     1         9          5        Release candidate
C7     1         9          5        Release candidate
Regarding promising outcomes, one of the challenges a platform owner faces is
providing developers with a revision control system (RCS) [16]. Although it
may not be possible to provide an RCS in the same way as in a traditional IDE, we contend
that the MAN could be employed to provide a new kind of RCS. In conventional software
development the user can visually check whether all software components are
integrated, whereas on an aPaaS this is not the case. A MAN graph consisting of a
single connected component indicates that all software artefacts are interconnected. We
suggest that the total number of components can be used for revision control: if it is
more than one, some artefacts are not yet interconnected (so-called Premature, Alpha,
or Beta status) and the user should perform additional metadata actions to complete a
version. If there is only one component and the other analytics criteria are fulfilled,
then the developed application may deserve to be a Release Candidate.
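The component-count check can be sketched in pure Python; `release_status` is an illustrative helper name of ours, and the platform's actual analytics implementation is not shown in the paper:

```python
from collections import deque

def connected_components(nodes, edges):
    """Count connected components of an undirected MAN graph (BFS)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1  # new component discovered
        seen.add(start)
        queue = deque([start])
        while queue:
            for nxt in adj[queue.popleft()] - seen:
                seen.add(nxt)
                queue.append(nxt)
    return count

def release_status(n_components):
    # >1 component: some artefacts not yet interconnected (Premature/Alpha/Beta);
    # exactly 1 is a necessary condition for a Release Candidate
    return "candidate" if n_components == 1 else "incomplete"

print(connected_components({"a", "b", "c", "d"}, [("a", "b"), ("c", "d")]))  # 2
```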
Fig. 5. Degree distributions of each case (panels C1-C7)
Yet another promising outcome is that basic network statistics such as degree
distributions, network radius, and network diameter can be related to aPaaS user
performance analytics. Figure 5 depicts the degree distributions of each case. Visually,
one can see that the degree distributions of Case 7, Case 6, and to some extent Case 1
and Case 3 exhibit an approximate straight line on doubly logarithmic scales, which is
typical of real-world networks, whereas the degree distribution of Case 5 is clearly
distinct from a power law [19]. The MVC architectural paradigm constrains the
Diameter of a MAN to a certain size, which we believe is the number of screens
developed for an aPaaS application multiplied by the number of software architecture
layers (MVC), which is 3. So, for the case at hand, the Diameter has to be 9. When we
compare the 7 cases, two of them satisfy this result, which is another indicator that
they may be Release Candidates. The Radius provides another salient analytic, which
indicates whether the MVC layers are interconnected by short pathways. The middle
layer of the software architecture (the Controller layer) and the two layers it bridges,
namely the View and the Model layers, should be equally spaced. In line with this
argument, we suggest yet another formula: the Radius of the MAN of a Release
Candidate has to be approximately half its Diameter. For the app examined, this means
the Radius should be approximately 9/2 = 4.5. The observed Radius values for the
Release Candidates and the Beta cases, which are 5, conform to this formula.
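Diameter and radius can be computed from per-node eccentricities via breadth-first search. The path graph below is a toy stand-in for a MAN, chosen so that its diameter (9) and radius (5) match the Release Candidate values in Table 1:

```python
from collections import deque

def eccentricities(adj):
    """Shortest-path eccentricity of every node (unweighted BFS)."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc[src] = max(dist.values())
    return ecc

def diameter_radius(adj):
    ecc = eccentricities(adj).values()
    return max(ecc), min(ecc)

# a 10-node path: diameter 9 (= 3 screens x 3 MVC layers), radius 5
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 9} for i in range(10)}
print(diameter_radius(path))  # (9, 5)
```

Note that the radius of a path with diameter 9 is 5, the integer nearest to 4.5 that a shortest-path metric can attain, consistent with the observed values in Table 1.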
Appendix

MAN Catalog: metadata actions and network construction

Metadata action                            Node creation   Edge creation   Node type
ADD_TRANSIENT_FIELD_TO_TRANSIENT_ENTITY                    x
CREATE_ECONTAINER_COMPONENT                x                               M
CREATE_ENTITY                              x                               M
CREATE_LIST                                x                               M
CREATE_LIST_ITEM                           x                               M
CREATE_NEW_GLOBAL_FUNCTIOIN                x                               C
CREATE_NEW_MENU                            x                               V
CREATE_NEW_SUB_MENU                        x                               V
CREATE_REST_SERVICE                        x                               C
CREATE_SCRIPT                              x                               C
CREATE_TERM                                x                               M
CREATE_TRANSIENT_ENTITY                    x                               V
References
1. Armbrust, M., Fox, A., Griffith, R., et al.: A view of cloud computing. Commun. ACM 53
(4), 50–58 (2010)
2. Beimborn, D., Miletzki, T., Wenzel, S.: Platform as a service (PaaS). Bus. Inf. Syst. Eng. 3
(6), 381–384 (2011)
3. Teixeira, C., Pinto, J.S., Azevedo, R., et al.: The building blocks of a PaaS. J. Netw. Syst.
Manag. 22(1), 75–99 (2014)
4. Aydin, M.N., Perdahci, N.Z., Odevci, B.: Cloud-based development environments: PaaS. In:
Encyclopedia of Cloud Computing, p. 62 (2016)
5. Vespignani, A.: Twenty years of network science. Nature 558, 528 (2018)
6. Bezemer, C.P., Zaidman, A., Platzbeecker, B., et al.: Enabling multi-tenancy: an industrial
experience report. In: Proceedings of the 2010 IEEE International Conference on Software
Maintenance, September 2010, pp. 1–8. IEEE (2010)
7. Premkumar, G., Potter, M.: Adoption of computer aided software engineering (CASE)
technology: an innovation adoption perspective. ACM SIGMIS Database: DATABASE
Adv. Inf. Syst. 26(2–3), 105–124 (1995)
8. Henkel, M., Stirna, J.: Pondering on the key functionality of model driven development
tools: the case of mendix. In: International Conference on Business Informatics Research.
Springer, Heidelberg (2010)
9. Aydin, M.N., Kariniauskaite, D., Perdahci, N.Z.: Validity issues of digital trace data for
platform as a service: a network science perspective. In: World Conference on Information
Systems and Technologies, pp. 654–664. Springer, Cham (2018)
10. Howison, J., Wiggins, A., Crowston, K.: Validity issues in the use of social network analysis
with digital trace data. J. Assoc. Inf. Syst. 12(12), 767 (2011)
11. Barabási, A.L.: Network Science. Cambridge University Press, Cambridge (2016)
12. Borgatti, S.P., Mehra, A., Brass, D.J., Labianca, G.: Network analysis in the social sciences.
Science 323(5916), 892–895 (2009)
13. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.U.: Complex networks:
structure and dynamics. Phys. Rep. 424(4), 175–308 (2006)
14. Bastian, M., Heymann, S., Jacomy, M.: Gephi: an open source software for exploring and
manipulating networks. In: The Proceedings of the Third International ICWSM Conference
ICWSM, San Jose, California, pp. 361–362. AAAI Press, Menlo Park (2009)
15. Leff, A., Rayfield, J.T.: Web-application development using the model/view/controller
design pattern. In: Proceedings of the Fifth IEEE International Enterprise Distributed Object
Computing Conference, pp. 118–127. IEEE, September 2001
16. Karsai, G., Sztipanovits, J., Ledeczi, A., Bapty, T.: Model-integrated development of
embedded software. Proc. IEEE 91(1), 145–164 (2003)
17. Giessmann, A., Stanoevska-Slabeva, K.: What are developers’ preferences on platform as a
service? An empirical investigation. In: Forty-Sixth Hawaii International Conference on
System Sciences, January 2013, pp. 1035–1044. IEEE (2013)
18. Demaine, E.D., Reidl, F., Rossmanith, P., Villaamil, F.S., Sikdar, S., Sullivan, B.D.:
Structural sparsity of complex networks: Bounded expansion in random models and real-
world graphs. J. Comput. Syst. Sci. 105, 199–241 (2019)
19. Clauset, A., Shalizi, C.R., Newman, M.E.: Power-law distributions in empirical data. SIAM
Rev. 51(4), 661–703 (2009)
20. Newman, M.E.: Assortative mixing in networks. Phys. Rev. Lett. 89(20), 208701 (2002)
Clustering Foursquare Mobility Networks
to Explore Urban Spaces
1 Introduction
Location data became ubiquitous due to the global adoption of smartphones, the
worldwide availability of GPS, and advanced location-based applications. The
value of such data is immense, as they contain the spatio-temporal patterns of a
massive number of people. Among popular location-based applications is Foursquare,
a social network application founded in 2009. In the application, users are able
to notify their friends about their current location through check-ins, for which
they can receive virtual rewards. Apart from that, it allows users to leave a note
about their experience at a specific venue, which can be utilized for building a
recommendation system. With its initiatives to open some of the data it collects,
Foursquare has attracted researchers to explore this rich source of information,
evaluate its potential for understanding social behaviour and mobility, and propose
location intelligence services.
Many research efforts have been dedicated to the analysis of Foursquare data, and
interesting patterns have been discovered. Preoţiuc-Pietro and Cohn applied k-means
clustering to users and used the result to predict users' future movements [12].
c The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 544–553, 2020.
https://doi.org/10.1007/978-3-030-45697-9_53
Joseph et al. clustered users via topic modeling, an approach usually used to
classify text documents according to latent themes [7]. In contrast, Cranshaw et al.
applied a clustering algorithm to venues, considering their spatial and social
characteristics [3]. Pang and Zhang applied the PageRank and HITS algorithms to
Foursquare data for friendship prediction and location recommendation [11]. D'Silva
et al. used Foursquare data and machine learning to predict crime [5], and Noulas
et al. used machine learning on the data to predict the next venue a user will
visit [10]. Yang et al. explored the tourist-functional relations between different
POI types present in Foursquare data for the city of Barcelona [15]. Moreover,
researchers have compared Foursquare data with data from other location-based
services (LBS) to check the similarity of patterns, the validity of check-ins,
etc. [13, 16]. Foursquare data were also utilized to characterize competition
between new and existing venues [4] by measuring the change in throughput of a
venue before and after the opening of a new nearby venue.
Our study is part of a wider initiative, the Future Cities Challenge, launched
by Foursquare, which provided data to selected participants. The following sections
describe the data, our research questions, methods, and the obtained results.
The Future Cities Challenge included two types of data from Foursquare for ten
cities (Chicago, Istanbul, Jakarta, London, Los Angeles, New York, Paris, Seoul,
Singapore, and Tokyo), provided in textual format. The first type provides venue
information, where each line describes a venue with its id, name, coordinates, and
venue category. The second type contains movement information, where each line
corresponds to an edge between a pair of venues, the month and year for which
movements were aggregated for the given venue pair, and the period of the day1.
The last number in the line represents the weight, which reflects the number of
check-ins that took place by any user for the given venue pair. The number of
venues differs per city; Istanbul has the highest number of venues, followed by
Tokyo and New York (Fig. 1).
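A parsing sketch for one movements line follows; the tab delimiter and exact field order are assumptions, since the challenge data are only described as textual with the weight as the last number:

```python
def parse_movement(line):
    """Parse one movements line into (venue_a, venue_b, month, period, weight).
    Delimiter and field order are assumptions about the challenge format."""
    venue_a, venue_b, month, period, weight = line.strip().split("\t")
    return venue_a, venue_b, month, period, int(weight)

# hypothetical venue ids, not real challenge data
print(parse_movement("venueA\tvenueB\t2017-04\tmorning\t12"))
```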
Anonymized and aggregated location-visit data provide an opportunity to study
cities as complex systems. In this study we explored how venues cluster based on
mobility flows, what the semantic content of the clusters is, and how cities compare
to each other in terms of the semantic content of the detected clusters. All of
these topics are relevant for building a recommendation system that matches the
preferences of a user arriving in a city to groups of venues that can jointly
offer content.
1 Overnight (between 00:00:00 and 05:59:59), morning (between 06:00:00 and
09:59:59), midday (between 10:00:00 and 14:59:59), afternoon (between 15:00:00 and
18:59:59), and night (between 19:00:00 and 23:59:59).
546 O. Novović et al.
In this work, we focus on clustering venues based on movements across the
city in order to quantify venue grouping through time and thus inspect urban dynamics.
Due to the size and complexity of the input data, we decided to use the Apache
Spark platform for distributed processing to perform the graph clustering analysis.
Apache Spark is a unified distributed engine with a rich and powerful API for
Scala, Python, Java, and R [8]. Graphs are built from the input Foursquare data
on a monthly basis, where venues represent the nodes of the graph and aggregated
movements between two consecutive venues represent the edges. If a movement
occurred more than once during different days or day-time periods, the weights
are aggregated, so the final graph has unique edges over one month. To
cluster movements across the city we used the Louvain algorithm [2], which has proved
very efficient when working with large, complex graphs [14].
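The per-month weight aggregation described above can be sketched as follows, assuming undirected venue pairs and already-parsed movement records (the record layout is our assumption):

```python
from collections import defaultdict

def monthly_graphs(movements):
    """Aggregate parsed (venue_a, venue_b, month, period, weight) records
    into one weighted, undirected edge per venue pair per month."""
    graphs = defaultdict(lambda: defaultdict(int))
    for a, b, month, _period, w in movements:
        graphs[month][frozenset((a, b))] += w  # sum weights over days/periods
    return graphs

moves = [("v1", "v2", "2017-04", "morning", 3),
         ("v2", "v1", "2017-04", "night", 2),
         ("v1", "v2", "2017-05", "midday", 1)]
g = monthly_graphs(moves)
print(g["2017-04"][frozenset(("v1", "v2"))])  # 5
```

The resulting per-month weighted edge lists are then the input to the community-detection (Louvain) step.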
boundary of the clusters, which may act as brokers between the modules and,
in that case, could play a major role both in holding the modules together and
in the dynamics of spreading processes across the network. In the context of
location-based social networks, detecting a cluster means detecting a group of
venues that are frequently visited together by users. Detecting such places could
give us better insight into urban dynamics and the evolution of cities.
Graphs made from Foursquare movement data are massive and prone to
dynamical evolution, since the structure of a location-based social network
changes very fast. When choosing an optimal algorithm to perform clustering over
movement graphs, we need to focus on two major issues: i) the algorithmic
techniques applied must scale well with respect to the size of the data, which means
that the algorithmic complexity should stay below O(n²) (where n is the number
of graph nodes); and ii) since the number of clusters is unknown in advance, the
algorithms used must be flexible enough to infer the number of clusters during the
course of the algorithm. To meet these requirements, the authors of [14] proposed
applying the modularity-based algorithm described
in [2]. This algorithm is based on the concept of modularity [9], presented in
Eq. (1), where $A_{ij}$ is the weight of the edge connecting the $i$-th and the
$j$-th node of the graph, $k_i = \sum_j A_{ij}$ is the sum of the weights of the
edges attached to the $i$-th node, $c_i$ is the cluster to which the $i$-th node
is assigned, $m = \frac{1}{2}\sum_{i,j} A_{ij}$, and $\delta(c_i, c_j)$ is 1 if
nodes $i$ and $j$ are assigned to the same cluster and zero otherwise.

$$Q = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j) \qquad (1)$$
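The modularity definition above can be checked on a toy graph; for two disconnected dyads, the perfect two-cluster split yields the well-known maximum Q = 0.5:

```python
def modularity(A, clusters):
    """Modularity Q of Eq. (1): A is a symmetric weight matrix and
    clusters[i] is the cluster label of node i."""
    n = len(A)
    m = sum(A[i][j] for i in range(n) for j in range(n)) / 2.0
    k = [sum(row) for row in A]  # weighted degree of each node
    q = 0.0
    for i in range(n):
        for j in range(n):
            if clusters[i] == clusters[j]:  # delta(c_i, c_j) = 1
                q += A[i][j] - k[i] * k[j] / (2 * m)
    return q / (2 * m)

# two disconnected dyads: 0-1 and 2-3
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
print(modularity(A, [0, 0, 1, 1]))  # 0.5
```

The Louvain algorithm greedily moves nodes between clusters to increase exactly this quantity.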
are formed with a smaller number of venues that are densely grouped together,
compared to the peripherally located clusters, which have many venues widely
distributed in space.
Travel & Transport, and Other. The category Other is used for venues that do not
fit any of the main categories.
We calculated the percentage of each category present in a cluster. Each city
has a unique digital footprint of the categories that are dominant across its clusters.
Some cities have similar patterns, while for others significantly different clusters
emerged.
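The per-cluster category percentages can be computed with a simple counter (the venue list below is hypothetical, not the challenge data):

```python
from collections import Counter

def category_profile(venue_categories):
    """Percentage of each top-level category among a cluster's venues."""
    counts = Counter(venue_categories)
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

# hypothetical cluster of four venues
profile = category_profile(["Food", "Food", "Shop & Service",
                            "Travel & Transport"])
print(profile["Food"])  # 50.0
```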
4 Results
Clustering was performed over the graphs made from the movement data, for each
city and for each month between 2017-04 and 2019-03. The results obtained from
clustering show high diversity between cities, and even between months for the same
city. To get a better insight into the variability of clusters across the cities, we
present the scatter plot in Fig. 3. The plot shows the dependency between the average
number of clusters and their size in each city per month. From Fig. 3 we can
notice the cluster variation by month within one city, as well as the variation
between cities. Although Istanbul has the highest number of clusters, they are
relatively small. The opposite pattern can be noted for the city of Paris, in which
clusters are relatively large but fewer compared to Istanbul. Moreover, some
similarities between cities can be observed: the cities of Chicago and Los Angeles
have a relatively similar number and size of clusters.
Furthermore, we explored which categories are present in the clusters. We
selected the largest clusters in each city, those consisting of more than 50 venues,
and calculated the percentage of occurrences of each category. The presence,
variation, and distribution of categories inside clusters can give us valuable
input about the semantics of venues that are strongly connected by user movement.
Figure 4 presents the largest cluster in the city of Chicago, classified by category.
From Fig. 4 we can notice a high variety of categories, where the most present
category is Food, followed by Shop & Service. This implies that people in this
cluster generally move between places related to food and shopping.
In the city of Chicago, another large cluster is formed around O'Hare International
Airport, in which the most present categories are Travel & Transport and Food.
From Fig. 5 we can notice how the venues are spread in an almost regular form
following the Interstate 90 road, one of the main highways in the State of Illinois.
As can be seen, clusters are formed around spatially close or well-connected places,
with some categories frequently occurring together. Consequently, we can classify
clusters by the dominant presence of one, two, or even more categories.
To obtain a global view and compare the cities, we performed hierarchical clustering
based on the average profile of the probability distribution of categories. The result
of hierarchical clustering, in the form of a dendrogram (Fig. 6), showed which cities
are similar, with the colors of branches indicating how we could group them.
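A naive average-linkage agglomeration illustrates the dendrogram computation; the two-dimensional category profiles below are invented for illustration and are not the challenge data:

```python
import math

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def agglomerate(profiles, n_groups):
    """Average-linkage agglomerative clustering of city category profiles."""
    clusters = [[name] for name in profiles]

    def dist(c1, c2):  # average pairwise distance between two clusters
        return sum(euclid(profiles[a], profiles[b])
                   for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > n_groups:
        # merge the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

profiles = {  # hypothetical (Food, Shop & Service) category shares
    "Chicago": (0.40, 0.30), "New York": (0.42, 0.28),
    "Paris": (0.55, 0.15), "London": (0.53, 0.17),
}
print(agglomerate(profiles, 2))
```

Recording the merge order and merge distances (rather than stopping at a fixed group count) is what produces the full dendrogram.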
Fig. 6. Hierarchical tree presenting the similarity and diversity between cities

From Fig. 6 we can notice a high similarity between the US cities Chicago, New
York, and Los Angeles, and between the European cities Paris and London, while
Tokyo, with its community profiles, stands between the US and the European cities.
Another group of similar cities includes Istanbul, Jakarta, and Singapore, while
Seoul has a unique pattern, completely different from all the other cities. To provide
more detail, we present the semantic profiles of venue clusters for four different
cities: Istanbul, Seoul, Chicago, and Tokyo (Fig. 7). From visual inspection of the
profiles we can notice that the category Food has a high peak in each city, while the
category Residence has a low peak. Through a comparative analysis of the profiles
we can notice some general trends related to category variability between clusters.
We can conclude that in Chicago and Tokyo people are very likely to move between
venues related to the categories Food and Shop & Services. In Istanbul people are
very likely to move between venues related to the categories Food, Shop & Services,
and Professional & Other, while Seoul shows a strong dominance of the Food
category. More specific city profiling is possible by exploring the more detailed
subcategories present in the clusters.
5 Conclusions
Mobility networks generated by users are a valuable data source for exploring
urban spaces. By performing clustering over graphs made from mobility data, we
gain deeper knowledge about the grouping of mobility flows. With further
investigation of venue semantics inside clusters, we can detect location types and
categories that are frequently visited together by users. Detecting relations between
clusters and between the venues inside a cluster can help in building a recommendation
application that would serve users visiting new cities, based on their preferences.
The majority of venues in a cluster are either spatially close or well connected by
transport infrastructure, indicating that users tend to move between locations within
a limited spatial distance, forming in this way urban sub-spaces. Knowledge about
urban sub-spaces that stand out as entities could be very valuable input for urban
policy making and development, and also for developing new services.
The data set provided in the Future Cities Challenge can be analysed in more
detail. For future work we plan to perform clustering at a higher time resolution
(daily, including day periods such as morning, midday, afternoon, night, and
overnight) to get more detailed insights into evolving patterns in the clustering
results.
References
1. Foursquare categories. https://developer.foursquare.com/docs/api/venues/categories.
Accessed 20 May 2019
2. Blondel, V.D., Guillaume, J.-L., Lambiotte, R., Lefebvre, E.: Fast unfolding of
communities in large networks. J. Stat. Mech: Theory Exp. 2008(10), P10008
(2008)
3. Cranshaw, J., Schwartz, R., Hong, J., Sadeh, N.: The livehoods project: utilizing
social media to understand the dynamics of a city. In: Sixth International AAAI
Conference on Weblogs and Social Media (2012)
4. Daggitt, M.L., Noulas, A., Shaw, B., Mascolo, C.: Tracking urban activity growth
globally with big location data. R. Soc. Open Sci. 3(4), 150688 (2016)
5. D’Silva, K., Noulas, A., Musolesi, M., Mascolo, C., Sklar, M.: If i build it, will
they come?: Predicting new venue visitation patterns through mobility data. In:
Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances
in Geographic Information Systems, p. 54. ACM (2017)
6. Harush, U., Barzel, B.: Dynamic patterns of information flow in complex networks.
Nat. Commun. 8(1), 2181 (2017)
7. Joseph, K., Tan, C.H., Carley, K.M.: Beyond “local”, “categories” and “friends”:
clustering foursquare users with latent “topics”. In: UbiComp (2012)
8. Karau, H., Konwinski, A., Wendell, P., Zaharia, M.: Learning Spark: Lightning-
Fast Big Data Analytics, 1st edn. O’Reilly Media, Inc., Sebastopol (2015)
9. Newman, M.E.J., Girvan, M.: Finding and evaluating community structure in net-
works. Phys. Rev. E 69, 026113 (2004)
10. Noulas, A., Scellato, S., Lathia, N., Mascolo, C.: Mining user mobility features for
next place prediction in location-based services. In: 2012 IEEE 12th International
Conference On Data Mining, pp. 1038–1043. IEEE (2012)
11. Pang, J., Zhang, Y.: Quantifying location sociality. In: Proceedings of the 28th
ACM Conference on Hypertext and Social Media, pp. 145–154. ACM (2017)
12. Preoţiuc-Pietro, D., Cohn, T.: Mining user behaviours: a study of check-in patterns
in location based social networks. In: Proceedings of the 5th Annual ACM Web
Science Conference, WebSci 2013, New York, NY, USA, pp. 306–315. ACM (2013)
13. Silva, T.H., Vaz de Melo, P.O., Almeida, J.M., Salles, J., Loureiro, A.A.: A com-
parison of foursquare and instagram to the study of city dynamics and urban social
behavior. In: Proceedings of the 2nd ACM SIGKDD International Workshop on
Urban Computing, p. 4. ACM (2013)
14. Truică, C.-O., Novović, O., Brdar, S., Papadopoulos, A.N.: Community detection
in who-calls-whom social networks. In: International Conference on Big Data Ana-
lytics and Knowledge Discovery, pp. 19–33. Springer (2018)
15. Yang, L., Durarte, C.M.: Identifying tourist-functional relations of urban places
through foursquare from Barcelona. GeoJournal (2019)
16. Zhang, Z., Zhou, L., Zhao, X., Wang, G., Su, Y., Metzger, M., Zheng, H., Zhao,
B.Y.: On the validity of geosocial mobility traces. In: Proceedings of the Twelfth
ACM Workshop on Hot Topics in Networks, p. 11. ACM (2013)
Innovative Technologies Applied to
Rural Regions
The Influence of Digital Marketing Tools
Perceived Usefulness in a Rural Region
Destination Image
1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 557–569, 2020.
https://doi.org/10.1007/978-3-030-45697-9_54
558 F. Jorge et al.
H3a: Tourists’ attitude toward tourism digital marketing tools has a positive effect
on website perceived usefulness.
H3b: Tourists’ attitude toward tourism digital marketing tools has a positive effect
on e-mail perceived usefulness.
H3c: Tourists’ attitude toward tourism digital marketing tools has a positive effect
on s-WOM perceived usefulness.
H3d: Tourists’ attitude toward tourism digital marketing tools has a positive effect
on booking perceived usefulness.
H3e: Tourists’ attitude toward tourism digital marketing tools has a positive effect
on mobile devices perceived usefulness.
Tourists’ consumption behavior is changing: they require more information, are
becoming more independent, and purchase tourism products through multiple
channels [14]. Travel agencies were the most affected by the introduction of
e-commerce in the tourism industry [21], because these changes in tourist behavior
may represent a challenge to travel agencies [12, 14]. Tourists’ motivations to use
travel agencies or online platforms differ: tourists who use travel agencies require a
personalized service and are generally more traditional [21, 22]. If tourists have a
positive attitude toward tourism digital marketing tools, it is expected that they
would perceive travel agencies as less useful for searching for information about,
planning, or purchasing a destination. Therefore, the hypothesis below is suggested:
H3f: Tourists’ attitude toward tourism digital marketing tools has a negative effect
on travel agencies perceived usefulness.
According to [22], the destination image is defined as “the perceptions held by
potential visitors about a destination” (p. 1). Tourism destination image research has
been conducted for many years [23], and several studies have demonstrated that
destination image can be analyzed pre- and post-visit [4, 24]. Some destination image
formation models recognize information sources as one of the determinants influencing
tourists’ destination image, which means that tourists use information sources to form
perceptions and evaluations of a destination [1, 24].
Several digital marketing tools are represented in the conceptual model through their
perceived usefulness in the Douro destination travel decision, from the tourists’ point
of view. All these tools are communication channels that destinations use to promote
or distribute their tourism products, representing different information sources for
potential tourists [11, 12]. In addition, a conventional information source, travel
agencies’ perceived usefulness, was included to analyze an opposite type of information
source. Considering this, we propose to evaluate the positive impact that the perceived
usefulness of each digital marketing tool used to search for, plan, or purchase the
Douro destination would have on this destination’s image. On the other hand, we also
evaluate whether the perceived usefulness of travel agencies for searching for
information about, planning, or purchasing the Douro destination has a positive impact
on this destination’s image. Therefore, the following hypotheses are proposed:
H4a: Website perceived usefulness has a positive impact on destination image.
H4b: e-mail perceived usefulness has a positive impact on destination image.
H4c: s-WOM perceived usefulness has a positive impact on destination image.
The Influence of Digital Marketing Tools 561
3 Methodology
In order to conduct this research, a survey method was implemented and quantitative
methods were applied. A cross-sectional design was used, and the primary data were
obtained through a questionnaire administered in person during the tourists'
stay in the Douro destination. The tourists visiting this region constitute the research
population, and the sample is composed of 555 tourists who stayed in the Douro for at
least two days. The sample is stratified by tourists' country of origin and county of stay,
and the data were collected in the summer of 2018. The sample profile is presented in Table 2.
In the questionnaire, the constructs used were based on previous empirical literature
and measured on a seven-point Likert scale ranging from 1 ("strongly disagree") to 7
("strongly agree").
562 F. Jorge et al.
4 Results
The conceptual model presented above comprises nine constructs, eight of them
reflective and one, destination image, formative. To evaluate the formative construct's
measurement model, it is necessary to take into consideration the outer weights
and loadings of each indicator and their VIFs, expressed in Table 2. Three indicators
have non-significant outer weights, but as these three indicators have significant loadings
greater than 0.5, they demonstrate their relevance and significance to the
construct. These indicators' VIF values are smaller than 3.0, which demonstrates that
there are no collinearity issues [27].
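The VIF screening described above can be sketched in a few lines. The following is a generic illustration in Python of how a variance inflation factor is computed from an indicator matrix, not output from the SmartPLS analysis used in the paper; the data and thresholds are only for demonstration:

```python
import numpy as np

def vif(indicators):
    """Variance inflation factor for each column of an (n, k) indicator
    matrix: VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    indicator j on the remaining indicators."""
    x = np.asarray(indicators, dtype=float)
    n, k = x.shape
    out = []
    for j in range(k):
        y = x[:, j]
        others = np.delete(x, j, axis=1)
        a = np.column_stack([np.ones(n), others])  # add an intercept column
        coef, *_ = np.linalg.lstsq(a, y, rcond=None)
        resid = y - a @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out
```

A VIF close to 1 means an indicator shares little variance with the others; values above the 3.0 threshold cited from [27] would signal collinearity.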
Table 5. (continued)
DI WB EM s-WOM BK MB TA ATT TR
WB1 0.498 0.879 0.159 0.481 0.543 0.559 −0.244 0.476 0.457
WB2 0.518 0.898 0.216 0.526 0.579 0.586 −0.224 0.493 0.463
WB3 0.508 0.888 0.185 0.531 0.567 0.597 −0.222 0.527 0.495
WB4 0.494 0.894 0.176 0.520 0.554 0.604 −0.259 0.499 0.495
WB5 0.521 0.876 0.193 0.537 0.561 0.580 −0.261 0.513 0.528
EM1 0.152 0.191 0.951 0.394 0.338 0.115 0.000 0.177 0.245
EM2 0.144 0.197 0.943 0.394 0.308 0.099 −0.027 0.175 0.225
EM3 0.120 0.197 0.940 0.404 0.273 0.111 0.003 0.179 0.230
EM4 0.149 0.210 0.952 0.376 0.332 0.136 0.005 0.165 0.237
s-WOM1 0.422 0.550 0.373 0.926 0.504 0.447 −0.220 0.419 0.430
s-WOM2 0.377 0.517 0.380 0.908 0.486 0.383 −0.246 0.399 0.370
s-WOM3 0.360 0.550 0.388 0.927 0.506 0.427 −0.225 0.413 0.394
s-WOM4 0.389 0.557 0.400 0.922 0.539 0.422 −0.249 0.398 0.373
s-WOM5 0.390 0.526 0.368 0.926 0.484 0.427 −0.225 0.411 0.387
BK1 0.453 0.602 0.329 0.510 0.935 0.514 −0.273 0.412 0.445
BK2 0.461 0.605 0.298 0.505 0.938 0.526 −0.277 0.400 0.431
BK3 0.451 0.573 0.311 0.511 0.930 0.513 −0.276 0.398 0.455
BK4 0.460 0.578 0.297 0.510 0.923 0.517 −0.292 0.399 0.419
MB1 0.546 0.601 0.101 0.398 0.502 0.934 −0.219 0.522 0.502
MB2 0.569 0.628 0.099 0.447 0.539 0.932 −0.231 0.560 0.564
MB3 0.543 0.600 0.128 0.424 0.517 0.924 −0.225 0.547 0.545
MB4 0.548 0.628 0.125 0.432 0.509 0.934 −0.209 0.527 0.537
TA1 −0.269 −0.273 −0.021 −0.247 −0.299 −0.239 0.978 −0.158 −0.132
TA2 −0.279 −0.280 −0.004 −0.252 −0.298 −0.239 0.978 −0.169 −0.127
TA3 −0.256 −0.247 −0.008 −0.244 −0.284 −0.223 0.972 −0.147 −0.121
TA4 −0.267 −0.266 0.014 −0.243 −0.290 −0.227 0.980 −0.149 −0.128
ATT1 0.415 0.461 0.200 0.392 0.364 0.483 −0.124 0.898 0.609
ATT2 0.440 0.569 0.131 0.408 0.391 0.520 −0.154 0.878 0.604
ATT3 0.447 0.470 0.138 0.374 0.368 0.530 −0.117 0.849 0.570
ATT4 0.446 0.482 0.177 0.380 0.393 0.502 −0.163 0.886 0.597
TR1 0.383 0.444 0.215 0.327 0.386 0.480 −0.057 0.575 0.861
TR2 0.431 0.509 0.215 0.396 0.445 0.536 −0.169 0.626 0.917
TR3 0.451 0.533 0.239 0.421 0.438 0.543 −0.121 0.631 0.927
estimate the validity of the constructs. To test significance, a bootstrapping technique
was used, with 555 individuals, 500 subsamples, and no sign change.
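The bootstrapping step can be sketched as follows. This is a minimal percentile-bootstrap illustration in Python that uses a Pearson correlation as a stand-in for a PLS path coefficient; the actual analysis was run in SmartPLS [25], and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_significance(x, y, n_subsamples=500):
    """Estimate a statistic (here a correlation, standing in for a path
    coefficient) and its bootstrap t-value: estimate / bootstrap SE."""
    n = len(x)
    estimate = np.corrcoef(x, y)[0, 1]
    boot = np.empty(n_subsamples)
    for b in range(n_subsamples):
        idx = rng.integers(0, n, size=n)  # resample n individuals with replacement
        boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    t_value = estimate / boot.std(ddof=1)
    return estimate, t_value

# Illustrative data: 555 "tourists" with a moderate positive relationship.
x = rng.normal(size=555)
y = 0.4 * x + rng.normal(size=555)
est, t = bootstrap_significance(x, y)
```

A |t| above 1.96 corresponds to significance at the 5% level, the usual cut-off for the path coefficients reported in Table 6.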
The proposed model explains 42.3% of the total variance in the destination image.
The estimated path coefficients and their significance levels can be observed in Table 6.
Almost all hypotheses were supported, with three exceptions: H3b, H4b, and H4c.
Concerning H3b, this hypothesis was not significant, which means that tourists'
attitude toward tourism digital marketing tools had no effect on their perceived
usefulness of the e-mails received during the information search, planning, and
purchasing process for the Douro destination. Hypothesis H4b was also not supported
by our data, verifying that the perceived usefulness of the e-mails received about the
Douro destination during the information search, planning, or purchasing process had
no impact on this destination's image. Possible explanations for these results are that
tourists did not receive e-mails about the Douro destination, or that they perceive any
e-mails they did receive as unimportant, and therefore do not value the information
presented in e-mails in their destination choice process, or in forming their perceptions
of the destination, as much as they value other tourism digital marketing tools.
The result for hypothesis H4c reveals that s-WOM has no significant effect on
destination image, which means that social media content about the destination has
little influence on Douro destination image formation, although several previous
empirical studies have verified this influence [31, 32]. A possible explanation for this
result is that the Douro destination is promoted by a DMO responsible for other
territories, with little content on its social media pages related specifically to the Douro
destination [33].
Hypothesis H4f was significant, but not with the positive effect hypothesized in
the conceptual model. This result evidences the opposite influence of perceived
usefulness on destination image between digital marketing tools and travel agencies.
This negative effect can be explained by our sample's low level of travel agencies'
perceived usefulness for searching for information about, planning, and purchasing the
Douro destination, because most individuals habitually purchase tourism products
online, as can be verified in Table 1. Another justification for this result may lie in the
Douro destination's dimension: because this research focuses on a small, rural
destination, it can be difficult to find a diverse offer of its tourism products in travel
agencies.
In Table 7, indirect effects and their significances are presented.
5 Final Considerations
Rural destinations use the same information sources to promote themselves as
larger destinations, such as capital cities. However, these larger destinations enjoy
more awareness than rural destinations. Therefore, rural destinations must stand out
in order to be valued and purchased by potential tourists. To accomplish these
objectives, technology in general, and digital marketing tools in particular, can be of
great importance, because they can act as a trigger for the desired tourist behavior and
allow wider diffusion to potential tourists at a lower cost, compensating for the smaller
size and marketing capacity of rural destinations.
This research provides a theoretical contribution by revealing the influence that
the perceived usefulness of some digital marketing tools, in particular websites,
booking platforms, and mobile devices, has on destination image. On the other hand,
the perceived usefulness of travel agencies as an information source about the Douro
destination had a significant relation with this rural destination's image, but, contrary
to expectations, a negative one. This is a contribution to the discussion about travel
agencies' role in the decision process in the internet era and their influence on
destination image formation, because today's tourists are accustomed to using online
platforms and tools to search for information about, plan, and purchase destinations.
Moreover, tourists' trust in digital marketing tools in the tourism product purchase
process, and their attitude toward these tools, influenced the perceived usefulness of
the individual tools analyzed in this research, except for e-mail. This exception may
suggest that tourists are weary of receiving e-mails that promote tourism products,
and thus no longer recognize e-mail as being as useful as other digital marketing tools.
Trust and attitude toward tourism digital marketing tools reveal indirect effects on
Douro destination image. This last result seems to indicate that tourists' trust in, and
positive attitude toward, these tools in general, during their information search,
planning, and purchasing process for the Douro destination, influence its image
through the perceived usefulness of some of these tools.
As practical contributions, the results about the influence of digital marketing
tools' perceived usefulness on Douro destination image indicate that private and
public organizations involved in tourism destination marketing must improve their
digital marketing strategies for destination communication processes. The use of
innovative technologies associated with these digital marketing tools allows the
creation of rural destination differentiation. In particular, the evidence about the
impact that digital marketing tools such as websites, booking platforms, and mobile
devices can have on destination image indicates the increasing importance that these
technologies must have in communication strategies and actions.
Besides, non-significant or negative results can also provide some indications to
practitioners. For instance, the non-significant influence of s-WOM on Douro
destination image indicates that public or private entities responsible for promoting
tourism products or destinations should be concerned with having an active presence
on social media platforms. These entities should encourage tourists who have recently
visited the destination to publish content about it, through the use of destination
labels such as hashtags, and should sensitize tourists during their stay to the
importance of creating and sharing positive content related to the destination or some
of its tour operators. On the other hand, the negative result about the influence of
travel agencies on Douro destination image indicates that destinations should also
work with travel agencies to make their tourism products available and to promote
them to their customers. Destination management organizations or operators may also
provide information or training to travel agencies, so that they have accurate and
updated knowledge about the destination, which may improve its image as perceived
by their customers.
For this research, data were collected at only one moment and, for that reason, we
cannot explore the evolution of tourists' perceptions of destination image. Future
studies should consider analyzing that evolution across time in longitudinal research,
or when tourists are exposed to different technological information sources.
Furthermore, future research should provide deeper analysis to understand the reasons
or motivations that explain why some digital marketing tools have such influence on
rural destination image, as was the case for the Douro.
References
1. Baloglu, S., McCleary, K.W.: A model of destination image formation. Ann. Tour. Res. 26
(4), 868–897 (1999)
2. Isaac, R.K., Eid, T.A.: Tourists’ destination image: an exploratory study of alternative
tourism in Palestine. Curr. Issues Tour. 22(12), 1499–1522 (2019)
3. Wu, C.W.: Destination loyalty modeling of the global tourism. J. Bus. Res. 69(6), 2213–
2219 (2016)
4. Zhang, H., Fu, X., Cai, L.A., Lu, L.: Destination image and tourist loyalty: a meta-analysis.
Tour. Manag. 40(February), 213–223 (2014)
5. Martins, J., Gonçalves, R., Branco, F., Barbosa, L., Melo, M., Bessa, M.: A multisensory
virtual experience model for thematic tourism: a port wine tourism application proposal.
J. Destin. Mark. Manag. 6(2), 103–109 (2017)
6. Agag, G.M., El-Masry, A.A.: Why do consumers trust online travel websites? Drivers and
outcomes of consumer trust toward online travel websites. J. Travel Res. 56(3), 347–369
(2016)
7. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
8. Bonsón Ponte, E., Carvajal-Trujillo, E., Escobar-Rodríguez, T.: Influence of trust and
perceived value on the intention to purchase travel online: integrating the effects of assurance
on trust antecedents. Tour. Manag. 47, 286–302 (2015)
9. Ayeh, J.K., Au, N., Law, R.: Investigating cross-national heterogeneity in the adoption of
online hotel reviews. Int. J. Hosp. Manag. 55, 142–153 (2016)
10. Buhalis, D., Law, R.: Progress in information technology and tourism management: 20 years
on and 10 years after the Internet-The state of eTourism research. Tour. Manag. 29(4), 609–
623 (2008)
11. Navío-Marco, J., Ruiz-Gómez, L.M., Sevilla-Sevilla, C.: Progress in information technology
and tourism management: 30 years on and 20 years after the internet - Revisiting Buhalis &
Law’s landmark study about eTourism. Tour. Manag. 69, 460–470 (2018)
12. Del Chiappa, G., Alarcón-Del-Amo, M.-D.-C., Lorenzo-Romero, C.: Internet and user-
generated content versus high street travel agencies: a latent gold segmentation in the context
of Italy. J. Hosp. Mark. Manag. 25(2), 197–217 (2016)
13. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance. MIS Q. 13(3),
319–339 (1989)
14. Rajaobelina, L.: The impact of customer experience on relationship quality with travel
agencies in a multichannel environment. J. Travel Res. 57(2), 206–217 (2018)
15. Sahli, A.B., Legohérel, P.: The tourism Web acceptance model. J. Vacat. Mark. 22(2), 179–
194 (2016)
16. Agag, G., El-Masry, A.A.: Understanding consumer intention to participate in online travel
community and effects on consumer intention to purchase travel online and WOM: an
integration of innovation diffusion theory and TAM with trust. Comput. Hum. Behav. 60,
97–111 (2016)
17. Besbes, A., Legohérel, P., Kucukusta, D., Law, R.: A cross-cultural validation of the tourism
web acceptance model (T-WAM) in different cultural contexts. J. Int. Consum. Mark. 1530
(Apr), 1–16 (2016)
18. Jeng, C.-R.: The role of trust in explaining tourists’ behavioral intention to use e-booking
services in Taiwan. J. China Tour. Res. 15(4), 478–489 (2019)
19. Morosan, C.: Toward an integrated model of adoption of mobile phones for purchasing
ancillary services in air travel. Int. J. Contemp. Hosp. Manag. 26(2), 246–271 (2014)
20. Chung, N., Lee, H., Lee, S.J., Koo, C.: The influence of tourism website on tourists’
behavior to determine destination selection: a case study of creative economy in Korea.
Technol. Forecast. Soc. Change 96, 130–143 (2015)
21. Devece, C., Garcia-Agreda, S., Ribeiro-Navarrete, B.: The value of trust for travel agencies
in achieving customers’ attitudinal loyalty. J. Promot. Manag. 21(4), 516–529 (2015)
22. Hunt, J.D.: Image as a factor in tourism development. J. Travel Res. 13(3), 1–7 (1975)
23. de la Hoz-Correa, A., Muñoz-Leiva, F.: The role of information sources and image on the
intention to visit a medical tourism destination: a cross-cultural analysis. J. Travel Tour.
Mark. 36(2), 204–219 (2019)
24. Beerli, A., Martín, J.D.: Factors influencing destination image. Ann. Tour. Res. 31(3), 657–
681 (2004)
25. Ringle, C.M., Wende, S., Becker, J.-M.: SmartPLS 3. SmartPLS GmbH, Boenningstedt (2015)
26. IBM Corp: IBM SPSS Statistics for Windows, Version 25.0. IBM Corp, Armonk, NY
27. Hair, J.F., Hult, G.T.M., Ringle, C., Sarstedt, M.: A Primer on Partial Least Squares
Structural Equation Modeling (PLS-SEM), 2nd edn. SAGE Publications Inc., Thousand
Oaks (2017)
28. Churchill, G.A.: A paradigm for developing better measures of marketing constructs.
J. Mark. Res. 16(1), 64 (1979)
29. Straub, D.W.: Validating instruments in MIS research. MIS Q. 13(2), 147 (1989)
30. Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable
variables and measurement error. J. Mark. Res. 18(1), 39 (1981)
31. Jalilvand, M.R., Samiei, N.: The effect of electronic word of mouth on brand image and
purchase intention. Mark. Intell. Plan. 30(4), 460–476 (2012)
32. Abubakar, A.M., Ilkan, M., Al-tal, R.M., Eluwole, K.K.: eWOM, revisit intention,
destination trust and gender. J. Hosp. Tour. Manag. 31, 220–227 (2017)
33. Jorge, F., Teixeira, M.S., Fonseca, C., Correia, R.J., Gonçalves, R.: Social media usage
among wine tourism DMOs. In: Marketing and Smart Technologies, pp. 78–87 (2020)
Ñawi Project: Visual Health for Improvement
of Education in High Andean Educational
Communities in Perú
Abstract. The UN General Assembly adopted the 2030 Agenda for Sustainable
Development, an action plan in support of people, the planet, and prosperity.
Among the 17 Sustainable Development Goals (SDGs) are quality education
(the fourth goal) and the reduction of inequalities (the tenth). Rural communities
tend to be among the most disadvantaged environments, and education for
development is one of the most effective mechanisms to alleviate these
inequalities. In this framework, the Urubamba Project for International
Cooperation is presented, and within it, the Ñawi Project for visual education.
Both projects are developed in the Andean areas of Cusco (Peru) through
university community participation. In these lines of action, Information and
Communication Technologies (ICTs) are presented as a relevant and effective
element for achieving their objectives: the introduction of ICTs in education to
create inclusive socio-educational environments, and ICTs as an analysis tool
for the visual education project (prevention and correction). This work focuses
on the Ñawi Project and the satisfactory results that have been obtained.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 570–578, 2020.
https://doi.org/10.1007/978-3-030-45697-9_55
1 Introduction
Ñawi Project: Visual Health for Improvement of Education 571
Within the framework of actions that the UNESCO Chair in Education,
Development and Technology of Ramon Llull University has been promoting since
2013, La Salle Campus Barcelona (together with its NGDO Proide Campus) has been
developing the Urubamba Project for International Cooperation [1]. This project was
created in coordination with the La Salle Urubamba Institute in the Sacred Valley of
the Incas (Cusco, Peru). Its initial objective has focused on performing different
actions with a common thread: education for development. At this point, Information
Technology has been an element of motivation, cost efficiency, and effectiveness in
the actions carried out. The High Andean educational communities, located at more
than 4,000 meters of altitude, are a point of special interest in this project given their
low level of resources and limited growth.
In this way, different projects have been implemented in the seven campaigns
between 2013 and 2019. The Haku Wiñay Project (learning together) enhanced the
training of teachers and students. The e-Yachay Project (e-learning) introduced
Information Technology with the objective of creating inclusive socio-educational
environments. The Kurku Kallpanchay Project (strengthening the body) enhanced
gender equality and cooperative work through physical education. And more recently
the Willachikuy Project (communication) created emergency communication structures
for isolated High Andean educational communities.
And it was precisely the e-Yachay Project that motivated the appearance of the
Ñawi Project. The introduction of ICT in High Andean educational institutions causes
changes in students' learning habits. The introduction of computer classrooms, the use
of educational software, and the use of office automation can lead to different actions
and reactions in children's visual habits. This research focuses on the results obtained
from the Ñawi (eye) visual education project. Students' vision is a crucial issue that
directly affects their learning: a high percentage of students' learning deficiencies are
due to vision problems that have not been detected or corrected. Technology has a
relevant role in analyzing the results obtained, extracting relevant information
automatically and, through data mining techniques, obtaining knowledge that
facilitates the improvement of the project's actions in coming years' campaigns.
The article continues in Sect. 2 with the contextualization of visual field
terminology. Section 3 identifies the method of applying the project in the study
environment. Section 4 presents the main results obtained from the analysis of the
collected data, finishing with Sect. 5, conclusions, and subsequent acknowledgments.
2 Contextualization
A large part of the Urubamba Project focuses on actions in the neediest areas of the
Cusco region (Peru): the peasant communities. More specifically, the project focuses
on High Andean educational institutions. These institutions are public schools located
at more than 4,000 m of altitude, which may be close to a peasant community or
isolated in the Andean mountains. They are the only means that peasant communities
have of bringing education to their sons and daughters. In addition to the harsh
orographic and climatic conditions, these schools have few resources with which to
carry out their educational work. Many of these institutions do not have any
572 X. Canaleta et al.
type of medical coverage, much less access to eye examinations for their students.
Visual health is an indispensable element in teaching-learning processes. The lack
of detection of visual problems, together with misinformation in this field, is a
common feature throughout this High Andean zone. We define visual health [2] as a
visual system free of diseases of the sense of sight and of the eye's structures, while at
the same time enjoying good visual acuity. Visual acuity [3] is defined as the ability to
identify letters, numbers, or symbols at a specific, standardized distance on an eye
chart.
Even with good visual health, refractive defects such as myopia, farsightedness, or
astigmatism can occur. In most cases, the refractive defect is due to genetics, but there
are cases in which a refractive defect may appear due to poor visual habits. According
to the WHO (World Health Organization) [4], less developed regions have a higher
proportion of visual defects. The child population is the most delicate, since children
are in a learning period and may have difficulties both in school progress and in the
psychosocial development process. Visual ergonomics [5] is defined as the correct
adaptation of the work environment to visual needs, that is, adopting appropriate body
postures to favor visual tasks, correct lighting, adequate working distances, and
guidelines and pauses for visual rest.
In the Ñawi Project, refractive defects are the priority to be taken into account,
since the improvement of visual ergonomics is essential for improvement in
education. Thus, the project focuses on two main objectives: the detection of eye
problems and the education of the population to prevent, avoid, and detect these
problems.
3 Application
The actions were carried out in two different locations: the town of Urubamba and
Educational Institution 50,187 of Pampallacta. Urubamba is a town located in the
valley, at 2,980 m of altitude, with approximately 6,000 inhabitants. Pampallacta is an
isolated educational community located in the High Andean area at 4,000 m above sea
level, without any nearby peasant community. These locations were chosen in order to
contrast data from two realities that are geographically close but far apart in terms of
situation and means.
The educational function of the Ñawi Project aims to raise awareness among
students and families of the importance of sight and of regular check-ups to avoid,
prevent, or minimize visual damage. This was one of the essential actions carried out
by the Urubamba 2017 team. For this, a small manual was produced indicating the
most relevant aspects to be taken into account for correct visual health:
• Eye anatomy: explanation of what a healthy eye is and of various refractive
defects such as myopia, farsightedness, and astigmatism.
• Visually correct positions: recommendations about posture, lighting, working
distance, reading angle, and rest times.
• Eye diseases: indications of when it is necessary to see a doctor, listing the most
relevant pathologies considering the patient's environment (altitude and, therefore,
a higher incidence of solar rays): photophobia, conjunctivitis, keratitis, cataracts,
pingueculae, and pterygium. The manual also covers common effects of eye
trauma, such as blows with or without a wound, and how to act when fluid enters
the eye. It also refers to the suspicion of retinopathies or other pathologies.
To optimize the large volume of examinations to be performed, the following
procedure was used:
• Step 1: Listing of the people to be examined and collection of personal data to
enable personalized follow-up.
• Step 2: Evaluation of 3D vision using the Titmus Stereo Test [6].
• Step 3: Evaluation of color vision using the Ishihara test [7].
• Step 4: Performed only in Urubamba, for logistical reasons. Evaluation of ocular
movement through the cover test [8] and the PPC (Near Point of Convergence) [9],
which gives a basic idea of the patient's binocular vision.
• Step 5: Optometric examination by retinoscopy and/or a subjective test by the
patient. The subjective refraction examination consists of a monocular exchange of
lenses until the patient reports having found the lens that provides better visual
quality, that is, better visual acuity (VA). This is how the patient's refraction and
visual acuity (VA) are obtained.
– In the case of Urubamba, it is the patient who subjectively assesses visual
acuity using the LogMAR [10] eye chart in a numerical format, in which the
patient must indicate the numbers he or she is seeing.
– In the case of Pampallacta, due to problems of communication and
interpretation of the comments (since the boys and girls are Quechua speakers
and a translator was not available during the whole intervention), the patient's
VA was assessed using the Snellen optotype chart [11]. In this test, the patient
must indicate the orientation of the letter "E" on the chart (up, down, right, or
left).
• Step 6: If the patient needs visual correction with glasses, these are searched for in
the database containing the inventory of all donated glasses; once a suitable pair is
found, it is delivered to the patient together with an explanation of the correction
made and of when he or she should use the glasses.
• Step 7: Once the glasses are delivered, an optometric approximation analysis is
done to assess the optometric similarity between the glasses delivered and the
examination performed, since glasses with exactly the appropriate prescription
with respect to the refraction found are not always available.
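Step 6's inventory lookup can be thought of as a nearest-prescription search. The sketch below is purely illustrative: the field names, the spherical-power-only matching, and the 0.75 D tolerance are assumptions, not the project's actual database or rules:

```python
def closest_glasses(refraction, inventory, max_diff=0.75):
    """Hypothetical sketch of Step 6: pick from the donated-glasses
    inventory the pair whose spherical power (diopters) is closest to
    the measured refraction, or None if nothing is within max_diff."""
    best = min(inventory, key=lambda g: abs(g["sphere"] - refraction))
    if abs(best["sphere"] - refraction) > max_diff:
        return None
    return best

# Toy inventory of three donated pairs, identified by id and sphere.
inventory = [
    {"id": 1, "sphere": -1.25},
    {"id": 2, "sphere": -2.00},
    {"id": 3, "sphere": +0.75},
]
match = closest_glasses(-1.50, inventory)  # pair 1 (-1.25 D) is closest
```

Returning `None` when no pair falls within the tolerance mirrors the constraint stated later in the paper: glasses cannot be ground to prescription at the destination, so some patients simply cannot be fitted.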
The images shown in Fig. 1 reflect the system used for optometric examination in
the High Andean community of Pampallacta.
Fig. 1. Pampallacta. Girl indicating the direction of the Snellen test. Retinoscopy with trial
glasses and sciascopy rule.
4 Results
The results presented correspond to the actions carried out by the Ñawi Project in
the third week of July 2017. The actions took place in two different locations: the
town of Urubamba and the Pampallacta Educational Institution. In Urubamba, the
project was handled through the collaboration of the La Salle Urubamba Public
Superior Institute.
In the campaign carried out between January and May 2017 in the city of
Barcelona and its surroundings, selfless donations from different institutions and
individuals were obtained. A total of 1,500 prescription glasses and sunglasses were
collected and, after a selection in which very specific prescriptions and unusable
glasses were eliminated, a total of 713 prescription glasses and 300 sunglasses were
transferred to the destination.
During the intervention in the Sacred Valley of the Incas, a total of 328 glasses
were delivered: 139 prescription glasses and 189 sunglasses. 269 eye exams were
performed over 5 days: 99 in Pampallacta and 170 in the town of Urubamba.
These results can be considered highly satisfactory given the premises of the Ñawi
Project: all the glasses come from altruistic donations with pre-existing prescriptions.
It is not possible to grind lenses to prescription at the destination, due to the
characteristics of the project.
Type of Patients. The patients are mainly students, but examinations are also given to
the faculty of the educational communities and to family members who request an eye
examination. In both Pampallacta and Urubamba the percentages are 63% women and
37% men.
While in Pampallacta there are good results in terms of visual health and refractive
problems (85% in students and 62% in adults), in Urubamba the results are
significantly different (31% in students and 18% in adults). The first conclusion is that
individuals in the High Andean areas have better visual health than those living in the
town of Urubamba. But this statement can, and should, be contextualized by the fact
that in the educational institution of Pampallacta practically the entire community was
examined, whereas in Urubamba attendance at the examinations was voluntary. It is
possible that the students, teachers, and families attending in Urubamba were already
aware that they had a problem, and those who had the perception of having good
vision did not request an examination. In any case, visual health in the High Andean
zone seems clearly better than in the valley town.
Visual Acuity (VA) is a parameter that measures the quality of the patient's
vision; in the project it is measured on a decimal scale where 1 corresponds to 100%
vision and 0.1 to a vision of only 10%. In the case of Pampallacta, due to the difficulty
of communicating in the Quechua language, a tumbling E test is performed, in which
the letter E (in uppercase) is shown in different sizes and orientations. In this case, the
patient is asked to indicate the orientation of the letters according to what he or she
sees.
In Pampallacta, VAs between 0.6 and 1 were obtained, whereas in Urubamba they
ranged between 0.2 and 1. What these data show is that the individuals examined in
the Pampallacta educational community have better overall vision than those
examined in the town of Urubamba.
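The decimal scale used here and the LogMAR chart used in Urubamba (Step 5) are related by a standard optometric conversion, LogMAR = −log10(decimal VA). A minimal sketch of the conversion (illustrative code, not from the project):

```python
import math

def decimal_to_logmar(va_decimal):
    """Convert decimal visual acuity (1.0 = 100% vision, 0.1 = 10%)
    to the LogMAR scale: LogMAR = -log10(decimal VA)."""
    return -math.log10(va_decimal)

def logmar_to_decimal(logmar):
    """Inverse conversion back to the decimal scale."""
    return 10 ** (-logmar)

# Full vision (decimal 1.0) is LogMAR 0; decimal 0.1 is LogMAR 1.
# The Pampallacta range of 0.6-1.0 corresponds to roughly LogMAR 0.22-0.
pampallacta_worst = decimal_to_logmar(0.6)
```

Note that lower is better on the LogMAR scale, the opposite of the decimal scale, which is why results from the two charts must be converted before comparison.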
5 Conclusions
The objective of this study is to obtain a first approximation of the Ñawi Project
as a visual health improvement effort integrated into the actions of the Urubamba
Project.
In order to improve process efficiency in the following campaigns, it was
necessary to analyze the data obtained in the 2017 campaign, so as to provide better
coverage of the optometric needs of the population of the Sacred Valley. ICTs provide
us with the tools to perform these quantitative analyses.
The methodology applied requires a small adaptation to the community in which it
is carried out, adjusting both the time spent with each patient and the optometric
examination process that is performed.
The optometric approximation score is essential to assess the percentage of
success in the patient's visual correction, but it is based on expert interpretation. A
future line of work could be to create an automatic metric to calculate this indicator.
As a future line for upcoming campaigns, it will be key to incorporate follow-up of
the glasses delivered (prescription glasses and sunglasses), in order to assess the
impact and the changes in social habits in the High Andean Educational Institutions
of Cusco.
578 X. Canaleta et al.
Abstract. Rural regions are a type of region found all over the world, with an
identity and character distinct from those of more urbanized regions. Associated
with rural areas is a strong negative perception of depopulation, an underdeveloped
business fabric, less wealth, less ability to attract investment, and a lower
concentration of public and private services from various sectors of activity.
This reality cannot be socially accepted and must be fought for the sake of greater
equity within countries. To leverage this change, rural regions will have to become
competitive and attractive regions, and for this transformation to take place,
Information and Communication Technologies (ICT) play a major role. This
article characterizes rural regions in their demographic and economic
dimensions, emphasizing the case of the Northeast region of Portugal. It then
analyses and reviews a set of fundamental vectors in which ICT can be a key
driver and enabler for the creation of smart rural regions. Finally, a conceptual
model of what a smart rural region can be is presented.
1 Introduction
In [1] it is argued that the definition of rural areas is a much-discussed issue and that it
is difficult to reach a commonly accepted definition, as different countries use different
indicators to define rural areas. The official document of the European Union (EU), the
“Proposal for a Council Regulation on support to Rural Development by the European
Agricultural Fund for Rural Development”, identifies areas as rural if the population
density is below 150 inhabitants per square kilometer [1].
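Purely as an illustration, the EU density criterion above can be expressed as a simple check (the function name, constant name and example figures below are hypothetical, not taken from the cited regulation):

```python
EU_RURAL_DENSITY_THRESHOLD = 150.0  # inhabitants per square kilometer [1]

def is_rural_by_eu_criterion(population: int, area_km2: float) -> bool:
    """Classify an area as rural if its population density falls below
    the EU threshold of 150 inhabitants per square kilometer."""
    if area_km2 <= 0:
        raise ValueError("area must be positive")
    return population / area_km2 < EU_RURAL_DENSITY_THRESHOLD

# Hypothetical area: 60,000 inhabitants over 1,000 km2 -> 60 inhab/km2 -> rural
assert is_rural_by_eu_criterion(60_000, 1_000.0)
```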
The report [2] identifies the indicators used in selected countries to define rural
areas, for example: Australia: population clusters of fewer than 1,000 people, excluding
certain areas such as holiday resorts; Austria: towns of fewer than 5,000 people;
Canada: places of fewer than 1,000 people, with a population density of fewer than
400 per square kilometer; Denmark and Norway: agglomerations of fewer than 200
inhabitants; England and Wales: no official definition, although the Rural Development
Commission excludes towns with more than 10,000 inhabitants; France: towns
containing an agglomeration of fewer than 2,000 people living in contiguous houses,
or with not more than 200 meters between the houses; Portugal and Switzerland:
towns of fewer than 10,000 people; Malaysia: areas with fewer than 10,000 people;
India: locations with a population of fewer than 10,000 people.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 579–589, 2020.
https://doi.org/10.1007/978-3-030-45697-9_56
580 C. R. Cunha et al.
Over the past decades we have been witnessing a massive migration of population to
urban zones, accelerating the desertification of rural areas, which are left deprived of
young population and workplaces.
In the EU, 28% of the population lives in rural areas [3]; the majority of European
countries are called “countries of the elderly”, and their rural environments are being
affected by depopulation and social exclusion [4]. This is certainly one of the biggest
social problems remaining in the EU [5]. Beyond these factors, rural regions also have
a low population density and are therefore dispersed; they suffer from a scarcity of
public transport, poorly accessible services, small businesses, and high levels of
illiteracy, especially digital illiteracy.
Rural regions differ from urban regions in several ways: a lower population density,
an older population, a greater economic weight of agricultural activities, a more
fragile business fabric, more difficult access to public services, and less attractiveness
for investment, among others. By contrast, rural regions have much to offer, including
a sense of community, affordable housing prices, access to open and green spaces,
and top-quality products. They often have great tourism potential (although not
properly exploited), superior food production, and an important material and,
especially, immaterial heritage. This reality reveals in itself a set of challenges and
opportunities that it is important to discuss in order to enable the development of
rural regions.
Rural regions can make use of a reorientation process to adapt to new emerging
paradigms. Gerontechnology, business cooperation, Information and Communication
Technologies (ICT) applied to agriculture, ICT applied to tourism, and digital
marketing may be answers to the rural regions' problems.
ICT plays a facilitating role in the constitution of cooperation networks, easing the
development of alliances, allowing the creation of virtual organizations with other
business partners, and supporting the development of inter-organizational information
systems that sustain strategic business relationships with clients, suppliers,
subcontractors and others [6, 7]. In the following sections, this paper first makes a
demographic and economic characterization of rural regions, presenting the case of
the Northeast of Portugal. It then presents some major challenges and opportunities
for rural regions that can be addressed through ICT-based solutions, explaining the
role of ICT for each vector. Next, a conceptual model of what can and should be the
vision of a smart rural region is presented. Finally, some conclusions and final
remarks are made.
Building Smart Rural Regions: Challenges and Opportunities 581
There is no single definition of what a rural region is and, naturally, not all rural
regions are the same; each one has its own peculiarities. As a typical example of a
rural area, in this section we briefly characterize Terras de Trás-os-Montes (TTM).
TTM is a region in the northeast of Portugal, bordering Spain, aggregating nine
municipalities across an area of about 5,538 km2.
more companies in the region than nationally, which leads us to believe that the sector
could provide more value, and that ICT could contribute to it.
The companies are mostly small in terms of number of employees: about 98.6% had
fewer than 10 employees, and only one had more than 249 workers [8]. In 2011,
about 98% had an average of two employees [13].
Regarding the survival rate of companies in activity areas that can be
internationalized, only 63.21% survive after two years of activity. In 2016, only
0.66% of all newly created companies were in sectors of high and medium-high
technology [12]. In fact, a national study on micro companies (up to 10 workers)
showed that local entrepreneurs considered the district the worst in the country in
which to open a new business, even though they think the district has a quite
favourable economic situation [14].
The TTM region is characterized by extensive agricultural and forestry resources,
with almost 38% of its area used for agricultural purposes. As such, it is no surprise
that the agro-industrial sector predominates in the region, covering horticultural, fruit
and mycological production. Livestock production is equally important in the region's
economy [13].
To strengthen the local economy and improve internationalization capability it is
urgent to invest in Research and Development (R&D); yet in 2009 less than 1.4
million euros were invested in the TTM region (0.5% of national R&D expenses),
and in 2016 the investment in R&D reached 2,348 million euros [13, 15].
The same opinion is shared by regional experts, who maintain that the region's
development strategy cannot rest only on the traditional sectors of the economy, since
the region has a high concentration of production in activities with low added value.
The region must invest in the industrial sector to promote economic growth,
especially in activities based on innovation, technology and export capacity. The low
degree of use of information technologies by some segments of the population is one
factor damaging the competitiveness of the region. It is therefore urgent to increase
investment in industrial innovation processes and to have entrepreneurs work together
collaboratively. However, regional entrepreneurs reveal almost total ignorance of the
networking programs promoted by local organizations [14, 16, 17].
The role of ICT is crucial for the creation of smart rural regions and for improving
both their levels of competitiveness and their levels of citizenship and social justice.
It is therefore vital to understand how ICT can help to tackle the main problems of
rural regions and enhance development and wealth creation, so that what becomes
associated with rural regions is their specific identity, rather than just the vision of a
depopulated area that is inefficient in terms of economic sustainability and
development. The following sections present the key drivers of ICT intervention that
can respond to the challenges and opportunities of rural regions on their way to
becoming smart rural regions.
The analysis of rural regions and the definition of strategies to improve their main
vectors should, in our opinion, be perceived along two main axes: the axis of
citizenship and social justice, and the axis of economic growth and development.
Fig. 1 presents some main drivers for the creation of smart regions according to
these two axes.
Rural regions face several challenges, such as an aging and isolated population in
need of adequate health care and better support from their caregivers (family or
others). The same segment of the population concentrates a set of ancient knowledge
of an immaterial character that must be passed on to the generations to come. From a
competitiveness and wealth-generation perspective, there is a marked dependence on
agriculture; the industrial fabric is uncompetitive and, in isolation, has a low
intervention capacity. Equally, these regions lack a greater capacity for promotion,
while holding an important set of traditions, high-quality products and unique natural
beauty. From this analysis emerge five processes that, in our opinion, should be
worked on to create smart rural regions, and in which ICT will be a key enabler and
lever. These five processes are: cooperative networks, precision agriculture, digital
marketing, gerontechnology, and elders' immaterial heritage. Next, these five
processes are succinctly discussed.
Following an introduction to the key concepts that define a rural region, a
demographic and economic characterization of an example of a rural region (TTM),
and the discussion of several main vectors for the creation of smart rural regions, we
now present a conceptual model that reflects our vision of a smart rural region.
Fig. 2 presents the proposed conceptual model, which brings together an integrated
view of four fundamental quadrants.
Rural regions exist in every country; however, their definition varies from country to
country. This paper introduced the concept of rural region by characterizing it. The
Portuguese case of TTM was used as an example of a rural region that presents
multiple challenges. An analysis of some of the main challenges was made according
to the vectors of citizenship and social justice and the vectors of economic growth
and development. This paper provides a discussion of the key foundations that
characterize the challenges and opportunities of rural regions and of how ICT can
drive the transformation of rural regions into smart rural regions. The conceptual
model presented is intended as an integrative vision, based on a cooperation model,
to support a moderate vision for rural regions. However, there is full awareness that
implementing the proposed model is a huge challenge.
Acknowledgments. UNIAG, R&D unit funded by the FCT – Portuguese Foundation for the
Development of Science and Technology, Ministry of Science, Technology and Higher Edu-
cation. UIDB/04752/2020.
References
1. Simkova, E.: Strategic approaches to rural tourism and sustainable development of rural
areas. Agric. Econ.–Czech 53(6), 263–270 (2007)
2. Organization for Economic Co-operation and Development (OECD): Tourism Strategies and
rural development. OCDE/GD (94)49 Publications, Paris (1994)
3. Størup, J.: Mobility is about bringing people together. Technical report (2018)
4. Plazinic, B., Jovic, J.: Mobility and transport potential of elderly in differently accessible
rural areas. J. Transp. Geogr. 68, 169–180 (2018)
5. Budejovice, C.: Macroeconomic Effects on Development of Sparsely Populated Areas.
Interreg-Central.Eu (2017)
6. Mendonça, V., Varajão, J., Oliveira, P.: Cooperation networks in the tourism sector:
multiplication of business opportunities. Procedia Comput. Sci. 64, 1172–1181 (2015)
7. O’Brien, J.A., Marakas, G.: Management Information Systems. McGraw-Hill/Irwin,
New York (2010)
8. INE - Instituto Nacional de Estatística. https://www.ine.pt/. Accessed 21 Nov 2019
9. PORDATA - População residente em lugares com 10 mil e mais habitantes, segundo os Censos
(2015). https://www.pordata.pt/Municipios/Popula%C3%A7%C3%A3o+residente+em+luga
res+com+10+mil+e+mais+habitantes++segundo+os+Censos-26. Accessed 21 Nov 2019
10. Cunha, C.R., Mendonça, V., Morais, E.P., Fernandes, J.: Using pervasive and mobile
computation in the provision of gerontological care in rural areas. Procedia Comput. Sci.
138, 72–79 (2018). https://doi.org/10.1016/j.procs.2018.10.011
11. INE - Instituto Nacional de Estatística - Estudo sobre o Poder de Compra Concelhio. INE,
Lisboa (2019). ISBN 978-989-25-0501-5 (2017)
12. INE - Instituto Nacional de Estatística. Empresas (N.º) por Localização geográfica (NUTS -
2013) e Atividade económica (Subclasse - CAE Rev. 3); Anual - INE, Sistema de contas
integradas das empresas (2018). https://www.ine.pt/xportal/xmain?xpid=INE&xpgid=ine_
indicadores&indOcorrCod=0008466&contexto=bd&selTab=tab2. Accessed 21 Nov 2019
13. CIM-TTM Comunidade Intermunicipal Terras de Trás-os-Montes. Plano Estratégico de
desenvolvimento intermunicipal das Terras de Trás-os-Montes para o período 2014–2020
(2014). http://cim-ttm.pt/pages/482. Accessed 20 Nov 2019
14. Pereira, A.: Competitividade Regional para micro e pequenas empresas (on-line edition).
Diário de Trás-os-Montes, Montes de Notícias (2016). https://www.diariodetrasosmontes.
com/noticia/competitividade-regional-para-micro-e-pequenas-empresas
15. Jornal Económico. Portugal com a mais alta taxa de despesa em investigação e
desenvolvimento no ensino superior (on-line edition) (2017). https://jornaleconomico.sapo.
pt/noticias/portugal-com-a-mais-alta-taxa-de-despesa-em-investigacao-e-desenvolvimento-no-
ensino-superior-239948
16. CIMAT – Comunidade Intermunicipal Alto Tâmega. Fórum para o Desenvolvimento de
Trás-os-Montes e Alto Douro (2015). https://cimat.pt/forum-para-o-desenvolvimento-de-
tras-os-montes-e-alto-douro
17. CIM-TTM Comunidade Intermunicipal Terras de Trás-os-Montes. CIM das Terras de Trás-
os-Montes com projeto para implementação de marca territorial (2018). http://cim-ttm.pt/
pages/528?news_id=84
18. Gao, J.Z., Prakash, L., Jagatesan, R.: Understanding 2D-barcode technology and applica-
tions in m-commerce - design and implementation of a 2D barcode processing solution. In:
Proceedings of 31st Annual International on Computer Software and Applications
Conference, Beijing, China, pp. 49–56 (2007)
19. Hall, C.M., Mitchell, R.: Wine tourism in the Mediterranean: a tool for restructuring and
development. Thunderbird Int. Bus. Rev. 42(4), 445–465 (2001)
20. Stafford, J.V.: Implementing precision agriculture in the 21st century. J. Agric. Eng. Res. 76,
267–275 (2000)
21. Cox, S.: Information technology: the global key to precision agriculture and sustainability.
Comput. Electron. Agric. 36(2–3), 93–111 (2002)
22. Čorejová, T., Madudová, E.: Trends of scale-up effects of ICT sector. Transp. Res. Procedia
40, 1002–1009 (2019). ISSN 2352-1465
23. Cunha, C.R., Carvalho, A., Afonso, L., Silva, D., Fernandes, P.O., Pires, L.C.M., Costa, C.,
Correia, R., Ramalhosa, E., Correia, A.I., Parafita, A.: Boosting cultural heritage in rural
communities through an ICT platform: the Viv@vó project. IBIMA Bus. Rev. 2019, 1–12
(2019). ISSN 1947-3788
24. Kolar, T., Zabkar, V.: A consumer-based model of authenticity: an oxymoron or the
foundation of cultural heritage marketing? Tour. Manag. 31(5), 652–664 (2010)
25. Nao, T.: Visitors’ evaluation of a historical district: the roles of authenticity and
manipulation. Tour. Hosp. Res. 5(1), 45–63 (2004)
26. Yeoman, I., Brass, D., McMahon-Beattie, U.: Current issue in tourism: the authentic tourist.
Tour. Manag. 28, 1128–1138 (2007)
27. Boyle, D.: Authenticity: brands, fakes, spin and the lust for real life. Harper Perennial,
London (2004)
28. Sheets, D.J., La Buda, D., Liebig, P.S.: Gerontechnology. The aging of rehabilitation. Rehab
Manag. 10, 100–102 (1997)
29. Graafmans, J., Taipale, V.: Gerontechnology. A sustainable investment in the future. Stud.
Health Technol. Inform. 48, 3–6 (1998)
30. Smith, A.: Pew Research Center. Older adults and technology use (2014). http://www.
pewinternet.org/2014/04/03/older-adults-and-technology-use
31. Siegel, C., Dorner, T.E.: Information technologies for active and assisted living—influences
to the quality of life of an ageing society. Int. J. Med. Inform. 100, 32–45 (2017). ISSN
1386-5056
32. Lam, J.C.Y., Lee, M.K.O.: Digital inclusiveness - longitudinal study of internet adoption by
older adults. J. Manag. Inf. Syst. 22, 177–206 (2006)
The Power of Digitalization: The Netflix Story
1 Introduction
The Internet's origins go back to the 1960s, with ARPANET in the U.S.A. [1]. Since
then, the Internet has evolved and become more accessible to everyone across the
world: what started as a military tool has grown into what we have at home today. The
world has become more connected and globalized. More than that, the Internet has
become so indispensable to ordinary life that work, school and even entertainment
require it.
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 590–599, 2020.
https://doi.org/10.1007/978-3-030-45697-9_57
Therefore, companies had to adapt and evolve with the Internet. Online business
started a new era of change and evolution. Customers' needs also changed, as
accessibility and convenience led to a more self-indulgent client, specifically in the
television market. Technology caused a digital transformation in the entertainment
distribution market, with online streaming revolutionizing the business format.
Streaming can be described as the ability to play a multimedia file before it has been
completely downloaded.
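As a toy illustration of this idea (not a real player implementation; the function name and buffer size are arbitrary choices of ours), playback can begin once a small buffer of chunks is filled, while the rest of the file is still arriving:

```python
from collections import deque

def stream_play(chunks, buffer_ahead=3):
    """Toy model of streaming: play each chunk as soon as a small
    buffer is filled, instead of waiting for the whole download."""
    buffer = deque()
    played = []
    for chunk in chunks:            # chunks arrive one at a time ("download")
        buffer.append(chunk)
        if len(buffer) >= buffer_ahead:   # enough buffered: playback proceeds
            played.append(buffer.popleft())
    played.extend(buffer)           # drain what remains once the download ends
    return played

# The whole file is eventually played, in order, even though playback
# started after only a few chunks had arrived:
assert stream_play(list(range(10))) == list(range(10))
```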
Summing up, the evolution and accessibility of the Internet, together with the change
in audience behavior, enabled the creation of a new business model: the streaming
platform. From the customer's point of view, such a platform gives access to a variety
of content that was previously harder to get, with no dependence on television
schedules. At the same time, suppliers could cut the costs of physical inventory and
physical stores while increasing their network of customers. For both sides it is
therefore a win-win situation, a mutually beneficial way of doing business.
In this context, Netflix earned a reputation and became the world leader [2]. This
article focuses on the company's adaptation to the new era; as all of the co-authors
are Netflix clients, this provided additional motivation for the case study. Netflix, a
former DVD rental company, recognized how to do business in order to keep up with
the trends, pioneering the digital transformation in the entertainment distribution
market.
The next section presents a brief chronology of Netflix, followed by a literature
review. Primary data, gathered in a survey, is then analyzed and discussed, in order
to understand how current and potential customers see this new type of business and
to ascertain what the entertainment market might look like in the future.
2 Netflix
Software engineers Reed Hastings and Marc Randolph founded Netflix in 1997 as a
regular DVD rental business. According to Hastings, the competition was charging
high fees for late returns, and he saw in this an opportunity for differentiation:
creating a more customer-friendly model [3].
In April 1998, Netflix introduced its first game changer: DVD rental by mail.
Customers could select the movie they wanted online and have it delivered to their
door. At that time VHS dominated the market, and only 2% of the population owned
a DVD player. It was thus a risky strategy, but it clearly shows how innovative the
model was from the very beginning. A year later, Netflix changed its payment
method to a subscription model, in which customers could rent DVDs for a fixed fee
per month.
Figures 1 and 2 show how Netflix’s website has evolved.
592 M. Au-Yong-Oliveira et al.
In 2003, Netflix reached one million subscribers. The DVD rental store kept its
subscription model until 2007. The co-founders understood that the company was no
longer growing and, with the digital transformation affecting almost every company,
it was time to develop a new plan to meet customers' demands.
“We never spent one minute trying to save the DVD business”, says Ted Sarandos,
Netflix's head of content since early 2000 [3]. It was all about evolving and improving
the TV industry. Therefore, the company began offering the option of streaming
licensed movies, and even a couple of TV shows. This new option quickly became
known and popular, so improving the content library became a priority.
With that in mind, Netflix reached an agreement with Starz Entertainment in 2008. In
2010, an agreement valued at one billion dollars was announced with Lionsgate
Entertainment, MGM and Paramount Pictures. In the same year, Netflix's app was
launched for iOS.
Although it had come to dominate the market, having good broadcasters and
producers was not enough. Based on IMDb ratings, number of views, customer
feedback and other parameters, Netflix developed an algorithm for a rating system
that could analyze customers' preferences, improving the recommendation model [6].
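Netflix's actual recommendation algorithm is proprietary and not described in the sources cited here; purely as an illustration of the general idea of scoring unseen titles from customers' preferences, the following is a minimal user-based collaborative-filtering sketch (all user names, titles and ratings are invented):

```python
from math import sqrt

# Invented ratings: user -> {title: rating on a 1-5 scale}
ratings = {
    "ana":   {"House of Cards": 5, "Narcos": 4, "The Crown": 2},
    "bruno": {"House of Cards": 5, "Narcos": 5, "Black Mirror": 4},
    "carla": {"The Crown": 5, "Black Mirror": 2},
}

def similarity(u, v):
    """Cosine similarity over the titles two users have both rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][t] * ratings[v][t] for t in common)
    den = (sqrt(sum(ratings[u][t] ** 2 for t in common))
           * sqrt(sum(ratings[v][t] ** 2 for t in common)))
    return num / den

def recommend(user):
    """Rank titles the user has not seen by similarity-weighted ratings."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for title, r in ratings[other].items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

# For the invented data above, Ana's only unseen, recommendable title:
assert recommend("ana") == ["Black Mirror"]
```

Real systems combine many more signals (views, feedback, context) and operate at a very different scale, but the principle of inferring preferences from similar users is the same.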
In 2013, based on that analysis of customer data, Netflix began producing its own
shows; House of Cards was the first of many originals, debuting on February first [7].
In 2016, the service was already available in 190 countries.
Now, in 2019, Netflix is a case study and an example for its competitors, as its
combination of digitalization and content marketing has completely reinvented the
cable era. According to Sarandos, “Pay television didn’t have a distribution problem – it had
a packaging problem and a content problem. We saw that a lot of [cable customers]
were paying for sports they didn’t want and channels they didn’t watch. There’s got to
be much more equilibrium between consumer demand and pricing. Through the growth
of all these direct-to-consumer services, television will become better and better.” [3].
Figure 3 shows a business model canvas for Netflix.
3 Literature Review
3.1 Technological Development
The importance of innovation for competitiveness is well recognized. However, there
is less consensus about what enables an organization to innovate. Innovation networks
are a logical effect of the increasing complexity of innovative products and services [9].
In this context, technological development has been increasing exponentially in
recent years, affecting the way industries and companies run their businesses in order
to deliver products and services the way customers want them. Thus, the impact of
digitalization on products and services cannot be underestimated. Digital products are
an interesting issue regarding digital value propositions: many products today are
enabled by connectivity, so that these devices send data back to the supplier [10].
Therefore, digitalization and evolution led to a new era and a new way of doing
business, and it is important to understand their impact on television and on the
entertainment market.
3.3 Netflix
The major motivation for this change driven by Netflix was the change in consumer
habits, driven in turn by the role that the Internet plays in people's lives nowadays.
[13] used Pardo's and Johnson's points of view to describe the role that consumers
played in the change of the distribution market. Although these authors have different
perspectives on the new
consumer trends, they both agree that technology has had a significant impact on
television consumption. According to [14] there are two types of emerging consumers:
“cord nevers”, who have never subscribed to traditional multichannel video pro-
gramming, opting instead for Internet streaming options, and “cord cutters”, consumers
who previously paid for cable or satellite television but have decided to stop
subscribing. On the other hand, [15] looks at consumers’ ability to keep up with
technological devices, relating the digitalization of entertainment to the expansion
of the “Apple ecosystem”. “This iPod/iPhone/iPad generation epitomizes the new peer
group of users whose audiovisual experience is based on all sorts of media platforms
and whose profile to a large extent mirrors that of the cinema-going public and those
who play video games.” [15].
With the Internet’s availability to consumers and the common use of smart devices,
these new technologically driven users interact with entertainment in a more practical
and efficient way [13]. The author adds that “with the prevalence of cord cutters, cord
nevers, and a generation of Apple users, Johnson and Pardo view changes in distri-
butions as a response to growing demand for digital platforms for online television
viewing”.
Succinctly, Netflix was always one step ahead in understanding these market changes
and in providing the new technological generation with the services they are willing
to pay for. With a deep understanding of market needs and a better way of doing
business than its competitors, the company was able to climb several positions and
establish itself as the leader of this market.
4 Methodology
One of the purposes of this article was to identify how much keeping up with
technological advances and understanding market changes benefits a company, using
Netflix's case to do so. To that end, a survey was created using Google Forms as a
research method to collect quantitative data and gain in-depth information about
people's habits, preferences and opinions on the subject. The survey gathered data
from a convenience sample (an accessible group of individuals readily available to
the researchers) [16]. Convenience samples are not ideal; however, they are suitable
for exploratory research and as a basis for further research, and they are also useful
for establishing links to existing related research. This type of sample is very
common in business and management research [16].
A period of three weeks was defined to collect the necessary data for the research,
and the form was shared mainly on social media (such as Facebook and Twitter), but
also on forum websites (such as Reddit, an international platform for sharing ideas
and content). Erasmus [Facebook and WhatsApp] groups were also targeted, so that
the form could reach a wider geographic area, as two of the authors had Erasmus
contacts from recent Erasmus experiences.
There was not a specific target audience, and it was deemed important to acquire
information from different age groups, professions and even nationalities. Thus, the
survey gathered 74 answers, mainly from Portugal, but also from Spain, Belgium, Italy,
Turkey, Georgia and Malaysia. As Netflix has customers in over 180 countries, it was
important to reach a wide demographic and geographic area.
Secondary data was also analyzed to reach the paper’s aim.
5 Discussion
An objective of the research was to get survey answers from different segments
(different ages, professions and nationalities, among others); however, the
respondents were mainly millennials, like two of the authors. The data collected is
mainly quantitative; only the last survey question was qualitative. A total of 72% of
those who answered the survey are between 18 and 25 years old, the generation most
immersed in technological change.
Of the respondents, 90.1% were streaming consumers, but only 59.1% had premium
TV channels (paid channels, such as Sport TV, which are paid for separately). Of
those 90.1%, 58.3% also said that they watched streams between two and four times
per week, whereas the majority of premium TV channel subscribers (63.8%) replied
that they watch TV less than twice a week. That clearly fits the new era: consumer
habits are changing, and people are getting used to the digitalization era. A couple of
years ago, the percentages would probably have been swapped. The streaming
business is gaining popularity, as opposed to the TV industry, which is declining.
When we say streaming consumers, we are talking about multiple streaming
platforms: Netflix, HBO, Hulu and Amazon, among others. However, among all
those platforms, 77.2% are Netflix subscribers. Therefore, if this sample can be taken
as a general example, this survey mostly shows one thing: Netflix is the leader of the
entertainment distribution business, superior to the competition by a wide margin.
This is also confirmed by the literature [2].
But why? Why is Netflix so superior to their competitors? What makes them the
very best?
According to the respondents, the biggest strength of the platform is content (54.5%).
Price, distribution and the possibility of streaming on multiple devices were also
mentioned, but apparently content is what sets Netflix apart from the competition, as
81.8% place Netflix Originals as the best producer of all.
As for the role of digitalization in ordinary life, 77.3% of the respondents have
classified its impact with high/very high importance, showing the dependence on new
technologies for this group of people.
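Shares such as those above can be tabulated in a few lines; the following is a minimal sketch over made-up toy records, since the raw survey data is not published with the paper.

```python
# Made-up toy records standing in for the survey answers (the raw data is
# not published with the paper); each record flags whether the respondent
# consumes streams and whether they subscribe to premium TV channels.
responses = [
    {"streams": True, "premium_tv": True},
    {"streams": True, "premium_tv": False},
    {"streams": True, "premium_tv": False},
    {"streams": False, "premium_tv": True},
    {"streams": True, "premium_tv": True},
]

def share(records, key):
    """Percentage of records where `key` is True, rounded to one decimal."""
    return round(100 * sum(r[key] for r in records) / len(records), 1)

print(share(responses, "streams"))     # → 80.0
print(share(responses, "premium_tv"))  # → 60.0
```

The same `share` helper, applied to the real responses, would yield the 90.1% and 59.1% figures reported above.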
competition and consumers, and the way they used it to earn a competitive advantage, is beyond inspirational. Netflix has probably already reached its full potential, which is truly remarkable.
With our Google Forms survey, it was possible to understand that the brand is already stronger than the competition: people associate it with quality content and distribution.
As shown before, 72% of those who answered the survey are between 18 and 25 years old, mostly students. For future research, this might be the critical segment to analyze, as it is the generation to retain as customers.
Additionally, as content seems to be Netflix’s biggest strength, it might create
doubt as to where improvements should be channeled: should the next big strategy
focus on an ongoing development of original content, or should other aspects (such as
price and distribution) be improved upon? What about the possibility of freelancers
getting more attention from Netflix, since Black Mirror and Love, Death and Robots
became huge hits?
Besides the analysis of strategy, one of the proposed goals for this article was to try to predict the platform's next acquisition. Among the several answers to the survey, some stood out, sometimes due to their repetition and sometimes for their innovation. Thus, to the question "In your opinion, what is the next step for streaming platforms?", a few answers merit particular highlight: the acquisition of rights to stream live sports; streaming e-sports such as CS:GO, League of Legends and Fortnite; Virtual Reality; or bundling streaming services with Internet service subscriptions. For future research, these themes point to a whole new market, so new competitors and markets must be studied in depth.
From the data collected, only 59.1% have premium TV channels. Of that percentage, some likely subscribe only to watch sports. With a possible acquisition of sports streaming rights by Netflix, that percentage would probably fall even lower, while strengthening Netflix's biggest asset: content. More variety means a larger target audience, and sports have a tremendous impact on European entertainment. People could save on cable TV and get a better service from the platform.
The same goes for the acquisition of e-sports streams. In a world where gaming sets a new trend, especially among children, competing against Twitch (which dominates e-sports streaming) could open a new segment of customers for Netflix, and further strengthen its content.
Virtual Reality could also be the next big step in terms of content innovation. The film Black Mirror: Bandersnatch was an immediate hit, as it pioneered interaction between the viewer and the film itself: viewers can step into the story and choose between different endings. Virtual Reality would be the natural development of this idea. As a matter of fact, there are already a few Netflix experiments with VR, and it should not take much longer for the technology to become regularly used [17].
Finally, as hard as it might be to arrange, bundling streaming services with Internet subscriptions would be a game changer. Logically, one cannot be used without the other, so bundling would improve accessibility. Some Internet providers already offer a free trial of streaming platforms (Vodafone-HBO, for instance), so a deal between both parties could be reached.
598 M. Au-Yong-Oliveira et al.
Although there are many question marks as to where Netflix's next step will take them, Reed Hastings – Netflix's CEO – remains optimistic, even enthusiastic. When asked about the new competitors entering streaming, such as Apple and Disney, he calmly said: "these are amazing, large, well-funded companies with very significant efforts, but you do your best job when you have great competitors".
The Netflix service focused on herein has been labeled an "inexpensive legal streaming service" [18] which has, in fact, due to its low cost, lowered movie piracy. That low cost should give Netflix additional appeal in rural areas. Rural areas are, besides being more isolated, generally poorer, experiencing lower incomes [19]. However, for services such as Netflix to become popular even in rural areas, fast Internet connections are necessary, and these may not always be available in certain regions. Additionally, rural areas are home to aging populations, relevant to the extent that the elderly are less tech savvy [20, 21]. Connecting to a streaming service may, therefore, be a problem. We thus suggest, for future research, that the effect and popularity of Netflix in rural areas be studied. Netflix and similar streaming services may contribute to reducing the exodus from rural areas [19] and provide an important connection among the local population, as well as with family and friends who have since moved away from such regions. Does the existence of low-cost streaming services, such as Netflix, have a positive impact on the satisfaction and happiness of resident rural populations?
References
1. Leiner, B., Cerf, V., Clark, D., Kahn, R., Kleinrock, L., Lynch, D., Postel, J., Roberts, L.G., Wolff, S.: Brief History of the Internet—Internet Society (2009)
2. Investopedia. https://www.investopedia.com/articles/personal-finance/121714/hulu-netflix-and-amazon-instant-video-comparison.asp. Accessed 03 Dec 2019
3. Littleton, C., Roettgers, J.: How Netflix Went From DVD Distributor to Media Giant (2018). https://variety.com/2018/digital/news/netflix-streaming-dvds-original-programming-1202910483/. Accessed 31 Oct 2019
4. Business Insider. https://www.businessinsider.com/how-netflix-has-looked-over-the-years-2016-4#in-2010-streaming-begins-to-be-more-than-an-add-on-and-gets-prominent-real-estate-on-the-home-page-5. Accessed 03 Dec 2019
5. Netflix. https://www.netflix.com/browse. Accessed 03 Dec 2019
6. Oomen, M.: Netflix: How a DVD rental company changed the way we spend our free time (2019). Business Models Inc. https://www.businessmodelsinc.com/exponential-business-model/netflix/. Accessed 31 Oct 2019
7. Venkatraman, N.V.: Netflix: A Case of Transformation for the Digital Future (2017). https://medium.com/@nvenkatraman/netflix-a-case-of-transformation-for-the-digital-future-4ef612c8d8b. Accessed 31 Oct 2019
The Power of Digitalization: The Netflix Story 599
1 Introduction
Today it is a fact that Spain, like the rest of the European Union (EU), is ageing.
According to data from the National Statistics Institute of Spain (INE), Spain registered
a new ageing historical maximum in 2018, continuing with the ascending trend of the
last decade. The percentage of the population aged 65 and over, which currently stands
at 19.2% of the total population, is expected to rise to 25.2% in 2033. In this sense,
and if current trends continue, the dependency ratio (quotient, as a percentage, between
the population aged under 16 or over 64 and the population aged 16 to 64) would rise
from 54.2% today to 62.4% in 2033.
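The dependency ratio defined above is a simple quotient; the following minimal sketch uses made-up population counts (the real INE figures are not reproduced here) only to illustrate the formula.

```python
def dependency_ratio(under_16: float, over_64: float, aged_16_to_64: float) -> float:
    """Dependency ratio as defined above: the population aged under 16 or
    over 64, divided by the population aged 16 to 64, as a percentage."""
    return 100 * (under_16 + over_64) / aged_16_to_64

# Made-up population counts (in millions), chosen only to illustrate the formula:
print(round(dependency_ratio(7.5, 9.1, 30.6), 1))  # → 54.2
```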
Given this reality, active ageing policies in Spain have received special attention in
the last decade. Active ageing is a concept defined by the World Health Organization
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 600–611, 2020.
https://doi.org/10.1007/978-3-030-45697-9_58
An Online Sales System to Be Managed by People with Mental Illness 601
(WHO) [1, 2] as the process of optimizing opportunities for health, participation and
safety in order to improve the quality of life as people age. In particular, to promote active ageing in an environment in which technology already penetrates different areas of life, solutions must be proposed that allow the active participation of older citizens in the Digital Society. In this sense, different
initiatives have been promoted, like the 2011 European Agenda for Adult Learning
(EAAL), which defines the focus of European cooperation on adult education policies
for the period up to 2020, the Active Assisted Living Programme (AAL), or the
Interuniversity Programme of Experience running since 2002-2003 in the Autonomous
Region of Castile and León (Spain) [3]. The main goal is to promote ways for the
senior population to acquire new job skills within the Digital Society, promoting an
active lifestyle and avoiding social exclusion. However, an important aspect of older
people as they age is the progressive deterioration of both their physical and mental
capacities, which can make it difficult for them to use technological solutions [4].
With this in mind, a technological ecosystem [5] has been developed with two fundamental objectives. First, improving the quality of life of (in)formal caregivers through learning, ubiquitous access to information and support. Second, providing a set of services for relatives and patients, with a particular focus on those who live in rural areas. Thus, the ecosystem integrates different software components with the aim of improving welfare work, based on three pillars: teaching-learning, to give (in)formal caregivers the specific training necessary to care for the elderly; social, with the aim of sharing experiences about the learning process and welfare work, while also providing means to avoid the social exclusion of caregivers; and finally a dashboard for managing the ecosystem and obtaining metrics that can be used for monitoring and for proposing new actions, both at the welfare level and for the management of the ecosystem itself. On the other hand, given the inherently evolutionary approach of technological ecosystems, which must allow the incorporation of new components [5, 6], an online sales platform has been developed that promotes active ageing, designed so that it can be used and managed by older people who may have cognitive impairment, as well as by people with other mental illnesses.
This paper presents the development of a prototype of an inclusive online store, one that takes into account all types of users and can also be managed by people with different abilities, particularly people with severe and prolonged mental illness. It has to be taken into account that, in the context of mental diseases, each patient has a unique clinical picture [7]; it is very rare for two people to share the same symptoms or the same reactions to similar situations, so special care must be taken in terms of the user experience (accessibility, usability, etc.). Although there are
many e-commerce platforms on the market (Etsy, Shopify, Bigcartel, Amazon, etc.),
they focus on improving the usability of the system but do not consider users with
special needs [8]. People with severe and prolonged mental illness must be able to
manage sales or make purchases over the Internet. The aim has been to develop a
system that allows a simplified sales network adapted to both workers and customers.
To do this, the objective has not been to develop a software prototype from scratch, but
to focus on aspects relating to accessibility and usability to improve online stores and
apply these improvements to an existing solution, following the philosophy of Open
Source software development.
602 A. García-Holgado et al.
The rest of the paper is organized as follows. Section 2 provides an overview of the
ecosystem. Section 3 describes the online sales platform. Section 4 presents the results
of the heuristic evaluation of the interface for consumers. Finally, Sect. 5 summarizes
the main conclusions of this work.
2 Ecosystem Overview
provides a uniform interface to all the components; in this proposal, the branding associated with the ecosystem has not yet been fully implemented.
The second layer provides the software components with the main user-level services. Initially, the ecosystem consisted of three components; a fourth component, the online store, was later incorporated. The first service is an online platform providing a set of private and safe areas for patients, relatives and caregivers, so they can maintain contact regardless of where they live or of other socioeconomic circumstances [13]. The walls are managed by (in)formal caregivers and care managers, but patients and their relatives may also be granted access to the social network.
Fig. 1. The architecture of the technological ecosystem for (in)formal caregivers. Based on [14]
The second service is focused on psychoeducation [15, 16] for (in)formal caregivers, in order to provide them with training support covering different knowledge needs, as well as information, advice, guidance and access to a community of peers and experts. The third service is a dashboard to support decision-making processes. The knowledge managed in the ecosystem comes from different sources, such as the information stored in the different tools of the ecosystem, the implicit and tacit knowledge of the users, and their interaction with the ecosystem. The dashboard combines these sources through data visualization. Finally, the fourth service is the online store described in this proposal, which simplifies the sales network and adapts it to both workers and customers with special needs.
Regarding the static data management layer, it provides tools to centralize infor-
mation needed by other components of the ecosystem. This layer has a database to
store data associated with the patients: caregiving activities, provided treatments, etc.
The last layer is the infrastructure; it provides a set of services that are used by the
software components from other layers. In particular, the mail server, the user man-
agement tool based on CAS (Central Authentication Service) and a tool to support data
analysis as a service for the dashboard.
Finally, the human factor is represented in the architecture through two input flows: the business plan, from a management point of view, and the training plan and medical protocol, from a methodological perspective. Furthermore, as sensitive and medical data may be generated within the ecosystem components, the human factor should take into account the ethics and data protection measures necessary to guarantee safe data governance [14].
The online sales platform was planned to be integrated with the activities of the Special
Employment Centre in Zamora (Spain) (in Spanish, Centro Especial de Empleo, CEE).
The origins of the CEE are in the needs detected by professionals and associations
dedicated to the rehabilitation and socio-labour reinsertion of people with disabilities
due to serious and prolonged mental illness. The principal activity of the CEE consists
of the creation of spaces that allow the labour integration of people with serious and
prolonged mental illness, such as the design, production and marketing of products and
services in which people with disabilities participate, promoting and encouraging their
training and employment. Among the activities of the CEE in Zamora are the cultivation and marketing of organic fruit and vegetables in rural areas of Castile and León, the development of craft products with various materials, and cleaning and catering services.
The online sales platform aims to support the distribution of the products created in
the activities associated with the CEE, so the people with mental illness will be
involved not only in the production phase but also in the sales phase. Furthermore, the
target audience is all types of users, but with particular emphasis on those who have
some mental illness or disability.
The online sales system has been developed within the framework of the technological ecosystem presented in Sect. 2. For that purpose, an approach similar to the one followed in [5] was adopted, in which the authors showed the importance of modelling the business structure along with the software structure during the early stages of the ecosystem development. The main objective is not to follow a "business first" approach, but to develop the business and software structures together, as they are complementary. Taking the different business processes into account while developing the software structure provides fundamental "constraints" or characteristics for the software structure, such as the data taxonomy and ontology, the data architecture, and the data security and lifecycle.
Fig. 3. BPMN 2.0 Process diagram of the login task in the sales system.
Fig. 4. BPMN 2.0 Collaboration diagram between the customer and the online sales system that
describes the process of purchasing a product.
of the different interfaces necessary for the development of the web services required to
integrate these data into the ERP was also developed. In addition, non-functional requirements were taken into account: integration with existing solutions, usability and internationalization.
Likewise, a document has been developed that describes, from a technical point of
view, the different scenarios of the online sales system as well as its integration with the
ERP. These scenarios have been subdivided into the following sections: (1) description
of the scenario; (2) procedure (different steps that take place in the scenario); (3) UML
sequence diagram that implements the procedure; (4) data set needed to carry out each
of the ERP transactions required for each scenario.
The considered scenarios were:
• Product creation/modification: assign extra data to existing products and add them to the store, or create new products (ones not existing in the ERP), as well as delete products.
• Stock visualization: evaluate stock each time a customer visits a product page or
tries to make a purchase.
• Sales: record sales in the ERP each time a customer buys something through the
web.
• Sales cancellation: allow the customer to cancel a purchase (as long as the shipment has not been made) and reflect the changes in the ERP.
• From draft sale to historical sale: sales are created as drafts so that the customer can
cancel them if desired. After the package has been shipped, these sales must be
converted to historical.
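The draft-to-historical lifecycle described in the last two scenarios can be sketched as a small state machine; the class and state names below are illustrative assumptions, not the actual ERP data model.

```python
from enum import Enum

class SaleState(Enum):
    DRAFT = "draft"
    CANCELLED = "cancelled"
    HISTORICAL = "historical"

class Sale:
    """Illustrative draft-sale lifecycle; names and transitions are
    assumptions, not the actual ERP data model."""

    def __init__(self) -> None:
        # Sales are created as drafts so the customer can still cancel them.
        self.state = SaleState.DRAFT

    def cancel(self) -> None:
        # The customer may cancel only while the shipment has not been made.
        if self.state is not SaleState.DRAFT:
            raise ValueError("only draft sales can be cancelled")
        self.state = SaleState.CANCELLED

    def ship(self) -> None:
        # Once the package has been shipped, the draft becomes a historical sale.
        if self.state is not SaleState.DRAFT:
            raise ValueError("only draft sales can be shipped")
        self.state = SaleState.HISTORICAL

sale = Sale()
sale.ship()
print(sale.state.value)  # → historical
```

In the real system, each transition would additionally be recorded in the ERP, as the scenarios above require.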
Figure 5 shows an example of the sequence diagram for adding a new product
within the product creation/modification scenario.
The next stage consisted of the visual design of the online store focused on pro-
viding a satisfying user experience, with special emphasis on users with severe and
prolonged mental illness. To this end, and based on the results of the previous activ-
ities, the different screens associated with the different scenarios and components
needed to meet the functional requirements have been defined, following the standards
of the World Wide Web Consortium (W3C). More specifically, the store design takes
into account the Web Content Accessibility Guidelines (WCAG) 2.0 standard for users
with some cognitive issues [18], as well as the recommendations for the cognitive
accessibility of web content available at the Web Accessibility Initiative (WAI) [19].
Although not all of the WCAG 2.0 criteria were applied, the WAI recommends
meeting at least the WCAG 2.0 Level A and AA criteria, along with some Level AAA
criteria that are particularly important for people with cognitive difficulties.
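Among the WCAG 2.0 success criteria, the contrast requirement is one that can be checked programmatically; below is a minimal sketch of the contrast-ratio computation as defined by WCAG 2.0 (Level AA asks for at least 4.5:1 for normal text), not part of the store's actual codebase.

```python
def _linear(channel_8bit: int) -> float:
    """Linearize an 8-bit sRGB channel as defined by WCAG 2.0."""
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """Relative luminance of an (R, G, B) colour per WCAG 2.0."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG 2.0 contrast ratio between two colours; Level AA requires
    at least 4.5:1 for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```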
Finally, the connectivity of the online store with the ERP has been deployed, from
which an alpha version of the sales platform has been obtained. The final prototype is
available at http://dueroland.grial.eu.
4 Heuristic Evaluation
Finally, the beta version of the platform has been developed, incorporating in its design the usability and accessibility aspects previously identified. Its usability is being studied in two ways: through heuristic tests carried out by usability experts, and through tests with the platform workers, planned to be conducted in the future.
The first part of the usability study, the heuristic evaluation, focused on the interface for clients. It was carried out by two experts, both men between 25 and 38 years old. Neither expert had used the online store previously. One expert was involved in the projects that support the definition and development of the system, and the other was directly involved in the development. Neither expert has cognitive impairments or mental illness, but both have knowledge of technology applied to mental health. In addition to these characteristics, the criteria used to select the experts were based on their professional profiles:
• E1: A full stack developer with six years of experience developing online platforms
for health and wellbeing sectors.
• E2: A researcher with more than ten years of experience in multimodal human-
computer interaction.
Each expert reviewed the online store. They identified the usability problems associated with each heuristic proposed by Nielsen [20] and assigned a value from 1 (major usability problems) to 10 (no usability problems). Table 1 summarizes the values for each heuristic rule. The average of each heuristic was calculated in order to obtain a final value per heuristic, so that this value reflects where the most usability issues lie.
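The averaging step can be sketched in a few lines; the scores below are made up for illustration, the real values being those of Table 1.

```python
# Made-up scores (1 = major usability problems, 10 = none) for the two
# experts across Nielsen's ten heuristics; the real values are in Table 1.
scores = {
    "HR1": (8, 9), "HR2": (7, 8), "HR3": (9, 9), "HR4": (8, 7),
    "HR5": (6, 8), "HR6": (9, 8), "HR7": (7, 7), "HR8": (4, 5),
    "HR9": (8, 6), "HR10": (2, 3),
}

# Average the two experts' values to obtain a final value per heuristic;
# the lowest average marks where the most usability issues lie.
averages = {rule: sum(vals) / len(vals) for rule, vals in scores.items()}
worst = min(averages, key=averages.get)
print(worst, averages[worst])  # → HR10 2.5
```

With these illustrative numbers, HR10 surfaces as the weakest heuristic, which matches the finding reported below about help and documentation.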
Active ageing is one of the main objectives of current society. The population aged over 65 has increased during the last decades, and the figures will continue to rise. This situation poses a challenge for health systems across the world. In this context, active ageing is one of the main objectives of the World Health Organization in order to improve the quality of life as people age. This approach is combined with technology to provide solutions that allow the active participation of older citizens in the Digital Society.
The current proposal provides a set of guidelines to develop an inclusive online store, one that takes into account all types of users and can also be managed by people with different abilities, particularly people with severe and prolonged mental illness. The result is a prototype of an online sales platform in which production and sales are carried out by people with mental illness.
It is important to emphasize the context in which the prototype of the online store
has been deployed. Although it is an online solution, accessible from any region in the
world, it is specially focused on promoting the distribution of products among the
different rural areas in Zamora, in the first instance, and subsequently at the regional
level, in Castile and León, and the national level, Spain. Currently, the prototype is
operative and contains products made in the workshops and activities of the CEE, with
home delivery service available for the Zamora region, including the entire
metropolitan area and rural areas of the province.
The tool was correctly integrated with the ERP to manage sales, and it was also successfully added to the technological ecosystem for (in)formal caregivers thanks to the architecture proposed in Fig. 1.
Regarding the heuristic evaluation, it is important to take into account the bias of this procedure. According to [21], the perception of evaluators using this method is not consistent with users' experience with a system. Despite this, the results obtained are useful for improving the system and preparing the next phase with real users.
Experts detected problems associated with most of the heuristic rules, although there
are significant differences between the experts. The most significant usability problem
is associated with HR10 (Help and documentation); there is no documentation or users’
support available in the online store. Besides, several problems were detected in HR8
(Aesthetic and minimalist design) by both experts, most of them related to the lack of
products and images associated with the available products.
Finally, the findings of this study have a number of important implications for improving the development of e-commerce platforms adapted to users with different abilities. In addition, future work is needed to complete the usability study with qualitative techniques, such as focus groups with end users with different abilities and mental illness.
Acknowledgments. This work has been partially funded by the Spanish Ministry of Economy and Competitiveness through the DEFINES project (Ref. TIN2016-80172-R) and the Ministry of Education of the Junta de Castilla y León (Spain) through the TE-CUIDA project (Ref. SA061P17).
References
1. Kalache, A., Gatti, A.: Active ageing: a policy framework. Adv. Gerontol. Uspekhi
Gerontol. Akad. Nauk. Gerontol. Obs. 11, 7–18 (2003)
2. WHO: Active Ageing: A Policy Framework. World Health Organization, Geneva (2002)
3. Cámara, C.P., Eguizábal, A.J.: Quality of university programs for older people in Spain:
innovations, tendencies, and ethics in European higher education. Educ. Gerontol. 34, 328–
354 (2008)
4. Stompór, M., Grodzicki, T., Stompór, T., Wordliczek, J., Dubiel, M., Kurowska, I.:
Prevalence of chronic pain, particularly with neuropathic component, and its effect on overall
functioning of elderly patients. Med. Sci. Monit.: Int. Med. J. Exp. Clin. Res. 25, 2695–2701
(2019)
5. García-Holgado, A., Marcos-Pablos, S., García-Peñalvo, F.J.: A model to define an eHealth
technological ecosystem for caregivers. In: Rocha, Á., Adeli, H., Reis, L., Costanzo, S. (eds.)
New Knowledge in Information Systems and Technologies. WorldCIST 2019. Advances in
Intelligent Systems and Computing, vol. 932, pp. 422–432. Springer, Cham (2019)
6. García-Holgado, A., García-Peñalvo, F.J.: Architectural pattern to improve the definition and
implementation of eLearning ecosystems. Sci. Comput. Program. 129, 20–34 (2016)
7. Malla, A., Joober, R., Garcia, A.: “Mental illness is like any other medical illness”: a critical
examination of the statement and its impact on patient care and society. J. Psychiatry
Neurosci.: JPN 40, 147–150 (2015)
8. Gonçalves, R., Rocha, T., Martins, J., Branco, F., Au-Yong-Oliveira, M.: Evaluation of e-
commerce websites accessibility and usability: an e-commerce platform analysis with the
inclusion of blind users. Univ. Access Inf. Soc. 17, 567–583 (2018)
9. Manikas, K., Hansen, K.M.: Software ecosystems – a systematic literature review. J. Syst.
Softw. 86, 1294–1306 (2013)
10. Pillai, K., King, H., Ozansoy, C.: Hierarchy model to develop and simulate digital habitat
ecosystem architecture. In: 2012 IEEE Student Conference on Research and Development
(SCOReD). IEEE, USA (2012)
11. Ostadzadeh, S.S., Shams, F., Badie, K.: An architectural model framework to improve
digital ecosystems interoperability. In: Elleithy, K., Sobh, T. (eds.) New Trends in
Networking, Computing, E-learning, Systems Sciences, and Engineering. Lecture Notes in
Electrical Engineering, vol. 312, pp. 513–520. Springer, Cham (2015)
12. García-Holgado, A.: Análisis de integración de soluciones basadas en software como
servicio para la implantación de ecosistemas tecnológicos educativos. Programa de
Doctorado en Formación en la Sociedad del Conocimiento. University of Salamanca,
Salamanca, Spain (2018)
13. García-Peñalvo, F.J., Franco Martín, M., García-Holgado, A., Toribio Guzmán, J.M., Largo
Antón, J., Sánchez-Gómez, M.C.: Psychiatric patients tracking through a private social
network for relatives: development and pilot study. J. Med. Syst. 40 (2016). Article no. 172
14. Marcos-Pablos, S., García-Holgado, A., García-Peñalvo, F.J.: Modelling the business
structure of a digital health ecosystem. In: Conde-González, M.Á., Rodríguez Sedano, F.J.,
Fernández Llamas, C., García-Peñalvo, F.J. (eds.) Proceedings of the 7th International
Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM 2019,
León, Spain, 16–18 October 2019, pp. 838–846. ACM, New York (2019)
15. Geldmacher, D.S., Kirson, N.Y., Birnbaum, H.G., Eapen, S., Kantor, E., Cummings, A.K.,
Joish, V.N.: Implications of early treatment among Medicaid patients with Alzheimer’s
disease. Alzheimer’s Dement. 10, 214–224 (2014)
16. Ostwald, S.K., Hepburn, K.W., Caron, W., Burns, T., Mantell, R.: Reducing caregiver
burden: a randomized psychoeducational intervention for caregivers of persons with
dementia. Gerontologist 39, 299–309 (1999)
17. Diffenderfer, P.M., El-Assal, S.: Microsoft Dynamics NAV: Jump Start to Optimization.
Vieweg+Teubner Verlag (2008)
18. W3C: Web Content Accessibility Guidelines (WCAG) 2.0 (2008)
19. Cognitive Accessibility at W3C. Web Accessibility Initiative (WAI). http://bit.ly/2QSj2sG
20. Nielsen, J.: Heuristic evaluation. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection
Methods, vol. 17, pp. 25–62. Wiley, Hoboken (1994)
21. Khajouei, R., Ameri, A., Jahani, Y.: Evaluating the agreement of users with usability
problems identified by heuristic evaluation. Int. J. Med. Inform. 117, 13–18 (2018)
Author Index
A
Abelha, António, 466, 476, 484, 503, 510
Abnane, Ibtissam, 15
Agredo-Delgado, Vanessa, 203
Aguiar, Joyce, 108
Akyar, Özgür Yaşar, 357, 367, 377, 397
Alami, Hassan, 36, 86
Aldhayan, Manal, 95
Ali, Raian, 95
Almeida, Ana, 54
Almourad, Mohamed Basel, 95
Alvarez, Gustavo, 137
Alves, Victor, 441, 452, 493
Amato, Cibelle, 387
Araújo, Miguel, 3
Arias, Susana, 137
Au-Yong-Oliveira, Manuel, 3, 590
Aydin, Mehmet N., 531
B
Bachiri, Mariam, 36
Badia, David, 570
Bădică, Amelia, 192
Bădică, Costin, 192
Balaban, Igor, 152
Barroso, João, 3
Bergande, Bianca, 142
Berrios Aguayo, Beatriz, 245
Bin Qushem, Umar, 357
Boitsev, Anton, 235
Brdar, Sanja, 544
Bylieva, Daria, 225
C
Cădar, Ionuț Dan, 307
Canaleta, Xavi, 570
Cardoso, Henrique Lopes, 108
Carneiro, João, 54
Carrillo-de-Gea, Juan Manuel, 25
Carvalho, Victor, 108
Caussin, Bernardo, 397
Cham, Sainabou, 95
Chamba, Franklin, 137
Chevereșan, Romulus, 429
Collazos, Cesar A., 203
Colmenero Ruiz, María Jesús Yolanda, 245
Costa Tavares, João A., 590
Costanzo, Sandra, 287
Costas Jauregui, Vladimir, 357, 367, 397
Crnojević, Vladimir, 544
Cunha, Carlos R., 579
D
De Weerdt, Jochen, 523
Demirhan, Gıyasettin, 367
E
Egorova, Olga, 235
El Asnaoui, Khalid, 44
Eliseo, Maria Amelia, 387, 397
Encinas, A. H., 295
Estrada, Rogelio, 120
Ezzat, Mahmoud, 65
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
Á. Rocha et al. (Eds.): WorldCIST 2020, AISC 1161, pp. 613–615, 2020.
https://doi.org/10.1007/978-3-030-45697-9
F
Fardoun, Habib M., 203
Faria, Brígida Mónica, 108
Fernandes, Catarina, 466
Fernandes, Filipe, 452
Fernandes, Gisela, 334
Fernandes, Joana, 579
Fernández-Alemán, José Luis, 25
Ferreira, Diana, 510
Flores, Marcelo, 367
Fonseca, David, 570
Fonseca, Luís, 3
Frazão, Rui, 3
Freitas, Francisco, 317
G
García-Berná, José Alberto, 25
García-Holgado, Alicia, 347, 600
García-Peñalvo, Francisco J., 409, 600
Gomes, João Pedro, 579
Gómez, Héctor, 137
Gonçalves, Helena, 108
Gonçalves, Joaquim, 108
Gonçalves, Ramiro, 557
Govedarica, Miro, 544
Grafeeva, Natalia, 235
Grujić, Nastasija, 544
Guimarães, Tiago, 476, 484, 503
H
Hak, Francini, 476, 484
Hakkoum, Hajar, 15
Hwang, Ting-Kai, 175
I
Idri, Ali, 15, 36, 44, 65, 86
Istrate, Cristiana, 429
Ivanović, Mirjana, 192
J
Jesus, Tiago, 493
Jin, Bih-Huang, 175
Jorge, Filipa, 557
K
Kharbouch, Manal, 86
Knihs, Everton, 347
L
Laato, Samuli, 215
Labrador, Emiliano, 570
Lizarraga, Carmen, 120
Lobatyuk, Victoria, 225
Luís, Ana R., 263
M
Machado, Joana, 452
Machado, José, 466, 510
Magalhães, Ricardo, 493
Marcos-Pablos, Samuel, 600
Marinheiro, Miguel, 590
Marques, Gonçalo, 76
Marreiros, Goreti, 54
Martín-Vaquero, Jesús, 295
Martin, Anne, 142
Martínez Nova, Alfonso, 295
Martinho, Diogo, 54
Martins, Constantino, 54
Martins, Valéria Farinazzo, 387, 397
McAlaney, John, 95
Meissner, Roy, 142
Mejía, Jezreel, 120
Mikhailova, Elena, 235
Miranda, Filipe, 452
Mitrović, Sandra, 523
Mon, Alicia, 203
Morais, Elisabete Paulo, 579
Moreira, Fernando, 203
Motz, Regina, 357, 397, 418
Mounir, Fouad, 253
Munoz, Darwin, 357
Murareţu, Ionuţ Dorinel, 192
Murtonen, Mari, 215
N
Nabil, Attari, 253
Nafil, Khalid, 253
Neves, José, 452
Nicolás, Joaquín, 25
Novović, Olivera, 544
O
Oliveira, Alexandra, 108
Ortega Vázquez, Carlos, 523
Ouhbi, Sofia, 25
P
Paiva, Sandra, 271
Pantoja Vallejo, Antonio, 245
Peixoto, Rui, 317
Peraza, Juan, 120
Perdahci, Ziya N., 531
Petre, Ioana, 429
Pitarma, Rui, 76
Popescu, Daniela, 192
Portela, Carlos Filipe, 317
Portela, Filipe, 334, 466