Deliverable 5.2
Copyright 2017 FIESTA-IoT Consortium: National University of Ireland Galway - NUIG / Coordinator
(Ireland), University of Southampton IT Innovation - ITINNOV (United Kingdom), Institut National de
Recherche en Informatique et en Automatique - INRIA (France), University of Surrey - UNIS (United
Kingdom), Unparallel Innovation, Lda - UNPARALLEL (Portugal), Easy Global Market - EGM (France),
NEC Europe Ltd. - NEC (United Kingdom), University of Cantabria - UNICAN (Spain), Association Plate-
forme Telecom - Com4innov (France), Research and Education Laboratory in Information
Technologies - Athens Information Technology - AIT (Greece), Sociedad para el desarrollo de
Cantabria - SODERCAN (Spain), Fraunhofer Institute for Open Communications Systems - FOKUS
(Germany), Ayuntamiento de Santander - SDR (Spain), Korea Electronics Technology Institute - KETI
(Korea).
DOCUMENT HISTORY
OVERVIEW OF UPDATES/ENHANCEMENTS
Section     Description
Section 1   Updated introduction text explaining the new contents of Section 3 and Section 4.
Section 1   Explanation that the “validation of FIESTA-IoT platform from the point of view of testbeds” will be left to deliverable D6.3.
Section 5   Updated conclusion in order to include the new outcomes of Section 3 and Section 4.
Annex III   Moved questionnaire from Section 4.
Annex IV    Created the checklist for evaluating the integration and implementation of experiments. The checklist has been answered by all three in-house experiments.
Annex V     Created the questionnaire for validating the usability and the resources of the FIESTA-IoT platform. The questionnaire contains questions regarding the usability of tools and the time spent to integrate experiments. All three in-house experiments have answered the proposed questions.
TABLE OF CONTENTS
1 EXECUTIVE SUMMARY ................................................................................................................. 9
2 EXPERIMENTS SELECTION, IMPLEMENTATION AND INTEGRATION ....................................11
2.1 Data Assembly and Services Portability Experiment..............................................................11
2.1.1 Use-case selection .........................................................................................................11
2.1.2 Architecture and workflow ............................................................................................. 13
2.1.2.1 Architecture ............................................................................................................................. 13
2.1.2.2 Data acquisition workflow........................................................................................................ 14
2.1.2.3 Data contextualization workflow .............................................................................................. 17
2.1.2.4 Data Visualization workflow .................................................................................................... 18
2.1.3 Dataset used: FIESTA-IoT Ontology concepts used towards building the queries ....... 19
2.1.4 Outcomes ...................................................................................................................... 19
2.1.5 Future work .................................................................................................................... 21
2.2 Dynamic Discovery of IoT Resources for Testbed Agnostic Data Access ............................. 21
2.2.1 Use-case selection ........................................................................................................ 22
2.2.1.1 Map-based I/O ........................................................................................................................ 23
2.2.1.2 Side map tab menu ................................................................................................................. 23
2.2.1.3 Basic output: weather station with the average values ........................................................... 24
2.2.1.4 Advanced output: graphical data analysis ............................................................................... 25
2.2.1.5 Documentation ........................................................................................................................ 25
2.2.2 Architecture and workflows ............................................................................................ 25
2.2.2.1 Initial discovery of resources (backend) + Map visualization (Web browser) .......................... 27
2.2.2.2 Data retrieval (through IoT Service endpoint) ......................................................................... 29
2.2.2.3 Phenomena-based filtering (resource discovery) .................................................................... 30
2.2.2.4 Location-based clustering (data retrieval) ............................................................................... 30
2.2.2.5 Historical data ......................................................................................................................... 32
2.2.2.6 Periodic polling for last observations....................................................................................... 33
2.2.2.7 Data visualization .................................................................................................................... 34
2.2.2.8 Performance monitoring .......................................................................................................... 34
2.2.3 Dataset used: FIESTA-IoT Ontology concepts used towards building the queries ....... 34
2.2.4 Outcomes ...................................................................................................................... 35
2.2.5 Future work .................................................................................................................... 36
2.3 Large Scale Crowdsensing Experiments .............................................................................. 37
2.3.1 Use-case selection ........................................................................................................ 37
2.3.2 Experiment architecture and workflow .......................................................................... 38
2.3.3 FIESTA-IoT Ontology concepts used towards building the queries .............. 41
2.3.4 Outcomes ...................................................................................................... 42
2.3.5 Future work ................................................................................................... 42
3 METHODOLOGIES FOR EXPERIMENT EVALUATION AND FIESTA-IOT VALIDATION ......... 43
3.1 Evaluation of experiments ..................................................................................................... 44
3.1.1 Evaluate achievement of experiment objectives ........................................................... 45
3.1.2 Evaluate experiment advance over SotA ...................................................................... 45
3.1.3 Evaluate experiment integration and implementation ................................................... 45
3.2 Validation of FIESTA-IoT Platform and Tools ........................................................................ 46
3.2.1 Validate FIESTA-IoT concepts ....................................................................................... 46
3.2.2 Validate FIESTA-IoT tools ............................................................................................. 47
3.2.3 Validate FIESTA-IoT resources ..................................................................................... 47
4 EVALUATION OF IN-HOUSE EXPERIMENTS AND VALIDATION OF FIESTA-IOT BY IN-
HOUSE EXPERIMENTS ....................................................................................................................... 48
4.1 Evaluation of experiments ..................................................................................................... 48
4.1.1 Achievement of experiment KPIs evaluation ................................................................. 48
Data Assembly and Services Portability Experiment............................................................... 48
Dynamic Discovery of IoT Resources for Testbed Agnostic Data Access ............................... 49
Large Scale Crowdsensing Experiments ................................................................................ 50
4.1.2 Experiment integration and implementation evaluation................................................. 51
Data Assembly and Services Portability Experiment............................................................... 51
Dynamic Discovery of IoT Resources for Testbed Agnostic Data Access ............................... 52
Large Scale Crowdsensing Experiments ................................................................................ 53
LIST OF FIGURES
Figure 1 Observations request via IoT Services ....................................................... 12
Figure 2 Observations request via Semantic Data Repository ................................. 13
Figure 3 Smart City Magnifier architecture ............................................................... 14
Figure 4 Data acquisition through endpoints workflow. ............................................. 15
Figure 5 Data acquisition via semantic data repository workflow.............................. 16
Figure 6 Data contextualization workflow ................................................................. 17
Figure 7 Visualization workflow ................................................................................ 18
Figure 8 Smart City Magnifier ................................................................................... 19
Figure 9 Smart City Magnifier: different geographic abstraction level selected with the
slide-bar .................................................................................................................... 20
Figure 10 Smart City Dashboard: deployment situations scope ............................... 21
Figure 11. Screenshot of the actual status of the so-called “Dynamic Discovery” .... 23
Figure 12. Dynamic discovery of resources generic architecture ............................. 26
Figure 13. (Dynamic Discovery) Resource discovery sequence diagram ................ 29
Figure 14. (Dynamic Discovery) Getting a last observation from an iot-service
endpoint sequence diagram ..................................................................................... 29
Figure 15. (Dynamic discovery) Phenomena-based filtering sequence diagram ...... 30
Figure 16. (Dynamic discovery) Location-based clustering data retrieval sequence
diagram..................................................................................................................... 31
Figure 17 (Dynamic discovery) Historical data dump sequence diagram ................. 32
Figure 18 (Dynamic discovery). Storage of last observations from periodic polling
sequence diagram. ................................................................................................... 33
Figure 19 Visualization of the last observations measured by a particular node ...... 35
Figure 20. Sample of a graphical output of the processed data................................ 36
Figure 21 FISMO pane in the Experiment Management Console ............................ 39
Figure 22 Large Scale Crowdsensing Experiment Architecture with interactions ..... 41
Figure 23: Noisy Locations heatmap ........................................................................ 42
Figure 24 Working Elements .................................................................................... 69
LIST OF TABLES
Table 1 Evaluation of experiment methodology ........................................................ 44
Table 2 Validation of the FIESTA-IoT platform methodology ..................................... 46
Table 3 Evaluation of the Data Assembly and Service Portability through KPIs ....... 49
Table 4 Evaluation of the Dynamic Discovery of IoT Resources for Testbed Agnostic
Data Access experiment through KPIs ..................................................................... 50
Table 5 Evaluation of the Large Scale Crowdsensing experiments through KPIs..... 51
Table 6 Validation of FIESTA-IoT concepts by the Data Assembly and Service
Portability experiment ............................................................................................... 55
Table 7 Validation of FIESTA-IoT concepts by Dynamic Discovery of IoT Resources
for Testbed Agnostic Data Access experiment .......................................................... 56
Table 8 Validation of FIESTA-IoT concepts by Large Scale Crowdsensing
experiments .............................................................................................................. 57
Table 9 Validation of FIESTA-IoT platform by the Data Assembly and Service
Portability experiment through KPIs ......................................................................... 58
Table 10 Validation of FIESTA-IoT tools by the Data Assembly and Service Portability
experiment ................................................................................................................ 59
Table 11 Validation of FIESTA-IoT platform by the Dynamic Discovery of IoT
Resources for Testbed Agnostic Data Access experiment through KPIs .................. 60
Table 12 Validation of FIESTA-IoT tools by the Dynamic Discovery of IoT Resources
for Testbed Agnostic Data Access experiment .......................................................... 61
Table 13 Validation of FIESTA-IoT platform by the Large Scale Crowdsensing
experiments through KPIs ......................................................................... 62
Table 14 Validation of FIESTA-IoT tools by the Large Scale Crowdsensing
experiments ............................................................................................... 62
Table 15 Validation of the FIESTA-IoT resources by the Data Assembly and Services
Portability experiment ............................................................................................... 63
Table 16 Validation of the FIESTA-IoT resources by the Dynamic Discovery of IoT
Resources for Testbed Agnostic Data Access experiment ........................................ 64
Table 17 Validation of the FIESTA-IoT resources by the Large Scale Crowdsensing
experiments .............................................................................................................. 64
Table 18 Virtual Machines......................................................................................... 70
1 EXECUTIVE SUMMARY
This deliverable reports the contribution of the whole experimentation work
package (WP5) by describing the implementation and integration of the experiments
(task T5.2) and their validation and evaluation (task T5.4). The integration of
third-party experiments is not covered in this document since, at the time of
reporting, no third parties (from the Open Calls) had yet joined the
project.
This deliverable addresses future FIESTA-IoT users, such as researchers in Future
Internet Research and Experimentation (FIRE), members of other Internet of Things
(IoT) communities and projects, entrepreneurs and application developers. It
provides examples of how to use and leverage the FIESTA-IoT platform, tools
and concepts. Furthermore, it also addresses the researchers and engineers within
the FIESTA-IoT consortium, in order to acquire feedback on which aspects of the platform
should be improved, what can already be achieved with the platform, and what
the expectations for the third year of the project are from the experimenters’ point of view.
The three in-house experiments delineate three different approaches to
leveraging the FIESTA-IoT platform for IoT experiments with different scenarios: for
example, the adoption and implementation of the concept of Virtual Entities in the
case of the Data Assembly and Services Portability Experiment, the intensive usage
of the IoT Service Discovery by the Dynamic Discovery of IoT Resources for Testbed
Agnostic Data Access, and the usage of the FIESTA-IoT execution engine by the
Large Scale Crowdsensing Experiments (Annex II provides the experiment
specification in FED-Spec form). Other components of the FIESTA-IoT platform, such
as the security functions, the meta-cloud storage and the communication functions,
are used transversely by all the experiments.
The first two sections of this deliverable provide a status report on the three in-house
experiments. Because this deliverable is of the demonstrator type, this document
should be considered complementary to the screencast videos linked on the main
experimenters’ webpage of the FIESTA-IoT project 1. Section 2 depicts the actual
achievements of each experiment and compares them with the original plan and design
phase carried out during task T5.1 (which ended with the report contained in
(FIESTA-IoT D5.1, 2016)); it also reports the requirements raised by the
experimenters that have already been satisfied by the FIESTA-IoT platform, and the
achieved KPIs specified in (FIESTA-IoT D5.1, 2016).
Section 3 describes in detail the evaluation methodology that is used
during the FIESTA-IoT project to bring improvements on both sides: the FIESTA-IoT
platform and the experiments. The tools (such as questionnaires and checklists) used
for performing the validation and the evaluation are reported in Annex III
(Questionnaire of experiment evaluation from the FIESTA-IoT point of view), Annex IV
(Evaluation of experiment integration and implementation checklist) and Annex V
(Questionnaire: validation of the FIESTA-IoT resources).
1 http://fiesta-iot.eu/fiesta-experiments/
The validation of the FIESTA-IoT platform from the point of view of testbeds is left to
deliverable D6.3, which is about certification but will also be open to hosting the
feedback of testbed owners on the FIESTA-IoT platform. This document focuses
only on experimentation; in particular, it is a demonstrative document about
experimentation in FIESTA-IoT.
The methodologies are then applied, as an exercise, to all three in-house
experiments, which are the only ones that, at the time of creation of this deliverable,
have been designed and partially integrated and implemented. The outcome of this
process is depicted in Section 4.
Annex I reports the technical support for the cloud resources that host the
FIESTA-IoT platform used by each of the three in-house experiments.
Finally, Annex II reports an example of an experiment specification, in the form of a
FEDSPEC, for the FIESTA-IoT experiment engine, used by one of the in-house
experiments (viz. the Large Scale Crowdsensing experiments).
The last axis can again be seen as a multi-dimensional parameter. For instance, the
abstraction axis can vary over the abstraction of the situation: e.g. a lower-level
situation might be the temperature situation, air pollutant concentrations or the
percentage of occupied parking slots in the focused area, whilst a higher-level
situation might be the traffic situation, which may take all the previous
situations into consideration. The abstraction axis can also vary over the abstraction
level of the subject of the analysis: e.g. from low to high abstraction, a situation
might refer to a building, a street, a suburb, a city, a region or a country (see Figure 9).
The first implementation of this experiment has taken into consideration only the
space and abstraction axes, whilst the time axis will be a future development.
For our experiment, we have envisaged several use-cases (see (FIESTA-IoT D5.1,
2016)): Resource-Oriented analytics, Observation-Oriented analytics, Knowledge-
Produced analytics and Hybrid analytics.
Observation-Oriented analytics.
For this kind of analytics we have implemented two different use-cases, in order to
make use of data coming both from testbeds that expose themselves as IoT services
and from testbeds that push their observations into the Semantic Data Repository.
Where endpoints are available, the use-case implemented is the one shown in Figure
1. The Data Assembly and Service Portability Experiment first performs a SPARQL
query (step 1) against the IoT Service Resource Registry in order to discover the
available endpoints (returned at step 2). It then polls each of the endpoints
(step 3) in order to get the latest available data (step 4). Steps 1 and 2 are
continuously repeated with a predefined frequency in order to catch any change in
the endpoint set.
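A minimal sketch of this discovery-and-polling loop follows (assuming a generic SPARQL endpoint, a bearer-style token header and the discovery query shown in Section 2.1.2.2; helper names and periods are hypothetical):

import time
import requests

DISCOVERY_PERIOD = 3600  # seconds between endpoint re-discoveries (assumed)
POLL_PERIOD = 60         # seconds between data polls (assumed)

def discover_endpoints(registry_url, query, token):
    # Steps 1-2: ask the IoT Service Resource Registry for the endpoints.
    resp = requests.post(registry_url, data=query,
                         headers={"Content-Type": "application/sparql-query",
                                  "Accept": "application/sparql-results+json",
                                  "Authorization": "Bearer " + token})
    resp.raise_for_status()
    return [b["endp"]["value"] for b in resp.json()["results"]["bindings"]]

def run(registry_url, query, token):
    endpoints, last_discovery = [], 0.0
    while True:
        if time.time() - last_discovery > DISCOVERY_PERIOD:  # repeat steps 1-2
            endpoints = discover_endpoints(registry_url, query, token)
            last_discovery = time.time()
        for endp in endpoints:                               # steps 3-4
            latest = requests.get(endp,
                                  headers={"Authorization": "Bearer " + token})
            print(endp, latest.status_code)  # downstream handling omitted
        time.sleep(POLL_PERIOD)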
This use-case differs from the one depicted in (FIESTA-IoT D5.1, 2016) in two ways:
the meta-cloud is not queried for historical data, and the data requests
are made by polling. The first difference is due to the fact that, at the state of the
experiment implementation, the testbeds exposing endpoints were not exposing a
semantic historical repository. The second difference is due to the fact that the
Subscription Manager functionalities were not yet ready at the time of the first
experiment implementation.
Where data is available only in the Semantic Data Repository, the use-case
implemented is the one shown in Figure 2.
The experiment issues a SPARQL query (step 1, see Section 2.1.2.2 for SPARQL
examples) to the Semantic Data Repository, requesting historical data by polling with a
specified period τ, with the following approach:
1) data with a timestamp not older than the time of the request minus τ is requested;
2) the matching observations are returned in the response.
Steps 1) and 2) are repeated with period τ; the time-window is shifted by τ plus
1 s in order to start at the time of the last submitted query.
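The window arithmetic described above can be sketched as follows (τ’s value is illustrative):

from datetime import datetime, timedelta

TAU = timedelta(minutes=5)  # polling period τ (illustrative value)

def next_window(last_query_time=None):
    # Time-window for the next historical query: the first window covers
    # the τ preceding the request; each subsequent window is shifted by
    # τ plus 1 s, i.e. it starts right after the last submitted query.
    end = datetime.utcnow()
    if last_query_time is None:
        start = end - TAU                               # first request
    else:
        start = last_query_time + timedelta(seconds=1)  # shifted window
    return start, end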
This use-case also differs from the one depicted in (FIESTA-IoT D5.1, 2016),
since the data is continuously retrieved through time-windowed historical queries instead
of a subscription (or polling) to IoT endpoints after the first historical query. This is
due to the fact that, at the time of the experiment implementation, the integrated
testbeds able to push data to the Semantic Repository were not exposing IoT
endpoints.
2.1.2.1 Architecture
• Backend components
• Frontend component:
o Dashboard: offers a GUI to the experiment users and interacts with the
Context Management for retrieving the needed data.
The Semantic Mediation Gateway (SMG) component is in charge of retrieving IoT data
from the FIESTA platform. In order to do that, it implements two different
workflows at the same time, with the aim of getting data from all the testbeds connected
to FIESTA.
The first workflow (see Figure 4) corresponds to the first use-case of the
Observations-oriented analytics where IoT service endpoints are available.
1. First the SMG discovers the list of resources,
2. The SMG starts to poll periodically each of the endpoints in order to get data.
3. The data is then forwarded to the Context Management.
Step 1 is periodically repeated, and in case the list of resources changes, the list of
endpoints contacted at Step 2 is updated.
The resource discovery is executed via a SPARQL query to the FIESTA platform,
similar to the following one:
PREFIX iot-lite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
PREFIX m3-lite: <http://purl.org/iot/vocab/m3-lite#>
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?dev ?qk ?endp ?lat ?long
WHERE {
?dev a ssn:Device .
?dev ssn:onPlatform ?platform .
?platform geo:location ?point .
?point geo:lat ?lat .
?point geo:long ?long .
?dev ssn:hasSubSystem ?sensor .
?sensor a ssn:SensingDevice .
?sensor iot-lite:exposedBy ?serv .
?sensor iot-lite:hasQuantityKind ?qkr .
?qkr rdf:type ?qk .
?serv iot-lite:endpoint ?endp .
}
The second workflow (see Figure 5) corresponds to the second use-case of the
Observations-oriented analytics where no IoT service endpoints are available.
1. The SMG periodically polls the FIESTA platform for historical data with a
SPARQL query specifying a time-window (which ends at the time of the
request).
2. All the data retrieved is then forwarded to the Context Management which
stores it.
The data SPARQL query used for retrieving the historical data is similar to the
following one:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX oneM2M: <http://www.onem2m.org/ontology/Base_Ontology/base_ontology#>
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
PREFIX qu: <http://purl.oclc.org/NET/ssnx/qu/qu#>
PREFIX iot-lite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX m3-lite: <http://purl.org/iot/vocab/m3-lite#>
PREFIX dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
PREFIX time: <http://www.w3.org/2006/time#>
SELECT DISTINCT ?qkClass ?lat ?long ?time ?sensor ?dataValue ?observation
WHERE {
?observation a ssn:Observation .
?observation geo:location ?point .
?point geo:lat ?lat .
?point geo:long ?long .
?observation ssn:observationResult ?sensOutput .
?sensOutput ssn:hasValue ?obsValue .
?observation ssn:observedBy ?sensor .
?observation ssn:observedProperty ?qk .
# The remainder of the query was truncated in the source; a plausible
# completion (assumed) binds the remaining variables:
?qk rdf:type ?qkClass .
?obsValue dul:hasDataValue ?dataValue .
?observation ssn:observationSamplingTime ?t .
?t time:inXSDDateTime ?time .
# plus a FILTER restricting ?time to the window described above
# (request time minus τ up to the request time)
}
In order to request the data, every HTTP request to the FIESTA platform carries
the authorization token as an HTTP header.
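For illustration, a query such as the ones above could be submitted as follows (a sketch; the endpoint URL and the exact token header name are assumptions, the platform’s API documentation being authoritative):

import requests

def run_sparql(endpoint_url, query, token):
    # Submit a SPARQL query to the FIESTA platform, attaching the
    # authorization token as an HTTP header (header name assumed).
    resp = requests.post(
        endpoint_url,
        data=query.encode("utf-8"),
        headers={
            "Content-Type": "application/sparql-query",
            "Accept": "application/sparql-results+json",
            "Authorization": "Bearer " + token,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()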
The backend components perform data analytics tasks in order to compute the Smart
City Magnifier indicators. The output data is again stored in the Context
Management.
entity is a Virtual Entity (VE). For every VE, a set of analytics functions is
executed to compute data statistics on observations (average, minimum,
maximum) and sensor-deployment quality (observation density per area,
number of active sensors of a certain type per virtual entity), as sketched
after this list.
4. The inferred situation is then pushed to the Context Management.
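A minimal sketch of the per-VE statistics step (data structures and field names are hypothetical):

from collections import defaultdict

def ve_statistics(observations):
    # Group observations by virtual entity and quantity kind, then compute
    # average/minimum/maximum plus a simple deployment-quality figure
    # (number of distinct active sensors per VE and quantity kind).
    groups = defaultdict(list)
    for obs in observations:  # obs: {"ve": ..., "qk": ..., "sensor": ..., "value": ...}
        groups[(obs["ve"], obs["qk"])].append(obs)
    stats = {}
    for (ve, qk), group in groups.items():
        values = [o["value"] for o in group]
        stats[(ve, qk)] = {
            "avg": sum(values) / len(values),
            "min": min(values),
            "max": max(values),
            "active_sensors": len({o["sensor"] for o in group}),
        }
    return stats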
The frontend component, the Dashboard, is in charge of offering a GUI to the Smart
City Magnifier user, for showing both acquired observations and inferred situations
(see Figure 7).
2.1.3 Dataset used: FIESTA-IoT Ontology concepts used towards building the
queries
Following is the description of the dataset used (each field is tagged with the FIESTA-
IoT ontology class):
• rdf:type (and its subclasses): the quantity kind of the observed value;
• geo:lat: the exact latitude at which the value has been observed;
• geo:long: the exact longitude at which the value has been observed;
2.1.4 Outcomes
With the first implementation of our experiment, we achieved a full Smart City
application formed by a backend analytics engine and a Smart City Dashboard (see
Figure 8).
The first is able to infer situations for all kinds of geographic locations (e.g. streets,
buildings, cities). The backend component is able to automatically infer the new Virtual
Entities to which the raw data, fed by the FIESTA-IoT platform, belongs.
The dashboard is a complete Smart City Dashboard for visualizing the outcome of
the analytics inference in a map-based widget, where the situations are displayed
as markers with a traffic-light colour schema visualizing their status. In addition,
the situations are summarized over the geographic scope of the map in a set of
single-situation widgets (gauge widget and time-series widget on the left side of
Figure 8). Finally, the analytics outcomes have been classified over the geographic
abstraction level (see Figure 9), selectable interactively with a slide-bar, and over the
situation scopes (i.e. Environmental scope and Deployment scope, see Figure 10).
Figure 9 Smart City Magnifier: different geographic abstraction level selected with the
slide-bar
The experiment also demonstrates its portability, since the same dashboard can
seamlessly visualize situations from one corner of the world or another, simply by
moving the geographic scope of the map.
The Data Assembly and Services Portability Experiment is not yet complete, and
several aspects need to be improved and enhanced before it can be considered to
fully satisfy the foreseen outcomes. In particular, the following aspects will be
addressed during the 3rd year of the FIESTA-IoT project:
• Perform data analytics algorithms over data coming from multiple testbeds
(the actual limitation is that the data analytics algorithms have a
geographic scope at most the size of a country, while the testbeds are
located in different countries).
retrieving data from sensors coming from heterogeneous platforms (such as the ones that
compose the FIESTA-IoT federation) in a single and common solution. For this
experiment, and according to the legacy description of this pilot, we only focus on the
weather/environmental domain. Namely, we will only show resources and
observations that have to do with a subset of physical phenomena (e.g. temperature,
illuminance, wind speed, etc.), where external users running this experiment will be able to
see the resources on a map and dynamically select subsets of them, in order to play
around with the information (i.e. observations) the actual sensors generate.
Amongst the set of features that we support in this experiment, we highlight the
following: graphical representation of resources, location- and phenomena-based
resource discovery, retrieval of observations, combination of data for the
generation of statistical analyses, graphical representation of these stats, etc. The rest
of the section provides a deeper description of each of them.
The main challenges pursued by this experiment can be grouped as follows,
embracing up to three different targets:
• Guidance to third parties: providing some introductory guidelines to
external users, this can be seen as the entry point to the experimentation
realm.
• Platform performance assessment: while the experiment is
running, we gather data about each of the operations that interact with the
FIESTA-IoT platform. This way, we receive feedback on the experience
achieved by experimenters and might also use this information for internal
purposes (e.g. accounting, optimization, etc.)
• Exportation of tools: The way the experiment has been implemented allows
us to straightforwardly export and encapsulate each of the modules that shape it.
Besides this, the nature of this experiment follows an open-source approach, so
third parties might take all they need just by grabbing the piece of code that
suits them.
During the design phase of the application (addressed in (FIESTA-IoT D5.1, 2016)),
we provided a mockup of the user interface designed to cover all the
objectives and KPIs that were used to streamline the experiment. When the time came
to actually implement the application, we arrived at the look-and-feel shown
in Figure 11 (except for the red elements, which are there for explanatory purposes).
As can be appreciated, we have tried to port all the elements that were defined
during the specification phase. As we did in the abovementioned deliverable,
we can easily split the interface into a number of use cases that compose the
full story. In this section we briefly outline them, whereas we break
them down to explain how they operate in Section 2.2.2. Before proceeding with the
individual description of each of them, the reader should take into account that we
present in this document only the ones that are completely functional at the time of
writing, leaving the rest for the next version of the deliverable.
Figure 11. Screenshot of the actual status of the so-called “Dynamic Discovery”
In this first (and main) tab, the most remarkable element is a map (framed as ‘1’ in
the figure) where we can graphically see where the different resources coming from
the FIESTA-IoT platform are physically located (at the moment of the so-called
“resource discovery” stage).
However, the actual role of this map goes beyond this point: by clicking on any
individual marker (we can thus associate a marker with a resource itself), the
system will automatically invoke the corresponding IoT service in order to retrieve all
the subjacent observations (i.e. the latest ones) from that particular node 2, as shown
in the outcomes subsection (see 2.2.4). In addition to this, the framework we have
used provides a tool that permits the creation of “graphical assets”, such as polylines,
rectangles, polygons or circles. We leverage them for the manual and interactive
selection of nodes, whose data will be used as input for use cases 3 and 4 (Sections
2.2.1.3 and 2.2.1.4, respectively).
Just beside the map (element numbered as ‘2’ in Figure 11) we can find a side menu
that complements its behaviour. Split into 5 categories, it supports the following
features:
• Phenomena-based discovery:
2 In our approach, a node might contain more than one sensor, so a single “click” might imply more than one service invocation.
One of the actions executed automatically after the creation/edition/deletion of
the previously named “graphical assets” is the clustering of all the nodes into a new
group of “selected devices”. After this, the experiment is in charge of retrieving all the
latest observations measured by these resources and properly combining them
altogether (on a per-phenomenon basis), yielding a weather-station-like output that
displays the average observed values. However, this operation goes a step further
and automatically triggers a polling service that periodically polls the information
from this list of nodes and updates the stats of this weather station, as sketched below.
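A minimal sketch of this per-phenomenon combination (data structures are hypothetical):

from collections import defaultdict
from statistics import mean

def weather_station(latest_observations):
    # Combine the latest observation of every selected node, on a
    # per-phenomenon basis, into one averaged "weather station" reading.
    per_phenomenon = defaultdict(list)
    for phenomenon, value in latest_observations:
        per_phenomenon[phenomenon].append(value)
    return {p: mean(vals) for p, vals in per_phenomenon.items()}

# e.g. weather_station([("temperature", 21.0), ("temperature", 23.0)])
# -> {"temperature": 22.0}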
Whereas the previous feature is a sample of this dynamic selection of resources and
observations, it is here that we can directly exploit the potential of this combination
of data, or “composition of IoT services”. In this tab (as can be seen in Figure 11 –
element number 4 – it is not part of the main view), we output graphical and
statistical information whose input values come from the observations explained in
the previous section. To cite a couple of examples, we represent, for instance, a
timeline of the evolution of data throughout time (combining historical data with
information gathered from the periodic polling). Additionally, we can also display a
detailed statistical analysis on a per-phenomenon basis at a time instant ‘t’.
2.2.1.5 Documentation
During the implementation phase, we have not left aside one of the requirements that
was part of (FIESTA-IoT D2.1, 2015): “30_NFR_ACC_FIESTA_well_documented -
FIESTA-IoT must be well-documented”. Despite the fact that this experiment is not an
explicit part of the platform, we do believe that it can be used by externals to learn
how to actually interact with the platform.
In a nutshell, the architecture that defines this experiment is shown in
Figure 12. As can be seen, we have split the functionalities at the application level
into two standalone modules: a server that handles the interaction between the
experiment per se and the FIESTA-IoT platform, and a web application (client) in
charge of all the visualization and interaction with users. In the next paragraphs
we briefly go through the main highlights of the experiment.
(Figure 12 – multiple web browsers interact, through an I/O interface, with the experiment server, which in turn communicates with the FIESTA-IoT Platform’s semantic resources and observations.)
• Client/server approach
o The server side is in charge of the global discovery of resources and
the execution of complex operations, like location- or phenomena-
based queries. In other words, it is the communication point between
the FIESTA-IoT platform and the experimenters that execute the web
application on their own systems (see below). Therefore, there will only
be a single instance of the server.
o The client (i.e. browser app) is the actual interface between users and
the server. As such, multiple clients might run at the same time.
Amongst its duties, it undertakes the role of displaying the information
on the map and on the visual tools, as well as of getting data from
users, as has been introduced before. It offers the full set of features
introduced in (FIESTA-IoT D5.1, 2016), where we did not yet have in
mind the breakdown of the experiment into these two parts, though.
Hence, we have managed to significantly reduce the load imposed on
the FIESTA-IoT platform.
First and foremost, before getting any kind of data, we have to know the assets that
are available within the FIESTA-IoT federation. By exploiting the agnosticism that is
brought about by the project, we discover, with a single and common query (i.e.
SPARQL), the whole set of resources that have been registered up to the moment the
query is executed. Namely, the query (SPARQL) that is used for this phase is along
the lines of the following one:
3 https://www.mongodb.com/
4 https://platform.fiesta-iot.eu/iot-registry/docs/api.html
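A sketch of this discovery query — an assumed reconstruction, modelled on the query of Section 2.1.2.2 and restricted to an illustrative subset of weather-related QuantityKinds:

PREFIX iot-lite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
PREFIX m3-lite: <http://purl.org/iot/vocab/m3-lite#>
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?dev ?qk ?endp ?lat ?long
WHERE {
  ?dev a ssn:Device .
  ?dev ssn:onPlatform ?platform .
  ?platform geo:location ?point .
  ?point geo:lat ?lat .
  ?point geo:long ?long .
  ?dev ssn:hasSubSystem ?sensor .
  ?sensor a ssn:SensingDevice .
  ?sensor iot-lite:exposedBy ?serv .
  ?sensor iot-lite:hasQuantityKind ?qkr .
  ?qkr rdf:type ?qk .
  ?serv iot-lite:endpoint ?endp .
  # Illustrative subset of weather-related QuantityKinds:
  VALUES ?qk { m3-lite:Temperature m3-lite:Illuminance }
}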
Taking into account that the registration of resources is not a frequent activity, we do
not need to re-discover resources very often at the server level. Thus, this process
is performed once per hour (albeit this rate might change in the future) so as to
periodically refresh the list of resources. Figure 13 presents the sequence diagram of the
messages exchanged in order to get the whole list of registered resources. Namely, the
meaning of each of them is the following:
1. The server synchronously addresses (as mentioned before, once per hour)
the previously introduced SPARQL query to the FIESTA-IoT platform.
2. The platform internally processes the query and sends back the
corresponding response to the server, which keeps the resultset in memory,
waiting for requests coming from the client side.
3. In a completely independent manner of the previous two messages, a client
runs the application. Immediately, a resource discovery request (in this case,
it is not a SPARQL query) is sent from the browser to the server. We have to
take into account that this process does not lead to the exchange of any
message between the experiment and the platform.
4. The server proceeds to reply to the client with the list of resources discovered in
the first two steps.
Once we know the assets and where they are, the next step is to harvest real data
(i.e. observations from the sensors). In the current version of the experiment, every
time we click on a node’s marker, we make use of the IoT Service endpoints
(included in the resource description) in order to send a request for the
last observations measured by that particular node. As Figure 14 shows, this
leads to an end-to-end operation, whose sequence diagram is depicted below.
(Figure 14 – messages 2.1–2.3: annotated observations (RDF) returned per sensor.)
1. As has been mentioned, the resource description contains the address of the
IoT endpoint that actually exposes that resource (i.e. a simple GET message).
If we take Figure 14 as an example, the clicked node hosts five different
sensors, that is, five different endpoints. Thus, five messages addressed to five
different endpoints will be sent.
2. Internally, the testbed retrieves the observation and sends it back in RDF
format, fulfilling the FIESTA-IoT semantic model. Following the previous
example, five annotated observations will be sent back to the client. It is worth
highlighting that these messages are actually RDF (Resource
Description Framework) documents, so the application has to parse them
accordingly, as sketched below.
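A minimal sketch of this retrieval step (the token header name and the RDF serialization are assumptions):

import requests
from rdflib import Graph

def fetch_last_observations(endpoint_url, token):
    # Step 1: GET the IoT Service endpoint; step 2: parse the annotated
    # observation returned as an RDF document.
    resp = requests.get(endpoint_url,
                        headers={"Authorization": "Bearer " + token})
    resp.raise_for_status()
    g = Graph()
    g.parse(data=resp.text, format="xml")  # RDF/XML assumed; may be Turtle
    return g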
One of the features supported at the client side is the interactive discovery of
resources based on their sensing capabilities, thus filtering out those that are not
able to measure a particular subset of physical phenomena. As shown in Figure 11,
when the option “Remote (SPARQL)” is enabled and we click on the “Send query”
button, a SPARQL query is automatically generated by the client and delivered,
through the server, to the FIESTA-IoT core. Regarding the query per se, it is basically
alike that of Section 2.2.2.1. The main difference is the array of physical phenomena
in its body; unlike the static list of the above case, we only append those
QuantityKinds that are enabled in the interactive toggle group, as sketched below.
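A minimal sketch of how the client could assemble this query (the base query and QuantityKind names are illustrative):

BASE_QUERY = """
PREFIX iot-lite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
PREFIX m3-lite: <http://purl.org/iot/vocab/m3-lite#>
SELECT ?sensor ?endp
WHERE {{
  ?sensor iot-lite:exposedBy ?serv .
  ?sensor iot-lite:hasQuantityKind ?qk .
  ?serv iot-lite:endpoint ?endp .
  VALUES ?qk {{ {quantity_kinds} }}
}}
"""

def build_filter_query(enabled_toggles):
    # Append only the QuantityKinds enabled in the interactive toggle
    # group (class names illustrative).
    kinds = " ".join("m3-lite:" + qk for qk in enabled_toggles)
    return BASE_QUERY.format(quantity_kinds=kinds)

# e.g. build_filter_query(["Temperature", "Illuminance"])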
Figure 15 summarizes the sequence diagram observed in the system.
the parsing and further filtering of all the duplicated information). All in all, the
exchange of messages in this process is reflected in Figure 16.
(Figure 16 – messages 2.1/2.2: SPARQL query for observations and SPARQL response, against the triplestore (iot-registry).)
# Excerpt (the inner sub-select) of the location-based clustering query;
# the complete query follows the same structure as the one shown in
# Section 2.3.2, selecting the latest observation per sensor.
  {
    select (max(?dt) as ?tim) ?s
    where {
      ?o a ssn:Observation.
      ?o ssn:observedBy ?s.
      ?s iot-lite:hasQuantityKind ?qk.
      ?s iot-lite:hasUnit ?unit.
      ?o ssn:observationSamplingTime ?t.
      ?t time:inXSDDateTime ?dt.
    } group by (?s)
  }
} group by (?s) ?tim ?val ?lat ?long ?qk ?unit
(Figure 17 – 1. SPARQL query to the triplestore (iot-registry); 2. SPARQL response with historical data, stored in the MongoDB data storage.)
Finally, the server parses this response and stores all the data in its own
database (MongoDB).
# Prefixes of the periodic-polling query; the query body itself resembles
# the one of Section 2.2.2.4, without the per-sensor filter (see below).
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iot-lite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
Prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
While the server is running, it periodically polls the FIESTA-IoT platform in order to
get all the information. Such a process only involves the experiment’s server and the
FIESTA-IoT platform: the server delivers a SPARQL query like the one introduced in
Section 2.2.2.4, albeit in this case we do not filter according to any sensor ID but
retrieve the whole list of observations. Upon reception of the SPARQL response,
the server undertakes the task of dropping all the duplicated entries. This
workflow is shown in Figure 18.
(Figure 18 – 1. SPARQL query to the triplestore (iot-registry); 2. SPARQL response with the last observations; duplicated observations are filtered before storage in MongoDB.)
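A minimal sketch of this filter-and-store step, assuming SPARQL JSON bindings and a MongoDB collection (all names illustrative):

from pymongo import MongoClient

collection = MongoClient()["experiment"]["observations"]  # names assumed

def store_observations(bindings):
    # Drop duplicated entries by upserting on (sensor, timestamp): an
    # observation already fetched in a previous polling round overwrites
    # itself instead of being stored twice.
    for b in bindings:
        key = {"sensor": b["sensor"]["value"], "time": b["time"]["value"]}
        doc = dict(key, value=float(b["value"]["value"]))
        collection.update_one(key, {"$set": doc}, upsert=True)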
One of the main outcomes of this experiment consists in the visualization of data
coming from the sensors for further analysis. Indeed, this is the actual reason behind
the storage of the observations. Hence, whenever a web client wants to load any of
the graphical assets supported by the experiment (for instance, the evolution of the
average data within a time frame), the client sends a request to the server’s
repository. The response is then the input data for all these graphical elements.
Last, but not least, at the very same time we carry out any of the aforementioned
operations, the experiment’s server keeps track of all of them. In other words, we
record all the computational times consumed by each of these operations (regardless
of the location of the client executing the web application) so that we can extract
information in the future about the overall performance of the platform.
Thanks to this, we can elaborate a set of good practices that might help us improve
the quality of experience not only of experimenters, but also of testbed providers (e.g.
by optimizing the SPARQL queries used to discover and handle resources), as
sketched below.
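A sketch of how such bookkeeping could look (hypothetical; the experiment’s actual monitoring code is not shown here):

import time
from functools import wraps

def monitored(operation_name, log):
    # Record the wall-clock time consumed by every platform-facing
    # operation, regardless of where the client runs.
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                log.append({"op": operation_name,
                            "seconds": time.perf_counter() - start})
        return wrapper
    return decorator

# e.g. discover = monitored("resource-discovery", perf_log)(discover)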
2.2.3 Dataset used: FIESTA-IoT Ontology concepts used towards building the
queries
As for the elements of the ontology used in this experiment, below we
outline the purpose of each of them. Before starting, it is necessary
to differentiate between the two main phases that shape the experiment’s
lifetime: the resource discovery and the observation(s) retrieval.
On the other hand, when it comes to the observation realm, apart from the core of
the measurement per se (ssn:Observation), we have to answer the following
questions:
(1) Who sensed the observation? – ssn:Sensor/ssn:SensingDevice
(2) Where? – geo:location
(3) When? – time:Instant
(4) Type? – iot-lite:QuantityKind and iot-lite:Unit
(5) Value? – This corresponds to the actual values that are connected through
the dul:hasDataValue data property
2.2.4 Outcomes
Apart from all the functionalities that have been presented throughout this section, it
is worth highlighting that the current version of the experiment is not definitive and
there are still many open issues to be tackled in the third year. Hence, there are a
number of features to be implemented and integrated before the project
ends, such as the ones listed below.
• Utilization of composed IoT services within the platform.
• So far, the graphical assets only yield a group of nodes. It would be interesting
to let users define various subsets in order to compare among them.
• Subscription to data streams. At the time of writing, the asynchronous service
is not a supported option in the FIESTA-IoT platform. As a consequence, we
had to seek an alternative to keep getting data from the meta-cloud.
However, as soon as we can rely on this feature, we will shift from our periodic
polling system to the subscription-based one.
• Creation of a FEDSPEC for the definition of the experiment. So far, playing the
role of advanced experimenters has allowed us to manually interact with the
FIESTA-IoT registry API. Nevertheless, in order to test other
components of the platform, we might rely on the ERM and EEE functional
components in order to:
(1) test their behaviour and performance,
(2) reduce the complexity of the experiment.
• Feedback from users. Even though we have not explicitly mentioned this
feature in this deliverable, during the design phase we contemplated the
possibility of enabling a place where experimenters might send information
about potential misbehaviors in the platform.
We envision implementing all the use-cases described in (FIESTA-IoT D5.1,
2016). However, for this version we implement only one use-case, which reports
the noisiest places in the area. Thus, we convert the requirements for this use-
case into a FISMO and send the FISMO to the EEE for execution.
Nevertheless, as the different use-cases complement each other, we would like to build
all of them. The main use-case that is implemented and translated into the
FISMO is the case where sound information is requested for a given region over a
duration of time. Note that, as there is a “scheduling” attribute available in the
FISMO, the duration of time can be specified within the “scheduling” attribute, together
with the periodicity with which information is needed. The use-case query is defined
such that the response consists of only the most recent observations, thereby making
it essential for the experimenter to correctly configure the periodicity within the
“scheduling” attribute.
The current support from the EEE component also makes it possible to poll this use-
case via the “Polling” option. This makes it possible to realise the use-case
where only the most recent observations are requested for a particular region.
The main use-case, as described above, provides the most recent sound values
coming from all the sound sensors within the specified region. Once such data is
received at the experimenter’s end, a selection of the sound-level values can be made
while creating the visualization, to identify whether a value is above or below a certain
threshold. This makes it possible to realize the cases where the visual for the most/least
noisy locations within the region is to be shown. Although the above cases can also be
realized using queries, for this version we implemented the queries that are made
available via the experiment FISMOs. Note that we built all the queries and related
FISMOs; however, on the experimenter side only the most-noisy-locations heat map is
available, as sketched below.
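A minimal sketch of this selection step, assuming SPARQL JSON bindings like the ones produced by the query shown later in this section (the 75 dB threshold mirrors that query’s FILTER clause):

NOISE_THRESHOLD_DB = 75.0

def noisy_points(bindings):
    # Keep only the most recent sound observations at or above the
    # threshold, as (lat, long, value) triples ready for a heatmap layer.
    return [(float(b["latitude"]["value"]),
             float(b["longitude"]["value"]),
             float(b["value"]["value"]))
            for b in bindings
            if float(b["value"]["value"]) >= NOISE_THRESHOLD_DB]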
Translating the above-described workflow into the experiment architecture, there are
various interactions among the components. These interactions are shown in Figure
22. The sequential list of interactions is as follows:
1. Experimenter creates the experiment FED-Spec.
2. He authenticates himself.
3. An access-token is returned to the experimenter upon successful
authentication.
4. The experimenter then sends the created FED-Spec to ERM using the
provided User Interface (UI) along with the access-token.
5. He then opens the EMC.
6. The EMC requests the experiment details from the ERM.
7. Upon the response from ERM, EMC displays the content to the experimenter.
8. The experimenter then selects the experiment,
9. He enables the associated FISMOs to be executed on the FIESTA-IoT
platform.
10. EMC then calls the EEE API to start the execution of the FISMO.
11. At the interval set in the scheduling object of the FISMO, the EEE queries the
IoT-Registry component to retrieve the desired data using the following query:
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
Prefix time: <http://www.w3.org/2006/time#>
Prefix m3-lite: <http://purl.org/iot/vocab/m3-lite#>
Prefix xsd: <http://www.w3.org/2001/XMLSchema#>
select ?sensorID (max(?ti) as ?time) ?value ?latitude ?longitude
where {
?o a ssn:Observation.
?o ssn:observedBy ?sensorID.
?o ssn:observedProperty ?qk.
Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
?o ssn:observationSamplingTime ?t.
?o geo:location ?point.
?point geo:lat ?latitude.
?point geo:long ?longitude.
?t time:inXSDDateTime ?ti.
?o ssn:observationResult ?or.
?or ssn:hasValue ?v.
?v dul:hasDataValue ?value.
{
select (max(?dt) as ?ti) ?sensorID
where {
?o a ssn:Observation.
?o ssn:observedBy ?sensorID.
?o ssn:observedProperty ?qk.
Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
?o ssn:observationSamplingTime ?t.
?t time:inXSDDateTime ?dt.
}group by (?sensorID)
}
FILTER (
(xsd:double(?latitude) >= "-90"^^xsd:double)
&& (xsd:double(?latitude) <= "90"^^xsd:double)
&& ( xsd:double(?longitude) >= "-180"^^xsd:double)
&& ( xsd:double(?longitude) <= "180"^^xsd:double)
)
FILTER(?value>="75"^^xsd:double)
} group by ?sensorID ?time ?value ?latitude ?longitude
12. IoT-Registry executes the query and sends the response back to the EEE.
13. The EEE then forwards the response to the Experiment Data receiver
component that executes on the experimenter side.
13'. The EEE also notifies the EMC about the successful execution. The
EMC upon the receipt of the response updates the UI related to the
FISMO.
14. The Experimenter is presented with the updated UI (with the most recent
information about the FISMO) of the EMC.
15. The Visualizer pulls the information collected by the Experiment Data Receiver
and creates the UI. Note that this UI is different from the EMC UI.
16. The Experimenter loads the visualizations. In the current version only the
most recent results are shown; the Experimenter has to refresh the UI to see
the most recent information.
Referring back to (FIESTA-IoT D5.1, 2016), the above use-case needs the following
information: the sensor producing the sound-level observations, the location of the
sensor, the time the observation was taken, the sound-level values, and the sound
quantity kind. This information is realized in the ontology via ssn:Sensor, geo:location,
time:Instant, ssn:ObservationValue and m3-lite:QuantityKind.
• geo:location: contains the exact latitude and longitude at which the value has
been observed;
2.3.4 Outcomes
With our experiment 5 it is possible to know the noisiest areas within a
specified region (although we consider the whole world as the region for our current
experiment). Using such information, citizens could avoid the noisy places and find
quieter places for their activities. As part of this deliverable we are able to
show the above-mentioned aspect (see Figure 23).
Currently, for this deliverable, we have focused only on identifying the noisiest
locations. We intend, however, to address all the use-cases in the next version of the
deliverable.
All the described phases and sub-phases are iterative and connected to each other
with loop-back interactions. For that reason the evaluation of experiments and the
validation of the FIESTA-IoT concepts, platform and tools can also be considered
an iterative process, and the assessment might vary from this deliverable (which is an
intermediate deliverable) to the final deliverable D5.3, due in M36.
The conception and implementation of the presented methodologies is the outcome of
an iterative process of methodology design and application to the three in-
house experiments. Therefore, the methodologies have been deeply analysed in
both their theoretical and practical aspects.
6 Note: The advance over the SotA will not be evaluated at the integration phase because the
results of the experiment which might actually support this advance only become available after
experiment execution. Similarly, experiment integration evaluation is not relevant at the execution
phase, where the interaction between the experiment and FIESTA-IoT is purely machine-to-machine.
• Feedback to the platform. It is a win-win situation when an experiment uses FIESTA-IoT
resources and gives feedback to FIESTA-IoT to help the platform and the
ecosystem improve. FIESTA-IoT will privilege the experiments that are
potentially capable of providing valuable feedback.
The questionnaire provided a score for each candidate experiment. This
questionnaire did not produce any veto situation but was useful for ranking
the experiments from high to low score.
The “Feedback” section of this questionnaire can be re-distributed to experimenters
when they finish their experiments. The updated score will rely on the actual
implementation and results of the experiment, taking into account the additional
insights that experimenters have gained during the integration and execution of the
experiment.
As part of the experiment definition, the experimenter defines a set of objectives that the
implementation of the experiment on top of the FIESTA-IoT Platform aims at.
Objectives can be specific to the experiment (e.g. investigate the correlations
between network topologies and associated data graphs in IoT big-network-data
environments) as well as related to the FIESTA-IoT Platform (e.g. include
mechanisms in FIESTA-IoT to provide information on the quality of data transmission
and on data outliers to data consumers).
In this respect, experimenters will define KPIs and measurable outcomes at the
beginning of their experiments in order to assess the achievement
of the objectives defined beforehand. Evaluation of this topic will be carried out based on
the successful completion and fulfilment of the previously identified KPIs.
This evaluation topic refers to the advance over the State of the Art (SotA) or the
innovation that the experiment has achieved. In this sense, evaluation will be done
through tangible impact KPIs that the experimenters have identified in terms of the
research questions that could be answered by the execution of the proposed
experiment and the corresponding publications they can generate with these answers.
Additionally, since experiments selected through the FIESTA-IoT Open Calls can also
focus on innovation rather than on research, analogous impact KPIs can be identified
for them.
Although this is not a primary evaluation topic, it will be included within the
experiment evaluation methodology as it will provide third-party assessment of the
quality or innovative nature of the experimentation (i.e. peer-reviewed publications,
market advantage, etc.).
The final experiment evaluation topic refers to the steps and process followed by the
experimenters during the integration phase of their experiment. In this sense, the
methodology followed to assess this point will be the
specification of a checklist (Annex IV Evaluation of experiment integration and
implementation checklist) that will be checked upon completion of the integration
phase of each experiment.
The aspects that have been identified relate to the best practices and support
mechanisms that the FIESTA-IoT consortium has put in place. Adherence to
these best practices is meant to ease the experimentation process and also to
optimize the use of the FIESTA-IoT Platform resources.
The FIESTA-IoT platform is delivered together with a set of tools for experiment
development, deployment and execution. The quality of these tools from the point of
view of the users, i.e. the experimenters, is also key to the quality of the FIESTA-IoT
platform. It is therefore indispensable to validate whether the tools meet the users’
expectations and provide the functionalities that the platform promises.
The validation of the tools also consists of two phases:
• Integration phase. At the end of this phase, experimenters are given a
questionnaire to evaluate the tools that served during their experiment
development and deployment, covering the ease of learning the tools, their
usefulness and their performance. The questionnaire is shown in Annex V
(Questionnaire: Validation of the FIESTA-IoT resources), together with the
questions from the following section.
• Execution phase. During the execution phase, a monitoring tool continuously
tracks the functions and performance of the tools that participate in the
experiment execution: for example, whether the API returns the latest and
historical observations of a sensor in response to a request from an
experiment, and how long the delay between request and response is. A
minimal sketch of such a latency probe is given after this list. The
development of such a monitoring tool is the responsibility of the experimenter.
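By way of illustration, the following minimal sketch shows what such an execution-phase probe could look like. The registry URL, the query parameter name and the token header are assumptions for illustration only; they are not the documented FIESTA-IoT API.

import time
import requests

# Hypothetical endpoint and token: placeholders, not the actual FIESTA-IoT API.
REGISTRY_URL = "https://fiesta-iot.example.org/iot-registry/observations/last"
SESSION_TOKEN = "AQIC5w..."  # session token obtained at login (placeholder)

def probe_latest_observation(sensor_id):
    """Request the latest observation of one sensor and return the delay in seconds."""
    start = time.monotonic()
    response = requests.get(
        REGISTRY_URL,
        params={"sensor": sensor_id},  # hypothetical parameter name
        headers={"iPlanetDirectoryPro": SESSION_TOKEN},  # OpenAM's default session cookie name
        timeout=30,
    )
    elapsed = time.monotonic() - start
    response.raise_for_status()  # flag functional failures as well as slow answers
    return elapsed

Logged over time, such delays give the experimenter an objective view of the platform's responsiveness during execution.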
FIESTA-IoT resources refer to the support materials that the FIESTA-IoT platform
makes available for experiment development purposes. The completeness and clarity
of these materials are essential for experiment development efficiency. This aspect is
evaluated by the experiment developers through a specifically designed
questionnaire answered at the end of the integration phase. This questionnaire,
together with the one introduced in Section 3.2.2, is presented in Annex V
(Questionnaire: Validation of the FIESTA-IoT resources); the results will help the
FIESTA-IoT consortium identify unsatisfactory parts of the resources and improve
them in the future.
Table 3 Evaluation of the Data Assembly and Service Portability through KPIs
Table 4 Evaluation of the Dynamic Discovery of IoT Resources for Testbed Agnostic
Data Access experiment through KPIs
7 https://github.com/fiesta-iot/in-house-dynamic-discovery
The questionnaire proposed for evaluating the integration and implementation phase
has been answered by this experiment and is included in Annex IV (Evaluation
experiment integration and implementation checklist).
Furthermore, for the specific case of the Data Assembly and Services Portability
Experiment, the implementation phase leveraged several components to achieve all
the functionalities. The FIESTA-IoT tools have been used mainly for retrieving and
interpreting the data, whereas other tools have been used to implement the backend
analytics and context management.
FIESTA-IoT tools
• FIESTA-IoT endpoints: used for retrieving the latest value observed by the
sensing devices.
• Semantic Data Repository: used for fetching the data of sensing devices
deployed by testbeds that do not expose IoT endpoints.
• FIESTA-IoT ontology: all the fetched data is interpreted through the
annotations defined by the FIESTA-IoT ontology.
• OpenAM: all the resource discoveries and historical data requests have been
authenticated with a token acquired through the OpenAM server within FIESTA-
IoT. A single set of credentials has been enough to access data from all the
available testbeds; a sketch of this single sign-on step follows this list.
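As an illustration of the single-credentials step, the sketch below uses OpenAM's stock REST authentication interface (the /json/authenticate endpoint and the X-OpenAM-* headers); the server URL is an assumption, not the actual FIESTA-IoT host.

import requests

OPENAM_URL = "https://platform.fiesta-iot.example.org/openam"  # hypothetical host

def get_session_token(username, password):
    """Authenticate once; the returned token is reused for every subsequent request."""
    response = requests.post(
        OPENAM_URL + "/json/authenticate",
        headers={
            "X-OpenAM-Username": username,
            "X-OpenAM-Password": password,
            "Content-Type": "application/json",
        },
        data="{}",
    )
    response.raise_for_status()
    return response.json()["tokenId"]  # one token for all federated testbeds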
Other tools
• NGSI 9: the data format and API used for the communication between the
backend components, and between the backend components and the frontend
component. A hedged sketch of such an exchange follows.
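For completeness, the following sketch shows the general shape of a FIWARE NGSI v1 updateContext message of the kind referenced above (see footnote 9); the broker URL, entity identifier and attribute names are illustrative only, and the experiment's actual payloads may differ.

import requests

BROKER_URL = "http://backend.example.org:1026/v1/updateContext"  # hypothetical broker

payload = {
    "contextElements": [{
        "type": "VirtualEntity",   # illustrative entity type
        "id": "urn:fiesta:room1",  # illustrative entity id
        "attributes": [
            {"name": "avgSound", "type": "float", "value": "62.4"},
        ],
    }],
    "updateAction": "APPEND",  # append the attribute values to the entity
}

requests.post(BROKER_URL, json=payload).raise_for_status()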
The questionnaire proposed for evaluating the integration and implementation phase
has been answered by this experiment as well, and is included in Annex IV
(Evaluation experiment integration and implementation checklist).
Furthermore, in a similar way to the former case, this experiment makes use of
various key components that form the FIESTA-IoT platform core. The following list
describes the main interactions with these elements.
FIESTA-IoT tools & components:
• Resource discovery: by means of an off-the-shelf SPARQL query, we can
gather all the resources available (i.e. registered) in the FIESTA-IoT
federation; a minimal sketch of such a query is given after this list.
• FIESTA-IoT ontology: playing the role of experimenters, we have to be aligned
with the datasets generated by the platform. Due to the direct interplay
between our application and the FIESTA-IoT platform, we have to directly parse
data that respects the rules imposed by this semantic model.
• IoT-Registry: the interaction with this component is essential to retrieve the
resource descriptions (during the discovery phase) and the measurements
that are generated by the sensors over time.
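A minimal sketch of such a discovery query is given below. The prefixes and predicates follow the annotated datasets shown in the annexes of this document, while the SPARQL endpoint URL and the use of iotlite:hasQuantityKind for filtering are our own illustrative assumptions.

import requests

SPARQL_ENDPOINT = "https://platform.fiesta-iot.example.org/iot-registry/queries"  # hypothetical

DISCOVERY_QUERY = """
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
select ?device ?qk ?latitude ?longitude
where {
    ?device a ssn:SensingDevice .
    ?device iotlite:hasQuantityKind ?qk .    # assumption: quantity kind attached to the device
    ?device geo:location ?point .
    ?point geo:lat ?latitude .
    ?point geo:long ?longitude .
}
"""

def discover_resources(token):
    """Run one federation-wide SPARQL query and return the result bindings."""
    response = requests.post(
        SPARQL_ENDPOINT,
        data=DISCOVERY_QUERY,
        headers={
            "iPlanetDirectoryPro": token,  # session token obtained from OpenAM
            "Accept": "application/sparql-results+json",
            "Content-Type": "application/sparql-query",
        },
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]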
8 https://catalogue.fiware.org/enablers/iot-broker
9 https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/FI-
WARE_NGSI_Open_RESTful_API_Specification_%28PRELIMINARY%29
10 http://wiki.openstreetmap.org/wiki/Nominatim
11 https://freeboard.io/
The questionnaire proposed for evaluating the integration and implementation phase
has been answered by this experiment as well, and is included in Annex IV
(Evaluation experiment integration and implementation checklist).
Furthermore, this experiment uses the FIESTA-IoT tools built to support
experimentation, in particular the “Experiment Data Receiver” created by FIESTA-IoT
to enable experimenters to receive data. The specific usage of the tools listed below
is explained in Section 2.3.2:
• ERM: to store the created FEDspec.
• EEE: to execute the FISMOs in the FEDspec.
• EMC: to enable the execution of the needed FISMO.
• IoT-Registry (FIESTA-IoT semantic storage component): the component to
which the EEE periodically sends the query to be executed for the results.
• Portal: used to log in to FIESTA-IoT and to use the ERM and EMC.
12 http://leafletjs.com/
13 http://turfjs.org/
14 https://d3js.org/
• Security Component: to log in to the portal and use the ERM and EMC, the
necessary session cookie has to be generated first. The Security Component
is thus needed to generate this session cookie (also known as access token).
Other tools
The current implementation state of this experiment has already accomplished many
FIESTA-IoT objectives:
This experiment was designed to accomplish various objectives that were originally
defined as a list of main challenges to be tackled under the scope of the FIESTA-IoT
project. Even though not all of them relate to this application, we do cover some of
them, as summarized below.
The implementation of this experiment has validated the following objectives of the
FIESTA-IoT project:
Objective: Design and implement integrated IoT experiments/applications
Status: Matched
Details: We are able to use the FIESTA-IoT Platform to design, implement and
execute our experiment using various tools made available by FIESTA-IoT. We
further used a “single entry point and a single set of credentials” to perform our
experimentation. This validates Objective 1.
We have created a list of KPIs (FIESTA-IoT D5.1, 2016) related to our experiment for
assessing the validation of the FIESTA-IoT tools. The following table contains those
KPIs, with their status given the current situation of the platform and the experiment
implementation.
We have also exercised the integration of the different tools offered by FIESTA-IoT
for easing the experiment implementation process:
Problems encountered:
The first problem encountered during the implementation of the experiment was the
lack of the asynchronous notification system, due to the prioritization of effort on
finalizing the core of the FIESTA-IoT platform. This problem has not blocked the
development of the experiment, which implements query polling instead. In the
future, an asynchronous notification system would nevertheless be adopted and
would bring advantages such as better performance.
A second problem encountered was the performance of the Semantic Data
Repository. Data SPARQL queries were answered with too high a response time or
even, in some extreme cases, ended with a connection timeout. This issue was
solved by optimizing the SPARQL query, reformulating it by changing the order of its
clauses; the sketch below illustrates the kind of reordering involved.
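The exact optimized query is not reproduced in this deliverable, but the following sketch illustrates the principle: evaluating the most selective pattern first shrinks the intermediate solution set that every subsequent join has to process (prefixes as in the annex queries are omitted for brevity).

# Original shape: the unrestricted pattern is matched first,
# so the engine joins across every observation in the store.
SLOW_QUERY = """
select ?o ?value where {
    ?o a ssn:Observation .
    ?o ssn:observationResult ?or .
    ?or ssn:hasValue ?v .
    ?v dul:hasDataValue ?value .
    ?o ssn:observedProperty ?qk .
    Values ?qk { m3-lite:Sound }
}
"""

# Reordered shape: the selective value is bound first, so only
# sound observations survive the very first join.
FAST_QUERY = """
select ?o ?value where {
    Values ?qk { m3-lite:Sound }
    ?o ssn:observedProperty ?qk .
    ?o a ssn:Observation .
    ?o ssn:observationResult ?or .
    ?or ssn:hasValue ?v .
    ?v dul:hasDataValue ?value .
}
"""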
We have drawn up the following list to assess the FIESTA-IoT platform,
concentrating on the most relevant achievements to be accomplished:
KPI: Display information of a minimum of 5,000 resources, coming from
Status: Partially achieved
Details: The achievement of this indicator depends on the platform and the
integrated testbeds. At the time of
Problems encountered
One of the features that was not ready by the initial phase of this experiment is the
asynchronous service, where we could have harnessed the potential of a fully-
fledged subscription system to the observations. As soon as it is available, we will
integrate this mechanism into the experiment, thus replacing the legacy polling
service.
At the moment of writing this document, the primary way to retrieve data relies on the
utilization of IoT service endpoints instead of the storage of observations in the
platform itself. Therefore, we have had to rely on the invocation of these endpoints,
found in the resource descriptions, instead of directly querying the meta-cloud
repository; a sketch of this fallback is given below. Likewise, this is a temporary
solution that will be replaced as testbeds gradually store their data in the FIESTA-IoT
platform.
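The sketch below illustrates this fallback under the same assumptions as before: the iotlite:exposedBy and iotlite:endpoint predicates follow the IoT-Lite vocabulary, while the payload returned by each endpoint remains testbed-specific and is not modelled here.

import requests

ENDPOINT_QUERY = """
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
select ?device ?endpoint
where {
    ?device a ssn:SensingDevice .
    ?device iotlite:exposedBy ?service .
    ?service iotlite:endpoint ?endpoint .
}
"""

def fetch_from_endpoints(bindings, token):
    """Invoke each IoT service endpoint found in the resource descriptions."""
    observations = []
    for row in bindings:
        url = row["endpoint"]["value"]
        resp = requests.get(url, headers={"iPlanetDirectoryPro": token}, timeout=30)
        if resp.ok:
            observations.append(resp.json())  # payload format is testbed-specific
    return observations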
The KPIs defined help us validate the FIESTA-IoT tools. We list these KPIs below.
KPI: Leverage data from as many testbeds as possible (to quantify: more than 1
testbed)
Status: Achieved
Details: Currently, based on the experiment requirement of observations from sound
sensors, the experiment gets data from the SmartSantander and SoundCity
testbeds. Further, as the query used is testbed agnostic, if more testbeds associated
to FIESTA-IoT provide sound-sensor-related observations in the future, the
experiment will get data from them as well.

KPI: Number of sensors that provide data to the experiment (to quantify: more than
100 sensors)
Status: Achieved
Details: As SoundCity is a crowdsensing testbed and its integration with FIESTA-IoT
was done recently (month 24), there are fewer than 4 users that have authorized the
SoundCity testbed to send their information to FIESTA-IoT. We envision an increase
in this number as more and more users join. Further, there are currently 34 sensors
from SmartSantander and more than 100 sound sensors from Smart ICS from which
data is received.

KPI: Large number of samples needed for high-quality results
Status: Not achieved
Details: Currently, as the triple store is young, not many observations are available.
This also partially depends on the previous KPI. As soon as the triple store grows
with respect to the number of sound-related observations, this KPI will be achieved.
Further, based on our experience, we also list whether the FIESTA-IoT tools we used
were easy to use and integrate, and whether they provided the functionality needed
by our experiment:
Tool: Portal
Status: Validated
Details: We were able to successfully log in to the portal and use the different tools.
This also validates the Security Component (OpenAM).

Tool: ERM
Status: Validated
Details: We were able to successfully register our experiment using the ERM.

Tool: EMC
Status: Validated
Details: We were able to successfully schedule the experiment using the EMC and
use the provided functionality.

Tool: EEE
Status: Validated
Details: The experiment services were successfully scheduled and executed.

Tool: IoT-Registry
Status: Validated
Details: The IoT-Registry was able to execute the query and provide the results in
the desired format and within the desired time.

Tool: Experiment Data Receiver
Status: Validated
Details: The Experiment Data Receiver successfully received the data sent by the
EEE.
Table 14 Validation of the FIESTA-IoT tools by the Large-Scale Crowdsensing
experiment
Problems Encountered
A number of issues were faced while preparing our experiment:
• Missing/incorrect triples: some testbeds missed essential concepts made
available via the ontology. Effort therefore had to be put into identifying the
issues using SPARQL queries; they were reported to the testbed owners, who
then modified their annotators to correctly match the ontology and gave
priority to solving the issue.
• Unavailability of a large number of sound sensors: currently there are few
sound sensors available within the FIESTA-IoT platform, and due to this the
quality of the results is not very high.
Besides the above-mentioned list, we would like to state that we also faced some of
the issues stated before (by the two other experiments).
Table 15 Validation of the FIESTA-IoT resources by the Data Assembly and Services
Portability experiment
All in-house experimenters have filled in the validation questionnaire, and their
answers are available in Annex V (Questionnaire: Validation of the FIESTA-IoT
resources). This questionnaire assesses the quality of experience of FIESTA-IoT
experimenters. From their answers, we can draw the following conclusions:
• The FIESTA-IoT documentation is consulted often and appreciated by the
experimenters: it provides rich, comprehensive and useful information for
experiment development. Experimenters declared that they always found the
needed information in the documentation and that its quality is satisfactory.
• For setting up and deploying an experiment, the processes are easy to follow
and implement, and the integration and deployment on the FIESTA-IoT
platform is relatively straightforward, without much complication. The time
spent to fully integrate an experiment with FIESTA-IoT is, in general, not more
than two weeks.
• The FIESTA-IoT APIs are simple and useful. Two experimenters out of three
declared that they preferred the API-based solution for interacting with the
platform, rather than the experiment portal, because of its flexibility. However,
the experiment portal remains the favourite of one in-house experimenter.
• The experiment results have met the experimenters’ expectations, according
to their answers.
• All the experimenters declared having an excellent interaction with the
FIESTA-IoT team, and they would recommend the FIESTA-IoT platform to
other experimenters.
From their answers, we can also identify some aspects that FIESTA-IoT needs to
improve in the future:
• The only documentation that the experimenters rated lower is the one about
SPARQL queries. This subject can be enhanced in the handbook.
• The performance and availability of the platform are not totally satisfactory.
5 CONCLUSIONS
This deliverable has described the implementations and outcomes of the three in-
house experiments. The report contained in this document is twofold for each
experiment: a report on the actual experiment architecture and implementation, and
a report on the interaction with the FIESTA-IoT platform and the FIESTA-IoT
concepts exploited.
It is worth noticing that, even if all the experiments are at their first versions, many
achievements and outcomes have already been reached from both perspectives:
experimentation and FIESTA-IoT platform validation.
From the experiments' perspective, all three are effectively working and operative,
and ready to use the FIESTA-IoT platform as a testbed interoperability platform. All of
them have implemented the first version of both the backend system, for retrieving
and analysing data, and the frontend components for showing the results.
Furthermore, all three in-house experiments have been evaluated with a
methodology designed to give the most objective view of the results.
From the platform validation perspective, the experiments have been able to access
data from different testbeds in an agnostic manner. Furthermore, the experiments
have shown their portability among testbeds since, for all three in-house
experiments, the applications can be used in every region of the globe without any
need for re-configuration, seamlessly exploiting data from very different IoT systems.
In addition, the experiments have been capable of retrieving more than one
observation from the same sensor with a single query, hence demonstrating the
historical query capability. One of the experiments has also successfully leveraged
the FIESTA-IoT tools for designing the experiment, running it directly on the FIESTA-
IoT platform and harvesting the results asynchronously. Finally, all three experiments
have successfully integrated and used the security functions that ensure access
control to the data. All the interactions with the FIESTA-IoT platform have been
preceded by only a single authentication request, used to retrieve the necessary
token. Many FIESTA-IoT concepts have been leveraged in the three in-house
experiments: the usage of the FIESTA-IoT ontology for understanding data coming
from different IoT deployments in a seamless manner and for the automatic
execution of backend analytics (e.g. statistical data aggregation); the dynamic
discovery of resources regardless of testbed deployments; the Virtual Entities
concept for adding an abstraction layer on top of the pure observations; and the
automatic execution of the experiment with asynchronous harvesting of the results.
The validation of the FIESTA-IoT platform, concepts and tools has also been
addressed with a well-defined methodology applied to all the experiments.
The outcome of this document is twofold: hints to third-party experimenters and
FIESTA-IoT platform users on how to use the powerful FIESTA-IoT tools and testbed
interoperability for IoT applications; and feedback to the FIESTA-IoT project on which
aspects are to be considered weak points to be enhanced in the future.
The work executed so far has brought to attention some weaknesses of the FIESTA-
IoT platform: the triple-store performance, which can be a big bottleneck if not wisely
handled in the future; the scarcity of data, which can easily be overcome with the
integration of the FIESTA-IoT extensions from the Open Calls; and the necessity of
an asynchronous notification system for data, with the aim of lowering the bandwidth
used, which is already in the roadmap of the FIESTA-IoT platform for the third year of
the project.
6 REFERENCES
FILTER (
(xsd:double(?latitude) >= "4.34"^^xsd:double)
&& ( xsd:double(?longitude) >= "3.806"^^xsd:double)
)
} group by ?sensorID ?time ?value ?latitude ?longitude
]]></query>
</prt:query-request>
</fed:queryControl>
</fed:FISMO>
<fed:FISMO name="3rdUseCase">
<fed:description>Over time noise observations for a given bounding box
(time period in scheduling)</fed:description>
<fed:discoverable>true</fed:discoverable>
<fed:experimentControl>
<fed:scheduling>
<fed:startTime>2016-11-08T18:50:00.0Z</fed:startTime>
<fed:Periodicity>250</fed:Periodicity>
<fed:stopTime>2017-11-08T18:49:59.0Z</fed:stopTime>
</fed:scheduling>
</fed:experimentControl>
<fed:experimentOutput location="https://experimentserver.org/store"></fed:experimentOutput>
<fed:queryControl>
<prt:query-request>
<query><![CDATA[
# [1 / 1] visualization type: 'Gauge' and sensors
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
Prefix time: <http://www.w3.org/2006/time#>
Prefix m3-lite: <http://purl.org/iot/vocab/m3-lite#>
Prefix xsd: <http://www.w3.org/2001/XMLSchema#>
select ?sensorID (max(?ti) as ?time) ?value ?latitude ?longitude
where {
    ?o a ssn:Observation.
    ?o ssn:observedBy ?sensorID.
    ?o ssn:observedProperty ?qk.
    Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
    ?o ssn:observationSamplingTime ?t.
    ?o geo:location ?point.
    ?point geo:lat ?latitude.
    ?point geo:long ?longitude.
    ?t time:inXSDDateTime ?ti.
    ?o ssn:observationResult ?or.
    ?or ssn:hasValue ?v.
    ?v dul:hasDataValue ?value.
    {
        select (max(?dt) as ?ti) ?sensorID
        where {
            ?o a ssn:Observation.
            ?o ssn:observedBy ?sensorID.
            ?o ssn:observedProperty ?qk.
            Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
            ?o ssn:observationSamplingTime ?t.
            ?t time:inXSDDateTime ?dt.
        } group by (?sensorID)
    }
    FILTER (
        (xsd:double(?latitude) >= "-90"^^xsd:double)
        && (xsd:double(?latitude) <= "90"^^xsd:double)
        && (xsd:double(?longitude) >= "-180"^^xsd:double)
        && (xsd:double(?longitude) <= "180"^^xsd:double)
    )
} group by ?sensorID ?time ?value ?latitude ?longitude
]]></query>
</prt:query-request>
</fed:queryControl>
</fed:FISMO>
<fed:FISMO name="4thUseCase">
<fed:description>3rd usecase with noise more than x
dB(A)</fed:description>
<fed:discoverable>true</fed:discoverable>
<fed:experimentControl>
<fed:scheduling>
<fed:startTime>2016-11-08T18:50:00.0Z</fed:startTime>
<fed:Periodicity>250</fed:Periodicity>
<fed:stopTime>2017-11-08T18:49:59.0Z</fed:stopTime>
</fed:scheduling>
</fed:experimentControl>
<fed:experimentOutput location="https://experimentserver.org/store/"></fed:experimentOutput>
<fed:queryControl>
<prt:query-request>
<query><![CDATA[
# [1 / 1] visualization type: 'Gauge' and sensors
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
Prefix time: <http://www.w3.org/2006/time#>
Prefix m3-lite: <http://purl.org/iot/vocab/m3-lite#>
Prefix xsd: <http://www.w3.org/2001/XMLSchema#>
select ?sensorID (max(?ti) as ?time) ?value ?latitude ?longitude
where {
    ?o a ssn:Observation.
    ?o ssn:observedBy ?sensorID.
    ?o ssn:observedProperty ?qk.
    Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
    ?o ssn:observationSamplingTime ?t.
    ?o geo:location ?point.
    ?point geo:lat ?latitude.
    ?point geo:long ?longitude.
    ?t time:inXSDDateTime ?ti.
    ?o ssn:observationResult ?or.
    ?or ssn:hasValue ?v.
    ?v dul:hasDataValue ?value.
    {
        select (max(?dt) as ?ti) ?sensorID
        where {
            ?o a ssn:Observation.
            ?o ssn:observedBy ?sensorID.
            ?o ssn:observedProperty ?qk.
            Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
            ?o ssn:observationSamplingTime ?t.
            ?t time:inXSDDateTime ?dt.
        } group by (?sensorID)
    }
    FILTER (
        (xsd:double(?latitude) >= "-90"^^xsd:double)
        && (xsd:double(?latitude) <= "90"^^xsd:double)
        && (xsd:double(?longitude) >= "-180"^^xsd:double)
        && (xsd:double(?longitude) <= "180"^^xsd:double)
    )
    FILTER (?value >= "75"^^xsd:double)
} group by ?sensorID ?time ?value ?latitude ?longitude
]]></query>
</prt:query-request>
</fed:queryControl>
</fed:FISMO>
<fed:FISMO name="5thUseCase">
<fed:description>3rd usecase with noise less than x
dB(A)</fed:description>
<fed:discoverable>true</fed:discoverable>
<fed:experimentControl>
<fed:scheduling>
<fed:startTime>2016-11-08T18:50:00.0Z</fed:startTime>
<fed:Periodicity>250</fed:Periodicity>
<fed:stopTime>2017-11-08T18:49:59.0Z</fed:stopTime>
</fed:scheduling>
</fed:experimentControl>
<fed:experimentOutput location="https://experimentserver.org/store/"></fed:experimentOutput>
<fed:queryControl>
<prt:query-request>
<query><![CDATA[
# [1 / 1] visualization type: 'Gauge' and sensors
Prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
Prefix iotlite: <http://purl.oclc.org/NET/UNIS/fiware/iot-lite#>
Prefix dul: <http://www.loa.istc.cnr.it/ontologies/DUL.owl#>
Prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
Prefix time: <http://www.w3.org/2006/time#>
Prefix m3-lite: <http://purl.org/iot/vocab/m3-lite#>
Prefix xsd: <http://www.w3.org/2001/XMLSchema#>
select ?sensorID (max(?ti) as ?time) ?value ?latitude ?longitude
where {
    ?o a ssn:Observation.
    ?o ssn:observedBy ?sensorID.
    ?o ssn:observedProperty ?qk.
    Values ?qk {m3-lite:Sound m3-lite:SoundPressureLevelAmbient}
    ?o ssn:observationSamplingTime ?t.
    ?o geo:location ?point.
    ?point geo:lat ?latitude.
Evaluation questionnaire: Template

Feasibility
• Does the experiment need external data to accomplish the goal? If so, which
external data do you need?
  Scoring: -1 point if the answer is yes
• How will the experiment consume data (request-based or subscription-based)?
  Scoring: -1 point if subscription-based for the moment, as the function is not yet
stable; will be neutral in the future
• In case of request-based consumption, what is the expected request rate?
  Scoring: -1 point if the rate < 10 s
• In case of subscription-based data consumption, what is the expected notification
rate?
  Scoring: -1 point if the rate < 10 s
• Do you need third-party tools to accomplish the experiment?
  Scoring: neutral question
• What tools do you need among the ones provided by FIESTA-IoT (refer to the
FIESTA-IoT tool list)?
  Scoring: +1 point for each FIESTA-IoT tool
• How many FIESTA-IoT testbeds do you need to accomplish the experiment?
  Scoring: +1 point for each testbed involved in the experiment

Feedback
• To what extent will the experiment use semantic data (e.g. only for discovery,
produce semantic data, etc.)?
  Scoring: +1 point for each basic use (i.e. discovery, retrieve, store); +2 points for
knowledge-producing operations (i.e. reasoning, cross-field operations)
• Will the experiment generate new knowledge from the requested data and provide
it back to the FIESTA-IoT knowledge base?
  Scoring: +4 points if it provides knowledge back to FIESTA-IoT
Questionnaire: answers

Feasibility
• What kinds of sensor data do you need? (e.g. temperature, humidity, etc.)
  Answer: Our experiment is automatically instantiating a set of analytics functions
(data statistics such as average, minimum and maximum, but also sensor
deployment metrics like observation density per area and number of active sensors
of a certain type per virtual entity) on every available kind of sensor data.
• Does the experiment need data from a specific place? (e.g. Paris, Tokyo, etc.) If
so, specify the place(s).
  Answer: No
• Do you need to filter the data during discovery/retrieval? If so, what are the criteria
(location, phenomena, time, etc.)?
  Answer: No.
• Does the experiment need external data to accomplish the goal? If so, which
external data do you need?
  Answer: No, all the inputs can be taken from the FIESTA-IoT platform.
• How will the experiment consume data (request-based or subscription-based)?
  Answer: It can work in both modes.
• In case of request-based consumption, what is the expected request rate?
  Answer: ~3 minutes
• In case of subscription-based data consumption, what is the expected notification
rate?
  Answer: ~1 minute
• Do you need third-party tools to accomplish the experiment?
  Answer: Yes.
• What tools do you need among the ones provided by FIESTA-IoT (refer to the
FIESTA-IoT tool list)?
  Answer: OpenAM security, Semantic Data Repository, IoT-Registry API and IoT
Service endpoints

Feedback
• How many FIESTA-IoT testbeds do you need to accomplish the experiment?
  Answer: At least two, but the more the better.
• To what extent will the experiment use semantic data (e.g. only for discovery,
produce semantic data, etc.)?
  Answer: For the process of data acquisition and analytics: historical query, resource
discovery and data analytics execution.
• Will the experiment generate new knowledge from the requested data and provide
it back to the FIESTA-IoT knowledge base?
  Answer: New knowledge will be produced, but at the moment there is no plan to
push it back to FIESTA-IoT.
Questionnaire: answers

Feasibility
• What kinds of sensor data do you need? (e.g. temperature, humidity, etc.)
  Answer: Environmental data (temperature, illuminance, atmospheric pressure,
relative humidity, wind speed, solar radiation, etc.)
• Does the experiment need data from a specific place? (e.g. Paris, Tokyo, etc.) If
so, specify the place(s).
  Answer: No
• Do you need to filter the data during discovery/retrieval? If so, what are the criteria
(location, phenomena, time, etc.)?
  Answer: Yes. At the time of writing, location, phenomena and time queries will be
necessary.
• Does the experiment need external data to accomplish the goal? If so, which
external data do you need?
  Answer: No, all the inputs can be taken from the FIESTA-IoT platform.
• How will the experiment consume data (request-based or subscription-based)?
  Answer: It can work in both modes, but preferably through a subscription-based
operation.
• In case of request-based consumption, what is the expected request rate?
  Answer: ~5 minutes
• In case of subscription-based data consumption, what is the expected notification
rate?
  Answer: ~1 minute
• Do you need third-party tools to accomplish the experiment?
  Answer: Yes.
• What tools do you need among the ones provided by FIESTA-IoT (refer to the
FIESTA-IoT tool list)?
  Answer: OpenAM security, IoT-Registry API and IoT Service endpoints

Feedback
• How many FIESTA-IoT testbeds do you need to accomplish the experiment?
  Answer: Every testbed providing environmental data is …
Questionnaire: answers

Feasibility
• What kinds of sensor data do you need? (e.g. temperature, humidity, etc.)
  Answer: Sound sensors. This is already available in the taxonomy, and the testbeds
are already providing data.
• Does the experiment need data from a specific place? (e.g. Paris, Tokyo, etc.) If
so, specify the place(s).
  Answer: No, it is large scale and does not depend on the location.
• Do you need to filter the data during discovery/retrieval? If so, what are the criteria
(location, phenomena, time, etc.)?
  Answer: Yes, we need the most recent observations. This can be done while
querying the registry.
• Does the experiment need external data to accomplish the goal? If so, which
external data do you need?
  Answer: No.
• How will the experiment consume data (request-based or subscription-based)?
  Answer: It will be request-based. We have a portal that will allow the end users or
the citizens to view the map of the noisy/quiet places. Thus, once the data is sent to
the experimenter (us) by the EEE, we will store the data and will consume it if there
is a request from a citizen or end user.
• In case of request-based consumption, what is the expected request rate?
  Answer: As soon as possible; we expect it to be fast.
• In case of subscription-based data consumption, what is the expected notification
rate?
  Answer: NA
• Do you need third-party tools to accomplish the experiment?
  Answer: Yes.
• What tools do you need among the ones provided by FIESTA-IoT (refer to the
FIESTA-IoT tool list)?
  Answer: We need: Portal, OpenAM security, EEE, EMC, ERM and IoT-Registry.

Feedback
• How many FIESTA-IoT testbeds do you need to accomplish the experiment?
  Answer: All those that have sound sensors. Currently there are 3 testbeds:
SmartSantander, Smart ICS and SoundCity.
• To what extent will the experiment use semantic data (e.g. only for discovery,
produce semantic data, etc.)?
  Answer: Only to get observations.
Checklist template

Learning phase
• Use of support channels: Has the experimenter used the helpdesk support tools?
  e-mail: Y/N; ticket system: Y/N; live chat: Y/N

Design and development phase
• Use of FIESTA-IoT tools: REST access to datasets: Y/N
• Suggest additional functionalities based on experience: Has the experimenter
proposed additional functionalities that could be beneficial for future experiments?
Y/N. If yes, which one(s)?
• Provide code / enhancements / modules / tools: Has the experimenter provided
code/enhancements/modules/tools that could be beneficial for future experiments?
Y/N. If yes, which one(s)?
Learning phase
• Use of support channels: Has the experimenter used the helpdesk support tools?
  e-mail: NA; ticket system: NA; live chat: NA

Design and development phase
• Use of FIESTA-IoT tools: Which tools from the FIESTA-IoT Platform portfolio does
the experiment use?
  Experiment-related tools: N; SPARQL endpoint: Y; Resource browser: N; REST
access to datasets: Y
• Suggest additional functionalities based on experience: Has the experimenter
proposed additional functionalities that could be beneficial for future experiments? N
• Provide code / enhancements / modules / tools: Has the experimenter provided
code/enhancements/modules/tools that could be beneficial for future experiments? N
• Support objective assessment of platform non-functional requirements: Does the
experiment allow objective assessment of the FIESTA-IoT platform non-functional
requirements? N (not systematically)
Learning phase
• Use of support channels: Has the experimenter used the helpdesk support tools?
  e-mail: NA; ticket system: NA; live chat: NA

Design and development phase
• Use of FIESTA-IoT tools: REST access to datasets: Y
• Suggest additional functionalities based on experience: Has the experimenter
proposed additional functionalities that could be beneficial for future experiments?
Y: a performance monitoring tool and a feedback module
• Provide code / enhancements / modules / tools: Has the experimenter provided
code/enhancements/modules/tools that could be beneficial for future experiments?
Y: application source code which can easily be broken down into independent
modules
Learning phase
• Use of support channels: Has the experimenter used the helpdesk support tools?
  e-mail: NA; ticket system: NA; live chat: NA

Design and development phase
• Use of FIESTA-IoT tools: Which tools from the FIESTA-IoT Platform portfolio does
the experiment use?
  Experiment-related tools: Y; SPARQL endpoint: NA; Resource browser: NA; REST
access to datasets: NA
• Suggest additional functionalities based on experience: Has the experimenter
proposed additional functionalities that could be beneficial for future experiments?
N (not as of now)
• Provide code / enhancements / modules / tools: Has the experimenter provided
code/enhancements/modules/tools that could be beneficial for future experiments?
N (not as of now)
Questionnaire template

Q5. How would you rate the relevance of the documentation to support you to set up
your experimentation?
(Scale: Excellent / Very good / Good / Fair / Poor / N/A)
Documentation about FEDSPEC ☐ ☐ ☐ ☐ ☐ ☐
Documentation about APIs ☐ ☐ ☐ ☐ ☐ ☐
Documentation about Ontology ☐ ☐ ☐ ☐ ☐ ☐
Documentation about SPARQL queries ☐ ☐ ☐ ☐ ☐ ☐
Documentation about installing Experiment Data Receiver ☐ ☐ ☐ ☐ ☐ ☐
Experiment Execution process and guidelines ☐ ☐ ☐ ☐ ☐ ☐

(Scale: Excellent / Very good / Good / Fair / Poor / N/A)
Q6. How would you rate the FEDSPEC creation process? ☐ ☐ ☐ ☐ ☐ ☐
Q7. How would you rate the SPARQL Queries creation process? ☐ ☐ ☐ ☐ ☐ ☐
Q8. How would you rate the integration and deployment process? ☐ ☐ ☐ ☐ ☐ ☐
Q9. How would you rate the quality and quantity of available data? ☐ ☐ ☐ ☐ ☐ ☐
Q10. How would you rate the performance of the EEE module? ☐ ☐ ☐ ☐ ☐ ☐
Q11. How would you qualify the quality and relevance of the tools which have been
made available to you? ☐ ☐ ☐ ☐ ☐ ☐
Q12. How would you qualify the quality of the FIESTA-IoT APIs? ☐ ☐ ☐ ☐ ☐ ☐
Q13. How would you qualify the ease of installing the Experiment Data Receiver
(Excellent being very easy and Poor being very hard)? ☐ ☐ ☐ ☐ ☐ ☐

Q14. Do you prefer to move to an API-based solution rather than using the
experiment portal?
Yes / No
If Yes, please specify the reason: ………………………………….

Q15. How much time have you spent in total to integrate the FIESTA-IoT tools in
your experiment to have the first experiment prototype working? (This counts only
the time used to set up the FIESTA-IoT tools, such as the API connectors, EMC and
Data Receiver setup, without counting the effort for visualization tools or the setup of
external tools.)
(Scale: Less than 1 week / Less than 2 weeks / Less than 1 month / Less than 2
months / More than 2 months)
Get Started Level* ☐ ☐ ☐ ☐ ☐
Basic Integration Level ☐ ☐ ☐ ☐ ☐
Full Integration Level ☐ ☐ ☐ ☐ ☐
* “Get Started Level” corresponds to following the instructions in the handbook, “Basic Integration Level”
corresponds to the first integration of your experiment to FIESTA-IoT, and “Full Integration Level” refers to a final
integration after the necessary fine-tuning of your experiment.

Q17. Please give us all the comments you may have about your experience during
the experimentation:
…………………………………………………………………..

Ending the experiment
(Scale: Excellent / Very good / Good / Fair / Poor)
Q18. Overall, how do you qualify your experience on the FIESTA-IoT platform?
☐ ☐ ☐ ☐ ☐
Q5. How would you rate the relevance of the documentation to support you to set up
your experimentation?

(Scale: Excellent / Very good / Good / Fair / Poor / N/A)
Q6. How would you rate the FEDSPEC creation process? ☐ ☐ ☐ ☐ ☐ ☒
Q7. How would you rate the SPARQL Queries creation process? ☒ ☐ ☐ ☐ ☐ ☐
Q8. How would you rate the integration and deployment process? ☐ ☒ ☐ ☐ ☐ ☐
Q9. How would you rate the quality and quantity of available data? ☐ ☒ ☐ ☐ ☐ ☐
Q10. How would you rate the performance of the EEE module? ☐ ☐ ☐ ☐ ☐ ☒
Q11. How would you qualify the quality and relevance of the tools which have been
made available to you? ☒ ☐ ☐ ☐ ☐ ☐
Q12. How would you qualify the quality of the FIESTA-IoT APIs? ☒ ☐ ☐ ☐ ☐ ☐
Q13. How would you qualify the ease of installing the Experiment Data Receiver
(Excellent being very easy and Poor being very hard)? ☐ ☐ ☐ ☐ ☐ ☒

Q14. Do you prefer to move to an API-based solution rather than using the
experiment portal?
Yes. If Yes, please specify the reason:
I went to the API-based solution from the first phase since we are more accustomed
to connector creation.

Q15. How much time have you spent in total to integrate the FIESTA-IoT tools in
your experiment to have the first experiment prototype working? (This counts only
the time used to set up the FIESTA-IoT tools, such as the API connectors, EMC and
Data Receiver setup, without counting the effort for visualization tools or the setup of
external tools.)
(Scale: Less than 1 week / Less than 2 weeks / Less than 1 month / Less than 2
months / More than 2 months)
Get Started Level* ☒ ☐ ☐ ☐ ☐
Basic Integration Level ☒ ☐ ☐ ☐ ☐
Full Integration Level ☐ ☒ ☐ ☐ ☐
* “Get Started Level” corresponds to following the instructions in the handbook, “Basic Integration Level”
corresponds to the first integration of your experiment to FIESTA-IoT, and “Full Integration Level” refers to a final
integration after the necessary fine-tuning of your experiment.

Q17. Please give us all the comments you may have about your experience during
the experimentation:
We have found the FIESTA-IoT platform very reliable and, in case of any small issue,
the direct communication with the FIESTA-IoT support helped us to get it quickly
solved (either on our side or on their side).

(Scale: Excellent / Very good / Good / Fair / Poor)
Q18. Overall, how do you qualify your experience on the FIESTA-IoT platform?
☒ ☐ ☐ ☐ ☐
Q5. How would you rate the relevance of the documentation to support you to set up
your experimentation?

(Scale: Excellent / Very good / Good / Fair / Poor / N/A)
Q6. How would you rate the FEDSPEC creation process? ☐ ☐ ☐ ☐ ☐ ☒
Q7. How would you rate the SPARQL Queries creation process? ☐ ☒ ☐ ☐ ☐ ☐
Q8. How would you rate the integration and deployment process? ☐ ☒ ☐ ☐ ☐ ☐
Q9. How would you rate the quality and quantity of available data? ☐ ☐ ☒ ☐ ☐ ☐
Q10. How would you rate the performance of the EEE module? ☐ ☐ ☐ ☐ ☐ ☒
Q11. How would you qualify the quality and relevance of the tools which have been
made available to you? ☒ ☐ ☐ ☐ ☐ ☐
Q12. How would you qualify the quality of the FIESTA-IoT APIs? ☒ ☐ ☐ ☐ ☐ ☐
Q13. How would you qualify the ease of installing the Experiment Data Receiver
(Excellent being very easy and Poor being very hard)? ☐ ☐ ☐ ☐ ☐ ☒

Q14. Do you prefer to move to an API-based solution rather than using the
experiment portal?
Yes. If Yes, please specify the reason:
We opted for the direct use of the API because we think it offers more flexibility for
skilled experimenters (or application developers).

Q15. How much time have you spent in total to integrate the FIESTA-IoT tools in
your experiment to have the first experiment prototype working? (This counts only
the time used to set up the FIESTA-IoT tools, such as the API connectors, EMC and
Data Receiver setup, without counting the effort for visualization tools or the setup of
external tools.)
(Scale: Less than 1 week / Less than 2 weeks / Less than 1 month / Less than 2
months / More than 2 months)
Get Started Level* ☒ ☐ ☐ ☐ ☐
Basic Integration Level ☒ ☐ ☐ ☐ ☐
Full Integration Level ☐ ☒ ☐ ☐ ☐
* “Get Started Level” corresponds to following the instructions in the handbook, “Basic Integration Level”
corresponds to the first integration of your experiment to FIESTA-IoT, and “Full Integration Level” refers to a final
integration after the necessary fine-tuning of your experiment.

(Scale: Excellent / Very good / Good / Fair / Poor)
Availability of the platform ☐ ☒ ☐ ☐ ☐
Performance of the platform ☐ ☐ ☒ ☐ ☐
Interaction with FIESTA-IoT team ☒ ☐ ☐ ☐ ☐

Q17. Please give us all the comments you may have about your experience during
the experimentation:
Thanks to the clear documentation, we only had to follow the instructions in order to
get the information needed from the FIESTA-IoT platform. The toughest part was on
our own court: the implementation of the experiment itself was the really tricky part.

Ending the experiment
Q5. How would you rate the relevance of the documentation to support you to set up
your experimentation?

(Scale: Excellent / Very good / Good / Fair / Poor / N/A)
Q6. How would you rate the FEDSPEC creation process? ☐ ☐ ☐ ☐ ☐
Q7. How would you rate the SPARQL Queries creation process? ☐ ☐ ☐ ☐ ☐
Q8. How would you rate the integration and deployment process? ☐ ☐ ☐ ☐ ☐
Q9. How would you rate the quality and quantity of available data? ☐ ☐ ☐ ☐ ☐
Q10. How would you rate the performance of the EEE module? ☐ ☐ ☐ ☐ ☐
Q11. How would you qualify the quality and relevance of the tools which have been
made available to you? ☐ ☐ ☐ ☐ ☐
Q12. How would you qualify the quality of the FIESTA-IoT APIs? ☐ ☐ ☐ ☐ ☐
Q13. How would you qualify the ease of installing the Experiment Data Receiver
(Excellent being very easy and Poor being very hard)? ☐ ☐ ☐ ☐ ☐

Q14. Do you prefer to move to an API-based solution rather than using the
experiment portal?
No

Q15. How much time have you spent in total to integrate the FIESTA-IoT tools in
your experiment to have the first experiment prototype working? (This counts only
the time used to set up the FIESTA-IoT tools, such as the API connectors, EMC and
Data Receiver setup, without counting the effort for visualization tools or the setup of
external tools.)
(Scale: Less than 1 week / Less than 2 weeks / Less than 1 month / Less than 2
months / More than 2 months)
Get Started Level* ☒ ☐ ☐ ☐ ☐
Basic Integration Level ☒ ☐ ☐ ☐ ☐
Full Integration Level ☒ ☐ ☐ ☐ ☐
* “Get Started Level” corresponds to following the instructions in the handbook, “Basic Integration Level”
corresponds to the first integration of your experiment to FIESTA-IoT, and “Full Integration Level” refers to a final
integration after the necessary fine-tuning of your experiment.

Q17. Please give us all the comments you may have about your experience during
the experimentation:
The easy-to-use solution enabled us to configure our experiment on the FIESTA-IoT
platform with ease.

Ending the experiment