Andrea Splendiani

Basel, Switzerland
5,293 followers · 500+ connections

About

Bridging data science, AI, and semantic technologies to…

Articles by Andrea Splendiani

Activity

Experience

  • IQVIA

    Basel, Switzerland

  • -

    Basel, Switzerland

  • -

    Basel Area, Switzerland

  • -

    Basel, Switzerland

  • -

    London, United Kingdom

  • -

    Harpenden, UK

  • -

    Milan, Paris, London

  • -

    Rennes Area, France

  • -

    Paris Area, France

  • -

    Milan, Italy

  • -

    Milan, Italy

Education

  • Università degli Studi di Milano-Bicocca

    Activities and Societies: ISCB, Bioinformatics Italian Society, International Immunomics Society

    My PhD focused on the application of ontologies and semantic technologies to data management and analysis, with an emphasis on bioinformatics.
    The first part of my PhD focused on data management and was carried out at the Biotechnology and Bioscience department of the University of Milano-Bicocca.
    The second part focused on systems biology, semantic technologies and visualisation and was carried out at the systems biology unit of Institut Pasteur (collaborations with BioPAX and Cytoscape).

    Thesis: Integration of ontologies and high-throughput data in bioinformatics

  • My academic background is in engineering, theoretical computer science and language and compiler design.
    During my thesis work I focused on information systems for biomedical data, specifically functional genomics, metadata (MIAME/MAGE), experimental annotation and analysis.
    Thesis: GCDB: An information system for the study of gene expression (under the supervision of Prof. Carlo Ghezzi, Giancarlo Mauri and Paola Ricciardi-Castagnoli).

Licenses & Certifications

Volunteering

  • SWAT4LS

    Founding Member

    – Present · 16 years 9 months

    Science and Technology

    Devised the content and format of SWAT4HCLS (Semantic Web Applications and Tools for Healthcare and Life Sciences). SWAT4HCLS grew into one of the leading annual worldwide conferences and hackathons for industry and academic professionals in the life sciences and healthcare sectors. It focuses on data interoperability, standards and semantic technologies.

  • DERI: Digital Enterprise Research Institute

    Adjunct Lecturer

    3 years 1 month

    Science and Technology

    Research in the areas of data management, semantic technologies, information extraction and visualisation to solve problems in healthcare and life sciences. Worked on open-source research projects, participated in the annual BioHackathon in Japan and mentored at Google Summer of Code for the National Center for Network Biology (San Diego). Guest editor of the Journal of Biomedical Semantics.

Publications

  • The FAIR Cookbook - the essential resource for and by FAIR doers

    Nature Scientific Data

    The notion that data should be Findable, Accessible, Interoperable and Reusable, according to the FAIR Principles, has become a global norm for good data stewardship and a prerequisite for reproducibility. Nowadays, FAIR guides data policy actions and professional practices in the public and private sectors. Despite such global endorsements, however, the FAIR Principles are aspirational, remaining elusive at best, and intimidating at worst. To address the lack of practical guidance, and help with capability gaps, we developed the FAIR Cookbook, an open, online resource of hands-on recipes for “FAIR doers” in the Life Sciences. Created by researchers and data managers in academia, (bio)pharmaceutical companies and information service industries, the FAIR Cookbook covers the key steps in a FAIRification journey, the levels and indicators of FAIRness, the maturity model, the technologies, the tools and the standards available, as well as the skills required, and the challenges to achieve and improve data FAIRness. Part of the ELIXIR ecosystem, and recommended by funders, the FAIR Cookbook is open to contributions of new recipes.

  • Ontology mapping for semantically enabled applications

    Drug Discovery Today

    In this review, we provide a summary of recent progress in ontology mapping (OM) at a crucial time when biomedical research is under a deluge of an increasing amount and variety of data. This is particularly important for realising the full potential of semantically enabled or enriched applications and for meaningful insights, such as drug discovery, using machine-learning technologies. We discuss challenges and solutions for better ontology mappings, as well as how to select ontologies before their application. In addition, we describe tools and algorithms for ontology mapping, including evaluation of tool capability and quality of mappings. Finally, we outline the requirements for an ontology mapping service (OMS) and the progress being made towards implementation of such sustainable services.

  • Implementation and relevance of FAIR data principles in biopharmaceutical R&D

    Drug Discovery Today

    Biopharmaceutical industry R&D, and indeed other life sciences R&D such as biomedical, environmental, agricultural and food production, is becoming increasingly data-driven and can significantly improve its efficiency and effectiveness by implementing the FAIR (findable, accessible, interoperable, reusable) guiding principles for scientific data management and stewardship. By so doing, the plethora of new and powerful analytical tools such as artificial intelligence and machine learning will be able, automatically and at scale, to access the data from which they learn, and on which they thrive. FAIR is a fundamental enabler for digital transformation.

  • YummyData: providing high-quality open life science data

    Database, Oxford University Press

    Many life science datasets are now available via Linked Data technologies, meaning that they are represented in a common format (the Resource Description Framework), and are accessible via standard APIs (SPARQL endpoints). While this is an important step toward developing an interoperable bioinformatics data landscape, it also creates a new set of obstacles, as it is often difficult for researchers to find the datasets they need. Different providers frequently offer the same datasets, with different levels of support: as well as having more or less up-to-date data, some providers add metadata to describe the content, structures, and ontologies of the stored datasets while others do not. We currently lack a place where researchers can go to easily assess datasets from different providers in terms of metrics such as service stability or metadata richness. We also lack a space for collecting feedback and improving data providers’ awareness of user needs. To address this issue, we have developed YummyData, which consists of two components. One periodically polls a curated list of SPARQL endpoints, monitoring the states of their Linked Data implementations and content. The other presents the information measured for the endpoints and provides a forum for discussion and feedback. YummyData is designed to improve the findability and reusability of life science datasets provided as Linked Data and to foster its adoption. It is freely accessible at http://yummydata.org/. (A minimal endpoint-polling sketch follows this publications list.)

  • Knowledge sharing and collaboration in translational research, and the DC-THERA Directory

    Briefings in Bioinformatics

    Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of 'big data' research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and life-time. DC-THERA is a European 'Network of Excellence' (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, which is an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science.

  • Biomedical semantics in the Semantic Web

    Journal of Biomedical Semantics

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.

  • RDFScape: Semantic Web meets systems biology

    BMC Bioinformatics

    BACKGROUND:

    The recent availability of high-throughput data in molecular biology has increased the need for a formal representation of this knowledge domain. New ontologies are being developed to formalize knowledge, e.g. about the functions of proteins. As the Semantic Web is being introduced into the Life Sciences, the basis for a distributed knowledge-base that can foster biological data analysis is laid. However, there still is a dichotomy, in tools and methodologies, between the use of ontologies in biological investigation, that is, in relation to experimental observations, and their use as a knowledge-base.
    RESULTS:

    RDFScape is a plugin that has been developed to extend a software platform oriented to biological analysis with support for reasoning on ontologies in the Semantic Web framework. We show with this plugin how the use of ontological knowledge in biological analysis can be extended through the use of inference. In particular, we present two examples relating to ontologies representing biological pathways: we demonstrate how these can be abstracted and visualized as interaction networks, and how reasoning on causal dependencies within elements of pathways can be implemented. (An illustrative inference sketch follows this publications list.)
    CONCLUSIONS:

    The use of ontologies for the interpretation of high-throughput biological data can be improved through the use of inference. This allows the use of ontologies not only as annotations, but as a knowledge-base from which new information relevant for specific analysis can be derived.

  • The Genopolis Microarray Database

    BMC Bioinformatics

    The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic cell and macrophage functions and host-parasite interactions.
    The Genopolis Database system allows the community to build an object-based MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users.
    Conclusion
    The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local database and a public repository, where the development of a common coherent annotation is important. In its current implementation, it provides a uniform, coherently annotated dataset on dendritic cell and macrophage differentiation.

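As a concrete illustration of the endpoint monitoring described in the YummyData publication above, the sketch below polls a single SPARQL endpoint, checks that it answers a trivial query, and records the response time. This is not YummyData's own code: the example endpoint URL and the choice of the SPARQLWrapper library are assumptions made for illustration.

```python
# Minimal sketch: poll a SPARQL endpoint and record availability and latency.
# Not YummyData's implementation; the endpoint URL is only an example.
import time

from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

ENDPOINT = "https://sparql.uniprot.org/sparql"  # example endpoint, assumed reachable


def check_endpoint(url: str, timeout: int = 30) -> dict:
    """Run a trivial SELECT probe and time the round trip."""
    sparql = SPARQLWrapper(url)
    sparql.setReturnFormat(JSON)
    sparql.setTimeout(timeout)
    sparql.setQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 1")
    started = time.time()
    try:
        sparql.query().convert()
        return {"endpoint": url, "alive": True, "latency_s": round(time.time() - started, 2)}
    except Exception as error:  # network errors, HTTP errors, timeouts
        return {"endpoint": url, "alive": False, "error": str(error)}


if __name__ == "__main__":
    print(check_endpoint(ENDPOINT))
```

A monitor in the spirit of YummyData would run such probes periodically over a curated list of endpoints and keep the results over time; this sketch covers only a single check.
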
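The RDFScape abstract above rests on using inference over ontologies rather than treating them as static annotation. The toy example below shows that idea with generic Python libraries (rdflib and owlrl); these libraries, the namespace and the class names are assumptions of this sketch, not the tooling of RDFScape itself, which is a plugin for a biological analysis platform.

```python
# Toy illustration of deriving new facts from an ontology by inference.
# Uses rdflib + owlrl (assumed available); not RDFScape's own code.
from rdflib import Graph, Namespace, RDF, RDFS

import owlrl  # pip install owlrl

EX = Namespace("http://example.org/bio#")  # hypothetical namespace for the example

g = Graph()
# A tiny "ontology": glycolysis is a kind of metabolic pathway.
g.add((EX.Glycolysis, RDFS.subClassOf, EX.MetabolicPathway))
# An "observation": pathway_1 is asserted only as an instance of Glycolysis.
g.add((EX.pathway_1, RDF.type, EX.Glycolysis))

# Compute the RDFS deductive closure; this adds the entailed triples to g.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The inferred fact: pathway_1 is also a MetabolicPathway, even though this
# was never stated explicitly.
print((EX.pathway_1, RDF.type, EX.MetabolicPathway) in g)  # True
```

The point mirrors the paper: the classification of pathway_1 as a metabolic pathway is derived by reasoning rather than asserted.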

Courses

  • 1st International School on Advanced BioMedicine and BioInformatics

    -

  • 3rd VIPS Advanced School on Computer Vision, Pattern Recognition and Image Processing (Introduction to Bayesian Inference and Statistical Learning)

    -

  • DASI '06: Bertinoro PhD school on Data and Service Integration

    -

  • First International School on Biology, Computation and Information (Mathematical modeling tools for systems biology)

    -

  • Languages and compiler design

    -

  • M1: leading at the front line (Novartis leadership program)

    -

  • Operations research

    -

  • Software engineering

    -

  • Systems Theory

    -

  • TechMeetups Guru Program (@Cass)

    -

  • Theoretical computer science

    -

  • Web Publishing of Scientific Data and Services

    -

Projects

  • Web Accessible Registries

    – Present

    A prototype to publish code lists as linked data, plus some other nice features. A neat conceptual approach to entities, identifiers, versions and collections. (A minimal modelling sketch follows this projects list.)

  • PHI-base

    – Present

    PHI-base is a manually curated database of genes involved in pathogenicity.

  • BioPAX

    Biological Pathway Exchange (BioPAX) is a language for knowledge representation and integration in bioinformatics and computational biology. It is defined in OWL and serialised as RDF/XML. (A small parsing sketch follows this projects list.)

  • Ondex - Data integration and visualisation

    – Present

    Gathering and managing data from diverse and heterogeneous datasets is central to many systems approaches to biology. Systems biologists need to easily identify and gather the information they require, and then integrate and analyse experimental and reference data sets that can come from myriad databases with a wide variety of formats and access methods.

    The Ondex data integration platform enables data from diverse biological data sets to be linked, integrated and visualised through graph analysis techniques. Ondex uses a rich and flexible core data structure, which can bring together information from structured databases and unstructured sources such as biological sequence data and free text. Ondex also allows users to visualise and analyse the integrated data. Ondex is free and open-source software. (A toy graph-integration sketch follows this projects list.)

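To make "publishing a code list as linked data" concrete, the sketch below models a tiny, versioned code list as a SKOS concept scheme with rdflib and serialises it as Turtle. The vocabulary choice (SKOS plus owl:versionInfo), the base URI and the example codes are assumptions for illustration, not the actual model used by the Web Accessible Registries prototype.

```python
# Hypothetical modelling of a versioned code list as linked data (SKOS),
# serialised as Turtle; not the actual Web Accessible Registries schema.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, SKOS

REG = Namespace("http://example.org/registry/")  # made-up base URI

g = Graph()
g.bind("skos", SKOS)
g.bind("reg", REG)

scheme = REG["country-codes/v2"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))
g.add((scheme, SKOS.prefLabel, Literal("Country codes", lang="en")))
g.add((scheme, OWL.versionInfo, Literal("2.0")))

for code, label in [("CH", "Switzerland"), ("IT", "Italy")]:
    concept = REG[f"country-codes/{code}"]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.notation, Literal(code)))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```
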
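Because BioPAX is defined in OWL and distributed as RDF/XML, any RDF toolkit can read it. The sketch below uses rdflib to list the pathways declared in a BioPAX Level 3 file; the file name is a placeholder, and using rdflib (rather than a dedicated BioPAX library such as Paxtools) is simply the assumption of this example.

```python
# Read a BioPAX Level 3 OWL file (RDF/XML) and list its pathways.
# "pathways.owl" is a placeholder file name; rdflib is an assumed toolkit.
from rdflib import Graph, Namespace, RDF

# Official BioPAX Level 3 namespace.
BP = Namespace("http://www.biopax.org/release/biopax-level3.owl#")

g = Graph()
g.parse("pathways.owl", format="xml")  # BioPAX exports are RDF/XML

for pathway in g.subjects(RDF.type, BP.Pathway):
    # displayName is the human-readable label defined by BioPAX Level 3.
    for name in g.objects(pathway, BP.displayName):
        print(pathway, "->", name)
```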

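The Ondex description above is about linking heterogeneous records into a single graph that can then be analysed and visualised. The toy sketch below does the same at miniature scale with networkx, joining a hypothetical gene annotation table and a pathway membership table on shared gene identifiers; the data and the library choice are illustrative assumptions, not Ondex's own core data structure.

```python
# Toy graph integration: link records from two "sources" by shared gene IDs.
# Illustrative only; Ondex has its own, much richer core data structure.
import networkx as nx

# Source 1: gene annotations (hypothetical records).
genes = {"HK1": "hexokinase 1", "PFKM": "phosphofructokinase, muscle"}
# Source 2: pathway membership (hypothetical records).
pathway_members = {"Glycolysis": ["HK1", "PFKM"]}

g = nx.Graph()

# Add typed nodes from each source.
for gene_id, description in genes.items():
    g.add_node(gene_id, kind="gene", description=description, source="annotation_db")
for pathway, members in pathway_members.items():
    g.add_node(pathway, kind="pathway", source="pathway_db")
    # Link step: shared gene identifiers connect the two sources.
    for gene_id in members:
        g.add_edge(pathway, gene_id, relation="has_member")

# A simple "analysis": how many pathways is each gene connected to?
for node, data in g.nodes(data=True):
    if data["kind"] == "gene":
        print(node, "is in", g.degree(node), "pathway(s)")
```
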
Languages

  • Italian

    Native or bilingual proficiency

  • English

    Full professional proficiency

  • French

    Elementary proficiency

Organizations

  • International Society for Computational Biology

    Member

    – Present
