A Guide To The Dimensions Data Approach
APRIL 2019
About Dimensions
Dimensions® is a modern and innovative linked research data infrastructure and tool, re-imagining discovery and access to research: grants, publications, citations, clinical trials, patents and policy documents in one place. The development of Dimensions was triggered by feedback from clients and partners of the Digital Science portfolio companies. As a result, Dimensions has been developed through a dynamic collaboration across Digital Science and six of its portfolio businesses (ReadCube, Altmetric, Figshare, Symplectic, DS Consultancy and ÜberResearch). With each company focused on a different pain point within the research cycle and serving various stakeholders in the research ecosystem, these teams shared their passion for innovation and contributed their unique experiences, opinions and values to Dimensions. Visit www.dimensions.ai
About Digital Science
Digital Science® is a technology company serving the needs of scientific and research communities at key points along the full cycle of research. We invest in, nurture and support innovative businesses and technologies that make all parts of the research process more open, efficient and effective. We believe that together, we can change research for good. Visit www.digital-science.com
Acknowledgements
We are grateful to all contributors and would like to thank our development team for their time and effort in extracting the data to support this report.
This report has been published by Digital Science, which is owned by the Holtzbrinck Publishing Group.
For inquiries in respect of Dimensions, please contact [email protected]; otherwise please write to Digital Science at [email protected] or 625 Massachusetts Avenue, Cambridge, MA, 02139 USA.
DOI: 10.6084/m9.figshare.5783094
Contents
1. A modern linked research data landscape
3. Broadening the view beyond publications - bringing content together from as many places as possible
   How does Dimensions compare to other databases like Google Scholar, PubMed, Scopus or Web of Science?
   Citation counts in different systems and databases - there is no single truth!
   The current content scope and quality is just the starting point
4. Grants - a real glimpse into the future
   Key statistics on the Dimensions grant data
A modern linked research
data landscape
The broader Dimensions team: 100+ development partners and Digital Science
Dimensions was created in response to two significant constraints for Digital Science and its development partners. The first constraint was that existing solutions sought to understand the research landscape solely through the lens of publication and citation data. The second constraint was the way that existing solutions exposed what data they did have. Much of the publications research graph had been locked away in proprietary applications, which constrained how the information could be used, including through a lack of workable APIs. Where proprietary data existed, there were significant data holes, making the data less useful for core use cases.
Making publication and citation data freely available
To address these constraints and to try to stimulate innovation to support research, we worked closely with more than 100 development partners (research organisations and funders) to realise an integrated database covering the entire research process: from funding to research, from the publishing of results through attention, both scholarly and beyond, to commercial application and policy making - consistently linked in multiple dimensions.
Does it support your use case?
Another aspect of supporting the academic community was empowering the community. The current vogue in research evaluation promotes the use of metrics to cope with the vast quantities of material being evaluated. It is clear that a more open data source, compatible with more open publications, more open evaluation frameworks and more open metrics, is needed. Dimensions aims to be a system that helps the academic community to own the formulation and development of metrics that tell the best stories and give the best context to a piece of research.
We will improve it together!
One of the most important aspects of Dimensions is that we are going to develop it further with the research community - any feedback is welcome. Please contact us at [email protected].
Quick facts on Dimensions - the total record count and more
Content type Number of items indexed
Publications 100 million
Grants 4.6 million
Patents 38 million
Clinical Trials 455,000
Policy Documents 422,000
Records with Altmetric attention 10 million
Grand total 153 million
Full text index - enabling deep discovery
Article-level indicators need to be paired with article-level classifications
In existing databases such as Web of Science and Scopus, documents are typically categorized using the journal as a proxy, with a few research categories being assigned at the journal level. This approach has created unintended consequences across research, from content coverage in databases to citation benchmarking practices.
NLP and machine learning are allowing categorisation approaches which take the substance into account
Technology has developed further. The fields of natural language processing, machine learning and artificial intelligence have all made huge advances in recent years. Dimensions has been able to leverage these technologies to solve a very practical problem requiring a different approach: if you want to consistently categorize grants, patents, clinical trials and policy documents, a journal proxy is no longer available. The path we have chosen for Dimensions is to use existing classification systems and a machine-learning-based approach to automatically assign a consistent set of categories to all documents - regardless of the source.
For the Fields of Research (FoR) codes, for example, a machine-learning model is trained on a set of documents whose codes are already known, until that coding can be reproduced by the algorithm. This is then checked against actual codes, and the algorithm is iterated.
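Dimensions has not published its production pipeline, but the shape of the approach can be illustrated. The following minimal sketch (using scikit-learn, with invented training examples) shows the kind of supervised text classification described above: a model is fitted on documents with known category codes and can then score documents of any type.

```python
# Minimal sketch of the supervised-classification idea described above,
# NOT the actual Dimensions pipeline. Assumes a small labelled corpus of
# documents already tagged with Fields of Research (FoR) codes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (title + abstract, FoR code) pairs.
texts = [
    "Deep learning for protein structure prediction",
    "Monetary policy and inflation expectations",
    "CRISPR-based gene editing in crop plants",
    "Bayesian inference for cosmological parameters",
]
labels = ["06 Biological Sciences", "14 Economics",
          "06 Biological Sciences", "02 Physical Sciences"]

# TF-IDF features plus a linear classifier: a common baseline for
# assigning subject categories from document text alone.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# New documents (grants, patents, trials) can be scored with the same
# model, since no journal-level proxy is required.
print(model.predict(["A randomised trial of gene therapy for retinal disease"]))
```

In practice the training corpora, features and models are far larger, and the output is iteratively validated against human-assigned codes, as described above.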
NIH's RCDC and UK HRCS implemented as well
Other classification systems have been implemented in addition to the FoR codes. The choice of these different classification lenses is mainly driven by the needs of research funders, the majority of whom are focused on the biomedical sciences. An analogous machine-learning approach has been used to implement these schemes. Examples include:
• The Research, Condition, and Disease Categorization (RCDC) is a classification scheme used by the US National Institutes of Health (NIH) for the public reporting required by the US Congress. The ÜberResearch team has implemented the technology for RCDC at the NIH and is still supporting it.
• The Health Research Classification System (HRCS) is a classification system used by nearly all UK biomedical funders to classify their portfolios of health and biomedical projects. There are two strands to HRCS - Research Activity Codes (RAC) and Health Categories (HC).
Other classification systems can be implemented
Any other classification system can be generated in a similar way with very little effort. Several additional schemes have been implemented for clients with specific topic classification needs; examples include national-level classification systems and very specific topic-focused systems. If required, it is also possible to categorize documents that are not part of Dimensions. Please reach out to the Dimensions team if you would like to learn more.
Disambiguating institution names - based on GRID
The challenge of affiliation names
Authors of publications (as well as of other research objects such as grants and patents) express their institutional affiliations in non-standard ways. Indeed, most institutions have a few name variants, but for some organizations we found hundreds of name variants. For a data infrastructure like Dimensions it is important to be able to assign documents automatically to a unique identifier that corresponds to a single institution. Furthermore, each institution in that unique identifier list must be well defined according to a policy that helps to quantify what we classify to be an institution, why it has been included and what type of institution we believe it to be. On top of this, there must be useful metadata, such as geolocation information, date of foundation and, most importantly, a persistent identifier.
Digital Science has already started to tackle that challenge - resulting in the
release of the open GRID database, which has grown to cover more than
90,000 institutions, where the data has been curated and each institution
assigned a persistent identifier. GRID is continuously improved and used in
many other systems, for example ORCID (see ORCID blog post).
GRID - an open resource provided by Digital Science
In Dimensions, the GRID system is used to allow us to create a consistent view of an organization within one content source, but also across the different types of content.
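As an illustration only (not the actual disambiguation pipeline), the sketch below matches free-text affiliation strings to GRID identifiers through a normalised variant lookup; the GRID_VARIANTS table is a hypothetical two-institution extract.

```python
# Illustrative sketch: matching free-text affiliations to GRID IDs via a
# normalised name-variant lookup. The real Dimensions pipeline is far more
# sophisticated; the GRID extract below is a hypothetical subset.
import unicodedata

GRID_VARIANTS = {
    "massachusetts institute of technology": "grid.116068.8",
    "mit": "grid.116068.8",
    "eth zurich": "grid.5801.c",
    "swiss federal institute of technology zurich": "grid.5801.c",
}

def normalise(name: str) -> str:
    """Lower-case, strip accents and punctuation to collapse trivial variants."""
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return "".join(c for c in name.lower() if c.isalnum() or c.isspace()).strip()

def grid_id(affiliation: str) -> str | None:
    """Return the GRID ID for an affiliation string, if a variant matches."""
    return GRID_VARIANTS.get(normalise(affiliation))

print(grid_id("M.I.T."))      # grid.116068.8
print(grid_id("ETH Zürich"))  # grid.5801.c
```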
Feedback to improve GRID is appreciated!
GRID is continuously improved as we encounter more data and feed that back into the GRID database. Digital Science is committed to providing GRID on an ongoing basis as an open dataset under a CC0 license to support the research community. GRID is not yet perfect and never will be: research organizations change - some merge, some rename themselves, new institutions appear. Change here is more fluid than you might think! For more information on GRID please visit www.grid.ac, or submit a support request via Dimensions with improvement suggestions.
Person disambiguation across publications, grants, patents
and clinical trials - a challenging task
Extracting references - creating a network across sources
The extraction of the references and links between the different content sources is key to Dimensions. Our aim is to allow a user to gain a far superior understanding of the context of a piece of research by eliminating the walls and separations between isolated data silos. Bringing data together in this way allows a much improved view of the nature of research in a particular field, as well as of the associated research process. The user is then able to draw conclusions and gain new insights which previously would have taken an enormous amount of effort.
References between the different records are either harvested from existing databases (such as Crossref, PubMed Central, Open Citation Data) or extracted directly from the full-text record provided by the content publisher. This is not limited to journal publication references; it also includes acknowledgements and citations from and to books, conference proceedings, patents, grants and clinical trials.
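For a flavour of the extraction-and-matching step, the sketch below resolves a single raw reference string against Crossref's public REST API; the acceptance threshold here is arbitrary, and real matching logic is considerably more careful.

```python
# Sketch: resolving a raw reference string against the public Crossref
# REST API (api.crossref.org), one of the sources named above. The
# matching rule is deliberately naive; production pipelines score hits
# much more carefully to avoid false positives.
import requests

def resolve_reference(ref: str) -> str | None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Naive acceptance rule: take the top hit only if Crossref's
    # relevance score clears an (arbitrary) threshold.
    if items and items[0].get("score", 0) > 60:
        return items[0]["DOI"]
    return None

print(resolve_reference(
    "Smith J. et al. (2015) A survey of citation matching. J. Informetrics 9(2)"))
```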
More than 1.3 billion references between documents
In total, we have extracted more than 1.3 billion direct connections between the document records, with more than 1 billion between publication records alone. This number is continually growing as we integrate more content, as we improve the representation of the content from more and more publishers, and as we work on perfecting our extraction routines.
Dimensions provides:
• A solid citation graph of the kind offered by Scopus or Web of Science;
• Wide coverage and an enhanced experience around discovering the right (or most relevant) research based on indexing the full text, in a similar approach to Google Scholar;
• Grants as an early trend-discovery method, showing the intended rather than the published research;
• A broad, linked and rich view of content relevant to the research process - avoiding a narrow focus on publications and citations, and allowing a deeper understanding of the inputs, outputs and impact and how they are related.
Citation counts in different systems and databases - there is no
single truth!
Citation counts - why do they differ?
One question we are asked when talking about Dimensions is, 'How does the Dimensions citation count compare to Google Scholar, Scopus or Web of Science?' As much as we would like to be able to give a simple answer, it is not possible. First of all, Dimensions and the references that it contains are not directly comparable with other databases, since Dimensions also captures references and links to sources beyond classic publication-based citations. Even if we only examine the publication-based citation count, it is not possible to establish a simple ranking. (The bibliometrics community reached the same conclusion when comparing the Scopus and Web of Science databases following the launch of Scopus in 2004.) There are several reasons why Google Scholar, Scopus, Web of Science and other services may show different citation counts for the same content. Some of the reasons for these disparities include:
• each database covers different sets of databases and content to build its citation graph
• each database may include content from different date ranges (e.g. 1996 to present)
• each database may include different types of content. For example, some sources may only include references from peer-reviewed journals, while others may include references from non-published or not-yet-published works, such as student theses published on a website, or citations from pre-prints or e-prints (where versioning and disambiguation of pre-print and post-print versions of the same paper add yet more complexity)
• extracting references from a paper and uniquely matching them to the reference graph is a challenge which each database solves in different ways. There is no standard, industry-defined approach and, as a result, in some cases references may not properly match, and in other cases false positives may occur
• as algorithms for matching improve and new data sources become available, reference graphs may be updated, resulting in changes to citation counts.
While spot-checking Dimensions records against source data, we found that for some articles we were undercounting citations, while for other publications our counts were notably higher than those of publicly available citation sources. We know that there are some fields where we need to engage with more publishers or more funders for greater coverage. Likewise, we know that there are some geographies where more work is needed to achieve greater patent coverage. As ever, we look for feedback from the community to prioritise our development focus for content integration.
Figure: an illustrative example from PLOS ONE, drawn from the available data.
Given the many variables described above, it is not possible for multiple parties to arrive at a single absolute count. As a result, in practice many researchers treat citation counts as a useful relative metric when comparing content within a single system.
The current content scope and quality is just the starting point
Improving the underlying data is an ongoing effort
It took a large amount of effort and resource to bring all the current sources and content together - and we consider this only to be a starting point:
• Grants are added continuously - every few months new funders and their portfolios become part of the Dimensions data universe
• We are going to add more publication data, new patent offices, new clinical trial registries and publishing organizations of policy documents during the course of 2019
• Pre-prints will be consistently integrated
• We continue to support publishers who wish to work with us to make their content more discoverable in Dimensions.
A joint effort to improve the data - please be as critical as possible!
Most important to us is your input and feedback. We are looking forward to being challenged and to receiving many suggestions from you as to where we can improve the data. We already have a long list of tasks from our development partners and friends, but we can always be better! This is clearly a team effort, and we need you as the users, the research community and the broader Dimensions team!
Grants - a real glimpse into
the future
Grants, a forward-looking data source - neglected for too long
Funded grants are the result of an extensive process in which a researcher or team of researchers describe the research project that they wish to undertake. The aim of their "pitch" is to convince a research funder, through an anonymous peer review panel, that the research problem is interesting, tractable and worthy, and that the team is qualified and capable of achieving the outcomes suggested. This process is even more important since, in most cases, the money being spent is public money and hence must be accounted for in a responsible manner.
Grants are the first manifestation of a research idea in a cogent format that must convince a third party of their value - a little like a beta software release. That position in the research cycle makes grants a very special source for discovery, since they allow analysis of trends and movements in fields by looking at the research that is intended to be carried out in the coming years - a glimpse into the future. For funders, research policy strategists and planners, analysis of the funding landscape allows early intervention and strategy formulation, not only the retrospective identification of fast facts or wrong decisions.
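A toy example of this forward-looking use of grant data: aggregating committed funding by grant start year. The grant records below are invented; in practice they would come from the Dimensions data.

```python
# Toy illustration of the forward-looking analysis described above:
# aggregating committed funding by grant start year. Records are
# hypothetical stand-ins for Dimensions grant data.
from collections import defaultdict

grants = [
    {"title": "Quantum sensing",  "start_year": 2019, "funding_usd": 1_200_000},
    {"title": "Microbiome atlas", "start_year": 2020, "funding_usd": 2_500_000},
    {"title": "Quantum networks", "start_year": 2020, "funding_usd": 900_000},
]

funding_by_year = defaultdict(int)
for grant in grants:
    funding_by_year[grant["start_year"]] += grant["funding_usd"]

# Rising totals in future start years hint at where a field is heading
# before any results are published.
for year in sorted(funding_by_year):
    print(year, f"${funding_by_year[year]:,}")
```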
ÜberResearch aggregated a grant database with $1.5 trillion in funding
ÜberResearch (one of the six businesses in the Digital Science portfolio creating Dimensions) was founded in 2013 to work with research funders on aggregating a large grant database. Its aim was to enable, for the first time, a broad view across national and institutional borders on the resource-input aspects of the research system, and to make this available not just to the largest funders, who have the responsibility to commission custom systems to ensure appropriate reporting to public stakeholders, but also to smaller funders with smaller teams and more limited resources. ÜberResearch's early effort has now become part of the new and broader version of Dimensions, which covers the entire flow from input to academic attention, commercialization, policy formulation and routes to impact.
Grant data provides particular insights, not a complete research funding view
Grants are a difficult content source for several reasons: they do not follow a common metadata schema in the way that publications do, nor do they yet have a persistent identifier such as the DOI, and they are highly dependent on individual national frameworks of research funding. Geographic differences are not trivial. In some countries, the majority of the research funding is given out in competitive project grants, while in other countries there is a skew toward block funding, which will never show up in a funded-grants database. Of course, there are a lot of countries that fall between the ends of this spectrum, with a mix of block funding and project-based funding. For that reason the grant data should not be taken as a complete view of all research-related funding, as we pointed out in a recent report. It covers project-based funding from different types of funders (government, multinational, charities etc.). If you have any questions related to your use case, do not hesitate to reach out to us.
Key statistics on the Dimensions grant data
The following key statistics were captured on April 1, 2019 and are changing on a monthly basis - this means that the values in this document can vary from the actual results in the Dimensions application or API.
Chart: distribution of grants by country - United States, Japan, Canada, Germany, China, United Kingdom, Russia, South Africa, Switzerland, Brazil, Australia, Poland, South Korea, Sweden, France, Czechia, Italy, Belgium, Netherlands, Norway, Other.
Chart: aggregated funding amount of starting grants over time, 2009-2018 (USD billion).
Chart: number of starting grants over time, 2009-2018 (thousands).
Publications, books
and citations
Dimensions and publications / citations - a database,
not a judgement call
Lack of innovation due to data being 'locked up'
With Dimensions, a powerful publication and citation database has been made available to increase access to and usage of metadata for researchers and institutions - which has, for a long time, been an aspiration for Digital Science. An uncompetitive landscape has led to a slower-than-desirable pace of innovation to support researchers in many use cases. Rather than a lively research-led discussion about the needs of researchers, administrators and evaluators, there has been a narrower approach born of historical legacies, both technological and practical, as well as specific drivers from the research policy arena.
Dimensions - not a replication of the usual approach - a different approach
But it was clear that simply replicating existing approaches to create a third (or fourth, or fifth, depending on how you classify and count) abstracting and indexing database would not be in the sector's interest, so we decided to do two things in a fundamentally different way:
• Dimensions should be open to integrating all relevant research objects - in essence, fewer editorial choices over the content to be included (within reason; predatory journals, for example, clearly need to be treated differently)
• Consistent integration and linking of other sources (grants, patents and more), treated on the same basis as publications.
To ensure that users have the tools they need to make the right content-filtering decisions for their use case, we have implemented features that allow the user to limit the results they obtain to certain subsets. The standard filters are specified by pre-defined, curated lists, which can be white lists or black lists (a simple sketch of such list-based filtering follows the examples below). We started our list definition with accepted, openly available lists defined by others in the community, but are looking forward to receiving new suggestions, again from the research community.
DOAJ, ERA list, Norwegian Register and PubMed
• DOAJ list: the Directory of Open Access Journals (DOAJ) is a community-curated online directory that indexes high-quality, open access, peer-reviewed journals. The DOAJ journal list includes over 10,000 journal titles covering all areas of science, technology, medicine, social science and humanities.
• ERA list: the ERA 2015 journal list was designed by the Australian Research Council (ARC) in cooperation with the National Health and Medical Research Council (NHMRC) and the broader research community, with the purpose of supporting Australia's national research evaluation framework, Excellence in Research for Australia (ERA). Included are journals that were eligible for institutions' ERA 2015 submissions. We will include the ERA 2018 list as a filter once it is released.
• Norwegian Register: the Norwegian register, officially the 'Norwegian Register for Scientific Journals, Series and Publishers', is operated jointly by the Norwegian Centre for Research Data (NSD) and the National Board of Scholarly Publishing (NPU). The list shows which scientific publications are recognized in the weighted funding model and includes around 30,000 source titles.
• PubMed list: PubMed is a search engine of the abstracts and references of life science and biomedical publications, mainly sourced from MEDLINE and maintained by the United States National Library of Medicine (NLM) at the National Institutes of Health (NIH). The PubMed filter in Dimensions limits results to publications which have a PubMed identifier (PMID), as used in PubMed (website).
Any idea for an additional 'quality' list? Please get in touch!
These filters are just a starting point and only address specific use cases. We are keen to learn about other general, national or institutional filters that should be considered, as well as different use cases where other lists may be helpful, and we welcome feedback so that we can develop this concept further.
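As a minimal sketch of how such list-based filtering works, the snippet below keeps only publications whose journal ISSN appears in a white list; the ISSN set is a hypothetical stand-in for a curated list such as DOAJ's.

```python
# Sketch of the white-list filtering idea: keep only records whose
# journal ISSN appears in a curated list (here a hypothetical in-memory
# stand-in for, e.g., the DOAJ journal list).
DOAJ_ISSNS = {"1932-6203", "2041-1723"}  # hypothetical subset

publications = [
    {"title": "Paper A", "issn": "1932-6203"},
    {"title": "Paper B", "issn": "0000-0000"},
]

whitelisted = [p for p in publications if p["issn"] in DOAJ_ISSNS]
print([p["title"] for p in whitelisted])  # ['Paper A']

# A black list works the same way, with the membership test inverted.
```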
100 million publication metadata records assembled
The first step was to assemble a spine: an index of uniquely-identified publications containing about 100 million records. The Crossref records associated with a DOI, sourced from the publisher metadata of the 12,442 Crossref members, form a significant core of this spine. This provides the Dimensions database with a very robust metadata backbone, but even with this great resource there are some limitations on metadata completeness - most notably affiliation data for authors.
69 million records enriched - from more than 100 publishers already
This step includes deriving reference/citation data from the full text and mining acknowledgements sections to identify links to funded projects, research funders and clinical trials. This step has been completed for more than 69 million full-text records, some open access but many made available to Digital Science for this purpose. These records are sourced from more than 100 publishers, including some of the largest STM publishers in the world. Searching Dimensions will quickly indicate where we have coverage.
A key part of this data-enhancement step is that we are able to index full-text records. This means that a user can search for any term in a paper - it doesn't have to be in the title or the abstract. In concert with the filtering mechanisms that we've put in place for users, this means that you are increasingly likely to locate the research work that you're looking for.
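For example, a full-text search can be run programmatically with the open-source dimcli client for the Dimensions Search Language. An API key is required, and the query fields follow the public DSL documentation at the time of writing - treat them as indicative rather than guaranteed.

```python
# Indicative sketch: a full-text search through the Dimensions Search
# Language using the open-source dimcli client. Field names are based on
# the public DSL docs and may evolve.
import dimcli

dimcli.login(key="YOUR_API_KEY", endpoint="https://app.dimensions.ai")
dsl = dimcli.Dsl()

# 'for "..."' searches the indexed text, not just titles and abstracts.
result = dsl.query(
    'search publications for "machine learning" '
    'return publications[title+doi+year] limit 5'
)
for pub in result.publications:
    print(pub.get("year"), pub.get("doi"), pub.get("title"))
```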
New content added continuously
New publication data is added as more and more publishers join the effort and make their content more discoverable. Over the last 12 months, we have focused primarily on the large and medium-sized publishers to be ready for the launch of Dimensions. If you are a publisher and want to see your content representation improved in Dimensions - just reach out to us via this form and we will be in touch.
Altmetrics - an immediate and different type of impact
Digital Science was an early supporter of the alternative metrics movement, and Altmetric has played a key part in defining the agenda around altmetrics. Indeed, Altmetric has led the field with a number of innovations, including the colorful Altmetric badges, the score, unique sources like policy documents and university syllabi, and the always popular Altmetric Top 100.
Dimensions includes high-level Altmetric data for each article in the index and displays this on the article details page. In this way, we bring together academic attention (citations), innovation attention (patents) and clinical attention (trials) alongside public and policy engagement, including social media, traditional media, policy attention and the other forms of attention that Altmetric indexes.
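The same high-level attention data can be sampled from Altmetric's public, rate-limited v1 API; the sketch below fetches the summary for one example DOI, and the response fields shown are indicative.

```python
# Sketch: fetching the high-level Altmetric attention summary for one
# DOI from Altmetric's public (rate-limited) v1 endpoint. The fields
# printed are indicative of the data surfaced in Dimensions.
import requests

doi = "10.1038/nature12373"  # example DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
if resp.status_code == 200:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("News mentions:", data.get("cited_by_msm_count"))
    print("Tweets:", data.get("cited_by_tweeters_count"))
else:
    print("No attention data found for", doi)
```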
Open Access, Open Citation Data and Dimensions
Dimensions and open citation data
Dimensions is aligned with the very important Initiative for Open Citations (I4OC). Indeed, Dimensions is an example of what can be done if citation data is more openly available. In building Dimensions, Digital Science had to invest significant effort to make a good enough citation graph so that a good-quality discovery experience could be delivered to users. We hope that I4OC and similar initiatives continue to lower that barrier going forward. This will allow the community to focus on more valuable functionality for users who want to push their research forward faster.
Since we have been asked this question often: Digital Science is not a publisher and is not in the best position to contribute citation data to I4OC - we believe this should come from publishers themselves. From the Dimensions team, both Altmetric and Figshare are members of the initiative.
The following key statistics were captured on April 1, 2019 and are changing
on a daily basis - this means that the values in this document can vary from the
actual results in the Dimensions application or API.
Distribution of publications across disciplines
Chart: distribution of publications by publisher - Elsevier, Springer Nature, Wiley, Taylor & Francis, IEEE, Oxford University Press, SAGE Publications, Wolters Kluwer, JSTOR, Cambridge University Press, American Chemical Society, De Gruyter, BMJ, IOP Publishing, AIP Publishing, Thieme, Royal Society of Chemistry, American Medical Association, American Physical Society, SPIE, Other.
Clinical trials - research
results en route to clinical
application
Clinical trials, aggregated from different registries
To be clear about definitions: a clinical trial is 'any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes'. Interventions include, but are not restricted to, drugs, cells and other biological products, surgical procedures, radiological procedures, devices, behavioural treatments, process-of-care changes, preventive care, etc. (Source: WHO)
We have aggregated trials from several registries, and more will follow in the future. We integrate and map all relevant source data into Dimensions' coherent data model, with filters (e.g. research categories, research organizations or years) applicable across content types.
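A minimal sketch of that mapping step, with a hypothetical registry payload: each source record is normalised into one common shape so that the same filters apply regardless of registry.

```python
# Sketch of the mapping step: normalising registry-specific records into
# one common shape so filters work across content types. The source-side
# field names are hypothetical registry payloads, not a documented schema.
from dataclasses import dataclass

@dataclass
class Trial:
    trial_id: str
    title: str
    year: int | None
    conditions: list[str]

def from_registry(raw: dict) -> Trial:
    """Map one registry-specific record into the common model."""
    return Trial(
        trial_id=raw["nct_id"],
        title=raw["brief_title"],
        year=int(raw["start_date"][:4]) if raw.get("start_date") else None,
        conditions=raw.get("condition", []),
    )

raw = {"nct_id": "NCT00000000", "brief_title": "Example trial",
       "start_date": "2019-04-01", "condition": ["Cancer"]}
print(from_registry(raw))
```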
The following key statistics were captured on April 1, 2019 and are changing
on a daily basis - this means that the values in this document can vary from the
actual results in the Dimensions application or API.
Distribution of clinical trials across disciplines (based on the UK Health Research Classification System, HRCS)
Chart: clinical trials by HRCS health category - Cancer, Cardiovascular, Metabolic and Endocrine, Infection, Mental Health, Oral and Gastrointestinal, Musculoskeletal, Neurological, Respiratory, Reproductive Health and Childbirth, Stroke, Inflammatory and Immune System, Renal and Urogenital, Eye, Skin, Generic Health Relevance, Injuries and Accidents, Blood, Congenital Disorders, Ear, Other.
Chart: clinical trials by country - United States, Japan, United Kingdom, China, Germany, France, Canada, India, Netherlands, Australia, Switzerland, South Korea, Italy, Spain, Belgium, Denmark, Brazil, Israel, Sweden, Taiwan, Other.
Patents - research resulting
in practical and commercial
applications
Patent data - to show the translation of research activities into the commercial space
We started with an initial tranche of patent offices for the launch of Dimensions. We are now in the process of adding more, which will appear in Dimensions during the course of 2019. The focus of the patent data in Dimensions is to provide a downstream view on how research funding is impacting and enabling the commercial protection and potential use of research results.
The following key statistics were captured on April 1, 2019 and are changing on a weekly basis - this means that the values in this document can vary from the actual results in the Dimensions application or API.
Patents 38 million
Patent offices covered 10
Number of links to research organizations (GRID IDs) 37 million
Number of cited patent references 227 million
Number of links to publications 10 million
Number of links to grants 165,000
Number of links to funders 221,000
Distribution of patents across disciplines
Chart: patents by discipline - Engineering; Information and Computing Sciences; Medical and Health Sciences; Chemical Sciences; Biological Sciences; Technology; Physical Sciences; Psychology and Cognitive Sciences; Studies in Human Society; Mathematical Sciences; Language, Communication and Culture; Earth Sciences; History and Archaeology; Agricultural and Veterinary Sciences; Studies in Creative Arts and Writing; Economics; Environmental Sciences; Commerce, Management, Tourism and Services; Built Environment and Design; Law and Legal Studies; Philosophy and Religious Studies; Education.
Chart: patents by country - United States, Japan, Germany, France, United Kingdom, South Korea, Switzerland, Russia, Netherlands, China, Canada, Australia, Italy, Sweden, Taiwan, Finland, India, Austria, Belgium, Israel, Other.
Policy documents - research
resulting in policy and
guidance documents
Policy documents from over 70 publishing organizations
The policy document data in Dimensions is provided by the Digital Science portfolio company Altmetric. It includes policy sources that are designed to change or otherwise influence guidelines, policy or practice. Tracked policy sources include government guidelines, reports and white papers, independent policy institute publications, advisory committees on specific topics, research institutes, and international development organisations. We aim to curate a broad scope of policy sources from organisations around the world, covering topics from climate change to health, transport and economics. Wherever possible we deep-index the full text, allowing us to categorize the record and extract references.
The following key statistics were captured on April 1, 2019 and are changing
on a daily basis - this means that the values in this document can vary from the
actual results in the Dimensions application or API.
Thank You
Thank you for your interest in Dimensions. We look forward to
improving both the tool and the data in cooperation with you and the
research community.
Legal note: while we have tried to ensure the accuracy of this report, it is
subject to change and provided for information only on an "as is" basis, and
is not intended to form part of any legal contract. Any reference to a third
party in this report should not be considered as an endorsement by or of, or
indication of any association with, Dimensions or Digital Science.
www.dimensions.ai digital-science.com