WWF METT Handbook 2016 FINAL PDF
MANUAL
2016
METT HANDBOOK
A guide to using the Management
Effectiveness Tracking Tool (METT)
Contents METT use | Best practices | METT Q&A | Standards | Conclusions | Addendum | References
Suggested citation
Stolton, S. and N. Dudley. 2016. METT Handbook: A guide to using
the Management Effectiveness Tracking Tool (METT), WWF-UK,
Woking
ISBN: 978-1-5272-0060-9
CONTENTS
ACKNOWLEDGEMENTS
Foreword
Preface
1 EXECUTIVE SUMMARY
2 EXPERIENCES FROM 15 YEARS OF METT USE
2.1. Overview of METT use
2.2. Studies using the METT to understand management effectiveness
2.3. Using the METT to increase effective management
2.4. Global database of METT results
Case study 1: R-METT: Ramsar Convention on Wetlands
6. CONCLUSIONS
6.1. Core findings and conclusions
6.2. Moving forward
Acknowledgements
This handbook draws on the work of WWF and the World Bank regarding the original
development of the Management Effectiveness Tracking Tool (METT); on the work of
many colleagues in WWF, The World Bank, GEF and other institutions on applying the
METT over 20 years, on a wide range of literature and on discussions with a range of
friends and experts in identifying lessons learned and next steps.
Specifically we would like to thank: Glyn Davies (WWF UK) for funding this work and
for his insights into the first draft. For their detailed comments on various drafts thanks
to Neil Burgess (UNEP-World Conservation Monitoring Centre); Marc Hockings (The
University of Queensland, Vice-Chair (Science) IUCN World Commission on Protected
Areas and Senior Fellow, UNEP-World Conservation Monitoring Centre); Matthew
Linkie (Wildlife Conservation Society) and Llewellyn Young (Ramsar Convention
Secretariat – Oceania). Thanks also to K Knights for comments on the draft and data
analysis from the METT database, Naomi Kingston (UNEP-WCMC) and to Lauren Coad
(OUCE, University of Oxford) for additional comments in relation to the data used here;
Fiona Leverington (The University of Queensland) for the contribution and lessons
learned from PNG; Dechen Lham (Wildlife Conservation Division, Department of
Forests and Park Services, Ministry of Agriculture and Forests, Royal Government of
Bhutan) for development and comments on the Bhutan case study; Khalid Pasha (WWF)
for comments on the section on CA|TS and Tony Whitten (Fauna & Flora
International) for comments and suggestions.
We would also like to acknowledge the developers of the GD-PAME and METT databases
and the data they contain, which was gathered together with protected area managers
globally. The data was collated under the auspices of the IUCN WCPA Management
Effectiveness Task Force, working together with Universities of Queensland, Oxford
and Copenhagen, UNEP-WCMC and with financial assistance from various donors,
including WWF and the GEF.
and the first version was field tested in 2001. Since then it has been adopted and adapted
by the Global Environment Facility and many other countries, organisations and
projects, as outlined in this publication.
As the authors note in this Handbook, the uses to which the METT has been put go far
beyond the original intention, which was to have a systematic way of gathering
information on whether protected areas (largely terrestrial ones) were being managed
effectively. This would allow people to challenge themselves on making improvements
over time, and allow some comparison between sites. What it could not do, without
considerable additional information, was convincingly show whether effective
management was leading to improved ecological and social outcomes. Furthermore, the
quality of each METT depends on the knowledge and diligence of the assessors, and the
integration of information from a diverse range of stakeholders.
This latest review has come about through Sue Stolton and Nigel Dudley’s diligent
pursuit of ever improving a popular tool to help protected area managers assess their
progress and make their management more effective. It emerged through the IUCN-UK
Committee, as an extension of a project looking at protected areas in the UK, and as
such is a collaborative effort between WWF, IUCN WCPA and UNEP-WCMC. I hope it
continues to provide even more support to protected area managers in the years ahead.
Glyn Davies
Living Planet Centre, August 2016
PREFACE
It is now fifteen years since the
first edition of the Management
Effectiveness Tracking Tool (METT)
was published, after lengthy and sometimes passionate
discussions about its form and function.
Although originally designed to measure a single time-limited conservation target,
the METT has somewhat to our surprise become the most widely used tool to measure
protected area management effectiveness, already applied many thousand times around
the world. Surprise and consternation: although we are delighted to see so much interest
in addressing management effectiveness of protected areas, we are also aware that the
METT is sometimes being used in ways that we never envisaged.
Usefulness is also closely connected to how well the assessment is carried out. A
manager can sit in their office and fill out the form in a few minutes, but the results will
likely be inaccurate, be lacking justification and will certainly have little buy-in from
other stakeholders. Our experience suggests that a good METT process takes two
to three days and is far more useful if the assessment is evidence-based and a diverse
group of stakeholders has a chance to input into the results.
This review was stimulated by two realisations by the original authors of the METT.
First, that quantity was not necessarily being matched by quality in the METT with
assessors going through the motions rather than doing a professional job (e.g. when
only one person completes the METT with no other protected area staff, stakeholders
etc involved). Usefulness is dramatically reduced as a result. Secondly, even when
assessors are committed to best practice and take the METT seriously, they are
likely to have a string of questions, and no additional advice has been available to them. In
spring of 2016 two of us worked with managers from all the national parks in Bhutan,
rigorously critiquing the METT from the perspective of its application in that unique
and fascinating country: much of that experience is reflected here as well.
The need for greater guidance has been emphasised by other researchers as well.
Carbutt and Goodman (2013) note: “We have noticed that a clear, emphatic and
absolute statement on how to best apply the various assessment tools is lacking, because
most publications address best practice methodology only in terms of ‘guidelines’ or
‘recommendations’”. Coad et al. (2015) also note: “To improve the credibility of protected
area management effectiveness scores, we suggest that standardized, robust operating
guidelines need to be developed and applied…”
The following review is an attempt to provide such advice, using lessons learned to date
drawn from both our own direct experience and from what others have found. This is
certainly not the final word on the subject: one thing we have learned is that a couple of
dozen simple questions are not actually that simple at all. If you use the METT and have
comments, or have made modifications, or find things that do not seem to make sense,
please let us know!
Photo: The METT has been used all over the world in the last 15 years in both terrestrial
and marine protected areas. Monte Leon National Park, Patagonia, Argentina
The METT consists of two main sections: datasheets of key information on the
protected area and an assessment form containing a questionnaire with four
alternative responses to 30 questions, each with an associated score, a data field for
notes and a justification for the answers, and a place to list steps to improve
management if necessary. Various versions of the METT exist, along with many local
modifications. The latest global METT is available here1.
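The scoring structure described above (30 questions, each with four graded responses carrying an associated score) can be sketched as follows. This is an illustrative sketch only: the METT is a questionnaire, not software, and the 0-3 scoring scale, handling of not-applicable questions and percentage calculation shown here are simplified assumptions based on common practice with the tool.

```python
def mett_score(answers):
    """Compute an overall METT score as a percentage of the maximum.

    `answers` maps question number -> score (assumed 0-3), with None for
    questions marked not applicable, which are excluded from the maximum.
    """
    applicable = {q: s for q, s in answers.items() if s is not None}
    if not applicable:
        raise ValueError("No applicable questions answered")
    total = sum(applicable.values())
    maximum = 3 * len(applicable)  # 3 is the highest response score
    return 100.0 * total / maximum

# Example: all 30 questions answered with the second-highest response (2)
example = {q: 2 for q in range(1, 31)}
print(round(mett_score(example), 1))  # 66.7
```

A percentage of the applicable maximum, rather than a raw total, allows comparison between assessments in which different numbers of questions were answered; the score, of course, remains only one output of the METT alongside the comments, justifications and next steps.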
Experience has shown that many users do not apply the METT as effectively as possible,
in particular focusing on the score rather than the list of necessary next steps (a
checklist of how management needs to change). In addition, there is confusion about
interpretation of some of the questions. This handbook aims to improve the efficacy with
which the METT is applied. It includes detailed additional guidance on the application of
the METT and best practices for developing, implementing and using the results of the
METT. Best practices are summarised below.
2. Allow enough time to complete the assessment in full. A good METT cannot
be done in a quick hour; most questions take serious thought. The first METT
is likely to take at least a day, probably two. Subsequent repeat METTS may be
a little quicker.
Verify results
10. Although designed as a self-assessment tool, METT implementation can
involve verification processes; from simple checking of completed METTs by
external assessors to more detailed field verification exercises involving data
collection.
Implement recommendations
11. Completing the METT is only the first step of the assessment; the
implementation process should include adaptive management (e.g. a plan of
action to implement results) and communications process to share results
locally and globally.
In addition to the best practices outlined above, the following general recommendations
are made:
1. Extra questions: there are strong arguments for additional questions on climate
change, transboundary conservation, social processes and a division of the outcome
questions to separate conservation outcomes and cultural/social outcomes.
2. Clearer wording: although constant revisions are simply confusing, there are
several places where repeated applications have identified some ambiguities
remaining in the 2007 version.
3. Capacity building material: practical experience with the METT has shown
that additional tools can be helpful, such as PowerPoint presentations that can
be projected and filled in through discussion and consensus where multiple
stakeholders are involved in completing the METT.
4. A dedicated website: there is a need for a METT website, to include the definitive
version of the assessment tool, translations, associated capacity building and
presentation material and also perhaps a chat room for people to swap experiences.
5. Outcome assessment: several users have used the METT with systems for
assessing outcomes, to provide a more complete assessment, or have provided advice
on how to modify the METT.
7. Data control: a measure of quality control is needed when METTs are completed,
particularly when implemented as part of an NGO, donor or government led project.
Finally, whilst PAME gives general management advice, it does not contain standards.
Furthermore, PAME assesses management against a site’s own goals and objectives and
often involves self-assessment; standards, by contrast, evaluate a site’s management against
peer-reviewed best practices. Two initiatives (Conservation Assured from WWF and
the Green List from IUCN) have developed management standards for protected areas
and are introduced in the Handbook. Both start from a PAME assessment; then apply
standards and an external expert assessment process as to whether those standards
have been reached.
Photo: Management, monitoring and assessment are vital activities for any protected
area. Green turtle tagging and monitoring, Philippines.
1. It is simple and cheap to use (objectives which influenced its initial design and
development) and there are few alternative tools with similar objectives.
2. The conservation outreach of the institutional developers of the METT (WWF and
the World Bank) and the many organisations which have since used /promoted the
METT.
3. Use by the Global Environment Facility (GEF) for monitoring projects in protected
areas (see box 1).
At its most basic, implementation can take little more than a few hours by someone
(e.g. a manager or project officer) with intimate knowledge of the protected area being
assessed and no equipment is required beyond a computer – or even just a pencil and
paper if a hard copy is used.
The fact that the METT was the initiative of a major conservation organisation (WWF)
and a major conservation funder (the World Bank) has undoubtedly been a factor in its
widespread uptake. The World Bank has been using various versions of the METT in
monitoring its projects since 2001. The Global Environment Facility (GEF) made the
METT mandatory for use in all projects in protected areas funded from GEF-3 (2002-
2006) grants onwards. In 2003, WWF started a serious attempt to use the METT in
connection with all its projects involving forest protected areas.
Results from the first assessment (Dudley et al., 2004) were presented to the Seventh
Meeting of the Conference of the Parties (COP-7) to the Convention on Biological
Diversity (CBD) in 2004, and helped to persuade CBD signatories to include the need for
assessment of management effectiveness in the CBD’s Programme of Work on Protected
Areas (see box 1).
Many other institutions have also adopted and/or adapted the METT. Country
adaptations have been made for over 20 organisations and governments (see section
7) including Bhutan, Indonesia, Jamaica, Zambia, Namibia, India, Papua New Guinea,
South Africa etc. Other conservation NGOs such as Conservation International
(Pauquet, 2005), Wilderness Foundation Africa2, Global Wildlife Conservation3, The
Nature Conservancy4, Wildlife Conservation Society (see for example Heffernan et al.,
2004), IUCN5, Zoological Society of London (for example three METTs applied in the
Tsavo ecosystem in 2015), USAID (LESTARI project)6, Western Hemisphere Shorebird
Reserve Network7 (which uses a modified form of the METT) and Space for Elephants
Foundation (SEF, 2012) have also used and adapted the METT, as have other funding
bodies such as the Critical Ecosystem Partnership fund (CEPF, 2012 and Burgess et al.,
2015) and conventions including the Ramsar Convention on Wetlands (Ramsar, 2015).
The World Bank developed an equivalent system for marine protected areas based on
the METT (Staub and Hatziolos, 2004). The basic structure of the METT has also been
used in the development of tools such as the UNDP’s Capacity Development Scorecard8
and Financial Sustainability Scorecard9.
IUCN WCPA: The METT was originally developed from work carried out by the IUCN
WCPA task force on management effectiveness (see section 7.1). The task force went on
to help develop and promote widely PAME assessments in general and the METT in
particular during the early years of its development. Those involved have continued to
implement the METT across the globe.
GEF: The METT is the first area-based tracking tool to become a requirement for
GEF-financed operations. METTs for all protected areas supported by a project are
submitted at three stages (i.e. three times) of implementation: (i) at CEO Endorsement
for full-sized (FSP), or CEO approval for medium-sized projects (MSP), (ii) at project
midterm and (iii) at project completion (Swartzendruber, 2013). At both the project
and portfolio level, the GEF is using the METT as a proxy for biodiversity status
and condition and as a measure of one key contributing factor towards ensuring the
sustainability of a protected area system, i.e., effectively managed individual protected
areas must be considered a cornerstone of a sustainable system, notwithstanding key
aspects of sustainability such as financing, institutional sustainability and capacity,
and ecosystem and species representation that may not be directly assessed at the
system level (Zimsky et al., 2010). The GEF thus makes the assumption that project
interventions leading to improvements in protected area management will have a
positive impact on biodiversity (Coad et al, 2014). The GEF supports this assumption
with evidence from studies such as one carried out in Zambia using the
adapted METTPAZ, which found that increases in METT scores were correlated
with improvements in biodiversity outcomes (Zimsky et al., 2010). The GEF has data
from some 2,440 METTs from 1,924 PAs in 104 countries (GEF, 2015). The main
adaptations of the latest version (Tracking Tool for GEF-6 Biodiversity Projects10 )
include changes to the datasheets in relation to biodiversity objectives and the threat
assessment; and question 30 has been adapted specifically to assess the status of
the biodiversity outcomes stated in the tool’s datasheets. Of note is that the tool only
includes comments and next steps narrative fields and does not ask for justification of
the assessment response.
World Bank: the Bank used the METT for reporting on all its protected area projects
and was a major mover in the wider uptake of protected area assessment by the GEF
and CBD.
WWF: WWF used the METT in over 200 forest protected areas in 37 countries during
2003-4 (Dudley et al., 2004), and again in over 100 protected areas in 2005-6 (Dudley
et al., 2007). The results of the METT helped WWF to identify minimum management
standards for application in its protected area projects and also helped to shape the
work programme and targets for WWF’s global programme (Dudley et al., 2007). More
recently, the METT has been used extensively by WWF and partners as the first stage
in the Conservation Assured | Tiger Standards (CA|TS) development (see section 5.2).
Soon after the METT was first developed, during 2003-2004, WWF carried out two
analyses of METT data, drawing on successive applications in forest protected areas
(Dudley et al, 2004; Dudley et al, 2004a). Analysis of around 200 forest protected
areas suggested that management effectiveness tended to increase with length of
establishment and pinpointed important regional differences, with management at
that stage being particularly weak in Latin America. Key threats were from poaching
and illegal timber extraction, encroachment and over-collection of non-timber forest
products. Strengths and weaknesses of management were highlighted, along with those
aspects of management which correlated with success, as outlined in table 1 below.
Table 1: Results from early analyses of METT application in forest protected areas
Management strengths | Management weaknesses | Correlations with management success
Achieving legal status | Social relations | Enforcement capacity
Design | Budget management | Staff numbers and funding
Boundary demarcation | Monitoring and evaluation | Education and awareness-raising
Resource inventory | Law enforcement | Monitoring and evaluation
Objective setting | – | IUCN category – stricter categories better
Enforcement capacity emerged as the strongest indicator of success (which may reflect
the sites that WWF was working in at the time), but also an area where many protected
areas were failing. Monitoring and evaluation was similarly important but often under-
developed. Although the management objectives (i.e. IUCN protected area management
category) correlated strongly with success, with stricter categories generally being
considered more effective at meeting management goals, this was based on a small
sample of the less strict categories (V and VI). Presence of other designations (World
Heritage, UNESCO Man and the Biosphere or Ramsar) conversely had no statistical
links with performance.
In 2007, another METT analysis was carried out by WWF, drawing on results from
over 330 protected areas in 51 countries, and from assessments carried out in 2004
and 2006 (Dudley et al, 2007). Results closely matched the earlier two studies. As
before, the strongest association between effectiveness and management related to
law enforcement, control of access, resource management, monitoring and evaluation,
maintenance of equipment, budget management and existence of annual work plans;
all elements of a well-regulated and managed reserve. A stricter IUCN category was
again associated with a more effective result while international designations such as
recognition as a natural World Heritage site conversely had little apparent influence on
success. Consumptive biotic use, predominantly poaching, was identified as the most
significant pressure. And once more, results seemed to indicate an increasing trend
towards effectiveness over time.
A later global study focused on the number and distribution of applications of PAME,
and the utility of PAME in relation to the achievement of Aichi Biodiversity Targets
(CBD, 2010), with less emphasis on the results (Coad et al., 2013). More recently,
a major analysis of METT data was undertaken, principally looking at changes in
management effectiveness over time (Geldmann et al, 2015). By the time of this
assessment, some 1,934 METT results were available, including 722 with repeat
data from the same protected area. Analysis confirmed the earlier suggestion that
protected area management effectiveness tends to improve over time, with 69.5 per
cent of those analysed showing an increased overall score over time. Larger and
more threatened protected areas tended to show the greatest improvement, and
those with initially low scores also tended to improve. The authors conclude that the
commonsense assumption that additional effort and resources can lead to improved
management effectiveness is frequently correct.
The GEF also carried out an assessment of METT use in 2015 (GEF, 2015). A total of
2,440 METTs were reviewed from 1,924 protected areas in 104 countries; of these
a subset of 275 protected areas in 75 countries with at least two METT assessments
were used to assess changes in PAME over time. The reliability of the METT as a
monitoring tool was also considered and field assessments were undertaken in seven
countries across three regions. The assessment of results (using only those METTs more
than 50 per cent complete) found the highest individual mean scores were for legal status,
protected area boundaries, and protected area design. The lowest mean scores were
linked to the contributions of commercial tourism to protected area management and
involvement of local communities and indigenous people in protected area decision-
making. When looking at changes over time, the greatest improvements were observed
in the adequacy of management plans, law enforcement, protected area regulations,
resource inventory and protected area objectives; all of which reflect the substantial inputs
of GEF into protected area management.
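The kind of aggregation used in the GEF review above — per-question mean scores, using only assessments more than 50 per cent complete — could be sketched as follows. The data layout (each assessment as a mapping of question number to score, unanswered questions absent) is an assumption for illustration, not the format of the GEF dataset itself.

```python
def question_means(assessments, n_questions=30):
    """Per-question mean scores across many METT assessments.

    `assessments` is a list of dicts mapping question number -> score,
    with unanswered questions simply absent. Only assessments that are
    more than half complete are included, mirroring the >50% filter
    described above.
    """
    usable = [a for a in assessments if len(a) > n_questions / 2]
    means = {}
    for q in range(1, n_questions + 1):
        scores = [a[q] for a in usable if q in a]
        if scores:  # skip questions no usable assessment answered
            means[q] = sum(scores) / len(scores)
    return means
```

Filtering out largely incomplete assessments before averaging avoids the means being dominated by the few questions (such as legal status) that even a cursory assessment tends to answer.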
Finally, in 2015 WWF updated and reviewed the METT assessment results of PAME
in their priority places (a series of areas identified by WWF as having exceptional
ecosystems and habitats). Average PAME scores (where 3 is the highest level of
effectiveness) in WWF priority places ranged from 1.29 to 2.28 with only four places out
of 27 having scores over 2, suggesting most protected areas in their portfolio still needed
to improve management (Stephenson et al., 2015).
Global METT data are not evenly distributed (see section 7.4). The METT was initially
designed, and has been primarily used, to measure conservation funding impact, so that
it has probably been biased towards underperforming protected areas, identified as requiring
additional support (Nolte and Agrawal, 2012; Coad et al., 2014; Stephenson et al., 2015).
As such, claims about the relative effectiveness of protected areas using METT results
must include information on and analysis of factors such as the sample size and location
of the areas being assessed to ensure the context of the results is correctly understood.
1. Does using the METT help increase the management effectiveness of protected areas?
2. Do the METT results correlate with other indicators of protected area effectiveness
in terms of outcomes?
In relation to the first question, a clear strength of the METT is that it allows for
progress to be measured over time in relation to specific management issues (Higgins-
Zogib and MacKinnon, 2006). If the METT is to have a role in increasing PAME and
helping countries reach the Aichi Biodiversity Targets (see Stephenson et al., 2015),
specifically Target 11, which calls for conservation “through effectively and equitably managed,
ecologically representative and well-connected systems of protected areas” (CBD, 2010),
this implies that the METT findings are reflected in subsequent management decisions
(e.g. through adaptation, funding or action plans). This positive relationship is most
evident in regional/jurisdictional use of the METT, with examples provided in a range of
reports (see section 7.3).
The second question relates to the validity of the METT results and the relationship of
those results to conservation actions. As noted, the METT does not focus on outcome
assessments but rather whether the core components of effective management are in
place to achieve conservation. As Coad et al. (2015) note: “It is important to understand
the causes of success or failure of management: without such an analysis, attempts to
improve performance may be ineffective. The rationale for PAME, while focused on
facilitating effective management rather than building a scientific evidence base, is
therefore, in part, to understand the impacts of protected area management”. The METT
can thus be a useful contributor to a range of datasets, rather than providing the sole
dataset, to help practitioners assess conservation outcomes (see for example Forrest et
al., 2011 and Henschel et al., 2014).
The most detailed paper on impact evaluation in protected areas was published
by Coad et al. in 2015. This looked at the impact of protected area management on
biodiversity outcomes. It used the whole dataset of PAME results held in the Global
Database for Protected Area Management Effectiveness (see section 2.4), which at
the time held almost 18,000 PAME assessments and in addition assessed the peer-
reviewed literature on how PAME data had been used in impact evaluation. The authors
found that the paucity of data from appropriate counterfactuals (i.e. a “counter-to-fact
conditional” such as the status of an area if it had not been declared a protected area, or
if certain management activities had not happened) means that the PAME data are not
ideally suited to the needs of scientific impact assessment. However they concluded
that: “When suitably combined with independent measures of PA impact that have
employed appropriate counterfactual methodologies, PAME data can help increase our
understanding of the impact of aspects of PA management on conservation outcomes”
(Coad et al., 2015).
Overriding both the above questions is the need to be confident that the METT score
does indeed provide a useful reflection of management realities. In the research carried
out by Geldmann et al (2015), which focused on 722 sites that had completed at least two
METT assessments, the authors specifically addressed the criticism that METT scores
are not an accurate reflection of reality on the ground. They note that in general most
repeated METT assessments produce scores that suggest improvement in management
over time, as would be expected if increased METT scores were indicative of real
improvements, but some 30 per cent experienced no change, or even declines, in overall
scores. They conclude that this “is a considerable proportion had there been systematic
manipulation of scores”. The authors noted that although this: “does not represent
definitive causal evidence that scores are not manipulated, it does suggest that at least
some of the observed changes can be attributable to actual changes in management
effectiveness on the ground”.
There is little evidence that protected area staff routinely inflate scores to make
themselves look better although trends can be observed. Carbutt and Goodman (2013)
assessed use of the METT in South Africa. They noted that field staff members tend to
be so closely involved with day-to-day activities that they lose objectivity, and tend to be
too negative and score low. Senior management come with a more strategic viewpoint
and, in the absence of the day-to-day realities, tend to score too high. Hence they stress
the need to encourage a range of viewpoints and opinions and to facilitate dialogue until
a consensus score is reached. Similarly Zimsky et al. (2010) found that when completed
using a rigorous process in Zambia, the METT was assessed as a suitable performance
metric for PAME, backing up the findings of WWF’s analysis of METT results in 2004
and 2006 (Dudley et al, 2007).
tools and report the results into the global database on management effectiveness
maintained by the World Conservation Monitoring Centre of the United Nations
Environment Programme (UNEP WCMC)”11.
There is also a specific METT database which contains most of the known METT
assessments conducted by the major users (GEF, WWF, CEPF) and assorted other
contributors, which is currently managed in a temporary capacity by a core group of
researchers connected to UNEP-WCMC, the University of Oxford, the University of
Copenhagen and the consultancy Protected Area Solutions. The data and structure of
the database are in the process of being error checked, made user-friendly and more
intuitive. Many organisations have provided METT data and funds for data entry over a
period of years, mainly in the form of short-term projects. At present, there is no long-
term funding in place to maintain the database, although UNEP-WCMC have committed
to host the METT and GD-PAME datasets and to link them to the World Database on
Protected Areas (WDPA) so that they have an institutional home, and will endeavour to
make the data available through the online portal protectedplanet.net, providing that
the data providers have given consent.
The centralised database is the most efficient way to maximise the utility of the compiled
METT data for the widest audience. The crucial next step to ensure that data from
METT assessments are compiled, checked and available for management and research
purposes is to solidify the long-term plan with the consent of data providers and secure
long-term funding.
Contributors wishing to add their data to the database must be aware that the current
hosting and management context is not permanent and is highly likely to change in
the near future. Further to this, the paucity of continued funding for the upkeep and
development of the database has meant that the procedure for adding data is not fixed
and needs to be adapted for individual project circumstances. Initial contact for a data
entry and/or analysis project should be made to the Protected Areas Programme at
UNEP-WCMC.
To enable a cost-effective, swift and efficient data entry process the following
suggestions should be taken on board once the project has been agreed:
1. Provide an “assessment list” including the protected area name, country, date of
assessment and WDPA ID. A checklist such as this is a basic safeguard for ensuring
that all the data has been provided, and that all data will be entered correctly.
2. Organise data into protected areas folders and country folders, count how many
assessments there are and identify what version of the METT has been used (e.g.
2002 version (METT 1) or 2007 version (METT 3), variations or modifications,
etc), and include this information in the “assessment list”. Also check carefully
for duplicate files and remove them. If the data entry team has to trawl through
hundreds of files just to work out what is there this will add days or weeks to the
project, increasing the cost immensely.
3. Be aware that translations will add time to the data entry process. Assessments
in English are straightforward, and it may be worth considering translating into
English before passing over the data, depending on the language. Non-Roman script
and non-Romanized languages are the most difficult to process as the requisite
skills are less likely to be present within the team (e.g. Russian, Vietnamese, Greek,
Chinese).
4. There is a standard process developed for adding results from the 2002 version
(METT 1) and 2007 version (METT 3), and the database has a limited capacity for
modified versions and variations. If the standard questions have been modified or
additional questions have been added, only the scores for questions that match the
standard METT 1 and METT 3 will be entered.
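As an illustration of point 1, a compiled "assessment list" can be checked automatically before handover. The sketch below is a minimal example only; the field names (protected_area, wdpa_id, etc.) are illustrative assumptions, not a prescribed METT format.

```python
from collections import Counter

# Illustrative field names; the METT itself does not prescribe a file format.
REQUIRED_FIELDS = ["protected_area", "country", "assessment_date",
                   "wdpa_id", "mett_version"]

def check_assessment_list(rows):
    """Return a list of problems found in an assessment list.

    `rows` is a list of dicts, one per assessment. Flags rows with
    missing fields, and repeated (WDPA ID, date) pairs, which usually
    indicate duplicate files that should be removed before handover.
    """
    problems = []
    for i, row in enumerate(rows, start=1):
        missing = [f for f in REQUIRED_FIELDS if not str(row.get(f, "")).strip()]
        if missing:
            problems.append("row %d: missing %s" % (i, ", ".join(missing)))
    counts = Counter((r.get("wdpa_id"), r.get("assessment_date")) for r in rows)
    for (wdpa, date), n in counts.items():
        if n > 1:
            problems.append("duplicate: WDPA %s on %s (%d files)" % (wdpa, date, n))
    return problems
```

Run against the compiled list, an empty result means the checklist is complete and free of obvious duplicates.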
Case study 1
R-METT: Ramsar Convention on Wetlands
Llewellyn Young
After a period of discussion, review and field testing by the Ramsar Convention, the
Ramsar Site Management Effectiveness Tracking Tool (R-METT) was adopted at the
12th Meeting of the Conference of the Parties to the Convention in 2015 (Ramsar COP12
Resolution XII.15). The decision encourages Contracting Parties that do not already
have effective mechanisms in place to consider using the R-METT.
The R-METT is based on the 2007 version of the METT with some adaptations specific
to the needs of the Convention and wetlands. The adaptations are:
• Data Sheet 1b: Identifying and describing values from the Ecological
Character Description and the Ramsar Information Sheet. This provides
information on the ecological character of the site including the ecosystem services
that it provides, and the criteria under which the site qualifies as a Wetland of
International Importance.
i. Part of a jurisdictional (e.g. protected area system, category or biome type) approach
to PAME usually instituted by the protected area agency (e.g. Bhutan, Indonesia,
Jamaica, Namibia, South Africa, Zambia etc) or type of protected area (e.g. Ramsar,
marine protected areas)
ii. Part of an NGO-led project (e.g. WWF and a range of other NGOs, see section 7.1)
iii. For monitoring large-scale funding impacts (e.g. GEF, World Bank, CEPF).
i. Highlighting management practices that are failing to achieve desired results and the
solutions to adequately address these.
The second and third types of use are often a result of the METT being used as a
performance indicator by conservation organisations and donors. This may encourage
funding recipients to deliver overly positive self-assessments at the end of a project
(Coad et al., 2014). As Carbutt and Goodman (2013) note: “Management effectiveness
assessments should not be seen merely as a ‘paper exercise’ to meet reporting
obligations. Rather, they should be undertaken objectively and with sober judgement
and diligence to ensure that the effectiveness score achieved represents a realistic
picture of management practices and processes, in the absence of hard quantitative
data”. Thus where assessments are conducted as part of donor funding requirements,
donors should insist on procedural standards being met and provide specific funding for
assessments within project budgets (Coad et al., 2014), making the use of the METT a
useful tool rather than just a reporting task (Zimsky et al., 2011).
The WCPA has reviewed the different processes to undertake PAME, and assessed their
pros and cons (Hockings et al., 2006 and Hockings et al., 2015). Best practices specific
to the METT are outlined (in the boxes) and discussed below.
Best practices:
1. Plan the implementation process. Review the METT before
undertaking the assessment and assess the information
available to complete it. Then think about capacity and pre-
assessment training needs, adaptation, timing, scope and scale,
verification, etc.
The METT is only useful if done properly, and the quality and objectivity of the
assessment process should be considered if the results are to be used in site, national or
international reporting (Knights et al., 2014). A little time spent collating evidence and
planning implementation can ensure the validity of results.
Before even starting to plan implementation, managers and others should review the
content of the METT, work out what evidence is available relevant to each indicator and
then assemble this evidence to have it available during the assessment discussions.
What follows here is a range of process-orientated practices which should be
considered before completing the assessment. Although the METT is intended to be a
rapid and cost-effective tool, the time allotted to undertake the assessment should
allow for thorough deliberation of the results (Coad et al., 2014).
Best practices:
3. Complete all the METT including all questions on the datasheets
and narrative sections related to the multiple choice questions. The
next steps section is essential as the steps identified create a quick
checklist of needed actions.
The current version of the METT used by WWF (Stolton et al., 2007) is a relatively short
document with a minimum of essential guidance. Those in charge of implementation
should read and ensure this simple guidance is followed (as noted above pre-assessment
training may be needed to explain how to implement the METT) and, where a project
manager exists, a few simple checks can be made to assess quality of completed results
including:
• Number of people involved (data sheet 1), where possible assessments should be
carried out with a range of stakeholders and rightsholders, including protected area
managers, local government, partner NGOs, local community representatives etc
• Quality of completion of the two narrative boxes accompanying each question in the
multiple choice questionnaire
• Evidence of use of the results to develop a plan of action to address areas of weakness
in management
In particular the space provided for the narrative (comments/justification and next
steps) is a vital component of the METT; although it is one that is often missed. Because
of the dominance of input and process questions in the METT, if the outcome question
and additional points are completed without sufficient detail to back up the claims
made, then the ability for the METT to serve as a tool to assess biodiversity outcomes is
even more seriously limited. Zimsky et al. (2010) note that: “the METT fails to require
those who fill out the form to justify outcome scores with concrete data of biodiversity
status, threat reduction”; however the failure here is perhaps more to do with the
lack of oversight in completing the METT and lack of guidance (e.g. training of those
undertaking or overseeing the assessment) to complete the METT properly.
PAME tools are increasingly being implemented using web-based questionnaires (e.g.
UNESCO’s periodic reporting format for World Heritage sites); if such a tool is developed
for METT implementation one simple way to help ensure all elements are completed is
not to let users continue or submit an assessment unless all fields have been completed.
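If a web-based METT form were built along these lines, the submission gate itself is simple to implement. This is a hedged sketch assuming the submitted form is represented as a flat mapping of field names to answers; no particular web framework is implied.

```python
def unanswered_fields(form):
    """Return the names of fields still unanswered; submission should
    only be enabled when this list is empty. None and blank strings
    count as unanswered."""
    return [name for name, value in form.items()
            if value is None or (isinstance(value, str) and not value.strip())]

def can_submit(form):
    # The "do not let users submit" rule: every field must be answered.
    return not unanswered_fields(form)
```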
PAME systems, such as the METT, which focus on collecting qualitative rather than
quantitative data can be subject to criticism. However, the reality is that in many cases,
expert-based knowledge is the only source for making such assessments. A study in
Australia, which has one of the world’s better developed and researched protected
area networks, found that in 25 per cent of management effectiveness assessments,
practitioners had insufficient evidence to assess their management performance and
even where sufficient information was available 60 per cent of assessments relied solely
on experience to judge the success of management approaches (Cook et al., 2009). Thus
although quantitative data should be used wherever possible to justify the assessments
made in the METT, qualitative data will in many cases inevitably form the basis for
much of the reporting. In these cases additional steps related to who undertakes the
assessment (see 3.2.5) and processes of verification (see 3.2.7) are particularly important
to ensure that an accurate and valid assessment is made.
Best practices:
5. The METT is a generic tool designed for global use; thus it is
unlikely to fit any one protected area (or system, type etc)
perfectly. Adaptation is encouraged; ideally by keeping the basic
format of the METT the same and adding to, rather than changing,
the wording of the METT (e.g. providing additional advice on
interpretation for local conditions or by additional questions).
Because the METT is used globally there are advantages in ensuring that the core
questions of the METT are always included in an assessment, to help facilitate
comparison between assessments made in different parts of the world, or different
protected areas within a single network. Adaptation is still possible however and can
take two forms:
ii. Adding detailed instructions to the existing questionnaire, in order to relate the
METT better to local circumstances.
Hockings et al., 2015 state that: “The more clearly the categories [i.e. responses to
PAME questions] are defined for local circumstances, the more accurate and consistent
will be the responses”. They go on to discuss how the use of subjective terms – such as
‘adequate’, ‘sufficient’ and ‘appropriate’ – have been deliberately chosen in tools such as
the METT to ensure that assessment categories can be applied to protected areas
in very different contexts. It is therefore important to ensure that definitions of what
is meant by general terms such as ‘adequate’, ‘sufficient’, ‘appropriate’ etc in a specific
country, portfolio or jurisdictional context are clear to all assessors, to avoid errors
derived from using poorly defined language.
The 2015 evaluation of the impact of GEF investments recommended that the GEF
supports countries in adapting the METT to make it more appropriate to their capacities
and information needs, noting this: “will help build country capacities in monitoring
parameters that they find useful for improving biodiversity conservation management
within their specific context, while still providing key information that can be compared
and analyzed at a global level” (GEF, 2015).
© FIONA LEVERINGTON: Marc Hockings presenting at the training session for METT implementation in Indonesia.

The development of the Bhutan METT+, for example, included a fairly substantial
refinement of the threat assessment and the addition of notes where specific multiple
choice questions needed more detail, along with the addition of a number of extra
questions (Dudley et al., 2016). The adaptation process was managed in two workshops
with managers and staff of protected areas, staff and experts from the ministry which
oversees protected areas, and was facilitated by two of the original developers of the METT
(Wildlife Conservation Division and Equilibrium Research, 2015). Other versions of
the METT which have been adapted with guidance for local implementation include
the Carpathian Countries Protected Areas Management Effectiveness Tracking Tool
(CCPAMETT), see for example the version from Poland (Pap, 2012); the Management
Effectiveness Tracking Tool for Protected Areas managed by the Zambia Wildlife
Authority (METTPAZ) (Mwima, 2007); South Africa (Cowan et al., 2010) and Indonesia
(Kementerian Lingkungan Hidup dan Kehutanan, 2015).
The first version of the METT was translated into multiple languages (at least seven
and probably many more); however, as there is no central repository of METT versions,
reports or advice, most countries have a new translation made when using the METT.
Best practices:
6. The METT is designed to track progress over time. Sites/networks
planning to implement the METT should thus aim to repeat the
assessments every few years; ideally the METT should be an
automatic part of annual planning.
Given the central role that protected areas play in conservation strategies, assessment
of their effectiveness should not be restricted to time-limited projects but rather
considered to be an integral part of everyday management. The relative simplicity of
the METT means that it can easily be used annually and the results integrated into
management and/or project planning. The METT was designed for repeated use to show
progress and users (e.g. Heffernan et al., 2004; Knights et al., 2014) have noted the true
benefit of the METT will largely be realized when future reviews are conducted and can
report on significant changes in management practices or local conditions.
From the data collected in the METT database, 90 countries have used the METT more
than once in at least one protected area (see section 7.2). Thus nearly half (almost 2
million km²) of the area where the METT has been implemented has seen more than one
assessment. However, given the use of the METT by the GEF in large-scale projects in
protected areas, this is not particularly surprising.
Best practices:
7. The implementation of the METT should wherever possible include
a wide range of rightsholders and stakeholders to aid insight in the
assessment results; including people outside the protected area,
such as local communities, will bring richer insights.
Although designed to be a self-assessment tool, the intent of the METT was to involve a
range of stakeholders in the assessment process. Although datasheet 1 of the METT includes
details of who has been involved in the assessment this information is often not completed
and thus few of the METTs collected on the METT database include this information. The
METT guidance notes state that implementation is best achieved through discussion and
consensus building with protected area, project or other agency/expert staff and “where
possible additional external experts, local community leaders or others with knowledge
and interest in the area and its management should also be involved” (Stolton et al., 2007,
pg 6). However this wide-ranging consultation process has not always been a feature of
implementation and as Coad et al. (2014) note “where funding for PAME assessments is
not ring-fenced within project budgets, PAME assessments may be conducted rapidly
with the minimum number of participants, reducing their robustness”.
The protected area manager/s should be actively involved in the assessment. As Cook
and Hockings (2011) state: “involving protected area managers in the evaluation process
demonstrates the importance of setting clear objectives, which will ultimately benefit
the day-to-day management of the protected area” and by being involved in the assessment
“the evaluation data are more likely to be used to improve management”. Research has
shown, however, that protected area managers on the whole are well placed to accurately
assess key management issues (Cook et al., 2014) and bias in METT responses, even
when linked to large-scale funding such as that provided by the GEF, is not a major issue
when completed as part of a participatory process (Zimsky et al., 2010).
Group discussions have been shown to result in better PAME results because discussion
can stimulate additional recollections from other members of the group (Cook et al.,
2014). In Zambia, where the METT was completed with peer review and full stakeholder
participation – including protected area managers, private sector in the form of tour and
lodge operations, and local communities living in the Game Management Areas (GMAs)
– the scores had more buy-in and were more accurate as more debate and discussion had
been undertaken before a score was decided upon. The METT thus serves not only as a
performance metric but also as a means to foster communication and participation in
the management of the protected area or GMA (Zimsky et al., 2010). A review of METT
use by the GEF found that higher mean METT scores were correlated with the presence
of protected area managers and staff; whereas scores were found to be lower by as much
as 0.1 (on a scale of 0 to 1) when community members, NGOs and external experts were
present (GEF, 2015). As a result of this, the GEF database on METT results now collects
data on the number of people involved. Data from over 800 assessments shows that
although some assessments are still only completed by one person, one site assessment
involved 70 people and the average number of people involved is five.
The METT datasheets allow the type of stakeholders involved to be recorded (e.g. protected
area staff, local stakeholders, NGO staff etc), but again these simple check boxes are
rarely completed, so it is hard to know who has been involved in implementing
the METT. From the results of the 800 or so assessments collected by the GEF
it is clear that wider stakeholder participation in the METT is very rare, which must
certainly impact on the rigour of the results collected globally.
Best practices:
8. Although designed as a simple tool, implementing the METT may
be the first time protected area staff and other rightsholders and
stakeholders have been involved in assessing PAME. Thus some
capacity building is advisable so that all participants understand
PAME.
One challenge with capacity building, noted by Cook et al. (2014) when reviewing PAME
assessments (not using the METT) in Australia, is that workshops, training sessions
and written guidelines were not sufficient to prevent discrepancies when eliciting expert
knowledge. As this is the only research that has been carried out to specifically assess
the quality of PAME inputs from protected area managers it is hard to know if this is a
country/PAME tool specific problem (although it is likely that similar issues will apply
more broadly across the suite of PAME assessment systems), and thus more research on
this issue would be useful.
Specific capacity building material may also be useful, such as provision of PowerPoint
slides of each question, which can be projected and filled in collaboratively if several
different stakeholders are involved.
© EQUILIBRIUM RESEARCH: Meetings with park staff and local community representatives during a verification field visit to Jigme Singye Wangchuck National Park, Tingtibi Range Office, as part of the Bhutan METT+ implementation.

3.2.7: Verify results

Best practices:
10. Although designed as a self-assessment tool, METT implementation
can involve verification processes; from simple checking of
completed METTs to more detailed field verification exercises.
As noted above, repeat assessments are intended to show change in management
over time. However reporting change may also be influenced by the desire of staff to
show that their sites and management have improved; this may particularly be the
case when METT results are linked to funding – as is the case with the GEF. One of
the main criticisms of the METT is that it relies on purely subjective responses by the
management agency and partners to questions, with no field verification (e.g. Johns,
2012), and that the scoring system can be subject to one-sided opinions and perspectives in
the absence of peer review, thereby introducing subjectivity and bias (e.g. Carbutt and
Goodman, 2013). Projects to build capacity might be tempted to score themselves low
to start with and progressively higher over time: such manipulation definitely does
sometimes occur. Employing external experts to participate in the evaluation process
is increasingly being used, and recommended, in a range of PAME processes (Cook
and Hockings, 2011). Some independent auditing can therefore be valuable when
implementing METT projects.
There are many different options for verifying METT results, including:
• Desk study verification: Either getting experts who are familiar with the site to
peer review the results, or undertaking a short desk study to validate the assessment
results, can be a relatively quick and cheap verification process. The 2007 analysis
by WWF included the use of the METT in repeat assessments where management
Best practices:
11. Completing the METT is only the first step of the assessment; the
implementation process should include adaptive management (e.g.
a plan of action to implement results) and a communications process
to share results locally and globally.
The METT should not be viewed as an academic exercise but rather as an aid to good
conservation planning and management. Thus the METT score should not be seen
as a “pass” or “fail” but as an indication of the level of effective management. Many
METT studies (see section 7.2) report on the assessment in terms of the six elements
of the WCPA Framework as recommended in the METT (e.g. see Inciong et al., 2013;
Mwima, 2007, etc), helping highlight specific areas of management weakness, and
thus providing a better indicator of effectiveness than an overall score. However, very
few include specific action plans, let alone with details such as clarity about timeline
of action, responsibility, budget etc, which will ensure the results of the METT are
implemented. The most likely reason for this is that the next steps section of the METT
has not been adequately filled in (see 3.2.2.) and thus turning the METT into an adaptive
management planning tool is difficult. One effective use of the score in some
countries (e.g. Indonesia) is to translate the scores into actionable outputs, i.e. identify
activities to improve low-scoring questions and set targets for improvement.
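The two uses described above, summarising results by WCPA Framework element and turning low scores into targets, can be sketched as follows. The question-to-element mapping shown is an invented excerpt for illustration only; the real mapping is given in the METT documentation.

```python
# Invented excerpt of a question-to-element mapping, for illustration only;
# consult the METT documentation for the actual assignment of questions.
ELEMENTS = {
    "context": [1, 2],
    "planning": [4, 5, 7],
    "inputs": [13, 14, 15],
    "process": [11, 17, 24],
    "outputs": [25, 27],
    "outcomes": [30],
}

def element_scores(scores):
    """Mean score (0-3) for each WCPA Framework element; questions
    marked not applicable (None) are skipped."""
    out = {}
    for element, questions in ELEMENTS.items():
        vals = [scores[q] for q in questions if scores.get(q) is not None]
        out[element] = round(sum(vals) / len(vals), 2) if vals else None
    return out

def improvement_targets(scores, threshold=2):
    """Questions scoring below `threshold` become candidate actions
    for the next steps plan."""
    return sorted(q for q, s in scores.items() if s is not None and s < threshold)
```

Reporting the per-element means alongside the list of low-scoring questions gives a better picture of management weakness than a single overall score.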
Communicating the results of the METT is also important – to all those involved as well
as to protected area management authorities, funders etc. Section 7.2 details several
of the reports and papers that have been developed about the implementation of the
METT. Many of these have been project reports and analyses by users and most have
remained in the grey literature, although METT results have been included in peer-
reviewed studies of global data sets. Communication is important for those who have
been involved in the assessment, so they can see if and how results have been used; to
managers of protected areas so they can react to the proposals and more generally to
politicians and civil society, to show how protected areas are performing.
Case study 2
Bhutan METT+
Authors: Sue Stolton, Nigel Dudley, Sonam Wangchuk, Dechen Lham and Shubash Lohani
© EQUILIBRIUM RESEARCH: Jigme Dorji National Park, Bhutan.

Bhutan is a land-locked, mountainous country with a small population and a strong
commitment to sustainable development. It has set aside over half the country into
protected areas, mainly but not exclusively in the high mountain areas. Bhutan has rich
wildlife, including viable populations of tigers and many endemic species.
The METT was applied to all ten protected areas in Bhutan and the Royal Botanic Park,
as a key stage in developing a State of the Parks report for the country and as part of an
awareness raising programme on the Conservation Assured | Tiger Standards (CA|TS,
see section 5.2). While the METT was used as the core of the assessment, considerable
modifications were made in association with the Bhutan government and protected
area managers and staff. The Bhutan Management Effectiveness Tracking Tool
Plus (Bhutan METT +) was developed at training workshops organized by the Bhutan
Wildlife Conservation Department (WCD), in Lobesa, Punakha in 2015 and at the Royal
Botanic Park Lamperi in 2016. Representatives from the WCD, 10 protected areas and
the Royal Botanic Park and the core team of the Department of Forest and Parks Services
worked with Equilibrium Research to develop the recommendations that led to the
design of the Bhutan METT + in 2016.
The basic structure of the METT was not changed, to allow the results to feed into the
global database. Adaptations took four forms:
1. Adaptation of the threats assessment to allow for current and potential threats and
issues (which could become threats if not effectively managed) to be identified.
2. New tools added to the METT to provide a more detailed assessment of: (i) threats,
looking at spatial and temporal issues of threats considered of medium or high
significance (current or potential) and suggested management actions to mitigate
the threats; (ii) an assessment sheet of national context looking at the extent to
which current policy supported protected area management; and (iii) outcomes, where
baseline data will be collated to develop a set of headline indicators for monitoring
biodiversity in Bhutan; once these are agreed, work will start on developing detailed
indicators and monitoring systems and protocols for the headline indicators.

© EQUILIBRIUM RESEARCH: Developing and implementing the Bhutan METT+.
4. Additional questions added to the METT relating to e.g., climate change and
transboundary influences along with some modifications to existing METT questions
and to the background data sheet.
A “Rosetta Stone” version of the Bhutan METT + was produced which shows clearly the
changes and additions to the METT. This version has been edited and revised to produce
the final version of the Bhutan METT+ 2016.
The METT was filled in for five sites in a workshop situation in 2015; and for all the sites
in 2016, when field verification also took place for three of the protected areas, including
interviews with local stakeholders conducted by external specialists. Data from all the
METTs has been reviewed by WCD and external specialists and has been collated and
analysed to show relative strengths and weaknesses and to identify important next
steps for both individual protected areas and for the national protected area system as a
whole. The State of the Parks report will be published in late 2016.
© MARC HOCKINGS
QUESTION GUIDANCE
The METT was designed to be a simple tool which could be picked up and
used with minimal training. However, experience has shown that this may
be over-optimistic, and that most users need some help to get the best
possible use out of the assessment. Over the years since its first publication
and use, various training courses have been developed around using the
METT. The section below reviews each part of the METT and, drawing
on the experience of using and training users, provides more detailed
guidance, with particular emphasis on the multiple choice questions.
Data sheet 1
Name of protected area: this should be the full name; and the same as that included
on any official list (from the government, World Database of Protected Areas etc). If
the site is known by more than one name, or if the name has changed recently, include
alternatives, stressing which one is now the “official” name.
WDPA Code: Each protected area has a code, which is listed on the World Database
of Protected Areas and is a unique identifier. Nowadays this can be found most easily
on the Protected Planet website. Type the name of the protected area into the “Start
Exploring” box, open the record for the site and the WDPA ID is listed on the top left of
the page.
Designations: National: this refers to the national category – such as national park,
wilderness reserve, nature reserve, etc. Identification is important because in most
countries particular designations will have their own policies, rules and sometimes
legislation.
Designation: IUCN category: most, but by no means all, protected areas are also
identified by the national government as falling into one of the six IUCN management
categories. This is important, because the way that individual countries define
something like a national park might be very different in terms of the way that it is
managed: the IUCN category provides an international standard. IUCN categories are
also listed on the WDPA. The UNEP World Conservation Monitoring Centre, which
manages the WDPA, only lists the IUCN category if it is proposed by the government,
so if none exists, this section should be left blank. Further information: Guidelines for
Applying Protected Area Management Categories (Dudley, 2008).
Designation: international: explained in more detail on the second page of the data
sheet. This collects information on regional or global designations, such as UNESCO
World Heritage or ASEAN Heritage.
What are the main values for which the area is designated: this may sometimes
be written down formally (for instance in application for World Heritage status or in the
protected area management plan), or it may be implicit. It is important to note whether
the protected area is designated primarily to protect a whole habitat (such as a coral reef
or rainforest) or whether it is to protect a certain species or group (like a seabird colony
or a rare plant).
Please note if assessment was carried out in association with a particular project, on
behalf of an organisation or donor: for example as a condition of getting a GEF grant, or
because it is standard government policy.
• Threat 2.1a Medicinal plant cultivation: Note that the collection of species from the
wild is covered in threat 5.2
• Threat 3.3 Energy generation including HEP: This question looks specifically at threats
within protected areas. Hydropower developments outside protected areas can still
impact on the protected area, the impact of such threats is covered in threat 7.2.
• Threat 4.3 Flight paths: It should be stressed that this threat is considering flight
paths of aeroplanes, hot air balloons, gliders etc, not the flight paths of birds
• Threat 6.2 War, civil unrest and military exercises: this can include intrusion of
political insurgency from across national borders
1. Legal status: this usually only refers to state-managed protected areas. In the case
of many private reserves and indigenous and community conserved areas (ICCAs) legal
status is not an option and this question is not applicable. Where such protected areas
do have some formal status (e.g. a covenant or legal recognition of Indigenous Protected
Areas) this should be listed. Further information: Guidelines for Protected Area
Legislation (Lausche, 2011).
2. Protected area regulations: the term “regulation” can refer to both legal and
customary controls; for instance protected areas managed by private individuals, trusts
or communities should still have clear rules regarding use of land and water.
3. Law enforcement: here “staff” relates to both those formally employed and those
responsible for management in other governance types. The question refers to both
personal capacity (training, skills) and sufficiency of equipment and infrastructure
(vehicles, routes to access remote areas, etc.) The next steps section should identify
needs if the score is low.
4. Protected area objectives: this question refers back in part to the key management
objectives already identified in the datasheet. Were these obvious or did the assessment
group have to work them out? If the latter, this probably means that overall management
has not considered the objectives of the protected area in sufficient detail. Key references
include the original legislation establishing the reserve, in the case of state-run protected
areas, and management plans, information and knowledge of day-to-day activities. Lack of
clear objectives probably means that management is itself undirected and likely
inefficient: a process for firming up objectives (for instance a stakeholder workshop)
should if necessary be noted in next steps.
5. Protected area design: issues to consider here include whether key species are
adequately protected (for instance it would be an issue if a marine protected area
omitted a nearby area where many of the constituent species bred), whether it is large
enough to support viable populations and whether events outside the protected area
could undermine its value (for instance if a hydroelectric power project dammed a river
and interrupted flow). It is also important to consider, where possible, projected future
climate change influence in this assessment: for instance if sea level rises is there space
in the protected area for a mangrove forest to retreat inland?
7. Management plan: in most cases this will be a formal management plan, written
down and in the case of government protected areas also approved by the relevant
department or ministry. In other cases management plans may be less formal, agreed
through discussion with community members, and existing only as oral agreements,
minutes of meetings or other less formal arrangements. The aim of this question is to
see whether or not management is following a set and logical course. Further
information: Guidelines for Management Planning of Protected Areas (Thomas and
Middleton, 2003).
There are a number of additional questions in the METT (7a,b,c; 21a,b,c; 24a,b,c; and
30a,b,c), which go beyond the basic assessment and identify whether particular best
practices are in place. All of these additional questions should be considered during the
assessment (as with the other METT questions, the extra best practice questions add
up to a total score of 3 and therefore fit the scoring framework). However, it is common
for assessors to answer only one of the three additional points, possibly because assessors
do not understand that they can score any or all of them. If any further
editions of the METT are produced, it should be made clear that assessors should give 1 or 0
as an answer to ALL the additional questions.
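To make the scoring arithmetic concrete, the logic described above can be sketched as follows. This is a hypothetical illustration only: the question IDs and score values are invented, not taken from any real assessment, and the handling of "not applicable" questions follows the guidance elsewhere in this handbook that such questions are excluded from the maximum.

```python
# Hypothetical sketch of METT scoring: each main question scores 0-3,
# and each additional best-practice point (e.g. 7a, 7b, 7c) scores 0 or 1.
# Any or all of the additional points may be scored for a question.

def mett_score(main_scores, bonus_points):
    """Return (total, maximum, percentage) for an assessment.

    main_scores:  dict of question id -> score 0-3 (None = not applicable)
    bonus_points: dict of additional question id -> 0 or 1
    """
    # Not-applicable questions are excluded from both total and maximum
    applicable = {q: s for q, s in main_scores.items() if s is not None}
    total = sum(applicable.values()) + sum(bonus_points.values())
    maximum = 3 * len(applicable) + len(bonus_points)
    return total, maximum, round(100 * total / maximum, 1)

# Invented example: two applicable questions plus the three question-7 points
total, maximum, pct = mett_score(
    {"q7": 2, "q10": 3, "q29": None},   # q29 not applicable, so excluded
    {"q7a": 1, "q7b": 0, "q7c": 1},
)  # gives (7, 9, 77.8)
```

Reporting the total alongside the maximum, rather than the percentage alone, makes clear how many questions were treated as not applicable.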
7a. Planning process: opportunities for key stakeholders to influence planning: “key
stakeholders” in this case refers to people beyond the immediate management authority;
such as local communities or indigenous peoples living in or near the protected area,
sometimes also tourism operators, local government and industry. If there has been
no such involvement, the next steps column should identify those people who should in
future be involved.
7b. Planning process: periodic review: many formal management plans cover
5-10 year periods. But many things can change over this length of time; such as new
pressures, changing weather patterns, new opportunities. This question captures
whether there is a way to make sure such changes are integrated into management, and
lessons learned as management proceeds.
7c. Planning process: monitoring results: the fact that monitoring takes place, and
assessments are carried out, is no guarantee that the results are incorporated into
management. The question addresses this and if answered negatively the next steps
column should include concrete, time-bound proposals to address the lack. Further
information: Enhancing our Heritage Toolkit: Assessing management effectiveness of
natural World Heritage sites (Hockings et al., 2008).
8. Regular work plan: this will usually refer to an annual plan, aimed at implementing
the next stage of the management plan.
10. Protection systems: the question focuses particularly on enforcement, and will
be applicable in places where there is pressure from poaching, encroachment, illegal
mining etc. In protected areas with no such pressures, designation and management in
itself can be judged “largely or wholly effective”. The question is less about capacity and
resources (covered elsewhere in the METT) than about whether protection is actually
effective in practice.
13. Staff numbers: answering this question might be slightly more difficult for
community-managed sites; here the issue will be more generally one of having sufficient
number of people involved for there to be capacity to manage rather than “employment”
in a traditional sense. In some remote protected areas, with few pressures, there may be
no permanent staff but rather one person will have oversight of several protected areas:
in this case the answer would fall somewhere between the second and fourth of the
answers.
14. Staff training: again this question can refer to both formal staff members and/
or others involved in management. Training needs to be in relevant disciplines; it is
not uncommon for protected area staff to be seconded from other institutions, such as
forestry; although these people have received training, it is often largely irrelevant to the
job in hand. Next steps should list any important training gaps.
15. Current budget: this question relates to the total amount of budget, rather than
to budget security, addressed in question (16). Virtually every protected area rates
itself as inadequately financed! This question is not aimed at identifying whether more
money would be useful but whether there is sufficient budget to carry out effective
management and to implement a realistic management plan.
16. Security of budget: the main question here is whether the budget is reliant
on intermittent project funding or whether there is a reasonable chance of it being
maintained over time – for instance because it is a core part of a government budget,
or maintained through a private trust, or has low costs and strong volunteer support.
Further information: Sustainable Financing of Protected Areas (Emerton et al., 2006).
18. Equipment: this could include, for example, vehicles, communication systems,
tools, uniforms, shoes; but also consumable materials such as fuel.
20. Education and awareness: this question covers education both for learning
establishments, such as schools programmes, and the provision of more general
educational opportunities for local communities or recreational visitors.
21. Planning for land and water use: note that this question relates to planning
processes outside the protected area. Protected area effectiveness can be seriously
undermined by actions that take place beyond its boundaries, such as pollution,
alterations to hydrology, and development of infrastructure such as roads and rail links.
Does the protected area have any influence on surrounding decisions? Do managers or
communities managing protected areas engage in wider planning discussions? Does
the government take account of the protected area when undertaking broader planning
exercises?
21a. Land and water planning for habitat conservation: this additional question
narrows down the focus of question (21) by focusing on surrounding environmental
conditions, such as pollution levels, hydrology etc. Relatively few protected areas will be
able to score this additional point.
21b: Land and water planning for connectivity: is the protected area connected to
other similar habitats or is it isolated? Particular issues here include the potential for
animal migration or movement to prevent species becoming inbred, opportunities for
fish migration along rivers, and the presence of buffer zones around protected areas
to prevent edge effects and encroachment. If not, is there anything that can be done to
improve the situation?
23. Indigenous and traditional peoples: this will not be applicable in cases where
there are no indigenous people present. Note that different countries use a range of
terms to describe such cultures: ethnic minorities, traditional peoples etc. Further
information: Indigenous and Traditional Peoples and Protected Areas: Principles,
Guidelines and Case Studies (Beltrán, 2000); Indigenous and Local Communities and
Protected Areas: Towards Equity and Enhanced Conservation (Borrini-Feyerabend et
al., 2004).
24. Local communities: to score 2 or 3 in this question the communities should also
have a reasonable amount of influence on the overall decision: mere consultation is not
sufficient.
25. Economic benefit: this question is aimed explicitly at local communities rather
than outside businesses, such as tourism companies; the latter might be included if
they employ a significant number of local people. Economic benefits include direct jobs,
Payment for Ecosystem Service schemes, indirect benefits from increased tourism or
sales to visitors, and other options such as guiding.
27. Visitor facilities: not all protected areas need visitor facilities; this question
judges provision against the perceived need.
29. Fees: not all protected areas should or do collect fees; this question is not applicable
in these cases. The aim here is more to find out, where fees are an expected part of
the protected area management, whether they are used to help management or simply
disappear into general government revenue, providing no support for the resource that
generated them.
30. Condition of values: the METT really measures management and outputs and
does not consider outcomes in detail; this one question covers both biological and
cultural values and is simply an indication of whether staff and other stakeholders
believe that the fundamental objectives are being met. The comments section can give
further details, including data if this is available, and also can distinguish between the
relative success of conserving biological and cultural values if there are differences
between the two.
30a. Condition of values: monitoring: if scoring yes for this question, details of type
of monitoring should be given in the comments section.
Perhaps the best way to assess the relationship between METT scores and conservation
outcomes is to use a variety of independent datasets (Knights et al., 2014). Suitable
data could come from analysis of forest cover changes over time, or species population
trends, within and outside protected areas as is being suggested in the Bhutan METT
+ (Dudley et al., 2016). Some commentators have suggested additional questions on
outcomes and their causes (e.g. Coad et al., 2015), however to cover outcomes in the
METT comprehensively would require such a fundamental change to the tool that many
of its overriding objectives as an entry-level, cost effective and simple tool to apply
could be lost. Conversely, the review of GEF usage of the METT suggested streamlining
to focus on information that can be used in conjunction with existing global datasets
and geospatial data, to perform meaningful analyses on management effectiveness
and biodiversity impacts at a global level (GEF, 2015). One option is to include more
guidance on using the tool as part of a site level assessment system as is being proposed
in Bhutan.
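The kind of analysis suggested above, comparing habitat change inside and outside protected areas against METT scores, can be sketched in miniature. All of the site figures below are invented for illustration; the approach simply contrasts "avoided" forest loss (outside rate minus inside rate) between higher- and lower-scoring sites.

```python
# Hypothetical sketch: compare annual forest-loss rates inside protected
# areas against matched outside areas, grouped by METT score band.
# All numbers are invented for illustration.
sites = [
    # (METT score %, loss inside %/yr, loss outside %/yr)
    (78, 0.1, 0.9),
    (65, 0.3, 0.8),
    (42, 0.7, 0.9),
    (35, 1.0, 1.1),
]

def avoided_loss_by_band(sites, threshold=60):
    """Mean (outside - inside) loss for sites scoring at or above,
    and below, a METT percentage threshold."""
    high = [out - ins for score, ins, out in sites if score >= threshold]
    low = [out - ins for score, ins, out in sites if score < threshold]
    return sum(high) / len(high), sum(low) / len(low)

high_band, low_band = avoided_loss_by_band(sites)
# In this invented dataset the higher-scoring band shows more avoided loss
```

A real analysis would of course need matched comparison areas and many more sites, as the studies cited above discuss; this sketch only shows the shape of the calculation.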
As Nolte and Agrawal (2012) suggest, more effort in understanding the relation between
protected area management, protected area effectiveness, and the indicators used to
measure both is needed. They also note that to understand why some areas are effective
and what type of support makes them effective, “future analyses will need to examine
causation rather than correlation”.
The impacts of spatial differences across protected areas are not covered in the METT,
and thus those completing the METT can face difficulties in assessing the questions,
as there can be a lot of variation in management effectiveness, especially across large
sites. Questions addressing such spatial variation have been suggested; however, to date
they have not been added to the METT, although they have
been included in METT adaptations (e.g. the Bhutan METT +, see Dudley et al., 2016).
Case study 3
Papua New Guinea: Protected Areas Assessment Project
Authors: Fiona Leverington, Ann Peterson and Greg Peterson
© ANN PETERSON
Photo: Participants from Varirata National Park start the METT process by drawing
images of the main values of their protected areas; this helps easy translation into a
more formal statement of values and benefits, and then completion of the adapted METT.
As most protected areas in PNG are on land owned and managed by the customary
landowners, and have no government employees, the METT needed to be adapted
to local circumstances. Many of the METT questions have been worded for people
very familiar with protected area issues, and in the PNG context this would have
caused some level of confusion or ambiguity, particularly for those unfamiliar with
protected area management jargon. Rather than relying only on facilitators to clarify
questions, explanatory notes were added to the questionnaire for most questions.
This helped to improve reliability and increase consistency when the questions are
applied at different times and by different people. However, it is also essential that the
questionnaire is applied in workshops with trained facilitators who have a more in-
depth understanding of the questions and the logic behind them. In some cases, the
questionnaire part of the METT was duplicated so respondents could choose between
the traditional METT question (for the few government–owned protected areas) and a
new version (for community areas). Other adaptations included keeping the standard
threat classification (Salafsky et al., 2008) used in the METT but altering the wording to
make it clearer within the classification’s meaning.
The PNG METT was developed through a staged process. A draft methodology was
devised and shared with staff of CEPA, UNDP and some civil society representatives
at a workshop in Port Moresby in April 2016. The methodology was then trialled
and adjusted in the field before being finalised. Although this iteration of PAME was
undertaken with external funding and with the assistance of consultants, PNG has an
excellent opportunity to ensure that regular PAME studies are undertaken to show the
changes and hopefully improvements in management over time. In the delivery of the
PNG METT, CEPA staff and UNDP staff were involved in training that incorporated both
facilitating and recording information. In this process they were mentored by the project
facilitators. This will enable CEPA staff to continue with the assessments in the future
and thus ensure the consistency, reliability and validity of the recorded information.
As PNG is still beginning the journey of developing an effective protected area system,
the PAME assessment is providing important baseline information and guiding future
developments across the protected area network.
This case study has been sourced from Leverington et al., 2016
5. METT: PREPARING
THE GROUNDWORK
FOR PROTECTED AREA
STANDARDS
Until recently management effectiveness evaluations have provided the
main source of information available about the management of protected
areas. Tools like the METT provide valuable information on management
and, when used well, can help to plan adaptive management. However,
relying solely on these tools to ensure that a large percentage of the world
is well managed for conservation could be misleading. There is increasing
pressure for protected area standards and means of verification. In this
context, specific and detailed tools are needed which set the baseline
standard for effective management.
Photo: Site and species based standards are now being developed to further assess
protected area effectiveness
IUCN started to investigate options for introducing some kind of certification system for
protected areas, which itself implied a set of standards (Dudley et al., 2003; Dudley et al.,
2004b), and this possibility was also addressed by the IUCN Environmental Law Centre
(Dudley, 2004). These ideas were initially rejected by IUCN as impractical.
Pressure grew steadily to introduce some form of standards as a logical next step to
management effectiveness assessment and was included in the 2004 Programme of
Work on Protected Areas from the UN Convention on Biological Diversity: Activity 4.1.1
Collaborate with other Parties and relevant organizations, particularly IUCN, on the
development, testing, review and promotion of voluntary protected areas standards
and best practices on planning and management, governance and participation (CBD,
2004).
• Legal designation
• Demarcation of protected area boundaries
• Clear management objectives
• Operational plan
• Operational budget
• Monitoring plan
Whilst falling far short of standards in terms of defining what was required for each of
these steps, this guidance recognised the general need for minimum components of good
management to ensure success.
More recently, two initiatives (Conservation Assured and the Green List of Protected
and Conserved Areas) have developed and started to apply the idea of something
approaching a certification scheme for protected areas, although both approaches have
been wary of using this particular term (see sections 5.2 and 5.3). Both standards start
from an assessment (usually a self-assessment by park staff) of management effectiveness;
they then build on this through application of specific standards developed through a
peer-review process and an external assessment by local or international experts.
Assessment focus
• PAME assesses management against a site’s individual goals and objectives
• Standards evaluate a site’s management against peer reviewed best practices
Verification process
• Although processes vary, most PAME systems are self-assessments; where
they exist verification processes tend to be project based and do not involve
accreditation or certification
• Standards usually involve some kind of formal accreditation/certification
process, with multiple steps to ensure compliance with the standards. Processes
are encouraged to be based on international best practices, such as ISEAL’s
Principles for Credible and Effective Sustainability Standards Systems
The CA|TS concept is also being developed for other species. CA|TS standards consist
of seven sections or “pillars”; five relate to species protection in any protected area, with
an emphasis on high value species likely to attract poachers (Conservation Assured) and
two relate specifically to tigers and their prey (Tiger Standards). CA|TS is therefore a
modular system that can be applied to other species and for wider conservation impact
(Pasha et al., 2014), and the Zoological Society of London is already doing this for rhinos
as part of an initiative to build impact bonds as a way of drawing private finance into
conservation.
The Green List standard consists of four main components: 1) good governance,
2) sound design and planning, 3) effective management, and 4) successful conservation
outcomes. Each component has a number of criteria and indicators which emphasise
the importance of managing equitably, maintaining natural values and associated
cultural and spiritual values and achieving positive socio-economic outcomes. PAME
assessments are a major component of the Green List, both as providing an important
entry point for taking part in the initiative and as an indicator of good management.
Performance levels that represent sound management of protected and conserved areas
have been defined and candidate protected areas must meet these levels in order to
achieve “Green List” status.
The IUCN has established a strong governance framework for the Green List initiative
and a comprehensive assurance procedure to ensure transparency, rigor and credibility.
This review has provided a chance to spend some time looking carefully
at the way that the METT has been used; from small beginnings into
a global tool. To some extent a victim of its own success, along with
the benefits there are some evident weaknesses and things that could
usefully be changed, improved, added to or explained more clearly. The
following section draws together some overall conclusions and makes
recommendations for future steps.
Given the qualitative approach and the reliance on individual judgement, the METT
is likely to be better at comparing performance in one site over time than at comparing
between different sites. But analysis of the global database shows that it can also provide
useful information about the general status of management effectiveness of protected
areas, as long as data are treated with the necessary caution. Overall usefulness of
the METT for institutions such as WWF is likely to increase as the total number of
assessments, and particularly the number of repeat assessments, continues to grow.
METT results have already helped to identify those management processes critical
to success, and in turn to set best practice standards for protected areas that reflect
the real experience of many thousand managers and rangers around the world, rather
than being based on a few case studies. The focus of protected area capacity building
is now moving beyond assessments towards the establishment of globally-accepted
standards and, increasingly, third-party verification that these standards are being
met. Conservation Assured | Tiger Standards (CA|TS) and the IUCN Green List of
Protected and Conserved Areas are two concrete examples. However, these standards
are predicated on the assumption that management effectiveness assessments are
being carried out as an essential first part of the assessment. As the favoured “first
assessment” system, use of the METT will spread further as these systems develop.
Many variations on the METT have also emerged over the last 15 years, as people
have modified the original questions and format to fit different biomes, management
approaches and national priorities. Along with modifications for freshwater and marine
protected areas, variations have been developed for community forest areas and some
countries wish to change the questions to better fit national conditions. Some users
appear to need to make some modifications as part of the socialisation process of getting
used to, and excited about, applying the tool. Whether or not such changes are to be welcomed
depends to a large extent on whether the priority is for a comparable global dataset, or
for a plethora of systems that best fit national priorities. Attempts to reconcile these
two objectives include development of a global list of core indicators, which allow many
different PAME systems to be compiled with respect to all their critical data. As the
METT continues to be modified (even the GEF has made changes to the original) use of
the core indicators may be increasingly relevant for METT results as well.
1. Extra questions: it is increasingly clear that the METT does not cover some
areas now seen as critical for protected area performance. Whilst
recognising that the strength of the METT is largely in its brevity, there are
strong arguments for additional questions on climate change (including carbon
sequestration), transboundary conservation, social processes within and around the
site and a division of the outcome questions to separate conservation outcomes and
cultural/social outcomes (see section 4.3).
2. Clearer wording: there is always a temptation to revise constantly and this has
consciously been avoided with the METT in favour of constancy. But repeated
applications have identified some important ambiguities remaining in the 2007
version, which make it difficult for users to decide between some of the multiple
choice questions. The guidance offered in this handbook aims to help provide
clearer interpretation of the METT questions (see section 4).
3. Capacity building material: practical experience with the METT has shown
that additional tools can be helpful, such as PowerPoint presentations that can
be projected and filled in through discussion and consensus where multiple
stakeholders are involved in completing the METT. Making these materials more
generally available could help others in making the best use of the assessment.
Spending time training assessors, so that they fully understand the METT, will
also help to ensure better results; one efficient way of doing this is to have future
assessors take part in a METT assessment conducted by someone with experience
(see section 4.2.6).
4. A dedicated web site: with this in mind, there is a need for a METT website, to
include the definitive version of the assessment tool (different versions circulate),
translations, associated capacity building and presentation material, relevant
publications and also perhaps a chat room for people to swap experiences, ask
questions and make suggestions. The tool has grown considerably beyond the original
concept of those who developed it, and now needs back up resources to function as
effectively as possible.
7. Data control: a measure of quality control is needed when METTs are completed,
particularly when implemented as part of an NGO, donor or government led project.
It is clear that many METTs are not being completed accurately and many are only
being partially completed. The better the process to implement the METT (see
section 4.2), the more accurate large datasets will be and the better the insights they
can give to PAME, particularly when these data are used in global studies.
7. ADDENDUM: METT
ORIGINS, DIFFERENT
VERSIONS AND
IMPLEMENTATION
This final section provides a history of the development and use of the
METT. Section 7.1 outlines the METT’s origins and evolution and section
7.2 provides details of a range of reports of the METT’s implementation
from individual countries to portfolios of protected areas. The METT has
also been adapted and used as the basis for a range of similar assessment
tools, as shown in 7.3. Finally, section 7.4 provides a list of countries that
have undertaken the METT as recorded on the METT database, including
those that have undertaken repeat assessments.
The WCPA Framework is made up of a range of elements and processes that can usefully
form the basis of any PAME system. It is based on the idea that an evaluation should
reflect three main assessment themes: design issues; adequacy and appropriateness of
management systems and processes; and delivery of protected area objectives.
From these three themes the WCPA Framework identifies six key elements of protected
area management, which together provide the basis of a PAME assessment (see Figure
1). These six elements reflect the way protected areas are established and managed, i.e.
the management cycle.
The WCPA Framework suggests that systems for PAME should include all six elements as
they are complementary rather than alternative approaches to assessing management
effectiveness.

Figure 1. The management cycle of the WCPA Framework: Planning (Where do we
want to be?); Inputs (What do we need?); Processes (How do we go about it?);
Outputs (What were the results?); Outcomes (What did we achieve?); with Evaluation
linking outcomes back to planning.

Thus the assessment needs to be made in the context of the protected
area, so assessments first need to gather data on issues relating to the area’s values,
threats and opportunities, stakeholders, and the management and political context.
Management starts with planning of strategies needed to fulfil the vision, goals and
objectives of protection and to reduce threats. To put these plans in place and meet
management objectives, managers need inputs (resources) of staff, money and
equipment. Management activities are implemented according to accepted processes
(i.e. best practices); which produce outputs by completing activities outlined in work
plans. The end result of management is the achievement of outcomes, i.e. reaching the
goals and objectives set for the biological conservation, economic development, social
sustainability or cultural heritage of the protected area.
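Since the handbook recommends (see the scoring discussion later in this section) reporting results per WCPA Framework element rather than as a single total, the aggregation can be sketched as follows. The six element names follow the Framework; the question-to-element mapping and the scores are invented for illustration.

```python
# Hypothetical sketch: aggregate METT question scores under the six
# WCPA Framework elements. The element names are real; the mapping of
# questions to elements and the scores below are invented examples.
ELEMENTS = ["Context", "Planning", "Inputs", "Processes", "Outputs", "Outcomes"]

def scores_by_element(question_element, question_scores):
    """Sum question scores under each WCPA element."""
    totals = {e: 0 for e in ELEMENTS}
    for q, score in question_scores.items():
        totals[question_element[q]] += score
    return totals

# Invented example: three questions mapped to two elements
totals = scores_by_element(
    {"q7": "Planning", "q15": "Inputs", "q16": "Inputs"},
    {"q7": 2, "q15": 1, "q16": 3},
)
# totals["Inputs"] is then 4 and totals["Planning"] is 2
```

Reporting per-element subtotals in this way makes weak elements (for instance low Inputs despite sound Planning) visible in a way that a single composite score hides.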
Following the field testing phase the Alliance contracted the authors of the WCPA
Framework to provide two outputs:
i. A review on how the scorecard can be improved, with guidance on its scope and
limitations.
ii. Recommendations on how the WCPA Framework can be developed to:
a) Track progress on the Alliance’s target
b) Provide reliable information to field managers to enhance management of
biodiversity.
The review of the scorecard in the pilot sites highlighted issues related to the trade-off
between the brevity of the 10-question scorecard and the greater detail which would
provide a more complete basis for both the assessment and adaptation of protected
area management. WWF felt that the existing Appendix II scorecard did not go far
enough in assessing site performance and proposed the development of a more detailed
alternative, which was presented to the Alliance in 2002. However, World Bank staff
argued strongly that the proposal was too time-consuming to be used by their project
executants. A compromise was agreed whereby the format of the original scorecard was
maintained with a few extra questions and a data section added to reflect other elements
in the framework (Stolton et al., 2002b).
As the title implies, the primary aim of the METT is to supply consistent data about the
progress of protected area management over time. The purposes of the tool are detailed
in the introduction of the 2002 and 2007 publications (Stolton et al., 2002a), which
states that the METT was developed as a response to eight requirements:
The METT is ideally an “entry tool” into the whole concept and practice of PAME. There
are now a multitude of tools (both generic and those developed for specific protected
area systems or categories) (Leverington et al., 2010b) and the revised WCPA PAME
Framework (Hockings et al, 2006) provides detailed guidance on how to carry out
PAME evaluation and reviews many of the tools available.
Datasheets that collect key information on the protected area, its characteristics,
threats and management objectives and details of who carried out the assessment
WWF funded a more detailed review and revision of the METT in 2007 based on
implementation experience, best practices and the need to reflect the growing interest
and implementation of the METT beyond the original aims of the Alliance’s protected
forest targets. This version, known as METT 3, was published in 2007 (Stolton et al., 2007).
• Scoring disclaimer: The review of results between 2002 and 2006 found that
concerns about using the METT to calculate an overall PAME score were somewhat
allayed. Although a note remained in subsequent versions of the METT concerning
the development of an overall score, due to the lack of weighting of questions (see box
4 for further discussion of scoring), the disclaimer against scoring was removed. This
was due to results showing that most individual questions correlated fairly highly
was due to results showing that most individual questions correlated fairly highly
with the total score, the exceptions being those relating to legal status, protected area
design, local communities and indigenous people. This meant that the total score
apparently correlated reasonably well with most individual scores and thus could
serve as a reasonably good indicator of overall management effectiveness (Dudley
et al., 2004 and 2007). However, as noted before, in the METT guidance reporting
scores for individual elements of the WCPA Framework is likely to provide a much
better indication of effectiveness than an overall score. (This view was supported
in a paper by Nolte and Agrawal (2012) where although composite METT scores
were not significantly related to the effectiveness of protected areas in reducing
fire occurrence – which was used as a proxy for effectiveness – several individual
indicators in the METT were related. These indicators included cooperation with
neighbouring official and commercial land; research activities; and access control, all
of which would seem to have a direct impact on fire. The authors suggest that links
between METT scores and outcomes may be stronger than the composite results
indicate, but this depends on the other indicators chosen to represent effectiveness.)
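The kind of correlation analysis referred to above, relating each individual question's scores to sites' total scores, can be sketched briefly. The data below are invented, not METT results; the sketch only shows the calculation (Pearson's r) applied to one question across a handful of sites.

```python
# Illustrative sketch (invented data): Pearson correlation between one
# question's scores across sites and each site's total METT score.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for one question at five sites, and the sites' totals
question_scores = [1, 2, 2, 3, 0]
total_scores = [45, 60, 58, 80, 30]
r = pearson(question_scores, total_scores)  # high r: question tracks the total
```

A question whose r is low (as reported for legal status or indigenous-people questions) carries information the composite score does not summarise, which is the argument for per-element reporting.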
• Explanation: A seemingly very simple revision was the change of the column
heading of “comments” to “comments/explanation” in the first of two narrative
columns in the multiple choice element of the METT. This change was however the
first step toward the type of evidence based verification approach currently being
developed in protected area management standards such as Conservation Assured |
Tiger Standards and the Green List of Protected and Conserved Areas (see sections
5.2 and 5.3). As the guidance notes to the METT explain, this box “allows for
qualitative judgements to be explained in more detail. This could range from local
staff knowledge (in many cases, staff knowledge will be the most informed and
reliable source of knowledge), a reference document, monitoring results or external
studies and assessments – the point being to give anyone reading the report an idea
of why the assessment was made”.
• Wider focus: The revision made the METT less narrowly orientated towards forest
protected areas and thus suitable for use in all protected areas including wetlands
and marine. This wider focus also allowed for the tool to be used beyond government
protected areas, for example, in village forest reserves in Tanzania (Malugu et al.,
2008; Knights et al., 2014).
• That the assessment be seen by protected area staff as a judgement rather than
a management tool
Thus the ability of data from simple PAME systems like the METT, which focus
on the practice of management, to indicate or correlate with overall biodiversity
outcomes is limited (Carranza et al., 2014), and using the overall METT score
to infer conservation outcomes is likely to be misleading, considering that only one
of the questions actually addresses conservation outcomes (Zimsky et al., 2010).
Ideally, results should only be used to infer conservation outcomes where the METT
is used as part of a fully planned PAME implementation system and has been
explained, adapted and its results verified (see section 3.2).
Bhutan: Used in all protected areas, which cover just over 50% of the country.
(Wildlife Conservation Division and Equilibrium Research, 2015 and 2016)
Indonesia: The METT has been adopted as the national assessment system for
protected areas and is being widely applied. Training is being given to managers
and a national target has been adopted to lift 250 protected areas to effective
management (i.e. with a score of at least 70%) by 2020. The METT has been
translated into Indonesian and extensive guidance on application and scoring has
been provided. (Kementerian Lingkungan Hidup dan Kehutanan, 2015)
GEF: Review of GEF use in nearly 2,000 protected areas, including field visits to
47 sites. (GEF, 2015)
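For sites reporting against a threshold such as Indonesia's 70% target, the arithmetic is straightforward: the METT's multiple-choice questions are each scored 0–3, and the overall score is commonly expressed as a percentage of the maximum applicable score, with questions judged not applicable excluded. The sketch below illustrates that calculation only; it assumes this common scoring convention, and the function names are illustrative rather than part of any official national methodology.

```python
# Illustrative sketch of METT percentage scoring (assumed convention:
# each applicable question scored 0-3; not-applicable questions are
# recorded as None and excluded from the maximum possible score).

def mett_percentage(scores):
    """Return the total score as a percentage of the maximum applicable."""
    answered = [s for s in scores if s is not None]
    if not answered:
        raise ValueError("no applicable questions")
    if any(not 0 <= s <= 3 for s in answered):
        raise ValueError("each question is scored 0-3")
    return 100.0 * sum(answered) / (3 * len(answered))

def meets_target(scores, threshold=70.0):
    """Check a site's score against a national target such as 70%."""
    return mett_percentage(scores) >= threshold

# Example: 28 applicable questions, two recorded as not applicable.
site = [3, 2, 2, 1, 3, 2, 2, 2, 3, 1, 2, 2, 3, 2,
        2, 1, 3, 2, 2, 2, 3, 2, 1, 2, 3, 2, 2, 2, None, None]
print(round(mett_percentage(site), 1), meets_target(site))  # → 70.2 True
```

Excluding not-applicable questions from the denominator matters in practice: a marine site that cannot answer terrestrial questions should not be penalised for them.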
Country  No. of sites  Area (ha)
Albania  4  33,042
Algeria  7  28,085,550
Angola  4  3,364,000
Argentina  47  6,107,648
Armenia  35  169,780
Azerbaijan  7  218,390
Bahamas  26  256,152
Belarus  11  344,940
Belize  29  562,469
Benin  13  6,239,105
Bhutan [1]  8  4,008,229
Bolivia  6  5,183,300
Bosnia and Herzegovina  6  51,747
Botswana  5  2,254,539
Brazil  110  41,920,122
Bulgaria  4  104,386
Burkina Faso  7  978,145
Burundi  3  93,401
Cambodia  13  1,274,342
Cameroon  20  3,327,333
Cape Verde  12  135,342
Central African Republic  5  2,530,200
Chile  54  4,418,618
China  116  32,633,300
Colombia  20  8,661,598
Congo, Dem. Rep.  21  12,916,550
Congo, Rep.  7  4,141,676
Costa Rica  36  1,597,068
Cote d’Ivoire  9  1,706,040
Czech Republic  3  256,500
Djibouti  2  12,500
Dominican Republic  16  533,977
Ecuador  19  1,650,248
Egypt  8  1,855,700
El Salvador  16  124,170
Estonia  26  209,180
Ethiopia  14  963,700
Federated States of Micronesia  2  –
Fiji  5  46,702
Finland  1  27,000
French Guyana  1  2,464
French Polynesia  2  –
Gabon  12  4,836,064
Gambia  2  12,138
Georgia  8  328,235
Ghana  8  399,562
Greece  2  63,700
Grenada  3  160
Guatemala  16  2,132,848
Guinea  4  260,201
Guinea-Bissau  10  801,502
Haiti  3  34,000
Honduras  15  2,114,575
India  14  710,940
Indonesia [2]  22  9,719,348
Iran, Islamic Rep.  4  584,380
Italy  14  82,352
Jamaica  34  530,514
Jordan  8  376,969
Kazakhstan  19  6,526,202
Kenya  54  755,748
Kyrgyzstan  2  336,118
Lao PDR  8  1,438,743
Latvia  6  524,192
Lebanon  5  103,390
Lesotho  3  14,047
Liberia  4  1,638,942
Libya  2  83
Lithuania  8  138,507
Macedonia FYR  6  124,934
Madagascar  31  3,083,804
Malawi  10  2,209,500
Malaysia  17  2,000,217
Mali  5  215,619
Mauritania  1  16,000
Mauritius  25  12,206
Mexico  22  4,667,166
Moldova  13  225,348
Mongolia  23  11,971,084
Montenegro  5  125,695
Morocco  5  283,801
Mozambique  19  9,978,648
Namibia  24  7,539,684
Nepal  13  9,337,970
Nicaragua  33  2,480,520
Niger  8  9,525,885
Nigeria  3  410,100
Niue  3  6,329
Pakistan  4  1,835,245
Palau  1  –
Panama  20  1,970,431
Papua New Guinea  7  2,425,599
Paraguay  6  181,392
Peru  27  9,814,704
Philippines  26  1,118,662
Poland  2  39,704
Romania  23  1,025,422
Russian Federation  125  53,613,257
Rwanda  2  117,300
Samoa  1  45,692
Senegal  8  184,107
Serbia  23  335,856
Seychelles  4  44,157
Sierra Leone  6  497,373
Slovakia  5  106,853
Solomon Islands  2  270,000
South Africa  248  1,530,573
Sudan  4  14,000,000
Suriname  16  2,221,400
Sweden  1  38,483
Syria  6  75,516
Tajikistan  3  25,139
Tanzania  430  40,157,200
Thailand  10  1,073,995
Togo  7  423,170
Tunisia  8  216,168
Turkey  15  1,337,680
Turkmenistan  9  1,902,320
Uganda  14  596,759
Ukraine  5  114,750
Uruguay  23  283,974
Uzbekistan  10  266,458
Vanuatu  7  35,161
Venezuela  14  4,103,882
Vietnam  50  552,262
Zambia  17  7,113,200
Zimbabwe  1  –
Grand Total  2,506  427,370,966
1. All protected areas in Bhutan have now completed the Bhutan METT+, but the data has not been loaded onto the METT database yet.
2. In 2015 Indonesia assessed 283 terrestrial and marine protected areas, but this data has not been added to the database yet.
Country  No. of sites  Area (ha)  No. of assessments (including repeats)
Albania  2  7,900  4
Argentina  12  229,065  25
Armenia  32  169,662  75
Bahamas  3  78,229  8
Belarus  4  135,503  12
Belize  29  562,469  70
Benin  4  5,755,586  9
Bhutan [1]  1  135,129  2
Bolivia  1  747,000  2
Bosnia and Herzegovina  5  31,947  10
Botswana  1  –  2
Brazil  43  20,364,494  93
Bulgaria  2  31,856  4
Burkina Faso  1  818,046  2
Cambodia  9  872,401  25
Cameroon  14  3,023,833  66
Cape Verde  8  119,478  18
Central African Republic  1  464,400  3
Guinea  1  145,200  2
Guinea-Bissau  5  482,502  20
Honduras  15  2,114,575  40
India  8  477,900  38
Indonesia  17  6,838,937  38
Iran, Islamic Rep.  4  584,380  10
Jamaica  2  194,564  7
Jordan  6  256,789  18
Kazakhstan  8  1,498,337  28
Kenya  47  547,221  116
Lao PDR  5  701,163  11
Latvia  1  457,000  3
Lebanon  3  59,328  6
Liberia  1  180,400  2
Libya  1  –  2
Lithuania  5  92,216  15
Macedonia FYR  2  8,172  5
Madagascar  10  631,797  32
Malaysia  9  174,513  27
Syria  2  33,994  4
Tajikistan  3  25,139  8
Tanzania  107  37,111,066  322
Togo  2  317,980  4
Tunisia  2  25,200  4
Turkey  2  83,899  4
Turkmenistan  7  976,117  22
Uganda  2  49,700  8
Ukraine  3  110,623  8
Uruguay  20  224,232  60
Uzbekistan  9  266,458  29
Vietnam  31  359,615  91
Zambia  14  2,770,200  70
Grand Total  961  199,697,930  2,566
Dizon, E.C., Geronimo, R.C. and R. Quicho Jr. 2013. Benchmarking the management effectiveness of nationally-managed
marine protected areas in the Philippines and policy recommendations. Final Report for USAID Coral Triangle Support
Partnership (CTSP) and Conservation International – Philippines. September 2013.
Dudley, N., Belokurov, A., Higgins-Zogib, L., Hockings, M., Stolton, S. and N. Burgess. 2007. Tracking progress in managing
protected areas around the world. An analysis of two applications of the Management Effectiveness Tracking Tool
developed by WWF and the World Bank, WWF International, Gland, Switzerland.
Dudley, N., Lham, D., Stolton, S., Wangchuk, S. and S. Wangchuk. 2016. Bhutan Management Effectiveness Tracking Tool Plus.
Final Version 2016. Wildlife Conservation Division and Equilibrium Research, Thimphu, Bhutan and Bristol, UK.
Emerton, L., Bishop, J. and L. Thomas. 2006. Sustainable Financing of Protected Areas: A global review of challenges and
options. IUCN, Gland, Switzerland and Cambridge, UK.
Evans, B. 2014. Mainstreaming the METT Powerpoint Presentation. [download at: www.conservationmeasures.org/wp-
content/uploads/sites/4/2014/10/g-METT_CMP_10.8.2014.pptx]
Forrest, J. L., Bomhard, B., Budiman, A., Coad, L., Cox, N., Dinerstein, E., Hammer, D., Huang, C., Huy, K., Kraft, R., Lysenko,
I. and W. Magrath. 2011. Single-species conservation in a multiple-use landscape: current protection of the tiger range.
Animal Conservation 14, 283–294.
GEF. 2015. Impact Evaluation of GEF Support to Protected Areas and Protected Area Systems. GEF/ME/C.49/Inf.02 October
06, 2015. GEF, Washington, D.C.
Geldmann, J., Coad, L., Barnes, M., Craigie, I.D., Hockings, M., Knights, K., Leverington, F., Cuadros, I.C., Zamora, C., Woodley,
S. and N.D. Burgess. 2015. Changes in protected area management effectiveness over time: A global analysis. Biological
Conservation 191, 692–699.
Gereau, R.E., Kariuki, M., Ndang’ang’a, P.K., Werema, C. and P. Muoria. 2014. Biodiversity Status and Trends Report for the
Eastern Arc Mountains and Coastal Forests of Kenya and Tanzania Region, 2008 – 2013, Birdlife International, Africa
Partnership Secretariat, Nairobi, Kenya.
Gevorgyan, A. and P. Abovyan. 2010. Assessment of Management Effectiveness and Law Enforcement in Forest Sanctuaries in
Armenia. Draft Report
Guiang, E.S. and G.C. Braganza. 2014. National Management Effectiveness and Capacity Assessment of Protected Areas in the
Philippines, GIZ, Bonn, Germany.
Heffernan, D.E., Zahler, P., Merkel, J., Heffernan, C.A. and C. Jargalsaikhan. 2005. An Assessment of the Protected Areas of the
Eastern Steppe of Mongolia. Mongolian Journal of Biological Sciences 2005 Vol. 3(1): 25-29.
Henschel, P., Coad, L., Burton, C., Chataigner, B., Dunn, A., MacDonald, D., Saidu, Y. and L.T.B. Hunter. 2014. The Lion in West
Africa Is Critically Endangered. PLoS ONE 9(1): e83500. doi:10.1371/journal.pone.0083500.
Higgins-Zogib, L. and K. MacKinnon. 2006. World Bank/WWF Alliance Tracking Tool: Reporting conservation progress at
protected area sites, in Hockings, M., S. Stolton, F. Leverington, N. Dudley and J. Courrau. 2006. Evaluating Effectiveness:
A framework for assessing management effectiveness of protected areas, 2nd edition, IUCN WCPA and University of
Queensland, Gland Switzerland and Brisbane Australia.
Hockings, M. and R. Hobson. 2000. Fraser Island World Heritage Area Monitoring and Management Effectiveness Project
Report. University of Queensland, Brisbane.
Hockings, M., James, R., Stolton, S., Dudley, N., Mathur, V., Makombo, J., Courrau, J. and J. Parrish. 2008. Enhancing our
Heritage Toolkit: Assessing management effectiveness of natural World Heritage sites. World Heritage Paper 23, UNESCO,
Paris.
Hockings, M., Stolton, S. and N. Dudley. 2000. Evaluating Effectiveness: A Framework for Assessing the Management of
Protected Areas. IUCN, Gland, Switzerland and Cambridge, UK.
Hockings, M., Stolton, S., Leverington, F., Dudley, N. and J. Courrau. 2006. Evaluating Effectiveness: A framework for assessing
management effectiveness of protected areas, 2nd edition, IUCN WCPA and University of Queensland, Gland Switzerland
and Brisbane Australia.
Hockings, M., Leverington, F. and C. Cook. 2015. Protected area management effectiveness, in G. L. Worboys, M. Lockwood, A.
Kothari, S. Feary and I. Pulsford (eds) Protected Area Governance and Management, pp. 889–928, ANU Press, Canberra.
Inciong, R.A., Pantastico, A.R. Uriarte, M.T. and J.F. Goloyugo. 2013. Management Effectiveness of Asean Heritage Parks: A
Study Report. GIZ and ASEAN Centre for Biodiversity, Laguna, Philippines.
IUCN, World Bank, WWF. 1999. Threats to forest protected areas: summary of a survey of ten countries, Gland, Switzerland
and Washington DC.
Independent State of Papua New Guinea. 2014. Papua New Guinea Policy on Protected Areas, Waigani, National Capital
District, Papua New Guinea.
Ismail, M., Rossi, A. and N. Geiger. 2011. A Compilation of Bioenergy Sustainability Initiatives: Update. Rome: Food and
Agriculture Organization of the UN (FAO).
Johns, A.G. 2012. Final Evaluation Report: Rwenzori Mountains Conservation and Environmental Management Project,
Uganda Phase II (2010-2012), WWF Uganda Country Office, Kampala, Uganda and WWF Norway, Oslo.
Kementerian Lingkungan Hidup dan Kehutanan. 2015. Pedoman Penilaian: Efektivitas Pengelolaan Kawasan Konservasi di
Indonesia (Management Effectiveness Tracking Tool) [Assessment Guidelines: Management Effectiveness of Conservation
Areas in Indonesia]. Kementerian Lingkungan Hidup dan Kehutanan, Jakarta, Indonesia
[download at: http://ksdae.menlhk.go.id/assets/publikasi/BUKU_panduan_mett_2015.pdf]
Knights, K., Cuadros, I., Zamora, C., Coad, L., Leverington, F., O’Connor, B., Gonçalves de Lima, M., Kingston, N., Danks,
F., Hockings, M., Malugu, I., Scheren, P., Ngoye, E., Stephenson, P.J., and N.D. Burgess. 2014. A preliminary assessment
of protected area management within the WWF ‘Coastal East Africa’ priority place, Eastern Africa. PARKS 20.2, DOI:
10.2305/IUCN.CH.2014.PARKS-20-2.KK.en.
Lausche, B. 2011. Guidelines for Protected Areas Legislation. IUCN, Gland, Switzerland.
Leverington, F., Costa, K., Pavese, H., Lisle, A. and M. Hockings. 2010. A Global Analysis of Protected Area Management
Effectiveness. Environmental Management 46(5): 685-698.
Leverington, F., Costa, K.L., Courrau, J., Pavese, H., Nolte, C., Marr, M., Coad, L., Burgess, N., Bomhard, B. and M. Hockings. 2010.
Management effectiveness evaluation in protected areas: a global study. Second edition. University of Queensland, IUCN-
WCPA, TNC, WWF, St Lucia, Australia.
Leverington, F., Peterson, A. and G. Peterson. 2016. Methodology for assessment of protected area management effectiveness.
SPREP, Samoa.
Malugu, I., Killenga, R. and T. Jones. 2008. East Usambara forest landscape restoration project. Tanzania
Forest Conservation Group [download from: www.easternarc.or.tz/groups/webcontent/documents/pdf/
TFCGEastUsambarabaselinemonitoringre.pdf]
Mascia, M.B., Pailler, S., Thieme, M.L., Rowe, A., Bottrill, M.C., Danielsen, F., Geldmann, J., Naidoo, R., Pullin, A.S. and N.D.
Burgess. 2014. Commonalities and complementarities among approaches to conservation monitoring and evaluation.
Biological Conservation 169, 258–267.
Mathur, V.B., Gopal, R., Yadav, S.P., Negi, H.S. and N.A. Ansari. 2014. Management Effectiveness Evaluation (MEE) of Tiger
Reserves in India: Process and Outcomes. National Tiger Conservation Authority and Wildlife Institute of India, Dehradun.
Ministry of Environment and Tourism (MET). 2014. Fifth National Report to the Convention on Biological Diversity (2010-
2014), MET, Republic of Namibia.
Mwima, H.K. 2007. Synthesis of completed management effectiveness tracking tool for protected areas managed by the
Zambia Wildlife Authority for the year 2007, The Ministry of Tourism, Environment and Natural Resources, Zambia.
National Parks, Wildlife and Conservation Department, Thailand and CA|TS. 2016. Summary of the First CA|TS Global
Consultation Meeting. Bangkok and Kuala Lumpur.
Nolte, C. and A. Agrawal. 2012. Linking Management Effectiveness Indicators to Observed Effects of Protected Areas on Fire
Occurrence in the Amazon Rainforest. Conservation Biology 27(1), 155–165.
Pap, C. 2012. Monitorowanie postępu w zarządzaniu obszarami chronionymi w krajach karpackich [Monitoring progress in
protected area management in the Carpathian countries], WWF International Danube Carpathian Programme, Vienna,
Austria [downloaded from: www.ccibis.org/images/PDF/CCPAMETT/PL%20CCPAMETT_web.pdf]
Parrish, J., Braun, D.P. and R.S. Unnasch. 2003. Are we conserving what we say we are: measuring ecological integrity within
protected areas. BioScience 53, 851-860.
Pasha, M.K.S., Stolton, S., Baltzer, M. and M. Belecky. 2014. Conservation Assured Tiger Standards: A Multifunctional
Protected Area Management Tool to Aid Implementation of International Conventions, Multilateral Treaties, Global
Initiatives & National Action. October 2014, Conservation Assured, Petaling Jaya, Malaysia.
Pauquet, P. 2005. Field-testing of Conservation International’s Management Effectiveness Assessment Questionnaire in Seven
Protected Areas in Bolivia. Parkswatch Bolivia.
Quan, J., Ouyang, Z.Y., Xu, W.H. and H. Miao. 2009. Management effectiveness of China nature reserves: status quo
assessment and countermeasures. State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-
Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China.
Ramsar. 2015. Evaluation of the management and conservation effectiveness of Ramsar Sites. Resolution XII.15. [Downloaded
from: www.ramsar.org/sites/default/files/documents/library/cop12_res15_management_effectiveness_e.pdf]
Roux, D.J., Murray, K., Nel, J.L., Hill, L., Roux, H. and A. Driver. 2011. From scorecard to social learning: a reflective co-
assessment approach for promoting multiagency cooperation in natural resource management. Ecology and Society 16(1):
24.
Salafsky, N., Salzer, D., Stattersfield, A.J., Hilton-Taylor, C., Neugarten, R., Butchart, S.H.M., Collen, B., Cox, N., Master, L.L.,
O’Connor, S. and D. Wilkie. 2008. A Standard Lexicon for Biodiversity Conservation: Unified Classifications of Threats and
Actions. Conservation Biology, 22: 897–911. doi: 10.1111/j.1523-1739.2008.00937.x.
SEF. 2012. Management Effectiveness Assessment: Thanda Private Game Reserve & Mduna Royal Reserve. Space for
Elephants Foundation (SEF), KwaZulu-Natal, South Africa.
Staub, F. and M.E. Hatziolos. 2004. Score Card to Assess Progress in Achieving Management Effectiveness Goals for Marine
Protected Areas. World Bank [downloaded from: documents.worldbank.org/curated/en/2004/07/6065915/score-card-
assess-progress-achieving-management-effectiveness-goals-marine-protected-areas].
Stephenson, P.J., Burgess, N.D., Jungmann, L., Loh, J., O’Connor, S., Oldfield, T., Reidhead, W. and A. Shapiro. 2015.
Overcoming the challenges to conservation monitoring: integrating data from in-situ reporting and global data sets to
measure impact and performance, Biodiversity, 16, 2-3, http://dx.doi.org/10.1080/14888386.2015.1070373
Stoll-Kleemann, S. 2010. Evaluation of management effectiveness in protected areas: Methodologies and results. Basic and
Applied Ecology 11 (2010) 377–382.
Stolton, S., Hockings, M. and N. Dudley. 2002a. Reporting Progress on Management Effectiveness in Protected Areas. A simple
site-level tracking tool developed for the World Bank and WWF. WWF International, Gland, Switzerland.
Stolton, S., Hockings, M. and N. Dudley. 2002b. Reporting Progress at Protected Area Sites: Background. An analysis of
existing methods used to assessment management effectiveness of protected areas by the World Bank. WWF International,
Gland, Switzerland.
Stolton, S., Hockings, M., Dudley, N., MacKinnon, K., Whitten, T. and F. Leverington. 2007. Management Effectiveness
Tracking Tool. Reporting Progress at Protected Area Sites: Second Edition. WWF International, Gland, Switzerland.
Swartzendruber, F. 2013. Sub-study on Results Based Management in GEF. OPS5 Technical Document #10. GEF Evaluation
Office.
Thomas, L. and J. Middleton. 2003. Guidelines for Management Planning of Protected Areas. IUCN, Gland, Switzerland and
Cambridge, UK.
UNEP WCMC and IUCN WCPA. 2016. Protected Area Management Effectiveness (PAME). Information document for the 20th
meeting of the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA), UNEP WCMC, Cambridge, UK.
UNEP-WCMC and IUCN. 2016. Protected Planet Report 2016. UNEP-WCMC and IUCN: Cambridge UK and Gland, Switzerland.
Valencia, I.D. and C. Duncan. 2006. Western Hemisphere Shorebird Reserve Network (WHSRN) Site Assessment Tool.
Western Hemisphere Shorebird Reserve Network, Manomet Center for Conservation Sciences, USA [download from: www.
whsrn.org/tools/site-assessment-tool/forms-tutorial].
Van Lavieren, H. and R. Klaus. 2013. An effective regional Marine Protected Area network for the ROPME Sea Area: Unrealistic
vision or realistic possibility? Marine Pollution Bulletin 72(2).
Walston, J., Robinson, J.G., Bennett, E.L., Breitenmoser, U., da Fonseca, G.A.B., Goodrich, J., Gumal, M., Hunter, L., Johnson,
A., Ullas Karanth, K., Leader-Williams, N., MacKinnon, K., Miquelle, D., Pattanavibool, A., Poole, C., Rabinowitz, A., Smith,
J.L.D., Stokes, E.J., Stuart, S.N., Vongkhamheng, C. and H. Wibisono. 2010. Bringing the tiger back from the brink: the six
percent solution. PLoS Biology 8(9).
Wildlife Conservation Division and Equilibrium Research. 2015. Management Effectiveness of Protected Areas in Bhutan: A
training session and initial assessment of five protected areas. Thimphu, Bhutan and Bristol, UK.
Wildlife Conservation Division and Equilibrium Research. 2016. External Assessment of Bhutan METT+ : Results for three
pilot protected areas. Thimphu, Bhutan and Bristol, UK.
Worboys, G.L., Lockwood, M., Kothari, A., Feary, S. and I. Pulsford (eds.). 2014. Protected Area Governance and Management.
ANU Press, Canberra.
World Bank/World Wildlife Fund. 2008. WB/WWF Biofuels Environmental Sustainability Scorecard [download from: http://
www.fao.org/bioenergy/28173-0ff9097a27061bb7225641118b93b617.pdf].
WWF. 2009. Protected area management effectiveness: METT. In: Adaptation of Landscape Tools in Support of REDD.
Contribution to the NORAD funded project Engaging Civil Society in REDD: Tools, Methodologies and Capacity Building
to Reduce Emissions from Forest Loss and Forest Degradation.
Zimsky, M., Ferraro, P., Mupemo, F., Robinson, J. and N. Sekhran. 2010. Results of the GEF biodiversity portfolio monitoring
and learning review mission, Zambia. Enhancing outcomes and impact through improved understanding of protected area
management effectiveness. Global Environment Facility, Washington, D.C.
Endnotes
1 assets.panda.org/downloads/mett2_final_version_july_2007.pdf
2 www.wildernessfoundation.co.za/projects/promoting-management-effectiveness
3 globalwildlife.org/our-work/regions/africa/engaging-communities-to-protect-the-wildlife-oasis-of-kitobo-forest/
4 www.sgmeet.com/icrs2016/viewabstract.asp?AbstractID=29638
5 papaco.org/286-2/
6 www.lestari-indonesia.org/en/usaid-lestari-program-launch-in-aceh/
7 www.whsrn.org/tools
8 www.undp.org/content/dam/aplaws/publication/en/publications/environment-energy/www-ee-library/mainstreaming/monitoring-guidelines-of-capacity-development-in-gef-operations/Monitoring%20Capacity%20Development-design-01.pdf
9 www.unpei.org/sites/default/files/PDF/ecosystems-economicanalysis/Financial-Sustainability-Scorecard-PA.pdf
10 www.thegef.org/gef/BD_tracking_tool
11 www.cbd.int/decision/cop/default.shtml?id=12297
12 www.protectedplanet.net/
13 whc.unesco.org/en/list
14 rsis.ramsar.org/
15 www.unesco.org/new/en/natural-sciences/environment/ecological-sciences/biosphere-reserves/world-network-wnbr/wnbr/
16 www.isealalliance.org/our-work/defining-credibility/credibility-principles
17 assets.panda.org/downloads/mett2_final_version_july_2007.pdf
BIODIVERSITY: Biodiversity, ecosystems and ecosystem services – our natural capital – must be preserved as the foundation of well-being for all.
MONITORING: Good information helps track whether conservation targets are being met.
EFFECTIVENESS: Ensuring we make the very best use of conservation investments.
LEARNING: Lessons from practical experience help build more robust conservation.
© 1986 panda symbol and ® “WWF” Registered Trademark of WWF. WWF-UK registered
charity (1081247) and in Scotland (SC039593). A company limited by guarantee (4016725)
WWF-UK
The Living Planet Centre, Rufford House, Brewery Road, Woking, Surrey, GU21 4LL
Tel: +44 1483 426 444 www.wwf.org.uk/