PR Final Assign 2003
The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences
including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies.
Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship
between an evaluation and its impact is not a simple one -- studies that seem critical sometimes
fail to influence short-term decisions, and studies that initially seem to have no influence can
have a delayed impact when more congenial conditions arise. Despite this, there is broad
consensus that the major goal of evaluation should be to influence decision-making or policy
formulation through the provision of empirically-driven feedback.
Types of Evaluation
There are many different types of evaluations depending on the object being evaluated and the
purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that
between formative and summative evaluation. Formative evaluations strengthen or improve the
object being evaluated -- they help form it by examining the delivery of the program or
technology, the quality of its implementation, and the assessment of the organizational context,
personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects
or outcomes of some object -- they summarize it by describing what happens subsequent to
delivery of the program or technology; assessing whether the object can be said to have caused
the outcome; determining the overall impact of the causal factor beyond only the immediate
target outcomes; and, estimating the relative costs associated with the object.
• needs assessment determines who needs the program, how great the need is, and what
might work to meet the need
• evaluability assessment determines whether an evaluation is feasible and how
stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program or technology, the
target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or technology delivery
• process evaluation investigates the process of delivering the program or technology,
including alternative delivery procedures
DO’S:
1. Think beyond budget. The reason to evaluate is not to justify the budget but to assess whether a
campaign, its process and its methodology are producing results.
2. Set goals and objectives. Unless objectives are explicit and SMART (Specific, Measurable,
Achievable, Relevant and Timely), it is not possible to devise a meaningful form of evaluation.
3. Select key performance indicators based on campaign aims. They should reflect the goals – so think
about creating tangible outcomes wherever possible.
4. Use surveys to measure soft issues. Even with soft issues, where the aim may be to influence
attitudes and shape opinions rather than immediately change behavior, pre- and post-campaign
research can indicate how opinion is moving (see the first sketch after this list).
5. Build in tangibles. For marketing-based campaigns, build in a response channel that can be
monitored – an advice line, a dedicated e-mail channel, a response form for information, and so on.
6. Monitor traditional media. Media coverage is the starting point for many traditional evaluation
techniques. Sign up a good media monitoring company, brief them thoroughly, and keep them in the
loop about what you are issuing, when and to whom.
7. Monitor new media. In many areas, the web is more influential than traditional media. Sign up a
specialist new media monitoring company who can monitor web appearances for you and also, if
required, review newsgroups, blogs and feeds.
8. Google and DIY. If you don’t sign up a new media specialist, you can at least do it yourself by
selecting key words and phrases to search on Google pre- and post-campaign, to see how your
client’s ownership of and ranking against these key concepts has changed (a sketch of this
comparison follows the list). Google also provides a free ‘Alerts’ service: set a keyword and Google
will notify you of new appearances.
9. Multiple objectives require multiple measurement tools. Where a campaign has mixed objectives,
you may need a different evaluation technique for each, and you may need to combine quantitative
and qualitative measurement techniques. Again, this reinforces the case for keeping the objectives
clear and simple.
10. Borrow budget. In many cases behavior will be subject to multiple influences – PR, advertising,
direct mail, incentives, sales activity, and so on. This is a good reason for the cost of evaluation to
come from a general marketing pot, rather than just the PR budget.
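To make point 4 concrete, here is a minimal sketch in Python of a pre/post attitude comparison. The 1-to-5 rating scale and the sample scores are invented for illustration; in practice the numbers would come from your own pre- and post-campaign surveys.

```python
# Minimal pre/post attitude comparison (point 4 above).
# The 1-5 rating scale and the sample scores are invented for illustration.
from statistics import mean, stdev

pre_scores = [2, 3, 3, 2, 4, 3, 2, 3]    # attitude ratings before the campaign
post_scores = [3, 4, 3, 4, 4, 5, 3, 4]   # ratings from the follow-up survey

shift = mean(post_scores) - mean(pre_scores)
print(f"Pre-campaign mean:  {mean(pre_scores):.2f} (sd {stdev(pre_scores):.2f})")
print(f"Post-campaign mean: {mean(post_scores):.2f} (sd {stdev(post_scores):.2f})")
print(f"Opinion shift: {shift:+.2f} points on a 1-5 scale")
```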
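Likewise, the DIY keyword tracking in point 8 amounts to recording where the client ranks for each key phrase before and after the campaign, then comparing. A sketch with hand-collected, invented rank data (no search API or scraper is assumed here):

```python
# Hypothetical hand-collected Google ranks (1 = top result) for a client's
# key phrases, recorded before and after the campaign. The phrases and
# numbers are invented examples.
pre_ranks = {"sustainable packaging": 14, "eco refill": 22, "zero-waste shop": 9}
post_ranks = {"sustainable packaging": 6, "eco refill": 11, "zero-waste shop": 8}

for phrase, before in pre_ranks.items():
    after = post_ranks[phrase]
    movement = before - after  # positive = moved up the results page
    print(f"{phrase!r}: rank {before} -> {after} ({movement:+d} places)")
```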
DON’TS:
1. Don’t take all the responsibility. PR alone doesn’t drive sales and profitability, so although clients
will let you take heroic responsibility for them, explain that you are the messenger and that others
usually carry the message forward to action. Measure the PR contribution, not that of others.
2. Don’t disparage advertising value equivalents (AVEs). Academics and those promoting more
elaborate, and expensive, performance measures hate AVEs. Their merit is that they are low-cost
and quantify performance in simple monetary terms that the whole management team, but
especially the bean counters, can understand (see the sketch after this list).
3. Don’t rush to judgment. Many traditional media have a natural cycle that spans many months, and
opinion shifts often happen slowly. While it is tempting to seek an early measure of campaign
effectiveness, the true impact may not be measurable until several months have passed.
4. Don’t rely exclusively on clipping services. Do additional media research over and above that
provided by the clipping service. If you discover they are missing references, let them know and
agree on measures to improve their performance.
5. Don’t believe in magic bullets. There is no single evaluation technique that meets all needs.
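For reference, the AVE in point 2 of the don’ts is simple arithmetic: each clipping’s coverage space is priced at the publication’s advertising rate, and the results are summed. A minimal sketch with invented outlets and rate-card figures:

```python
# AVE: price each clipping's coverage space at the publication's ad rate,
# then sum. Outlets, sizes, and rate-card figures are invented examples.
clippings = [
    {"outlet": "Daily Examiner", "column_inches": 12.0, "rate_per_inch": 85.0},
    {"outlet": "Trade Weekly",   "column_inches": 6.5,  "rate_per_inch": 40.0},
]

ave = sum(c["column_inches"] * c["rate_per_inch"] for c in clippings)
print(f"Total AVE: ${ave:,.2f}")
```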
Steps in the Project Evaluation Framework
1. Specify, select, refine, or modify project goals and evaluation objectives (see Fig. 2.1, ‘Project
evaluation framework’).
• What is the general focus of the evaluation?
• What is to be evaluated?
• Why – what are the purposes?
3. Plan an appropriate evaluation design.
• What are the key questions that need answering?
5. Collect relevant data.
• From whom?
• By whom?
6. Process, summarize, and analyze relevant data.
• How will the information be analyzed and interpreted, and by whom? (Criteria for judging will
relate to Step 2.)
7. Contrast data with evaluation standards/criteria.
• To whom?
• By when?
Managers can and should conduct internal evaluations to get information about their programs so
that they can make sound decisions about the implementation of those programs. Internal
evaluation should be conducted on an ongoing basis and applied conscientiously by managers at
every level of an organization in all program areas. In addition, all of the program's participants
(managers, staff, and beneficiaries) should be involved in the evaluation process in appropriate
ways. This collaboration helps ensure that the evaluation is fully participatory and builds
commitment on the part of all involved to use the results to make critical program improvements.
Although most evaluations are done internally, conducted by and for program managers and
staff, there is still a need for larger-scale, external evaluations conducted periodically by
individuals from outside the program or organization. Most often these external evaluations are
required for funding purposes or to answer questions about the program's long-term impact by
looking at changes in demographic indicators such as graduation rate or poverty level. In
addition, occasionally a manager may request an external evaluation to assess programmatic or
operating problems that have been identified but that cannot be fully diagnosed or resolved
through the findings of internal evaluation.
Program evaluation, conducted on a regular basis, can greatly improve the management and
effectiveness of your organization and its programs. To do so requires understanding the
differences between monitoring and evaluation, making evaluation an integral part of regular
program planning and implementation, and collecting the different types of information needed
by managers at different levels of the organization.
A thorough evaluation helps both the organization and the individual identify strengths and
weaknesses in their respective contributions, provides for greater accountability of organizational
resources, and improves overall morale.