The table below explains the purpose and output of each step.
1. Write an evaluation objective
Purpose: To focus your evaluation. The objective is a description of what you want to know from the evaluation and how you plan to go about finding out. Without objectives, evaluators are unfocused and do not know what to measure. The objective will vary depending on the aspect of the program that is being evaluated.
Output: A well-written and clearly defined objective that includes these elements:
- What will be determined: the information you desire to know as a result of the evaluation.
- The number and types of people or sources from which you will gather evaluation information.
- The methods by which the "what" will be achieved, i.e., the information collection. Information collection can involve a spectrum of approaches, from counting the number of services, to completing a survey, to conducting focus groups, to using a standardized test that will result in a quantifiable score.
- The number and specific characteristics of the persons who will gather the evaluation information.
- The timeframe in which you expect the evaluation activities to take place.
For more about writing objectives, visit the next installment of The Power of Proof: Writing Evaluation Objectives.
2. Define the target population
Purpose: The target population is the group you plan to reach or evaluate.
Output: A description of the participants, for example: all persons from County XYZ who visited the clean indoor air booth at the local health fair.
3. Identify the evaluation type and design
Purpose: The evaluation "type" defines what you would like to evaluate about the program, such as the formation of one of its activities or an outcome. The "design" determines the methods you will use to gather the information that will answer your questions about things like the program's formation and outcomes. The choice of evaluation type and design is based on criteria such as the purpose of your evaluation and the questions you want answered. For more about evaluation types and designs, review Evaluation Types and Designs in the next section.
Output: An evaluation format that can be easily described and depicted, for use in sharing with the target population and other groups interested in the evaluation.
4. Choose suitable methods for collecting data
Purpose: There are two general methods of collecting data for program evaluations: qualitative and quantitative. Qualitative methods include open-ended collection techniques like personal interviews, focus groups, and observation. Quantitative methods are ways to collect data that can be translated into numbers; such methods include paper-and-pencil questionnaires, web-based questionnaires, and interviewer-administered surveys. The purpose of choosing suitable methods is to ensure that they will produce the type of information you are seeking. For more about data collection methods, go to The Power of Proof: Data Collection.
Output: A listing of quantitative and/or qualitative sources of information that meets your evaluation objective and suits the abilities and other requirements of your target population.
5. Create a timeline of tasks and resources
Purpose: Evaluations require careful organization of tasks and resources to be successful. Every evaluation should have an organized timeline that clearly identifies who will do what, when, and using which resources. For more on the kinds of resources required to conduct evaluations, review Gather the Necessary Resources and Decide Who Will Conduct the Evaluation in later sections.
Output: A timeline that includes elements like the following:
- The task(s) or job(s) that will accomplish the objective.
- The person who is responsible for completing the activity.
- Needed and available resources and support for completing the activity.
- The duration of the activity, including start and end dates. As the activity progresses, note any change of dates with explanation.
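As an illustrative sketch, a timeline like the one just described can be kept as simple structured records, here in Python. Every task, owner, resource, and date below is a hypothetical example, not part of any evaluation toolkit.

```python
from datetime import date

# Each entry records who will do what, when, and with which resources.
# All tasks, names, and dates are invented for illustration.
timeline = [
    {"task": "Design survey questionnaire",
     "owner": "Evaluation coordinator",
     "resources": "Survey software, staff time",
     "start": date(2024, 3, 1), "end": date(2024, 3, 15)},
    {"task": "Pilot-test questionnaire",
     "owner": "Program staff",
     "resources": "Small group from the target population",
     "start": date(2024, 3, 16), "end": date(2024, 3, 31)},
]

# Quick checks: every task has a responsible person and sensible dates.
for entry in timeline:
    assert entry["owner"], f"No owner assigned for: {entry['task']}"
    assert entry["start"] <= entry["end"], f"Dates reversed for: {entry['task']}"

# Print a simple who-does-what-when view.
for entry in timeline:
    print(f"{entry['start']} to {entry['end']}: {entry['task']} ({entry['owner']})")
```

Keeping the timeline as data rather than free text makes it easy to sort tasks by date or flag entries whose dates change as the evaluation progresses.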
6. Design and test instruments
Purpose: Data collection instruments (e.g., questionnaires) should be designed so that they are appropriate for the target population and the chosen methods for collecting the information. It is particularly important to attend to the developmental level and literacy of the target population. For interviews and focus groups, you will need interview guides. For surveys, you will need questionnaires. For record reviews, you will want a form on which to record the information you find. Once instruments are designed, they should be pilot-tested (ideally with a subset of the target population) to ensure that they are clear and acceptable.
Output: A list of tested instruments that are suitable for the objective, culturally sensitive, and otherwise appropriate for the target population.
7. Collect raw data
Purpose: Raw data are simply the information you collect as you run the program, like the number of people who came to your location or the number of items you have distributed. Raw data are collected from members of the target population and have not yet been processed or analyzed.
Output: Original information that can be verified with participants and that can be referenced if there are questions about the results.
8. Process and analyze the data
Purpose: After evaluation data are collected, they must be organized in a way that allows you to gather meaning from them. Analysis should be conducted at various times during the course of the evaluation to allow you and your staff to obtain ongoing feedback about the program. This feedback will either validate what you are doing or identify areas where changes may be needed.
Output: Summaries that are easily understood for the purpose of sharing the results with a variety of populations. Qualitative outputs will include groups of quotes and charts showing how information appears to be linked from the perspective of the target population. Quantitative outputs will include tables, graphs, charts, and other easily understood summaries.
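To illustrate the kind of quantitative summary described above, here is a minimal Python sketch that tallies raw survey responses into a table of counts and percentages. The question and the responses are invented for the example.

```python
from collections import Counter

# Hypothetical raw responses to one survey question, e.g.
# "Should indoor public places be smoke-free?"
responses = ["yes", "yes", "no", "yes", "unsure", "yes", "no", "yes"]

counts = Counter(responses)
total = len(responses)

# Summarize each answer as (count, percentage of all responses).
summary = {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}

# Print an easily understood frequency table.
for answer, (n, pct) in sorted(summary.items()):
    print(f"{answer:>7}: {n:2d} ({pct}%)")
```

Running this tally at several points during the evaluation, as the text suggests, gives ongoing feedback rather than a single end-of-program snapshot.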
9. Write an evaluation report
Purpose: For all evaluations, a report should be written to communicate the results, progress toward goals, a description of the methods used to arrive at the results, and details about how the results are going to be used.
Output: A written report that includes these elements:
- Explanation of the evaluation purpose.
- Description of the participants.
- How results will be used to improve the program.