
5. The Plan


Basic Components

To make development of a project evaluation manageable, the evaluation planning team will likely find it helpful to break the whole design into bite-sized pieces. Section 3 (Theory) explains how to begin this process by developing a logic map describing Goals, Objectives, Strategies, and Inputs.
The next step is to flesh out the basic components of the evaluation design - evaluation questions; indicators of performance, quality, or success relating to those questions; data collection methods and measures; benchmarks; and an explanation of likely uses of evaluation findings.

Evaluation Questions

It is important to emphasize here that there are two primary types of evaluation questions that might be considered once a project is underway - implementation questions and impact questions. A comprehensive project evaluation will typically include both.

Implementation Questions - These questions examine the quality of implementation of project strategies and the smaller activities that make them up, asking about the degree to which a project is implemented with fidelity to its original design (i.e., "Is the project completing the activities it should be and, if so, how well?"). Implementation questions may also consider quantity - the number of hours of training provided, for example.

Importantly, successful implementation must not be defined in terms of whether or how well desired Objectives or Goals are achieved: It is possible for a project to be implemented with quality but still fail to further the desired outcomes, if other factors bearing on the outcome are not considered in the project design.

Impact Questions - These questions ask about the degree to which project Objectives and Goals have been achieved, or the progress that has been made toward their achievement (i.e., "Did the project make the expected difference?"). They may also be framed in terms of either quality or quantity, and it is often helpful to consider both types of questions during evaluation planning.

Determining Evaluation Questions

It might be helpful to see how both types of question fit into the example logic map (PDF) provided earlier. As illustrated there, while there may not be a one-to-one correspondence between them, each Strategy will be addressed by one or more implementation questions, and each Objective or Goal will have one or more impact questions associated with it.

It is likely that multiple questions will bear on any given Strategy or Objective, as illustrated in From Logic Map to Evaluation Questions (PDF). However, it is unlikely - but not impossible - that one question will address more than one Strategy or Objective.
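For planning teams who track these relationships in a spreadsheet or script, the mapping described above can be sketched as a simple data structure. This is purely an illustrative sketch - the Strategy, Objective, and question wording below are hypothetical examples, not drawn from any actual SEIR*TEC logic map:

```python
# Illustrative sketch: each Strategy maps to one or more implementation
# questions, and each Objective or Goal maps to one or more impact
# questions. All names and questions below are hypothetical.
evaluation_plan = {
    "strategies": {
        "Provide ongoing technology training for teachers": [
            "Were the planned training sessions delivered on schedule?",
            "How closely did session content follow the original design?",
        ],
    },
    "objectives": {
        "Teachers integrate technology into regular instruction": [
            "To what degree do classroom observations show"
            " technology-enhanced lessons?",
        ],
    },
}

def questions_for(plan, kind, item):
    """Return the evaluation questions associated with a Strategy or Objective."""
    return plan[kind].get(item, [])

# Every Strategy in the plan should have at least one implementation question.
for strategy, questions in evaluation_plan["strategies"].items():
    print(strategy, "->", len(questions), "question(s)")
```

Keeping the mapping explicit in one place makes it easy to spot a Strategy with no implementation question attached, or an Objective with no impact question.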

Are these the same as Research Questions?

It is worth mentioning here that "research questions" differ from formative evaluation questions - they are more specialized, as is appropriate given the differences between evaluation and research described in the first section of this document.

The questions that drive research efforts test a hypothesis, examining a theoretical strand through the logic of a project in an attempt to determine the degree to which specific Strategies are associated with Objectives and Goals that existing research suggests might be related. For example, the research question in this variation of the example logic map (PDF) hypothesizes that...

If communication with families and the community is established and maintained, if technology is used to facilitate collaboration with stakeholders, and if technology infrastructure is installed, maintained, and upgraded; then the district, community, and schools will provide a supportive environment for technology use.

Sometimes a well-funded, longer-term comprehensive project evaluation might ask such questions, but when this is the case, they are often pursued by external evaluators, are undertaken for summative rather than formative purposes, or both. Questions of this type typically require data generated only after a project has been implemented for some time, by which point their findings are not particularly useful for formative purposes.

The evaluation will be most manageable and useful if the planning team narrows evaluation questions to focus specifically on attributes of implementation and impact (speaking to Objectives and Goals). This template might facilitate that process, although it will require a download for those without Inspiration already installed on their computers. See the bottom of this page for a link.


Indicators

Bob takes a long time to finish his work. Is Bob detail-oriented and thorough, or simply lazy?

In this example, the length of time that Bob takes to finish a project is an indicator - a measurable or observable attribute - of the quality of his effort. The problem here however is that it might be an accurate indicator of either of these conceptual terms - or constructs - that we might use to describe how hard he works.

Most of the things technology project managers care about are also constructs (e.g., integration of technology) that are made up of complex combinations of attributes. This makes it necessary to think critically about the possible indicators on which statements about implementation quality or achievement of outcomes (impact) might be based.

For example, it is not enough to define as a Strategy that, "Teachers use research-based, technology-enhanced practices with students." It is necessary, for the purposes of both project implementation and associated evaluation efforts, to come to consensus on definitions of the constructs "research-based" and "technology-enhanced."

Note here that, in many discussions about evaluation, the term "indicator" refers to "indicators of project success." The SEIR*TEC framework applies the same word to smaller aspects of that success, speaking of "indicators of project implementation fidelity or quality" (applied to strategies), or "indicators of impact" (examining objectives).

Saying that indicators must be observable or measurable should not be taken to mean that they must be "quantifiable." If project evaluation makes it important to assess the quality of something - student work, for example - it is neither necessary nor appropriate to resort to simply counting what can be counted. Instead, determine quality relative to established benchmarks and exemplars, considering attributes that really matter. Similarly, if a school staff's collective attitude about something is an indicator that matters, it is okay to simply ask staff members what they think.
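Once the planning team reaches consensus, the decomposition of a construct into agreed-upon observable indicators can be recorded explicitly. A minimal sketch, with an entirely hypothetical construct and indicator list:

```python
# Illustrative sketch: a construct such as "technology-enhanced practice"
# is defined by several observable indicators, not a single measurement.
# The construct name and indicators below are hypothetical examples.
constructs = {
    "technology-enhanced practice": [
        "frequency of student technology use during lessons",
        "variety of technology-supported tasks observed",
        "alignment of technology use with lesson objectives",
    ],
}

def indicators_for(construct):
    """Return the agreed-upon observable indicators for a construct."""
    return constructs.get(construct, [])

# List what observers should actually look for in classrooms.
for indicator in indicators_for("technology-enhanced practice"):
    print("-", indicator)
```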

Methods & Measures

The measures used to collect evaluation data - and the methods used to apply those measures and analyze resulting information - must be well-matched to pertinent evaluation questions, indicators, and benchmarks.

For example, if an evaluation asks about how well teachers are "modeling technology use for their students," it is not sufficient to simply ask them if they are. A survey, interview, or focus group can assess the degree to which they think they are modeling technology use, but not the degree to which they might actually be doing so. In order to find out what is really happening, it would be necessary to observe classroom practice.

Data for formative evaluation should be collected from a variety of sources, using methods that include surveys or questionnaires (resulting in perceptual data - what people think or feel), observation protocols (structured ways of looking at something), interviews, focus groups (essentially group interviews), and the examination of artifacts - things created by project participants or stakeholders.

Evaluations may also utilize data from professional development sign-in sheets, student attendance or discipline referral forms, email communication records, or other sources appropriate to the questions, indicators, and benchmarks at hand. Different methods and measures have differing benefits and disadvantages that must also be considered during evaluation planning.


Benchmarks

Regardless of the indicator being examined, it is necessary to define benchmarks in advance - the targeted standards or levels to which measured conditions will be compared to define degrees of success.

Benchmarks are often indexed to time as a way of gauging impact over the life of a project implementation, following the format in the SEIR*TEC-recommended Format for Writing Benchmarks (PDF). This can lead to some confusion of terminology, as the distinction between benchmarks and objectives becomes blurred, but the critical aspect of the term "benchmark" is its use as a point of comparison for data collected during evaluation.

Benchmarks may seem somewhat arbitrary, particularly if project or evaluation designers do not have much experience on which to base them. It may also be tempting to set benchmarks at levels easily achievable, in an effort to make a project look successful. In the end, it is less important that benchmarks be met than it is that they effectively guide project implementation and evaluation, based on real-world expectations of project success.
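Because a benchmark serves as a point of comparison for data collected during evaluation, the basic check itself is simple arithmetic. A minimal sketch, with hypothetical indicator names and hypothetical time-indexed target levels:

```python
# Illustrative sketch: compare measured indicator values against
# time-indexed benchmark targets. Indicator names and target levels
# below are hypothetical, not SEIR*TEC recommendations.
benchmarks = {
    # (indicator, project year) -> target level
    ("teachers completing training (%)", 1): 40,
    ("teachers completing training (%)", 2): 75,
}

def benchmark_met(indicator, year, measured):
    """Return True if the measured value meets or exceeds the target."""
    return measured >= benchmarks[(indicator, year)]

print(benchmark_met("teachers completing training (%)", 1, 52))  # True: 52 >= 40
print(benchmark_met("teachers completing training (%)", 2, 60))  # False: 60 < 75
```

Indexing targets by project year, as in the SEIR*TEC-recommended format, lets the same indicator be judged against rising expectations over the life of the project.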

Use of findings

It is important to the success and credibility of any evaluation effort that the intended uses of its findings be clearly communicated to project stakeholders, and that misuses of evaluation data or findings be avoided.

For example, it would be a serious compromise to reward or sanction a teacher based on observations of their classroom practice that they have been told are required for formative evaluation of a technology implementation. The trust necessary to allow access to this crucial data would be damaged and word would undoubtedly spread to the rest of the staff, making it impossible to get at observation data regarding project implementation or impact.

Similarly, it would create problems if school-level formative evaluation findings - intended to monitor and adjust project implementation - were used to publicize school successes with technology in the press, as staff members might well perceive value in fudging evaluation data in an effort to make the school look good.

Next > 6. Data Sources: Some Examples




This page last updated 6/23/05