Planning for Evaluation
Thinking Beyond Surveys
For several reasons, surveys are the most commonly used tools for evaluating technology programs. First, they can measure a variety of program elements and participant characteristics, such as the number of computers in a school and teachers' and students' attitudes, opinions, behaviors, and other descriptive information. Second, compared with other evaluation methods, surveys are relatively inexpensive and can be quickly administered to a large number of people. Third, survey findings usually lend themselves to quantitative analysis, and the results can be expressed as percentages and means and presented in easily understood charts and graphs.
However, since the primary way to collect information through surveys is to ask people written questions, the evaluator has no control over misinterpretation of the questions, missing data, or inaccurate responses. If the entire technology-program evaluation design depends on surveys or self-reported data, the findings could be biased or could fail to reflect a complete picture of the program's quality and effectiveness. It is therefore important to think beyond surveys and to consider other evaluation designs and data-collection techniques. There are seven commonly used data-collection methods in educational technology program evaluation. The table below summarizes the methods and describes their advantages and disadvantages.
Data-collection methods | Advantages | Disadvantages
Questionnaires (self-administered) | Good for finding answers to short, simple questions; relatively inexpensive; can reach a large population in a short time. | Low response rate; no control over misunderstanding or misinterpretation of the questions, missing data, or inaccurate responses; not suited for people who have difficulty reading and writing; not appropriate for complex or exploratory issues.
Interviews | Yield rich data, details, and new insights; interviewers can explain questions that the interviewee does not understand; interviewers can probe for explanations and details. | Can be expensive and time-consuming; limited sample size; may present logistical problems (time, location, privacy, access, safety); needs a well-trained interviewer; qualitative data can be difficult or time-consuming to analyze.
Focus groups | Useful for gathering ideas, different viewpoints, and new insights from a group of people at the same time; facilitator can probe for more explanations or details; responses from one person provide stimulus for other people. | Some individuals may dominate the discussion while others may not like to speak in a group setting; hard to coordinate multiple schedules; takes longer to have questions answered.
Tests | Provide "hard data" that are easily accepted; relatively easy to administer. | Difficult to find appropriate instruments for the treatment population; developing and validating new tests may be expensive and time-consuming; tests can be biased and unfair.
Observations | Best for obtaining data about behaviors of individuals or groups; low burden for people providing data. | Time-consuming; some items are not observable; participant behavior may be affected by the presence of the observer; needs a well-trained observer.
Archival documents (student records, school plans, past program evaluations, etc.) | Low burden for people providing information; relatively inexpensive. | May be incomplete or require additional information; may need special permission to use.
Artifacts or products | Good evidence of impact; low burden for people providing data; relatively inexpensive. | May be incomplete or require additional interpretation.
Depending on the needs of the program, a sound evaluation design incorporates three or more of the above methods. Which methods to use should be determined by the evaluation questions, and complex questions often call for multiple sub-questions, each with its own appropriate data-collection method. For example, a frequently asked question about technology programs is, "How are teachers and students actually using technology?" This complex question might be divided into several sub-questions about the extent, nature, and frequency of teacher and student technology use. The table below, drawn from the National Science Foundation's User-Friendly Handbook for Project Evaluation, shows a simplified version of an evaluation design matrix.
Sub-questions | Data-collection approach | Respondents | Schedule
1a. Did teachers use technology in their teaching? | Questionnaires, Observations | Teachers, Supervisors | Pre/post project; twice per semester
1b. Did students use technology to learn science, math, or other subject areas? | Questionnaires, Interviews, Observations | Students, Teachers | Pre/post project; twice per semester
1c. How often did teachers use technology? | Questionnaires | Teachers, Students, Supervisors | Pre/post project
The National Science Foundation handbook (p. 19) suggests posing the following questions when determining the most appropriate approaches to data collection:
- Do you want to explore the experiences of a small number of participants in depth (case studies) or gather more general information from a larger population (surveys)?
- If you select a survey approach, do you want to survey all the participants, or can you select a sample?
- Do you want to evaluate what happens to project participants, or do you want to compare the experiences of participants with those of non-participants (quasi-experimental design)?
How these questions are answered will affect the design of the evaluation as well as the conclusions that can be drawn.
—by Anna Li, Ph.D.
SEIR*TEC Evaluator
Originally printed in SEIR*TEC NewsWire, Volume Five, Number Three, 2002