CN: Week 1 Reading & Response

The assignment:

Read the following article: Evaluating Instructional Software. The article considers three general data sources for software evaluation: (1) experimental study, (2) expert opinion, and (3) user surveys.

Answer the following questions in your Group’s discussion topic area.

  1. Of the three data sources for software evaluation, which do you think is best and why?
  2. Would this overview of software evaluation be appropriate for tool/application software like word processors or spreadsheets?
  3. Which approach, constructivist or directed, is more in keeping with this author’s look at evaluation? The directed versus constructivist methods page contains more information on these philosophies.
  4. Since we will be concentrating on the expert opinion style of software evaluation, discuss what types of criteria you think would be necessary to consider in designing a software evaluation model.


Notes from the Article:

To evaluate software for educational use, there are three primary methods: experiments (was it effective?), expert opinions, and user surveys. The utility of an evaluation method is determined by the information it yields and its applicability.

  • Experiments:
    • most convincing, but time and cost prohibit widespread employment of this method.
    • previous experiments chalk up differences in outcomes to differences in teaching style rather than the software
    • “Effectiveness is operationally defined in terms of student achievement. This style of inquiry is referred to as a “black box” study. How a student learned or how to use the software is not addressed.”
  • Expert Opinions:
    • Experts have lists based on their experiences
    • Failure to involve learners in the evaluation process results in a subjective opinion by the reviewer.
    • Conclusion: Fails to take into account student outcomes unless students are intentionally included.
  • User surveys
    • Interpretive Evaluation: performed by observing user interaction with the software
    • Reiser and Dick (1990) created a method that involved the learner, and found results different from the subjective (expert) results.
    • Two years later they published modified results that attempted to answer the question: “Do students learn the skills that the program is designed to teach?” Data sources include “pretests, immediate and delayed posttest, and learner attitude surveys”.
    • The revision uses a representative student from the low, average, and high achieving sectors of the student body. Software efficacy is a function of the delayed post-test outcomes.
    • They conclude that teachers use criteria other than student achievement for their evaluations, and that the time and cost of in-class evaluation is worthwhile.
    • Anderson and Chen (1997) use an econometric model, focusing on user evaluation of features.
    • Castellan’s (1993) method is described as being thorough and advantageous to the reflective practitioner, but largely impractical to implement.

Author’s Conclusion:

“Software evaluation should be conducted by the instructor. The evaluation method used should yield information about quality, effectiveness, and instructional use” (p. 6).

Answers to the questions:

Of the three data sources for software evaluation, which do you think is best and why?

It’s hard to answer this question without considering what qualities would describe “the best.” If the concern is educational outcomes, then Reiser and Dick’s modified method (using information from user results) might be best.

Upon examining the study by Zahner, Reiser, Dick and Gill (1992), and considering the findings listed in the Experiments section of this paper, I wonder whether some of the problems with retention, and thus with the appropriateness of the software, had to do with teaching methods and student learning styles. The software appears to have used a direct method of instruction, and there was no discussion of audio, visual, or other types of communication that would serve learners who are not primarily linguistic learners. I am curious to know whether software with different instruction methods, perhaps one with problem-based instruction, would have performed better.

Would this overview of software evaluation be appropriate for tool/application software like word processors or spreadsheets?

I think this depends. Tools such as word processors and spreadsheets are seen primarily as tools for organizing and sharing information, whereas the software discussed in this paper was used for delivering instruction. I would argue that both need to be considered for their educational purpose and efficacy. Some educators may argue that learning to use software such as word processors and spreadsheets is a purpose unto itself, as preparation for life both in and beyond the classroom, since such tools are prevalent in most offices around the world.

Which approach, constructivist or directed, is more in keeping with this author’s look at evaluation? The directed versus constructivist methods page contains more information on these philosophies.

The article itself doesn’t offer many direct statements about pedagogy, but given that the articles I could find make no mention of contemporary, student-centered pedagogies such as problem-based learning, and given the description of the software used in Zahner et al. (1992), I would surmise that most of the pedagogies here have been direct-instruction based. Attention to standards and criteria alone, however, does not give adequate information about the pedagogy involved, as either a direct-instruction or a constructivist pedagogy may be employed to the same ends.

Since we will be concentrating on the expert opinion style of software evaluation, discuss what types of criteria you think would be necessary to consider in designing a software evaluation model.

Criteria should likely include not only hardware and software concerns, but also attention to sound pedagogy and outcomes in concert with curricular design.
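
To make this concrete, here is a minimal sketch (not from the article) of how an expert-opinion evaluation model might be expressed as a weighted rubric. The criterion names, weights, and ratings below are hypothetical examples chosen only to reflect the categories mentioned above.

```python
# A minimal sketch of a weighted-criteria evaluation rubric.
# The criteria, weights, and example ratings are hypothetical,
# not drawn from the article.

CRITERIA_WEIGHTS = {
    "hardware_software_fit": 0.15,   # runs on the available hardware/platform
    "pedagogical_soundness": 0.30,   # matches the intended instructional approach
    "curricular_alignment": 0.25,    # supports the curriculum's stated goals
    "learner_outcomes": 0.30,        # evidence of student achievement (e.g., post-tests)
}

def score_software(ratings):
    """Combine per-criterion ratings (0-5 scale) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[name] * rating for name, rating in ratings.items())

if __name__ == "__main__":
    example_ratings = {
        "hardware_software_fit": 4,
        "pedagogical_soundness": 3,
        "curricular_alignment": 5,
        "learner_outcomes": 2,
    }
    print(f"Weighted score: {score_software(example_ratings):.2f} / 5")
```

The point of the sketch is simply that the weights force the evaluator to state, up front, how much pedagogy and outcomes count relative to technical concerns.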


Reiser, R. A., & Dick, W. (1990). Evaluating instructional software. Educational Technology Research and Development, 38 (3), 43-50.


Published by

Kimberly Hogg

As a child, Kim would take apart anything she could put a screwdriver in to figure out how it worked. Today, she's still interested in exploring the processes and limits of our tools, whether online or in hand. Kim enjoys exploring and learning about anything and everything. When not at a computer, she enjoys birdsong and the smell of pine needles after a rain. Kimberly holds an MEd in Information Technology and a BA in Communication Studies. You can contact Kim here or on Twitter @mskhogg.
