But Does it Work? New Text on Evaluation to the Rescue!

Categories: Evaluation, Impact, MentorSpace, Research, Social Issues, Students, Teaching, Tools and Resources, Uncategorised

The long-awaited eighth edition of Evaluation: A Systematic Approach is now available. To learn more about the thinking behind this respected text and how it can be used, I conducted a written interview with co-author Dr. Mark Lipsey.

Learn more about the book and access instructional resources, sample chapters, and an Evaluation Tips poster.


Janet Salmons (JS): Given your definition of evaluation research as “a social science activity directed at collecting, analyzing, interpreting, and communicating information about the workings and effectiveness of social programs,” what characteristics distinguish evaluation from other types of social science research?

Mark Lipsey (ML): Evaluation research is not distinguished from other types of social science research as much by the methods it uses (nearly all of which are from the general social science toolbox) as by the questions it addresses and the audience for the evaluation results. The questions, of course, have to do with some aspect of the performance of a social program or policy, e.g., the nature of the social problem it intends to ameliorate, the approach it takes to addressing that problem, how well that approach is implemented, what impact it has on the target problem, and at what cost relative to its benefits. An inherent characteristic of evaluation questions is that they typically entail performance standards against which descriptive information about program performance can be assessed to draw evaluative conclusions; that is, a judgment about how well the program is performing with regard to the issues raised in the motivating evaluation questions. And the primary audience for that information is those with decision-making responsibilities for the program and other interested stakeholders.

 

JS: You note in Chapter 1 that the purpose of evaluation is to “inform action.” What, if any, characteristics distinguish evaluation from action research?

ML: “Action research” is a term that has been used to describe a wide range of research approaches oriented toward solving a problem or providing guidance for practice. Evaluation research might be seen as a particular form of action research that focuses on social programs and policies deliberately organized to ameliorate a social condition deemed problematic. The purview of action research, however, is typically viewed as much broader, encompassing, for example, organizational and community development. Also, some of the classic themes in action research are not especially salient in evaluation research. For example, action research generally refers to situations in which the researchers actively participate in organizing or implementing change while simultaneously conducting research on the process and results. The role of evaluation researchers, however, is mainly to provide evaluative information to those with responsibility for organizing or implementing social interventions, not to take on those essentially political roles themselves.

 

JS: You noted that time is a differentiating factor: because programmatic decisions must be made, evaluation research must proceed more quickly than is typical in other social research projects. However, social researchers are also feeling pressure to get studies done and published more speedily. Can social researchers learn from evaluation researchers so that they generate useful findings on a timely basis? What do you recommend?

ML: The time pressures on evaluation research stem from the political process and must be dealt with in that context by adjusting the scope of the evaluation, prioritizing the most immediately relevant issues, or, in some cases, acknowledging that credible results cannot be produced on the schedule desired by policymakers. Options of this sort that are closely tied to the political context and relevant decision-makers would not likely generalize well to social researchers not working in such a context. And doing the same research only faster is not generally a feasible option for evaluators, given practical constraints and the importance of producing empirically sound results. Nor is it apparent that this would be a good option for general social researchers either.

 

JS: With increasing demands for accountability and proof that funds are used appropriately and accomplish the identified purpose, it would seem that the need for skilled evaluators must also be growing. What specific knowledge or skills are essential for professionals in the field of evaluation? How can students or career changers develop appropriate research knowledge to prepare for work in this field?

ML: The need and opportunities for capable evaluators are indeed growing, while at the same time the capacity for training evaluation professionals continues to be limited; e.g., very few academic programs offer a degree in evaluation, and not many more allow an evaluation major within a broader degree program. A common route to evaluation expertise for those interested is academic training in an empirical social/behavioral science discipline, followed by practical experience on an evaluation team and/or relevant topic-specific workshops. Such workshops are offered regularly by The Evaluators’ Institute in freestanding training programs and by the American Evaluation Association in conjunction with its annual meeting. The question of the essential knowledge and skills for evaluators has been a matter of considerable discussion and some controversy in the field, largely because of the diverse backgrounds, domains of practice, and methodological orientations of evaluators. A task force of the American Evaluation Association recently developed a set of evaluation competencies that were adopted by the association and provide a good starting point for identifying essential knowledge and skills for evaluators.

 

JS: You described the need to evaluate, that is, to “ascertain the worth” of social programs. Researchers also need to demonstrate that their efforts are worthwhile. Importantly, they need to show the impact of their research on the advancement of knowledge and/or on finding useful solutions to the problems they study. How can researchers use the systematic approaches described in your book to evaluate their own research departments or programs?

ML: The concepts and methods of program evaluation have applicability beyond the task of evaluating social programs or policies, though this evaluation text does not attempt to discuss those applications in any detail. Nonetheless, the evaluation framework presented in this text can give guidance to the process of specifying what a research department or program intends to accomplish, how it intends to do so, what indicators are appropriate to assess its performance, and how to use such data to guide and improve performance.

 

JS: Tell us a little about the journey from the earlier editions to this one. The first edition of Evaluation: A Systematic Approach was authored by Peter Rossi, Howard Freeman, and Sonia Rosenbaum in 1979, and subsequent revisions were updated by Rossi and Freeman until, after the 5th edition, Mark Lipsey joined as a co-author.

  • How did the source material inspire you to carry forward this work?
  • With this revision, what do you feel is the most important contribution you have made to this landmark text?

ML: Because of its long history, Evaluation: A Systematic Approach is a highly evolved introductory textbook on program evaluation that has grown and developed over the decades along with the evaluation field itself. The revisions we have made for the 8th edition of this classic text maintain the comprehensive conceptual framework for program evaluation that has served well in prior editions while updating and refreshing selected topics and examples to reflect contemporary developments and practices in the field. The most notable revisions include an expanded discussion of impact evaluation, which is especially relevant to the current evidence-based practice movement, and a new chapter that provides extensive practical detail about planning an evaluation. Along the way, virtually every chapter has been revised with the intent of efficiently providing a comprehensive introduction to the various domains, concepts, and methods of program evaluation.

 
