Studying Practices for Evaluation

The Methodspace focus for May 2022 is on evaluation. This multidisciplinary collection of open-access articles offers examples and recommendations for a variety of qualitative and quantitative approaches. Read and share!


Abma, T. A., Visse, M., Hanberger, A., Simons, H., & Greene, J. C. (2020). Enriching evaluation practice through care ethics. Evaluation, 26(2), 131–146. https://doi.org/10.1177/1356389019893402

Abstract. Recently, several authors have called for a critical assessment of the normative dimensions of evaluation practice. This article responds to this call by demonstrating how evaluation practice can be enriched through deliberate engagement with care ethics. Care ethics has a relational and practice view of morality and places caring relationships and responsibilities at the forefront of our being in the world. We will demonstrate how care ethics, in particular Joan Tronto’s moral-political theory of democratic caring, can help evaluators to reshape our way of working by placing caring and relationality at the centre of our evaluative work. Care ethics as a normative orientation for evaluation stretches beyond professional codes of conduct and rule- or principle-based behaviour. It is part of everything we do or do not do, how we interact with others, and what kinds of relationships we forge in our practice. This is illustrated with two examples: a democratic evaluation of a programme for refugee children in Sweden; and a responsive evaluation of a programme for neighbours of people with an intellectual disability in The Netherlands. Both examples show that a caring ethos offers a promising pathway to address the larger political, public issues of our times through the interrogation of un-caring practices. We conclude that a caring ethos can help evaluators to strengthen a caring society that builds on people’s deeply felt need to care, to relate, and to connect within and across communities.

Barbrook-Johnson, P., & Penn, A. (2021). Participatory systems mapping for complex energy policy evaluation. Evaluation, 27(1), 57–79. https://doi.org/10.1177/1356389020976153

Abstract. The use of complexity science in evaluation has received growing attention over the last 20 years. We present the use of a novel complexity-appropriate method – Participatory Systems Mapping – in two real-world evaluation contexts and consider how this method can be applied more widely in evaluation. Participatory Systems Mapping involves the production of a causal map of a system by a diverse set of stakeholders. The map, once refined and validated, can be analysed and used in a variety of ways in an evaluation or in evaluation planning. The analysis approach combines network analysis with subjective information from stakeholders. We suggest Participatory Systems Mapping shows great potential to offer value to evaluators due to the unique insights it offers, the relative ease of its use, and its complementarity with existing evaluation approaches and methods.
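The abstract does not spell out the analysis itself, but the basic idea of combining a stakeholder-built causal map with network analysis can be sketched briefly. The sketch below is only illustrative and assumes the networkx library and invented factor names; it is not the authors' actual procedure.

```python
# Illustrative sketch only: a stakeholder-built causal map represented as a
# directed graph and analysed with simple network metrics. Factor names and
# the "stakeholder focus" set are invented for this example.
import networkx as nx

# Directed edges: (cause, effect) pairs elicited from stakeholders
causal_map = nx.DiGraph([
    ("fuel prices", "household energy costs"),
    ("household energy costs", "fuel poverty"),
    ("insulation subsidies", "home retrofits"),
    ("home retrofits", "household energy costs"),
    ("home retrofits", "local employment"),
    ("policy funding", "insulation subsidies"),
])

# Subjective information from stakeholders: factors they consider most important
stakeholder_focus = {"fuel poverty", "local employment"}

# Simple network analysis: which factors lie on many causal pathways?
betweenness = nx.betweenness_centrality(causal_map)
for factor, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    flag = " <- stakeholder priority" if factor in stakeholder_focus else ""
    print(f"{factor}: betweenness={score:.2f}{flag}")

# Upstream factors that can reach a stakeholder priority (candidate levers)
levers = {f for target in stakeholder_focus if target in causal_map
          for f in nx.ancestors(causal_map, target)}
print("Candidate intervention points:", sorted(levers))
```

In practice the value lies less in the metrics than in the stakeholder dialogue that produces and validates the map; the network analysis simply helps surface factors worth discussing.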

Befani, B., Elsenbroich, C., & Badham, J. (2021). Diagnostic evaluation with simulated probabilities. Evaluation, 27(1), 102–115. https://doi.org/10.1177/1356389020980476

Abstract. As policy makers require more rigorous assessments for the strength of evidence in Theory-Based evaluations, Bayesian logic is attracting increasing interest; however, the estimation of probabilities that this logic (almost) inevitably requires presents challenges. Probabilities can be estimated on the basis of empirical frequencies, but such data are often unavailable for most mechanisms that are objects of evaluation. Subjective probability elicitation techniques are well established in other fields and potentially applicable, but they present potential challenges and might not always be feasible. We introduce the community to a third way: simulated probabilities. We provide proof of concept that simulation can be used to estimate probabilities in diagnostic evaluation and illustrate our case with an application to health policy.
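For readers unfamiliar with the idea, a probability that lacks an empirical frequency can still be approximated by simulating the process that would generate the evidence, then feeding the simulated estimates into Bayes' rule. The sketch below is a generic Monte Carlo illustration with invented parameters, not the method from the paper.

```python
# Generic Monte Carlo sketch: estimate P(evidence | mechanism) and
# P(evidence | no mechanism) by simulation, then update a prior with
# Bayes' rule. All parameters are invented for illustration.
import random

def simulate_evidence(mechanism_active: bool, n_runs: int = 100_000) -> float:
    """Return the simulated probability of observing the evidence."""
    hits = 0
    for _ in range(n_runs):
        # Toy generative model: the mechanism raises uptake of a service,
        # and the evidence is 'uptake above a threshold'.
        uptake = random.gauss(0.6 if mechanism_active else 0.4, 0.15)
        if uptake > 0.55:
            hits += 1
    return hits / n_runs

prior = 0.5                                # prior confidence the mechanism operates
sensitivity = simulate_evidence(True)      # P(E | mechanism)
false_positive = simulate_evidence(False)  # P(E | no mechanism)

posterior = (sensitivity * prior) / (
    sensitivity * prior + false_positive * (1 - prior)
)
print(f"P(E|M)={sensitivity:.2f}, P(E|~M)={false_positive:.2f}, "
      f"posterior={posterior:.2f}")
```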

Bundi, P., & Pattyn, V. (2022). Citizens and Evaluation: A Review of Evaluation Models. American Journal of Evaluation. https://doi.org/10.1177/10982140211047219

Abstract. Evaluations are considered of key importance for a well-functioning democracy. Against this background, it is vital to assess whether and how evaluation models approach the role of citizens. This paper is the first to present a review of citizen involvement in the main evaluation models commonly distinguished in the field. We present the results of both a document analysis and an international survey of experts who had a prominent role in developing the models. This overview is not only theoretically relevant but can also be helpful for evaluation practitioners or scholars looking for opportunities for citizen involvement. The paper contributes to the evaluation literature in the first place, but also aims to fine-tune available insights on the relationship between evidence-informed policy making and citizens.

Gilmore, B., McAuliffe, E., Power, J., & Vallières, F. (2019). Data Analysis and Synthesis Within a Realist Evaluation: Toward More Transparent Methodological Approaches. International Journal of Qualitative Methods. https://doi.org/10.1177/1609406919859754

Abstract. Realist evaluations are increasingly used in the study of complex health interventions. The methodological procedures applied within realist evaluations however are often inexplicit, prompting scholars to call for increased transparency and more detailed description within realist studies. This publication details the data analysis and synthesis process used within two realist evaluation studies of community health interventions taking place across Uganda, Tanzania, and Kenya. Using data from several case studies across all three countries and the data analysis software NVivo, we describe in detail how data were analyzed and subsequently synthesized to refine middle-range theories. We conclude by discussing the strengths and weaknesses of the approach taken, providing novel methodological recommendations. The aim of providing this detailed descriptive account of the analysis and synthesis in these two studies is to promote transparency and contribute to the advancement of realist evaluation methodologies.

Nobrega, S., Ghaziri, M. E., Giacobbe, L., Rice, S., Punnett, L., & Edwards, K. (2021). Feasibility of Virtual Focus Groups in Program Impact Evaluation. International Journal of Qualitative Methods. https://doi.org/10.1177/16094069211019896

Abstract. Focus groups are often used for qualitative investigations. We adapted a published focus group method for evaluating the impact of an organizational intervention to virtual delivery using video conferencing. The method entailed convening small groups of three to five participants for a 2-hour facilitated workshop. We delivered the virtual workshops, adding qualitative evaluation with researchers and participants, to assess the effectiveness of the protocol. We address the questions of how to structure the data collection procedures; whether virtual delivery permits cross-participant interactions about a studied intervention; and how easy and comfortable the experience was for participants. Participants were university faculty members who were the focus of an institutional diversity program. The results indicated that the virtually delivered focus group workshop could be successfully implemented with strong fidelity to the original protocol to achieve the workshop goals. The workshops generated rich data about the impacts of the institutional program as well as other events and conditions in the working environment that were relevant to consider along with the observed program outcomes. A well-planned virtual focus group protocol is a valuable tool to engage intervention stakeholders for research and evaluation from a distance. Video conferencing is especially useful during the current COVID-19 pandemic, but also whenever geography separates researchers and evaluators from program stakeholders. Careful planning of privacy measures for a secure online environment and procedures for structured facilitation of group dialogue are critical for success, as in any focus group. This article addresses a gap in the literature on the feasibility and methodology of using video conference technology to conduct qualitative data collection with groups.

Schmidt-Abbey, B., Reynolds, M., & Ison, R. (2020). Towards systemic evaluation in turbulent times – Second-order practice shift. Evaluation, 26(2), 205–226. https://doi.org/10.1177/1356389020909120

Abstract. The need for, and possibilities of, a second-order shift in evaluation practice are explored. Second-order evaluation practice enables an evaluator to improve practice as a skilled practitioner, acknowledging her embeddedness within an evaluand. The article explores evaluation practice as experienced by professional evaluators, using ideas from developmental evaluation coupled with systemic evaluation in the tradition of systems thinking in practice. Systemic evaluation aims to capture systemic sensibilities – the bigger picture – of complex turbulent situations of change underpinning evaluands. Attributes of second-order practice with systemic evaluation are understood as being aligned with both systemic and systematic modes of evaluation praxis. Personal experiences are provided where this juxtaposing praxis has been found wanting. By example, a systems thinking in practice framework is explored as heuristic support for making a second-order practice shift. The article concludes with a discussion of some implications for developments in professionalising evaluation practice and research.

Vitak, J., & Zimmer, M. (2020). More Than Just Privacy: Using Contextual Integrity to Evaluate the Long-Term Risks from COVID-19 Surveillance Technologies. Social Media + Society. https://doi.org/10.1177/2056305120948250

Abstract. The global coronavirus pandemic has raised important questions regarding how to balance public health concerns with privacy protections for individual citizens. In this essay, we evaluate contact tracing apps, which have been offered as a technological solution to minimize the spread of COVID-19. We argue that apps such as those built on Google and Apple’s “exposure notification system” should be evaluated in terms of the contextual integrity of information flows; in other words, the appropriateness of sharing health and location data will be contextually dependent on factors such as who will have access to data, as well as the transmission principles underlying data transfer. We also consider the role of prevailing social and political values in this assessment, including the large-scale social benefits that can be obtained through such information sharing. However, caution should be taken in violating contextual integrity, even in the case of a pandemic, because it risks a long-term loss of autonomy and growing function creep for surveillance and monitoring technologies.


More Methodspace Posts about Evaluation

Previous: The challenges of co-creation: some reflections on programming and (e)valuating MAKE@StoryGarden

Next: Practical Mapping for Applied Research and Program Evaluation