Categories: SAGE Posts
Simple question: Can you explain what the academic field of evaluation is?
If you’re not an evaluator, the answer is quite possibly no. And if you are an evaluator, the answer is … often no. As a glut of qualitative evidence in the academic press suggests, evaluators may struggle to explain what it is they do to an outsider, something pointed out in the literature review portion of a new paper, “So What Do You Do? Exploring Evaluator Descriptions of Their Work.”
Acknowledging the field’s relative infancy and low profile, Picciotto (2011) suggests this “fuzziness” is in part due to the field’s uncertain identity—a tendency for evaluators to describe evaluation in heterogeneous, diffuse, and often amorphous ways. … Indeed such ambiguity in evaluators’ professional identities has been noted in writings from the United States, Europe, Canada, and Israel.
(You can see the paper – which is free to read until the end of the year – for citations.)
Writing in the American Journal of Evaluation, the paper’s authors, Sarah Mason and Ashley Hunt, suggest what might at first seem an amusing hitch is actually a genuine issue. “While these complaints are often framed as mild frustrations, they have more serious implications. That is, professions survive on their ability to convince the public about the value of their work. If evaluators cannot describe what they do in a way that others understand, the field’s potential for advancement and expansion may be constrained.”
Mason and Hunt, both on the faculty at the School of Social Science, Policy and Evaluation at Southern California’s Claremont Graduate University, describe interviews with 42 practicing evaluators, drawn from the American Evaluation Association’s Find an Evaluator tool.
One key finding: most of the evaluators actually hit similar themes when describing their jobs.
In particular, evaluators tended to emphasize evaluation’s purpose rather than its process when talking about their work. That is, evaluators tended to focus their language on how evaluation can be used (e.g., to help improve social programs), rather than what it is or how it works. And while the evaluators we spoke with were not entirely consistent in their articulations of this purpose, common themes did emerge. These include evaluation as a tool to (a) help organizations improve performance, (b) measure progress toward goal attainment, and (c) make a value judgment, often about accountability or effectiveness.
But that broad agreement didn’t overturn the conventional wisdom about the difficulty of explaining evaluation to non-evaluators. “[C]onsistent with anecdotal reports,” the authors write, “many evaluators reported feeling challenged when talking about their work, with the most commonly perceived reactions being disinterest, boredom, or confusion.”
It can get better, Mason and Hunt suggest, offering several strategies or at least directions for dealing with future “and what do you do?” conversations.
By avoiding discussion of who we work with, where we work, when evaluations occur, and how we do our work—essentially, the contexts for evaluation—we provide our audiences with only a minimalistic basis from which to understand evaluation and one that may not advance awareness of our field. Focusing on process as well as purpose may enhance our audience’s understanding of what we do.
To read the full paper, click here.