Archived Webinar – Evaluation Failures: Case Studies for Teaching and Learning

Categories: Evaluation, Webinar

It’s often said that failure is the best teacher. In this free webinar, well-known evaluators Kylie Hutchinson, Benoit Gauthier, and E. Jane Davidson share their real-life evaluation blunders and ways to incorporate the lessons learned into the classroom. Professional evaluators, instructors, and evaluation students are encouraged to watch this event, which included an audience question-and-answer session. Below, Hutchinson and Davidson answer additional questions that didn’t fit into the hour-long webinar.

Hutchinson, principal consultant with Vancouver-based Community Solutions, is the author of An Innovative Guide to Evaluation Reporting and Survive and Thrive: Three Steps to Securing Your Program Sustainability. Gauthier, president of the Quebec-based firm Circum, is the joint editor of Recherche sociale: de la problématique à la collecte des données, which has seen six French editions and one in Portuguese. Davidson, director of Davidson Consulting Limited (New Zealand), is the author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation.

 

This topic is particularly interesting for our discipline because I think many of us are type-A personalities and hold ourselves to a very high standard. Are there other professions/disciplines that are “good” at failing that we can take lessons from?

  • Great question! I’m not sure….
  • I think one of the first groups to really embrace learning from failure was engineering, because the failure stakes are so much higher. Check out Engineers Without Borders.

Do you think some sort of contract laying out timelines, etc., at the beginning of an evaluation would help?

  • Hutchinson: I think that would definitely help; in fact, in many cases I would strongly recommend it. If it’s a Developmental Evaluation you might not be able to predict all of the evaluation activities, but a timeline helps keep everyone on track and provides some accountability for both the evaluator and the client in terms of furnishing the information the evaluator requires in a timely way. There are often only certain windows for collecting data, so it’s good to know these in advance.
  • Davidson: I had timelines laid out in the contract. The problem was that they didn’t incorporate massive changes that occurred well after we had agreed to those timelines. So, it’s always good to have a timeline to aim for, but the issue is having a process for revisiting it later, in case things don’t pan out.

Have any of you had experience needing to pick up where someone else left off, either due to turnover or contract issues or something else? How do you a) calm down the client and quickly build trust, and b) manage their expectations if a lot of the budget and time was eaten up by the prior evaluator?

Hutchinson: Yikes. My first thought would be to listen deeply and let the client say what they need to say. What worked and didn’t work about the previous evaluation/evaluator? Then I wouldn’t be afraid to prepare a new evaluation plan that salvages what has been done so far but adds my own thoughts on how to finish within budget. Most importantly, finishing the evaluation the way it was originally intended should not come at your expense (e.g., going over budget, too many unbillable hours, losing your shirt). Evaluations come in Tesla, Toyota, and beat-up Chevy versions; you may have to adapt what was originally proposed. Look for ways to economize: for example, is there some data collection or analysis that the client can do? Can you submit a two-page report instead of a longer traditional one?

For those of us starting out in evaluation and somewhat isolated, any tips on how to find a mentor in evaluation? It seems like there are so many different approaches and types of evaluators… I’m not sure where to start in finding a good mentor. Anything to consider to narrow my search?

Hutchinson: First of all, I would start attending evaluation events in your area or national conferences to meet people in person. Volunteer for your local evaluation chapter so people get to know you and your skills.

I’d look for someone who’s clearly willing and has the time. Make a formal ask; don’t just assume they’re doing it or let it fall into place casually. Prepare some clear expectations of what you’re hoping they can provide, and keep these to a minimum, because most experienced evaluators are really busy and large demands may scare them away. For example, I’m always happy to answer one-off questions from new evaluators, and I average about an hour a month doing this, but the idea of coaching someone formally is too much for me. Maybe you just want someone you can take for coffee a couple of times a year to pick their brain, or ask a question over the phone a few times a year. I’d also research the person a bit and find out whether their skills and style are close to your own: are they more quantitative or qualitative? An internal or external evaluator? Experienced in your program area (e.g., health vs. international development)? Experienced in the evaluation approach you’re interested in (e.g., developmental or participatory evaluation vs. RCTs)? Finally, remember that it’s an honor and a boost to one’s self-esteem to be asked to mentor, so don’t hold back from finding someone.

What is a process you use to develop evaluative capacity amongst the people you work with? And, how do you determine the outcomes of projects (not outputs, but outcomes)?

Hutchinson: These are two very large questions that touch on basic evaluation concepts and are unfortunately outside the scope of this webinar. I would suggest finding some formal training in evaluation to answer them.

Do you think qualitative evaluation alone is sufficient to evaluate a program?

Hutchinson: The short answer is, it depends on the program and its context. Mixed methods have many advantages and should be a goal, but sometimes only qualitative data can be collected, or there aren’t resources to do both.

Davidson: In general, mixed methods is best. However, I have certainly done evaluations that were 100% qualitative, and that was absolutely the right choice in those cases, and led to clear evaluative conclusions. In contrast, I do NOT believe it is ever appropriate to do an evaluation that is 100% quantitative. Quantitative measures require that everything you need to know can be anticipated in advance and nothing unexpected can possibly come up. This is never, ever true.

Scope creep is super challenging and really common for me, especially in developmental evaluation (DE). I find not every client is open to recognizing and dealing with this, though (both government and non-profits). Sometimes it happens when there are real capacity issues in the client organization. Can you talk more about drawing boundaries and addressing scope creep while still maintaining the relationship on a long-term project?

Hutchinson: This is a challenging part of DE, particularly for external evaluators. I once found myself doing the job of an inept project manager on a DE, and it was really frustrating and a big money loser.

I recently found some resources on budgeting for DE for someone in one of my workshops; if you contact me offline (kylieh@communitysolutions.ca), I can send them to you. I’m not sure whether you’ll find them helpful.

My advice would be to establish very clear communication with the client at the outset about the potential for scope creep in DE and how you need to monitor it regularly. It’s awkward having these discussions, I know; we all hate to talk about money and sound greedy. Keep track of your hours diligently and report a detailed breakdown to the client monthly so they see where your time goes.

I’ve also often thought about using a retainer for my next DE project.

I appreciate the mention of how important the paper trail is for successful evaluation projects. Other than the typical Gantt chart, do you have any best practices or resources on how to document well?

Hutchinson: Build in lots of real-time feedback loops. For example:

  • Short updates by email (never text; I once had a client who only sent me texts, and I always copied them and replied by email).
  • More formal monthly or interim reports.
  • Short updates in meetings that get recorded in the minutes.
  • Project materials on Google Docs, where the client can see real-time progress and add input.

Is scope creep always a bad thing? Do the presenters have any examples of when some scope creep was beneficial to the project?

Hutchinson: From a developmental evaluation perspective, I suppose scope creep is a good thing. But rather than looking at it as simply good or bad, recognize that it’s a reality in many evaluations. Things change, people come and go, and life and work are not static. The problem arises when the evaluator is unprepared for, or not reimbursed for, the extra time required to deal with scope creep; that’s when it needs to be discussed and managed.

 
