October 2019 Focus on Evaluation

Welcome October Mentors-in-Residence, Drs. Bernadette Wright and Steve Wallis!

Bernadette and Steve are the co-authors of a new book from SAGE, Practical Mapping for Applied Research and Evaluation. You can find a sample of Chapter 1 and online resources for the book. Enter the code MSPACEQ222 for a discount on your book purchase.

I asked Bernadette and Steve a few questions so they could introduce themselves and their work.


JS. Tell us a little about the journey that resulted in this book. How did you come to write this book together? How did you build on your respective backgrounds and experiences in this collaborative effort?

Dr. Steve Wallis

Steve: We first “met” by reading each other’s work online. We shared a frustration with the status quo and the need to find new ways to conduct more effective evaluations. We collaborated on a number of smaller publications (white papers, academic articles, and so on) and learned to respect each other’s creativity, hard work, dedication, and knowledge.

Dr. Bernadette Wright


Bernadette: I came across Steve’s website, and I liked what he was saying about how we need ways of advancing better theories for solving social problems. So I joined his email list. One day, he posted that he was looking for someone to work with him to finish a paper for publication in a journal. I was intrigued, so I volunteered, and that led to writing that paper and then more research, writing, and facilitating workshops together.

Steve: Bernadette brought a great depth of experience from the world of evaluation, while I brought new insights from the world of organization development. Together, we developed new methods of knowledge mapping that could lead to more useful evaluation results for understanding and improving programs.

JS. Given your definition of evaluation research as “research conducted to provide information for shaping effective programs, policies, and other actions,” what characteristics distinguish evaluation from other types of social science research?

Steve: An interesting question, Janet. Some researchers in various fields might argue with us, but there is actually quite a lot of overlap between evaluation research and other kinds of research in the social sciences. The main difference is the focus or topic. Evaluation research is very much focused on providing organizations with information to help them run their programs, policies, and initiatives more effectively, and to demonstrate their effectiveness to funders. The flip side of that focus is the need to understand “how the world works;” that is, what is going on in the surrounding community. That’s where other research comes in.

Bernadette: Interesting that you ask, because I recently wrote a short article about this on LinkedIn. Both social research and evaluation studies vary a great deal in their purpose and the types of questions they explore. So, we might think more in terms of a 2x2 grid than two categories.

  • One dimension is the research purpose. Social research can be conducted for generalized knowledge or for a specific practical application. An example study for generalized knowledge would be a psychological study to test a theory about how people make decisions. An example study for practical application would be research that a membership organization might conduct to get information they need for planning their future services. Evaluation can be focused on practical application (such as an evaluation that provides feedback for developing and enhancing a program) or on measuring impact on pre-determined goals.

  • Another dimension is the research topic. Evaluations focus on measuring specific programs, policies, and other activities by a specific organization or group. Other social research can focus on a wider set of topics, such as understanding public perceptions of an issue and understanding how people and social groups function.

All of these types of research and evaluation are valuable, depending on the specific situation. So, I like to focus less on what we call it and more on how to do research or evaluation that will help answer the questions that need answering.

JS. Your definition also frames evaluation as research “to purposefully bring about social or environmental change.” Can you say more about that purpose? What, if any, characteristics distinguish evaluation conducted with a purpose for change from action research?

Steve: Evaluation and action research are both examples of research that is “applied” rather than conducted for generalized knowledge. Action research would fit very neatly into the process we present in our book.

Bernadette: It depends on how the specific evaluation is done. Action research can be and has been used as an approach to evaluation. One example we cite in our book can be found here: http://grantcraft.org/wp-content/uploads/sites/2/2018/12/par.pdf. Many evaluations use methods similar to action research, involving the people engaged in the program in all aspects of the evaluation, without using that term. Evaluations can also be conducted in a non-participatory manner that is not focused on usefulness for action.

JS. With increasing demands for accountability and proof that funds are used appropriately and accomplish the identified goals, it would seem that the need for skilled evaluators must also be growing. What specific knowledge or skills are essential for professionals in the field of evaluation? How can students or career changers develop appropriate research knowledge to prepare for work in this field? 

Steve: Great question, Janet. Yes, there is a growing demand for good evaluators. In fact, it is something of a (true, yet painful) joke that some people are referred to as “Monday morning evaluators” because their boss walks in with a surprise assignment!

Those who want to have more preparation time for their career would do well to start by learning the basics. Luckily, many of the same research skills are found in a wide variety of fields (psychology, sociology, business, and so on). So, people entering the field may have already learned about a range of methods. Another helpful approach is to work with a seasoned researcher for your first few projects. You will want to know about the general context of your organization – what kind of work it is doing. And, getting down to brass tacks, what is the range of research methods open to you, and which methods might you use in which situations? Writing skills are a plus, along with strong ethical standards, project management (because every research project is… yes… a project), and the ability to think clearly and logically. Then there are interview skills and, more generally, communication skills. We could go on… but that should give people a taste. Anyway, they are all in the book.

Bernadette: For evaluators in the U.S., the American Evaluation Association recently published a set of evaluator competencies. Some of the evaluator professional associations in other countries also have their own sets of competencies. I’d advise new evaluators to develop their skills in those areas. Plenty of options are available for new and experienced evaluators alike to enhance their skills. Join your national and local evaluation professional association and attend their conferences and other learning events. Take some graduate courses, professional development workshops, or webinars in evaluation. Read evaluation books, journals, and blogs.

Steve: And, of course, our book.

Bernadette: Yes! [Laughs] The opportunities for learning more about evaluation are almost endless – much like the opportunities for improving our organizations and our planet.
