Tips for Evaluating Your Research

How do you evaluate success in research?

Methodspace offers resources on methods and strategies for research conducted for the purpose of evaluation. Sometimes, though, we need to turn that evaluative lens on our own work as researchers. Looking at strategies used in fields of study, disciplines, and methodologies different from your own can yield new perspectives. This collection of open-access SAGE Journals articles offers practical suggestions for evaluating the design, process, and outcomes of a study.

Open Access articles from SAGE Journals:

Aksnes, D. W., Langfeldt, L., & Wouters, P. (2019). Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories. SAGE Open. https://doi.org/10.1177/2158244019829575

Abstract. Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions, and how do citations relate to research quality? These and similar issues have been addressed through several decades of scientometric research. This article provides an overview of some of the main issues at stake, including theories of citation and the interpretation and validity of citations as performance measures. Research quality is a multidimensional concept, where plausibility/soundness, originality, scientific value, and societal value are commonly perceived as key characteristics. The article investigates how citations may relate to these various research quality dimensions. It is argued that citations reflect aspects related to scientific impact and relevance, although with important limitations. In contrast, there is no evidence that citations reflect other key dimensions of research quality. Hence, an increased use of citation indicators in research evaluation and funding may imply less attention to these other research quality dimensions, such as solidity/plausibility, originality, and societal value.
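To make one of the concepts in this abstract concrete, the sketch below shows how a simple field-normalized citation indicator is typically computed: a paper's citation count divided by the average citation count of papers in the same field and year. This is a minimal illustration, not the article's own method, and every name and number in it is invented.

```python
# Illustrative sketch of a field-normalized citation indicator.
# All names and numbers here are invented; real scientometric
# databases use far more careful normalization.

# (field, year) -> average citations per paper for that field and year
field_baselines = {
    ("sociology", 2019): 4.2,
    ("medicine", 2019): 11.5,
}

papers = [
    {"title": "Paper A", "field": "sociology", "year": 2019, "citations": 9},
    {"title": "Paper B", "field": "medicine", "year": 2019, "citations": 9},
]

for paper in papers:
    baseline = field_baselines[(paper["field"], paper["year"])]
    score = paper["citations"] / baseline  # >1.0 means above the field average
    print(f"{paper['title']}: normalized citation score {score:.2f}")
```

The same raw count of nine citations yields very different normalized scores in different fields, which illustrates the article's caution about treating citation counts as a direct measure of research quality.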

Atkinson, P., Baird, M., & Adams, K. (2021). Are you really using Yarning research? Mapping Social and Family Yarning to strengthen Yarning research quality. AlterNative: An International Journal of Indigenous Peoples, 17(2), 191–201. https://doi.org/10.1177/11771801211015442

Abstract. Yarning as a research method has its grounding as an Aboriginal culturally specified process. Relationality is significant to the Research Yarn; however, it is a missing feature of published research findings, and this article aims to address that gap. The research question was: what can an analysis of Social and Family Yarning tell us about the relationality that underpins a Research Yarn? Participant recruitment occurred using convenience sampling, and data collection involved the Yarning method. Data analysis proceeded in five steps, featuring Collaborative Yarning and Mapping. Commonality existed between the researcher and participants, predominantly through shared experiences of being part of an Aboriginal community, via Aboriginal organisations and Country. This suggests shared explicit and tacit knowledge and the generation of thick data. Researchers should report on their experience with Yarning, the types of Yarning they are using, and the relationality generated from the Social, Family and Research Yarn.

Bruton, S. V., & Sacco, D. F. (2018). What’s it to me? Self-interest and evaluations of financial conflicts of interest. Research Ethics, 14(4), 1–17. https://doi.org/10.1177/1747016117739940

Abstract. Disclosure has become the preferred way of addressing the threat to researcher objectivity arising from financial conflicts of interest (FCOIs). This article argues that the effectiveness of disclosure at protecting science from the corrupting effects of FCOIs—particularly the kind of disclosure mandated by US federal granting agencies—is more limited than is generally acknowledged. Current NIH and NSF regulations require disclosed FCOIs to be reviewed, evaluated, and managed by officials at researchers’ home institutions. However, these reviewers are likely to have institutional and personal interests of their own that may undermine the integrity of their evaluations. This paper presents experimental findings suggesting that such interests affect third-party assessments of FCOIs. Over 200 participants gauged the ethical significance of various hypothetical yet realistic FCOIs in academic research settings. Some of them were led to believe they had a small personal interest in allowing conflicted research to proceed, whereas others’ personal outcomes were unrelated to the conflicted research. The results show that motivated reasoning influences FCOI evaluations, such that those with personal interest in conflicted research provided more lenient evaluations of researcher FCOIs. These findings imply that the capacity of federally mandated FCOI disclosure procedures to enhance bias-free science is quite restricted.

Howard, M., & Thomas-Hughes, H. (2021). Conceptualising quality in co-produced research. Qualitative Research, 21(5), 788–805. https://doi.org/10.1177/1468794120919092

Abstract. Co-produced research is said to create new knowledge through including the perspectives of those traditionally excluded from knowledge production, which in turn is expected to enhance research quality and impact. This article critically examines academic and UK voluntary sector literature concerning participatory and co-produced approaches to explore how quality is currently understood in co-produced research. Drawing on early career researchers’ experiences of a programme of co-produced research, the authors illustrate how theory and practice of co-production can differ, and the implications for conceptualising ‘research quality’ within co-produced research. Drawing on debates within qualitative research, community work and policy studies, the article outlines a potential framework for raising questions of ‘quality’, co-produced by research partners as part of the research process. Key dimensions of this framework are process, outcomes and autonomy.

Høyland, S. A., Hagen, J. M., & Engelbach, W. (2017). Developing and Applying a Framework for Assessing the Research Quality of Qualitative Project Methods in the EU Project SECUR-ED. SAGE Open. https://doi.org/10.1177/2158244017710291

Abstract. Qualitative research plays a vital role in political development and in the design of statutes and directives. Consequently, ensuring the quality of this research is important. However, the current literature on evaluation of quantitative and qualitative research does not reflect this importance, and we have identified a need to establish guidelines for evaluating qualitative research projects for quality. Therefore, and based on existing research, we have developed a framework for assessing the research quality of large complex projects using qualitative methods. In our study, as presented in this article, we operationalize and apply the framework to evaluate six specific methods in the large European research project, Secured Urban Transportation—A European Demonstration (SECUR-ED); each method is assessed according to the quality criteria of “transferability,” “systematic design/reliability,” and “transactional validity.” Overall, we find that half of the SECUR-ED project methods demonstrate thorough documentation and transferability, and that half of the methods lack consistent usage and therefore score low on both reliability and validity. We also find that one method, the capacity mapping matrix, scores high on all quality parameters. Accordingly, we suggest that future European Union (EU) projects replicate the documentation efforts demonstrated in relation to several of the SECUR-ED methods, and consider the capacity mapping matrix as “best practice” standard. We conclude that the framework represents a novel approach to quality assessments of qualitative project methods across research topics and contexts.

Lavee, E., & Itzchakov, G. (2021). Good listening: A key element in establishing quality in qualitative research. Qualitative Research. https://doi.org/10.1177/14687941211039402

Abstract. What is “good” qualitative research? Considerable literature articulates criteria for quality in qualitative research. Common to all these criteria is the understanding that the data gathering process, often interviews, is central in assessing research quality. Studies have highlighted the preparation of the interview guide, appropriate ways to ask questions, and especially the interaction between interviewer and interviewee. To a lesser extent, qualitative scholars mention the importance of the interviewer’s listening abilities in obtaining the interviewee’s cooperation. Based on the results of listening studies in the fields of psychology and organizational behavior, we argue that good listening is crucial for assessing the quality of qualitative research, yet it remains a blind spot in qualitative data gathering. Drawing on our experience as a qualitative researcher and a listening researcher, we present practices for enhancing good listening in qualitative research, thereby enabling researchers to calibrate themselves as research instruments and obtain richer data.

Macdonald, M. E. (2009). Growing Quality in Qualitative Health Research. International Journal of Qualitative Methods, 8(2), 97–101. https://doi.org/10.1177/160940690900800209

Abstract. Qualitative methodologies are growing in popularity in health research; however, the integration of these methodologies into the clinical context is not always straightforward. In this article the author discusses some of the paradigmatic and methodological tensions that characterize the use of qualitative methodologies in clinical health research and showcases one solution to these tensions. The McGill Qualitative Health Research Group is a scholarly group of qualitative health researchers working together to advance a qualitative research agenda in clinical disciplines.

Rünzel, M., Sarfatti, P., & Negroustoueva, S. (2021). Evaluating quality of science in CGIAR research programs: Use of bibliometrics. Outlook on Agriculture, 50(2), 130–140. https://doi.org/10.1177/00307270211024271

Abstract. When evaluating Quality of Science (QoS) in the context of development initiatives, it is essential to define adequate criteria. The objective of this perspective paper is to show how altmetric and bibliometric indicators have been used to support the evaluation of QoS in the 2020 Review of the Phase 2-CGIAR Research Programs (CRPs, 2017–2022), where, for the first time, the Quality of Research for Development (QoR4D) frame of reference was utilized across the entire CGIAR CRP portfolio. Overall, the CRP review showed a significant output of scientific publications during the period 2017–2020, with 4,872 articles, 220,101 references, and 7.1 citations per article. Additionally, wider interest in the scientific publications is demonstrated by good to high altmetrics, with attention scores ranging from 70.8 to 806.9 and averaging 425.1. The use of selected bibliometrics was shown to be an adequate tool, together with other qualitative indicators, for evaluating QoS in the 12 CRPs. The CRP review process clearly demonstrated that standardized, harmonized, and consistent data on research output are paramount for providing high-quality quantitative instruments and should be a priority throughout the transition toward One CGIAR. Therefore, we conclude that the QoR4D framework should be augmented by standardized bibliometric indicators embedded in measurement frameworks within the new One CGIAR. Finally, its practical utilization in monitoring and evaluation should be supported with clear guidelines.
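The headline indicators in this abstract are simple ratios and means. The sketch below reproduces that arithmetic; only the article count and the target figures (7.1 citations per article, a mean attention score of 425.1) come from the abstract, while the citation total and the individual altmetric scores are back-calculated or invented placeholders.

```python
# Arithmetic behind portfolio-level bibliometric indicators like those above.
# Only the article count (4,872) and the target ratios are from the abstract;
# the citation total and attention scores are illustrative reconstructions.

articles = 4_872                 # publications reported for 2017-2020
total_citations = 34_591         # invented total implying ~7.1 citations/article
attention_scores = [70.8, 397.6, 806.9]  # invented scores whose mean is 425.1

citations_per_article = total_citations / articles
mean_attention = sum(attention_scores) / len(attention_scores)

print(f"Citations per article: {citations_per_article:.1f}")    # -> 7.1
print(f"Mean altmetric attention score: {mean_attention:.1f}")  # -> 425.1
```

As the abstract stresses, such indicators are only as good as the underlying data: the same two lines of arithmetic give misleading answers if the publication records feeding them are not standardized and consistent.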

Santiago-Delefosse, M., Bruchez, C., Gavin, A., Stephen, S. L., & Roux, P. (2015). Complexity of the Paradigms Present in Quality Criteria of Qualitative Research Grids. SAGE Open. https://doi.org/10.1177/2158244015621350

Abstract. With qualitative methods being increasingly used in health science fields, numerous grids proposing criteria to evaluate the quality of this type of research have been produced. Expert evaluators deem that there is a lack of consensual tools to evaluate qualitative research. Based on the review of 133 quality criteria grids for qualitative research in health sciences, the authors present the results of a computerized lexicometric analysis, which confirms the variety of intra- and inter-grid constructions, including within the same field. This variety is linked to the authors’ paradigmatic references underlying the criteria proposed. These references seem to be built intuitively, reflecting internal representations of qualitative research, thus making the grids and their criteria hard to compare. Consequently, the consensus on the definitions and the number of criteria becomes problematic. The paradigmatic and theoretical references of the grids should be specified so that users could better assess their contributions and limitations.
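At its simplest, a lexicometric analysis profiles documents by their word frequencies and compares the resulting vocabularies. The toy sketch below conveys only that basic idea; the grid texts are invented stand-ins, and the study itself analyzed 133 real grids with more sophisticated lexicometric techniques.

```python
# Toy lexicometric comparison of two invented quality-criteria grids.
# Word-frequency profiles hint at the underlying paradigm of each grid.
from collections import Counter

grids = {
    "grid_a": "credibility transferability dependability confirmability reflexivity credibility",
    "grid_b": "validity reliability generalizability objectivity rigor validity",
}

for name, text in grids.items():
    counts = Counter(text.lower().split())
    print(name, counts.most_common(3))  # the three most frequent terms per grid
```

Even this crude frequency profile surfaces the kind of paradigmatic split the authors describe: one vocabulary leans toward naturalistic inquiry, the other toward post-positivist measurement, which helps explain why such grids are hard to compare.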

Methodspace Resources on Evaluation
