The Problem with Surveys in Research

By Ben Hardy

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’

There is a little of the Humpty Dumpty in all of us. When we communicate with others we tend to think about what we want to say – and choose the words to mean what we want them to mean – rather than thinking carefully about what it is that others will hear.

The same is true when we conduct survey research. We identify a construct, such as job satisfaction, carefully define it and then produce a series of items which we believe will tap into the cognitive domain occupied by this construct. We then test these items to check that people understand them and use a variety of statistical techniques to produce a finished scale which, we believe, measures just what we choose it to mean – neither more nor less.

Alternatively we may bypass all of this and choose to use a published scale, assuming that all this hard work has been done for us.

Unfortunately, we do not tend to pay much attention to the actual words of the items. Sure, we check whether people understand them but we seldom check whether people understand them in exactly the same way as we do. Instead, like Humpty Dumpty, we fall back on assuming that words mean what we choose them to mean – neither more nor less.

The average noun has 1.74 meanings and the average verb 2.11. This leaves a good deal of scope for words to mean very different things, whatever we, or Humpty Dumpty, might choose. Consider the item ‘How satisfied are you with the person who supervises you – your organizational superior?’ What does it mean to you?

- How satisfied are you with your boss? (49)
- How satisfied are you with your boss and the decisions they make? (25)
- Is your supervisor knowledgeable and competent? (6)
- Do you like your supervisor? (14)

One of these probably accords with your interpretation. You might be interested to know that quite a few people do not agree with you. The figures in brackets are the percentages of people selecting that particular option. You knew exactly what the item meant. And so did everyone else. The problem is that you did not agree.

“So what?” you might argue. If the stats work out, is there a problem? Well yes, there is. Firstly, we are not measuring what we think we are measuring. Few of us would trust a doctor whose laboratory tests might or might not be measuring what they claim to measure – even if the number looked reassuringly within the normal range. So should we diagnose organizational pathologies on the basis of surveys which may or may not be measuring what they claim to measure – even if the number is reassuring? Just because something performs well statistically does not mean that it tells you anything useful. Secondly, we do not know what individuals would score if they were actually answering exactly the same question that the researcher intended. Thirdly, the different interpretations mean that there are different sub-groups within a population, and this may have knock-on effects when the scores are linked to other factors, such as intention to leave.
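The point that good-looking statistics can coexist with divergent interpretations can be made concrete with a small simulation. The sketch below is not from the article; it simply assumes, hypothetically, that half of the respondents read a supervisor item as asking whether they like their boss and the other half as asking whether their boss makes competent decisions, then computes Cronbach's alpha for a four-item scale whose other three items all tap liking. The reliability coefficient comes out looking excellent even though respondents were, in effect, answering different questions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two hypothetical latent attitudes: liking the boss as a person and
# rating the boss's decisions as competent, moderately correlated.
liking, competence = rng.multivariate_normal(
    [0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n
).T

# Half the respondents interpret the supervisor item as "do you like your
# boss?", the other half as "does your boss make good decisions?".
interpretation = rng.integers(0, 2, size=n)
supervisor_item = np.where(interpretation == 0, liking, competence) + rng.normal(0, 0.5, n)

# Three further items that all tap liking, as the researcher intended.
items = np.column_stack([
    liking + rng.normal(0, 0.5, n),
    liking + rng.normal(0, 0.5, n),
    liking + rng.normal(0, 0.5, n),
    supervisor_item,
])

# Cronbach's alpha for the four-item scale.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # typically around 0.9 under these assumptions
```

An alpha around 0.9 would usually be reported as evidence of a highly reliable scale, yet in this toy setup the item means two quite different things to two halves of the sample.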

So what is to be done? There are a number of simple fixes. Probably the easiest is to go and talk to some of the people who are going to be surveyed and ask them what they think the items actually mean. This will give you a good idea of whether your interpretation differs wildly from theirs, and in many cases you will find that it does.

This problem of other people’s interpretations differing from our own extends beyond survey research, of course. Indeed, there is a whole field of research, that of linguistic pragmatics, which seeks to understand why we interpret things the way that we do. At the heart of it all, however, is communication. And so the assumption that words mean what we choose them to mean – neither more nor less – is a fallacious one, at least as far as other people are concerned. We need to stop thinking about what we are saying and spend a little more time thinking about what others are hearing. Humpty Dumpty was wrong. It is not we who choose what words mean; it is the recipient of those words. And we ignore their views at our peril.

***

The preceding was reposted from the blog Management Ink. Ben Hardy collaborated with Lucy R. Ford on the article "It's Not Me, It's You: Miscomprehension in Surveys," available in the OnlineFirst section of Organizational Research Methods.
