Quality of qualitative analysis

Authors of this page: Celia Taylor¹, Graham R. Gibbs¹ and Ann Lewins²

Affiliation: ¹University of Huddersfield and ²University of Surrey

Date written: 2nd Dec 2005

Every researcher strives to ensure that the analysis they produce is of high quality, unaffected both by simple errors made during the analysis and by any biases they themselves may hold about what they are researching.

The debate about criteria for good quality

In natural science, findings are validated by repeated replication. If a second investigator cannot replicate the findings when they repeat the experiment, the original results are questioned. If no one else can replicate the original results, they are rejected as flawed and invalid. Natural scientists have developed a whole spectrum of procedures and designs to ensure that experiments don’t get the wrong answers and that replication is possible.

In social research there are two problems with adopting this approach. First, there is no widespread agreement about whether there can be any procedures that ensure research and analysis produce the right answers. Second, and this is a problem for qualitative research especially, replication is seldom possible and in most cases doesn’t make much sense. When observed or questioned again, respondents in qualitative research will rarely say or do exactly the same things. Whether results have been successfully replicated is always a matter of interpretation.

In many areas of the social sciences, though, a number of ideas and procedures based on concepts taken from the natural scientific model have been adopted. These usually centre on the idea that analysis should be valid, reliable and generalisable.

  • Validity refers to the idea that the account truly reflects what actually happened or, put simply, that it is accurate.
  • Reliability means that the same results would be obtained if different researchers repeated the research and analysis on another occasion. The respondents or participants involved may be different from those in the original research, though they will be similar people doing similar things.
  • Generalisability means that the results of the research and analysis apply to a wider group of people, social situations and settings than just the ones investigated in the original study.

The most fundamental challenge to these ideas has come from those who reject what they describe as the realist assumptions that underpin such procedures. Such researchers, who include a wide range of post-modernists and constructivists, argue that we cannot assume that there is just one, single, fundamental social reality against which results can be checked to see if they are valid. Rather, all we can say is that there are many different social realities and views about our world, and there is no way we can give any single one a privileged position as the reality. For them, it does not make sense to ask whether a qualitative analysis reflects what actually happened in the social world, as it is always possible to have different interpretations.

Nevertheless, the assumption of most participants and respondents, and the belief of many qualitative researchers, is that most of the time, for most purposes, it does make sense to say that we inhabit the same, shared social reality. At the same time, humans are fallible, and it is always possible to make mistakes in one’s interpretation or observation of this social reality. Therefore it makes sense to follow procedures that minimise mistakes and simple misinterpretations.

Reflexivity

However, specifying and applying such criteria and procedures is no simple matter, because social researchers and their views, presuppositions, predilections and biases inevitably reflect the social milieu they inhabit. Reflexivity is the recognition that a researcher’s background and prior knowledge have an unavoidable influence on the research they are conducting. This means that no researcher can claim to be completely objective (Mays and Pope, 2000).

The impact of reflexivity cannot be avoided but it can be monitored and reported. This means being self-aware and open about the possible influences of milieu and background. Researchers need to remain aware of the danger of their favourite ideas and theories acting as blinkers to other possibilities arising from the data.

Validity

Even if, like the anti-realists, you do not believe any procedures can guarantee the accuracy or truth of qualitative analyses, there are procedures that, if followed, will help minimise the chances of producing analyses that are partial, mistaken or biased.

Triangulation

Triangulation essentially means combining two or more views, approaches or methods in an investigation in order to get a more accurate picture of the phenomena. It has had a bad press in recent years, either because it seems to rely on realist ideas of validity (in the sense that triangulation will produce an accurate picture of the world) or because it seems to require the impossible (the combination of incompatible or even incommensurable explanations or phenomena). To put this another way, two different theories used in triangulation may produce different analyses not because one is biased and the other not, but because both are biased, or because both are simply different interpretations, paying attention to and recording different aspects of the situation. As Bloor (1997) argues, different methods produce data in different forms, making direct comparison difficult.

However, many of the advantages of triangulation can be gained even without a full commitment to a realist ontology. In particular, triangulation can be used not to check one theory, set of data or approach against another, but rather to create an analysis of greater scope and richness. Even if two views do seem to contradict one another, the difference can be used as a reason for deeper and repeated analysis of the data in order to try to explain and resolve the differences. This is, in fact, a very common procedure in ethnography, where researchers compare what people say they will do against what they actually do. If there are differences, then further phenomena worthy of explanation have been revealed.

In fact, because of the analytic difficulties of comparing the results produced by using two different theories, the most frequently used form of triangulation is the use of multiple sources of data such as interviews, observation and documents.

Auditability / Audit trail

In line with the general openness required to deal with reflexivity, it is usually a good idea to ensure that qualitative analysis is auditable. In other words, it should be possible to retrace the steps leading to a particular interpretation or theory, to check that no alternatives were left unexamined and that no biases had any avoidable influence on the results. Usually this entails recording information about who did what with the data and in what order, so that the genesis of interpretations can be retraced. Such an audit trail provides a sufficiently clear account of the research process to allow others to follow the researcher's thinking and conclusions about the data, and thus allows them to assess whether the findings are dependable.

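The page does not prescribe a format for an audit trail, but the record-keeping it describes can be made concrete with a small sketch. The Python fragment below is purely illustrative (the file name, fields and example entry are all invented): it appends one dated entry per analytic decision, noting who did what with the data and why, so that the steps leading to an interpretation can later be retraced.

```python
# Hypothetical sketch of a minimal audit trail: one dated entry per analytic
# decision, recording who did what with the data and why. The file name,
# fields and the example entry are invented for illustration.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("audit_trail.csv")  # assumed location; any shared document would do
FIELDS = ["date", "researcher", "action", "data_affected", "rationale"]


def log_step(researcher, action, data_affected, rationale):
    """Append one audit-trail entry so the analysis can be retraced later."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "researcher": researcher,
            "action": action,
            "data_affected": data_affected,
            "rationale": rationale,
        })


log_step(
    researcher="CT",
    action="Merged codes 'informal help' and 'family support'",
    data_affected="Interviews 3-7",
    rationale="Passages coded separately turned out to describe the same kind of support.",
)
```
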
Consider negative cases

In line with the recommendation in triangulation to try to explain differences in results that arise from different data sources, it is good practice in qualitative analysis to look constantly for what are called negative cases. These are cases, settings, events and so on that are out of line with your main findings or even directly contradict what your explanations would predict. In some natural scientific, positivistic approaches, the occurrence of a negative case would be sufficient grounds to reject the theory. The explanation would be considered to have been refuted. However, in qualitative research this is rarely the response to a negative case. Of course, if there are many negative cases and on examination it seems impossible to explain them without abandoning the explanation or theory we have developed, then we do have to drop the theory. However, usually the response to a negative case, a case that doesn’t seem to fit, is to re-examine the data to try to find a way of explaining why that case has happened in such an untypical way. The upshot is commonly a modification of your ideas and assumptions (Seale, 2000) and eventually a richer and more complex theory and explanation.

Constant Comparison

The search for negative cases is, in fact, a variant of a procedure that is one of the central planks of the grounded theory approach to qualitative research: constant comparison. This involves checking the consistency and accuracy of interpretations, and especially the application of codes, by constantly comparing one interpretation or code with others, both of a similar sort and in other cases and settings. This ensures both consistency and completeness in the analysis.

One particular mechanism that helps here is the construction of a code list with brief definitions of each code. That way, each time a passage of text is coded, it can be checked against the code definition to ensure both that the coding is appropriate and that the definition is still adequate. This is one way of avoiding what is known as definitional drift, where the way a code is used shifts slightly as analysis progresses so that text coded later may represent different events and phenomena from that coded the same way at the start. If this happens it is usually a sign that there is a need for a new code so that the coded text can be split between the existing and the new codes.
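
The page does not tie this practice to any particular tool, but the bookkeeping it describes can be sketched in a few lines. The Python fragment below is a hypothetical illustration only (the code names, definitions and passages are invented): it keeps a code list with brief definitions, shows the definition each time a code is applied, and lets the analyst review everything coded the same way in order to spot definitional drift.

```python
# Hypothetical sketch of a code list ("code book") with brief definitions.
# Code names, definitions and passages are invented; in practice this
# bookkeeping is usually done in CAQDAS software or on paper.
codebook = {
    "coping": "Strategies the respondent describes for managing day-to-day stress.",
    "support": "Practical or emotional help the respondent receives from others.",
}

coded_segments = []  # running record of (code, passage) pairs


def apply_code(code, passage):
    """Record a coded passage, showing the definition as a prompt to check fit."""
    if code not in codebook:
        raise KeyError(f"'{code}' is not in the code book - define it before use.")
    print(f"Definition of '{code}': {codebook[code]}")
    coded_segments.append((code, passage))


def review_code(code):
    """List every passage given a code, to help spot definitional drift."""
    print(f"\nPassages coded '{code}':")
    for c, passage in coded_segments:
        if c == code:
            print(f"  - {passage}")


apply_code("coping", "I just go for a long walk when it all gets too much.")
apply_code("coping", "My sister takes the kids on Fridays so I can rest.")
review_code("coping")  # the second passage may fit 'support' better - a candidate for recoding
```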

Constant comparison also involves looking for variations and differences across cases, settings or factors which affect the phenomena that are being studied. Such variations might include the influence of age and gender.

Safeguards involving other people

Inter-rater reliability

If more than one person or team is coding, then it is possible to compare how they have coded the same passages. The coding of the same data by a primary coder and a secondary coder is compared to see where there are areas of agreement and disagreement. Disagreements can then be discussed and a new agreement reached about a code’s definition, improving consistency and rigour. This is commonly used by teams where individuals may be coding parts of a large data set and it is important that codes are applied consistently. In this case inter-rater reliability helps refine the coding definitions to ones the team agree on. It is a less useful approach when an individual is working alone on a project and coding the whole data set. This is because either they have to give the secondary coder their codes and definitions, and are therefore likely to convey their biases too, or the secondary coder creates their own codes, which are then highly likely to be different and therefore difficult to compare with the primary coder’s codes.

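The page does not name a statistic for reporting such comparisons, but simple percentage agreement and Cohen's kappa (agreement corrected for chance) are commonly used. The sketch below is a minimal, self-contained Python illustration with invented codes and coding decisions: two coders have each assigned one code to the same ten passages, and the script reports how far they agree.

```python
# Hypothetical example: two coders assign one code each to the same ten
# passages. The codes and decisions are invented for illustration.
from collections import Counter

primary   = ["coping", "support", "coping", "stigma", "support",
             "coping", "stigma", "support", "coping", "coping"]
secondary = ["coping", "support", "support", "stigma", "support",
             "coping", "coping", "support", "coping", "stigma"]


def percent_agreement(a, b):
    """Proportion of passages given the same code by both coders."""
    return sum(x == y for x, y in zip(a, b)) / len(a)


def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(a) | set(b))
    return (p_observed - p_expected) / (1 - p_expected)


print(f"Agreement: {percent_agreement(primary, secondary):.0%}")   # 70%
print(f"Cohen's kappa: {cohens_kappa(primary, secondary):.2f}")    # 0.53
```

The statistic itself matters less than the disagreements it points to (here passages 3, 7 and 10), which are the places to revisit and refine the code definitions.
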
Member/Respondent Validation

It is possible to involve participants and respondents in the later, analytic, stages of a project too. They can be consulted both about the adequacy of the transcription of interviews (if interviews have been undertaken) and about the kinds of interpretations and explanations the data analysis has generated. This is usually referred to as member validation.

Member or respondent validation can involve a variety of techniques, but in each case it validates findings by showing that there is agreement between the researcher’s analysis and the respondent’s (member’s) description. Of course, this assumes that the analysis is both comprehensible to the respondents and acceptable to them. It may be neither, for reasons that have nothing to do with accuracy or validity but a lot to do with politics and ideology. Moreover, expecting a respondent to read through the analysis is asking a lot of that person and may affect their judgment. A respondent’s view of research findings is also subject to constant change. They may pick out aspects of the research as important while ignoring the researcher’s central topic, and therefore support the findings for the wrong reasons (Bloor, 1997). In many cases it may not be possible to go back to informants at all, for instance because they don’t have time or have moved away.

Trustworthiness/Reliability

Trustworthiness or reliability is the degree to which different observers, researchers etc. (or the same observers etc. on different occasions) make the same observations or collect the same data about the same object of study. The concept is highly contentious in qualitative research where it is often not clear what the same object of study is.

One way to engender trustworthiness is to include evidence in your analytic reports. Usually this takes the form of quotations from interviews and field notes, along with detailed descriptions of episodes, events and settings. A danger, when using quotations, is to use too many and to make them too long. Make sure that you express all the key aspects of your analysis in your own words and that the inclusion of long quotations does not force the reader to make their own interpretations of the data. If you want to use a long quotation, it is usually a good idea to explain to the reader what analytic points (and there is often more than one) it is illustrating. On the other hand, beware of quotations that are too short. It is easy for them to become decontextualised and, again, without explanation they will lose their power.

Generalisability/Transferability

Generalisability or transferability refers to the extent to which the account can be applied to people, times and settings other than those actually studied. In qualitative research, generalisability rests on the claim that the analysis helps us begin to understand similar situations and people, rather than on the sample being representative of a target population (Maxwell, 1997). Qualitative research is rarely based on the use of random samples, so the kinds of inference to wider populations made on the basis of social surveys (and the statistical boundaries that accompany them) cannot be used in qualitative analysis. Usually what the analyst has to do is ensure that any reference to people and settings beyond those in the study is justified. This is usually done by defining, in detail, the kind of settings and types of people to whom the explanation or theory applies, based on the identification of similar settings and people in the study.

One particular danger in qualitative research is what Silverman (2001, p. 223) has referred to as ‘anecdotalism’. What happens here is that in the research we come across a particularly interesting, fascinating or spectacular person, setting or event. There is a good story there that we want to tell, and this may blind us to the fact that the anecdote is actually far from typical and possibly unique. There is a real danger that such memorable and distinct phenomena may come to colour and even bias the rest of our interpretation. Therefore, beware of anecdotalism. One way to guard against it is to include reference to how typical (or not) examples and cases are. It may even be useful to include counts or percentages to show how typical phenomena are. Beware, though: if, as is usually the case, your sample is not selected randomly from a defined population, the numbers give you no warrant for generalising to the wider population.

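As a concrete (and entirely invented) illustration of the kind of count suggested above, the short Python sketch below records, for each interview, whether a striking theme actually appeared and reports how typical it is.

```python
# Hypothetical example: note, for each interview, whether a vivid theme
# actually appeared, then report a count and percentage. The cases and the
# theme ('distrust of GP advice') are invented for illustration.
cases_with_theme = {
    "Interview 01": True,
    "Interview 02": False,
    "Interview 03": False,
    "Interview 04": True,
    "Interview 05": False,
    "Interview 06": False,
    "Interview 07": False,
    "Interview 08": True,
    "Interview 09": False,
    "Interview 10": False,
}

mentions = sum(cases_with_theme.values())
total = len(cases_with_theme)

print(f"'Distrust of GP advice' appeared in {mentions} of {total} interviews "
      f"({mentions / total:.0%}) - a vivid account, but far from typical.")
```
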
References

Bloor, M. (1997) Techniques of validation in qualitative research: a critical commentary. In Miller, G. and Dingwall, R. (eds) Context and Method in Qualitative Research. London: Sage, pp. 37-50.

Gibbs, G. R. (2007) Analyzing Qualitative Data. London: Sage. (Part of the Qualitative Research Kit, edited by U. Flick.)

Mays, N. and Pope, C. (2000) Qualitative research in health care: assessing quality in qualitative research. BMJ, 320, pp. 50-52.

Maxwell, J. A. (1992) Understanding and validity in qualitative research. Harvard Educational Review, 62(3), pp. 279-300.

Miles, M. B. and Huberman, A. M. (1994) Qualitative Data Analysis: A Sourcebook of New Methods. Beverly Hills, CA: Sage.

Seale, C. (2000) The Quality of Qualitative Research. London: Sage.

Silverman, D. (2001) Interpreting Qualitative Data. 2nd ed. London: Sage.
