Analysis process & keeping the data tidy
Authors of this page: Graham R. Gibbs
Affiliation: University of Huddersfield
Date written: 30th June 2005
Updated 10th Sept. 2010
How to code and analyse
Qualitative data sets tend to be large, complex and detailed. The task of keeping on top of such a mountain of data so that each part is given a fair, balanced and equally thorough analysis should not be underestimated. There are techniques and procedures that can help with that.
With large amounts of data to manage, it is important to find ways of staying organised as you process them. This is one reason why many people use CAQDAS. Using software keeps all the data and ideas in one place, and the software often encourages you to be organised in how you develop your thinking and your writing about the data.
One way of processing the data is to write summaries of what people have said and in so doing reduce the amount of information you have to deal with. This is the approach taken by the developers of the Framework approach (Ritchie and Lewis, 2003), which also suggests the use of tables to lay out the summaries in a structured way. The developers of Framework worked at the National Centre for Social Research (NatCen), and they have since developed software that supports this approach.
Working to create theory
Do you need to explore a theory, create one, or add ideas to an existing theory? Tom Richards (Richards and Richards, 1994) neatly captures the potential processes in developing theory from data:
“We often get going by finding little things that relate in some meaningful way – perhaps, if our interest is in stress, that certain topics get discussed in anxious ways (and that is something that good coding and retrieval can find for us). So then we start looking for components in those topics that might cause anxiety, often by studying the text, finding or guessing the components and coding for them, recalling situational facts not in the text and looking for suggestive co-occurrences of codes. We might on a hunch start looking at text passages on people’s personal security and how they arrange it (research on background theory here, and lots of coding again), to see if there is some possible connection between components occurring in the anxiety topics and security arrangements. If we find one, the theory is still thin, so we embark on a search for others, and thereby look for a pattern. The result of this is a little group of chunked-together coded text, ideas and hypotheses that, provided they can be kept and accessed as a chunk, can become an ingredient in further more abstracted or wide-ranging explorations. This chunk is said to be of larger “grain size” than its component codings, and it may in turn become an ingredient of a later theorizing of larger grain size still that is built out of existing chunks. (Big fleas are made out of smaller fleas.)
And so the web – of code, explore, relate, study the text – grows, resulting in little explorations, little tests, little ideas hardly worth calling theories but that need to be hung onto as wholes, to be further data for further study. Together they link together with other theories and make the story, the understanding of the text. The strength of this growing interpretation lies to a considerable extent in the fine grain size and tight inter-knittedness of all these steps; and the job of qualitative data handling (and software) is to help in the development of such growing interpretations.”
Ritchie, J. and Lewis, J. (eds) (2003) Qualitative Research Practice: A Guide for Social Science Students and Researchers, Sage Publications, London.
Richards, T. and Richards, L. (1994) Using Computers in Qualitative Analysis. In: Denzin, N. K. and Lincoln, Y. S. (eds) Handbook of Qualitative Research, Sage Publications, Thousand Oaks, California, p. 448.
The resources on this site by Graham R Gibbs, Dawn Clarke, Celia Taylor, Christina Silver and Ann Lewins are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.