The Research Quality Myth

For research to be taken up by teachers, it has to be of high quality. In assessing the quality of research, lay users, notably policymakers and practitioners, face two problems:

1) How to assess research findings as they are presented, in the specific language and beliefs of a field of research.

2) How to relate these findings to what they already know, in the way they normally judge the validity of claims to truth made in everyday life.

Lay people may act as if in awe of research and the research process on the one hand, or dismiss research findings that conflict with their existing knowledge on the other. Lacking the background knowledge, skill and experience of professional researchers in their specific fields, lay users may see external quality criteria as their only hope in assessing quality (Hammersley, 2013, pp. 289–291).

The three established quality criteria for quantitative research are:

1) Reliability – Can it be accurately reproduced or replicated?

2) Validity – Has it achieved what it set out to do? Has a causal relationship been demonstrated between the experimental variable and its hypothesized effect? Have the findings really been accurately interpreted, and have alternative explanations been considered?

3) Generalisability – Are the findings applicable in other research settings? Can a theory be developed that can apply the findings to other non-research situations?

In quantitative research, methodology, including the quality criteria, is rarely mentioned in research reports. It is assumed that readers already know about it in detail, and attention is paid instead to method, i.e. how things were set up in the laboratory, and to data collection and analytical procedures.

In contrast, qualitative researchers often describe their methodology in detail. This is defensive to some extent, in the face of the critical assault made on it by some in the quantitative research community, who feel free to talk and write about qualitative research as if it were no more than pseudo-science, a poor relation. In doing so they miss the point that it operates in a fundamentally different reality.

It is commonly assumed that all research evidence should ideally match the established quantitative quality criteria – the so-called Gold Standard – and there is sustained pressure for the adoption of qualitative criteria to parallel them. This pressure comes principally from lay users of research aligned with the evidence-based practice movement, which is rooted firmly in quantitative science. A central theme is ‘transparency’, which demands that the basis of research professionals’ work should be made explicit, so that the lay people who use their services can judge the quality of what is provided.

Social and educational research is caught up in this current push for quality criteria because lay users see it as being capable of supplying the evidence on which more effective policy can be based and professional practice can be judged. Transparency is seen as the means by which lay users, lacking the specialist knowledge and understanding of the professional research community, can assess which research findings they can rely upon.

Hammersley (2013) states that the idea that research can be fully transparent is a mirage. It is not possible for researchers to make their judgements transparent and fully intelligible to everyone, irrespective of background knowledge and experience. Lay people cannot consistently make judgements about the quality of particular research studies that are as good as those of researchers who work in the relevant field. Indeed, there are limits to the extent to which these judgements can be made intelligible even to fellow researchers, since judgement is made within a context, which is by definition uncertain. The judgements made by quantitative researchers of qualitative research are equally likely to be open to question.

‘Intelligibility is an achievement, it is not automatic.’ (Hammersley, 2013)

There is no exact correspondence between research and the report that seeks to represent it. The researcher must lay out as clearly as they can the reasons they have for making their judgements. The reader must be able to infer what is meant in the report on the basis of their background resources in order for an accurate assessment to be made; interpretation is going on throughout the process. In highly specialised fields of work, these resources include an extensive knowledge of the history of the field, its language and its methodology and methods – in short, what reality it occupies, its ontology and epistemology. This leads back to the demands for external research guidelines and quality criteria.

Hammersley concludes: ‘The greater the experiential distance between speaker and hearer, the larger this problem of communication will be.’ … ‘Because the use of guidelines always depends upon background knowledge and judgement, they cannot solve this problem even if they can serve as a useful resource in dealing with it.’

Teachers, in this sense, are lay users of educational research carried out by university-based academics, and as such this represents a very narrow definition of what is meant by ‘research’. The experiential gap, and the limitations in what guidelines can do, go some way towards explaining why over many years so little of such research has brought about sustained change in classrooms. Can we do better?

My next blog: Doing Research – what does it mean for teachers?
