
Why we need to think about research malpractice in the social sciences

By Tony Murphy

Tony Murphy points to a lack of investigation into academic misconduct and fraud

Writing for the Bulletin of the American Association of University Professors in 1952, Professor Grant Redford highlighted the ‘pressures to publish’ within academia. Redford and others at the time noted how research productivity formed a basis for promotion, and that such work was largely governed by ‘the questionable virtues of chance, accident, caprice and the least worthy elements of aggressive salesmanship’. In short, the emphasis on publication was seen as problematic then, but those words have surely never been more relevant than they are today. Increasingly, academics are forced to prove their worth through publication in peer-reviewed journals and to demonstrate the ‘impact’ of their work. The dangers associated with such pressures have been widely explored and flagged up within the natural or ‘hard’ sciences, as well as within psychology.

A range of dubious, unethical and even illegal research practices has been highlighted in the media. Such cases have served to discredit individuals, institutions and even entire research areas. In some cases, the fabrication of data, or of entire experiments, has been proven.

‘Publish or perish’

Much of this can be viewed in the light of the maxim ‘publish or perish’. If we consider some recent prominent cases, including Hwang Woo-suk (South Korean stem-cell researcher); Jon Sudbø (Norwegian medical researcher); Dipak Das (Indian medical researcher based in the USA); Diederik Stapel (Dutch psychologist); and H Zhong and T Liu (Chinese medical researchers), a key driver of malpractice has been the desire to advance individual careers where a strong emphasis is placed on publication. However, despite an established history of investigation and reporting on such practices in the hard sciences, relatively little attention has been given to those processes within the social sciences outside of psychology. It would be naive to assume that such problems are the preserve of the hard sciences, especially when we consider recent shifts in the social sciences, including the heightened value of peer-reviewed publications and the importance of individual academics being able to attract external funding.

It can be argued that there has been a lack of focus to date on research malpractice within the social sciences, and that there is an acute need for investigation in this area. If one searches the collections of university libraries, or even online via common search tools (I have done both), it is possible to locate dozens of texts reporting on the processes, case studies and motivations for research malpractice – but virtually all of these are within the fields of medicine, engineering, physics and similar areas. These range from Kohn’s False Prophets (1989) to McGarity and Wagner’s Bending Science (2008); from Grayson’s Scientific Deception (1995) to Goodstein’s On Fact and Fraud: Cautionary Tales from the Front Lines of Science (2010); from Lock and Wells’s Fraud and Misconduct in Medical Research (1996) to Wells and Farthing’s Fraud and Misconduct in Biomedical Research (2008), and many more. It is difficult to find such texts devoted to the social sciences. Yet the increasingly competitive nature of publishing within academia as a whole, and the heightened pressure to publish, can surely be viewed as contributory factors in adversely shaping researcher behaviour across all fields. Instruments such as the Research Excellence Framework (REF) may also serve to encourage, and indeed perversely incentivise, academics in the social sciences to ‘cut corners’ or, at worst, to engage in more blatant acts of malpractice, especially if we are to learn lessons from elsewhere (see, for example, Jha, 2012).

Academic pressures and malpractice

Reporting on researcher behaviour in the United States, Daniele Fanelli (2010) noted how researchers face a conflict of interest: conducting accurate and objective work to enhance subject knowledge versus the need to develop their careers through research publications. Under such pressures, some scientists have shaped their research to ensure that it gets published. Fanelli found that positive findings were more likely to be reported in science research projects, in the knowledge that such work has a greater chance of acceptance for publication. There is no reason to believe that social scientists are somehow immune to such pressures, particularly in the context of the ‘publish or perish’ mentality which pervades much of academia, in large part owing to the REF (and previously the Research Assessment Exercise) and an increasingly competitive higher education landscape.

Determining the scale of such practice across academia is difficult, and this is even more challenging in the social sciences because of the acute lack of investigation. Commenting at a meeting organised by the British Medical Journal in 2012, Professor Malcolm Green indicated that for every case of fraud detected there are a dozen or so more that go undetected (Grove, 2012). It has also been reported that the rate at which research papers are retracted from top science journals has increased significantly over the last ten years or so (Van Noorden, 2011). Papers can be retracted for a number of reasons, but it is often because questions have been raised regarding the integrity of the research. This can be linked to the pressures academics face to publish in top journals if they are to progress their careers, which creates a temptation to cut corners and, in some cases, to go even further. Again, it would be naive to assume that such issues apply only to the hard sciences. What has been written about these processes in the context of the social sciences has focused on psychology, probably owing to its closer resemblance to the natural sciences than that of other social science fields.

Misconduct in psychology research

Writing in the context of recent research fraud cases in psychology, the Dutch academic René Bekkers has commented on the nature of academic misconduct and the processes involved (Bekkers, 2012). In addition to noting that the scale of such activity is unknown, with no reliable estimates available, he argues that misconduct appears to be most prevalent where its benefits are highest and where the risk of detection, and the costs to the researcher if detected, are lowest. Beyond the obvious offences of plagiarism, non-disclosure of conflicts of interest, unethical research procedures and the outright fabrication of data, many other possible fraudulent behaviours are more subtle and relate to the exaggeration of trends or the artificial propping up of hypotheses. Bekkers outlines a range of such behaviours, as set out by the Association of Universities in the Netherlands, including: invalid procedures for data handling, whereby results are incorrectly reported in the direction of the hypothesis favoured by the author (Bakker and Wicherts, 2011); ‘data snooping’, whereby data collection is stopped short of the target sample as soon as a significant result emerges; ‘cherry picking’ of data to support hypotheses; and ‘HARKing’, whereby hypotheses are developed after the results emerge and it is claimed that the findings had been predicted at the outset. Reflecting the essence of what Fanelli reported in his research, Bekkers notes that much of the above leads to the reporting of artificially strong and positive results, which increases the likelihood of work being accepted in prestigious journals. Much of this is seemingly specific to quantitative research, but equivalent practices could equally apply to qualitative work.

Universities and other research institutions have their own policies and mechanisms for dealing with cases of research fraud or malpractice when these come to light. There are also a number of organisations working to prevent and investigate research misconduct, as well as advisory bodies. The latter include the UK Research Integrity Office (UKRIO), whose remit is, broadly, the promotion of good governance and practice in research conduct, and the provision of information and guidance on such matters across all areas of research. However, if one looks at the composition of UKRIO’s Advisory Board, one realises that it is overwhelmingly made up of advisors with health- and medicine-related professional and academic backgrounds. That need not be a big issue in itself, but it is indicative of a difference in focus between those areas and the social sciences.

Tony Murphy is Senior Lecturer in Criminology, Sheffield Hallam University

References

Bakker, M. and Wicherts, J. (2011), ‘The (mis)reporting of statistical results in psychology journals’, Behavior Research Methods, 43, pp. 666-678.

Bekkers, R. (2012), ‘Risk factors for fraud and academic misconduct in the social sciences’, René Bekkers blog, 29 November.

Fanelli, D. (2010), ‘Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data’, PLOS ONE, 5(4).

Grove, J. (2012), ‘One in eight UK scientists has witnessed research fraud’, Times Higher Education, 13 January.

Jha, A. (2012), ‘False positives: fraud and misconduct are threatening scientific research’, The Guardian, 13 September.

Redford, G. (1952), ‘Publish or Else’, Bulletin of the American Association of University Professors, 38(4), pp. 608-618.

Van Noorden, R. (2011), ‘Science publishing: The trouble with retractions’, Nature, 478, pp. 26-28.