A series of scandals in 2011-2012, ranging from data fabrication to ordinary errors, sparked a debate on scientific practice that reached far beyond scientific circles. The problems and the debate are much older, of course, and resurface every now and then.

The first elaborate study of scientific misconduct (among NIH-funded scholars in the USA), which listed a range of behaviours and their frequencies, from plagiarism to data polishing, was published in 2005 (Martinson et al. 2005). To see how easy it is to publish gibberish, a spoof medical paper was submitted to 305 open-access journals and accepted by 157 of them (Bohannon 2013). Moreover, a computer program designed to detect computer-generated fake papers found 120 of them in scientific journals and conference proceedings. Even prestigious journals publish non-replicable results, because the social sciences have no generally accepted standards and values, and editors do not perceive their own biases and blind spots; see Starbuck’s (2016) important but shocking review of bad editorial practices, senior scholars being the wrong role models, and junior researchers becoming cynical careerists. All this looks very bad indeed.

When focusing on the social sciences one may ask, among other things, what percentage of published articles contains incorrect statistics, for whatever reason. Roughly half of controlled experimental studies turn out to be replicable (Camerer et al. 2018; Open Science Collaboration 2015). On top of errors and fraud, a serious problem is that positive results, which confirm hypotheses, are much easier to publish than a lack of confirmation, even though the latter is equally important for the audience to know. All this leads to a systematic bias in published results (Fanelli 2010a, 2010b; Franco et al. 2014). A main culprit is the misuse of p-values as evidence for hypothesized effects (Nuzzo 2014). What you want to know is how likely it is that your results are true, but that depends much more strongly on other factors, in particular the validity of your theory (Colquhoun 2018), than on p-values, which say next to nothing if something is amiss with those other factors. Currently fashionable big data can contain hidden flaws, for example, and limited access to those data (e.g. Facebook) hinders critical inspection by others (Huberman 2012; Ruths and Pfeffer 2014). Yet the problems are most severe outside the social sciences – in biomedicine – which suffers from a poisonous combination of small samples, small effect sizes, few replications, and enormous financial interests (Carpenter 2012).
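To make the point about p-values concrete, here is a minimal back-of-the-envelope sketch in Python. The prior plausibility, power and significance threshold are purely illustrative assumptions, not values taken from the studies cited above; the sketch only shows how strongly the probability that a "significant" finding is real depends on how plausible the hypothesis was to begin with.

```python
# Illustrative only: how often a "statistically significant" result reflects
# a true effect, given prior plausibility, power, and the significance level.
# The numbers are assumptions chosen for illustration, not estimates from the
# literature cited above.

def prob_true_given_significant(prior, power, alpha):
    """P(effect is real | p < alpha), by Bayes' rule."""
    true_positives = power * prior           # real effects that get detected
    false_positives = alpha * (1 - prior)    # null effects crossing the threshold
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):               # prior plausibility of the hypothesis
    ppv = prob_true_given_significant(prior, power=0.8, alpha=0.05)
    print(f"prior={prior:>4}: P(real | significant) = {ppv:.2f}")
# prior= 0.5: P(real | significant) = 0.94
# prior= 0.1: P(real | significant) = 0.64
# prior=0.01: P(real | significant) = 0.14
```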

Fraud typically starts small, but escalates easily (Crocker 2011). The damage from fraud and errors can be large, because spectacular or much-wanted results are cited more often, and may have policy implications until long after the flaw is exposed and debunked (Miguel et al. 2014). Some flawed findings, such as the Hawthorne effect and Protestants’ higher suicide rate than Catholics’, are too pleasant to give up, and continue to be reproduced in many sociology textbooks (Nolan 2003). The mnemonic TRAGEDIES (Temptation, Rationalization, Ambition, Group and authority pressure, Entitlement, Deception, Incrementalism, Embarrassment and Stupid systems) elaborates where scholars can go wrong (Gunsalus and Robinson 2018).

What are the remedies? PhD advisors, journal editors, data providers and practicing scientists can all be agents of change. PhD advisors should socialize their students appropriately before they become professional scientists (Neaves 2012). This involves teaching them to document their work well, and to set up a transparent structure that makes their work easily replicable (in theory, that is, as in practice few scholars undertake the endeavour of replicating substantial portions of existing work). Whereas forcing a change of scientists’ minds or permanently monitoring their behaviour is not feasible, establishing a new work ethic is viable. Exposure of results – with the requisite reporting of methods and data – to criticism and replication (Fanelli 2013; Stojmenovska et al. 2017; Freese and Peterson 2017) can further strengthen the scientific body by, in the worst case, uncovering scientific fraud (see for example the case of LaCour and Green 2014; Broockman et al. 2015) or, in the case of successful replication, boosting confidence in a particular finding. The dedicated R package statcheck can help authors and critics alike; see also the general manifesto for reproducible science (Munafò et al. 2017). Complementary to replication, new findings should also be consistent with well-established theory. There are exceptions, and received wisdom is sometimes wrong, but then there are even stronger reasons to replicate.
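statcheck itself is an R package; as a rough illustration of the kind of consistency check it automates, the hypothetical Python sketch below recomputes the p-value implied by a reported t-statistic and its degrees of freedom and flags mismatches. The function name, tolerance and example values are assumptions made for illustration, not part of statcheck.

```python
# Illustrative sketch (not statcheck itself, which is an R package): recompute
# the p-value implied by a reported t-statistic and degrees of freedom, and
# flag reports whose stated p-value does not match. Example values are made up.
from scipy import stats

def check_t_report(t, df, reported_p, two_sided=True, tol=0.005):
    recomputed = stats.t.sf(abs(t), df)   # upper-tail probability of |t|
    if two_sided:
        recomputed *= 2
    consistent = abs(recomputed - reported_p) <= tol
    return recomputed, consistent

for t, df, p in [(2.31, 28, 0.028), (1.20, 45, 0.01)]:
    recomputed, ok = check_t_report(t, df, p)
    print(f"t({df}) = {t}, reported p = {p}: recomputed p = {recomputed:.3f}"
          f" -> {'consistent' if ok else 'INCONSISTENT'}")
```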

There has been visible progress towards embracing replication, with journals such as the American Economic Review requiring authors to submit data and replication files, and with the Reproducibility Project: Psychology (Open Science Collaboration 2015). Yet a reluctance to replicate and to be replicated lingers on. An important cause of this reluctance appears to be the fear of reputational damage after an unsuccessful replication. This fear, however, appears not to be well grounded, according to Fetterman and Sassenberg’s (2015) study of hypothetical failed-replication scenarios. They find that scientists tend to overestimate the reputational damage, and that admitting to having made a mistake is the best thing one can do.

There are also intrinsic difficulties with replication. Field observations, for example in remote villages, are often impossible to replicate, and well-replicated experimental results may not hold true in the field. These issues were discussed in five papers from different fields in a special issue of Science 334 (2 December 2011); Nature (16 May 2013) also had a special issue on the reproducibility of research. Moreover, not all failures to replicate are unambiguous signs in favour of rejection, and this can lead to fierce debates, e.g. on social priming (Abbott 2013). Finally, in some types of qualitative research, knowledge is seen as situated in the researcher. Standpoint theory (Harding 1991), for example, stipulates that research conducted by a different scholar is bound to yield different results, thereby rendering any replication failed by default.

To reduce the problem of data mining in the natural and the social sciences, it has been proposed that journals require plans for data analysis to be submitted before the actual analysis is conducted and the article written (see Olken 2015 for economics). This decreases the likelihood that articles with statistically significant and sensational findings will feature disproportionately in journals, or that authors will attempt to inflate the significance of their findings post hoc (Brodeur et al. 2016). Another remedy is encouraging authors to discuss substantive significance – the effect size of findings – rather than, or along with, statistical significance. In a review of all articles published in the European Sociological Review in 2000-2004 and 2010-2014, Bernardi et al. (2017) found that only one in three articles discussed the findings in terms of effect sizes, while half of the articles incorrectly interpreted statistically insignificant regression coefficients as zero effects. These practices are slowly changing, though, and ever more scholars are becoming convinced that discussing findings solely in terms of statistical significance is inappropriate.
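As a hedged illustration of why effect sizes deserve attention alongside p-values, the sketch below (arbitrary numbers, not drawn from Bernardi et al. or any other study cited here) shows that the same substantively negligible standardized difference can be statistically insignificant in a small sample and highly significant in a very large one.

```python
# Illustration: the same observed group difference of 0.02 standard deviations
# (negligible in substance) yields very different p-values depending on sample
# size, which is why effect sizes should be reported alongside statistical
# significance. Numbers are arbitrary and for illustration only.
from scipy import stats

observed_d = 0.02                      # standardized mean difference (Cohen's d)

for n_per_group in (100, 1_000, 100_000):
    se = (2 / n_per_group) ** 0.5      # SE of the difference, assuming sd = 1 per group
    t = observed_d / se
    df = 2 * n_per_group - 2
    p = 2 * stats.t.sf(abs(t), df)     # two-sided p-value
    print(f"n per group = {n_per_group:>7}: d = {observed_d}, p = {p:.2g}")
```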

If falsehood has been established by an independent committee, flawed papers should be listed in a public database so that the ongoing diffusion of false knowledge can be brought to a halt (Flutre et al. 2011). Moreover, if the flaw is exposed in the same journal as the original paper, and if the author admits the flaw, the effect on the audience is larger than if the correction appears in another journal or the author denies the flaw (Eriksson and Simpson 2013). Fortunately, ever more journals require public access to data; however, subjects’ privacy must be guaranteed (Wicherts 2011). Software scripts, too, often contain errors, or worse, and should be made public together with the publications for which they are used, and possibly be peer reviewed (Joppa et al. 2013). To make this possible, these authors recommend improvements in education and propose that journals educate too, by publishing tutorials. Finally, the review process itself should be reviewed, and reviewer-author and editor-author interactions should be opened to investigators (Lee and Moher 2017).

If nothing else helps and you do find results that are too good to be true, blow the whistle! (Yong et al. 2013)

Editorial note: Misconduct, mistakes and replication were newly debated from 2011 onwards, and important enough for a dedicated AISSR webpage. Now that these topics have become mainstream, I (J.B.) will no longer keep up with the rapidly growing literature, and from 2019 onwards will add only a selection of papers to this page. See for example the continually updated collection at Nature.

References 

  • Abbott, A. (2013). Disputed results a fresh blow for social psychology. Nature 497: 16.
  • Bernardi, F., Chakhaia, L., & Leopold, L. (2017). Sing Me a Song with Social Significance: The (Mis) Use of Statistical Significance Testing in European Sociological Research. European Sociological Review 33: 1-15.
  • Bohannon, J. (2013). Who's afraid of peer review? Science 342: 60-65.
  • Brodeur, A., Lé, M., Sangnier, M., & Zylberberg, Y. (2016). Star wars: The empirics strike back. American Economic Journal: Applied Economics 8: 1-32.
  • Broockman, D., Kalla, J., & Aronow, P. (2015). Irregularities in LaCour (2014). Stanford University.
  • Camerer, C. F., et al. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour 2: 637-644.
  • Carpenter, S. (2012). Psychology’s bold initiative. Science 335: 1558-1561.
  • Crocker, J. (2011). The road to fraud starts with a single step. Nature 479: 151.
  • Eriksson, K. and Simpson, B. (2013). Editorial decisions may perpetuate belief in invalid research findings. PLoS ONE 8: e73364.
  • Fanelli, D. (2013). Redefine misconduct as distorted reporting. Nature 494: 149.
  • Fanelli, D. (2010a). “Positive” results increase down the hierarchy of the sciences. PLoS ONE 5: e10068.
  • Fanelli, D. (2010b). Do pressures to publish increase scientists' bias? An empirical support from US States Data. PLoS ONE 5: e10271.
  • Fetterman, A. K., & Sassenberg, K. (2015). The reputational consequences of failed replications and wrongness admission among scientists. PLoS ONE 10: 1-13.
  • Flutre, T., Julou, T., Riboli-Sasco, L., and Ribrault, C. (2011). Research community: Pilot scheme for misconduct database. Nature 478: 37.
  • Franco, A., Malhotra, N. and Simonovits, G. (2014) Publication bias in the social sciences: Unlocking the file drawer. Science 345: 1502-1505. 
  • Freese, J. and Peterson, D. (2017) Replication in social science. Annual Review of Sociology 43: 147-165.
  • Gunsalus, C.K. and Robinson, A.D. (2018) Nine pitfalls of research misconduct. Nature 557: 297-299.
  • Harding, S. (1991). Whose Science? Whose Knowledge? Milton Keynes: Open University Press.
  • Huberman, B. A. (2012). Sociology of science: Big data deserve a bigger audience. Nature 482: 308.
  • Joppa, L. N., McInerny, G., Harper, R., Salido, L., Takeda, K., O’Hara, K., and Emmott, S. (2013). Troubling trends in scientific software use. Science 340: 814-815.
  • LaCour, M. J., & Green, D. P. (2014). When contact changes minds: An experiment on transmission of support for gay equality. Science 346: 1366-1369. Retracted.
  • Lee, C. J. and Moher, D. (2017). Promote scientific integrity via journal peer review data. Science 357: 256-257.
  • Martinson, B. C., Anderson, M. S., and De Vries, R. (2005). Scientists behaving badly. Nature 435: 737-738.
  • Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K. M., Gerber, A., and Van der Laan, M. (2014). Promoting transparency in social science research. Science 343: 30-31.
  • Neaves, W. (2012). The roots of research misconduct. Nature 488: 121-122.
  • Nolan, P. D. (2003). Questioning textbook truth: suicide rates and the Hawthorne effect. The American Sociologist 34: 107-116.
  • Nuzzo, R. (2014). Statistical errors. Nature 506: 150-152.
  • Olken, B. (2015). Promises and Perils of Pre-Analysis Plans. Journal of Economic Perspectives 29: 61–80.
  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science 349: aac4716.
  • Ruths, D. and Pfeffer, J. (2014) Social media for large studies of behaviour. Science 346: 1063-1064.
  • Starbuck, W. H. (2016). 60th anniversary essay: How journals could improve research practices in social science. Administrative Science Quarterly 61: 165-183.
  • Stojmenovska, D., Bol, T., and Leopold, T. (2017) Does diversity pay? A replication of Herring (2009). American Sociological Review 82: 857-867.
  • Wicherts, J. M. (2011). Psychology must learn a lesson from fraud case. Nature 480: 7.
  • Yong, E., Ledford, H., and Van Noorden, R. (2013). 3 ways to blow the whistle. Nature 503: 454-457.