The Untrustworthy Evidence in Dishonesty Research


  • František Bartoš, University of Amsterdam



Keywords: z-curve, TIVA, test statistics, statistical power, false positive risk


Replicable and reliable research is essential for cumulative science and its applications in practice. This article examines the quality of research on dishonesty using a sample of 286 hand-coded test statistics from 99 articles. Z-curve analysis indicates a low expected replication rate, a high proportion of missing studies, and an inflated false discovery risk. The test of insufficient variance (TIVA) finds that 11 of the 61 articles with multiple test statistics report results that are "too good to be true". A sensitivity analysis confirms the robustness of these findings. In conclusion, caution is advised when relying on or applying the existing literature on dishonesty.
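The TIVA logic described above is simple enough to sketch in a few lines: convert each reported two-sided p-value to a z-score and test whether the variance of those z-scores is significantly below 1, the minimum expected for independent test statistics. The snippet below is a minimal stdlib-only Python illustration of this idea (Schimmack, 2014), not the article's own analysis code, which used R; the example p-values are hypothetical.

```python
import math
from statistics import NormalDist

def chi2_cdf(x, df):
    """Lower-tail chi-square CDF via the series expansion of the
    regularized lower incomplete gamma function P(df/2, x/2)."""
    a, y = df / 2.0, x / 2.0
    if y <= 0:
        return 0.0
    term, total, n = 1.0, 1.0, 0
    while term > 1e-12 * total:
        n += 1
        term *= y / (a + n)
        total += term
    return math.exp(a * math.log(y) - y - math.lgamma(a + 1.0)) * total

def tiva(p_values):
    """Test of Insufficient Variance: convert two-sided p-values to
    z-scores and test whether their sample variance is significantly
    smaller than 1, the value expected for independent tests."""
    z = [NormalDist().inv_cdf(1 - p / 2) for p in p_values]
    k = len(z)
    mean = sum(z) / k
    var = sum((zi - mean) ** 2 for zi in z) / (k - 1)  # sample variance
    # Under H0, (k - 1) * var follows a chi-square distribution with
    # k - 1 degrees of freedom; a small left-tail p-value signals
    # "too little" variance, i.e. results too good to be true.
    return var, chi2_cdf((k - 1) * var, k - 1)

# Hypothetical article reporting p-values bunched just below .05:
variance, p_tiva = tiva([0.049, 0.041, 0.032, 0.045, 0.038])
```

A variance far below 1 with a small left-tail p-value, as in this constructed example, is the pattern TIVA flags as implausible under honest reporting of independent tests.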



