The reliability of replications: a study in computational reproductions.
Breznau N., Rinke EM., Wuttke A., Adem M., Adriaans J., Akdeniz E., Alvarez-Benjumea A., Andersen HK., Auer D., Azevedo F., Bahnsen O., Bai L., Balzer D., Bauer PC., Bauer G., Baumann M., Baute S., Benoit V., Bernauer J., Berning C., Berthold A., Bethke FS., Biegert T., Blinzler K., Blumenberg JN., Bobzien L., Bohman A., Bol T., Bostic A., Brzozowska Z., Burgdorf K., Burger K., Busch K., Castillo J-C., Chan N., Christmann P., Connelly R., Czymara CS., Damian E., de Rooij EA., Ecker A., Edelmann A., Eder C., Eger MA., Ellerbrock S., Forke A., Forster A., Freire D., Gaasendam C., Gavras K., Gayle V., Gessler T., Gnambs T., Godefroidt A., Grömping M., Groß M., Gruber S., Gummer T., Hadjar A., Halbherr V., Heisig JP., Hellmeier S., Heyne S., Hirsch M., Hjerm M., Hochman O., Höffler JH., Hövermann A., Hunger S., Hunkler C., Huth-Stöckle N., Ignácz ZS., Israel S., Jacobs L., Jacobsen J., Jaeger B., Jungkunz S., Jungmann N., Kanjana J., Kauff M., Khan S., Khatua S., Kleinert M., Klinger J., Kolb J-P., Kołczyńska M., Kuk J., Kunißen K., Kurti Sinatra D., Langenkamp A., Lee RC., Lersch PM., Liu D., Löbel L-M., Lutscher P., Mader M., Madia JE., Malancu N., Maldonado L., Marahrens H., Martin N., Martinez P., Mayerl J., Mayorga OJ., McDonnell R., McManus P., McWagner K., Meeusen C., Meierrieks D., Mellon J., Merhout F., Merk S., Meyer D., Micheli L., Mijs J., Moya C., Neunhoeffer M., Nüst D., Nygård O., Ochsenfeld F., Otte G., Pechenkina A., Pickup M., Prosser C., Raes L., Ralston K., Ramos M., Reichert F., Roets A., Rogers J., Ropers G., Samuel R., Sand G., Sanhueza Petrarca C., Schachter A., Schaeffer M., Schieferdecker D., Schlueter E., Schmidt K., Schmidt R., Schmidt-Catran A., Schmiedeberg C., Schneider J., Schoonvelde M., Schulte-Cloos J., Schumann S., Schunck R., Seuring J., Silber H., Sleegers W., Sonntag N., Staudt A., Steiber N., Steiner ND., Sternberg S., Stiers D., Stojmenovska D., Storz N., Striessnig E., Stroppe A-K., Suchow JW., Teltemann J., Tibajev A., Tung B., Vagni G., Van Assche J., van der Linden M., van der Noll J., Van Hootegem A., Vogtenhuber S., Voicu B., Wagemans F., Wehl N., Werner H., Wiernik BM., Winter F., Wolf C., Wu C., Yamada Y., Zakula B., Zhang N., Ziller C., Zins S., Żółtak T., Nguyen HHV.
This study investigates researcher variability in computational reproduction, an activity for which it is least expected. Eighty-five independent teams attempted numerical replication of results from an original study of policy preferences and immigration. Reproduction teams were randomly assigned to a 'transparent group', which received the original study and its code, or an 'opaque group', which received only a description of the methods and results and no code. The transparent group mostly verified the original results (95.7% reproduced the same sign and p-value cutoff), while the opaque group was less successful (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating out mistakes, we find that only the transparent group was reliably successful. Our findings imply a need for transparency, but transparency alone is not enough. Institutional checks and reducing the subjective difficulty researchers face when 'doing reproduction' would also help, implying a need for better training. We further urge increased awareness of the complexity of the research process, even in seemingly 'push button' replications.
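To make the two success criteria in the abstract concrete, the following minimal Python sketch classifies hypothetical reproduction attempts by (a) agreement in sign and p-value cutoff and (b) exact agreement of the estimate to the second decimal place. The data, variable names and the 0.05 cutoff are illustrative assumptions, not the teams' materials or the authors' analysis code.

```python
# Minimal sketch (assumed data and cutoff, for illustration only):
# classify reproduction attempts against an original estimate using
# (a) same sign and same side of the p-value cutoff, and
# (b) exact match of the coefficient rounded to two decimal places.

ALPHA = 0.05  # assumed significance cutoff for criterion (a)

# Hypothetical original result and reproduction attempts (coefficient, p-value).
original = {"coef": 0.127, "p": 0.003}
attempts = [
    {"team": 1, "coef": 0.127, "p": 0.004},   # exact to 2 decimals, same inference
    {"team": 2, "coef": 0.121, "p": 0.020},   # same inference, not exact
    {"team": 3, "coef": -0.040, "p": 0.600},  # different sign and significance
]


def same_inference(orig, rep, alpha=ALPHA):
    """Criterion (a): same sign and same side of the p-value cutoff."""
    same_sign = (orig["coef"] >= 0) == (rep["coef"] >= 0)
    same_sig = (orig["p"] < alpha) == (rep["p"] < alpha)
    return same_sign and same_sig


def exact_to_two_decimals(orig, rep):
    """Criterion (b): coefficients identical when rounded to two decimals."""
    return round(orig["coef"], 2) == round(rep["coef"], 2)


n = len(attempts)
rate_a = sum(same_inference(original, a) for a in attempts) / n
rate_b = sum(exact_to_two_decimals(original, a) for a in attempts) / n
print(f"sign/p-cutoff agreement: {rate_a:.1%}; exact to 2 decimals: {rate_b:.1%}")
```

Criterion (a) is deliberately weaker than criterion (b): an attempt can reach the same substantive conclusion while differing numerically, which is why the reported agreement rates fall when exact reproduction to two decimals is required.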