ReproducibiliTea
Started in early 2018 at the University of Oxford, ReproducibiliTea has now spread to 104 institutions in 25 different countries and is sponsored by the UK Reproducibility Network (UKRN). Attended by both staff and students at the University of Plymouth and hosted by Robert Harlow and Darya Klymenko, this bi-weekly journal club discusses diverse ideas, papers and issues on how to improve research practices, reproducibility of results, and the Open Science movement overall.
Below, you will find a blog outlining the latest ReproducibiliTea journal club sessions. So stick the kettle on, and enjoy!
↓ Scroll to find out more ↓
Previous sessions of ReproducibiliTea ☕
-
Topic A - ‘Easing into open science: A guide for graduate students and their advisors’. PDF: Here
This paper offers valuable guidance to graduate students seeking to embrace open science practices. It underscores the advantages of open science, such as enhanced transparency and collaboration in research. Importantly, the paper emphasizes that open science can be adopted gradually, providing practical steps for students to incorporate these principles into their work, including pre-registration and data sharing. Ethical considerations, such as consent and privacy, are also addressed. The paper encourages students to seek support and resources from mentors and institutions to facilitate this transition and stresses the importance of openly sharing research outputs to increase the visibility and impact of their work. In essence, it serves as a valuable roadmap for graduate students looking to integrate open science into their research endeavors. Our session introduces these concepts, allowing ‘food for thought’ (with tea) on why these practices are so important.
Topic B - ‘Some reasons to use R: A discussion on R for statistical analysis’. Website: Here
This website post by Andy Wills provides some useful reading on why using R is beneficial for research. Free, reproducible, and open source, the software offers a considerable set of tools that complement the Open Science movement. However, recent discussions of student satisfaction here in Plymouth suggest that the complexity of learning to code may act as a barrier for psychologists starting out, especially those not aiming for research-based careers. Yet the skills acquired are invaluable in an ever-advancing society and help make analyses reproducible. Our session explores this, while comparing other statistical analysis tools.
-
‘Five selfish reasons to work reproducibly’: Paper Here
This week's session focused on why working reproducibly pays off in the long run, even at an individual level. The paper outlined the following reasons:
1. Reproducibility helps to avoid disaster by catching errors before it is too late.
2. Reproducibility makes it easier to write papers, as transparent analyses make mistakes easier to spot.
3. Reproducibility helps reviewers see it your way, streamlining the review process.
4. Reproducibility enables continuity in your work by documenting each step.
5. Reproducibility helps build your reputation as an honest and careful researcher.
This session discussed each of these points, as well as each attendee's experience of learning about the benefits of reproducibility, rather than simply following the 'rules of research'. Further discussion covered when best to make young scientists aware of these practices and why they are imperative to the field and the individual alike.
-
‘Furthering Open Science in Behaviour Analysis: An Introduction and Tutorial for Using GitHub in Research’. Paper: Here
This week's session focused on GitHub - a service used to publicly archive elements of research and manage projects. Users can directly upload and edit repositories with version control. GitHub enables extensive and rapid collaboration: cloning of publicly archived repositories aids feedback and reproducibility, as researchers can access and reuse materials from previous studies. Similarly, researchers can view published analysis code, suggest edits, and build upon projects. This ReproducibiliTea session introduced GitHub, discussed how it can be used, and gave attendees the opportunity to have a go at setting up an account. A very useful session for researchers just starting out!
-
‘Writing Empirical Articles: Transparency, Reproducibility, Clarity, and Memorability’. Paper: Here
This session focused on the importance not only of researchers remaining transparent in their work, with a “warts and all” approach, but also of how our work is conveyed through differing writing styles. We can all recall having to read sections of papers several times as the lexical complexity pertaining to articulations employed in publications surpasses the cognitive accessibility of an audience devoid of a scientific predisposition… ahem… yes.
This session discussed adapting science communication to a broader audience for improved accessibility, but noted the challenge that limited word counts in publications pose for conveying complex topics. The conversations extended to teaching experiences, highlighting the focus on developing academic writing skills, emphasizing the need for abstracts to provide layperson-friendly summaries, and underscoring the importance of early instruction in this regard. The session further explored the factors influencing changes in academic writing practices across disciplines.
-
‘How (and Whether) to Teach Undergraduates About the Replication Crisis in Psychological Science’. Paper: Here
This session focused on introducing young scientists to the replication crisis. Discussions centered on when attendees (mostly Psychology PhD students) became aware of the replication crisis in science and the open science movement overall. Experiences varied, with some finding out during their undergraduate degrees and some not being aware until attending the ReproducibiliTea journal club.
Interesting points were made about the use of open science practices by project supervisors and how, again, this varies even within departments. A suggestion was made to create a uniform policy on open science practices that current and new employees could be taught, which could in turn lead to these practices being taught to students. One member of staff commented that they teach the reproducibility crisis as early as the first year of undergraduate study, though this is not standard amongst other lecturers.
The paper attached provides a 1-hour lecture on replicable research practices, which can be updated and delivered to staff and students easily. Some attendees stated that sharing this with their supervisors may prove beneficial, not just for their own research, but for the awareness of open science practices among seasoned academics.
-
Fallibility in Science: Responding to Errors in the Work of Oneself and Others. Paper: Here
This session focused on the importance of authors reporting errors in both their own work and that of their peers. A vital element of maintaining reproducible research! We discussed how reporting errors may indeed lead to publications being retracted, but how this does not seem to negatively impact the career progression of the author. In fact, demonstrating honesty and integrity in dealing with research errors is the sign of a respectable scientist. As embarrassing as it may be to admit errors, hiding them is worse. The paper outlined in this session provided guidance and examples on how to deal with mistakes, stressing that the reporting of errors should not be taken personally. ‘Criticism is the bedrock of the scientific method’. The report finishes with the statement: “As open science becomes increasingly the norm, we will find that everyone is fallible. The reputations of scientists will depend not on whether there are flaws in their research, but on how they respond when those flaws are noted”.
-
Computational Reproducibility via Containers in Psychology. Paper: Here.
This session explored a valuable resource for open science: containers. Crucial to reproducible research is the ability to replicate coded analyses. However, even with the correct code and data, different results may be obtained: software packages update, changing their outputs, and other labs may not have the same programs used to conduct the original analysis. Containers are a solution to these issues. The example in this article is Code Ocean, a container platform that allows researchers to publish their data, code, and analysis in capsules that can be run in exactly the same way at the click of a button on the Code Ocean website. All analyses are timestamped and open access, meaning anyone with an internet connection can replicate the results of a study published there. Each capsule also comes with its own DOI. An example of a Code Ocean capsule is found HERE.
Our session explored Code Ocean from a beginner's perspective, discussed its advantages, and considered how to advertise it to other members of our research labs. Points were made about usability, whether it is available for different coding languages (it is), and whether it holds an advantage over other capsule platforms. Overall, the software and the arguments for it were well received.
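The core idea behind a capsule is that the computational environment travels with the analysis. As a minimal illustration of one small ingredient of that idea (this is not Code Ocean's actual mechanism, just a sketch using Python's standard library), an analysis script can record the interpreter and platform it ran on, so that environment mismatches can at least be detected later:

```python
# Minimal environment "manifest" for reproducibility. Illustrative only:
# a real container (e.g. a Code Ocean capsule or a Docker image) pins the
# entire software stack, not just these few fields.
import json
import platform


def environment_manifest():
    """Collect basic facts about the interpreter and machine that ran an analysis."""
    return {
        "python_version": platform.python_version(),
        "implementation": platform.python_implementation(),
        "os": platform.system(),
        "machine": platform.machine(),
    }


if __name__ == "__main__":
    # Saving this JSON alongside the results lets other researchers spot
    # obvious environment differences when a replication attempt disagrees.
    print(json.dumps(environment_manifest(), indent=2))
```

Full containers go much further, pinning exact package versions and system libraries, which is why a capsule can promise bit-for-bit identical reruns where a loose script cannot.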
-
A Multilab Replication of the Induced-Compliance Paradigm of Cognitive Dissonance. Paper: Here.
This session discussed the replication of a well-used paradigm, the ‘induced-compliance paradigm’, across different labs, institutions, and cultures. Discussions centered around the fluid definitions of cognitive dissonance from different perspectives, how methodologies change according to the norms of an institution or lab, and to what extent these differences in practice contribute to a replication crisis of even well-established experimental effects. Attending members spoke of personal experiences of how research practices varied across institutions, and of how both general practices and the specific protocol and interpretation of a paradigm influence the credibility of research outputs. One example made was that two institutions with the same equipment, running the same experiment with the same protocol, should produce the same results; yet external factors may influence this. Overall, engaging points were made by members regarding a topic that is easily overlooked in the ongoing replication crisis.
-
From symbiont to parasite: the evolution of for-profit science publishing. Paper: Here
This session discussed the opinion that publishing companies are no longer a force for good in the scientific community, or at least not for the purposes they present themselves as serving. We discussed how large publishing companies charge researchers to publish their work, charge universities for access to that work, have academics review papers voluntarily, and charge extra if an author wishes their work to be open access. Back in 2023, 40 editors from one of Elsevier's leading journals, ‘Neuroimage’, quit, stating that publishing fees were unsustainable and an exploitation of authors' work. Discussions were had about how we as a community use the yardstick of journal titles to measure accomplishment, while being charged for it and relinquishing copyright to our own work. It is easy for early career researchers to lose their passion when the number of publications appears to equate to career progression. A change to the status quo is needed, and the paper used here suggests a plan of action to this effect (Plan S). The opinion piece was very well received by attendees and is regarded as a great recommendation for all!
-
Open Science Isn’t Always Open to All Scientists. Paper: Here
This session discussed how open science aims to make research more accessible, but how open science practices can in fact create barriers for researchers from underprivileged backgrounds with limited resources. The paper discussed several points, including:
- Open data sharing requires resources like high-performance computing and large storage capacities.
- There are geographic biases in open science tools, platforms, and support networks.
- Open science can reinforce existing inequalities if not implemented equitably.
- A lack of training and incentives for open practices in some regions is a barrier.
- Cultural differences in attitudes towards data sharing exist across the world.
- Ethical concerns around data sharing, like privacy and consent, are considered differently in different places.
- Open science needs to be inclusive of diverse research contexts and perspectives.
- Funding agencies should support open science capacity building in underprivileged regions.
- Multilingual and culturally-aware open science platforms are needed.
Each of these points was discussed with reference to our own personal experiences as a multicultural group from a variety of backgrounds. Some members spoke of little awareness of open science at previous institutions they attended. Others outlined how differences in software, and in access to it, made the continuity of open research practices harder. Overall, a constructive session considering a side of open science that is easily overlooked.
-
Neural correlates of interspecies perspective taking in post-mortem Atlantic Salmon: An argument for proper multiple comparisons correction. Paper: Here
This week's session discussed an oldie but a goodie: putting a fish in an MRI scanner! This paper highlighted the importance of properly correcting for multiple comparisons when analyzing neuroimaging data, such as fMRI. The paper outlined that in 2008, proper multiple-comparisons corrections were used in only 74% of NeuroImage papers, 67.5% of Cerebral Cortex papers, 60% of Social Cognitive and Affective Neuroscience papers, 75.4% of Human Brain Mapping papers, and 61.8% of Journal of Cognitive Neuroscience papers, with some researchers selectively applying corrections based on the results obtained! To highlight this issue in dramatic fashion, the authors conducted an fMRI study on a deceased Atlantic salmon to demonstrate the risk of false positives without correction. The salmon "subject" still showed significant activation in its brain despite being deceased, illustrating the high probability of false positives across the tens of thousands of voxels in fMRI data when no correction is applied. The authors argue for the routine and proper use of multiple-comparisons correction methods, such as false discovery rate (FDR), in neuroimaging research to improve validity.
Discussions covered the statistical implications of improperly applied multiple-comparisons corrections, as well as the humor of the paper itself. Members queried to what extent the methods were followed as described, with the favorite quote being “The salmon was asked to determine which emotion the individual in the photo must have been experiencing”. Overall, the session was well received and the paper a fine depiction of what awaits us all following improper statistics… zombie fish!
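The salmon's point can be reproduced at the keyboard. The sketch below (standard library only; the voxel count and FDR level are illustrative choices, not taken from the paper) simulates many tests where no true effect exists: uncorrected thresholding at p < .05 yields thousands of false positives, while a Benjamini-Hochberg FDR correction, one of the methods the authors recommend, removes essentially all of them.

```python
# Why uncorrected voxel-wise testing finds "activation" in a dead fish:
# with tens of thousands of tests and no true effect, p < .05 alone
# guarantees false positives. FDR correction does not.
import random


def benjamini_hochberg(p_values, q=0.05):
    """Return the indices of tests that survive FDR correction at level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * q; all ranks up to k pass.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * q:
            k_max = rank
    return set(order[:k_max])


random.seed(42)
# 50,000 "voxels" with no true effect: p-values are uniform on [0, 1].
p_vals = [random.random() for _ in range(50_000)]

uncorrected = sum(p < 0.05 for p in p_vals)  # roughly 5% of 50,000 tests
corrected = len(benjamini_hochberg(p_vals))  # typically none under the null

print("uncorrected:", uncorrected, "| FDR-corrected:", corrected)
```

Around 2,500 voxels pass the uncorrected threshold purely by chance, which is exactly the mechanism behind the salmon's "neural activity".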
-
Six factors affecting reproducible research and how to handle them (article HERE).
This session focused on an article published in Nature discussing the barriers that impact reproducibility. The factors highlighted include:
- A lack of access to methodological details, raw data, and research materials.
- Misidentification or improper maintenance of materials.
- Inability to manage complex data due to lack of tools or knowledge.
- Poor research practices or experimental designs.
- Cognitive bias.
- A competitive culture that rewards novel findings and undervalues negative results.
A discussion was had on how these elements impact robust research, how this varies across countries, and what experiences attendees have had with these factors. Further discussion covered how to address these factors, acknowledging how open science practices have improved research to date. Overall, the session was well received and sparked some interesting discussions!
-
This week's session focused on the Replication Games.
The Replication Games are a one-day event run by the Institute for Replication in collaboration with the UK Reproducibility Network (UKRN), where participants form teams and replicate existing results, or analyze new data, from a recently published study in a high-ranking journal, with the opportunity to be included as a co-author on a meta-paper. The session was an opportunity to look at the resources together, chat about replication papers, and think about participation in the upcoming Replication Games.
-
This week's session focused on preparing attendees for the upcoming Replication Games, a one-day event that brings researchers together to collaborate on reproducing and replicating papers published in high-ranking journals, all in a single day! The majority of those in attendance were conveniently placed on one team, so we prepared our plan of action for attempting to replicate a chosen paper once the games commence. The games are an exciting opportunity to demonstrate the importance of open science practices, and a great insight into the replication crisis in science. Findings are compiled into a meta-paper and published.
-
This week's session lasted the entire day! A whole day of tea and good science, what could be better?! Teammates who registered for the Replication Games performed computational replications and robustness checks on a paper published in Psych Science, allowing us to see whether the published code ran without errors, whether it produced the same results described in the study, and whether alternative plausible analytical decisions produced similar results.
A very rewarding day collaborating with researchers around the world and an excellent opportunity to gain new skills in open science practices. The results of our replication attempts will be published in Psych Science some time in 2025!
-
‘Editor Bias and Transparency in Psychology’s Open Science Era’ paper HERE
This week's session discussed editor bias, whereby journal editors display bias in a variety of ways. These include:
- Identity bias: Bias based on the characteristics of the authors, such as gender, race, status, and connections.
- Content bias: Bias against the content of submissions that is politically sensitive or goes against the grain of current knowledge, even when substantial and significant for society (Nobel Prize winners have had work rejected by journals - Campanario, 2009).
Discussions were had on how impactful these biases can be, with attendees expressing surprise at instances where editor bias has been shown to be overt and purposeful. One example given was editors instructing reviewers to accept submissions from known and well-seasoned academics while shunning early-career researchers, as this improves the impact factor of their journal. Many insightful points were made about how early-career researchers feel about these issues and what solutions could address editor bias. The paper itself outlines some key suggestions, including masked reviewing, diversity of editors, and full transparency within the editing and reviewing process, though these solutions are not without their own challenges.
Overall, the session provided a forum for discussion and was well received by attendees.
-
This session presented something a little different, though still paramount to open science: science communication.
Our session began by discussing types of science communication, how the role of a science communicator functions, and what challenges impact the communication of complex topics to a lay audience. The session showcased some favorite examples of good science communication. These included:
- YouTube channels: Kurzgesagt, Pindex, and Veritasium.
- Book: An Adventure in Statistics: The Reality Enigma – Andy Field.
Overall, the session was well received, with insightful discussions surrounding the examples presented and how science should be accessible to all. Particular focus was given to Andy Field's book, which two attendees have read in full, as a way to convey complex statistical concepts to lay readers or those beginning an academic journey. A video from Kurzgesagt was also shown to showcase industry careers in science communication that PhD graduates could pursue, as opposed to the usual choice of academia or data science.
Thank you for reading. The latest sessions from the ReproducibiliTea journal club will be uploaded soon!