The Project
Being able to duplicate published research results is an important part of the research process, whether to build upon these findings or to compare with them. This process is called "replicability" when it relies on the original authors' artifacts (e.g., their code), and "reproducibility" otherwise (e.g., re-implementing the algorithms). Reproducibility and replicability of research results have gained a lot of interest recently, with assessment studies being conducted in various fields, and they are often seen as a driver of better result diffusion and transparency. In this project, we assess replicability in Computer Graphics by evaluating whether the code is available and whether it works properly. As a proxy for this field, we compiled, ran, and analyzed 151 codebases out of 374 papers from the 2014, 2016, and 2018 SIGGRAPH conferences. This analysis shows a clear increase in the number of papers with available and operational research code, with variations across subfields, and indicates a correlation between code replicability and citation count.
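To illustrate the kind of association mentioned above, here is a minimal sketch of how one might test for a correlation between a per-paper replicability score and a citation count. The data below are made-up placeholders (not project data), and the paper's actual statistical methodology may differ; a rank correlation is shown only as one plausible choice.

```python
# Hypothetical example: rank correlation between replicability scores and citations.
# The numbers below are invented placeholders, not data from the study.
from scipy.stats import spearmanr

# (replicability score, citation count) per hypothetical paper
papers = [
    (0, 12), (5, 48), (3, 30), (0, 9),
    (4, 55), (2, 20), (5, 70), (1, 15),
]

scores = [s for s, _ in papers]
citations = [c for _, c in papers]

rho, p_value = spearmanr(scores, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```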
This website provides an interactive tool to explore our results and evaluation data. It also provides tools to comment on the various codebases, either as an author or as a user.
Our project aims to provide the community with tools to improve the replicability of Computer Graphics research. Sharing this goal is the Graphics Replicability Stamp Initiative, whose objective is to highlight replicable research in Computer Graphics.
You can contribute new code analyses for Computer Graphics papers. We look forward to your contributions. You can also contact us.
Explore
Explore the data and our replicability scores
Analyze
Read our SIGGRAPH 2020 paper on the 374 analyzed SIGGRAPH papers.
Contribute
Add comments or new analyses for Computer Graphics papers.