Each tool and case study submission, except in a limited number of cases (e.g., industrial or proprietary benchmarks), must be accompanied by a Repeatability Evaluation Package (REP) due by the date provided in the CFP. The reviews by the Repeatability Evaluation Committee (REC) are considered when making the final decision on acceptance of the paper.

Authors of accepted regular papers are also encouraged to submit a repeatability package. Among the RE packages submitted, papers whose results are deemed repeatable will receive a repeatability badge on the first page of the published version and will be highlighted on the conference website. The submission date is approximately a week after the paper notifications are sent out; see the CFP for the exact date.

Submission Site: EasyChair

Note for tool and case study paper authors on the double-blind process: while we expect tool/case study papers submitted to the conference to follow the double-blind instructions, the RE packages do not need to be anonymised. To honour the anonymity of the authors during the main review process, the RE packages will be evaluated by a different committee.

Submission Guidelines

The Repeatability Evaluation Package (REP) consists of three components:

  • A copy (in PDF format) of the submitted paper. This copy will be used by the REC to evaluate how well the elements of the REP match the paper.
  • A document (a webpage, a PDF, or a plain-text file) explaining, at a minimum:
    • Which elements of the paper are included in the REP (e.g., specific figures or tables).
    • Instructions for installing and running the software and extracting the corresponding results. Try to keep this as simple as possible through easy-to-use scripts; a minimal sketch of such a script is given after this list.
    • The system requirements for running the REP (e.g., OS, compilers, environments). The document should also include a description of the host platform used to prepare and test the Docker image or virtual machine.
  • The software and any accompanying data. These should be made available via a link that remains accessible throughout the review process. Please prepare one of the following:
    • A Docker image (preferred).
    • A virtual machine. You may use VirtualBox to export a VM image as an OVA file.
    • If neither option is viable (for example, if your software depends on other licensed software that cannot be included in a VM), please contact the RE PC chairs to make other arrangements.
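As one illustration of the "easy-to-use scripts" point above, the sketch below shows what a top-level entry point could look like. All names (reproduce.py, the results/ directory, the target functions) are hypothetical and indicate only one possible layout, not a prescribed one; any equivalent structure that maps paper elements to single commands is acceptable.

    #!/usr/bin/env python3
    """Hypothetical REP entry point: regenerates selected elements of the paper.

    Usage:
        python3 reproduce.py figure2   # regenerate Figure 2
        python3 reproduce.py table1    # regenerate Table 1
        python3 reproduce.py all       # regenerate everything
    """
    import argparse
    import pathlib

    RESULTS_DIR = pathlib.Path("results")  # all outputs are written here


    def reproduce_figure2(out_dir: pathlib.Path) -> None:
        # Placeholder: in a real REP, run the tool on the Figure 2 benchmark
        # and save the plot, e.g. as out_dir / "figure2.png".
        (out_dir / "figure2.txt").write_text("placeholder for Figure 2 data\n")


    def reproduce_table1(out_dir: pathlib.Path) -> None:
        # Placeholder: run the benchmarks behind Table 1 and write a CSV
        # with the same columns as in the paper.
        (out_dir / "table1.csv").write_text("benchmark,runtime_s\n")


    TARGETS = {"figure2": reproduce_figure2, "table1": reproduce_table1}


    def main() -> None:
        parser = argparse.ArgumentParser(description="Reproduce paper elements")
        parser.add_argument("target", choices=[*TARGETS, "all"])
        args = parser.parse_args()

        RESULTS_DIR.mkdir(exist_ok=True)
        selected = TARGETS if args.target == "all" else {args.target: TARGETS[args.target]}
        for name, run in selected.items():
            print(f"[REP] reproducing {name} -> {RESULTS_DIR}/")
            run(RESULTS_DIR)


    if __name__ == "__main__":
        main()

If the REP is packaged as a Docker image or virtual machine, a script of this kind can serve as the documented entry point, so that a reviewer only needs to run one command per computational element.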

General Guidelines:

  • When preparing your REP, keep in mind that the most common cause of repeatability failure is installation problems. We recommend having an independent member of your lab test your installation instructions and the REP on a clean machine before final submission.
  • The REP should run without a network connection and should have all the required software installed.
  • If the REP requires computing resources that exceed the capabilities of a standard laptop and/or takes a long time to complete, a simpler result/benchmark should be included in addition to the complex one, to facilitate the review process.
  • Please include instructions on the expected output of each script in the REP and how it relates to the computational elements in the paper; one way to organise this is sketched after this list.
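To illustrate the last two points, the sketch below shows one way a benchmark driver might expose a reduced mode and report expected output. The flag name (--quick), the file expected/quick_results.json, and the reference values are hypothetical; the accompanying document should state which table or figure of the paper each number corresponds to.

    #!/usr/bin/env python3
    """Hypothetical benchmark driver with a reduced mode for reviewers."""
    import argparse
    import json
    import pathlib

    # Reference results shipped with the REP (illustrative file name and values).
    REFERENCE = pathlib.Path("expected/quick_results.json")


    def run_benchmarks(quick: bool) -> dict:
        # Placeholder: in a real REP this would invoke the tool on each instance.
        # The quick set should finish on a standard laptop.
        instances = ["small_1", "small_2"] if quick else ["small_1", "small_2", "large_1"]
        return {name: {"runtime_s": 0.0, "verified": True} for name in instances}


    def main() -> None:
        parser = argparse.ArgumentParser()
        parser.add_argument("--quick", action="store_true",
                            help="run only the reduced benchmark set")
        args = parser.parse_args()

        results = run_benchmarks(args.quick)
        print(json.dumps(results, indent=2))

        # Compare against the reference values so the reviewer immediately sees
        # whether the run matches what the paper reports.
        if REFERENCE.exists():
            expected = json.loads(REFERENCE.read_text())
            for name, ref in expected.items():
                got = results.get(name)
                status = "OK" if got and got["verified"] == ref["verified"] else "MISMATCH"
                print(f"{name}: {status} (expected verified={ref['verified']})")


    if __name__ == "__main__":
        main()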

Other information:

REPs are considered confidential material in the same sense as initial paper submissions: committee members agree not to share REP contents and to delete them after evaluation. REPs remain the property of the authors, and there is no requirement to post them publicly (although we encourage you to do so).

The repeatability evaluation process uses anonymous reviews so as to solicit honest feedback. Authors of REPs should make a genuine effort to avoid learning the identity of the reviewers. This may require turning off analytics or using only systems with high enough traffic that REC accesses will not be apparent. In all cases where tracing is unavoidable, the authors should provide warnings in the documentation so that reviewers can take the necessary precautions to maintain anonymity.

Repeatability Evaluation Criteria

The submissions will be evaluated based on the following criteria:

Instructions/Documentation/Testing: How easy is it to install and run the REP? How well is the code in the REP documented? How well is the REP tested?

Ease of Code Reuse: How easy is it to reuse the tool or modules within the tool? Are instructions included to facilitate extension and reuse?

Coverage: To what extent does the REP enable reproducibility of the computational elements presented in the paper?

Background and Goals

HSCC has a rich history of publishing strong papers emphasizing computational contributions; however, subsequent re-creation of these computational elements is often challenging because details of the implementation are unavoidably absent from the paper. Some authors post their code and data on their websites, but there is little formal incentive to do so and no easy way to determine whether others can actually use the result. As a consequence, computational results often become non-reproducible, even by the research group that originally produced them, after just a few years.

The goal of the HSCC repeatability evaluation process is to improve the reproducibility of computational results in the papers selected for the conference. 

Benefits for Authors

We hope that this process will provide the following benefits to authors:

  • Raise the profile of papers containing repeatable computational results by highlighting them at the conference and online.
  • Raise the profile of HSCC as a whole, by making it easier to build upon the published results.
  • Provide authors with an incentive to adopt best-practices for code and data management that are known to improve the quality and extendability of computational results.
  • Provide authors an opportunity to receive feedback from independent reviewers about whether their computational results can be repeated.
  • Give authors a special mention in the conference proceedings and the opportunity to take part in the competition for the best RE award.

While creating a repeatability package will require some work from the authors, we believe the cost of that extra work is outweighed by a direct benefit to members of the authors’ research lab: if an independent reviewer can replicate the results with a minimum of effort, it is much more likely that future members of the lab will also be able to do so, even if the primary author has departed.

The repeatability evaluation process for HSCC draws upon several similar efforts at other conferences (SIGMOD, SAS, CAV, ECOOP, OOPSLA), and a first experimental run was held at HSCC14.