Several systems research conferences now incorporate an artifact description and artifact evaluation (AD/AE) process as part of paper submission. Authors of accepted papers may submit a range of artifacts: documentation, links, tools, code, data, and scripts that enable independent validation of the claims in their paper. An artifact evaluation committee (AEC) evaluates the artifacts, and papers whose artifacts are accepted receive publisher badges. Does this AD/AE process serve authors and reviewers? Is it scalable to large conferences such as SCxy? Using the last three SCxy Reproducibility Initiatives as its basis, this talk will analyze the AD/AE process with a data-driven approach. We will distinguish studies that benefit from AD, i.e., increased transparency, from those that benefit from AE. We will close with a vision for the resulting curated, reusable research objects: such research objects are a treasure in themselves for advancing computational reproducibility and making reproducible evaluation practical in the coming years.