Call for Papers: Special Track on Systems, Benchmarks and Challenges

This call for papers extends the main call for papers with a special track. Please first read the main CfP; here, we only highlight how this special track differs from the main track.

Single-Blind Review

Since the authors and organizers of systems, benchmarks and challenges are often easily identifiable, and since reviewers often need to inspect the actual framework, submissions to this special track will undergo a single-blind review process; i.e., the authors should be listed on the front page of the submission.

Social Media

Because this track uses single-blind reviewing, the social media silence period does not apply.

What makes a good system submission to this track? 

We follow an approach similar to that of JMLR MLOSS, looking for papers about novel, well-engineered, well-established and well-documented systems. To this end, a submission has to show that:

  1. It is a novel system, offering features or supporting application domains that were not previously available.
  2. It already has an established user base (shown, e.g., by GitHub stars, an active commit history from several developers, an active issue tracker, etc.).
  3. It is an open-source software package with a license that allows users to easily use and contribute to it.
  4. It achieves excellent performance in the addressed application domains.

What makes a good benchmark submission to this track? 

Progress in the field of AutoML is often driven by empirical results. Although the community has made tremendous progress in defining best practices and benchmarks in recent years, we invite submissions that further enhance the quality of benchmarking in AutoML. This could include (but is not strictly limited to):

  • Demonstrating pitfalls in benchmarking AutoML systems and proposing ways to avoid them
  • Proposing new benchmarks (e.g., similar to HPOBench for hyperparameter optimization or the NASBench series for neural architecture search) or substantial extensions of existing benchmarks
  • Approaches for more efficient benchmarking

What makes a good challenge submission to this track?

In recent years, there have been many challenges on AutoML, AutoDL and HPO pushing the community to new heights. Since neither running a meaningful competition nor gaining thorough insights from it is trivial, we invite submissions on the following topics:

  • Design and visions for future challenges on AutoML
  • Post-challenge analysis, highlighting gained insights and future open tasks
  • Methodology and best practices for organizing AutoML challenges

We note that post-challenge analyses and insights, in particular, can be contributed not only by organizers but also by participants.