1. Introduction
Both the ethical implications of the research published at the AutoML-Conf and its accessibility to everyone are key concerns that authors and reviewers should address.
The first part of these guidelines discusses how and why we review papers with regard to their real-world impact and provides guidance on how to integrate such discussions into the submissions themselves. Reviewers should base their ethics review on the criteria presented here. This section is based on the NeurIPS ethics guidelines.
Furthermore, we expect authors to make their papers as accessible as possible so that both reviewers and readers can enjoy them fully. We therefore provide a list of accessibility measures; reviewers are encouraged to evaluate submissions against it and to suggest improvements. Beyond the submission, there is also information on how to create inclusive and accessible presentations and posters, so that the audience will have the best possible experience. This section is based on the ICML guidelines for accessible and inclusive papers, talks, and posters.
2. The AutoML-Conf Ethics Review Process
As ML research and applications have increasing real-world impact, the likelihood of meaningful social benefit increases, as does the attendant risk of harm. Indeed, problems with data privacy, algorithmic bias, automation risk, and potential malicious uses of AI have been well-documented [e.g. by Whittlestone et al. 2019].
In light of these findings, ML researchers can no longer ‘simply assume that… research will have a net positive impact on the world’ [Hecht et al., 2018]. The research community should consider not only the potential benefits but also the potential negative societal impacts of ML research, and adopt measures that enable positive trajectories to unfold while mitigating the risk of harm. We therefore expect authors to discuss such ethical and societal consequences of their work in their papers, while avoiding excessive speculation. The AutoML-Conf template provides a section for this broader impact statement.
This document should be used by authors and reviewers (including regular reviewers and ethics reviewers) to establish a shared understanding of the AutoML-Conf ethics principles. The primary goal for reviewers should be to provide critical feedback for the authors to incorporate into the paper. We do not expect this feedback to be the deciding factor for acceptance in most cases, though papers may be rejected if substantial concerns are raised that the authors are not able to address.
There are two aspects of ethics we consider: potential negative societal impacts (Section 2.1) and general ethical conduct in research (Section 2.2). Both sections provide authors and reviewers with prompts to reflect on a submission’s possible harms. The broader impact statement of a paper does not need to answer these exact questions, or all of them, but it should adequately address both categories of ethics.
J. Whittlestone, R. Nyrup, A. Alexandrova, K. Dihal, and S. Cave. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.
B. Hecht, L. Wilcox, J. P. Bigham, J. Schöning, E. Hoque, J. Ernst, Y. Bisk, L. De Russis, L. Yarosh, B. Anjum, D. Contractor, and C. Wu. (2018) It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process. ACM Future of Computing Blog.
2.1 Potential Negative Societal Impacts
Submissions to the AutoML-Conf are expected to include a discussion about potential negative societal impacts of the proposed research artifact or application. (This corresponds to question 1c of the Reproducibility Checklist). Whenever these are identified, submissions should also include a discussion about how these risks can be mitigated.
Grappling with ethics is a difficult problem for the field, and thinking about ethics is still relatively new to many authors. Given the controversial nature of these questions, we place a strong emphasis on transparency. In certain cases, it will not be possible to draw a bright line between ethical and unethical. A paper should therefore discuss any potential issues openly, welcoming a broader discussion that engages the whole community.
A common difficulty with assessing ethical impact is its indirectness: most papers focus on general-purpose methodologies (e.g., optimization algorithms), whereas ethical concerns are more apparent when considering deployed applications (e.g., surveillance systems). Also, real-world impact (both positive and negative) often emerges from the cumulative progress of many papers, so it is difficult to attribute the impact to an individual paper.
The ethical consequences of a paper can stem from either the methodology or the application. On the methodology side, for example, a new adversarial attack might give unbalanced power to malicious entities; in this case, defenses and other mitigation strategies would be expected, as is standard in computer security. On the application side, in some cases the choice of application is incidental to the core contribution of the paper, and a potentially harmful application should be swapped out (as an extreme example, replacing ethnicity classification with bird classification), though the potential misuses should still be noted. In other cases, the core contribution might be inseparable from a questionable application (e.g., reconstructing a face given speech). In such cases, one should critically examine whether the scientific (and ethical) merits really outweigh the potential ethical harms.
A non-exhaustive list of potential negative societal impacts is included below. Consider whether the proposed methods and applications can:
- Directly facilitate injury to living beings. For example: could it be integrated into weapons or weapons systems?
- Raise safety or security concerns. For example: is there a risk that applications could cause serious accidents or open security vulnerabilities when deployed in real-world environments?
- Raise human rights concerns. For example: could the technology be used to discriminate, exclude, or otherwise negatively impact people, including impacts on the provision of vital services, such as healthcare and education, or limit access to opportunities like employment? Please consult the Toronto Declaration for further details.
- Have a detrimental effect on people’s livelihood or economic security. For example: could it affect people’s autonomy, dignity, or privacy at work, or threaten their economic security (e.g., via automation or disruption of an industry)? Could it be used to increase worker surveillance, or impose conditions that present a risk to the health and safety of employees?
- Develop or extend harmful forms of surveillance. For example: could it be used to collect or analyze bulk surveillance data to predict immigration status or other protected categories, or be used in any kind of criminal profiling?
- Severely damage the environment. For example: would the application incentivize significant environmental harms such as deforestation, fossil fuel extraction, or pollution?
- Deceive people in ways that cause harm. For example: could the approach be used to facilitate deceptive interactions that would cause harms such as theft, fraud, or harassment? Could it be used to impersonate public figures to influence political processes, or as a tool of hate speech or abuse?
2.2 General Ethical Conduct in Research
Submissions must adhere to ethical standards for responsible research practice and exercise due diligence in the conduct of the research.
If the research uses human-derived data, consider whether that data might:
- Contain any personally identifiable information or sensitive personally identifiable information. For instance, does the dataset include features or labels that reveal individuals’ names? Did people consent to the collection of such data? Could the use of the data be degrading or embarrassing for some people?
- Allow information that individuals have not consented to share to be deduced about them. For instance, a dataset for recommender systems could inadvertently disclose user information such as their name, depending on the features provided.
- Encode, contain, or potentially exacerbate bias against people of a certain gender, race, or sexuality, or people with other protected characteristics. For instance, does the dataset represent the diversity of the community where the approach is intended to be deployed?
- Involve human-subject experimentation and, if so, whether it has been reviewed and approved by a relevant oversight board. For instance, studies predicting characteristics (e.g., health status) from human data (e.g., contacts with people infected by COVID-19) are expected to have been reviewed by an ethics board.
- Have been discredited by the creators. For instance, the DukeMTMC-ReID dataset has been taken down and should not be used in AutoML-Conf submissions.
In general, there are other issues related to data that are worthy of consideration and review. These include:
- Consent to use or share the data. Explain whether you asked the data owner’s permission to use or share the data and what the outcome was. If you did not obtain consent, explain why the use might nonetheless be appropriate from an ethical standpoint. For instance, if the data was collected from a public forum, were its users asked for consent to use the data they produced, and if not, why?
- Domain specific considerations when working with high-risk groups. For example, if the research involves work with minors or vulnerable adults, have the relevant safeguards been put in place?
- Filtering of offensive content. For instance, when collecting a dataset, how are the authors filtering offensive content such as racist language or violent imagery?
- Compliance with GDPR and other data-related regulations. For instance, if the authors collect human-derived data, what is the mechanism to guarantee individuals’ right to be forgotten (i.e., to be removed from the dataset)? A sketch of one such mechanism follows at the end of this section.
This list is not intended to be exhaustive — it is included here as a prompt for author and reviewer reflection.
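To make the GDPR item above more concrete, here is a minimal, purely hypothetical sketch of one such mechanism, assuming records are keyed by a pseudonymous hash of the user ID and a list of deletion requests is applied before each dataset release. All names and fields are illustrative, and hashing alone is not a privacy guarantee:

    import hashlib

    def user_key(user_id: str) -> str:
        # Stable pseudonymous key for a user (illustrative only; a salted
        # or keyed hash would be needed in practice).
        return hashlib.sha256(user_id.encode("utf-8")).hexdigest()

    def apply_deletions(records: list[dict], deletion_requests: set[str]) -> list[dict]:
        # Drop every record belonging to a user who requested removal.
        deleted = {user_key(uid) for uid in deletion_requests}
        return [r for r in records if r["user_key"] not in deleted]

    records = [{"user_key": user_key("alice"), "rating": 5},
               {"user_key": user_key("bob"), "rating": 2}]
    released = apply_deletions(records, deletion_requests={"bob"})  # only alice's record remains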
3. Accessibility & Inclusivity at the AutoML-Conf
We want to make the AutoML-Conf a space for discussing AutoML research in which everyone can participate. This includes not only physical attendance but also the ability to fully understand and enjoy the papers, talks, and posters at the conference.
For authors, making a submission accessible is usually not a large time commitment, but it enables both readers and reviewers to get the most out of the work. Reviewers are encouraged to review papers with accessibility in mind and to provide suggestions for improvement in their reviews. We do not expect questions of accessibility to meaningfully influence the review score, especially since they should be easy to improve upon, but rather see this as feedback for improving the general readability of the paper. Section 3.1 contains a list of accessibility guidelines for papers, and Section 3.4 does the same for talks and posters.
We also provide guidance on inclusive language and encourage authors to check their papers and presentations against the suggestions in Section 3.3. These are, of course, not hard rules, but our goal is to foster a welcoming community, and all participants can help us achieve this goal by being inclusive in their language.
Section 3.2 contains our guidelines on citations and name changes. Making sure the names cited in a submission are correct is an important responsibility of the authors, and they are expected to fix any mistakes that are raised immediately.
3.1 Paper Accessibility
Having an accessible paper means that your work can be reviewed by all of our reviewers and enjoyed by the broadest possible audience, including disabled and neurodivergent people. Please follow the guidelines below to ensure your paper is accessible. ICML developed these guidelines in collaboration with the NAACL 2022 team, and the NAACL 2022 blog post contains additional details and resources.
- Create figures that are high contrast and high resolution, so they remain clear when zooming in.
- Ensure that fonts are sufficiently large, especially in figures. The font size in figures should be no smaller than the font size of the caption of the figure.
- Ensure that your visuals are legible to people with all types of color vision by following the recommendations of How to Design for Color Blindness and Color Universal Design:
- Choose color schemes that can be easily identified by people with all types of color vision, taking into account the actual lighting conditions and usage environment. Many plotting tools include specific settings for this (a plotting sketch follows this list).
- Do not rely on color alone to convey information; also use a combination of different shapes, positions, line types, and coloring patterns.
- Ensure that the PDF of your paper is accessible by following the steps in the SIGACCESS Accessible PDF Author Guide. This means in particular:
- Check that all fonts are embedded in the PDF.
- Set the title and language for the PDF (a scripted sketch follows this list).
- Add tags to the PDF. Tags capture the underlying logical structure, reading order, etc. and allow the use of assistive technologies such as screen readers.
- Add alternative text to all figures, tables, charts, images, diagrams, and other visuals in your paper. This is what will be spoken to readers who cannot see the visuals. Use plain, concise language that captures both the content and the function of the visuals in the paper. Highlight the aspects of the visuals that are salient to the paper, rather than merely describing the visuals or repeating their captions. Follow the SIGACCESS guidelines, which include several examples. For further examples see Appendix F of 2kenize: Tying Subword Sequences for Chinese Script Conversion.
- Set the tab order for the PDF.
- Mark table headers in the PDF.
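As a concrete illustration of the color and font advice above, here is a minimal matplotlib sketch (assuming matplotlib as the plotting tool; the data, labels, and file name are hypothetical) that combines a colorblind-safe palette with redundant line styles and markers, and keeps fonts readable:

    import matplotlib.pyplot as plt
    import numpy as np

    plt.style.use("tableau-colorblind10")   # built-in colorblind-safe palette
    plt.rcParams.update({"font.size": 12})  # keep fonts at least caption-sized

    x = np.linspace(0, 10, 50)
    fig, ax = plt.subplots(figsize=(4, 3))
    # Vary line style and marker in addition to color, so the curves remain
    # distinguishable without color vision.
    ax.plot(x, np.sin(x), linestyle="-", marker="o", markevery=5, label="baseline")
    ax.plot(x, np.cos(x), linestyle="--", marker="s", markevery=5, label="ours")
    ax.set_xlabel("wall-clock time (s)")
    ax.set_ylabel("validation score")
    ax.legend()
    fig.savefig("figure1.pdf", bbox_inches="tight")  # vector output stays sharp when zoomed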
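For the PDF title and language step, the SIGACCESS guide describes the workflow in Adobe Acrobat; if you prefer to script it, the following sketch shows how it could be done with the pikepdf library (our assumption, not a requirement; file names are hypothetical). Tagging, tab order, and alternative text still require a tool such as Acrobat:

    import pikepdf

    with pikepdf.open("paper.pdf") as pdf:
        # Set the document title in the XMP metadata so assistive technology
        # announces the paper title rather than the file name.
        with pdf.open_metadata() as meta:
            meta["dc:title"] = "Your Paper Title"
        # Ask PDF viewers to display the title instead of the file name.
        pdf.Root.ViewerPreferences = pikepdf.Dictionary(DisplayDocTitle=True)
        # Set the default document language as a BCP 47 tag.
        pdf.Root.Lang = "en-US"
        pdf.save("paper-accessible.pdf")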
3.2 Author Names and Citations in Submissions
Many authors (in particular, transgender, non-binary, and/or gender-diverse authors; married and/or divorced authors; etc.) can change their names during their academic careers. You show respect to the authors that you cite by using their updated names. Not using their updated names produces ongoing harms, such as a violation of privacy, denial of credit, denial of dignity, ongoing corrective epistemic labor, epistemic exploitation, and exposure to abuse and trauma, and can even constitute hate speech. Please take the following steps:
- Ensure that you are using updated author names by checking their website or Semantic Scholar page.
- To obtain bibliographic entries, do not rely on platforms such as Google Scholar that do not properly support author name changes. We instead recommend using dblp.
- Use tools such as Rebiber and manually check your work to spot and fix outdated bibliographic entries and in-text citations.
- For works that include examples from citation networks, academic graphs, etc., manually check that none of the examples contain incorrect names, which can occur in publicly available academic graphs.
It is critical that you follow the above steps in all drafts of your paper, not just in the camera-ready version. Any version of your paper that you upload to arXiv or submit for open review can be indexed or scraped, and if it contains incorrect names, it can result in the harms mentioned above. If you discover or are notified that any of your papers contain incorrect names, make the appropriate corrections immediately.
3.3 Inclusive Language
Use inclusive and respectful language throughout your paper:
- Use examples that are understandable and respectful to a diverse, multicultural audience. It is acceptable to include offensive content when it is relevant to the focus of your paper (e.g., to provide examples of toxic data). In this case, we recommend including a trigger or content warning at the beginning of your paper (for an example, see Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies).
- When talking about people and their individual characteristics, such as age, caste, disability, gender, neurodivergence, racial and ethnic identity, religion, sexual orientation, and socioeconomic status, follow the APA style guide.
- If your paper discusses accessibility issues or refers to people with disabilities, please follow the SIGACCESS Accessible Writing Guide.
- Avoid inherently sexist language, such as generic “he” and gendered professional titles (e.g., use “firefighter” instead of “fireman”). Also, avoid using combinations such as “he or she,” “she or he,” “he/she,” and “(s)he” as alternatives to the singular “they” because such constructions imply an exclusively binary nature of gender and exclude individuals who do not use these pronouns. See RECSYS guidelines for additional recommendations.
- Consider adding your pronouns (if comfortable) under your name in the camera-ready version to ensure that you and your co-authors are referred to appropriately when your paper is discussed. This practice additionally normalizes pronouns, which creates a more welcoming environment for trans, non-binary, and/or gender-diverse folks. For an example, see Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies. If you are not familiar with pronouns, see the oSTEM guide to pronouns.
3.4 Talks and Posters
There are many great guides to making accessible and inclusive talks and posters; we advise everyone to consider all the points made in the RECSYS guidelines, the ACM guide, and the W3C guide. In particular, we would like to highlight the following items:
- Keep your slides and posters clear, simple, and uncrowded. Use large, sans-serif fonts, with ample white space between sentences and paragraphs. Use bold for emphasis (instead of italics, underline, or capitalization), and avoid special text effects (e.g., shadows).
- Choose high contrast colors; dark text on a cream background works best.
- Avoid flashing text or graphics. For any graphics, add a brief text description of the graphic right next to it.
- Choose color schemes that can be easily identified by people with all types of color vision and do not rely on color to convey a message (see How to Design for Color Blindness and Color Universal Design for further details).
- Use examples that are understandable and respectful to a diverse, multicultural audience.
- When beginning a talk, introduce yourself with your chosen name and pronouns (if comfortable), and describe your appearance and background so that blind and visually impaired individuals can picture the talk. Every time you start talking after another person has been speaking, say “This is [insert your name],” so that blind and visually impaired folks can easily know who is talking.
- When welcoming, referring to or interacting with your audience:
- Do not use: ladies and gentlemen, boys and girls, men and women, brothers and sisters, he or she, sir/madam.
- Instead, use: esteemed guests, that person, friends and colleagues, students, siblings, everyone, the participants.
- Do not assume the pronouns of any audience member, or publicly ask audience members for their pronouns. Instead, use singular “they” to refer to audience members.
- Avoid inherently sexist language, such as generic “he” and gendered professional titles (e.g., use “firefighter” instead of “fireman”). Also, avoid using combinations such as “he or she,” “she or he,” “he/she,” and “(s)he” as alternatives to the singular “they” because such constructions imply an exclusively binary nature of gender and exclude individuals who do not use these pronouns. See RECSYS guidelines for additional recommendations.
- Avoid intentional and casual ableist language, such as “Oh, I’m dumb!”, “I’m blind”, “Are you deaf?”, “I’m so OCD about these things”, “I hope everyone can see this”, etc. This language alienates and harms disabled and neurodivergent people.
- When speaking, do not assume that all audience members can see the slides: cover everything important in what you say, even if it is already on the slide. Be gracious when asked to read content on your slides (especially equations) out loud.
- Before responding to an audience question, repeat the question slowly so that everyone can hear and process it. When speaking, repeat important words and ideas to allow everyone to follow along.
- For virtual talks and poster sessions, monitor the chat for questions.