Research:Ethical and human-centered AI/Process frameworks

Tracked in Phabricator: Task T207513

Annotated bibliography and synthesis of common themes from process frameworks for algorithmic accountability and bias detection and prevention.

Ethically Aligned Design

Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, version 2. IEEE (2017)

Purpose

  • Advance a public discussion about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritize human well-being in a given cultural context.
  • Inspire the creation of Standards (IEEE P7000™ series and beyond) and associated certification programs.
  • Facilitate the emergence of national and global policies that align with these principles.

How to Prevent Discriminatory Outcomes in Machine Learning

How to Prevent Discriminatory Outcomes in Machine Learning. World Economic Forum (March 2018)

White paper examining how bias can enter machine learning systems and proposing four guiding principles for preventing discriminatory outcomes: active inclusion, fairness, right to understanding, and access to redress.

Algorithmic Impact Assessments

Algorithmic Impact Assessments: a Practical Framework for Public Agency Accountability. AI Now Institute (April 2018)

As the name indicates, this document is designed primarily for public agencies (i.e. government agencies). The proposal maps out an official, formal process for developing Algorithmic Impact Statements: public documents, possibly legally binding, that serve the same purpose as, and share many procedural steps with, environmental impact statements. The process is accordingly bureaucratic and detailed; required steps include public comment periods, external review, and internal self-reviews.

Key elements of a public agency algorithmic impact assessment, per the report:

  • Respect the public’s right to know which systems impact their lives by publicly listing and describing automated decision systems that significantly affect individuals and communities;
  • Increase public agencies’ internal expertise and capacity to evaluate the systems they build or procure, so they can anticipate issues that might raise concerns, such as disparate impacts or due process violations;
  • Ensure greater accountability of automated decision systems by providing a meaningful and ongoing opportunity for external researchers to review, audit, and assess these systems using methods that allow them to identify and detect problems (one such method is sketched below); and
  • Ensure that the public has a meaningful opportunity to respond to and, if necessary, dispute the use of a given system or an agency’s approach to algorithmic accountability.
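
To make the third element concrete, here is a minimal sketch (in Python) of one form such an audit method could take: a disparate impact check based on the common "four-fifths rule". The binary approve/deny decisions, the group labels, and the 0.8 threshold are all assumptions chosen for illustration; none of them come from the AI Now report itself.

  from collections import defaultdict

  def disparate_impact_ratios(records, reference_group):
      """records: iterable of (group, approved) pairs, approved a bool.
      Returns each group's approval rate divided by the reference group's."""
      totals = defaultdict(int)
      approvals = defaultdict(int)
      for group, approved in records:
          totals[group] += 1
          approvals[group] += int(approved)
      rates = {g: approvals[g] / totals[g] for g in totals}
      reference_rate = rates[reference_group]
      return {g: rate / reference_rate for g, rate in rates.items()}

  # Flag any group whose approval ratio falls below the four-fifths threshold.
  decisions = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
  for group, ratio in disparate_impact_ratios(decisions, "A").items():
      if ratio < 0.8:
          print(f"Potential disparate impact for group {group}: ratio {ratio:.2f}")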

Principles for Accountable Algorithms

Principles for Accountable Algorithms and a Social Impact Statement for Algorithms. FAT/ML (2016)

Premise

"Algorithms and the data that drive them are designed and created by people -- There is always a human ultimately responsible for decisions made or informed by an algorithm. "The algorithm did it" is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine-learning processes."

The premise is unpacked into five principles—responsibility, explainability, accuracy, auditability, and fairness. These five principles are then fleshed out across a three-stage design process (design, pre-launch, and post-launch), with a set of guiding questions and "initial steps to take" at each stage.

The process of aligning system design with these principles and stages should be documented, and the document itself released after deployment as a Social Impact Statement. These statements appear to be less formal, or at least less fully realized, than the Algorithmic Impact Statements proposed in the AI Now Institute's guidance described above, and the intended audience is broader: all companies that develop algorithm-driven technologies, not just those in the public sector.
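
As a rough, hypothetical sketch of how a team might put this into practice, the Python snippet below records answers to guiding questions stage by stage, so the accumulated record can later be published as a Social Impact Statement. The stage and principle names come from the FAT/ML document; the example questions and answers are invented.

  # Hypothetical sketch: the stage/principle grid is from the FAT/ML
  # document; the questions and answers below are invented examples.
  STAGES = ("design", "pre-launch", "post-launch")
  PRINCIPLES = ("responsibility", "explainability", "accuracy",
                "auditability", "fairness")

  # Each record: (stage, principle, guiding question, team's answer).
  record = [
      ("design", "responsibility",
       "Who is accountable if users are harmed?", "The product owner."),
      ("pre-launch", "auditability",
       "Can external researchers inspect decisions?", "Via a query API."),
  ]

  def render_statement(entries):
      """Group the recorded answers by stage for publication."""
      for stage in STAGES:
          rows = [e for e in entries if e[0] == stage]
          if rows:
              print(f"== {stage.capitalize()} stage ==")
              for _, principle, question, answer in rows:
                  print(f"  [{principle}] {question} {answer}")

  render_statement(record)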

Digital Decisions

Digital Decisions. Center for Democracy & Technology (n.d.)

Long-form overview of the causes and consequences of bias in algorithmic systems; proposes a framework for addressing them based on fairness, explainability, auditability, and reliability. Provides an interactive graphic that lets people involved in system design explore considerations and best practices for building systems aligned with this framework.

Ethical OS Toolkit

Ethical OS toolkit and checklist. Institute for the Future and Omidyar Network (2018)

A set of scenarios and probe questions ("checklist") for informing the technological design process (focusing on AI/ML technologies) and assessing the risk of unintended consequences.

Algorithmic Accountability Policy Toolkit

Algorithmic Accountability Policy Toolkit. AI Now Institute (2018)

From the introduction: "The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms, including: a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms."

Ethics and Algorithms Toolkit

Ethics and Algorithms Toolkit. Johns Hopkins University Center for Government Excellence (GovEx) (n.d.)

From the website: "Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk. GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them."

Components