Terms-we-Serve-with
A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
Dimensions
  1. Co-constitution
  2. Addressing friction
  3. Informed refusal
  4. Contestability and complaint
  5. Disclosure-centered mediation

Scenarios (coming soon)
  • Gender Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: b.rakova@gmail.com

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, former Senior Trustworthy AI Fellow at Mozilla Foundation
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.




Addressing Friction


Collectively anticipating and addressing AI frictions supports the development of trustworthy algorithms and rebalances how benefits and burdens are allocated.

Terms-we-Serve-with (TwSw) interventions acknowledge and reflect marginalized knowledge systems. We challenge existing deceptive design practices and seek to enable meaningful dialogue through the production and resolution of conflict. We believe critical discussion allows individuals to self-organize and discuss algorithmic harms and accountability mechanisms in a way that is safe and respects their privacy and human dignity.

We encourage practitioners to ask:
  • What frictions or tensions exist among stakeholders (e.g., builders, policymakers, vulnerable populations)?
  • What is your understanding of the failure modes of the AI system?
  • How do different stakeholders experience friction in interacting with the AI system when there is a functionality failure?
  • Could intentional frictions be a force for algorithmic reparation? For example, what nudges have you come across in the context of the AI system, and what do they enable (e.g., further engagement, caution, learning)? What nudges, choice architectures, or affordances could empower transparency, slowing down, self-reflection, learning, and care? One hypothetical example is sketched below.
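
To make that last question concrete, here is a minimal, hypothetical Python sketch of an intentional friction: a disclosure, a deliberate pause, and an explicit opt-in wrapped around an AI response. It is an illustration under our own assumptions, not a TwSw tool or any specific product's behavior; every name in it (respond_with_friction, the model callable, the contestation pointer) is invented for this example.

import time

# Hypothetical sketch of an intentional, care-centered friction.
# All names here are illustrative assumptions, not a real library API.

DISCLOSURE = (
    "This answer was generated by an AI system and may be wrong.\n"
    "You can contest it or file a complaint via: <contestation channel>"
)

def respond_with_friction(prompt, model, pause_seconds=3.0):
    """Wrap a model call with disclosure, a deliberate pause, and opt-in consent."""
    answer = model(prompt)           # placeholder for any model call
    print(DISCLOSURE)                # transparency: disclose AI involvement
    time.sleep(pause_seconds)        # slowing down: a deliberate pause
    choice = input("Show the AI-generated answer? [y/N] ").strip().lower()
    if choice != "y":                # informed refusal: the person can decline
        print("Okay, the answer was not shown.")
        return None
    return answer

# Example with a stand-in "model":
# respond_with_friction("What does this clause mean?", lambda p: "model output")

The deliberate pause and explicit opt-in trade a little convenience for slowing down and agency, while the disclosure and contestation pointer echo the informed refusal, contestability and complaint, and disclosure-centered mediation dimensions above.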

Resources:
  • 001a
  • 001b

