Terms-we-Serve-with
A feminist-inspired community-led framework for improved transparency and engagement in algorithmic decision-making
In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Stanford Law School, CodeX Fellow
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

Does your project need a Terms-we-Serve-with agreement? Please share more with us and contact us through this form.


Productive Friction


Collectively anticipating and addressing AI frictions supports the development of reparative algorithms and redistributes the allocation of benefits and burdens.

TwSw agreements acknowledge and reflect marginalized knowledge systems. We challenge one-sided terms-of-service by enabling meaningful dialogue through the production and resolution of conflict. We believe critical discussion allows individuals to self-organize and discuss algorithmic harms and accountability mechanisms in a way that is safe and respects their privacy and human dignity.

We encourage practitioners to ask:
  • What frictions or tensions exist among stakeholders (i.e. builders, policymakers, vulnerable populations, etc.)?
  • What is your understanding of the failure modes of the AI system?
  • How do different stakeholders experience friction in interacting with the AI when there’s a functionality failure?
  • Could intentional frictions be a force for algorithmic reparation? For example, what are some nudges you’ve come across in the context of the AI system? What do these nudges enable (e.g., further engagement, caution, learning)? What nudges, choice architecture, or affordances could empower transparency, slowing down, self-reflection, learning, and care?

