Terms-we-Serve-with
A feminist-inspired, community-led framework for improved transparency and engagement in algorithmic decision-making
In service of you, and of a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Stanford Law School, CodeX Fellow
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

Does your project need a Terms-we-Serve-with agreement? Please share more with us and contact us through this form.


Disclosure-centered mediation


Ex-ante disclosure of the use of AI and the potential for algorithmic harm, as well as the use of disclosures to facilitate apology and reparation when harm occurs.

We encourage practitioners to ask: What does meaningful consent mean? How would you expand traditional terms of service, privacy policies, community guidelines, and other end-user license agreements to include disclosure about the use of AI and its potential impacts? What needs to be disclosed, and to whom? How could we enable safe collective sensemaking with regard to potential harms arising from protected class attributes (e.g., gender, race) used or inferred by AI systems? What actions can be taken as part of a disclosure-centered transformative justice approach to the mediation of algorithmic harms or risks?

Mediation practices, including apology, have played a significant role in existing fields of law, particularly in circumstances of medical error. Expressions of regret recognize imperfection and the space for change. We see TwSw as embodying an analogous enforcement mechanism: one that can enable an alternative, apology-centered form of mediation.



