Terms-we-Serve-with
A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
Dimensions
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Events
Scenarios (coming soon)
  • Gender Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: b.rakova@gmail.com

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, former Senior Trustworthy AI Fellow, Mozilla Foundation
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.




"Power flows through governing bodies, social institutions, and micro-interactions, all of which engage with technologies of the time."

- Jenny Davis, Apryl Williams, and Michael W. Yang (2021). Algorithmic reparation. Big Data & Society, 8(2). https://doi.org/10.1177/20539517211044808

Recent updates:
- 11/15/23 - read our academic paper - Rakova, B., Shelby, R., & Ma, M. (2023). Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231211553
- 10/04/23 - a blog post published by Stanford Law School - Engaging on Responsible AI terms: Rewriting the small print of everyday AI systems.
- 05/24/23 - a blog post published by Data & Society - A New Framework for Coming to Terms with Algorithms. Reflections on terms of service, gender equity, and chatbots.

“I agree to the terms of service” is perhaps the most falsely given form of consent, often leaving individuals powerless in cases of algorithmic harm: incidents experienced by individuals and communities that lead to social, material, or ecological harms and that result from algorithmic systems and the interactions between human and algorithmic actors.

The Terms-we-Serve-with (TwSw) is a social, computational, and legal framework. Along each of its five dimensions, we help technology companies, communities, and policymakers co-design and operationalize critical feminist interventions that center trust, transparency, and human agency in people's engagement with AI.

We are looking to engage with interdisciplinary collaborators: express your interest in joining our online focus group here.

Socio-technical outputs from engaging with the framework include:
  • Human-centered user agreements, e.g. terms of service, content policies, and data use agreements
  • User studies and user experience research that enable specific kinds of UX design friction
  • An ontology of user-perceived AI failure modes (see the sketch after this list)
  • Contestability mechanisms that empower continuous AI monitoring grounded in that ontology
  • Mechanisms that enable the mediation of potential algorithmic harms, risks, and functionality failures as they emerge
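
To make the ontology and contestability outputs above more concrete, here is a minimal sketch in Python. The category names, the FailureMode enum, and the ContestabilityReport record are illustrative assumptions for discussion, not the actual schema of the TwSw open source technical tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class FailureMode(Enum):
    """Illustrative categories of user-perceived AI failure modes (hypothetical)."""
    INACCURATE_OUTPUT = "inaccurate_output"          # wrong or misleading responses
    HARMFUL_CONTENT = "harmful_content"              # outputs causing social or material harm
    PRIVACY_VIOLATION = "privacy_violation"          # unexpected use or exposure of personal data
    DEGRADED_SERVICE = "degraded_service"            # the system fails to function as promised
    OVERSTATED_CAPABILITY = "overstated_capability"  # the system misrepresents what it can do


@dataclass
class ContestabilityReport:
    """One user complaint, grounded in the failure-mode ontology above."""
    reporter_id: str
    mode: FailureMode
    description: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example: a user contests a chatbot response that overstated its expertise.
report = ContestabilityReport(
    reporter_id="user-123",
    mode=FailureMode.OVERSTATED_CAPABILITY,
    description="The chatbot presented its answer as authoritative legal advice.",
)
print(report.mode.value, report.created_at.isoformat())
```

Grounding complaints in a shared ontology is what lets contestability mechanisms aggregate individual reports into continuous monitoring, rather than treating each complaint as an isolated support ticket.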

Reach out to us to learn more: b.rakova@gmail.com

Explore the five dimensions of the Terms-we-Serve-with (TwSw) social, computational, and legal framework: co-constitution, addressing friction, informed refusal, contestability and complaint, and disclosure-centered mediation.


Case Studies
Read about our pilot project with the startup Kwanele, which uses an AI chatbot in the context of gender-based violence prevention. Kwanele engaged with the TwSw framework to determine how to incorporate AI in a manner that aligns with their mission, values, and the needs of their users. We ran a multi-stakeholder workshop with them and inspired them to leverage the design principles of an early version of the TwSw open source technical tool. As a result, they developed what we frame as critical feminist interventions, for example: (1) mechanisms that better engage their users, helping them understand and navigate the potential risks and harms of using an AI chatbot, and (2) a user interface that empowers users to continuously give the developer team improved feedback about potential risks and harms of the AI, ultimately helping Kwanele better serve their users.
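
The feedback mechanism in (2) could take many forms; below is a minimal sketch of one possible loop, assuming user reports arrive tagged with failure-mode labels drawn from an ontology like the one sketched earlier. The function submit_feedback, the labels, and the review threshold are hypothetical illustrations, not part of the Kwanele pilot or the TwSw tool.

```python
from collections import defaultdict

# Arbitrary illustration: after this many reports of the same failure mode,
# the developer team is prompted to review and disclose the issue to users.
REVIEW_THRESHOLD = 3

# Reports grouped by failure-mode label, so recurring harms surface as trends.
reports_by_mode: dict[str, list[str]] = defaultdict(list)


def submit_feedback(mode: str, description: str) -> None:
    """Record a user's report and flag failure modes that recur."""
    reports_by_mode[mode].append(description)
    if len(reports_by_mode[mode]) >= REVIEW_THRESHOLD:
        print(f"Review needed: {len(reports_by_mode[mode])} '{mode}' reports")


# Example: several users flag the chatbot as overstating its capability.
submit_feedback("overstated_capability", "Presented output as legal advice.")
submit_feedback("overstated_capability", "Claimed it could file a police report.")
submit_feedback("overstated_capability", "Implied a lawyer reviewed the answer.")
```

The design intent is that feedback flows continuously from users to developers and back, closing the loop between contestability and disclosure-centered mediation rather than ending at a one-off complaint form.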

Read a summary blog post on reimagining consent and contestability in AI.

Join us in discussing this proposal at the Power Asymmetries track of the Connected Life Conference, Oxford Internet Institute.

Learn more about the framework, share about your work, and contact us at: b.rakova@gmail.com

Let’s engage in co-creating a new trustworthy social imaginary for improved transparency and human agency in AI.