Terms-we-Serve-with
A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
Dimensions
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Events
Scenarios (coming soon)
  • Gender Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: b.rakova@gmail.com

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, former Senior Trustworthy AI Fellow, Mozilla Foundation
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.




Events


Do you trust me? Developing a shared-by-design language model evaluation protocol with Kwanele

Virtual workshop
TBD, 10am-12pm PDT / 5-7pm UTC

How do we improve trust in our language-based interactions with AI systems and the people facilitating them? Building on the Terms-we-Serve-with framework and recent work on large language model (LLM) evaluation, we will explore a community-driven approach to designing an evaluation protocol, reflexively deciding on the participatory approach in partnership with the South African community and AI startup Kwanele. Workshop participants will help evolve the domain-specific evaluation protocol for Kwanele's LLM, which answers questions related to the legal and social dimensions of experiences of gender-based violence. The workshop activities will draw on existing socio-technical taxonomies of AI risks and harms, red teaming methods, Socratic questioning, algorithmic auditing, and other evaluation experiments. Through a facilitated design activity, we will co-design an evaluation protocol by generating new data and insights that inspire novel dialogue interaction patterns, benchmarks, and safeguards, which could act as methodological interventions to proactively mitigate potential failure modes and transform them into positive human experiences. We hope these interventions can contribute towards building responsible AI infrastructure for language-based interactions, and we invite you to join us in this emerging discussion across academia, industry, government, and civil society.


>> Past events


Workshop on Algorithmic Contestability
AI Palace conference, Bückeburg Palace, Germany
07/04, 13:00-15:30 CEST


How do we empower decision-makers to exert leverage in the governance and evaluation of AI systems? We define algorithmic contestability as the ability for people to disagree with, challenge, appeal, or dispute harmful algorithmic outcomes, and we will argue that contestability is a critical part of what safety means for AI. In particular, we ground contestability in the fields of Human-Computer Interaction, Science and Technology Studies, and other interdisciplinary work on understanding sociotechnical harms, defined as the adverse lived experiences resulting from a system’s deployment and operation in the world, which occur through the ‘co-productive’ interplay of technical system components and societal power dynamics. We propose a workshop session where we’ll first present recent work we’ve done at the intersection of improving consent and contestability in AI, algorithmic audits, data leverage, and data donation. We’ll then facilitate an open discussion where participants will engage in: (1) defining algorithmic contestability in a particular context, (2) exploring mechanisms that empower contestability, and (3) operationalizing and evaluating such mechanisms in practice.


Workshop on Algorithmic Injustice
University of Amsterdam
06/27, 16:30-17:30 CEST


In working to expand our vocabulary and capacity to transform algorithmic harms and injustice into positive human experience, it is critical to consider the contractual agreements between people and technology companies: for example, Terms-of-Service (ToS) agreements, cookie policies, content moderation policies, privacy policies, limitations of liability clauses, non-disclosure agreements, and other kinds of user agreements. The small print and legalese in contractual agreements often fail to provide people with meaningful consent and contestability in cases of algorithmic failures, risks, harms, or injustice (Vincent, 2021; Fiesler, Beard, and Keegan, 2020; Vaccaro et al., 2015). Instead of contributing to extractive power structures and information asymmetries, what if contractual agreements could become a living socio-technical artifact that empowers trust through equitable community-driven participation and oversight in the lifecycle of algorithmic systems? This question is at the core of the Terms-we-Serve-with (TwSw) framework for evolving and enacting social norms and agreements between people and companies developing AI-driven products and services. The goal of the TwSw intervention proposal is to achieve improved transparency and human agency in AI beyond debiasing, explainability, and ethics. Furthermore, it inspires a new legal and regulatory approach to accountability based on a taxonomy of sociotechnical harms and risks (Shelby et al., 2022). We hope that many critical feminist interventions will emerge from engaging with the TwSw framework and will provide meaningful steps towards centering work around the lived experiences of members of communities affected by complex algorithmic systems. Read more here.

Workshop website and RSVP

Critical Feminist Interventions in Trustworthy AI and Technology Policy
Mozilla MozFest, Amsterdam
06/20 (Tuesday), 16:30-17:30 CEST

Who are the people engaged along the entire lifecycle of design, development, and deployment of AI, from material resource extraction to the value that’s created and experienced by people? What frictions exist and need to be made visible? How could specific interventions in AI, rather than algorithmic predictions made by AI, contribute to improved algorithmic justice outcomes? This session will center critical theory, critical design, design justice, design friction, and service design in facilitating a roundtable discussion on what critical feminist interventions could empower trustworthy AI. To seed the conversation, we will present and discuss the Terms-we-Serve-with intervention for improving the contractual agreements between people and technology companies. It is a feminist-inspired multi-stakeholder engagement framework and tools for enacting social norms and agreements centered on five dimensions: co-constitution, addressing friction, informed refusal, disclosure-centered mediation when harm occurs, and contestability, which enables people to report potential concerns. We will then open up the discussion and invite session participants to share their feedback as well as ideas for other interventions. We hope that many critical feminist interventions will emerge during this discussion and will provide meaningful steps towards centering work around the lived experiences of members of communities affected by AI systems.

Prototyping Social Norms and Agreements in Responsible AI
Responsible AI Challenge by Mozilla Builders, San Francisco
05/31, 10:30-12:00 PDT

What do we mean when we speak of recognizing the risks of AI and evolving safeguards that help builders develop it responsibly to serve society? What does "service" mean in terms of privacy, security, fairness, human autonomy, digital sovereignty, and more? In this workshop, we'll explore these questions through social, computational, and legal mechanisms, applying a hands-on approach grounded in the projects you're evolving.