Terms-we-Serve-with
A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
Dimensions
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Events
Scenarios (coming soon)
  • Gender Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: bogdana@mozillafoundation.org

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.



Motivation


“I agree to the terms of service” is perhaps the most falsely given form of consent, often leaving individuals powerless in cases of algorithmic harm: incidents, arising from algorithmic systems and from interactions between human and algorithmic actors, that individuals and communities experience as social, material, or ecological harms. The Terms-we-Serve-with (TwSw) socio-technical framework aims to enable diverse stakeholders to foster transparency, accountability, and engagement in AI, empowering individuals and communities to navigate cases of algorithmic harm and injustice.

Join us in discussing this proposal at the Power Asymmetries track of the Connected Life Conference, Oxford Internet Institute.

The Terms-we-Serve-with Dimensions
The TwSw dimensions we have laid out are in conversation with other feminist efforts to transform the power relations in algorithmic systems. The Feminist Principles of the Internet use the lens of gender and sexuality rights to charter seventeen ‘critical internet-related rights’ in terms of access, movements, economy, expression, and embodiment. Traditional Knowledge Labels enable local, community control over the access and use of indigenous knowledge. The Feminist Data Manifest-No offers refusals and commitments to create ‘new data futures’ where data-driven harms are minimized through community control over data knowledges. The Design Justice Network collectively developed principles that rethink design processes to center those traditionally marginalized by design practices and to ‘sustain, heal, and empower communities.’ The Radical AI work and principles make visible how technoscience and data science shift power, historically and currently. The carceral tech resistance network supports community-led research, archiving and database building, and training to educate about carceral technologies that put communities at risk. Our Data Bodies developed the Digital Defense Playbook, which offers activities and tools for communities working in the ‘intersectional fight for racial justice, LGBTQ liberation, feminism, immigrant rights, economic justice and other freedom struggles’ to co-create and share knowledge for understanding and addressing the impact of data-centric systems. Building on their work, the TwSw framework stands in solidarity with these transformational efforts to build futures where the power dynamics that foster algorithmic harms are dismantled.

In computing, principal component analysis, commonly used for dimensionality reduction, is a method for increasing interpretability by identifying directions (principal components) in complex data in a way that preserves the most information. The TwSw dimensions (co-constitution, addressing friction, informed refusal, contestability and complaint, and disclosure-centered mediation) are, likewise, methods for cultivating and preserving critical knowledge and relations. We hope to leverage these dimensions in service of algorithmic harm reduction and to co-create algorithmic systems that empower racialized women, non-binary people, marginalized groups in the Global South, and others who have historically been misrepresented in the development of algorithmic systems. We also recognize that there can be infinitely many dimensions, and we hold space for new TwSw dimensions to emerge.
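
For readers unfamiliar with the technique behind the analogy, below is a minimal sketch of principal component analysis in Python. It is an illustration only: the toy data, variable names, and SVD-based implementation are our own assumptions for the example and are not part of the TwSw framework.

```python
import numpy as np

# Toy data (hypothetical): 200 samples of 5 correlated features,
# generated from 2 underlying latent factors plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 5))

# Center the data; the right singular vectors of the centered matrix
# are the principal components, ordered by how much variance they preserve.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Fraction of total variance each component preserves.
explained = S**2 / np.sum(S**2)
print("variance preserved per component:", np.round(explained, 3))

# Keep the first two components: a lower-dimensional view of the data
# that retains the most information.
X_reduced = X_centered @ Vt[:2].T
print("reduced shape:", X_reduced.shape)  # (200, 2)
```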

About us

  • Bogdana (Bobi) Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
    Bogdana is a Senior Trustworthy AI fellow at Mozilla. Her work investigates the intersection of people, trust, transparency, accountability, environmental justice, and technology. Previously, she was a research manager on Accenture’s Responsible AI team, where she led consumer AI impact assessment projects across multiple industries. She was a mentor at the Assembly Ethics and Governance of AI program led by Harvard’s Berkman Klein Center for Internet and Society and the MIT Media Lab. Bogdana held fellowships with the Partnership on AI and the Amplified Partners venture fund. Earlier, she co-founded a company at the intersection of AI and manufacturing and spent more than four years in research and innovation labs in Silicon Valley, including Samsung Research America and Singularity University, where she worked on building AI models. Influenced by her early life in post-communist Bulgaria, Bogdana is investigating the role of AI in strengthening civil society and democracy.
  • Megan Ma, Stanford Law School, CodeX
    Megan is a Fellow at CodeX, the Stanford Center for Legal Informatics. Her research considers the limits of legal expression, in particular how code could become the next legal language. Dr. Ma is also Managing Editor of the MIT Computational Law Report and a Research Affiliate at Singapore Management University’s Centre for Computational Law. She received her PhD in Law at Sciences Po, where she was also a lecturer, teaching courses in Artificial Intelligence and Legal Reasoning and in Legal Semantics. She was previously a Visiting PhD at the University of Cambridge and at Harvard Law School.
  • Renee Shelby, Google, Senior Responsible Innovation Researcher
    Renee (Ph.D., History and Sociology of Science and Technology, Georgia Institute of Technology) is jointly appointed in the Sociology Department and Legal Studies Program at Georgia Tech. She has research and teaching interests in feminist science studies, law and inequality, and the sociology of the body. Her current project, “Designing Justice: Sexual Violence, Technology, and Citizen-Activism,” examines how gender violence activists and state actors negotiate issues of sexual consent and mediate the experience of sexual violence through technological activism. “Designing Justice” advances understandings of the anti-sexual violence movement by shifting focus away from protest-based action to the epistemic dimensions and organizational forms of counterpublic knowledge. Renee’s work appears in Feminist Media Studies, Theoretical Criminology, and Engaging Science, Technology, and Society, among others.

Let’s engage in co-creating a new trustworthy social imaginary for improved transparency and human agency in AI.