A feminist-inspired community-led framework for improved transparency and engagement in algorithmic decision-making
In service of you, and of a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Stanford Law School, CodeX Fellow
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

Does your project need a Terms-we-Serve-with agreement? Please share more with us and contact us through this form.


"Power flows through governing bodies, social institutions, and micro-interactions, all of which engage with technologies of the time."

- Jenny Davis, Apryl Williams, and Michael W. Yang, "Algorithmic reparation," Big Data & Society, 8(2), 2021.

“I agree to the terms of service” is perhaps the most falsely given form of consent. It often leaves individuals powerless in cases of algorithmic harm: incidents experienced by individuals and communities that lead to social, material, or ecological harms, resulting from algorithmic systems and from interactions between human and algorithmic actors.

We offer an interdisciplinary, multidimensional perspective on the future of regulatory frameworks: the Terms-we-Serve-with (TwSw) social, computational, and legal framework. Along each dimension, we help technology companies and communities co-design and operationalize critical feminist interventions, so that they can engage with AI in a way centered on trust, transparency, and human agency. Do you want to experiment with the TwSw?

Read about our pilot project with the startup Kwanele, which uses an AI chatbot in the context of gender-based violence prevention. Kwanele wanted to engage with the TwSw framework to determine how to incorporate AI in a manner that aligns with its mission, values, and the needs of its users. We ran a multi-stakeholder workshop with the team and inspired them to leverage the design principles of an early version of the TwSw open-source technical tool. As a result, they developed what we frame as critical feminist interventions, for example: (1) mechanisms that better engage their users, helping them understand and navigate the potential risks and harms of using an AI chatbot, and (2) a user interface that empowers users to continuously give the developer team feedback about potential risks and harms of the AI. Ultimately, these interventions help Kwanele better serve its users.

Read a summary blog post on reimagining consent and contestability in AI.

Join us in discussing this proposal at the Power Asymmetries track of the Connected Life Conference, Oxford Internet Institute.

About us
  • Bogdana (Bobi) Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
Bogdana is a Senior Trustworthy AI fellow at Mozilla. Her work investigates the intersection of people, trust, transparency, accountability, environmental justice, and technology. Previously, she was a research manager on Accenture’s Responsible AI team, where she led consumer AI impact assessment projects across multiple industries. She was a mentor at the Assembly Ethics and Governance of AI program led by Harvard's Berkman Klein Center for Internet and Society and the MIT Media Lab. Bogdana held fellowships with the Partnership on AI and the Amplified Partners venture fund. Earlier, she co-founded a company at the intersection of AI and manufacturing and spent more than four years in research and innovation labs in Silicon Valley, including Samsung Research America and Singularity University, where she worked on building AI models. Influenced by her early life in post-communist Bulgaria, Bogdana is investigating the role of AI in strengthening civil society and democracy.
  • Megan Ma, Stanford Law School, CodeX Fellow 
    Megan is a Fellow at CodeX, the Stanford Center for Legal Informatics. Her research considers the limits of legal expression, in particular how code could become the next legal language. Dr. Ma is also Managing Editor of the MIT Computational Law Report and a Research Affiliate at the Centre for Computational Law at Singapore Management University. She received her PhD in Law at Sciences Po and was a lecturer there, teaching courses in Artificial Intelligence and Legal Reasoning and in Legal Semantics. She has previously been a Visiting PhD student at the University of Cambridge and at Harvard Law School.
  • Renee Shelby, Google, Senior Responsible Innovation Researcher
    Renee (Ph.D. History and Sociology of Science and Technology, Georgia Institute of Technology) is jointly appointed in the Sociology Department and Legal Studies Program. She has research and teaching interests in feminist science studies, law and inequality, and sociology of the body. Her current project, “Designing Justice: Sexual Violence, Technology, and Citizen-Activism,” examines how gender violence activists and state actors negotiate issues of sexual consent and mediate the experience of sexual violence through technological activism. “Designing Justice” advances understandings of the anti-sexual violence movement by shifting focus from protest-based action to the epistemic dimensions and organizational forms of counterpublic knowledge. Renee’s work appears in Feminist Media Studies, Theoretical Criminology, and Engaging Science and Technology Studies, among others.

Does your project need a Terms-we-Serve-with agreement? Let us know what you think, share your work, and contact us through this form.

Let’s engage in co-creating a new trustworthy social imaginary for improved transparency and human agency in the contractual agreements between people and AI.