A feminist-inspired community-led framework for improved transparency and engagement in algorithmic decision-making
In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Stanford Law School, CodeX Fellow
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

Does your project need a Terms-we-Serve-with agreement? Please share more with us and contact us through this form.

Experiences of algorithmic harm are shaped by contractual agreements. We challenge coercive terms-of-service through multi-stakeholder engagement. Marginalized communities and other stakeholders take part in drafting Terms-we-Serve-with (TwSw) agreements and are rewarded for their participation. We believe they can help technology companies anticipate algorithmic harm and design adequate response mechanisms that empower solidarity.

We encourage practitioners to ask: Who are the stakeholders engaged in the lifecycle of design, development, and deployment of AI? How are they contributing? How are they rewarded for their contributions? Are there other stakeholders who are currently not represented but could be considered unintended users of the algorithmic system, impacted by it directly or through downstream decisions made by other human or algorithmic actors?