A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Scenarios (coming soon)
  • Gender-Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: bogdana@mozillafoundation.org

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.



Co-constitution is an opportunity to challenge one-sided and coercive modes of participation in AI development. Instead, we ask what meaningful participation means and how technology companies could co-design with the communities they serve. Marginalized communities and other stakeholders could take part in drafting user agreements and be rewarded for their participation. We believe they can help technology companies anticipate algorithmic harm and design adequate response mechanisms that empower solidarity.

We encourage practitioners to ask:
  • Who are the stakeholders engaged in the lifecycle of design, development, and deployment of AI?
  • How are they contributing?
  • How are they rewarded for their contribution?
  • Are there other stakeholders who are currently not represented but could be unintended users of the algorithmic system, impacted by it directly or through downstream decisions made by other human or algorithmic actors?