A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Scenarios (coming soon)
  • Gender-Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: bogdana@mozillafoundation.org

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.



Image credits: Ways of Council, Four Worlds International Institute.

We acknowledge and celebrate the work of the communities who have supported and inspired us in exploring this new social imaginary: Mozilla Foundation, Data & Society, and Stanford CodeX. We are especially grateful for the interactive Data & Society workshop exploring the social life of algorithmic harms in 2022, our MozFest workshops in 2022 and 2023, and the Stanford CodeX FutureLaw community.