A feminist-inspired multi-stakeholder engagement framework and tools for evolving and enacting social, computational, and legal agreements that govern the lifecycle of an AI system.
  1. Co-constitution

  2. Addressing friction

  3. Informed refusal

  4. Contestability and complaint

  5. Disclosure-centered mediation

Scenarios (coming soon)
  • Gender-Equity
  • Healthcare
  • Education
  • Content Moderation

Learn more about the framework and contact us at: bogdana@mozillafoundation.org

In service of you and a vision for improved transparency and human agency in the interactions between people and algorithmic systems:
  • Bogdana Rakova, Mozilla Foundation, Senior Trustworthy AI Fellow
  • Megan Ma, Assistant Director, Stanford Center for Legal Informatics
  • Renee Shelby, Sociology Department and Legal Studies Program, Georgia Institute of Technology

Graphic design by Yan Li.

With the kind support of Mozilla Foundation.


Contestability and Complaint

Contestability broadly refers to people's ability to disagree with, challenge, dispute, or otherwise express critical concerns about an algorithmic system. In feminist science and technology studies, complaints are understood as expressions of dissatisfaction, pain, or grief.

Within this dimension of the framework, we envision mechanisms that empower people to voice concerns as testimonies to structural and institutional problems. These concerns can relate to any part of the AI life cycle, including data collection, curation, and labeling; data use; the training of a specific model; algorithmic audits; and the sunsetting of models or entire products.

We believe individuals and communities could build and leverage open-source tools to verify that their concerns are being addressed. For example, we point to computable contracts as a mechanism to verify properties of algorithmic outcomes and to enable reporting of algorithmic harm. We see this as a new feedback mechanism between technology companies and the civil society stakeholders who have the expertise to take action in support of individuals and communities.
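As an illustrative sketch only (the contract terms, field names, and thresholds below are hypothetical and not prescribed by the framework), a computable contract can pair each plain-language clause with a machine-checkable predicate over an algorithmic outcome, so that violations can be detected and reported:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a computable contract term couples a
# human-readable clause with an executable predicate that can be
# checked against a recorded algorithmic outcome.
@dataclass
class ContractTerm:
    clause: str                      # plain-language agreement text
    check: Callable[[dict], bool]    # machine-checkable predicate

def audit(outcome: dict, terms: list[ContractTerm]) -> list[str]:
    """Return the clauses this outcome violates; the list can form
    the basis of a verifiable complaint report."""
    return [t.clause for t in terms if not t.check(outcome)]

# Example terms (illustrative thresholds, chosen for this sketch):
terms = [
    ContractTerm(
        clause="A human-readable explanation accompanies every decision.",
        check=lambda o: bool(o.get("explanation")),
    ),
    ContractTerm(
        clause="High-risk decisions are escalated to human review.",
        check=lambda o: o["risk_score"] <= 0.8 or o.get("human_reviewed", False),
    ),
]

outcome = {"risk_score": 0.91, "explanation": "", "human_reviewed": False}
violations = audit(outcome, terms)
for clause in violations:
    print("VIOLATED:", clause)
```

Because both parties can run the same checks against logged outcomes, a complaint is no longer a matter of one party's word against another's; it becomes a reproducible claim about whether the agreed-upon terms were met.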

We encourage practitioners to ask: What are the institutional barriers that prevent AI builders from meaningfully "hearing" complaints? Have you ever provided feedback to an app? If you haven't, what stopped you? After deploying the AI, can you anticipate how potential algorithmic bias might lead to harmful user experiences? How would you engage with end users and communities? What would it look like to "hear" and act on user complaints?

  • Ahmed S (2021) Complaint! Duke University Press.
  • Boyarskaya M, Olteanu A and Crawford K (2020) Overcoming failures of imagination in AI infused system development and deployment. arXiv preprint: arXiv:2011.13416
  • Fu B, Lin J, Li L, Faloutsos C, Hong J and Sadeh N (2013) Why people hate your app: Making sense of user feedback in a mobile app store. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1276-1284).
  • Gordon-Tapiero A, Wood A and Ligett K (2022) The case for establishing a collective perspective to address the harms of platform personalization. In Proceedings of the 2022 Symposium on Computer Science and Law (CSLAW '22). Association for Computing Machinery. https://doi.org/10.1145/3511265.3550450
  • Griffin D and Lurie E (2022) Search quality complaints and imaginary repair: Control in articulations of Google Search. New Media & Society.
  • Holloway BB and Beatty SE (2003) Service failure in online retailing: A recovery opportunity. Journal of Service Research 6(1): 92-105. 
  • Khalid H, Shihab E, Nagappan M and Hassan AE (2014) What do mobile app users complain about? IEEE Software 32(3): 70-77.
  • Panichella S, Di Sorbo A, Guzman E, Visaggio CA, Canfora G and Gall HC (2015) How can I improve my app? Classifying user reviews for software maintenance and evolution. In 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME) (pp. 281-290).