Addressing Friction
Collectively anticipating and addressing AI frictions supports the development of trustworthy algorithms and rebalances how benefits and burdens are allocated.
TwSw interventions acknowledge and reflect marginalized knowledge systems. We challenge existing deceptive design practices and seek to enable meaningful dialogue through the production and resolution of conflict. We believe critical discussion allows individuals to self-organize and discuss algorithmic harms and accountability mechanisms in a way that is safe and respects their privacy and human dignity.
We encourage practitioners to ask:
- What frictions or tensions exist among stakeholders (e.g., builders, policymakers, vulnerable populations)?
- What is your understanding of the AI system's failure modes?
- How do different stakeholders experience friction in interacting with the AI system when it fails to function as intended?
- Could intentional frictions be a force for algorithmic reparation? For example, what nudges have you come across in the context of the AI system, and what do they enable (e.g., further engagement, caution, learning)? What nudges, choice architectures, or affordances could empower transparency, slowing down, self-reflection, learning, and care?
Resources:
- Costanza-Chock S (2020) Design Justice: Community-led Practices to Build the Worlds We Need. The MIT Press.
- Dean J, Dunford K, Gupta K, Marini M (2022) Towards Trusted Design—takeaways from Envisioning Yesterday’s Future. Web Foundation.
- DeVito MA (2021) Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 1-38.
- DiSalvo C and Lukens J (2009) Towards a critical technological fluency: The confluence of speculative design and community technology programs.
- Dunne A and Raby F (2013) Speculative Everything: Design, Fiction, and Social Dreaming. The MIT Press.
- Hamraie A and Fritsch K (2019) Crip technoscience manifesto. Catalyst: Feminism, Theory, Technoscience 5(1): 1-33.
- Lemley MA (2022) The benefit of the bargain. Stanford Law and Economics Olin Working Paper No. 575.
- Mathur A, Acar G, Friedman MJ, Lucherini E, Mayer J, Chetty M and Narayanan A (2019) Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction 3(CSCW): 1-32.
- Nguyen S and McNealy J (2021) “I, obscura”: Illuminating deceptive design patterns in the wild. UCLA Center for Critical Internet Inquiry. (Accessed 21 February 2023).
- Raji ID, Kumar IE, Horowitz A and Selbst A (2022) The fallacy of AI functionality. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 959-972.
- Sinders C (2020) We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center.
- Stanley J (2017) Pitfalls of artificial intelligence decision-making highlighted in Idaho ACLU case. ACLU Blogs. (Accessed 21 February 2023).
- Ytre-Arne B and Moe H (2021) Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society 43(5): 807-824.