Productive Friction
Collectively anticipating and addressing AI frictions supports the development of reparative algorithms and redistributes benefits and burdens among stakeholders.
TwSw agreements acknowledge and reflect marginalized knowledge systems. We challenge one-sided terms of service by enabling meaningful dialogue through the production and resolution of conflict. We believe critical discussion allows individuals to self-organize and deliberate on algorithmic harms and accountability mechanisms in a way that is safe and that respects their privacy and human dignity.
We encourage practitioners to ask: What frictions or tensions exist among stakeholders (e.g., builders, policymakers, vulnerable populations)? What is your understanding of the failure modes of the AI system? How do different stakeholders experience friction in interacting with the AI system when there is a functionality failure? Could intentional frictions be a force for algorithmic reparation? For example, what nudges have you come across in the context of the AI system, and what do they enable (e.g., further engagement, caution, learning)? What nudges, choice architecture, or affordances could empower transparency, slowing down, self-reflection, learning, and care?
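To make the idea of an intentional friction concrete, the following is a minimal sketch, assuming a command-line setting; all names here (ReflectivePrompt, require_acknowledgement, the logger name) are hypothetical illustrations, not part of the TwSw framework or any existing library. The nudge interposes a reflective question and an explicit acknowledgement step between an AI recommendation and the action it suggests, and logs the interaction to support transparency and later accountability review.

```python
# Hypothetical sketch of an intentional "friction" around an AI recommendation.
# ReflectivePrompt, require_acknowledgement, and the logging format are
# illustrative assumptions, not an existing API.

import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("twsw.friction")


@dataclass
class ReflectivePrompt:
    """A nudge that slows the user down before accepting an AI output."""
    question: str        # e.g., "Who might be harmed if this is wrong?"
    recommendation: str  # the AI system's suggested action


def require_acknowledgement(prompt: ReflectivePrompt,
                            ask_user: Callable[[str], str]) -> bool:
    """Show the recommendation, pose a reflective question, and proceed
    only if the user explicitly types 'accept'. The interaction is logged
    to support transparency and later accountability review."""
    log.info("AI recommendation shown: %s", prompt.recommendation)
    answer = ask_user(
        f"Recommendation: {prompt.recommendation}\n"
        f"Before accepting, consider: {prompt.question}\n"
        "Type 'accept' to proceed, anything else to pause: "
    )
    accepted = answer.strip().lower() == "accept"
    log.info("User %s the recommendation",
             "accepted" if accepted else "paused on")
    return accepted


if __name__ == "__main__":
    prompt = ReflectivePrompt(
        question="Whose interests does this serve, and who bears the risk?",
        recommendation="Auto-approve the loan application flagged by the model.",
    )
    if require_acknowledgement(prompt, ask_user=input):
        print("Proceeding with the recommended action.")
    else:
        print("Paused: recommendation routed for human review.")
```

Note the design choice in this sketch: the friction invites deliberation rather than hard-blocking the action, and pausing routes the recommendation to human review instead of silently discarding it, which is one way a nudge can enable caution and learning rather than mere disengagement.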


