
Compromise, not compliance: a generative ethic for Human–AI interaction

Research output: Contribution to conference › Paper › peer-review

Abstract

As artificial intelligence increasingly mediates human interaction, creativity, and decision-making, its underlying logic tends toward compliance: systems are designed for seamlessness, fluency, and optimisation. This paper argues for a fundamental shift, proposing that compromise—not as concession, but as relational negotiation—offers a generative ethic for AI–human interaction.

Drawing from spatial design pedagogy, where learning unfolds through resistance, iteration, and shared vulnerability, compromise becomes a method rather than a failure. In this context, ambiguity, friction, and hesitation become productive: signs of mutual attunement rather than breakdown. Trust is not a given but something that emerges in relation, through negotiation and embodied response.

This presentation weaves together critical theories of knowledge and authorship—Michel de Certeau’s tactical resistance, Stephen Marsh’s computational model of trust, and Édouard Glissant’s defence of opacity—to critique the prevailing emphasis on clarity and resolution in interface design. We ask: what would it mean to build systems that make space for interpretive difference, refusal, and delay?

In response, we sketch a speculative triadic model of human–AI interaction. Rather than binary user–system relations, we imagine a third position: a disruptive, refractive voice that resists instrumentalisation. This third agent—drawing from the figure of the fool—opens a space for non-resolution. Within this model, trust becomes relational, discomfort is generative, and compromise the guiding ethic.

Rather than advocate for user-centred initiatives—where trust, accuracy, and fluency are ends in themselves—we call for commons-centred systems: spaces of co-authorship, situated difference, and asymmetry. These systems value uncertainty not as noise to be resolved, but as the condition for ethical relation and collective meaning-making.

This paper offers compromise as a speculative, pedagogical, and relational counter-gesture to the logic of the black box—one that unsettles optimisation and reclaims HCI as a shared, situated practice of refusal, negotiation, and imagination.
Original language: English
Publication status: Published - 17 Oct 2025
Event: After AI 2025: an interdisciplinary holistic discussion on Artificial Intelligence - online
Duration: 17 Oct 2025 – 17 Oct 2025
https://afteraisymposium.com/


