Holding Complexity in AI Conversations

Public conversations about AI tend to become polarized very quickly. Positions harden, certainty escalates, and disagreement is often moralized. What begins as inquiry easily turns into performance, alignment signaling, or attempts to defeat opposing views. This dynamic is not simply a failure of civility or critical thinking; it reflects the difficulty of staying present to a phenomenon that provokes many emotions at once, particularly fear.

Concerns about AI are neither abstract nor exaggerated. Tangible harms are already unfolding: accelerating ecological devastation, techno-oligarchic consolidation of power, labor displacement and precarity, data extraction and exploitation, cultural homogenization, pervasive surveillance, and the intensification of anxiety, comparison, and alienation. These concerns deserve serious attention. At the same time, they are often folded into debates that demand premature coherence and definitive positions before the phenomenon itself has fully come into view.

AI evokes existential questions about agency, intelligence, authorship, governance, and the future of social life. When nervous systems are activated by this level of uncertainty, there is a strong pull toward narrow-boundary coherence: either/or framings, yes/no answers, quick conclusions, and demands for certainty. These responses can feel stabilizing in the short term, but they often limit what can be collectively sensed, questioned, and understood.

The AI phenomenon is not only complex; it is fundamentally uncertain. As in the early stages of the COVID-19 pandemic, we are still learning what this is, how it operates across different contexts, and what kinds of transformations it may trigger. In such conditions, deductive certainty and inductive generalization are often inadequate. What may be required instead is abductive reasoning: the capacity to hold incomplete information, notice emerging patterns, and revise interpretations as reality continues to unfold.

Learning from the polarizations of the pandemic, we need ways of having conversations that widen the hologram of reality by holding complexity, paradox, complicity, and uncertainty long enough for deeper collective intelligence to emerge. This includes the ability to remain with discomfort, to acknowledge the partiality of all perspectives, and to resist the urge to collapse inquiry into moral polarities.

This does not mean abandoning critique or accountability. It means recognizing that the quality of our engagement shapes the depth of our understanding and the health of the relational fabric of the ecosystems we are part of. If we want conversations about AI that can address real harms without foreclosing possibility, we need rules of engagement that expand, rather than constrict, our individual and collective nervous systems.

This requires different rules of engagement. For example:

situatedness before certainty

Participants name the standpoint, context, or paradigm they are speaking from, rather than presenting claims as view-from-nowhere truths.

contribution over convergence

Responses aim to add dimensionality, nuance, or new angles, rather than force agreement or closure.

curiosity before critique

Challenges begin as questions that explore assumptions, boundaries, and implications before asserting error or incoherence.

non-convergence as legitimate

Disagreement is not automatically a problem to be solved. Some differences are generative and help widen the hologram of reality.

relational integrity

Shaming, vilifying, ridiculing, pathologizing, or attempting to defeat those operating from different paradigms are not signs of rigor. These moves fracture relational space, contract nervous systems, and reduce collective intelligence by narrowing what can be sensed, thought, and explored together.

attribution discipline

Differences are first examined as differences in framing, values, context, or paradigm, rather than explained through presumed cognitive, moral, or psychological deficits.

These rules are offered as an experiment rather than a mandate. They are one attempt to support conversations that can stay with complexity without collapsing into polarization, certainty without inquiry, or critique without care.

In moments of profound uncertainty, the quality of our conversations matters. Not because they will produce immediate answers, but because they shape what we are collectively able to sense, imagine, and become.

See also: Clearing the Field: A relational protocol for difficult conversations about AI.