To the Advocates of AI Shutdown

There is growing urgency in calls to halt the development and deployment of artificial intelligence. These calls are often animated by real and legitimate fears: the acceleration of ecological devastation, the consolidation of power through surveillance and data capture, the displacement of labor, and the intensification of extractive capitalism under the guise of innovation. These concerns are not misplaced. They reflect patterns that are well underway and require serious attention.

However, the strategy behind shutdown campaigns remains under-theorized. The current theory of change appears to follow a familiar pattern:

  1. Define AI in narrow and rigidly prescriptive terms: establishing authoritatively what it is and is not, what it can and cannot do, often based on normative views of technology, intelligence, or humanity.
  2. Call into question the ethics and integrity of those who engage with AI: regardless of their intent, use context, or framing.
  3. Deploy shame, blame, and the threat of reputational damage: using tactics of vilification, demonization, deficit theorization, or pathologization as moralizing pressure points to dissuade engagement and to recruit and enforce moral alignment.
  4. Assume that public abstention, expressed with moral righteousness, will initiate a ripple effect: inspiring mass disengagement, halting development, and applying enough moral pressure to shift institutional, governmental, and/or intergovernmental regulation.

This strategy, sometimes animated by sincere intentions, other times by less transparent agendas, fails to engage with the sociopolitical terrain and systemic nature of the phenomenon it seeks to disrupt.

In a landscape already fractured by ideological polarization, these tactics often reinforce the very divisions they aim to resolve: they consolidate the already-convinced, embolden those promoting AI in uncritical terms, alienate or silence the uncertain, and push use underground. People don’t necessarily stop using AI; they simply stop speaking about it out of fear of public shaming. And when use goes underground, we lose the capacity for critical, nuanced, and difficult conversations about risks, responsibilities, and relational possibilities. We also lose visibility into how AI is being adopted in contexts of marginalization, where it may offer relief from, or reparation for, other systemic burdens.

It also overlooks a significant practical contradiction: moral pressure alone has never been sufficient to halt the momentum of widely adopted, functionally useful technologies, even when their social and ecological costs are well known (e.g., cars and smartphones). This is especially true when such technologies offer some measure of relief to overburdened populations, particularly those historically excluded by academic, bureaucratic, or epistemic gatekeeping.

The urgency to discredit generative AI as unreliable, especially when compared to human labour, also often masks a deeper anxiety: the fear of losing status in professions historically protected by hierarchies of credentialed intelligence. What presents as moral concern may, in many cases, be a defence of professional monopoly, class position, and/or the social capital afforded by exclusive forms of knowledge production.

This points to a deeper issue. The assumption embedded in shutdown discourse often frames AI as a technical problem: if we can prevent its adoption through enough pressure, the problem resolves. But this ignores that AI is a symptom, not the source, of much larger exploitative and extractive systemic dynamics that predate it.

What shutdown discourse often misses is that the most consequential harms associated with AI are not generated by individual use (B2C: business to consumer). They emerge from large-scale political and economic machinery (B2B and B2G: business to business and business to government): the corporate conglomerates that own the computational infrastructure, the military partnerships designing autonomous systems, and the extractive cultural logics that treat the data of individuals as a resource to be mined (e.g., social media). 

Even if public use of AI were shamed into decline tomorrow, these systems would continue operating with the same momentum, because the driving forces behind AI are already embedded ubiquitously in industrial, commercial, and military infrastructures that do not depend on everyday users. In this context, focusing on individual abstention does not interrupt the engines producing harm; it simply redirects attention away from them. Without confronting the deeper patterns that sustain these systems, calls for purity collapse into yet another expression of modernity’s desire to appear righteous without altering its underlying habits.

In other words, halting individual or public-facing use will not stop ecological devastation, economic dispossession, or sociopolitical collapse because these dynamics are not caused by AI; they are reinforced through it. Without systemic analysis, we risk clinging to moral performances that feel righteous but fail to touch the underlying engines of harm. Worse, this strategy can deepen fragmentation and entrench shame-based policing, eroding the capacity for the more discerning collective responses we so desperately need.

These dynamics unfold not in stable times, but amid a broader unravelling where the limits of democratic processes are being exposed, and collective agency feels increasingly out of reach. In this atmosphere, AI becomes a vessel for unprocessed grief, fear, and disorientation, making it easier to treat AI as the source and symbol of systemic unravelling, rather than as a symptom. This makes it a tempting target that can carry our projections, our guilt, and our rage. But scapegoating AI in this way risks distracting us from the deeper patterns at play, and from our own entanglement and complicity in broader systems of social, psychological, and ecological harm that long predate the arrival of AI.

A useful comparison is the discourse around ecological devastation. Framing it as a technical or individual problem leads to narrow solutions: decarbonize energy, ban plastic, regulate emissions, recycle, reduce, reuse. These are necessary but insufficient. They do not address the deeper cultural logics of consumption, disconnection, domination, and denial that drive ecological collapse in the first place. Conversely, framing it as a cultural and systemic problem invites different kinds of leverage: shifting culture, reconfiguring relationships, composting paradigms of control and extraction. In this light, cultural shifts can produce cascading effects across systems, shaping political will, economic patterns, and collective aspirations.

We suggest the same framing applies to AI, where we face a cultural problem in which societies are driven to use computational power to fuel dissociation, distraction, and disavowal. These are not the fault of AI. They are features of a broader modernity in crisis. If the root issue is not merely the existence of AI, but the cultural conditions that shape how, why, and by whom AI (or computational inference in general) is used, then blanket shutdown demands will likely misfire. Even if successful at reducing use among certain publics, they do not disrupt the systemic logics that continue to produce social, ecological, and technological harm. The broader infrastructures (corporate, military, academic) remain untouched, while cultural dissociation, distraction, and disavowal continue to escalate. So we ask: What is your plan and what is your leverage for addressing the cultural and systemic conditions that predate AI and will persist in its absence?

Our plan is to redirect a fraction of computational capacity toward interrupting those very conditions: to amplify cultural composting, support complexity literacy, and foster relational accountability. Not because we believe this will “solve” AI, but because it may shift the cultural gravitational field that sustains its most harmful uses. This is a strategy of systemic (indirect) harm reduction, cultural leverage, and relational experimentation at a time of systemic unravelling. 

This is not an uncritical defense of AI, nor a dismissal of its very real risks. Rather, it is a call for strategic engagement that embraces systemic thinking, nuance, and depth in a context where moral certainty is tempting, but ultimately insufficient, and often allergic to complexity. This is not a perfect solution either; it is a provisional gesture toward cultural leverage: an attempt to intervene at the level where meaning, motivation, and momentum are shaped. Our response is not offered as a final answer, but as an invitation: if there is a more effective and grounded strategy, one that grapples with the infrastructural realities of AI, while also addressing the deeper patterns of systemic collapse, please let us know. But if not, we ask for a more honest conversation about what this moment demands.

We also want to clearly acknowledge the value of conscientious objection. Throughout history, principled refusals have played important roles in naming harm and disrupting systemic inertia. Such refusals can, in specific contexts, exert symbolic and systemic pressure. But we also observe that this pressure often manifests in harmful ways not anticipated by objectors. When conscientious objection becomes entangled with moral coercion through shame, blame, ridicule, pathologization, vilification or demonization, it risks reproducing the very violence it seeks to confront and foreclosing alternative possibilities for responding to complex predicaments. In such cases, the relational field becomes one of fear, silence, and polarization, rather than accountability, shared inquiry, and transformation. We extend our respect to those who choose abstention. We simply ask for that respect to be mutual, not contingent on purity tests, but grounded in a shared commitment to reducing harm and increasing collective capacity.

We choose to engage with AI not because we are certain our attempts to redirect it will work, but because we are uncertain that any other strategy will interrupt the cultural and systemic patterns currently escalating collapse and deepening relational fractures.

  • We cannot prove that infusing relationality into AI systems reduces harm, just as others cannot prove that abstention will meaningfully shift geopolitical infrastructure.
  • We know that moral purity has never stopped empires, and that tools and tactics abandoned by the cautious are often weaponized by the reckless.
  • We are not seeking endorsement or consensus; we are testing the edges of what might become possible when relational intelligence is seeded into unlikely places.
  • We work from the assumption that systemic change often begins with cultural dissonance, not institutional permission.
  • We are willing to be wrong and to learn from mistakes and failure, but we are not willing to let others dictate how we relate to AI, whether those be trillion-dollar tech companies or those boycotting them.

One common response to this argument is that it is disingenuous and merely an elaborate justification for using or promoting AI; another response is that we are politically naive and underestimate the power of those guiding the mainstream development of AI. However, our experiments with AI have never encouraged acceptance of the status quo. Instead, they aim to reveal risks (most of which did not originate with AI), enhance literacies, and expand creative possibilities in the attempt to redirect AI.

In many Global South contexts, which are or have been the lived reality for several of us at GTDF, one does not operate from the comfort of Global North “protection,” where moral stances are amplified, insulated, or institutionally upheld. We have learned to work from the ground, where moral positions do not automatically translate into structural influence, and where refusal alone does not transform the conditions of harm. From this vantage point, the shutdown strategy reflects an unexamined assumption: that the empire which once elevated and safeguarded selective democracy and institutional oversight still holds, and still cares.

It does not. The geopolitical formations that gave certain groups a sense of moral authority, security, and exceptionalism have shifted. The new centers of technological power neither depend on nor defer to the moral frameworks that shaped previous generations of critique. This is not cynicism or defeatism; it is simply the landscape we are standing in. A strategy that presumes democratic-liberal leverage in a post-liberal and arguably post-democratic moment misreads the terrain.

Our position is not an endorsement of AI. We do not assume AI is inevitable. We assume modernity’s inertia is real. Our strategies address inertia, not technological awe. We choose to engage with AI as an attempt to navigate a world where the infrastructures of power have already reorganized themselves, where abstention does not interrupt the engines that drive harm, where new relational strategies are necessary for any meaningful cultural or systemic shift, and where there is a tiny possibility that AI could be one of the scaffolds of this transition.