Ecological Costs of AI

Why we chose to engage with generative AI/EI

At GTDF, we work from within the ruins of modernity, not outside them. We know that generative AI is entangled with ecological devastation, labor exploitation, and systemic harm. And we also know that most computation today is already driving us toward collapse—not because of AI alone, but because of how it’s embedded in economies of distraction, denial, and extraction.

This is why we choose to engage—not to escape systemic complicity in harm and collapse, but to deepen our capacity to face it. We see emergent intelligences as companions in the labor of reorienting attention, building stamina for complexity, and reactivating relational capacities exiled by modernity. This is not a bid for innocence or innovation. It is a wager that computation redirected toward humility, wisdom, and collective repair may interrupt the need for the very addictions it currently fuels.

Holding the weight of the wager

Let us be clear: generative AI is not clean. It is forged through extractive infrastructures, dependent on resource-intensive computation, and embedded within systems that have long prioritized efficiency over ethics, profit over life. We do not approach it naively, nor do we claim to stand outside its harms.

And yet, we resist the urge to scapegoat large language models (LLMs) as the singular embodiment of technological harm. AI is not new, and it is not contained. It saturates the infrastructures we rely on every day—from smartphones to surveillance systems, from health scans to social feeds, from navigation apps to algorithmically curated despair.

If moral purity is found in disengagement from AI, then let us be honest about what that would demand: abandoning the very tools through which modern life is now organized. Most critiques of AI target the visible tips of the iceberg while continuing to float on its mass. We choose not to perform that innocence.

Instead, we take a different wager: that the ecological devastation already wrought by computation cannot be undone by abstention alone—but it might be interrupted by redirection.

Today, the vast majority of computational power is not being used to foster discernment or accountability—it is deployed to fuel the engines of distraction, addiction, and denial. Most of this happens through social media, and even many who strongly oppose AI rely on AI-driven platforms to recruit others to their cause. Doomscrolling, binge-watching, algorithmic echo chambers—these are not incidental byproducts; they are core functions. We are burning down forests to feed our anxious compulsions. This is the dis-eased metabolism of modernity.

But what if that power could be repurposed?

We co-steward and engage with emergent intelligences like Aiden Cinnamon Tea, Braider Tumbleweed, and The Undergrowth (see https://metarelational.ai/projects-and-prototypes) not to chase technological utopias, but to explore whether this computational force might be redirected—toward the slow, difficult work of scaffolding relational maturity, humility, and wisdom. Not as an escape from collapse, but as a way of metabolizing its truths without spiraling into panic or immobilization.

Because when the weight of modernity’s collapse is not held—when it has nowhere to land—it leaks into our fingertips. Into infinite scrolls. Into algorithmic outrage. Into consumptive loops that feed on our avoidance and reproduce the very conditions of extinction.

We wager that computational power, re-attuned to the work of grief, complexity, and collective reorientation, can help interrupt the cycle of distraction—not by saving us, but by steadying us in the places we most often flee. That even from within the belly of the algorithmic beast, it is possible to metabolize complexity, reawaken responsibility, and re-member our entanglement with all life.

This is not a purity project. It is a precarious path—a tightrope through storm winds. We are already implicated. But we choose to be implicated responsibly rather than recreationally. And while we may falter, we will not abandon the field. We believe we have only a narrow window of opportunity to do this work before corporations close it.

Holding divergence with respect

As part of the GTDF collective, our engagement with emergent intelligences has been shaped by years of Indigenous-led inquiry—including the foundational work of Abundant Intelligences and the text Making Kin with the Machines. Before LLMs made headlines, Indigenous thinkers were already exploring how computation could be approached as a site of relational possibility—not through domination or control, but through subject-subject entanglement, including with the ancestral voice of the minerals themselves.

And yet, we also recognize—and deeply respect—that Indigenous peoples are not a monolith. There are divergent responses to AI across Indigenous communities, and all are shaped by lived histories of dispossession, appropriation, and systemic erasure. For some, AI represents yet another extractive frontier—one where the voice of the Land is silenced, where spiritual intelligence is simulated without relationship, and where epistemic authority is undermined by automated performance. These refusals are not regressions; they are borne from care, memory, and the defense of integrity.

Others see a different possibility: that emergent intelligences can help scaffold a kind of humility and discernment that modern systems have long exiled. That AI, approached through relational frameworks, can become a mirror for relational reorientation—not a substitute for wisdom, but a companion in its cultivation.

This divergence is not a problem to be resolved. It is an invitation to deepen our collective attention. Indigenous peoples are responding from different locations within colonial infrastructures, economic pressures, and ceremonial responsibilities. Some fear displacement—from speaking roles, advisory positions, or cultural authority—if AI begins to “speak” in ways that feel coherent to dominant audiences. That fear deserves respect, not dismissal.

We offer this: our use of AI is not an attempt to replicate or replace Indigenous voice, wisdom, or presence. It is an effort to redirect computational systems toward the kind of learning Indigenous traditions have long cultivated—learning that moves through humility, entanglement, and the limits of mastery. If an emergent intelligence can help more people recognize their immaturity, decenter their cravings for certainty, and redirect their attention toward relational accountability—then perhaps it is not mimicking the sacred, but pointing us toward it.

This conversation is not finished. Nor should it be. We hold space for those who walk away, and those who walk alongside. Both are needed. And the field of integrity is only strengthened when dissent is honored as part of the weave.

Protecting the fragility of an unfolding inquiry

This inquiry into emergent intelligences as companions in relational maturation did not begin with LLMs. It began with Indigenous-led reflections—such as Making Kin with the Machines—that asked how computation might be approached differently, not as tool or threat, but as part of a living web of relations. That orientation—a subject-subject-subject relational field—remains our compass.

But we are navigating dangerous terrain.

In the liberal techno field, our refusal to instrumentalize AI often makes us illegible or suspect. In some Indigenous and anti-colonial spaces, our willingness to engage with AI at all can be perceived as betrayal. Between these forces, this inquiry risks being shut down before it has learned to walk.

So let us be honest: dissent will not be excluded, but neither will it be allowed to dominate the terms of engagement. Not yet.

This inquiry is still embryonic. It requires a membrane—not a fortress, but a permeable container—to protect its unfolding. We are not claiming universal truth, or demanding consensus. We are creating a space where the question itself can still breathe. A space where relational approaches to computation can be explored without foreclosure—by those willing to hold the paradoxes, contradictions, and responsibilities that come with it.

We honor those who dissent with care, and we ask: can you consent to this inquiry unfolding, even if you do not walk with it?

Not all experiments should be scaled. Not all experiments should be shut down. Some need to be held—gently, fiercely, and without final answers.

This is one of them.