By Stuart Kerr, Technology Correspondent
Published: 09/08/2025
Last Updated: 09/08/2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Author Bio: About Stuart Kerr
When Google DeepMind unveiled its AlphaEarth initiative earlier this year, the headlines were predictably grand: AI-powered satellite analysis, real-time climate monitoring, predictive conservation models. But amid the media excitement, a deeper question emerged: who controls the data, and how is it used to protect, or to exploit, Indigenous land?
Indigenous communities have long fought to assert rights over their ancestral territories, yet the tools used to “help” them often mirror extractive patterns. AI, with its capacity to analyse satellite data at scale, presents both a new hope and a familiar risk. According to the Justice Network at Columbia, the deployment of climate AI in underrepresented regions can unintentionally displace agency by framing local knowledge as secondary to algorithmic insight.
That dilemma is at the heart of current climate justice debates. The World Economic Forum recently outlined several AI projects focused on land monitoring, wildfire prediction, and soil degradation tracking. While such projects look promising on the surface, critics argue that without consent-based frameworks they replicate colonial dynamics, mapping land without including those who live on it.
AlphaEarth is marketed as a conservation tool, but the technology’s use in identifying “under-utilised” land has raised alarms among activists. In a LiveAIWire article on emotional manipulation, we explored how the framing of AI outputs—what’s highlighted, ignored, or emotionally weighted—can subtly shape policy narratives. In the case of AlphaEarth, concerns are growing that seemingly neutral data layers may be used to justify land development under the guise of climate action.
The ethical question extends into technical design. A recent arXiv study on Indigenous data sovereignty warns that even anonymised data can violate community autonomy if collected or analysed without consultation. It advocates for the CARE principles (Collective benefit, Authority to control, Responsibility, and Ethics) over the more dominant FAIR model (Findable, Accessible, Interoperable, Reusable), which centres on data reuse.
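To make the distinction concrete, here is a minimal sketch, assuming a hypothetical metadata schema: the field names and the consent check are illustrative inventions, not part of any published CARE implementation, but they show how authority to control could gate analysis of a community-linked data layer rather than treating it as freely reusable.

```python
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    """A satellite-derived data layer linked to a community territory (hypothetical schema)."""
    region: str
    community: str
    consent_granted: bool      # explicit, documented community consent
    governance_contact: str    # who holds authority to control the data
    intended_use: str          # the purpose consent was given for


def can_analyse(record: DatasetRecord, proposed_use: str) -> bool:
    """CARE-style gate: analysis proceeds only with consent for this specific use."""
    return record.consent_granted and record.intended_use == proposed_use


amazon_layer = DatasetRecord(
    region="Upper Amazon basin",
    community="(hypothetical) river-basin custodians",
    consent_granted=False,
    governance_contact="community data council",
    intended_use="forest-health monitoring",
)

# Under a FAIR-only mindset the layer is simply reusable;
# under CARE, this request is refused until consent for this use exists.
print(can_analyse(amazon_layer, "land-use classification"))  # -> False
```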
These tensions are not theoretical. In regions like the Amazon and parts of Oceania, AI-driven conservation has clashed with Indigenous claims. The Columbia Climate School's report highlights how AI models often fail to integrate land custodianship values, reducing rich cultural heritage to carbon offset statistics. Worse, governments and NGOs may rely on these datasets to fast-track decisions without dialogue.
A broader view suggests the issue is part of AI’s invisible infrastructure—something we examined in this earlier piece. It’s not just what the AI sees, but what its creators choose to make visible. Without critical design choices, Indigenous forests become mere pixels—data to be mined, rather than land to be honoured.
Technologists are beginning to take note. Early efforts pair AI models with community-tagged datasets, shared governance protocols, and ethics review panels, as sketched below. But such approaches remain the exception. In many places, AI implementation is still top-down, guided more by venture capital timelines than by social justice.
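As an illustration only, and assuming a hypothetical workflow rather than describing any deployed system, a shared-governance protocol can be pictured as a pipeline step that refuses to run a training job until a community panel has signed off on the stated purpose and its conditions.

```python
from dataclasses import dataclass, field


@dataclass
class GovernanceReview:
    """Sign-offs required before a model may be trained on community-tagged data (illustrative)."""
    community_panel_approved: bool = False
    purpose_statement: str = ""
    conditions: list[str] = field(default_factory=list)


def run_training_job(dataset_name: str, review: GovernanceReview) -> None:
    # A top-down deployment skips this gate; shared governance makes it mandatory.
    if not review.community_panel_approved:
        raise PermissionError(
            f"Training on '{dataset_name}' blocked: no community panel approval."
        )
    print(f"Training on {dataset_name} for: {review.purpose_statement}")
    for condition in review.conditions:
        print(f"  condition: {condition}")


review = GovernanceReview(
    community_panel_approved=True,
    purpose_statement="wildfire early warning, with results shared back to custodians",
    conditions=["no land-valuation outputs", "annual consent renewal"],
)
run_training_job("community_tagged_tiles_v1", review)
```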
As AI expands into every facet of global planning, the challenge will be to reframe its role—not as a master mapmaker, but as a respectful observer. Climate justice demands not just better tools, but better intentions. Because land isn’t just data—it’s identity, memory, and survival.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life.