a [revised] solarpunk ai manifesto
technology for earth, community, and the future
note: this text revises an earlier version of the solarpunk ai manifesto. the original remains archived here. this revision reflects further research, sharper language, and a more careful grounding of claims. parts of the earlier manifesto leaned on directional signals and aggregated summaries. for instance, an industry sector pie chart in the previous version suggested a level of quantification that the evidence does not firmly support. this revision removes that visual and grounds the argument more carefully in available research. i made this revision because ideas evolve, and so should their articulation.
ai today: a turning point
the current path: we are seeing increasingly centralised ai systems, expanding data centre infrastructure, rising energy and water demands, and uses of ai driven by compute and capital rather than by humans and the environment.
result of the current path: mounting ecological and social pressures. if this type of growth continues without constraint, we are in trouble.
a truth: this trajectory is not inevitable. energy use and sourcing, governance, and deployment models are political design choices, not natural laws.1 2 the current reality of ai is shaped by poor judgment, horrendous decision-making, and tech-bro economic policies tethered to late-stage capitalism within the current technocratic and algorithmic dictatorship.3
social media and ai: a structural tension
today, social media platforms are major users of applied machine learning systems.
recommendation systems, ranking models, ad targeting, content moderation, and notification systems all rely heavily on machine learning infrastructure. exact global shares of ai usage by sector are not publicly disclosed, but personalisation and recommendation systems are among the most widely deployed ai applications worldwide.4 5
across billions of users, this results in continuous large-scale inference workloads being distributed across global data centre networks.
most platforms optimise for engagement, advertising yield, and growth. the ecological costs are rarely a primary optimisation metric. profit before the environment and people seems to be the persisting model.
environmental pressures
- energy demand: data centre electricity use is rising rapidly. multiple analyses suggest ai-related workloads are a significant driver of projected growth in global data centre electricity demand through 2030.6 4 the electricity required is already massive, and demand continues to grow without restraint.
- water usage: data centres require substantial water for cooling in many regions, and measurement and disclosure remain inconsistent.2 where water is a scarce and sacred resource, as in querétaro, mexico (currently experiencing a data centre boom), we should ask why these data centres are allowed to exist at all. querétaro already faces plenty of water insecurity, yet again ... capital before the environment and people.
- carbon emissions: indirect emissions associated with large technology firms have increased in recent years alongside ai expansion, although exact attribution to ai versus other cloud services varies by report.7 needless to say, in places whose primary energy sources are not clean, this will remain a problem.
- material throughput: hardware acceleration cycles increase demand for specialised chips and infrastructure, contributing to embodied carbon and electronic waste. rising demand for these resources also fuels imperialist extractivism.
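the scale of the energy pressure can be made concrete with a back-of-envelope sketch. every figure below (energy per inference request, daily request volume, grid carbon intensity) is an illustrative assumption chosen for the arithmetic, not a measured value:

```python
# illustrative back-of-envelope estimate of platform-scale inference energy
# and carbon. all input figures are assumptions, not measurements.

ENERGY_PER_REQUEST_WH = 0.3      # assumed energy per inference request (Wh)
REQUESTS_PER_DAY = 1e9           # assumed daily request volume for a large platform
GRID_INTENSITY_KG_PER_KWH = 0.4  # assumed grid carbon intensity (kg CO2 / kWh)

daily_energy_kwh = ENERGY_PER_REQUEST_WH * REQUESTS_PER_DAY / 1000
daily_co2_tonnes = daily_energy_kwh * GRID_INTENSITY_KG_PER_KWH / 1000

print(f"daily energy: {daily_energy_kwh:,.0f} kWh")
print(f"daily emissions: {daily_co2_tonnes:,.0f} tonnes CO2")
```

even with these deliberately modest per-request assumptions, a single large platform lands in the hundreds of megawatt-hours per day, which is why siting and sourcing decisions matter so much.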
social pressures
- behavioural optimisation: algorithmic feeds shape attention patterns and emotional responses. research documents measurable psychological and behavioural effects of recommender systems and social ranking mechanisms.8
- opportunity cost: top ai talent and compute resources are disproportionately concentrated in advertising, finance, and consumer platforms.
- centralised control: compute, capital, and model development remain highly concentrated in a small number of corporations.
- extractive data practices: large-scale data mining has relied heavily on scraping text, images, code, and creative work without explicit consent from authors. generative systems are often trained on cultural production that was never offered for machine replication. this raises ethical tensions around authorship, attribution, and the transformation of creative labour into raw training material.
- creative asymmetry: while models can remix, imitate, and synthesise artistic styles at scale, the original creators rarely share in the economic or infrastructural benefit. even outside of formal copyright frameworks, there remains a question of consent, acknowledgement, and reciprocity.
- erosion of authorship: generative ai complicates the boundary between inspiration and extraction. creative communities depend on shared culture, but they also depend on recognition of contribution. when creative works are absorbed into opaque training pipelines without dialogue, the social fabric of artistic production is strained, creating greater tension between authors and users.
note: as an anarchist, i reject all rigid intellectual property regimes. yet rejecting copyright does not require accepting unconsented appropriation. authorship can be understood not as ownership in a capitalist sense, but as a form of relational integrity. creative work emerges from lived experience, labour, and context. its incorporation into machine systems without consent or acknowledgement presents a moral tension that cannot be dismissed, even when it is framed as technologically inevitable.

it is reasonable to argue that a substantial share of frontier ai deployment today is directed toward advertising, recommendation, and consumer optimisation rather than ecological restoration. much of this development has relied on large-scale scraping and ingestion of cultural material without explicit consent from creators. practices that would be contested or restricted at an individual level are often normalised when conducted at corporate scale. this asymmetry raises serious ethical concerns about power, accountability, and who bears the consequences of extraction.
the issue is not simply legality. it is structural. corporations operate within regulatory grey zones, while individual creators rarely have the resources to challenge appropriation of their work. the result is a system in which creative labour is absorbed into industrial-scale machine systems with limited transparency and little reciprocity.
green and solarpunk ai principles
- local first: deploy lightweight, decentralised systems where possible.
- renewable roots: power ai infrastructure with verifiable renewable energy sources wherever feasible.
- small is beautiful: prioritise efficient, specialised models over unnecessarily large general systems.
- open commons: share tools and knowledge for ecological and community use.9
- carbon and water transparency: consistent public reporting of energy, water, and embodied material impacts.
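the carbon and water transparency principle could be operationalised as a simple machine-readable disclosure. the schema, field names, and example figures below are a hypothetical sketch, not an existing reporting standard:

```python
# hypothetical machine-readable impact disclosure for an ai deployment.
# the field names and example figures are illustrative, not a real standard.
import json

def impact_report(name, energy_kwh, renewable_share, water_litres, co2_kg):
    """bundle per-period impact figures into one disclosure record."""
    return {
        "system": name,
        "energy_kwh": energy_kwh,
        "renewable_share": renewable_share,       # fraction 0..1 of verified renewables
        "water_litres": water_litres,             # cooling water consumed
        "co2_kg": co2_kg,                         # reported emissions
        "net_fossil_kwh": energy_kwh * (1 - renewable_share),
    }

report = impact_report("community-model-v1", 1200.0, 0.75, 900.0, 130.0)
print(json.dumps(report, indent=2))
```

the point of a fixed schema is comparability: consistent public reporting only becomes accountability when every operator discloses the same fields over the same periods.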
from dirty ai to green ai
| dirty ai | green ai |
|---|---|
| centralised and opaque | transparent and accountable |
| growth driven optimisation | ecological constraint aware design |
| compute maximisation | efficiency and sufficiency |
| concentrated ownership | community and public governance |
ai usage across sectors
public reports do not provide a precise global breakdown of ai compute by sector. however, surveys of enterprise adoption show high uptake in marketing, sales, operations, finance, and customer experience functions.4 5
energy, agriculture, biodiversity monitoring, and climate applications represent a smaller but growing portion of documented ai deployment.
the imbalance is qualitative rather than precisely quantified. commercial optimisation dominates current large scale deployment of ai systems. ecological and public interest applications remain comparatively underfunded.
this reflects the political economy of the industry. as ai capabilities scale, so do valuations and executive wealth. capital concentrates upward while ecological crises intensify. intelligence at planetary scale, artificial or otherwise, is presently engineered to multiply capital more reliably than it multiplies care, resilience, or shared wellbeing.
we can do better than this.
solarpunk reallocation vision
| currently dominant uses | possible reallocation |
|---|---|
| engagement ranking systems | local renewable grid optimisation |
| ad targeting | biodiversity monitoring |
| high frequency consumer prediction | climate adaptation modelling |
| attention engineering | public knowledge infrastructures |
| extractive data mining | community health and resilience systems |
| speculative financial modelling | cooperative economic planning tools |
| automated consumer persuasion | participatory civic decision systems |
| proprietary black-box models | transparent and auditable community models |
| centralised hyperscale data centres | distributed, community-scale compute |
our future: a solarpunk ai world
- ai assisting solar microgrid balancing and local storage optimisation.
- ai supporting forest monitoring, ecosystem restoration, and ocean observation.
- citizen science networks augmented by open tools.
- community-owned models trained with constrained, transparent energy budgets.
- ai systems embedded in regenerative agriculture and low-energy housing design.
- ai assisting in educational practices, encouraging exploration and curiosity rather than presenting shortcuts or replacing human memory and recall systems.
- decentralised low-energy compute models where individuals control their data and dataflow.
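the microgrid item above can be illustrated with a minimal greedy dispatch sketch: store surplus solar in a battery, discharge it when solar falls short, and import from the grid only for the remainder. the hourly profiles and battery capacity are illustrative assumptions:

```python
# minimal greedy dispatch for a solar microgrid with one battery.
# solar/load profiles (kWh per hour) and battery capacity are illustrative.

def dispatch(solar, load, capacity=10.0):
    """return grid imports per hour under a simple charge/discharge policy."""
    charge, imports = 0.0, []
    for s, l in zip(solar, load):
        surplus = s - l
        if surplus >= 0:
            charge = min(capacity, charge + surplus)   # store surplus solar
            imports.append(0.0)
        else:
            from_battery = min(charge, -surplus)       # cover deficit from battery
            charge -= from_battery
            imports.append(-surplus - from_battery)    # import whatever remains
    return imports

solar = [0, 2, 6, 8, 6, 2, 0, 0]
load  = [1, 1, 3, 3, 3, 3, 4, 4]
print(dispatch(solar, load))  # hourly grid imports in kWh
```

real grid balancing involves forecasting, pricing, and battery wear, which is where lightweight local models can genuinely help; the point of the sketch is that this class of problem runs comfortably on community-scale compute.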
these applications already exist in early forms within conservation technology networks and climate-focused ai initiatives.9 10 but rather than remaining experiments in ai altruism, they need to be championed as the core goal.
another issue is that as compute models force centralisation and drag us further into the cloud, the tech industry is consolidating into a cloud oligopoly that effectively attacks local computing11 and, with it, decentralisation. without decentralisation, the utopian dreams of a solarpunk world assisted by ai are not possible. so one immediate milestone is to decentralise the world. decentralise the future.
closing remarks
> we must appropriate machines, not be appropriated by them.

guattari (1989)

> the very notion of the domination of nature by man stems from the very real domination of human by human.

bookchin (1982)
ai systems can either intensify extraction or support regeneration. they can remain centralised behemoths, corporately controlled with minimal communal input, or become decentralised, publicly accountable, and collectively shaped. the direction depends on governance, infrastructure, and shared priorities.
the choice is still open, though the window narrows as infrastructure hardens and power consolidates.
avoiding ai will not protect us from it. we need to understand it. we need to experiment with it at human scale. small projects, local experiments, community tools such as my own thought experiment, the solar grove. literacy is a form of agency, and we need agency in this game.
democratising this technology means more than open access. it means redistributing knowledge, compute, and decision-making power. it means building systems that serve ecological stability and communal resilience rather than short-term and destructive extraction.
ai should not remain concentrated in the hands of a narrow technical and financial elite. like the early internet, it can either consolidate into enclosed platforms or evolve into a distributed commons.
the future of intelligence is not predetermined. it is negotiated. we need to play our part in this negotiation.
references
strubell, e., ganesh, a., & mccallum, a. (2019). energy and policy considerations for deep learning in nlp. acl proceedings. https://doi.org/10.18653/v1/P19-1355↩
oecd. (2022). measuring the environmental impacts of artificial intelligence compute and applications. https://www.oecd.org/↩
hart, m., bavin, k. & lynes, a. artificial intelligence, capitalism, and the logic of harm: toward a critical criminology of ai. crit crim 33, 513–532 (2025). https://doi.org/10.1007/s10612-025-09837-0↩
stanford institute for human centered artificial intelligence. (2025). ai index report 2025. https://hai.stanford.edu/ai-index↩
mckinsey & company. (2024). the state of ai in early 2024. https://www.mckinsey.com/↩
international energy agency. (2024). electricity 2024: analysis and forecast to 2026. https://www.iea.org/↩
reuters. (2025). reporting on indirect emissions growth among major technology companies linked to data centre expansion. https://www.reuters.com/↩
nature communications. (2022). research on social media, machine learning, and behavioural effects. https://www.nature.com/↩
climate change ai. (2019). tackling climate change with machine learning. https://www.climatechange.ai/↩
wildlabs. conservation technology community network. https://wildlabs.net/↩
velazquez, jason. the computational web and the old ai switcharoo. https://www.fromjason.xyz/p/notebook/the-computational-web-and-the-old-ai-switcharoo/↩