Mandatory AI integration is reshaping digital norms while driving up energy, water, and environmental costs
The rapid development of generative artificial intelligence (Gen-AI) has cascaded into the integration of large language models (LLMs) and AI chatbots across most digital platforms. Variations of Gen-AI models now appear on nearly every app, website, and device. LLMs, such as the now-ubiquitous Generative Pre-trained Transformer (GPT) models, are trained on real human data and powered by data centres shown to be increasingly damaging to the environment. This advancement has largely occurred without the consent of those most adversely affected, raising serious ethical questions and concerns about climate-related impacts.
The most pressing data privacy fears have already been realised: tech giants trained their AI models by ‘scraping’ user data without users’ knowledge. Reversing this and retraining AI models ethically is highly unlikely; it would be an expensive, energy-intensive process that tech companies have little incentive to undertake. The environmental cost of AI, meanwhile, remains a growing concern. Data centres not only require massive amounts of energy but also use water—often drawn directly from local drinking water sources—to cool their Graphics Processing Unit (GPU) clusters. Recently released research by Vrije Universiteit Amsterdam found that the AI boom used as much water as all bottled water consumed globally in 2025.
A substantial share of energy and water usage occurs during the intensive training process of Gen-AI models such as GPT-5. The billions of queries processed daily by Gen-AI models carry their own ecological costs, particularly in energy, water consumption, and the generation of e-waste. The ecological toll of each query is difficult to quantify; shorter, text-based queries consume substantially fewer resources than image or video generation, both of which are becoming increasingly common. New video generation models, such as NanoBanana 3, are expected to further expand these resource-intensive tasks. Cumulatively, the ecological costs are massive; the International Energy Agency (IEA) highlighted this in a 2025 report on the energy demands of the AI boom. Assessing the impacts of using drinkable water to cool large-scale GPU clusters is even more challenging. That said, even the early-stage impact on local communities hosting data centres is deeply concerning, given plans to expand data centres on a massive scale.
Data and privacy concerns aside, the ecological costs of the data centre boom, exacerbated by the widespread use of chatbots and AI-generated media, have led to an increasing backlash against their creeping, unquestioned integration. Protests against water-hungry data centres built for AI expansion have taken place across the world, from Chile, India, and Uruguay to Spain and parts of the US. Today, nearly every widely used application, website, and modern electronic device features some variation of an AI chatbot or assistant. Social media is flooded with fake images and videos, as AI-generated content proliferates through every medium, making it nearly impossible to distinguish between fiction and reality. At the root of the backlash is the forced and abrupt nature of this integration, which leaves users with little to no choice to disable or opt out of AI usage.
Every Google search, by default, automatically generates an AI overview, even when the more intensive ‘AI mode’ is disabled. These overviews are generated by Google’s Gemini AI and substantially increase processing requirements. While users can include “-ai” in their search query to skip the generation of this overview, little effort has been made to inform users of this option. Many have questioned why this feature is active by default when generating an AI overview relies on energy-intensive data centres that often consume clean drinking water for cooling. Regulators should perhaps ask why users are not instead required to include “+ai” in their query when they consider an AI-generated overview necessary.
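To illustrate how modest this opt-out is in practice, it amounts to appending a literal token to the query text before URL-encoding it. A minimal Python sketch, taking the “-ai” behaviour as described above (the token’s effect is the article’s claim, not a documented Google API; only the URL construction here is standard):

```python
from urllib.parse import urlencode

def google_search_url(query: str, skip_ai_overview: bool = True) -> str:
    """Build a Google search URL, optionally appending the '-ai' token
    said to suppress the AI Overview (behaviour as described in the
    text, not independently verified)."""
    if skip_ai_overview:
        query = f"{query} -ai"
    # urlencode handles spaces and special characters in the query string
    return "https://www.google.com/search?" + urlencode({"q": query})

print(google_search_url("data centre water usage"))
```

The asymmetry the paragraph criticises is visible in the default argument: the burden falls on the user, query by query, to turn the feature off, while AI-on remains the default.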
The transformation of digital norms in something as straightforward as an internet search has irrevocably altered the online information economy. The aim is explicitly to “let Google do the searching for you.” When users leave Google to manually check sources and verify information, advertisers lose out on viewership and attention, which is an undesirable outcome for a for-profit company. However, people have expressed their disenchantment with these features and have found their own creative ways to bypass the AI intrusion.
‘Meta AI’ abruptly began to appear on the most widely used apps on every smartphone. Facebook and Instagram were the first to integrate it, with Meta AI finding its way directly into personal chats and direct messages, like a silent third party that responds whenever called upon. The AI chatbot even appeared in the top search bar of WhatsApp, often without users of the personal messaging app being made aware of it. There is no option to opt out or turn this feature off; users were given no opportunity to consent to the update. A similar integration process occurred on Instagram and Facebook. Users can sometimes partially opt out by choosing not to share their data with the integrated AI models on the platform. However, the opting-out process is often obscured and requires a tutorial just to locate the setting where consent can be revoked.
Microsoft also faced criticism for mandating an upgrade to an AI-driven version of Windows 11 by ending support for all personal computers that could only run Windows 10. The decision raised many valid concerns about forcing AI integration in this manner; Microsoft even faced legal action over its multifold ramifications, particularly the embedded energy and water demands. Furthermore, the mandatory migration to the AI Copilot-driven Windows 11 generated massive amounts of e-waste, as hundreds of millions of devices that did not meet the technical requirements had to be disposed of entirely.
These pressing environmental concerns, among many other factors, have led many people to become increasingly disillusioned with the current state of technology, dominated by a few large corporations. There have been renewed calls for users to migrate from mainstream platforms to alternatives. Social media giants like X, Facebook, and WhatsApp face some competition from platforms such as Bluesky, Signal, and Telegram in this regard. Should the intrusive AI push continue, such migration could reduce demand, making AI’s biggest funders wary of supporting further investments.
Regulators have struggled to keep pace with tech developers; however, many now recognise that an intrusive AI rollout has, so far, failed to deliver on both the profit and social-benefit fronts that were earlier envisioned. Going forward, developers should seriously consider whether automatic opt-in models invite greater backlash and increase the possibility of innovation-stifling regulatory intervention. Tech players should ensure that AI features are off by default, with users electing to use them only when they deem it necessary. This could substantially reduce AI’s ecological footprint in the everyday operations of these platforms. All platforms should provide clear and specific permissions that unambiguously define which processes are enabled when a feature is turned on.
While many claim that AI could help ‘solve’ climate change and optimise energy usage, recent research has shown that these claims are likely overstated. In any event, these applications should be undertaken with utmost care to reduce the environmental harm caused in the process. Energy-intensive processes such as training new LLMs should only be undertaken when these industries have transitioned to clean and renewable energy and sustainable water and waste management. Technologies of the future should not use the fuels of the past.
Krishna Vohra is a Junior Fellow with the Centre for Economy and Growth at the Observer Research Foundation.
The views expressed above belong to the author(s).