Marc Andreessen famously said that software is ‘eating’ the world, and now we have AI eating software. In his original formulation, however, the ‘world’ meant its economic slice: how businesses operated and the profits they made were the core concern. With the push towards the triple bottom line, where all three Ps (profit, people, and the planet) are taken into consideration, we must re-examine how AI is eating our planet!
AI systems are very compute-intensive: their design, development, and deployment consume a lot of computing cycles, typically on one or more Graphics Processing Units (GPUs). With the prevalence of cloud computing, most training and inference jobs for these systems now run in large data centres, which in turn have a rising carbon footprint. Granted, these centres are more carbon-efficient than running your own infrastructure because of their scale and optimisations, but as they make computation cheap and easily accessible, Jevons paradox kicks in: while efficiency goes up, so do demand and total consumption, normalising very large-scale AI models along the way. Some of these systems can cost millions of dollars to train and emit carbon equivalents rivalling the lifetime emissions of several cars or several transcontinental flights.
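To make the scale of such footprints concrete, the standard back-of-the-envelope estimate multiplies the energy drawn by the hardware by the data centre's Power Usage Effectiveness (PUE) and the grid's carbon intensity. The figures below are illustrative assumptions for a hypothetical training run, not measured values for any real system:

```python
# Back-of-the-envelope estimate of a training job's carbon footprint.
# All numeric figures here are illustrative assumptions.

def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,
                          hours: float,
                          pue: float,
                          grid_intensity_kg_per_kwh: float) -> float:
    """Energy drawn by the GPUs, scaled up by the data centre's Power
    Usage Effectiveness (PUE), times the grid's carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical run: 512 GPUs at 0.3 kW each for two weeks, in a data
# centre with PUE 1.1, on a grid emitting 0.4 kgCO2e per kWh.
emissions = training_emissions_kg(512, 0.3, 24 * 14, 1.1, 0.4)
print(f"~{emissions / 1000:.0f} tonnes CO2e")
```

Even with these modest assumed figures, a single run lands in the tens of tonnes of CO2e, which is why the per-model comparisons to cars and flights are plausible.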
A study conducted by OpenAI highlighted the remarkable pace of growth in the computational consumption of large-scale AI systems: their compute requirements double every 3.4 months, far outstripping Moore’s law, under which the number of transistors we can pack into an integrated circuit doubles roughly every two years. While some may justify the use of such large-scale AI systems as creating eudaimonia, we know that this is far from reality, given the various societal harms emerging from the use of AI systems. In some cases, ironically, fighting climate change using AI might harm the environment in the process, through the design, development, and deployment of large-scale AI systems.
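The gap between those two doubling periods compounds dramatically. A quick sketch of the arithmetic, using only the doubling periods quoted above:

```python
# Compare compute growth doubling every 3.4 months (the trend reported
# by OpenAI) against Moore's law doubling roughly every 24 months.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months` of repeated doubling."""
    return 2 ** (months / doubling_period_months)

years = 5
ai_growth = growth_factor(12 * years, 3.4)
moore_growth = growth_factor(12 * years, 24.0)
print(f"Over {years} years: AI compute ~{ai_growth:,.0f}x, "
      f"Moore's law ~{moore_growth:.1f}x")
```

Over five years, a 3.4-month doubling period yields growth on the order of hundreds of thousands of times, against roughly a 5–6x gain from Moore's law, which is why efficiency gains in hardware alone cannot absorb this trend.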
The machines are raging and harming our planet, subtly, through their massive environmental footprints in data centres: buildings tucked away from our sight. Our fight against the societal harms of AI systems, i.e., against biases, lack of transparency, lack of interpretability, privacy violations, and unfairness, must include planetary and environmental concerns as well. A harmonised framework will boost the chances of success for both efforts.
The field of green software engineering has some insights to offer. As stated by this movement: “Green Software Engineering is an emerging discipline at the intersection of climate science, software practices and architecture, electricity markets, hardware, and data centre design.
The Principles of Green Software Engineering are a core set of competencies needed to define, build and run green sustainable software applications.”
The counter-attack against Red AI systems, i.e., AI systems built with the sole purpose of optimising business and functional objectives without regard for the associated computational costs, needs to be mounted at all stages of the AI lifecycle, from the conception and design of the system all the way to its end-of-life. Practitioners and researchers must advocate for the introduction of environmental costs as a core functional requirement in the AI lifecycle, arming us with the ammunition required to counter the scourge of ever-larger AI systems.
Reducing AI systems’ impact on the environment needn’t come at the cost of performance. Many approaches, such as distilled and compressed networks, deliver almost the same performance as large systems at a fraction of the computational cost. In fact, one can make a monetary case for saving the environment here: building a greener AI system saves the organisation money too! The burgeoning field of TinyML has a lot to offer in terms of other approaches that lead to smaller AI systems packing about the same punch. Beyond being carbon-efficient, AI applications can also be carbon-aware: running compute-intensive training jobs at times when renewables constitute a bigger portion of the electricity grid, and dispatching jobs to those parts of the world where electricity is generated from greener inputs. This can have a significant impact: running a job in Iowa, US, rather than in Quebec, Canada, can carry a roughly 40x higher carbon impact, so relocation alone can trigger massive savings.

As consumers become savvier about the carbon impacts of software systems and demand more transparency, organisations will find it harder to evade green mandates by moving their operations to areas with weaker regulations. This is why we also need to build awareness of the environmental impacts of these systems, so that we can adopt a multipronged approach to mitigating harm to our planet.
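The carbon-aware dispatching described above can be sketched as a simple placement decision: pick the region whose grid currently has the lowest carbon intensity. The region names and intensity figures below are illustrative placeholders (Quebec's grid is largely hydroelectric, hence the low number); a real scheduler would query a live grid-intensity API rather than a hard-coded table:

```python
# Carbon-aware job placement sketch: choose the region whose electricity
# grid currently has the lowest carbon intensity. The intensity values
# (gCO2e per kWh) are hypothetical snapshot figures for illustration.

GRID_INTENSITY = {
    "us-iowa": 460,       # coal- and gas-heavy mix (assumed)
    "ca-quebec": 12,      # largely hydroelectric (assumed)
    "eu-frankfurt": 350,  # mixed grid (assumed)
}

def greenest_region(intensities: dict) -> str:
    """Return the region with the lowest carbon intensity."""
    return min(intensities, key=intensities.get)

# A carbon-aware dispatcher would submit the training job here.
print(greenest_region(GRID_INTENSITY))
```

The same lookup, refreshed periodically, also covers the temporal side of carbon-awareness: deferring a job until the local grid's intensity drops below a threshold.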
Increasingly, in addition to the sociological landscape in which the power and impacts of machines are being contested, we need to think more deeply about the contestations that will emerge from AI’s disproportionate use of energy. In a world facing climate change, unequal access to resources is exacerbating inequities and widening the chasm between the haves and have-nots. AI-intensive industries can get started on this journey by building carbon-efficiency and carbon-awareness into their AI systems, by asking whether AI is even required in the first place (opting for simpler, deterministic systems with similar functionality where possible), and finally by designing these systems in a way that harmonises the machines’ existence from both a sociological and an environmental perspective.
If AI is to truly become a technology that delivers benefits for humanity and creates eudaimonia, environmental considerations will have to move to the core of its design, and advocating for sustainable AI engineering is the first step in that direction.
The views expressed above belong to the author(s).