Environmental impact of AI

By
Sébastien Duguay, September 8 2021
Sustainable AI
Energy Efficiency
Jevons' paradox

During the COVID-19 pandemic, you have probably heard time and time again that reducing commuting would reduce workers' environmental impact. But what about the impact of the significant resources consumed by telecommuting and video conferencing?

As shown in a recent MIT study, the environmental impact of the technology we consume in our daily lives is enormous. In addition to video conferencing, the increased use of cloud platforms for remote collaboration also contributes to this growing carbon footprint.

Add to that the work under way in many sectors to automate internal processes using data, blockchain and artificial intelligence, whose development consumes considerable resources. Whether for data storage in the cloud or for artificial intelligence workloads, these technologies are energy intensive, produce waste and CO2 emissions, and require rare materials.

In this blog post, we will focus on programs that use artificial intelligence techniques, the improvements that could be achieved, and the rebound effect that may accompany them.

The environmental impact of technology can be felt on many fronts. Not only does the use of information technology result in significant energy consumption during the computations essential to training artificial intelligence models, but the manufacturing of the necessary hardware also has negative environmental impacts. The ramifications of the artificial intelligence lifecycle are as complex as the algorithms developed by specialists in the field.

As noted by the Commission à l'éthique en science et technologie du gouvernement du Québec, the entire life cycle of digital technologies currently contributes to the increase in global pollution. They are major emitters of GHGs, consume ever more energy and non-renewable natural resources (including "critical" minerals) and produce a large amount of waste electrical and electronic equipment (WEEE). https://www.ethique.gouv.qc.ca/fr/actualites/ethique-hebdo/eh-2021-02-26/

In 2008, the Global e-Sustainability Initiative (GESI) SMART 2020 report estimated that digital technologies could reduce global GHG emissions by 15-30% by 2020. Yet Canada's GHG emissions barely moved over that period, going from 736 Mt CO2 eq in 2008 to 730 Mt CO2 eq in 2019. Why? What explains this disparity?

To break it down, let's focus on the energy impact of the technology used in AI. A research paper by Emma Strubell, Ananya Ganesh and Andrew McCallum of the University of Massachusetts (UMass), published in June 2019, reports that training an AI model can produce more than 272,155 kg of CO2 emissions, or five times the amount produced by an average car over its lifetime. Another paper, Green AI, published by the Allen Institute for AI in Seattle, describes a 300,000-fold increase in the computations required for deep learning research between 2012 and 2019. The publication of these reports and the media coverage they received highlighted the need for deeper reflection and analysis on the environmental impact of AI, particularly in its early stages, and for challenging the assumption that digital processes are somehow ephemeral and highly efficient. Not only are they not ephemeral, but the perception of dematerialization is simply wrong: we don't dematerialize, we outsource.

The energy impact of developing artificial intelligence models is directly linked to the computational resources used: servers, processors and graphics cards, among others. This hardware physically exists in providers' facilities and has a real environmental impact. The higher the capacity of the equipment, the higher the cost per unit of time and the shorter the time required for computation. Why? More capacity requires more energy. Obviously, when we talk about energy efficiency, we want to decouple this ratio by doing the same work, or more, with less energy. But despite important technological advances in this field, the carbon footprint of the technology sector keeps increasing. Why? Several factors come into play.
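As a rough illustration of the link between compute and emissions, a training run's footprint can be approximated from hardware power draw, training time, data-centre overhead and the grid's carbon intensity. The sketch below uses entirely hypothetical numbers, not measurements from any real system:

```python
# Back-of-envelope estimate of training emissions.
# All input values below are hypothetical placeholders, not measured data.

def training_co2_kg(gpu_count, gpu_power_watts, hours, pue, grid_kg_co2_per_kwh):
    """Estimate CO2 emissions (kg) for one training run.

    pue: power usage effectiveness of the data centre
         (total facility energy / IT equipment energy).
    """
    energy_kwh = gpu_count * gpu_power_watts / 1000 * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 8 GPUs at 300 W each running for 72 hours, in a data centre
# with a PUE of 1.5, on a grid emitting 0.4 kg CO2 per kWh.
print(round(training_co2_kg(8, 300, 72, 1.5, 0.4), 1))  # prints 103.7
```

Even this toy calculation makes the levers visible: the same job on a low-carbon grid (say 0.02 kg CO2/kWh, typical of hydroelectricity) would emit twenty times less.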

First of all, it must be said that AI is not a uniform, monolithic field. Several approaches and designs are used to obtain the desired results on various problems. Solutions that can be trained on a standard laptop are not rare and have a minimal environmental impact. That said, regardless of scale, training an AI model (teaching a machine to recognize a face from a large number of photographs, for example) will always be far more resource-intensive than simply running it, which can be nearly instantaneous. This means that developers need to think carefully about what exactly they are trying to achieve before re-training already operational systems to make small improvements.
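The gap between training and inference can be sketched with a toy compute budget. All the figures here are illustrative assumptions (the "backward pass costs about twice the forward pass" rule of thumb is common, but dataset size and per-example cost are invented):

```python
# Toy comparison of training vs. single-inference compute for one model.
# Every figure below is an illustrative assumption, not a measurement.

FLOPS_PER_EXAMPLE_FORWARD = 1e9   # assumed cost of one forward pass
BACKWARD_MULTIPLIER = 2           # rule of thumb: backward pass ~2x forward
DATASET_SIZE = 1_000_000          # assumed number of training examples
EPOCHS = 10                       # assumed passes over the dataset

training_flops = (DATASET_SIZE * EPOCHS
                  * FLOPS_PER_EXAMPLE_FORWARD * (1 + BACKWARD_MULTIPLIER))
inference_flops = FLOPS_PER_EXAMPLE_FORWARD  # one prediction = one forward pass

print(f"training / single-inference ratio: {training_flops / inference_flops:.0e}")
```

Under these assumptions, one training run costs tens of millions of times more compute than answering a single query, which is why re-training for marginal gains deserves scrutiny.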

Jevons' paradox

Source: https://fr.wikipedia.org/wiki/Paradoxe_de_Jevons#/media/Fichier:Graphique_illustrant_lEffet-dUne_baisse_des_co%C3%BBts_de_production.svg

It is at this point that questions beyond the strictly technical must come in. What exactly are we trying to accomplish by developing or re-training an AI model? What will its environmental, social and economic impact be? Is it worth the cost, or should technology be given a free pass in the name of progress? We do not claim to have a definitive answer, only the conviction that this questioning and discussion must take place, especially since an increase in the energy efficiency of the equipment used does not guarantee a decrease in impacts, as Jevons' paradox shows.

In economics, Jevons' paradox occurs when an increase in the efficiency with which a resource is used leads to an increase in the rate of consumption of that resource, driven by rising demand. It is called a paradox because we tend to believe that increased efficiency means decreased consumption; that's the point of better efficiency, right? What can happen instead is that improved efficiency makes the resource more available on the market and therefore cheaper. The lower price brings new players into the system and increases use by existing players.

In short, will an increase in efficiency (with a decrease in costs) reduce the environmental impacts of AI, or actually increase them? It is difficult to predict the future, but the energy efficiency measures implemented over the last few decades have not reduced our overall GHG emissions, and there is no reason to believe AI will be different.
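The mechanism behind Jevons' paradox can be sketched with a toy constant-elasticity demand model. Whether total consumption rises or falls after an efficiency gain hinges on how strongly demand responds to the lower price; all parameter values below are illustrative, not empirical:

```python
# Toy constant-elasticity demand model illustrating Jevons' paradox.
# All parameter values are illustrative, not empirical estimates.

def resource_use(efficiency, elasticity, base_demand=100.0, base_cost=1.0):
    """Total resource consumed when one unit of resource delivers
    `efficiency` units of service, and demand for the service has
    constant price elasticity `elasticity`."""
    unit_cost = base_cost / efficiency               # service gets cheaper
    demand = base_demand * unit_cost ** (-elasticity)  # cheaper -> more demand
    return demand / efficiency                       # resource needed to serve it

# Doubling efficiency with inelastic demand (elasticity 0.5):
# total consumption falls, as intuition suggests.
print(resource_use(2.0, 0.5) < resource_use(1.0, 0.5))  # prints True

# Doubling efficiency with elastic demand (elasticity 1.5):
# total consumption rises -- the Jevons (rebound) effect.
print(resource_use(2.0, 1.5) > resource_use(1.0, 1.5))  # prints True
```

The same efficiency gain thus cuts consumption in one regime and increases it in the other; which regime AI compute falls into is an empirical question this toy model cannot settle.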

If more efficiency is not the answer, what is? As in other areas of consumption, the behavior and choices of actors often have a much greater environmental impact than technological solutions. So the real question to ask is whether we really need to train a given model to address the problem we are facing. Can the problem be solved, in whole or in part, with business rule automation? Are there pre-trained models that satisfy the constraints of the situation?

It is critical that the development of efficiency measures be accompanied by education and awareness of environmental impacts. An improvement in the energy efficiency of any technology is only a good thing if it is not negated by the behavior of those who adopt it. Let's choose wisely.

Finally, it should also be noted that little data is available to assess the positive impact of AI on the environment. There are significant energy costs associated with training and using the models, but if replacing traditional services with AI results in lower GHG emissions, the balance could be positive from a broader perspective.