AI sits squarely in the climate gap. Will the unrestricted pursuit of AI in health care exacerbate health inequities?
Although artificial intelligence (AI) has exploded in popular culture with the rollout of new generative AI tools for consumers (eg, chatbots, image and voice generation), AI’s role in health care has emerged in a more controlled fashion. The potential of AI in health care has been firmly established by the FDA approval of AI models (largely imaging applications) and the surge of R&D in the field. The announcement earlier this year of a deep learning algorithm that can predict the risk of pancreatic cancer years before onset exemplifies the direct impact AI can have on health outcomes: pancreatic cancer is difficult to diagnose early, when it is curable, and deadly when diagnosed in its later stages, with only a 2%-9% 5-year survival rate. In pancreatic cancer, early diagnosis means the difference between life and death. There is little doubt that quality AI models, when implemented well and maintained, will improve health outcomes for patients. But recent research into the impact of AI on the climate raises the question of which patients will benefit, and at what cost to the health of others.
AI models are power hungry: training a single model is estimated to consume as much energy as 100 US households use in a year. The high-powered computers required to run these models also generate a great deal of waste heat, so the data centers that house them require massive cooling systems. Training ChatGPT, a large language model in the generative AI realm, consumed an amount of energy equivalent to what 120 US households use in a year and emitted 502 tons of carbon. And that is just the energy used to train the model, roughly 40% of the energy a model is expected to consume over its lifetime. In other words, AI models, including the ones being developed for health-care applications, create substantial negative externalities that contribute to climate change: pollution and a large carbon footprint.
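To make the arithmetic above concrete, a back-of-the-envelope sketch: if training emitted 502 tons of carbon and training accounts for roughly 40% of lifetime energy use, then (assuming, for illustration only, that emissions scale in proportion to energy) a simple ratio projects the lifetime total.

```python
# Back-of-the-envelope projection of lifetime emissions from training
# emissions. The 502-ton and 40% figures are those cited in the text;
# the proportional-scaling assumption is illustrative, not a real
# accounting methodology.

TRAINING_EMISSIONS_TONS = 502  # reported emissions from training ChatGPT
TRAINING_SHARE = 0.40          # training as a share of lifetime energy use

def projected_lifetime_emissions(training_tons: float, training_share: float) -> float:
    """Scale training emissions up to an estimated lifetime total."""
    return training_tons / training_share

total = projected_lifetime_emissions(TRAINING_EMISSIONS_TONS, TRAINING_SHARE)
print(f"Projected lifetime emissions: {total:.0f} tons of carbon")
```

On these assumptions, training's 502 tons implies on the order of 1,255 tons over the model's lifetime, before counting the failed projects that never reach deployment.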
Currently, there is little to no transparency about energy use from many of the corporations involved in this market, from cloud operators to chip makers. One researcher predicts that the data, if released, will reveal that “GPUs burn up as much as the total emissions of a small country.” But in the gold rush-like frenzy to develop more and bigger AI models, will that revelation elicit any changes? Development efforts by large corporations and research institutions run in parallel and without restriction, in a footrace to the finish line in which many more projects fail than reach the market, complicating the accounting of these costs.
In 2021, over 200 medical journals published a joint statement identifying climate change as the biggest threat to human health. The planet is closer to the climate change tipping point (1.5 degrees Celsius above pre-industrial temperatures) than we thought in 2021, with scientists now predicting that point will be reached between 2023 and 2027.
In real terms, exceeding that point tips the globe into extreme weather, heat, flooding, drought, and wildfires, resulting in myriad health issues, including heat-related illnesses, loss of shelter, the spread of disease, food shortages, and asthma. While these extremes affect everyone, ample research over decades has established inequities in how negative environmental externalities are distributed across populations, a disparity known as the climate gap.
Some existing approaches, such as input-adaptive multi-exit architectures, make neural networks more efficient and can decrease the energy consumption of large models. The present may also benefit from reconsidering the past: some engineers are exploring how analog components (yes, analog!) could, in combination with digital components, help defray AI’s high energy and storage requirements. But will these technologies be explored if they are not the most expedient route to market?
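The early-exit idea behind input-adaptive multi-exit architectures can be sketched in a few lines. This is a purely illustrative toy, not a real neural network: actual multi-exit models attach auxiliary classifiers to intermediate layers of a deep network, and the thresholds and stages below are hypothetical. The point is the control flow: easy inputs stop at a cheap early exit, so the expensive later computation, and its energy cost, is skipped.

```python
from typing import Callable, List, Tuple

# A "stage" maps an input to (label, confidence). In a real multi-exit
# network each stage would be an intermediate classifier head; here the
# stages are toy functions chosen only to demonstrate early exit.
Stage = Callable[[float], Tuple[str, float]]

def early_exit_predict(x: float, stages: List[Stage],
                       threshold: float = 0.9) -> Tuple[str, int]:
    """Run stages in order; return the first sufficiently confident
    prediction along with the index of the exit that produced it."""
    for i, stage in enumerate(stages):
        label, confidence = stage(x)
        if confidence >= threshold:
            return label, i  # exit early: later stages never run
    return label, len(stages) - 1  # fall back to the final stage

# Toy stages: the cheap first stage is confident only on "easy" inputs
# (|x| > 1); hard inputs fall through to the costlier final stage.
stages: List[Stage] = [
    lambda x: ("positive" if x > 0 else "negative",
               0.95 if abs(x) > 1 else 0.5),
    lambda x: ("positive" if x > 0 else "negative", 0.99),
]

print(early_exit_predict(2.0, stages))  # easy input exits at stage 0
print(early_exit_predict(0.3, stages))  # hard input reaches stage 1
```

The energy saving comes from the fraction of inputs that exit early: if most real-world inputs are easy, most inference passes never pay for the full network.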
The ethical problem is this: AI models no doubt hold the potential to improve health outcomes for those who directly benefit from their application; however, without a change to the existing paradigm, developing those models will worsen health outcomes for others through AI’s contribution to climate change, exacerbating existing inequities. While Google, Amazon, and Microsoft have set targets of zero or negative carbon by 2030, the tipping point is now. More transparency regarding energy consumption by these large developers and data center providers would give decision makers a more accurate estimate of AI’s carbon footprint and its consequences for the health of everyone.