So when I got an invitation to attend a conference on developing safe and ethical AI in the lead-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel peace prize laureate, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real, negative impact on elections.
But I wasn’t prepared to hear some of the “godfathers of AI”, such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much further off the rails. At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the “A” should refer to “autonomous”), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence – and one that might develop objectives “misaligned” with human wellbeing. Perhaps it will come about as the result of a nation state’s security strategy, or the search for corporate profits at all costs, or perhaps all on its own…
The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update…
But even if the conference seemed weighted towards these future-driven fears, there was a fairly evident split among the leading AI safety and ethics experts from industry, academia and government in attendance. If the “godfathers” were worried about AGI, a younger and more diverse demographic were pushing to put an equivalent focus on the dangers that AIs already pose to climate and democracy.
And even in AI’s current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging. To that end, Sasha Luccioni, AI and climate lead at Hugging Face – a collaborative platform for open source AI models – announced this week that the startup has rolled out an AI energy score, ranking 166 models on their energy consumption when completing different tasks. The startup will also offer a one- to five-star rating system, comparable with the EU’s energy label for household appliances, to guide users towards sustainable choices.