So when I got an invitation to attend a conference on developing safe and ethical AI in the lead-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel peace prize laureate, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real, negative impact on elections.

But I wasn’t prepared to hear some of the “godfathers of AI”, such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much further off the rails. At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the “A” should refer to “autonomous”), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence – and one that might develop objectives “misaligned” with human wellbeing. Perhaps it will come about as the result of a nation state’s security strategy, or the search for corporate profits at all costs, or perhaps all on its own…

The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched, considering the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update…

But even if the conference seemed weighted towards these future-driven fears, there was a fairly evident split among the leading AI safety and ethics experts from industry, academia and government in attendance. If the “godfathers” were worried about AGI, a younger and more diverse demographic were pushing to put an equivalent focus on the dangers that AI already poses to the climate and to democracy.

And even in AI’s current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging. To that end, Sasha Luccioni, AI and climate lead at Hugging Face – a collaborative platform for open-source AI models – announced this week that the startup has rolled out an AI energy score, ranking 166 models on their energy consumption when completing different tasks. The startup will also offer a one- to five-star rating system, comparable with the EU’s energy label for household appliances, to guide users towards sustainable choices.