
FEEDING THE MACHINE : THE HIDDEN HUMAN LABOR POWERING AI — JAMES MULDOON, MARK GRAHAM, CALLUM CANT — 2024 / A must-read, myth-busting exposé of how artificial intelligence exploits human labour


Here are a few extracts from the book to encourage you to buy it.

Big Tech has sold us the illusion that artificial intelligence is a frictionless technology that will bring wealth and prosperity to humanity. But hidden beneath this smooth surface lies the grim reality of a precarious global workforce of millions that labour under often appalling conditions to make AI possible. Feeding the Machine presents an urgent, riveting investigation of the intricate network of organisations that maintain this exploitative system, revealing the untold truth of AI.

Based on hundreds of interviews and thousands of hours of fieldwork over more than a decade, this book shows us the lives of the workers often deliberately concealed from view and the systems of power that determine their future. It shows how AI is an extraction machine that churns through ever-larger datasets and feeds off humanity's labour and collective intelligence to power its algorithms. Feeding the Machine is a call to arms against this exploitative system and details what we need to do, individually and collectively, to fight for a more just digital future.



For readers of Naomi Klein and Nicole Perlroth, a myth-dissolving exposé of how artificial intelligence exploits human labor, and a resounding argument for a more equitable digital future.


The book gives voice to the people whom A.I. exploits, from accomplished writers and artists to the armies of data annotators, content moderators and warehouse workers, revealing how their dangerous, low-paid labor is connected to longer histories of gendered, racialized and colonial exploitation.


AI main players – AI definition and the role of data workers


The first step towards taking action is understanding how AI is produced and the different systems that are at play. This allows us to see how AI helps concentrate power, wealth and the very ability to shape the future into the hands of a select few.

Artificial intelligence is often conceived of as a mirror of human intelligence, an attempt to 'solve intelligence' by reproducing the processes that occur within a human mind. But from the perspective we develop in this book, AI is an 'extraction machine'. When we engage with AI products as consumers we only see one surface of the machine and the outputs it produces. But beneath this polished exterior lies a complex network of components and relationships necessary to power it. The extraction machine draws in critical inputs of capital, power, natural resources, human labour, data and collective intelligence and transforms these into statistical predictions, which AI companies, in turn, transform into profits.

To understand AI as a machine is to unmask its pretensions to objectivity and neutrality. Every machine has a history: it is built by people, within a particular time, to perform a specific task. AI is embedded within existing political and economic systems, and when it classifies, discriminates and makes predictions it does so in the service of those who created it. AI is an expression of the interests of the wealthy and powerful, who use it to further entrench their position. It reinforces their power while at the same time embedding existing social biases in new digital forms of discrimination.

Corporate narratives of AI emphasise its intelligence and convenience, often obscuring the material reality of its infrastructure and the human labour needed for it to function.4 In the public imagination, AI is associated with images of glowing brains, neural networks and weightless clouds, as if AI itself simply floated through the ether. We tend not to picture the reality of the constant heat and white noise of whirring servers loaded into heavy racks at energy-intensive data centres, nor the tentacle-like undersea cables that carry AI training data across the globe.

And just like a physical body, AI's material structure needs constant nourishment, through electricity to power its operations and water to cool its servers. Every time we ask ChatGPT a question or use an Internet search engine, the machine lives and breathes through this digital infrastructure.

We also tend to forget that behind the seemingly automated processes of AI often lies the disguised labour of human workers forced to compensate for the limitations of technology.5 AI relies on human workers to perform a wide variety of tasks, from annotating datasets to verifying its outputs and tuning its parameters. When AI breaks down or does not function properly, human workers are there to step in and assist algorithms in completing the work. When Siri does not recognise a voice command or when facial recognition software fails to verify a person's identity, these cases are often sent to human workers to establish what went wrong and how the algorithm could be improved.
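The fallback pattern described above can be sketched in a few lines of Python. This is purely an illustrative sketch, not any company's actual pipeline; the `route` function and the 0.85 threshold are assumptions invented for the example:

```python
# Hypothetical sketch of the human-in-the-loop pattern: model outputs below
# a confidence threshold are routed to a human review queue instead of
# being acted on automatically.

def route(prediction: str, confidence: float, threshold: float = 0.85):
    """Return ('auto', label) when the model is confident enough,
    else ('human_review', label) so a worker checks the case."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

# A failed voice command or face match falls below the threshold and
# becomes a ticket in the human queue.
print(route("unlock", 0.97))   # ('auto', 'unlock')
print(route("unlock", 0.42))   # ('human_review', 'unlock')
```

Every item that lands in the `human_review` branch is, in effect, a unit of the hidden labour the book describes: someone must look at it, decide what went wrong, and feed the corrected answer back into training.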

Sophisticated software functions only through thousands of hours of low-paid and menial labour – workers are forced to work like robots in the hopes that AI will become more like a human.

AI captures the knowledge of human beings and encodes it into automatic processes through machine learning models. It is fundamentally derivative of its training data, through which it learns to undertake a diverse range of activities, from driving a car to recognising objects and producing natural language. This relies on a project of collecting the history of human knowledge in enormous datasets consisting of billions of data points.

The systems trained by these datasets can often perform at superhuman levels, and while many of these datasets are in the public domain, others contain copyrighted works taken without their authors' consent. AI companies have undertaken a privatisation of collective intelligence by enclosing these datasets and using proprietary software to create new outputs based on a manipulation of that data.

Artificial intelligence can be broadly understood as a machine-based system that processes data in order to generate outputs such as decisions, predictions and recommendations. It can refer to anything from autofill in emails to targeted weapons systems in drone warfare. The reality is it's more of a marketing concept, or an umbrella term under which very different technologies can be grouped. This includes computer vision, pattern recognition and natural language processing (that is, the processing of everyday speech and text). It's an amorphous idea that can evoke the wonders of post-human intelligence but also herald the dangers of an AI-triggered extinction event.

Most recently, this has centred upon the systems that power chatbots: large language models or LLMs. LLMs are trained on enormous datasets containing vast amounts of text data usually scraped from the Internet. Large language models such as ChatGPT are called large because of the size of their datasets (hundreds of billions of gigabytes of data), but also because of the number of parameters that have been used to train them (about 1.76 trillion parameters for ChatGPT-4). Parameters are the variables that drive the performance of the system and can be fine-tuned during training to determine how a model will detect patterns in its data, which influences how well it will perform on new data.
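To make 'parameters' concrete, here is a minimal back-of-the-envelope sketch. The layer width of 4,096 and the count of 100 layers are illustrative assumptions chosen for the arithmetic, not the actual architecture of any named model:

```python
# "Parameters" are simply the trainable numbers inside a model.
# A single fully connected layer mapping d inputs to d outputs already
# holds a d x d weight matrix plus a bias vector of length d.

def linear_layer_params(d_in: int, d_out: int) -> int:
    """Parameter count of one linear layer: weights plus biases."""
    return d_in * d_out + d_out

d = 4096
per_layer = linear_layer_params(d, d)
print(per_layer)        # 16781312 parameters in one layer
print(per_layer * 100)  # 1678131200, i.e. ~1.7 billion across 100 layers
```

Stacking such layers (plus attention blocks and embeddings) is how parameter counts climb into the hundreds of billions and beyond; each of those numbers is adjusted during training to fit patterns in the data.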

Today, we are in the middle of a hype cycle in which companies are racing to integrate AI tools into a variety of products, transforming everything from logistics to manufacturing and healthcare. AI technologies can be used to diagnose illnesses, design more efficient supply chains and automate the movement of goods. The global AI market was worth over $200 billion in 2023, and is expected to grow 20 per cent each year to nearly $2 trillion by 2030.3 The development of AI tends to be secretive and opaque; there are no exact numbers of how many workers participate globally in the industry, but the figure is in the millions and, if trends continue at their current rate, their number will expand dramatically. By using AI products we are directly inserting ourselves into the lives of these workers dispersed across the globe.

But data work like this is performed by millions of workers in different circumstances and locations around the world.

This data work is essential for the functioning of the everyday products and services we use – from social media apps to chatbots and new automated technologies. It's a precondition for their very existence.

Without data annotators creating datasets that can teach AI the difference between a traffic light and a street sign, autonomous vehicles would not be allowed on our roads. And without workers training machine learning algorithms, we would not have AI tools such as ChatGPT.

We spoke with dozens of workers just like Mercy at three data annotation and content moderation centres run by one company across Kenya and Uganda. Content moderators are the workers who trawl, manually, through social media posts to remove toxic content and flag violations of the company's policies. Data annotators label data with relevant tags to make it legible for use by computer algorithms. We could consider both of these types of work 'data work', which encompasses different types of behind-the-scenes labour that makes our digital lives possible.
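As an illustration of what annotation involves, a single labelling 'ticket' might look like the record below. The field names and values are hypothetical, invented for this example rather than taken from any real annotation platform:

```python
# Hypothetical data-annotation ticket: a raw item plus the labels a worker
# attaches so that computer vision algorithms can learn from it.

ticket = {
    "item_id": "img_00451",
    "payload": "street_scene.jpg",          # the raw image to be labelled
    "labels": [                              # boxes drawn by the annotator
        {"bbox": [40, 12, 90, 60], "tag": "traffic_light"},
        {"bbox": [120, 30, 160, 95], "tag": "street_sign"},
    ],
    "annotator_id": "worker_207",
    "seconds_spent": 55,
}

# Downstream training code consumes only the tags and coordinates,
# with no trace of the person who produced them.
tags = [label["tag"] for label in ticket["labels"]]
print(tags)  # ['traffic_light', 'street_sign']
```

The structure makes the book's point visible: the worker's contribution is reduced to a `worker_207` identifier and a time stamp, while the labels themselves become the training data.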

They’re expect­ed to action between 500 and 1,000 tick­ets a day (to action one ‘tick­et’ every fifty-five sec­onds dur­ing their ten-hour shift). Many report­ed nev­er feel­ing the same again: the job had made an indeli­ble mark on their lives. The con­se­quences can be dev­as­tat­ing. ‘Most of us are dam­aged psy­cho­log­i­cal­ly, some have attempt­ed sui­cide … some of our spous­es have left us and we can’t get them back,’ com­ment­ed one mod­er­a­tor who had been let go by the company.

'Physically you are tired, mentally you are tired, you are like a walking zombie,' noted one data worker who had migrated from Nigeria for the job.

Salary of AI workers

In the case of the BPOs we examined, the annual salary of one of their US senior leadership team could employ more than a thousand African workers for a month. But there are also hard limits to how high the wages of data annotators can be lifted. The actors who have the most power in this relationship are not the BPO managers, but the tech companies who hand out the contracts. It is here where the terms and conditions are set. Some of the important benefits workers receive, such as a minimum wage and guaranteed break times, result from terms put into the contract by the client.

Job security at this particular company is minimal – the majority of workers we interviewed were on rolling one- or three-month contracts, which could disappear as soon as the client's work was complete. They worked in rows of up to a hundred on production floors in a darkened building, part of a giant business park on the outskirts of Nairobi. Their employer, a prominent business process outsourcing (BPO) company with headquarters in San Francisco and delivery centres in East Africa where insecure and low-income work could be distributed to local employees of the firm, counted Meta among its clients.

But large companies like Meta tend to have multiple outsourced providers of moderation services who compete for the most profitable contracts from the company.

Many tech companies therefore do what they can to hide the reality of how their products are actually made. They present a vision of shining, sleek, autonomous machines – computers searching through large quantities of data, teaching themselves as they go – rather than the reality of the poorly paid and gruelling human labour that both trains them and is managed by them.

Trends – Impact on work

These workers are at the forefront of these technological changes, but AI-enabled surveillance and productivity tools are coming for many workers, even those who might consider themselves immune from such encroachment into their working lives.

Primarily, this is the extraction of effort from workers, who are forced to work harder and faster by AI management systems that centralise knowledge of the labour process and reduce the level of skill required to do a job by routinising and simplifying it. This intensification of work extracts more value from the labour of workers for the benefit of employers. For many of us, this will be the mechanism through which we are most exposed to the damage caused by the extraction machine.

The real power of AI when it comes to work is its ability to intensify and deskill work processes through increased surveillance, routinisation and more fine-grained control over the workforce. AI allows bosses to track workers' movement, monitor their performance and productivity, centralise more data about the labour process in managerial hands, and even claims to detect their physical and emotional states. It is also increasingly used by HR departments for 'hire and fire' decisions – screening applicants before CVs are seen by a human and triggering disciplinary and termination procedures when targets are not met. Gig work has been the canary in the coal mine for this type of algorithmic management, but the technology is quickly spreading to a range of other forms of work.15 If you think your job is immune, you are probably mistaken. Once tested on workers with weaker bargaining power, these technologies are then rolled out to broader sectors. There they are used to cut costs either by replacing functions previously performed by humans or, more often, by increasing the pace at which humans must work and reducing the skills required to perform a specific job. AI management technology is overwhelmingly designed for the benefit of managers and owners, not workers. As a result, work intensity can increase to unsafe levels, and deskilling can reduce workers' autonomy and job quality. The result is a widespread tendency across all sectors of the economy towards tiring, dehumanising and dangerous work.

The Amazon worker who stands in one place repeating the same small tasks thousands of times a day to make the rate that their AI manager dictates is experiencing this future of work. They are one of the millions of workers who are exposed to a high risk of injury, subject to tracking, and pushed to work harder and harder every day. From the data produced by their scanning guns to the patterns observed by cameras overhead, they experience AI management as a constant pressure to perform. Beyond the warehouse, the pandemic provided an opportunity for employers to start using these monitoring techniques on employees' productivity as they worked from home. Microsoft was criticised for offering 'productivity scores' in one of its software suites that could have been used to allow managers to track how actively employees contributed to activities such as emailing and collaborative documents.16 The use of cameras to monitor workers in the office, keystroke and computer activity monitoring, and software that tracks and records performance is becoming more widespread. Nobody is exempt from this future of work.

Trends – How AI is changing Big Tech: the handful of digital gatekeepers that amassed billions of users on their platforms through the 2000s and 2010s now seek to limit knowledge about how their AI models are trained, and develop them in ways that increase their competitive advantage in the sector

Studies of the broader social context in which AI operates are increasingly important, since we are entering a new era of tech development. The 2010s were characterised by the growth and then dominance of a handful of digital gatekeepers that amassed billions of users on their platforms, became trillion-dollar companies and leveraged their position to exercise unparalleled political and economic power. The rise of AI has led to major shifts in the internal dynamics of the tech sector, which has profound consequences for the global economy. The platform era that lasted from the mid-2000s to 2022 has now given way to an era of AI. Following the launch of ChatGPT and new partnerships between Big Tech and AI companies, both investment strategies and business models are driven by a new coalescence of forces around AI.

The era of AI has given rise to a new configuration of major players that overlaps with but is distinct from the platform era. In place of the leading Big Tech firms of the 2010s, a group of companies we call 'Big AI' has emerged as the central organisations of this new era. This group includes legacy Big Tech firms such as Amazon, Alphabet, Microsoft and Meta, as well as AI startups and chip designers like OpenAI, Anthropic, Cohere and Nvidia. If attention was turned to Chinese companies, which are the next most significant set of actors in the era of AI, we could also include Alibaba, Huawei, Tencent and Baidu. Although the precise membership of this group is likely to shift, Big AI consists of companies that understand AI as a commercial product that should be kept as a closely guarded secret and used to make profits for private companies. Many of these companies seek to limit knowledge about how their AI models are trained, and develop them in ways that increase their competitive advantage in the sector.

Following the public release of ChatGPT, a series of new strategic collaborations was announced between legacy tech firms and AI startups. Microsoft invested $10 billion in OpenAI; Google invested $2 billion in Anthropic; Amazon invested $4 billion in Anthropic; Meta has partnered with both Microsoft and AI startup Hugging Face; Microsoft developed a new AI unit from Inflection staff members; while Nvidia is now a two-trillion-dollar company that supplies 95 per cent of the graphics processing unit (GPU) market for machine learning.

The dominance of social media and advertising platforms during the platform era was partially based on 'network effects': the more users a platform had, the more efficient and valuable its service became and the more profitable for its owners. Large quantities of user data provided platform owners with greater insight into this digital world and the ability to better extract value through fees or advertising revenue. In the era of AI, ownership over software still matters, but the underlying hardware has grown in importance. Early platform companies were lean: Airbnb did not own any houses and Uber did not own any cars. They were selling X-as-a-service and relied on networks of users to make it all happen. Big AI benefits from what we call 'infrastructural power': ownership of AI infrastructure – the computational power and storage needed to train large foundation models. This occurs through their control of large data centres, undersea fibre-optic cables, and the AI chips used to train their models.

Just three companies own over half of the world's largest data centres, while only a select few can provide access to the hardware needed to train cutting-edge AI models. This infrastructural power also exercises a profound pull on AI talent, because the best people in the industry want to work at the leading organisations where they can do state-of-the-art work on the development of AI. Rather than AI opening the doors to more innovation and diversity, we may be witnessing the further consolidation of wealth and power as new players join more established firms.8 One consequence of this infrastructural power is a change in the nature of funding models and the degree of independence for new startups. AI companies do not just require a few million to get started – they need hundreds of millions in capital and access to a cloud platform to train foundation models. This means AI startups require strategic partnerships with existing cloud providers, which often buy a minority stake in the company. Large tech companies are also in a perfect position to provide billions in funding to new startups because they tend to have large cash reserves.

The first generation of platforms received funding from venture capital (VC), but the original founders maintained significant unilateral control over their businesses. As a result, many of these platforms turned into gigantic empires ruled by a single billionaire founder.

This is unlikely to occur in the era of AI, because any new empires will have to cooperate or merge with existing mega-corporations. The struggle to successfully commercialise AI products will likely create a multi-polar tech sphere in which legacy tech companies seek to partner with the most successful of the younger startups to form new coalitions to outcompete their rivals.

Trends – AI and the Global South

If anything, current trends can best be described as the growth of tech companies' ambitions for global dominance and the expansion of their empires deeper into the social fabric of our lives and the halls of political power. AI accelerates these trends and enriches those who have already benefited from the growing concentration of power in the hands of American tech billionaires. For those at the bottom of the pile, the pickings will be slim indeed. If countries in the Global South had little say in how digital surveillance platforms were built or deployed in their neighbourhoods, they will have even less input into the development of AI – a technology that is shrouded in mystery and requires enormous resources and computational power.

In Feeding the Machine, we draw a line from the technological development of current AI systems back to earlier forms of labour discipline used in industrial production. We argue that the practices through which AI is produced are not new. In fact, they closely resemble previous industrial formations of control and exploitation of labour. Our book connects the precarious conditions of AI workers today to longer histories of gendered and racialised exploitation – on the plantation, in the factory, and in the valleys of California.

To properly understand AI, we have to view its production through the legacy of colonialism.

AI is produced through an international division of digital labour in which tasks are distributed across a global workforce, with the most stable, well-paid and desirable jobs located in key cities in the US, and the most precarious, low-paid and dangerous work exported to workers in peripheral locations in the Global South. Critical minerals required for AI and other technologies are mined and processed in locations across the Global South and transported to special assembly zones to be turned into technology products such as the advanced AI chips required for large language models.

Outputs from generative AI also reinforce old colonial hierarchies, since many of the AI datasets and common benchmarks on which these models are trained privilege Western forms of knowledge, and can reproduce damaging stereotypes and display biases against minority groups misrepresented or distorted in the data.

AI & wars – the Gaza war example: the 'Project Nimbus' contract, with Google and Amazon participating in the war

Following the 7 October 2023 attack by Hamas on Israel, the Israeli army loosened constraints on civilian casualties and began an AI-assisted bombing campaign.

Further +972 investigations revealed that during these early days of Israel's genocidal assault the IDF used 'kill lists' generated by an AI targeting system known as Lavender to attack up to 37,000 targets with minimal human verification. The system uses mass surveillance data and machine learning to rank, from 1 to 100, the likelihood that any one individual in Gaza is active in the military wing of organisations like Hamas. Another system called Where's Daddy? was used to confirm the targeted individuals had entered their family homes before they were attacked, usually at night, by 'dumb' bombs that were guaranteed to cause 'collateral damage'. According to sources within the Israeli military, the result was thousands of Palestinian women and children dying on the say-so of an AI system.

In these cases, artificial intelligence was used to massively expand the capacities of a state military apparatus to conduct a war in which enormous civilian casualties resulted from the pursuit of putatively military ends. AI hasn't saved civilian lives; it has increased the bloodshed. Nor is AI limited to targeting programs; it is used across the Israeli military, including in another AI program called Fire Factory, which assists with the organisation of wartime logistics. As Antony Loewenstein has shown, once tested in combat on Palestinians, IDF military technology is then exported to conflict zones across the world by Israeli security companies.

This example of the military application of AI relies on global production networks involving a hidden army of workers across the world. In this book, we have shown that these vast networks distribute decision-making power unevenly and are directed by powerful companies for their own benefit. For the most part, workers do not know what happens elsewhere in the network, with the whole system remaining opaque to all but a few coordinating actors. The Israeli military accesses its AI and machine learning capabilities via Google and Amazon, which provide cloud computing services for its operations in a controversial contract called 'Project Nimbus'.11 After winning the Project Nimbus contract, both Amazon and Google began spending hundreds of millions of dollars on new state-of-the-art data centres in Israel, some of them underground and secured against missile strikes and hostile actors. It's a long way from data-centre worker Einar and his home in Blönduós, Iceland, but these centres will employ other technicians like him to keep the system functioning.

Hundreds of Google employees, part of the Jewish Diaspora in Tech group, protested the contract, signing a statement declaring: 'Many of Israel's actions violate the UN human rights principles, which Google is committed to upholding. We request the review of all Alphabet business contracts and corporate donations and the termination of contracts with institutions that support Israeli violations of Palestinian rights, such as the Israel Defense Forces.'

When it comes to computer vision systems such as military targeting technology, facial recognition software and autonomous drones, images and videos need to be curated and annotated by an army of data annotators, many of whom are employed via outsourcing centres such as the ones in Uganda and Kenya.

Following an exchange of missiles and air strikes with Hamas during its eleven-day war in Gaza in 2021, the Israel Defense Forces (IDF) declared it had conducted its 'first AI war'.1 Machine learning tools were at the heart of a new centre established in 2019, called the Targets Administrative Division, that used available data and artificial intelligence to accelerate target generation. Former IDF Chief of Staff Aviv Kochavi said, 'in the past we would produce fifty targets in Gaza per year. Now, this machine produces one hundred targets [in] a single day, with 50 per cent of them being attacked.'2

The IDF’s AI-based tar­get­ing sys­tem can make it appear like tar­gets are now select­ed with machine-like pre­ci­sion to min­imise the indis­crim­i­nate use of force. In real­i­ty, the pre­cise oppo­site is true.

We too refuse to be the raw material that is fed into the extraction machine. We too are willing to put our bodies upon the gears of a system that chews up human labour and spits out profit. We too wish to indicate to the people who run it, to the people who own it, that unless we are free, the extraction machine will be prevented from working at all.
