The rapid advancements in Artificial Intelligence owe their success to the efforts of at least 150 million individuals worldwide, who assist with tasks as basic as distinguishing a banana from a yogurt pot. Despite this significant contribution, there is no tangible regulation of the sector, as a collective of legal experts from the NGO "Intérêt à Agir" and I argued in an op-ed published in the French newspaper Libération.

While concerns about the impact of AI on jobs in developed countries are well documented, little attention is given to the workers crucial for the development and maintenance of AI systems. Beyond computer engineers and data scientists, individuals are indispensable at various stages of the AI creation process, including training algorithms with raw data and correcting biases to enhance performance.

The World Bank estimates that 154 to 435 million people globally are employed by digital platforms, constituting 4.4% to 12.5% of the global workforce. Among them are data workers, who face particularly challenging conditions: exposure to violent content, repression of union activities, long working hours spanning different time zones, low or absent remuneration, precarious contracts, and informality.

Existing attempts at regulating the AI sector rightly focus on the impact on end users in developed countries. However, there is a notable absence of equivalent efforts to safeguard the social rights of data workers in developing regions. The Fairwork project at the University of Oxford highlights a deterioration in working conditions since 2021, emphasizing issues such as wage equity, non-discrimination, and the right to union representation on micro-work platforms.

Viewing AI as a new manifestation of globalization reveals similarities with the organizational structure of the global economy. Legal frameworks designed to regulate multinational corporations, such as the UN Guiding Principles on Business and Human Rights adopted in 2011, can be applied to address the challenges posed by AI.

Proposed Measures

  1. Clear State Requirements: Governments should set clear expectations for businesses regarding the importance of respecting human rights in their value chains. For example, the ongoing negotiation of the EU AI Act could include broader responsibilities for producers, importers, and professional users of AI solutions concerning the social conditions in which those solutions are developed.
  2. Corporate Accountability: Companies must consider the negative impacts their activities may have on human rights in their value chains. Profiting from AI solutions built on human labor should prompt active policies to identify and mitigate risks in the supply chain.
  3. Access to Remedies: States should ensure that victims have access to legal remedies. Existing laws, such as the French duty of vigilance, which requires parent companies to prevent serious risks to fundamental social rights in their value chains, can be powerful tools for protecting workers' rights if actively employed.