The rapidly increasing presence and use of generative AI since the end of 2022, epitomized in platforms such as ChatGPT and Stable Diffusion, raises a number of questions. These include not only the widely known (optimistic or worrying) projections about the beneficial use of AI in research, the alleged "end of truth" or an expected "end of work". Such prognoses are also directly linked to developments that affect current labor, both for and with AI. In contrast to the idea of purely machine-based systems, human work is an essential agency and prerequisite of virtually every manifestation of contemporary AI: the human labor of data workers (labeling huge amounts of data for the "learning" machines) as well as the creation of the textual and visual content captured and crawled on the internet (which becomes, as data, the basis of human-led machine learning).
For both forms of human agency, implicated as they are in a supposedly artificial and autonomous technology, the question of legitimacy arises. Years of exploiting human (pre-)labor for automated artificial intelligence processes do not legitimize such extractive practice. Far from it: they instead demand critical interventions by legal advocacy and scholarship pertaining to labor law and intellectual property legislation. By necessity, the conditions of AI's production, and its terms, have to be brought to the fore with as much clarity as possible.
The working conditions of data workers, who are scattered all over the world, poorly paid and, due to labeling and filtering out violent and disturbing content, mentally burdened, need to be scrutinized and improved. The copyright on the data used (not only, but especially, the work of artists) also has to be addressed. It is always a matter of masses: enormous amounts of data processed by a huge number of data workers, which:

the World Bank currently estimates at between 150 and 430 million.

Both aspects therefore illustrate the importance of the production and use of (big) data for this new form of technology. They help with today's urgent task of understanding how artificial intelligence works.
Initiatives such as the 2023 Content Moderators Manifesto, the collective research project Data Workers' Inquiry launched in summer 2024, and the ongoing lawsuits filed by authors, screenwriters and publishers against OpenAI and Microsoft are important steps. They provoke further questions: what can become data at all, where does new data come from, who has the authority to interpret the data, and how, and what role do the users of platforms such as ChatGPT, Stable Diffusion and many more play in this?
The workshop "AI as Work. Terms and Conditions of Contemporary Image and Knowledge Production" will bring into conversation the researchers, practitioners and activists Ariana Dongus, Krystal Kauffman, Nicolas Malevé and Tianling Yang, opening with the following impulse talks:
Krystal Kauffman:
The Data Workers behind AI: Exploitation in the Industry and How to Prevent It
Ariana Dongus:
Always in Beta? AI, Data, and Labor in Experimental Economies of Exclusion
Nicolas Malevé:
The Vagaries of the Artistic Class in Generative AI
Precisely because the infrastructures and concrete functionalities of AI are often hidden, abstract and not easy to understand, the focus of this gathering will be on discussion among all participants.
The workshop is part of the project "Terms and Conditions. The Legal Form of Images" and a collaboration of the Media Studies program (EMW) at the University of Applied Sciences Potsdam and the University of Potsdam with the Harun Farocki Institut (HaFI), Berlin, and the Academy of Visual Arts, Leipzig.