Research Seminar
AI Labor Disclosure Initiative: Recognizing the social cost of human labor behind automation
Exploring the push for transparency through the AI Labor Disclosure Initiative (AILDI), which reveals the human workforce behind AI and advocates for digital labor disclosure to inform policy, investment, and ethical AI development.
This seminar explores the evolving landscape of workforce transparency and its impact on AI development. Various initiatives for labor transparency and human capital reporting are already in place, including those initiated by civil society, investors, governments, and international organizations. When not enforced as part of legislation, these initiatives rely on companies volunteering information about human capital, including gender composition, working conditions, and compliance with laws.
The purpose of this discussion is to extend this transparency to AI-related workforce issues, advocating for regular reporting on digital labor to inform policy-making, supply chain management, and responsible investment decisions. Several revelations indicate that human moderators still operate "automated" filters on social media platforms like Facebook [1], that large language models like ChatGPT are trained and verified by data annotators [2], and that motion-recognition security cameras are monitored by human workers posing as artificial intelligence [3]. If companies are to label their products "artificial intelligence," they should disclose who fuels their databases and who maintains their information pipelines.
The AI Labor Disclosure Initiative (AILDI) promotes accountability and enables benchmarking through a culture of transparency. Our discussion will explore how socially sustainable machine learning practices complement the push to identify employees, subcontractors, and microworkers. We will also discuss existing due diligence and social responsibility legislation that supports formalizing digital labor's role in AI production. This initiative challenges us to consider the role and social costs of digital labor in AI business models and valuation methods.
Guest Speaker
Antonio A. Casilli (Institut Polytechnique de Paris)
Chair: Uma Rani (ILO Research)
Relevant links
[1] Euronews (2023). Facebook content moderators call the work they do 'torture.' Their lawsuit may ripple worldwide.
[2] Time (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.
[3] The Nation (2024). AI Isn’t a Radical Technology.