[Image: overhead, security camera-style shot of people walking in a building complex, each person outlined by a blue box.]
According to Professor Mark Graham, many of the digital technologies we depend on can only function due to an army of human labour, hidden from sight. Image credit: Adobe Stock.

The hidden cost of AI: In conversation with Professor Mark Graham

[Image: Professor Mark Graham, a middle-aged man with dark hair, wearing a black shirt.]
Today, we are in the middle of a hype cycle in which companies are racing to integrate AI tools into a variety of products, transforming everything from logistics to manufacturing to healthcare. However, the data work that is essential for the functioning of the products and services we use is often deliberately concealed from view. If it weren't for content moderators continually scanning posts in the background, social networks would be immediately flooded with violent and explicit material.

Without data annotators creating datasets that can teach AI the difference between a traffic light and a street sign, autonomous vehicles would not be allowed on our roads. And without workers training machine learning algorithms, we would not have AI tools such as ChatGPT.

Professor Mark Graham researches global technology through the eyes of the hidden human workforce who produce it. He argues that AI is an "extraction machine", churning through ever-larger datasets and feeding off humanity’s labour and collective intelligence to power its algorithms.

We caught up with Mark to discuss some of these issues.

In talking about the AI technologies we rely on, you mention the "countless humans forced to work like robots, toiling in monotonous low-paid jobs just to make such remarkable machines possible" -- who are these people, and what sorts of jobs are they doing?

There is a whole range of ‘data work’ needed to make our digital lives possible. Data annotators label data with tags so that it can be understood by computer programs. Content moderators sift through digital content, removing harmful material that breaches company guidelines. If you have ever interacted with any form of AI, whether it be a chatbot, a search engine, a social media feed, a streaming recommendation system, or a facial recognition system, data workers have had a hand in building or maintaining those systems.
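To make the annotation work concrete, here is a minimal sketch in Python of the kind of bounding-box record an annotator might produce when labelling street scenes for a driving dataset. The field names and values are illustrative assumptions, not drawn from any particular platform's schema.

```python
# Illustrative sketch: one labelled object in an image, of the kind a
# data annotator draws by hand. Field names are invented for this example.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    """A class label plus the pixel coordinates of a hand-drawn box."""
    label: str   # e.g. "traffic_light" or "street_sign"
    x_min: int   # left edge of the box, in pixels
    y_min: int   # top edge of the box, in pixels
    x_max: int   # right edge of the box, in pixels
    y_max: int   # bottom edge of the box, in pixels

# A single street-scene image may need dozens of such boxes,
# each one placed and labelled by a human worker.
annotations = [
    BoundingBox("traffic_light", x_min=412, y_min=88, x_max=441, y_max=160),
    BoundingBox("street_sign", x_min=95, y_min=210, x_max=150, y_max=265),
]

for box in annotations:
    print(f"{box.label}: ({box.x_min}, {box.y_min}) to ({box.x_max}, {box.y_max})")
```

Multiplied across millions of images, records like these are what teach a model the difference between a traffic light and a street sign.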

The root cause of the problems encountered by data workers lies in the power imbalance between them and the institutions that govern their jobs… It is unlikely that any of the issues data workers experience will ever be meaningfully addressed without workers building their collective power in movements and institutions.

You talk about "building worker power", as a step towards redressing some of the issues of the hidden labour of AI — I guess this is quite difficult when these jobs are so atomised, and the workforce so expendable?

The root cause of the problems encountered by data workers lies in the power imbalance between them and the institutions that govern their jobs. Historically, when social movements achieve lasting change, it's often through organising a critical mass of people to push for policies that address systemic inequalities. It is therefore unlikely that any of the issues data workers experience will ever be meaningfully addressed without workers building their collective power in movements and institutions.

However, data workers face serious barriers to ever building that power. The jobs that they undertake are relatively footloose and standardised, and, as a result, are carried out in a planetary-scale labour market. Their jobs can be quickly shifted to the other side of the planet. 

There are no easy ways for workers to build collective power under these sorts of conditions. Data workers in a country like Kenya or the Philippines have an enormous structural disadvantage. However, that is not to say that organising is impossible. In production networks that are organised globally, workers will increasingly need to explore ways of organising across geographies. This will undoubtedly take a range of forms, but all of them will need to be rooted in the principle that workers acting collectively can demand better conditions for everyone in a production network. Isolated efforts, by contrast, are unlikely to achieve lasting change.

A requirement of accountability is visibility. What are some of the ways that this labour is hidden from us -- and why? How could it be made more open and visible, more recognised? 

The labour in AI production networks is almost always hidden from view. If you drink a cup of coffee or buy a pair of shoes, you probably have a conception that at some point that coffee passed through the hands of a plantation worker or the shoes were assembled by someone in a sweatshop. However, precisely because AI presents itself as automated, very few people can imagine what the human labour on the other side of the screen looks like. AI companies are complicit in this subterfuge. They want to present themselves as technological innovators rather than as the firms behind vast digital sweatshops.

Because AI presents itself as automated, very few people can imagine what the human labour on the other side of the screen looks like. AI companies are complicit in this subterfuge.

Because of this enormous gap between how tech companies present themselves and the actual on-the-ground conditions experienced by workers in those production networks, I started the Fairwork project. Fairwork evaluates companies against principles of decent work and gives every company a score out of 10 based on how well they stack up against those principles. To date, we have scored almost 700 companies in 38 countries. This work has encouraged many companies to improve their workers’ conditions in order to receive a higher score.
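For readers curious how a score out of 10 can be built from principles, here is a simplified sketch in Python. It follows Fairwork's published structure of five principles, each carrying a basic point and an advanced point, but the evidence data and threshold logic below are invented purely for illustration and are not Fairwork's actual scoring code.

```python
# Simplified, illustrative tally of a Fairwork-style score out of 10.
# Five principles, each worth up to 2 points (basic + advanced).
# The example evidence is for a hypothetical platform.

PRINCIPLES = ["Fair Pay", "Fair Conditions", "Fair Contracts",
              "Fair Management", "Fair Representation"]

def fairwork_score(evidence: dict[str, tuple[bool, bool]]) -> int:
    """Award 1 point per basic threshold met and 1 per advanced
    threshold met; the advanced point requires the basic one first."""
    score = 0
    for principle in PRINCIPLES:
        basic, advanced = evidence.get(principle, (False, False))
        if basic:
            score += 1
            if advanced:
                score += 1
    return score

# Hypothetical evidence for an invented platform:
example = {
    "Fair Pay":            (True, False),  # meets minimum wage, not a living wage
    "Fair Conditions":     (True, True),
    "Fair Contracts":      (True, False),
    "Fair Management":     (False, False),
    "Fair Representation": (False, False),
}

print(fairwork_score(example))  # -> 4 out of 10
```

The point of publishing such a score is reputational: a 4/10 in a public league table gives a company a concrete, comparable incentive to improve.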

The next phase of our project will involve going to the lead firms in AI production networks, the brands that consumers are familiar with, and letting them know that we are going to start holding them accountable for all the working conditions upstream in their production networks. We will constructively work with them to embed principles of fair work into their contracts and supplier agreements, but also use our research to hold them accountable when they fail to do so.

Watch Mark’s latest video highlighting the hidden cost of AI and the implications of this ever-evolving technology for the thousands of AI workers toiling away behind the scenes to deliver AI-powered services.

I guess an obvious solution to some of these issues is regulation -- workers should be protected in whatever jurisdiction they are working, and wherever in the production chain they sit. What efforts are being made in this area?

The planetary labour market that much of this work is traded in makes it difficult for regulators in the Global South to raise conditions. If regulation raises costs in Kenya, those jobs can move to India. If regulation in India raises costs, those jobs can move to the Philippines. These dynamics create a ‘race to the bottom’ in wages and working conditions, leaving regulators having to choose between bad jobs and no jobs. As the economist Joan Robinson famously said, ‘The misery of being exploited by capitalists is nothing compared to the misery of not being exploited at all.’

However, even though the global geography of the labour market neuters the ability of regulators to act in the Global South, it strengthens the hand of regulators in the Global North. Regulators in countries that are home to a lot of the demand for digital products and services have the ability to play an outsized role in setting standards. The EU's proposed Supply Chain Directive is a good example of this. It aims to make companies operating in the EU accountable for human rights and environmental impacts throughout their global supply chains. Because few AI companies will want to forgo being able to sell to consumers in the EU, this directive has the potential to improve conditions for the many workers in countries with weak labour protections.

Much of the discussion about AI has been focused on existential risks that it might present in the medium- to long-term future. However, the real risks of AI are already right here in the present.

Finally, your assessment of the AI industry is pretty bleak, that "workers are treated as little more than the fuel needed to keep the machine running" — and that this is happening to all of us, right now. What are some of the key issues and battlegrounds in which this question will be played out in the coming years?

Much of the discussion about AI has been focused on existential risks that it might present in the medium- to long-term future. However, the real risks of AI are already right here in the present.

A few decades ago, anti-sweatshop campaigns drew attention to the plight of garment workers and shifted the onus of responsibility for those workers onto the brands that sell clothes. Those campaigns did not fully eradicate sweatshops, but they were an important step on the path to normalising the idea that lead firms in production networks have the potential power to impose decent work conditions throughout a supply chain. If we are to head towards a fairer future of work, one of the key battlegrounds will have to be ensuring that big tech companies take responsibility for the conditions of all workers in their supply chains.

Because tech companies have, to date, taken on very little of this responsibility, pressure will be required from consumers, policy makers, and workers. Consumers will have to recognise that they are complicit in the conditions of the workers who made or maintain the products and services they use. Policy makers will have to realise that a laissez-faire approach to regulation only serves to increase inequalities. And workers will have to find ever more creative ways to organise across supply chains in order to hold companies accountable. Until we all force these companies to change, we will remain nothing more than fuel for the machine.

***

Professor Mark Graham was talking to the OII's Scientific Writer, David Sutcliffe.

Mark Graham is Professor of Internet Geography at the Oxford Internet Institute, where he leads a range of research projects spanning digital labour, the gig economy, internet geographies, and ICTs and development. He's also the Principal Investigator of the participatory action research project Fairwork, which aims to set minimum fair work standards for the gig economy.

Read his latest book: James Muldoon, Mark Graham, and Callum Cant (2024) Feeding the Machine. Canongate.