‘Enslaved corporate ‘droids, sent by oligarchs, are comin’ for your job!’
ChatGPT has been all over the news of late, promising a bright new future… except that’s not what the Government’s own research predicts for England and Wales.
According to the media, on-line chatbots are helping people to be ‘more creative’; but is the whole picture really that rosy? Digging deeper, the facts are in plain sight, but no one ‘in authority’ seems to be capable of holding this rather disturbing discussion.
This is a really complex issue, which is perhaps why the media has so far failed to have an honest, expansive discussion about it. In response to recent ‘popular science’-type news about ChatGPT, what I outline here is why the accelerating pace of technological change, driven by artificial intelligence1 (AI), is about to up-end the traditional world of work; and with it, the well-being of large parts of England and Wales. The Government’s own research says this. The question is: why is there no public debate about this, and why are our politicians seemingly so supportive of it?
This is an issue which I have been following for many years: Mapping how the changing use2 of technology is allowing seemingly unstoppable forces to ‘reform’ society. There’s a lot of necessary information I need to cram into this piece, which is really hard to do in a clear and concise way.
The first draft failed, taking a massive detour which ended up as a piece of music celebrating Mario Savio’s rant about “the machine”3. Having got that out of my system, the second draft was meant to be a short news item; but after passing 2,000 words it was clear it needed a new approach. And so we end up here…
1. “So this is how liberty dies, with thunderous applause”
As I reworked a one-second clip of R2-D2 – my first attempt at making a video – that line from ‘Revenge of the Sith’4 kept going through my mind.
In ‘Star Wars’, the ‘droids are essentially dumb, voice-controlled tools that people use for the menial chores in their lives; and nowhere in that fictional universe do the ‘droids actually seem to replace people. Instead, they act like a slave class5, where even the poorest person in society can own one to make their life easier.
The first obstacle to consider is the in-built bias of the media – in particular the business, economics, and technology media – when covering issues of technological progress, especially as it applies to work and automation.
The media dialogue is skewed toward a narrow message of ‘affluence’ and ‘progress’; phrased in terms and assumptions that are only relevant to the narrow, unrepresentative audience of ‘like-minded aspirants’ it seeks to address. As a result, coverage of technological advances tends to favour the most educated or affluent, and to ignore the perspectives and needs of the least affluent.
Take, for example, the similar debate around the effects of home-working. Figures just released6 by the Office for National Statistics (see below) show that while 90% of those earning £50,000 per year or more can work from home, and 27% do, only 25% of those earning £10,000 or less can work from home, and only 8% do. Home-working, in general, is skewed towards the affluent. Yet the media debate – as in the general bias towards ‘office culture’ – favours the perspective of clerical or managerial roles, despite statistics showing7 that only a minor proportion of jobs are office-based.
A critical flaw with the popular discussion about ChatGPT8 – and of AI in general – is that it represents a similar distortion, both of the intended audience, and of how representative of the whole nation that audience is: Figures in the media are talking about AI as if we’ll all end up with ‘droids we can talk to, which will free us from the drudge of daily work by making us all creative geniuses; when, in fact, it is more likely to make people ‘employment-insecure’ rather than simply ‘unemployed’, with far lower levels of job security and income.
We must question not only ‘what’ is said within the popular dialogue about AI, but also who it is being ‘said for’ – and whether that is representative of society as a whole.
Enter ChatGPT – where that in-built bias within the media’s coverage leaves many of the most troublesome aspects of AI unexplored.
About six months ago, YouTubers9 started producing content which had been ‘written by’ ChatGPT. It was clunky10, and at points gibberish; and while many YouTubers treated it as an entertaining toy, others, particularly in the field of music11, fully understood the potential of this tool12 to overturn the way people worked13.
Arguably, that splurge of content wasn’t just down to the developers of ChatGPT wanting to generate publicity: An AI requires ‘training’ in order to make it work more reliably – and all those content creators given a free trial were willingly, though perhaps unwittingly, helping in the development of the system.
Here we hit the next problem of AI in general: The energy consumed in training an AI system until it can give the desired responses.
This is highlighted in a Royal Institution lecture14 from last December, featuring the results of a survey published in Nature15 (relevant chart shown above). Computers become more powerful by performing more calculations per second; and while each calculation may use slightly less energy16 over time, processing capacity17 is growing faster than per-calculation energy use is falling – which means that, overall, energy consumption across the IT sector continues to rise18.
Machine learning19, and AI training in particular, consume very large amounts of processing power, and hence energy. Rather like the problems with bitcoin mining20, this is being driven by the use of pluggable, massively parallel GPU units21, which add large amounts of processing power to a standard computer system – albeit at the cost of a far higher level of power use.
As the study in Nature showed, the processing power consumed by older methods of machine learning had been doubling every 24 months; for the latest AI systems, because of the far larger amounts of data they process, it is doubling every three months; and for ChatGPT, and similarly large22 AI systems, every two months. As shown in the Royal Institution lecture, by 2030 or soon after, data processing may be consuming 20% or more of the global electricity supply.
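To see why those doubling times matter, here’s a quick back-of-the-envelope calculation – a sketch using only the doubling periods quoted above, not figures from the Nature survey itself – showing how fast compound doubling runs away over a two-year span:

```python
# Back-of-the-envelope: growth in compute demand over 24 months
# for different doubling periods. Illustrative arithmetic only.

def growth_over(months_total, doubling_period_months):
    """Multiplicative growth after `months_total` months,
    given one doubling every `doubling_period_months` months."""
    return 2 ** (months_total / doubling_period_months)

for label, period in [("older machine learning (24-month doubling)", 24),
                      ("recent AI systems (3-month doubling)", 3),
                      ("largest systems, e.g. ChatGPT (2-month doubling)", 2)]:
    # e.g. a 2-month doubling compounds 12 times in 24 months: 2**12 = 4096
    print(f"{label}: x{growth_over(24, period):,.0f} over two years")
```

In other words, over the same two years in which the older trend merely doubles compute demand, a two-month doubling period multiplies it by roughly four thousand – which is why the energy trend in the lecture looks so alarming.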
What does AI training achieve? It makes the responses created by the system reflect an ‘optimum’ – a statistical blend – of all the content it has reviewed.
Of course, as I note above, a large part of the news media’s content is biased towards an idealised, narrow perspective. This means that what ChatGPT reflects isn’t necessarily the dominant voices in, for example, political journalism, but the general sentiment across all public media.
For right-wing media pundits this has produced a truly infuriating result: ChatGPT shows a left/liberal bias23 in its responses. That is unsurprising, given the mainstream media consistently assumes24 the public are further to the right than they actually are; and in recent years, research shows an increasing level of prejudice25 in media discourse, despite the upcoming generations26 being far more liberal than their predecessors; and, of course, Britain has the most right-biased media27 in Europe.
Personally, though, I find ChatGPT to be boringly ‘liberal’, as whenever I take those tests I always end up in ‘Anarchist’s Corner’ – with extreme left and libertarian views. Irrespective of that, ChatGPT is never going to give any truly ‘radical’ responses, since it can only reflect a blend of views which are already in general circulation – encompassed within all the data upon which the system was trained. Its tendency to represent a sample across the whole spectrum of content means the colour of that response will always be ‘beige’28, not red or blue.
Finally, there’s been a lot of talk lately about Britain’s industrial strategy – or lack thereof. What is not said, again because of the bias of the political and business media, is that for the last fifty years whenever businesses and politicians talk about ‘investing in productivity’, what they’re actually talking about is greater automation29.
“AI and related technologies should not cause mass technological unemployment, but our analysis suggests that they may well lead to significant changes in the structure of employment across occupations, sectors and regions of the UK. The effects may be relatively small over the next five years, but could become more material over the next 10-20 years.”
The debate about the changing future of work seems inextricably linked to the tag-line of ‘high-paying jobs’. In reality31, greater automation leads not so much to unemployment as to the end of traditionally secure, high-paying jobs, and the growth of less-secure32 and ‘gig economy’33 working.
Research shows34 that 8%, or 1½ million jobs, may be transformed in England and Wales by new technology over the next few years; a process that will accelerate until 203035.
From the 1980s, though automation played a part, it was the off-shoring of primary industries (such as steel and mining) and of manufacturing that did the greatest damage to ‘working class’ communities. Now AI will do the same to the middle classes. That trend will hit the fringes of the nation – those furthest from the South East – hardest; and overall, women, younger workers, and those working part-time will be disproportionately affected.
The catch here is that those who believe this will be positive for the economy assume people will retrain36 to use these new technologies – which, so far, is proving difficult because of the structural barriers to accessing formal education or in-work training.
There is no evidence that AI will aid most people’s creativity, or get them better jobs. From the many studies available, including those carried out for the government, there is a consensus that there will be disruption to the roles people play across many occupations; and that the scale of that disruption is hard to pin down beyond the general description of “significant”.
3. Unless you’re in ‘the 1%’, you may be about to get some bad news…
The greater issue here is that ChatGPT is not like previous waves37 of automation. ChatGPT will target the middle-level clerical roles across the professions – from legal secretaries, to copywriters38, to local authority managers – who currently do rather well out of the technocratic ‘knowledge economy’39.
That’s not reflected within the current debate over automation, which tends to focus on the impacts new technologies have on semi- or un-skilled workers; and certainly, most of the mainstream coverage of ChatGPT failed to relate it to the likely impacts of this and other AI tools highlighted in recent research.
“So why did I still have that feeling of dread? Artificial intelligence, text transformers and diffusion models, everything that we’re currently seeing, seems to be on that sigmoid curve of progress. And I don’t know what point on that curve we’ve got to. If we’re already most of the way up that curve, then cool... It’s not going to take many jobs... If we’re at the middle of that curve, then wow, we’re gonna get some really impressive new tools very soon... But that feeling of dread came from the idea that ChatGPT, and the new AI art systems, might be to my world what Napster was to the late nineties. The herald, the first big warning that this new technology, the thing that was going to change everything, was starting to actually change everything. Where huge numbers of people, not just the nerds, were actively using it.” [11:27-12:32]
Tom Scott, as his huge catalogue of videos shows, enjoys and actively promotes the idea of technological progress. In this video, you can see that the penny has finally dropped: he truly realises the scale of disruption that AI tools like ChatGPT might bring to all kinds of employment. Yet at the same time, his appreciation of technology seemingly leaves him resigned to his powerlessness to change this outcome.
We seriously need to talk about AI-based automation – such as ChatGPT. Right now the debate over the future of work is being led by ‘the ignorant’41 – especially politicians and economists – who feel they are immune to the changes to employment these systems will create. Many others, like Tom Scott, clearly do understand, but feel powerless to stand against the continued advance of neoliberal capital’s dismemberment of the social contract.
This is not about ‘stopping’ technology: It’s about who reaps the economic benefits of this process, and how this accentuates national and global inequality; and the fact that, under the increasingly unequal distribution of wealth, it is highly likely that those most negatively affected by these new systems will not receive any significant support to manage those disruptive effects.
In the 1930s, economists such as Keynes42 believed that a century later people would only need to work for a few hours a day43. Clearly, this did not come to pass! The economic rewards of higher productivity, created by new technology and economic globalisation, were not shared. They were hoarded by a minute group of what we now call ‘billionaires’ or ‘oligarchs’44. There is absolutely no reason to assume that this same pattern of immiseration will not repeat with the introduction of AI – unless, as a nation, we choose to oppose it. This is not an issue of tackling technology; it’s an issue of tackling the dominant economic ideology that shapes these trends toward certain desired outcomes.