A storm coming?

Sir Sadiq Khan was right to highlight the potentially “colossal” impact of artificial intelligence (AI) on London and its economy in his Mansion House speech last week, but “controlling” this still-emerging technology may be a tall order.

As the Mayor argued, the dominance of knowledge economy sectors in the city puts London “at the sharpest edge of change”. Three such sectors – information and communication, finance and insurance, and professional, scientific and technical services – account for 31 per cent of jobs in the capital, almost twice their share across the UK as a whole.

These are the roles that are most exposed to AI. Human-centric, a report written by me and published by the University of London in October, argued that generative AI’s ability “to precis, to research, to generate ‘ideas’, to structure arguments and data, and to produce text and images make it a close fit for tasks that are core to knowledge economy roles”. AI boosters and think tanks alike predict that these sectors may be as dramatically shaken up as agriculture and manufacturing were during previous spates of technological change.

However, the impact is hard to discern at the moment. There may have been a fall in graduate recruitment, but the evidence is contested and the impact of AI hard to disentangle from other factors. And corporate adoption of generative AI has been slow. This is partly about accuracy and accountability, but also reflects the ways in which generative AI use is spreading: individual workers are using chatbots on mobile phones and desktops rather than technology introduced through complex, top-down corporate roll-outs.

But it is still early days, only three years since ChatGPT was launched, even if it seems longer. And so, while things may feel calm for the moment, there could be a storm coming, with London’s economy directly in its path. The impact on employment could, as the Mayor said, be dramatic.

It bears repeating that generative AI does not simply “take jobs”. The technology can be used to support particular tasks (“augmentation”) or to fully automate those tasks (“substitution”). If this saves time and money, it boosts productivity: more output for the same input.

Productivity gains may be realised by redeploying workers to build more products or serve more customers, or to develop new products and services. Such gains can also be shared with workers in the forms of reduced hours or higher pay. But they can also be cashed in, to return money to shareholders or taxpayers, through “efficiency savings” – that is to say, job losses.

There are ways to slow this down, but they are not necessarily desirable. The US think tank Brookings has proposed a “robot tax” on automation to tip the balance in favour of keeping humans in work (and to create revenues that could support workers who lose out).

Regulations could also be used to slow AI adoption. But keeping humans working on tasks that could be done more efficiently by AI makes productivity gains much harder to achieve and poses a particular threat to cities like London, which export and compete globally. We can throttle back AI adoption here, but will Singapore, New York and Dubai follow suit?

London is actually well-placed to seize the opportunities that come with AI: the city is a world leader for AI investment and innovation, with a highly educated, cosmopolitan population and a bedrock of world-class universities. The city is also a centre for innovation, for creating new products and services, and for the highly personalised and specialised professional services that may be most resistant to automation.

Even if AI adoption in London leads to job losses, history suggests that technological change leads to the creation of as many jobs as it destroys. We have done this before: automation of London stock market transactions – part of the Big Bang of 1986 – took work away from hordes of back-office clerical staff, who previously had to reconcile every trade on paper. These jobs went, but new jobs – in IT, as analysts, in compliance – led to a net growth in financial services employment.

That said, there is a time lag, and the net gain in jobs can obscure the traumatic impact on those people who lose out. The Mayor’s commitment to offer AI training to all Londoners will be valuable. Human-centric argues that universities should play a part too, helping workers to develop the skills they will need to thrive in and shape the new world – part of the much-vaunted shift towards lifelong learning.

Universities can also ensure that the next generation of graduates has the resilience and skills to thrive. This is partly a matter of technical skills, but also about knowing how to use a deceptively “easy” resource critically and ethically, putting this in the context of a wider understanding of citizens and society, and nurturing the human skills – of judgement, understanding, collaboration – that employers still see as paramount.

The Mayor’s Mansion House speech touched on a bigger issue too: the “unprecedented concentration of wealth and power” that could result from AI adoption. The risk is that productivity gains from the use of AI flow mainly to big tech companies and their shareholders, rather than to workers and the public at large. This is beyond the reach of city or even national governments, but will become increasingly urgent if AI adoption does create a boom. What good is growth if its fruits flow to a few people in Silicon Valley?

There are ideas out there – from a levy based on how many hours of computational time individuals and firms use, to an endowment that takes a proportion of the value of AI companies launching on the stock markets and uses it to endow a “universal basic income” that enables everyone to share in the benefits of AI.

However, adopting any of them will require a level of international cooperation that seems almost impossibly remote in today’s fractured geopolitical climate. Perhaps this is an issue where cities can take the lead, hoping that their national governments will catch up over time.

First published by OnLondon.

Remote control – AI and hybrid working

This decade is likely to see the biggest transformation of the workplace since the widespread adoption of the personal computer. Hybrid and remote working patterns adopted during the pandemic appear to be sticking, and a wave of disruption from artificial intelligence (AI) and large language models (LLMs) is following rapidly behind.

London is at the epicentre of these twin “workquakes”. The capital has persistently had the highest levels of home-working in the UK, with two thirds of Londoners saying they worked at home at least one day a week last summer. This reflects hybrid working’s dominance among professional and managerial staff, who make up 63 per cent of London’s resident workers, compared with 50 per cent across England as a whole.

These people enjoy the flexibility, work-life balance and personal productivity that working from home can offer, though the impact on organisational or inter-organisational productivity is more contested. Nonetheless, speakers at a London Assembly meeting last week said that the era of “five days a week in the office” had gone for good, and that the task was to adapt central London to new ways of living, working and playing.

The accelerating pace of AI adoption looks likely to add turbulence. A recent UK government report found that workers in London were twice as exposed to AI as the UK average. This was not because of LLMs’ appetite for the diversity and vitality of the capital but, like the prevalence of home-working, largely a result of London’s occupational make-up. Unlike previous waves of automation, which affected manufacturing and routine clerical work, AI is coming for the professionals.

The report suggests that the most affected occupations include management consultants, financial managers, psychologists, economists, lawyers, project managers, market researchers, public relations professionals, authors and, perhaps surprisingly, clergy. The “safest” jobs are those such as sports professionals, roofers, plasterers, gardeners and car valets. The former occupations are over-represented in London, the latter are not.

However, before soft-handed metropolitan knowledge workers like me rush to retrain, ignoring our lack of aptitude, there are some caveats. The first is that the government report’s projections make no distinction between jobs that are augmented (those where workers can deploy AI to dramatically enhance their productivity), and those that are likely to be substituted (replaced, sooner or later, by new technology).

The second is that the analysis takes no account of the new jobs that will be created. We can see those that are at risk, but it is harder to identify the opportunities that will arise. A year ago, few people had any idea what a “prompt engineer” was. Today, demand for them is booming. And we can be reassured by historical experience: the majority of jobs that Americans do today did not exist in 1940.

In any case, most professional jobs involve more than one activity, which is where the interaction between working from home and AI gets interesting. A management consultant, for example, may spend time meeting clients, preparing pitches, interviewing workers, analysing data, workshopping ideas and writing reports. A PR professional may write press releases, manage staff, research markets, pitch to clients and journalists, develop concepts, devise guest lists, plan and host events.

Some of these tasks are intrinsically social and best undertaken face-to-face. Others are more easily undertaken remotely, away from distraction and other people. Those in the latter group are also those that can be most easily supported by AI.

From this perspective, AI adoption and hybrid working will complement each other. Hybrid working has already accustomed us to working remotely with less social interaction; AI can provide a sounding board for ideas and be an orchestrator of collaboration, without the hassle and cost of a commute. Similarly, intelligent use of AI can boost productivity, improve co-ordination and reduce the “digital overload” of online meetings, emails and collaboration spaces that built up during lockdown.

But there may be a sting in the tail. Over time, people working remotely with AI support may find themselves edged out by their machine collaborators. Cost-conscious employers are already exploring whether some jobs undertaken remotely might be outsourced internationally. A task that can be completed in Leamington Spa rather than London can also be exported to Lisbon or Kuala Lumpur. Over time, it may also be undertaken by an AI.

Oxford University professors Michael Osborne and Carl Benedikt Frey, who published a highly influential analysis of the potential impact of automation on the workforce in 2013, recently wrote a (very readable) update reflecting on the explosive growth in AI and how it may affect their original projections.

In 2013, they argued that tasks requiring social intelligence were unlikely to be automated. Now, they write, AI has challenged that “bottleneck” to automation: “If a task can be done remotely, it can also be potentially automated.” However, for sensitive tasks and relationships, face-to-face would retain primacy:

“The simple reason is that in-person interactions remain valuable, and such interactions cannot be readily substituted for: LLMs don’t have bodies. Indeed, in a world where AI excels in the virtual space, the art of performing in-person will be a particularly valuable skill across a host of managerial, professional and customer-facing occupations. People who can make their presence felt in a room, that have the capacity to forge relationships, to motivate, and to convince, are the people that will thrive in the age of AI. If AI writes your love letters, just like everybody else’s, you better do well when you meet on the first date.”

What does this all mean for cities like London? To start with, while we do not know precisely what new jobs will be created by the AI revolution, London is already one of a handful of hotspots for AI start-ups, so it is likely to be the location for many of the new jobs too. The capital is already home to Google DeepMind and many other high growth AI firms, and OpenAI has announced plans for its first international outpost in London.

The combination of AI and hybrid working may ironically strengthen London’s role as one of a few genuine global centres for face-to-face interaction. If remote work is increasingly dispersed or automated and in-person workers with social skills remain in demand, then diverse, globally-accessible, sociable cities such as London will provide the ideal setting for their relationships and collaborations.

There is a bigger picture too. A recent paper by Richard Florida and others talked of the rise of “metacities” based on long-distance networks of collaboration and intermittent commuting. This identified London and New York as the world’s two leading “superstar” hubs, sitting at the heart of networks of talent and interaction. London’s network, as measured by talent flows, includes Manchester, Birmingham, Edinburgh and Bristol, but also Dublin, Paris, Lagos and Bengaluru.

Florida and colleagues argue that the constellation of satellite cities will shift over time, but the importance of superstar cities will persist. This suggests that in coming years London will need to plan for growth in housing, in offices and in new forms of collaborative and social spaces.

The city will also need to be open and welcoming to global talent while helping local workers adapt to change, and to work more closely with its satellite cities to ensure that economic transformation can deliver prosperity and economic growth across the UK.

This is likely to be a turbulent decade for London’s economy, but it could also be one in which the capital’s national and global profile increases.

First published by OnLondon.

AI: reshaping the knowledge economy

Since its earliest days, technology has shaped cities. The industrial revolution created the great manufacturing centres of the 19th century; trains fuelled London’s growth, replacing market gardens with Metro-land; and global information and communication technology networks underpinned the emergence of a network of global cities in the late 20th century.

Right now, social media are clamorous with hype about artificial intelligence (AI), and the pace of change seems dizzying. Anyone who has played with “generative” AI tools such as OpenAI’s ChatGPT, Google’s Bard, or Midjourney’s image generators will have experienced the uneasy feeling that they are dealing with something sentient, however much they know that these systems merely aggregate and recombine information.

Prompt engineering is not straightforward, as this Midjourney representation of ‘futuristic London’ illustrates.

What impact is this wave of innovation likely to have in London, and on London’s economy in particular? In recent weeks, a few academic and commercial studies considering the labour market impact of generative AI have been published. This article tries to weave together some of their threads.

One piece of positive news is that London is the leading European city for AI. A 2021 survey by the government’s Digital Catapult identified the UK as the third most important centre for AI after the USA and China, with more than 70 per cent of UK AI firms and – judging by 2020 job postings – around a third of all new advertised AI jobs based in London.

London’s tech sector has grown fast and is estimated to employ around 900,000 people. But the impact of generative AI is likely to extend beyond the capital’s silicon centres and suburbs. One team of researchers, Tyna Eloundou and colleagues, has looked at detailed task descriptions for US occupations to estimate the impact that generative AI technologies could have. Overall, they estimate that 80 per cent of the US workforce could be affected by them, with around 20 per cent being heavily affected. The impact would be greatest for higher paid jobs and those held by graduates.

The research team has not published details of its analysis, but does summarise the impact on different industries. At the top of the list, with more than 40 per cent of tasks affected, are various financial services and IT subsectors, as well as publishing and broadcasting (non-internet), and professional, technical and scientific services.

A Goldman Sachs report reaches similar conclusions. It argues that the impact of generative AI will be greatest in advanced western and far eastern economies. In Europe, it suggests the greatest impact will be on professionals, associate professionals, clerical support workers and managers, with legal service and office administration likely to be affected most heavily.

These findings map pretty squarely onto the three categories of professional services which dominate the London economy: information and communications; finance and insurance; and professional, scientific and technical services. These sectors have grown in importance in the capital. They made up 31 per cent of jobs in London in 2022 compared to 27 per cent in 2012. They are also concentrated in the capital, accounting for almost twice the proportion of jobs as across the UK as a whole.

Saying that these “knowledge economy” sectors are those most exposed to the impact of generative AI is more or less the precise opposite of what Centre for London colleagues and I found five years ago in our report on disruption to the capital’s labour market. Based on an analysis of how “automatable” different occupations were, we argued that London’s information and communications and its professional, scientific and technical services had the lowest automation potential (finance and insurance was slightly higher).

Why the difference? Were we wrong? Are these new analyses wrong? What has changed? Without re-running our analysis, I suspect part of the difference lies in occupational mix. Many London workers undertake more specialised and knowledge intensive tasks within particular industries. Underwriting risk at Lloyd’s of London is very different from working in a claims call centre.

But I think our expectations have shifted too. Generative AI is a qualitative change. When we wrote the Centre for London report, we were generally talking about the scope for specialised algorithms to automate specific routine tasks. These new technologies go further: they can draw on huge databases to generate new content. They can respond to simple user requests, writing and refining algorithms on demand. They can draft summaries, presentations, poems and speeches. They can create visualisations. They are even being deployed in therapy. This is extending their reach much further into professional services than we envisaged.

Will this change destroy jobs? The traditional response is to say, “No! Every other technology has created jobs. This will too.” I think that is certainly right in the short term. The measure of impact used by the Eloundou study is whether generative AI could theoretically speed up tasks by more than half. A recent empirical study found that AI-enabled workers took an average of a third less time to complete certain standardised tasks and produced a better graded submission at the end. Workers also expressed more job satisfaction, spending more time coming up with ideas and editing, and less time drafting.

This sounds like a potential boost to productivity for London’s service sectors – one the capital and country urgently need. Productivity gains can, of course, be realised by cuts in wage bills, but that is only part of the story. AI may also unleash supply of and demand for new products and services. Economics blogger Noah Smith has compared its impact to that of machine tools, which displaced craft manufacture but led to ever increasing demand for goods and employment in manufacturing – at least for a century or so.

London is perfectly positioned to catch this wave of opportunity, creating new software to meet new demands and launching a new wave of hybrid services, following in the path of fintech and medtech. But the impact may go deeper still. Eloundou and colleagues argue that generative AI is already showing signs of being a “general purpose technology” like printing or steam engines, characterised by “widespread proliferation, continuous improvement, and the generation of complementary innovations”. If that is the case, AI will change our world in ways that we cannot yet comprehend.

All this is wildly speculative. At the extremes, London could be left unaffected by AI, though I fear that would be the stagnation option. Or AI may destroy humanity, making predictions moot. Between these poles, job destruction is by no means certain and if AI allows more leisure time alongside more equitably shared prosperity, that might not be a bad thing. But disruption probably is. London could be in for an exciting but choppy few years.

First published by OnLondon.

AI: reskilling for the rough beast

I’d like to say that I asked ChatGPT to write me a first draft of this blog, but a) it’s a tiresome cliché, and b) the platform was overloaded when I started writing, so I couldn’t. I’m not surprised. Even over the past couple of months, talk about and use of large language models (LLMs) such as ChatGPT and Bing seems to have been growing exponentially. LLMs will render essay-writing at universities obsolete, hugely accelerate the production of first drafts, and automate the drudge work of academic research.

I am undertaking research on the skills that we will need in the future, and it feels difficult to get a handle on how LLMs and their artificial intelligence (AI) successors will affect these, given the speed at which innovation is advancing and use cases are multiplying. But it also feels careless going on negligent not to do so. So, what might it mean to work with this rough beast, as it slouches towards our workplaces?

Robert Reich’s The Work of Nations

AI will, I think, transform what we currently call the ‘knowledge economy’. Thinking about this sent me back to Robert Reich’s The Work of Nations, and its analysis of the ‘three jobs of the future’. ‘Routine production’ jobs, he wrote, were poorly valued roles in everything from manufacturing to book-keeping, often moved overseas when he was writing, but also increasingly vulnerable to automation. Many of Reich’s second category, ‘in-person service’ jobs, are less vulnerable to moving overseas (although many are still low-valued by society): even if some shopping has gone online, there are still jobs – from brain surgeon to hairdresser, and from bartender to care assistant – that are defined by the need for proximity. Reich’s third category, which he slightly awkwardly describes as ‘symbolic analysts’, comprises everyone from consultants, software engineers and investment bankers to journalists, TV and film producers, and university professors. These are the elite tier of the global knowledge economy:

“Symbolic analysts solve, identify and broker problems by manipulating symbols. They simplify reality into abstract images that can be re-arranged, juggled, experimented with, communicated to other specialists, and then, eventually, transformed back into reality… Some of these manipulations reveal how to deploy resources or shift financial assets more efficiently, or otherwise save time and energy. Other manipulations yield new inventions – technological marvels, innovative legal arguments, new advertising ploys for convincing people that certain amusements have become life necessities.”

Reich was writing 30 years ago. Since then, the offshoring and automation of routine production has gathered pace, while the rewards accruing to symbolic analyst jobs have increased. But Reich’s description of symbolic analyst jobs underlines how the very features that protected them from routine automation (the combination of analytical skill, a reservoir of knowledge and fluency in communication) may now expose them to a generation of technology that will become increasingly adept at manipulating symbols itself, even if it cannot (yet) ‘think’ or ‘create’. From an architectural drawing to a due diligence report, to an advertising campaign, to a TV show script, to a legal argument, to a news report – there are very few symbolic analyst outputs that LLMs will not be able to prepare, at least in draft.

Revisiting Osborne and Frey

Another way of thinking about the potential impact of more advanced AI on the knowledge economy workplace is to revisit Michael Osborne and Carl Benedikt Frey’s hugely influential analysis. Writing in 2013, Osborne and Frey identified the ‘engineering bottlenecks’ that had held ‘computerisation’ back from specific tasks and were expected to do so for the following two decades. These included complex perception and manipulation activities, creative intelligence tasks (from scriptwriting to joke-making), and social intelligence tasks (such as negotiation, persuasion, and care).

The growth of LLMs chips away at the second of these, as machines draw on extensive databases to generate coherent content, though their joke-making skills are still a bit iffy. LLMs are also starting to make inroads into the third, as they are deployed as companions or therapists, even if their empathy is performed rather than felt. Engineering bottlenecks still constrain automation, but some are widening much faster than Osborne and Frey predicted. Indeed, one recent assessment suggests that the use of LLM technology will have an impact on around 80 per cent of US workers, with the impact greatest for higher-qualified and higher-paid workers.

That is not to say that AI will ‘destroy jobs’. Like other technologies, AI will probably create new jobs and remodel others. For the moment, there is craft in minding these machines; you need to know how to give instructions, ask questions and evaluate answers. In this, LLMs are like the oracles of classical antiquity, whose riddling utterances contained truth but needed careful interpretation. LLMs can produce good drafts and their accuracy is improving, but they can also ‘hallucinate’ facts, and assert them with a delusional and sometimes aggressive confidence.

This task of interpretation and intermediation is not that far removed from how many professions operate today. Architects, doctors, lawyers, accountants, scriptwriters – even academics – are not pure symbolic analysts, working in an entirely abstract world. Part of their skill, maybe most of it at the top of their professions, is interpersonal – motivating and managing staff, pitching ideas and winning business, convincing clients and colleagues. For these professionals, the current crop of LLMs are best deployed as responsive and multi-talented assistants, which do not get bored, demand pay, or insist on meaningful career development.

Automating menial tasks will disrupt professional development

What does this mean for actual flesh-and-blood assistants and their career development? In many modern professions, life for new recruits is a slog of preparing legal notes, PowerPoint decks, due diligence, and audit reports. I get the sense that some of this is already ‘make-work’, designed to acclimatise a new graduate to the codes and the culture of their profession, but also to give them a chance to see and learn from interactions – in the courtroom, at the client meeting, at the pitch.

If it becomes ever easier and cheaper to commission material directly from machines, that will create a problem not only for future generations of graduates, but also for those at the top of the professions, who will not be able to rely on a stream of graduate trainees to step into their shoes. Even as automation boosts productivity, it will disrupt professional development and may, in the words of one economist, “have stark effects on the value of cognitive labour”.

Furthermore, in the longer term (and I am thinking years not decades), inaccuracy may be less of a problem than the erosion of doubt. A lot of work has already gone into stopping newer LLMs spouting racist opinions like their predecessors did; future models will likely be much clearer about the ‘right answer’ to any question and about the truth of different propositions. Much of this will be helpful, though the lack of transparency and contestability is frustrating.

Minority opinions marginalised and moral judgement at a premium

But as regulation strengthens the guardrails around AI, there is a risk that some minority opinions will be marginalised and eventually expunged. Many of these will be conspiracy theories, junk science and fake news. But they may also be the small voices of gritty corrective to the dominant narrative – the proponents of ‘lab leak theories’ of COVID-19, the dogged campaigners against over-prescription of painkillers, the investigative journalists who stick to the story in the face of denials and threats.

This has inevitably already become a new front in the ‘culture war’, with some media getting angry that ChatGPT refuses to promote fossil fuel use, sing paeans of praise to Donald Trump or say that nuclear war is worse than racist language. So far, so funny. But the more the unified version of the truth promoted by AI squeezes out alternative understandings of facts, let alone alternative interpretations of how they should guide our behaviour, the more we will need the ability to challenge and debate that truth, the imaginative capacity to transcend guardrails.

So, what does this all mean for skills? A knowledge economy in which LLMs are increasingly widespread will require critical judgement, a basic understanding of how coding, algorithms and emerging AI technologies operate, the ability to work with clients and colleagues to refine and use results, and the diverse and creative intelligence to challenge them.

Perhaps above all, we will need sophisticated moral judgement. LLMs and their AI successors will be able to do many things, but there will be so many complex judgements to be made about whether and how they should. Who will be accountable for any errors? Is it for a machine to define truth? Should it do so by reference to a consensus, or its own judgements of correspondence to reality? At an existential level, how should we achieve the alignment of AI and human interests? How are the latter to be defined and articulated? What balance of individual and social goods should be struck? Where are the boundaries between humans and machines? Do the machines have rights and obligations?

Today we muddle along, reaching consensus on moral issues through a broad process of societal mediation, with horrible moral errors along the way. Tomorrow, we have the potential for a new age of turbocharged progress and moral clarity, a prospect that is at once scintillating and unsettling.

First published by LSE Business Review.