
Atlas by BCW Navigate - Mapping the Developments That Matter on AI

July 6, 2023

Welcome to BCW Navigate, our quarterly newsletter on all things AI, brought to you by BCW London’s Corporate and Public Affairs team.

Join Navigate as we help you chart a path through the developments that have marked the UK's AI conversation over the last few months, and share exclusive insights from the experts in our network.

Since our last newsletter, many have noted the Government’s rhetorical volte-face on AI. While the narrative prior to March was almost exclusively innovation-focused, recent months have seen safety establish itself as the key regulatory priority.

While public attitudes remain rooted in suspicion, adoption continues at pace across all major industries, as companies of all shapes and sizes set out to explore AI’s much-touted commercial benefits.

This month, we were joined by Shadow Health Secretary Wes Streeting MP for a roundtable discussion on the future of healthcare in the UK. Research into public attitudes toward AI in healthcare, conducted ahead of the event, generated valuable discussion points for attendees and revealed the deep-rooted mistrust pervading popular perceptions of the technology.

It’s this scepticism that techUK’s Carmine Greusard-Deffeuille identifies as having driven the Government’s recent pivot in rhetoric, discussed in more depth in her Spotlight on the age of ‘guardrails’ below.

Meanwhile, on the international stage, a worldwide scramble for regulatory dominance has taken flight. In May, Joe Biden met with the Google and Microsoft CEOs to progress regulation in the US, and the G7 established the Hiroshima framework for global AI co-operation. In June, both Keir Starmer and Rishi Sunak used London Tech Week to advocate for the UK’s position in the international AI hierarchy.

Should you wish to discuss any of the points raised in this newsletter in more depth, don't hesitate to contact our BCW Navigate team at [email protected]. We look forward to hearing from you.

News Summary


1. Geoffrey Hinton quits Google, warns of AI dangers

In early May, cognitive psychologist and AI expert Geoffrey Hinton announced he had quit his role at Google amid fears over the technology’s potential. His concerns about the harm that AI-generated content could pose to democracy, and the implications of artificial intelligence eventually surpassing human intelligence, echo those expounded in the Future of Life Institute’s open letter signed by Elon Musk in March.

2. Leaked Google document outlines threat posed by open-source research

Further insight into Google’s stance on AI emerged when a leaked internal memo revealed concerns that the company was losing its competitive edge in development to the open-source community. The memo indicates that the balance of power in the industry is more precarious than onlookers may think: more democratised, yes, but also more open to interference by bad actors.

3. Biden meets with Microsoft and Google CEOs to discuss AI regulation

That same week, a major meeting took place at the White House as Joe Biden convened the Microsoft and Google CEOs to discuss the US’s approach to AI regulation. The need for companies to be more transparent with policymakers about their AI systems, as well as the importance of evaluating the safety of AI products and protecting them from malicious attacks, emerged as immediate priorities.

4. CMA launches initial review of AI market domination

Another key moment for the UK’s innovation ecosystem came in early May, when the CMA launched its first review into AI models. The review accepted evidence until 2 June and will focus on how the UK’s AI ecosystem can be kept competitive in order to benefit consumers.

5. G7 leaders agree on ‘Hiroshima Process’ to foster global co-operation on AI

Three weeks later, G7 leaders agreed to create an intergovernmental forum called the ‘Hiroshima AI process’ to debate issues around fast-growing AI tools and to formalise international collaboration on AI standards and governance. The move comes against a backdrop of diverging approaches to AI between China and the West, and aims to foster better cross-border data flows between allied nations.

6. Lawyer who used AI for case research gives statement in court

In the same week as G7 leaders were meeting to discuss AI’s impact on justice and democratic institutions, an experienced US lawyer faced a sanctions hearing for using ChatGPT to draft a brief that cited non-existent court decisions. Miller v. United Airlines, Petersen v. Iran Air and Varghese v. China Southern Airlines were among the fabricated cases cited in Steven Schwartz’s filing, for which he apologised profusely in court.


In this edition of Atlas, Carmine Greusard-Deffeuille, AI & Digital Ethics Policy Manager at techUK, discusses the shifts in the Government narrative on AI witnessed over the last three months.

The age of ‘guardrails’ and the shifting government narrative on AI

The release of ChatGPT and generative AI’s breakthrough into the public psyche have sent AI policy to the forefront of the political and business agenda, ahead of a key General Election.

Meanwhile, the long-awaited AI White Paper was published by the newly created Department for Science, Innovation and Technology (DSIT) in March 2023. It sets out a non-statutory, context-specific approach, relying on sectoral regulators to implement guidance. While the pro-innovation intent has been broadly welcomed, the lack of detail on practical implementation is a source of concern for some. On publication, the Labour Party dismissed it as ‘too little, too late’, while the Conservatives initially stood by their ‘light-touch’ approach.

The evolution of the term ‘light-touch’ is significant, as it showcases the Government’s recent shift of political narrative on AI policy. This shift has been triggered by several factors: the publication of open letters by the ‘godfathers’ of AI and other organisations warning about the risks posed by AI technologies; other jurisdictions moving at pace on the policy front, including the more prescriptive EU AI Act; and Sam Altman, OpenAI’s CEO, calling for regulation before the US Congress.

Powerful political communication stems from relevance. Had Prime Minister Rishi Sunak continued to describe the UK’s approach to AI as ‘light-touch’, the UK would have been isolated from the current narrative, both at home and globally. Sunak therefore announced, en route to the G7 summit where the ‘Hiroshima AI process’ was unveiled, that the UK would lead in developing ‘guardrails’ for AI. AI was also high on the agenda during Sunak’s visit to Joe Biden in June, where he announced an AI summit to be held in the UK later in the year.

At home, the Government is under increasing pressure from Labour to be stronger on AI governance. While Labour recognises the benefits of the technology, and supports a sectoral approach to AI governance, it has hardened its messaging in response to media attention and rising public anxiety about AI risks.

Seizing on this momentum and on London Tech Week (LTW), the Labour frontbench has called the White Paper’s approach ‘already outdated’ and criticised its lack of detail on how to address the potential harms of AI, especially those posed by Large Language Models. During LTW, Labour leader Keir Starmer published an op-ed titled ‘We must not let AI cause a widespread repeat of 1980s job losses’, arguing that AI ‘should work for working people’. The Labour frontbench nonetheless recognises that AI can be both a tool and a threat, attempting to strike a balance between being pro-business, highlighting the benefits of AI, and addressing its potential harms. Ultimately, Labour’s narrative on AI is designed to fit under Starmer’s five missions for a better Britain.

Darren Jones, widely tipped to become the next DSIT Secretary of State if Labour wins the General Election, has been especially vocal on the issue. In recent weeks he has called for an AI summit (which Rishi Sunak eventually announced), urged the Government to clarify its stance on AI and misinformation during elections, and demanded more action on AI regulation.

Meanwhile, the Government continues its efforts to establish the legitimacy of its approach to AI. Rishi Sunak met with the CEOs of leading AI companies to discuss the need for ‘guardrails’, and the Government announced an AI Foundation Model Taskforce to develop the UK’s sovereign capability in the space, headed by Ian Hogarth, an AI investor who wrote a widely shared opinion piece in April warning about the dangers of the accelerating race towards ever more powerful AI: ‘We must slow down the race to God-like AI’.

However, a change in Government communication does not necessarily mean a change of approach in practice. The consultation on the AI White Paper closed on 21 June, and with so little time left before Parliament heads off for summer recess, any significant change to the current approach to AI governance would be a surprising political move from the Conservatives.

This change in Government narrative, and Labour’s increased boldness on the topic, exemplify the decisive role that the conversation about AI will play for voters at the next general election. To move away from a state of anxiety, both the risks and the benefits of the technology need to be outlined clearly and calmly. For corporates and politicians alike, choosing the right narrative on AI has to be the number one communications priority, at least until there is greater certainty about what this technology means for our future.