Can the UK lead on AI without undermining creators?

CreateFuture CTO Jeff Watkins explores whether the UK can truly lead in AI without sidelining the very creators who power its cultural and economic identity. As the government rethinks its stance on AI and copyright, is there a path forward that balances innovation with integrity?
The often emotive topic of copyright and AI is set to take another turn this week. According to yesterday's report in the Guardian, ministers are rethinking earlier proposals to let AI developers use copyrighted material without permission unless rights holders opt out, with technology secretary Peter Kyle reportedly no longer backing it as the default.
The UK Government aims to position the country as a global leader in AI, having released the UK AI Opportunities Action Plan (AIOP) earlier this year. However, its shift in position ahead of this week’s key parliamentary vote comes in response to mounting pressure from high-profile figures, including Paul McCartney and Elton John – as well as publishers and campaigners. Among them is Mumsnet CEO Justine Roberts, who has notably questioned why AI firms should be allowed to access creative content without compensating its creators.
This updated position reflects a broader shift away from the earlier proposals, under which AI firms could freely use copyrighted materials unless creators opted out. That idea was backed by the Tony Blair Institute (TBI) in its recent report, ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI’, sparking significant controversy and frustration within the creative industries. The TBI’s reasoning was that licensing requirements could drive AI development to countries with looser copyright laws, threatening the UK’s ambitions to become a global AI leader.
Liberalising copyright
The TBI report does not pull any punches: one of the leading quotes in its Foreword is "AI presents new ways of being original", a bold claim that not everyone agrees with. At over 15,000 words, it’s a lot to take in, but the entire direction of the report can be boiled down to finding a way of “liberalising copyright” to make it far easier for AI models to be trained on art, books, music, articles, code, and pretty much anything else.
In other words, it is an implicit green light to copy the outputs of all human creativity without explicit consent or, crucially, compensation and attribution. The report argues that the opposite stance – requiring licensing agreements for AI training materials – would push model development to territories with less strict copyright laws, “undermining not only the AIOP but also the government’s broader growth agenda”. This raises the prospect that creators’ original works could be rendered entirely valueless once they have been subsumed and can be iterated upon ad nauseam, with no chance of redress.
Ethical dilemmas and power imbalances
There are some difficult ethical discussions to be had, especially around the power imbalances between individuals, “big tech” organisations, the government, and other actors. Attribution will be difficult for heavily remixed works, as the original source becomes less distinct in each new output. The big question concerns the ethics of implicit consent in a world of machine learning, and what effective and meaningful consent looks like.
However, the potential upside of liberalising copyright is that, as "piggy in the middle" between the US and the EU, the UK could establish global leadership on this matter, setting international standards in both technical and creative domains. Any loosening of IP rules would likely supercharge the UK’s AI development, unlocking new productivity and public value. Opening up copyrighted material could also provide significant advantages in the creation of new tools and methods for creators, and maybe even encourage novel kinds of artistic expression.
But it will be difficult to implement in a way that doesn’t feel unbalanced or out of control. Effective opt-out mechanisms would need to be put in place, which would mean addressing legal, technical and some gnarly geopolitical issues. These changes would need to be paired with robust transparency and compensation measures for creators and users of the AI technology.
A proposed oversight body
With that in mind, the TBI report recommends establishing a new “Centre for AI and Creative Industries”, or CACI. This new body would act in a policy advisory capacity, engage with stakeholders, oversee the design of any new controls, create and promote best practices, and ultimately govern those controls. I would expect the CACI to engage extensively with ethics experts to ensure the highest ethical standards are in place, given the potential impact of these changes on how copyright law in the UK works.
In conclusion, the report advocates for a radical rethink on how the big three c’s of creative work – consent, control, and compensation – are implemented in the UK. With ministers now appearing to distance themselves from the opt-out model, it remains to be seen whether this is the start of a real change in direction or simply a tactical pause. Either way, the UK will have to work closely with the creative sector – not just the tech industry – to arrive at a framework that protects creators while also positioning the country as a serious contender in the global AI race.