Expert reaction to the UK AI Safety Summit

November 1, 2023

Today the UK government announced a "world first agreement" on how to manage the riskiest forms of AI.

It focuses on so-called "frontier AI" - what ministers consider highly advanced forms of the technology, with as-yet unknown capabilities.

The agreement, whose signatories include the US, China and the EU, was announced at the UK's AI Safety Summit.



Dr Alina Patelli, Senior Lecturer in Computer Science, Aston University, comments:



“A summit on AI safety is long overdue. As is the case with all groundbreaking technologies, AI’s transformative potential for public good is only matched by its risks, which are unlikely to be successfully avoided if AI tech design and deployment are left unregulated and therefore open to misuse, whether intentional or accidental. The scope of the summit is appropriate, reflective of the Government’s cautious approach to managing interactions with AI safety experts from multiple nations and disciplines: the summit focus is kept narrow, to five objectives only, and the number of participants is wisely limited to 100, to keep the conversation productive.”


What is likely to come out of this summit?


“The summit’s main output will most likely be a bare-bones regulatory document comprising (1) a shared understanding of AI (i.e., a generally accepted definition of the term reflective of all summit participants’ views, not just those of tech experts), (2) a list of major risks associated with AI misuse, in terms of both the potential damage and the likelihood of that damage becoming a reality, and (3) a policy draft outlining the core elements that a yet-to-be-developed governance framework should include.”


What could and should AI safety look like?


“Although it would be premature to venture a definition of AI safety ahead of the summit, one thing that is certain is that a comprehensive, and therefore effective, AI regulatory framework would encompass more than just laws. Equally important pieces include non-legally binding codes of conduct; tech design and development processes bound by moral and ethical values, both in the commercial ecosystem and among individual entrepreneurs; and revised open-access licenses under which AI should be used in the public domain. The best way to integrate all of these into a cohesive, overarching governance plan is perhaps a topic to explore in one of the post-summit events.”



What might a practical route forward towards safe AI look like?


“The practical way to regulate AI systematically is incremental. Initially, the development and application of AI tools deemed high-risk will most likely be restricted to controlled environments, where the potential benefits justify the risks and where sound procedures can be quickly and effectively enforced to mitigate those risks. As regulations become more clearly prescribed, AI’s (safe and legal) application space will gradually expand, making its benefits available to larger groups of people without any of the downsides.”



To interview Dr Alina Patelli or request further details contact Nicola Jones, Press and Communications Manager, on (+44) 7825 342091 or email: n.jones6@aston.ac.uk



Spotlight by Aston University
