Expert Comment: Leading AI nations convene for day one of the UK AI Summit

Wednesday 1st Nov 2023, 4.04pm

Leading AI nations in attendance have reached a world-first agreement establishing a shared understanding of the opportunities and risks posed by frontier AI.

Oxford AI experts comment during day one of the UK AI Summit:

Professor Robert F. Trager, Director, Oxford Martin AI Governance Initiative at the University of Oxford, says:

“The declaration says ‘We resolve to work together’ to ensure safe AI, but is short on details of how countries will cooperate on these issues. The Summit appears to have achieved a declaration of principles to guide international cooperation without having agreed on a roadmap for international cooperation.

“The declaration says that ‘actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems.’ This suggests that governments are continuing down the road of voluntary regulation, which is very likely to be insufficient. It also places the declaration somewhat behind the recent US executive order, which leverages the Defense Production Act and other legal instruments to create binding requirements. This is confirmed when the declaration ‘encourage[s]’ industry leaders to be transparent.”

Professor Vincent Conitzer, Head of Technical AI Engagement, Institute for Ethics in AI, says:

“It is encouraging to see the AI Safety Summit taking place. AI is a technology that is in some ways unlike any other, and we have seen dramatic progress in it over the past decade. Unfortunately, much of this technical progress has come along a branch of AI that makes it very difficult for us to understand or carefully steer what exactly the AI is doing, or even what the next version of a system will be capable of.

“As a consequence, the variety of concerns raised by AI, across both AI safety and AI ethics, is enormous, and the one thing we can be sure of is that we do not even understand all the risks yet. Many of these challenges require not just technical understanding but also interdisciplinary expertise of a type that we have traditionally not trained people for. Some people look at the situation and lament that issue X is getting attention because they think it’s taking away resources from issue Y, and others feel that it’s the other way around. In my view, in reality, issues X and Y are often related, and the real takeaway should be that there is just a lot of very important work that needs to be done.”

Prof Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute (OII), University of Oxford, said:

“I feel quite conflicted about the Summit. On the one hand, I am very excited about the Government’s involvement in AI and that steps are being taken to assess both the benefits and the risks of AI products. On the other hand, I feel that how ‘risk’ is defined is very unfortunate. For me, near- and medium-term risks include how to deal with mass job automation, mass discrimination and disinformation. The environmental costs, as well as the disruption of laws around IP and data protection, are things that require urgent attention.

“Unfortunately, this is out of scope for this Summit and the predominant focus is on the ‘risk of losing control’ of AI, in the sense that AI develops a ‘will of its own’ and poses an ‘existential’ risk to humanity. Yet there is no scientific evidence that we are on such a path, or that such a path even exists. It distracts from the actual and already existing existential risks: that we might mass-automate people’s jobs away, that marginalised groups have to deal with even more discrimination, and that we are hugely contributing to an ever-increasing carbon footprint.

“Those issues exist now, existed yesterday, and will exist in the future until we find international common ground to tackle them. And this needs international and diverse collaboration between governments, as well as academia, civil society, industry and the general public.”

Professor Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy, and Director of the MSc Programme in the Social Science of the Internet, says:

“As the impact that AI has on the world continues to grow, governments are beginning to grapple with how to best regulate and control AI. The decisions made by leading policymakers now will have longstanding geopolitical implications on the global distribution of power in the age of AI. To ensure that AI remains aligned with democratic values and norms it is essential that fears over the perceived risks of AI do not lead to policies which inhibit innovation or drive the centralization of AI development. Instead, governments must create a regulatory regime that supports, rather than fears, the open development of cutting-edge AI systems.”

Experts at Oxford are developing fundamental AI tools, using AI to tackle global challenges, and addressing the ethical issues of new technologies.

Find out what AI means and how it’s impacting our society from world-leading experts, and discover the groundbreaking ways artificial intelligence is being applied at Oxford.