2023 has been the year in which governments, regulators and international organisations have all sought to get to grips with the challenges of regulating AI. Notable milestones include the publication in April of the UK government's policy paper outlining its proposed approach to AI regulation; the EU's AI Act, which may shortly become the world's first piece of AI-specific regulation; and the White House Blueprint for an AI Bill of Rights in the US.
In addition to the challenge posed by the fast-developing nature and potential use (and misuse) of AI, a common challenge which all regulators have faced is the international nature of AI. Put simply, AI does not respect international borders, and so any nation seeking to regulate its use has to accept that this cannot be done in isolation or by reference to borders alone.
Against that backdrop, many commentators have suggested the need for some form of international regulation of AI, with Google's Chief Executive Sundar Pichai calling in a CBS interview in April for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms.
However, any such international treaty would be years, if not decades, in the making and would require competing global political tensions and views on the use of AI to be resolved. In the meantime, the development and use of AI will continue, despite calls from figures such as Elon Musk in April for a voluntary pause in the further development of advanced AI to allow time to consider how best to regulate it globally.
For this reason, the first major step towards any form of international or global regulation of AI has not been a draft international treaty but the news today that the G7 nations (a group which also includes the EU as a non-enumerated member) have approved guiding principles and a voluntary code of conduct for advanced AI systems. The code encourages developers to identify and mitigate risk across the full AI lifecycle; to publish and update transparency reports on the capabilities and limitations of their AI in order to increase accountability; to invest in robust security controls; to support the adoption of international standards; and to protect personal data and intellectual property rights.
It is against this backdrop that the UK government will host its AI Safety Summit at Bletchley Park on 1 and 2 November (AI Safety Summit 2023 – GOV.UK (www.gov.uk)). The summit aims to bring together governments, AI companies, civil society groups and experts to consider the risks of "frontier AI" (defined by the government as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models) and to discuss how those risks can be mitigated through internationally coordinated action.
The government has outlined five objectives for this summit:
- a shared understanding of the risks posed by frontier AI and the need for action;
- a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
- appropriate measures which individual organisations should take to increase frontier AI safety;
- areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
- a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.
Whilst misinformation, bias, discrimination and the potential for mass automation have also been acknowledged by the government as AI risks, these are not on the agenda for this summit. The government has said these risks are already being addressed nationally and internationally, and it is keen to avoid duplicative efforts.
It will be interesting to see how this summit builds on today's announcement by the G7 and, in particular, whether the leaders of non-G7 nations will endorse this approach and code of conduct.