AI Safety Summit
As the rollout of AI technology gathers pace, leaders of nations and corporations convened in the UK in early November for the inaugural AI Safety Summit. Brad Mallard, Version 1's Chief Technology Officer, gives us his take on an event focused on a technology where the firm is leading the way.
Few emerging technologies hold as much opportunity for public and private sector organisations as artificial intelligence (AI). The ability to automate processes, speed decision-making and accelerate learning are just some of the benefits of a technology that is transforming the way we work.
However, alongside the unique opportunities AI brings, specific risks are coming to the fore as the technology is rolled out and adopted more widely. It stands to reason that industry is taking these threats seriously, hence this month's AI Safety Summit.
It was held at Bletchley Park, described as the birthplace of computer science and renowned for its pivotal role in World War Two codebreaking. It was an apt setting for such a momentous discussion, and one which, combined with the high calibre of attendees, underlined the growing concern around AI safety.
UK Prime Minister Rishi Sunak, US Vice-President Kamala Harris, European Commission President Ursula von der Leyen, China's Vice Minister of Science and Technology Wu Zhaohui and Italian Prime Minister Giorgia Meloni were just some of the political leaders in attendance. From the world of tech, Elon Musk joined Rishi Sunak for a fireside chat, while other industry heavyweights included Demis Hassabis of DeepMind, Google's AI division, and Nick Clegg of Meta, to name but a few.
This group formed a community wary of the growing risks, but one ready to work closely together to shape the future of AI in a way that benefits all of humanity.
It was a fascinating event, drilling down into key concerns and offering a platform for leaders to work together on ways to address those concerns and minimise potential risk. Here are some of my takeaways:
Risks and acknowledgement: The development of frontier AI brings several risks, including the potential for misuse, unpredictable advances in AI capability, loss of control over AI systems, and challenges in integrating AI into society. Acknowledging these risks is crucial if we are to take appropriate action and adapt.
Building trust: Trust in AI rests on creating robust and reliable systems. This can be achieved through rigorous testing and auditing to ensure systems behave as expected and do not pose undue risks.
Loss of control: As AI systems become more advanced, there is a risk that humans could lose control. This is a significant concern that needs to be addressed through careful design and regulation.
Unexpected advances and failures: AI technology can advance in unpredictable ways, leading to unexpected successes but also failures. It’s important to have mechanisms in place to manage these situations.
International collaboration: Addressing the challenges posed by AI is a global effort. International collaboration is key to ensuring the safe and beneficial use of AI. This includes sharing best practices, coordinating research efforts, and establishing international standards.
AI safety institutes: The establishment of AI safety institutes in the UK and the US signifies a commitment to advancing the field of AI safety. They will play a crucial role in researching, developing, and promoting safe AI practices.
Societal risks: AI also poses risks to human rights, fairness and healthcare, and could widen economic inequality. These are serious concerns which need to be addressed as part of any comprehensive AI safety strategy.
Continuous evaluation: Continuous evaluation of AI technology is crucial in ensuring its safety and effectiveness. This includes not only technical evaluations but also assessments of AI's impact on democracy, bias and climate change.
At Version 1, we have been at the forefront of AI for several years, focusing our R&D and investments on a number of key areas within it.
Take, for example, Generative AI.
Prior to the emergence of ChatGPT, we had gained several years' experience with this kind of technology, working with customers on solutions built on what was then called conversational AI and on early transformer-based language models such as LaMDA and GPT-2.
By the time ChatGPT landed, we had already built credibility with customers, developed our own accelerators and massively accelerated adoption, giving us a leading position in Generative AI in the UK and Ireland.
Our partners have recognised that head start, and we are leading innovation with the technology, already helping dozens of customers and actively testing, trialling and extracting value from it. Some of those use cases promise multimillion savings from projects that are relatively short, sharp and quick to implement.
So, we are well aware of the transformative nature of AI, but we are equally aware of the accompanying risks as it develops. Because we are ahead of the curve with the technology, we are also well advanced in dealing with those risks and will continue to evolve and drive our understanding in the years ahead.
The Bletchley Park summit has helped crystallise the collective need for safeguards on AI, and has shown us at Version 1 that, by taking a collaborative, global approach, the world will be able to enjoy the advantages of AI while mitigating the risks.
Together, we can make sure AI has a bright, yet safe, future.