OpenAI and Microsoft have joined the AI Security Institute’s (AISI) Alignment Project.
The project, first announced last summer, is an international effort to work towards advanced AI systems that are safe, secure and under control.
It funds research to ensure advanced AI systems reliably act as intended, without unintended or harmful behaviours.
£27 million will now be made available through the project's fund, with £5.6 million coming from OpenAI, and additional support from Microsoft and others.
The first Alignment Project grants have also been awarded to 60 projects from eight countries. A second round is due to open this summer.
As well as OpenAI and Microsoft, the Alignment Project is supported by an international coalition including: the Canadian Institute for Advanced Research (CIFAR); the Australian Department of Industry, Science and Resources’ AI Safety Institute; Schmidt Sciences; Amazon Web Services (AWS); Anthropic; the AI Safety Tactical Opportunities Fund; Halcyon Futures; the Safe AI Fund; Sympatico Ventures; Renaissance Philanthropy; UK Research and Innovation (UKRI); and the Advanced Research and Invention Agency (ARIA).
UK Deputy Prime Minister, David Lammy, said: "AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset.
"We’ve built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort."
UK AI Minister, Kanishka Narayan, said: "We can only unlock the full power of AI if people trust it – that’s the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on.
"With fresh backing from OpenAI and Microsoft, we’re supporting work that’s crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone."