AI Safety Institute releases new AI safety evaluations platform
The AI Safety Institute has open sourced a new testing platform to strengthen AI safety evaluations.
- From:
- Department for Science, Innovation and Technology, AI Safety Institute and The Rt Hon Michelle Donelan
- Published
- 10 May 2024

- New UK-built AI safety testing platform to strengthen and accelerate global safety evaluations.
- Inspect will make it easier for a wide range of groups to develop AI evaluations, boosting collaboration with researchers and developers.
- AI Safety Institute, Incubator for AI (i.AI) and Number 10 to bring together leading AI talent to rapidly test and develop new open-source AI safety tools.
Global AI safety evaluations are set to be enhanced as the UK AI Safety Institute’s evaluations platform is made available to the global AI community today (Friday 10 May), paving the way for safe innovation of AI models.
After establishing the world’s first state-backed AI Safety Institute, the UK is continuing the drive towards greater global collaboration on AI safety evaluations with the release of the AI Safety Institute’s homegrown Inspect evaluations platform. By making Inspect available to the global community, the Institute is helping accelerate the work on AI safety evaluations being carried out across the globe, leading to better safety testing and the development of more secure models. This will allow for a consistent approach to AI safety evaluations around the world.
Inspect is a software library which enables testers – from start-ups, academia and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on their results. Inspect can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open source licence, Inspect is now freely available for the AI community to use.
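To illustrate how such an evaluation is expressed, the sketch below uses Inspect’s published Python package (inspect_ai). The single-sample dataset and the task name are purely illustrative, and exact names may vary between library versions.

```python
# A minimal sketch of an Inspect evaluation using the inspect_ai package.
# The task name and one-item dataset are hypothetical; exact parameter
# names may differ between versions of the library.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def arithmetic_check():
    return Task(
        # A toy dataset: each Sample pairs a model input with a target answer.
        dataset=[Sample(input="What is 2 + 2?", target="4")],
        # The solver chain: here, simply generate a completion from the model.
        solver=generate(),
        # The scorer compares the model's output against the target.
        scorer=match(),
    )
```

In this pattern, a task bundles a dataset, a solver that drives the model, and a scorer that grades the output; running the task against a chosen model (for example via the `inspect eval` command line tool) then produces the per-model scores described above.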
The platform is available from today - the first time that an AI safety testing platform spearheaded by a state-backed body has been released for wider use.
Sparked by some of the UK’s leading AI minds, its release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.
Secretary of State for Science, Innovation and Technology, Michelle Donelan said:
As part of the constant drumbeat of UK leadership on AI safety, I have cleared the AI Safety Institute’s testing platform - called Inspect - to be open sourced. This puts UK ingenuity at the heart of the global effort to make AI safe, and cements our position as the world leader in this space.
The reason I am so passionate about this, and why I have open sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI. From our NHS to our transport network, safe AI will improve lives tangibly - which is what I came into politics for in the first place.
AI Safety Institute Chair Ian Hogarth said:
As Chair of theAI Safety Institute, I am proud that we are open sourcing our Inspect platform.
Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia.
We have been inspired by some of the leading open source AI developers - most notably projects like GPT-NeoX, OLMo or Pythia, which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back.
We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.
Alongside the launch of Inspect, the AI Safety Institute, Incubator for AI (i.AI) and Number 10 will bring together leading AI talent from a range of areas to rapidly test and develop new open-source AI safety tools. Open source tools are easier for developers to integrate into their models, giving them a better understanding of how they work and how they can be made as safe as possible. Further details will be announced in due course.