Former OpenAI policy research lead Miles Brundage, after seven years at the company, is advocating for external audits of leading AI models through his new organization, AVERI. He argues the industry should no longer be permitted to grade its own homework.
Miles Brundage has established the AI Verification and Evaluation Research Institute (AVERI), a non-profit dedicated to promoting independent safety audits for cutting-edge artificial intelligence models. Brundage left OpenAI in October 2024 after serving as an adviser on the company's preparations for the advent of artificial general intelligence.
"One thing I learned while working at OpenAI is that companies are largely setting their own norms for this kind of thing," Brundage told Fortune. "No one is forcing them to collaborate with third-party experts to ensure everything is safe and sound. They are essentially making their own rules."
While leading AI labs do conduct safety testing and publish technical reports, sometimes partnering with external red-teaming groups, consumers and governments currently have little choice but to take the labs' word for it.
Internal Donations Hint at Industry Unease
AVERI has raised $7.5 million to date, with a target of $13 million to support a team of 14. Backers include former Y Combinator president Geoff Ralston and AI underwriting firms. Notably, the institute has also received donations from employees of leading AI companies. "These are people who know where the bodies are buried and want to see more accountability," Brundage said.
To coincide with AVERI's launch, Brundage and more than 30 AI safety researchers and governance experts published a research paper outlining a detailed framework for independent auditing. The paper proposes an "AI Assurance Level" system: Level 1 roughly corresponds to the status quo, with limited third-party testing and restricted model access, while Level 4 offers assurances robust enough to be "treaty-grade," capable of underpinning international agreements between nations.
Insurers and Investors Could Force the Issue
Even in the absence of government mandates, Brundage believes several market mechanisms could push AI companies toward independent audits. Large enterprises deploying AI models for critical business processes may require audits as a condition of purchase to shield themselves from hidden risks.
Brundage suggests insurers could play a particularly significant role. Business continuity insurers could commission independent assessments before underwriting policies for companies heavily reliant on AI. Insurers partnering directly with AI firms like OpenAI, Anthropic, or Google might also demand audits. "Insurance is evolving rapidly," Brundage noted.