Irregular Raises $80M to Set AI Safety Standards

2025-09-18

Irregular, a startup focused on AI security research, has announced a new funding round of $80 million. The funds will be allocated to developing defense systems, testing infrastructure, and security tools aimed at auditing and enhancing the secure deployment of next-generation AI models.

Established in 2023 under the name Pattern Labs Inc., Irregular claims to be the world's first advanced AI security laboratory. The company focuses on ensuring the safety of high-level AI systems before they can be misused. Its mission is to test, strengthen, and defend next-generation AI models by collaborating with leading AI developers in adversarial and red team environments.

Irregular conducts controlled simulations of cutting-edge AI models to identify potential exploitation methods. These simulations examine threat scenarios such as antivirus evasion, autonomous attack behaviors, system infiltration, and other forms of misuse, assessing how AI might launch attacks and its resilience when countered.

In addition to conducting tests, Irregular provides defensive tools, frameworks, and scoring systems that guide how AI systems should be protected in practice.

By partnering with leading AI labs and government agencies, Irregular integrates its testing procedures into the lifecycle of major advanced AI models. This collaboration enables the company to anticipate threats before they manifest in deployed systems and offer guidance on security roadmaps, compliance, and deployment policies.

The company has already contributed to shaping industry standards. Its evaluations are cited in OpenAI's GPT-4, o3, o4 mini, and GPT-5 system cards. Additionally, the UK government and Anthropic PBC utilize Irregular’s SOLVE framework—used to audit web risks in Claude 4. Recently, Google DeepMind researchers cited Irregular's evaluation of emerging web attack capabilities in an AI-focused paper.

Irregular co-authored a white paper with Anthropic proposing a new approach to enhancing the privacy and security of AI model weights and user data using confidential computing technology. It also collaborated with RAND Corporation on a groundbreaking joint paper addressing AI model theft and misuse, helping shape European policy discussions on AI safety and setting benchmarks for the field.

"Irregular has taken on the ambitious task of ensuring that the future of AI is both secure and robust," said Dan Lahav, co-founder and CEO of Irregular. "AI capabilities are evolving at an astonishing pace; we are building tools to test state-of-the-art systems before their public release and developing mitigation strategies that will shape the responsible large-scale deployment of AI."

This funding round was co-led by Sequoia Capital and Redpoint Ventures LP, with participation from Swish Ventures and notable angel investors, including Assaf Rappaport, CEO of Wiz Inc., and Ofir Ehrlich, CEO of E.ON SE.

"The real threats to AI security have yet to emerge," said Shaun Maguire, partner at Sequoia Capital. "What sets the Irregular team apart is their forward-thinking mindset. They are working with the most advanced models being built today to lay the foundation for making AI reliable in the future."