Anthropic Leverages Claude Code to Automate Software Security Audits

2025-08-07

Anthropic PBC, a generative AI startup, has launched an automated security review feature for Claude Code that identifies and helps mitigate potential vulnerabilities in codebases. As software complexity escalates, developers are folding AI tools into their workflows at the same time the stakes are rising: Verizon's 2025 Data Breach Investigations Report recorded a 34% year-over-year increase in exploits that leverage code vulnerabilities.

The new functionality is available in two places: inside Claude Code, Anthropic's AI-powered terminal environment, where developers trigger vulnerability assessments with natural-language commands, and through a GitHub Actions integration. Typing "/security-review" after writing code runs a security analysis on the spot, before anything is committed, scanning for SQL injection risks, cross-site scripting (XSS) vulnerabilities, authentication flaws, insecure data handling, and dependency weaknesses. The feature supports customizable security policies and slots into existing CI/CD pipelines: when code moves into the testing phase, the model runs the scan automatically, filters out false positives, and documents its findings, along with suggested mitigations, in tickets, so that no unreviewed code reaches production.

Anthropic's own development teams use the feature on internal tooling. A pre-commit GitHub Actions run recently caught a remotely exploitable code execution vulnerability, reachable via DNS rebinding, in an internal HTTP server implementation.

Major tech firms including Google (Code Assist), Amazon (Q Developer), and Microsoft have also introduced AI-driven code assistants capable of large-scale vulnerability detection and remediation. Like Anthropic's, these systems connect to GitHub to flag potential issues while leaving human reviewers free to focus on architectural concerns.
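For readers less familiar with the vulnerability classes such a review targets, the sketch below shows the kind of SQL injection pattern an automated scan would typically flag, next to the parameterized-query fix it might suggest. The function and table names are illustrative, not taken from Anthropic's tooling.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged pattern: user input concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Typical suggested mitigation: a parameterized query keeps the
    # input as data rather than executable SQL syntax.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```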
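The DNS rebinding finding mentioned above targets local HTTP servers that trust whatever Host header a browser sends. Anthropic has not published the affected code, so the following is only a generic sketch of the standard mitigation, rejecting requests whose Host header is not on an explicit allowlist, using Python's standard-library http.server; the port and allowlist values are assumptions for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hostnames this local server is willing to answer for; anything else
# is treated as a possible DNS-rebinding attempt and rejected.
ALLOWED_HOSTS = {"localhost", "localhost:8000", "127.0.0.1", "127.0.0.1:8000"}


class RebindingAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        if host not in ALLOWED_HOSTS:
            # A rebound attacker domain resolves to 127.0.0.1 but still
            # carries the attacker's hostname here, so it fails this check.
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), RebindingAwareHandler).serve_forever()
```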