Claude AI Finds 22 Firefox Security Flaws in Two Weeks, Mozilla Releases Fixes

Anthropic says its Claude Opus 4.6 artificial intelligence model discovered 22 previously unknown security vulnerabilities in the Firefox web browser during a short research partnership with Mozilla. Most of the issues have already been patched in Firefox version 148.

According to Anthropic, the AI system identified the flaws over a two-week testing period in January 2026 while analyzing parts of Firefox's source code. Mozilla classified 14 of the vulnerabilities as high severity, seven as moderate, and one as low severity.

The findings highlight how artificial intelligence tools are beginning to assist cybersecurity teams in identifying complex software weaknesses faster than traditional methods.

Claude Opus 4.6 Found 22 Firefox Vulnerabilities in Just Two Weeks

[Firefox security vulnerabilities reported from all sources, by month (Source: Anthropic)]

Anthropic said its model scanned nearly 6,000 C++ files within the Firefox codebase and generated 112 unique vulnerability reports during the research effort. Among the discoveries was a use-after-free memory bug in the browser's JavaScript engine, which the model detected within about 20 minutes of analysis. Human researchers later verified the issue in a controlled testing environment.

Mozilla addressed most of the discovered flaws in Firefox 148, released earlier this year. Remaining issues are expected to be patched in future browser updates.

Anthropic said the number of high-severity bugs uncovered during the test represented nearly one-fifth of all high-severity vulnerabilities fixed in Firefox throughout 2025.

Researchers also evaluated whether the AI system could turn vulnerabilities into working exploits. After running hundreds of tests and spending about $4,000 in API usage, Claude produced working exploits in only two cases. Those attacks functioned only inside a restricted testing setup where security protections such as sandboxing had been disabled.

The experiment suggests that AI models currently perform better at detecting vulnerabilities than exploiting them. Even so, Anthropic said the ability of an AI system to automatically produce crude exploits raises long term security concerns.

Mozilla said the collaboration also uncovered dozens of additional bugs, including logic errors and assertion failures that were not detected through existing automated testing tools. The browser maker described the results as evidence that large scale AI assisted analysis could become an important addition to software security practices.

The research comes as technology companies increasingly explore AI tools to strengthen software security. Anthropic recently introduced a research preview of Claude Code Security, a system designed to help developers identify and patch vulnerabilities using automated analysis.

Security researchers say the results show that AI systems could significantly speed up vulnerability discovery. At the same time, experts caution that similar capabilities could eventually be used by attackers if such tools become widely available.
