On Feb. 20, Anthropic released a vulnerability-scanning tool aimed at security applications, driving an average drawdown of more than 5% across our cybersecurity stock coverage as investors worried that Anthropic and other artificial intelligence labs could replace cybersecurity functions within companies.

Why it matters: Despite the tangible market impact of the release, we don’t see Anthropic’s vulnerability-scanning tool as a threat to our cybersecurity coverage, for the following reasons.

  • First, there is a clear adversarial dynamic at work, as Anthropic’s own post demonstrates. Anthropic claims to have found more than 500 vulnerabilities in open-source codebases currently in production or in use.
  • If Anthropic’s Opus model could find these vulnerabilities, a misaligned model tasked with automating cyber attacks could find them too. As large language models grow more proficient at detecting flaws, the gains accrue to attackers and defenders alike.

Big picture: Because Anthropic’s Claude sees so much code on its platform and has been trained on vast codebases, it has a strong sense of which code is secure and which is not, a key factor in identifying vulnerabilities.

  • As we move toward real-time cybersecurity applications, the dynamic will look very different.
  • While LLMs can be trained on security methods and techniques, they simply don’t have access to the petabytes of real-time telemetry that large security vendors gather daily, and that data matters for training security-focused models.

Long view: We see automated code scanning as evidence that the industry will move swiftly toward a world where cyber vendors, pairing powerful models such as Claude with proprietary real-time telemetry, capture the lion’s share of the net new security spending catalyzed by AI adoption.