AI Finds 27-Year-Old OpenBSD, Linux Kernel Flaws

A new artificial intelligence model from Anthropic has exposed a 27-year-old vulnerability in the OpenBSD operating system, capable of remotely crashing affected machines simply by initiating a connection. The system, called Mythos, was not designed for hacking, but its reasoning capabilities have unearthed thousands of previously unknown flaws across major operating systems and browsers, bypassing both human review and millions of automated tests. Mythos autonomously chained together multiple Linux kernel vulnerabilities, escalating access from ordinary user to complete control of a machine. “This is a step change,” said Dave McGinnis, Vice President of Global Managed Security Services at IBM. “The people who wrote that code didn’t know those things were there.”

Mythos AI Discovers Decades-Old Zero-Day Vulnerabilities

This discovery is not simply about finding bugs; it’s about the speed and method by which they were revealed, prompting a reevaluation of current security protocols. A striking finding was a 27-year-old vulnerability within OpenBSD, a system renowned for its robust security features, which would allow remote crashing of affected machines through a simple connection. This poses a significant challenge for open-source software, which often relies on smaller teams with limited security resources. Rob Thomas, Senior Vice President of Software and Chief Commercial Officer at IBM, argues that AI’s emergence as critical infrastructure strengthens the case for open development, as security improves through scrutiny rather than concealment. Anthropic has committed USD 2.5 million to Alpha-Omega and the Open Source Security Foundation, alongside USD 1.5 million to the Apache Software Foundation, to bolster open-source security efforts, recognizing that “If the attackers aren’t humans anymore, the defenders can’t be humans anymore either,” according to McGinnis.

Vulnerability Chaining & Binary Code Analysis Capabilities

Current automated vulnerability discovery relies heavily on static and dynamic analysis tools, alongside dedicated human security researchers, yet these methods are increasingly challenged by the sophistication of modern threats. Existing automated systems typically excel at identifying individual flaws but struggle to correlate them into complex attack chains, a limitation Mythos appears to overcome. Anthropic’s new AI model demonstrates a capacity for “vulnerability chaining,” connecting seemingly minor software flaws into a cohesive attack capable of achieving significant compromise, such as escalating from standard user access to complete control of a machine. This ability distinguishes it from previous systems and highlights a shift in the threat model. The finding underscores that even mature, well-maintained codebases are not immune to long-hidden flaws when subjected to novel analysis techniques.
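To make the idea of vulnerability chaining concrete, consider this minimal sketch (entirely hypothetical, and not a description of Mythos’s actual method): each known flaw is modeled as an edge that moves an attacker from one privilege state to another, and chaining becomes a search for a path from an unprivileged state to full control. The CVE labels and states here are invented for illustration.

```python
from collections import deque

# Hypothetical flaw graph: each entry maps a privilege state to the
# states reachable by exploiting a single (invented) flaw.
FLAWS = {
    "unauthenticated": [("user", "CVE-A: auth bypass in login service")],
    "user":            [("root", "CVE-B: setuid binary overflow")],
    "root":            [("kernel", "CVE-C: kernel module race condition")],
}

def find_chain(start, goal):
    """Breadth-first search for the shortest chain of flaws from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for nxt, flaw in FLAWS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, chain + [flaw]))
    return None  # no chain exists

chain = find_chain("unauthenticated", "kernel")
print(chain)
```

In this toy model, three individually modest flaws compose into a complete compromise: the search returns the three-step chain from unauthenticated access to kernel control. What makes an AI system notable in this framing is not the graph search itself but discovering the edges, the flaws, in the first place.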

Beyond identifying thousands of previously unknown vulnerabilities across major operating systems and browsers, Mythos possesses the ability to analyze compiled binary code without requiring access to the original source code. This binary code analysis capability is especially impactful for legacy systems where source code is lost or unavailable, effectively expanding the attack surface to previously unreachable targets.
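As a rough intuition for why compiled binaries remain analyzable without source, here is a deliberately crude sketch, far simpler than anything a real analysis system would do: function names from a program’s symbol table often survive in the raw bytes of a binary, so even a naive scan can flag historically unsafe C library calls. The byte blob below is synthetic, standing in for a real binary image.

```python
# Names of historically unsafe C functions whose presence in a binary's
# symbol strings is a crude warning sign.
UNSAFE_FUNCS = [b"gets", b"strcpy", b"sprintf"]

def flag_unsafe_symbols(binary: bytes):
    """Return the unsafe function names whose NUL-terminated symbol
    strings appear anywhere in the binary image."""
    return [f.decode() for f in UNSAFE_FUNCS if f + b"\x00" in binary]

# Synthetic stand-in for a compiled binary's symbol-table region.
fake_binary = b"\x7fELF...printf\x00gets\x00memcpy\x00strcpy\x00"
print(flag_unsafe_symbols(fake_binary))  # flags gets and strcpy
```

Real binary analysis involves disassembly, control-flow recovery, and data-flow reasoning rather than string matching; the point of the sketch is only that compiled artifacts leak structure, which is what makes source-free analysis of legacy systems feasible at all.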

You’re talking [about] stuff sitting around: a Windows 3.11 machine in the corner, some ancient piece that everybody doesn’t want to look at because it’s still working.

Open Source Security & Anthropic’s USD 4 Million Investment

Anthropic’s Mythos AI is prompting a significant reassessment of open-source security protocols, underscored by a USD 4 million investment aimed at bolstering vulnerable foundations. While the model’s primary function isn’t offensive, its capacity to uncover previously unknown flaws is reshaping the threat landscape, particularly for projects that rely on community contributions and limited resources. The AI’s ability to identify these weaknesses and chain them together, escalating access from standard user to complete system control, represents a departure from conventional automated security testing. The funding is intended to help maintainers respond to this evolving threat environment, recognizing that the sheer scale of open-source infrastructure presents an expansive attack surface.

It’s not like they created the bugs. The people who wrote that code didn’t know those things were there.

The Neuron

With a keen intuition for emerging technologies, The Neuron brings over 5 years of deep expertise to the AI conversation. Coming from roots in software engineering, they've witnessed firsthand the transformation from traditional computing paradigms to today's ML-powered landscape. Their hands-on experience implementing neural networks and deep learning systems for Fortune 500 companies has provided unique insights that few tech writers possess. From developing recommendation engines that drive billions in revenue to optimizing computer vision systems for manufacturing giants, The Neuron doesn't just write about machine learning—they've shaped its real-world applications across industries. Having built real systems used across the globe by millions of users, that deep technological base informs their writing on current and future technologies alike, whether AI or quantum computing.
