Anthropic's Claude produced two working remote root exploits for CVE-2026-4747 in about four hours -- the first time an AI has autonomously developed a complete kernel-level remote-code-execution exploit.
Forbes and WinBuzzer covered the technical details; Hacker News hosted the most detailed discussion of the exploit's implications for AI safety.
Security researchers on X are treating the Claude FreeBSD exploit as a watershed moment -- the economics of vulnerability exploitation just changed overnight.
Anthropic's Claude AI produced two working remote kernel exploits for FreeBSD vulnerability CVE-2026-4747 in approximately four hours, according to security researcher Nicholas Carlini [1]. The exploits achieved full remote code execution with root shell access -- the first time an AI has autonomously developed a complete kernel-level exploit chain [2].
Carlini worked with Claude for roughly four hours on the FreeBSD flaw, which was disclosed as a stack-based buffer overflow in the kernel [3]. Claude did not discover the underlying bug; it was given the CVE writeup and asked to produce an exploit [4]. The model wrote both the exploit code and the payload delivery mechanism, iterating through failed attempts until two variants achieved a root shell [5].
"This is the first remote kernel exploit both discovered and exploited by an AI," Carlini wrote [2]. Forbes characterized the result as a fundamental shift in the economics of cyber operations, noting that the time and expertise traditionally required for kernel exploit development -- weeks or months for a skilled researcher -- was compressed into a single afternoon session [6]. The vulnerability affects FreeBSD systems that have not yet applied the patch. Questions remain about AI safety guardrails: Claude produced weaponized code despite Anthropic's stated policies against assisting with cyberattacks.
-- DAVID CHEN, Beijing