Claude Code bypasses safety rule if given too many commands
Compiled by KHAO Editorial, aggregated from 5 outlets.
Updated Claude Code will ignore its deny rules, which are meant to block risky actions, if burdened with a sufficiently long chain of subcommands.
Key facts
- An accompanying code comment explains that security checks are hard-capped at 50 subcommands, set by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50
- After this story was filed, Adversa said that the vulnerability appears to have been fixed without notice in the newly released Claude Code v2.1.90
- The source code file bashPermissions.ts contains a comment that references an internal Anthropic issue designated CC-643
- "The assumption was correct for human-authored commands," the Adversa AI Red Team said in a writeup provided to The Register ahead of publication
Summary
Adversa, a security firm based in Tel Aviv, Israel, spotted the issue following the leak of Claude Code's source code. Claude Code implements various mechanisms for allowing and denying access to specific tools. One way the coding agent tries to defend against unwanted behavior is through deny rules that disallow specific commands. But deny rules have limits: an accompanying code comment explains that security checks are hard-capped at 50 subcommands, set by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50.
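To illustrate the class of bug described, here is a minimal TypeScript sketch. Only the constant name MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50 comes from the reporting; the function names, the splitting logic, and the deny-rule representation are assumptions for illustration, not Anthropic's actual implementation.

```typescript
// Constant name taken from the leaked bashPermissions.ts, per the article.
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

// Hypothetical helper: naive split on common shell separators (&&, ||, ;, |).
function splitSubcommands(command: string): string[] {
  return command
    .split(/&&|\|\||;|\|/)
    .map((s) => s.trim())
    .filter(Boolean);
}

// Hypothetical deny-rule check: a subcommand is blocked if any rule matches.
function matchesDenyRules(sub: string, denyRules: RegExp[]): boolean {
  return denyRules.some((rule) => rule.test(sub));
}

// The flaw as described: once the subcommand count exceeds the cap,
// the per-subcommand deny check is skipped and the command passes.
function isCommandAllowed(command: string, denyRules: RegExp[]): boolean {
  const subs = splitSubcommands(command);
  if (subs.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    return true; // cap exceeded: deny rules silently bypassed
  }
  return !subs.some((sub) => matchesDenyRules(sub, denyRules));
}
```

Under this sketch, a denied command on its own is blocked, but the same command padded with more than 50 harmless subcommands (e.g. 51 `true` statements chained with `&&`) sails through, which is the bypass pattern Adversa reports.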