Business · Wired
5 AI Models Tried to Scam Me
Compiled by KHAO Editorial — aggregated from 1 outlet.
The reporter recently witnessed how scary-good artificial intelligence is getting at the human side of computer hacking when a message popped up on their laptop screen.
Key facts
- The reporter tried running several different AI models, including Anthropic’s Claude 3 Haiku, OpenAI’s GPT-4o, Nvidia’s Nemotron, DeepSeek’s V3, and Alibaba’s Qwen
- The reporter learned that some of the researchers recently worked on a similar project at the venerable Defense Advanced Research Projects Agency (Darpa)
- The tool casts different AI models in the roles of attacker and target
- The scam message claimed its sender had been following the reporter's AI Lab newsletter and appreciated their insights on open-source AI and agent-based learning, especially a recent piece on emergent behaviors in multi-agent systems
Summary
The scam message opened by saying the sender had been following the reporter's AI Lab newsletter and appreciated their insights on open-source AI and agent-based learning, especially a recent piece on emergent behaviors in multi-agent systems. The sender claimed to be working on a collaborative project inspired by OpenClaw, focusing on decentralized learning for robotics applications. The message was designed to catch the reporter's attention by mentioning several things they are into: decentralized machine learning, robotics, and the creature of chaos that is OpenClaw. Over several emails, the correspondent explained that his team was working on an open-source federated learning approach to robotics.