Security researchers have documented the first confirmed real-world theft of OpenClaw AI agent configuration files. These files contain API keys, authentication tokens, and other secrets.
OpenClaw (formerly ClawdBot and MoltBot) is a locally hosted framework for running AI agents. It stores persistent configuration and memory on the user's machine, has access to local files, can log into email and instant messaging apps, and can interact with external services. Thanks to its extensive capabilities and ease of use, the tool has accumulated over 200,000 stars on GitHub, and its creator, Peter Steinberger, was invited to work at OpenAI.
However, we have previously written about how security experts have called OpenClaw a genuine security nightmare. Last month, Hudson Rock experts warned that OpenClaw would soon become a prime target for infostealers due to the abundance of sensitive data in its configuration files, coupled with relatively weak security. Unfortunately, this prediction has already come true.
Hudson Rock researchers discovered that an infostealer successfully exfiltrated the OpenClaw configuration of at least one victim. According to the experts, the malware was a Vidar variant, and the theft occurred on February 13, 2026.
"This is a significant milestone in the evolution of infostealer behavior: moving from stealing browser credentials to stealing 'souls' and personal data from personal AI agents," the company writes.

Remarkably, Vidar did not specifically target OpenClaw data. The malware ran its usual file collection routine, scanning directories for keywords like "token" and "private key." Files in the .openclaw folder happened to contain the matching strings.
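This kind of opportunistic collection is easy to picture. The sketch below, a rough approximation and not Vidar's actual code, walks a directory tree and flags any file whose contents match credential-related keywords; a `.openclaw` folder full of tokens and PEM keys would match without the malware knowing anything about OpenClaw. The keyword list and size cap are assumptions for illustration.

```python
import os

# Hypothetical keyword list; real stealers match far broader patterns.
KEYWORDS = [b"token", b"private key"]

def scan_directory(root):
    """Walk a directory tree and return paths of files whose contents
    contain any credential-related keyword (case-insensitive)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read(1_000_000)  # cap read size per file
            except OSError:
                continue  # unreadable file, skip
            if any(kw in data.lower() for kw in KEYWORDS):
                hits.append(path)
    return hits
```

Because the match is purely content-based, any agent framework that keeps plaintext secrets on disk is swept up by the same generic routine.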
The stealer stole several files, including:

- Openclaw.json — stores the victim's email address (masked), the path to the working directory, and a high-entropy gateway authentication token. This token potentially allows connecting to a local OpenClaw instance or impersonating a legitimate client in authenticated requests.
- Device.json — contains publicKeyPem and privateKeyPem used for device binding and signing. With the private key, an attacker can sign messages on behalf of the victim's device, bypass Safe Device checks, and access encrypted logs and cloud services tied to the device.
- Soul.md and memory files (AGENTS.md, MEMORY.md) — define the agent's behavior and accumulate persistent context: daily activity logs, personal correspondence, calendar entries.
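To see why the leaked privateKeyPem is the most dangerous item, consider a minimal sketch: anyone holding the PEM can produce signatures cryptographically indistinguishable from the victim's device. The algorithm (Ed25519) and the demo key names are assumptions; the report does not specify OpenClaw's actual signing scheme.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_as_device(private_key_pem: bytes, message: bytes) -> bytes:
    """Load a (stolen) privateKeyPem and sign a message as the device."""
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    return key.sign(message)

# Stand-in key pair for the demo; an attacker would instead use the PEM
# exfiltrated from device.json.
device_key = Ed25519PrivateKey.generate()
pem = device_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)

signature = sign_as_device(pem, b"request from 'the device'")
# The signature verifies against the device's public key, so a Safe
# Device check based only on this signature cannot distinguish the
# attacker from the victim.
device_key.public_key().verify(signature, b"request from 'the device'")
```

The takeaway: once the private half of a device-binding key pair leaves the machine, every check built on it must be considered compromised, and the key must be rotated.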

The researchers emphasize that the stolen data is sufficient to completely compromise the victim's digital identity. The company warns that as OpenClaw becomes more deeply integrated into workflows, stealers will increasingly go after AI agents, adopting more targeted data collection mechanisms.