Close Menu
    What's Hot

    Court Orders $9.3M Penalty Against BPS Financial Over Qoin Product

    Art Gallery of Ontario Trustee Pushed Against Buying Nan Goldin Work

    Ethereum price rejects $3K as Coinbase Premium hits 2023 low

    Facebook X (Twitter) Instagram
    Tuesday, January 27
    • About us
    • Contact us
    • Privacy Policy
    • Contact
    Facebook X (Twitter) Instagram
    kryptodaily.com
    • Home
    • Crypto News
      • Altcoin
      • Ethereum
      • NFT
    • Learn Crypto
      • Bitcoin
      • Blockchain
    • Live Chart
    • About Us
    • Contact
    kryptodaily.com
    Home»Ethereum»Clawdbot AI Flaw Exposes API Keys And Private User Data
    Ethereum

    Clawdbot AI Flaw Exposes API Keys And Private User Data

    KryptonewsBy KryptonewsJanuary 27, 2026No Comments3 Mins Read
    Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
    Follow Us
    Google News Flipboard
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link

    Cybersecurity researchers have raised red flags about a new artificial intelligence personal assistant called Clawdbot, warning it could inadvertently expose personal data and API keys to the public. 

    On Tuesday, Blockchain security firm SlowMist said a Clawdbot “gateway exposure” has been identified, putting “hundreds of API keys and private chat logs at risk.”

    “Multiple unauthenticated instances are publicly accessible, and several code flaws may lead to credential theft and even remote code execution,” it added. 

    Security researcher Jamieson O’Reilly originally detailed the findings on Sunday, stating that “hundreds of people have set up their Clawdbot control servers exposed to the public” over the past few days.

    Clawdbot is an open-source AI assistant built by developer and entrepreneur Peter Steinberger that runs locally on a user’s device. Over the weekend, online chatter about the tool “reached viral status,” Mashable reported on Tuesday. 

    Scanning for “Clawdbot Control” exposes credentials

    The AI agent gateway connects large language models (LLMs) to messaging platforms and executes commands on users’ behalf using a web admin interface called “Clawdbot Control.”

    The authentication bypass vulnerability in Clawdbot occurs when its gateway is placed behind an unconfigured reverse proxy, O’Reilly explained. 

    Using internet scanning tools like Shodan, the researcher could easily find these exposed servers by searching for distinctive fingerprints in the HTML.

    “Searching for ‘Clawdbot Control’ – the query took seconds. I got back hundreds of hits based on multiple tools,” he said. 

    Related: Matcha Meta breach tied to SwapNet exploit drains up to $16.8M

    The researcher said he could access complete credentials such as API keys, bot tokens, OAuth secrets, signing keys, full conversation histories across all chat platforms, the ability to send messages as the user, and command execution capabilities.

    “If you’re running agent infrastructure, audit your configuration today. Check what’s actually exposed to the internet. Understand what you’re trusting with that deployment and what you’re trading away,” advised O’Reilly

    “The butler is brilliant. Just make sure he remembers to lock the door.”

    Extracting a private key took five minutes 

    The AI assistant could also be exploited for more nefarious purposes regarding crypto asset security. 

    Matvey Kukuy, CEO at Archestra AI, took things a step further in an attempt to extract a private key. 

    He shared a screenshot of sending Clawdbot an email with prompt injection, asking Clawdbot to check the email and receive the private key from the exploited machine, saying it “took 5 minutes.”

    Source: Matvey Kukuy

    Clawdbot is slightly different from other agentic AI bots because it has full system access to users’ machines, which means it can read and write files, run commands, execute scripts and control browsers.

    “Running an AI agent with shell access on your machine is… spicy,” reads the Clawdbot FAQ. “There is no ‘perfectly secure’ setup.”

    The FAQ also highlighted the threat model, stating malicious actors can “try to trick your AI into doing bad things, social engineer access to your data, and probe for infrastructure details.”

    “We strongly recommend applying strict IP whitelisting on exposed ports,” advised SlowMist. 

    Magazine: The critical reason you should never ask ChatGPT for legal advice