• IBM's AI 'Bob' could be manipulated to download and execute malwa

    From TechnologyDaily@1337:1/100 to All on Fri Jan 9 17:00:08 2026
    IBM's AI 'Bob' could be manipulated to download and execute malware

    Date:
    Fri, 09 Jan 2026 16:50:00 +0000

    Description:
    Bob is also susceptible to indirect prompt injection, but only under specific conditions.

    FULL STORY ======================================================================IBMs GenAI tool Bob is vulnerable to indirect prompt injection attacks in beta testing CLI faces prompt injection risks; IDE exposed to AI-specific data exfiltration vectors Exploitation requires always allow permissions, enabling arbitrary shell scripts and malware deployment

    IBMs Generative Artificial Intelligence ( GenAI ) tool, Bob, is susceptible
    to the same dangerous attack vector as most other similar tools - indirect prompt injection.

    Indirect prompt injection is when the AI tool is allowed to read the contents found in other apps, such as email, or calendar.

    A malicious actor can then send a seemingly benign email, or calendar entry, which has a hidden prompt that instructs the tool to do nefarious things,
    such as exfiltrate data, download and run malware , or establish persistence. Risky permissions

    Recently, security researchers Prompt Armor published a new report, stating that IBMs coding agent, which is currently in beta, can be accessed either through CLI (a terminal-based coding agent), or IDE (an AI-powered editor). CLI is vulnerable to prompt injection, while IDE is vulnerable to known AI-specific data exfiltration vectors.

    We have opted to disclose this work publicly to ensure users are informed of the acute risks of using the system prior to its full release, they said. We hope that further protections will be in place to remediate these risks for IBM Bob's General Access release.

    There is a major caveat here, though. For the attackers to leverage this attack vector, users must first configure Bob to grant it broad permissions. Namely, the always allow permission needs to be enabled - for any command.

    Thats quite the stretch, even for the least security-conscious users out there. Since the tool is still in beta, we dont know if that permission is enabled by default, but we doubt it will be.

    In any case, Prompt Armor says the vulnerability allows threat actors to deliver an arbitrary shell script payload to the victim, leveraging known and custom malware variants to conduct different cyberattacks, such as
    ransomware, credential theft, spyware, device takeover, botnet assimilation, and more.

    Via; PromptArmor

    Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the
    Follow button!

    And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.



    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/ibms-ai-bob-could-be-manipulated-to-dow nload-and-execute-malware


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)