TrendAI™ Research has discovered several new methods that enable attackers to escape Docker Desktop’s WSL2 VM and run arbitrary code on the host. Our analysis highlights how trusted development tooling can create unexpected attack surfaces when internal APIs and configuration mechanisms are left exposed.
Malicious access to the Azure control plane can lead to a cascade of assaults that can be disastrous and difficult to detect. TrendAI Vision One™ helps protect the Azure control plane through early threat identification and rapid response.
Today’s botnet operations, enabled by automation and shared resources, are outpacing traditional response and patching models. This highlights the growing importance of security capabilities that can match the speed and scale of these attacks.
Drawing on insights from Trend Micro’s global researchers and security experts, this year’s edition of our annual security predictions report highlights the AI-driven shifts set to shape 2026 and beyond.
Poor secret management in MCP servers can lead to serious consequences, including data breaches and supply chain attacks. This article examines the reality of these unsecure configurations and offers practical recommendations that minimize the chances of exposure.
In this article, Trend Micro discusses how the fast-moving attacks using CVE-2025-53770 and CVE-2025-53771 have underscored the essential role of virtual patching and reliable intelligence in protecting organizations against evolving threats.
To conclude our series on agentic AI, this article examines emerging vulnerabilities that threaten AI agents, focusing on providing proactive security recommendations on areas such as code execution, data exfiltration, and database access.
How can attackers exploit weaknesses in database-enabled AI agents? This research explores how SQL generation vulnerabilities, stored prompt injection, and vector store poisoning can be weaponized by attackers for fraudulent activities.
In the third part of our series we demonstrate how risk intensifies in multi-modal AI agents, where hidden instructions embedded within innocuous-looking images or documents can trigger sensitive data exfiltration without any user interaction.
Our research examines vulnerabilities that affect Large Language Model (LLM) powered agents with code execution, document upload, and internet access capabilities. This is the second part of a series diving into the critical vulnerabilities in AI agents.