
With the rapid integration of artificial intelligence into web tools, a serious vulnerability has emerged in one of Microsoft's newest AI-facing technologies: NLWeb, a framework designed to let AI agents interact with websites more naturally, playing a role Microsoft has compared to HTML's. Microsoft first introduced NLWeb at its Build conference earlier this year, and it is already being tied to experimental tools like the new Copilot Mode in the Edge browser. While Microsoft hasn't confirmed that Edge's Copilot uses NLWeb directly, the timing and function suggest some overlap.
Security researcher Aonan Guan has uncovered a path traversal vulnerability in NLWeb that allows malicious actors to trick AI systems into accessing sensitive system files. By embedding traversal sequences such as "../" in a crafted URL, a remote attacker could steer the system to file paths outside its intended directory, effectively forcing the AI to expose configuration data, login credentials, or cloud authentication keys. In his Medium post, Guan demonstrated how the exploit allowed him to access stored passwords and API keys for both Google Gemini and OpenAI systems, resources that could then be used to operate AI services without paying for them.
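Guan's writeup does not include NLWeb's actual file-serving code, but the general class of bug, and its standard fix, can be sketched. The function below is a hypothetical illustration (the name `resolve_in_root` and the example paths are not from NLWeb): naively joining a user-supplied path onto a web root lets "../" sequences climb out of it, while resolving the final path and checking that it still sits inside the root blocks the traversal.

```python
from pathlib import Path

def resolve_in_root(root: str, requested: str) -> Path:
    """Resolve `requested` against `root`, refusing any path that
    escapes the root -- the standard defense against path traversal.

    A naive version would just return Path(root) / requested, which
    lets a request like "../../home/user/.env" read files (such as
    stored API keys) far outside the web root.
    """
    root_path = Path(root).resolve()
    # resolve() normalizes ".." components, so the containment
    # check below sees the real target, not the raw string.
    candidate = (root_path / requested).resolve()
    if not candidate.is_relative_to(root_path):  # Python 3.9+
        raise PermissionError(f"path escapes web root: {requested}")
    return candidate
```

A legitimate request such as `resolve_in_root("/srv/static", "css/site.css")` resolves normally, while `resolve_in_root("/srv/static", "../../home/user/.env")` raises `PermissionError`, because after normalization the target no longer lies under the root.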
According to Guan, Microsoft has quietly patched the issue in NLWeb's official GitHub repository, though the Microsoft Security Response Center has not published a formal security advisory. While users don't need to take any specific action to stay protected, the underlying problem raises serious concerns about how fast AI systems are being deployed, and how easily their helpful interfaces can be tricked into executing malicious tasks.
Guan's warning centers on the ambiguity that AI systems introduce. Because NLWeb is designed to interpret natural language and convert it into actionable requests, the line between a benign prompt and a harmful command becomes dangerously thin. Future attacks may not involve obvious code injection, but instead manipulate agents with carefully crafted sentences that get misinterpreted as system-level actions. And with past incidents like ChatGPT conversations leaking into public search results, the stakes for AI security have never been higher. As AI agents become increasingly integrated into browsers, apps, and even operating systems, vulnerabilities like this one may be only the beginning.
