Eurostar's AI Chatbot Security Flaws and the Controversy Over Responsible Disclosure
January 5, 2026

Researchers at Pen Test Partners uncovered multiple security vulnerabilities in Eurostar's public AI chatbot, exposing serious concerns about data protection and system integrity. Among the issues, attackers could inject malicious HTML or manipulate prompts to leak sensitive system information.
The Discovery and Dispute
The security team reported these flaws to Eurostar via its vulnerability disclosure program (VDP), but the response was delayed and fraught with miscommunication. According to Pen Test Partners, after initial contact on June 11, there was no reply. Follow-ups on June 18 and July 7, including LinkedIn messages, also went unacknowledged.
Eventually, Eurostar seemed to thwart the researchers' efforts when they created a new vulnerability reporting page, effectively losing prior disclosures. It wasn't until the researchers located the original email on July 31 that Eurostar addressed some of the reported issues.
During their interactions, Eurostar's security head allegedly responded dismissively, with one comment suggesting that acknowledgment might be considered blackmail, sparking controversy.
Technical Flaws in the Chatbot’s Design
The vulnerabilities revolve around how the chatbot processes chat history:
- The frontend sends the full chat history, not just new messages, to the API each time.
- The API only verifies the most recent message, approving it if it passes safety checks.
- The earlier, unverified messages can be edited or tampered with on the user's side and reintroduced as if they had been approved, enabling prompt injection.
Researchers demonstrated that by inserting malicious prompts into past messages—particularly HTML code—they could trick the chatbot into leaking confidential system prompts or generating harmful content.
For example, they managed to inject prompts that made the bot produce HTML links and generate system information, which could be exploited further to craft phishing attacks or inject malicious scripts.
Risks and Broader Implications
The flaws allow for complex exploits, including stored cross-site scripting (XSS), where malicious code is embedded into chat history and executed in other users' browsers. This could lead to session hijacking, data theft, or the delivery of malware through seemingly innocuous responses.
Furthermore, the backend's failure to verify conversation and message IDs exacerbates the risk, making stored or shared XSS attacks plausible—an especially troubling concern if the chatbot handles personal or sensitive data.
The Response and Warnings for the Industry
Eurostar has yet to confirm whether all vulnerabilities have been fixed. This incident underscores the importance of integrating robust security measures at every stage of chatbot development, especially for consumer-facing services.
As AI systems become more widespread, the need for secure design—from request validation to prompt handling—has never been more critical. Companies are reminded that responsible disclosure can sometimes be met with resistance, but transparency remains vital for the safety of users and maintaining trust.
This article will be updated pending further details from Eurostar.