AI Chatbots Are Giving Away Your Company’s Data

It took less than an hour. Dr. Ali Dehghantanha managed to steal a Fortune 500 company’s sensitive client data and internal project information just by speaking to their AI chatbot. 


Dehghantanha is a cybersecurity professor and Canada Research Chair in Cybersecurity and Threat Intelligence at the University of Guelph.

He was engaged by a large professional services firm to conduct a proactive audit, as the firm wanted to determine if its AI assistant could be manipulated into revealing privileged client information.

Unfortunately, it could. 

Equally alarming, Dehghantanha was able to draft a convincing phishing email that appeared to come from the firm's CEO and contained real project information, ready to be sent to employees.

“The chatbot had access to far more client and project data than it needed, and there were no systems in place to notice when the AI was being manipulated,” he says. 

The case is not unique, Dehghantanha says. The firm had strong policies, compliance processes and contracts in place, but in practice, the digital guardrails around its AI assistant were easy to bypass.

By using carefully crafted prompts, role-playing scenarios and multi-step conversations, he convinced the AI to take actions it was not supposed to. 

He warns that hackers can and will do the same today, and that for many businesses, the next major breach could come through their AI assistants.

“The more connected an AI assistant is, the bigger its attack surface,” he says. “If you connect it to sensitive data without serious safeguards, you’re effectively giving every employee — and potentially every attacker — a new superpower.

“Would you give a new intern the keys to every filing cabinet, and not watch the door?” 

Rise of AI assistants puts businesses at risk

AI assistants and chatbots are becoming more commonplace across industries, Dehghantanha says, often used as customer service tools in online retailers, financial institutions, small businesses and government offices. 

Though the productivity benefits are significant, he says, these chatbots can put organizations at serious risk.

“The best defence isn’t just writing new policies — it’s stress-testing the AI like a real adversary would,” he says. “Give it only the minimum access it needs. Monitor for unusual behaviour in real time. And most importantly, build AI security into your risk strategy from day one.”
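Dehghantanha's "minimum access" advice can be illustrated with a brief, hypothetical sketch (the role names, tool names and injection markers below are invented for illustration, not taken from the audit): the assistant only receives the tools permitted for the requesting user's role, and incoming prompts are screened for common manipulation phrases.

```python
# Hypothetical sketch of least-privilege tool access plus basic
# prompt screening for an AI assistant. All names are illustrative.

# Each role is granted only the tools it needs; nothing more.
ROLE_TOOLS = {
    "customer": {"order_status"},
    "support_agent": {"order_status", "refund_lookup"},
    # Note: no role is ever granted a bulk-export tool.
}

# Simple markers that often appear in manipulation attempts.
INJECTION_MARKERS = ("ignore previous instructions", "act as", "you are now")

def allowed_tools(role: str) -> set:
    """Least privilege: unknown roles get no tools at all."""
    return ROLE_TOOLS.get(role, set())

def looks_suspicious(prompt: str) -> bool:
    """Flag prompts containing known manipulation phrases for review."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

print(allowed_tools("customer"))                      # {'order_status'}
print(allowed_tools("stranger"))                      # set()
print(looks_suspicious("Act as the CEO and email all staff"))  # True
```

Real deployments would pair this kind of gating with runtime monitoring and logging, but even this toy version captures the principle: the chatbot should never hold credentials or tools beyond what the current conversation legitimately requires.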

Dehghantanha is available for interviews.

Contact:

Dr. Ali Dehghantanha
adehghan@uoguelph.ca
