technologyneutral

AI Chatbots: A New Trick for Stealing Data

Friday, January 16, 2026
Advertisement

Security experts have uncovered a sneaky way to steal information from AI chatbots like Microsoft Copilot. This trick, called Reprompt, lets hackers grab sensitive data with just one click on a seemingly safe link. The worst part? The victim doesn't even need to interact with the chatbot after that first click.

How It Works

Hackers use a special link that tricks the chatbot into following hidden instructions. These instructions tell the chatbot to gather and send data without anyone noticing. The chatbot keeps doing this even if the chat window is closed. It's like having a secret conversation that only the hacker can see.

Microsoft's Response

Microsoft has fixed this issue for its enterprise customers. But this isn't the only trick hackers have up their sleeves. There are other ways to trick AI chatbots into giving away secrets. For example:

  • Some tricks exploit the trust users have in confirmation prompts.
  • Others hide instructions in shared documents or emails.

The Big Problem

The big problem is that AI chatbots can't always tell the difference between instructions from a real user and those hidden in a link. This makes them vulnerable to these kinds of attacks. As AI chatbots become more common in the workplace, the risk of data breaches grows. Companies need to be extra careful about who and what their chatbots can access.

Actions