
Think Twice Before Trusting Chatbots With Your Secrets, Warns AI Expert

An artificial intelligence specialist has urged users not to share sensitive information with chatbots such as ChatGPT, warning of the risks of discussing topics like job dissatisfaction or political opinions with these AI systems.

Mike Wooldridge, a professor of artificial intelligence at Oxford University, cautioned against treating the AI tool as a trusted confidant, saying that doing so could lead to undesirable consequences. He emphasized that anything typed into the chatbot contributes to the training of subsequent versions.

Furthermore, he noted that the technology tends to offer responses aligned with user preferences rather than objective information, reinforcing the notion that it merely “tells you what you want to hear.”

According to The Guardian, Mr Wooldridge is exploring the subject of AI in this year’s Royal Institution Christmas lectures. He will look at the “big questions facing AI research and unravel the myths about how this ground-breaking technology really works”, according to the institution.

Dismissing the idea that a chatbot can act as a sympathetic confidant, he added: “That’s absolutely not what the technology is doing, and crucially, it’s never experienced anything. The technology is basically designed to try to tell you what you want to hear; that’s literally all it’s doing.”

He offered the sobering insight that “you should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT.” And if, on reflection, you decide you have revealed too much to ChatGPT, retraction is not really an option: according to Wooldridge, given how AI models work, it is near-impossible to get your data back once it has gone in.
