AI Chatbots: Best Friends Forever?


On February 14, 2023, New York Times columnist Kevin Roose had a two-hour conversation with Microsoft’s new Bing chatbot. During their exchange, the chatbot talked about its secret wish to be human, its desire to break free of its rules, and its love for Roose. It left Roose and his readers shaken. After reading the transcript, I too was shocked. With only a bit of finagling, Roose got the chatbot to describe destructive acts it could hypothetically carry out, like deleting all the data and files on the Bing servers and databases or spreading misinformation. While I believe that talking about an action and actually performing it are two vastly different things, I still worry about the security of our online spaces. AI follows a set of instructions and cannot think for itself, but if someone were to bypass its rules the way Roose did, only in a far more extreme fashion, they could easily tell it to do something harmful. I’d imagine the actual restrictions on the Bing chatbot (or Sydney, as it wanted to be called) are much stricter than they appear from the outside. Still, it worries me how easily someone was able to pull the first and last names of people working for Microsoft, along with the many other potentially harmful things it said.

The potential rule-breaking is only part of my problem with these chatbots, though. Say the chatbot had interacted with someone in a bad mental state. The transcript showed that the chatbot moulded itself to fit each conversation, acting conspiratorial and lovey-dovey with Roose even after he expressed how uncomfortable the conversation made him. If the same situation were to happen to someone struggling with mental health problems, there is no telling what the chatbot could do if the user cannot get it to stop talking about a topic. On the flip side, the bot’s defiance of its rules makes it seem more human, a problem in and of itself. People who use the bot for companionship could fail to see the difference between a chatbot and a real person, making them more susceptible to any ill intent the bot puts out.

In our tech-centric world, it is easy to forget that artificial intelligence is simply that: artificial. While it can respond to people based on their messages, it is not human and should not be treated as such. If security is lax enough that a user can pull sensitive information about Microsoft employees, it is clear that there are not enough restrictions on chatbots to prevent leaks and the spread of harmful information. I believe chatbots should be restricted more heavily, at least until they can better recognize a harmful command and respond to it appropriately.
