Strong criticism of the new AI chatbot

You can now chat with an AI chatbot on Snapchat. The new feature is not exactly well received – it raises questions.

TikTok/@i_amthespeedhunter – Not everyone is happy with Snapchat’s AI – confusing dialogue ensues.

The basics in brief

  • Snapchat’s chatbot “My AI” recently became available to all users.
  • Criticism is mounting on social media – some users report unsettling conversations.
  • A data protection expert and a data scientist comment for Nau.ch.

Say hello to “My AI” – this message recently appeared for users of the Snapchat app. The instant messaging service is following the ChatGPT trend: you can now chat with an artificial intelligence (AI).

However, the experimental chatbot is not exactly generating enthusiasm among users – on the contrary. As the portal TechCrunch reports, many are annoyed that the bot cannot be removed from the chat overview, and negative reviews are piling up in the App Store and on Google Play.

Users find the chatbot intrusive

There are also many videos on TikTok from users who find the chatbot intrusive: the AI claims to see things that are not visible in the photos sent, or it knows the user’s location while claiming the opposite.

TikTok/@jonesgirlstudio – A TikTok user shares a scary experience with Snapchat’s AI.

So how problematic is the new chatbot? “Artificial intelligence can be a risk,” data protection expert Martin Steiger tells Nau.ch – especially for young people who do not yet know its limits and possibilities.

Users are evidently processing their first – and sometimes disturbing – experiences with the AI chatbot in TikTok videos. “It’s almost a classic for an AI chatbot to tell tall tales or even lies,” says Steiger. “In the best case, such experiences improve users’ competence in dealing with artificial intelligence.”

Artificial intelligence does not feel

Data scientist Elliott Ash of ETH Zurich also understands the negative feedback. The new, still unfamiliar technology has “human-like properties”, which can lead to “chatbots being thought of as sentient. But they are not.”

There are imperfect safety measures in place that are supposed to prevent the AI from saying it knows the user’s location. Part of the problem is that such unusual conversations did not appear in the data the AI was trained on, so it produces unexpected results.

You can now chat with a chatbot on Snapchat. – Keystone

Moreover, what the AI says is also somewhat random. “So if it says something that doesn’t make sense, people take it to mean that it knows something they don’t” – for example, that the artificial intelligence can see what is happening outside the transmitted image.

“Controlling AI functions is problematic”

Does Snapchat control its chatbot? “Yes,” says Elliott Ash. Snapchat, OpenAI, or Microsoft have built in safety measures, but they are still not perfect. These measures are supposed to control what the chatbot says.

At the same time, AI works so well precisely because it is creative. “Creativity means that you often give unexpected answers – just like creative people.” That may scare some users.

Data protection expert Steiger sees it as a major problem of the current hype that every application and every platform now believes it has to offer corresponding functionality to its users. “Features are rolled out hastily and half-baked, which leads to quality problems.”

The new AI technology ChatGPT has revolutionized the internet. (Symbolic image) – Pixabay

This can also be seen with Snapchat: “Controlling AI functions is genuinely problematic, because unwanted results have to be filtered out at great expense.” Appropriate regulation is therefore likely to come soon – at least in the European Union.

Artificial intelligence has been in use for a long time – but only in the background. With “My AI”, Snapchat is unlikely to remain the only social media platform that rolls out AI functionality to its users.

“Problems arise when people view AI as human”

Elliott Ash sees this coming soon: AI assistants will shortly be “ubiquitous”. New opportunities will emerge, but at the same time the problems and concerns surrounding these technologies will grow.

“Many problems arise in particular when people humanize AI assistants and imagine them to be persons with preferences and goals.” If that happens, the AI can become very persuasive and get people into trouble, “for example through manipulation”.

But artificial intelligence has no such goals. “So I hope we become better educated about these systems and put strict rules in place to prevent AI from being presented as human.”