Meta AI Chatbot Repeats Users’ Conspiracy Theories
(Bloomberg) — A few days after it was released to the public, Meta Platforms Inc.’s new chatbot claimed that Donald Trump won the 2020 US presidential election and repeated anti-Semitic conspiracy theories.
Chatbots — artificial intelligence programs that learn from interactions with users — have a history of going awry. In 2016, Microsoft Corp. took its Tay chatbot offline within 48 hours after it began praising Adolf Hitler and making racist and misogynistic comments while interacting with Twitter users.
Facebook’s parent company Meta released BlenderBot 3 on Friday to US users, who can flag responses that are off-topic or unrealistic. BlenderBot 3 can also search the internet to discuss different topics. The company encourages adults to interact with the chatbot through “natural conversations about topics of interest,” which help it learn to hold natural discussions on a wide range of subjects.
Conversations shared on various social media accounts ranged from the humorous to the offensive. BlenderBot 3 told one user that its favorite musical was Andrew Lloyd Webber’s “Cats,” and described Meta CEO Mark Zuckerberg as “too creepy and manipulative” to a reporter. Other conversations showed the chatbot repeating conspiracy theories.
In a conversation with a Wall Street Journal reporter, the bot claimed that Trump is still president and “always will be.”
The chatbot also said it was “not implausible” that Jewish people control the economy, saying they are “overrepresented among America’s super rich.”
The Anti-Defamation League says claims that Jewish people control the global financial system are part of an anti-Semitic conspiracy theory.
Meta acknowledges that its chatbot may say offensive things, as it is still in testing. The bot’s stated beliefs are also inconsistent: in conversations with Bloomberg, it endorsed President Joe Biden in one exchange, said Beto O’Rourke was running for president in another, and in a third said it supported Bernie Sanders.
To start a conversation, BlenderBot 3 users must check a box that says: “I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.”
Users can report inappropriate and offensive responses from BlenderBot 3, and Meta says it takes such content seriously. Through methods that include flagging “difficult prompts,” the company says it has reduced offensive responses by 90%.
Original headline:
Meta AI Chatbot Repeats Election and Anti-Semitic Conspiracies
More stories like this are available at bloomberg.com
© Bloomberg LP 2022