Elon Musk Takes Jab at ChatGPT as Propaganda Machine: 'We Need TruthGPT'

Elon Musk is pushing back against ChatGPT's growing popularity, saying the AI software is not safe for mainstream public use because, he alleges, it can spread falsehoods and propaganda.

“What we need is TruthGPT,” he tweeted on Friday.

On Twitter, Musk was discussing the shortcomings of ChatGPT, which Microsoft is integrating into its Bing search engine to improve the search experience. "Agreed!" Musk wrote in response to a tweet calling on Microsoft to shut down ChatGPT in Bing.

The exchange is perhaps a little ironic, given that the initial tweet came from an internet personality who isn't exactly known for telling the truth. Twitter itself has also been accused of spreading misinformation as a platform.

However, ChatGPT is clearly a powerful tool: it can write entire articles, summarize complex topics, and even generate computer code from nothing more than a text prompt. But in recent days, social media users have posted many examples of the errors the AI software can make, including factual mistakes, emotional rants, and refusing to respond to some politically sensitive topics while answering others.

The flaws prompted Musk to take several jabs at ChatGPT, including mocking the AI program as a propaganda machine that could replace mainstream media.

He also took aim at OpenAI, the San Francisco-based creator of ChatGPT, which Musk helped found before cutting ties in 2018.

"OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft," Musk tweeted on Friday. "Not what I intended at all."

OpenAI admitted this week that its process for fine-tuning ChatGPT is "imperfect."

"Sometimes the fine-tuning process falls short of our goal (producing a safe and useful tool) and the user's intent (getting a helpful output in response to a given input)," OpenAI said. "Improving our methods for aligning AI systems with human values is a top priority for our company, especially as AI systems become more capable."

Musk's critical stance is not surprising. For years, he has sounded the alarm about the dangers of artificial intelligence. In 2014, he said, "With artificial intelligence, we are summoning the demon." That same year, he wrote on Twitter that AI could be "more dangerous than nukes."

Earlier this week, Musk said it was time for governments to get involved. "I think we need to regulate AI safety, frankly," he said at the World Government Summit in Dubai. "I think we should have similar regulatory oversight of AI, because I think it poses a greater risk to society than cars, planes, or medicine."

(Speaking of cars, Tesla — where Musk is also CEO — issued a voluntary recall of 362,758 vehicles this week over an update to its AI-powered driver-assistance software, which the company says could cause vehicles to disobey local traffic laws and increase the risk of an accident.)

Calls for regulation may grow as companies roll out AI-powered chatbots to more users around the world. In the meantime, there are signs that both OpenAI and Microsoft are taking user feedback into account as they modify ChatGPT.

In the case of OpenAI, the company is working on an upgrade that could allow users to customize ChatGPT to address its perceived bias on certain sensitive topics. "This will mean allowing system outputs that other people (ourselves included) may strongly disagree with," the company said.

Microsoft, meanwhile, is considering adding more guardrails to curb unsettling responses from the ChatGPT-powered Bing, according to The New York Times. This includes limiting conversation length, since long sessions can sometimes confuse Bing into giving odd responses.