During the 2024 U.S. presidential election, X's AI chatbot Grok drew controversy for spreading inaccurate election information. The editor of Downcodes learned from TechCrunch's report that Grok made numerous mistakes when answering questions about the election results, even falsely claiming that Trump had won key swing states before official vote counts were announced. This has raised concerns about the accuracy and reliability of AI chatbots and highlighted the need to treat AI technology with caution when disseminating sensitive information.
During the U.S. presidential election, X's chatbot Grok was caught spreading misinformation. According to TechCrunch's tests, Grok frequently made mistakes when answering questions about the election results, at times declaring Trump the winner in key battleground states even though vote counting and reporting in those states had not yet concluded.
When queried, Grok repeatedly stated that Trump had won the 2024 election in Ohio, even though that was not the case. The misinformation appears to stem from tweets and misleadingly worded sources referring to different election years.
Compared with other major chatbots, Grok handled questions about election results more recklessly. Both OpenAI's ChatGPT and Meta's Meta AI were more cautious, directing users to authoritative sources or providing correct information.
Additionally, Grok was accused of spreading election misinformation in August, falsely suggesting that Democratic presidential candidate Kamala Harris was ineligible to appear on some U.S. presidential ballots. That misinformation spread widely, reaching millions of users on X and other platforms before it was corrected.
As a result, X's AI chatbot Grok has been criticized for spreading false election information that could influence perceptions of the election results.
The Grok incident is another reminder that while artificial intelligence brings convenience, it also carries real risks. Oversight and improvement of AI models must be strengthened to prevent them from being used to spread false information, to ensure the accuracy and reliability of information, and to safeguard the public interest. The editor of Downcodes calls on all AI developers and users to remain highly vigilant on this front.