Technology news outlet TechCrunch recently reported that Google is using Anthropic's Claude AI model to evaluate the performance of its own Gemini AI model. The move has raised widespread compliance concerns across the industry. At the heart of the matter is whether Google violated Anthropic's terms of service, which explicitly prohibit the unauthorized use of Claude to build competing products or train competing AI models. The incident has also highlighted differences in safety behavior between AI models, with Claude appearing to outperform Gemini on safety.
Google contractors reportedly compared responses from Gemini and Claude to evaluate Gemini's accuracy and quality. In some cases Claude showed stricter safety behavior, refusing to answer unsafe prompts or giving more cautious responses, while some of Gemini's responses were flagged for serious safety violations. Google DeepMind denied using Claude to train Gemini but acknowledged that it compares the outputs of different models as part of its evaluation process. Notably, Google is one of Anthropic's major investors, which further complicates the situation.
Highlights:
Google's use of Anthropic's Claude AI to evaluate Gemini may violate Anthropic's terms of service.
Claude appears to apply stricter safety standards than Gemini.
Google DeepMind denied using the Anthropic model to train Gemini, while confirming that it compares model outputs for evaluation.
Google's move has sparked discussion about competition and partnerships among large tech companies, as well as about standards and ethics in AI model evaluation. Similar incidents may push the industry to further standardize how AI models are evaluated and how data is used, in order to ensure fair competition and maintain AI safety. The episode is also a reminder that the rapid development of AI technology needs to be matched by a more complete regulatory and ethical framework.