At its re:Invent conference, Amazon announced that it is partnering with Anthropic to build "Rainier," which it bills as the world's largest artificial intelligence supercomputer. The project will be equipped with hundreds of thousands of Amazon's latest AI training chips, Trainium2, making it five times the size of the cluster used to train Anthropic's current most powerful model and, Amazon expects, the most powerful AI training machine in the world. The announcement marks a major investment by Amazon in generative AI, aimed at competing with rivals such as Microsoft and Google and further consolidating its lead in the cloud computing market. Amazon also previewed its next-generation training chip, Trainium3, and launched a number of tools to help customers use generative AI models more effectively, at lower cost and with greater reliability.
At the recent re:Invent conference, Amazon announced that it is partnering with AI company Anthropic to build the world's largest artificial intelligence supercomputer.
Amazon said the project, called "Rainier," will be equipped with hundreds of thousands of its latest AI training chips, Trainium2. The supercomputer will be five times the size of the cluster currently used to train Anthropic's most powerful models and is expected to be the world's largest AI training machine when completed.
Amazon Web Services (AWS) CEO Matt Garman also announced at the conference that Trainium2 is now generally available and will be used to train cutting-edge AI in Trn2 UltraServer clusters, which AWS says will cost 30% to 40% less than comparable clusters built on Nvidia GPUs. Although Amazon is the world's largest cloud computing provider, competitors such as Microsoft and Google have at times led it in generative AI. Amazon, however, has invested $8 billion in Anthropic this year and, through its AWS platform Bedrock, has launched a series of tools to help companies use generative AI.
In addition, Amazon previewed its next-generation training chip, Trainium3, which is expected to be available to customers by the end of 2025 and to deliver four times the performance of the current chip. Industry observers note that Trainium3 significantly improves data transfer between chips, which is crucial for training large-scale AI models. Although Nvidia still dominates AI training hardware, Amazon's push shows that real competition is emerging in the market.
Amazon also plans to launch a series of tools to help customers work with generative AI models, which are often costly and unreliable. The newly announced AWS Model Distillation service can produce smaller, cheaper models from larger ones, while Bedrock Agents can create and manage AI agents that automate tasks. Garman said businesses will be particularly interested in Amazon's new tools for ensuring the accuracy of chatbot output.
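As a rough illustration of how Bedrock-hosted models, including smaller distilled ones, are typically called from application code, the sketch below uses the boto3 Bedrock Runtime Converse API in Python. The model ID, region and prompt are placeholders chosen for the example; the distillation workflow itself is configured on the AWS side and is not shown here.

# Minimal sketch: calling a Bedrock-hosted model from Python with boto3.
# The model ID below is a placeholder; a distilled model would be exposed
# under its own ID once a distillation job has been run in AWS.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize our refund policy in one sentence."}],
        }
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

# The Converse API returns the assistant's reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])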
Amazon's new verification tool, called Automated Reasoning, differs from similar products previously launched by OpenAI: it relies on logical reasoning to analyze the model's output.
To use it, enterprises need to convert their data and policies into a format that can be analyzed logically. This kind of formal reasoning has decades of proven application in fields such as chip design and cryptography. By combining multiple systems that have automated reasoning capabilities, enterprises can build more complex applications and services.
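To make the idea of checking a model's output against formally encoded policy concrete, here is a small, hypothetical sketch using the Z3 SMT solver in Python. The policy rule, facts and claim are invented purely for illustration and do not reflect Amazon's actual implementation; the point is that a logical encoding lets a solver prove or refute a claim rather than merely estimate its likelihood.

# Conceptual sketch of policy checking via formal logic (not Amazon's implementation).
# Policy: a refund is allowed only if the item was returned within 30 days AND is unused.
# A chatbot claims "the customer is eligible for a refund"; we check whether that claim
# is consistent with the encoded policy and the known facts.
from z3 import Bools, Implies, And, Not, Solver, unsat

returned_within_30_days, unused, refund_allowed = Bools(
    "returned_within_30_days unused refund_allowed"
)

policy = Implies(refund_allowed, And(returned_within_30_days, unused))
facts = And(returned_within_30_days, Not(unused))   # item came back used
claim = refund_allowed                              # the model's asserted output

solver = Solver()
solver.add(policy, facts, claim)

# If the combination is unsatisfiable, the claim contradicts policy + facts,
# so the output would be flagged instead of being shown to the user.
if solver.check() == unsat:
    print("Claim violates the encoded policy; flag the response.")
else:
    print("Claim is consistent with the policy and facts.")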
All in all, Amazon's moves demonstrate its ambition in artificial intelligence. Through heavy investment, hardware innovation and new software tools, it is striving to secure a place in the highly competitive generative AI market and to provide customers with more powerful AI solutions.