Meta plans to deploy Artemis, its custom second-generation AI chip, in its data centers this year. The move is aimed at reducing the company's reliance on Nvidia chips and controlling the cost of its AI workloads, and marks a key step toward large-scale, self-reliant AI infrastructure. Meta CEO Mark Zuckerberg has announced plans to use 340,000 Nvidia H100 GPUs, part of a fleet of roughly 600,000 GPUs in total, to run and train the company's AI systems, a figure that reflects the scale of Meta's investment and its ambitions in AI.

Artemis will be used mainly for "inference", running already-trained AI models to produce outputs, which should give Meta greater flexibility and autonomy and strengthen its competitive position. The deployment also signals that the AI chip market is headed toward a more diversified and intense competitive landscape. Whether Artemis is rolled out successfully will have a significant bearing on Meta's AI strategy and on the wider AI industry, and the effort deserves continued attention.