AMD's latest Strix Point APU series processors show significant advantages in AI large language model (LLM) applications, outperforming Intel's Lunar Lake series. Designed for mobile platforms, the processors aim to deliver higher performance and lower latency to meet the growing demand for AI workloads. AMD highlights the strong performance of its Ryzen AI processors on LLM tasks and promotes the LM Studio tool to simplify the use of large language models and lower the barrier to entry.
AMD recently showcased its latest Strix Point APU series, emphasizing what it says is outstanding performance in AI large language model (LLM) applications, far exceeding Intel's Lunar Lake series processors. With demand for AI workloads rising, competition among hardware vendors is intensifying, and AMD has positioned these mobile-platform AI processors to deliver higher performance and lower latency.
AMD says the Ryzen AI 300 series (Strix Point) processors significantly increase the number of tokens processed per second on AI LLM tasks, with the Ryzen AI 9 HX 375 delivering a 27% performance improvement over Intel's Core Ultra 7 258V. While the Core Ultra 7 258V is not the fastest model in the Lunar Lake family, its core and thread counts are close to those of the higher-end Lunar Lake processors, underscoring the competitiveness of AMD's products in this area.
LM Studio, the consumer-oriented application AMD highlights, is built on the llama.cpp framework and is designed to simplify the use of large language models. The framework is optimized for x86 CPUs and does not require a GPU to run an LLM, although using a GPU can speed up processing further. According to AMD's tests, the Ryzen AI 9 HX 375 achieves 3.5 times lower latency in the Meta Llama 3.2 1b Instruct model and processes 50.7 tokens per second, compared with only 39.9 tokens per second for the Core Ultra 7 258V.
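To make figures like tokens per second and time to first token concrete, here is a minimal sketch of how such metrics can be measured against LM Studio's local OpenAI-compatible server (it listens on http://localhost:1234/v1 by default). The model identifier and prompt are placeholders, not values from AMD's tests, and real numbers depend heavily on hardware, quantization, and settings.

```python
# Minimal sketch: measure time-to-first-token and throughput against
# LM Studio's local OpenAI-compatible server (default: localhost:1234).
# The model name below is a placeholder; use the identifier LM Studio shows.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.perf_counter()
first_token_at = None
chunks = []

stream = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain what an APU is in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first visible token arrives
        chunks.append(delta)
elapsed = time.perf_counter() - start

ttft = first_token_at - start
gen_time = elapsed - ttft
print(f"time to first token: {ttft:.3f} s")
# Approximation: one streamed chunk roughly corresponds to one token.
print(f"throughput: {len(chunks) / gen_time:.1f} tokens/s over {gen_time:.2f} s")
```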
Beyond that, the Strix Point APUs also include Radeon integrated graphics based on RDNA 3.5, and tasks can be offloaded to the iGPU through the Vulkan API, further improving LLM performance. Using Variable Graphics Memory (VGM) technology, Ryzen AI 300 processors can optimize memory allocation and improve energy efficiency, ultimately achieving performance improvements of up to 60%.
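Because LM Studio is built on llama.cpp, the same offload mechanism is visible in llama.cpp-based tooling. The sketch below uses the llama-cpp-python bindings, where n_gpu_layers controls how many transformer layers are pushed to the GPU (the Vulkan backend is used when llama.cpp is compiled with it). The GGUF path is a placeholder, and AMD's 60% figure applies to its specific VGM-enabled configuration, not to this snippet.

```python
# Minimal sketch of GPU offload with the llama-cpp-python bindings.
# Requires a llama.cpp build with a GPU backend (e.g. Vulkan) enabled;
# the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.2-1b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU; 0 keeps everything on CPU
    n_ctx=4096,       # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize variable graphics memory in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```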
In comparison tests using the same settings on Intel's AI Playground platform, AMD found that the Ryzen AI 9 HX 375 was 87% faster than the Core Ultra 7 258V on Microsoft Phi 3.1 and 13% faster on the Mistral 7b Instruct 0.3 model. That said, the results would be even more interesting against the flagship Core Ultra 9 288V in the Lunar Lake range. For now, AMD is focused on popularizing large language models through LM Studio, aiming to make it easy for non-technical users to get started.
Key points:
AMD Strix Point APUs deliver a 27% performance improvement over Intel Lunar Lake in AI LLM applications, measured in tokens processed per second.
The Ryzen AI 9 HX 375 achieves 3.5 times lower latency in the Meta Llama 3.2 model.
The LM Studio tool is designed to make large language models easier to use, including for non-technical users.
In short, the strong performance of the AMD Strix Point APU series in AI LLM applications, together with the easy-to-use LM Studio tool, signals AMD's active push and competitiveness in the AI hardware market, bringing consumers a more convenient and efficient AI experience.