OpenAI is actively working to reduce its reliance on Nvidia chips in order to cut costs and ease supply constraints. According to Reuters, OpenAI is collaborating with TSMC and Broadcom to develop its own AI chip, expected to launch in 2026. At the same time, it has begun using AMD chips alongside Nvidia's to train its AI models, seeking more flexible and economical training options. The move signals a strategic shift toward in-house hardware development, aimed at improving efficiency and reducing dependence on external suppliers.
OpenAI has been working with Broadcom for months on an AI chip designed for running models (inference), which could be released as early as 2026. Meanwhile, OpenAI plans to use AMD chips for model training through Microsoft's Azure cloud platform.
OpenAI previously relied almost entirely on Nvidia GPUs for training, but chip shortages, delays, and high training costs prompted the company to explore alternatives. OpenAI has abandoned its earlier plan to build a network of chip fabrication plants and is instead focusing on in-house chip design.
OpenAI's move marks an active push into the AI chip field and opens new possibilities for future AI development. Through in-house development and a more diversified set of partners, OpenAI hopes to take a meaningful step toward lower costs, higher efficiency, and greater technological autonomy. It also suggests that the AI chip market will become more diverse and competitive in the years ahead.