Recently, domestic GPU maker Moore Threads and Shizhe AI ("Teacher AI"), a large AI model focused on education across all disciplines, jointly announced the completion of large model training and testing. Relying on the Moore Threads KUAE kilo-card intelligent computing cluster, Shizhe AI successfully completed high-intensity training of a 7-billion-parameter model. The run finished in one week and training efficiency met expectations, demonstrating the capability of the domestic full-featured GPU kilo-card platform for hundred-billion-scale training. The result marks important progress for domestic AI technology in education and lays a solid foundation for the innovative development of future education AI models.
It is understood that Shizhe AI was founded in 2020 by a core team from Tsinghua University and focuses on large education models spanning all disciplines. Since its open beta, it has attracted more than 25,000 users, supports more than 30 subjects, and covers more than 2,000 textbooks.
This training test verified the strong performance of the Moore Threads KUAE kilo-card intelligent computing cluster in large model training, laying the groundwork for the two parties' future innovation in education AI models. Both parties will continue adaptation work on large model inference, optimizing the technology to handle high-frequency inference demand.
Liu Chunjiang, CEO of Shizhe AI, said: "This training test demonstrates the powerful performance of the KUAE kilo-card intelligent computing cluster, and we are confident in domestic computing power. Going forward, Shizhe AI will run more of its core services on the KUAE cluster to provide users with efficient and stable computing services."
The success of this collaboration marks a major breakthrough for domestic GPUs in large AI model training and signals that domestic AI technology will play a greater role in education. In the future, the two parties will continue to cooperate to provide users with better educational AI services and advance the intelligent development of the education sector.