Yuanshi Intelligence (RWKV), on the strength of its disruptive generative AI architecture, closed an angel round worth tens of millions of yuan in December 2023, doubling its valuation. The round was led by Skyrim Capital, and the funds will go mainly to team building, technology research and development, and product commercialization. As an innovative alternative to the traditional Transformer architecture, RWKV aims to address the efficiency and accuracy shortcomings of existing large language models and to open up new possibilities in the field of AI.
Against the backdrop of the global generative AI wave that began in 2022, Yuanshi Intelligence (RWKV) completed its tens-of-millions-of-yuan angel round in December 2023, with Skyrim Capital as the investor. The financing doubled the company's valuation, and the funds will be used for team expansion, research and development of the new architecture, and product commercialization.
The emergence of RWKV poses a serious challenge to the traditional Transformer architecture. As large language models (LLMs) have developed, parameter counts have grown ever larger, yet shortcomings such as hallucination and accuracy have remained difficult to resolve. The RWKV founding team therefore decided to explore an entirely new architecture in pursuit of greater efficiency and flexibility.
RWKV's design philosophy is fundamentally different from the Transformer's. Co-founder Luo Xuan explained that a traditional Transformer model must re-read the entire preceding context every time it generates a token, whereas RWKV does not need to keep the state of every individual token, which significantly reduces the amount of computation. By combining the strengths of RNNs (recurrent neural networks), RWKV achieves breakthroughs in both efficiency and language-modeling capability.
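The cost difference described above can be sketched numerically. The toy loop below is not the actual RWKV implementation, just an illustration of the claim: a Transformer-style decoder attends over a key/value cache that grows with every token, while an RNN-style decoder updates a fixed-size state once per token (the decay constants here are arbitrary placeholders).

```python
import numpy as np

d = 8          # hidden size (arbitrary for illustration)
T = 16         # sequence length
rng = np.random.default_rng(0)
tokens = rng.standard_normal((T, d))

# Transformer-style decoding: at step t, attention scores are computed
# against all t cached tokens, so per-token work grows with t and total
# work is O(T^2).
kv_cache = []
attn_ops = 0
for x in tokens:
    kv_cache.append(x)
    attn_ops += len(kv_cache)   # one score per cached token

# RNN/RWKV-style decoding: a fixed-size state is updated once per token,
# so per-token work is constant and total work is O(T).
state = np.zeros(d)
rnn_ops = 0
for x in tokens:
    state = 0.9 * state + 0.1 * x   # illustrative decay-and-mix update
    rnn_ops += 1

print(attn_ops)  # 136 = T*(T+1)/2
print(rnn_ops)   # 16 = T
```

With T = 16 the attention-style loop already does 136 score computations versus 16 state updates; the gap widens quadratically as the context grows, which is the efficiency argument made above.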
The advantage of this innovative architecture is that RWKV processes information within a fixed-size state space. Through reinforcement-learning methods, the model can learn to decide for itself when it needs to revisit earlier text, improving its memory capability. Compared with traditional models, RWKV performs strongly on multiple benchmarks, demonstrating its gains in language-learning efficiency.
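How a fixed-size state can still summarize an arbitrarily long past can be sketched with a simplified WKV-style recurrence: rather than attending over the full history, a numerator/denominator pair is carried forward with exponential decay. This is a conceptual simplification (scalar decay `w`, no bonus term for the current token), not the exact formulation used in the RWKV papers.

```python
import numpy as np

d = 4     # channel count (illustrative)
T = 10    # sequence length
rng = np.random.default_rng(1)
k = rng.standard_normal((T, d))   # "key" signals
v = rng.standard_normal((T, d))   # "value" signals
w = 0.5   # decay; per-channel and learned in real RWKV, scalar here

num = np.zeros(d)   # decayed running sum of exp(k_i) * v_i
den = np.zeros(d)   # decayed running sum of exp(k_i)
outputs = []
for t in range(T):
    num = np.exp(-w) * num + np.exp(k[t]) * v[t]
    den = np.exp(-w) * den + np.exp(k[t])
    outputs.append(num / den)   # decayed weighted average of past values

out = np.stack(outputs)
print(out.shape)  # (10, 4)
```

At every step the model touches only the two fixed-size vectors `num` and `den`, yet the output is a weighted average over the entire past, with older tokens fading by the decay `w`: this is the "limited state space" idea in miniature.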
Currently, RWKV has completed training of models ranging from 0.1B to 14B parameters and has released a 32B preview model to overseas communities. Looking ahead, Yuanshi Intelligence plans to launch RWKV-7 models at 70B parameters and above in 2025, and to explore new inference frameworks and chips to further improve model performance.
On the business side, RWKV not only maintains open-source projects but is also actively pursuing commercialization, spanning AI music generation and enterprise partnerships; it has reached cooperation agreements with companies such as State Grid. As the technology develops and commercialization advances, RWKV is striving to become the "Android and Linux" of the large-model field.
RWKV's innovative architecture and commercial strategy show strong potential in the fiercely competitive large-model field, and its future development is worth watching. Its stated goal of becoming the "Android and Linux" of large models likewise reflects lofty ambitions. I believe that as the technology continues to mature and the business model improves, RWKV will achieve even greater things in the field of AI.