The concept of artificial intelligence was first proposed at the Dartmouth Conference in 1956. In the more than half a century since, with the continuous development of related theories, algorithms, and computing power, AI has finally had its real "highlight moment" in recent years and become an important cornerstone of the fourth industrial revolution. In particular, last year's surge in generative AI has shown countless industries its potential to improve customer operations, sales and marketing, and software engineering practices.
Beyond the media and content industries that apply AIGC directly, such as news, film and television, and the creative sector, the disruptive effect of artificial intelligence is accelerating across industries. Many companies have recognized AI's positive role in improving their business competitiveness and are trying to integrate generative AI into different business models. Yet it is undeniable that, even with this very positive attitude across industries, the industrial application of AI is still at an exploratory stage.
"The interesting thing is that when you continue to hear all kinds of new claims and new developments about AI like today, it means that AI has not yet reached ubiquity. And when you no longer hear about AI, it has entered every It is truly ubiquitous in the work and life of individuals and every enterprise," Tang Jiong, Intel China Software Technology Cooperation Division, said at the Intel Enterprise AI Open Software Ecosystem Media Sharing Conference a few days ago.
As Tang Jiong said, although artificial intelligence has moved through the stages of machine learning, deep learning, and generative AI, there is still a long way to go before "AI everywhere" in the true sense. To genuinely implement enterprise AI and help enterprises grow, Intel has spent many years building an open AI ecosystem, coordinating resources with ecosystem partners and, based on customers' actual needs, creating diversified AI solutions on top of a rich software stack to drive enterprise business upgrades.
In Intel’s vision, promoting AI ubiquity requires starting from three aspects:
The first is to accelerate innovation. Many impressive generative AI applications and large models have already been developed, but whether for text-to-image or text-to-video, these applications have not yet truly landed in enterprises, and there is still a long way to go before a genuinely productivity-liberating "killer application" emerges. Developers therefore need to be given more convenience and encouraged to develop and innovate.
Intel's answer to accelerating innovation is "openness": providing individual and enterprise developers with more open resource platforms around software and applications, including PyTorch, TensorFlow, and Python. These programmable, open environments help more developers innovate at the application level. At the same time, to ensure a consistent experience across different hardware platforms, Intel also provides open source tools such as oneAPI and OpenVINO, which help developers work flexibly across heterogeneous platforms without reprogramming.
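As a concrete illustration of that portability, the sketch below shows how OpenVINO lets the same model be compiled for different targets simply by changing a device string. The model file name and input shape are assumptions made for this example, not details from the article.

```python
# A minimal sketch of device-portable inference with OpenVINO.
# Assumptions (not from the article): OpenVINO is installed, "model.xml" is a
# model already converted to OpenVINO IR, and it takes a 1x3x224x224 input.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# The same model is compiled for different targets ("CPU", "GPU", ...) just by
# changing the device string; the application code itself does not change.
for device in ["CPU", "GPU"]:
    if device != "CPU" and device not in core.available_devices:
        continue  # skip accelerators that are not present on this machine
    compiled = core.compile_model(model, device)
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled(dummy_input)          # dict-like map of output tensors
    print(device, next(iter(result.values())).shape)
```

Because only the device string changes, the same application code can move between CPU-only servers and machines with accelerators.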
In addition, to mitigate possible "hallucination" problems in AI inference, Intel also uses different software modules, on the premise of using reliable data, to improve the reliability and inference accuracy of the overall solution.
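As a rough illustration of grounding answers in reliable data, the hypothetical sketch below retrieves trusted passages and constrains the model to them. The retrieve() and generate() functions are placeholders invented for this example; they are not an Intel, OPEA, or partner API.

```python
# Hypothetical sketch of grounding answers in trusted data to reduce
# hallucination. retrieve() and generate() are placeholders invented for this
# example; they are not an Intel, OPEA, or partner API.
from typing import List

def retrieve(query: str, knowledge_base: List[str], top_k: int = 2) -> List[str]:
    """Naive keyword scoring over a vetted document set."""
    words = query.lower().split()
    scored = sorted(knowledge_base,
                    key=lambda doc: sum(w in doc.lower() for w in words),
                    reverse=True)
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Stand-in for a call to an LLM inference service."""
    return "[model answer conditioned on]\n" + prompt

def grounded_answer(query: str, knowledge_base: List[str]) -> str:
    context = "\n".join(retrieve(query, knowledge_base))
    prompt = ("Answer using only the context below; reply 'unknown' if it is "
              "not covered.\n\nContext:\n" + context + "\n\nQuestion: " + query)
    return generate(prompt)

print(grounded_answer("Which CPUs does the cluster use?",
                      ["The cluster uses Xeon CPUs.", "Storage is NVMe-based."]))
```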
The second is value maximization. Much of today's AI is still at the stage of novelty and entertainment, far from creating real business value. Enterprises therefore need to allocate the most appropriate workloads to the most appropriate platforms, reducing costs while optimizing resource utilization.
"Compared with cloud computing, the cost structure of AI shows a completely different trend. The cost effectiveness of cloud computing comes from the efficient scheduling of CPU, that is, by allocating the idle time of the CPU to different users to reduce the cost of a single user, but In the AI era, there is no so-called free time for AI accelerators. To achieve the most value, To maximize efficiency, we need to allocate different AI workloads to the most appropriate hardware platform. For example, some AI tasks are suitable for running in the cloud, while other tasks are more suitable for execution on edge devices or on the device side. Effective distribution of loads to different hardware platforms actually helps save costs," Tang Jiong concluded.
The third is flexible deployment. Applying AI involves an extensive and complex software and hardware stack spanning cloud, edge, and endpoint. Different modules are deployed on different platforms with different efficiencies, so flexibly deploying AI solutions across these heterogeneous platforms is itself a major challenge.
In fact, no single company can provide complete AI solutions on its own; help from application vendors and database vendors is also indispensable. To this end, Intel has proposed the concept of "deconstructing" AI solutions: splitting complex solutions into smaller modules so that different partners can focus on their areas of expertise, while AI solutions can be flexibly deployed and optimized on different hardware.
To enable enterprises to deploy generative AI quickly, apply new technologies and algorithms as soon as possible, and strengthen interoperability and RAG technology across heterogeneous ecosystems, Intel has also joined hands with partners to launch OPEA (Open Platform for Enterprise AI), which aims to integrate code and modules from different vendors into complete AI solutions.
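In that spirit of modularity, the hypothetical sketch below composes an AI pipeline from interchangeable components behind small interfaces, so one vendor's module can be swapped out without touching the rest of the solution. These classes are invented for illustration and are not the actual OPEA APIs.

```python
# Hypothetical sketch of composing an AI solution from interchangeable modules,
# in the spirit of OPEA's modular approach; these interfaces are invented for
# illustration and are not the actual OPEA APIs.
from typing import List, Protocol

class Retriever(Protocol):
    def search(self, query: str) -> List[str]: ...

class Generator(Protocol):
    def complete(self, prompt: str) -> str: ...

class KeywordRetriever:
    """One vendor's retrieval module; could be swapped for a vector database."""
    def __init__(self, docs: List[str]):
        self.docs = docs
    def search(self, query: str) -> List[str]:
        words = query.lower().split()
        return [d for d in self.docs if any(w in d.lower() for w in words)]

class EchoGenerator:
    """Stand-in for another vendor's LLM serving module."""
    def complete(self, prompt: str) -> str:
        return "[answer based on]\n" + prompt

class RAGPipeline:
    """The pipeline depends only on the interfaces, not on specific vendors."""
    def __init__(self, retriever: Retriever, generator: Generator):
        self.retriever, self.generator = retriever, generator
    def answer(self, query: str) -> str:
        context = "\n".join(self.retriever.search(query))
        return self.generator.complete("Context:\n" + context +
                                       "\n\nQuestion: " + query)

pipeline = RAGPipeline(KeywordRetriever(["Xeon CPUs power the inference tier."]),
                       EchoGenerator())
print(pipeline.answer("What powers the inference tier?"))
```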
As an open and transparent platform, OPEA allows developers and enterprises to better understand each contribution and optimization point, and helps them avoid major rework caused by choosing the wrong large model or other technologies. So far, OPEA has attracted leading global ISVs and domestic vendors, and more Chinese software industry partners are expected to join.
As the saying goes, a single tree does not make a forest. As the AI wave arrives, open, reliable, and easy-to-deploy strategies are critical for end users, and this is also the basis of Intel's cooperation with the broader ecosystem. To fully promote business expansion and unleash the potential of enterprise AI, Intel has also established in-depth cooperation with many companies, including Oriental Guoxin, Haixin Zhisheng, and Xinghuan Technology.
Oriental Guoxin is a software company focused on big data, cloud computing, and artificial intelligence, serving fields such as communications, finance, and industry. Over the past five years, Oriental Guoxin has cooperated with Intel across a range of cutting-edge computing technologies and jointly launched the "Staff" series of training-and-inference all-in-one machines. Zha Li, Vice President and CTO of Oriental Guoxin, said, "The positioning of the 'Staff' all-in-one machine coincides with OPEA at the enterprise level. Enterprise-level AI platforms want a self-owned, proprietary, privately deployed hardware platform, and this kind of industry model packages the software and hardware together and delivers them to enterprise users end to end, out of the box."
As one of the earliest companies in China to engage in AI-related business, Haixin Zhisheng has also launched extensive cooperation with Intel. According to Meng Fanjun, general manager of Haixin Zhisheng, while many companies are deploying GPUs to solve AI problems, Haixin Zhisheng uses the OpenVINO and oneAPI software provided by Intel to widen the range of workloads the CPU can handle and to improve AI inference performance on Intel's newly launched Xeon CPUs.
Founded in 2013, Xinghuan Technology focuses on developing large models, knowledge bases, vector databases, and other products, and also has a close partnership with Intel. Zhang Lei, general manager of Xinghuan Technology's Ecosystem Cooperation Department, said in an interview that the Wuya·Wenzhi AIPC edition developed in cooperation with Intel can effectively address users' challenges around insufficient cloud computing power and data security. It not only alleviates the resource bottleneck of cloud computing power, but also provides a more secure and flexible local AI application solution for users who are unable or unwilling to upload sensitive data to the public cloud.
From theoretical exploration in the last century to today's generative AI craze, artificial intelligence is constantly reshaping life and society. Today, more and more companies are actively exploring how AI can deliver business value. As a leading semiconductor manufacturer, Intel has been committed to helping companies unleash the potential of AI through open, scalable software and hardware platforms and end-to-end solutions. "For Intel, we will not discuss specific process operations or the details of AI-driven process transformation directly with enterprises, but the framework we provide gives enterprises a guidance document, similar to a blue paper or white paper, that lays out the overall steps and concepts to consider. Such documents indicate the key steps and elements enterprises should weigh when implementing AI technology," Tang Jiong said in closing.