Ensuring that artificial intelligence is safe, reliable, and controllable is conducive to the progress of human civilization and is a major issue that must be resolved in the development of artificial intelligence. The "Decision" of the Third Plenary Session of the 20th Central Committee of the Communist Party of China made important arrangements, including "establishing an artificial intelligence safety supervision system" and "improving the development and management mechanism for generative artificial intelligence". How can we strengthen artificial intelligence governance, effectively prevent and resolve the various security risks brought about by the development of artificial intelligence, and continuously raise the institutionalization and legalization of artificial intelligence safety supervision? This academic edition focuses on these questions.
General Secretary Xi Jinping pointed out: "Artificial intelligence is an important driving force for a new round of scientific and technological revolution and industrial transformation, and will have a profound impact on global economic and social development and the progress of human civilization." Generative artificial intelligence refers to technology that generates text, images, sounds, videos, code and other content based on algorithms, models, and rules. Supported by massive data and powerful computing power, generative artificial intelligence that can understand, speak, and interact is iterating and upgrading rapidly, showing characteristics such as strong interactivity, high versatility, and intelligent generativity. It has formed ever more rigid, high-frequency, ubiquitous, and deep connections with all walks of life, which also gives rise to increasingly real potential risks. The "Decision" of the Third Plenary Session of the 20th Central Committee of the Communist Party of China scientifically grasped the laws and characteristics of artificial intelligence development and proposed "establishing an artificial intelligence safety supervision system" and "improving the development and management mechanism for generative artificial intelligence". This reflects the objective need to better coordinate development and security, and points the way forward for promoting technological progress, industrial development and security in the field of artificial intelligence.
The technical operation of generative artificial intelligence can be divided into three stages: the preparation stage, in which pre-training and manual annotation assist in upgrading the algorithm; the computing stage, in which input data are processed by the algorithm to obtain generated products; and the generation stage, in which the generated products enter society for use. We must deeply analyze the operating mechanism of generative artificial intelligence, grasp how security risks form and develop at each stage, use legal means to strengthen systemic governance, and ensure that the enormous power contained in generative artificial intelligence always operates on the track of the rule of law.
In the preparation stage of generative artificial intelligence, data security risks are prone to occur frequently and are particularly prominent. Generative artificial intelligence refines information and predicts trends through data training, processing and analysis. This requires appropriately classifying data and establishing utilization patterns and protection methods for each type, so as to respond properly to the relevant data security risks and prevent infringement disputes arising from illegal use or improper disclosure of data. For example, government data formed in the course of government work are a core element of digital government. To draw relatively accurate conclusions, generative artificial intelligence inevitably collects and analyzes government data. The legal rules for its acquisition and utilization of government data should therefore be clarified. This both meets the need to use government data to serve society, strongly supporting the development, training and application of large models for government services and raising the intelligence level of public services and social governance, and standardizes the way such data are processed, preventing results obtained from government data from infringing personal rights or disrupting social and public order. As for personal data, generative artificial intelligence explores its potential value through combined analysis, and its collection and use of personal data, as well as the resulting outputs, may infringe on civil rights. In practice, generative artificial intelligence tends to over-collect personal data to improve the accuracy of its conclusions, for example by analyzing medical and health data to mine personal whereabouts and predict personal life trajectories.
To this end, we must insist on lawful collection, gather personal data only within the minimum scope the technology requires, set a reasonable depth of data processing, and avoid over-exploitation of potential information. In short, classified and hierarchical data security supervision requirements should be embedded in the preparation stage of generative artificial intelligence to prevent data security risks from evolving into concrete legal harms.
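The "minimum necessary scope" requirement described above can be made concrete in code. The sketch below is purely illustrative and not drawn from the article: the data categories and field allowlists are hypothetical examples of how a collection pipeline might enforce classified, minimal collection.

```python
# Illustrative sketch: enforcing minimum-scope collection by filtering
# requested fields against a per-category allowlist. Category names and
# field lists here are hypothetical, not from any regulation.

MINIMUM_FIELDS = {
    "government": {"agency", "dataset_id", "publication_date"},
    "personal":   {"user_id", "consent_scope"},  # no health or location data
}

def collect(category: str, requested_fields: set[str]) -> set[str]:
    """Return only the fields permitted for this data category."""
    allowed = MINIMUM_FIELDS.get(category, set())
    return requested_fields & allowed

# An over-broad request is trimmed to the permitted minimum.
granted = collect("personal", {"user_id", "gps_trace", "medical_history"})
```

A real system would source the allowlists from the applicable classification and grading rules rather than a hard-coded dictionary, but the enforcement point, filtering before collection rather than after, is the essence of the requirement.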
In the computing stage of generative artificial intelligence, the risk of algorithmic bias inherent in large artificial intelligence models deserves vigilance. Generative artificial intelligence mainly analyzes and processes data through algorithm models. Unlike traditional algorithm models, generative artificial intelligence not only performs machine learning but also uses large amounts of manual annotation to correct the conclusions of machine learning and drive the evolution of artificial intelligence. However, an algorithm technology with "machine learning + manual annotation" at its core also allows human will and preferences to exert a greater influence than pure machine learning does. The influence of personal preferences is superimposed on the bias of the algorithm model itself, compounding the negative effects of algorithmic bias and making its occurrence harder to trace and prevent. To prevent and resolve the risk of algorithmic bias, targeted governance should be carried out according to the principles by which, and the sites where, algorithmic bias arises. It is necessary to embed the requirements of legal regulation deeply into the algorithm models of generative artificial intelligence, promote technology for good, eliminate algorithmic bias, and ensure the rational use of generative artificial intelligence algorithms and the rational allocation of computing resources. Based on the concept of combining technology and management, we should strengthen full-cycle safety supervision of algorithms and implement legal regulatory requirements throughout the entire process of generative artificial intelligence operation.
When an algorithm is first set up, it is necessary to follow the relevant legal rules and technical standards, implement the normative requirements of "machine learning + manual annotation", and review risky algorithm modules so as to better discover technical risks in the generative artificial intelligence algorithm model. When innate algorithmic bias is discovered, corrections should be made from within the generative artificial intelligence algorithm in accordance with legal requirements, ensuring that the modified algorithm runs normally. When problems arise afterwards, traceability management of the algorithm should be carried out to achieve precise attribution and correction. We should promote and improve algorithm supervision standards for generative artificial intelligence, remedy the shortcomings of ex ante preventive review, and use technical and legal means in parallel so that development and management receive equal emphasis.
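One ex ante review step of the kind described above can be sketched in code: before deployment, a model's decisions are checked for group-level disparity. This is only a minimal illustration; the 0.8 cutoff (the "four-fifths rule" heuristic from employment-discrimination practice) is an assumed threshold, not one prescribed by the article or by Chinese regulation.

```python
# Minimal sketch of an ex ante bias review: compare favorable-outcome
# rates across groups and flag the model for manual review when the
# ratio falls below an assumed 0.8 threshold.

def disparity_ratio(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group, outcome) pairs, where outcome 1 = favorable."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [o for g, o in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values())

sample = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
ratio = disparity_ratio(sample)   # group a: 2/3 favorable, group b: 1/3
flagged = ratio < 0.8             # below threshold: route to manual review
```

The same measurement, run continuously in production and logged, would also serve the ex post traceability the text calls for: a timestamped record of disparity metrics makes attribution after an incident far easier.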
In the generation stage of generative artificial intelligence, various risks arise, such as intellectual property risks related to the generated products and risks of their misuse. Because generative artificial intelligence is highly intelligent, it can automatically compile content, intelligently polish it, convert it across modalities and generate it creatively, directly changing the production methods and supply models of content and bringing subversive changes compared with previous artificial intelligence systems. This raises issues such as the ownership and protection of intellectual property rights in the products generated by generative artificial intelligence. Some believe that the products of generative artificial intelligence are the conclusions of data algorithms, essentially calculation and imitation rather than intellectual labor, and therefore cannot become objects of intellectual property rights. Opponents hold that generative artificial intelligence simulates the structure of human brain neural networks to obtain and output data, controls its own design and production through convolutional neural networks, and that its original and innovative products should be protected by intellectual property law. At the same time, generative artificial intelligence also increases the risk of intellectual property disputes and the difficulty of protection. Some generated products may contain content that infringes the intellectual property rights of others, or may be packaged through further processing into ostensibly original works claiming full intellectual property rights, triggering related disputes. To resolve such issues in a timely manner, the technical models and technical principles of generative artificial intelligence should be substantively analyzed according to the standards of intellectual property law.
If the technology requires the intervention of human will such that the generated products possess originality and innovation, intellectual property rights should be granted, their ownership clarified, and systematic protection of intellectual property rights in the field of generative artificial intelligence strengthened. At the same time, the scope of protection for generated products must be determined reasonably, so that unlimited expansion of that scope does not hinder the promotion, application and technological development of generative artificial intelligence. The risks of misuse of generated products must also be managed. For example, works should be required to clearly identify the role generative artificial intelligence played in their creation, and precise, normalized supervision should be strengthened over deep forgery, AI face-swapping and other generated products that may involve illegal or criminal activity.
Generative artificial intelligence has wide-ranging, diffuse impacts in social applications. Beyond the risks discussed above, there are many other types of risk, such as exacerbating information asymmetry, widening the digital divide, and harming the interests of digitally disadvantaged groups. Responses must be grounded in actual conditions to minimize the negative impact of new technologies on social development.
General Secretary Xi Jinping emphasized: "Adhere to a people-centered approach and intelligence for good." At present, artificial intelligence technology is advancing with each passing day. It not only profoundly changes people's production and lifestyles and accelerates economic and social development, but also has an impact on legal norms, ethics, public governance and more. Among these, threats to privacy and personal information security are important issues that deserve attention. The "Decision" of the Third Plenary Session of the 20th Central Committee of the Communist Party of China made important arrangements for "establishing an artificial intelligence safety supervision system". Protecting privacy rights and personal information security is an integral part of artificial intelligence safety supervision. Privacy protection in the era of artificial intelligence must be strengthened to ensure the security of personal information.
In the era of artificial intelligence, privacy rights face severe challenges. Privacy refers to the peace of a natural person's private life and the private spaces, private activities, and private information that he or she does not want others to know. The Civil Code stipulates: "Natural persons enjoy the right to privacy. No organization or individual may infringe upon the privacy rights of others by spying, intrusion, leakage, disclosure, or other means." The right to privacy, as a core element of personality rights, is an important foundation of personal dignity. Not being disclosed and not being known are the core demands of privacy rights. At present, artificial intelligence is quietly entering every aspect of people's production and life, giving rise to application scenarios such as smart medical care, smart transportation, and smart recommendation. Certain flaws in the technology itself, together with imperfect rules, have given rise to privacy infringement problems that cannot be ignored. For example: illegally collecting and using personal information, analyzing it to push frequent so-called "personalized" "precision advertisements", or leaking it to third parties, so that private life is constantly invaded by spam; using personal information for "big data price discrimination against familiar customers", achieving precise "one customer, one price" discrimination that causes citizens property losses; re-identifying desensitized personal information, with data leaks arising from inadequate protection measures and rampant illegal buying and selling of personal information infringing personal information security; and using personal information for deep forgery, committing fraud and other illegal and criminal acts through voice simulation, AI face-swapping and similar means.
This shows that infringement of privacy rights not only violates the personal dignity of citizens, but also causes other serious social consequences.
The privacy-eroding characteristics of the technology exacerbate personal information security risks. When artificial intelligence based on big data first appeared, many people viewed the new technology with wait-and-see skepticism. As artificial intelligence has continuously improved users' product experience and psychological comfort through anthropomorphic appearances, personalized services, and immersive interaction, more and more people have gradually become loyal users and enjoy the many conveniences it brings them. With the popularization of IoT technologies for human-computer interaction and the interconnection of all things, application scenarios such as smart homes, smart offices, smart factories, and smart driving keep expanding. Individuals can state their demands and obtain services in digital space in the form of digital humans, while also unknowingly transmitting personal information to artificial intelligence. Every trace an individual leaves in digital space is digitized into personal information and serves as that person's "medium for connecting with the world". At the same time, artificial intelligence tends to collect and use personal information excessively in order to improve service quality. All of this gives artificial intelligence distinctly privacy-eroding technical characteristics. It is precisely within the flows of personal information that users have grown accustomed to that big data mixing public and private information are mined, integrated, analyzed, and utilized. It is difficult for people to detect with their own senses that their privacy rights have been violated, and personal information security is thus exposed to higher risks.
Respect individual choices and insist on informed consent. Different people accept to different degrees that their personal information will be known and used. Individual wishes should be respected and the principle of informed consent implemented scientifically and rationally. The principle has two aspects: being informed and giving consent. Consent must be informed; without full knowledge and understanding, there can be no true consent. Knowledge, understanding and voluntariness are its three elements: only on the basis of being fully informed can individuals independently express consent. This requires providing clear, easy-to-understand explanations when users use artificial intelligence and obtaining their consent to the collection and use of personal information. If personal information will flow between different platforms, users must be told the scope, recipients, and boundaries of that flow. For a smooth user experience, users may also be offered the choice of authorizing all at once or in stages. Users should be informed of the scope, methods and purposes of collection and of the parties with whom personal information is shared, and should be able to opt out at any time. When personal information is analyzed, users should be alerted and asked for real-time authorization through pop-up windows or similar means. Setting a data life cycle and deleting personal information on schedule are also effective ways to protect personal information security.
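The consent lifecycle described above, scoped authorization, withdrawal at any time, and deletion when the retention period ends, can be sketched as a small data structure. This is a hypothetical illustration; the field names and the one-year retention default are assumptions, not requirements from the Personal Information Protection Law.

```python
# Hypothetical sketch of a consent record: every use of personal
# information is checked against the scope the user agreed to, whether
# consent has been withdrawn, and whether the retention period expired.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsentRecord:
    scopes: set[str]               # purposes the user agreed to
    granted_on: date
    retention: timedelta = timedelta(days=365)  # assumed default
    withdrawn: bool = False

    def permits(self, scope: str, today: date) -> bool:
        expired = today > self.granted_on + self.retention
        return scope in self.scopes and not self.withdrawn and not expired

record = ConsentRecord({"service_improvement"}, date(2024, 1, 1))
ok = record.permits("service_improvement", date(2024, 6, 1))    # in scope
ads = record.permits("targeted_ads", date(2024, 6, 1))          # never agreed
record.withdrawn = True                                         # opt out any time
after = record.permits("service_improvement", date(2024, 6, 1)) # now blocked
```

The design point is that consent is checked at use time rather than only at collection time, which is what makes "opt out at any time" enforceable in practice.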
Improve technical means to ensure intelligence for good. For problems caused by technology, we must be adept at seeking solutions from a technical perspective. In the era of artificial intelligence, the challenges facing privacy rights are directly triggered by the evolution of technology. From analytical to generative artificial intelligence, each iterative upgrade of the technology may bring new impacts on privacy rights. Technical solutions must therefore occupy a key position: firewalls protecting privacy rights and personal information security should be built by improving database security, core data encryption, personal data desensitization and other technologies. Personal information generally passes through three stages, collection, storage and use, and each stage may involve risks to privacy rights and personal information security, so effective technical protection should be applied according to the circumstances of each stage. In the collection stage, the promotion and application of anonymization technology should be strengthened. Although the collection of personal information is unavoidable, as long as it is anonymized so that the information cannot be matched to an identity, the right to privacy is not infringed. In the storage stage, encryption technology must be improved. Data are currently stored mainly in databases or in the cloud, and the main threats at this stage are external intrusion and theft, and unauthorized viewing, use, and leakage by insiders. Data encryption must be strengthened and data access rights strictly controlled.
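As one concrete example of the collection-stage techniques mentioned above, the sketch below shows keyed pseudonymization: direct identifiers are replaced with salted HMAC digests so records remain linkable without storing the identity itself. This is an illustration, not a complete anonymization scheme; true anonymization must also address re-identification through quasi-identifiers, which this ignores, and the salt value here is a placeholder.

```python
# Illustrative pseudonymization for the collection stage: the raw name
# never reaches storage, but the same person always maps to the same
# pseudonym, so records can still be linked for analysis.
import hashlib
import hmac

SALT = b"hypothetical-secret-salt"  # in practice: secret, rotated, access-controlled

def pseudonymize(identifier: str) -> str:
    return hmac.new(SALT, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Zhang San", "diagnosis": "flu"}
stored = {"pid": pseudonymize(record["name"]), "diagnosis": record["diagnosis"]}
linkable = pseudonymize("Zhang San") == stored["pid"]  # same person, same pid
```

Using a keyed HMAC rather than a plain hash matters: without the secret salt, common names could be recovered by simply hashing candidate values and comparing.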
In the use stage, real-time intervention, disruption, and blocking of illegal uses of personal information should be strengthened technically, adding a further layer of protection for privacy rights and personal information security.
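The use-stage safeguard just described amounts to an interception point: every access to personal information is checked against an approved purpose list in real time, and blocked attempts are logged. The sketch below is hypothetical; the purpose names and policy are invented for illustration.

```python
# Hypothetical use-stage interceptor: block accesses whose declared
# purpose falls outside the approved list, and log every attempt so
# misuse can be traced afterwards.

APPROVED_PURPOSES = {"order_fulfillment", "customer_support"}  # assumed policy
audit_log: list[tuple[str, str, str]] = []

def access_personal_info(requester: str, purpose: str) -> bool:
    allowed = purpose in APPROVED_PURPOSES
    audit_log.append((requester, purpose, "allowed" if allowed else "blocked"))
    return allowed

ok = access_personal_info("analyst01", "customer_support")    # permitted
bad = access_personal_info("analyst01", "resale_to_broker")   # blocked, logged
```

The audit log is as important as the block itself: it turns a silent refusal into evidence usable for the attribution and correction the article calls for.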
My country's legal rules are becoming ever more complete and protection continues to be strengthened; in particular, the Civil Code and the Personal Information Protection Law contain detailed provisions on privacy rights and personal information protection, clarifying the boundaries of rights and obligations in personal information processing activities. Legal protection of privacy rights and personal information security in the era of artificial intelligence will surely reach a higher level, providing strong legal safeguards for the healthy development of artificial intelligence and better benefiting the people.
The prosperity of science and technology makes a nation prosper, and strength in science and technology makes a country strong. Since the 18th National Congress of the Communist Party of China, my country has attached great importance to the development of artificial intelligence, actively promoted the deep integration of the Internet, big data, and artificial intelligence with the real economy, cultivated and expanded intelligent industries, accelerated the development of new productive forces, and provided new momentum for high-quality development. General Secretary Xi Jinping pointed out: "We must adhere to the unity of promoting development and managing in accordance with the law, not only vigorously cultivating new technologies and applications such as artificial intelligence, the Internet of Things, and next-generation communication networks, but also actively using laws, regulations, and standards to guide the application of new technologies." General Secretary Xi Jinping's important exposition provides a fundamental guideline and guide to action for the development of artificial intelligence in our country. To vigorously develop artificial intelligence and improve the level of artificial intelligence safety governance, we must fully implement the important deployment of "establishing an artificial intelligence safety supervision system" proposed in the "Decision" of the Third Plenary Session of the 20th Central Committee of the Communist Party of China, accurately grasp the trends of artificial intelligence development, focus on cutting-edge artificial intelligence technologies and the risks and challenges they bring, strengthen forward-looking thinking, and constantly explore innovative solutions for artificial intelligence governance.
Currently, generative artificial intelligence has created a new paradigm of human-computer interaction. With its powerful capabilities for interaction, understanding and generation, it has opened up broad prospects for AI agents, which take large natural language models as their core component, integrate memory, planning and tool use, and possess the capacity to perceive and act. AI agents have become the most important frontier research direction toward general artificial intelligence and a new track on which technology companies are competing to position themselves. Using a large natural language model as its "intelligence engine", an AI agent is autonomous, adaptive and interactive. It can significantly improve production efficiency, enhance user experience, provide decision support beyond human capabilities, and be applied in real-world scenarios such as software development and scientific research. Although large-scale commercialization is still in the early stage of exploration and incubation, the trends represented by AI agents, such as the integration of the virtual and the real and deep human-computer interaction, have important guiding significance for economic and social development. However, due to technical limitations, AI agents may also cause complex, dynamic, and unforeseen risks and concerns.
In terms of design logic, an AI agent obtains cognitive capabilities through its control end, acquires and uses information from the surrounding environment through its perception end, and ultimately becomes an intelligent system that perceives and acts through physical entities at its action end.
On the control end, the large natural language model serves as the "brain" of the AI agent: it forms knowledge by learning from massive data and constitutes the memory module of the agent's control system. However, the reliability and accuracy of the generated content carry risks. For example, the content a model generates may not follow its information sources or may be inconsistent with the real world, producing so-called "machine hallucinations"; and human biases in the training data may undermine the fairness of the agent's decisions.
On the perception end, in order to fully understand explicit and implicit information in specific situations and accurately perceive human intentions, AI agents expand the scope of perception from pure text to multi-modal fields spanning text, vision and hearing. Although this improves decision-making capabilities, integrating and analyzing multi-source data of different channels and types may cause a series of privacy leaks and data security risks. For example, improper use and sharing of highly personal and permanent biometric data such as facial information, fingerprints, and voiceprints can create long-term or even permanent privacy risks. To handle complex tasks better, multi-agent systems, in which multiple AI agents plan, cooperate and even compete to complete tasks and improve performance, will become mainstream and the norm. The interaction of multiple AI agents within a system may create unforeseen systemic security risks. Even if each algorithm appears safe and reasonable when operating alone, their combination and interaction may still produce entirely different, unpredictable risks that can evolve and escalate rapidly. For example, in the stock market, if AI agents are widely used and multiple algorithms automatically identify small changes in stock prices and simultaneously execute large numbers of high-frequency trades for arbitrage, they may trigger systemic security incidents such as a flash crash.
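The flash-crash dynamic can be shown with a deliberately simple toy model, which is in no way a market simulation: each algorithm's reaction to a small dip looks modest in isolation, but when many identical algorithms react in the same tick, the drop compounds. All numbers below (trigger threshold, per-order price impact) are invented for illustration.

```python
# Toy illustration of compounding multi-agent risk: each agent sells when
# the price has fallen more than 1% from a baseline of 100, and each sell
# order pushes the price down a further 0.5%.

def step(price: float, n_agents: int, drop_trigger: float = 0.01) -> float:
    """One trading tick: agents that see the dip all sell simultaneously."""
    if price < 100 * (1 - drop_trigger):
        price *= (1 - 0.005) ** n_agents  # impact compounds per order
    return price

crowded = 98.0   # a small initial dip of 2%
solo = 98.0
for _ in range(5):
    crowded = step(crowded, n_agents=50)  # 50 agents react each tick
    solo = step(solo, n_agents=1)         # a single agent reacting alone
```

With one agent the price barely moves over five ticks; with fifty identical agents the same starting dip cascades far below its initial level, which is the "individually safe, jointly unsafe" pattern the text describes.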
On the action end, AI agents deployed in real physical environments will likely take more three-dimensional, anthropomorphic forms. Unlike virtual space, real space relies on interactive learning: AI agents need rich, all-round information perception to observe, learn and act, and their feedback-based learning and optimization may constitute an intrusion into personal privacy that is both comprehensive and invisible. For example, interpreting users' body language and sensing more complex user activities, or continuing to collect data covertly without user authorization, may create enormous data security risks once a security vulnerability appears in the system. In addition, as the autonomy of AI agents continues to increase, they may not only interfere with human cognition and emotions, but also challenge humans' capabilities and status as independent decision-makers and actors. For example, some chatbots produce output that affects users' emotions during interactions, sometimes in negative and manipulative ways.
In the face of the risks and challenges brought by AI agents, and in order to align their behavior with human intentions and values, innovative governance solutions must be explored to ensure that the artificial intelligence safety supervision system is effective. The development of AI agents is at a critical "zero to one" stage, and a governance plan should be able to meet constant change with constancy, ensuring that the technology's development and application always remain on a controllable track. The development, training, deployment, operation and service of AI agents involve a highly specialized division of labor, forming a complex hierarchical structure in which each layer has different participants, stakeholders and potential risk factors. This gives AI agents the characteristics of a "modular" industrial chain. A modular governance framework can therefore be constructed covering the entire industrial chain and each of its ends and layers, with corresponding governance modules designed around key nodes such as the data module, the algorithm module, and the model architecture. For example, during deployment, different governance modules can be flexibly selected and combined according to the characteristics of the application scenario and deployment mode to build a matching governance solution. The modular governance framework provides an operable method of decomposition: by breaking governance objectives down into relatively independent yet coupled governance modules, it gradually builds up a governance system, improving the flexibility and pertinence of governance while also adapting to rapid technological iteration.
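The modular decomposition just described can be expressed as a simple data structure: each governance dimension holds independent modules, and a deployment scenario composes the ones it needs. The module and layer names below are illustrative placeholders, not taken from any actual framework.

```python
# Sketch of modular governance composition: governance concerns live in
# independent modules keyed by dimension, and each deployment scenario
# assembles the combination it requires.

GOVERNANCE_MODULES = {
    "data":      ["provenance check", "classified-data handling"],
    "algorithm": ["bias audit", "traceability logging"],
    "model":     ["red-team evaluation"],
    "scenario":  ["high-risk human review"],
}

def compose(required_layers: list[str]) -> list[str]:
    """Assemble a governance plan from the modules a scenario requires."""
    plan: list[str] = []
    for layer in required_layers:
        plan.extend(GOVERNANCE_MODULES.get(layer, []))
    return plan

# A hypothetical high-risk deployment combines three of the four layers.
plan = compose(["data", "algorithm", "scenario"])
```

Because modules are independent, a new risk discovered after a technology iteration can be addressed by adding or swapping one module without redesigning the whole plan, which is the adaptability the framework claims.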
When building governance modules along dimensions such as data, algorithms, models, and scenarios, technology should be used to empower supervision, creating intelligent governance tools compatible with the modular governance framework for AI agents. This bridges the tension between dynamic risks and static regulation and enables precise governance of specific high-risk scenarios.
It is necessary to build an interactive governance ecosystem for AI agents. AI agents are deeply interactive, highly interconnected, and dynamically adaptive. Governance should accordingly move beyond the traditional individual-centered model and promote a governance ecosystem of broad interconnection, multi-party participation, and multi-level collaboration. Within it, technical communities such as developers and operations personnel will play a vital "whistleblower" role in the governance of AI agents, and their supervisory advantages should be better used to build effective restraint mechanisms within artificial intelligence companies. We should also actively improve users' digital literacy, strengthen their awareness of using artificial intelligence lawfully, safely and responsibly, achieve positive interaction between people and AI agents, and promote a healthy, upward-trending state of operation.