Ethical reflections on "whether artificial intelligence should have legal subject status"
Author: Eve Cole
Update Time: 2024-11-22 14:06:01
With the popularization of large language models, generative artificial intelligence has shown increasingly strong autonomy. This has not only led people to glimpse the possibility of strong artificial intelligence, but has also rekindled debate over the subject status of this human creation. Recently, Guangming Daily published several articles on its theory page discussing whether artificial intelligence should become a legal subject, comprehensively presenting the main views of the academic community. Among them, two articles, "There Are No Theoretical Obstacles for Artificial Intelligence to Become a Legal Subject" and "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence", argue from the perspectives of philosophy and ethics that making artificial intelligence a legal subject does not conflict with the philosophical understanding of the elements of human subjectivity, will not devalue the subject status of human beings, and will not damage the human-centered subject system. The author believes, however, that these arguments fail to grasp the ontological essence of why a subject is a subject and why a personality is a personality. Even judged by the bottom line of humanism, human beings, as the paragon of all things, possess a personal dignity and subject status that distinguish them from everything else. Granting artificial intelligence subject status not only undermines human dignity and the human subject position, but also hinders the attribution and assumption of real-world responsibility. The presence of human beings is the fundamental condition for constituting a "subject"; artificial intelligence can only ever be an object.
The prerequisite for discussing whether artificial intelligence should have legal subject status is whether artificial intelligence can constitute a subject at all, that is, whether it possesses subjectivity. "Subject" is a philosophical concept with a specific referent. If artificial intelligence cannot be shown to be a subject philosophically, it will be difficult to confer subject qualifications on it within legal relations. Yet even generative artificial intelligence, with its strong autonomy and independence and its display of a certain degree of emotional awareness, is still far from possessing the status of a subject. Although "subject" and "subjectivity" carry different connotations for different philosophers (Aristotle regards the subject as an underlying substratum, Descartes as a thinker with self-awareness, and Kant defines the subject as a rational being), the concept always refers to one or more persons. Marx stated this directly: the subject is human beings, the object is nature, and "human beings are always the subject." It follows that only people, specifically people who purposefully and consciously understand or transform the world, constitute subjects. The subject derived from people may be an individual, a group, an organization, or even society as a whole, but it must rest on the existence and presence of concrete, practical human beings. The most essential determination of human beings as subjects is subjectivity, and the most important content of subjectivity is human creativity and practicality, that is, subjective initiative and self-awareness. This is the most fundamental characteristic of human beings as subjects.
So far, although artificial intelligence, including generative artificial intelligence, has shown increasingly powerful learning capabilities and a certain degree of autonomous behavior, the problems it solves remain computational problems within closed scenarios. It cannot set goals or make plans with respect to its external environment, and its autonomous, active feedback is far from "evolving" into self-awareness or agency. Artificial intelligence therefore does not possess the kind of subjectivity unique to humans and cannot constitute a subject; a fortiori, it cannot constitute a legal subject, not even a limited one. "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence" proposes that the historical evolution of the civil subject from "persons who can be non-human" to "non-humans that can be persons" reflects the de-personalization and de-ethicization of the civil subject. But the foundation of any fictitious subject can be traced back to human existence or presence. Far from contradicting the thesis that only people can be subjects, this actually reinforces it. On the one hand, legal persons such as companies and associations can be regarded as collections of plural persons: the core element of a legal person as a legal subject is still the people who enjoy rights, bear obligations, and assume responsibilities. On the other hand, the philosophical foundation on which non-human organizations constitute legal subjects does not advocate a strong anthropocentrism, but only insists on the bottom line of humanism, that is, the existence or presence of people. Granting legal subject status to an artificial intelligence that is fully automated, detached, or existing independently of humans fundamentally deviates from this philosophical purpose. Artificial intelligence is, in essence, a tool in the service of humanity.
The concept of "personality" fundamentally rejects instrumental value. Compared with arguments at the level of subject theory, the debate over whether artificial intelligence has legal subject status centers more on the theory of personality. Scholars holding the affirmative view construct new personality types for artificial intelligence, proposing notions such as the expansion of legal personality, electronic personality, instrumental personality, and limited personality, and thereby argue for its legal subject status. However, like "subject", "personality" is a concept with special connotation and value. Artificial intelligence does not enjoy human dignity, and conferring a corresponding personality on it may threaten the protection and realization of that dignity. The concept of personality and dignity is a modern product of the Enlightenment's elevation of human nature and its pursuit of civilization and progress; it marks, from a transcendental, abstract, and universal perspective, what distinguishes human beings from animals and other things. As Kant put it, beings whose existence rests not on our will but on nature have, if they are non-rational, only a relative value as means and are therefore called things; rational beings, by contrast, are called persons, for human beings, and generally every rational being, exist as ends in themselves, and their existence has absolute value. The concept of personality and dignity thus expresses the inherent and absolute value of human beings as ends in themselves rather than as means or tools for other purposes. It has accordingly become not only the most important source of value in human society but also an important basis for human rights, and it serves as the legislative basis of the United Nations Charter and of constitutions around the world.
However, as a human creation, artificial intelligence not only lacks the personality that exists as an end in itself with intrinsic value, but has also begun to threaten or damage human dignity through wrongful or improper use. On the one hand, artificial intelligence is essentially a complex tool invented and created by humans to expand human freedom and enhance human capability and efficiency. Its entire life cycle, from creation through operation to retirement, is in the service of people, so it possesses only the relative value of a tool; it cannot possess the absolute value of human beings and does not enjoy human dignity. Even if a strong artificial intelligence with self-awareness were to appear in the future, it still could not escape this positioning as a tool. On the other hand, the uncontrolled development of artificial intelligence, through the large-scale collection and computation of data about people's bodies, identities, and behavior, has produced moral aberrations such as invasion of privacy, mental manipulation, induced consumption, and fraud and deception, which to a certain extent already threaten people's subject status and personal dignity. Since "personality" fundamentally rejects instrumental value, the limited instrumental personality proposed in "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence", a coinage that bundles "personality" with words such as "instrumental" and "limited", is neither rigorous nor beneficial. It is little more than the over-extended imagination of posthumanism operating through literary devices such as analogy and metaphor; in essence it merely grants artificial intelligence a kind of economic property right, which is far from a true personality right. Personal dignity expresses the uniqueness of human beings.
Giving personal dignity to artificial intelligence, and thereby elevating non-human entities to a standing as important as that of human beings, is conducive neither to the protection of human rights nor to the development of artificial intelligence for good; its final result is the steady dissolution of human uniqueness and of the personal dignity founded upon it. Nor will making artificial intelligence a legal subject help resolve the dilemma of liability; it will instead create a more complicated situation. Another seemingly defensible reason for granting artificial intelligence subject status stems from the needs of practical development: the large-scale application of artificial intelligence and its increasing autonomy and intelligence have produced practical dilemmas in applying the existing legal framework, such as the inability to identify a legal subject, to attribute responsibility, or to hold anyone accountable. In contract law, for example, it is now common for intelligent systems to conclude contracts on people's behalf, yet the law remains unclear about whose "autonomy of will" is expressed in a sales contract concluded by an intelligent program. In tort law, if a self-driving car causes an accident resulting in injury or other harm, attributing liability becomes a difficult problem: is the responsible party the programmer, the car manufacturer, the user, or the victim? The current legal system struggles to render an effective judgment. Similarly, generative artificial intelligence raises significant intellectual property issues, but granting robots intellectual property rights fundamentally violates the legislative purpose of protecting innovation.
Accordingly, some scholars have argued from practical need that it is urgent to grant artificial intelligence subject status or legal personality and to establish a clear responsibility-sharing mechanism for it. "There Are No Theoretical Obstacles for Artificial Intelligence to Become a Legal Subject" states that "the liability property of artificial intelligence can be secured by factory-set compulsory liability insurance, with reference to the capital system of corporate legal persons", and "Limited Legal Subject: A Reasonable Choice for the Legal Status of Artificial Intelligence" likewise proposes "opening corresponding trust accounts, purchasing insurance, and similar methods" for artificial intelligence, so that it can participate in civil legal relations as the bearer of special rights and obligations and so resolve the dilemmas of attribution and imputation encountered in practice. However, such property arrangements need not rise to the level of granting artificial intelligence subject status or personality rights; it suffices to appropriately supplement and adjust the property systems of the natural persons or legal persons involved with the artificial intelligence. Granting artificial intelligence legal subject status will not help escape the liability dilemmas it creates; by introducing an unnecessary new "legal subject", it will instead produce an even more complicated liability situation. It is true that artificial intelligence is no longer as simple as a tool of the agricultural age or a machine of the industrial age, but is rather a "megamachine" in the sense of the American scholar Mumford, or an "enframing" in Heidegger's sense.
In such a complex system deeply coupled with human beings, people act in different identities, at different stages, and through different mechanisms to realize the functions of artificial intelligence, forming a situation of "distributed responsibility" with multiple responsible agents and complex interactions. Distributed responsibility, however, merely lengthens the causal chain between behaviors and makes responsibility harder to assign; it does not make responsibility disappear or transfer it elsewhere. As the creators and users of artificial intelligence, people have the duty to sort out how responsibility is distributed across the mechanisms at each stage of a complex system and to make clear attributions. Even where there is opacity caused by the "algorithmic black box", or where generative artificial intelligence with a certain degree of autonomy behaves in unexpected ways, responsibility can still be attributed through joint liability or strict liability. In any case, artificial intelligence is created by people for human purposes, so people must bear responsibility for the overall behavior of what they create and use, rather than shifting responsibility onto non-human entities that lack subject status and personality. Otherwise, allowing artificial intelligence to assume some or all responsibility on behalf of humans will inevitably produce more complicated situations of mutual buck-passing and deadlock, and may even cause responsibility to vanish because no one takes it. Strictly speaking, the various autonomous or intelligent behaviors of artificial intelligence are still only probabilistic choices based on humanity's past experience and data; they are an extension and projection of human will and values. We must therefore attribute responsibility clearly to the specific person or persons who create or use the artifact, so that more people take responsibility for these complex, hard-to-control collective behaviors and artificial intelligence is used more carefully and rationally.