How to enable artificial intelligence to achieve cognitive justice
Author: Eve Cole
Update Time: 2024-11-22 17:54:01
In recent years, artificial intelligence has been applied across many industries and has become a "good helper" to humanity. But problems have emerged along the way. Among the most prominent: artificial intelligence systems generate erroneous "knowledge" from poor data sources and flawed algorithm design, yet lack the ability to make value judgments about what they output and cannot bear the corresponding cognitive responsibility, leading to systemic cognitive bias. From the perspective of science and technology ethics, this violates the principle of cognitive justice.

Cognitive justice means ensuring that the voices of all individuals and groups are fairly heard and understood in the generation, dissemination, and acquisition of knowledge, with equal opportunity to be transformed into humanity's public knowledge. In the past, knowledge generation relied mainly on the perception, memory, reasoning, and testimony of human individuals. With the rapid iteration of artificial intelligence, however, and especially the widespread adoption of conversational AI, traditional modes of knowledge generation and dissemination are undergoing profound change. Today's artificial intelligence is not merely good at collecting information and executing tasks; it is a "cognitive technology" that can generate and disseminate knowledge, processing cognitive content (such as propositions, models, and data) and performing cognitive operations (such as statistical analysis, pattern recognition, prediction, inference, and simulation). "Machine knowledge" grounded in data and algorithms now challenges human knowledge grounded in experience and professional judgment, fragmenting cognition and undermining the cognitive justice of the traditional human knowledge system.
Today, generative artificial intelligence is being embedded in virtually every scenario and social process where it can substitute for human cognition and decision-making. Faced with the challenge that artificial intelligence poses to cognitive justice in knowledge generation, how do we make AI smarter, and how do we make it a helper that improves cognition and ensures technology is used for good? The author believes we must work along four dimensions: improving data quality, improving algorithm design, optimizing human-machine collaboration, and strengthening ethical governance.

Responsible algorithm design is the core architecture for achieving cognitive justice. As a powerful cognitive technology, artificial intelligence identifies informational patterns and trends through data mining and statistical analysis, and thereby participates in generating humanity's public knowledge. Because algorithms focus on patterns that appear frequently in the training data, data that is insufficiently common or statistically weak is often overlooked and excluded, preventing the algorithm from fully understanding and responding to it. Algorithm design that relies on statistical frequency thus constitutes a kind of "cognitive blind obedience," which systematically marginalizes the voices of some groups. This design flaw not only limits the algorithm's cognitive capacity but also exacerbates social inequality and cognitive oppression, undermining cognitive justice. The root cause of such "blind obedience" is the absence, in algorithm design and training, of an understanding of different groups' cultural backgrounds.
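The mechanism of frequency-driven "blind obedience" can be made concrete with a minimal sketch. The example below is purely illustrative (the views, counts, and the 5% floor are invented for demonstration): an aggregation rule that keeps only the most frequent response silently discards minority views, while a diversity-aware rule preserves any view above a small threshold.

```python
from collections import Counter

def majority_answer(responses):
    """Frequency-only aggregation: return the single most common
    response and discard everything else. This is the 'cognitive
    blind obedience' pattern described above."""
    return Counter(responses).most_common(1)[0][0]

def diversity_aware_answers(responses, floor=0.05):
    """Diversity-aware aggregation: keep every view whose share of
    responses exceeds a small floor, instead of collapsing to the
    single majority view."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {view for view, n in counts.items() if n / total >= floor}

# Hypothetical data: a valid minority view held by 10% of respondents.
responses = ["view_A"] * 90 + ["view_B"] * 10
print(majority_answer(responses))         # -> 'view_A' ("view_B" vanishes)
print(diversity_aware_answers(responses)) # -> both views survive
```

The point of the sketch is that marginalization here is not malice but a property of the aggregation rule itself; changing the rule changes whose voice survives.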
Therefore, beyond the algorithmic transparency and explainability we so often discuss, algorithm design that meets the requirements of cognitive justice must also account for the cognitive diversity of different communities.

High-quality data supply is the infrastructure for realizing cognitive justice. Another important factor in AI's undermining of cognitive justice is data quality. Big data is the cognitive and decision-making basis of intelligent technology; it can present the characteristics and trends of human social life more clearly and intuitively. Unlike traditional public human knowledge, however, data is not universally shared. What data can be collected and used for analysis, how that data is classified and extracted, and whom it ultimately serves all remain opaque, resulting in uneven data quality. Training data for algorithms often comes from large Internet databases and online communities, and such data is likely to contain bias and discrimination. Knowledge generation by artificial intelligence therefore requires that data sources be reliable and content diverse, that data be debiased, and that data be continuously monitored and updated to cope with new problems brought by social and cultural change. Only with a high-quality data supply can AI systems provide accurate knowledge and decision support within multicultural and complex social structures.

Large-scale human-machine collaboration is an effective means of achieving cognitive justice. From signal translation in brain-computer interfaces to joint human-machine undertakings such as intelligent medical decision-making and AI for Science, human-machine collaboration at every level involves the cognitive processes of transmitting, interpreting, and integrating human knowledge and machine knowledge.
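One small, concrete step toward the data-quality checks described above is auditing how groups are represented in a dataset and reweighting under-represented ones. The sketch below is a minimal illustration under invented data (the "urban"/"rural" groups and their counts are hypothetical); real debiasing pipelines are far more involved.

```python
from collections import Counter

def representation_report(records, group_key="group"):
    """Audit step: report each group's share of the dataset, a first
    check on whether the data supply is diverse."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def rebalancing_weights(records, group_key="group"):
    """Mitigation step: inverse-frequency weights so under-represented
    groups are not drowned out during training. With k groups, weight
    for group g is 1 / (k * share_g)."""
    shares = representation_report(records, group_key)
    k = len(shares)
    return {g: 1.0 / (k * s) for g, s in shares.items()}

# Hypothetical skewed sample: 80% urban, 20% rural records.
data = [{"group": "urban"}] * 80 + [{"group": "rural"}] * 20
print(representation_report(data))  # shares per group
print(rebalancing_weights(data))    # rural records get a larger weight
```

With these weights, each group's total weighted mass becomes equal (80 × 0.625 = 20 × 2.5 = 50), which is the same idea behind inverse-frequency class weighting in common machine-learning practice.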
Given the distinct cognitive characteristics of humans and machines, a large-scale, rational "human-machine cognitive division of labor" can effectively reduce human-machine cognitive bias. In scientific research, for example, the division might run as follows: humans set goals, propose hypotheses, and interpret results, supplying creative thinking, on-the-spot decision-making, ethical judgment, and intuitive understanding of unstructured problems; artificial intelligence processes large volumes of structured data, performing pattern recognition and predictive analysis to surface patterns and correlations that would otherwise go unnoticed. In this kind of collaboration, AI becomes a "partner" that inspires new ideas rather than a "machine" that generates erroneous knowledge.

High-level ethical governance is the institutional support for realizing cognitive justice. Cognitive justice demands diverse knowledge generation, equal knowledge acquisition, unbiased knowledge dissemination, and responsible knowledge use, all of which require a high level of AI ethical governance. Enterprises should consider the needs and perspectives of different social groups in algorithm design, and conduct continuous risk monitoring and value assessment of their algorithms; they should also explore an AI-ethics crowdsourcing model that encourages researchers and users from different backgrounds to participate in investigating and judging AI ethical risks, so that those risks can be resolved in a timely manner.
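The human-machine division of labor sketched earlier can be expressed as a simple triage loop: the machine handles the high-confidence pattern matches, and cases below a confidence threshold are routed to human judgment. The scoring and review functions below are hypothetical stand-ins, not any particular system's API; this is a structural sketch only.

```python
def machine_screen(cases, score, threshold=0.7):
    """Machine stage: score each case (e.g. a model's confidence from
    pattern recognition over structured data) and split into cases
    the machine is confident about and cases needing human judgment."""
    confident, needs_human = [], []
    for case in cases:
        (confident if score(case) >= threshold else needs_human).append(case)
    return confident, needs_human

def collaborate(cases, score, human_judge, threshold=0.7):
    """Full loop: machine decides high-confidence cases; humans supply
    contextual and ethical judgment on the rest."""
    confident, needs_human = machine_screen(cases, score, threshold)
    return confident + [human_judge(c) for c in needs_human]

# Hypothetical cases: a bare number stands in for a real confidence score.
cases = [0.9, 0.2, 0.8]
result = collaborate(cases, score=lambda c: c,
                     human_judge=lambda c: f"human-reviewed:{c}")
print(result)  # machine keeps 0.9 and 0.8; 0.2 goes to the human
```

The design choice worth noting is that the threshold encodes where machine competence ends; setting it is itself a human, value-laden decision, which is exactly the kind of cognitive responsibility the text argues machines cannot bear.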
For its part, the government should actively encourage the transformation of private data into public data, accelerate the opening and sharing of public data with society as a whole, expand data diversity, and strengthen data reliability; it should also seek society-wide solutions to the potential ethical risks of artificial intelligence, and establish an agile governance mechanism encompassing forward-looking foresight, real-time assessment, and systematic adjustment.