Google recently updated its terms of use for generative AI, allowing customers to use its generative AI tools to make automated decisions in high-risk areas such as health care and employment, but only under human supervision. The move has sparked industry debate over the use of AI in high-risk domains and over how such systems should be overseen, and it contrasts with the approach of competitors OpenAI and Anthropic, which place stricter restrictions on high-stakes automated decision-making. The update also underscores the urgent need for AI regulation worldwide and the challenge of balancing AI innovation against its potential risks.
Google recently updated its terms of use for generative AI, explicitly allowing customers to use its generative AI tools for "automated decision-making" in "high-risk" areas such as health care and employment, provided a human supervises the process. The change appears in the company's newly published Generative AI Prohibited Use Policy.
Under the updated policy, customers may use Google's generative AI, with human supervision, to make automated decisions that could have a "material adverse impact" on an individual's rights, in high-risk areas including employment, housing, insurance, and social welfare. The previous terms appeared to impose a blanket ban on high-risk automated decision-making, but Google says it has in fact permitted such use under human supervision all along.
A Google spokesperson told the media: "The human supervision requirement has always existed in our policies and covers all high-risk areas. We have simply recategorized some terms and listed some examples more explicitly to make them easier for users to understand."
By contrast, Google's main competitors OpenAI and Anthropic impose stricter rules on high-stakes automated decision-making. OpenAI prohibits the use of its services for automated decisions related to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to make automated decisions in high-risk areas such as law, insurance, and health care, but only under the supervision of a "qualified professional," and requires customers to disclose that they are using AI for such decisions.
Regulators have expressed concern that AI systems used for automated decision-making can produce biased results. Research shows, for example, that AI may perpetuate historical discrimination in the approval of loan and mortgage applications.
Human Rights Watch and other nonprofit organizations have specifically called for a ban on "social scoring" systems, arguing that they threaten people's access to social security benefits, can invade privacy, and can produce biased profiling.
In the EU, high-risk AI systems, including those involved in personal credit and employment decisions, face the strictest oversight under the AI Act. Providers of these systems must, among other things, register in an EU database, implement quality and risk management, employ human supervisors, and report incidents to the relevant authorities.
In the United States, Colorado recently passed a law requiring AI developers to disclose information about "high-risk" AI systems and to publish a summary of each system's capabilities and limitations. Meanwhile, New York City prohibits employers from using automated tools to screen candidates unless the tools have undergone a bias audit within the past year.
Highlights:
Google allows the use of generative AI in high-risk areas, but requires human supervision.
Other AI companies such as OpenAI and Anthropic have stricter restrictions on high-risk decisions.
Regulators in various countries are reviewing AI systems for automated decision-making to prevent biased results.
Google's update to its generative AI terms of use has triggered wide discussion of AI ethics and regulation. Governments and institutions around the world are actively exploring how to better regulate AI technology to ensure its safe and responsible development. Going forward, AI applications in high-risk fields will face more rigorous review and oversight.