Google recently updated its terms of use for generative AI to spell out how its AI may be applied in "high-risk" areas such as health care. The new terms allow customers to use Google's generative AI for automated decision-making in these areas, provided there is human supervision. The move has drawn attention in the AI field because it touches on the use of AI in high-stakes decision-making and on concerns about potential bias and ethics. It responds, to a degree, to public concern about AI applications and reflects Google's effort to balance the deployment of AI technology against risk management.
Google recently updated its generative AI terms of use, clarifying that customers may deploy its generative AI tools for "automated decision-making" in "high-risk" areas such as healthcare, but only with human supervision. According to the latest version of Google's Generative AI Prohibited Use Policy, released on Tuesday, customers may use Google's generative AI to make "automated decisions" that could have a "material adverse impact" on individual rights. Customers may use it to make decisions about employment, housing, insurance, social welfare and other "high-risk" matters, as long as some form of human oversight is in place.

In the AI field, automated decision-making refers to decisions an AI system makes based on factual and inferred data. For example, a system might automatically decide whether to approve a loan application or whether to screen out a job applicant.

Previously, a draft of Google's terms suggested a blanket ban on using the company's generative AI for high-stakes automated decision-making. But Google told TechCrunch that customers can always use its generative AI to automate decisions, even for high-risk applications, as long as there is human oversight. "As with all high-risk areas, our human oversight requirements have always been present in our policies," a Google spokesperson said when contacted by email. "We are recategorizing some items [in the terms] and listing some examples more explicitly to give customers a clearer picture."

Google's main AI competitors, OpenAI and Anthropic, impose stricter rules on the use of their AI in high-stakes automated decision-making. OpenAI, for example, prohibits the use of its services for automated decisions related to credit, employment, housing, education, social scoring and insurance. Anthropic allows its AI to be used to automate decisions in legal, insurance, healthcare and other high-risk areas, but only under the supervision of a "qualified professional," and it requires customers to disclose that they are using AI for this purpose.

AI that automates decisions affecting individuals has drawn scrutiny from regulators, who have voiced concern about the biased outcomes the technology can produce. Research suggests, for example, that AI used to approve credit and mortgage applications can perpetuate historical discrimination. Human Rights Watch, a non-profit organization, has called for a ban on "social scoring" systems, arguing that they can undermine people's access to social security support, compromise their privacy and profile them in biased ways.

High-risk AI systems, including those that make individual credit and employment decisions, face the strictest regulation under the EU's AI Act. Among other requirements, providers of these systems must register their systems in an EU database, implement quality and risk management, ensure human supervision and report incidents to the relevant authorities.

In the United States, Colorado recently passed a law requiring AI developers to disclose information about "high-risk" AI systems and to publish statements summarizing each system's capabilities and limitations. Meanwhile, New York City prohibits employers from using automated tools to screen candidates for employment decisions unless the tool has undergone a bias audit within the previous year.

By clarifying its AI terms of use in this way, Google has signaled its stance on the oversight of AI applications.
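To make the human-oversight pattern concrete, here is a minimal, hypothetical sketch in Python. It is not Google's API or policy logic; every name, type and threshold in it is an illustrative assumption. It simply demonstrates the shape of the requirement described above: in a high-risk domain, an automated system never issues an adverse decision on its own and instead routes it to a human reviewer.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch only: all names and thresholds are illustrative
# assumptions, not part of any real Google API or policy.

class Decision(Enum):
    APPROVE = "approve"
    DENY = "deny"
    NEEDS_HUMAN_REVIEW = "needs_human_review"

@dataclass
class LoanApplication:
    applicant_id: str
    requested_amount: float
    model_score: float  # confidence score produced by some upstream model

def automated_decision(app: LoanApplication, high_risk: bool) -> Decision:
    """Return a decision, forcing human review in high-risk domains.

    In a "high-risk" domain (credit, employment, housing, ...), the
    automated path never issues a final adverse decision: anything the
    model would deny is escalated to a human reviewer instead.
    """
    if app.model_score >= 0.8:
        return Decision.APPROVE
    if high_risk:
        # Human oversight requirement: no automated adverse decision.
        return Decision.NEEDS_HUMAN_REVIEW
    return Decision.DENY

if __name__ == "__main__":
    app = LoanApplication("applicant-001", 25_000.0, model_score=0.55)
    print(automated_decision(app, high_risk=True))  # Decision.NEEDS_HUMAN_REVIEW
```

The key design choice is that in a high-risk domain the automated path can only approve or escalate; adverse outcomes always pass through a person, which is the form of oversight the policies above describe.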
Allowing automated decision-making in high-risk areas while insisting on human supervision acknowledges both the potential of AI applications and the need for vigilance against their risks.
Google's update to its AI terms of use, with its emphasis on human supervision, also reflects the challenges technology companies face in AI governance. How to balance innovation against risk as AI technology develops rapidly will remain an important question deserving continued attention.