Apple has been making frequent moves in artificial intelligence recently, responding to the White House's call and pledging to develop safe and reliable AI products. The editor of Downcodes walks you through Apple's latest developments in AI and their impact on the industry's future. Apple has signed the White House's voluntary commitment to develop safe, reliable and trustworthy artificial intelligence, signaling that it will soon integrate its generative AI product, Apple Intelligence, into its core products to serve 2 billion users worldwide. The move also marks the start of Apple's formal push into artificial intelligence.
Apple has signed the White House's voluntary commitment to develop safe, reliable and trustworthy artificial intelligence, according to a press release issued Friday. The move signals that Apple will soon introduce its generative artificial intelligence product, Apple Intelligence, into its core products to serve Apple's 2 billion users.
Apple joins 15 other technology companies, including Amazon, Google, Meta and Microsoft, that pledged in July 2023 to abide by the White House's ground rules for generative artificial intelligence. Although Apple did not disclose specific plans for integrating AI into iOS at the time, at its Worldwide Developers Conference (WWDC) in June the company made clear it would invest fully in generative AI, starting by partnering to embed ChatGPT in the iPhone.
Apple's voluntary commitment to the White House, while not legally binding, marks its first formal step in the field of artificial intelligence. The White House said it is a first step for Apple and the other companies toward developing safe, reliable and trustworthy AI. It was followed by President Biden's October executive order on artificial intelligence, and several bills are currently under consideration in federal and state legislatures to better regulate AI models.
Under the pledge, AI companies commit to red-team testing their AI models before public release and to sharing that information with the public. They also agree to keep unreleased model weights confidential, working with them only in secure environments and restricting who can access them. Finally, the companies agree to develop content labeling systems, such as watermarks, to help users distinguish AI-generated content from content that is not AI-generated.
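The content-labeling commitment is the most concrete of the three for developers. As a purely illustrative sketch, and not a scheme mandated by the White House or used by any of the signatories, the Python snippet below tags an image with a plain-text provenance label using Pillow; the key names and file paths are made up for this example, and production systems rely on far more robust approaches such as signed content credentials or statistical watermarks.

```python
# Toy illustration only: attach a plain-text "AI-generated" provenance tag to a
# PNG's metadata with Pillow. Real labeling schemes (e.g. signed content
# credentials or statistical watermarks) are much harder to strip than this.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image as PNG with a provenance tag embedded in its text chunks."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")      # hypothetical key names chosen
    meta.add_text("ai_generator", generator)   # for this sketch only
    image.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return the PNG's text metadata, including our tag if one was embedded."""
    with Image.open(path) as image:
        return dict(image.text)

if __name__ == "__main__":
    # Hypothetical file names, used only to show the round trip.
    label_as_ai_generated("model_output.png", "model_output_labeled.png", "example-image-model")
    print(read_label("model_output_labeled.png"))
```

A metadata label like this is trivial to remove, which is exactly why the pledge points toward watermark-style techniques designed to survive editing and re-encoding.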
The U.S. Department of Commerce said it will soon release a report on the potential benefits, risks and impacts of open-source foundation models. Open-source AI is becoming a politically charged regulatory battleground: some camps want to restrict access to the weights of powerful models on security grounds, but doing so could also constrain AI startups and the research ecosystem. The White House's stance on the issue could have significant consequences for the AI industry as a whole.
The White House also noted that federal agencies have made significant progress on the tasks set out in the October executive order. To date, they have hired more than 200 AI-related staff, granted more than 80 research teams access to computing resources, and released multiple frameworks for AI development.
Apple's commitment marks its formal entry into generative artificial intelligence, and its next steps will be worth watching. At the same time, the White House's regulatory measures provide an important reference point for the direction of the AI industry, and global AI development is entering a new stage. The editor of Downcodes will continue to follow these developments and bring you further reports.