Apple has agreed to adopt a set of artificial intelligence safeguards set forth by the Biden-Harris administration.
The administration announced the move on Friday; Bloomberg was first to report the news.
By adopting the guidelines, Apple has joined the ranks of OpenAI, Google, Microsoft, Amazon, and Meta, to name a few.
The news comes ahead of Apple’s much-awaited launch of Apple Intelligence (Apple’s branding for its suite of AI features), which will become widely available in September with the public launch of iOS 18, iPadOS 18, and macOS Sequoia. The new features, unveiled by Apple in June, aren’t available even in beta right now, but the company is expected to roll them out gradually in the months to come.
Apple is one of the signatories of the Biden-Harris administration’s AI Safety Institute Consortium (AISIC), which was created in February. But now the company has pledged to abide by a set of safeguards that include testing AI systems for security flaws and sharing the results of those tests with the U.S. government, developing mechanisms that let users know when content is AI-generated, and developing standards and tools to ensure AI systems are safe.
The safeguards are voluntary and not enforceable, meaning the companies won’t suffer consequences for failing to abide by them.
The European Union’s AI Act – a set of regulations designed to protect citizens against high-risk AI – will be legally binding once it takes full effect on August 2, 2026, though some of its provisions will apply from February 2, 2025.
Apple’s upcoming set of AI features includes integration with OpenAI’s powerful AI chatbot, ChatGPT. The announcement prompted Elon Musk, who owns X and leads Tesla and xAI, to warn that he would ban Apple devices at his companies, deeming them an “unacceptable security violation.” Musk’s companies are notably absent from the AISIC signatory list.