The Biden administration has made clear its stance on deepfakes: Technology companies must play a critical role in stopping such imagery, which is generated by artificial intelligence.
On Thursday, the White House published a list of steps tech companies should take to curb image-based sexual abuse, a form of digital violence typically inflicted on girls, women, and lesbian, gay, bisexual, and transgender people.
Deepfake is one term used to describe the synthetic creation of an image or video, which is often explicit or sexual in nature. Sometimes the material is made using the victim's face, obtained without their consent from an existing picture or video. In other cases, perpetrators use AI to generate entirely fake content.
In one criminal case, a Wisconsin man was recently arrested and charged with producing thousands of images of child sexual abuse using the text-to-image generative AI tool called Stable Diffusion.
Though the White House didn't cite this case specifically, it described image-based sexual abuse as one of the "fastest growing harmful uses of AI to-date."
The announcement was written by White House officials Jennifer Klein, director of the Gender Policy Council, and Arati Prabhakar, director of the Office of Science and Technology Policy.
Prabhakar told Mashable in a phone interview that the White House hopes companies implicated in the rise of image-based sexual abuse will act now and move quickly to help stop it.
"We very much want companies to think harder about what they can do, and push to really make progress on this problem," Prabhakar said.
The White House recommended that tech companies limit websites and apps that create, facilitate, monetize, or disseminate image-based sexual abuse, and restrict web services and apps that are marketed as providing users the tools to create and alter sexual images without individuals' consent. Cloud service providers could similarly prohibit explicit deepfake sites and apps from accessing their services.
App stores could also require developers to prevent the creation of nonconsensual images, according to the White House. This requirement would be critical given that many AI apps are capable of generating explicit deepfakes, even if they’re not advertised for that purpose.
The White House called on payment platforms and financial institutions to curb access to payment services for sites and apps that do business in image-based sexual abuse, especially if those entities advertise images of minors.
The White House urged the industry to "opt in" to finding ways to help adult and youth survivors remove nonconsensual images of themselves from participating online platforms. Currently, the takedown process can be confusing and exhausting for victims, because not every online platform has a clear process.
Congress, too, has a role to play, the White House said. It asked lawmakers to "strengthen legal protections and provide critical resources for survivors and victims of image-based sexual abuse." There is currently no federal law that criminalizes the generation or dissemination of explicit deepfake imagery.
The White House statement acknowledged the high stakes of image-based sexual abuse: "For survivors, this abuse can be devastating, upending their lives, disrupting their education and careers, and leading to depression, anxiety, post-traumatic stress disorder, and increased risk of suicide."
Prabhakar said the underlying technology of generative AI offers tremendous promise, but she described image-based sexual abuse as a "damaging and ugly application" of it that is growing.
"Part of the reason to get this right," she added, "is to allow this technology to be used for the good things that can come."
UPDATE: May 23, 2024, 5:42 p.m. EDT This story has been updated to include Mashable's interview with Arati Prabhakar.