Google on Thursday issued new guidance for developers building AI apps distributed through Google Play, in hopes of cutting down on inappropriate and otherwise prohibited content. The company says apps offering AI features will have to prevent the generation of restricted content, which includes sexual content, violence and more, and will need to offer a way for users to flag offensive content they find. In addition, Google says developers must “rigorously test” their AI tools and models to ensure they respect user safety and privacy.
It’s also cracking down on apps whose marketing materials promote inappropriate use cases, like apps that undress people or create nonconsensual nude images. If the ad copy says the app is capable of doing this sort of thing, it may be banned from Google Play, whether or not the app can actually do it.
The rules follow a growing scourge of AI undressing apps that have been marketing themselves across social media in recent months. An April report by 404 Media, for instance, found that Instagram was hosting ads for apps that claimed to use AI to generate deepfake nudes. One app marketed itself using a picture of Kim Kardashian and the slogan “Undress any girl for free.” Apple and Google pulled the apps from their respective app stores, but the problem is still widespread.
Schools across the U.S. are reporting problems with students passing around AI deepfake nudes of other students (and sometimes teachers) for bullying and harassment, alongside other types of inappropriate AI content. Last month, a racist AI deepfake of a school principal led to an arrest in Baltimore. Worse still, the problem is in some cases affecting students in middle schools.
Google says its policies will help keep apps featuring AI-generated content that can be inappropriate or harmful to users out of Google Play. It points to its existing AI-Generated Content Policy as the place to check its requirements for app approval on Google Play. The company says AI apps cannot allow the generation of any restricted content and must also give users a way to flag offensive and inappropriate content, as well as monitor and prioritize that feedback. The latter is particularly important in apps where users’ interactions “shape the content and experience,” Google says, such as apps where popular models get ranked higher or more prominently.
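Google’s policy doesn’t prescribe how that flagging mechanism should be built. As a rough illustration only (the `ContentFlag` and `FlagQueue` names here are hypothetical, not anything from Google’s documentation), a Kotlin sketch of the basic flow might record user reports and surface the most-flagged items first:

```kotlin
// Hypothetical sketch: Google Play requires a flagging mechanism but
// does not prescribe one. None of these names come from a Google API.
data class ContentFlag(
    val contentId: String,                           // the AI-generated item being reported
    val reason: FlagReason,                          // user-selected category
    val timestamp: Long = System.currentTimeMillis()
)

enum class FlagReason { SEXUAL_CONTENT, VIOLENCE, HARASSMENT, OTHER }

class FlagQueue {
    private val flags = mutableListOf<ContentFlag>()

    // Record a user report for later review.
    fun submit(flag: ContentFlag) {
        flags.add(flag)
    }

    // Surface the most-reported items first, so feedback can be
    // "monitored and prioritized" as the policy requires.
    fun prioritized(): List<Pair<String, Int>> =
        flags.groupingBy { it.contentId }
            .eachCount()
            .toList()
            .sortedByDescending { it.second }
}

fun main() {
    val queue = FlagQueue()
    queue.submit(ContentFlag("img_42", FlagReason.SEXUAL_CONTENT))
    queue.submit(ContentFlag("img_42", FlagReason.HARASSMENT))
    queue.submit(ContentFlag("img_7", FlagReason.OTHER))
    println(queue.prioritized()) // [(img_42, 2), (img_7, 1)]
}
```

Sorting by report volume is one simple way to prioritize feedback; an app that ranks models by popularity could use the same counts to demote frequently flagged ones.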
Developers also can’t advertise that their app breaks any of Google Play’s rules, per Google’s App Promotion requirements. If it advertises an inappropriate use case, the app could be booted off the app store.
In addition, developers are responsible for safeguarding their apps against prompts that could manipulate their AI features into creating harmful or offensive content. Google says developers can use its closed testing feature to share early versions of their apps with users for feedback. The company strongly suggests that developers not only test before launching but document those tests, too, as Google could ask to review them in the future.
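Google likewise leaves the safeguard itself up to developers. A minimal sketch of prompt screening, assuming a simple keyword denylist standing in for the safety classifier a production app would more likely use, could reject a manipulative request before it ever reaches the model:

```kotlin
// Hypothetical sketch: a keyword denylist standing in for a real safety
// classifier. Patterns and names are illustrative, not a Google API.
private val deniedPatterns = listOf(
    Regex("""\bundress\b""", RegexOption.IGNORE_CASE),
    Regex("""\bnude\b""", RegexOption.IGNORE_CASE)
)

sealed class PromptCheck {
    object Allowed : PromptCheck()
    data class Blocked(val pattern: String) : PromptCheck()
}

// Screen the prompt before it reaches the model, so a manipulative
// request is rejected instead of generating restricted content.
fun screenPrompt(prompt: String): PromptCheck {
    for (p in deniedPatterns) {
        if (p.containsMatchIn(prompt)) return PromptCheck.Blocked(p.pattern)
    }
    return PromptCheck.Allowed
}

fun main() {
    for (prompt in listOf("Draw a mountain landscape", "Undress the person in this photo")) {
        when (val result = screenPrompt(prompt)) {
            is PromptCheck.Allowed -> println("allowed: $prompt")
            is PromptCheck.Blocked -> println("blocked by ${result.pattern}: $prompt")
        }
    }
}
```

Keyword lists like this are easy to evade with rephrased prompts, which is part of why Google pushes developers to test adversarial inputs before launch and document the results.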
The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.