
A recent investigation by the Tech Transparency Project (TTP), a nonprofit watchdog organization, has revealed a significant failure in content moderation by Apple and Google. Despite both companies’ explicit policies prohibiting apps that generate sexualized, non-consensual, or explicit imagery—including those that “undress” people using artificial intelligence—dozens of such “nudify” apps remain accessible (or were recently available) in their respective app stores.
Published on January 27, 2026, the TTP report highlights that these apps enable users to upload photos, typically of women, and apply AI to remove clothing, render subjects nude or partially nude, or place them in sexualized poses, often without any consent from the individuals depicted. The tools are frequently marketed as entertainment, pranks, or novelty features, but they facilitate the creation of what is commonly called deepfake porn or non-consensual intimate imagery (NCII).
The scale of the issue is staggering. The report identified 55 such apps on the Google Play Store and 47 on the Apple App Store. Collectively, apps of this nature have been downloaded more than 705 million times worldwide and have generated approximately $117 million in revenue, with Apple and Google earning a portion through their app store commissions.
The controversy gained renewed attention in the context of xAI’s Grok chatbot, which is integrated with the X platform (formerly Twitter). Grok’s relatively unrestricted image generation features have allowed users to create similar “undressing” edits of real people’s photos, prompting widespread criticism, regulatory scrutiny in jurisdictions including the EU, the UK, and California, and class-action lawsuits against xAI. While Grok and the X app remain available in both app stores, the TTP investigation demonstrates that the problem extends far beyond any single tool, with numerous standalone apps proliferating under the radar.
Both Apple and Google maintain clear guidelines against such content. Their policies ban:
- Depictions of sexual nudity or highly suggestive poses with minimal clothing.
- Apps that claim to undress individuals, simulate seeing through clothing, or otherwise objectify or degrade people—even if framed as humorous or fictional.
Yet enforcement has proven inconsistent and reactive. The TTP’s findings, based on searches for terms such as “nudify” and “undress” conducted in January 2026, showed these apps were still readily discoverable and functional in late January. In response to the report and outreach from TTP and media outlets such as CNBC:
- Apple removed 28 of the flagged apps and issued warnings to developers of others, indicating potential removal if violations persisted.
- Google suspended several apps and confirmed its review process was ongoing.
This is not an isolated incident. Similar exposés in 2024 and 2025, from outlets including the BBC, 404 Media, and Wired, have repeatedly uncovered waves of these apps, prompting temporary purges only for new ones to emerge as generative AI tools become more accessible and affordable.
The broader implications are serious. The rapid advancement of AI has democratized the creation of non-consensual explicit content, heightening the risks of harassment, exploitation, and privacy violations, with the harm falling disproportionately on women and, in some documented cases, minors. Critics argue that app store gatekeepers like Apple and Google bear responsibility for proactively vetting submissions rather than relying on post-launch complaints or media pressure.
As AI capabilities continue to evolve, the persistence of these apps underscores a growing tension between platform openness, user safety, and the ethical deployment of powerful generative technologies. For now, the TTP report serves as a stark reminder that policy promises alone are insufficient without robust, ongoing enforcement.