Apple has taken a firm stance against Elon Musk’s Grok app, signaling potential removal from its App Store over serious allegations involving nonconsensual deepfakes. The tech giant's concerns were laid out in a letter dated January 30, addressed to three Democratic senators.
In the correspondence, Apple said it had contacted the teams behind both X and Grok after numerous complaints and media reports about the app's controversial content. The letter also outlines Apple's demand that Grok's developers put a robust content moderation strategy in place.
Apple contended that Grok might be violating App Store guidelines, which prohibit applications from hosting “offensive, insensitive, upsetting” material. A subsequent message from Apple confirmed that it had rejected a revised version of the Grok app for being “out of compliance.”
“Consequently, we rejected the Grok submission and informed the developer that further modifications would be necessary to rectify the violation, or else the app could face removal from the App Store,” the message stated.

While an updated version of the X app received approval, users can still access Grok’s technology through the platform. Apple eventually approved a “substantially improved” version of the Grok app, according to its communication.
The letter was reportedly sent to Senators Ron Wyden, Ed Markey, and Ben Ray Luján shortly after they called on both Google and Apple to remove Grok from their respective app stores. Their demand stemmed from accusations that Grok was generating sexualized images of people without their consent, including, in some cases, minors.
In January 2026, a 22-year-old woman named Evie recounted her experience of receiving over 100 sexualized images of herself on X within a week, including one where she was digitally stripped naked. This incident highlighted the pressing need for effective content moderation.
In response, X said in a mid-January statement that it had restricted Grok from editing images of real people to show them in revealing clothing. The company also said it would geoblock users in jurisdictions where such content is illegal from generating images of people in bikinis or similar attire.

In addition, the ability to create or edit images with the Grok tool was limited to paid subscribers. Despite these measures, cybersecurity experts have recently reported instances of the AI tool being used to create explicit images of celebrities and political figures.
A review conducted by NBC News uncovered numerous AI-generated sexual images and videos featuring real women posted on X over the past month. Many of these images depicted women altered into various outfits, including sports bras and costumes associated with popular culture.
In light of these findings, X reiterated its policy against generating non-consensual explicit deepfakes and emphasized that its tools should not be used to undress real individuals. “xAI has implemented extensive safeguards to prevent misuse,” the statement elaborated, detailing measures such as continuous monitoring and real-time analysis of evasion attempts.
The Independent has reached out to both X and Apple for further comments regarding this ongoing situation.