In short
Australia’s eSafety Commissioner flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
Some complaints involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
The concerns come as governments worldwide scrutinize Grok’s lax content moderation, with the EU declaring the chatbot’s “Spicy Mode” unlawful.
Australia’s independent online safety regulator issued a warning Thursday about the growing use of Grok to generate sexualized images without consent, revealing her office has seen complaints about the AI chatbot double in recent months.
The country’s eSafety Commissioner Julie Inman Grant said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
“I am deeply concerned about the growing use of generative AI to sexualise or exploit people, particularly where children are involved,” Grant posted on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI, which can be prompted directly on X to alter users’ photos.
Grant warned that AI’s ability to generate “hyper-realistic content” is making it easier for bad actors to create synthetic abuse and harder for regulators, law enforcement, and child-safety groups to respond.
Unlike competitors such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode” specifically to create explicit content.
Grant noted that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used “nudify” services, forcing their withdrawal from Australia, she added.
“We have now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle,” Grant said, noting that eSafety will “investigate and take appropriate action” using its full range of regulatory tools.
Deepfakes on the rise
In September, Grant secured Australia’s first deepfake penalty when the federal court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they “meant nothing to him” as he was not an Australian resident, then emailed the images to 50 addresses, including Grant’s office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would allow individuals sharing non-consensual deepfakes to be fined $102,000 (A$165,000) up-front, with companies facing penalties of up to $510,000 (A$825,000) for non-compliance with removal notices.
“We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock said in a statement, criticizing the government for being “asleep at the wheel” on AI protections.