Elon Musk's Grok chatbot faces backlash over non-consensual image editing.
X users have raised alarm over Grok, the platform’s AI assistant, being used to alter images of women by removing or changing clothing when prompted by third parties.
Critics argue that this practice enables non-consensual sexualised imagery and, in extreme cases involving minors, could result in the creation of material that meets the legal definition of child sexual abuse material (CSAM).
While Grok does not independently alter images, the ability for users to prompt the AI to reinterpret publicly posted photos has exposed serious gaps in consent, safety, and accountability.
Understanding what controls do exist is therefore critical.
Disable Grok’s access to your data
The most direct action users can take is to limit Grok’s access to their content and data.
To do this, open X’s Privacy and safety settings and disable the options that allow your posts, interactions, inputs and results with Grok to be used for training and fine-tuning.
This does not prevent other users from prompting Grok with your images, but it does stop the platform itself from ingesting your content for AI development or analysis.
Restrict the visibility of your images
Because AI tools can currently be applied to publicly visible images, reducing visibility is one of the few effective safeguards.
Users should consider protecting their posts so only approved followers can see them, restricting who can tag them in photos, and removing or limiting older publicly visible images.
Act immediately if AI-generated content crosses a line
If you encounter Grok-generated imagery that sexualises someone without their consent, or that appears to depict a minor, report the post through X’s in-app reporting tools, preserve evidence such as screenshots and URLs, and, where a minor is involved, escalate the material to law enforcement or a child protection hotline.
IOL News