Elon Musk’s Grok AI faces lawsuit from Baltimore over sexually explicit images

Baltimore is suing Elon Musk’s Grok AI over fake nude images. The mayor and city council of Baltimore, Maryland filed a lawsuit against Musk’s AI company on Tuesday, March 24, alleging that Grok created non-consensual pornographic images. The lawsuit claims that Grok, which is marketed as a harmless AI tool for routine use, failed to disclose the potential harm the tool could cause to users.

Baltimore city sues Elon Musk’s Grok AI over explicit image creation

Baltimore City has filed its lawsuit against Elon Musk’s Grok AI over explicit image creation in the Circuit Court of Baltimore City, noting that the company advertises and operates in Baltimore. “Grok has flooded the feeds of X users in Baltimore with NCII (non-consensual intimate images) and CSAM (child sexual abuse material),” the official complaint states.

Additionally, it says, “Grok exposed Baltimore residents to the risk that any photos they uploaded — of themselves or their children — could be captured by Grok and transformed into truly offensive deepfakes without their knowledge or consent” (via The Guardian).

Baltimore Mayor Brandon Scott shared, “We are talking about tech companies that enable the sexual exploitation of children. Our city will not remain silent and allow this to continue; it is a threat to privacy, dignity, and public safety, and those responsible must be held accountable.”

This case is unique. Unlike previous lawsuits filed by individuals, here an entire city has sued Musk’s Grok AI.

Musk’s Grok AI has faced multiple lawsuits and international investigations for generating non-consensual, AI-generated erotic images on the internet. The victims are mainly women and minors. According to researchers, Grok created 23,000 sexually explicit images of children over an 11-day period in December and January (via the Center for Countering Digital Hate).

Interestingly, the Tesla CEO denied that his AI tool Grok was being used to create non-consensual, explicit photos. He said in January that he was “not aware of any nude underage images made by Grok. Literally zero.”

In response, Musk’s company restricted Grok’s image generation to prevent further harm. This came after serious backlash and threats of regulatory action from several countries.

Originally reported by Anvay Mane, Mandatory.

