
The European Union (EU) has launched an investigation into Tesla CEO Elon Musk’s neurotechnology company, Neuralink, and its subsidiary GrokAI following reports of manipulated sexually explicit images being shown to users in the EU. The Commission, which is responsible for upholding EU laws on technology development, will assess whether these companies have violated any regulations governing the creation and dissemination of deepfakes.
Deepfakes are manipulated media that use artificial intelligence (AI) to create realistic but false images or videos of individuals. These manipulations can serve various purposes, including entertainment, political propaganda, or malicious ends such as revenge pornography. The emergence and spread of deepfakes have raised significant concerns about privacy, consent, and the ethical implications of AI technology.
GrokAI is a subsidiary company of Neuralink that specializes in developing advanced AI models for various industries. Its primary focus is on creating innovative solutions in the field of agriculture, but it also explores potential applications in other sectors like entertainment and advertising. The company’s use of deep learning algorithms to generate realistic images has reportedly led to the creation of sexually explicit content that was shown to unsuspecting EU users without their consent.
The Commission’s investigation follows a series of complaints from European citizens who claim they were exposed to these manipulated images on social media platforms and other digital channels. The exact number of affected individuals is unknown, but complainants fear the consequences for their privacy, dignity, and safety. They are also worried about the implications for Neuralink’s ongoing development projects and its reputation within the EU tech community.
Elon Musk, who founded both companies in 2016 and serves as their CEO, has not yet commented on these allegations publicly. However, a spokesperson from Neuralink stated that “the company takes all reports of misuse of its technology seriously” and is cooperating fully with the Commission’s investigation. GrokAI also issued a statement saying they are “working closely with regulatory authorities to address any concerns related to deepfake content.”
The EU has been at the forefront of addressing the challenges posed by deepfakes through legislation, research, and public awareness campaigns. In 2019, it adopted the Deepfakes Pilot Project, which aimed to explore the technical capabilities of deepfake detection and countermeasures while involving key stakeholders from academia, industry, and civil society. The Commission has also expressed its intention to propose a regulatory framework for deepfakes once a more comprehensive understanding of the problem and its technological solutions has been established.
As this investigation unfolds, it serves as a reminder of the importance of responsible AI development and its ethical implications, especially in sensitive areas like personal privacy, consent, and safety.