AI Chatbots on Telegram Spark Concerns Over Creation of Explicit Deepfakes

AI-powered chatbots are being used to generate explicit deepfake images, putting millions at risk, especially women and teenage girls.

A recent investigation has exposed a troubling trend on Telegram: AI-powered chatbots are being used to generate explicit deepfake images, putting millions of people at risk, especially women and teenage girls. According to Wired, these bots, which collectively attract more than 4 million users each month, can manipulate ordinary photos into fabricated nude or sexually explicit images in just a few clicks. The findings raise serious ethical and privacy concerns about the misuse of AI technologies.

One of the key figures raising the alarm is Henry Ajder, a deepfake expert who first uncovered explicit AI chatbots of this kind in 2019. Ajder described the situation as “nightmarish,” emphasizing how easy it is to access these tools on one of the world’s most popular platforms. He said, “It is really concerning that these tools ruining lives primarily of young girls and women are so easy to find on the surface web.”

The implications of such technology are far-reaching. Not only are celebrities like Taylor Swift and Jenna Ortega being targeted with deepfakes, but everyday individuals, including teenagers, are becoming victims. Sextortion and blackmail cases are on the rise, with deepfake images used to exploit or coerce victims. A recent survey found that nearly 40% of US students reported encountering deepfakes circulating in their schools, adding another layer of concern.

Despite being contacted by Wired about these bots, Telegram has not issued a formal response. Some of the offending bots disappeared temporarily after the inquiries, but their creators vowed to bring them back, underscoring the ongoing battle to regulate this space. Telegram’s CEO, Pavel Durov, previously faced legal trouble over allegations that the platform facilitates the spread of child sexual abuse material, yet no significant changes have been made to curb such misuse of the platform.

The need for regulation and oversight is urgent: as AI tools grow more advanced and accessible, they threaten not only privacy but also mental health and safety.
