The rise of generative artificial intelligence has introduced a dangerous new tool for cybercriminals: the deepfake. In a disturbing case from Prayagraj, Uttar Pradesh, the intersection of AI technology and social media harassment led to the blackmail of a legal professional. The incident, which involved the creation and distribution of non-consensual AI-generated imagery, has culminated in the arrest of a local social media influencer.
The Mechanics of the Blackmail Scheme
The case centers on a lawyer in Prayagraj who became the target of a calculated harassment campaign. According to reports, a social media influencer identified as Sumit, a resident of the Behrana area, used platforms like Facebook and Instagram to execute the scheme. Sumit operated under the account name “Call Me Sumit.”
The perpetrator utilized a combination of authentic and synthetic media to intimidate the victim. The content included:
- AI-Generated Photos: Synthetic images designed to appear real, created using AI to place the victim in compromising situations.
- Explicit Video: A semi-nude video featuring a woman, which was used to further the blackmail attempt.
The attack followed a classic extortion pattern. Sumit initially attempted to blackmail the lawyer using the fabricated content. When the lawyer refused to comply with the demands, the influencer escalated the situation by posting the AI-generated photos and the explicit video across his social media accounts, accompanied by offensive captions. The repetitive nature of these posts was designed to maximize the psychological impact and public embarrassment of the victim.
Police Intervention and Legal Consequences
Following a written complaint from the victim, the Cantt Police launched an investigation. The lawyer stated that the viral nature of the posts significantly tarnished his professional reputation and personal image.
Law enforcement acted swiftly to apprehend the suspect. Sumit was arrested and charged under several serious sections of the law, including the Information Technology (IT) Act. The IT Act provides the legal framework for prosecuting cybercrimes in India, notably Section 67, which penalizes the publication of obscene material in electronic form, and Sections 66C and 66D, which address identity theft and impersonation through electronic means.
The Growing Threat of AI-Driven Harassment
This case highlights a critical shift in cyber-harassment. While traditional “revenge porn” involves the leak of actual private images, AI-generated deepfakes allow attackers to create “synthetic” evidence. This means victims can be targeted even if they have never taken a compromising photo, making the threat pervasive and difficult to combat without technical forensic tools.
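One of the simplest forensic signals investigators check is embedded metadata: photos captured on a real camera or phone typically carry EXIF tags (camera make, model, timestamps), while many AI-generated or re-rendered images carry none. The sketch below, a minimal illustration using the Pillow imaging library, shows the idea. It is only one weak signal, not proof of generation, since metadata is easily stripped or forged; real forensic analysis combines many such checks.

```python
from io import BytesIO

from PIL import Image


def has_camera_metadata(image_bytes: bytes) -> bool:
    """Return True if the image carries EXIF tags a camera typically writes.

    Absence of these tags is NOT proof of AI generation; it is only one
    coarse signal used alongside proper forensic techniques.
    """
    img = Image.open(BytesIO(image_bytes))
    exif = img.getexif()
    # Tag 0x010F = camera Make, 0x0110 = camera Model.
    return bool(exif.get(0x010F) or exif.get(0x0110))


# An image synthesized in memory has no camera EXIF tags at all.
buf = BytesIO()
Image.new("RGB", (64, 64)).save(buf, format="JPEG")
print(has_camera_metadata(buf.getvalue()))  # False
```

A genuine smartphone photo passed to the same function would normally return True, which is why attackers who fabricate imagery often leave this gap behind.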

Key Takeaways for Digital Safety
- Document Everything: If you are a victim of AI blackmail, take screenshots of all posts, messages, and profiles before they are deleted.
- Avoid Compliance: Paying a blackmailer rarely stops the harassment; it often signals that the victim is susceptible, leading to further demands.
- Report Immediately: Use official channels like the National Cyber Crime Reporting Portal or local police (such as the Cantt Police in this instance) to initiate legal action.
- Platform Reporting: Report deepfake content to Facebook and Instagram immediately; Meta's policies against non-consensual intimate imagery allow such content to be removed on an expedited basis.
Frequently Asked Questions
What is a deepfake?
A deepfake is a piece of media—usually a photo or video—that has been digitally manipulated using artificial intelligence to replace one person’s likeness with another, often making it look as though someone said or did something they never actually did.
Can AI-generated photos be used as evidence in court?
Yes, but they are used as evidence of the crime of creation and distribution rather than as evidence of the act depicted in the image. Digital forensics can often establish that an image was synthetically generated, which helps clear the victim's name and build the case against the perpetrator.
What laws protect victims of AI blackmail in India?
Victims are primarily protected by the Information Technology Act, 2000, and relevant sections of the Bharatiya Nyaya Sanhita (formerly the Indian Penal Code), which cover defamation, criminal intimidation, and the distribution of obscene materials.
As AI tools become more accessible, the potential for misuse grows. This Prayagraj incident serves as a stark reminder that digital literacy and robust legal enforcement are among the most effective defenses against the weaponization of synthetic media.