Man used AI to make false statements to shut down London nightclub, police say

A businessman has pleaded guilty to using artificial intelligence to generate false complaints in an attempt to shut down a London nightclub, police have confirmed. The case highlights a growing concern over the misuse of AI to fabricate official statements for personal or commercial gain.

Aldo d'Aponte, 47, the CEO of Arbitrage Group Properties, admitted to writing two letters that falsely claimed to be from neighbours objecting to the reopening of Heaven nightclub in central London. The letters were sent via an encrypted email address to Westminster Council during a licensing hearing in December 2024, which followed the temporary closure of the venue after a rape allegation against one of its security guards.

Heaven, an LGBTQ+ venue known for its cultural significance in London's nightlife scene, had its licence suspended in November 2024 after a 19-year-old woman accused a bouncer of sexual assault. The club was permitted to reopen a month later after a council review introduced enhanced welfare and security measures. The security guard in question was later found not guilty of the alleged offence.

During the council's review of the club's licence, officials received several pieces of correspondence opposing its reopening. Philip Kolvin KC, a licensing lawyer acting pro bono for the nightclub, raised concerns about the authenticity of the letters because of their unusual tone and content. His investigation revealed that the complaints had been fabricated using AI-generated text and did not originate from actual residents.

The Metropolitan Police confirmed that d'Aponte had also submitted a separate representation to the council in his own name, citing noise disturbances as a reason to oppose the renewal of the club's licence. He pleaded guilty to making false statements under the Licensing Act 2003 and was sentenced to a 12-month conditional discharge, ordered to pay £85 in court costs and a £26 victim surcharge.
Police sources described the use of AI to create fictitious complainants as an emerging issue, noting that two further investigations involving similar allegations of AI-generated false representations are under way. Authorities warn that such misuse undermines legitimate regulatory processes and risks eroding public trust in civic consultation systems.

The case underscores the dual nature of AI technology: while it offers transformative benefits across industries, it also presents novel challenges in areas such as disinformation, fraud, and the manipulation of public institutions. As AI tools become more accessible, regulators and law enforcement agencies are under increasing pressure to develop safeguards against their malicious use.