
AI is a tool just like any other. Through its use, you can identify a criminal or assist in an investigation, but you can also lose a fortune or be manipulated.

Photo: PAP/DPA/Uli Deck

On the one hand, for example, artificial intelligence (AI) was used to help track down a former Red Army Faction (RAF) terrorist who had been wanted for three decades. On the other, it was used by cybercriminals in Hong Kong to steal USD 25 million from a company: an employee transferred the money to the criminals' account, thinking his bosses had given him the go-ahead to complete the transaction. Fooling someone with AI can be surprisingly simple. In the Hong Kong incident, the fraudsters used AI-generated virtual profiles and chatbot-generated messages to deceive their victim; in other cases, scammers have recreated the voice of a loved one and played it over the phone.

"Deepfakes are used in internet and telecommunications scams. Fake images and videos and even sounds make it easier to believe certain content," said Prof. Lennon Lee, from the Center for Cyber Resilience and Trust School of Information at Deakin University in Australia.
AI-driven criminal activity is a growing problem for law enforcement and victims alike. Extorting money by impersonating a loved one or acquaintance, voice manipulation, and the generation of fake content are just the tip of the iceberg.


In 2020, the journal ‘Crime Science’ published an article titled ‘AI-enabled future crime.’ Its authors, working with academics, security experts, and representatives of public and private institutions, examined eighteen categories of AI-enabled crime (both existing and projected to arise in the future) and ranked them by level of threat. Voice imitation and impersonation fraud made up the most dangerous group. The authors argue that deepfake technology poses, and will continue to pose, the greatest challenge in the long run, because “humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence.” Beyond the widespread mistrust it breeds, they also highlight the societal harm this kind of crime causes: politicians and public figures whose images are used in deepfake content have their reputations damaged.
AI crimes also threaten the credibility of public institutions, which cannot expose every manipulation, potentially eroding public confidence. Interestingly, the authors of the paper place autonomous vehicles in the same threat category as impersonation fraud. Writing in the wake of the high-profile vehicular terrorist attacks of 2016–2017 in Berlin, Barcelona, Paris, Nice, and other cities, they argued that such cars “would potentially allow expansion of vehicular terrorism by reducing the need for driver recruitment, enabling single perpetrators to perform multiple attacks, even coordinating large numbers of vehicles at once”.


Cyber warfare


Assessing how AI can be used to commit crimes today, Prof. Lennon Chang said: "We are already seeing deepfakes being used to create fake videos, as well as in disinformation campaigns. For example, during the recent elections in Taiwan, sex tapes appeared suggesting that several DPP (Democratic Progressive Party) candidates were having affairs, but all the candidates claimed the videos and photos were deepfakes.”
However, AI also has a lot to offer in the fight against cybercrime. Machine learning can be used to identify the patterns of disinformation campaigns, allowing better warning and preventive measures to be built. AI technology can assist in online child abuse investigations as well.
“Thanks to AI, photos showing abused children can be matched with others available on the internet to identify where they were taken," said Chang, whose areas of expertise are cyberwarfare and disinformation. In February 2024, he wrote an article titled "Taiwan: A battlefield for cyberwar and disinformation" for the ‘Melbourne Asia Review’. In it, he cited Sun Tze, the author of "The Art of War," writing that "cyberspace is providing a platform for new forms of war that might realise Sun Tze’s ideal of winning a war without fighting”. He also emphasised that Taiwan repels more than five million cyberattacks every day, making it a battlefield in cyberwarfare.


All thanks to photos


On 27 February 2024, in Berlin's Kreuzberg district, German criminal police detained 65-year-old Daniela Klette, a former terrorist and member of the Red Army Faction, a far-left militant group based in West Germany that was active until 1998. Klette had spent more than 30 years hiding under an assumed name and belonged to the RAF's so-called third generation, active in the 1980s and 1990s. She was considered "more professional than her predecessors". The media reported that, were it not for the investigative efforts of the team behind the "Legion" podcast from ARD (the German public broadcaster), police would not have located Klette. One journalist even remarked ironically that "the police success was that they listened to the podcast". Police officials, however, claim that they were tipped off about the former terrorist as early as November 2023, before the podcast episode about Daniela Klette was released.

The journalists explained that they were able to trace Klette with the help of a listener named Sebastian, who disclosed that in 2017, at a party in Cologne, he had met "a woman who said she was a former terrorist." That woman, referred to by the working name "Monika" in the December 2023 podcast, had shown those present at the party an arrest warrant issued for three RAF members and, pointing to a picture of one of the women, said: "That's me."


The authors of the podcast began gathering information, including all of Klette's archived online images, and forwarded it to Canadian journalist Michael Colborne of the investigative outlet Bellingcat. Using AI-powered facial recognition, he found a photo of a woman who appeared to be an older Daniela Klette on the website of a Berlin association devoted to capoeira, a Brazilian martial art. Two journalists visited the association to look for Klette. On its walls they noticed pictures from the association's numerous outings and workshops, several of which featured Klette. It turned out that Klette, known there as Felizia, had not attended classes for at least four years. One thing, however, had become clear: Klette's place of residence was Berlin, not Cologne. The day Klette was arrested proved that the journalists had been on the right track.
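At its core, such a search compares numerical "encodings" of faces: a face from an archived reference photo is encoded once and then compared against faces detected in photos gathered from the web. The short Python sketch below illustrates the general idea using the open-source face_recognition library; the file names are hypothetical and the 0.6 threshold is simply the library's default, so this is an illustration of the technique rather than a description of Bellingcat's actual workflow.

# A minimal face-matching sketch using the open-source Python
# "face_recognition" library. File names are illustrative assumptions.
import face_recognition

# Encode the face from an archived reference photo (one-time step).
reference = face_recognition.load_image_file("archived_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# Detect and encode every face in a candidate photo found online.
candidate = face_recognition.load_image_file("club_website_photo.jpg")
candidate_encodings = face_recognition.face_encodings(candidate)

# Compare each detected face with the reference; smaller distances
# mean closer matches (0.6 is the library's default match threshold).
for encoding in candidate_encodings:
    distance = face_recognition.face_distance([reference_encoding], encoding)[0]
    if distance < 0.6:
        print(f"Possible match (distance {distance:.2f})")

In practice, investigators run this kind of comparison at scale across many thousands of scraped photos, which is why a hit on a single club website can surface in a way no manual search would.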


As reported, Klette, who called herself "Claudia", had led an ordinary life: she lived with a partner in Berlin's Kreuzberg area, tutored children, had a dog, and kept a low profile. A Kalashnikov, grenades and an anti-tank grenade launcher were found in her apartment on Sebastianstrasse. Her neighbours were puzzled when they found out that this pleasant woman they regularly interacted with had engaged in acts of terrorism, planted explosives, and committed robberies (the most recent robbery linked to her occurred on 25 June 2016 in Cremlingen, where robbers broke into a money van and took EUR 400,000). However her story ends, and whatever success the police have in finding her two former accomplices, who have also been wanted for thirty years, the fact remains that Daniela Klette was identified with the use of AI. The same tool that can do a lot of damage in the wrong hands can also do much good if used properly. In the case of Klette, AI blew the dust off another box from the "analog X-files."

 

Cases of misidentification


In September 2023, the US-based Innocence Project published the story of a Detroit woman who was detained by police for 11 hours on suspicion of car theft. It turned out that she had been flagged as a possible suspect by police facial recognition software run on CCTV images. Thus the student nurse, eight months pregnant, found herself at the police station defending herself against a false accusation. A month later, the prosecutor's office dropped the investigation for lack of sufficient evidence. Critics caution against relying blindly on AI: like any other tool, it is not always 100 percent reliable.
In the Daniela Klette case, it was argued that journalists found it easier than the police to use AI to search for the former terrorist because the police lacked the authority to use certain types of software. This suggests that Europe approaches the identification of potential offenders somewhat more cautiously than the US, where the situation has already reached an extreme point: some critics say that facial recognition technology (FRT) and other AI technologies exacerbate racial inequities in law enforcement and the criminal legal system. The Innocence Project has documented seven cases of people being wrongfully detained on the basis of FRT. “The technology that was just supposed to be for investigation is now being proffered at trial as direct evidence of guilt,” stated Chris Fabricant, director of strategic litigation at the Innocence Project.

 

14.03.2024