
AI is neither good nor bad. It’s a tool that we can use to both fact-check and spread disinformation

Photo: Piotr Łuczuk's archive

Until recently, deepfake technology was quite clumsy, and it was easy to tell the difference between real and artificially generated material. Today, this is a growing problem. Researchers from the Australian National University have reached a surprising conclusion. “They proved that faces generated entirely by artificial intelligence (AI) appear more realistic than real human faces,” Piotr Łuczuk, PhD, a media and cybersecurity expert and deputy director of the Institute of Media Education and Journalism at the Cardinal Stefan Wyszyński University in Warsaw, told FakeHunter.



The Slovenian government wants to regulate the use of AI in the media. In a draft law, it has proposed, among other things, labelling content created with the help of generative AI (GenAI). This also applies to deepfakes, which would be allowed only in programmes of a comedic or satirical nature or in projects with an educational purpose. Violating these rules could result in a financial penalty of up to EUR 20,000. In your opinion, is this a good direction to take?


These kinds of ideas have been springing up like mushrooms recently. On the one hand, I’m not surprised. On the other, however, it makes me smile. There is a heated debate around the use of AI, both academic and social. The same happened with the internet, digital communication and social media. We are dealing with a progression that will be difficult to halt.
But I also agree that the use of AI—especially in the news media—should somehow be regulated. For the time being, however, I regret to say that I do not see any real and, above all, rational attempts to find a solution. Every proposed regulation, whether at national or EU level, risks "throwing the baby out with the bathwater". Meanwhile, AI in itself is neither good nor bad. It’s a tool that we can use both to fact-check and to spread disinformation.
Returning to the question—let's do a little thought experiment and ask ourselves: if we deprive newsrooms of the use of AI, or force them to disclose whether particular content has been generated by AI, will it change anything? Will those using AI for propaganda and disinformation cease their activities? I sincerely doubt it. Rather than stigmatising and punishing the use of AI in the media, I would suggest raising journalists' awareness and skills.
Over the next few years, the media should learn how to use AI effectively in their work in order to automate certain processes. To be clear, I don't mean that ChatGPT should start writing articles. However, a journalist can creatively use GenAI to synthesise certain content, or to prepare, on the basis of already-written stories, templates for posts on social media and other internet platforms. This is what I meant when I spoke about automating journalistic work—using AI wisely and, above all, creatively.


Is Poland planning to regulate the use of AI?


For quite a long time we have been discussing this and saying that some kind of AI regulation should be introduced. It seems obvious, therefore, that we should soon have more news on this topic. However, observing both Polish and EU legislation, I have the feeling that, despite everything, the regulations will ultimately follow the provisions proposed by the EU. Is this a good move? If we look at what has happened over the years with attempts to regulate social media, combat disinformation, or improve cybersecurity, we will have to be patient.
On the other hand, it is worth pointing out that Poland is among the leaders in innovation and legislation. As an example, I would point to the regulations concerning unmanned aerial vehicles, that is, drones. Solutions developed in Poland over the years have, to a large extent, shaped current EU regulations on drone flights.
So, we have made an important contribution. We could also take the lead on AI. Among media and new technology experts, the debate about regulating AI has been going on for years. All we need to do is gather the best ideas and present them to the world.


AI is also increasingly associated with the production of deepfakes, and one of the most troubling ways in which someone's image can be abused is the creation of pornographic deepfakes. Deepfake pornography has caused quite a bit of controversy; in its early days, around 2017, its victims were mostly celebrities. Today, it can affect all of us. Is there any mechanism to criminalise the unauthorised use of someone's image in pornographic content?


In the Criminal Code and the Civil Code, we find provisions on the protection of personal rights and image. In the case of the internet and the use of AI, however, this is a very complex topic. I do not wish to go into the legal aspects, as that is a task for lawyers. Instead, I would focus on two things—deepfake technology itself, and some important changes in social processes and the rules of communication.
Deepfakes were initially used on a large scale mainly in the porn industry. Why? Because the possibility of "undressing" celebrities, actresses and actors guaranteed huge profits for the entire "industry". As recently as two decades ago, similar effects could only be achieved through crude photomontage or by using body doubles. Today, deepfakes significantly reduce production costs and make it possible to insert any person's image into any kind of context. It can be AI-generated pornographic content featuring the images of our loved ones or, conversely, of our enemies. Here, unfortunately, the creativity is almost limitless. If celebrities, represented by armies of lawyers, cannot protect themselves from this in the long term, what chance do we have?
We have to be aware that the more content (especially multimedia content) we put out on the web, the bigger the digital footprint we leave, and the more AI has to draw on—it has a wealth of data about us, which we regularly provide, driven by what I describe as virtual exhibitionism.
And then there is the second topic. Let’s take a look at how our social relationships have changed over the last two decades. Not long ago, a primary school student had about 20-30 friends in class and another 60 or so in and around school. Only celebrities had to deal with the side effects of recognisability. Today, the average student has not tens, not hundreds, but thousands of friends. Admittedly, they are mainly on social media, but given the scale of the phenomenon, the modern teenager already has to contend with popularity on a par with that of celebrities.
Are young people prepared for this? Do they know how to deal with hate from their peers? From time to time, there are reports in the media about intimate recordings being made public by peers. Let's consider how much of a threat deepfake technology poses in this respect. It is said that false news spreads much faster than the truth. If the entire school or all our colleagues see an AI-generated video, how many people will we be able to convince that it is fake? These are questions we need to answer as soon as possible.


We are hearing more and more often about recordings of politicians generated with the help of AI. This was the case with the fake speeches of the Japanese Prime Minister and the German Chancellor. In both cases, the politicians' offices commented on the alleged pranks, but no one mentioned possible consequences for their authors. Should these types of 'jokes' be taken seriously?


Let me just mention that in the first months of the war in Ukraine, the Russian side released a deepfake video in which Ukrainian President Volodymyr Zelenskiy announced the country's surrender and called for the laying down of arms. For years, deepfake technology was quite clumsy and it was easy to spot the difference between a real and a generated image. Today, this is a growing problem.
Scientists from the Australian National University have come to a surprising conclusion. They showed that faces generated entirely by AI appear more realistic than real human faces. The experiment was reported in the journal ‘Psychological Science’. Imagine the diplomatic and geopolitical implications of this type of footage of politicians once it starts circulating in the media. Let's not kid ourselves—the media will fall for this type of content.
In one of her recent posts, Prof. Aleksandra Przegalińska declared that she has faith in humanity and in our ability to distinguish generated content from reality, and that we will not be so easily fooled by deepfakes and photomontages. However, observing the current capabilities of AI, I would not be so optimistic. This is not an apocalyptic attitude or a desire to frighten readers, but simple realism.


While we are on the subject of realism, there is a problem that unfortunately affects a growing number of people. I am referring to phishing. How can you protect yourself from the next generation of phishing, that is, the kind of 'grandparent scam' modernised by AI? It is estimated that since ChatGPT was made publicly available, the number of phishing emails has increased by 1,265 per cent.


Over recent years, and especially with the dynamic development of the digital economy, the issue of cybersecurity has taken on a completely different dimension, forcing a fundamental change of approach in sectors strategic to the functioning of the state. Not so long ago, threats with the prefix "cyber" were found mainly in science fiction novels and films; nowadays, no one doubts that they are not confined to virtual space but reach, perhaps above all, into the real world, where they pose a very real threat. Failure to recognise the actual problem and the scale of the potential risk, especially in the governmental and banking sectors, poses a very serious threat to strategic parts of Europe's internet infrastructure.
It has therefore been suggested many times that decisive action needs to be taken to improve the situation. Although for many years the issue of cybersecurity was often downplayed on many levels, the administration and banking sectors were quite quick to recognise its seriousness, and appropriate security protocols began to be developed: initially heavily decentralised, but over time much more universal.
However, even when all cybersecurity measures were functioning properly, it often turned out that the problem persisted. As Kevin Mitnick, one of the world's most famous hackers, who later crossed over to the other side and worked in cybersecurity consulting, put it: the weakest link in cybersecurity is most often... the human being—people who are very often completely unaware of the risks involved in using mobile devices and the internet.


Reading about people being scammed out of large sums of money because they believed a story or clicked on a malicious link, one gets the impression that there is a lot of work ahead to improve our resilience to cyberfraud.


Usually, at this point, an argument of this type comes to mind: "I am not in any danger, because I have nothing to hide". It turns out, however, that this kind of thinking is a myth.
Unfortunately, on a national scale, there is still not enough awareness of existing threats, especially in the business sector, among small and medium-sized enterprises (over 65 per cent of the EU market). The reluctance to invest in network security and asset protection means that ICT structures are becoming the 'weakest link' of the internet infrastructure, which may subsequently be attacked or used to facilitate other, even more spectacular attacks. In this regard, the European Union Agency for Cybersecurity (ENISA) sees the need for appropriate training and awareness programmes.
Richard Clarke, a cybersecurity and counter-terrorism expert, has shed some light on the use of modern technology in various types of conflicts, including economic ones. Quoted by The Economist, he calculated that, given today's level of sophistication of information technology, its deliberate use against a potential enemy could lead, in as little as 15 minutes, to a catastrophic failure of the strategic systems responsible for a country's security and its entire infrastructure, transport and logistics. Viruses and computer worms, for example, can be used to manipulate data on the world's stock exchanges or to cut off electricity, so the vision of escalating cyberthreats becomes quite apocalyptic.
Clarke mentioned that in such a situation society would quickly collapse, as withdrawing money from an ATM would be impossible and food would become a scarce commodity. A drastic and at the same time clear example of what cutting people off from free access to ATMs and preventing them from withdrawing larger sums can lead to could be seen during the economic crisis in Greece, or during the first month of the Covid-19 pandemic, when information about alleged limits on ATM and bank withdrawals was spread (also in Poland). Such information spread like wildfire, and it was enough for queues to form in front of most ATMs in large cities.


And what AI challenges will be the most important in the New Year?


Ransomware attacks (taking over data and locking it, accompanied by a ransom demand or an ultimatum threatening its release) will certainly be a significant challenge. In terms of AI, its use in disinformation campaigns is certainly to be expected. I would not underestimate deepfakes either. Then there is the possibility of generating virtual avatars that mimic the facial expressions and body language of the people they are modelled on.
To sum up, there is no shortage of challenges and risks. Nevertheless, I would recommend calm and common sense. AI is a tool. Just as a knife can be used to slice bread or to kill, so AI can be used for both good and bad. At the end of the day, everything rests in human hands but, as I mentioned earlier, humans are the weakest link here.


Interview by: Olga Doleśniak-Harczuk

09.01.24
 
