The struggle for truth


In 2024, more than 50 countries will go to the polls, covering nearly 4.2 billion people around the globe. Against this backdrop, the growth of deepfakes and fake videos is a challenge for both the electorate and the candidates.

Until recently, most of us treated an image, a video or an audio clip as solid proof that something was real or had happened; such material was even admitted as evidence in trials. But that assumption has changed drastically. As AI improves, with computing power and training data growing exponentially, doubt about the images we see and the audio we hear has become part and parcel of everyday life. At the dawn of the AI-generated image, fakes were relatively easy to identify, as fabricated audio or video content was often out of sync or had some peculiar features. That is becoming increasingly difficult.


A video that appeared to show musician Taylor Swift holding a flag promoting Donald Trump went viral on X, formerly known as Twitter, and was seen by more than 4.5 million people. But the video is fake. Increasingly, generative AI will be able to produce perfect fakes: digital clones of what a genuine image or recording would have been. It is a frightening future in which scamsters can impersonate loved ones, anyone's photograph can be converted into pornography, and important political leaders can be shown saying or doing things they never did. Deepfakes and doctored videos and audio are not only causing financial damage but also poisoning relationships between individuals, groups and communities. In the race between the generators and the detectors, the forgers seem to be winning.

The availability of generative AI tools means that forgers no longer need much technology or money. They can work out of homes and garages and wreak havoc in everything from markets to elections to law and order, as recent examples from around the globe have shown.

In this fight for the truth, technology, policy and awareness are all trying to meet the challenge. Many technology-based detection systems are in use, and many more are being experimented with. In the latter category is the 'watermarking' of digital content. The idea is to embed a subtle signal in generated text, images or audio: a tweak too faint for humans to notice but one that detection software can pick up. Watermarking research is an ongoing effort, because in the battle of true versus fake, some safeguards are better than none. A toy sketch of the idea for text appears below.
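To make the idea concrete, here is a minimal, hypothetical sketch in Python, loosely inspired by published statistical watermarking schemes for AI-generated text; it is not any vendor's actual method, and the secret key and function names are invented for illustration. A generator that knows a secret key prefers words that fall on a 'green list' derived from the previous word; a detector holding the same key counts how often that happens. Ordinary human text lands on the green list only about half the time, so an unusually high green fraction hints that the text was machine-generated.

```python
import hashlib

# Toy "green list" text watermark: at generation time, the model would prefer
# words whose keyed hash (seeded with the previous word) falls in an agreed
# half of the hash space. A detector that knows the key counts how often this
# happens; genuine human text should match only about 50% of the time.

SECRET_SEED = "shared-secret"  # hypothetical key shared by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark a word 'green' based on its predecessor."""
    digest = hashlib.sha256(f"{SECRET_SEED}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words come out green

def green_fraction(text: str) -> float:
    """Fraction of words that fall on the green list given their predecessor."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    # A detector would flag text as likely machine-generated when this
    # fraction is far above the ~0.5 expected by chance.
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")
```

Real schemes are far more sophisticated and operate on a model's token probabilities rather than finished words, but the principle is the same: the signal is statistically obvious to a machine holding the key and invisible to a human reader.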

While the techies work on watermarking and other methods of detection, what about us, the ultimate users and consumers of all this content? All new technologies bring challenges of misuse, but societies have always adapted. Perhaps it is time for us to accept that images, videos or audio no longer prove that something happened or did not happen. Applying the 'zero trust' principle of cybersecurity to online content seems the best approach: trust nothing by default, and verify everything. Online content no longer testifies for itself; who posted it becomes as important as what was posted. The origin and trustworthiness of a source will matter more than ever. Maybe the printed word will even regain prominence as we try to ascertain the truth of what we see or read online.


As AI makes fakes easier to produce and more believable, we need to pause and verify what is put before us, more than ever before. Until the machines become smart enough to detect the fakes, our own human reason and intelligence will be the ultimate safeguard.

Published in Lokmat on 18-2-2024 and in The South Asian Times issue dated March 2-8, 2024
