
It is highly likely that you, dear reader, have heard about artificial intelligence and its countless beneficial applications. It is indeed a very useful tool, capable of simplifying tedious administrative tasks and automating processes at the click of a button.

But we must also keep in mind that these applications can be used to perform less ethical “tasks”. In fact, one of these questionable uses is the one I am here to write about: deepfakes.

Deepfakes are altered or synthesised sound, image or video content that can mislead people into believing it is authentic, showing representations of people appearing to say or do things they have never said or done. It is important to note that generating such content requires real recordings of the target person to feed the system — one more reason to strengthen our privacy online.

It should be emphasised that this technology in itself does not have negative connotations, as it can be used for legitimate purposes.

Although the tools and techniques to manipulate this kind of content have existed for years, artificial intelligence applications now make it possible for anyone to manipulate audio, video and images with results of sufficient quality to deceive the average observer.

The use of audiovisual representations is one of the most powerful methods of social engineering that can influence social reality.

This influence can operate on a societal level, through fake news capable of destabilising an entire state, or on an individual level, through content aimed at manipulating a specific person or group of people.

At this point, we might ask: how does this phenomenon affect me?

It is likely that you have consumed content generated with deepfake techniques without even realising it.

In the cinema, for example, one of the techniques used is to “resurrect” (or rejuvenate) deceased actors and actresses, as was the case with Carrie Fisher in one of the latest films in the Star Wars saga.

Another example you may have seen is the “Con mucho acento” advert for a well-known beer brand, featuring the late Lola Flores. Yes, this ad was created thanks to this technology…

This is great news for the audiovisual production industry, because the possibilities of content creation have grown tremendously.

But we need to stop and think about this new technology…

Today, anyone with access to the Internet and a modicum of interest in the subject can create an ultra-falsification of almost anything. There are digital libraries that explain in technical detail how to make deepfakes, and even free software to create this type of content.

Behind these libraries there is a genuine community of computer science experts who refine the techniques for creating synthetic content every day. This freely available knowledge is also exploited by cybercriminals to commit fraud and other crimes.

This means that news about virtual scams is becoming more common every day: from the most implausible, a priori, such as the case of a woman from Granada defrauded of around €170,000¹ through a deepfake of Brad Pitt, to more elaborate schemes such as the so-called “CEO scam”² and its variants, in which a person’s voice is cloned to trigger an action that benefits the cybercriminals, such as the authorisation of a bank transaction³.

There are various techniques for detecting these ultra-forgeries, including specific software developed for this purpose. As average users, we can look at the contours of the face, the shadows, the blinking, the tone of voice, the brightness of the eyes, the coherence of the sentences, and so on.

Therefore, if you receive digital content — audio, video or both — from a person you know, or from someone with whom you must carry out a transaction involving the movement of money or assets, be aware of this possibility and do not let yourself be deceived.