ChatGPT, Bard, Midjourney, DALL-E, and similar technologies offer new opportunities, but they also present new risks.
Four months after the European Parliament adopted its position on the AI Act, the European Union Agency for Cybersecurity (ENISA) released a report warning about the risks associated with generative AI. Even as AI plays an ever more prominent role in our daily lives, the risks of this emerging technology are not always well understood.
An ever-increasing role
Artificial intelligence is playing an increasingly significant role in our daily lives. ChatGPT, which generates text in response to the questions it is asked, is a prime example.
This conversational AI developed by OpenAI has become the fastest-growing consumer application in history. According to UBS, ChatGPT reached 100 million active users just two months after its launch, a milestone that even the viral social network TikTok took nine months to achieve.
Despite its popularity, however, users are not always aware of the risks it poses. In March 2023, ChatGPT was taken offline for several hours after a bug exposed some users' conversation titles and, for a subset of subscribers, payment details.
The European Union is legislating
Faced with the rapid spread of generative AI tools such as ChatGPT and its Google counterpart, Bard, the European Parliament adopted its position on the AI Act on Wednesday, June 14, 2023. The regulation aims to create a framework for deploying AI on the market, with a focus on security, health, and fundamental rights.
In its report dated October 19, 2023, the European Union Agency for Cybersecurity (ENISA) also identified and warned of several categories of AI-related risk. It highlights in particular the rise in cyberattacks.
More sophisticated attacks
With AI, cyberattacks become more effective: they are more convincing and can be carried out at a far larger scale.
The era of smishing attempts (phishing via SMS) and phishing emails riddled with spelling mistakes and inconsistencies is over. With generative AI, cybercriminals can craft far more convincing scams. The same goes for phone scams: with the development of deepfakes, attackers can impersonate other people by cloning their voices!
Risks to data
The risks of generative AI also lie in the information entrusted to these tools. Users should be careful about what they share, as these services do not guarantee the confidentiality of that data.
Samsung recently learned this the hard way when employees reportedly entered confidential, strategic information into ChatGPT. Like many generative AI services, ChatGPT could use such input as training data to improve the accuracy of its future responses, meaning the information could later surface in answers given to users outside the company.
Corporate secrets are just a click away, and it is easy to imagine the devastating impact if they fell into the hands of a malicious actor.