When the European Data Protection Regulation was approved in 2016, artificial intelligence was best known for beating the world’s best player at a 2,500-year-old board game.
Seven years later, AI systems have quickly become intrinsic to our internet lives, a sudden shift that few were prepared for.
In the rush to address their newfound omnipresence, European regulators are trying to rein them in using a law designed for another era of the internet, with Italy and other nations taking aim at ChatGPT under the EU’s GDPR.
The chatbot is the best-known of a number of conversational AI platforms that can emulate elaborate human conversations and have exploded in popularity.
Launched in November 2022, ChatGPT set the record for the fastest-growing user base, gaining 100 million active users in two months.
On March 30, the Italian data protection authority notified OpenAI, the U.S. company that administers ChatGPT, of a provisional restriction on Italian territory.
The Italian Garante did so because it identified the absence of a “legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
ChatGPT draws on data from across the internet to learn how to emulate human conversation. The problem is that OpenAI has never provided information on its data processing, and under EU law it cannot scrape data from third-party platforms (such as Facebook or LinkedIn) unless it can prove that it has an appropriate legal basis.
Following the decision of the Italian authority, OpenAI decided to suspend the service in Italy.
In a subsequent measure, the Italian watchdog stated that it would lift the suspension if OpenAI took the required actions by April 30, 2023.
According to Italy, OpenAI must verify users’ age before they use the chatbot and explain how and why it processes people’s data.
The company will have to ask users for their consent, but it will be able to justify the use of non-users’ data through the legitimate interest clause, which allows information to be collected if it is related to business activities and does not harm the people involved.
ChatGPT will also have to allow non-users to rectify incorrect information about them.
OpenAI will also have to conduct an information campaign via Italian television, radio, websites, and newspapers by May 14 to inform people how it uses personal data to train its ChatGPT algorithm.
The Italian Garante’s decision was based on the application of the provisions of EU Regulation 2016/679 on the protection of personal data. This regulation is commonly called the General Data Protection Regulation (GDPR).
GDPR is Europe’s landmark privacy and security law, which imposes obligations on organizations anywhere in the world, so long as they target or collect data related to people in European countries.
This regulation levies harsh fines against those who violate its privacy and security standards, with penalties reaching into the tens of millions of euros.
Italy has historically been at the forefront of data protection and privacy. One of the founding fathers of the current European framework was Italian jurist Stefano Rodotà, who pushed for the inclusion of privacy and personal data protection among the EU’s fundamental rights.
In the days following the Italian Garante’s decision, authorities in other countries weighed what action to take against ChatGPT. Although none of them followed Italy in blocking the service, France, Germany, and Ireland also explored what measures could allow artificial intelligence to coexist with data protection.
Only Germany actively distanced itself from Italy’s action, calling on the EU for common regulations and criticizing the work of the Italian watchdog.
In the past, the European Data Protection Regulation has proven to be a pioneering means of defending user privacy, thanks to its strong focus on individual rights. GDPR requires platforms to disclose what information they hold on users and how this information is used.
There’s just one problem: the text of GDPR was not designed to meet the challenges offered by the AI systems currently in circulation.
“At the time, nobody thought of systems like ChatGPT,” Vincenzo Tiani, an expert in privacy and personal data protection, told the Daily Dot.
Theoretically, some AI systems could be written to comply with GDPR by only collecting information from consenting people, but at this stage, it is impossible to tell if OpenAI is making any effort in that regard. The company does not make ChatGPT’s inputs public.
The only reference to automated processing in the original text of GDPR comes in Article 22, which concerns a user’s right not to be judged solely by a machine when a process produces legal effects or similarly significant consequences for them. Given GDPR’s silence on anything beyond that, people worry the regulation cannot be brought to bear on AI.
“Most of the criticism [of GDPR blocking ChatGPT] is the result of intellectual laziness because in reality the GDPR is very elastic,” Tiani argued. “The regulation contains principles, and you have to prove to the authority that these principles have been respected.”
But Europeans are not entirely in agreement. A part of the public is actively asking European institutions for an exception to the application of the GDPR in order to safeguard their work using artificial intelligence systems.
Among them is Aindo, an Italian startup with 15 employees that feeds AI with “synthetic” data, training algorithms without the need to anonymize real people’s data.
Others compare Italy’s restrictions to the Luddite policies of countries that stifle innovation.
Several Italian researchers and entrepreneurs have, in recent days, launched a petition calling for the reinstatement of ChatGPT in Italy, as well as an update to AI regulations.
“The GDPR is not too dated and certainly can be interpreted effectively by national supervisory authorities. It can be an appropriate legal basis for the use of data within AI systems,” Italian MEP Brando Benifei told the Daily Dot, in response to some of the backlash.
While the GDPR is an advanced tool, the EU is nevertheless discussing a specific legislative proposal to regulate the use of artificial intelligence: the AI Act, of which Benifei is one of the authors. “The Artificial Intelligence Regulation we are working on sets the GDPR as a rule with very few exceptions,” Benifei added, “one of which occurs when sensitive information is used to correct discriminatory or otherwise erroneous biases within AI systems.”
This proposal takes a risk-based approach and suggests new rules to be added to the GDPR to ensure that the AI systems used in the EU are secure, transparent, and ethical.
According to Tiani, one of the AI Act’s important elements is a fundamental rights impact assessment that companies deploying high-risk AI systems would have to carry out.
This would be an important tool for the protection of fundamental rights if integrated with the provisions of the GDPR. Tiani agreed that it is a “new European approach that makes sense from the perspective of protecting people’s rights.”
Benifei also explained that the negotiations within the European Parliament are at the final stage, adding “we have many meetings of the [authors] these days to negotiate the text and then it will go to a vote.” According to the MEP, the text will reach the European Commission at the latest by early June, after which negotiations with individual member states will begin. He hopes this stage will be concluded by the end of 2023 so as to reach the final vote in 2024, before the end of the current European legislature.
That means Europe, once again, may be at the forefront when it comes to regulating new tech.