Artificial Intelligence and Data Protection: an impossible relationship?


The development of Artificial Intelligence (AI) is one of the main challenges facing humanity in the near term. Nevertheless, the large geopolitical blocs are approaching it from different perspectives.

The EU's position is clear: AI development must be based on respect for data privacy. Unlike the Chinese and U.S. models, Europe takes a restrictive view of AI's basic raw material: not just any data may be fed to the machine. This is why it is reasonable to ask whether the European model is fully compatible with the development of Artificial Intelligence.

This was the main subject of a recent conversation between Ana Caballero, Vice President of the European Association for Digital Transition, and Leonardo Cervera, Director of the Office of the European Data Protection Supervisor and a lawyer specialising in EU law.

Full interview (Spanish)

Cervera argued that Artificial Intelligence and data protection are fully compatible, "provided that there are clear policies and professionals who apply them with judgment and independence". In this area, he said, Europe is in an advantageous position, "because it has shown great leadership in data protection. And this is no coincidence, but rather the result of clear political commitment and hard work". So much so, the lawyer said, that one can speak of a 'Brussels effect': "Thanks to EU leadership, other regulations are adapting to our standards, and not us to theirs, which are usually not as respectful of data protection".

One clear sign of the EU's commitment to AI grounded in data protection is that the Commission's proposed Regulation on the matter gives the European Data Protection Supervisor, currently Poland's Wojciech Wiewiórowski, a leading role in the development of AI.

Regulation of Artificial Intelligence

The digital transition, a process that is creating enormous opportunities, also has worrying side effects. Some of these came up in the conversation between Caballero and Cervera: disinformation, the polarisation reinforced by social media, and the scant respect shown for people's privacy. As the EADT often stresses, paying with data is still paying. "We have all fallen into the trap set by the big technology corporations: their supposed gratuity", Cervera said. "The problems they are causing would have been avoided if the necessary regulatory measures had been taken ten or fifteen years ago", he lamented.

Europe must learn from those mistakes and not underestimate the importance of regulation. According to this data protection expert, the path, differences aside, has already been charted by the development of commercial aviation. "Today, there is nothing safer than boarding an airplane, and the same should happen with AI. But when commercial aviation began, there were many accidents, and the political class became aware that high standards were needed".

The key, Cervera insists, is to draw red lines without fear. In the development of AI, one example would be drastically limiting facial recognition systems based on biometric cameras, "to prevent massive harm to people's privacy; we could ruin anonymity in public spaces, which has always belonged to all humankind", Cervera said. Human rights cannot be subordinated to technology, however great a leap forward that technology, such as Artificial Intelligence, may represent.