The challenges of the digital transition are complex and span many areas: from the defence of democratic values to purely legal matters, as well as free competition, defence against cyber-terrorism and data protection. Consequently, several European institutions are responsible, to a greater or lesser extent, for designing a proactive European response that keeps pace with the United States and China while maintaining the standards that have made the European Union a tool for progress for more than 440 million citizens.
While the European Council focuses on negotiating the EU budget for the 2021-2027 period, leadership on the digital transition is visibly being taken by the European Commission. The team led by Ursula von der Leyen, who began her mandate in December 2019, has just published two essential documents on digitisation: the European data strategy and the White Paper on Artificial Intelligence (AI).
The first lays the foundations for a single European space for the circulation of data and opens the door to requiring the largest technology companies to share data with third parties, as has been the case in the payments sector since the second Payment Services Directive (PSD2) took effect.
With the White Paper on AI, the European Commission draws a clear line between the most sensitive sectors, such as health, security and transport, which will be subject to close scrutiny, and other areas where strict regulation is considered unnecessary.
Von der Leyen is one of the visible faces of this EU effort to forge its own path in the data economy, but there are other names in the Commission to bear in mind. Among them are Margrethe Vestager, European Commissioner for Competition; Věra Jourová, Vice-President for Values and Transparency; Thierry Breton, Internal Market Commissioner (with extensive senior experience in technology and telecommunications companies); and Didier Reynders, European Commissioner for Justice.
Matters relating to the digital transition are also the subject of debate and decisions in the European Parliament, which adopted a resolution on artificial intelligence in February 2020, expressing its concern about the risks that automated, algorithm-driven decision-making can pose to consumer protection. Parliament called on the Commission to clarify how it will guarantee that consumers are protected against unfair or discriminatory commercial practices.
The European Data Protection Board warns Google on user privacy
In this area of consumer protection, and given the business model of the major technology platforms, everything related to data protection takes on special importance. In the EU, this area is the responsibility of the European Data Protection Supervisor (a position held since December 2019, for a five-year term, by the jurist Wojciech Wiewiórowski) and the European Data Protection Board (EDPB). The EDPB was created in 2018 with the goal of ensuring that the General Data Protection Regulation (GDPR) and the Directive on data protection in criminal matters are applied consistently throughout the EU as well as in Norway, Liechtenstein and Iceland.
A good example of the work of the latter body is its recent warning to Google about its purchase of the wearables manufacturer Fitbit, whose devices track the sports performance and other data, such as hours of sleep and location, of its 28 million active users. The EDPB has warned the tech giant that it will have to take measures to mitigate the risks to user privacy stemming from this acquisition if it wants to obtain approval from the authorities.
The EDPB, which advises the Commission on the application of data protection rules, is very clear: the combination and accumulation of Fitbit data with Google's own data "could pose a high risk for fundamental rights to privacy and the protection of personal data".