
The EU’s AI Act Would Fuel the Rise of Responsible AI and Privacy Tech Innovations
Just As The GDPR Has Fueled Privacy Tech
by Lourdes M. Turrecha, with edits and research by ChatGPT-4 (artificial intelligence) and Travis Yuille (intelligent human)
At The Rise of Privacy Tech (TROPT), we’re usually immersed in the tech side of privacy, steering clear of legal jargon. But this time, it’s different. The proposed EU AI Act, if passed, could fuel privacy tech and responsible AI innovation. That’s why we’ve dedicated this deep dive to unpack the current proposal, its key provisions, and its far-reaching implications for privacy tech and the broader tech industry.
Background
The proposed EU AI Act’s impetus traces back to the growing concerns surrounding the ethical use of artificial intelligence and its potential impact on fundamental rights, privacy, and public trust. In response to these concerns, the European Commission published a White Paper on AI in February 2020, which laid the groundwork for a comprehensive legal framework. Following a public consultation period, the European Commission unveiled a proposal to regulate AI in April 2021, marking a crucial step toward establishing a harmonized set of rules for AI systems across the EU and likely beyond.
The current amended proposal has since been under review by the European Parliament and the Council of the European Union, with debates and amendments shaping its final form. It is expected to pass later this year and to take effect in 2025 or 2026.
If passed, the AI Act will significantly impact the development, deployment, and use of AI technologies, setting a global precedent for responsible AI governance. The AI Act would also contribute to the continued rise of privacy technologies, just as the General Data Protection Regulation (GDPR) has; today marks the GDPR’s five-year anniversary from its effective date of May 25, 2018.
Summary of Key Provisions (And Comparisons to GDPR)
The proposed AI Act brings forth several key provisions, many of which draw parallels to the GDPR. These provisions could still undergo changes; however, the final provisions will likely remain close to the current draft, given the EU’s legislative track record with laws that have already passed, such as the GDPR.
Scope. Under Article 2, the proposed AI Act would apply to: AI systems placed on the market or put into service within the EU, regardless of the system’s place of origin; users of AI systems located within the EU; and providers and users of AI systems located outside the EU where the output produced by the system is used in the EU. This broad extraterritorial scope closely mirrors the GDPR’s reach and applicability to companies outside of the EU.
Risk-Based Approach. The proposed AI Act would classify AI systems into different risk categories (minimal or limited, high, and unacceptable risks, as outlined in the explanatory memorandum to the EU’s proposed AI Act) based on their potential impact on users and society. The proposal imposes more stringent requirements on high-risk AI systems and prohibits unacceptable risk AI systems. Again, this is akin to the GDPR’s risk-based approach to data processing.
Requirements for High-Risk AI Systems. High-risk AI systems (further defined under Article 6, with examples illustrated in Annex III of the proposed AI Act) must meet specific requirements regarding transparency, data quality, documentation, human oversight, and robustness, outlined under Title III, Chapter 2. These systems must also undergo “conformity assessments,” which are processes that demonstrate whether the Title III, Chapter 2 requirements have been fulfilled before the systems enter the market. The GDPR contains an analogous requirement: the data protection impact assessment (DPIA) for high-risk data processing.
Prohibition of Certain AI Practices. Article 5 of the proposed AI Act prohibits AI systems posing an unacceptable risk, including those manipulating human behavior, exploiting vulnerabilities, or enabling government social scoring. The GDPR also contains prohibitions against the processing of special categories of personal data.
Transparency Obligations. Under Article 52, AI systems that interact with humans, generate content, or involve biometric identification must disclose their AI-based nature to users. Transparency and notice requirements are shared by both the proposed AI Act and the GDPR.
European Artificial Intelligence Board (EAIB). The proposal establishes the EAIB under Article 56(1) to provide guidance, share best practices, and ensure consistent application of the AI Act across Member States. This board draws inspiration from the European Data Protection Board (EDPB) established by the GDPR.
Fines and Penalties. Non-compliance with the AI Act could result in fines of up to 7% (at least according to the current revised draft) of the global annual turnover of the legal entity responsible for the AI system, depending on the severity of the infringement, as outlined under Article 71. This is a significant increase from the GDPR’s fines of up to 4% of global annual turnover; the highest GDPR fine to date, €1.2 billion, was levied earlier this week against Meta.
National Competent Authorities. Under Article 30 of the proposed AI Act, each EU Member State would need to designate authorities responsible for monitoring and enforcing compliance with the Act, similar to the supervisory authorities required by the GDPR.
Specific Implications for Responsible AI and Privacy Tech Innovation
If passed, the proposed EU AI Act would have several significant implications for responsible AI and privacy tech development.
The proposed AI Act would lead to increased development of responsible AI technologies.
AI technologies would be categorized based on their potential risks, with high-risk AI systems subject to stricter regulatory requirements. This differentiation would require developers to consider the potential risks of their AI systems and adopt necessary safeguards and controls.
AI developers and providers would need to ensure that their technologies meet specific requirements related to transparency, accountability, data quality, technical documentation, privacy, and human oversight, among others. This would likely lead to increased investment in research and development to create responsible AI systems that adhere to these standards.
The proposed AI Act would impact the competitive landscape by creating barriers for non-compliant AI systems, potentially hindering their access to the EU market. (Recall that the proposed AI Act would ban from the EU market AI systems that pose an unacceptable risk, including systems that manipulate human behavior, exploit vulnerabilities, or enable social scoring by governments.) Conversely, companies that can demonstrate compliance with the AI Act would likely have a competitive advantage in the EU market.
The proposed AI Act would lead to increased AI trust and adoption.
The proposed AI Act would promote transparency in AI systems, making it easier for users to understand how decisions are made and ensuring that they are aware of their interactions with AI technologies. This could lead to increased trust in AI systems and a greater willingness to adopt them.
The proposed AI Act would foster AI innovation, in general.
With increased AI adoption and a clear regulatory framework that encourages AI investment, research, and development, the proposed Act could lead to advancements in AI technologies that benefit society.
The proposed AI Act would also foster adoption of privacy technologies in AI and the development of privacy-centric AI technologies.
The proposed AI Act emphasizes the importance of data protection and privacy, particularly for high-risk AI systems. This focus would likely drive the development and adoption of privacy technologies, such as confidential computing, differential privacy, federated learning, homomorphic encryption, and privacy code scanners, as AI developers strive to comply.
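To make one of the named techniques concrete: differential privacy adds calibrated noise to query results so that no single individual’s record measurably changes the output. Below is a minimal sketch of the Laplace mechanism in plain Python; the function names and example query are illustrative assumptions, not drawn from any particular library.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    # A counting query has sensitivity 1 (adding or removing one
    # person changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon yields an epsilon-differentially-private count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

noisy = dp_count(range(100), lambda r: r % 2 == 0, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; a production system would rely on a vetted differential privacy library rather than hand-rolled sampling.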
The proposed AI Act would complement the GDPR in promoting privacy and data protection in the AI sector. AI developers and providers would need to ensure that their technologies meet both the GDPR and AI Act requirements, creating a strong incentive for developing privacy-centric AI solutions.
The proposed AI Act emphasizes the need for high-quality data sets and data minimization when training AI systems. This focus would encourage developers to explore innovative techniques for data anonymization, data synthesis, and data-efficient learning, ultimately fostering advancements in privacy technologies.
As the EU’s regulatory framework for AI emphasizes privacy, there would be an increased market demand for privacy technologies in AI. This demand could stimulate innovation and investment in the privacy technology sector, both within the EU and globally.
Just as the GDPR fueled the emerging privacy tech industry, if adopted, the proposed AI Act would fuel the rise of responsible AI and privacy technologies.
Other General Takeaways for Tech Startup Founders, Investors, and Operators
In addition to the proposed AI Act’s responsible AI and privacy tech implications, founders, investors, and operators across the broader tech industry would be impacted by its broad scope and reach. Below are some more general takeaways for the tech industry:
- Understanding the regulatory landscape: The EU’s proposed AI Act will provide a harmonized legal framework for AI systems. Founders and investors need to familiarize themselves with these regulations to ensure their startups are compliant, avoiding potential legal repercussions and fines, which could go up to 7% of a startup’s global annual turnover.
- Enhanced focus on risk management: The proposed AI Act classifies AI systems based on risk levels. High-risk AI systems face stricter regulations, implying that AI startups working with such systems will need a robust risk management strategy.
- Privacy and transparency requirements: Given the proposed AI Act’s emphasis on privacy and transparency, founders should ensure their AI solutions respect user data privacy and can clearly explain their AI systems’ decision-making processes to users.
- Competitive advantage: Compliance with the proposed AI Act could be a strong selling point for startups, offering a competitive edge in a market increasingly aware of data privacy and security issues.
- Encouraging responsible AI: The proposed AI Act encourages the development of “responsible AI”. Founders should consider this in their product development strategies, while investors should factor it into their investment decisions.
- Innovation and R&D: The proposed Act is likely to drive research and development in responsible AI and privacy technologies. Investors and founders should consider how they can leverage these advancements.
- Long-term planning: Given the significant potential impact of the proposed AI Act on the tech industry, founders, investors, and board directors should incorporate its implications into their long-term planning and strategy.
- Global influence: As the EU is a significant global player, the proposed AI Act could influence AI regulations globally, just like the GDPR did with data protection laws. Founders and investors should consider this in their global expansion strategies.
- Ethics and AI: The AI Act underlines the importance of ethical considerations in AI development. Startups should prioritize ethical AI development practices, which could also attract ethically-minded investors and customers.
- Impact on fundraising: Compliance with the AI Act could influence fundraising, as some impact investors may prefer startups that are compliant with these regulations, viewing them as less risky and more future-proof.
TROPT AI Global Summit 2023 (Privacy. Security. Ethics. Trust. Safety.) on September 26, 2023.