The European Union’s AI Act has the potential to shape AI governance globally and may ultimately have an even broader impact than the GDPR. While the GDPR set the standard for data privacy worldwide, influencing similar legislation across the globe, the EU AI Act extends regulatory reach into artificial intelligence itself, addressing critical issues of fundamental rights, safety, and consumer protection. This new legislation is poised to reshape how AI systems are governed, not just in Europe but around the world, as more countries move to adopt AI regulations.
The EU AI Act officially took effect earlier this month and is already being touted as the next landmark piece of global legislation. Just as the GDPR became the model for data privacy laws worldwide, the EU AI Act is expected to become the foundation for future AI regulation. The U.S., for instance, has shown a willingness to collaborate with the EU on AI initiatives. The federal government has already published its Blueprint for an AI Bill of Rights, a framework designed to guide the responsible development and deployment of AI systems. This U.S. initiative, built around five key principles, marks a critical first step toward a comprehensive AI regulatory framework that could follow the EU’s lead in protecting citizens from AI risks.
Setting Standards for AI Governance
While GDPR focuses on protecting personal data, the EU AI Act takes a more comprehensive approach by regulating the use of AI systems themselves. GDPR applies to any personal data processing, regardless of whether AI is involved, but the EU AI Act goes further by creating distinct regulations for AI systems that pose risks to individuals’ fundamental rights, safety, or consumer protection.
One key difference between the two regulations is the level of specificity. While the GDPR offers broad principles such as fairness, security, and lawfulness, the EU AI Act delves deeply into technical requirements. It outlines strict standards for data quality, human oversight, transparency, and accuracy, establishing a risk-based regulatory framework. AI systems are categorized based on their potential risks, and higher-risk systems are subject to stringent conformity assessments.
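To make the risk-based framing concrete, the sketch below models the Act’s commonly cited four risk tiers as a simple lookup. The tier names follow the Act’s familiar risk pyramid, but the obligation summaries are simplified paraphrases for illustration, not legal text.

```python
# Simplified model of the EU AI Act's risk pyramid. Tier names follow the
# commonly cited categories; the obligation strings are loose paraphrases
# for illustration, not an authoritative statement of the law.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (e.g., social scoring by public authorities)"
    HIGH = "conformity assessment, data governance, human oversight, logging"
    LIMITED = "transparency duties (e.g., disclose chatbots, label deepfakes)"
    MINIMAL = "no new obligations; voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """Return the simplified obligation summary for a risk tier."""
    return tier.value

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.name:>12}: {obligations(tier)}")
```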
The EU AI Act’s approach ensures that AI developers and users remain accountable for their systems, marking a shift from individual rights protection under the GDPR to a system-oriented focus on safety and ethical responsibility. This sets a new global benchmark for AI development, creating a more regulated environment that will likely influence AI policies worldwide.
Balancing Innovation with Regulation
Despite the stricter rules, the EU AI Act isn’t designed to stifle innovation. In fact, regulators hope it will spur responsible innovation by establishing a framework that promotes confidence in AI. By encouraging transparency and safety, the Act aims to foster trust in AI technologies, which could increase investment and accelerate research. The EU AI Act also supports development in key areas such as environmental sustainability, public engagement with AI, and diversity.
However, not all sectors will find the new regulations equally beneficial. For example, AI applications in areas like biometric identification and social scoring are already facing restrictions. In Spain, recent guidelines on biometric data have prompted companies to replace biometric systems with alternative technologies such as RFID. This is because the regulations now require companies to conduct extensive privacy impact assessments and implement stringent data protection measures, which can be costly and time-consuming.
Other sectors, such as healthcare and education, are likely to face significant challenges due to the sensitive nature of the data they handle. Compliance with the EU AI Act may lead to higher operational costs, more extensive administrative processes, and delays in innovation. The act’s enhanced accountability requirements could burden smaller organizations that lack the resources to invest heavily in compliance measures.
The Road to Compliance
Navigating the complexities of the EU AI Act will require significant investment in compliance efforts from companies using AI systems. To meet the requirements, organizations will need to take several proactive steps:
- Ensuring Data Quality: AI systems should be trained on relevant, representative, and accurate data sets. Any biases, errors, or gaps in the data could lead to noncompliance (see the illustrative screening sketch after this list).
- Transparency and Justification: Organizations must create verifiable methods for processing data and be able to provide clear explanations for AI-generated decisions.
- Error and Bias Reporting: Companies must report any biases, inaccuracies, or errors in AI systems to regulators and affected individuals promptly.
- Security Measures: Access controls, confidentiality safeguards, and other security practices must be implemented to protect AI systems from unauthorized use or data breaches.
- Employee Training: Staff must be educated on responsible AI use and the importance of adhering to privacy and safety regulations.
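As a minimal sketch of what a data-quality and bias screen might look like in practice, the example below checks a tabular training set for missing values, group representation, and per-group outcome rates. The column names ("age", "sex", "label") are hypothetical placeholders, and none of the checks or thresholds here are prescribed by the Act itself.

```python
# Minimal data-quality and bias screen for a tabular training set.
# Column names are hypothetical placeholders; the checks are illustrative,
# not values or tests mandated by the EU AI Act.
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    report = {}
    # Gaps: share of missing values per column.
    report["missing_rates"] = df.isna().mean().to_dict()
    # Representativeness: how balanced are the groups in the data?
    report["group_shares"] = df[group_col].value_counts(normalize=True).to_dict()
    # Crude bias signal: positive-label rate per group.
    report["positive_rate_by_group"] = (
        df.groupby(group_col)[label_col].mean().to_dict()
    )
    return report

if __name__ == "__main__":
    df = pd.DataFrame({
        "age": [34, 51, None, 42, 29, 61],
        "sex": ["f", "m", "f", "m", "f", "m"],
        "label": [1, 0, 1, 0, 1, 1],
    })
    for key, value in data_quality_report(df, "label", "sex").items():
        print(key, value)
```

In a real compliance workflow, flagged gaps or skewed group rates would feed into the documentation and remediation steps described above rather than simply being printed.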
In addition to these primary requirements, companies must also be mindful of lesser-known obligations under the Act. For instance, organizations will need to monitor their AI systems post-deployment, continuously assessing performance, safety, and market impact. Any serious incidents or malfunctions must be reported to the appropriate national authorities. This post-market monitoring is crucial for compliance; failure to monitor and report could result in significant legal and financial penalties.
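A minimal sketch of what such post-deployment monitoring could look like in code is shown below: live accuracy is tracked over a sliding window and a flag is raised when it degrades. The class name, window size, and threshold are illustrative assumptions, not values prescribed by the Act.

```python
# Illustrative post-market monitoring hook: track live accuracy over a
# sliding window and flag a potential incident when it degrades.
# The window size and threshold are hypothetical, not Act-prescribed values.
from collections import deque

class DeploymentMonitor:
    def __init__(self, window: int = 500, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, ground_truth) -> None:
        """Log whether a deployed prediction matched the later-known outcome."""
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def degraded(self) -> bool:
        """Return True once a full window shows accuracy below the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough feedback yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.min_accuracy

# In production, record() would be called as labeled feedback arrives;
# a True result from degraded() would trigger the organization's internal
# incident-review and, where applicable, regulator-notification process.
monitor = DeploymentMonitor(window=100, min_accuracy=0.90)
```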
The Challenges Ahead
Despite the potential benefits of the EU AI Act, compliance will not come easily. A recent survey presented by the privacy rights organization NOYB (None of Your Business) revealed that 74% of European data protection professionals believe that authorities would find GDPR violations within most companies. This statistic highlights the pervasive risk of noncompliance under existing data protection rules, and with the EU AI Act adding further layers of oversight, companies will face even greater challenges in meeting the new standards.
Regular oversight, audits, and system updates will be necessary to ensure ongoing compliance. Smaller companies may find the costs and administrative burden overwhelming, especially if their use of AI doesn’t pose significant risks but still requires full compliance. This could lead to disproportionate costs for certain businesses, stifling innovation where it’s most needed.
The Future of AI Regulation
As AI technologies continue to evolve, so too will the regulatory frameworks designed to govern them. The EU AI Act is just the beginning, likely to inspire similar legislation worldwide. The Act sets a global precedent for responsible AI development, aiming to protect fundamental rights while encouraging innovation in a rapidly advancing field.
In the years to come, governments around the world will need to balance regulation with the need to foster technological growth. Countries that follow the EU’s example may create their own frameworks based on the Act, just as GDPR inspired global data privacy laws. The U.S., for instance, has already taken its first steps toward national AI regulation with the AI Bill of Rights blueprint, signaling the potential for a coordinated global approach.
While the EU AI Act introduces challenges, it also offers a vision for the future of AI governance. By establishing clear standards for safety, transparency, and accountability, the Act has the potential to ensure AI is developed responsibly, benefiting society as a whole while minimizing risks. Its impact may ultimately surpass that of the GDPR, reshaping the global AI landscape for years to come.
Author: Shernaz Jaehnel, Attorney at Law, Certified Data Protection Officer, Compliance Officer
DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com