The European Union’s new Artificial Intelligence (AI) Act comes into force on 1 August 2024 and represents a significant regulatory milestone aimed at ensuring the ethical and responsible deployment of AI solutions across all industries. With fines of up to €35 million or 7% of global turnover, whichever is higher, for non-compliance, understanding and adhering to this legislation is crucial for companies leveraging AI technologies.

We appreciate the importance of aligning AI deployment with your organisation’s values, and of identifying and mitigating AI risks to stay compliant with these new regulations. This guide covers what you need to know about the EU AI Act and how your organisation can remain compliant.

Understanding the EU AI Act

The EU AI Act aims to regulate AI systems based on their potential risks to individuals’ rights and safety. It classifies AI systems into different risk categories, with corresponding regulatory requirements for each category. Key areas of focus include risk management, data governance, transparency, human oversight, and cybersecurity.

The Act splits the applications of AI into four risk categories:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk

However, most provisions do not apply immediately; companies have until 2026 to comply. The exceptions are the provisions on prohibited AI systems, which apply after six months, and those on general-purpose AI, which apply after 12 months.
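To make this tiered structure more concrete, the short Python sketch below shows how an organisation might record systems in an internal AI inventory and look up when obligations begin to apply. The category names come from the Act itself, but the record structure, timeline mapping, and example system are illustrative assumptions of our own, not an official schema.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers named in the Act, from most to least regulated.
class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations (risk management, oversight, etc.)
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no new obligations

# Simplified mapping from risk tier to months after entry into force
# (1 August 2024) before the relevant provisions apply, per the timeline
# above: prohibitions after six months, most other provisions by 2026
# (24 months). General-purpose AI follows its own 12-month track, which
# this sketch leaves out.
MONTHS_UNTIL_APPLICABLE = {
    RiskCategory.UNACCEPTABLE: 6,
    RiskCategory.HIGH: 24,
    RiskCategory.LIMITED: 24,
    RiskCategory.MINIMAL: 24,
}

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    category: RiskCategory

def months_until_applicable(system: AISystemRecord) -> int:
    """Months from entry into force until obligations apply to this system."""
    return MONTHS_UNTIL_APPLICABLE[system.category]

# Example: recruitment screening is among the high-risk uses listed
# in the Act's Annex III.
cv_screener = AISystemRecord(
    name="cv-screener",
    purpose="Ranks job applications for recruiters",
    category=RiskCategory.HIGH,
)
print(months_until_applicable(cv_screener))  # -> 24
```

A real inventory would, of course, need legal input to classify each system correctly; the point of the sketch is simply that the compliance clock differs by category.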

The Act also mandates that all AI systems placed on the EU internal market must comply with its requirements. Member States are required to establish governance bodies to ensure AI systems adhere to the Act’s guidelines. This mirrors the establishment of AI Safety Institutes in the UK and the US, a significant outcome of the AI Safety Summit hosted by the UK government in November 2023.

EU AI Act in the Real World

We have taken an in-depth look at the Act and considered some real-world examples where the new regulations would come into effect. For example, the Act bans AI systems that use subliminal, manipulative, or deceptive techniques to significantly alter a person’s behaviour in ways that impair their ability to make informed decisions, leading to potentially significant harm.

In the real world, this would prevent digital advertising platforms from using AI to send subliminal messages that exploit psychological vulnerabilities or coerce individuals into making decisions against their best interests, such as unnecessary purchases or unhealthy behaviours. There are plenty more examples of how different AI solutions will be monitored through the new Act, and our team is always available to help you navigate these regulations.

Another important example is applications that build facial recognition databases. Building these databases from online images without consent poses severe risks of privacy infringement and unauthorised surveillance. The Act stipulates that the creation or expansion of facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is now prohibited.

Supporting Responsible AI Deployment

To help organisations navigate the complexities of the EU AI Act, we offer an AI Ethics and Governance assessment, modelled on the acclaimed NIST AI Risk Management Framework. Additionally, tools like Decipher can assist in AI-powered code analysis and documentation, ensuring transparency and traceability across AI-driven projects, which is key for regulatory compliance and mitigating risk.

The assessment provides organisations with a report evaluating their current enterprise and IT governance structures, recommending responsible AI guidelines and frameworks, and identifying AI risks. To support your organisation even further, here are five actionable steps to help with regulatory compliance:

  1. Conduct a Thorough Gap Analysis:
    Identify where your current practices diverge from the requirements of the EU AI Act. This analysis will highlight areas of non-compliance and provide a roadmap for necessary changes (see the sketch after this list).
  2. Develop and Implement Governance Measures:
    Establish a robust risk management strategy that includes systematic documentation processes. This ensures that all aspects of AI deployment are monitored and managed in accordance with regulatory standards.
  3. Provide Comprehensive Training:
    Educate your team on the compliance requirements and best practices for AI governance. Continuous training helps boost stakeholder awareness of their roles and responsibilities in maintaining compliance.
  4. Undertake Continuous Monitoring:
    Regularly review and update your compliance measures to adapt to any changes in the regulatory landscape. Continuous monitoring helps in promptly addressing any new risks or compliance issues.
  5. Establish Enterprise-Wide AI Governance:
    Integrate people, processes, and technology to create an enterprise-wide AI governance framework. This holistic approach helps align AI deployment with your organisation’s values and regulatory requirements.
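To make the first step concrete, here is a minimal sketch, assuming a simple checklist model of our own devising, of how a gap analysis might be tracked. The requirement areas echo the Act’s key focus areas mentioned earlier; the class and method names are illustrative, not part of any official tool.

```python
from dataclasses import dataclass, field

# Requirement areas to assess, echoing the Act's key focus areas named
# earlier in this guide. A real analysis would break these down into
# the Act's individual articles.
REQUIREMENT_AREAS = [
    "risk management",
    "data governance",
    "transparency",
    "human oversight",
    "cybersecurity",
]

@dataclass
class GapAnalysis:
    """Tracks which requirement areas current practice already covers."""
    covered: set = field(default_factory=set)

    def mark_covered(self, area: str) -> None:
        if area not in REQUIREMENT_AREAS:
            raise ValueError(f"Unknown requirement area: {area}")
        self.covered.add(area)

    def gaps(self) -> list:
        """Areas with no documented controls: the roadmap for change."""
        return [a for a in REQUIREMENT_AREAS if a not in self.covered]

# Example: an organisation with documented data governance and
# transparency controls, but nothing yet in the other areas.
analysis = GapAnalysis()
analysis.mark_covered("data governance")
analysis.mark_covered("transparency")
print(analysis.gaps())
# -> ['risk management', 'human oversight', 'cybersecurity']
```

The output is exactly the roadmap that step 1 describes: a list of the areas where compliance work is still needed.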

The EU AI Act represents a critical step towards responsible AI deployment, and compliance is essential to avoid reputational damage and financial penalties.

“The Act defines AI as ‘a fast evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities.’ However, this could be left open to interpretation as these ‘benefits’ could be subjective, depending on who is harnessing the technology.”

Definitions and Context of the Act

The Act goes on to acknowledge that “the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or the society”. Organisations may find it more useful to follow a descriptive definition that focuses on processes, such as the one provided by the Oxford English Dictionary.

What’s more, the Act does not offer a separate definition for AI that is distinct from terms such as ‘AI system’ or ‘AI model’. Instead, the Act focuses on defining an ‘AI system’ as a machine-based system designed to operate with varying levels of autonomy and adaptiveness, generating outputs like predictions, content, recommendations, or decisions based on the input it receives.

With this in mind, the Act is far from a perfect resolution to all risks because, like AI technologies themselves, AI regulations are still evolving globally. For instance, the UK and US have their own regulatory approaches, which match some, but not all, of the conditions set out in the EU’s AI Act. For all governing bodies, new legislation and regulations will require a balance between innovation and ethics, with an aim to ensure new technology is trustworthy, safe, and beneficial for society, while also respecting human rights and values.

Our AI Ethics and Governance assessment can help ensure your AI initiatives are ethical, transparent, and compliant with the new regulations. While getting to grips with the EU AI Act may seem overwhelming, as there are various stipulations for different types of AI technology, we are always on hand to offer you guidance.

This rapidly evolving technology presents many exciting opportunities, but it has also created ethical minefields, so please get in touch to learn more about how we can help you navigate them.

For more detail and insights around all things AI and emerging technology, check out our Data X AI offering.