European Bans on Unwanted AI Applications Enforced

Seizing the economic opportunities of artificial intelligence (AI) while ensuring the technology functions safely and reliably: these are the guiding principles of the European AI regulation (the AI Act), which has been phased in since last summer. As of February 2, 2025, various bans on unwanted applications of AI apply throughout the EU, for instance on AI systems that classify or rank individuals based on social behavior or personal characteristics (social scoring) in ways that could lead to disadvantageous or unfavorable treatment.

The AI regulation as a whole offers opportunities for developers and entrepreneurs, as well as guarantees for European consumers. It includes basic agreements on the functioning of AI in products and services, requirements for (potentially) high-risk applications, and support for developers such as SMEs. This allows Europeans to trust AI and enables entrepreneurs to focus more effectively on innovation.

Minister Dirk Beljaarts (Economic Affairs): “We strive for AI models that operate according to European norms and values. Not only to ensure the technology functions safely, but also so that we can better seize opportunities for innovation and for entrepreneurs. As of today, we are banning a number of AI systems that pose unwanted risks. This reflects the balance we seek: strict rules where necessary, but no unnecessary regulation for companies developing and applying low-risk AI systems.”

Key Bans

In addition to the ban on social scoring through technology, AI systems that use emotion recognition in the workplace and in education are no longer permitted. The same goes for AI systems that employ manipulative or misleading techniques to negatively influence behavior, and for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to limited exceptions. Using AI to assess the risk of criminal offenses based solely on profiling is also no longer permitted in the EU.

Phased Implementation

The AI regulation came into effect on August 1, 2024, but various components are being phased in. This gives developers and providers of high-risk AI the opportunity to ensure their applications meet the new requirements.

February 2, 2025: provisions on banned AI;
August 2, 2025: requirements for general-purpose AI models;
August 2, 2026: requirements for high-risk AI applications and transparency obligations;
August 2, 2027: requirements for high-risk AI products; full AI regulation applies;
August 2, 2030: requirements for high-risk AI systems used by government organizations that were put into service before August 2026.

Supervision

National regulators will ensure compliance with the bans on certain AI systems and the requirements for high-risk AI and transparency. The European Commission will supervise large AI models that can be used for many different purposes.

Source published: 3 February 2025
Source last updated: 3 February 2025
Published on Openrijk: 3 February 2025
Source: Economische Zaken