EU publishes guidelines on banned practices under AI Act
6 February 2025
While not legally binding, the guidelines offer the EU’s interpretations of the banned practices.
The first set of obligations as part of the European Union’s Artificial Intelligence (AI) Act kicked in on Sunday (2 February). Now, the Commission has published guidelines on prohibited AI practices.
Under the guidelines, companies cannot use an AI system to infer their employees’ emotions, or use AI to assess a person’s “risk” of committing a criminal offence.
The guidelines also prohibit AI-enabled “dark patterns” that coerce individuals into actions they would not otherwise take, as well as chatbots that use subliminal messaging to manipulate people into making harmful financial decisions.
The guidelines, published yesterday (4 February), seek to increase legal clarity around which AI applications the Commission prohibits. They have not yet been formally adopted and are not legally binding; the EU notes that ultimate authority to interpret the AI Act rests with the courts.
“The guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act across the European Union. This initiative underscores the EU’s commitment to fostering a safe and ethical AI landscape,” the Commission said in a press release yesterday.
The AI Act, which entered into force last August, is a landmark regulation meant to bring the growing power of AI under legal control. The European Law Institute’s president Pascal Pichonnaz told SiliconRepublic.com last year that the Act’s flexibility would allow it to adapt to new risks.
The Act lays down rules around the use and deployment of AI, dividing AI systems into risk categories. Some uses, such as social security benefits providers using AI to evaluate people, are categorised as posing “unacceptable risks” to fundamental rights and are prohibited outright.
Penalties under the AI Act are hefty, with developers engaging in prohibited practices being liable for up to €35m or 7pc of their total global annual turnover, whichever is higher.
The next set of obligations, centred on general-purpose AI models, will apply from August, according to the Commission’s timeline, while most of the AI Act will be fully applicable by August next year.
Article Source: Silicon Republic