A European Union-wide ban on artificial intelligence systems deemed to pose an “unacceptable” risk officially took effect on February 2, the first provisions of the EU’s AI Act to become enforceable. However, ambiguity remains over the precise obligations companies must meet and the extent to which certain AI applications may run afoul of the new rules.

The initial phase of the AI Act targets technologies that could cause significant harm to consumers. The legislation outlaws AI systems that undermine fundamental human rights, whether intentionally or unintentionally. Among the prohibited uses are AI applications that exploit vulnerable populations, employ deceptive or manipulative techniques, apply social scoring for public or private purposes, use real-time remote biometric identification in public spaces or biometric categorization to infer sensitive traits, assess emotions in workplace settings, scrape facial images from the internet or CCTV footage to build facial recognition databases, and conduct certain types of predictive risk profiling.
Companies found in breach of these rules could face fines of up to €35 million (U.S. $38.2 million) or 7% of their annual global turnover, whichever is higher. However, enforcement of penalties will not begin until August 2, allowing companies a transitional period to align with the legislation’s full provisions.
Despite this leeway, legal experts and industry professionals argue that the AI Act leaves several key terms—such as social scoring, subliminal techniques, and predictive risk profiling—open to interpretation. Some aspects of these practices may not be inherently harmful or even illegal, leading to uncertainty among businesses about whether their AI applications comply with the new rules.
“The fundamental problem with the EU AI Act is it actually looks like it’s been written by AI,” remarked Derren Nisbet, CEO of AI testing firm Virtuoso.
Recognizing the confusion, the European Commission issued draft guidelines on February 4 to clarify how the prohibited AI applications should be interpreted. While non-binding, the 135-page document provides definitions and examples to help businesses determine whether their AI systems might be in violation of the ban.
For instance, the guidelines distinguish between lawful uses of risk-based profiling for fraud prevention and the prohibited use of social scoring for state-driven societal control. Similarly, they explain that only specific forms of predictive risk profiling—such as those leading to discriminatory outcomes or exclusion from essential services—are banned.
Yet, even with this additional guidance, legal professionals anticipate continued uncertainty until cases are tested in court. One of the most contentious areas is the prohibition of subliminal techniques, as the legislation does not explicitly define what qualifies as subliminal influence. Instead, it vaguely states that such methods encompass anything that could provoke behavioral changes or hinder informed decision-making. Experts argue that the context in which AI is used and the potential for harm should be the primary considerations, though the lack of clear thresholds complicates compliance.
Laura Franzese, co-founder and chief marketing officer at tech firm Prowler, emphasized the risk companies face in navigating the legislation. “Regulations like the EU AI Act draw lines between different risk categories, but the reality is that AI doesn’t fit neatly into boxes. Companies tend to underestimate risk, either because they do not fully understand the law or because they assume their use case is low risk when it is actually high risk under regulation.”
She further warned, “If you build AI and do not actively think about bias, fairness, and explainability, you are already behind. Regulators will go after the easy cases first, the ones where harm is obvious and documented.” She advised companies to “take compliance seriously now, assume your system is high risk, document your decisions, and make sure a human is in the loop where it matters.”
Victoria Hordern, a partner and data protection specialist at law firm Taylor Wessing, noted that businesses have already begun reassessing AI applications they previously considered compliant. One major area of concern is the ban on AI systems that assess emotions in the workplace. AI tools that assist HR departments by evaluating candidate reactions or scoring employees on performance metrics could fall under this prohibition. “Companies need to be aware that inferring emotions through the use of biometrics in a workplace setting, such as by detecting surprise on a candidate’s face in response to an interview question, is likely to fall within this prohibition,” she warned.
Another significant compliance risk is misclassification, according to Nathalie Moreno, a partner at Kennedys Law. She cautioned that businesses may incorrectly assess the risk level of their AI systems, potentially leading to inadvertent violations. Given that enforcement will rely on national authorities, market surveillance bodies, and consumer complaints, Moreno advised companies to conduct thorough AI risk assessments in line with the AI Act’s risk categories. She also recommended consulting the European Commission’s guidance and documenting compliance efforts to justify risk assessments. Engaging legal and technical experts to interpret the law’s ambiguous provisions could further help mitigate legal exposure.
Sam Peters, chief product officer at cybersecurity compliance firm ISMS.online, stressed that regulators must take responsibility for ensuring clearer definitions. “If businesses make reasonable efforts to interpret and act on the classifications in good faith, lawmakers are responsible for providing more explicit guidelines if misinterpretations become widespread.”
He also underscored the importance of structured AI governance for companies seeking to stay on the right side of the law. Key steps include risk mapping AI applications against the Act’s classifications—unacceptable, high, limited, and minimal—along with conducting impact assessments to examine data processing risks and potential harm. Transparency measures and oversight mechanisms such as human review, audit trails, and documentation of decision-making will be critical. He pointed to standards like ISO 42001 as a useful framework for responsible AI governance, helping organizations navigate evolving compliance requirements.
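To make the risk-mapping step concrete, the sketch below shows one way a compliance team might structure an internal AI risk register in Python. Only the four tier names come from the Act itself; the flags, triage logic, field names, and example entry are illustrative assumptions rather than a legal test, and any real classification would need legal review of the Act’s actual provisions.

```python
from dataclasses import dataclass, field
from enum import Enum
from datetime import date

# Risk tiers named in the AI Act; everything else in this sketch is an assumption.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted only with strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AIUseCaseAssessment:
    """One entry in a hypothetical internal AI risk register."""
    name: str
    description: str
    tier: RiskTier
    rationale: str        # documented reasoning behind the tier, per the advice above
    human_oversight: bool  # is a human in the loop where it matters?
    assessed_on: date = field(default_factory=date.today)

def classify(uses_workplace_emotion_inference: bool,
             uses_social_scoring: bool,
             affects_access_to_essential_services: bool) -> RiskTier:
    """Toy triage logic for illustration only, not a substitute for legal analysis."""
    if uses_workplace_emotion_inference or uses_social_scoring:
        return RiskTier.UNACCEPTABLE
    if affects_access_to_essential_services:
        return RiskTier.HIGH
    return RiskTier.LIMITED

# Example: an HR tool that scores candidate reactions during video interviews.
tier = classify(uses_workplace_emotion_inference=True,
                uses_social_scoring=False,
                affects_access_to_essential_services=False)
entry = AIUseCaseAssessment(
    name="Interview reaction scoring",
    description="Scores candidates from facial expressions during video interviews",
    tier=tier,
    rationale="Infers emotions via biometrics in a workplace setting",
    human_oversight=True,
)
print(entry.tier.value)  # -> "unacceptable"
```

The point of the structure is less the triage function than the record it produces: keeping a rationale and an assessment date for every AI use case is one way to satisfy the documentation and auditability steps the experts quoted here recommend.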
“The EU AI Act is just the beginning—other jurisdictions will follow with similar laws, so making proactive AI risk management a necessity rather than a regulatory box-ticking exercise for every organization makes sense,” Peters said. “Businesses that approach AI governance as a long-term strategy rather than a one-time compliance effort will reduce legal exposure and foster greater trust in their AI-driven innovations.”
By fLEXI tEAM