
DOJ’s Updated Compliance Guidance Places Spotlight on AI Risks, Sparking Industry Debate

When the U.S. Department of Justice (DOJ) revised its Evaluation of Corporate Compliance Programs (ECCP) earlier this year, it drew significant attention. Within the updated guidance was a striking series of questions related to risk assessments, which seemed to place the onus on corporate compliance teams to ensure the safe use of artificial intelligence (AI) tools by their organizations.



The move was unexpected, particularly because AI remains a relatively new technology. Many in the compliance sector view the DOJ’s inclusion of AI in its compliance framework as a clear signal of the government’s expectations for companies employing the technology, especially given Congress’s hesitancy to establish regulatory guardrails for AI.


Two chief compliance officers (CCOs), speaking anonymously as they were not authorized to comment publicly, expressed their hope that the DOJ might adjust its guidance on AI in future updates to the ECCP. One CCO argued that the responsibility for ensuring the safe use of AI tools should be shared across all business functions, rather than falling solely on compliance.


“It creates a potential situation where compliance is seen as owning AI, even though it didn’t have input in the decision-making process,” the CCO stated. “It seems like the DOJ is dumping this responsibility on compliance.”


Growing Adoption and Risks of AI

The DOJ’s guidance comes as businesses increasingly explore the potential of AI to automate operations, capture new customers, and expand into new markets. However, these innovations come with substantial risks. President Joe Biden sought to address these risks in an executive order issued in October 2023, requiring companies developing advanced AI systems to conduct safety tests and share results with the government.


That order, however, faces the likelihood of being overturned by President-elect Donald Trump, further complicating the regulatory landscape. Congress has been slow to pass legislation that would clarify standards for safe and effective AI use, mirroring its inaction on related issues such as data breach reporting, data privacy, and cryptocurrency regulation.


Meanwhile, states have been taking the lead in passing AI-related laws. A report on 2025 regulatory trends by KPMG predicts that pressure for federal AI legislation will eventually build from state-level initiatives.


In the absence of federal legislation, the DOJ’s updated ECCP provides guidance on establishing governance, safeguards, and monitoring for AI systems, filling a critical regulatory gap. However, some in the compliance sector feel the DOJ is overemphasizing AI risks at the expense of other pressing concerns.


Mixed Reactions Within the Compliance Community

One CCO remarked that the inclusion of AI in the ECCP seemed “shoehorned in there” and suggested that other significant risks, such as climate change or the responsible implementation of environmental, social, and governance (ESG) initiatives, deserved greater emphasis.


Ellen Hunt, principal consultant and adviser at Spark Compliance Consulting, urged businesses not to overreact to the AI-related additions to the ECCP. Hunt noted that the DOJ is not assigning sole responsibility for AI compliance to compliance teams, but rather emphasizing their role in ensuring oversight.


“The only time that the DOJ will evaluate a company on these principles is when they are prosecuting that company for committing a criminal act,” Hunt explained. She emphasized that this represents a narrow risk window for most companies.


Hunt also pointed out that the level of scrutiny a company faces regarding AI use depends on how the technology is being implemented. “If your firm isn’t really using AI much at the moment, or at least not in any outward, customer-facing way, the risk of being called on the carpet for your company’s AI use is even smaller,” she said.



Assessing and Managing AI Risks

Hunt recommended that businesses start by conducting risk assessments to evaluate their dependence on AI tools. Companies should consider whether AI is being used in critical areas such as decision-making for loans, hiring, investments, or health evaluations. These activities, which have a direct impact on people, are more likely to draw DOJ attention if they veer into criminal territory.


“Any prudent CCO starts with a risk assessment and plans actions from there,” Hunt said.

Another critical step is assessing how a firm might become a victim of AI misuse. External threats, such as hackers using AI to infiltrate systems and steal data, represent one type of risk. Firms need to examine whether they are using AI effectively to manage and mitigate these threats.


Internal risks also need attention. For example, if employees are using AI tools like ChatGPT to draft internal communications, the risk may be minimal. However, if AI-generated content is being incorporated into products or services, the stakes rise significantly. Companies must ensure that AI outputs are reviewed for inaccuracies and bias and have processes in place to validate these outputs.


The Compliance Role in AI Governance

While compliance teams play a crucial role in managing AI-related risks, the DOJ’s guidance does not suggest that compliance should shoulder the burden alone. Instead, the DOJ emphasizes the importance of compliance’s involvement in creating a robust framework for AI governance.

How companies manage AI risks will vary depending on their operations and usage of the technology. As Hunt summarized, “Compliance has a role to play in managing and mitigating the risks posed by AI. It’s not solely compliance’s responsibility to manage AI risks, but the DOJ believes it has an important role to play.”

By fLEXI tEAM


