Companies are increasingly relying on artificial intelligence (AI) to achieve various business benefits, from boosting productivity to improving decision-making and operations. However, as they adopt AI, they also face significant risks, including potentially severe sanctions if they cannot answer critical questions about how their AI operates, adheres to privacy laws, or handles copyright and data provenance issues.
One of the most disruptive consequences is “algorithmic disgorgement,” in which regulators force businesses to delete algorithms and data models if they fail to comply with regulations. The risk is not limited to AI developers; it extends to any company using AI without fully understanding the underlying technology or the data it was trained on. Clare Walsh, director of education at the Institute of Analytics, warns that many companies "may be pushing their luck" by using AI without knowing its full capabilities or legal implications, adding that the risk of significant penalties is "substantial."
While algorithmic disgorgement has been rare and reserved for severe cases, experts believe it could become more common as a regulatory tool to address noncompliance or misuse of AI. Regulatory penalties have often targeted tech companies at the forefront of these issues. For example, in 2019, the U.S. Federal Trade Commission (FTC) ordered Cambridge Analytica, a U.K.-based political consulting firm, to delete all its algorithms and models developed using Facebook user data obtained without consent. The same year, the FTC imposed a $5 billion fine on Facebook for violating consumer privacy and forced the company to disgorge profits from its ad models using misappropriated data. “The relief is designed not only to punish future violations but, more importantly, to change Facebook’s entire privacy culture,” the FTC stated at the time.
In 2021, the FTC took action against Everalbum, a lesser-known company that used customer photos to build facial recognition technology without proper consent. The regulator required the company to delete the algorithms developed from this data. In 2022, WW International (formerly Weight Watchers) was similarly sanctioned for collecting data from children through its Kurbo weight-loss app without parental consent. The FTC ordered the deletion of all algorithms trained on the improperly collected data.
Although these orders are still relatively uncommon, the message from regulators is clear: companies face serious consequences for failing to comply with data privacy regulations. Adnan Masood, chief AI architect at UST, emphasized that “ethical AI practices, particularly around data collection and model training, are not just good business—they’re essential to avoiding significant regulatory and financial penalties.”
Governments are expected to focus their scrutiny on sectors like healthcare, financial services, and law enforcement, given the sensitive data involved and the potential harm that can arise from misusing AI in areas like facial recognition or bias. However, other industries that rely heavily on consumer data to train AI models may also be at risk. According to Jisha Dymond, chief ethics and compliance officer at OneTrust, “Anyone deploying AI is liable, so companies can’t pass the buck to vendors when things go wrong.”
Disgorgement orders are not only disruptive but are also typically enforced quickly, often within 90 days or less. Robert Taylor, counsel at law firm Carstens, Allen & Gourley, warns that this tight window could blindside companies and disrupt their operations if they haven’t done adequate due diligence on third-party solutions or AI risk assessments.
One of the major concerns is the widespread use of AI by non-experts. “This increases the likelihood that businesses unknowingly incorporate improperly sourced data into their systems,” said Jeremy Tilsner, managing director at Alvarez & Marsal. That lack of understanding leaves companies more exposed to algorithmic disgorgement and other regulatory actions.
Another risk is that if a vendor is forced to comply with algorithmic disgorgement, it could cause "cascading disruption" for companies that rely on the vendor’s AI for critical operations, such as customer analytics or automation. Moreover, disgorgement may be just one part of a larger regulatory action. There is also concern that AI models may retain traces of the data they were trained on even after that data has been deleted, a phenomenon known as the "algorithmic shadow." In such cases, regulators may require companies to delete the entire AI model, wiping out any benefits derived from the misused data. Diane Gutiw, global AI research lead at CGI, noted that “the only assurance that the model has been fully cleansed is by rebuilding it.”
Although the U.S. has led in enforcing algorithmic disgorgement, this is partly because the European Union and the U.K. have stronger privacy laws that have already slowed the rollout of AI technologies. As Jeremy Tilsner points out, those laws may ultimately reduce the risk of sensitive data being misused by AI modelers in the first place.
By fLEXI tEAM