Excerpt from Balancing Innovation and Ethics: Navigating the Impact of the EU AI Act:
Litman affirmed that, “there are elements that relate to biometrics and security that will receive plenty of applause and criticism, but that will have a major impact on the use of AI in the use of PII data for monitoring and security.” Litman continued, saying that, “At the same time, copyright holders with grievances against the AI providers that are deriving value from their work will see little recourse as it appears that regulators are abdicating responsibility for the use of copyright material in the LLM’s.”
“Every piece of enterprise software and infrastructure is built today to be GDPR compliant, and it can be expected that all AI will similarly be built to be compliant with the EU AI Act. Even with a timeframe that may have us 12 – 24 months from enforcement, you can expect that all AI providers will move forward with the expectation that these regulations will be in place, and they will start planning and implementing now for that future, which effectively renders the regulations in place in advance of when they will be enforced,” Litman added.
For high-risk AI applications, the Act imposes explicit transparency requirements because of the significant potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Providers of ‘high-risk’ systems must disclose information about the AI system’s capabilities and limitations, the logic involved, and the level of human oversight. They must also maintain detailed documentation and records of their AI systems’ functioning and compliance measures, subject to auditing to ensure compliance.
Considering the breadth of tasks AI systems can accomplish and the rapid expansion of function, it was agreed that general-purpose AI systems, and the associated models those AI systems are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
Litman indicated, “There are new transparency requirements for the providers of AI to explain how their models work and where the content came from, but with regulations not to be formalized for months and implementation 12 – 24 months away and with only a disclosure requirement without teeth or action, it basically means that the copyright wars are likely over before they start.”
The AI Act introduces penalties similar to fines calculated under GDPR, based on a percentage of the liable party’s global annual turnover in the previous financial year, or a fixed sum, whichever is higher:
€35 million or 7% for violations which involve the use of banned AI applications;
€15 million or 3% for violations of the Act’s obligations; and
€7.5 million or 1.5% for the supply of incorrect information.
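The “whichever is higher” rule above is simple arithmetic. As a minimal sketch, the tier values are taken from the article, while the function name and structure are illustrative and not from any official source:

```python
# Sketch of the EU AI Act penalty rule described above: the fine is the
# HIGHER of a fixed sum and a percentage of global annual turnover.
# Tier names and the function are hypothetical, for illustration only.

# (tier name) -> (fixed sum in euros, share of global annual turnover)
PENALTY_TIERS = {
    "banned_application": (35_000_000, 0.07),
    "obligation_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the greater of the fixed sum or the turnover-based
    amount for the given violation tier."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# e.g. a company with €1bn turnover using a banned AI application:
# 7% of €1bn = €70m, which exceeds the €35m fixed sum.
print(max_fine("banned_application", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed sum dominates: at €100m turnover, 1.5% is €1.5m, so the €7.5m floor applies for supplying incorrect information.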
However, to Litman’s point, for now there is only the disclosure requirement, and these penalties for bad actors remain many months away. Litman also noted that the effort began years ago and was at one point proclaimed future-proof, but then ChatGPT moved the goal line by about 1,000 yards, and EU policymaking went from prescient to far behind. “The challenge with some of this is that the genie got out of the bottle on some very important items, and I don’t think there’s anything that anyone can do to put it back in. Most notably from my point of view is the nature of the content that has been used to train the LLM’s, which includes millions of pages under copyright.”