Lawmakers in Europe have signed off on the EU’s Artificial Intelligence Act, a comprehensive set of rules that stands as the first formal regulation of artificial intelligence.
This groundbreaking legislation could serve as a blueprint for policymakers worldwide who are tasked with setting guardrails for the rapidly evolving technology.
What does the bill entail?
In the latest version of the bill, passed on Wednesday, generative AI would be subject to new transparency requirements. This includes publishing summaries of copyrighted material used in training—something publishers have asked for in their push for fair compensation. Additionally, makers of generative AI models will be required to put guardrails in place to prevent the generation of illegal content.
“The AI Act puts some fairly reasonable guardrails in place,” said Chris Pedigo, svp of government affairs at Digital Content Next. “The transparency piece gives publishers an opportunity to regain control over their content.”
The regulation is far from becoming law, and its final version is not expected to be introduced until later this year. However, it’s the first of its kind and alleviates some publisher concerns over fair use and the possibility that they will lose out on traffic and revenue.
How does the rule work?
Measures to rein in AI were first proposed in 2021 but did not give much attention to generative AI. This time, makers of AI systems such as ChatGPT will be required to disclose information used to build the program. The law also regulates any product or service that uses AI while curtailing the use of facial recognition software.
The legislation follows a risk-based approach, categorizing AI systems into four levels of risk ranging from minimal to unacceptable. Through risk assessments, makers of the technology will assess its everyday uses before making it widely available.
The EU bloc, made up of 27 member states, will enforce the rules and could force companies to withdraw their products from the market. Proposed fines could reach $43 million or 7% of a company’s annual global revenue.
“It’s too early to tell if this act will have some real teeth to compel the tech companies to curb the harmful effects of AI,” said Chirag Shah, a professor at the Information School at the University of Washington. “What I see currently lacking is a notion of accountability. Perhaps these details will emerge over time.”