EU's AI Act

Experts caution that the EU's AI Act could have a chilling effect on open source initiatives.

A new report warns that the proposed EU rules could limit the kind of research that produces cutting-edge AI tools like GPT-3.

On Monday, the nonpartisan think tank Brookings released a study criticising the EU's proposed regulation of open source AI, arguing that it would create legal liability for general-purpose AI systems while undermining their development. Under the EU's draft AI Act, open source developers would have to comply with requirements for risk management, data governance, technical documentation and transparency, as well as standards for accuracy and cybersecurity.

The study argues that if a company deployed an open source AI system that led to a harmful outcome, it is not implausible that the company would try to deflect responsibility by suing the open source developers whose work it built on.

"This could further concentrate power over the future of AI in large technology companies and hinder research that is vital to the public's understanding of AI," writes Alex Engler, the Brookings analyst who authored the study. "In the end, [the EU's] attempt to regulate open source could create a convoluted set of requirements that endangers open source AI contributors, likely without improving use of general-purpose AI."

The AI Act, which aims to promote the deployment of "trustworthy AI" in the EU, was published by the European Commission in 2021. As EU institutions gather industry input ahead of a vote this fall, they are looking to amend the rules to strike a balance between innovation and accountability. But some experts argue that the AI Act as currently worded would impose onerous requirements on open efforts to develop AI systems.

The legislation carves out exceptions for some categories of open source AI, such as projects used exclusively for research and with controls in place to prevent misuse. But as Engler observes, it would be difficult, if not impossible, to stop these projects from finding their way into commercial systems, where they could be abused by malicious actors.

Stable Diffusion, an open source AI system that generates images from text prompts, was recently released with a licence prohibiting certain types of content. Even so, it quickly found an audience in communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said the burdens the Act introduces could have a chilling effect on efforts like the development of open text-generating systems, which he believes are enabling developers to "catch up" to Big Tech companies like Google and Meta.

The EU's good intentions, according to Etzioni, "are paving the road to regulation hell." "Open source developers should not be subject to the same burden as those developing commercial software. Consider the case of a single student developing an AI capability: they cannot afford to comply with EU regulations and may be forced not to distribute their software, which has a chilling effect on academic progress and on the reproducibility of scientific results. It should always be the case that free software can be provided 'as is.'"

Rather than trying to regulate AI technologies broadly, Etzioni argues, EU regulators should focus on specific applications of AI. "There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective," he said. Instead, "AI applications such as autonomous vehicles, bots or toys should be the subject of regulation."

Not every practitioner believes the AI Act needs further amendment, however. Mike Cook, an AI researcher and member of the Knives and Paintbrushes collective, thinks it is "absolutely fine" to regulate open source AI. In his view, setting any kind of standard can be a way to show leadership globally, hopefully encouraging others to follow suit.

"The fearmongering about 'stifling innovation' comes largely from people who want to do away with all regulation and have free rein," Cook added. "I think it's fine to legislate in the name of a better world, rather than worrying that your neighbour is going to regulate less stringently than you and somehow profit from it."

To illustrate, as my colleague Natasha Lomas has previously noted, the EU's risk-based approach bans certain AI uses outright (such as China-style state social credit scoring) and imposes restrictions on AI systems deemed "high-risk", such as those used in law enforcement. If the regulations were to target specific product types, as Etzioni argues they should, thousands of regulations might be needed, one for each type of product, creating conflict and even greater regulatory uncertainty.

Lilian Edwards, a law professor at Newcastle University and part-time legal adviser at the Ada Lovelace Institute, questions whether the developers of projects like open source large language models (such as GPT-3) might be held liable under the AI Act after all. The legislation's language, she notes, puts the responsibility for managing an AI system's uses and impacts on downstream deployers, not necessarily on the original developer.

"[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built," she writes. "The AI Act takes some notice of this, but not nearly enough to adequately regulate the many actors who get involved in various ways 'downstream' in the AI supply chain."

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say they welcome regulation to protect consumers, but argue that the AI Act as proposed is too vague. It is unclear, for instance, whether the legislation would apply to the "pre-trained" machine learning models at the heart of AI-powered software or only to the software itself.

"This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licences, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face," Delangue, Ferrandis and Solaiman said in a joint statement. "From a competition and innovation perspective, placing overly heavy burdens on openly released features at the top of the AI innovation stream risks hindering incremental innovation, product differentiation and dynamic competition, the latter of which is essential in emerging technology markets like those related to AI. The regulation should take into account the innovation dynamics of the AI markets in order to clearly identify and protect the core sources of innovation in these markets."

Whatever the final language of the AI Act, Hugging Face advocates for better AI governance tools, such as "responsible" AI licences and model cards that document an AI system's intended uses and how it operates. Delangue, Ferrandis and Solaiman note that responsible licensing is starting to become common practice for major AI releases, such as Meta's OPT-175B language model.
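
To make the model card idea concrete, here is a minimal sketch using the huggingface_hub Python library; the model name, tags and wording below are illustrative placeholders, not drawn from any real release.

```python
# A hypothetical model card of the kind Hugging Face describes: a block of
# machine-readable YAML metadata (licence, tags) followed by human-readable
# documentation of intended and out-of-scope uses.
from huggingface_hub import ModelCard

# Illustrative content only; "example-org/example-model" is not a real model.
content = """---
language: en
license: openrail
tags:
- text-generation
---

# example-org/example-model

## Intended uses
Research on text generation. Use is subject to the behavioural
restrictions in the accompanying responsible AI licence.

## Out-of-scope uses
Generating content that violates the licence's use restrictions.
"""

card = ModelCard(content)   # parses the YAML front matter into card.data
print(card.data.license)    # -> "openrail"
print(card.data.to_dict())  # full structured metadata as a dict
```

The YAML front matter is what makes the governance metadata machine-readable: tooling can check a model's licence or intended-use tags before the model is redistributed or deployed.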

"Open innovation and responsible innovation in the AI space are not mutually exclusive aims, but rather complementary ones," said Delangue, Ferrandis and Solaiman. "The intersection between the two should be a core target for ongoing regulatory efforts, as it is right now for the AI community."

That may well be doable. Given the many moving pieces involved, and the many stakeholders affected, it will likely be years before AI regulation in the EU begins to take shape.
