Potential open-source AI regulation in the EU AI Act sparks debate on Twitter



Alex Engler, a researcher at the Brookings Institution, never expected his recent article, “EU’s attempt to regulate open source AI is counterproductive,” to spark a debate on Twitter.

According to Engler, as the European Union continues to discuss the development of the Artificial Intelligence Act (AI Act), one step under consideration is the regulation of open-source general-purpose artificial intelligence (GPAI). The EU AI Act defines GPAI as “artificial intelligence systems that have a wide range of potential uses, both intended and unintended by developers… these systems are sometimes referred to as ‘foundation models’ and characterized by their widespread use as pre-trained models for other, more specialized AI systems.”

In Engler’s article, he said that while the proposal is intended to enable safer use of these AI tools, it “would create legal liability for open source GPAI models, undermining their development.” The result, he argued, would “further concentrate power in the future of artificial intelligence in big tech companies” and prevent critical research.

“It’s an interesting topic that I didn’t expect to get any attention,” he told VentureBeat.


“I’m always surprised when I get phone calls, honestly.”

But after Emily Bender, a professor of linguistics at the University of Washington and a frequent critic of how AI is covered on social media and in the mainstream press, wrote a thread about a passage she quoted from Engler’s article, an active back-and-forth on Twitter began.

EU AI Act open source debate in the hot seat

“I haven’t studied the AI Act and I’m not a lawyer, so I can’t really comment on whether it will work well as a regulation,” Bender tweeted, pointing out later in her thread: “How do people get away with pretending, in 2022, that regulation isn’t needed to steer innovation away from exploitative, harmful, unsustainable, etc. practices?”

Engler replied to Bender’s thread with his own. In general, he said, “I’m in favor of regulating AI… and again I don’t think regulating models at the point of open source helps at all. Instead, what is better, and what the European Commission’s original proposal did, is to regulate whenever a model is used for something dangerous or harmful, regardless of whether it is open source.”

He also argued that he does not want to exempt open-source models from the EU AI Act, but rather the act of open-sourcing AI. Making it harder to release open-source AI models won’t stop those models from being built, he argued; it will simply push them to be commercialized behind APIs instead. “We end up with more OpenAI and fewer OS alternatives – not my favorite outcome,” he tweeted.

Bender responded to Engler’s thread by pointing out that if part of the purpose of the regulation is to require documentation, “the only people in a position to really thoroughly document training data are the ones who collect it.”

Perhaps this could be addressed by banning any commercial products based on under-documented models, leaving the responsibility to the corporate interests that do the commercialization, she wrote, but added: “What happens when HF [Hugging Face] or similar hosts GPT-4chan or Stable Diffusion and individuals download copies and then maliciously use them to flood various online spaces with toxic content?”

Obviously, she continued, “the Googles and Metas of the world should also be subject to strict regulations about the ways in which data can be collected and developed. But I think there’s enough risk in building data collections/models trained on those that OSS developers shouldn’t have free rein.”

Engler, who studies the impact of artificial intelligence and emerging data technologies on society, admitted to VentureBeat that “this issue is pretty complicated, even for people who have broadly similar views.” He and Bender, he said, “share a concern about where regulatory responsibility and commercialization should fall … it’s interesting that people with relatively similar perspectives land in a somewhat different place.”

The impact of open-source AI regulation

Engler made several points to VentureBeat about his views on open-source AI regulation in the EU. First, he said, the law’s limited reach is a practical concern. “The EU’s requirements don’t affect the rest of the world, so you can release it somewhere else and the EU requirements will have very little impact,” he said.

Additionally, “the idea that a well-built, well-trained model that meets these regulatory requirements somehow couldn’t be put to harmful uses is simply not true,” he said. “I think we haven’t clearly shown that regulatory requirements and building good models will necessarily make them safe in malicious hands,” he added, noting that there is a lot of other software that people use for malicious purposes that would be difficult to begin to regulate.

“Even software that automates how you interact with a browser has the same problem,” he said. “So if I’m trying to create multiple fake accounts to spam social media, the software that allows me to do that has been public for at least 20 years. So [the open-source issue] is a bit of a departure.”

Finally, he said, the vast majority of open-source software is created without the goal of selling it. “So you’re taking an already uphill battle, which is that they’re trying to build these big, expensive models that can even compete with the big companies, and you’re also adding a legal and regulatory hurdle,” he said.

What the EU AI law will and won’t do

Engler stressed that the EU AI Act will not be the cure for all AI ills. What it will broadly help with, he said, is “preventing a kind of application of AI for things it can’t really do or does very badly.”

Furthermore, Engler believes the EU is doing a pretty good job of trying to “actually solve a pretty difficult problem about the proliferation of artificial intelligence in dangerous and harmful areas,” adding that he wants the U.S. to take a more active regulatory role in the space (although he does credit the work being done by the Equal Employment Opportunity Commission on biased hiring systems and artificial intelligence).

What the EU AI law won’t really address is the creation and public availability of models that people simply use in bad ways.

“I think that’s a different question that the EU AI law doesn’t really address,” he said. “I’m not sure we’ve seen anything that prevents them from being out there in a way that’s actually going to work,” he continued, adding that the open-source discussion is a bit “stuck.”

“If there was a part of the EU AI law that said, hey, the spread of these big models is dangerous and we want to slow them down, that would be one thing, but it doesn’t say that,” he said.

The debate will certainly continue

Clearly, the debate on Twitter about the EU AI Act and other AI regulations will continue as stakeholders from across the spectrum of AI research and industry weigh dozens of recommendations for a comprehensive AI regulatory framework, one that could serve as a template for a global standard.

And the debate continues offline: Engler noted in another tweet that one of the European Parliament’s committees, advised by digital policy adviser Kai Zenner, plans to introduce an amendment to the EU AI Act to address the issue of open-source artificial intelligence.

