Dan Nechita has spent the past year shuttling back and forth between Brussels and Strasbourg. As the head of cabinet (essentially chief of staff) for one of the two rapporteurs leading negotiations over the EU’s proposed new AI law, he’s helped hammer out compromises between those who want the technology to be tightly regulated and those who believe innovation needs more space to evolve.

The discussions have, Nechita says, been “long and tedious.” First there were debates about how to define AI—what it was that Europe was even regulating. “That was a very, very, very long discussion,” Nechita says. Then there was a split over what uses of AI were so dangerous they should be banned or categorized as high-risk. “We had an ideological divide between those who would want almost everything to be considered high-risk and those who would prefer to keep the list as small and precise as possible.”

But those often tense negotiations mean that the European Parliament is getting closer to a sweeping political agreement that would outline the body’s vision for regulating AI. That agreement is likely to include an outright ban on some uses of AI, such as predictive policing, and extra transparency requirements for AI judged to be high-risk, such as systems used in border control.

This is only the start of a long process. Once the members of the European Parliament (MEPs) vote on the agreement later this month, it will need to be negotiated all over again with EU member states. But Europe’s politicians are some of the first in the world to go through the grueling process of writing the rules of the road for AI. Their negotiations offer a glimpse of how politicians everywhere will have to find a balance between protecting their societies from AI’s risks while also trying to reap its rewards. What’s happening in Europe is being closely watched in other countries, as they wrestle with how to shape their own responses to increasingly sophisticated and prevalent AI.

“It’s going to have a spillover effect globally, just as we witnessed with the EU General Data Protection Regulation,” says Brandie Nonnecke, director of the CITRIS Policy Lab at the University of California, Berkeley.

At the core of the debate about regulating AI is the question of whether it’s possible to limit the risks it presents to societies without stifling the growth of a technology that many politicians expect to be the engine of the future economy.

The discussions about risks should not focus on existential threats to the future of humanity, because there are major issues with the way AI is being used right now, says Mathias Spielkamp, cofounder of AlgorithmWatch, a nonprofit that researches the use of algorithms in government welfare systems, credit scores, and the workplace, among other applications. He believes it is the role of politicians to put limits on how the technology can be used. “Take nuclear power: You can make energy out of it or you can build bombs with it,” he says. “The question of what you do with AI is a political question. And it is not a question that should ever be decided by technologists.”

By the end of April, the European Parliament had zeroed in on a list of practices to be prohibited: social scoring, predictive policing, algorithms that indiscriminately scrape the internet for photographs, and real-time biometric recognition in public spaces. However, on Thursday, parliament members from the conservative European People’s Party were still questioning whether the biometric ban should be taken out. “It’s a strongly divisive political issue, because some political forces and groups see it as a crime-fighting force and others, like the progressives, we see that as a system of social control,” says Brando Benifei, co-rapporteur and an Italian MEP from the Socialists and Democrats political group.

Next came talks about the types of AI that should be flagged as high-risk, such as algorithms used to manage a company’s workforce or by a government to manage migration. These are not banned. “But because of their potential implications—and I underline the word potential—on our rights and interests, they are to go through some compliance requirements, to make sure those risks are properly mitigated,” says Nechita’s boss, the Romanian MEP and co-rapporteur Dragoș Tudorache, adding that most of these requirements are principally to do with transparency. Developers have to show what data they’ve used to train their AI, and they must demonstrate how they have proactively tried to eliminate bias. There would also be a new AI body set up to create a central hub for enforcement.

Companies deploying generative AI tools such as ChatGPT would have to disclose whether their models have been trained on copyrighted material—making lawsuits more likely. And text or image generators, such as MidJourney, would also be required to identify themselves as machines and mark their content in a way that shows it’s artificially generated. They would also have to ensure that their tools do not produce child abuse, terrorism, or hate speech, or any other type of content that violates EU law.

One person, who asked to remain anonymous because they did not want to attract negative attention from lobbying groups, said some of the rules for general-purpose AI systems were watered down at the start of May following lobbying by tech giants. Requirements for foundation models—which form the basis of tools like ChatGPT—to be audited by independent experts were taken out.

However, the parliament did agree that foundation models should be registered in a database before being released to the market, so companies would have to inform the EU of what they have started selling. “That’s a good start,” says Nicolas Moës, director of European AI governance at the Future Society, a think tank.

The lobbying by Big Tech companies, including Alphabet and Microsoft, is something that lawmakers worldwide will need to be wary of, says Sarah Myers West, managing director of the AI Now Institute, another think tank. “I think we’re seeing an emerging playbook for how they’re trying to tilt the policy environment in their favor,” she says.

What the European Parliament has ended up with is an agreement that tries to please everyone. “It’s a true compromise,” says a parliament official, who asked not to be named because they are not authorized to speak publicly. “Everybody’s equally unhappy.”

The agreement could still be altered before the vote—currently scheduled for May 11—that allows the AI Act to move to the next stage. With uncertainty over last-minute changes, tensions lingered through the final weeks of negotiations. There were disagreements until the end about whether AI companies should have to follow strict environmental requirements. “I would still say the proposal is already very overburdened for me,” says Axel Voss, a German MEP from the conservative European People’s Party, speaking to WIRED in mid-April.

“Of course, there are people who think the less regulation the better for innovation in the industry. I beg to differ,” says another German MEP, Sergey Lagodinsky, from the left-wing Greens group. “We want it to be a good, productive regulation, which would be innovation-friendly but would also address the issues our societies are worried about.”

The EU is increasingly an early mover on efforts to regulate the internet. Its privacy law, the General Data Protection Regulation, came into force in 2018, putting limits on how companies could collect and handle people’s data. Last year, MEPs agreed on new rules designed to make the internet safer as well as more competitive. These laws often set a global standard—the so-called “Brussels effect.”

As the first piece of omnibus AI legislation expected to pass into law, the AI Act will likely set the tone for global policymaking efforts surrounding artificial intelligence, says Myers West.

China released its draft AI regulations in April, and Canada’s Parliament is considering its own hotly contested Artificial Intelligence and Data Act. In the US, several states are working on their own approaches to regulating AI, while discussions at the national level are gaining momentum. White House officials, including vice president Kamala Harris, met with Big Tech CEOs in early May to discuss the potential dangers of the technology. In the coming weeks, US senator Ron Wyden of Oregon will begin a third attempt to pass a bill called the Algorithmic Accountability Act, a law that would require testing of high-risk AI before deployment.

There have also been calls to think beyond individual legislatures to try to formulate global approaches to regulating AI. Last month, 12 MEPs signed a letter asking European Commission president Ursula von der Leyen and US president Joe Biden to convene a global Summit on Artificial Intelligence. That call has, so far, remained unanswered. Benifei says he will insist on the summit and more international attention. “We think that our regulation will produce the Brussels effect towards the rest of the world,” he adds. “Maybe they won’t copy our legislation. But at least it will oblige everyone to confront the risks of AI.”