Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

“It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

“Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

Even as open source image-generation software becomes a favorite of researchers, academics, and creatives like Cohen, Ajder says, it has also become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and is also the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. Stable Diffusion, the high-resolution image generator developed by startup Stability AI, reportedly has more than 10 million users and comes with guardrails to prevent explicit image creation and policies barring malicious use. But the company also open sourced a customizable version of the image generator in 2022, and online guides explain how to bypass its built-in limitations.

Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once the model is downloaded, its use is out of the creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.

4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds to generate NSFW images using OpenAI’s Dall-E 3.

That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites. Reddit’s policies prohibit all AI-generated “nonconsensual intimate media.”

Other AI creators, including Cohen, are also concerned about how easy it is to create deepfakes with InstantID, a new method published in January by researchers at Peking University and Chinese social media company Xiaohongshu. It can swap faces in images using just a single example, requiring less processing and preparation than earlier techniques. In the paper introducing the model, the team expressed concern about its potential to create “offensive or culturally inappropriate imagery” with human faces, but a YouTube channel with more than 143,000 subscribers that posts AI tutorials promotes the technique as enabling “Uncensored Open Source Face Cloning.”

“If you wanted to create a fake of someone in a compromising position,” Cohen says, “this makes it simple and easy. This could [eventually] be plugged into a browser, and you can now have it with zero overhead, which [makes] it accessible to everybody.”

Some tools and software creators are themselves discouraging malicious use. When David Widder, an AI ethics researcher and postdoctoral fellow at Cornell Tech, interviewed the people behind an open source deepfake tool, they made clear that they did not want their software to be used for porn of any kind, consensual or nonconsensual. However, they felt powerless to make people respect that wish. “They don’t feel like they can do anything to stop it,” Widder says.

Other open source AI creators are putting up hurdles to unwanted use cases. Researchers at Hugging Face, an online community and platform for open source models, have been promoting ethical tools for AI, including image “guarding,” which they say protects images from editing by generative AI, as well as controls that let developers restrict access to models uploaded to the platform. Civitai says it prohibits depictions of real people, as well as minors, in a “mature context,” and in December it encouraged users to report violations. However, users openly ask others on the site to create nonconsensual images, mostly of women, 404 Media reported in November.

A spokesperson for Civitai provided a company statement saying that images flagged as potentially meeting the site’s definition of mature content by its moderation system are sent for human review. Violations of company policies will result in images being removed, suspension of access to Civitai’s onsite image generator, or a ban from the platform, it said. Real people featured in models can also send Civitai a removal request.

“Techno-fixes” such as release-gating and licensing of open source code, along with contractual obligations for commercial platforms, might not stop all misuse but could stop some, says Widder. And even if individual community members feel they can’t make a difference, grassroots efforts can be an effective force for change. “Setting norms in a community is an undervalued and often powerful way of influencing behavior and what is considered acceptable, cool, and not cool,” Widder says.

Sharing AI-generated intimate images without consent was made illegal in the UK by online safety legislation enacted in January, and the Swift scandal has added new fuel to calls for similar federal laws in the US. At least 10 states have deepfake-related laws. Tech companies and social platforms are also exploring watermarking of images made with prominent AI tools, and a group of tech companies signed an accord to crack down on deceptive uses of AI during elections, though it’s unclear how either effort would affect images generated with niche models, if at all.

Elena Michael, director of NotYourPorn, a UK group campaigning against image-based sexual abuse, suggests more dialogue between AI startups, open source developers, and entrepreneurs on one hand and governments, women’s organizations, academics, and civil society on the other, to explore deterrents to nonconsensual AI porn that don’t hinder access to open source models. “There’s not enough conversation, and there’s not enough collaboration between the institutions trying to deal with this issue,” she says.

There might not be a way to completely control the problem, but proper coordination between these groups could help deter abuse and make people more accountable, she says. “It’s about creating a community online.”

Ultimately, this form of image-based abuse has often-devastating consequences for those affected, who are nearly always women. A 2019 study by deepfake-monitoring company Sensity AI found that 96 percent of deepfakes were nonconsensual pornography, almost exclusively targeting women.

Cohen, the Toronto consultant, says he’s fearful of the environment that creates for young women. “The real terrible things are not happening on the level of Taylor Swift. The ability for random people to really hyper-target individuals is terrifying,” he says. “I’m generally an optimist, not a doomer, but I also have a daughter, and I want her to grow up in a world that’s safe for everyone else.”

Updated 3-6-2024, 8:30 pm EST: This article has been updated to note that Reddit’s policies forbid AI-generated nonconsensual intimate media.