People have been using ChatGPT, an artificial intelligence text generator, for a wide range of tasks since it launched in 2022, but one genre of content has remained off-limits: porn.
Currently, ChatGPT parent company OpenAI doesn’t allow its technology to be used to “build tools that may be inappropriate for minors, including: sexually explicit or suggestive content,” with an exception for content created for scientific or educational purposes. Now OpenAI is considering opening things up a bit, according to an “extensive document intended to gather feedback on the rules for its products,” reported on by NPR.
“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies,” said the document. “We’re exploring whether we can responsibly provide the ability to generate NSFW [not safe for work] content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”
It also provided multiple examples, including how ChatGPT would respond to requests for erotica, for educational information about sex and for rap lyrics containing profanity.
Joanne Jang, an OpenAI model lead who helped write the document, said in an interview with NPR that the company hopes to “start a conversation about whether erotic text and nude images should always be banned in its AI products.”
“We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question, period,” Jang said. “This doesn’t mean that we are trying now to create AI porn,” she added, though she noted that also depends on what an individual considers to be porn.
According to NPR, the OpenAI document “troubled some observers, given the number of instances in recent months of cutting-edge AI tools being used to create deepfake porn and other kinds of synthetic nudes.”
OpenAI-generated content has already been at the root of at least one troubling incident. It was used to power an artificial intelligence-generated show inspired by the 1990s sitcom “Seinfeld” titled “Nothing, Forever” that was shut down last year after one of the characters started spewing homophobic and transphobic jokes.
Deepfake pornographic images have also been an issue for years. In 2019, The Verge reported on the site DeepNude, which shut down after receiving widespread attention.
“The team behind DeepNude said they ‘greatly underestimated’ interest in the project and that ‘the probability that people will misuse it is too high,’” said the report.
According to ContentDetector.AI, researchers had found more than 85,000 fake videos online by the end of 2020, and as of 2019, around 96% of deepfakes were pornographic in nature. Experts expected 500,000 video and voice deepfakes to flood social media last year, the site said.
“Deepfake fraud has been on the rise in North America,” said ContentDetector.AI. “In the U.S., instances of deepfakes used for fraud increased from 0.2% to 2.6% between 2022 and Q1 2023.”
This year, Audacy reported on the case of a fake racist rant a school employee generated using AI to sound like his boss. After it circulated online, the man whose voice was faked was suspended and faced backlash.
In February, the New Jersey Law Journal reported that a suit had been filed in federal court after a 15-year-old girl’s classmate allegedly used an AI application to create and distribute nonconsensual nude images of her.
“The suit, filed Friday, comes just as an assortment of free or inexpensive online tools is launched to help users generate images of people, including some that offer to ‘nudify’ subjects,” said the outlet. “ClothesOff is available for Apple and Android apps but its website gives no indication where the company is located or who owns it.”
Steven Houser, a 67-year-old teacher at Beacon Christian Academy in Florida, was accused this spring of using artificial intelligence to create erotic content from the yearbook photos of three students, according to Fox 13 Tampa. Earlier this year, Audacy reported that “fake sexually explicit AI-generated images of superstar Taylor Swift have been flooding social media, particularly X, and shared so many times, it prompted a response from the White House.”
This month, Google Ads announced that its inappropriate content rules will be updated to prohibit promoting synthetic content that has been altered or generated to be sexually explicit or contain nudity. These new rules go into effect May 30. Legislation regarding deepfakes has also been introduced in Congress.
OpenAI’s Jang said its platform will not allow for the creation of deepfakes. She told NPR “these are the exact conversations we want to have.”
As for ChatGPT, it wouldn’t be the first chatbot to create erotica if rules eventually do change to allow for that. Last October, TechCrunch covered the release of an app from Swedish startup Pirr.
“The company has released an app trained on an enormous library of smut, and is eager to help you fantasize alongside an AI,” said the report. “You write the first paragraph, and the AI takes it from there. After a couple of paragraphs, you either choose what happens next, choose-your-own-adventure style, or you can write the next part of the story yourself, before letting the AI step back in. It’s also possible to collaborate on stories with the broader community.”
Romance novels, some of which include erotica, have been a popular category for readers in recent years. In the first half of 2022, sales of romance novels jumped by 33%, Audacy reported last year.
“There are creative cases in which content involving sexuality or nudity is important to our users,” Jang explained. “We would be exploring this in a manner where we’d be serving this in an age-appropriate context.”
Source: ChatGPT makers are exploring how AI can generate porn