Civitai, a popular online platform for sharing AI models, recently made headlines with a controversial feature called “bounties.” Bounties reward users with a virtual currency called “Buzz,” which can be purchased with real money, for fulfilling other users’ requests, and many of those requests call for realistic deepfake images of real people.
At the heart of the controversy is the fact that many bounties posted on Civitai request the recreation of the likenesses of celebrities, social media influencers, and even private individuals, some of whom have minimal online presence. More alarming still, a significant portion of these requests are for nonconsensual sexual images, which has understandably caused a wave of concern in the online community and beyond.
In response to these developments, Michele Alves, an Instagram influencer who has found herself the target of a bounty on Civitai, voiced her fears to 404 Media, stating “I am very afraid of what this can become. I don’t know what measures I could take, since the internet seems like a place out of control.” Her sentiment reflects the growing unease among potential targets of these deepfake bounties.
Despite the ethical and legal concerns surrounding deepfake technology, Civitai ranks as the seventh most popular generative AI platform, according to a report by 404 Media, meaning bounty requests posted there reach a sizable audience. In one case, a bounty poster sourced and submitted photos of a private individual who they claimed was their wife; the ease with which the photos were gathered, and the subsequent revelation that the person was not in fact the poster’s wife, underscores the troubling potential for misuse of this technology.
Laws introduced in states like Virginia to punish deepfake creators, particularly in the context of revenge porn, highlight the legal ramifications of producing nonconsensual AI-generated images. Even so, some bounty requests on Civitai still elicit submissions, exposing their creators to potentially severe legal consequences.
Another concerning aspect of some bounty requests is the use of ambiguous or coded language to solicit sexual content. While Civitai explicitly states that bounties should not be used to create nonconsensual AI-generated sexual images, the platform does allow the creation and sharing of non-sexual images of ordinary people, leaving open the possibility of combining the two. 404 Media demonstrated this by using Civitai’s text-to-image tool to create nonconsensual sexual images of a real person in a matter of seconds.
The presence of such bounties and the resulting deepfake content raises serious ethical and societal questions about consent, privacy, and the potential for harm caused by the misuse of AI technology. As the technology continues to advance, it is imperative that platforms like Civitai take proactive steps to prevent the proliferation of nonconsensual and harmful content.
In the face of these developments, it is crucial for both policymakers and technology companies to work together to create and enforce effective regulations that can mitigate the potential harm caused by the misuse of deepfake technology. Additionally, raising awareness about the implications of deepfake technology and promoting responsible use of AI models are essential in addressing the ethical challenges posed by the creation and distribution of nonconsensual AI-generated images.
Overall, the emergence of deepfake bounties on platforms like Civitai is a stark reminder of the urgent need for comprehensive safeguards against the malicious use of AI and deepfake technology. Only through concerted effort and clear ethical frameworks can individuals be protected from the harms of nonconsensual AI-generated content.