Molly Kelley was shocked to learn in June that someone she knew had used technology to create explicit images and videos of her, built from family photos she had shared on social media. The situation worsened when she discovered the same man had targeted roughly 80 to 85 other women, most of whom lived in Minnesota and had some connection to him.
In response to this disturbing trend, Minnesota is considering new legislation aimed at curbing deepfake pornography. The bipartisan bill seeks to hold accountable the companies that operate the websites and apps letting users upload photos to be altered into explicit content. The goal is to prevent such images from being created in the first place, rather than only addressing their distribution after the fact.
As states and Congress explore ways to regulate artificial intelligence, many have moved to ban the sharing of nonconsensual explicit content, whether created by AI or not. The Minnesota bill, however, aims to be more proactive by targeting the technology that allows this content to be generated. Critics of the bill warn that it could face challenges based on free speech rights.
State Senator Erin Maye Quade, who is leading the push for the bill, says the rapid advancement of AI technology makes these new restrictions necessary. If passed, the legislation would require operators of "nudification" sites to block access for Minnesota users, with fines of up to $500,000 for violations.
Kelley has described her experience to reporters, emphasizing how easily anyone can create realistic, explicit images in a matter of minutes and how serious the consequences are for victims.
Other governments are also taking action. The city of San Francisco has filed a lawsuit against several nudification websites, claiming they violated laws against nonconsensual pornography and fraudulent business practices. Additionally, a bill in the U.S. Senate aims to make it a federal crime to publish nonconsensual sexual imagery, including AI-generated content.
In Kansas, lawmakers have expanded the definition of illegal sexual exploitation to include AI-generated images that resemble real children. Similar proposals are being considered in several other states, including Florida, Illinois, and New York.
While advocates for the bill argue that it is necessary to protect victims, legal experts caution that the broad language of the legislation may not hold up in court. They suggest that the bill should be more narrowly defined to focus on specific types of content to avoid potential First Amendment issues.
Maye Quade believes that the legislation is solid because it addresses conduct rather than speech. She argues that the technology must be regulated to prevent harm to individuals.
The experiences of victims like Kelley and Megan Hurley, another woman whose likeness was used to create explicit images, underscore the urgent need for action. Hurley, a massage therapist, expressed her distress at how easily someone could produce such damaging content.
As discussions continue in Minnesota and beyond, the need for effective measures to combat the misuse of AI technology is becoming increasingly clear.