Stronger AI Laws Urged Amidst Taylor Swift Deepfake and Fake Biden Robocall Incidents


In the wake of alarming incidents involving deepfake images of Taylor Swift and fake robocalls mimicking Joe Biden’s voice circulating on social media, lawmakers are intensifying their push for stronger regulations on the use of artificial intelligence (AI). Clyde Vanel, Chair of the New York State Assembly Subcommittee on Internet and New Technology, highlighted the urgent need for robust safeguards to curb the misuse of AI.

New York’s legislative response to deepfakes began in September, when Governor Kathy Hochul signed legislation specifically targeting deepfake content. Vanel emphasized that the unauthorized distribution of manipulated images, such as those involving Taylor Swift, is illegal in New York and constitutes a class A misdemeanor. The legislation covers the publication of generated photos or visual depictions containing explicit content, and Vanel stressed the importance of informing the public about these legal protections.

“It’s a class A misdemeanor for someone to knowingly or recklessly publish a generated photo or visual depiction of someone with sexually explicit content. We have to let the public know what we have in place. We have to let them know this is wrong, and we will prosecute these kinds of actions,” said Vanel.

Despite these legal strides, Vanel acknowledged that more work remains, particularly in the context of political campaigns. New York currently does not mandate disclosure of the use of AI in political campaigns, a loophole that lawmakers, including Vanel, have sought to address in previous legislative proposals.


“As we approach the 2024 election, it’s crucial that we establish regulations requiring transparency in the use of AI for political purposes. People want to see reality in campaigns, not artificially manipulated content,” emphasized Vanel. Concerns about the potential impact on public perception and voting decisions have prompted calls for increased transparency in the use of AI in political advertising.

“People want to see reality, the fact. Who the person is not artificially made or anything,” remarked Sam Patel, echoing the sentiments of many who believe that distorted campaign content can influence voter opinions.

The risk of manipulated campaign content altering public perception was underscored by Markus, an 18-year-old Schenectady senior, who noted, “It can definitely change your view if you see a campaign online…especially if you already casted your vote or are really dedicated from that standpoint.”

The recent incidents involving Taylor Swift have prompted discussions about legal action against the companies responsible for disseminating fake images. Vanel emphasized the importance of collaboration with social media platforms to prevent the spread of such content.

“We found out with one of the platforms that they reduced the staff in this department to address these kinds of things. We need to make sure that there are certain things in place, and with the platforms, they have the resources to prevent this stuff and take it down,” Vanel asserted.

In a proactive move, Governor Kathy Hochul announced that the University at Buffalo would serve as the hub for her proposed Empire AI consortium. The consortium, comprising both private and public institutions, aims to conduct research and develop effective implementation strategies for AI moving forward. The initiative underscores the growing importance of AI across fields and the need for responsible development and deployment.

Vanel, who previously made headlines for drafting legislation with the help of AI, posted a deepfake video resembling himself on his social media this week. He emphasized the importance of adding warnings to such content, demonstrating a responsible approach to the use of AI in disseminating information.

“Just in the process of what I posted, I had to put warnings on. If you saw it, I posted warnings that said this is a deep fake; I had to make sure when I described it was deepfake,” said Vanel.

As technology continues to advance, the challenges surrounding AI’s ethical and legal implications remain at the forefront of legislative agendas. Lawmakers strive to strike a balance between technological innovation and protecting individuals and the democratic process from the potential harms posed by manipulated content.
