Tennessee’s governor is announcing plans to update the state’s law to protect the music industry from the misuse of artificial intelligence.
Governor Bill Lee’s office said late last week that he will introduce new legislation to protect the state’s music industry against the misuse of AI.
On Wednesday, January 10, Lee will unveil the full legislative proposal alongside state leadership, artists, songwriters, and music industry stakeholders in Nashville. State law currently protects image and likeness, but the upcoming changes will enact further protections tailored to audio.
“From Beale Street to Broadway and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” said Lee on Friday, January 5. “As the technology landscape evolves with artificial intelligence, we’re proud to lead the nation in proposing legal protection for our best-in-class artists and songwriters.”
The legislation will bolster Tennessee’s existing image and likeness protections and add a wide range of audio-specific protections to shield “songwriters, performers, and music industry professionals’ voices from the misuse of AI.”
As unauthorized AI-created songs continue to pop up online like a game of whack-a-mole, the music industry is hungry for any legislation that offers peace of mind — with protections at the federal level the eventual goal. Safeguarding the Nashville music industry is certainly a welcome start.
In October, a group of US senators introduced the NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), which aims to “protect the voice and visual likenesses of individuals from unfair use through generative artificial intelligence.”
Led by Senators Marsha Blackburn, Chris Coons, Thom Tillis, and Amy Klobuchar, the proposed bill would “prevent a person from producing or distributing an unauthorized AI-generated replica of an individual to perform in an audiovisual or sound recording without the consent of the individual being replicated.”
Further, persons who do so would be “liable for damages caused by the AI-generated fake,” while platforms hosting the fakes would be held liable if they have “knowledge of the fact that the replica was not authorized by the individual depicted.”
Exceptions would be granted for content created “for purposes of comment, criticism, or parody,” consistent with the First Amendment. Notably, the current iteration of the bill is a “discussion draft” for lawmakers to mull over.