
'We've got to do something': AI copyright legislation is on its way

Sen. Josh Hawley vows to sponsor bills protecting artistic copyright.

Sen. Josh Hawley (AP Photo/Rod Lamkey, Jr.)


July 16, 2025, 7:13 p.m.

Legislation limiting how artificial-intelligence companies use copyrighted material to train their programs is coming soon, Sen. Josh Hawley vowed Wednesday, in what could prove a turning point in how Capitol Hill regulates AI.

The remarks came after a fiery Senate Judiciary subcommittee hearing in which the populist Republican accused big tech companies, including Meta, and leading AI firms, such as Anthropic, of stealing tens of millions of copyrighted works.

“I expect several pieces of bipartisan legislation from me really soon in this space,” Hawley said during the Crime and Counterterrorism Subcommittee hearing. “Everybody's worried about it, and we've got to do something to protect individuals.”

When the first major generative-AI programs hit the market in late 2022, questions arose about whether the training and deployment of these programs violated intellectual-property laws. The programs work by first being fed massive amounts of data, including books, YouTube videos, social-media posts, and famous artworks, which they analyze to detect patterns. They then algorithmically recreate those patterns, tweaked according to a user's request, as “new” text, photos, or video.

The tech companies claim that the use of copyrighted material for training followed by the creation of ostensibly new text constitutes fair use, which gives them the right to use material without permission of the copyright holders.

Creators counter that the companies, which they say took their copyrighted work without compensation, do little to transform it significantly. They contend the companies have simply created a market that will soon be flooded with potentially millions of cheap, quickly produced knock-offs of their original work.

“There's no human that can read books at the pace and at the scale and retain all of this information the way that a machine can do, and then turn around and spit out just enormous numbers of competing work,” Kevin Amer, chief legal officer with the Authors Guild, told National Journal.

Dozens of lawsuits have been filed in the U.S. over the last two years by artists, authors, musicians, news organizations, and their representatives, including the Authors Guild.

While most of these cases are still in the early stages of litigation, two federal courts last month issued contradictory rulings in cases against Meta and Anthropic, signaling lengthy court battles to come over AI.

In a case brought by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson against Anthropic, a federal judge in San Francisco handed down a partial win for the tech firm, finding that in principle the use of copyrighted material to train AI programs constituted fair use.

“The copies used to train specific [large language models] did not and will not displace demand for copies of Authors’ works, or not in the way that counts under the Copyright Act,” U.S. District Judge William Alsup wrote in his opinion. The “complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act.”

But it was not all good news for Anthropic. One of the claims made by the authors in the lawsuit is that the company pirated 7 million copies of books that it uploaded into its central library before opting not to use them to train its AI model. Alsup allowed that part of the case to move forward. Anthropic faces statutory damages of up to $150,000 per pirated work.

Two days after the opinion in the Anthropic case was released, another federal judge in San Francisco handed Meta a victory in one of its copyright lawsuits, brought by a group of 13 authors.

The authors made claims similar to those in the Anthropic lawsuit: that the mass copying of their work, often illegally pirated, was used by the company in a way that violated copyright protections and threatened the entire book marketplace with AI-generated knockoffs.

Evidence Hawley presented during Wednesday’s Senate hearing suggested that Meta workers knew they were violating the law by pirating the material.

“It’s the piracy (and us knowing and being accomplices) that’s the issue,” said an internal message from a Meta engineer that Hawley presented during the hearing.

“These are Meta’s own engineers, Meta’s own employees saying they know what they're doing is ethically wrong, illegal, likely to subject them to legal liability, and they're doing it anyway because they need the money,” Hawley said.

In his opinion on the Meta case, U.S. District Judge Vince Chhabria focused on the market effects of using copyrighted material to train AI. Though in this instance he found in favor of Meta, he said that in “many circumstances” it would be illegal to use copyrighted material to train a generative AI program. He ruled that the plaintiffs in the case simply “made the wrong arguments and failed to develop a record in support of the right one.”

“No matter how transformative [large language model] training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books,” Chhabria said, opening the door to other lawsuits from creatives.

Pointing to Chhabria’s opinion, Amer said current laws are likely to eventually force AI companies to license copyrighted material for use in their generative AI models without congressional intervention. He noted that the early contradictory rulings probably will force the issue before the Supreme Court, though it may be years before the high court even hears an AI copyright case.

In the meantime, Amer said Congress should focus on transparency laws that require companies to disclose the material they trained their AI on, along with labeling requirements so consumers will know whether the book or artwork they are looking to buy was created by a computer or a human.

“Consumers are always going to prefer human-author works, but they need to be sort of aware,” he said. The Authors Guild currently issues a certification mark for human-authored work.

In the previous session of Congress, Democratic Sen. Peter Welch introduced the TRAIN Act, which would have given copyright holders the right to subpoena tech companies to determine if their copyrighted material was used to train an AI model.

Hawley said he was not satisfied with waiting for the courts to act while simply focusing on transparency and labeling.

Though he did not give details, the Missouri Republican suggested that multiple pieces of bipartisan AI legislation will be coming soon. He said he hopes the defeat of a provision in the just-passed GOP budget-reconciliation bill, which would have imposed a moratorium on state AI regulations, will prove a turning point in how quickly Congress moves to regulate the rapidly evolving technology.

“I think what it really reflects is where people are, where voters are, and people are worried, and they have a sense that they're being ripped off by these companies. And they're right, they are being ripped off,” Hawley said. “I'm all for innovation, but I'm not for destroying American journalism, authorship, music, photography, you name it. I mean, just, my gosh, it's going to destroy the country. We can't let that happen.”
