- May 2024
-
media.dltj.org
-
why training artificial intelligence in a research context is, and should continue to be, a fair use
Examination of AI training relative to the four factors of fair use
-
Three different issues are implicated by artificial intelligence. This is true of all artificial intelligence, not just generative AI, but particularly generative AI.
Three issues implicated by Generative AI
- Does ingestion for training AI constitute infringement?
- Does the output infringe?
- Is the output copyrightable?
The answer is different in different jurisdictions.
-
Handling Academic Copyright and Artificial Intelligence Research Questions as the Law Develops
Spring 2024 Member Meeting: CNI website • YouTube
Jonathan Band Copyright Attorney Counsel to the Library Copyright Alliance
Timothy Vollmer Scholarly Communication & Copyright Librarian University of California, Berkeley
The United States Copyright Office and courts in many United States jurisdictions are struggling to address complex copyright issues related to the use of generative artificial intelligence (AI). Meanwhile, academic research using generative AI is proliferating at a fast pace and researchers still require legal guidance on which sources they may use, how they can train AI legally, and whether the reproduction of source material will be considered infringing. The session will include discussion of current perspectives on copyright and generative AI in academic research.
-
- Jan 2024
-
spectrum.ieee.org
-
One user on X pointed to the fact that Japan has allowed AI companies to train on copyrighted materials. While this observation is true, it is incomplete and oversimplified, as that training is constrained by limitations on unauthorized use drawn directly from relevant international law (including the Berne Convention and TRIPS agreement). In any event, the Japanese stance seems unlikely to carry any weight in American courts.
Specifics in Japan for training LLMs on copyrighted material
-
After a bit of experimentation (and in a discovery that led us to collaborate), Southen found that it was in fact easy to generate many plagiaristic outputs, with brief prompts related to commercial films (prompts are shown).
Plagiaristic outputs from blockbuster films in Midjourney v6
Was the LLM trained on copyrighted material?
-
- Jul 2023
-
arxiv.org
-
First, under a highly permissive view, the use of training data could be treated as non-infringing because protected works are not directly copied. Second, the use of training data could be covered by a fair-use exception because a trained AI represents a significant transformation of the training data [63, 64, 65, 66, 67, 68]. Third, the use of training data could require an explicit license agreement with each creator whose work appears in the training dataset. A weaker version of this third proposal is to at least give artists the ability to opt out of their data being used for generative AI [69]. Finally, a new statutory compulsory licensing scheme that allows artworks to be used as training data but requires the artist to be remunerated could be introduced to compensate artists and create continued incentives for human creation [70].
Four proposals for how copyright affects generative AI training data:
- Consider training data a non-infringing use
- Fair use exception
- Require explicit license agreement with each creator (or an opt-out ability)
- Create a new "statutory compulsory licensing scheme"
-
- Feb 2023
-
storage.courtlistener.com
-
COMPLAINT filed with Jury Demand against Stability AI, Inc. Getty Images (US), Inc. v. Stability AI, Inc. (1:23-cv-00135) District Court, D. Delaware
https://www.courtlistener.com/docket/66788385/getty-images-us-inc-v-stability-ai-inc/
-
-
arxiv.org
-
Certainly it would not be possible if the LLM were doing nothing more than cutting-and-pasting fragments of text from its training set and assembling them into a response. But this is not what an LLM does. Rather, an LLM models a distribution that is unimaginably complex, and allows users and applications to sample from that distribution.
LLMs are not cut and paste; the matrix of token-following-token probabilities are "unimaginably complex"
I wonder how this fact will work its way into the LLM copyright cases that have been filed. Is this enough to make the LLM output a "derivative work"?
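The "unimaginably complex" distribution the authors describe is, at each generation step, a probability distribution over the next token that the model samples from rather than text it retrieves. A minimal sketch of that sampling step, using a made-up toy vocabulary and invented logit scores (not any real model's values), assuming Python with NumPy:

```python
import numpy as np

# Toy illustration: the model scores every token in its vocabulary,
# converts those scores to probabilities, and samples the next token.
# Vocabulary and logits below are hypothetical, for demonstration only.
vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = np.array([2.1, 0.3, 1.7, 0.9, 1.2, 0.4])  # model's raw scores for the next token

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)                         # probability over the whole vocabulary
next_token = np.random.choice(vocab, p=probs)   # sampled, not copied from training text
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

The point of the sketch is only that output is drawn probabilistically from the learned distribution, which is why the cut-and-paste framing in the quote above does not describe what an LLM does.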
-