Some artists have begun waging a legal battle against the alleged theft of billions of copyrighted images used to train AI art generators, which can reproduce their unique styles without compensating them or asking for consent.
A group of artists represented by the Joseph Saveri Law Firm has filed a US federal lawsuit in San Francisco against AI art companies Stability AI, Midjourney, and DeviantArt for alleged violations of the Digital Millennium Copyright Act, right of publicity violations, and unfair competition.
The artists who took action — Sarah Andersen, Kelly McKernan, and Karla Ortiz — “seek to end this flagrant and massive violation of their rights before their careers are wiped out by a computer program powered entirely by their hard work,” according to their court filing.
Using tools like Stability AI’s Stable Diffusion, Midjourney, or DeviantArt’s DreamUp, people can type phrases to create artwork similar to the work of living artists. Since the mainstream emergence of AI image synthesis last year, AI-generated artwork has been the subject of much controversy among artists, sparking protests and culture wars on social media.
One notable absence from the list of companies named in the complaint is OpenAI, creator of the DALL-E image synthesis model that arguably got the ball rolling on mainstream generative AI art in April 2022. Unlike Stability AI, OpenAI has not publicly disclosed the exact contents of its training dataset, and it has commercially licensed some of its training data from companies such as Shutterstock.
Despite Stable Diffusion’s popularity, the legality of how AI image generators work has yet to be tested in court, although the Joseph Saveri Law Firm is no stranger to legal action against generative AI. In November 2022, the same firm sued GitHub over its Copilot AI programming tool for alleged copyright violations.
False arguments and ethical violations
Alex Champandard, an AI analyst who has advocated for artists’ rights without dismissing AI technology outright, criticized the new lawsuit in several threads on Twitter, writing, “I do not trust the attorneys who filed this complaint, based on the content + how it was written. The case may do more harm than good because of this.” Still, Champandard believes the lawsuit could be damaging to potential defendants: “Anything the companies say to defend themselves will be used against them.”
To Champandard’s point, we noticed that the complaint includes several statements that potentially misrepresent how AI image synthesis technology works. For example, one passage in the first section states, “When used to produce images from prompts by its users, Stable Diffusion uses the training images to produce seemingly new images through a mathematical software process. These ‘new’ images are based entirely on the training images and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool.”
In another section that attempts to describe how latent diffusion image synthesis works, the plaintiffs incorrectly compare a trained AI model to “having a directory on your computer of billions of JPEG image files,” claiming that “a trained diffusion model can produce a copy of any of its training images.”
During the training process, Stable Diffusion drew from a large library of millions of scraped images. Using this data, its neural network statistically “learned” how certain image patterns appear, without storing exact copies of the images it viewed. In rare cases of overrepresented images in the dataset (such as the Mona Lisa), a kind of “overfitting” can occur that allows Stable Diffusion to output a close representation of the original image.
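The mismatch in scale between the model’s weights and its training data helps illustrate why literal storage is implausible. As a rough, hedged sketch (the ~4 GB weight size and ~2 billion image count below are approximate, widely cited figures, not numbers stated in this article):

```python
# Back-of-the-envelope check: could the model literally store its training set?
# The figures here are approximate public estimates (assumptions for
# illustration): Stable Diffusion's weights total roughly 4 GB, and its
# LAION-derived training set contains on the order of 2 billion images.
model_size_bytes = 4 * 1024**3       # ~4 GB of model weights
training_images = 2_000_000_000      # ~2 billion training images

bytes_per_image = model_size_bytes / training_images
print(f"~{bytes_per_image:.1f} bytes of model capacity per training image")

# A typical 512x512 JPEG runs tens of kilobytes; roughly 2 bytes per image
# is thousands of times too small to hold a copy, which is consistent with
# the model learning statistical patterns rather than archiving images.
```

Under these assumed figures, the model has only about two bytes of capacity per training image, far short of what a “directory of JPEG files” would require.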
Ultimately, if properly trained, latent diffusion models generate novel images rather than collages or duplicates of existing work, a technical reality that potentially undermines the plaintiffs’ copyright infringement argument. Whether images produced by AI generators can constitute “derivative works,” however, remains an open question without clear legal precedent, to our knowledge.
Some of the other claims in the complaint, such as unfair competition (by effectively cloning an artist’s style and using a machine to replicate it) and right of publicity violations (by allowing people to request artwork “in the style” of existing artists without permission), are less technical and may have legs in court.
Despite its problems, the lawsuit follows a wave of backlash from artists who feel threatened by AI art generators. By all appearances, the technology companies behind AI image synthesis harvested intellectual property to train their models without artists’ consent. They are already on trial in the court of public opinion, even if they are ultimately found compliant with established case law regarding the wholesale harvesting of public data from the Internet.
“Companies that build large models that rely on copyrighted data can get away with it if they do so privately,” tweeted Champandard, “but doing it publicly *and* legally is very difficult, or impossible.”
If the lawsuit goes to trial, the courts will have to sort out the distinctions between the ethical and legal wrongdoings alleged. The plaintiffs hope to prove that the AI companies benefit commercially and profit richly from the use of copyrighted images; they seek substantial damages and permanent injunctive relief to stop the allegedly infringing companies from further violations.
When reached for comment, Stability AI CEO Emad Mostaque responded that the company had not received any information about the lawsuit as of press time.