Copyright and AI: Current Implications and Challenges

Since the emergence of technologies like ChatGPT, the legal landscape surrounding artificial intelligence has changed dramatically, with a marked increase in legal claims against the developers of these models. This growing wave of litigation raises important questions about copyright and fair use in a context where digital content creation is more automated than ever.

Plaintiffs, many of whom are content creators, argue that the use of their works to train these systems violates their intellectual property rights. Developers, on the other hand, contend that this use is protected by the fair use doctrine, a legal defense that permits limited use of copyrighted material without the rights holder's explicit authorization.

In response to this growing legal pressure, some developers are entering into licensing agreements with content-hosting platforms, a strategy that looks less like simple legal compliance and more like an attempt at negotiation. This suggests that these actions may be less about resolving discrete legal disputes than about a complex bargaining game in which both sides seek to protect their interests.

The outcomes of these legal actions could vary significantly, from rulings upholding copyright claims, to settlements between the parties, to outright victories for the developers in some cases. Analysts note that, although content creators have legitimate concerns, expanding the legal framework of copyright would not necessarily preserve jobs in the face of automation. Moreover, a ruling in favor of copyright holders could have negative repercussions, especially if it weakens fair use protections for research or artistic expression.

Amidst these legal conflicts, several courts have dismissed claims brought under Section 1202(b) of the Digital Millennium Copyright Act. In a notable case, “Raw Story Media v. OpenAI, Inc.”, the court dismissed the claims, finding no evidence that the training of ChatGPT had harmed the plaintiff. The same reasoning has been applied in other cases, such as “Andersen v. Stability AI Ltd.” and “Kadrey v. Meta Platforms, Inc.”, where similar claims were also dismissed.

However, not all disputes have ended in rulings favorable to developers. In “Andersen v. Stability AI Ltd.”, certain claims were allowed to proceed based on allegations that the plaintiffs’ works had been included in a training dataset, a development that could lead to clearer precedents in copyright law.

The fair use doctrine is central to this debate, yet to date most artificial intelligence cases have not reached a ruling on its application. A notable case illustrating this uncertainty is “Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence, Inc.”, in which the judge reversed an earlier position on whether the use of the technology qualified as fair use, raising new concerns for developers.

Meanwhile, technology giants like OpenAI and Google are forging licensing agreements worth millions of dollars with various media companies, giving rise to a $2.5 billion training data licensing market. However, attention should not focus solely on the economic benefits these corporations stand to gain. The real danger is that a small group of companies, the only ones able to bear such high costs, could dictate the course of artificial intelligence, shaping its development and access in the near future.

Source: MiMub in Spanish
