Generative AI, often likened to magic, has transformed the creative landscape, pushing the boundaries of what is possible in image and text generation. Platforms like Stable Diffusion, Midjourney, and DALL·E 2 have captivated audiences with their ability to produce stunning visuals, in styles ranging from aged photographs to Pointillism. The attraction of these AI tools lies not only in the quality of their output but also in their speed, which far exceeds that of the average human. Remarkably, institutions like the Museum of Modern Art in New York and the Mauritshuis in The Hague have showcased AI-generated installations, highlighting the impact of this technology on the art world.
However, beneath the surface of this seemingly magical creativity lies a complex legal landscape, particularly in the realm of intellectual property (IP) law. Training generative AI platforms involves scraping vast data lakes of existing text and images, raising questions about copyright, patent, and trade mark infringement. Before businesses can fully embrace the benefits of generative AI, they must grapple with the associated risks and implement strategies to protect themselves.
Legal Implications and Ongoing Cases
Generative AI has not entered the market in a legal vacuum: existing laws have far-reaching implications for its use. Courts are currently grappling with infringement claims, uncertainty about ownership of AI-generated works, and concerns about unlicensed content in training data. Recent cases, such as Andersen v. Stability AI et al. in the United States, highlight the legal challenges, with artists suing generative AI platforms for using their original works without proper licensing.
In 2023, Getty Images filed a lawsuit against Stability AI, the developer of Stable Diffusion, alleging the improper use of its photos and citing copyright and trade mark violations. These cases prompt crucial questions about what constitutes a "derivative work" under intellectual property laws, and how the fair use doctrine applies. The outcomes will shape the legal landscape for businesses utilising generative AI.
To mitigate legal risks associated with generative AI, businesses, AI developers, and content creators must adopt proactive measures:
Developers
- Source Data Licensing: Ensure proper licensing of all content used to train your AI models.
- Opt-in Approach: Require content creators to opt-in to have their work included in training data, not opt-out.
- Maintain Source Information: Develop methods to track the origin of content used in training data.
Content Creators
- Monitor Content Usage: Track your work online to identify potential AI-generated derivatives.
- Protect Trade Marks: File for trade mark protection on all new brand and product names and logos to secure rights in your original work.
Businesses
- Contractual Protections: Include clauses in contracts with vendors and customers that address generative AI usage and IP ownership.
- Disclose AI Usage: Clearly disclose the use of generative AI in contracts and take steps to protect confidential information from being input into such tools.
- Internal AI Policy: Ensure you have an AI policy that all employees must follow when using AI in their work.
Looking ahead, content creators and businesses can take proactive steps to build a sustainable future in the age of generative AI, for example by training AI platforms on datasets built from their own intellectual property, thereby retaining control over content creation.
Generative AI offers immense potential, but respecting the rights of content creators and brands is crucial. By understanding the legal landscape and implementing risk mitigation strategies, all stakeholders can navigate this new frontier responsibly and productively.
For more information, or legal advice on how to implement and use AI within your business or line of work, please contact our Technology and Intellectual Property team.