
The Copyright Strikes Back: What the Midjourney Lawsuit Means for the Future of AI-Generated Marketing Content

Published on October 6, 2025


In the whirlwind of digital transformation, generative AI has emerged as a game-changer for marketers. Tools like Midjourney, DALL-E, and Stable Diffusion can conjure stunning visuals, compelling ad copy, and entire campaign concepts in minutes, promising unprecedented efficiency and creativity. But beneath this shimmering surface of innovation, a legal storm is brewing. The landmark Midjourney lawsuit, a class-action case brought by artists against major AI companies, has thrown a harsh spotlight on the murky legal waters of AI-generated content copyright. For marketers who have enthusiastically adopted these tools, the implications are profound and potentially perilous.

This lawsuit isn't just a niche legal battle; it's a foundational challenge to the way generative AI models are built and used. It raises fundamental questions about ownership, infringement, and the very definition of creativity in the digital age. As a brand manager, content creator, or agency owner, you can no longer afford to treat AI image generators as simple stock photo alternatives. The legal risks are real, and the potential impact on your brand’s reputation and bottom line could be significant. This article will dissect the core issues of the case, analyze the current legal landscape, and provide actionable strategies to help you navigate the complex intersection of AI, marketing, and copyright law safely and effectively.

What is the Midjourney Lawsuit? A Simple Breakdown

At its heart, the case of Andersen et al. v. Stability AI et al. is a clash between the creators of original art and the creators of technology that learns from that art. Filed in early 2023, this class-action lawsuit was brought by a group of artists against some of the biggest names in the generative AI space: Stability AI (creators of Stable Diffusion), Midjourney Inc. (creators of the Midjourney tool), and DeviantArt (which launched its own AI art generator, DreamUp). The case represents a critical test of how existing copyright law applies to the novel processes of training and using generative AI models.

For marketers, understanding this lawsuit is not an academic exercise. Its outcome could dictate the legality of the AI tools you use every day, the ownership of the content you create with them, and the level of risk your business assumes with every AI-generated image you publish. It's the first major legal domino to fall in a chain that will likely redefine the rules of digital content creation for years to come.

The Plaintiffs: Who is Suing and Why?

The lawsuit is led by three prominent artists: Sarah Andersen, creator of the popular webcomic "Sarah's Scribbles"; Kelly McKernan, a respected illustrator and painter; and Karla Ortiz, a concept artist and illustrator known for her work on major film and video game franchises. They represent a potentially massive class of artists whose work was allegedly used without permission to train these powerful AI models.

Their central grievance is that the AI companies scraped billions of images from the internet, including their copyrighted artwork, to build the massive datasets that power their models. They argue that this constitutes direct copyright infringement on an unprecedented scale. The artists contend that these AI systems are not just learning 'styles' but are, in effect, storing and remixing compressed copies of their work, allowing users to generate new images that are derivative of and directly compete with their original creations. For them, this is not just about uncredited use; it's about the potential for technology to devalue their life's work and undermine their ability to make a living.

The Core Allegation: Copyright Infringement on a Massive Scale

The lawsuit’s core allegation is multi-faceted. The plaintiffs claim that the AI companies engaged in several forms of copyright infringement. Firstly, they allege direct copyright infringement through the act of scraping and copying their artwork to create the training datasets. Every time an image was downloaded and fed into the model, they argue, an unauthorized copy was made.

Secondly, they claim vicarious copyright infringement. This means they hold the AI companies responsible for the infringing acts of their users. When a user generates an image in the style of a specific artist, the plaintiffs argue the AI company is facilitating and profiting from the creation of an unauthorized derivative work. They point to the ability of models like Midjourney to mimic an artist's signature style with startling accuracy as evidence of this capability.

Finally, the lawsuit includes claims related to the removal of copyright management information (such as watermarks, signatures, and metadata) during the scraping process, a violation of the Digital Millennium Copyright Act (DMCA). The artists argue that by stripping this information, the AI companies made it impossible to trace the generated images back to their source material, further obscuring the alleged infringement. The sheer scale of this alleged data harvesting—involving billions of images—is what makes this case so monumental.

The Big Question: Can You Copyright AI-Generated Content?

Parallel to the infringement debate is a question that directly impacts every marketer using AI: who owns the output? If you use Midjourney to create the perfect hero image for your new landing page, can you claim copyright and prevent competitors from using it? The answer, according to current U.S. law, is complicated and hinges on a single, crucial concept: human authorship.

The U.S. Copyright Office's Stance on 'Human Authorship'

The U.S. Copyright Office has been clear and consistent on one point: copyright protection can only be granted to works created by a human being. This principle has been tested before, most famously in the "monkey selfie" case, where the courts and the Copyright Office concluded that a photograph taken by a monkey could not be protected by copyright because it lacked a human author. The Copyright Office has extended this logic to the world of AI.

In its official guidance released in March 2023, the Office stated that if a work is produced by a technology operating without any creative input or intervention from a human author, the Office will not register it. They use the example of a user giving a simple prompt to an AI like "a photograph of a dog on a beach." If the AI then generates an image that the user accepts without further modification, the user cannot claim copyright. The Office considers the AI tool, not the human, to be the 'author' in this scenario, and since the AI is not human, the work falls into the public domain.

However, the guidance also leaves a door open. It suggests that a work containing AI-generated material *can* be copyrighted if a human author has selected, arranged, or modified the material in a sufficiently creative way. The key example is the case of Kristina Kashtanova's comic book, "Zarya of the Dawn." The Copyright Office granted her a copyright for the book as a whole—for her writing, the arrangement of the images, and the overall story—but it explicitly denied copyright for the individual images, which were created with Midjourney. This establishes a critical precedent: your creative contribution is what matters. A simple prompt is not enough. A detailed process of prompting, curating, editing, and combining AI elements with original human work might be.

Fair Use vs. Derivative Work: The Central Legal Battleground

The entire Midjourney lawsuit is poised to pivot on the legal doctrine of "fair use." This is the primary defense the AI companies are expected to mount. Fair use is a provision in copyright law that permits the unlicensed use of copyright-protected works in certain circumstances, such as for commentary, criticism, news reporting, and research. To determine if a use is 'fair,' courts analyze four factors:

  1. The purpose and character of the use: Is the new work transformative? Does it add a new meaning or message, or is it merely a substitute for the original? The AI companies will argue that training their models is highly transformative, creating a new tool rather than just copying images.
  2. The nature of the copyrighted work: Using factual works is more likely to be fair use than using highly creative works like art. This factor likely favors the artists.
  3. The amount and substantiality of the portion used: The AI companies used the entirety of the artworks, but they will argue that each individual work is a minuscule and insignificant part of the massive training dataset.
  4. The effect of the use upon the potential market for the original work: This is a crucial point. Do AI-generated images in the style of a particular artist harm that artist's ability to sell their own work? The plaintiffs will argue a resounding 'yes,' while the defense will claim they are creating new markets, not destroying existing ones.

Conversely, the artists will argue that the AI-generated outputs are "derivative works." A derivative work is a new piece that is substantially based on a pre-existing one, such as a movie based on a book. The right to create derivative works is an exclusive right of the copyright holder. If the court finds that AI outputs are derivative works of the training data, it would be a major victory for the plaintiffs and a devastating blow to the AI companies. This legal tug-of-war between 'transformative fair use' and 'infringing derivative work' is the central drama of the case, and its resolution will have far-reaching consequences.

Immediate Risks for Marketers Using AI-Generated Content

While the lawyers battle it out in court, marketers are left in a state of uncertainty. Using AI-generated content isn't just a creative choice anymore; it's a risk management decision. Understanding the immediate, tangible risks is the first step toward mitigating them.

Are Your AI-Generated Images Legally Safe for Commercial Use?

The blunt answer is: it depends, and it's risky. The terms of service for a tool like Midjourney may grant you a broad commercial license to use the images you create, but this license is only as good as Midjourney's own legal standing. If a court rules that Midjourney's model was trained illegally, the very foundation of that license crumbles. An image generated from a model trained on infringing data could itself be considered an infringing work.

This means that even if you have a 'commercial license' from the AI provider, you might not be protected from a copyright claim from an original artist whose work was part of the training data and is recognizably present in your output. Your license is with the AI company, not with the millions of artists whose work may have been used without permission. Until these legal issues are resolved, no AI-generated image from a tool trained on scraped data can be considered 100% legally safe for high-stakes commercial use, such as a logo, a major ad campaign, or product packaging.

Potential Liabilities: From Takedown Notices to Lawsuits

The legal risks for businesses using this content are not hypothetical. They fall into several categories, ranging from inconvenient to catastrophic:

  • DMCA Takedown Notices: The most common and immediate risk. An artist or their representative could identify an element of their work in your marketing materials and file a DMCA takedown notice with your web host, social media platform, or service provider. This can lead to your content being removed without warning, disrupting your campaigns and potentially leading to account suspension.
  • Cease and Desist Letters: A more direct and serious step. You could receive a letter from an artist's lawyer demanding that you stop using the image and potentially pay damages for its unlicensed use. Responding to these requires time, legal fees, and can be a significant distraction.
  • Direct Lawsuits: While less likely for a single image, the risk of being named in a lawsuit for copyright infringement is real, especially for large-scale or high-profile campaigns. If your company is seen as profiting significantly from what an artist considers a stolen work, you become a target. The legal fees alone, even if you win, can be substantial.
  • Reputational Damage: Beyond the legal jeopardy, there's a significant brand risk. Being publicly accused of stealing from artists can lead to a PR nightmare, alienating customers and damaging your brand's reputation as an ethical and creative company. In an era of social media accountability, this risk cannot be underestimated.

How to Protect Your Brand: Actionable Best Practices

The legal landscape is a minefield, but that doesn't mean you have to abandon AI altogether. It simply means you need to proceed with caution and a clear strategy. By adopting a proactive and informed approach, you can harness the power of AI while minimizing your legal exposure. Here are four essential steps every marketing team should take.

Step 1: Vet Your AI Tools and Their Terms of Service

Not all AI tools are created equal, especially when it comes to their legal underpinnings. Before integrating any AI generator into your workflow, perform due diligence. Go beyond the marketing claims and dig into the fine print. Ask these critical questions:

  • How was the model trained? This is the most important question. Does the company explicitly state the source of its training data? Tools like Adobe Firefly are trained on Adobe's own stock library and public domain content, making them a much safer choice than models trained on large, unaudited datasets scraped from the web.
  • What does the commercial license actually say? Read the Terms of Service. Do they grant you full commercial rights? Do you own the output, or do you just have a license to use it? Are there any restrictions on use (e.g., no use in logos)?
  • Do they offer indemnification? This is the new gold standard for enterprise-level AI tools. Companies like Adobe, Getty Images, and Shutterstock now offer full indemnification for the assets their AI generates. This means if you are sued for copyright infringement over an image you created with their tool, they will cover the legal costs and any damages. This is a powerful form of insurance for your business.

Step 2: Document Your Creative Process (Prompts & Revisions)

As established by the U.S. Copyright Office, 'human authorship' is your best claim to owning any part of an AI-assisted creation. Therefore, meticulously documenting your creative input is crucial. This isn't just about saving your final prompt; it's about building a case file that proves your significant creative contribution. Your documentation should include:

  • Initial concepts and goals: What was the marketing objective? What was the creative brief?
  • The tool and version used: E.g., Midjourney v6.0.
  • A log of prompts: Don't just save the final prompt. Record the entire iterative process. Show how you started with a broad idea and refined it through multiple, detailed prompts to achieve your specific vision.
  • Curation records: Note how many variations you generated and why you selected a specific one. This is a creative choice.
  • Post-generation edits: Document every change you made in software like Photoshop or Canva. Did you change colors, remove elements, add text, or composite the AI image with other assets? This is strong evidence of transformative human work.

This record can be invaluable if you ever need to defend your work's originality or file for copyright on the human-authored components of the final piece.
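For teams that want to make this documentation habit systematic rather than ad hoc, the log can be as simple as a structured file that each prompt iteration and edit gets appended to. The sketch below is one illustrative way to do it in Python; the field names, file name, and example entries are hypothetical, not a prescribed standard, and any such log should be adapted to your own workflow and retention policies.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date
from pathlib import Path

@dataclass
class PromptRecord:
    """One entry in an AI-asset provenance log (field names are illustrative)."""
    asset_id: str            # your internal name for the final asset
    tool: str                # which generator and version, e.g. "Midjourney v6.0"
    prompt: str              # the exact prompt text used for this iteration
    selected: bool = False   # whether this variation was the one you chose
    edits: list = field(default_factory=list)  # post-generation edits you made
    logged_on: str = field(default_factory=lambda: date.today().isoformat())

def append_record(log_path: Path, record: PromptRecord) -> None:
    """Append a record to a JSON log file, creating the file if needed."""
    records = json.loads(log_path.read_text()) if log_path.exists() else []
    records.append(asdict(record))
    log_path.write_text(json.dumps(records, indent=2))

# Example: log two prompt iterations for one hypothetical campaign asset,
# showing the refinement from a broad idea to a specific creative vision.
log = Path("ai_asset_log.json")
append_record(log, PromptRecord(
    asset_id="spring-hero",
    tool="Midjourney v6.0",
    prompt="dog on a beach, golden hour"))
append_record(log, PromptRecord(
    asset_id="spring-hero",
    tool="Midjourney v6.0",
    prompt="golden retriever on a misty beach at dawn, 35mm, warm tones",
    selected=True,
    edits=["color grade in Photoshop", "composited with brand typography"]))
```

A dated, append-only record like this captures exactly the evidence the Copyright Office's guidance suggests matters: the iterative prompting, the curation decision, and the human edits applied after generation.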

Step 3: Consider Using Indemnified AI Services

For any critical marketing asset—your website's main banner, a key ad visual, your product packaging—using a non-indemnified AI tool is an unnecessary gamble. The smart money is on services that stand behind their output legally. As mentioned, major players in the stock photography world have moved into this space, offering AI image generators that are not only trained on ethically sourced, licensed data but also come with a promise of legal protection. While these services might be part of a paid subscription, the cost is minimal compared to the potential cost of a copyright lawsuit. Think of it as essential business insurance for your creative assets.

Step 4: When to Involve a Legal Professional

While you don't need to call a lawyer for every social media graphic, there are clear situations where professional legal advice is non-negotiable. You should consult with an intellectual property (IP) attorney when:

  • Creating core brand assets: If you are considering using AI to generate a logo, a brand mascot, or any other long-term, foundational element of your brand identity, you must seek legal counsel. The inability to copyright or defend these assets could be devastating.
  • Launching a major, high-visibility campaign: For national or international advertising campaigns with large budgets, the potential damages from an infringement claim are magnified. A legal review of your key visuals is a prudent investment.
  • Receiving any legal notice: If you receive a DMCA takedown notice or a cease and desist letter, do not ignore it. Contact a lawyer immediately to understand your rights and obligations and to formulate a proper response.

The Future Outlook: What This Lawsuit Signals for AI in Marketing

The Midjourney lawsuit is more than just a single legal case; it's a bellwether for the future of creative industries. Regardless of the final verdict, which could be years away after appeals, its impact is already being felt. The industry is at a crossroads, and the path forward will likely be shaped by the resolution of these core legal and ethical questions.

We can anticipate several potential trends. Firstly, a bifurcation of the AI tool market. On one side, there will be 'ethically sourced' or 'commercially safe' models from companies like Adobe, Getty, and others who control their training data and offer indemnification. These will become the standard for serious businesses and enterprise clients. On the other side, more experimental, 'wild west' models will continue to exist, offering greater creative freedom but with significant, explicit legal risks. Marketers will need to choose their tools based on their risk tolerance.

Secondly, we will likely see new legislation and regulations specifically designed to address AI. Lawmakers are scrambling to catch up with the technology, and we may see new laws requiring transparency in training data, or perhaps even a new system of compulsory licensing for artists whose work is used in training datasets. Finally, the lawsuit is forcing a much-needed conversation about the value of human creativity. It highlights that true brand building isn't just about generating slick visuals; it's about telling a unique story, building an authentic connection, and having a defensible, original point of view. In the long run, AI will be a powerful tool, but it will augment, not replace, the human strategy and creativity at the heart of great marketing.

In conclusion, while the legal battle rages on, the path forward for marketers is one of cautious optimism and diligent risk management. The efficiency of AI is too great to ignore, but its legal pitfalls are too dangerous to overlook. By choosing your tools wisely, documenting your creative process, prioritizing legally indemnified solutions for important projects, and knowing when to seek expert advice, you can continue to innovate with confidence. The future of AI in marketing belongs not to the fastest adopters, but to the smartest and safest ones.