
Can AI Be Creative? The Copyright Question That Could Shape a Generation


The Moment Everything Changed

Somewhere in a Washington courthouse, a judge’s gavel came down with what felt like finality. The ruling was clear: copyright protection requires human authorship. This wasn’t some peripheral matter buried in legal journals—it struck at the heart of what we’re building in artificial intelligence. For months, creative professionals had watched nervously as AI systems generated art, music, and text at speeds that made decades of human labour look quaint. The copyright question had lingered in the background, an uncomfortable conversation we were all half-avoiding. Now, suddenly, it demanded our full attention.

I’ve spent the last fifteen years investing in technology companies. I’ve seen countless innovations reshape industries, and I’ve learned that the ones that last are those built on solid legal and ethical foundations. The AI creativity question isn’t just legal theatre—it’s a fork in the road that determines whether we build tools that augment human creativity or systems that commodify and replace it. The stakes are genuinely profound, and the legal landscape is still being written.

What strikes me most is how the copyright question reveals something deeper about how we think about creativity itself. Is it a spark that requires consciousness? Or is it statistical pattern recognition that any sufficiently powerful system can perform? The law is trying to answer this, but the answer will have implications far beyond courtrooms. It will shape what we invest in, what we build, and ultimately, what we value about being human.

Human Authorship: The New Legal Standard

The US Copyright Office and the federal courts have been settling this question with increasing clarity. When the courts examined whether AI-generated work qualifies for copyright protection, they returned to first principles: copyright law assumes human authors. This isn’t some anachronistic limitation—it’s foundational to how copyright itself works. A human author owns their creation. A human author can assign or sell those rights. A human author is legally accountable for whether their work infringes on others’ rights. An AI system does none of these things.

What’s particularly interesting is how this ruling emerged not from legislative fiat but from the logic of the law itself. When a copyright claim lands on a judge’s desk, there are fundamental questions to answer. Who created this? What was their intent? Did they have legal capacity to assign rights? If the answer to the first question is “an algorithm,” the subsequent questions become meaningless. The legal structure simply can’t accommodate a non-human author. It’s not that the law is hostile to AI—it’s that the law was designed to protect human creativity and human interests, and it can’t simply be extended to machines without fundamentally changing what copyright means.

This has immediate practical consequences. Artists using AI tools retain copyright only for their creative direction and intentional choices. A person who uses Midjourney to generate an image owns that image if they sufficiently directed the process and made creative decisions that shaped the output. But the moment the process becomes primarily algorithmic—where the human is essentially feeding prompts into a black box and accepting whatever emerges—copyright protection becomes questionable. Courts are drawing lines, and those lines matter enormously for how the technology industry develops.

The Lawsuits That Changed Everything

The New York Times lawsuit against OpenAI became the symbol of this confrontation, though it’s far from alone. The Times alleged that OpenAI’s training processes used their copyrighted material without permission or compensation, then built a system that could generate text remarkably similar to Times journalism. This isn’t academic theorising—it’s a practical accusation about how these models are built. Similar cases have been filed against Stability AI and other companies by visual artists who discovered their work was used to train image generation models without consent or compensation.

What’s remarkable is how these cases forced the industry to actually explain how generative AI works. In discovery, companies had to detail their training data acquisition processes, their data sourcing decisions, and whether they obtained rights to what they used. Some of the truth that emerged was uncomfortable. Training data often came from web scrapes that didn’t obtain permission. Some companies treated copyright notices as mere metadata, something to be processed rather than respected. The assumption seemed to be that if something was publicly accessible online, it could be used as training data. The law disagreed.

These lawsuits forced a reckoning that was overdue. AI companies had moved incredibly quickly, optimising for capability and performance while treating copyright compliance as a problem to be solved later. When “later” arrived in the form of litigation, the gaps in their approach became obvious. Now, the calculus has changed. Companies are having to think genuinely about their training data sources, about licensing, about consent. This is harder and more expensive than simply scraping the internet. But it’s also more sustainable, more honest, and more likely to build lasting business models that don’t rest on the assumption that everyone will eventually accept their work being used without permission.

The European AI Act and Transparency Requirements

Whilst American courts were handling the copyright question, Europe was taking a different but complementary approach. The EU AI Act, which entered into force in August 2024 and is being phased in through 2026 and 2027, takes a regulatory perspective on generative AI. Rather than waiting for courts to decide questions case by case, the EU mandated that providers of generative AI models publish a sufficiently detailed summary of the content used for training. This sounds technical, but it’s revolutionary.

Under the EU’s framework, companies deploying generative AI must be transparent about what they trained their models on. If your training data includes copyrighted works, that needs to be disclosed. If you’ve used publicly available data without getting explicit permission from rights holders, that needs to be documented. The regulation essentially says: you can use copyrighted material in some circumstances, but you can’t hide it. You can’t claim ignorance. You can’t say the data was just “public” and therefore fair game.
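To make the idea concrete, here is a purely illustrative sketch, in Python, of what a machine-readable training-data disclosure might look like. The Act does not prescribe any particular format; the record structure and field names below are invented for illustration only.

```python
from dataclasses import dataclass


@dataclass
class TrainingSource:
    """One entry in a hypothetical machine-readable training-data disclosure."""
    name: str                   # dataset or crawl identifier
    contains_copyrighted: bool  # does the source include copyrighted works?
    licence: str                # e.g. "licensed", "public-domain", "unknown"


def disclosure_summary(sources):
    """Aggregate the figures a rights holder might want to inspect:
    how much of the training data is copyrighted, and how much of
    that copyrighted material has no documented licence."""
    copyrighted = sum(s.contains_copyrighted for s in sources)
    unlicensed = sum(
        s.contains_copyrighted and s.licence == "unknown" for s in sources
    )
    return {
        "total": len(sources),
        "copyrighted": copyrighted,
        "unlicensed": unlicensed,
    }
```

The point of a structure like this is that "we scraped the web" stops being an answer: each source carries an explicit licensing status that can be queried and challenged.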

This transparency requirement creates a mechanism for enforcement. Rights holders can now demand to know whether their work was used in training. If it was, conversations about licensing and compensation become mandatory rather than optional. The EU isn’t saying generative AI can’t use copyrighted material—it’s saying that if you want to operate in European markets, you need to be honest about it. This has already shifted negotiations. Some publishers and news organisations are negotiating licensing agreements with AI companies because the old approach of hoping no one noticed simply doesn’t work anymore. Google has signed licensing deals with news organisations. OpenAI has done the same. The financial model is starting to become clear: if you want training data, you may need to pay for it.

The Consent and Compensation Conversation

Beneath all the legal proceedings and regulatory frameworks, something more human is unfolding. Artists, photographers, musicians, and writers are asking a straightforward question: if my work made your AI system better, shouldn’t I benefit? It’s the same question that’s haunted the internet since the beginning—how do we fairly compensate creators when digital technology makes copying and distribution nearly costless?

The training data consent question matters enormously because it goes to the heart of what feels fair. Many creators didn’t even know their work was being used. An artist found their entire portfolio in an AI training dataset without being notified, let alone asked. A photographer discovered their work was being used to train systems competing with their own professional practice. Musicians learned their voices were being cloned without permission. The asymmetry was stark: large technology companies made decisions about everyone’s creative work without anyone’s agreement.

Some proposed solutions are emerging. Opt-out mechanisms would let creators exclude their work from training. Licensing models would compensate creators when their work is used. Data trusts would aggregate creators’ rights and negotiate collectively with AI companies. These aren’t perfect solutions—opt-out depends on people knowing about and using the mechanism, and licensing creates administrative overhead. But they’re attempts to restore consent and fairness to a process that had become distinctly non-consensual.
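As a thought experiment, the opt-out idea can be sketched as a filtering step in a training pipeline: documents whose source appears in a registry of opted-out creators or domains are excluded before training. This is a hypothetical Python sketch; a real system would need a shared, discoverable registry and far more robust matching than an exact domain lookup.

```python
def filter_opt_outs(corpus, opted_out_domains):
    """Split a corpus into documents kept for training and documents
    excluded because their source domain has opted out.

    corpus: list of dicts, each with a "domain" key (assumed shape).
    opted_out_domains: set of domains that have declined training use.
    """
    kept, excluded = [], []
    for doc in corpus:
        if doc["domain"] in opted_out_domains:
            excluded.append(doc)
        else:
            kept.append(doc)
    return kept, excluded
```

Even this toy version illustrates the weakness noted above: it only protects creators who know the registry exists and have added themselves to it.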

What interests me as an investor is that these problems have genuine solutions. Companies can build consent into their training pipelines. They can partner with creators and compensate them fairly. The technology doesn’t require extraction without permission. Some AI companies are demonstrating that you can build powerful models whilst respecting creator rights. The question is whether the entire industry will follow suit or whether regulatory and legal pressure will eventually force the issue. My prediction: companies that treat creators fairly from the start will build more sustainable businesses than those that hope legal challenges go away.

Creative Industries and the Displacement Question

Behind the copyright question sits something that worries creative professionals even more than legal uncertainty: displacement. When AI can generate artwork in seconds that would take a human artist days, when it can write copy that’s indistinguishable from professional writing, when it can compose music at scale, what happens to the humans who do these things for a living?

The creative industries have legitimate concerns. A graphic designer can now be asked to “compete” with AI that costs nothing and takes minutes. A copywriter finds that AI can produce acceptable content, even if not spectacular content. An illustrator watches as potential clients switch to AI image generation. A composer watches as AI music composition enters the mainstream. These aren’t abstract worries—people have already been displaced. Job postings that previously required human creatives now specify “AI-generated content acceptable,” which fundamentally changes the value proposition of hiring humans.

Some of this was inevitable. Technology has always disrupted creative industries. Photography disrupted painting. Digital design disrupted hand-drawn illustration. Synthesisers disrupted session musicians. The pattern is familiar: new technology emerges, skilled professionals worry about their livelihoods, some adapt and thrive in new roles, others face harder transitions. But the speed of AI’s development has been remarkable, and it’s not clear that the transition mechanisms that worked for previous technologies will work here.

The copyright framework matters here because it creates leverage. If creators maintain ownership and control over whether their work is used in training, they have negotiating power. If they’re compensated when their work improves AI systems, they gain a stake in the technology’s success rather than just experiencing it as a threat. The risk, if copyright law is weakened or circumvented, is that creators lose all leverage and become passive subjects of AI training rather than active participants. That’s not good for creators, and it’s not good for the long-term health of the AI industry either. Industries that build themselves on the unpaid labour of previously autonomous professionals tend not to be very sustainable.

The Philosophical Thread Running Through It All

All of this legal and regulatory activity points toward a deeper question that the law is actually asking: what is creativity, and what gives it value? Copyright law assumes that creativity is something humans do. They have ideas. They develop skills. They make choices about how to express those ideas. They put work into realising their vision. When a human does all this, they own the result. This framework reflects something important about human creativity—it’s not just the output, it’s the intentionality, the skill, the individuality that we value.

AI-generated content challenges this. A neural network doesn’t have intentions in the way humans do. It doesn’t express a unique perspective developed through lived experience. It pattern-matches against billions of examples and produces something statistically plausible. From one angle, this is remarkable and valuable—it’s a powerful tool. From another angle, it’s not really creativity in the sense that copyright law contemplates. It’s something else, something we don’t yet have words for.

I don’t think the law is wrong to distinguish between human and machine authorship. I think the law is actually protecting something worth protecting: the idea that human creativity, human expression, human skill have value that goes beyond economic output. Copyright doesn’t exist just to incentivise more creative production—it exists to recognise that creative expression is part of what makes us human, and there’s something important in maintaining human agency in the creative process.

This doesn’t mean AI shouldn’t be used creatively. Tools are tools. A human using AI to augment their creativity—to generate starting points they then refine, to explore possibilities they then develop, to speed up execution of a vision they’ve conceived—that’s still human creativity. It’s still expression. The tool is just more powerful than previous tools. But there’s a meaningful difference between using a tool to express yourself and having a tool generate expression that you passively accept.

What This Means for AI Investment and Development

At Nexatech Ventures, we’ve been thinking carefully about what the emerging legal and ethical clarity around copyright means for AI investment. The companies we back need to be built on solid legal foundations. Building on extracted training data that doesn’t have proper licensing or consent is building on sand. Sooner or later, either courts or regulators will intervene, and the costs of remediation will be substantial.

The companies that have the strongest long-term prospects are those that are thinking about copyright and consent from day one. Some are exploring licensing models where they compensate creators for training data. Others are developing synthetic data approaches that reduce dependence on scraping the internet. Still others are partnering directly with content creators to get consensual training data. These approaches are harder and more expensive initially. But they build sustainable business models that don’t depend on exploiting creators’ work.

I’m also watching closely how different regions are developing their frameworks. The US approach—letting courts handle copyright on a case-by-case basis—creates uncertainty but also flexibility. The EU approach—mandating transparency and disclosure—creates regulatory clarity and enforcement mechanisms. Eventually, a global standard will probably emerge, and my expectation is that it will be somewhere in the middle: AI companies will need to be transparent about training data, will need to have proper licensing for copyrighted material, and will probably need to offer some form of compensation or opt-out mechanism for creators.

The winners in the next phase of AI development will be companies that anticipated this shift and built it into their systems early. The losers will be those that tried to build value extraction into the business model. This isn’t just a legal prediction—it’s a business prediction. Companies that respect creator rights will have better relationships with the creative professionals who can enhance their products, will face lower legal risk, and will build more sustainable business models in a world where regulatory scrutiny of AI is only increasing.

The Broader Implications for Innovation

What we’re working through with copyright and AI creativity is actually a broader question about how we regulate innovation in the technology sector. The pattern is becoming familiar: companies move fast, they optimise for capability and speed, and legal and ethical questions get deferred. Then regulators and courts catch up, create frameworks, and suddenly companies have to rebuild what they built. This cycle is expensive and inefficient.

I wonder if there’s a better path. What if the AI companies building generative models had built consent and compensation into their systems from the start? What if they’d treated the copyright question seriously rather than as an obstacle to work around? The technology would still work. The systems would still be powerful. But they’d be built on foundations of fairness and legality rather than on assumptions that eventually proved untenable.

This matters beyond just copyright. It matters for privacy, for bias, for truthfulness, for all the ways that AI systems interact with human values. The companies that win the long-term competition will probably be those that integrate ethical and legal considerations into their development process, not as compliance overhead but as core design criteria. It’s harder. It’s slower. It costs more upfront. But it produces products people can actually trust and use without worry that they’re built on exploited labour or stolen intellectual property.

The copyright question is ultimately asking the right question: who benefits from AI, and how do we make sure that the people who made the technology possible—the creators whose work trained the systems, the researchers who built the foundations, the communities whose data shaped the algorithms—actually share in the value that gets created? Getting this right will require thinking about copyright and consent and compensation, but it will also require thinking more broadly about fairness and inclusion in how AI gets developed.

Where We Go From Here

The copyright question isn’t going away. If anything, it’s going to become more central as generative AI becomes more economically important. More creators will discover their work was used in training. More lawsuits will emerge. More regulators will develop frameworks. The legal landscape will continue to shift, and it will eventually stabilise somewhere that’s more favourable to creators than where we started.

What I find hopeful is that this isn’t actually a race to the bottom. AI companies don’t need to exploit creators to build powerful systems, and most want to build systems that are trustworthy and legally sound. There are genuine technical and business reasons to care about consent and compensation and transparency. The question is whether the industry will get there through choice and foresight or through legal pressure and regulation. Either way, I think we’re heading toward a future where AI systems are trained on properly licensed data, where creators maintain control and benefit from how their work is used, and where the technology is built on foundations that don’t require us to pretend we’re not standing on extracted value.

That future is better for everyone. It’s better for creators, who maintain agency and receive compensation. It’s better for AI companies, who build sustainable business models and maintain public trust. It’s better for society, because the AI systems that emerge are built on legitimate foundations. And it’s better for the technology itself, because it develops in conversation with human values rather than in defiance of them.

The moment that judge came down with that ruling about human authorship wasn’t the end of the conversation about AI and creativity. It was the beginning of a much deeper reckoning about what we want from this technology and what we’re willing to sacrifice to build it.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation, and What Is Information Communication Technology (ICT): A Concise Guide to ICT Basics.



Written by
Scott Dylan