
Authorship in the Age of Algorithms: Legal and Ethical Challenges of AI-Generated Art

Introduction


The creation of art has long been considered a crucible of emotional labour and the pinnacle expression of human ingenuity. However, the emergence of generative AI in the creative sphere has destabilised this foundation, redefining not only what it means to produce art, but what counts as “art” and whether the artist truly owns the art they produce. From Monet-esque paintings generated through a series of prompts and algorithms to Ai-Da, the first humanoid robot artist, painting “self-portraits”, these developments raise critical questions about originality, ownership, and artistic intent.


Emerging technologies have historically challenged the aesthetic and philosophical boundaries that define artistic creation. In an increasingly mechanised society, it is not implausible that art will evolve from a solely human creation into a collective effort reflecting our growing dependence on technology. Innovation has always challenged existing structures and artistic hierarchies; AI’s role in the creative sphere should therefore not be viewed as an isolated disruption but as part of a broader continuum in which technology redefines artistic practice. Recognising this continuum is essential to developing robust, forward-looking frameworks capable of addressing questions of authorship and ownership and ensuring ethical innovation.


This piece considers a pivotal question: are current legal structures adequate to regulate the legal and ethical ownership of AI-generated art?


Current Frameworks

Few of the drafters of existing copyright regimes could have envisioned a future in which the concept of authorship would extend to “original” works created by algorithmically driven machines. International conventions remain ill-equipped to address the gaps in the protection of AI-generated works. As this gap persists, a fundamental question arises: does the output of AI-generated artistic creation belong to the author, the developer, or the AI platform itself?


The answer varies significantly across jurisdictions. US copyright law and subsequent case law have strictly maintained that copyright requires human authorship, so AI-generated works cannot be owned or granted any copyright protection. Legal scholars argue that this could have far-reaching economic implications, as transaction costs across the US innovation system would rise. Without clear copyright protection, companies might prefer “closed” innovation models, which would disincentivise collaboration in the development of AI systems.

However, the UK has been somewhat of a pioneer under the Copyright, Designs and Patents Act 1988, which extends copyright protection to “computer-generated works”; under Section 9(3), the individual who made the “arrangements necessary for the creation of the work” is the author of the copyrightable work. While commendable in intent, this legislation does not resolve the inherent ambiguity. Who is the precise individual who makes the “necessary arrangements”? Is it the developer of the platform, who enables creation through the code, or the user, whose inputs determine the final artistic output? Toby Blair and Sarah Bond similarly underscore the English courts’ reluctance to engage with the evolving definition of “originality” and signal the need for urgent judicial intervention to resolve the ambiguity in the existing legislation.


Ethical Challenges


Beyond questions of ownership and the enforcement of legal rights, the ethical foundations of AI-generated work are equally contentious. It can be contended that AI-generated works should not automatically be granted copyright protection, as many generative models are trained on vast datasets containing copyrighted materials used without the consent of the legal owner, conduct that could amount to copyright infringement. This issue has already reached judicial scrutiny: the High Court of Delhi, India, is currently deciding a case filed by Asian News International against OpenAI, which alleges that the storage and use of copyrighted works to train AI models infringes copyright under the Copyright Act, 1957. The outcome of this landmark case will be pivotal in shaping the global debate on data ethics and legal ownership.


A pivotal report published by DACS surveyed artists across the UK and found that 74% of artists are concerned about their work being used to train AI models, and over 89% of the surveyed individuals expressed the need for government regulation of AI to protect their works. This report clearly reflects the growing unease within the creative community regarding data transparency and infringement by AI platforms.


Rising Ambiguity


Such inconsistencies create a fragmented global protection framework that puts artists and innovators at risk. A creator may hold valid ownership rights over an AI-generated artwork in the UK, yet find themselves without any legal protection or remedies in the United States. This lack of harmonisation highlights the urgent need for coordinated reform of the international policy framework to address ambiguity in the authorship, ownership, and enforcement of copyright in AI-generated works.


Policy Recommendations and The Way Forward


Addressing the legal and ethical ambiguities surrounding AI-generated art requires coordinated and forward-looking policy action at both the international and domestic levels. To date, existing initiatives such as the EU Artificial Intelligence Act, the UK Intellectual Property Office’s 2022 open consultation, and the US Copyright Office’s guidance have acknowledged these challenges but have not provided any definitive or tangible solutions.


  1. International Harmonisation

    Policymakers must prioritise the creation of a harmonised international framework, akin to existing WIPO instruments, clarifying who owns the legal rights over an AI-generated artistic creation. Transparency obligations should be introduced that require AI developers to disclose the datasets used for training and to obtain consent wherever copyrighted material is involved, ensuring fair compensation for legal owners.


    Some legal scholars contend that a feasible solution would be to create data markets similar to streaming services such as Netflix or Spotify, or to use collective management organisations (CMOs) that collect and distribute royalties on behalf of creators. Moreover, ethical-use certificates could be developed for AI platforms that adhere to fair standards.


  2. Domestic Reform

    At the domestic level, the government should introduce digital literacy programmes and legal aid initiatives that help artists navigate this emerging terrain. Reforms must also be made to existing legal frameworks to strengthen the protection of rights. Without such reform, innovation and creativity are at risk of being eroded.


Conclusion


To conclude, the emergence of generative AI in the artistic sphere does not signal a rupture in creativity but an evolution within it. Rather than resisting the technological revolution, policymakers must develop and refine the principles of copyright protection to ensure transparency, fairness, and accountability. With coherent regulation and ethical oversight, AI can coexist with human artistry as part of a continuum that expands the possibilities of creative expression.


