Developers of the Belamy Portrait created their AI art-generation algorithm using the Generative Adversarial Network (GAN) framework in machine learning. Essentially, human operators feed a collection of human-created art to the machine, from which it “learns” the creative process. In turn, the machine generates artwork using the creative intuition it has gained from this training data. The process pairs a generator, which creates new images, with a discriminator, which distinguishes real images from the newly generated ones; the generator tries to “fool” the discriminator into accepting its output as real. Another leading example of the AI art genre is being developed at Rutgers’ Art and Artificial Intelligence Lab, which has constructed a creative computerized system called the “Creative Adversarial Network” (CAN). More on this fascinating scientific concept can be found in this article.
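For readers curious what the adversarial loop looks like in practice, the toy sketch below illustrates the idea on one-dimensional numbers rather than images. It is a hypothetical illustration only (not Obvious’s DCGAN code): the “real art” is samples from a fixed distribution, the generator is a two-parameter function of random noise, and the discriminator is a simple logistic classifier. The two are trained in alternation, each trying to outdo the other.

```python
import numpy as np

# Illustrative toy GAN. Real systems (e.g., DCGAN) use deep neural
# networks and images; here every piece is reduced to a scalar model.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for "human-created art": samples from a fixed distribution.
real = rng.normal(loc=4.0, scale=1.0, size=256)

a, b = 1.0, 0.0   # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.01

for _ in range(3000):
    z = rng.normal(size=256)
    fake = a * z + b                      # generator creates new "works"

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient descent on the standard binary cross-entropy loss).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * (np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(-(1 - d_real)) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. try to "fool" the
    # discriminator (non-saturating loss, -log D(fake)).
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# Since the noise z has mean 0, the generator's output mean is simply b,
# which drifts toward the real data's mean as training proceeds.
print(f"real mean: {real.mean():.2f}, generated mean: {b:.2f}")
```

After training, the generator’s samples cluster near the real data even though it never sees the real samples directly; all it receives is the discriminator’s feedback, which is the core of the adversarial idea.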
Critics have taken issue with the originality of AI-generated artwork on the grounds that the computer code written to produce the images was borrowed. In fact, the creators of the Belamy Portrait, a French art collective called “Obvious,” acknowledged that their algorithm was a modified version of code first developed by Robbie Barrat (a 19-year-old artist and programmer), who had openly shared his “modified DCGAN algorithm” on GitHub. These concerns raise questions about how copyright protection would apply in the context of AI-generated art.
Should AI-Generated Art Be Copyrightable?
For an artwork to be copyrightable, it must possess some “minimal degree of creativity” and be original to its “author.” This leads us to questions that test the boundaries of the traditional legal framework: Can AI-generated work be deemed creative, and if so, who should be credited as its creative author?
Some critics argue that AI-generated art lacks “creativity” because the work only “mimics creative expression of bygone eras, rather than creating new works under the current cultural framework.” While technological progress may one day yield artwork that exceeds all expectations of creativity, the current demonstrations of AI-generated art, including the Belamy Portrait, have left audiences in the art community underwhelmed.
Other critics argue that AI-generated work should not be copyrightable because protecting it would dismantle the economic-incentive justification for copyright: compensating an artist for his or her labor as a “solution to the public-policy problem generated by the fact that informational works are often costly to create but inexpensive to copy.” Since AI software can itself be copyrighted and can generate artwork at mass scale, the functional rationale of encouraging “the efforts and imaginations of private creative actors” would be forgone.
If AI-generated artwork is copyrightable, who should be entitled to hold the copyright: humans or the machine?
Under the traditional copyright framework, the creators (programmers) of the AI software may hold copyrights over the software as well as the artwork it produces. Undergirding this position is the concept of “human authorship,” exemplified in the Visual Artists Rights Act, which acknowledges that artists, in the process of creation, inject their spirit and personality into the work.
Others assert that the “machine author,” rather than the software’s programmers, should hold the copyright in AI-generated art, because the AI software’s ability to engage in unsupervised learning produces an end product beyond the programmers’ intention or expectation. Some would add that this quality, among others, allows AI art to possess an objectively “original” character; that is, it carries unique and distinguishable qualities that generate a new experience for its audience.
Whether an AI-generated product or decision is attributable to the machine itself or to its human creator invites a broader conversation, one potentially greater than the art world. Automated decisions that affect society at large (be it an algorithm-based hiring tool or a facial recognition system) raise thorny questions: who assumes responsibility for the machine’s decisions when those decisions cause harm? If AI art software infringes on another copyrighted work, will the machine be held responsible, or its human creator? Or will we build an AI to make these decisions for us too?
About the author: Juyoun Han is a lawyer at Eisenberg & Baum LLP based in NYC where she leads the firm’s Artificial Intelligence Fairness & Data Privacy Department. Juyoun’s litigation practice includes Art & Copyright Law and a wide range of anti-discrimination cases. Special thanks to Patrick Lin (Brooklyn Law School) for editorial input.
Disclaimer: The views expressed in this article are exclusively those of the author and do not reflect those of the author’s employers, partners, or affiliates. This article has been prepared for informational purposes only and does not constitute legal advice. This information is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers.