Another Lawsuit Against AI: Is AI-Generated Code Illegal?

24-11-2022 | By Robin Mitchell

As Microsoft continues to pursue its use of OpenAI technology to generate code, one programmer and lawyer in the US has launched a lawsuit against the company over its development. Why has the lawyer launched this lawsuit, is there any merit in it, and could it be problematic for future AIs?

Lawyer launches lawsuit against Microsoft for AI-generated code

Not long ago, we reported on how the CEO of Getty Images stated that AI-generated images could be illegal and that Getty Images would not be stocking them as a result. Now, it seems the wider industry of AI-generated content is facing similar scrutiny: a lawyer and programmer from the US has decided to sue Microsoft over its use of OpenAI technology to create an AI that can generate code automatically.

Matthew Butterick, the lawyer and programmer looking to take on Microsoft, has seen what Copilot (the code-generating AI developed by Microsoft’s GitHub in partnership with OpenAI) can do and is concerned that it is not only immoral but also a breach of the terms of service under which code is shared, going beyond fair use and thus making its generated code potentially illegal. Teaming up with other lawyers, he has filed a class-action lawsuit against Microsoft and the other companies behind the AI in an attempt to limit what companies can do with such systems. Simply put, it is the fear of Matthew Butterick and many others in the tech industry that large tech firms will start to train their systems on any data that can be crawled from the internet without needing to provide attribution.

Currently, Copilot is able to generate blocks of recommended code, but even then, it is essential that developers check the code for correctness and relevance, as the sketch below illustrates. As such, many programmers have found Copilot to be excellent when coding in new languages but not something that could replace a programmer outright. However, with enough training, it is possible that Copilot could replace a large proportion of programmers worldwide, which could seriously affect creativity in the programming industry.
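
To see why that checking step matters, consider a hypothetical Copilot-style suggestion (the function and its bug are invented for illustration); the code looks plausible at a glance but fails on an obvious edge case that a reviewing developer would be expected to catch:

    # Hypothetical AI-suggested code: plausible, but it crashes on an
    # empty list (ZeroDivisionError), so it cannot be accepted as-is.
    def average(values):
        return sum(values) / len(values)

    # What a reviewing developer might change it to:
    def average_reviewed(values):
        if not values:
            raise ValueError("average() requires at least one value")
        return sum(values) / len(values)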

Is there any merit to this case?

Interestingly, the lawsuit doesn’t actually mention copyright but instead attacks the terms of service used to distribute code and share files. Simply put, code taken from a program created by another programmer often requires attribution. This not only gives credit to the original programmer but can help boost their career if their code examples are used in larger projects. Furthermore, just because code has been made publicly available doesn’t mean anyone can use it free of charge. For example, many shared projects have non-commercialisation clauses that only allow the code to be used in personal projects.
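
As a concrete sketch of what such an attribution requirement looks like in practice (the file, author, and function here are invented, and the header only paraphrases a typical permissive licence), reusing the function below without keeping its notice would breach the terms it was shared under:

    # sorting_utils.py -- illustrative snippet under an MIT-style licence.
    # Copyright (c) 2022 Jane Example
    #
    # Permission to reuse this code is granted on the condition that the
    # above copyright notice is included in all copies -- stripping this
    # header when reusing the function would breach the licence.
    def insertion_sort(items):
        """Sort a list in place using insertion sort and return it."""
        for i in range(1, len(items)):
            key = items[i]
            j = i - 1
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key
        return items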

In the case of Copilot, the AI scans through online code examples, learns how to code, and then provides similar suggestions to programmers when in use. However, as the AI has learnt specific code segments from other online programmers, it may well have breached the terms under which many of those programmers published their code. Furthermore, there are reports that some code blocks generated by the AI are near-identical to online examples published under specific terms, such as an attribution requirement.
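
A minimal sketch of the kind of overlap being reported (both snippets are invented for illustration, not taken from Copilot output): the generated suggestion matches the licensed original almost token for token but silently drops the notice its licence requires.

    # file: original/maths_utils.py (published with attribution required)
    # Copyright (c) 2021 Example Author -- keep this notice when reusing.
    def clamp(value, low, high):
        """Constrain value to the inclusive range [low, high]."""
        return max(low, min(value, high))

    # file: ai_suggestion.py (near-identical output, notice stripped)
    def clamp(value, low, high):
        """Constrain value to the inclusive range [low, high]."""
        return max(low, min(value, high))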

This is very similar to the potential legal challenges faced by AI image generators that create concept art based on the work of other artists. While the generated image itself may be unique, heavily borrowing artistic styles from other artists could be considered a breach of copyright and possibly fraud (depending on how the final image is marketed).

Of course, there are counterarguments to such claims, one being that humans learn in a very similar manner. Humans don’t spontaneously develop their own programming styles; they learn from other programmers, and there are only so many ways a solution can be written, meaning there will always be some degree of similarity between different programmers. Just like humans, AI systems study many different programming techniques in order to formulate their own approach, one judged to produce correct results.

Could this case be problematic for future AI systems?

The outcome of this case is yet to be decided, but humanity is at a crossroads with AI. One path leads to a future that sees AI as more of a threat than a benefit, one in which its use in society is carefully controlled. On this path, the creative abilities of humans are respected and rewarded, and machines simply cannot take advantage of all the hard work done by humans.

The other path is one where we fully embrace AI and accept that it is able to look at any data and learn from it. If humanity goes down this path, it could very well introduce the idea that AIs are conscious to some degree and therefore need to be granted rights. Furthermore, an AI that generates original content might even own the rights to that work, meaning no human could simply exploit an AI to profit unfairly.

Whether the case has any merit is up to the courts to decide, but its outcome will likely influence how researchers approach AI learning in the decades to come.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.