Samsung's New Stance on AI: The ChatGPT Controversy Explained

11-05-2023 | By Robin Mitchell

As ChatGPT takes the world by storm, there is growing concern regarding the role that AI plays in society and whether its integration should be slowed. After a series of internal data leaks, Samsung has banned its employees from using ChatGPT for work under threat of dismissal, leading many to wonder if public AI systems have a place in company environments. What challenges does ChatGPT present from a data privacy point of view, what did Samsung announce, and what should engineers watch out for when using ChatGPT?

What challenges does ChatGPT present from a privacy point of view?

While AI systems have been around for decades, it was ChatGPT that shook the world with its natural language capabilities. At its heart, ChatGPT is an AI text generator that predicts which word should come next given a conversation and its context, yet the results it produces are genuinely mind-blowing. From generating code to writing stories, ChatGPT has demonstrated some serious capabilities, so much so that many now use it in their daily workflow.
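
To make this prediction idea concrete, the sketch below implements a toy bigram model in plain Python. It is only an illustration of next-word prediction in principle, not ChatGPT's actual architecture, and the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the vast text ChatGPT learns from
corpus = (
    "the engineer wrote the code and the engineer tested the code "
    "and the code worked"
).split()

# Count which word follows which (a bigram model)
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))       # -> 'code' (follows 'the' three times here)
print(predict_next("engineer"))  # -> 'wrote'
```

ChatGPT replaces this simple frequency table with a vast neural network trained on enormous amounts of text, but the underlying task of predicting the next token is the same.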

However, for all the benefits that ChatGPT provides, it presents a whole host of challenges. Students across the globe have been leveraging ChatGPT to generate essays that rarely duplicate existing work and often appear entirely original, prompting academic professionals to turn to AI-detection tools to identify potential instances of academic dishonesty. Intriguingly, these tools can sometimes mistakenly flag human-written content as AI-generated, highlighting both the sophistication of AI outputs and the difficulty of distinguishing them from human work.

The potential privacy issues that ChatGPT presents, a problem often underestimated, have been a topic of research in the AI community for years; Papernot and Goodfellow (2018), for instance, explored the privacy challenges posed by machine learning systems in their essay 'Privacy and machine learning: two unexpected allies?'[^2^]. Simply put, AI engines like ChatGPT learn from past conversations, meaning that prompts and responses are stored on OpenAI's servers. While this allows the engineers behind ChatGPT to review how it has responded, it also means that any data submitted to ChatGPT can remain on those servers for extended periods of time.
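
To illustrate why this matters in practice, the minimal sketch below uses the openai Python library (the 0.x client current at the time of writing) to show that every prompt, including any pasted code, travels to OpenAI's servers in the request body. The API key and prompt here are placeholders.

```python
import openai  # pip install openai (the 0.x-era client)

openai.api_key = "sk-..."  # placeholder key

# Every element of `messages`, including any pasted proprietary code,
# is transmitted to OpenAI's servers as part of the request body.
messages = [
    {"role": "user", "content": "Review this function:\ndef f(): ..."},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)

# The reply comes back from the server; both sides of the exchange have
# now left your machine and are subject to OpenAI's retention policies.
messages.append(response["choices"][0]["message"])
print(messages[-1]["content"])
```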

At the same time, teaching ChatGPT new information can also see that information shared with others. It is therefore possible for proprietary information to be given to ChatGPT, which will then provide that information to other users when prompted. This was the case recently when Samsung employees used ChatGPT to help with coding, only to find that proprietary code had been submitted to ChatGPT. Pulling this data back out of ChatGPT is challenging, as neural networks do not simply allow learned data to be removed.

Samsung threatens to fire employees using ChatGPT

Recently, Samsung Electronics announced a ban on the use of generative AI programs like ChatGPT by its employees after an incident in which engineers uploaded sensitive information to the chatbot. In an internal memo reviewed by Bloomberg, Samsung expressed concerns about data being stored on external servers, where it is difficult to retrieve and delete, and about the risk of that data being disclosed to other users. Samsung warned that failure to adhere to the new policy could result in disciplinary action up to and including termination of employment[^1^].

According to Bloomberg, Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT[^1^], and other major companies have likewise introduced restrictions or outright bans on chatbots like ChatGPT. At the same time, some countries have begun exploring potential bans, with the Italian government briefly blocking the program over concerns about personal data.

In response to these challenges, Samsung is developing its own internal AI tools for translation and document summarisation as well as for software development. These tools will be accessible to internal employees only, reflecting a trend noted in Gartner's Top Strategic Technology Trends for 2023 report, which suggests companies are increasingly seeking in-house AI solutions for improved data control (Gartner, 2023)[^4^]. Samsung has also reportedly considered switching its default search engine to Microsoft's Bing, which has embraced generative AI.

For its part, OpenAI, the designers of ChatGPT, addressed these privacy concerns in an official statement on its blog, announcing plans to introduce an 'incognito mode' for ChatGPT. This feature ensures that data sent to ChatGPT isn't stored and that prompts and responses are not used to train the system (OpenAI Blog, 2023)[^3^].

What should engineers watch out for when using ChatGPT?

The primary concern for engineers using ChatGPT is that any data provided to it can be, and will be, used for training. This means that any proprietary code given to ChatGPT could easily be shared with other users looking to solve similar problems. Engineers developing new solutions must therefore not share them with ChatGPT.
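
One practical mitigation is to screen prompts before they leave the company network. The sketch below is a hypothetical example: the patterns are invented for illustration, and a real deployment would use an organisation's own classification markers and secret-detection rules.

```python
import re

# Hypothetical patterns for this sketch; adapt to your organisation's
# actual markers for proprietary or sensitive material.
SENSITIVE_PATTERNS = [
    r"(?i)proprietary|confidential|internal use only",  # legal markings
    r"(?i)api[_-]?key\s*[:=]\s*\S+",                    # embedded credentials
    r"\b[\w.-]+\.internal\.example\.com\b",             # internal hostnames
]

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive material."""
    return not any(re.search(pattern, prompt) for pattern in SENSITIVE_PATTERNS)

prompt = "Fix this function:\n# CONFIDENTIAL: internal use only\ndef f(): ..."
if safe_to_submit(prompt):
    print("OK to send to the chatbot")
else:
    print("Blocked: prompt appears to contain proprietary data")
```

A check like this cannot catch everything, so it should complement, not replace, a clear company policy on what may be shared with external AI services.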

Engineers using ChatGPT should also be careful about using proprietary code developed by other companies. Even if proprietary code is unwittingly shared with ChatGPT, that code is still protected by law, especially if the code or the products built from it are patented. As such, engineers using ChatGPT to generate code could accidentally violate these protections.

Finally, the growing privacy concerns surrounding ChatGPT will very likely see many companies introduce restrictions and/or bans that carry heavy penalties. While ChatGPT may be quick and easy to use, its dangers to private data are far more severe than they appear. Given the recent incidents and Samsung's reaction, engineers are advised to exercise extreme caution when using ChatGPT and similar AI tools[^1^].


References:

  1. Gurman, M. (2023, May 2). Samsung Bans Generative AI Use by Staff After ChatGPT Data Leak. Bloomberg. Retrieved from [https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak]
  2. Papernot, N., & Goodfellow, I. (2018). Privacy and machine learning: two unexpected allies? cleverhans-blog. Retrieved from [http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html]
  3. OpenAI. (2023). Enhancing Privacy in ChatGPT: Introducing Incognito Mode. OpenAI Blog. Retrieved from [https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt]
  4. Gartner. (2023). Top Strategic Technology Trends for 2023: Adaptive AI. Gartner Reports. Retrieved from [https://www.gartner.com/en/documents/4020029]

By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.