The copyright and intellectual property risks that enterprises inherit by using code engines powered by large language models (LLMs) are currently tremendous, and organizations should avoid allowing developers to use LLMs for core technologies. We
suggest providing strict guidance to all LLM users about the confidentiality risks associated with the use of LLMs, and creating policies, processes and controls for sensitive data.
This piece explains how the terms of service of the major players in the LLM ecosystem, including ChatGPT, provide neither adequate protections for LLM content nor provable, auditable data segmentation controls, and how their intellectual property policies
favor the LLM platform operator over the end user.
ChatGPT and other LLMs have disrupted nearly every corner of the technology ecosystem. From an information security perspective, there are important lessons to be learned about the integrity of the data that enterprises consume as LLM output. Equally
important are considerations around the confidentiality of input into LLMs. Presently, LLM terms of service, content segmentation and intellectual property protections lack the maturity enterprises need to integrate LLMs into
daily technology operations.
As of this writing, OpenAI assigns users all of its right, title and interest in and to Output. As for Input, ChatGPT and DALL-E users agree to let OpenAI use their Input to improve OpenAI's
models. At present, OpenAI states that Input submitted through its API is not used to improve OpenAI's models.
READ: Exploring the Business Risks and Challenges of ChatGPT
For Microsoft services that use OpenAI models to provide the service, such as GitHub Copilot, content ownership gets a bit murkier. For example, the Free, Pro & Team terms for GitHub Copilot
call the output "Suggestions" and state, "GitHub does not claim any rights in Suggestions, and you retain ownership of and responsibility for Your Code, including Suggestions you include in Your Code." The sticking point
is how Copilot collects input: Microsoft reserves the right to collect snippets of "Your Code" and will collect additional usage information through the integrated development environment or editor tied to your account.
This should significantly concern an organization's general counsel. Suppose an organization is working on a logistics improvement application, and Microsoft discovers certain aspects of logistics optimization through Copilot; it is within
Microsoft's rights to commercialize that logistics optimization technology for others or through Microsoft's own platforms, such as Dynamics.
Given the intellectual property terms Microsoft and OpenAI apply to any Content, Suggestions, etc., the AI risks that enterprises inherit by using LLM-powered code engines are currently tremendous. Eventually,
organizations could start to see terms and conditions requiring disclosure of the use of LLMs in the development of all technologies. Organizations that relied on LLMs to produce code could inherit all of the intellectual
property risks associated with the use of Content/Suggestions from those systems.
To clarify, any code a developer inputs into an LLM should be considered shared publicly. Likewise, any code provided as Output or Suggestions should be considered tainted, because neither OpenAI nor Microsoft makes any representation that the code can be used
without infringing on anyone who may have previously copyrighted or otherwise protected that code.
Organizations should have strict guidance for all LLM users about the confidentiality risks associated with the use of LLMs. OpenAI was recently forced to disclose a bug that allowed users to gain access to other users' prompts, including very large training data sets that had to be sent to ChatGPT as prompts. At present, there are no provable data protection models in the platforms' terms of use
and subscription agreements.
READ: ChatGPT: Uncovering Misuse Scenarios and AI Security Challenges
Unfortunately, very few automated controls allow for complete and comprehensive blocking of OpenAI services from enterprise networks. It will become increasingly important for technology teams to create clear policies, educate
users on appropriate processes and implement controls where possible.
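Where some blocking is feasible, one partial control is a hostname denylist enforced at a forward proxy or egress filter. The sketch below is illustrative only: the `is_blocked` helper and the domains in `LLM_DENYLIST` are assumptions, and a real deployment would need to maintain the list from vendor documentation and accept that it cannot be comprehensive.

```python
# Hypothetical denylist check for a forward proxy or egress filter.
# The domain list is an assumption for illustration; it is not a
# complete inventory of LLM service endpoints.
LLM_DENYLIST = {
    "api.openai.com",
    "chat.openai.com",
    "copilot-proxy.githubusercontent.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is denylisted."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Check the hostname itself and each parent domain against the list.
    return any(".".join(parts[i:]) in LLM_DENYLIST for i in range(len(parts)))

print(is_blocked("api.openai.com"))      # True
print(is_blocked("sub.api.openai.com"))  # True (parent domain match)
print(is_blocked("example.com"))         # False
```

Because LLM capabilities are increasingly embedded in other SaaS products, a control like this reduces, rather than eliminates, exposure, which is why the policy and education measures above remain the primary line of defense.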
LLMs are incredible technology platforms that have the potential to provide real performance and efficiency benefits to technology teams ranging from developers to customer support. The danger in their current iterations lies in the fact that users have
little control over the use of any data that is input into the systems, and that any output could put code bases and platforms at risk of intellectual property rights disputes.
One of the few bright spots for the use of LLMs by security teams is using them to improve security policies. For example, if a security team inputs an anonymized incident response policy into an LLM and then asks it for suggestions on how to improve
efficiency or flexibility, the LLM will likely make good suggestions. The lack of confidentiality for input into the system will, however, create significant overhead for organizations as they train users on how to appropriately anonymize input into these systems.
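As a hedged illustration of that anonymization step, a simple redaction pass could replace obviously sensitive tokens before a document is sent to an LLM. The patterns, placeholders and `anonymize` helper below are assumptions for the sketch, not a vetted anonymization solution; real data sets need far more thorough treatment.

```python
import re

# Illustrative redaction rules: emails, IPv4 addresses and a known
# organization name. These patterns are assumptions, not exhaustive.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP_ADDR]"),
    (re.compile(r"\bAcme Corp\b"), "[COMPANY]"),
]

def anonymize(text: str) -> str:
    """Replace sensitive tokens with placeholders before LLM submission."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymize("Contact jdoe@acme.com at 10.0.0.5 about Acme Corp's IR plan."))
# Contact [EMAIL] at [IP_ADDR] about [COMPANY]'s IR plan.
```

Even with tooling like this, a human review of anything pasted into an LLM prompt remains necessary, since regex-based redaction misses context-dependent identifiers such as project code names.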
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in
connection with such information, opinions, or advice.
September 26, 2023
By IANS Faculty