
Your Company Probably Needs a ChatGPT Policy

by delta

ChatGPT is an AI language model developed by OpenAI. Launched at the end of 2022, it already has millions of users. Most people initially used the public version of ChatGPT for personal tasks (generating recipes, poems, training routines, etc.), but many have started using it for work-related projects. This Debevoise Data Blog post discusses how people are using ChatGPT at work, what the associated risks are, and what policies businesses should consider implementing to mitigate those risks.

How employees are using ChatGPT at work

Dozens of articles have been written about how ChatGPT might replace specific jobs. But, at least for now, ChatGPT seems to be making workers more productive rather than replacing them. Here are some examples.

Fact check: Employees use ChatGPT in the same way they use Google or Wikipedia to verify facts in documents they’re writing or reviewing.

First draft: ChatGPT can generate first drafts of speeches, memos, cover letters, and recurring emails. When asked to write this blog post, it came up with some helpful suggestions, such as: “Employees using ChatGPT should be trained to understand the tool’s capabilities and limitations, as well as best practices for using it in the workplace.”

Editing documents: As a language model trained on millions of documents, ChatGPT is very good at editing text. Employees feed it poorly worded paragraphs, and ChatGPT fixes grammatical errors, makes them clearer, and generally improves readability.

Generate ideas: ChatGPT is surprisingly good at generating lists.

Coding: Two of the most common uses of ChatGPT at work are generating new code and checking existing code. Many programmers say ChatGPT has greatly improved their efficiency and productivity.
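To illustrate, here is a minimal sketch of how a developer might send a snippet to ChatGPT programmatically rather than through the web interface. It assumes the pre-1.0 openai Python package and an API key in the environment; the model name and prompts are illustrative only, not a recommended configuration.

```python
# Minimal sketch: asking ChatGPT to review a function through the OpenAI API
# instead of the web interface. Assumes the pre-1.0 `openai` Python package
# and an OPENAI_API_KEY environment variable; model name and prompts are
# illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SNIPPET = """
def average(values):
    return sum(values) / len(values)  # crashes on an empty list
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Review this function for bugs:\n{SNIPPET}"},
    ],
)
print(response.choices[0].message.content)
```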

Risks of using ChatGPT at work

Quality control risk: While impressive, ChatGPT can produce inaccurate results. When asked to draft a legal overview, it may cite irrelevant or non-existent cases. Because it is a language model, it often struggles with computational tasks and can give incorrect results when asked to solve basic algebra problems. OpenAI is fully aware of these limitations; indeed, ChatGPT itself often warns that it may generate false information, and its knowledge of world events in 2021 and beyond is limited. If someone reviewing ChatGPT’s output can easily spot and fix these kinds of errors, these risks are likely to be low. But if a reviewer can’t easily identify what’s wrong (or missing) in a ChatGPT response, or if there’s no one to review it at all, the quality-control risk is high. How serious these risks are depends on the use case. For example, summarizing news articles on a particular topic for internal circulation is less risky than generating code that is essential to the core operations of a company’s information systems.

Contractual risk: There are two main sources of contractual risk associated with using ChatGPT at work. First, there may be limitations on the company’s ability to share sensitive customer or client information with third parties, including with OpenAI via ChatGPT. Second, sharing certain client data with ChatGPT may violate the terms of the contract with the client regarding the intended use of that data. When conducting this analysis, companies should keep in mind that the rights around ChatGPT usage are spread across multiple documents, including the Terms of Use, the Sharing & Publication Policy, the Content Policy, and other terms of service. These stipulate that OpenAI may use content provided to ChatGPT to develop and improve its services. It’s also important to note that it’s not entirely clear to whom these terms apply, as many employees have signed up for ChatGPT in a personal capacity.

Privacy risk: As with the contractual risks, sharing personal information about customers, clients, or employees with OpenAI via ChatGPT may pose privacy risks. According to the ChatGPT FAQ, OpenAI may use ChatGPT conversations for training purposes and system improvement. Depending on the nature of the personal information shared with ChatGPT, companies may be obligated to update their privacy policies, notify customers, obtain customer consent, provide opt-out rights, and so on. These obligations may arise under U.S. state or federal privacy laws, and businesses should consider how those evolving laws will be interpreted. Using ChatGPT with personal data also raises questions about how companies (and, in turn, OpenAI) will handle individuals’ rights and requests to remove data from ChatGPT-generated work product or from the models themselves.

Consumer protection risks: If a consumer is unaware that they are interacting with ChatGPT (as opposed to a human customer service representative), or receives ChatGPT-generated documents from a company without express disclosure, the company may face claims of unfair or deceptive practices under state or federal law, in addition to the obvious reputational risks. In some circumstances, clients may be upset to learn that content they paid for was generated by ChatGPT and not identified as such.

Intellectual property risk: There are some complicated IP issues when using ChatGPT. First, if an employee uses ChatGPT to generate software code or other content, that content may not be copyrightable in many jurisdictions because it was not created by a human. That’s the current position of the US Copyright Office, although a recently filed lawsuit challenges the human-authorship requirement. Second, there is a risk that ChatGPT and the content it generates may be viewed as derivative works of the copyrighted material used to train the model. If that view prevails, software code, marketing materials, and other content generated by ChatGPT may be found to infringe, particularly where that content is substantially similar to copyrighted training data. Additionally, if an employee submits sensitive code, financial data, or other trade secrets or confidential information to ChatGPT for analysis, there is a risk that other users of ChatGPT could extract the same data, compromising its confidentiality and potentially supporting a claim that the data was not subject to reasonable measures to maintain its secrecy. Finally, if software submitted to ChatGPT contains open-source code, companies should consider whether the submission could constitute a distribution that gives rise to open-source license obligations.

Vendor risk: Many of the above risks also apply to corporate data provided to or received from vendors. For example, should contracts with vendors specify that information provided to the company may not be generated by ChatGPT without prior consent? Should they also specify that confidential company data may not be entered into ChatGPT?

How to reduce the risk of ChatGPT

Given these legal, commercial, and reputational risks, some companies have begun training their employees on the proper use of ChatGPT and drafting policies for using ChatGPT at work. Training should alert employees to the reality that ChatGPT is not perfect, and that results from ChatGPT queries should be validated by conventional means. Policies surrounding ChatGPT tend to classify its usage into three categories: (1) prohibited uses; (2) uses permitted with the permission of a designated authority (e.g., code generation, so long as it is carefully reviewed by experts prior to implementation); and (3) uses generally permitted without prior authorization (e.g., purely administrative internal tasks, such as generating icebreaker ideas for new hires). Additionally, companies are taking steps to mitigate the risks associated with using ChatGPT, including:

Risk assessment: Create a set of criteria to assess whether a particular ChatGPT use is low, medium, or high risk (e.g., whether confidential company or client information is shared with ChatGPT, or whether the output is shared with clients).
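For illustration only, criteria like these could be encoded so that ratings are applied consistently; the criteria names, weights, and thresholds below are hypothetical, not drawn from any particular framework.

```python
# Illustrative only: encoding risk criteria as weighted flags so that
# low/medium/high ratings are applied consistently. The criteria names,
# weights, and thresholds are hypothetical.
CRITERIA = {
    "shares_confidential_company_data": 3,
    "shares_client_personal_data": 3,
    "output_shared_with_clients": 2,
    "output_reviewed_by_expert": -2,  # mitigating factor
}

def rate_use_case(flags: dict) -> str:
    """Return 'low', 'medium', or 'high' for a set of true/false flags."""
    score = sum(weight for name, weight in CRITERIA.items() if flags.get(name))
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Client data goes in and the output goes out, but an expert reviews it first.
print(rate_use_case({
    "shares_client_personal_data": True,
    "output_shared_with_clients": True,
    "output_reviewed_by_expert": True,
}))  # -> medium (3 + 2 - 2 = 3)
```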

Inventory: Require that all uses of ChatGPT at work be reported to a team that tracks those uses and rates them as low, medium, or high risk based on the established criteria (updating the criteria where appropriate).
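A hypothetical sketch of what one entry in such an inventory might look like; every field name here is illustrative, not a required schema.

```python
# Hypothetical sketch of one entry in a ChatGPT-use inventory; the field
# names are illustrative, not a required schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChatGPTUseRecord:
    reported_by: str
    department: str
    description: str
    risk_rating: str  # "low", "medium", or "high" per the established criteria
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

registry: list = []
registry.append(ChatGPTUseRecord(
    reported_by="a.smith",
    department="marketing",
    description="Drafting first versions of recurring client newsletters",
    risk_rating="low",
))
```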

Internal labeling: For specific uses, require users to label content generated by ChatGPT in an easily identifiable way so that reviewers are aware that these materials require special attention.
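As a hedged illustration, a simple helper could prepend a standard banner to every generated draft; the banner wording below is an assumed internal convention, not a legal or regulatory standard.

```python
# Hedged illustration: prepending a standard banner so reviewers can spot
# AI-generated drafts at a glance. The banner wording is a hypothetical
# internal convention.
AI_BANNER = "[AI-GENERATED DRAFT - requires human review before use]\n\n"

def label_generated_content(text: str) -> str:
    """Return the draft with the internal AI-generated banner prepended."""
    return AI_BANNER + text
```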

External transparency: Clearly identify content created by ChatGPT when it is shared with clients or published.

Record keeping: For high-risk uses, maintain a record of when content was generated and the prompts used to generate it.
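A minimal sketch of this step, assuming an append-only JSON-lines log; the storage format and file path are assumptions, and a real system might log to a database instead.

```python
# Minimal record-keeping sketch: append each prompt/output pair, with a
# timestamp, to a JSON-lines file. The storage format and path are
# assumptions, not requirements.
import json
from datetime import datetime, timezone

def log_generation(prompt: str, output: str, path: str = "chatgpt_log.jsonl"):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```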

Training: Provide regular training to employees on both acceptable and prohibited uses of ChatGPT, informed by experience at the company and at other organizations.

Monitoring: For certain high-risk use cases, deploy tools (including those created by OpenAI or other providers of generative AI models) to detect information being shared via ChatGPT or other AI tools in violation of company policy.
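As a rough illustration (real deployments would rely on dedicated data-loss-prevention tooling), a naive pattern-based check could flag prompts that appear to contain restricted data before they leave the company; the patterns below are illustrative only.

```python
# Rough illustration only: a naive pattern-based check that flags prompts
# appearing to contain restricted data before they are sent to ChatGPT.
# The patterns are illustrative; real deployments would use dedicated
# data-loss-prevention tooling.
import re

FORBIDDEN_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"CONFIDENTIAL", re.IGNORECASE),
}

def violations(prompt: str) -> list:
    """Return the names of all patterns that match the prompt."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(prompt)]

print(violations("Summarize this CONFIDENTIAL memo for client 123-45-6789"))
# -> ['ssn', 'internal_marker']
```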
