ChatGPT Data Privacy and User Liability

Freelancers—especially those working in research and development in medical and STEM fields—cannot afford to be anything less than fully informed about how the tools they use handle, store, and share data. Nor can they afford to ignore or underestimate the liability they may face in using these tools. Today we look at ChatGPT’s data privacy and user liability guidelines; their potential legal, financial, and reputational impact on freelancers; and ways freelancers can approach integrating new technologies like generative AI into their practice.

Most generative AI programs, like OpenAI’s ChatGPT, do not guarantee data privacy. The ChatGPT bot warns users, “Information provided to me during an interaction should be considered public, not private, as I am not a secure platform for transmitting sensitive or personal information. I am not able to ensure the security or confidentiality of information exchanged during those interactions, and the conversations may be used and stored for research or training purposes. Therefore, it’s important to exercise caution when sharing personal or sensitive information during interactions with me or any other AI language model.”

Further, OpenAI clearly limits its own liability. Per its Terms of Use, “You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services . . .” In other words, ChatGPT users, you’re on your own.

What does this mean in real-world terms for freelancers and other solopreneurs? For freelancers working in areas like research and development, the protection of intellectual property is paramount: almost every professional relationship begins with the signing of an NDA or CDA, followed by a contract outlining the penalties for disclosure of proprietary information. This means that should your use of a generative AI tool with terms like ChatGPT’s result in the disclosure of a client’s intellectual property or other sensitive or proprietary information, you are the one legally and financially on the hook for that disclosure. What could a disclosure look like in this context? Something as simple as a prompt.

Additionally, the damage done to your professional reputation could destroy whatever a lawsuit hasn’t already burned to the ground. This is true even absent a disclosure or breach. In medical, scientific, and other technical communication professions, there are additional concerns about generative AI’s tendency to plagiarize, violate copyright, prevaricate, and hallucinate—all toxic to a professional reputation.

In the final analysis, generative AI is a tool. Professionals use many tools in their trade, and the best professionals understand the tools they work with in great detail, including each tool’s limitations. One persistent limitation of technological tools is their vulnerability to cybersecurity threats. For many freelancers, solopreneurs, and individuals, securing the tech tools of their profession has been a learning curve, and the same will likely be true for the professional use of generative AI tools.

It would be naive to think that everyone will simply avoid generative AI altogether; however, some of the cybersecurity lessons learned with other tech tools can be applied to generative AI tools. Always read the terms and conditions for any program, portal, or service. Understand how your data is collected and protected and with whom it is shared. Last, but most importantly, remember that breaches happen regardless of a technology’s terms, conditions, and data privacy statements—and that’s exactly when the liability clauses kick in.