Information Security

ChatGPT



ChatGPT, like any other tool powered by artificial intelligence, comes with risks that need to be considered. The key risks associated with any AI platform, and with ChatGPT in particular, involve privacy, data security, and ethics.

Privacy and Data Security
When interacting with ChatGPT, users often provide sensitive information or personal data. There is a risk that this information could be stored or mishandled, potentially leading to privacy breaches or fines for violating contractual or regulatory obligations. To reduce this risk, use only secure communication channels and ensure that proper data protection measures are in place. It is also important to be cautious when sharing personal, financial, or confidential information during interactions with AI systems. If you wouldn’t post it on a Cumberland Avenue billboard, don’t submit it to a public chatbot!
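One practical data protection measure is to scrub obvious identifiers from text before it leaves your machine. The Python sketch below is a minimal illustration, assuming a few regular-expression patterns for SSNs, email addresses, and card numbers; it is not a complete or university-endorsed PII filter.

    import re

    # Illustrative only: redact a few obvious identifier patterns from a
    # prompt before it is sent to a public chatbot. These patterns will
    # not catch every form of sensitive data.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(prompt: str) -> str:
        # Replace each match with a labeled placeholder.
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
        return prompt

    print(redact("My SSN is 123-45-6789, reach me at jdoe@example.edu"))
    # -> My SSN is [REDACTED SSN], reach me at [REDACTED EMAIL]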

Bias and Misinformation
ChatGPT’s responses are generated from patterns and information learned from its training data, and a public ChatGPT chatbot “learns” from everything it is fed. If that data contains biases or inaccuracies, there is a risk that the AI system will generate biased or misleading responses, perpetuating stereotypes, spreading misinformation, or reinforcing existing biases. To address this risk, ongoing monitoring and evaluation of AI systems like ChatGPT are necessary to detect and correct biases and inaccuracies in their responses.

Implementing robust risk management strategies and guidelines is important when deploying and using AI systems like ChatGPT. Regular auditing, transparency, and a collaborative effort between developers, users, and experts can help reduce these risks and ensure the responsible and ethical use of AI technology.
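As one small example of what regular auditing could look like in practice, the Python sketch below keeps a local, timestamped record of each AI interaction for later review. The file name and record fields are illustrative assumptions, not a prescribed format.

    import json
    import time
    from pathlib import Path

    # Illustrative audit trail: append each prompt/response pair to a
    # local JSON Lines file so AI usage can be reviewed later.
    LOG = Path("ai_usage_audit.jsonl")  # hypothetical file name

    def record_interaction(user: str, prompt: str, response: str) -> None:
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "user": user,
            "prompt": prompt,
            "response": response,
        }
        with LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    record_interaction("jdoe", "Summarize this public press release.", "Here is a summary...")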

As a Reminder:
The UT System, at this time, does not have any legal agreement with any AI developer that provides any assurance of data confidentiality. Putting data into ChatGPT or similar services is therefore equivalent to disclosing that data to the public, and we must apply the same data-sharing precautions to this new technology that we use every day. Specifically, this means the following information should not be placed into any AI service:
Any data whose disclosure to the public would be considered a breach under FERPA, HIPAA, GLBA, the PCI DSS, or any other federal or state statute or contractual obligation.
Examples include (not exhaustive; a simple pre-submission check is sketched after this list):

  • SSN
  • Credit Card Numbers
  • Personally identifiable medical information
  • Financial Aid information
  • Student names and grades
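To make the rule above concrete, the following Python sketch shows one way a pre-submission check might refuse text that appears to contain regulated identifiers, using an SSN-like pattern and the Luhn checksum for card numbers. The patterns are illustrative assumptions, not an approved institutional control.

    import re

    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number: str) -> bool:
        # Luhn checksum used by payment card numbers.
        digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
        total = sum(digits[0::2])
        total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
        return total % 10 == 0

    def safe_to_submit(text: str) -> bool:
        # Return False if the text looks like it contains regulated data.
        if SSN_RE.search(text):
            return False
        # Treat a digit run as a card number only if it passes the Luhn check.
        return not any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))

    assert not safe_to_submit("Card: 4111 1111 1111 1111")  # Luhn-valid test number
    assert safe_to_submit("Order #1234567890123 shipped")   # fails Luhn, allowed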

Additionally, great caution is advised with the following information:

  • Research data/Intellectual Property
  • Source code
  • Proprietary data 
  • Internal meeting notes
  • Hardware-related information
  • Presentation notes and emails