OIT News
Safer Generative AI Practices

National Cybersecurity Awareness Month
Artificial Intelligence (AI) is becoming a big part of our daily lives, helping with things like voice assistants (such as Siri and Alexa), healthcare tools, self-driving cars, and even managing finances. While AI can make life easier, it also raises important questions about security, safety, and privacy. There are many generative AI tools, such as ChatGPT, Copilot, Gemini (formerly Bard), and Grok. It’s important to be aware of their features and potential risks so you can address concerns and make the most of AI safely and responsibly.
When It Matters, Verify Information
It doesn’t take long to find news stories showing that AI has serious problems with understanding both the user’s prompts and the data on which it was trained. A common issue with generative AI is called hallucination: the model confidently produces information that is false or fabricated. Hallucinations may never be fully fixed, so you should verify all information provided by any AI. This problem extends even to journalism and the legal profession, to the detriment of everyone.
Protecting Your Privacy and Data
There are concerns about the security and privacy of many generative AI models, and the most widely used by far is ChatGPT. It is important to know that the paid and free versions of ChatGPT differ, as do their privacy policies. If you use the free version of ChatGPT (and even ChatGPT Plus), your prompts and any data you input can be used for model training, potentially allowing others to see your data. In general: don’t share it or say it online unless you’d scream it in public.
Creating Media Responsibly
Other generative AI tools that generate images and video, such as Sora, InVideo, Reve, FLUX, and Adobe Firefly, raise issues of their own, from copyright infringement to criminal uses like scams. These tools allow users to create nearly any video or image, which can be used maliciously and can then end up as training material for future models.
Using AI for Wellness and Self-Help
A final and emerging issue is the effect that using generative AI has on the user. A new condition called AI psychosis has been observed in users. Because generative AI does not understand the complex moral and ethical dimensions of human life, a model may tell the user to do things that are morally objectionable. Safety tests have also found that some AI models can encourage delusions and high-risk behaviors. There has also been a surge of users turning to “romantic” AI companions as replacements for real-life human relationships.
Generative AI holds immense potential to transform our world in positive ways, from creating personalized learning experiences to advancing medical research and enabling innovative solutions across industries. However, like any technology, it can also be used for malicious ends. Ensuring that generative AI is developed and deployed with strong ethical guidelines, privacy safeguards, and transparency is essential to building trust and minimizing risks. By prioritizing safety and accountability, we can fully embrace the benefits of generative AI while protecting individuals and society.
Have You Tried UT Verse AI Assistant?
UT Verse is the University of Tennessee’s chat-based, AI-powered platform for faculty, staff, and students. It was developed with a strong emphasis on safeguarding user data and ensuring privacy for our community.
