Q&A: Univ. of Phoenix CIO says chatbots could threaten innovation

The emergence of artificial intelligence (AI) has opened the door to endless opportunities across hundreds of industries, but privacy continues to be a huge concern. The use of data to inform AI tools can unintentionally reveal sensitive and personal information.

Chatbots built atop large language models (LLMs) such as GPT-4 hold tremendous promise to reduce the amount of time knowledge workers spend summarizing meeting transcripts and online chats, creating presentations and campaigns, performing data analysis, and even compiling code. But the technology is far from fully vetted.

As AI tools continue to grow and gain acceptance, and not just within consumer-facing applications such as Microsoft's Bing and Google's Bard chatbot-powered search engines, there is growing concern over data privacy and originality.
