The race to use ChatGPT - What are the risks?

Everywhere I turn, there is new content about ChatGPT and how it revolutionizes artificial intelligence (AI). As with most new technologies, companies must weigh the risks, whether they jump on the bandwagon or push it aside.

Chat Generative Pre-trained Transformer, better known as ChatGPT, is a chatbot: you can use it to have a conversation about almost any topic. Its restrictions relate to hate speech, discrimination, violence, and illegal activities. Additionally, as shown below, it will not provide medical or legal advice.

Chat session with ChatGPT

ChatGPT uses information that it has been directed to ingest or trained on. As users supply new data, that data may be used to improve future responses to others' questions. ChatGPT does state that "any data collected or used for this purpose is done so in accordance with strict data privacy and security standards to protect user privacy." Well, that seems legitimate. Why should you be worried?

Earlier this month, the "Godfather of A.I.," Dr. Geoffrey Hinton, stepped down from his position at Google. Why? Because he wants to be able to speak freely about the dangers of the generative artificial intelligence that powers services like ChatGPT.

There are four security risk areas to consider when using an artificial intelligence capability like ChatGPT. If you can manage these risks, ChatGPT may be a solution that enhances your organization.

The first risk is people. Employees need to understand that, however innocent they seem, AI chatbots do not keep information confidential. The service may be unable to differentiate between your company's sensitive information and its publicly available information. As with cybersecurity generally, employee awareness is vital.

The second area of concern is data privacy. As previously mentioned, generative AI services can retain information from users' interactions with the service. That information could include sensitive personal or company data.

Next, generative AI can produce malicious software on request. Services like ChatGPT refuse to write malware directly, but they can be coaxed into doing so with carefully worded prompts. This gives less-skilled criminals a way to assemble malware that could steal information or hold it for ransom.

Finally, there is the potential loss of control of information. For example, a search for ChatGPT in the Google Play Store or the Apple App Store produces a long list of applications that interface with ChatGPT and other generative AI solutions. These applications sit between the user and the service, where they can intercept the user's personally identifiable information and anything else the user shares.

The risks associated with generative AI are considerable, but they can be managed by applying access controls and providing security awareness training. Contact us to find out how we can help your enterprise address the risks associated with generative AI.
