The opportunity presented by Artificial Intelligence (AI) is only just starting to become apparent. Early users are leveraging AI for diverse tasks such as developing software, providing medical or personal advice, improving customer service, or writing first drafts of strategic reports.

While many companies are considering developing their own private AI capabilities, public AI providers such as OpenAI, Google, and Perplexity AI are rapidly making their offerings more powerful and easier to access. However, not everyone is aware of the risks associated with sharing private and confidential information with these public AI providers.

Chat is the most natural and common way to interact with AI, and people are sharing far more in their chats than ever before. Rather than typing a few carefully crafted search terms into Google, many people are now pasting entire paragraphs of text into their chat messages, which may include sensitive data such as personal details, commercially sensitive material, or medical information.

Since AI is so powerful and can deliver such significant efficiency gains, your employees are probably using AI in ways you are not aware of, creating a new avenue for data loss or unauthorized sharing of sensitive information. For example, an employee may use AI to draft a letter to a customer that includes their contact details, payment information, or other private information. Employees themselves may be unaware of the risks they are exposing your company to by sharing this sensitive information with a third party.

Many of these AI products are new, rapidly evolving, and may contain unknown vulnerabilities. The AI industry is still in its early stages, and the technology is changing constantly.

This means accidental data breaches may occur, or vulnerabilities could be exploited by criminals to gain access to your private and confidential information through your chats.

It is also possible that your sensitive information will be used to train the AI and re-appear in responses to another user's queries. AI providers may claim they do not include your sensitive information in their training data, but such claims are difficult to verify.

We are essentially taking their word for it, and in a competitive and fast-moving landscape there is no guarantee that they are being truthful or that they won't make mistakes.

Your customers or suppliers may not have given you permission to share their sensitive information with third parties, or may have specifically prohibited you from doing so. In many countries, legislation may also limit the use of such information to specific purposes.

If you are sharing your customers' or suppliers' sensitive information with an AI provider, you need to make sure that you have their permission to do so, and that you are complying with regulations and your professional ethical obligations.

You may also be subject to restrictions on where your data is stored. In many cases, companies need to ensure that data is physically stored within jurisdictional boundaries. As the public AI providers operate “in the cloud”, it may not be obvious in which country your data is being stored.

In many cases, regulations and company policies are struggling to keep up with the rapid pace of change in artificial intelligence. However, several recent risk assessments point the way towards tighter controls on the use of AI. For example, the State of California's March 2024 assessment of Generative AI risks identifies the breach of sensitive information as the top risk overall, and the European Commission's February 2024 study on the use of GenAI by judiciary professionals similarly highlighted the risk of sharing sensitive data.

While Artificial Intelligence offers many productivity and qualitative benefits that are only starting to be realised, there are also serious risks associated with sharing private and confidential information with public AI providers.

It is important to build awareness within your organisation of the potential risks, while putting in place controls that enable you to benefit from AI while mitigating the key risks.

Companies can take steps to mitigate the risks of GenAI use in their organisations:

  • Employee Awareness: Educate employees about the dangers of inadvertently sharing sensitive data.
  • Policy Development: Implement clear guidelines on what type of information can be shared with AI providers. 
  • Data Classification: Categorize data based on sensitivity and regulate which categories are permissible for sharing with third parties (a minimal sketch of such a check follows this list).
  • Verification: Do not take AI providers' claims at face value; seek evidence of their data handling and security protocols.
  • Agreements: Secure explicit permissions from customers or suppliers before sharing their sensitive information with third parties.
  • Monitoring: Keep track of how and where Public AI technologies are being used in your organisation.
  • Data Sovereignty: Ensure your company data remains within jurisdictional boundaries to comply with relevant legal requirements.
  • Stay Updated: Regularly review and update your risk assessments to include the latest findings and recommendations related to AI vulnerabilities and threats.
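As one way to put the Data Classification and Policy Development controls into practice, the sketch below shows a minimal pre-send check in Python that classifies a draft prompt and redacts anything it flags before the text reaches a public AI provider. It is an illustrative assumption, not a recommended implementation: the regular expressions, category names, and block-or-redact decision are all placeholders that a real deployment would replace with its own classification rules or a dedicated data loss prevention tool.

```python
import re

# Illustrative patterns for a few common categories of sensitive data.
# These are placeholders only and would need tuning for real-world use.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_prompt(prompt: str) -> dict:
    """Return the sensitive-data categories detected in a draft prompt."""
    return {
        label: pattern.findall(prompt)
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    }

def redact_prompt(prompt: str) -> str:
    """Replace detected sensitive values with category placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    draft = ("Please draft a letter to Jane (jane.doe@example.com, "
             "+61 400 123 456) about her overdue invoice.")
    findings = classify_prompt(draft)
    if findings:
        print("Sensitive categories found:", ", ".join(findings))
        print("Redacted prompt:", redact_prompt(draft))
    else:
        print("Prompt is clear to send.")
```

Even a rough check like this makes the policy concrete for employees: the prompt is either cleared, redacted, or held back for review before anything leaves the organisation.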

By Mark Cowlishaw. Mark is founder of AnonAmaze (Secure Chat with GPT). He’s a respected leader with 20+ years of experience in the technology industry, both within Australia and internationally. He has led the launch of numerous key CEO-level initiatives.