Can ChatGPT be relied upon to maintain the safety of your data?

OpenAI recently reported that a bug in ChatGPT allowed some users to see titles from another user’s chat history, raising concerns about the privacy of your data on the platform.

Unfortunately, the answer is no, and ChatGPT has explained why. Since I have previously discussed the importance of verifying GPT results, I asked my colleague Richard Davies, an AI expert, to verify the answer it generated.

Does ChatGPT security matter?

ChatGPT security matters because the bug had wider consequences. When OpenAI investigated, it found that the bug may also have unintentionally exposed payment-related information for 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window. If you have ever been a victim of identity theft, you know how distressing it is to have your payment details compromised.

Fortunately, OpenAI has fixed the bug, and there is no evidence at this time of any theft of sensitive data or fraud resulting from the bug.

ChatGPT recommends avoiding sharing sensitive or confidential information, such as personal identification numbers, bank account numbers, or passwords, over its platform. I asked the bot to explain why, and it generated the following answer:

It is always best to avoid sharing sensitive or confidential information, such as personal identification numbers, bank account numbers, or passwords, over chat with any digital platform, including chat with an AI, for several reasons:

  1. Security Risks: ChatGPT uses encryption to protect your conversations, but there is always the risk that your chat could be intercepted or hacked by a malicious third party. This could lead to the unauthorized access of your personal and financial information, which can be used to commit fraud or identity theft.
  2. Privacy Concerns: When you share sensitive information, you lose control over who can access that information. Even if ChatGPT does not share your information with anyone, there is always the risk that someone else could gain access to your conversations.
  3. Lack of Trust: ChatGPT is an AI language model, and while it is designed to respond to your questions and provide information, it is not a person you can build a personal relationship with. When you share sensitive information, you need to trust the recipient to keep that information safe and secure. With ChatGPT, that trust relationship may not exist.

It is always advisable to avoid sharing confidential or sensitive information through chat with any digital platform, including chat with an AI. If you must share sensitive information, it is better to use a more secure method, such as a phone call or in-person meeting, and to ensure that the recipient is trusted and verified.

ChatGPT raises a question about trust relationships

ChatGPT has helpfully highlighted that a trust relationship may not exist since you are communicating with an AI language model designed to respond conversationally. My colleague Richard Davies points out that “ChatGPT may sound human-like, but it lacks the capacity for personal relationships, trust, or emotions.” He adds that “it is only a matter of time before someone falls in love with a chatbot because they feel it understands them better than a human.”

Musk calls for a halt to the AI race

Elon Musk and a group of scientists have called for a halt to the race to roll out AI, citing concerns that the push to develop artificial intelligence is spiralling out of control. The group has asked tech giants to pause for six months, warning that AI systems with “human-competitive intelligence” could pose a severe risk to society and humanity.

One significant concern is that AI could flood the internet with disinformation, a danger whose potential consequences reach beyond the realms of science fiction.

How to use GPT-4 AI securely

To use GPT-4 securely, follow ChatGPT’s own advice: if you need to share sensitive information, do so through a more secure channel and make sure you are communicating with a trusted and verified human.

Generative AI like GPT-4 is a transformative technology that is already giving companies a competitive edge by improving their processes. It can be adopted safely by training models on trusted data sets and implementing controls to safeguard your data, such as screening prompts for sensitive details before they leave your systems (sketched below).
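
To make “implementing controls” concrete, here is a minimal Python sketch of a pre-submission redaction filter that masks obviously sensitive strings before a prompt is sent to any chat model. The patterns and the `redact` helper are illustrative assumptions, not part of ChatGPT or any Guildhawk product; a production control would rely on a vetted PII-detection library and an agreed policy list.

```python
import re

# Illustrative patterns for obviously sensitive strings. These regexes are
# assumptions for this sketch; a real control would use a vetted
# PII-detection library and an agreed policy list.
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digit card-like runs
    "uk_sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Mask anything sensitive before the text leaves your systems,
    e.g. before it is submitted to a chat model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "My card 4111 1111 1111 1111 was declined; email me at jo@example.com"
    print(redact(prompt))
    # My card [REDACTED CARD_NUMBER] was declined; email me at [REDACTED EMAIL]
```

Running a filter like this on your side of the connection means sensitive details never reach the model at all, which addresses the interception and privacy risks ChatGPT itself lists above.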

OpenAI uses Kenyan workers to verify results

According to Time magazine, OpenAI employed Kenyan workers for less than $2 per hour to help ensure ChatGPT does not create toxic answers. OpenAI explains that during the review process, human moderators see only the responses ChatGPT generates and use their judgment to determine whether they meet standards for accuracy, safety, and appropriateness.


How to verify GPT results with a certified expert

It is now possible to deploy new GPT technology safely and verify results using a certified human expert. An example of this is the new GAI AI-Translation product developed in partnership with Sheffield Hallam University.

The translation results generated by GAI are authentic because the machine learning model is trained on a vast private data lake that is trusted and verified. It therefore does not produce the errors and bias that result from models trained on data sets scraped from the internet.

Global companies that use human- and AI-generated content want the assurance of a trusted linguist to guarantee that the tone of voice is perfect. Certified linguists who have been tested and verified are selected to perform this task.

Thanks to new advances like the Expert-in-the-Loop feature in GAI, companies can now seamlessly commission their professional linguists online.
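
To show the general shape of an expert-in-the-loop workflow, the sketch below routes low-confidence machine translations to a human reviewer before release. This is a hedged illustration of the pattern only; the `Translation` class, `confidence` score, and threshold are assumptions for the example, not GAI’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical expert-in-the-loop gate: machine output below a confidence
# threshold is routed to a certified linguist before release.

@dataclass
class Translation:
    source: str
    target: str
    confidence: float  # model's own estimate, 0.0-1.0 (assumed for this sketch)

def review_queue(translations: list[Translation], threshold: float = 0.9):
    """Split output into auto-approved and needs-expert-review buckets."""
    approved, for_review = [], []
    for t in translations:
        (approved if t.confidence >= threshold else for_review).append(t)
    return approved, for_review

if __name__ == "__main__":
    batch = [
        Translation("Hello", "Bonjour", 0.98),
        Translation("It's raining cats and dogs", "Il pleut des cordes", 0.72),
    ]
    ok, escalate = review_queue(batch)
    print(f"{len(ok)} auto-approved, {len(escalate)} sent to a certified linguist")
```

The design point is that the machine handles the routine volume while a certified linguist is commissioned only for the output that needs human judgment.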

The AI race may pause but it has not stopped

Big tech companies may well heed Elon Musk’s call to pause the race to roll out AI, but behind the scenes the race is well underway.

If you would like to learn how Generative AI can help you improve multilingual workflows or have a demo of GAI, please give us a call.