In an era dominated by artificial intelligence (AI), experts are urging caution when using AI tools like ChatGPT, DeepSeek, and others, emphasizing the risks of sharing personally identifiable information (PII).
Dorcas Akintade, a cybersecurity transformation expert, shared insights on the growing concerns regarding data privacy during an interview on Channels Television.
Akintade explained that AI tools, including DeepSeek and ChatGPT, are “double-edged swords” in the world of cybersecurity.
While both tools perform similar functions, they come with varying levels of risk depending on how they are used.
She stressed that the true danger lies in the kind of data users provide to these AI systems.
“AI feeds on data. So to a very large extent, a lot of people will think that ChatGPT is safer. But is it actually safer? Those are some of the questions I ask,” Akintade noted.
She went on to explain that whether users are interacting with ChatGPT, DeepSeek, or any other AI tool, the data they input could be at risk.
Akintade warned against sharing sensitive personal details with AI systems. While some might assume certain information is harmless, she stressed the importance of never disclosing personally identifiable information (PII).
“But telling ChatGPT your name, the company where you are working or where the job offer is coming from, and how much is being offered — sensitive information, anything that has to do with what we call PII. That is your personally identifiable information. Anything that can identify you,” Akintade explained.
She further clarified that sensitive data includes details such as your name, company affiliation, salary information, and even more personal aspects like your children’s names and birth dates.
“Don’t tell ChatGPT your children’s names. Don’t tell ChatGPT your children’s dates of birth. Those kinds of information are stored because they are being used to train the bots,” Akintade cautioned.
How far is too far?
In response to concerns about the risks associated with using AI tools, Akintade emphasized the importance of being “street smart and wise” when interacting with these platforms.
While acknowledging the utility of AI tools, she stressed that users must exercise caution in disclosing personal information.
“There’s nothing wrong with telling ChatGPT you just got a job offer. A lot of people haven’t even received job offers yet, and they’re already drafting their acceptance letters. So it’s fine to mention you’ve received a job offer. That’s not sensitive information,” she explained.
However, Akintade cautioned against sharing more sensitive details, particularly health-related information. She specifically advised against disclosing personal health conditions.
“For example, I don’t want you to tell ChatGPT that you have cancer or what your blood type is. Sensitive information should not be divulged.”
Ultimately, she urged users to avoid revealing any specific details that could potentially link back to their identity, reinforcing the need for caution in navigating AI platforms.
AI cybersecurity concerns
As AI continues to revolutionize various industries, including cybersecurity, the conversation about data privacy is becoming increasingly urgent. Akintade’s insights serve as a timely reminder that while AI offers immense potential, users must exercise caution when sharing personal data with these tools.
By being mindful of the data shared, individuals can protect themselves from potential data breaches and privacy risks.