The artificial intelligence industry is growing faster than ever and now touches most aspects of our lives. Surrounded by so much AI, the question becomes: how can we protect users’ data privacy?
What is a chatbot?
A chatbot is a computer program designed to answer questions or perform tasks automatically by conversing with humans over a chat interface. Chatbots are often used on online platforms such as websites, mobile apps, and messaging services.
Chatbots can be built in many different programming languages and can use technologies such as artificial intelligence (AI), natural language processing (NLP), and machine learning to analyze and respond to user questions or requests. Chatbots can provide information, answer questions, assist in solving specific problems, or perform simple tasks like placing an order, making an appointment, or sending a message.
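To make the idea concrete, here is a minimal rule-based chatbot sketch in Python. The keyword rules and replies are hypothetical illustrations; real chatbots replace this lookup with NLP or machine-learning models, as the paragraph above notes.

```python
# A minimal rule-based chatbot: it matches keywords in the user's
# message and returns a canned reply. The rules below are made up
# for illustration only.

RULES = {
    "order": "Sure, I can help you place an order. What would you like?",
    "appointment": "I can book an appointment. Which day works for you?",
    "hours": "We are open 9am-5pm, Monday through Friday.",
}

DEFAULT_REPLY = "Sorry, I didn't understand. Could you rephrase that?"

def reply(message: str) -> str:
    """Return the reply for the first keyword found in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(reply("I'd like to make an appointment"))
```

Production systems layer intent classification and dialogue state on top of this basic loop, but the request-in, reply-out shape stays the same.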
Concerns about chatbots
Chatbots are used in many areas, including customer service, marketing, sales, healthcare, education, finance, and more. They serve all kinds of customers, from individuals to organizations and companies. This breadth has raised a wave of concerns among companies and regulatory agencies about information security.
Due to compliance concerns related to the use of third-party software by employees, some companies, including JPMorgan Chase (JPM), have implemented stricter controls on the use of ChatGPT, the popular AI chatbot that sparked the AI arms race among Big Tech companies.
Adding to the growing concerns about privacy, OpenAI, the company behind ChatGPT, revealed that the tool had to be temporarily taken offline on March 20 to address a bug that allowed certain users to view subject lines from other users’ chat history.
This same bug, which has since been resolved, also had the potential to expose “another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” as stated in a blog post by OpenAI.
More recently, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns following OpenAI’s disclosure of the breach.
A black box
“The privacy considerations with something like ChatGPT cannot be overstated. It’s like a black box,” said Mark McCreary, co-chair of the data security and privacy practice at law firm Fox Rothschild LLP, in an interview with CNN, highlighting the privacy implications of ChatGPT by comparing it to a black box of potential risks.
Since its public launch in late November, ChatGPT has enabled users to effortlessly generate essays, stories, and song lyrics by typing prompts. Subsequently, both Google and Microsoft have also introduced their own AI tools with similar functionality, utilizing large language models trained on extensive collections of online data.
McCreary pointed out that when users input information into these tools, the uncertainty of how it will be used raises significant concerns, particularly for companies. As more employees casually rely on these tools for tasks like work emails or meeting notes, McCreary expressed that the risk of company trade secrets being unintentionally shared with various AI models is likely to grow. Steve Mills, the chief AI ethics officer at Boston Consulting Group, echoed this sentiment, stating that the inadvertent disclosure of sensitive information is the primary privacy concern that most companies have with these tools.
Mills elaborated, stating, “You have employees who may innocently think, ‘Oh, I can use this tool to summarize meeting notes.’ But by pasting those notes into the prompt, they could inadvertently disclose a significant amount of sensitive information.”
Mills added that if the data users input is used to further train these AI tools, as many of the companies behind them state, then users lose control over that data, and it ends up in someone else’s possession.
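One defensive practice the experts’ warnings point toward is scrubbing obviously sensitive data before it ever reaches a chatbot prompt. Below is a minimal sketch using regular expressions; the patterns and the `scrub` function are illustrative assumptions, not an exhaustive or production-grade PII filter.

```python
import re

# Illustrative patterns for common sensitive fields. Real PII
# detection is much harder; this only catches obvious formats.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def scrub(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

notes = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scrub(notes))
```

A filter like this would sit between the employee’s meeting notes and the prompt field, so that even if the provider retains or trains on the input, the most sensitive identifiers never leave the company.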
Google, for its part, stated that users have the option to use Bard without saving their conversations to their Google Account, and that they can review or delete Bard conversations through a designated link. Google further mentioned that there are safeguards in place to prevent personally identifiable information from being included in Bard’s responses.
Mills acknowledged that there is still much to learn about how these AI tools operate and how the information inputted may be used for retraining models and influencing outputs. He cited past experiences with early autocomplete features that had unintended consequences, such as completing a social security number, which caught users off-guard.
In conclusion, Mills expressed the opinion that users should be cautious about inputting sensitive information into these tools, as there is a possibility that it may be shared with others without their knowledge.
In short, there is still no guarantee that the data we enter into AI chatbots will not be shared or misused. So be careful.