Want to keep it private? Don’t confide in a chatbot – 2023

The artificial intelligence industry is growing faster than ever and has worked its way into much of our daily lives. Surrounded by so much AI, the question becomes: how do we protect users’ data privacy?

What is a chatbot?

A chatbot is a computer program designed to automatically answer questions or perform tasks by conversing with humans over a chat interface. Chatbots are often used on online platforms such as websites, mobile apps, and messaging services.

Chatbots can be built in many different programming languages and can use technologies such as artificial intelligence (AI), natural language processing (NLP), and machine learning to analyze and respond to user questions or requests. They can provide information, answer questions, help solve specific problems, or perform simple tasks such as placing an order, making an appointment, or sending a message.
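To make the idea concrete, here is a minimal sketch of a rule-based chatbot loop in Python. It is purely illustrative: the keywords and canned replies are invented for this example, and there is no AI model involved; real chatbots typically replace the simple keyword matching with NLP or a large language model.

```python
# Minimal rule-based chatbot sketch (illustrative only; production chatbots
# usually replace the keyword matching below with NLP or an LLM call).

RULES = {
    "order": "Sure, what would you like to order?",
    "appointment": "I can help with that. What date works for you?",
    "hours": "We are open 9:00-17:00, Monday to Friday.",
}

def respond(message: str) -> str:
    """Return a canned reply based on simple keyword matching."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't understand. Could you rephrase that?"

if __name__ == "__main__":
    print("Chatbot ready. Type 'quit' to exit.")
    while True:
        user = input("You: ")
        if user.strip().lower() == "quit":
            break
        print("Bot:", respond(user))
```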

Concerns about chatbots

Chatbots are used in many areas, including customer service, marketing, sales, healthcare, education, and finance, and they target every kind of customer, from individuals to organizations and companies. This reach has raised a wave of concern among companies and regulatory agencies about information security.

Due to compliance concerns related to the use of third-party software by employees, some companies, including JPMorgan Chase (JPM), have implemented stricter controls on the use of ChatGPT, the popular AI chatbot that sparked the AI arms race among Big Tech companies.

Adding to the growing concerns about privacy, OpenAI, the company behind ChatGPT, revealed that the tool had to be temporarily taken offline on March 20 to address a bug that allowed certain users to view subject lines from other users’ chat history.

This same bug, which has since been resolved, also had the potential to expose “another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” as stated in a blog post by OpenAI.

More recently, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns following OpenAI’s disclosure of the breach.

A black box

“The privacy considerations with something like ChatGPT cannot be overstated. It’s like a black box,” Mark McCreary, co-chair of the data security and privacy practice at law firm Fox Rothschild LLP, told CNN, comparing the tool to a black box of potential risks.

Since its public launch in late November, ChatGPT has enabled users to effortlessly generate essays, stories, and song lyrics by typing prompts. Subsequently, both Google and Microsoft have also introduced their own AI tools with similar functionality, utilizing large language models trained on extensive collections of online data.

McCreary pointed out that when users input information into these tools, the uncertainty of how it will be used raises significant concerns, particularly for companies. As more employees casually rely on these tools for tasks like work emails or meeting notes, McCreary expressed that the risk of company trade secrets being unintentionally shared with various AI models is likely to grow. Steve Mills, the chief AI ethics officer at Boston Consulting Group, echoed this sentiment, stating that the inadvertent disclosure of sensitive information is the primary privacy concern that most companies have with these tools.

Mills elaborated, stating, “You have employees who may innocently think, ‘Oh, I can use this tool to summarize meeting notes.’ But by pasting those notes into the prompt, they could inadvertently disclose a significant amount of sensitive information.”

Mills added that if the data users enter is used to further train these AI tools, as many of the companies behind them state, then users lose control over that data, and it ends up in someone else’s possession.

A 2,000-word privacy policy

Faced with anxiety from individuals and regulatory organizations, AI companies have responded to the privacy question by publishing privacy policies roughly 2,000 words long.

In its privacy policy, OpenAI, the Microsoft-backed company behind ChatGPT, acknowledges that it collects various types of personal information from its users. This information may be used for purposes such as improving and analyzing its services, conducting research, communicating with users, and developing new programs and services, among others. The privacy policy also mentions that personal information may be shared with third parties without further notice to the user, unless required by law. The lengthy privacy policy may appear complex, but it is in line with the industry standard in the digital era. OpenAI also has a separate Terms of Use document, which places the responsibility on the user to take appropriate measures when interacting with its tools.

OpenAI recently published a blog post that outlines its approach to AI safety, stating that the data collected is not used for selling services, advertising, or building profiles of individuals. Instead, the data is utilized to enhance the helpfulness of its models, such as ChatGPT, through further training on user interactions. Similarly, Google’s privacy policy, which includes its Bard tool, is also comprehensive, and the company has additional terms of service for generative AI users. Google emphasizes that it takes steps to protect users’ privacy by carefully selecting conversations and using automated tools to remove personally identifiable information while improving Bard.

“These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”

Google also stated that users have the option to use Bard without saving their conversations to their Google Account, and they can review or delete Bard conversations through a designated link. Google further mentioned that there are safeguards in place to prevent personally identifiable information from being included in Bard’s responses.
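None of the vendors publish the exact mechanics of their PII filtering, so the snippet below is only an illustrative sketch of the kind of client-side redaction a cautious user or company might apply before pasting text into a chatbot. The regular expressions are simplified assumptions and will miss many real-world formats; they are not Google’s or OpenAI’s actual safeguards.

```python
import re

# Illustrative patterns only (simplified assumptions, not any vendor's real
# filter). More specific patterns come first so that, for example, a card
# number is not partially matched as a phone number.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\d[ -]?){9,11}\d\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholders before sending text to a chatbot."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

notes = "Call Jane on 415-555-0123 or jane.doe@example.com about card 4111 1111 1111 1111."
print(redact(notes))
# -> Call Jane on [PHONE REDACTED] or [EMAIL REDACTED] about card [CARD REDACTED].
```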

Mills acknowledged that there is still much to learn about how these AI tools operate and how the information inputted may be used for retraining models and influencing outputs. He cited past experiences with early autocomplete features that had unintended consequences, such as completing a social security number, which caught users off-guard.

In conclusion, Mills expressed the opinion that users should be cautious about inputting sensitive information into these tools, as there is a possibility that it may be shared with others without their knowledge.

In short, there is still no guarantee that our data will not be “sold on” when we enter it into AI chatbots. So be careful.
