Google warns staff of spilling secrets to Bard in ChatGPT race

Alphabet, the parent company of Google, is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, people familiar with the matter said. The company has advised employees not to enter confidential materials into AI chatbots, citing its long-standing policy on safeguarding information.

Chatbots such as Bard and ChatGPT use generative artificial intelligence to hold conversations with users and answer a wide range of prompts. Human reviewers may read the chats, and researchers have found that similar AI models can reproduce the data they absorb during training, creating a risk of leaks.

Alphabet has also told its engineers to avoid direct use of computer code that chatbots can generate. Google said that while Bard may make undesired code suggestions, it still helps programmers.

The caution reflects Alphabet's effort to avoid business harm from software it fields in competition with ChatGPT, developed by OpenAI with backing from Microsoft. Billions of dollars of investment, along with potential advertising and cloud revenue from new AI programs, are at stake in the race.

Alphabet's warning also reflects a growing trend among corporations to caution employees about publicly available chat programs because of the security risks they pose. Companies including Samsung, Amazon, and Deutsche Bank have set up similar guardrails. Apple's stance is unclear; the company did not respond to requests for comment.

According to a survey by the networking site Fishbowl, 43% of professionals were already using AI tools such as ChatGPT as of January, often without informing their superiors.

By February, Google had told staff testing Bard not to give it internal information ahead of its launch, Insider reported. Google is now rolling Bard out to more than 180 countries and in 40 languages, positioning it as a tool for creativity, and its warnings about entering information extend to its code suggestions.

Google has confirmed that it has been in discussions with Ireland's Data Protection Commission and is addressing regulators' questions about Bard's impact on privacy, after a Politico report indicated that the company had postponed Bard's EU launch pending further information on the matter.

The concerns stem from the nature of the technology itself: while it promises greater efficiency, the content it generates can include misinformation, sensitive data, or even copyrighted material. A privacy notice Google updated on June 1 advises users not to include confidential or sensitive information in their Bard conversations.

Some companies, such as Cloudflare, have developed software to address these concerns, offering businesses the ability to tag and restrict certain data from flowing externally.

Google and Microsoft also offer conversational tools to business customers that carry a higher price tag but do not feed data into public AI models. By default, Bard and ChatGPT save users' conversation history, which users can opt to delete.

Yusuf Mehdi, Microsoft's consumer chief marketing officer, said it is sensible for companies to discourage staff from using public chatbots for work, noting that Microsoft applies stricter policies to its enterprise software than to its free Bing chatbot.

However, Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs. Cloudflare CEO Matthew Prince compared typing confidential matters into chatbots to "turning a bunch of PhD students loose in all of your private records."

