Microsoft staff are reading users’ conversations with its Bing chatbot, the company has disclosed, amid growing data protection concerns about the use of such systems.
The company said human reviewers monitor what users submit to the chatbot in order to respond to “inappropriate behaviour”.
Employers including JP Morgan and Amazon have banned or restricted staff use of ChatGPT, which uses similar technology, amid concerns that sensitive information could be fed into the bot.
Bing Chat became an overnight sensation after Microsoft released it to the world earlier this month, promising to disrupt Google’s grip on search with its artificial intelligence bot.
However, it has restricted the service in recent days after testers reported bizarre interactions such as the bot declaring its love for humans and confessing to violent fantasies.
Microsoft said that Bing data is protected by stripping personal information from it and that only certain employees can access the chats.
It updated its privacy policy last week to say it can collect and review users' interactions with chatbots.
Amazon, Google and Apple attracted criticism several years ago when it emerged that contractors were reviewing voice recordings from the companies’ smart assistants, overhearing medical details or criminal behaviour.
All three now allow users to opt out of sending audio for review.
Bing’s human-like responses to questions mean that some users may enter private or intimate messages into the bot.
“To effectively respond to and monitor inappropriate behaviour, we employ both automated and manual reviews of prompts shared with Bing,” a Microsoft spokesman said.
“This is a common practice in search and is disclosed in Microsoft's privacy statement.
“Microsoft is committed to protecting user privacy, and data is protected through agreed industry best practices including pseudonymisation, encryption at rest, secured and approved data access management, and data retention procedures.
“In all cases access to user data is limited to Microsoft employees with a verified business need only, and not with any third parties.”
Microsoft added two notes to its privacy statement last week to clarify that data generated by its chatbots is collected and can be reviewed by humans.
Data security experts have raised concerns about what happens to information fed into online chatbots.
Bing now cuts off conversations that run past a set number of prompts and refuses to answer questions about its feelings, measures introduced after early testers found it producing unhinged responses to certain questions and in long conversations.
Many companies have restricted ChatGPT, made by the Silicon Valley start-up OpenAI, or advised employees not to enter confidential information into it. OpenAI’s website says the company reviews conversations with the chatbot.