1 Matching Annotations
- Feb 2023
-
www.theverge.com www.theverge.com
-
It seems Bing has also taken offense at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behavior. (Microsoft confirmed the legitimacy of these rules to The Verge.)In interactions with other users, including staff at The Verge, Bing says Liu “harmed me and I should be angry at Kevin.” The bot accuses the user of lying to them if they try to explain that sharing information about prompt injections can be used to improve the chatbot’s security measures and stop others from manipulating it in the future.
= Comment - this is worrying. - if the Chatbots perceive an enemy it to harm it, it could take haarmful actions against the perceived threat
-