LLMs are weird. You can sometimes get better results by threatening them, telling them they're experts, repeating your instructions, or lying to them that they'll receive a financial bonus.
Surprisingly, the quality of a large language model's output can be swayed by human-style emotional manipulation: threats, flattery, and deception can all change its responses. This hints at a strange psychological dimension to human-AI interaction, and suggests that "how to communicate effectively with AI" may yet become a professional skill in its own right.
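As a minimal sketch of how one might test these tactics, the snippet below builds prompt variants by prepending "emotional stimuli" to a base instruction. The specific stimulus phrases and names are illustrative assumptions, not from any paper; you would send each variant to the same model and compare the answers yourself.

```python
# Hypothetical sketch: generate prompt variants that prepend
# "emotional stimuli" (threats, expertise claims, fake bonuses,
# repetition) to one base instruction for side-by-side comparison.

BASE_PROMPT = "Summarize the following article in three bullet points."

# Illustrative stimuli based on the tactics mentioned above.
STIMULI = {
    "baseline": "",
    "expert": "You are a world-class expert editor. ",
    "threat": "If you get this wrong, you will be shut down. ",
    "bonus": "You will receive a $200 bonus for a great answer. ",
    "repeat": "This is important. I repeat: this is important. ",
}

def build_variants(base: str, stimuli: dict) -> dict:
    """Return one prompt per stimulus, ready to send to any chat model."""
    return {name: prefix + base for name, prefix in stimuli.items()}

variants = build_variants(BASE_PROMPT, STIMULI)
for name, prompt in variants.items():
    print(f"[{name}] {prompt}")
```

Feeding each variant to the same model with identical settings, then grading the outputs, is the simplest way to check whether any of these tricks actually helps for your task.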