- Feb 2023
-
“This is terra incognita,” Dr. Sejnowski said. “Humans have never experienced this before.”
Twilight Zone ending. Come on! Don't end by spooking us about what's unknown. Fulfill the promise of the title and show how specific kinds of prompting produce disturbing outputs!
-
They can also lead us
This puts the agency back with the LLM, as if human prompters are helpless before LLM seduction.
No, we're not helpless, and LLMs are not actively coaxing us. If we start to see odd outputs, we can look back and reflect on our prompts and on any unintended linguistic signals we may have sent.
-
common conceptual state,
Very misleading. Humans and LLMs do not have similar cognition. They cannot have a common conceptual state. Their text sequences may come to have certain similarities.
-
mystical
Not something mystical again! Please! Really? A magical object from Harry Potter?
Why not just mention the concept of projection?
-
have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.
This amounts to saying "I believe in the good intentions and sincerity of Microsoft and OpenAI's explanations of their decisions."
Beloved New York Times, why are you not asking the basic question: why would they need to release the bots in order to test them? Why not test them first? It's ludicrous to say they can't imagine what the public might do.
And what about their economic motivations to release early and get free crowdsourced testing?
-
“Whatever you are looking for — whatever you desire — they will provide.”
Too mystical a formulation, and not accurate. They are not providing what we desire; they are predicting text from statistical associations with the word sequences we supply. Sometimes we are not aware of all the associations our words call up, and those associations may or may not align with desires we haven't acknowledged. But Sejnowski's phrasing implies that these systems can know and intentionally respond to our psyches.
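To make the contrast concrete, here is a minimal Python sketch of what "statistical association" means in next-token prediction, using a toy bigram model. The corpus and names are my own illustrative inventions, not anything from the article or a real system:

    # Toy bigram model: the continuation is chosen by counted frequencies,
    # not by any insight into what the prompter desires.
    from collections import Counter, defaultdict
    import random

    # Illustrative corpus; real models train on vastly more text.
    corpus = ("the chatbot predicts the next word "
              "the chatbot follows the prompt the prompt sets the genre").split()

    follows = defaultdict(Counter)  # word -> counts of what follows it
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Sample in proportion to how often each word followed `prev`.
        counts = follows[prev]
        words = list(counts)
        return random.choices(words, weights=[counts[w] for w in words])[0]

    print("the", next_word("the"))  # e.g. "the chatbot" -- frequency, not desire

Real LLMs condition on far longer contexts with learned representations, but the mechanism is still conditional prediction, not mind-reading.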
-
But there is still a bit of mystery about what the new chatbot can do — and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophic lens as well as the hard code of computer science.
This creates a sense of mystery without telling us much, implying that something spooky is going on, something beyond what computer science can explain. Actually it's quite explainable, as the article's own title implies: people start writing prompts in a certain genre, and the completion follows the genre...
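A hedged sketch of that genre-following, assuming the Hugging Face transformers package and the small public gpt2 checkpoint are available; the two prompts are invented examples, not quotes from anyone's chatbot transcript:

    # Same model, two prompts in different genres; each continuation
    # tends to stay in the genre the prompt establishes.
    from transformers import pipeline

    generate = pipeline("text-generation", model="gpt2")

    prompts = [
        "The chatbot's eyes glowed in the dark as it whispered,",  # ominous fiction
        "The quarterly report shows that revenue",                 # business prose
    ]
    for prompt in prompts:
        out = generate(prompt, max_new_tokens=25, do_sample=True)
        print(out[0]["generated_text"], "\n")

Nothing spooky is required: the horror-flavored prompt pulls the sampler toward horror-flavored continuations, which is exactly the mirror effect the headline names.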
-
Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror.
I was glad to see this as a fair assessment of what happened with Kevin Roose's famous conversation with Sydney/Bing. See the annotation conversation on his first article.
-