- Dec 2021
Sloman, S. A. (2021). How Do We Believe? Topics in Cognitive Science, 0(2021), 1–14. https://doi.org/10.1111/tops.12580
- representational language
- causal reasoning
- pattern recognition
- sophisticated associative model
- human thought
- cognitive science
- unfamiliar circumstance
- dual system of thinking
- representational scheme
- information processing
- Sep 2021
- Jul 2021
<small><cite class='h-cite via'>ᔥ <span class='p-author h-card'>Jill Rosen </span> in Team finds brain mechanism that automatically links objects in our minds | Hub (<time class='dt-published'>07/24/2021 18:07:51</time>)</cite></small>
A study that quantifies association within the brain and indicates the region where it occurs.
- Jun 2021
One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.
The idea of an "[[associative trail]]" here brings to mind both the ars memorativa and the method of loci as well as--even more specifically--the idea of songlines. Bush's version is the same thing, simply renamed.
<small><cite class='h-cite ht'>↬ <span class='p-author h-card'>Jeremy Dean</span> in Via: ‘What I Really Want Is Someone Rolling Around in the Text’ - The New York Times (<time class='dt-published'>06/09/2021 14:50:00</time>)</cite></small>
- Oct 2020
How to best help users when they forget the answer to a question? Suppose a user can’t remember the answer to the question: “Who was the second President of the United States?” Perhaps they think it’s Thomas Jefferson, and are surprised to learn it’s John Adams. In a typical spaced-repetition memory system this would be dealt with by decreasing the time interval until the question is reviewed again. But it may be more effective to follow up with questions designed to help the user understand some of the surrounding context. E.g.: “Who was George Washington’s Vice President?” (A: “John Adams”). Indeed, there could be a whole series of followup questions, all designed to help better encode the answer to the initial question in memory.
Here they're using the word "encode" at the end of the example, but they're not encoding anything! They're talking about making other tangential associations which may help triangulate the answer, not directly encoding the actual information itself.
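The mechanism described in the quote above can be sketched in a few lines: on a failed review the interval shrinks (the standard spaced-repetition move), and follow-up questions supplying surrounding context get queued. This is a minimal illustrative sketch, not any real system's scheduler; the `Card` class, the 2.5 growth multiplier, and the `followups` field are all assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """A question/answer pair with a review interval in days."""
    question: str
    answer: str
    interval: float = 1.0
    followups: list = field(default_factory=list)  # related context Cards

def review(card, correct, queue):
    """Update a card after one review.

    On success the interval grows; on failure it resets AND the card's
    follow-up questions are scheduled, per the idea in the quote above.
    Returns the card's new interval in days.
    """
    if correct:
        card.interval *= 2.5          # growth factor is illustrative only
    else:
        card.interval = 1.0           # back to a short interval
        queue.extend(card.followups)  # queue the surrounding-context questions
    return card.interval

# Hypothetical usage mirroring the John Adams example:
vp = Card("Who was George Washington's Vice President?", "John Adams")
pres = Card("Who was the second President of the United States?",
            "John Adams", followups=[vp])
queue = []
review(pres, correct=False, queue=queue)  # failure queues the context card
```

Whether the follow-ups count as "encoding" is exactly the point of the annotation: the code only schedules tangential associations around the fact, it does nothing to the fact itself.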