- Jan 2023
If you have experienced trouble in remembering dates try the following system which has proved beneficial to at least one student.
Maxfield suggests drawing out a timeline as a possible visual cue for helping to remember dates. He seemingly misses any mention of ars memoria techniques here.
- Aug 2022
Lynne Kelly mentioned that there might be some interesting research on birds and memory with respect to songlines-like activity.
- Jul 2022
Is anyone practicing sketchnotes like patterns in their notes?
I've noticed that u/khimtan has a more visual style of note taking with respect to their cards, but is anyone else doing this sort of visualization-based note taking in the vein of sketchnotes or r/sketchnoting? I've read books by Mike Rohde and Emily Mills and tinkered around in the space, but haven't actively folded it into my practice. For those who do, do you have any suggestions/tips? I suspect that even simple drollery-esque images on cards would help with the memory/recall aspects. This may go even further for those with more visual-based modes of thinking and memory.
For those interested in more, as well as some intro videos, here are some of my digital notes: https://hypothes.is/users/chrisaldrich?q=sketchnotes
- Jun 2022
Some digital notes apps allow you to display only the images saved in your notes, which is a powerful way of activating the more intuitive, visual parts of your brain.
Visual cues one makes in one's notes, along with user interfaces that help to focus or center on them, can be useful reminders of what appears in particular notes, especially if visual search is a possibility.
Is this the reason that Gyuri Lajos very frequently cuts and pastes images into his Hypothes.is notes?
Which note taking applications leverage this sort of visual mnemonic device? Evernote certainly did, but text-heavy tools like Obsidian, Logseq, and Roam Research don't. Most feed readers do this well, leveraging either featured photos, photos in posts, or photos specified in OGP metadata.
"The idea is that over long periods of time, traces of memory for visual objects are being built up slowly in the neocortex," Clerkin said. "When a word is spoken at a specific moment and the memory trace is also reactivated close in time to the name, this mechanism allows the infants to make a connection rapidly." The researchers said their work also has significant implications for machine learning researchers who are designing and building artificial intelligence to recognize object categories. That work, which focuses on how names teach categories, requires massive amounts of training for machine learning systems to even approach human object recognition. The implication of the infant pathway in this study suggests a new approach to machine learning, in which training is structured more like the natural environment, and object categories are learned first without labels, after which they are linked to labels.
Visual objects are encoded into memory over a long period of time until they become familiar. When a word is spoken while the memory trace associated with the visual object is reactivated, the connection between word and visual object is made rapidly.
- Mar 2022
Evaluations of the platform show that users who follow the avatar in making a gesture achieve more lasting learning than those who simply hear the word. Gesturing students also learn more than those who observe the gesture but don't enact it themselves.
Manuela Macedonia's research indicates that online learners who enact specific gestures as they learn words learn better and retain the words longer than those who simply hear them. Students who mimic these gestures also learn more than those who only see the gestures and don't enact them themselves.
How might this sort of teacher/avatar gesturing be integrated into online methods? How would students be encouraged to follow along?
Could these be integrated into different background settings as well to take advantage of visual memory?
Anecdotally, I remember some Welsh phrases from having watched Aran Jones interact with his cat outdoors on video, or from his lip syncing in the empty spaces that require student responses. Watching the teacher's lips while learning can be highly helpful as well.
Kerry Ann Dickson, an associate professor of anatomy and cell biology at Victoria University in Australia, makes use of all three of these hooks when she teaches. Instead of memorizing dry lists of body parts and systems, her students practice pretending to cry (the gesture that corresponds to the lacrimal gland/tear production), placing their hands behind their ears (cochlea/hearing), and swaying their bodies (vestibular system/balance). They feign the act of chewing (mandibular muscles/mastication), as well as spitting (salivary glands/saliva production). They act as if they were inserting a contact lens, as if they were picking their nose, and as if they were engaging in "tongue-kissing" (motions that represent the mucous membranes of the eye, nose, and mouth, respectively). Dickson reports that students' test scores in anatomy are 42 percent higher when they are taught with gestures than when taught the terms on their own.
Example of the use of visual, auditory, and proprioceptive methods used in the pedagogy of anatomy.
proprioceptive cue may be the most powerful of the three: research shows that making gestures enhances our ability to think even when our gesturing hands are hidden from our view.
Annie Murphy Paul indicates that proprioceptive associations may be more powerful than auditory or visual ones as she notes that "research shows that making gestures enhances our ability to think even when our gesturing hands are hidden from our view."
This is something that could be researched and analyzed.
My personal experience is that visual >> auditory >> smell >> proprioception. Smell with respect to memory is incredibly difficult to exercise, as are auditory methods. Visual and proprioceptive methods are easier to actively practice though.
In a study carried out by Susan Goldin-Meadow and colleagues at the University of Chicago, a group of adults was recruited to watch video recordings of children solving conservation problems, like the water-pouring task we encountered earlier. They were then offered some basic information about gesture: that gestures often convey important information not found in speech, and that they could attend not only to what people say with their words but also to what they "say" with their hands. It was suggested that they could pay particular attention to the shape of a hand gesture, to the motion of a hand gesture, and to the placement of a hand gesture. After receiving these simple instructions, study subjects watched the videos once more. Before the brief gesture training, the observing adults identified only around 30 to 40 percent of instances when children displayed emerging knowledge in their gestures; after receiving the training, their hit rate shot up to about 70 percent.
Concentrating on the shape, motion, and placement of hand gestures dramatically helps a learner to understand material more concretely and to recognize understanding in others.
Link this to the use of movement in dance with respect to memory in Lynne Kelly's work.
Research demonstrates that gesture can enhance our memory by reinforcing the spoken word with visual and motor cues.
Research shows that gesture can impact our memories by helping to associate speech with visual cues.
References for this?
Link this to the idea that our visual memories are much stronger than our verbal ones.
- associative memory
- visual memory
- Lynne Kelly
- lip reading
- mnemonic techniques
- auditory memory
- language pedagogy
- visual thinking
- language acquisition
- long-term memory
- olfactory memory
- Manuela Macedonia
- memory training
- proprioceptive memory
- Sep 2021
- Apr 2020
Antov, M.I., Plog, E., Bierwirth, P. et al. Visuocortical tuning to a threat-related feature persists after extinction and consolidation of conditioned fear. Sci Rep 10, 3926 (2020). https://doi.org/10.1038/s41598-020-60597-z