1 Matching Annotation
  1. Jul 2018
    1. On 2014 Jan 03, Ian Lyons commented:

      In this paper, Park and Brannon showed that training adult subjects on an approximate, nonsymbolic arithmetic task (adding and subtracting estimates of the number of dots in various dot arrays) led to improvement on a symbolic arithmetic task (i.e., using Indo-Arabic numerals). The authors suggest this result points to a causal role for the ‘approximate number system’ (ANS) in more complex, symbolic math processing and may thus inform the development of interventions designed to improve mathematical competence in children and adults. We believe the authors’ work takes an important step forward in understanding the building blocks of mathematical performance, and, using their work as a springboard, we offer several points of reflection concerning (1) the nature of the ANS, and (2) what it means to train performance on a task versus the process it is meant to measure.

      Park and Brannon’s crucial experimental condition trained participants using approximate, nonsymbolic addition. This differs from tasks used more commonly in the literature to measure individual differences in the ANS – adaptation and comparison – which involve simply distinguishing between two approximate quantities (e.g., in comparison tasks, one typically decides which of two arrays contains more dots). This difference in tasks matters because previous attempts to train participants on just a nonsymbolic comparison task failed to show significant improvement in individuals’ symbolic math performance [Wilson et al., 2006 (http://www.ncbi.nlm.nih.gov/pubmed/16734906); DeWind & Brannon, 2012 (http://www.ncbi.nlm.nih.gov/pubmed/22529786)]. Why, then, does training on nonsymbolic arithmetic lead to improvement in symbolic arithmetic skills, but training on nonsymbolic comparison does not, even though both have been shown to correlate with symbolic arithmetic [e.g., Gilmore et al., 2010 (http://www.ncbi.nlm.nih.gov/pubmed/20347435); Halberda et al., 2012 (http://www.ncbi.nlm.nih.gov/pubmed/22733748)]? One possible conclusion is that it is not enough simply to tap the ANS; instead, accessing the ANS must be structured in a manner that more directly parallels the target skill – symbolic arithmetic. From a broader perspective, such a conclusion suggests that it is time to take a deeper look at what exactly we mean by an ‘approximate number system’, as Park and Brannon’s results may in fact point to an important division between approximate quantity representation and manipulation within the ANS. The view that the ANS is not a unitary construct is also supported by the finding that performance on nonsymbolic quantity comparison tasks is uncorrelated with performance on nonsymbolic arithmetic tasks [Gilmore et al., 2011 (http://www.ncbi.nlm.nih.gov/pubmed/21846265)].
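
      As a minimal sketch of this structural difference (an illustration with assumed parameters, not a model from the paper): if one assumes noisy internal magnitude estimates with scalar variability and an illustrative Weber fraction, a comparison trial requires only ordering two noisy estimates, whereas an approximate-addition trial requires holding and combining two estimates before comparing the result to a third.

        import numpy as np

        rng = np.random.default_rng(0)
        W = 0.2  # assumed Weber fraction; the value is purely illustrative

        def noisy_estimate(n, trials):
            # Noisy internal estimate of an n-dot array (sd proportional to n).
            return rng.normal(loc=n, scale=W * n, size=trials)

        def comparison_accuracy(n_big, n_small, trials=10_000):
            # Comparison task: decide which of two arrays contains more dots.
            return np.mean(noisy_estimate(n_big, trials) > noisy_estimate(n_small, trials))

        def approx_addition_accuracy(n1, n2, probe, trials=10_000):
            # Approximate addition: is the combined quantity n1 + n2 larger than the probe array?
            # Two estimates must be held and combined before the comparison is made.
            est_sum = noisy_estimate(n1, trials) + noisy_estimate(n2, trials)
            responded_sum_larger = est_sum > noisy_estimate(probe, trials)
            return np.mean(responded_sum_larger == ((n1 + n2) > probe))

        # Same 4:3 ratio in both cases; the addition trial layers an arithmetic
        # operation on top of the underlying quantity comparison.
        print(comparison_accuracy(16, 12))
        print(approx_addition_accuracy(10, 6, 12))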

      That nonsymbolic arithmetic (but not nonsymbolic comparison) training leads to improved symbolic arithmetic brings us to a second point: it is crucial to distinguish between a task meant to measure or index some underlying process and the process itself. To cure a fever, one does not build a more precise thermometer; and by extension, if one demonstrated that using a more precise thermometer failed to reduce one’s fever, it would be rash to conclude that body temperature is irrelevant to one’s health. A nonsymbolic number comparison task may act like a thermometer, where the underlying process it indexes is ANS acuity. Training on nonsymbolic comparison tasks does not improve math skills (Wilson et al., 2006; DeWind & Brannon, 2012), but this does not mean that the ANS is irrelevant for math. By training on nonsymbolic arithmetic instead of nonsymbolic comparison, Park and Brannon showed that a training regimen simply needs to tap the ANS in a way that better parallels the cognitive operations used in symbolic arithmetic.

      One sees a similar distinction between tasks that index versus train an underlying process elsewhere in the numerical domain: when a person is asked to mark the location of a number on a number line, the linearity of their estimates predicts math achievement [Booth & Siegler, 2006 (http://www.ncbi.nlm.nih.gov/pubmed/16420128), 2008 (http://www.ncbi.nlm.nih.gov/pubmed/18717904)]. Rather than train on this task per se, however, researchers found success using a board game that trained children to linearize their visuo-spatial representations of symbolic numbers – i.e., the underlying process presumably being measured by the number-line task. Training on the board game improved both number-line performance and math achievement [Siegler & Ramani, 2009 (http://psycnet.apa.org/journals/edu/101/3/545/)].
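
      As a rough illustration of what ‘linearity’ means here (a simplified sketch using assumed data, not the cited authors’ exact procedure), one can take the R² of a straight-line fit of estimated positions against the true numbers; compressive, log-like estimate patterns yield lower values than accurate, proportional ones.

        import numpy as np

        def linearity_r2(true_numbers, estimates):
            # R^2 of the best-fitting straight line through (true number, estimated position).
            x = np.asarray(true_numbers, dtype=float)
            y = np.asarray(estimates, dtype=float)
            slope, intercept = np.polyfit(x, y, deg=1)
            residuals = y - (slope * x + intercept)
            return 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

        # Hypothetical 0-100 number-line estimates: a compressive (log-like) pattern
        # yields lower linearity than estimates placed in proportion to the numbers.
        numbers = np.array([3.0, 7, 15, 28, 46, 64, 81, 96])
        compressive = 100 * np.log(numbers + 1) / np.log(101)
        print(linearity_r2(numbers, compressive))  # below 1
        print(linearity_r2(numbers, numbers))      # exactly 1 (perfectly linear)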

      Further, we believe that Park and Brannon’s own dataset provides yet another example of the distinction between a task meant to measure or index some underlying process and the process itself. The authors show a correlation between numerical ordering ability and symbolic math ability, replicating our previous work [Lyons & Beilock, 2011 (http://www.ncbi.nlm.nih.gov/pubmed/21855058)]. Nevertheless, training on the ordering task did not lead to improvement in symbolic math beyond what was seen for vocabulary training. If one concludes from this result that understanding ordinality is irrelevant for developing math skills, one is in danger of mistaking a means of measurement for the thing being measured – much as one might have done with the dot-comparison or number-line tasks discussed above.

      In conclusion, Park and Brannon’s recent paper, by showing a causal relation between nonsymbolic and symbolic arithmetic, represents a step toward understanding the building blocks of complex arithmetic. Perhaps missed in the excitement, though, is that this work underscores the need for researchers – especially those interested in educational applications – to carefully consider what their tasks and paradigms truly mean with respect to the processes and representations they aim to investigate. Failing to do so risks conflating the means of measurement with what is being measured, and may in turn lead to recommendations for educators to train the wrong thing.

      Signed, Ian Lyons and Sian Beilock


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
