Reviewer #1 (Public Review):
This valuable study demonstrates a novel mechanism by which implicit motor adaptation saturates for large visual errors in a principled, normative Bayesian manner. The study also reveals two notable empirical findings: visual uncertainty increases for larger visual errors in the periphery, and proprioceptive shifts/implicit motor adaptation are non-monotonic rather than ramp-like. This study is highly relevant for researchers in sensory cue integration and motor learning. However, some areas of statistical quantification are incomplete, and the contextualization of previous studies is puzzling.
Issue #1: Contextualization of past studies.
While I agree that previous studies have focused on how sensory errors drive motor adaptation (e.g., Burge et al., 2008; Wei and Kording, 2009), I don't think the PReMo model was contextualized properly. Admittedly, PReMo should have adopted clearer language, given that proprioception (sensory) and kinaesthesia (perception) have been used interchangeably, something we now make clear in our new study (Tsay, Chandy, et al. 2023). Still, PReMo's central contribution is that a perceptual error drives implicit adaptation (see its Abstract): the mismatch between the felt (perceived) and desired hand position. The current paper overlooks this contribution, and I encourage the authors to contextualize it more clearly throughout. For example, although not mentioned in the current study, PReMo also accounts for the continuous changes in perceived hand position shown in Figure 4 (cf. Figure 7 in the PReMo study).
There is no doubt that the current study provides important additional constraints on what determines perceived hand position. First, it offers a normative Bayesian account: whereas PReMo posits that perceived hand position is determined by integrating motor predictions with proprioception and then adding a proprioceptive shift, PEA formulates it as the optimal integration of these three inputs. Second, whereas PReMo assumed visual uncertainty remains constant across visual errors, PEA proposes that visual uncertainty ought to increase with error size (but see Issue #2).
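As I read it, PEA's optimal-integration claim amounts to standard reliability-weighted (inverse-variance) cue combination, with visual uncertainty growing with eccentricity. A minimal sketch of that computation (my own illustration, not the authors' code; the linear uncertainty function and all parameter values are assumptions for demonstration only):

```python
import numpy as np

def integrate_cues(mu_v, sig_v, mu_p, sig_p, mu_u, sig_u):
    """Reliability-weighted (maximum-likelihood) integration of three
    Gaussian cues: vision (v), proprioception (p), motor prediction (u)."""
    w = np.array([1 / sig_v**2, 1 / sig_p**2, 1 / sig_u**2])
    mu = np.array([mu_v, mu_p, mu_u])
    return float(np.sum(w * mu) / np.sum(w))

def sigma_v(err, sigma0=1.0, slope=0.3):
    """Assumed linear growth of visual uncertainty with the size of the
    visual error (eccentricity from fixation); values are illustrative."""
    return sigma0 + slope * abs(err)

# Perceived hand position for increasing cursor rotations: the weight on
# vision shrinks for large errors, so the perceived error (and hence the
# drive for adaptation) first rises, then falls -- it is non-monotonic.
for err in (4.0, 16.0, 64.0):
    p_hat = integrate_cues(err, sigma_v(err), 0.0, 8.0, 0.0, 6.0)
    print(f"rotation {err:>4}: perceived hand position {p_hat:.2f}")
```

Under these assumed parameters, the perceived error peaks at intermediate rotations, which is the qualitative pattern PEA uses to explain saturation; the point of the sketch is the structure of the computation, not the specific values.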
Issue #2: Failed replication of previous results on the effect of visual uncertainty.
2a. A key finding of this paper is that visual uncertainty increases linearly in the periphery, a constraint crucial for explaining the non-monotonicity of implicit adaptation. One notable methodological deviation from previous studies is the requirement that participants fixate on the target, a constraint not imposed in previous studies. In a free-viewing environment, visual uncertainty may not grow as quickly with eccentricity, and hence implicit adaptation may not attenuate for large visual errors as sharply as it does in the current design. This fixation design, while important, needs to be properly contextualized, given that it may not be representative of most implicit adaptation experiments.
2b. Moreover, the current results - visual uncertainty attenuates implicit adaptation in response to large, but not small, visual errors - deviate from several past studies showing that visual uncertainty attenuates implicit adaptation in response to small, but not large, visual errors (Tsay, Avraham, et al. 2021; Makino, Hayashi, and Nozaki, n.d.; Shyr and Joshi 2023). What do the authors attribute this empirical difference to? Would a free-viewing environment yield the opposite pattern in the effect of visual uncertainty on implicit adaptation for small and large visual errors?
2c. In the current study, the measure of visual uncertainty might be inflated by the brief presentation times of the comparison and referent visual stimuli (only 150 ms; our previous study allowed a 500 ms viewing time to make sure participants saw the comparison stimuli). Relatedly, some individuals show visual uncertainty with a standard deviation greater than 20 degrees. This seems very large, and is less likely in a free-viewing environment.
2d. One important confound between the clear and uncertain (blurred) visual conditions is the number of cursors on the screen. The number of cursors may attenuate implicit adaptation simply through task-irrelevant attentional demands (Parvin et al. 2022), rather than through visual uncertainty. Could the authors provide a figure showing the blurred stimuli (Gaussian clouds) in the context of the experimental paradigm? Note that we addressed this confound previously by comparing participants with and without low vision, where only one visual cursor was provided for both groups (Tsay, Tan, et al. 2023).
Issue #3: More methodological details are needed.
3a. It is unclear why, in Figure 4, PEA predicts that perceived hand position overshoots the target. In PReMo, we specified a visual shift in the perceived target position, drawn towards the adapted hand position, which may result in the perceived hand position overshooting this target position. This visual shift phenomenon has been documented in previous studies (e.g., Simani, McGuire, and Sabes 2007).
3b. The asymptotic extent of implicit adaptation in Experiment 2, especially for smaller errors, is unclear; by visual inspection, the adaptation function still appears to be increasing. Can the authors comment on this trend and, relatedly, show individual data points to help the reader appreciate the variability inherent in these data?
3c. The same participants were asked to return for multiple days/experiments. Given that the authors acknowledge potential session effects, with attenuation upon re-exposure to the same rotation (Avraham et al. 2021), how does re-exposure affect the current results? Could the authors provide clarity - perhaps a table showing which participants were shared between experiments - and provide evidence that session order did not impact the results?
3d. The number of trials per experiment should be detailed more clearly in the Methods section (e.g., Exp 4). Moreover, could the authors please provide the relevant code for their computational models? This would aid the implementation of these models in future work. I, for one, am enthusiastic to build on PEA.
3f. In addition to predicting a group-level correlation between proprioceptive shift and implicit adaptation, both PReMo and PEA (but not causal inference) predict that individual differences in proprioceptive shift and proprioceptive uncertainty correlate with the extent of implicit adaptation (Tsay, Kim, et al. 2021). Interestingly, shift and uncertainty are themselves independent (see Figures 4F and 6C in Tsay, Kim, et al. 2021). Does PEA also predict independence between shift and uncertainty? It seems that PEA would instead predict a correlation.
References:
Avraham, Guy, Ryan Morehead, Hyosub E. Kim, and Richard B. Ivry. 2021. "Reexposure to a Sensorimotor Perturbation Produces Opposite Effects on Explicit and Implicit Learning Processes." PLoS Biology 19 (3): e3001147.
Makino, Yuto, Takuji Hayashi, and Daichi Nozaki. n.d. "Divisively Normalized Neuronal Processing of Uncertain Visual Feedback for Visuomotor Learning."
Parvin, Darius E., Kristy V. Dang, Alissa R. Stover, Richard B. Ivry, and J. Ryan Morehead. 2022. "Implicit Adaptation Is Modulated by the Relevance of Feedback." BioRxiv. https://doi.org/10.1101/2022.01.19.476924.
Shyr, Megan C., and Sanjay S. Joshi. 2023. "A Case Study of the Validity of Web-Based Visuomotor Rotation Experiments." Journal of Cognitive Neuroscience, October, 1-24.
Simani, M. C., L. M. M. McGuire, and P. N. Sabes. 2007. "Visual-Shift Adaptation Is Composed of Separable Sensory and Task-Dependent Effects." Journal of Neurophysiology 98 (5): 2827-41.
Tsay, Jonathan S., Guy Avraham, Hyosub E. Kim, Darius E. Parvin, Zixuan Wang, and Richard B. Ivry. 2021. "The Effect of Visual Uncertainty on Implicit Motor Adaptation." Journal of Neurophysiology 125 (1): 12-22.
Tsay, Jonathan S., Anisha M. Chandy, Romeo Chua, R. Chris Miall, Jonathan Cole, Alessandro Farnè, Richard B. Ivry, and Fabrice R. Sarlegna. 2023. "Implicit Motor Adaptation and Perceived Hand Position without Proprioception: A Kinesthetic Error May Be Derived from Efferent Signals." BioRxiv. https://doi.org/10.1101/2023.01.19.524726.
Tsay, Jonathan S., Hyosub E. Kim, Darius E. Parvin, Alissa R. Stover, and Richard B. Ivry. 2021. "Individual Differences in Proprioception Predict the Extent of Implicit Sensorimotor Adaptation." Journal of Neurophysiology, March. https://doi.org/10.1152/jn.00585.2020.
Tsay, Jonathan S., Steven Tan, Marlena Chu, Richard B. Ivry, and Emily A. Cooper. 2023. "Low Vision Impairs Implicit Sensorimotor Adaptation in Response to Small Errors, but Not Large Errors." Journal of Cognitive Neuroscience, January, 1-13.