Reviewer #2 (Public Review):
This important work presents an example of contextual computation in a navigation task through a comparison of task-driven RNNs and mouse neuronal data. The authors perform convincing, state-of-the-art analyses demonstrating compositional computation with valuable properties for shared and distinct readouts. This work will be of interest to those studying contextual computation and navigation in biological and artificial systems.
This work advances intuitions about recent remapping results. The authors trained RNNs to output both spatial position and context, given velocity inputs and a 1-bit flip-flop context signal. Each of these tasks has been trained separately before, but to my knowledge this is the first time a single network was trained to output both context and spatial position. This work is also somewhat similar to previous work in which RNNs were trained to perform a contextual variation of the Ready-Set-Go task with various input configurations (Remington et al., 2018). Additionally, findings from recent motor and brain-machine interface tasks are consistent with these results (Marino et al., in prep). In all cases, contextual input shifts neural dynamics linearly in state space. This shift results in a compositional organization in which spatial position can be decoded consistently across contexts, and this organization allows for generalization to new contexts. Taken together with the present study, these findings make a consistent argument that remapping events are the result of some input (contextual or otherwise) that moves the neural state along a remapping dimension.
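To make the task setup concrete for readers, here is a minimal sketch of the combined objective in PyTorch. This is my own illustration, not the authors' code: the network size, trial statistics, and the pulse-count construction of the flip-flop target are all my assumptions.

```python
# Minimal sketch (my illustration, not the authors' code) of the combined
# task: an RNN receives 1-D velocity plus sparse context-flip pulses and
# must output both integrated position and the current context.
import torch
import torch.nn as nn

T, BATCH, HIDDEN = 200, 32, 128

class PathIntegrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, 2)  # outputs: [position, context]

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h)

def make_trials():
    vel = 0.1 * torch.randn(BATCH, T, 1)              # random velocity input
    flips = (torch.rand(BATCH, T, 1) < 0.01).float()  # sparse flip pulses
    context = torch.cumsum(flips, dim=1) % 2          # 1-bit flip-flop target
    position = torch.cumsum(vel, dim=1)               # path-integrated target
    return torch.cat([vel, flips], -1), torch.cat([position, context], -1)

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x, y = make_trials()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```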
The strength of this paper is that it tightly links theoretical insights with experimental data, demonstrating the value of running simulations in artificial systems for interpreting emergent properties of biological neuronal networks. For those familiar with RNNs and prior work in this area, the findings may not significantly advance intuitions beyond those already developed, but it is still valuable to see this implementation and its satisfying demonstration of state-of-the-art methods. The analysis of fixed points in these networks should provide a model for how to reverse engineer and mechanistically understand computation in RNNs.
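For readers who want to reproduce this style of analysis, fixed points are typically found in the manner of Sussillo & Barak (2013): minimize the speed q(h) = 1/2 ||F(h, x*) - h||^2 over hidden states h while holding the input fixed. A minimal sketch, reusing the model from the snippet above (the number of initializations, steps, and learning rate are my placeholders):

```python
# Fixed-point finding in the Sussillo & Barak (2013) style: minimize the
# speed q(h) = 0.5 * ||F(h, x*) - h||^2 over hidden states h with the input
# held fixed. `model` and HIDDEN come from the sketch above.
def find_fixed_points(model, x_const, n_inits=256, steps=2000, lr=1e-2):
    for p in model.parameters():          # freeze the trained weights
        p.requires_grad_(False)
    h = torch.randn(n_inits, HIDDEN, requires_grad=True)
    x = x_const.repeat(n_inits, 1, 1)     # same constant input for every init
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        _, h_next = model.rnn(x, h.unsqueeze(0))   # one RNN step from state h
        q = 0.5 * ((h_next.squeeze(0) - h) ** 2).sum(dim=1).mean()
        opt.zero_grad(); q.backward(); opt.step()
    return h.detach()

# e.g., probe the autonomous dynamics: zero velocity, no flip pulse
fixed_points = find_fixed_points(model, torch.zeros(1, 1, 2))
```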
I am curious how the results might change, or stay the same, if the network did not need to output context information. One prediction is that the two rings would collapse, resulting in completely overlapping maps in either context. I think this has interesting implications for the outputs of the biological system: what information should be maintained for potential readout, and what information should be discarded? This is relevant for considering the number of maps in the network. Additionally, I could imagine the authors reproducing their current findings in another interesting scenario: train a network on the spatial navigation task without a context output, fix the weights, and then provide a new contextual input to the network. I am curious whether the geometric organization would be similar in this case. This scenario would be interesting because it would show whether an arbitrary input can translate the ring attractor without degrading the spatial position information it maintains. It might not work, but it could be interesting to try (a sketch of this probe follows).
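A minimal version of that probe might look like the following. This is my own construction, not anything from the paper: for brevity it reuses the two-output model from above (the scenario I describe would train on position alone), steps the frozen RNN by hand so a constant vector along a random hidden-space direction can be injected as the "new context input", and asks whether a position decoder fit without the shift still reads out position with it.

```python
# My construction of the proposed probe: inject a constant random-direction
# "context" vector into a frozen RNN and test whether a position decoder fit
# WITHOUT the shift still decodes position WITH it.
from sklearn.linear_model import Ridge

def run_with_shift(model, vel, shift):
    W_ih, W_hh = model.rnn.weight_ih_l0, model.rnn.weight_hh_l0
    b = model.rnn.bias_ih_l0 + model.rnn.bias_hh_l0
    h = torch.zeros(vel.shape[0], HIDDEN)
    states = []
    for t in range(vel.shape[1]):
        # velocity input only; the flip-flop channel stays silent
        x = torch.cat([vel[:, t], torch.zeros(vel.shape[0], 1)], dim=-1)
        h = torch.tanh(x @ W_ih.T + h @ W_hh.T + b + shift)
        states.append(h)
    return torch.stack(states, dim=1)

with torch.no_grad():
    vel = 0.1 * torch.randn(32, T, 1)
    pos = torch.cumsum(vel, dim=1).squeeze(-1).numpy().ravel()
    base = run_with_shift(model, vel, torch.zeros(HIDDEN))
    shifted = run_with_shift(model, vel, 0.5 * torch.randn(HIDDEN))
    dec = Ridge().fit(base.reshape(-1, HIDDEN).numpy(), pos)
    print("position R^2 under random shift:",
          dec.score(shifted.reshape(-1, HIDDEN).numpy(), pos))
```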
I was curious about and interested in the authors' choice not to use activity or weight regularization in their networks. My expectation is that regularization might smooth the ring attractor, removing coding-irrelevant fluctuations in neural activity. This might make Supplementary Figure 1 look more similar across model and biological remapping events (Line 74). I think it might also change the way the authors describe the potentially complex and high-dimensional remapping events in Figure 2A.
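Concretely, the regularized objective I have in mind would add L2 penalties on firing rates and on the recurrent weights to the task loss, something like the sketch below; the coefficients are placeholders, not values I am recommending.

```python
# Sketch of the regularized objective suggested above: the task loss plus L2
# penalties on hidden activity and on the recurrent weight matrix.
def regularized_loss(model, x, y, l2_rate=1e-4, l2_weight=1e-5):
    rates, _ = model.rnn(x)                 # hidden activity over the trial
    task = nn.functional.mse_loss(model.readout(rates), y)
    rate_pen = l2_rate * rates.pow(2).mean()
    weight_pen = l2_weight * model.rnn.weight_hh_l0.pow(2).sum()
    return task + rate_pen + weight_pen
```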
Overall, this is a nice demonstration of state-of-the-art methods for reverse engineering artificial systems to develop insights about biological systems. This work brings together concepts from various tasks and model organisms to provide a satisfying analysis of the remapping data.