Reviewer #2 (Public review):
Summary:
The work by Claudi et al. presents a framework for constructing continuous attractor neural networks (CANs) with user-defined topologies and integration capabilities. The framework unifies and generalizes classical attractor models and includes simulations across a range of topologies, including ring, torus, sphere, Möbius band, and Klein bottle. A key contribution of the paper is the introduction of Killing vectors to enable integration on non-parallelizable manifolds. However, the need for Killing vectors currently appears hypothetical, as biologically discovered manifolds, such as rings and tori, do not require them.
Moreover, throughout the manuscript, the authors claim to be addressing "biologically plausible" attractor networks, yet the constraints required by their construction, such as exact symmetry, fine-tuning of weights, and idealized geometry, seem incompatible with biological variability. It appears that "biologically plausible" is effectively used to mean "capable of integration." While these issues do not diminish the contributions of the work, they should be acknowledged and addressed more explicitly in the text. I applaud the authors for their interesting work. Below are my major and minor concerns.
Strengths:
(1) Theoretical framework for integrating CANs
The paper introduces a systematic method for constructing continuous attractor networks (CANs) with arbitrary topologies. This goes beyond classical models and includes novel topologies such as the Möbius band, sphere, and Klein bottle. The approach generalizes well-known ring and torus attractor models and provides a unified view of their construction, dynamics, and integration capabilities.
(2) Novel use of Killing vector fields
A key theoretical innovation is the introduction of Killing vectors to support velocity integration on non-parallelizable manifolds. This is mathematically elegant and extends the domain of tractable attractor models.
(3) Insightful simulations across manifolds
The paper includes detailed simulations demonstrating bump attractor dynamics across a range of topologies.
Weaknesses:
(1) Biological plausibility is overstated
Despite frequent use of the term "biologically plausible," the models rely on assumptions (e.g., symmetric connectivity, perfect geometries, fine-tuning) that are not consistent with known biological networks, and the authors do not incorporate heterogeneity, noise, or constraints like Dale's law.
(2) Continuum of states not directly demonstrated
The authors claim to generate a continuum of stable states but do not provide direct evidence (e.g., Jacobian analysis with zero eigenvalues along the manifold). This weakens the central claim about the nature of the attractor.
(3) Lack of clarity around assumptions
Several assumptions and analyses (e.g., symmetry breaking, linearity, stability conditions) are introduced without justification or are overstated. The analytical rigor in discussing alternative solutions and bifurcation behavior is limited.
(4) Scalability to high dimensions
The authors claim their method scales better than learning-based approaches. This claim should be discussed in more detail.
Major Concerns:
(1) Biological plausibility
The claim that the proposed framework is "biologically plausible" is misleading, as it is unclear what the authors mean by this term. Biological plausibility could include features such as heterogeneity in synaptic weights, randomness in tuning curves, irregular geometries, or connectivity constraints consistent with known biological architectures (e.g., Dale's law, multiple cell types). None of these elements is implemented in the current framework. Furthermore, it is not clear whether the framework can be extended to include such features, for example, CANs with heterogeneous connections or tuning curves. The connectivity matrix is symmetric to allow an energy-based description and analytical tractability, which is fine, but not a biologically realistic constraint. I recommend removing or significantly qualifying the use of the term "biologically plausible."
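To make the Dale's law point concrete, a minimal, hypothetical check (the update convention and kernel here are assumed for illustration and are not taken from the manuscript) shows that classic hand-crafted CAN kernels fail it:

```python
import numpy as np

# Under the convention dr/dt = -r + phi(W @ r + I), Dale's law requires
# each column of W (one presynaptic neuron's outgoing weights) to carry
# a single sign. Symmetric cosine kernels mix both signs in every column.
def violates_dale(W):
    has_pos = (W > 0).any(axis=0)  # column sends excitation somewhere
    has_neg = (W < 0).any(axis=0)  # column sends inhibition somewhere
    return bool((has_pos & has_neg).any())

theta = 2 * np.pi * np.arange(64) / 64
W_ring = np.cos(theta[:, None] - theta[None, :])  # classic ring kernel
print(violates_dale(W_ring))  # True: every neuron both excites and inhibits
```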
(2) Continuum of stable states
While the authors claim their model generates a continuum of stable states, this is not demonstrated directly in their simulations or in a stability analysis (though there are some indirect hints). One way to provide evidence would be to compute the Jacobian at various points along the manifold and show that it possesses (approximately) zero eigenvalues in the tangent/on-manifold directions at each point (e.g., see Ságodi et al. 2024 and others). It would be especially valuable to provide such analysis for the more complex topologies illustrated in the paper.
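To illustrate what such evidence could look like, a minimal sketch for a standard threshold-linear ring model (all parameters are illustrative, not the authors' exact construction):

```python
import numpy as np

# Threshold-linear ring network with cosine connectivity (illustrative).
N = 128
theta = 2 * np.pi * np.arange(N) / N
w0, w1, I_ext, dt = -10.0, 3.0, 1.0, 0.1
W = (w0 + w1 * np.cos(theta[:, None] - theta[None, :])) / N
relu = lambda u: np.maximum(u, 0.0)

# Relax from a weakly tuned state to a (putative) bump fixed point.
r = 0.1 + 0.01 * np.cos(theta)
for _ in range(5000):
    r = r + dt * (-r + relu(W @ r + I_ext))

# Jacobian of dr/dt = -r + relu(W r + I) at the fixed point:
# J = -I + diag(relu'(u*)) W, where relu'(u) = 1 wherever u > 0.
u = W @ r + I_ext
J = -np.eye(N) + np.diag((u > 0).astype(float)) @ W
eigs = np.sort(np.linalg.eigvals(J).real)[::-1]
print(eigs[:4])  # one near-zero eigenvalue (the tangent direction) followed
                 # by clearly negative ones is the signature of a 1-D continuum
```

Repeating this check at several bump positions, and for the higher-dimensional topologies, would directly support the continuum claim.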
(3) Assumptions, limitations, and analytical rigor
Some assumptions and derivations lack justification or are presented without sufficient detail. Examples include:
• Line 126: "If the homogeneous state (all neurons equally active) were unstable, there must exist some other stable state, with broken symmetry." Is this guaranteed? In the ring model with ReLU activation, there could also be unbounded solutions, not just bump solutions, and, in principle, there could also be oscillatory or other solutions. In general, multiple states can co-exist, with differing stability. It appears the authors only analyze the homogeneous case and do not study the stability or bifurcations of other solutions, limiting their theoretical work (see the sketch after this list for the kind of linear analysis this claim implicitly rests on).
• Line 122: "The conditions for the formation..." What are these conditions, precisely? A citation or elaboration would be helpful. Why is the assumption σ≪L necessary, and how does it impact the construction or conclusions?
• The theory relies heavily on exact symmetries and fine-tuned parameters. Indeed, in line 106, the authors write: "We seek interaction weights consistent with the formation, through symmetry breaking." Is this symmetry-breaking necessary for all CANs? Or is it a limitation specific to hand-crafted models (see also below)? There is insufficient discussion of such limitations. For example, it is difficult to envision how the authors' framework might form attractor manifolds with different geometries or heterogeneous tuning curves.
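For concreteness, the linear analysis that the quoted claims implicitly rest on can be sketched as follows (the 1-D periodic domain, difference-of-Gaussians kernel, and gain g are assumptions for illustration, not the authors' notation):

```python
import numpy as np

# At the homogeneous state the linearization is diagonalized by Fourier
# modes, with eigenvalues lambda_k = -1 + g * W_hat(k). A positive
# lambda_k makes the homogeneous state unstable, but by itself it does
# not say whether the dynamics settle into a bump, diverge, or oscillate.
L, N, sigma, g = 2 * np.pi, 256, 0.3, 15.0
x = L * np.arange(N) / N
d = np.minimum(x, L - x)  # periodic distance on the ring
# Difference-of-Gaussians kernel: local excitation, broader inhibition.
kernel = np.exp(-d**2 / (2 * sigma**2)) - 0.5 * np.exp(-d**2 / (2 * (2 * sigma)**2))
W_hat = np.fft.rfft(kernel).real * (L / N)  # kernel is symmetric, so real
lam = -1 + g * W_hat
k_star = int(np.argmax(lam))
print(k_star, lam[k_star])  # most unstable mode; lam > 0 breaks the symmetry
```

This is also plausibly where an assumption like σ ≪ L enters (a bump much narrower than the domain); making that connection explicit in the text would address the question above.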
(4) Comparison with models of learned attractors
While the connectivity patterns of learned attractors often resemble classical hand-crafted models (see, e.g., Vafidis et al. 2022), this is not always the case. If initial conditions include randomness or if the geometry of the attractor deviates from standard forms, the solutions can diverge significantly from hand-designed architectures. Such biologically realistic conditions highlight the limitations of hand-crafted CANs like those proposed here. I suggest updating the discussion accordingly.
(5) High-Dimensional Manifolds
The authors argue that their method scales better than training-based approaches in high dimensions and that it is straightforward to extend their framework to generate high-dimensional CANs. It would be useful for the authors to elaborate further. First, it is unclear what k refers to in the expression k^M used in the introduction. Second, trained neural networks seem to exhibit inductive biases (e.g., Canatar et al. 2021; Bordelon & Pehlevan 2022; Darshan & Rivkind 2022), which may mitigate such scaling issues. To support their claim, the authors could also provide an example of a high-dimensional manifold and show that their framework efficiently supports a (semi-)continuum of stable states.
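On the first point, if k is read as the number of grid points per manifold dimension (our assumption; the manuscript should define it), the neuron count of a lattice-based construction grows exponentially with the manifold dimension M:

```python
# Assuming k = grid points per dimension (our reading of k^M),
# a lattice-based CAN on an M-dimensional manifold needs ~k**M neurons.
k = 30
for M in (1, 2, 3, 5):
    print(M, k**M)  # 30, 900, 27000, 24300000: exponential in M
```

Whether trained networks escape this scaling, given their inductive biases, is exactly the comparison that would strengthen the discussion.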