1. Last 7 days
    1. the Maker, Modeler, named Bearer, Begetter, Hunahpu Possum, Hunahpu Coyote, Great White Peccary, Coati, Sovereign Plumed Serpent, Heart of the Lake, Heart of the Sea, plate shaper, bowl shaper, as they are called, also named, also described as the midwife, matchmaker, named Xpiyacoc, Xmucane, defender, protector, twice a midwife, twice a matchmaker,

      Names of divine entities present at the beginning of creation. Some of the gods have more than one name.

    2. And here we shall take up the demonstration, revelation, and account of how things were put in shadow and brought to light by

      Seems as if the Quiché Mayans sought to explain the world and its natural laws.

    1. Ce que je remarque dans cette transcription de minute [mes

      Unlike the other paragraphs, this one sits very close to the space for the footnotes; it almost touches note 9.

    1. I will tell the secret to you, to you, only to you. Come closer. This song is a cry for help: Help me! Only you, only you can, you are unique

      "I will tell the secret to you" she is referencing the song that is what the sirens tell them it makes them feel special and that they have to do something. They listen every time which is ultimately what gets them killed

    1. But the true nature of Maya society, the meaning of its hieroglyphics, and the chronicle of its history remained unknown to scholars for centuries after the Spaniards discovered the ancient Maya building sites.

      I wonder why it still remains a mystery

    2. They began to build ceremonial centers, and by 200 ce these had developed into cities containing temples, pyramids, palaces, courts for playing ball, and plazas.

      An early community. I wonder if they also had an HOA, maybe in the form of some kind of tax.

    3. They practiced agriculture, built great stone buildings and pyramid temples, worked gold and copper, and used a form of hieroglyphic writing that has now largely been deciphered.

      Pyramid temples similar to the Egyptians'. Did the Maya and the Egyptians build in this three-dimensional form for a reason? Maybe pyramids were easier to construct or required less labor.

    4. features the Hero Twins, Hunahpu and Xbalanque, who were transformed into, respectively, the Sun and the Moon

      I wonder if other cultures throughout time represented the Sun and Moon similarly to the Maya. From what I can think of, Egyptian gods are similar.

    5. In 2009 archaeologist Richard Hansen discovered two 8-metre- (26-foot-) long panels carved in stucco from the pre-Classic Mayan site of El Mirador, Guatemala, that depict aspects of the Popol Vuh. The panels—which date to about 300 bce, some 500 years before the Classic-period fluorescence of Mayan culture—attested to the antiquity of the Popol Vuh.

      That's amazing. Let's assume the 8-metre (26-foot) stucco panel is 4 ft x 26 ft. If the average weight of a square foot of stucco is 10 lb, then one panel would have weighed about 1,040 lb (assuming the stucco used does weigh 10 lb per square foot).
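      A quick check of that arithmetic, using the assumed 4 ft x 26 ft panel footprint and the 10 lb per square foot figure from the note above (neither of which comes from the article itself):

      ```latex
      \text{area} = 4\,\mathrm{ft} \times 26\,\mathrm{ft} = 104\,\mathrm{ft}^2, \qquad
      \text{weight} \approx 104\,\mathrm{ft}^2 \times 10\,\mathrm{lb/ft^2} = 1040\,\mathrm{lb} \approx 470\,\mathrm{kg}
      ```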

    6. It chronicles the creation of humankind, the actions of the gods, the origin and history of the K’iche’ people, and the chronology of their kings down to 1550.

      The Popol Vuh seems to be the Mayans' equivalent of the Bible.

    1. et ajustées en fonction de l’inflation liées aux projets et incluses dans

      remplacer par: ", rajustées pour tenir compte de l'inflation, pour les projets prévus comprenant"

    2. Les efforts continus de Parcs Canada concernant l'évaluation des obligations liées à la mise hors service d'immobilisations peuvent entraîner des passifs supplémentaires. Tout passif supplémentaire sera comptabilisé pendant l'exercice au cours duquel il sera connu et pourra être raisonnablement estimé.

      remove

    3. et autres obligations liées à la mise hors service d’immobilisations.

      remplacer par: ", des obligations de fermeture et post-fermeture associées aux décharges, des activités de mise hors service liées aux navires, embarcations et autres véhicules et réservoirs de stockage souterrains."

    4. Lorsque les flux de trésorerie futurs requis pour régler un passif sont estimables, prévisibles et devraient se produire dans le futur, une technique de valeur actuelle est utilisée. Le taux d'actualisation utilisé reflète le coût d'emprunt du gouvernement, associé au nombre estimé d'années pour compléter la mise hors service ou l'assainissement du site. Le passif comptabilisé est rajusté chaque année, au besoin, en fonction des rajustements de la valeur actuelle, de l’inflation, des nouvelles obligations, des variations des estimations de la direction et des coûts réels engagés. S’il est impossible de déterminer la probabilité de la responsabilité du gouvernement, un passif éventuel est indiqué dans les notes afférentes aux états financiers.

      Remove

  2. physerver.hamilton.edu
    1. Second, since the charge on the drop was multiplied more than four times without changing at all the value of G, or the value of e₁, the observations prove conclusively that in the case of drops like this, the drag which the air exerts upon the drop is independent of whether the drop is charged or uncharged.

      Clear, logical inference.
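      For context (a standard physics aside, not part of the annotated passage): the inference works because the air resistance on a slowly falling drop is modeled by Stokes' law, which depends only on the air's viscosity, the drop's radius, and its speed, so the drop's charge never enters the drag term:

      ```latex
      F_{\mathrm{drag}} = 6 \pi \eta r v
      ```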

    2. How completely the error arising from evaporation, convection currents, or any sort of disturbances in the air, are eliminated, is shown by the constancy during all this time in the value of the velocity under gravity.

      I did not even think that there would be so many potential sources of error to consider.

    1. stunningly beautiful and has much intrinsic value

      wouldn't you say the value is that of human art, art history, and culture, rather than just intrinsic value?

    1. confront the reality that conservation may be expensive and stop deceiving ourselves and partners in conservation with hopes that win-win solutions can always be found.

      right, but how do you justify spending money (an inherently human thing) on protecting nature rather than protecting people (i.e. social reforms)

    2. make ecosystem services the foundation of our conservation strategies is to imply — intentionally or otherwise — that nature is only worth conserving when it is, or can be made, profitable

      I understand this argument, but I disagree; if done properly, nature is worth conserving both for humans and for itself.

    1. eLife Assessment

      This manuscript reports the development and characterization of iGABASnFR2, a genetically encoded GABA sensor that demonstrates substantially improved performance compared to its predecessor, iGABASnFR1. The work is comprehensive and methodologically rigorous, combining high-throughput mutagenesis, functional screening, structural analysis, biophysical characterization, and in vivo validation. The significance of the findings is fundamental, and the supporting evidence is compelling. iGABASnFR2 represents a notable advance in GABA sensor engineering, enabling enhanced imaging of GABA transmission both in brain slices and in vivo, and constitutes a timely, technically robust addition to the molecular toolkit for neuroscience research.

    2. Reviewer #1 (Public review):

      Summary:

      This manuscript by Kolb and Hasseman et al. introduces a significantly improved GABA sensor, building on the pioneering work of the Janelia team. Given GABA's role as the main inhibitory neurotransmitter and the historical lack of effective optical tools for real-time in vivo GABA dynamics, this development is particularly impactful. The new sensor boasts an enhanced signal-to-noise ratio (SNR) and appropriate kinetics for detecting GABA dynamics in both in vitro and in vivo settings. The study is well-presented, with convincing and high-quality data, making this tool a valuable asset for future research into GABAergic signaling.

      Strengths:

      The core strength of this work lies in its significant advancement of GABA sensing technology. The authors have successfully developed a sensor with higher SNR and suitable kinetics, enabling the detection of GABA dynamics both in vitro and in vivo. This addresses a critical gap in neuroscience research, offering a much-needed optical tool for understanding the most important inhibitory neurotransmitter. The clear representation of the work and the convincing, high-quality data further bolster the manuscript's strengths, indicating the sensor's reliability and potential utility. We anticipate this tool will be invaluable for further investigation of GABAergic signaling.

      Weaknesses:

      Despite the notable progress, a key limitation is that the current generation of GABA sensors, including the one presented here, still exhibits inferior performance compared to state-of-the-art glutamate sensors. While this work is a substantial leap forward, it highlights that further improvements in GABA sensors would still be highly beneficial for the field to match the capabilities seen with glutamate sensors.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript presents the development and characterization of iGABASnFR2, a genetically encoded GABA sensor with markedly improved performance over its predecessor, iGABASnFR1. The study is comprehensive and methodologically rigorous, integrating high-throughput mutagenesis, functional screening, structural analysis, biophysical characterization, and in vivo validation. iGABASnFR2 represents a significant advancement in GABA sensor engineering and application in imaging GABA transmission in slice and in vivo. This is a timely and technically strong contribution to the molecular toolkit for neuroscience.

      Strengths:

      The authors apply a well-established sensor optimization pipeline and iterative engineering strategy from single-site to combinatorial mutants to engineer iGABASnFR2. The development of both positive and negative going variants (iGABASnFR2 and iGABASnFR2n) offers experimental flexibility. The structure and interpretation of the key mutations provide insights into the working mechanism of the sensor, which also suggest optimization strategies. Although individual improvements in intrinsic properties are incremental, their combined effect yields clear functional gains, enabling detection of direction-selective GABA release in the retina and volume-transmitted GABA signaling in somatosensory cortex, which were challenging or missed using iGABASnFR1.

      Weaknesses:

      With minor revisions and clarifications, especially regarding membrane trafficking, this manuscript will be a valuable resource for probing inhibitory transmission.

    1. Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Plasmodium vivax can persist in the liver of infected individuals in the form of dormant hypnozoites, which cause malaria relapses and are resistant to most current antimalarial drugs. This highlights the need to develop new drugs active against hypnozoites that could be used for radical cure. Here, the authors capitalize on an in vitro culture system based on primary human hepatocytes infected with P. vivax sporozoites to screen libraries of repurposed molecules and compounds acting on epigenetic pathways. They identified a number of hits, including hydrazinophthalazine analogs. They propose that some of these compounds may act on epigenetic pathways potentially involved in parasite quiescence. To provide some support to this hypothesis, they document DNA methylation of parasite DNA based on 5-methylcytosine immunostaining, mass spectrometry, and bisulfite sequencing.

      Strengths:

      -The drug screen itself represents a huge amount of work and, given the complexity of the experimental model, is a tour de force.

      -The screening was performed in two different laboratories, with a third laboratory being involved in the confirmation of some of the hits, providing strong support that the results were reproducible.

      -The screening of repurposing libraries is highly relevant to accelerate the development of new radical cure strategies.

      We thank the reviewer for pointing out the strengths of our report.

      Weaknesses:

      The manuscript is composed of two main parts, the drug screening itself and the description of DNA methylation in Plasmodium pre-erythrocytic stages. Unfortunately, these two parts are loosely connected. First, there is no evidence that the identified hits kill hypnozoites via epigenetic mechanisms. The hit compounds almost all act on schizonts in addition to hypnozoites, therefore it is unlikely that they target quiescence-specific pathways. At least one compound, colforsin, seems to selectively act on hypnozoites, but this observation still requires confirmation. Second, while the description of DNA methylation is per se interesting, its role in quiescence is not directly addressed here. Again, this is clearly not a specific feature of hypnozoites as it is also observed in P. vivax and P. cynomolgi hepatic schizonts and in P. falciparum blood stages. Therefore, the link between DNA methylation and hypnozoite formation is unclear. In addition, DNA methylation in sporozoites may not reflect epigenetic regulation occurring in the subsequent liver stages.

      We agree our report lacks direct evidence that hydrazinophthalazines are interacting with parasite epigenetic mechanisms. We spent significant resources attempting several novel approaches to establish a direct connection, but technological advances are needed to enable such studies, which we mention in the introduction and discussion. We disagree that schizonticidal activity automatically excludes the possibility a hypnozonticidal hit is acting on quiescence-specific pathways because both hypnozoites and schizonts are under epigenetic control and these pathways are likely performing different functions in different stages. Also important is the use of the word ‘specific’ as this term could be used to indicate parasite versus host (a drug that clears a parasite infection with a safety margin), parasite-directed effect versus host-directed effect (a drug acting via an agonistic or antagonistic effect on parasite or host pathway(s), but leading to parasite death in either case), hypnozoite versus schizont, or P. vivax versus other Plasmodium species. We were careful to indicate the usage of ‘specific’ throughout the text. Given the almost-nonexistent hit rate when screening diverse small molecule libraries against P. vivax hypnozoites, and the remarkable increase in hits when screening epigenetic inhibitors as described in this report, our data suggest epigenetic pathways are important to the regulation of hypnozoite dormancy in addition to regulation of other parasite stages, but those effects are outside the scope of this report.

      -The mode of action of the hit compounds remains unknown. In particular, it is not clear whether the drugs act on the parasite or on the host cell. Merely counting host cell nuclei to evaluate the toxicity of the compounds is probably acceptable for the screen but may not be sufficient to rule out an effect on the host cell. A more thorough characterization of the toxicity of the selected hit compounds is required.

      We agree, and mention in the results and discussion, that the effect could be mediated through host pathways. This is not unlike the 8-aminoquinolines, which are activated by host cytochromes and kill via ROS, which is a nonspecific mechanism (that is, the compound is not directly interacting with a parasite target) leading to a parasite-specific effect (the parasite cannot tolerate the ROS produced, but the host can). During screening, it is generally the case that hits with direct effects on the target organism are more desirable, so hits are counterscreened for general cytotoxicity. In this report, we show an effect on the parasite in direct comparison to the effect on host primary hepatocytes in the P. vivax assay itself, and follow up on hits with general counterscreens against two mammalian cell lines using CellTiter Glo, which does not rely on nuclei counts. Some compounds did show general cytotoxic effects, but with selectivity (more potency) against P. vivax liver stages, while other hits like the hydrazinophthalazines did not show an effect against primary hepatocytes and show only weak toxicity against mammalian cells at the highest dose tested. Further studies are needed to determine if the effect is indeed host- or parasite-directed and, if hydrazinophthalazines are to be developed into marketed antimalarials, extensive safety testing would be part of the development process.

      -There is no convincing explanation for the differences observed between P. vivax and P. cynomolgi. The authors question the relevance of the simian model but the discrepancy could also be due to the P. vivax in vitro platform they used.

      Fully characterizing the chemo-sensitivity of P. vivax and P. cynomolgi liver stages is outside the scope of this report. Rather, we report tool compounds which could be used in future studies to further characterize these sister species. We also make the point that P. cynomolgi is the gold standard for in vivo antirelapse activity, but it is still a model species, not a target species, and so few experimental hypnozonticidal compounds have been reported that the predictive value of P. cynomolgi is not fully understood. We found that several of our hits were species-specific using our in vitro platforms, thus future studies are needed to ensure this predictive value.

      -Many experiments were performed only once, not only during the screen (where most compounds were apparently tested in a single well) but also in other experiments. The quality of the data would be increased with more replication.

      Due to their size, compound library screens are typically performed once, with confirmation in dose-response assays, which were repeated several times. The rhesus PK study was performed once on three animals, which is typical. All other studies were performed at least twice and most were performed three times or more. We provide a data table showing readers the source material for all replication as well as other source data tables showing the raw data for dose-response and other assays.

      -While the extended assay (12 days versus 8 days) represents an improvement of the screen, the relevance of adding inhibitors of core cytochrome activity is less clear, as under these conditions the culture system deviates from physiological conditions.

      We agree that cytochrome inhibitors render the platform less physiologically relevant, but the goal of screening is to detect hits which could be improved upon using medicinal chemistry, including metabolic stability. Metabolic stability is better assessed using standard assays such as liver microsomes, thus our goal was to characterize the effects of test compounds on the parasite without the confounding effect of hepatic metabolism.

      Reviewer #2 (Public Review):

      Summary:

      In this manuscript, inhibitors of the P. vivax liver stages are identified from the Repurposing, Focused Rescue, and Accelerated Medchem (ReFRAME) library as well as a 773-member collection of epigenetic inhibitors. This study led to the discovery that epigenetics pathway inhibitors are selectively active against P. vivax and P. cynomolgi hypnozoites. Several inhibitors of histone post-translational modifications were found among the hits and genomic DNA methylation mapping revealed the modification on most genes. Experiments were completed to show that the level of methylation upstream of the gene (promoter or first exon) may impact gene expression. With the limited number of small molecules that act against hypnozoites, this work is critically important for future drug leads. Additionally, the authors gleaned biological insights from their molecules to advance the current understanding of essential molecular processes during this elusive parasite stage.

      Strengths:

      -This is a tremendously impactful study that assesses molecules for the ability to inhibit Plasmodium hypnozoites. The comparison of various species is especially relevant for probing biological processes and advancing drug leads.

      -The SI is wonderfully organized and includes relevant data/details. These results will inspire numerous studies beyond the current work.

      We thank the reviewer for pointing out the strengths of our report.

      Reviewer #3 (Public Review):

      Although this work represents a massive screening effort to find new drugs targeting P. vivax hypnozoites, the authors should balance their statement that they identified targetable epigenetic pathways in hypnozoites.

      -They should emphasize the potential role of the host cell in the presentation of the results and the discussion, as it is known that other pathogens modify the epigenome of the host cell (i.e. toxoplasma, HIV) to prevent cell division. Also, hydrazinophtalazines target multiple pathways (notably modulation of calcium flux) and have been shown to inhibit DNA-methyl transferase 1 which is lacking in Plasmodium.

      -In a drug repurposing approach, the parasite target might also be different than the human target.

      -The authors state that host-cell apoptotic pathways are downregulated in P. vivax infected cells (p. 5 line 162). Maybe the HDAC inhibitors and DNA-methyltransferase inhibitors are reactivating these pathways, leading to parasite death, rather than targeting parasites directly.

      We agree caution must be taken as we did not directly confirm the mechanism of our hits. Many follow up studies will be needed to do so. We do point out in the discussion that the mechanism of hits could be host-directed. We agree with the notion that some of these hits could be affecting parasitized host cell pathways, which lead to death of the parasitized cell, with the parasite being collateral damage, yet such a mechanism could lead to a safe and effective novel antimalarial.

      It would make the interpretation of the results easier if the authors used EC50 in µM rather than pEC50 in tables and main text. It is easy to calculate when it is a single-digit number but more complicated with multiple digits.

      We apologize for the atypical presentation of potency data. However, there is growing concern in drug discovery when standard deviation is applied to potency data, because standard deviation is a linear calculation and potency is a log effect, making the math incompatible. We understand thousands of papers are reported every year using this mathematically incorrect method, making our presentation of these data less familiar. However, we define pEC50 in its use in the text and table legends and hope to increase its use in the broader scientific community.
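      For readers more comfortable with molar EC50 values, the conversion is a standard definition (not specific to this paper): pEC50 is the negative base-10 logarithm of the EC50 expressed in molar units.

      ```latex
      \mathrm{pEC}_{50} = -\log_{10}\left(\mathrm{EC}_{50}\,[\mathrm{M}]\right)
      \quad\Longleftrightarrow\quad
      \mathrm{EC}_{50} = 10^{-\mathrm{pEC}_{50}}\ \mathrm{M}
      ```

      So a pEC50 of 6 corresponds to an EC50 of 1 µM, and a pEC50 of 6.5 to roughly 0.32 µM.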

      Authors mention hypnozoite-specific effects but in most cases, compounds are as potent on hypnozoites as on schizonts. They should rather use "liver stage specific" to refer to increased activity against hypnozoites and schizonts compared to the host cell. The same comment applies to line 351 when referring to MMV019721. Following the same idea, it is a bit far-fetched to call MMV019721 "specific" when the highest concentration tested for cytotoxicity is less than twice the EC50 obtained against hypnozoites and schizonts.

      We have reviewed and revised statements in the manuscript to ensure the effect we are describing is accurate in terms of parasite versus parasite form.

      Page 5 lines 187-189, the authors state "...hydrazinophtalazines were inactive when tested against P. berghei liver schizonts and P. falciparum asexual blood stages, suggesting that hypnozoite quiescence may be biologically distinct from developing schizonts". The data provided in Figure 1B show that these hydrazinophtalazines are as potent in P. vivax schizonts as in P. vivax hypnozoites, so the distinct activity seems to be Plasmodium species specific and/or host-cell specific (primary human hepatocytes rather than cell lines for P. berghei) rather than hypnozoite vs schizont specific.

      We agree the effect of hydrazinophtalazine could be more species specific than stage specific, but the context of our comment has to do with current methods in antimalarial discovery and development. Given the biological uniqueness of the various Plasmodium species and stages, any hypnozonticidal hit may or may not have pan-species or pan-stage activity; our goal was to characterize this. Regardless of the mechanism, we found it interesting that the hydrazinophtalazines kill P. vivax hypnozoites, but not P. cynomolgi hypnozoites nor other species and stages used in antimalarial drug development. This result makes the point that hypnozoite-focused assays may be required to detect and develop hypnozonticidal hits, regardless of what other species or stages they may or may not act on.

      Why choose to focus on cadralazine if abandoned due to side effects? Also, why test the pharmacokinetics in monkeys? As it was a marketed drug, were no data available in humans?

      Cadralazine was found to be more potent than hydralazine, and PK data were available from humans; thus dose prediction calculations showed an efficacious dose was more achievable with cadralazine than with hydralazine. Side effects are often dependent on dose and regimen, which are very likely to be much different for treating malaria versus hypertension. Thus, the potential side effects of cadralazine if it was to be used as an antimalarial are simply unknown and are not disqualifying at this step. The PK study was done in Rhesus macaques so we could calculate the dose needed to achieve coverage of EC90 during a planned follow-up in a Rhesus-P. cynomolgi relapse model. However, this planned in vivo efficacy study was not justified once we concurrently discovered cadralazine was inactive on P. cynomolgi in vitro.

      In the counterscreen mentioned on page 6, the authors should mention that the activity of poziotinib in P. berghei and P. cynomolgi is equivalent to cell toxicity, so likely not due to parasite specificity.

      Poziotinib shows activity against mammalian cell lines but not against the primary hepatocyte cultures supporting dose-response assays against P. vivax liver forms, which do not replicate. Thus, poziotinib appears selective in the liver stage assay but also may have a much more potent effect in continuously replicating cell lines.

      To improve the clarity and flow of the manuscript, could the authors make a recapitulative table/figure for all the data obtained for poziotinib and hydrazinophtalazines in the different assays (8-days vs 12-days) and laboratory settings rather than separate tables in main and supplementary figures. Maybe also reorder the results section notably moving the 12-day assay before the DNA methylation part.

      We apologize for the large amount of data presented but believe we are presenting it in the clearest way possible. All raw data is available if readers wish to re-analyze or re-organize our findings.

      The isobologram plot shows an additive effect rather than a synergistic effect between cadralazine and 5-azacytidine, please modify the paragraph title accordingly. Please put the same axis scale for both fractional EC50 in the isobologram graph (Figure 2A).

      The isobologram shows the effect approaching synergy at some combinations. The isobologram was rendered using standard methods. The raw data is available if readers wish to re-analyze it.
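      As background on reading such plots (a standard combination-analysis convention, not a description of the authors' exact method): each axis expresses a drug's concentration in the combination as a fraction of its single-agent EC50, and the sum of the two fractions at a combination point is often used to classify the interaction:

      ```latex
      \mathrm{FIC} = \frac{\mathrm{EC}_{50}^{\,A,\ \mathrm{combined}}}{\mathrm{EC}_{50}^{\,A,\ \mathrm{alone}}}
                   + \frac{\mathrm{EC}_{50}^{\,B,\ \mathrm{combined}}}{\mathrm{EC}_{50}^{\,B,\ \mathrm{alone}}}
      ```

      Values near 1 are typically read as additivity and values well below 1 (commonly ≤ 0.5) as synergy.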

      Concerning the immunofluorescence detection of 5mC and 5hmC, the authors should be careful with their conclusions. The Hoechst signal of the parasites is indistinguishable because of the high signal given by the hepatocyte nuclei. The signal obtained with the anti-5hmC in hepatocyte nuclei is higher than with the anti-5mC, thus if a low signal is obtained in hypnozoites and schizonts, it might be difficult to dissociate from the background. In blood stages (Figure S18), the best to obtain a good signal is to lyse the red blood cell using saponin, before fixation and HCl treatment.

      We spent many hours using high resolution imaging of hundreds of parasites trying to detect clear 5hmC signal in both hypnozoites and schizonts but never saw a clearly positive signal. Indeed, the host signal can be confounding, thus we felt the most clear and unbiased way to quantify and present these data was using HCI. We appreciate the suggestion to lyse cells first for detecting in the blood stage.

      To conclude that 5mC marks are the predominant DNA methylation mark in both P. falciparum and P. vivax, the authors should also mention that they compare different stages of the life cycle, which might have different methylation levels.

      We do mention at the start of this section our reasoning that quantifying marks in sporozoites was technically achievable, but not in a mixed culture of parasites and hepatocytes. We agree they could have different marks at these different stages.

      Also, the authors conclude that "[...] 5mC is present at low level in P. vivax and P. cynomolgi sporozoites and could control liver stage development and hypnozoite quiescence". Based on the data shown here, nothing, except the presence of 5mC marks, supports that DNA methylation could be implicated in liver stage development or hypnozoite quiescence.

      We clearly show sporozoite and liver stage DNA is methylated, which indicates this fundamental cell function exists in P. vivax liver stages, and that compounds with characterized activity against DNMT are active on liver stages. We acknowledge we were unable to show a direct effect and use the qualifier ‘could’ for this very reason.

      How many DNA-methyltransferase inhibitors were present in the epigenetic library? Out of those, none were identified as hits; maybe the hydrazinophtalazines' effect is not linked to DNMT inhibition but to another target pathway of these molecules, like calcium transport?

      We supply the complete list of inhibitors in the epigenetic library as a supplemental file; the library contained 773 compounds. Hydrazinophtalazines were not included in the library, but several other DNA methyltransferase inhibitors were included and found inactive. It is possible that hydrazinophtalazine activity is linked to other mechanisms, but the inactivity of other DNMT inhibitors does not preclude the possibility that hydrazinophtalazines are acting through DNMT.

      The authors state (line 344): "These results corroborate our hypothesis that epigenetic pathways regulate hypnozoites". This conclusion should be changed to "[...] that epigenetic pathways are involved in P. vivax liver stage survival" because:

      -The epigenetic inhibitors described here are as active on hypnozoites as on liver schizonts.

      -Again, we cannot rule out that the host cell plays a role in this effect and that the compound may not act directly on the parasite.

      The same comment applies to the quote in lines 394 to 396. There is no proof in the results presented here that DNA methylation plays any role in the effect of hydrazinophtalazines in the anti-plasmodial activity obtained in the assay.

      We maintain that we use words throughout the text that express uncertainty about the mechanisms involved. It is important to point out that, prior to this paper, the number of hypnozonticidal hits was incredibly low and this field is just emerging. The fundamental role of epigenetic mechanisms is regulation of gene expression. Finding several hypnozonticidal hits when screening epigenetic libraries implies epigenetic pathways are important for hypnozoite survival. We intentionally do not specify exact mechanisms or if they are host or parasite pathways. Host-parasite interactions in the liver stage are incredibly difficult to resolve and are outside the scope of this report. Furthermore, this statement is not exclusive to schizonts, but since screens of diversity sets against schizonts result in a much higher hit rate, the focus of this comment is unearthing rare hypnozonticidal hits.

    1. We recognize the natural impatience of people who feel that their hopes are slow in being realized. But we are convinced that these demonstrations are unwise and untimely.

      We get that you are impatient. Your actions are unwise and untimely.

    1. eLife Assessment

      This useful study characterises motor and somatosensory cortex neural activity during naturalistic eating and drinking tongue movement in nonhuman primates. The data, which include both electrophysiology and nerve block manipulations, will be of value to neuroscientists and neural engineers interested in tongue use. Although the current analyses provide a solid description of single neuron activity in these areas, both the population level analyses and the characterisation of activity changes following nerve block could be improved.

    2. Reviewer #1 (Public review):

      Summary:

      Hosack and Arce-McShane investigate how the 3D movement direction of the tongue is represented in the orofacial part of the sensory-motor cortex and how this representation changes with the loss of oral sensation. They examine the firing patterns of neurons in the orofacial parts of the primary motor cortex (MIo) and somatosensory cortex (SIo) in non-human primates (NHPs) during drinking and feeding tasks. While recording neural activity, they also tracked the kinematics of tongue movement using biplanar video-radiography of markers implanted in the tongue. Their findings indicate that many units in both MIo and SIo are directionally tuned during the drinking task. However, during the feeding task, directional tuning was more frequent in MIo units and less prominent in SIo units. Additionally, in some recording sessions, they blocked sensory feedback using bilateral nerve block injections, which seemed to result in fewer directionally tuned units and changes in the overall distribution of the preferred direction of the units.

      Strengths:

      The most significant strength of this paper lies in its unique combination of experimental tools. The author utilized a video-radiography method to capture 3D kinematics of the tongue movement during two behavioral tasks while simultaneously recording activity from two brain areas. This specific dataset and experimental setup hold great potential for future research on the understudied orofacial segment of the sensory-motor area.

      Weaknesses:

      A substantial portion of the paper is dedicated to establishing directional tuning in individual neurons, followed by an analysis of how this tuning changes when sensory feedback is blocked. While such characterizations are valuable, particularly in less-studied motor cortical areas and behaviors, the discrepancies in tuning changes across the two NHPs, coupled with the overall exploratory nature of the study, render the interpretation of these subtle differences somewhat speculative. At the population level, both decoding analyses and state space trajectories from factor analysis indicate that movement direction (or spout location) is robustly represented. However, as with the single-cell findings, the nuanced differences in neural trajectories across reach directions and between baseline and sensory-block conditions remain largely descriptive. To move beyond this, model-based or hypothesis-driven approaches are needed to uncover mechanistic links between neural state space dynamics and behavior.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript by Hosack and Arce-McShane examines the directional tuning of neurons in macaque primary motor (MIo) and somatosensory (SIo) cortex. The neural basis of tongue control is far less studied than, for example, forelimb movements, partly because the tongue's kinematics and kinetics are difficult to measure. A major technical advantage of this study is using biplanar video-radiography, processed with modern motion tracking analysis software, to track the movement of the tongue inside the oral cavity. Compared to prior work, the behaviors are more naturalistic (feeding and licking water from one of three spouts), although the animals were still head-fixed.

      The study's main findings are that:

      • A majority of neurons in MIo and a (somewhat smaller) percentage of SIo modulated their firing rates during tongue movements, with different modulation depending on the direction of movement (i.e., exhibited directional tuning). Examining the statistics of tuning across neurons, there was anisotropy (e.g., more neurons preferring anterior movement) and a lateral bias in which tongue direction neurons preferred that was consistent with the innervation patterns of tongue control muscles (although with some inconsistency between monkeys).

      • Consistent with this encoding, tongue position could be decoded with moderate accuracy even from small ensembles of ~28 neurons.

      • There were differences observed in the proportion and extent of directional tuning between the feeding and licking behaviors, with stronger tuning overall during licking. This potentially suggests behavioral context-dependent encoding.

      • The authors then went one step further and used a bilateral nerve block to the sensory inputs (trigeminal nerve) from the tongue. This impaired the precision of tongue movements and resulted in an apparent reduction and change in neural tuning in MIo and SIo.

      Strengths:

      The data are difficult to obtain and appear to have been rigorously measured, and provide a valuable contribution to this under-explored subfield of sensorimotor neuroscience. The analyses adopt well-established methods especially from the arm motor control literature, and represent a natural starting point for characterizing tongue 3D direction tuning.

      Weaknesses:

      There are alternative explanations for some of the interpretations, but those interpretations are described in a way that clearly distinguishes results from interpretations, and readers can make their own assessments. Some of these limitations are described in more detail below.

      One weakness of the current study is that there is substantial variability in results between monkeys.

      This study focuses on describing directional tuning using the preferred direction (PD) / cosine tuning model popularized by Georgopoulos and colleagues for understanding neural control of arm reaching in the 1980s. This is a reasonable starting point and a decent first order description of neural tuning. However, the arm motor control field has moved far past that viewpoint, and in some ways an over-fixation on static representational encoding models and PDs held that field back for many years. The manuscript would benefit from drawing the readers' attention (perhaps in their Discussion) that PDs are a very simple starting point for characterizing how cortical activity relates to kinematics, but that there is likely much richer population-level dynamical structure and that a more mechanistic, control-focused analytical framework may be fruitful. A good review of this evolution in the arm field can be found in Vyas S, Golub MD, Sussillo D, Shenoy K. 2020. Computation Through Neural Population Dynamics. Annual Review of Neuroscience. 43(1):249-75. A revised version of the manuscript incorporates more population-level analyses, but with inconsistent use of quantifications/statistics and without sufficient contextualization of what the reader is to make of these results.
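      For readers unfamiliar with the model the reviewer refers to, the classic cosine-tuning description (a textbook formulation, not text from the manuscript under review) writes a neuron's mean firing rate as a baseline plus a modulation that peaks when movement is along the neuron's preferred direction:

      ```latex
      f(\theta) = b_0 + m \cos\left(\theta - \theta_{\mathrm{PD}}\right)
      ```

      Here θ is the movement direction, θ_PD the preferred direction, and m the modulation depth; the 3D generalization replaces the cosine with a dot product between the movement-direction unit vector and a preferred-direction vector.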

      The described changes in tuning after nerve block could also be explained by changes in kinematics between these conditions, which temper the interpretation of these interesting results.

      I am not convinced of the claim that tongue directional encoding fundamentally changes between drinking and feeding given the dramatically different kinematics and the involvement of other body parts like the jaw (e.g., the reference to Laurence-Chasen et al. 2023 just shows that there is tongue information independent of jaw kinematics, not that jaw movements don't affect these neurons' activities). I also find the nerve block results inconsistent (more tuning in one monkey, less in the other?) and difficult to really learn something fundamental from, besides that neural activity and behavior both change - in various ways - after nerve block (not at all surprising but still good to see measurements of).

      The manuscript states that "Our results suggest that the somatosensory cortex may be less involved than the motor areas during feeding, possibly because it is a more ingrained and stereotyped behavior as opposed to tongue protrusion or drinking tasks". An alternative explanation may be more statistical/technical in nature: could it be that during feeding, there is more variability in exactly what somatosensory afferent signals are being received from trial to trial (because slight differences in kinematics can have large differences in exactly where the tongue is and the where/when/how of what parts of it are touching other parts of the oral cavity)? This variability could "smear out" the apparent tuning using these types of trial-averaged analyses. Given how important proprioception and somatosensation are for not biting the tongue or choking, the speculation that somatosensory cortical activity is suppressed during feeding is very counter-intuitive to this reviewer. In the revised manuscript the authors note these potential confounds and other limitations in the Discussion.

    4. Reviewer #3 (Public review):

      Summary

      In this study, the authors aim to uncover how 3D tongue direction is represented in the Motor (M1o) and Somatosensory (S1o) cortex. In non-human primates implanted with chronic electrode arrays, they use X-ray based imaging to track the kinematics of the tongue and jaw as the animal is either chewing food or licking from a spout. They then correlate the tongue kinematics with the recorded neural activity. They perform both single-unit and population level analyses during feeding and licking. Then, they recharacterize the tuning properties after bilateral lidocaine injections in the two sensory branches of the trigeminal nerve. They report that their nerve block causes a reorganization of the tuning properties and population trajectories. Overall, this paper concludes that M1o and S1o both contain representations of the tongue direction, but their numbers, their tuning properties and susceptibility to perturbed sensory input are different.

      Strengths

      The major strengths of this paper are in the state-of-the-art experimental methods employed to collect the electrophysiological and kinematic data. In the revision, the single-unit analyses of tuning direction are robustly characterized. The differences in neural correlations across behaviors, regions and perturbations are robust. In addition to the substantial amount of largely descriptive analyses, this paper makes two convincing arguments: 1) the single-neuron correlates for feeding and licking in OSMCx are different, and can't be simply explained by different kinematics; and 2) blocking sensory input alters the neural processing during orofacial behaviors. The evidence for these claims is solid.

      Weaknesses

      The main weakness of this paper is in providing an account for these differences to get some insight into neural mechanisms. For example, while the authors show changes in neural tuning and different 'neural trajectory' shapes during feeding and drinking - their analyses of these differences are descriptive and provide limited insight for the underlying neural computations.

    5. Author response:

      The following is the authors’ response to the current reviews.

      We thank the editors and the reviewers for their helpful comments. We have provided a response to the reviewers’ recommendations and made some revisions to the manuscript.

      Reviewer #1 (Recommendations for the authors): 

      In the newly added population factor analysis, several methodological decisions remain unclear to me:

      In Figure 7, why do the authors compare the mean distance between conditions in the latent spaces of MIo and SIo? Since these latent spaces are derived separately, they exist on different scales (with MIo appearing roughly four times larger than SIo), and this discrepancy is reflected in the reported mean distances (Figure 7, inset plots). Wouldn't this undermine a direct comparison?

      Thank you for this helpful feedback. The reviewer is correct that the latent spaces are derived separately for MIo and SIo, thus they exist on different scales as we have noted in the caption of Figure 7: “Axes for SIo are 1/4 scale of MIo.”

      To allow for a direct comparison between MIo and SIo, we corrected the analysis by comparing their normalized mean inter-trajectory distances obtained by first calculating the geometric index (GI) of the inter-trajectory distances, d, between each pair of population trajectories per region as: GI = (d₁ - d₂) / (d₁ + d₂). We then performed the statistics on the GIs and found a significant difference between mean inter-trajectory distances in MIo vs. SIo. We performed the same analysis comparing the distance travelled between MIo and SIo trajectories by getting the normalized difference in distances travelled and still found a significant difference in both tasks. We have updated the results and figure inset to reflect these changes.
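      A minimal sketch of the normalization described above, under the assumption that each population trajectory is stored as a time × latent-factor array; the function and variable names are illustrative, not taken from the authors' code:

      ```python
      import numpy as np

      def mean_intertrajectory_distance(traj_a, traj_b):
          """Mean Euclidean distance between two time-matched latent trajectories,
          each an array of shape (n_timepoints, n_latent_factors)."""
          return np.mean(np.linalg.norm(traj_a - traj_b, axis=1))

      def geometric_index(d1, d2):
          """Scale-free comparison of two distances: GI = (d1 - d2) / (d1 + d2),
          which ranges from -1 to 1 and removes the overall scale of the latent space."""
          return (d1 - d2) / (d1 + d2)

      # Toy example: two pairs of trajectories in a 20-factor latent space.
      rng = np.random.default_rng(0)
      traj = rng.normal(size=(4, 50, 20))   # 4 conditions x 50 time bins x 20 factors
      d_pair_1 = mean_intertrajectory_distance(traj[0], traj[1])
      d_pair_2 = mean_intertrajectory_distance(traj[2], traj[3])
      print(geometric_index(d_pair_1, d_pair_2))
      ```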

      In Figure 12, unlike Figure 7 which shows three latent dimensions, only two factors are plotted. While the methods section describes a procedure for selecting the optimal number of latent factors, Figure 7 - figure supplement 3 shows that variance explained continues to increase up to about five latent dimensions across all areas. Why, then, are fewer dimensions shown?

      Thank you for the opportunity to clarify the figure. The m obtained from the 3-fold cross-validation varied for the full sample and was 20 factors for the subsample. We clarify that all statistical analyses were done using 20 latent factors. Using the full sample of neurons, the first 3 factors explained 81% of variance in feeding data compared to 71% in drinking data. When extended to 5 factors, feeding maintained its advantage with 91% variance explained versus 82% for drinking. Because feeding showed higher variance explained than drinking across 3 or 5 factors, only three factors were shown in Figure 7 for better visualization. We added this clarification to the Methods and Results.

      Figure 12 shows the differences in the neural trajectories between the control and nerve block conditions. The control vs. nerve block comparison complicated the visualization of the results. Thus, we plotted only the two latent factors with the highest separation between population trajectories. This was clarified in the Methods and caption of Figure 12.
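      A generic illustration of the kind of procedure described above (choosing the number of latent factors by cross-validation and then reporting cumulative variance explained by the leading factors); this sketch uses toy data and scikit-learn's FactorAnalysis, not the authors' pipeline:

      ```python
      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      spike_counts = rng.poisson(lam=3.0, size=(300, 60)).astype(float)  # trials x neurons (toy)

      # Choose the number of factors m by 3-fold cross-validated log-likelihood.
      candidate_m = [5, 10, 15, 20, 25]
      cv_ll = [cross_val_score(FactorAnalysis(n_components=m), spike_counts, cv=3).mean()
               for m in candidate_m]
      best_m = candidate_m[int(np.argmax(cv_ll))]

      # Fit with the chosen m and report the fraction of shared variance captured
      # by the three largest factors (one common convention; others exist).
      fa = FactorAnalysis(n_components=best_m).fit(spike_counts)
      shared_var = np.sort(np.sum(fa.components_ ** 2, axis=1))[::-1]
      cum_frac = np.cumsum(shared_var) / shared_var.sum()
      print(best_m, cum_frac[:3])
      ```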

      In Figure 12, factor 2 and 3 are plotted against each other? and factor 1 is left out?

      This observation is incorrect; Factor 1 was included: Top subplots (feeding) show Factor 1 vs 3 (MIo) and Factor 1 vs 2 (SIo) while the bottom subplots (drinking) show Factor 2 vs 3 (MIo) and Factor 1 vs 2 (SIo).  We have clarified this in the Methods and caption of Figure 12.

      Finally, why are factor analysis results shown only for monkey R? 

      Factor analysis was performed on both animals, but the results were shown only for monkey R to decrease the number of figures in the manuscript. Figure 7- figure supplement 1 shows the data for both monkeys. Here are the equivalent Figure 7 plots for monkey Y.

      Author response image 1.

      Reviewer #2 (Recommendations for the authors): 

      Overall, the manuscript has been improved. 

      New analyses provide improved rigor (as just one example, organizing the feeding data into three-category split to better match the three-direction drinking data decoding analysis and also matching the neuron counts).

      The updated nerve block change method (using an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory) somewhat reduces my concern that kinematic differences could account for the neural changes, but on the other hand the neural analyses use 250 ms (meaning that the neural differences could be related to behavioral differences earlier in the trial). Why not subselect to trials with similar trajectories throughout the whole movement (or at least show that as an additional analysis, albeit one with lower trial counts)?

      As the reviewer pointed out, selecting similar trajectories throughout the whole movement would result in lower trial counts that lead to poor statistical power. We think that the 100 ms prior to maximum tongue protrusion is a more important movement segment to control for similar kinematics between the control and nerve block conditions since this represents the subject’s intended movement endpoint. 

      A lot of the Results seemed like a list of measurements without sufficient hand-holding or guide-posting to explain what the take-away for the reader should be. Just one example to make concrete this broadly-applicable feedback: "Cumulative explained variance for the first three factors was higher in feeding (MIo: 82%, SIo: 81%) than in drinking (MIo: 74%, SIo: 63%) when all neurons were used for the factor analysis (Fig. 7)": why should we care about 3 factors specifically? Does this mean that in feeding, the neural dimensionality is lower (since 3 factors explain more of it)? Does that mean feeding is a "simpler" behavior (which is counter-intuitive and does not conform to the authors' comments about the higher complexity of feeding). And from later in that paragraph: what are we to make of the differences in neural trajectory distances (aside from quantifying using a different metric the same larger changes in firing rates that could just as well be quantified as statistics across single-neuron PETHs)?

      Thank you for the feedback on the writing style. We have made some revisions to describe the takeaway for the reader. That fewer latent factors explain 80% of the variance in the feeding data means that the underlying network activity is relatively simple despite apparent complexity. When neural population trajectories are farther away from each other in state space, it means that the patterns of activity across tongue directions are more distinct and separable, thus, less likely to be confused with each other. This signifies that neural representations of 3D tongue directions are more robust. When there is better neural discrimination and more reliable information processing, it is easier for downstream brain regions to distinguish between different tongue directions.

      The addition of more population-level analyses is nice as it provides a more efficient summary of the neural measurements. However, it's a surface-level dive into these methods; ultimately the goal of ensemble "computation through dynamics" analyses is to discover simpler structure / organizational principles at the ensemble level (i.e., show things not evident from single neurons), rather than just using them as a way to summarize data. For instance, here neural rotations are remarked upon in the Results, without referencing influential prior work describing such rotations and why neural circuits may use this computational motif to separate out conditions and shape muscle activity-generating readouts (Churchland et al. Nature 2012 and subsequent theoretical iterations including Russo et al.). That said, the Russo et al tangling study was well-referenced and the present tangling results were effectively contextualized with respect to that paper in terms of the interpretation. I wish more of the results were interpreted with comparable depth.

      Speaking of Russo et al: the authors note qualitative differences in tangling between brain areas, but do not actually quantify tangling in either. These observations would be stronger if quantified and accompanied with statistics.

      Contrary to the reviewer’s critique, we did frame these results in the context of structure/organizational principles at the ensemble level. We had already cited prior work of Churchland et al., 2012; Michaels et al., 2016; and Russo et al., 2018. In the Discussion, Differences across behaviors, we wrote: “In contrast, MIo trajectories in drinking exhibited a consistent rotational direction regardless of spout location (Fig. 7). This may reflect a predominant non-directional information such as condition-independent time-varying spiking activity during drinking (Kaufman et al., 2016; Kobak et al., 2016; Arce-McShane et al., 2023).”

      Minor suggestions: 

      Some typos, e.g. 

      • no opening parenthesis in "We quantified directional differences in population activity by calculating the Euclidean distance over m latent factors)"

      • missing space in "independent neurons(Santhanam et al., 2009;..."); 

      • missing closing parentheses in "followed by the Posterior Inferior (Figure 3 - figure supplement 1."

      There is a one-page long paragraph in the Discussion. Please consider breaking up the text into more paragraphs each organized around one key idea to aid readability.

      Thank you, we have corrected these typos.

      Could it be that the Kaufman et al 2013 reference was intended to be Kaufman et al 2015 eNeuro (the condition-invariant signal paper)?

      Thank you, we have corrected this reference.

      At the end of the Clinical Implications subsection of the Discussion, the authors note the growing field of brain-computer interfaces with references for motor read-out or sensory write-in of hand motor/sensory cortices, respectively. Given that this study looks at orofacial cortices, an even more clinically relevant development is the more recent progress in speech BCIs (two recent reviews: https://www.nature.com/articles/s41583-024-00819-9, https://www.annualreviews.org/content/journals/10.1146/annurev-bioeng-110122012818), many of which record from human ventral motor cortex and aspirations towards FES-like approaches for orofacial movements (e.g., https://link.springer.com/article/10.1186/s12984-023-01272-y).

      Thank you, we have included these references.

      Reviewer #3 (Recommendations for the authors): 

      Major Suggestions 

      (1) For the factor analysis of feeding vs licking, it appears that the factors were calculated separately for the two behaviors. It could be informative to calculate the factors under both conditions and project the neural data for the two behaviors into that space. The overlap/separations of the subspace could be informative. 

      We clarify that we performed a factor analysis that included both feeding and licking for MIo, as stated in the Results: “To control for factors such as different neurons and kinematics that might influence the results, we performed factor analysis on stable neurons across both tasks using all trials (Fig. 7- figure supplement 2A) and using trials with similar kinematics (Fig. 7- figure supplement 2B).” We have revised the manuscript to reflect this more clearly.

      (2) For the LSTM, the Factor analyses and the decoding it is unclear if the firing rates are mean subtracted and being normalized (the methods section was a little unclear). Typically, papers in the field either z-score the data or do a softmax.

      The firing rates were z-scored for the LSTM and KNN. For the factor analysis, the spike counts were not z-scored, but the results were normalized. We clarified this in the Methods section.
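      A minimal sketch of the preprocessing described (z-scoring each neuron's binned firing rate across samples before decoding); array shapes and names are illustrative, not the authors' code:

      ```python
      import numpy as np

      def zscore_rates(rates, eps=1e-8):
          """Z-score binned firing rates per neuron (columns) across time bins (rows)."""
          mu = rates.mean(axis=0, keepdims=True)
          sd = rates.std(axis=0, keepdims=True)
          return (rates - mu) / (sd + eps)

      # rates: hypothetical (n_timebins x n_neurons) spike counts converted to Hz
      rng = np.random.default_rng(2)
      rates = rng.poisson(lam=4.0, size=(1000, 50)) / 0.05   # 50 ms bins
      rates_z = zscore_rates(rates)                          # input to the LSTM / KNN decoders
      ```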

      Minor: 

      Page 1: Abstract- '... how OSMCx contributes to...' 

      Since there are no direct causal manipulations of OSMCx in this manuscript, this study doesn't directly study the OSMCx's contribution to movement - I would recommend rewording this sentence.

      Similarly, Page 2: 'OSMCx plays an important role in coordination...' the citations in this paragraph are correlative, and do not demonstrate a causal role.

      There are similar usages of 'OSMCx coordinates...' in other places e.g. Page 8. 

      Thank you, we revised these sentences.

      Page 7: the LSTM here has 400 units, which is a very large network and contains >12000 parameters. Networks of this size are prone to memorization; it would be wise to test the R-squared of the validation set against a shuffled dataset to see if the network is actually working as intended.

      Thank you for bringing up this important point of verifying that the network is learning meaningful patterns versus memorizing. Considering the size of our training samples, the ratio of samples to parameters is appropriate and thus the risk of memorization is low. Indeed, validation tests and cross-validation performed indicated expected network behavior and the R squared values obtained here were similar to those reported in our previous paper (Laurence-Chasen et al., 2023).
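      For completeness, a generic sketch of the shuffle control the reviewer asks about (validation R² on real versus label-shuffled training data); it uses a simple ridge regressor as a stand-in rather than the 400-unit LSTM, purely to show the logic:

      ```python
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      X = rng.normal(size=(2000, 50))                                             # z-scored firing rates (toy)
      y = X[:, :3] @ rng.normal(size=(3, 3)) + 0.1 * rng.normal(size=(2000, 3))   # 3D tongue position (toy)

      X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

      r2_real = r2_score(y_va, Ridge().fit(X_tr, y_tr).predict(X_va))

      # Shuffle control: break the correspondence between neural data and kinematics
      # in the training set, then evaluate on the intact validation set.
      y_shuffled = y_tr[rng.permutation(len(y_tr))]
      r2_shuffle = r2_score(y_va, Ridge().fit(X_tr, y_shuffled).predict(X_va))

      print(f"validation R^2: real = {r2_real:.2f}, shuffled = {r2_shuffle:.2f}")
      # A model that merely memorizes should collapse toward (or below) zero on the shuffled control.
      ```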


      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In their paper, Hosack and Arce-McShane investigate how the 3D movement direction of the tongue is represented in the orofacial part of the sensory-motor cortex and how this representation changes with the loss of oral sensation. They examine the firing patterns of neurons in the orofacial parts of the primary motor cortex (MIo) and somatosensory cortex (SIo) in non-human primates (NHPs) during drinking and feeding tasks. While recording neural activity, they also tracked the kinematics of tongue movement using biplanar videoradiography of markers implanted in the tongue. Their findings indicate that most units in both MIo and SIo are directionally tuned during the drinking task. However, during the feeding task, directional tuning was more frequent in MIo units and less prominent in SIo units. Additionally, in some recording sessions, they blocked sensory feedback using bilateral nerve block injections, which resulted in fewer directionally tuned units and changes in the overall distribution of the preferred direction of the units.

      Strengths:

      The most significant strength of this paper lies in its unique combination of experimental tools. The author utilized a video-radiography method to capture 3D kinematics of the tongue movement during two behavioral tasks while simultaneously recording activity from two brain areas. Moreover, they employed a nerve-blocking procedure to halt sensory feedback. This specific dataset and experimental setup hold great potential for future research on the understudied orofacial segment of the sensory-motor area.

      Weaknesses:

      Aside from the last part of the result section, the majority of the analyses in this paper are focused on single units. I understand the need to characterize the number of single units that directly code for external variables like movement direction, especially for less-studied areas like the orofacial part of the sensory-motor cortex. However, as a field, our decades-long experience in the arm region of sensory-motor cortices suggests that many of the idiosyncratic behaviors of single units can be better understood when the neural activity is studied at the level of the state space of the population. By doing so, for the arm region, we were able to explain why units have "mixed selectivity" for external variables, why the tuning of units changes in the planning and execution phase of the movement, why activity in the planning phase does not lead to undesired muscle activity, etc. See (Gallego et al. 2017; Vyas et al. 2020; Churchland and Shenoy 2024) for a review. Therefore, I believe investigating the dynamics of the population activity in orofacial regions can similarly help the reader go beyond the peculiarities of single units and in a broader view, inform us if the same principles found in the arm region can be generalized to other segments of sensorimotor cortex.

      We thank and agree with the reviewer on the value of information gained from studying population activity. We also appreciate that population analyses have led to the understanding that individual neurons have “mixed selectivity”. We have shown previously that OSMCx neurons exhibit mixed selectivity in their population activity and clear separation between latent factors associated with gape and bite force levels (Arce-McShane FI, Sessle BJ, Ram Y, Ross CF, Hatsopoulos NG (2023) Multiple regions of primate orofacial sensorimotor cortex encode bite force and gape. Front Systems Neurosci. doi: 10.3389/fnsys.2023.1213279. PMID: 37808467 PMCID: 10556252), and chew-side and food types (Li Z & Arce-McShane FI (2023). Cortical representation of mastication in the primate orofacial sensorimotor cortex. Program No. NANO06.05. 2023 Neuroscience Meeting Planner. Washington, D.C.: Society for Neuroscience, 2023. Online.). 

      The primary goal of this paper was to characterize single units in the orofacial region, with population activity to be addressed in a follow-up paper. In the revised manuscript, we have now incorporated the results of population-level analyses. The combined results of the single unit and population analyses provide a deeper understanding of the cortical representation of 3D direction of tongue movements during natural feeding and drinking behaviors.

      Further, for the nerve-blocking experiments, the authors demonstrate that the lack of sensory feedback severely alters how the movement is executed at the level of behavior and neural activity. However, I had a hard time interpreting these results since any change in neural activity after blocking the orofacial nerves could be due to either the lack of the sensory signal or, as the authors suggest, due to the NHPs executing a different movement to compensate for the lack of sensory information or the combination of both of these factors. Hence, it would be helpful to know if the authors have any hint in the data that can tease apart these factors. For example, analyzing a subset of nerve-blocked trials that have similar kinematics to the control.

      Thank you for bringing up this important point. We agree with the reviewer that any change in the neural activity may be attributed to lack of sensory signal or to compensatory changes or a combination of these factors. To tease apart these factors, we sampled an equal number of trials with similar kinematics for both control and nerve block feeding sessions. We added clarifying description of this approach in the Results section of the revised manuscript: “To confirm this effect was not merely due to altered kinematics, we conducted parallel analyses using carefully subsampled trials with matched kinematic profiles from both control and nerve-blocked conditions.”

      Furthermore, we ran additional analyses for the drinking datasets by subsampling a similar distribution of drinking movements from each condition. We compared the neural data and directional tuning across an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory, nearest the spout. These analyses that control for similar kinematics showed that there was still a decrease in the proportion of directionally modulated neurons with nerve block compared to the control. This confirms that the results may be attributed to the lack of tactile information. These are now integrated in the revised paper under the Methods section: Directional tuning of single neurons, as well as the Results section: Effects of nerve block: Decreased directional tuning of MIo and SIo neurons, and Figure 10 – figure supplement 1.

      Reviewer #2 (Public review):

      Summary:

      This manuscript by Hosack and Arce-McShane examines the directional tuning of neurons in macaque primary motor (MIo) and somatosensory (SIo) cortex. The neural basis of tongue control is far less studied than, for example, forelimb movements, partly because the tongue's kinematics and kinetics are difficult to measure. A major technical advantage of this study is using biplanar video-radiography, processed with modern motion tracking analysis software, to track the movement of the tongue inside the oral cavity. Compared to prior work, the behaviors are more naturalistic (feeding and licking water from one of three spouts), although the animals were still head-fixed.

      The study's main findings are that:

      • A majority of neurons in MIo and a (somewhat smaller) percentage of SIo modulated their firing rates during tongue movements, with different modulations depending on the direction of movement (i.e., exhibited directional tuning). Examining the statistics of tuning across neurons, there was anisotropy (e.g., more neurons preferring anterior movement) and a lateral bias in which tongue direction neurons preferred that was consistent with the innervation patterns of tongue control muscles (although with some inconsistency between monkeys).

      • Consistent with this encoding, tongue position could be decoded with moderate accuracy even from small ensembles of ~28 neurons.

      • There were differences observed in the proportion and extent of directional tuning between the feeding and licking behaviors, with stronger tuning overall during licking. This potentially suggests behavioral context-dependent encoding.

      • The authors then went one step further and used a bilateral nerve block to the sensory inputs (trigeminal nerve) from the tongue. This impaired the precision of tongue movements and resulted in an apparent reduction and change in neural tuning in MIo and SIo.

      Strengths:

      The data are difficult to obtain and appear to have been rigorously measured, and provide a valuable contribution to this under-explored subfield of sensorimotor neuroscience. The analyses adopt well-established methods, especially from the arm motor control literature, and represent a natural starting point for characterizing tongue 3D direction tuning.

      Weaknesses:

      There are alternative explanations for some of the interpretations, but those interpretations are described in a way that clearly distinguishes results from interpretations, and readers can make their own assessments. Some of these limitations are described in more detail below.

      One weakness of the current study is that there is substantial variability in results between monkeys, and that only one session of data per monkey/condition is analyzed (8 sessions total). This raises the concern that the results could be idiosyncratic. The Methods mention that other datasets were collected, but not analyzed because the imaging pre-processing is very labor-intensive. While I recognize that time is precious, I do think in this case the manuscript would be substantially strengthened by showing that the results are similar on other sessions.

      We acknowledge the reviewer’s concern about inter-subject variability. Animal feeding and drinking behaviors are quite stable across sessions; thus, we do not think that additional sessions will address the concern that the results could be idiosyncratic. Each of the eight datasets analyzed here has sufficient neural and kinematic data to capture neural and behavioral patterns. Nevertheless, we performed some of the analyses on a second feeding dataset from Monkey R. The results from analyses on a subset of this data were consistent across datasets; for example, (1) similar proportions of directionally tuned neurons, (2) similar distances between population trajectories (t-test p > 0.9), and (3) a consistently smaller distance between Anterior-Posterior pairs than others in MIo (t-test p < 0.05) but not SIo (p > 0.1).
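
      For readers unfamiliar with this kind of comparison, a minimal sketch of measuring separation between condition-averaged latent trajectories follows. It is purely illustrative: the factor dimensionality, time base, and statistical comparison are assumptions, not the exact analysis used here.

      ```python
      # Sketch: Euclidean distance between two condition-averaged latent trajectories,
      # computed at each time point, then compared across direction pairs with a t-test.
      import numpy as np
      from scipy.stats import ttest_ind

      def trajectory_distance(traj_a, traj_b):
          """traj_*: (n_timepoints, n_factors) condition-averaged latent trajectories."""
          return np.linalg.norm(traj_a - traj_b, axis=1)   # distance per time point

      rng = np.random.default_rng(0)
      ant, post = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
      left, right = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))

      d_ap = trajectory_distance(ant, post)
      d_lr = trajectory_distance(left, right)
      # e.g. test whether Anterior-Posterior pairs sit closer than Left-Right pairs
      print(ttest_ind(d_ap, d_lr))
      ```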

      This study focuses on describing directional tuning using the preferred direction (PD) / cosine tuning model popularized by Georgopoulos and colleagues for understanding neural control of arm reaching in the 1980s. This is a reasonable starting point and a decent first-order description of neural tuning. However, the arm motor control field has moved far past that viewpoint, and in some ways, an over-fixation on static representational encoding models and PDs held that field back for many years. The manuscript benefits from drawing the readers' attention (perhaps in their Discussion) that PDs are a very simple starting point for characterizing how cortical activity relates to kinematics, but that there is likely much richer population-level dynamical structure and that a more mechanistic, control-focused analytical framework may be fruitful. A good review of this evolution in the arm field can be found in Vyas S, Golub MD, Sussillo D, Shenoy K. 2020. Computation Through Neural Population Dynamics. Annual Review of Neuroscience. 43(1):249-75

      Thank you for highlighting this important point. Research on orofacial movements hasn't progressed at the same pace as limb movement studies. Our manuscript focused specifically on characterizing the 3D directional tuning properties of individual neurons in the orofacial area—an analysis that has not been conducted previously for orofacial sensorimotor control. While we initially prioritized this individual neuron analysis, we recognize the value of broader population-level insights.

      Based on your helpful feedback, we have incorporated additional population analyses to provide a more comprehensive picture of orofacial sensorimotor control and expanded our discussion section. We appreciate your expertise in pushing our work to be more thorough and aligned with current neuroscience approaches.

      Can the authors explain (or at least speculate) why there was such a large difference in behavioral effect due to nerve block between the two monkeys (Figure 7)?

      We acknowledge this as a variable inherent to this type of experimentation. Previous studies have found large kinematic variation in the effect of oral nerve block as well as in the following compensatory strategies between subjects. Each animal’s biology and response to perturbation vary naturally. Indeed, our subjects exhibited different feeding behavior even in the absence of nerve block perturbation (see Figure 2 in Laurence-Chasen et al., 2022). This is why each individual serves as its own control.

      Do the analyses showing a decrease in tuning after nerve block take into account the changes (and sometimes reduction in variability) of the kinematics between these conditions? In other words, if you subsampled trials to have similar distributions of kinematics between Control and Block conditions, does the effect hold true? The extreme scenario to illustrate my concern is that if Block conditions resulted in all identical movements (which of course they don't), the tuning analysis would find no tuned neurons. The lack of change in decoding accuracy is another yellow flag that there may be a methodological explanation for the decreased tuning result.

      Thank you for bringing up this point. We accounted for the changes in the variability of the kinematics between the control and nerve block conditions in the feeding dataset where we sampled an equal number of trials with similar kinematics for both control and nerve block. However, we did not control for similar kinematics in the drinking task. In the revised manuscript, we have clarified this and performed similar analysis for the drinking task. We sampled a similar distribution of drinking movements from each condition. We compared the neural data from an equal number of trials with a similar left-right angle of movement in the last 100 ms of the tongue trajectory, nearest the spout. There was a decrease in the percentage of neurons that were directionally modulated (between 30 and 80%) with nerve block compared to the control. These results have been included in the revised paper under Methods section: Directional tuning of single neurons, as well as Results section: Effects of nerve block: Decreased directionality of MIo and SIo neurons.

      While the results from decoding using KNN did not show significant differences between decoding accuracies in control vs. nerve block conditions, the results from the additional factor analysis and decoding using LSTM were consistent with the decrease in directional tuning at the level of individual neurons.  
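
      A minimal sketch of the kinematic-matching idea described above follows; the variable names, bin count, and matching rule are illustrative assumptions rather than the procedure actually used in the paper.

      ```python
      # Sketch: subsample control and nerve-block trials so their distributions of a
      # kinematic variable (e.g. left-right angle over the last 100 ms of the tongue
      # trajectory) are matched before comparing neural tuning.
      import numpy as np

      def match_trials(angle_ctrl, angle_block, bins=8, rng=None):
          """Return index arrays selecting an equal number of trials per kinematic bin."""
          rng = rng or np.random.default_rng(0)
          edges = np.histogram_bin_edges(np.concatenate([angle_ctrl, angle_block]), bins=bins)
          keep_c, keep_b = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              idx_c = np.where((angle_ctrl >= lo) & (angle_ctrl < hi))[0]
              idx_b = np.where((angle_block >= lo) & (angle_block < hi))[0]
              n = min(len(idx_c), len(idx_b))             # equal count within each bin
              if n > 0:
                  keep_c.append(rng.choice(idx_c, n, replace=False))
                  keep_b.append(rng.choice(idx_b, n, replace=False))
          return np.concatenate(keep_c), np.concatenate(keep_b)

      # Synthetic example: block trials have a narrower angle distribution than control
      ctrl = np.random.default_rng(1).normal(0, 20, 300)
      block = np.random.default_rng(2).normal(5, 10, 250)
      ic, ib = match_trials(ctrl, block)
      print(len(ic), len(ib), round(ctrl[ic].std(), 1), round(block[ib].std(), 1))
      ```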

      The manuscript states that "Our results suggest that the somatosensory cortex may be less involved than the motor areas during feeding, possibly because it is a more ingrained and stereotyped behavior as opposed to tongue protrusion or drinking tasks". Could an alternative explanation be more statistical/technical in nature: that during feeding, there will be more variability in exactly what somatosensory afferent signals are being received from trial to trial (because slight differences in kinematics can have large differences in exactly where the tongue is and the where/when/how of what parts of it are touching other parts of the oral cavity)? This variability could "smear out" the apparent tuning using these types of trial-averaged analyses. Given how important proprioception and somatosensation are for not biting the tongue or choking, the speculation that somatosensory cortical activity is suppressed during feeding is very counter-intuitive to this reviewer.

      Thank you for bringing up this point. We have now incorporated this in our revised Discussion (see Comparison between MIo and SIo). We agree with the reviewer that trial-by-trial variability in the afferent signals may account for the lower directional signal in SIo during feeding than in drinking. Indeed, SIo’s mean-matched Fano factor in feeding was significantly higher than those in drinking (Author response image 1). Moreover, the results of the additional population and decoding analyses also support this.

      Author response image 1.

      Comparison of mean-matched Fano Factor between SIo neurons during feeding and drinking control tasks across both subjects (Wilcoxon rank sum test, p < 0.001).

      Reviewer #3 (Public review):

      Summary:

      In this study, the authors aim to uncover how 3D tongue direction is represented in the Motor (M1o) and Somatosensory (S1o) cortex. In non-human primates implanted with chronic electrode arrays, they use X-ray-based imaging to track the kinematics of the tongue and jaw as the animal is either chewing food or licking from a spout. They then correlate the tongue kinematics with the recorded neural activity. Using linear regressions, they characterize the tuning properties and distributions of the recorded population during feeding and licking. Then, they recharacterize the tuning properties after bilateral lidocaine injections in the two sensory branches of the trigeminal nerve. They report that their nerve block causes a reorganization of the tuning properties. Overall, this paper concludes that M1o and S1o both contain representations of the tongue direction, but their numbers, their tuning properties, and susceptibility to perturbed sensory input are different.

      Strengths:

      The major strengths of this paper are in the state-of-the-art experimental methods employed to collect the electrophysiological and kinematic data.

      Weaknesses:

      However, this paper has a number of weaknesses in the analysis of this data.

      It is unclear how reliable the neural responses are to the stimuli. The trial-by-trial variability of the neural firing rates is not reported. Thus, it is unclear if the methods used for establishing that a neuron is modulated and tuned to a direction are susceptible to spurious correlations. The authors do not use shuffling or bootstrapping tests to determine the robustness of their fits or determining the 'preferred direction' of the neurons. This weakness colors the rest of the paper.

      Thank you for raising these points. We have performed the following additional analyses: (1) We have added analyses to ensure that the results could not be explained by neural variability. To show the trial-by-trial variability of the neural firing rates, we have calculated the Fano factor (mean overall = 1.34747; control = 1.46471; nerve block = 1.23023). The distribution was similar across directions, suggesting that responses of MIo and SIo neurons to varying 3D directions were reliable. (2) We have used a bootstrap procedure to ensure that directional tuning cannot be explained by mere chance. (3) To test the robustness of our PDs we also performed a bootstrap test, which yielded the same results for >90% of neurons, and a multiple linear regression test for fit to a cosine-tuning function. In the revised manuscript, the Methods and Results sections have been updated to include these analyses.  

      Author response image 2.

      Comparison of Fano Factor across directions for MIo and SIo Feeding Control (Kruskal-Wallis, p > 0.7).
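
      As an illustration of these two checks, a minimal sketch follows. It is not the analysis code: the tuning model is a plain linear fit of rate to 3D direction, and all data and names are synthetic assumptions.

      ```python
      # Sketch: trial-to-trial variability via the Fano factor, and a bootstrap of the
      # preferred direction (PD) from a linear fit of firing rate to 3D movement direction.
      import numpy as np

      def fano_factor(counts):
          """counts: (n_trials,) spike counts for one neuron in one condition."""
          return counts.var(ddof=1) / counts.mean()

      def preferred_direction(rates, directions):
          """Least-squares fit rate ~ b0 + b . d; PD is the unit vector along b."""
          X = np.column_stack([np.ones(len(rates)), directions])   # (n_trials, 4)
          beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
          b = beta[1:]
          return b / np.linalg.norm(b)

      def bootstrap_pd(rates, directions, n_boot=1000, rng=None):
          rng = rng or np.random.default_rng(0)
          pds = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(rates), len(rates))          # resample trials
              pds.append(preferred_direction(rates[idx], directions[idx]))
          return np.array(pds)

      # Synthetic example: a neuron tuned to the anterior (+x) direction
      rng = np.random.default_rng(1)
      dirs = rng.normal(size=(200, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      rates = np.clip(10 + 5 * dirs[:, 0] + rng.normal(scale=2, size=200), 0, None)
      print(round(fano_factor(np.round(rates)), 2))
      print(bootstrap_pd(rates, dirs).mean(axis=0))   # should point roughly along +x
      ```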

      The authors compare the tuning properties during feeding to those during licking but only focus on the tongue-tip. However, the two behaviors are different also in their engagement of the jaw muscles. Thus, many of the differences observed between the two 'tasks' might have very little to do with an alteration in the properties of the neural code - and more to do with the differences in the movements involved.

      Using the tongue tip for the kinematic analysis of tongue directional movements was a deliberate choice as the anterior region of the tongue is highly mobile and sensitive due to a higher density of mechanoreceptors. The tongue tip is the first region that touches the spout in the drinking task and moves the food into the oral cavity for chewing and subsequent swallowing. 

      We agree with the reviewer that the jaw muscles are engaged differently in feeding vs. drinking (see Fig. 2). For example, a wider variety of jaw movements along the three axes are observed in feeding compared to the smaller amplitude and mostly vertical jaw movements in drinking. Also, the tongue movements are very different between the two behaviors. In feeding, the tongue moves in varied directions to position the food between left-right tooth rows during chewing, whereas in the drinking task, the tongue moves to discrete locations to receive the juice reward. Moreover, the tongue-jaw coordination differs between tasks; maximum tongue protrusion coincides with maximum gape in drinking but with minimum gape in the feeding behavior. Thus, the different tongue and jaw movements required in each behavior may account for some of the differences observed in the directional tuning properties of individual neurons and population activity. These points have been included in the revised Discussion.

      Author response image 3.

      Tongue tip position (mm) and jaw pitch (degrees) during feeding (left) and drinking (right) behaviors. Most protruded tongue position coincides with minimum gape (jaw pitch at 0°) during feeding but with maximum gape during drinking.

      Many of the neurons are likely correlated with both Jaw movements and tongue movements - this complicates the interpretations and raises the possibility that the differences in tuning properties across tasks are trivial.

      We thank the reviewer for raising this important point. In fact, we verified in a previous study whether the correlation between the tongue and jaw kinematics might explain differences in the encoding of tongue kinematics and shape in MIo (see Supplementary Fig. 4 in Laurence-Chasen et al., 2023): “Through iterative sampling of sub-regions of the test trials, we found that correlation of tongue kinematic variables with mandibular motion does not account for decoding accuracy. Even at times where tongue motion was completely un-correlated with the jaw, decoding accuracy could be quite high.”

      The results obtained from population analyses showing distinct properties of population trajectories in feeding vs. drinking behaviors provide strong support to the interpretation that directional information varies between these behaviors.

      The population analyses for decoding are rudimentary and provide very coarse estimates (left, center, or right), it is also unclear what the major takeaways from the population decoding analyses are. The reduced classification accuracy could very well be a consequence of linear models being unable to account for the complexity of feeding movements, while the licking movements are 'simpler' and thus are better accounted for.

      We thank the reviewer for raising this point. The population decoding analyses provide additional insight into the directional information in population activity, as well as a point of comparison with the results of numerous decoding studies on the arm region of the sensorimotor cortex. In the revised version, we have included the results from decoding tongue direction using a long short-term memory (LSTM) network for sequence-to-sequence decoding. These results differed from the KNN results, indicating that a linear model such as KNN was better for drinking and that a non-linear and continuous decoder was better suited for feeding. These results have been included in the revised manuscript.

      The nature of the nerve block and what sensory pathways are being affected is unclear - the trigeminal nerve contains many different sensory afferents - is there a characterization of how effectively the nerve impulses are being blocked? Have the authors confirmed or characterized the strength of their inactivation or block? I was unable to find any electrophysiological evidence characterizing the perturbation.

      The strength of the nerve block is characterized by a decrease in the baseline firing rate of SIo neurons, as shown in Supplementary Figure 6 of “Loss of oral sensation impairs feeding performance and consistency of tongue–jaw coordination” (Laurence-Chasen et al., 2022).

      Overall, while this paper provides a descriptive account of the observed neural correlations and their alteration by perturbation, a synthesis of the observed changes and some insight into neural processing of tongue kinematics would strengthen this paper.

      We thank the reviewer for this suggestion. We have revised the Discussion to provide a synthesis of the results and insights into the neural processing of tongue kinematics.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The procedure for anesthesia explained in the method section was not clear to me. The following information was missing: what drug/dose was used? How long the animal was under anesthesia? How long after the recovery the experiments were done?

      The animals were fully sedated with ketamine (100 mg/ml, 10 mg/kg) for less than 30 minutes, and all of the data was collected within 90 minutes after the nerve block was administered.

      (2) In Figure 10, panels A and B are very close together, it was not at first clear whether the text "Monkey R, Monkey Y" belongs to panel A or B.

      We have separated the two panels further in the revised figure.

      (3) I found Figure 11 very busy and hard to interpret. Separating monkeys, fitting the line for each condition, or using a bar plot can help with the readability of the figure.

      Thank you for the suggestion. We agree with you and have reworked this figure. To simplify it we have shown the mean accuracy across iterations.

      (4) I found the laterality discussions like "This signifies that there are more neurons in the left hemisphere contributes toward one direction of tongue movement, suggesting that there is some laterality in the PDs of OSMCx neurons that varies between individuals" a bit of an over-interpretation of the data, given the low n value and the dissimilarity in how strongly the nerve blocking altered the monkeys' behavior.

      Thank you for sharing this viewpoint. We do think that laterality is a good point of comparison with studies on M1 neurons in the arm/hand region. In our study, we found that the peak of the PD distribution coincides with leftward tongue movements in feeding. The distribution of PDs provides insight into how tongue muscles are coordinated during movement. Intrinsic and extrinsic tongue muscles are involved in shaping the tongue (e.g., elongation, broadening) and positioning the tongue (e.g., protrusion/retraction, elevation/depression), respectively. These muscles receive bilateral motor innervation except for genioglossus. Straight tongue protrusion requires the balanced action of the right and left genioglossi while the lateral protrusion involves primarily the contralateral genioglossus. Given this unilateral innervation pattern, we hypothesized that left MIo/SIo neurons would preferentially respond to leftward tongue movements, corresponding to right genioglossus activation. 

      Reviewer #2 (Recommendations for the authors):

      Is the observation of tuning peaks being most frequently observed toward the anterior and superior directions consistent with the statistics of the movements the tongue typically makes? This could be analogous to anisotropies previously reported in the arm literature, e.g., Lillicrap TP, Scott SH. 2013. Preference Distributions of Primary Motor Cortex Neurons Reflect Control Solutions Optimized for Limb Biomechanics. Neuron. 77(1):168-79

      Thank you for bringing our attention to analogous findings by Lillicrap & Scott, 2013. Indeed, we do observe the highest number of movements in the Anterior Superior directions, followed by the Posterior Inferior. This does align with the distribution of tuning peaks that we observed. Author response image 4 shows the proportions of observed movements in each group of directions across all feeding datasets. We have incorporated this data in the Results section: Neuronal modulation patterns differ between MIo and SIo, as well as added this point in the Discussion.

      Author response image 4.

      Proportion of feeding trials in each group of directions. Error bars represent ±1 standard deviation across datasets (n = 4).

      "The Euclidean distance was used to identify nearest neighbors, and the number of nearest neighbors used was K = 7. This K value was determined after testing different Ks which yielded comparable results." In general, it's a decoding best practice to tune hyperparameters (like K) on fully held-out data from the data used for evaluation. Otherwise, this tends to slightly inflate performance because one picks the hyperparameter that happened to give the best result. It sounds like that held-out validation set wasn't used here. I don't think that's going to change the results much at all (especially given the "comparable results" comment), but providing this suggestion for the future. If the authors replicate results on other datasets, I suggest they keep K = 7 to lock in the method.

      K = 7 was chosen based on the size of our smallest training dataset (n = 55). The purpose of testing different K values was not to select which value gave the best result, but to demonstrate that similar K values did not affect the results significantly. We tested the different K values on a subset of the feeding data, but that data was not fully held-out from the training set. We will keep your suggestion in mind for future analysis.
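
      For reference, a minimal sketch of the KNN direction classification described here (synthetic data; trial counts and feature shapes are assumptions, not the actual dataset) is:

      ```python
      # Sketch: K-nearest-neighbor classification of spout direction (left/center/right)
      # from per-trial firing-rate vectors, with K = 7 and Euclidean distance.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_neurons = 165, 28
      X = rng.normal(size=(n_trials, n_neurons))          # z-scored firing rates per trial
      y = rng.integers(0, 3, n_trials)                    # 0 = left, 1 = center, 2 = right

      knn = KNeighborsClassifier(n_neighbors=7, metric="euclidean")
      scores = cross_val_score(knn, X, y, cv=5)
      print(scores.mean())   # chance is ~0.33 for this unstructured synthetic data
      ```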

      The smoothing applied to Figure 2 PSTHs appears perhaps excessive (i.e., it may be obscuring interesting finer-grained details of these fast movements). Can the authors reduce the 50 ms Gaussian smoothing (I assume this is the s.d.?)? ~25 ms is often used in studying arm kinematics. It also looks like the movement-related modulation may not be finished in these 200 ms / 500 ms windows. I suggest extending the shown time window. It would also be helpful to show some trial-averaged behavior (e.g. speed or % displacement from start) under or behind the PSTHs, to give a sense of what phase of the movement the neural activity corresponds to.

      Thank you for the suggestion. We have taken your suggestions into consideration and modified Figure 2 accordingly. We decreased the Gaussian kernel to 25 ms and extended the time window shown. The trial-averaged anterior/posterior displacement was also added to the drinking PSTHs.
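
      A minimal sketch of this kind of PSTH smoothing (assuming 1-ms bins and a 25-ms Gaussian s.d.; spike data and names are synthetic) is:

      ```python
      # Sketch: build a trial-averaged PSTH and smooth it with a 25-ms Gaussian kernel.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      bin_ms = 1.0                                  # 1-ms bins (assumed)
      sigma_bins = 25.0 / bin_ms                    # 25-ms Gaussian s.d. expressed in bins

      # spikes: (n_trials, n_bins) binary spike trains aligned to movement onset
      rng = np.random.default_rng(0)
      spikes = (rng.random((80, 700)) < 0.02).astype(float)

      psth_hz = spikes.mean(axis=0) / (bin_ms / 1000.0)    # trial-averaged rate in spikes/s
      psth_smooth = gaussian_filter1d(psth_hz, sigma=sigma_bins)
      print(psth_smooth.shape, round(psth_smooth.max(), 1))
      ```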

      Reviewer #3 (Recommendations for the authors):

      The major consideration here is that the data reported for feeding appears to be very similar to that reported in a previous study:

      "Robust cortical encoding of 3D tongue shape during feeding in macaques"

      Are the neurons reported here the same as the ones used in this previous paper? It is deeply concerning that this is not reported anywhere in the methods section.

      These are the same neurons as in our previous paper, though here we include several additional datasets of the nerve block and drinking sessions. We have now included this in the methods section.

      Second, I strongly recommend that the authors consider a thorough rewrite of this manuscript and improve the presentation of the figures. As written, it was not easy to follow the paper, the logic of the experiments, or the specific data being presented in the figures.

      Thank you for this suggestion. We have done an extensive rewrite of the manuscript and revision of the figures.

      A few recommendations:

      (1) Please structure your results sections and use descriptive topic sentences to focus the reader. In the current version, it is unclear what the major point being conveyed for each analysis is.

      Thank you for this suggestion. We have added topic sentences to begin each section of the Results.

      (2) Please show raster plots for at least a few example neurons so that the readers have a sense of what the neural responses look like across trials. Is all of Figure 2 one example neuron or are they different neurons? Error bars for PETH would be useful to show the reliability and robustness of the tuning.

      Figure 2 shows different neurons, one from MIo and one from SIo for each task. There is shading showing ±1 standard error around the line for each direction; however, this was a bit difficult to see. In addition to the other changes we have made to these figures, we made the lines thinner and darkened the error bar shading to accentuate this. We also added raster plots corresponding to the same neurons represented in Figure 2 as a supplement.

      (3) Since there are only two data points, I am not sure I understand why the authors have bar graphs and error bars for graphs such as Figure 3B, Figure 5B, etc. How can one have an error bar and means with just 2 data points?

      Those bars represent the standard error of the proportion. We have changed the y-axis label on these figures to make this clearer.

      (4) Results in Figure 6 could be due to differential placement of the electrodes across the animals. How is this being accounted for?

      Yes, this is a possibility which we have mentioned in the discussion. Even with careful placement there is no guarantee of capturing a set of neurons with the exact same function in two subjects, as every individual is different. Rather, we focus on analyses of data within the same animal. The purpose of Figure 6 is to show the difference between MIo and SIo, and between the two tasks, within the same subject. The more salient result from calculating the preferred direction is that there is a change in the distribution between control and nerve block within the same exact population. Discussions relating to the comparison between individuals are speculative and cannot be confirmed without the inclusion of many more subjects.

      (5) For Figure 7, I would recommend showing the results of the Sham injection in the same figure instead of a supplement.

      Thank you for the suggestion, we have added these results to the figure.

      (6) I think the effects of the sensory block on the tongue kinematics are underexplored in Figure 7 and Figure 8. The authors could explore the deficits in tongue shape, and the temporal components of the trajectory.

      Some of these effects on feeding have been explored in a previous paper, Laurence-Chasen et al., 2022. We performed some additional analyses on changes to kinematics during drinking, including the number of licks per 10 second trial and the length of individual licks. The results of these are included below. We also calculated the difference in the speed of tongue movement during drinking, which generally decreased and exhibited an increase in variance with nerve block (f-test, p < 0.001). However, we have not included these figures in the main paper as they do not inform us about directionality.

      Author response image 5.

      Left halves of hemi-violins (black) are control and right halves (red) are nerve block for an individual. Horizontal black lines represent the mean and horizontal red lines the median. Results of two-tailed t-test and f-test are indicated by asterisks and crosses, respectively: *,† p < 0.05; **,†† p < 0.01; ***,††† p < 0.001.

      (9) In Figures 9 and 10. Are the same neurons being recorded before and after the nerve block? It is unclear if the overall "population" properties are different, or if the properties of individual neurons are changing due to the nerve block.

      Yes, the same neurons are being recorded before and after nerve block. Specifically, Figure 9B shows that the properties of many individual neurons do change due to the nerve block. Differences in the overall population response may be attributed to some of the units having reduced/no activity during the nerve block session.

      Additionally, I recommend that the authors improve their introduction and provide more context to their discussion. Please elaborate on what you think are the main conceptual advances in your study, and place them in the context of the existing literature. By my count, there are 26 citations in this paper, 4 of which are self-citations - clearly, this can be improved upon.

      Thank you for this suggestion. We have done an extensive rewrite of the Introduction and Discussion. We discussed the main conceptual advances in our study and placed them in the context of the existing literature.

    1. Prerequisite: Yield curve points can be recorded only if the yield curve has been recorded among the indexes on the Indexes Page type of the Instrumentum tab under Admin (gear icon) in the Törzsadatok menu, and its Tenor has also been recorded on the Reference index tab of the instrument master data.

      Rephrase the entire highlighted section as: Recording yield curve points requires that the yield curve be recorded on the Instrumentum tab of the Törzsadatok menu, and that the Tenor be recorded in the instrument master data.

    2. The performance fee calculation is located on a separate tab within the NAV calculation.

      The result of the performance fee calculation is located on a separate tab within the NAV calculation.

    3. Performance fee calculation

      "Performance fee calculation" Nem tudom miért de az angol fordításban a fejezetek címénél minden szó nagy betűkkel kezdődik. Ezt majd nézzétek meg pls!

    1. Kubb (Learn how to play) - Lawn games

      Master this fun Swedish lawn game that combines skill and strategy with our complete guide on how to play Kubb. By learning the simple Kubb rules and clever throwing techniques, you'll be ready to dominate your next outdoor get-together with friends and family. Read the full Kubb game instructions below and start playing this classic game today!

    1. eLife Assessment

      This paper performs a valuable critical reassessment of anatomical and functional data, proposing a reclassification of the mouse visual cortex in which almost all the higher visual areas are consolidated into a single area V2. However, the evidence supporting this unification is incomplete, as the key experimental observations that the model attempts to reproduce do not accurately reflect the literature. This study will likely be of interest to neuroscientists focused on the mouse visual cortex and the evolution of cortical organization.

    2. Reviewer #1 (Public review):

      Summary:

      In this manuscript, the authors argue that defining higher visual areas (HVAs) based on reversals of retinotopic tuning has led to an over-parcellation of secondary visual cortices. Using retinotopic models, they propose that the HVAs are more parsimoniously mapped as a single area V2, which encircles V1 and exhibits complex retinotopy. They reanalyze functional data to argue that functional differences between HVAs can be explained by retinotopic coverage. Finally, they compare the classification of mouse visual cortex to that of other species to argue that our current classification is inconsistent with those used in other model species.

      Strengths:

      This manuscript is bold and thought-provoking, and is a must-read for mouse visual neuroscientists. The authors take a strong stance on combining all HVAs, with the possible exception of area POR, into a single V2 region. Although I suspect many in the field will find that their proposal goes too far, many will agree that we need to closely examine the assumptions of previous classifications to derive a more accurate areal map. The authors' supporting analyses are clear and bolster their argument. Finally, they make a compelling argument for why the classification is not just semantic, but has ramifications for the design of experiments and analysis of data.

      Weaknesses:

      Although I enjoyed the polemic nature of the manuscript, there are a few issues that weaken their argument.

      (1) Although the authors make a compelling argument that retinotopic reversals are insufficient to define distinct regions, they are less clear about what would constitute convincing evidence for distinct visual regions. They mention that a distinct area V3 has been (correctly) defined in ferrets based on "cytoarchitecture, anatomy, and functional properties", but elsewhere argue that none of these factors are sufficient to parcellate any of the HVAs in mouse cortex, despite some striking differences between HVAs in each of these factors. It would be helpful to clearly define a set of criteria that could be used for classifying distinct regions.

      (2) On a related note, although the authors carry out impressive analyses to show that differences in functional properties between HVAs could be explained by retinotopy, they glossed over some contrary evidence that there are functional differences independent of retinotopy. For example, axon projections to different HVAs originating from a single V1 injection - presumably including neurons with similar retinotopy - exhibit distinct functional properties (Glickfeld LL et al, Nat Neuro, 2013). As another example, interdigitated M2+/M2- patches in V1 show very different HVA connectivity and response properties, again independent of V1 location/retinotopy (Meier AM et al., bioRxiv). One consideration is that the secondary regions might be considered a single V2 with distinct functional modules based on retinotopy and connectivity (e.g., V2LM, V2PM, etc).

      (3) Some of the HVAs - such as AL, AM, and LI - appear to have redundant retinotopic coverage with other HVAs, such as LM and PM. Moreover, these regions have typically been found to have higher "hierarchy scores" based on connectivity (Harris JA et al., Nature, 2019; D'Souza RD et al., Nat Comm, 2022), though unfortunately, the hierarchy levels are not completely consistent between studies. Based on existing evidence, there is a reasonable argument to be made for a hybrid classification, in which some regions (e.g., LM, P, PM, and RL) are combined into a single V2 (though see point #2 above) while other HVAs are maintained as independent visual regions, distinct from V2. I don't expect the authors to revise their viewpoint in any way, but a more nuanced discussion of alternative classifications is warranted.

    3. Reviewer #2 (Public review):

      Summary:

      The study by Rowley and Sedigh-Sarvestani presents modeling data suggesting that map reversals in mouse lateral extrastriate visual cortex do not coincide with areal borders, but instead represent borders between subregions within a single area V2. The authors propose that such an organization explains the partial coverage in higher-order areas reported by Zhuang et al., (2017). The scheme revisits an organization proposed by Kaas et al., (1989), who interpreted the multiple projection patches traced from V1 in the squirrel lateral extrastriate cortex as subregions within a single area V2. Kaas et al.'s interpretation was challenged by Wang and Burkhalter (2007), who used a combination of topographic mapping of V1 connections and receptive field recordings in mice. Their findings supported a different partitioning scheme in which each projection patch mapped a specific topographic location within single areas, each containing a complete representation of the visual field. The area map of mouse visual cortex by Wang and Burkhalter (2007) has been reproduced by hundreds of studies and has been widely accepted as ground truth (CCF) (Wang et al., 2020) of the layout of rodent cortex. In the meantime, topographic mappings in marmoset and tree shrew visual cortex made a strong case for map reversals in lateral extrastriate cortex, which represent borders between functionally diverse subregions within a single area V2. These findings from non-rodent species raised doubts about whether, during evolution, different mammalian branches have developed diverse partitioning schemes of the cerebral cortex. Rowley and Sedigh-Sarvestani favor a single master plan in which, across evolution, all mammalian species have used a similar blueprint for subdividing the cortex.

      Strengths:

      The story illustrates the enduring strength of science in search of definitive answers.

      Weaknesses:

      To me, it remains an open question whether Rowley and Sedigh-Sarvestani have written the final chapter of the saga. A key reason for my reservation is that the area maps used in their model are cherry-picked. The article disregards published complementary maps, which show that the entire visual field is represented in multiple areas (i.e. LM, AL) of lateral extrastriate cortex and that the map reversal between LM and AL coincides precisely with the transition in m2AChR expression and cytoarchitecture (Wang and Burkhalter, 2007; Wang et al., 2011). Evidence from experiments in rats supports the gist of the findings in the mouse visual cortex (Coogan and Burkhalter, 1993).

      (1) The selective use of published evidence, such as the complete visual field representation in higher visual areas of lateral extrastriate cortex (Wang and Burkhalter, 2007; Wang et al., 2011) makes the report more of an opinion piece than an original research article that systematically analyzes the area map of mouse visual cortex we have proposed. No direct evidence is presented for a single area V2 with functionally distinct subregions.

      (2) The article misrepresents evidence by commenting that m2AChR expression is mainly associated with the lower field. This is counter to published findings showing that m2AChR spans across the entire visual field (Gamanut et al., 2018; Meier et al., 2021). The utility of markers for delineating areal boundaries is discounted, without any evidence, in disregard of evidence for distinct areal patterns in early development (Wang et al., 2011). That markers can be distributed non-uniformly within an area is a familiar point. m2AChR is non-uniformly expressed in mouse V1, LM and LI (Ji et al., 2015; D'Souza et al., 2019; Meier et al., 2021). Recently, it has been found that the patchy organization within V1 plays a role in the organization of thalamocortical and intracortical networks (Meier et al., 2025). m2AChR-positive patches and m2AChR-negative interpatches organize the functionally distinct ventral and dorsal networks, notably without obvious bias for upper and lower parts of the visual field.

      (3) The study has adopted an area partitioning scheme, which is said to be based on anatomically defined boundaries of V2 (Zhuang et al., 2017). The only anatomical borders used by Zhuang et al. (2017) are those of V1 and barrel cortex, identified by cytochrome oxidase staining. In reality, the partitioning of the visual cortex was based on field sign maps, which are reproduced from Zhuang et al., (2017) in Figure 1A. It is unclear why the maps shown in Figures 2E and 2F differ from those in Figure 1A. It is possible that this is an oversight. But maintaining consistent areal boundaries across experimental conditions that are referenced to the underlying brain structure is critical for assigning modeled projections to areas or sub-regions. This problem is evident in Figure 2F, which is presented as evidence that the modeling approach recapitulates the tracings shown in Figure 3 of Wang and Burkhalter (2007). The dissimilarities between the modeling and tracing results are striking, unlike what is stated in the legend of Figure 2F.

      (4) Rowley and Sedigh-Sarvestani find that the partial coverage of the visual field in higher order areas shown by Zhuang et al (2017) is recreated by the model. It is important to caution that Zhuang et al.'s (2017) maps were derived from incomplete mappings of the visual field, which was confined to -25 to 35 deg of elevation. This underestimates the coverage we have found in LM and AL. Receptive field mappings show that LM covers 0-90 deg of azimuth and -30-80 elevation (Wang and Burkhalter, 2007). AL covers at least 0-90 deg of azimuth and -30-50 deg of elevation (Wang and Burkhalter, 2007; Wang et al., 2011). These are important differences. Partial coverage in LM and AL underestimates the size of these areas and may map two projection patches as inputs to subregions of a single area rather than inputs to two separate areas. Complete, or nearly complete, visual representations in LM and AL support that each is a single area. Importantly, both areas are included in a callosal-free zone (Wang and Burkhalter, 2007). The surrounding callosal connections align with the vertical meridian representation. The single map reversal is marked by a transition in m2AChR expression and cytoarchitecture (Wang et al., 2011).

      (5) The statement that the "lack of visual field overlap across areas is suggestive of a lack of hierarchical processing" is predicated on the full acceptance of the mappings by Zhuang et al (2017). Based on the evidence reviewed above, the reclassification of visual areas proposed in Figure 1C seems premature.

      (6) The existence of lateral connections is not unique to rodent cortex and has been described in primates (Felleman and Van Essen, 1991).

      (7) Why the mouse and rat extrastriate visual cortex differ from those of many other mammals is unclear. One reason may be that mammals with V2 subregions are strongly binocular.

    4. Reviewer #3 (Public review):

      Summary:

      The authors review published literature and propose that a visual cortical region in the mouse that is widely considered to contain multiple visual areas should be considered a single visual area.

      Strengths:

      The authors point out that relatively new data showing reversals of visual-field sign within known, single visual areas of some species require that a visual field sign change by itself should not be considered evidence for a border between visual areas.

      Weaknesses:

      The existing data are not consistent with the authors' proposal to consolidate multiple mouse areas into a single "V2". This is because the existing definition of a single area is that it cannot have redundant representations of the visual field. The authors ignore this requirement, as well as the data and definitions found in published manuscripts, and make an inaccurate claim that "higher order visual areas in the mouse do not have overlapping representations of the visual field". For quantification of the extent of overlap of representations between 11 mouse visual areas, see Figure 6G of Garrett et al. 2014. [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014). Topography and areal organization of mouse visual cortex. The Journal of Neuroscience 34, 12587-12600. 10.1523/JNEUROSCI.1124-14.2014.]

    5. Author response:

      eLife Assessment:

      This paper performs a valuable critical reassessment of anatomical and functional data, proposing a reclassification of the mouse visual cortex in which almost all the higher visual areas are consolidated into a single area V2. However, the evidence supporting this unification is incomplete, as the key experimental observations that the model attempts to reproduce do not accurately reflect the literature. This study will likely be of interest to neuroscientists focused on the mouse visual cortex and the evolution of cortical organization.

      We do not agree with this assessment, nor is it clear to us which 'key experimental observations' that the model attempts to reproduce do not accurately reflect the literature. The model reproduces a complete map of the visual field, with overlap in certain regions. When reversals are used to delineate areas, as is the current custom, multiple higher order areas are generated, and each area has a biased and overlapping visual field coverage. These are the simple outputs of the model, and they are consistent with the published literature, including recent publications such as Garrett et al. 2014 and Zhuang et al. 2017, a paper published in this journal. The area boundaries produced by the model are not identical to area boundaries in the literature, because the model is a simplification.

      Reviewer #1 (Public review):

      Summary:

      In this manuscript, the authors argue that defining higher visual areas (HVAs) based on reversals of retinotopic tuning has led to an over-parcellation of secondary visual cortices. Using retinotopic models, they propose that the HVAs are more parsimoniously mapped as a single area V2, which encircles V1 and exhibits complex retinotopy. They reanalyze functional data to argue that functional differences between HVAs can be explained by retinotopic coverage. Finally, they compare the classification of mouse visual cortex to that of other species to argue that our current classification is inconsistent with those used in other model species.

      Strengths:

      This manuscript is bold and thought-provoking, and is a must-read for mouse visual neuroscientists. The authors take a strong stance on combining all HVAs, with the possible exception of area POR, into a single V2 region. Although I suspect many in the field will find that their proposal goes too far, many will agree that we need to closely examine the assumptions of previous classifications to derive a more accurate areal map. The authors' supporting analyses are clear and bolster their argument. Finally, they make a compelling argument for why the classification is not just semantic, but has ramifications for the design of experiments and analysis of data.

      Weaknesses:

      Although I enjoyed the polemic nature of the manuscript, there are a few issues that weaken their argument.

      (1) Although the authors make a compelling argument that retinotopic reversals are insufficient to define distinct regions, they are less clear about what would constitute convincing evidence for distinct visual regions. They mention that a distinct area V3 has been (correctly) defined in ferrets based on "cytoarchitecture, anatomy, and functional properties", but elsewhere argue that none of these factors are sufficient to parcellate any of the HVAs in mouse cortex, despite some striking differences between HVAs in each of these factors. It would be helpful to clearly define a set of criteria that could be used for classifying distinct regions.

      We agree the revised manuscript would benefit from a clear discussion of updated rules of area delineation in the mouse. In brief, we argue that retinotopy alone should not be used to delineate area boundaries in mice, or any other species. Although there is some evidence for functional property, architecture, and connectivity changes across mouse HVAs, area boundaries continue to be defined primarily, and sometimes solely (Garrett et al., 2014; Juavinett et al., 2018; Zhuang et al., 2017), based on retinotopy. We acknowledge that earlier work (Wang and Burkhalter, 2007; Wang et al., 2011) did consider cytoarchitecture and connectivity alongside retinotopy, but more recent work has shifted to a focus on retinotopy as indicated by the currently accepted criterion for area delineation.  

      As reviewer #2 points out, the present criteria for mouse visual area delineation can be found in the Methods section of: [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014)].

      Criterion 1: Each area must contain the same visual field sign at all locations within the area.

      Criterion 2: Each visual area cannot have a redundant representation of visual space.

      Criterion 3: Adjacent areas of the same visual field sign must have a redundant representation.

      Criterion 4: An area's location must be consistently identifiable across experiments.

      As discussed in the manuscript, recent evidence in higher order visual cortex of tree shrews and rats led us to question the universality of these criteria across species. Specifically, tree shrew V2, macaque V2, and marmoset DM exhibit reversals in visual field sign in what are defined as single visual areas. This suggests that Criterion 1 should be updated. It also suggests that Criteria 2 and 3 should be updated, since visual field sign reversals often co-occur with retinotopic redundancies: reversing course in the direction of progression along the visual field can easily lead to coverage of visual field regions already traveled.
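
      For context, the field sign referenced in Criterion 1 is conventionally computed from the gradients of the azimuth and elevation maps (in the style of Sereno et al. and Garrett et al., 2014). A minimal sketch with synthetic maps (array names and the toy maps are assumptions) is:

      ```python
      # Sketch: visual field sign as the sign of the angle between the azimuth and
      # elevation map gradients; it flips wherever the retinotopic progression reverses.
      import numpy as np

      def field_sign(azimuth_deg, elevation_deg):
          """azimuth_deg, elevation_deg: 2D maps (pixels) of preferred visual-field position.
          Returns +1 / -1 per pixel from the z-component of the gradient cross product."""
          d_az_y, d_az_x = np.gradient(azimuth_deg)
          d_el_y, d_el_x = np.gradient(elevation_deg)
          cross = d_az_x * d_el_y - d_az_y * d_el_x
          return np.sign(cross)

      # Synthetic example: an azimuth map whose progression reverses halfway across the image
      xx, yy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 100))
      azimuth = 90 * np.abs(xx - 0.5)     # reverses at xx = 0.5
      elevation = 60 * yy
      fs = field_sign(azimuth, elevation)
      print(np.unique(fs[:, :90]), np.unique(fs[:, 110:]))   # opposite signs on the two halves
      ```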

      More broadly, we argue that topography is just one of several criteria that should be considered in area delineation. We understand that few visual areas in any species meet all criteria, but we emphasize that topography cannot consistently be the sole satisfied criterion, as it currently appears to be for many mouse HVAs. Inspired by a recent perspective on cortical area delineation (Petersen et al., 2024), we suggest the following rules, which will be incorporated into the revised version of the manuscript. Topography remains a criterion, but it comes after considerations of function, architectonics, and connectivity.

      (1) Function—Cortical areas differ from neighboring areas in their functional properties  

      (2) Architectonics—Cortical areas often exhibit distinctions from neighboring areas in multiple cyto- and myeloarchitectonic markers

      (3) Connectivity—Cortical areas are characterized by a specific set of connectional inputs and outputs from and to other areas

      (4) Topography—Cortical areas often exhibit a distinct topography that balances maximal coverage of the sensory field with minimal redundancy of coverage within an area.

      As we discuss in the manuscript, although there are functional, architectonic, and connectivity differences across mouse HVAs, they typically vary smoothly across multiple areas, such that neighboring areas share the same properties and there are no sharp borders. For instance, sharp borders in cytoarchitecture are generally lacking in the mouse HVAs. A notable exception is the clear and sharp change in m2AChR expression that occurs between LM and AL (Wang et al., 2011).

      (2) On a related note, although the authors carry out impressive analyses to show that differences in functional properties between HVAs could be explained by retinotopy, they glossed over some contrary evidence that there are functional differences independent of retinotopy. For example, axon projections to different HVAs originating from a single V1 injection - presumably including neurons with similar retinotopy - exhibit distinct functional properties (Glickfeld LL et al, Nat Neuro, 2013). As another example, interdigitated M2+/M2- patches in V1 show very different HVA connectivity and response properties, again independent of V1 location/retinotopy (Meier AM et al., bioRxiv). One consideration is that the secondary regions might be considered a single V2 with distinct functional modules based on retinotopy and connectivity (e.g., V2LM, V2PM, etc).

      Thank you for the correction. We will revise the text to discuss (Glickfeld et al., 2013), as it remains some of the strongest evidence in favor of retinotopy-independent functional specialization of mouse HVAs. However, one caveat of this study is the size of the V1 injection that is the source of the axons studied in the HVAs. As apparent in Figure 1B, the large injection covers nearly a quarter of V1. It is worth noting that (Han et al., 2018) found, using single-cell reconstructions and MAPseq, that the majority of V1 neurons project to multiple nearby HVA targets. This experiment does not suffer from the problem of tracer spread over V1’s retinotopic map, and it suggests that presumably retinotopically matched locations in each area receive shared inputs from the V1 population rather than a distinct but spatially interspersed subset. In fact, the authors conclude: “Interestingly, the location of the cell body within V1 was predictive of projection target for some recipient areas (Extended Data Fig. 8). Given the retinotopic organization of V1, this suggests that visual information from different parts of visual field may be preferentially distributed to specific target areas, which is consistent with recent findings (Zhuang et al., 2017)”. Given an injection covering a large portion of the retinotopic map, and the fact that feed-forward projections from V1 to HVAs carry coarse retinotopy, it is difficult to prove that the functional specializations noted in the HVA axons are retinotopy-independent. This would require measurement of receptive field location in the axonal boutons, which the authors did not perform (possibly because the SNR of calcium indicators prevented such measurements at the time).

      Another option would be to show that adjacent neurons in V1 that project to far-apart HVAs exhibit functional differences on par with those exhibited by neurons in very different parts of V1 due to retinotopy. In other words, the functional specificity of V1 inputs to HVAs at retinotopically identical locations would be of the same order as that which might be gained by retinotopic biases. To our knowledge, such a study has not been conducted, so we have decided to collect this data in collaboration with the Allen Institute. As part of the Allen Institute’s pioneering OpenScope project, we will make careful two-photon and electrophysiology measurements of functional properties, including receptive field location, spatial frequency (SF), and temporal frequency (TF), in different parts of the V1 retinotopic map. Pairing this data with existing Allen Institute datasets on the functional properties of neurons in the HVAs will allow us to rule in, or rule out, our hypotheses regarding retinotopy as the source of functional specialization in mouse HVAs. We will update the discussion in the revised manuscript to better reflect the need for additional evidence to support or refute our proposal.

      Meier AM et al., bioRxiv 2025 (Meier et al., 2025) was published after our submission, but we are thankful to the reviewers for drawing our attention to this timely paper. Given the recent findings on the influence of locomotion on rodent and primate visual cortex, it is very exciting to see clearly specialized circuits for processing self-generated visual motion in V1. However, it is difficult to rule out a role for retinotopy: the HVAs (LM, AL, RL) participating in the M2+ network, which is less responsive to self-generated visual motion, exhibit a bias for the medial portion of the visual field, whereas the HVA (PM) involved in the M2- network, which is responsive to self-generated visual motion, exhibits a bias for the lateral (or peripheral) parts of the visual field. For instance, a peripheral bias in area PM has been shown using retrograde tracing (Figure 6 of Morimoto et al., 2021), single-cell anterograde tracing (Extended Data Figure 8 of Han et al., 2018), and functional imaging (Zhuang et al., 2017). Recent findings in the marmoset also point to visual circuits in the peripheral, but not central, visual field being significantly modulated by self-generated movements (Rowley et al., 2024).

      However, a visual field bias in area PM, which selectively receives M2- inputs, is at odds with the clear presence of modular M2+/M2- patches across the entire map of V1 (Ji et al., 2015). One possibility supported by existing data is that neurons in M2- patches, as well as those in M2+ patches, in the central representation of V1 make fewer or significantly weaker connections with area PM compared to areas LM, AL, and RL. Evidence to the contrary would support retinotopy-independent and functionally specialized inputs from V1 to HVAs.

      (3) Some of the HVAs, such as AL, AM, and LI, appear to have redundant retinotopic coverage with other HVAs, such as LM and PM. Moreover, these regions have typically been found to have higher "hierarchy scores" based on connectivity (Harris JA et al., Nature, 2019; D'Souza RD et al., Nat Comm, 2022), though unfortunately, the hierarchy levels are not completely consistent between studies. Based on existing evidence, there is a reasonable argument to be made for a hybrid classification, in which some regions (e.g., LM, P, PM, and RL) are combined into a single V2 (though see point #2 above) while other HVAs are maintained as independent visual regions, distinct from V2. I don't expect the authors to revise their viewpoint in any way, but a more nuanced discussion of alternative classifications is warranted.

      We understand that such a proposal, which would combine only a subset of areas with matched field sign (LM, P, PM, and RL), would be less extreme and better received by the community. It would create a V2 with a smooth map, without reversals or significant redundant retinotopic coverage. However, the intuition we have built from our modeling studies suggests that both these areas and the other, smaller areas with negative field sign (AL, AM, LI) are a byproduct of a complex single map of the visual field that exhibits reversals as it contorts around the triangular and tear-shaped boundary of V1. In other words, we believe the redundant coverage and field-sign reversals are a byproduct of a single secondary visual field in V2 constrained by the cortical dimensions of V1. That being said, we understand that area delineations are based in part on a consensus within the community. We will therefore continue to discuss our proposal with community members, and we will incorporate new evidence supporting or refuting our hypothesis before we submit our revised manuscript.

      Reviewer #2 (Public review):

      Summary:

      The study by Rowley and Sedigh-Sarvestani presents modeling data suggesting that map reversals in mouse lateral extrastriate visual cortex do not coincide with areal borders, but instead represent borders between subregions within a single area V2. The authors propose that such an organization explains the partial coverage in higher-order areas reported by Zhuang et al., (2017). The scheme revisits an organization proposed by Kaas et al., (1989), who interpreted the multiple projection patches traced from V1 in the squirrel lateral extrastriate cortex as subregions within a single area V2. Kaas et al.'s interpretation was challenged by Wang and Burkhalter (2007), who used a combination of topographic mapping of V1 connections and receptive field recordings in mice. Their findings supported a different partitioning scheme in which each projection patch mapped a specific topographic location within single areas, each containing a complete representation of the visual field. The area map of mouse visual cortex by Wang and Burkhalter (2007) has been reproduced by hundreds of studies and has been widely accepted as ground truth (CCF) (Wang et al., 2020) of the layout of rodent cortex. In the meantime, topographic mappings in marmoset and tree shrew visual cortex made a strong case for map reversals in lateral extrastriate cortex, which represent borders between functionally diverse subregions within a single area V2. These findings from non-rodent species raised doubts about whether during evolution, different mammalian branches have developed diverse partitioning schemes of the cerebral cortex. Rowley and Sedigh-Sarvestani favor a single master plan in which, across evolution, all mammalian species have used a similar blueprint for subdividing the cortex.

      Strengths:

      The story illustrates the enduring strength of science in search of definitive answers.

      Weaknesses:

      To me, it remains an open question whether Rowley and Sedigh-Sarvestani have written the final chapter of the saga. A key reason for my reservation is that the area maps used in their model are cherry-picked. The article disregards published complementary maps, which show that the entire visual field is represented in multiple areas (i.e., LM, AL) of lateral extrastriate cortex and that the map reversal between LM and AL coincides precisely with the transition in m2AChR expression and cytoarchitecture (Wang and Burkhalter, 2007; Wang et al., 2011). Evidence from experiments in rats supports the gist of the findings in the mouse visual cortex (Coogan and Burkhalter, 1993).

      We would not claim to have written the final chapter of the saga. Our goal was to add an important piece of new evidence to the discussion of area delineations across species. We believe this new evidence supports our unification hypothesis.  We also believe that there are several missing pieces of data that could support or refute our hypothesis. We have begun a collaboration to collect some of this data.  

      (1) The selective use of published evidence, such as the complete visual field representation in higher visual areas of lateral extrastriate cortex (Wang and Burkhalter, 2007; Wang et al., 2011) makes the report more of an opinion piece than an original research article that systematically analyzes the area map of mouse visual cortex we have proposed. No direct evidence is presented for a single area V2 with functionally distinct subregions.

      This brings up a nuanced issue regarding visual field coverage. Figure 6 of Wang & Burkhalter, 2007 shows the receptive fields of sample neurons in area LM that cover the full range between 0 and 90 degrees of azimuth and -40 to 80 degrees of elevation, which essentially matches the visual field coverage in V1. However, we do not know whether these neurons are representative of most neurons in area LM. In other words, while these single-cell recordings along selected contours in cortex show the span of the visual field coverage, they may not capture crucial information about its shape, missing regions of the visual field, or potential biases. To mitigate this, visual field maps measured with electrophysiology are commonly produced by even sampling across the two dimensions of the visual area, either by moving a single electrode along a grid pattern (e.g., Manger et al., 2002) or by using a grid-like multi-electrode probe (e.g., Yu et al., 2020). This was not carried out in either Wang & Burkhalter 2007 or Wang et al. 2011. Even sampling of cortical space is time-consuming and difficult with electrophysiology, but efficient with functional imaging. Therefore, despite the likely underestimation of visual field coverage, imaging techniques are valuable in that they can efficiently reveal not only the span of the visual field coverage of a cortical region, but also its shape and bias.

      Multiple functional imaging studies that simultaneously measure visual field coverage in V1 and HVAs report a bias in the coverage of HVAs, relative to that in V1 (Garrett et al., 2014; Juavinett et al., 2018; Zhuang et al., 2017). While functional imaging will likely underestimate receptive fields compared to electrophysiology, the consistent observation of an orderly bias for distinct parts of the visual field across the HVAs suggests that at least some of the HVAs do not have full and uniform coverage of the visual field comparable to that in V1. For instance, (Garrett et al., 2014) show that the total coverage in HVAs, when compared to V1, is typically less than half (Figure 6D) and often irregularly shaped.
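      As a concrete, hypothetical illustration of the coverage comparisons discussed above (the function and variable names below are ours, not from any of the cited studies), the fraction of V1's coverage shared by an HVA, and the overlap between two HVAs, can be computed from binary coverage masks defined over a common grid of visual-field locations:

      ```python
      import numpy as np

      def coverage_fraction(hva_mask, v1_mask):
          """Fraction of the visual field covered by V1 that is also covered by an HVA.

          hva_mask, v1_mask: boolean 2-D arrays over the same grid of visual-field
          bins (e.g. 1-degree steps of azimuth x elevation); True where at least one
          receptive field in the area includes that location.
          """
          shared = np.logical_and(hva_mask, v1_mask).sum()
          return shared / v1_mask.sum()

      def coverage_overlap(mask_a, mask_b):
          """Overlap between two areas' coverage, as the Jaccard index of their masks."""
          intersection = np.logical_and(mask_a, mask_b).sum()
          union = np.logical_or(mask_a, mask_b).sum()
          return intersection / union
      ```

      Coverage fractions well below 1, with adjacent areas biased toward neighboring parts of the visual field, is the pattern described in the response above.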

      Careful measurements of single-cell receptive fields, using mesoscopic two-photon imaging across the HVAs would settle this question. As reviewer #1 points out, this is technically feasible, though no dataset of this kind exists to our knowledge.

      (2) The article misrepresents evidence by commenting that m2AChR expression is mainly associated with the lower field. This is counter to published findings showing that m2AChR spans across the entire visual field (Gamanut et al., 2018; Meier et al., 2021). The utility of markers for delineating areal boundaries is discounted, without any evidence, in disregard of evidence for distinct areal patterns in early development (Wang et al., 2011). That markers can be distributed non-uniformly within an area is well known. m2AChR is non-uniformly expressed in mouse V1, LM and LI (Ji et al., 2015; D'Souza et al., 2019; Meier et al., 2021). Recently, it has been found that the patchy organization within V1 plays a role in the organization of thalamocortical and intracortical networks (Meier et al., 2025). m2AChR-positive patches and m2AChR-negative interpatches organize the functionally distinct ventral and dorsal networks, notably without obvious bias for upper and lower parts of the visual field.

      We wrote that “Future work showed boundaries in labeling of histological markers such as SMI-32 and m2AChR labeling, but such changes mostly delineated area LM/AL (Wang et al., 2011) and seemed to be correlated with the representation of the lower visual field.” The latter statement regarding the representation of the lower visual field directly references the data in Figure 1 of (Wang et al., 2011), which is titled “Figure 1: LM/AL border identified by the transition of m2AChR expression coincides with receptive field recordings from lower visual field.” Like Wang et al., we were simply referring to the fact that the LM/AL border coincides with both a change in m2AChR expression and a representation of the lower visual field.

      (3) The study has adopted an area partitioning scheme, which is said to be based on anatomically defined boundaries of V2 (Zhuang et al., 2017). The only anatomical borders used by Zhuang et al. (2017) are those of V1 and barrel cortex, identified by cytochrome oxidase staining. In reality, the partitioning of the visual cortex was based on field sign maps, which are reproduced from Zhuang et al., (2017) in Figure 1A. It is unclear why the maps shown in Figures 2E and 2F differ from those in Figure 1A. It is possible that this is an oversight. But maintaining consistent areal boundaries across experimental conditions that are referenced to the underlying brain structure is critical for assigning modeled projections to areas or sub-regions. This problem is evident in Figure 2F, which is presented as evidence that the modeling approach recapitulates the tracings shown in Figure 3 of Wang and Burkhalter (2007). The dissimilarities between the modeling and tracing results are striking, unlike what is stated in the legend of Figure 2F.

      Thanks for this correction. By “anatomical boundaries of higher visual cortex”, we meant the cortical boundary between V1 and the higher order visual areas on one side, and the outer edge of the envelope that defines the functional boundaries of the HVAs in cortical space on the other (Zhuang et al., 2017). The reviewer is correct that we should have referred to these as functional boundaries. The word ‘anatomical’ was meant to refer to cortical space, rather than visual field space.

      More generally though, there is no disagreement between the partitioning of visual cortex in Figures 1 and 2. Rather, the partitioning in Figure 1 is taken directly from Zhuang et al., (2017), whereas that in Figure 2 is produced by simulation of a mathematical model. As such, one would not expect identical areal boundaries between Figure 2 and Figure 1. What we aimed to communicate with our modeling results is that a single area can exhibit multiple visual field reversals and retinotopic redundancies if it is constrained to fit around V1 and to cover a visual field approximately matched to the coverage in V1. We defined this area explicitly as a single area with a single visual field (boundaries shown in Figure 2A). So the point of our simulation is to show that even an explicitly defined single area can appear as multiple areas if it is constrained by the shape of mouse V1 and if visual field reversals are used to indicate areal boundaries. As in most models, different initial conditions and parameters produce a complex visual field, which will appear as multiple HVAs when delineated by the current criteria. What is consistent, however, is the existence of a complex single visual field that appears as multiple HVAs with partially overlapping coverage.
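      As a toy, one-dimensional sketch of this point (not the authors' actual simulation, whose details are not reproduced here), consider a single continuous map whose preferred azimuth folds back as one walks along the cortical belt surrounding V1. Each fold is a retinotopic reversal, and the stretches between folds necessarily revisit azimuths already covered, so a reversal-based criterion splits one map into several "areas":

      ```python
      import numpy as np

      # One walk along the belt of cortex surrounding V1 (arbitrary units), with a
      # folded azimuth map: a single visual field represented non-monotonically.
      cortical_position = np.linspace(0.0, 1.0, 301)
      preferred_azimuth = 45 * (1 + np.sin(3 * np.pi * cortical_position))  # degrees

      progression = np.sign(np.diff(preferred_azimuth))     # direction of retinotopic progression
      reversals = np.where(np.diff(progression) != 0)[0]    # indices where the direction flips
      segments = np.split(preferred_azimuth, reversals + 1)

      print(f"{len(reversals)} reversals -> {len(segments)} putative 'areas'")
      for i, seg in enumerate(segments):
          print(f"segment {i}: azimuth {seg.min():.0f}-{seg.max():.0f} deg")  # overlapping ranges
      ```

      With these parameters the script reports three reversals and four segments whose azimuth ranges overlap, which is the signature that, under the current criteria, would be read as several distinct areas even though a single map was defined.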

      Similarly, we would not expect a simple model to exactly reproduce the multi-color tracer injections in Wang and Burkhalter (2007). However, we find it quite compelling that a model explicitly designed to map a single visual field can produce multiple groups of multi-colored axonal projections beyond V1 that, under the current criteria, appear as multiple areas, each with its own map of the visual field. We will explain the results of the model, and their implications, better in the revised manuscript.

      (4) Rowley and Sedigh-Sarvestani find that the partial coverage of the visual field in higher order areas shown by Zhuang et al. (2017) is recreated by the model. It is important to caution that Zhuang et al.'s (2017) maps were derived from incomplete mappings of the visual field, which were confined to -25 to 35 deg of elevation. This underestimates the coverage we have found in LM and AL. Receptive field mappings show that LM covers 0-90 deg of azimuth and -30 to 80 deg of elevation (Wang and Burkhalter, 2007). AL covers at least 0-90 deg of azimuth and -30 to 50 deg of elevation (Wang and Burkhalter, 2007; Wang et al., 2011). These are important differences. Partial coverage in LM and AL underestimates the size of these areas and may map two projection patches as inputs to subregions of a single area rather than inputs to two separate areas. Complete, or nearly complete, visual representations in LM and AL support that each is a single area. Importantly, both areas are included in a callosal-free zone (Wang and Burkhalter, 2007). The surrounding callosal connections align with the vertical meridian representation. The single map reversal is marked by a transition in m2AChR expression and cytoarchitecture (Wang et al., 2011).

      This is a good point. We do not expect that expanding the coverage of V1 will change the results of the model significantly. However, for the revised manuscript, we will update V1 coverage to be accurate, repeat our simulations, and report the results.  

      (5) The statement that the "lack of visual field overlap across areas is suggestive of a lack of hierarchical processing" is predicated on the full acceptance of the mappings by Zhuang et al (2017). Based on the evidence reviewed above, the reclassification of visual areas proposed in Figure 1C seems premature.

      The reviewer is correct. In the revised manuscript, we will be careful to distinguish bias in visual field coverage across areas from presence or lack of visual field overlap.  

      (6) The existence of lateral connections is not unique to rodent cortex and has been described in primates (Felleman and Van Essen, 1991).

      (7) Why the mouse and rat extrastriate visual cortices differ from those of many other mammals is unclear. One reason may be that mammals with V2 subregions are strongly binocular.

      This is an interesting suggestion, and careful visual topography data from rabbits and other lateral eyed animals would help to evaluate it. For what it’s worth, tree shrews are lateral eyed animals with only 50 degrees of binocular visual field and also show V2 subregions.

      Reviewer #3 (Public review):

      Summary:

      The authors review published literature and propose that a visual cortical region in the mouse that is widely considered to contain multiple visual areas should be considered a single visual area.

      Strengths:

      The authors point out that relatively new data showing reversals of visual-field sign within known, single visual areas of some species require that a visual field sign change by itself should not be considered evidence for a border between visual areas.

      Weaknesses:

      The existing data are not consistent with the authors' proposal to consolidate multiple mouse areas into a single "V2". This is because the existing definition of a single area is that it cannot have redundant representations of the visual field. The authors ignore this requirement, as well as the data and definitions found in published manuscripts, and make an inaccurate claim that "higher order visual areas in the mouse do not have overlapping representations of the visual field". For quantification of the extent of overlap of representations between 11 mouse visual areas, see Figure 6G of Garrett et al. 2014. [Garrett, M.E., Nauhaus, I., Marshel, J.H., and Callaway, E.M. (2014). Topography and areal organization of mouse visual cortex. The Journal of Neuroscience 34, 12587-12600. 10.1523/JNEUROSCI.1124-14.2014.]

      Thank you for this correction; we admit we should have chosen our words more carefully. In the revised manuscript, we will emphasize that higher order visual areas in the mouse do have some overlap in their representations but also exhibit biases in their coverage. This is consistent with our proposal, and in fact our model simulations in Figure 2E also show overlapping representations along with differential biases in coverage. However, we also note that Figure 6 of Garrett et al. 2014 provides several pieces of evidence in support of our proposal that higher order areas are sub-regions of a single area V2. First, the visual field coverage of each area is significantly less than that of V1 (Garrett et al. 2014, Figure 6D). While the imaging methods used in Garrett et al. likely underestimate receptive fields, one would assume they similarly impact measurements of coverage in V1 and the HVAs. Second, each area exhibits a bias towards a different part of the visual field (Figure 6C and E); this bias is distinct for different areas but proceeds in a retinotopic manner around V1, with adjacent areas exhibiting biases for nearby regions of the visual field (Figure 6E). Thus, the biases in visual field coverage across HVAs appear to be related and not independent of each other. As we show with our modeling in Figure 2, such orderly and inter-related biases can be created from a single visual field constrained to share a border with mouse V1.

      With regards to the existing definition of a single area: we did not ignore the requirement that single areas cannot have redundant representations of the visual field. Rather, we believe that this requirement should be relaxed considering new evidence collected from other species, where multiple visual field reversals exist within the same visual area. We understand this issue is nuanced and was not made clear in the original submission.  

      In the revised manuscript, we will clarify that visual field reversals are often accompanied by redundant retinotopic representation on either side of the reversal, and that our argument that multiple reversals can exist within a single visual area in the mouse is therefore also an argument that some retinotopic redundancy can exist within single visual areas. Such a re-classification would align how we define visual areas in mice with the existing classifications in tree shrews, ferrets, cats, and primates, all of which have secondary visual areas with complex retinotopic maps exhibiting multiple reversals and redundant retinotopic coverage.

    1. Author response:

      We sincerely thank the reviewers for the time and care they have invested in evaluating our manuscript. We greatly appreciate their thoughtful feedback, which highlights both the strengths and the areas where the work can be improved. We recognize the importance of the concerns raised, particularly regarding the TMS analyses and interpretation, as well as aspects of the manuscript's structure and clarity. We are committed to transparency and a rigorous scientific process, and we will therefore carefully consider all reviewer comments. In the coming months, we will revise the manuscript to incorporate additional analyses, provide clearer methodological detail, and refine the interpretation of the stimulation results.

    2. Reviewer #4 (Public review):

      Summary:

      Several behavioral experiments and one TMS experiment were performed to examine adaptation to room reverberation for speech intelligibility in noise. This is an important topic that has been extensively studied by several groups over the years. The study is unique in that it examines one candidate brain area, dlPFC, potentially involved in this learning, and finds that disrupting this area by TMS results in a reduction in the learning. The behavioral conditions are in many ways similar to those of previous studies. However, the authors find results that do not match previous results (e.g., performance in the anechoic condition is worse than in reverberation), making it difficult to assess the validity of the methods used. One unique aspect of the behavioral experiments is that Ambisonics was used to simulate the spaces, while headphone simulation was mostly used previously. The main behavioral experiment was performed by interleaving 3 different rooms and measuring speech intelligibility as a function of the number of words preceding the target in a given room on a given trial. The findings are that performance improves on the time scale of seconds (as the number of words preceding the target increases), but also on a much larger time scale of tens to hundreds of seconds (corresponding to multiple trials), while for some listeners it is degraded for the first couple of trials. The study also finds that performance is best in the room that matches the T60 most commonly observed in everyday environments. These are potentially interesting results. However, there are issues with the design of the study and analysis methods that make it difficult to verify the conclusions based on the data.

      Strengths:

      (1) Analysis of the adaptation to reverberation on multiple time scales, for multiple reverberant and anechoic environments, and also considering contextual effects of one environment interleaved with the other two environments.

      (2) TMS experiment showing reduction of some of the learning effects by temporarily disabling the dlPFC.

      Weaknesses:

      While the study examines the adaptation for different carrier lengths, it keeps multiple characteristics (mainly talker voice and location) fixed in addition to reverberation. Therefore, it is possible that the subjects adapt to other aspects of the stimuli, not just to reverberation. A condition in which only reverberation would switch for the target would allow the authors to separate these confounding alternatives. Now, the authors try to address the concerns by indirect evidence/analyses. However, the evidence provided does not appear sufficient.

      The authors use terms that are either not defined or that seem to be defined incorrectly. The main issue concerns the results, which are based on analysis of what the authors call d', Hit Rate, and Final Hit Rate. First, they switch between these measures seemingly at random. Second, it is not clear how they define them, given that their responses are either 4-alternative or 8-alternative forced choice. d', Hit Rate, and False Alarm Rate are defined in signal detection theory for detection of the presence of a target, and can easily be extended to a 2-alternative forced choice. But how does one define a Hit, and, in particular, a False Alarm, in a 4- or 8-alternative task? The authors do not state how they did it, and without that, the computation of d' based on HR and FAR is dubious. Also, what the authors call Hit Rate is presumably the percent correct performance (PCC), but even that is not clear. They then use FHR as if it were the asymptotic value of their HR, even though in many conditions the learning has not ended, and they arbitrarily define a variable of +-10 from FHR, which must produce different results depending on whether the asymptote was reached or not. Other examples of strange usage of terms: they talk about "global likelihood learning" (L426) without a definition or a reference, and about "cumulative hit rate" (L1738), where it is not clear to me what "cumulative" means there.
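      For reference, the quantity the reviewer refers to is the standard yes/no signal-detection statistic sketched below (a generic illustration, not the authors' analysis code). The reviewer's point is that for a 4- or 8-alternative forced choice there is no single obvious "false alarm rate" to insert into it unless the scoring rule is spelled out.

      ```python
      from scipy.stats import norm

      def dprime_yes_no(hit_rate, false_alarm_rate):
          """Signal-detection d' for a yes/no detection task: d' = z(HR) - z(FAR),
          where z is the inverse of the standard normal CDF. Rates of exactly 0 or 1
          must be adjusted beforehand (e.g. by 1/(2N)), otherwise the z-transform is
          infinite."""
          return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

      # Example: HR = 0.80 and FAR = 0.20 give d' of about 1.68.
      print(dprime_yes_no(0.80, 0.20))
      ```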

      There are not enough acoustic details about the stimuli. The authors find that reverberant performance is overall better than anechoic performance in 2 rooms. This is contrary to previous results, and the authors do not provide enough acoustic details to establish that it is not an artefact of how the stimuli were normalized (e.g., what were the total signal and noise levels at the two ears in the anechoic and reverberant conditions?).

      There are some concerns about the use of statistics. For example, the authors perform a two-way ANOVA (L724-728) in which one factor is room, but that factor does not have the same 3 levels across the two levels of the other factor. Also, in some comparisons, they randomly select 11 out of 22 subjects, even though appropriate tests correct for such imbalances without adding the additional randomness of whether the 11 selected subjects happened to be the good or the bad ones.

      Details of the experiments are not sufficiently described in the methods (L194-205) to be able to follow what was done. It should be stated that 1 main experiment was performed using 3 rooms, and that 3 follow-ups were done on a new set of subjects, each with the room swapped.

    3. Reviewer #3 (Public review):

      Summary:

      This manuscript presents a well-designed and insightful behavioural study examining human adaptation to room acoustics, building on prior work by Brandewie & Zahorik. The psychophysical results are convincing and add incremental but meaningful knowledge to our understanding of reverberation learning. However, I find the transcranial magnetic stimulation (TMS) component to be over-interpreted. The TMS protocol, while interesting, lacks sufficient anatomical specificity and mechanistic explanation to support the strong claims made regarding a unique role of the dorsolateral prefrontal cortex (dlPFC) in this learning process. More cautious interpretation is warranted, especially given the modest statistical effects, the fact that the main TMS result of interest is a null result, the imprecise targeting of dlPFC (which is not validated), and the lack of knowledge about the timescale of TMS effects in relation to the behavioural task. I recommend revising the manuscript to shift emphasis toward the stronger behavioural findings and to present a more measured and transparent discussion of the TMS results and their limitations.

      Strengths:

      (1) Well-designed acoustical stimuli and psychophysical task.

      (2) Comparisons across room combinations are well conducted.

      (3) The virtual acoustic environment is impressive and applied well here.

      (4) A timely study with interesting behavioural results.

      Weaknesses:

      (1) Lack of hypotheses, particularly for TMS.

      (2) Lack of evidence for targeting TMS in [brain] space and time.

      (3) The most interesting effect of TMS is a null result, compared to a weak statistical effect for "meta adaptation".

    4. Reviewer #2 (Public review):

      Summary:

      This study investigated how listeners adapt to and utilize statistical properties of different acoustic spaces to improve speech perception. The researchers used repetitive TMS to perturb neural activity in DLPFC, inhibiting statistical learning compared to sham conditions. The authors also identified the most effective room types for the effective use of reverberations in speech in noise perception, with regular human-built environments bringing greater benefits than modified rooms with lower or higher reverberation times.

      Strengths:

      The introduction and discussion sections of the paper are very interesting and highlight the importance of the current study, particularly with regard to the use of ecologically valid stimuli in investigating statistical learning. However, they could be condensed in places. TMS parameters and task conditions were well considered and clearly explained.

      Weaknesses:

      (1) The Results section is difficult to follow and includes a lot of detail, which could be removed. As such, it presents as confusing and speculative at times.

      (2) The hypotheses for the study are not clearly stated.

      (3) Multiple statistical models are implemented without correcting the alpha value. This leaves the analyses vulnerable to Type I errors.

      (4) It is confusing to understand how many discrete experiments are included in the study as a whole, and how many participants are involved in each experiment.

      (5) The TMS study is significantly underpowered and not robust. Sample size calculations need further explanation (effect sizes appear to be based on behavioural studies?). I would suggest presenting these data as exploratory, and calculating a posteriori the full sample size required based on effect sizes observed in the TMS data.

    5. Reviewer #1 (Public review):

      Summary:

      This manuscript describes the results of an experiment that demonstrates a disruption in statistical learning of room acoustics when transcranial magnetic stimulation (TMS) is applied to the dorsolateral prefrontal cortex in human listeners. The work uses a testing paradigm designed by the Zahorik group that has shown improvement in speech understanding as a function of listening exposure time in a room, presumably through a mechanism of statistical learning. The manuscript is comprehensive and clear, with detailed figures that show key results. Overall, this work provides an explanation for the mechanisms that support such statistical learning of room acoustics and, therefore, represents a major advancement for the field.

      Strengths:

      The primary strength of the work is its simple and clear result, that the dorsolateral prefrontal cortex is involved in human room acoustic learning.

      Weaknesses:

      A potential weakness of this work is that the manuscript is quite lengthy and complex.

    6. eLife Assessment:

      This study addresses valuable questions about the neural mechanisms underlying statistical learning of room acoustics, combining robust behavioral measures with non-invasive brain stimulation. The behavioral findings are strong and extend previous work in psychoacoustics, but the TMS results are modest, with methodological limitations and over-interpretation that weaken the mechanistic conclusions. The strength of evidence is therefore incomplete, and a more cautious interpretation of the stimulation findings, alongside strengthened analyses, would improve the manuscript.

    1. Length TBD based on project pitch: remix or re-imagine a paper youʼve written for class in a creative format (playbill, skit, poem, sketch or painting, photo essay, trailer

      super excited for this!! and I think this is a great idea as a final project. it'll be easier to think about what I want to do during the semester.

    2. What is the history of empathy from a philosophical and historical perspective in the EuroAmerican tradition and beyond

      I think this question and the following questions are really important to note; they're questions that are not normally asked when thinking about musical theater.

    1. However, tangible changes in the operation of businesses and governments have not been dramatic, especially compared with the scale and urgency of the issue.

      !!!!!

    1. And proofread before submission!!!

      It always helps me to proofread out loud if I can just so I can avoid my brain accidentally skipping words or filling in words I didn't actually write.

    2. upper right corner

      I'm unsure of how to add this into my essay, especially the signature. I'll be sure to ask about it before I submit my essays.

    3. It is my hope that this course will therefore dispel two myths about writing: 1.) it is merely an academic exercise and 2.) it refers narrowly to setting words on paper or screen.

      We can't define writing so narrowly as it rather encompasses a multitude of things. It can be used in academics but also for pleasure. It isn't merely words but actually a whole new world waiting to be explored.


    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      We thank the reviewers for their careful assessment and enthusiastic appreciation of our work.

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      In this article, Thomas et al. use a super-resolution approach in living cells to track proteins involved in the fusion event of sexual reproduction. They study the spatial organization and dynamics of the actin fusion focus, a key structure in cell-cell fusion in Schizosaccharomyces pombe. The researchers have adapted a high-precision centroid mapping method using three-color live-cell epifluorescence imaging to map the dynamic architecture of the fusion focus during yeast mating. The approach relies on tracking the centroid of fluorescence signals for proteins of interest, spatially referenced to Myo52-mScarlet-I (as a robust marker) and temporally referenced using a weakly fluorescent cytosolic protein (mRaspberry), which redistributes strongly upon fusion. The trajectories of five key proteins, including markers of polarity, cytoskeleton, exocytosis and membrane fusion, were compared to Myo52 over a 75-minute window spanning fusion. Their observations indicate that secretory vesicles maintain a constant distance from the plasma membrane whereas the actin network compacts. Most importantly, they discovered a positive feedback mechanism in which myosin V (Myo52) transports Fus1 formin along pre-existing actin filaments, thereby enhancing aster compaction.
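      For readers unfamiliar with centroid-based localization, the sketch below shows the generic intensity-weighted centroid calculation on which such high-precision mapping rests. It is a minimal illustration under our own assumptions (NumPy, a single background-subtracted channel at a single time point), not the authors' actual analysis pipeline.

      ```python
      import numpy as np

      def intensity_centroid(image, background=0.0):
          """Intensity-weighted centroid of a fluorescence spot, in pixel coordinates.

          image: 2-D array of pixel intensities (one channel, one time point).
          Subtracting a background estimate before weighting matters: a constant
          offset pulls the centroid toward the center of the cropped region.
          """
          img = np.clip(image.astype(float) - background, 0.0, None)
          total = img.sum()
          ys, xs = np.indices(img.shape)
          return (xs * img).sum() / total, (ys * img).sum() / total
      ```

      Presumably the same kind of calculation is applied per channel and per time point in the paper's scheme, with the Myo52 centroid serving as the spatial reference; the precision of such estimates improves roughly with the square root of the number of collected photons, which is how centroid tracking reaches well below the diffraction limit.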

      This article is well written, the arguments are convincing and the assertions are balanced. The centroid tracking method has been clearly and solidly controlled. Overall, this is a solid addition to our understanding of cytoskeletal organization in cell fusion.

      Major comments: No major comment.

      Minor comments:

      Page 8: the authors wrote "Upon depletion of Myo52, Ypt3 did not accumulate at the fusion focus (Figure 3C). A thin, wide localization at the fusion site was occasionally observed (Figure 3C, Movies S3)". Is there a quantification of this accumulation in the mutant?

      We will provide the requested quantification. The localization is very faint, so we are not sure that quantification will capture this faithfully, but we will try.

      The frame rate of the movies could be improved for reader comfort: for example, movie S6 lasts 0.5 sec.

      We agree that the frame rates of movies S3 and S6 could be improved. We will provide them with slower frame rates.

      Reviewer #1 (Significance (Required)):

      This study represents a conceptual and technical breakthrough in our understanding of cytoskeletal organization during cell-cell fusion. The authors introduce a high-precision, three-color live-cell centroid mapping method capable of resolving the spatio-temporal dynamics of protein complexes at the nanometer scale in living yeast cells. This methodological innovation enables systematic and quantitative mapping of the dynamic architecture of proteins at the cell fusion site, making it a powerful live-cell imaging approach. However, it is important to keep in mind that the increased precision achieved through averaging comes at the expense of overlooking atypical or outlier behaviors. The authors discovered a myosin V-dependent mechanism for the recruitment of formin that leads to actin aster compaction. The identification of Myo52 (myosin V) as a transporter of Fus1 (formin) to the fusion focus adds a new layer to our understanding of how polarized actin structures are generated and maintained during developmentally regulated processes such as mating.

      Previous studies have shown the importance of formins and myosins during fusion, but this paper provides a quantitative and dynamic mapping that demonstrates how Myo52 modulates Fus1 positioning in living cells. This provides a better understanding of actin organization, beyond what has been demonstrated by fixed-cell imaging or genetic perturbation.

      Audience: Cell biologists working on actin dynamics, cell-cell fusion and intracellular transport. Scientists involved in live-cell imaging, single particle tracking and cytoskeleton modeling.

      I have expertise in live-cell microscopy, image analysis, fungal growth machinery and actin organization.

      We thank the reviewer for their appreciation of our work.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      A three-color imaging approach using centroid tracking is employed to determine the high-resolution position over time of tagged actin fusion focus proteins during mating in fission yeast. In particular, the positions of different protein components (tagged in a third color) were determined in relation to the position (and axis) of the molecular motor Myo52, which is tagged with two different colors in the mating cells. Furthermore, time is normalized by the rapid diffusion of a weak fluorescent protein probe (mRaspberry) from one cell to the other upon fusion pore opening. From this approach, multiple important mechanistic insights were obtained regarding the compaction of fusion focus proteins during mating, including the general compaction of different components as fusion proceeds, with different proteins showing specific stereotypical behaviors that point to the underlying molecular events. For example, secretory vesicles remain a constant distance from the plasma membrane, whereas the formin Fus1 rapidly accumulates at the fusion focus in a Myo52-dependent manner.

      I have minor suggestions/points: (1) Figure 1, for clarity it would be helpful if the cells shown in B were in the same orientation as the cartoon cells shown in A. Similarly, it would be helpful to have the orientation shown in D the same as the data that is subsequently presented in the rest of the manuscript (such as Figure 2) where time is on the X axis and distance (position) is on the Y axis.

      We have turned each image in panel B by 180° to match the cartoon in A. For panel D, we are not sure what the reviewer would like. This panel shows the coordinates of each Myo52 position, whereas Figure 2 shows oriented distance (on the Y axis) over time (on the X axis). Perhaps the reviewer suggests that we should display panel D with a rotation onto the Y axis rather than the X axis. We feel that this would not bring more clarity and prefer to keep it as is.

      (2) Figure 2: for clarity, it would be useful to introduce how the position of Myo52 changes over time with respect to the fusion site (plasma membrane) earlier, and then come back to the positions of different proteins with respect to Myo52 shown in 2E. Currently the authors discuss this point after introducing Figure 2E, but it would be better for the reader to have this in mind beforehand.

      We have added a sentence at the start of the section describing Figure 2, pointing out that the static appearance of Myo52 is due to it being used as reference, but that in reality, it moves relative to the plasma membrane: “Because Myo52 is the reference, its trace is flat, even though in reality Myo52 also moves relative to other proteins and the plasma membrane (see Figure 2E)”. This change is already in the text.

      (3) First sentence of page 8 "..., peaked at fusion time and sharply dropped post-fusion (Figure S3)." Figure S3 should be cited so that the reader knows where this data is presented.

      Thanks, we have added the missing figure reference to the text.

      (4) Figure 3D-H, why is Exo70 used as a marker for vesicles instead of Ypt3 for these experiments? Exo70 seems to have a more confusing localization than Ypt3 (3C vs 3D), which seems to complicate interpretations.

      There are two main reasons for this choice. First, the GFP-Ypt3 fluorescence intensity is lower than that of Exo70-GFP, which makes analysis more difficult and less reliable. Second, in contrast to Exo70-GFP, where the endogenous gene is tagged at the native genomic locus, GFP-Ypt3 is expressed as an additional copy alongside the endogenous untagged Ypt3. Although GFP-Ypt3 was reported to be fully functional, as it can complement the lethality of a ypt3 temperature-sensitive mutant (Cheng et al, MBoC 2002), its expression levels are non-native and we do not have a strain in which ypt3 is tagged at the 5’ end at the native genomic locus. For these reasons, we preferred to examine the localization of Exo70 in detail. We do not think this complicates interpretations: Exo70 faithfully decorates vesicles and exhibits the same localization as Ypt3 in WT cells (see Figure 2D) and in myo52-AID (see Figure 3C-D). We realize that our text was a bit confusing, as we contrasted the localization of Exo70 and Ypt3 when all we wanted to state was that the Exo70-GFP signal is stronger. We have corrected this in the text.

      (5) Page 10, end of first paragraph, "We conclude...and promotes separation of Myo52 from the vesicles." This is an interesting hypothesis/interpretation that is consistent with the spatio-temporal organization of vesicles and the compacting fusion focus, but the underlying molecular mechanism has not been established.

      This is an interpretation that is in line with our data. Firm conclusion that the organization of the actin fusion focus imposes a steric barrier to bulk vesicle entry will require in vitro reconstitution of an actin aster driven by formin-myosin V feedback and addition of myosin V vesicle-like cargo, which can be a target for future studies. To make clear that it is an interpretation and not a definitive statement, we have added “likely” to the sentence, as in: “We conclude that the distal position of vesicles in WT cells is a likely steric consequence of the architecture of the fusion focus, which restricts space at the center of the actin aster and promotes separation of Myo52 from the vesicles”.

      (6) Figure 5F and 5G, the results are confusing and should be discussed further. Depletion of Myo52 decreases Fus1 long-range movements, indicating that Fus1 is being transported by Myo52 (5F). Similarly, the Fus1 actin assembly mutant greatly decreases Fus1 long-range movements and prevents Myo52 binding (5G), perhaps indicating that Fus1-mediated actin assembly is important. It seems the authors' interpretations are oversimplified.

      We show that Myo52 is critical for Fus1 long-range movements, as stated by the reviewer. We also show that Fus1-mediated actin assembly is important. The question is in what way.

      One possibility is that FH2-mediated actin assembly powers the movement, which in this case would represent the displacement of the formin due to actin monomer addition at the end of the polymerizing filament. A second possibility is that actin filaments assembled by Fus1 somehow help Myo52 move Fus1. This could be, for instance, because Fus1-assembled actin filaments are preferred tracks for Myo52-mediated movements, or because they allow Myo52 to accumulate in the vicinity of Fus1, enhancing their chance encounters and thus the number of long-range movements (on any actin track). Based on the analysis of the K1112A point mutant in the Fus1 FH2 domain, our data cannot discriminate between these options, which is why we concluded that the mutant allele does not allow us to make a firm conclusion. However, the Myo52-dependence clearly shows that a large fraction of the movements requires the myosin V. We have clarified the end of the paragraph in the following way: “Therefore, analysis of the K1112A mutant phenotype does not allow us to clearly distinguish Fus1-powered from Myo52-powered movements. Future work will be required to test whether, in addition to myosin V-dependent transport, Fus1-mediated actin polymerization also directly contributes to Fus1 long-range movements.”

      (7) Figure 6, why not measure the fluorescence intensity of Fus1 as a proxy for the number of Fus1 molecules (rather than the width of the Fus1 signal), which seems to be the more straight-forward analysis?

      The aim of the measurement was to test whether Myo52 and Fus1 activity help focalize the formin at the fusion site, not whether they are required for its localization to this region. This is why we measured the lateral spread of the signal (its width) rather than its fluorescence intensity. We know from previous work that Fus1 localizes to the shmoo tip independently of myosin V (Dudin et al, JCB 2015), and we also show this in Figure 6. However, the precise distribution of Fus1 is wider in the absence of the myosins.

      We can and will measure intensities to test whether there is also a quantitative difference in the number of molecules at the shmoo tip.
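      As an illustration of one common way to quantify such a lateral spread (we do not know the exact metric used, so this is a hypothetical sketch), the full width at half maximum of an intensity profile drawn along the cell-cell contact can be computed as follows:

      ```python
      import numpy as np

      def profile_fwhm(profile, pixel_size_um):
          """Full width at half maximum of a 1-D, background-subtracted intensity
          profile taken along the cell-cell contact (perpendicular to the fusion
          axis), in micrometers, with linear interpolation of the two half-maximum
          crossings around the peak."""
          prof = np.asarray(profile, dtype=float)
          half = prof.max() / 2.0
          above = np.where(prof >= half)[0]
          left, right = float(above[0]), float(above[-1])
          i, j = int(left), int(right)
          if i > 0:  # interpolate the rising crossing between samples i-1 and i
              left = i - (prof[i] - half) / (prof[i] - prof[i - 1])
          if j < len(prof) - 1:  # interpolate the falling crossing between j and j+1
              right = j + (prof[j] - half) / (prof[j] - prof[j + 1])
          return (right - left) * pixel_size_um
      ```

      A wider value in myosin mutants than in WT cells would correspond to the broader lateral distribution of Fus1 described above.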

      (8) Figure 7, the authors should note (and perhaps discuss) any evidence as to whether activation of Fus1 to facilitate actin assembly depends upon Fus1 dissociating from Myo52 or whether Fus1 can be activated while still associated with Myo52, as both circumstances are included in the figure.

      This is an interesting point. We have no experimental evidence for or against Fus1 dissociating from Myo52 to assemble actin. However, it is known that formins rotate along the actin filament double helix as they assemble it, a movement that seems poorly compatible with processive transport by myosin V. In Figure 7, we do not intend to imply whether or not Myo52 associates with Fus1 that is linked to an actin filament. The figure serves to illustrate the focusing mechanism of a myosin V transporting a formin, which is more evident when we draw the formin attached to a filament end. We have now added a sentence to the figure legend to clarify this point: “Note that it is unknown whether Myo52 transports Fus1 associated or not with an actin filament.”

      (9) Figure 7, the color of secretory vesicles should be the same in A and B.

      This is now corrected.

      Reviewer #2 (Significance (Required)):

      This is an impactful and high quality manuscript that describes an elegant experimental strategy with important insights determined. The experimental imaging strategy (and analysis), as well as the insight into the pombe mating fusion focus and its comparison to other cytoskeletal compaction events will be of broad scientific interest.

      We thank the reviewer for their appreciation of our work.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Summary:

      Fission yeast cell-cell fusion during mating is mediated by an actin-based structure called the 'fusion focus', which orchestrates actin polymerization by the mating-specific formin, Fus1, to direct polarized secretion towards the mating site. In the current study, Thomas and colleagues quantitatively map the spatial distribution of proteins mediating cell-cell fusion using a three-color fluorescence imaging methodology in the fission yeast Schizosaccharomyces pombe. Using Myo52 (Type V myosin) as a fluorescence reference point, the authors discover that proteins known to localize to the fusion focus have distinct spatial distributions and accumulation profiles at the mating site. Myo52 and Fus1 form a complex in vivo detected by co-immunoprecipitation and each contribute to directing secretory vesicles to the fusion focus. Previous work from this group has shown that the intrinsically disordered region (IDR) of Fus1 plays a critical role in forming the fusion focus. Here, the authors swap out the IDR of fission yeast Fus1 for the IDR of an unrelated mammalian protein, coincidentally called 'fused in sarcoma' (FUS). They express the Fus1∆IDR-FUSLC-27R chimera in mitotically dividing fission yeast cells, where Fus1 is not normally expressed, and discover that the Fus1∆IDR-FUSLC-27R chimera can travel with Myo52 on actively polymerizing actin cables. Additionally, they show that acute loss of Myo52 or Fus1 function, using Auxin-Inducible Degradation (AID) tags and point mutations, impair the normal compaction of the fusion focus, suggesting that direct interaction and coordination of Fus1 and Myo52 helps shape this structure.

      Major Comments:

      (1) In the Results section for Figure 2, the authors claim that actin filaments become shorter and more cross-linked as they move away from the fusion site during mating, and suggest that this may be due to the presence of Myo51. However, the evidence to support this claim is not made clear. Is it supported by high-resolution electron microscopy of the actin filaments, or some other results? This needs to be clarified.

      Sorry if our text was unclear. The basis for the claim that actin filaments become shorter comes from our observation that the average position of tropomyosin and Myo51, both of which decorate actin filaments, is progressively closer to both Fus1 and the plasma membrane. Thus, the actin structure protrudes less into the cytosol as fusion progresses. The basis for claiming that Myo51 promotes actin filament crosslinking comes mainly from previously published papers, which showed that 1) Myo51 forms complexes with the Rng8 and Rng9 proteins (Wang et al, JCB 2014), and 2) the Myo51-Rng8/9 complex not only binds actin through the Myo51 head domain but also binds tropomyosin-decorated actin through the Rng8/9 moiety (Tang et al, JCB 2016; reference 27 in our manuscript). We had also previously shown that these proteins are necessary for compaction of the fusion focus (Dudin et al, PLoS Genetics 2017; reference 28 in our manuscript). Except for measuring the width of the Fus1 distribution in myo51∆ mutants, which confirms previous findings, we did not re-investigate the function of Myo51 here.

      We have now re-written this paragraph to present the previous data more clearly: “The distal localization of Myo51 was mirrored by that of tropomyosin Cdc8, which decorates linear actin filaments (Figure 2B) (Hatano et al, 2022). The distal position of the bulk of Myo51-decorated actin filaments was confirmed using Airyscan super-resolution microscopy (Figure 2B, right). Thus, the average position of actin filaments and decreasing distance to Myo52 indicates they initially extend a few hundred nanometers into the cytosol and become progressively shorter as fusion proceeds. Previous work had shown that Myo51 cross-links and slides Cdc8-decorated actin filaments relative to each other (Tang et al, 2016) and that both proteins contribute to compaction of the fusion focus in the lateral dimension along the cell-cell contact area (perpendicular to the fusion axis) (Dudin et al, 2017). We confirmed this function by measuring the lateral distribution of Fus1 along the cell-cell contact area (perpendicular to the fusion axis), which was indeed wider in myo51∆ than WT cells (see below Figure 6A-B).”

      (2) In Figure 4, the authors comment that disrupting Fus1 results in more disperse Myo52 spatial distribution at the fusion focus, raising the possibility that Myo52 normally becomes focused by moving on the actin filaments assembled by Fus1. This can be tested by asking whether latrunculin treatment phenocopies the 'more dispersed' Myo52 localization seen in fus1∆ cells? If Myo52 is focused instead by its direct interaction with Fus1, the latrunculin treatment should not cause the same phenotype.

      This is in principle a good idea, though it is technically challenging because pharmacological treatment of cell pairs in fusion is difficult to do without disturbing pheromone gradients which are critical throughout the fusion process (see Dudin et al, Genes and Dev 2016). We will try the experiment but are unsure about the likelihood of technical success.

      We note however that a similar experiment was done previously on Fus1 overexpressed in mitotic cells (Billault-Chaumartin et al, Curr Biol 2022; Fig 1D). Here, Fus1 also forms a focus and latrunculin A treatment leads to Myo52 dispersion while keeping the Fus1 focus, which is in line with our proposal that Myo52 becomes focused by moving on Fus1-assembled actin filaments. Similarly, we showed in Figure 5B that Latrunculin A treatment of mitotic cells expressing Fus1∆IDR-FUSLC-27R also results in Myo52, but not Fus1 dispersion.

      (3) The Fus1∆IDR-FUSLC-27R chimera used in Figure 5 is an interesting construct to examine actin-based transport of formins in cells. I was curious if the authors could provide the rates of movement for Myo52 and for Fus1∆IDR-FUSLC-27R, both before and after acute depletion of Myo52. It would be interesting to see if loss of Myo52 alters the rate of movement, or instead the movement stems from formin-mediated actin polymerization.

      We will measure these rates.

      (4) Also, Myo52 is known to interact with the mitotic formin For3. Does For3 colocalize with Myo52 and Fus1∆IDR-FUSLC-27R along actin cables?

      This is an interesting question for which we do not have an answer. For technical reasons, we do not have the tools to co-image For3 with Fus1∆IDR-FUSLC-27R because both are tagged with GFP. We feel that this question goes beyond the scope of this paper.

      (5) If Fus1∆IDR-FUSLC-27R is active, does having ectopic formin activity in mitotic cells affect actin cable architecture? This could be assessed by comparing phalloidin staining for wildtype and Fus1∆IDR-FUSLC-27R cells.

      We are not sure what the purpose of this experiment is, or how informative it would be. If it is to evaluate whether Fus1∆IDR-FUSLC-27R is active, our current data already demonstrates this. Indeed, Fus1∆IDR-FUSLC-27R recruits Myo52 in an F-actin- and FH2 domain-dependent manner (shown in Figure 5B and 5G), which demonstrates that the Fus1∆IDR-FUSLC-27R FH2 domain is active. Even though Fus1∆IDR-FUSLC-27R assembles actin, we predict that its effect on general actin organization will be weak. Indeed, it is expressed under the endogenous fus1 promoter, leading to very low expression levels during mitotic growth, such that only a subset of cells exhibit a Fus1 focus. Furthermore, most of these Fus1 foci are at or close to cell poles, where linear actin cables are assembled by For3, such that they may not have a strong disturbing effect. Because analysis of actin cable organization by phalloidin staining is difficult (due to the more strongly staining actin patches), cells with a clear change in organization are predicted to be rare in the population, and the gain in knowledge would not be transformative, we are not keen to do this experiment.

      Minor Comments:

      Prior studies are referenced appropriately. Text and figures are clear and accurate. My only suggestion would be Figure 1E-H could be moved to the supplemental material, due to their extremely technical nature. I believe this would help the broad audience focus on the experimental design mapped out in Figure 1A-D.

      We are relatively neutral about this. If this suggestion is supported by the Editor, we can move these panels to supplement.

      Reviewer #3 (Significance (Required)):

      Significance: This study provides an improved imaging method for detecting the spatial distributions of proteins below 100 nm, providing new insights about how a relatively small cellular structure is organized. The use of three-color cell imaging to accurately measure accumulation rates of molecular components of the fusion focus provides new insight into the development of this structure and its roles in mating. This method could be applied to other multi-protein structures found in different cell types. This work uses rigorous genetic tools such as knockouts, knockdowns and point mutants to dissect the roles of the formin Fus1 and Type V myosin Myo52 in creating a proper fusion focus. The study could be improved by biochemical assays to test whether Myo52 and Fus1 directly interact, since the interaction is only shown by co-immunoprecipitation from extracts, which may reflect an indirect interaction.

      Indeed, future studies should dissect the Fus1-Myo52 interaction, to determine whether it is direct and identify mutants that impair it.

      I believe this work advances the cell-mating field by providing others with a spatial and temporal map of conserved factors arriving to the mating site. Additionally, they identified a way to study a mating specific protein in mitotically dividing cells, offering future questions to address.

      This study should appeal to a range of basic scientists interested in cell biology, the cytoskeleton, and model organisms. The three-colored quantitative imaging could be applied to defining the architecture of many other cellular structures in different systems. Myosin and actin scientists will be interested in how this work expands the interplay of these two fields.

      I am a cell biologist with expertise in live cell imaging, genetics and biochemistry.

      We thank the reviewer for their appreciation of our work.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      Summary:

      Fission yeast cell-cell fusion during mating is mediated by an actin-based structure called the 'fusion focus', which orchestrates actin polymerization by the mating-specific formin, Fus1, to direct polarized secretion towards the mating site. In the current study, Thomas and colleagues quantitatively map the spatial distribution of proteins mediating cell-cell fusion using a three-color fluorescence imaging methodology in the fission yeast Schizosaccharomyces pombe. Using Myo52 (Type V myosin) as a fluorescence reference point, the authors discover that proteins known to localize to the fusion focus have distinct spatial distributions and accumulation profiles at the mating site. Myo52 and Fus1 form a complex in vivo detected by co-immunoprecipitation and each contribute to directing secretory vesicles to the fusion focus. Previous work from this group has shown that the intrinsically disordered region (IDR) of Fus1 plays a critical role in forming the fusion focus. Here, the authors swap out the IDR of fission yeast Fus1 for the IDR of an unrelated mammalian protein, coincidentally called 'fused in sarcoma' (FUS). They express the Fus1∆IDR-FUSLC-27R chimera in mitotically dividing fission yeast cells, where Fus1 is not normally expressed, and discover that the Fus1∆IDR-FUSLC-27R chimera can travel with Myo52 on actively polymerizing actin cables. Additionally, they show that acute loss of Myo52 or Fus1 function, using Auxin-Inducible Degradation (AID) tags and point mutations, impair the normal compaction of the fusion focus, suggesting that direct interaction and coordination of Fus1 and Myo52 helps shape this structure.

      Major Comments:

      • In the Results section for Figure 2, the authors claim that actin filaments become shorter and more cross-linked as they move away from the fusion site during mating, and suggest that this may be due to the presence of Myo51. However, the evidence to support this claim is not made clear. Is it supported by high-resolution electron microscopy of the actin filaments, or some other results? This needs to be clarified.

      • In Figure 4, the authors comment that disrupting Fus1 results in more disperse Myo52 spatial distribution at the fusion focus, raising the possibility that Myo52 normally becomes focused by moving on the actin filaments assembled by Fus1. This can be tested by asking whether latrunculin treatment phenocopies the 'more dispersed' Myo52 localization seen in fus1∆ cells? If Myo52 is focused instead by its direct interaction with Fus1, the latrunculin treatment should not cause the same phenotype.

      • The Fus1∆IDR-FUSLC-27R chimera used in Figure 5 is an interesting construct to examine actin-based transport of formins in cells. I was curious if the authors could provide the rates of movement for Myo52 and for Fus1∆IDR-FUSLC-27R, both before and after acute depletion of Myo52. It would be interesting to see if loss of Myo52 alters the rate of movement, or instead the movement stems from formin-mediated actin polymerization.

      • Also, Myo52 is known to interact with the mitotic formin For3. Does For3 colocalize with Myo52 and Fus1∆IDR-FUSLC-27R along actin cables?

      • If Fus1∆IDR-FUSLC-27R is active, does having ectopic formin activity in mitotic cells affect actin cable architecture? This could be assessed by comparing phalloidin staining for wildtype and Fus1∆IDR-FUSLC-27R cells.

      Minor Comments:

      • Prior studies are referenced appropriately.

      • Text and figures are clear and accurate. My only suggestion would be Figure 1E-H could be moved to the supplemental material, due to their extremely technical nature. I believe this would help the broad audience focus on the experimental design mapped out in Figure 1A-D.

      Significance

      Significance: This study provides an improved imaging method for detecting the spatial distributions of proteins below 100 nm, providing new insights about how a relatively small cellular structure is organized. The use of three-color cell imaging to accurately measure accumulation rates of molecular components of the fusion focus provides new insight into the development of this structure and its roles in mating. This method could be applied to other multi-protein structures found in different cell types. This work uses rigorous genetic tools such as knockouts, knockdowns and point mutants to dissect the roles of the formin Fus1 and Type V myosin Myo52 in creating a proper fusion focus. The study could be improved by biochemical assays to test whether Myo52 and Fus1 directly interact, since the interaction is only shown by co-immunoprecipitation from extracts, which may reflect an indirect interaction.

      I believe this work advances the cell-mating field by providing others with a spatial and temporal map of conserved factors arriving to the mating site. Additionally, they identified a way to study a mating specific protein in mitotically dividing cells, offering future questions to address.

      This study should appeal to a range of basic scientists interested in cell biology, the cytoskeleton, and model organisms. The three-colored quantitative imaging could be applied to defining the architecture of many other cellular structures in different systems. Myosin and actin scientists will be interested in how this work expands the interplay of these two fields.

      I am a cell biologist with expertise in live cell imaging, genetics and biochemistry.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      A three-color imaging approach using centroid tracking is employed to determine the high-resolution position over time of tagged actin fusion focus proteins during mating in fission yeast. In particular, the positions of different protein components (tagged in a 3rd color) were determined in relation to the position (and axis) of the molecular motor Myo52, which is tagged with two different colors in the mating cells. Furthermore, time is normalized by the rapid diffusion of a weak fluorescent protein probe (mRaspberry) from one cell to the other upon fusion pore opening. From this approach, multiple important mechanistic insights were determined for the compaction of fusion focus proteins during mating, including the general compaction of different components as fusion proceeds, with different proteins having specific stereotypical behaviors that indicate underlying molecular mechanisms. For example, secretory vesicles remain a constant distance from the plasma membrane, whereas the formin Fus1 rapidly accumulates at the fusion focus in a Myo52-dependent manner.

      I have minor suggestions/points:

      (1) Figure 1, for clarity it would be helpful if the cells shown in B were in the same orientation as the cartoon cells shown in A. Similarly, it would be helpful to have the orientation shown in D the same as the data that is subsequently presented in the rest of the manuscript (such as Figure 2) where time is on the X axis and distance (position) is on the Y axis.

      (2) Figure 2, for clarity useful to introduce how the position of Myo52 changes over time with respect to the fusion site (plasma membrane) earlier, and then come back to the positions of different proteins with respect to Myo52 shown in 2E. Currently the authors discuss this point after introducing Figure 2E, but better for the reader to have this in mind beforehand.

      (3) First sentence of page 8 "..., peaked at fusion time and sharply dropped post-fusion (Figure S3)." Figure S3 should be cited so that the reader knows where this data is presented.

      (4) Figure 3D-H, why is Exo70 used as a marker for vesicles instead of Ypt3 for these experiments? Exo70 seems to have a more confusing localization than Ypt3 (3C vs 3D), which seems to complicate interpretations.

      (5) Page 10, end of first paragraph, "We conclude...and promotes separation of Myo52 from the vesicles." This is an interesting hypothesis/interpretation that is consistent with the spatial-temporal organization of vesicles and the compacting fusion focus, but the underlying molecular mechanism has not been concluded.

      (6) Figure 5F and 5G, the results are confusing and should be discussed further. Depletion of Myo52 decreases Fus1 long-range movements, indicating that Fus1 is being transported by Myo52 (5F). Similarly, the Fus1 actin assembly mutant greatly decreases Fus1 long-range movements and prevents Myo52 binding (5G), perhaps indicating that Fus1-mediated actin assembly is important. It seems the authors' interpretations are oversimplified.

      (7) Figure 6, why not measure the fluorescence intensity of Fus1 as a proxy for the number of Fus1 molecules (rather than the width of the Fus1 signal), which seems to be the more straightforward analysis?

      (8) Figure 7, the authors should note (and perhaps discuss) any evidence as to whether activation of Fus1 to facilitate actin assembly depends upon Fus1 dissociating from Myo52 or whether Fus1 can be activated while still associated with Myo52, as both circumstances are included in the figure.

      (9) Figure 7, the color of secretory vesicles should be the same in A and B.

      Significance

      This is an impactful and high quality manuscript that describes an elegant experimental strategy with important insights determined. The experimental imaging strategy (and analysis), as well as the insight into the pombe mating fusion focus and its comparison to other cytoskeletal compaction events will be of broad scientific interest.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      • In this article, Thomas et al. use a super-resolution approach in living cells to track proteins involved in the fusion event of sexual reproduction. They study the spatial organization and dynamics of the actin fusion focus, a key structure in cell-cell fusion in Schizosaccharomyces pombe. The researchers have adapted a high-precision centroid mapping method using three-color live-cell epifluorescence imaging to map the dynamic architecture of the fusion focus during yeast mating. The approach relies on tracking the centroid of fluorescence signals for proteins of interest, spatially referenced to Myo52-mScarlet-I (as a robust marker) and temporally referenced using a weakly fluorescent cytosolic protein (mRaspberry), which redistributes strongly upon fusion. The trajectories of five key proteins, including markers of polarity, cytoskeleton, exocytosis and membrane fusion, were compared to Myo52 over a 75-minute window spanning fusion. Their observations indicate that secretory vesicles maintain a constant distance from the plasma membrane whereas the actin network compacts. Most importantly, they discovered a positive feedback mechanism in which myosin V (Myo52) transports Fus1 formin along pre-existing actin filaments, thereby enhancing aster compaction.

      • This article is well written, the arguments are convincing and the assertions are balanced. The centroid tracking method has been clearly and solidly controlled. Overall, this is a solid addition to our understanding of cytoskeletal organization in cell fusion.

      Major comments:

      • No major comment.

      Minor comments:

      • Page 8 authors wrote "Upon depletion of Myo52, Ypt3 did not accumulate at the fusion focus (Figure 3C). A thin, wide localization at the fusion site was occasionally observed (Figure 3C, Movies S3)" : Is there a quantification of this accumulation in the mutant?

      • The framerate of movies could be improved for reader comfort: For example, movie S6 lasts 0.5 sec.

      Significance

      This study represents a conceptual and technical breakthrough in our understanding of cytoskeletal organization during cell-cell fusion. The authors introduce a high-precision, three-color live-cell centroid mapping method capable of resolving the spatio-temporal dynamics of protein complexes at the nanometer scale in living yeast cells. This methodological innovation enables systematic and quantitative mapping of the dynamic architecture of proteins at the cell fusion site, making it a powerful live-cell imaging approach. However, it is important to keep in mind that the increased precision achieved through averaging comes at the expense of overlooking atypical or outlier behaviors. The authors discovered a myosin V-dependent mechanism for the recruitment of formin that leads to actin aster compaction. The identification of Myo52 (myosin V) as a transporter of Fus1 (formin) to the fusion focus adds a new layer to our understanding of how polarized actin structures are generated and maintained during developmentally regulated processes such as mating.

      Previous studies have shown the importance of formins and myosins during fusion, but this paper provides a quantitative and dynamic mapping that demonstrates how Myo52 modulates Fus1 positioning in living cells. This provides a better understanding of actin organization, beyond what has been demonstrated by fixed-cell imaging or genetic perturbation.

      Audience: Cell biologists working on actin dynamics, cell-cell fusion and intracellular transport. Scientists involved in live-cell imaging, single particle tracking and cytoskeleton modeling.

      I have expertise in live-cell microscopy, image analysis, fungal growth machinery and actin organization.

    1. eLife Assessment

      This important study evaluates a model for multisensory correlation detection, focusing on the detection of correlated transients in visual and auditory stimuli. Overall, the experimental design is sound and the evidence is compelling. The synergy between the experimental and theoretical aspects of the article is strong, and the work will be of interest to both neuroscientists and psychologists working in the domain of sensory processing and perception.

    2. Reviewer #1 (Public review):

      Summary:

      Parise presents another instantiation of the Multisensory Correlation Detector model that can now accept stimulus-level inputs. This is a valuable development as it removes researcher involvement in the characterization/labeling of features and allows analysis of complex stimuli with a high degree of nuance that was previously unconsidered (i.e. spatial/spectral distributions across time). The author demonstrates the power of the model by fitting data from dozens of previous experiments including multiple species, tasks, behavioral modality, and pharmacological interventions.

      Strengths:

      One of the model's biggest strengths, in my opinion, is its ability to extract complex spatiotemporal co-relationships from multisensory stimuli. These relationships have typically been manually computed or assigned based on stimulus condition and often distilled to a single dimension or even single number (e.g., "-50 ms asynchrony"). Thus, many models of multisensory integration depend heavily on human preprocessing of stimuli and these models miss out on complex dynamics of stimuli; the lead modality distribution apparent in Figures 3b and c is provocative. I can imagine the model revealing interesting characteristics of the facial distribution of correlation during continuous audiovisual speech that have up to this point been largely described as "present" and almost solely focused on the lip area.

      Another aspect that makes the MCD stand out among other models is the biological inspiration and generalizability across domains. The model was developed to describe a separate process - motion perception - and in a much simpler organism - Drosophila. It could then describe a very basic neural computation that has been conserved across phylogeny (which is further demonstrated in the ability to predict rat, primate, and human data) and brain area. This aspect makes the model likely able to account for much more than what has already been demonstrated with only a few tweaks akin to the modifications described in this and previous articles from Parise.

      What allows this potential is that, as Parise and colleagues have demonstrated in those papers since our (re)introduction of the model in 2016, the MCD model is modular - both in its ability to interface with different inputs/outputs and its ability to chain MCD units in a way that can analyze spatial, spectral, or any other arbitrary dimension of a stimulus. This fact leaves wide open the possibilities for types of data, stimuli, and tasks a simplistic, neurally inspired model can account for.

      And so it's unsurprising (but impressive!) that Parise has demonstrated the model's ability here to account for such a wide range of empirical data from numerous tasks (synchrony/temporal order judgement, localization, detection, etc.) and behavior types (manual/saccade responses, gaze, etc.) using only the stimulus and a few free parameters. This ability is another of the model's main strengths that I think deserves some emphasis: it represents a kind of validation of those experiments - especially in the context of cross-experiment predictions.

      Finally, what is perhaps most impressive to me is that the MCD (and the accompanying decision model) does all this with very few (sometimes zero) free parameters. This highlights the utility of the model and the plausibility of its underlying architecture, but also helps to prevent extreme overfitting if fit correctly.

      Weaknesses:

      The model boasts an incredible versatility across tasks and stimulus configurations, and the overall scope of the model is to understand how, and what, relevant sensory information is extracted from a stimulus. We still need to exercise care when interpreting its parameters, especially considering the broader context of top-down control of perception and that some multisensory mappings may not be derivable purely from stimulus statistics (e.g., the complementary nature of some phonemes/visemes).

    3. Reviewer #2 (Public review):

      Summary:

      Building on previous models of multisensory integration (including their earlier correlation-detection framework used for non-spatial signals), the author introduces a population-level Multisensory Correlation Detector (MCD) that processes raw auditory and visual data. Crucially, it does not rely on abstracted parameters, as is common in normative Bayesian models, but rather works directly on the stimulus itself (i.e., individual pixels and audio samples). By systematically testing the model against a range of experiments spanning human, monkey, and rat data, the authors show that their MCD population approach robustly predicts perception and behavior across species with a relatively small (0-4) number of free parameters.

      Strengths:

      (1) Unlike prior Bayesian models that used simplified or parameterized inputs, the model here is explicitly computable from full natural stimuli. This resolves a key gap in understanding how the brain might extract "time offsets" or "disparities" from continuously changing audio-visual streams.

      (2) The same population MCD architecture captures a remarkable range of multisensory phenomena, from classical illusions (McGurk, ventriloquism) and synchrony judgments, to attentional/gaze behavior driven by audio-visual salience. This generality strongly supports the idea that a single low-level computation (correlation detection) can underlie many distinct multisensory effects.

      (3) By tuning model parameters to different temporal rhythms (e.g., faster in rodents, slower in humans), the MCD explains cross-species perceptual data without reconfiguring the underlying architecture.

      (4) The authors frame their model as a plausible algorithmic account of the Bayesian multisensory-integration models in Marr's levels of hierarchy.

      Weaknesses:

      What remains unclear is how the parameters themselves relate to stimulus quantities (like stimulus uncertainty), as is often straightforward in Bayesian models. A theoretical missing link is the explicit relationship between the parameters of the MCD models and those of a cue combination model, thereby bridging Marr's levels of hierarchy.

      Likely Impact and Usefulness

      The work offers a compelling unification of multiple multisensory tasks-temporal order judgments, illusions, Bayesian causal inference, and overt visual attention-under a single, fully stimulus-driven framework. Its success with natural stimuli should interest computational neuroscientists, systems neuroscientists, and machine learning scientists. This paper thus makes an important contribution to the field by moving beyond minimalistic lab stimuli, illustrating how raw audio and video can be integrated using elementary correlation analyses.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      Parise presents another instantiation of the Multisensory Correlation Detector model that can now accept stimulus-level inputs. This is a valuable development as it removes researcher involvement in the characterization/labeling of features and allows analysis of complex stimuli with a high degree of nuance that was previously unconsidered (i.e., spatial/spectral distributions across time). The author demonstrates the power of the model by fitting data from dozens of previous experiments, including multiple species, tasks, behavioral modalities, and pharmacological interventions.

      Thanks for the kind words!

      Strengths:

      One of the model's biggest strengths, in my opinion, is its ability to extract complex spatiotemporal co-relationships from multisensory stimuli. These relationships have typically been manually computed or assigned based on stimulus condition and often distilled to a single dimension or even a single number (e.g., "-50 ms asynchrony"). Thus, many models of multisensory integration depend heavily on human preprocessing of stimuli, and these models miss out on complex dynamics of stimuli; the lead modality distribution apparent in Figures 3b and c is provocative. I can imagine the model revealing interesting characteristics of the facial distribution of correlation during continuous audiovisual speech that have up to this point been largely described as "present" and almost solely focused on the lip area.

      Another aspect that makes the MCD stand out among other models is the biological inspiration and generalizability across domains. The model was developed to describe a separate process - motion perception - and in a much simpler organism - Drosophila. It could then describe a very basic neural computation that has been conserved across phylogeny (which is further demonstrated in the ability to predict rat, primate, and human data) and brain area. This aspect makes the model likely able to account for much more than what has already been demonstrated with only a few tweaks akin to the modifications described in this and previous articles from Parise.

      What allows this potential is that, as Parise and colleagues have demonstrated in those papers since our (re)introduction of the model in 2016, the MCD model is modular - both in its ability to interface with different inputs/outputs and its ability to chain MCD units in a way that can analyze spatial, spectral, or any other arbitrary dimension of a stimulus. This fact leaves wide open the possibilities for types of data, stimuli, and tasks a simplistic, neurally inspired model can account for.

      And so it's unsurprising (but impressive!) that Parise has demonstrated the model's ability here to account for such a wide range of empirical data from numerous tasks (synchrony/temporal order judgement, localization, detection, etc.) and behavior types (manual/saccade responses, gaze, etc.) using only the stimulus and a few free parameters. This ability is another of the model's main strengths that I think deserves some emphasis: it represents a kind of validation of those experiments, especially in the context of cross-experiment predictions (but see some criticism of that below).

      Finally, what is perhaps most impressive to me is that the MCD (and the accompanying decision model) does all this with very few (sometimes zero) free parameters. This highlights the utility of the model and the plausibility of its underlying architecture, but also helps to prevent extreme overfitting if fit correctly (but see a related concern below).

      We sincerely thank the reviewer for their thoughtful and generous comments. We are especially pleased that the core strengths of the model—its stimulus-computable architecture, biological grounding, modularity, and cross-domain applicability—were clearly recognized. As the reviewer rightly notes, removing researcher-defined abstractions and working directly from naturalistic stimuli opens the door to uncovering previously overlooked dynamics in complex multisensory signals, such as the spatial and temporal richness of audiovisual speech.

      We also appreciate the recognition of the model’s origins in a simple organism and its generalization across species and behaviors. This phylogenetic continuity reinforces our view that the MCD captures a fundamental computation with wide-ranging implications. Finally, we are grateful for the reviewer’s emphasis on the model’s predictive power across tasks and datasets with few or no free parameters—a property we see as key to both its parsimony and explanatory utility.

      We have highlighted these points more explicitly in the revised manuscript, and we thank the reviewer for their generous and insightful endorsement of the work.

      Weaknesses:

      There is an insufficient level of detail in the methods about model fitting. As a result, it's unclear what data the models were fitted and validated on. Were models fit individually or on average group data? Each condition separately? Is the model predictive of unseen data? Was the model cross-validated? Relatedly, the manuscript mentions a randomization test, but the shuffled data produces model responses that are still highly correlated to behavior despite shuffling. Could it be that any stimulus that varies in AV onset asynchrony can produce a psychometric curve that matches any other task with asynchrony judgements baked into the task? Does this mean all SJ or TOJ tasks produce correlated psychometric curves? Or more generally, is Pearson's correlation insensitive to subtle changes here, considering psychometric curves are typically sigmoidal? Curves can be non-overlapping and still highly correlated if one is, for example, scaled differently. Would an error term such as mean-squared or root mean-squared error be more sensitive to subtle changes in psychometric curves? Alternatively, perhaps if the models aren't cross-validated, the high correlation values are due to overfitting?

      The reviewer is right: the current version of the manuscript only provides limited information about parameter fitting. In the revised version of the manuscript, we included a parameter estimation and generalizability section that includes all information requested by the reviewer.

      To test whether using the MSE instead of Pearson correlation led to a similar estimated set of parameter values, we repeated the fitting using the MSE. The parameters estimated with this method (TauV, TauA, TauBim) closely followed those estimated using Pearson correlation. Given the similarity of these results, we have chosen not to include further figures; however, this analysis is now included in the new section (pages 23-24).

      Regarding the permutation test, it is expected that different stimuli produce analogous psychometric functions: after all, all studies relied on stimuli containing identical manipulations of lag. As a result, MCD population responses tend to be similar across experiments. Therefore, it is not a surprise that the permuted distribution of MCD-data correlation in Supplementary Figure 1K has a mean as high as 0.97. However, what is important is to demonstrate that the non-permuted dataset has an even higher goodness of fit. Supplementary Figure 1K demonstrates that none of the permuted stimuli could outperform the non-permuted dataset; the mean of the non-permuted distribution lies 4.7 standard deviations above the mean of the already high permuted distribution.
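
      For readers less familiar with this type of test, the logic can be sketched as follows in Python; the numbers are purely illustrative and are not the values underlying Supplementary Figure 1K:

      ```python
      import numpy as np

      def permutation_z(observed: float, permuted: np.ndarray) -> float:
          # How many standard deviations the observed goodness-of-fit lies above
          # the mean of the permuted (null) distribution.
          return (observed - permuted.mean()) / permuted.std(ddof=1)

      # Hypothetical values for illustration only (not the paper's data):
      rng = np.random.default_rng(0)
      permuted_corrs = 0.97 + 0.005 * rng.standard_normal(1000)  # already-high null correlations
      observed_corr = 0.995                                      # non-permuted MCD-data correlation
      print(round(permutation_z(observed_corr, permuted_corrs), 1))
      ```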

      We believe the new section, along with the present response, fully addresses the legitimate concerns of the reviewer.

      While the model boasts incredible versatility across tasks and stimulus configurations, fitting behavioral data well doesn't mean we've captured the underlying neural processes, and thus, we need to be careful when interpreting results. For example, the model produces temporal parameters fitting rat behavior that are 4x faster than when fitting human data. This difference in slope and a difference at the tails were interpreted as differences in perceptual sensitivity related to general processing speeds of the rat, presumably related to brain/body size differences. While rats no doubt have these differences in neural processing speed/integration windows, it seems reasonable that a lot of the differences in human and rat psychometric functions could be explained by the (over)training and motivation of rats to perform on every trial for a reward - increasing attention/sensitivity (slope) - and a tendency to make mistakes (compression evident at the tails). Was there an attempt to fit these data with a lapse parameter built into the decisional model as was done in Equation 21? Likewise, the fitted parameters for the pharmacological manipulations during the SJ task indicated differences in the decisional (but not the perceptual) process and the article makes the claim that "all pharmacologically-induced changes in audiovisual time perception" can be attributed to decisional processes "with no need to postulate changes in low-level temporal processing." However, those papers discuss actual sensory effects of pharmacological manipulation, with one specifically reporting changes to response timing. Moreover, and again contrary to the conclusions drawn from model fits to those data, both papers also report a change in psychometric slope/JND in the TOJ task after pharmacological manipulation, which would presumably be reflected in changes to the perceptual (but not the decisional) parameters.

      Fitting or predicting behaviour does not in itself demonstrate that a model captures the underlying neural computations—though it may offer valuable constraints and insights. In line with this, we were careful not to extrapolate the implications of our simulations to specific neural mechanisms.

      Temporal sensitivity is, by definition, a behavioural metric, and—as the reviewer correctly notes—its estimation may reflect a range of contributing factors beyond low-level sensory processing, including attention, motivation, and lapse rates (i.e., stimulus-independent errors). In Equation 21, we introduced a lapse parameter specifically to account for such effects in the context of monkey eye-tracking data. For the rat datasets, however, the inclusion of a lapse term was not required to achieve a close fit to the psychometric data (ρ = 0.981). While it is likely that adding a lapse component would yield a marginally better fit, the absence of single-trial data prevents us from applying model comparison criteria such as AIC or BIC to justify the additional parameter. In light of this, and to avoid unnecessary model complexity, we opted not to include a lapse term in the rat simulations.
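
      For context, a lapse term conventionally enters a psychometric model in the generic form below; this is the standard textbook parameterization, not necessarily the exact form of Equation 21 in the manuscript:

      ```latex
      % Generic lapse-augmented psychometric function for a binary judgement:
      % F(x) is the lapse-free model prediction (here derived from the MCD decision stage)
      % and \lambda is the probability of a stimulus-independent guess.
      \psi(x) = \frac{\lambda}{2} + (1-\lambda)\,F(x)
      ```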

      With respect to the pharmacological manipulation data, we acknowledge the reviewer’s point that observed changes in slope and bias could plausibly arise from alterations at either the sensory or decisional level—or both. In our model, low-level sensory processing is instantiated by the MCD architecture, which outputs the MCDcorr and MCDlag signals that are then scaled and integrated during decision-making. Importantly, this scaling operation influences the slope of the resulting psychometric functions, such that changes in slope can arise even in the absence of any change to the MCD’s temporal filters. In our simulations, the temporal constants of the MCD units were fixed to the values estimated from the non-pharmacological condition (see parameter estimation section above), and only the decision-related parameters were allowed to vary. From this modelling perspective, the behavioural effects observed in the pharmacological datasets can be explained entirely by changes at the decisional level. However, we do not claim that such an explanation excludes the possibility of genuine sensory-level changes. Rather, we assert that our model can account for the observed data without requiring modifications to early temporal tuning.
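
      Schematically, and with illustrative notation rather than the manuscript's exact equations, the decision stage described above can be written as:

      ```latex
      % Schematic decision stage: the MCD outputs are linearly scaled and compared to a
      % criterion b_crit through a sigmoidal link f. Increasing the scaling weights
      % steepens the psychometric slope without altering the MCD temporal filters.
      P(\text{response}) = f\!\big(\beta_{\mathrm{corr}}\,\mathrm{MCD}_{\mathrm{corr}}
                                 + \beta_{\mathrm{lag}}\,\mathrm{MCD}_{\mathrm{lag}}
                                 - b_{\mathrm{crit}}\big)
      ```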

      To rigorously distinguish sensory from decisional effects, future experiments will need to employ stimuli with richer temporal structure—e.g., temporally modulated sequences of clicks and flashes that vary in frequency, phase, rhythm, or regularity (see Fujisaki & Nishida, 2007; Denison et al., 2012; Parise & Ernst, 2016, 2025; Locke & Landy, 2017; Nidiffer et al., 2018). Such stimuli engage the MCD in a more stimulus-dependent manner, enabling a clearer separation between early sensory encoding and later decision-making processes. Unfortunately, the current rat datasets—based exclusively on single click-flash pairings—lack the complexity needed for such disambiguation. As a result, while our simulations suggest that the observed pharmacologically induced effects can be attributed to changes in decision-level parameters, they do not rule out concurrent sensory-level changes.

      In summary, our results indicate that changes in the temporal tuning of MCD units are not necessary to reproduce the observed pharmacological effects on audiovisual timing behaviour. However, we do not assert that such changes are absent or unnecessary in principle. Disentangling sensory and decisional contributions will ultimately require richer datasets and experimental paradigms designed specifically for this purpose. We have now modified the results section (page 6) and the discussion (page 11) to clarify these points.

      The case for the utility of a stimulus-computable model is convincing (as I mentioned above), but its framing as mission-critical for understanding multisensory perception is overstated, I think. The line for what is "stimulus computable" is arbitrary and doesn't seem to be followed in the paper. A strict definition might realistically require inputs to be, e.g., the patterns of light and sound waves available to our eyes and ears, while an even more strict definition might (unrealistically) require those stimuli to be physically present and transduced by the model. A reasonable, looser definition might allow an abstract, low-dimensional representation of the stimulus, such as the stimulus envelope (which was used in the paper), to be an input. Ultimately, some preprocessing of a stimulus does not necessarily confound interpretations about (multi)sensory perception. And on the flip side, the stimulus-computable aspect doesn't necessarily give the model supreme insight into perception. For example, the MCD model was "confused" by the stimuli used in our 2018 paper (Nidiffer et al., 2018; Parise & Ernst, 2025). In each of our stimuli (including catch trials), the onset and offset drove strong AV temporal correlations across all stimulus conditions, but were irrelevant to participants performing an amplitude modulation detection task. The to-be-detected amplitude modulations, set at individual thresholds, were not a salient aspect of the physical stimulus, and thus only marginally affected stimulus correlations. The model was, of course, able to fit our data by "ignoring" the on/offsets (i.e., requiring human intervention), again highlighting that the model is tapping into a very basic and ubiquitous computational principle of (multi)sensory perception. But it does reveal a limitation of such a stimulus-computable model: that it is (so far) strictly bottom-up.

      We appreciate the reviewer’s thoughtful engagement with the concept of stimulus computability. We agree that the term requires careful definition and should not be taken as a guarantee of perceptual insight or neural plausibility. In our work, we define a model as “stimulus-computable” if all its inputs are derived directly from the stimulus, rather than from experimenter-defined summary descriptors such as temporal lag, spatial disparity, or cue reliability. In the context of multisensory integration, this implies that a model must account not only for how cues are combined, but also for how those cues are extracted from raw inputs—such as audio waveforms and visual contrast sequences.

      This distinction is central to our modelling philosophy. While ideal observer models often specify how information should be combined once identified, they typically do not address the upstream question of how this information is extracted from sensory input. In that sense, models that are not stimulus-computable leave out a key part of the perceptual pipeline. We do not present stimulus computability as a marker of theoretical superiority, but rather as a modelling constraint that is necessary if one’s aim is to explain how structured sensory input gives rise to perception. This is a view that is also explicitly acknowledged and supported by Reviewer 2.

      Framed in Marr’s (1982) terms, non–stimulus-computable models tend to operate at the computational level, defining what the system is doing (e.g., computing a maximum likelihood estimate), whereas stimulus-computable models aim to function at the algorithmic level, specifying how the relevant representations and operations might be implemented. When appropriately constrained by biological plausibility, such models may also inform hypotheses at the implementational level, pointing to potential neural substrates that could instantiate the computation.

      Regarding the reviewer’s example illustrating a limitation of the MCD model, we respectfully note that the account appears to be based on a misreading of our prior work. In Parise & Ernst (2025), where we simulated the stimuli from Nidiffer et al. (2018), the MCD model reproduced participants’ behavioural data without any human intervention or adjustment. The model was applied in a fully bottom-up, stimulus-driven manner, and its output aligned with observer responses as-is. We suspect the confusion may stem from analyses shown in Figure 6 - Supplement Figure 5 of Parise & Ernst (2025), where we investigated the lack of a frequency-doubling effect in the Nidiffer et al. data. However, those analyses were based solely on the Pearson correlation between auditory and visual stimulus envelopes and did not involve the MCD model. No manual exclusion of onset/offset events was applied, nor was the MCD used in those particular figures. We also note that Parise & Ernst (2025) is a separate, already published study and is not the manuscript currently under review. 

      In summary, while we fully agree that stimulus computability does not resolve all the complexities of multisensory perception (see comments below about speech), we maintain that it provides a valuable modelling constraint—one that enables robust, generalisable predictions when appropriately scoped. 

      The manuscript rightly chooses to focus a lot of the work on speech, fitting the MCD model to predict behavioral responses to speech. The range of findings from AV speech experiments that the MCD can account for is very convincing. Given the provided context that speech is "often claimed to be processed via dedicated mechanisms in the brain," a statement claiming a "first end-to-end account of multisensory perception," and findings that the MCD model can account for speech behaviors, it seems the reader is meant to infer that energetic correlation detection is a complete account of speech perception. I think this conclusion misses some facets of AV speech perception, such as integration of higher-order, non-redundant/correlated speech features (Campbell, 2008) and also the existence of top-down and predictive processing that aren't (yet!) explained by MCD. For example, one important benefit of AV speech is its interaction with linguistic processes - how complementary sensitivity to articulatory features in the auditory and visual systems (Summerfield, 1987) allows constraint of linguistic processes (Peelle & Sommers, 2015; Tye-Murray et al., 2007).

      We thank the reviewer for their thoughtful comments, and especially for the kind words describing the range of findings from our AV speech simulations as “very convincing.”

      We would like to clarify that it is not our view that speech perception can be reduced to energetic correlation detection. While the MCD model captures low- to mid-level temporal dependencies between auditory and visual signals, we fully agree that a complete account of audiovisual speech perception must also include higher-order processes—including linguistic mechanisms and top-down predictions. These are critical components of AV speech comprehension, and lie beyond the scope of the current model.

      Our use of the term “end-to-end” is intended in a narrow operational sense: the model transforms raw audiovisual input (i.e., audio waveforms and video frames) directly into behavioural output (i.e., button press responses), without reliance on abstracted stimulus parameters such as lag, disparity or reliability. It is in this specific technical sense that the MCD offers an end-to-end model. We have revised the manuscript to clarify this usage to avoid any misunderstanding.

      In light of the reviewer’s valuable point, we have now edited the Discussion to acknowledge the importance of linguistic processes (page 13) and to clarify what we mean by end-to-end account (page 11). We agree that future work will need to explore how stimulus-computable models such as the MCD can be integrated with broader frameworks of linguistic and predictive processing (e.g., Summerfield, 1987; Campbell, 2008; Peelle & Sommers, 2015; Tye-Murray et al., 2007).

      References

      Campbell, R. (2008). The processing of audio-visual speech: empirical and neural bases. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1493), 1001-1010. https://doi.org/10.1098/rstb.2007.2155

      Nidiffer, A. R., Diederich, A., Ramachandran, R., & Wallace, M. T. (2018). Multisensory perception reflects individual differences in processing temporal correlations. Scientific Reports, 8(1), 1-15. https://doi.org/10.1038/s41598-018-32673-y

      Parise, C. V., & Ernst, M. O. (2025). Multisensory integration operates on correlated input from unimodal transient channels. eLife, 12. https://doi.org/10.7554/ELIFE.90841

      Peelle, J. E., & Sommers, M. S. (2015). Prediction and constraint in audiovisual speech perception. Cortex, 68, 169-181. https://doi.org/10.1016/j.cortex.2015.03.006

      Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In B. Dodd & R. Campbell (Eds.), Hearing by Eye: The Psychology of Lip-Reading (pp. 3-51). Lawrence Erlbaum Associates.

      Tye-Murray, N., Sommers, M., & Spehar, B. (2007). Auditory and Visual Lexical Neighborhoods in Audiovisual Speech Perception. Trends in Amplification, 11(4), 233-241. https://doi.org/10.1177/1084713807307409

      Reviewer #2 (Public review):

      Summary:

      Building on previous models of multisensory integration (including their earlier correlation-detection framework used for non-spatial signals), the author introduces a population-level Multisensory Correlation Detector (MCD) that processes raw auditory and visual data. Crucially, it does not rely on abstracted parameters, as is common in normative Bayesian models, but rather works directly on the stimulus itself (i.e., individual pixels and audio samples). By systematically testing the model against a range of experiments spanning human, monkey, and rat data, the authors show that their MCD population approach robustly predicts perception and behavior across species with a relatively small (0-4) number of free parameters.

      Strengths:

      (1) Unlike prior Bayesian models that used simplified or parameterized inputs, the model here is explicitly computable from full natural stimuli. This resolves a key gap in understanding how the brain might extract "time offsets" or "disparities" from continuously changing audio-visual streams.

      (2) The same population MCD architecture captures a remarkable range of multisensory phenomena, from classical illusions (McGurk, ventriloquism) and synchrony judgments, to attentional/gaze behavior driven by audio-visual salience. This generality strongly supports the idea that a single low-level computation (correlation detection) can underlie many distinct multisensory effects.

      (3) By tuning model parameters to different temporal rhythms (e.g., faster in rodents, slower in humans), the MCD explains cross-species perceptual data without reconfiguring the underlying architecture.

      We thank the reviewer for their positive evaluation of the manuscript, and particularly for highlighting the significance of the model's stimulus-computable architecture and its broad applicability across species and paradigms. Please find our responses to the individual points below.

      Weaknesses:

      (1) The authors show how a correlation-based model can account for the various multisensory integration effects observed in previous studies. However, a comparison of how the two accounts differ would shed light on the correlation model being an implementation of the Bayesian computations (different levels in Marr's hierarchy) or making testable predictions that can distinguish between the two frameworks. For example, how uncertainty in the cue combined estimate is also the harmonic mean of the unimodal uncertainties is a prediction from the Bayesian model. So, how the MCD framework predicts this reduced uncertainty could be one potential difference (or similarity) to the Bayesian model.

      We fully agree with the reviewer that a comparison between the correlation-based MCD model and Bayesian accounts is valuable—particularly for clarifying how the two frameworks differ conceptually and where they may converge.

      As noted in the revised manuscript, the key distinction lies in the level of analysis described by Marr (1982). Bayesian models operate at the computational level, describing what the system is aiming to compute (e.g., optimal cue integration). In contrast, the MCD functions at the algorithmic level, offering a biologically plausible mechanism for how such integration might emerge from stimulus-driven representations.

      In this context, the MCD provides a concrete, stimulus-grounded account of how perceptual estimates might be constructed—potentially implementing computations with Bayesian-like characteristics (e.g., reduced uncertainty, cue weighting). Thus, the two models are not mutually exclusive but can be seen as complementary: the MCD may offer an algorithmic instantiation of computations that, at the abstract level, resemble Bayesian inference.

      We have now updated the manuscript to explicitly highlight this relationship (pages 2 and 11). In the revised manuscript, we also included a new figure (Figure 5) and movie (Supplementary Movie 3), to show how the present approach extends previous Bayesian models for the case of cue integration (i.e., the ventriloquist effect).

      (2) The authors show a good match for cue combination involving 2 cues. While Bayesian accounts provide a direction for extension to more cues (also seen empirically, for eg, in Hecht et al. 2008), discussion on how the MCD model extends to more cues would benefit the readers.

      We thank the reviewer for this insightful comment: extending the MCD model to include more than two sensory modalities is a natural and valuable next step. Indeed, one of the strengths of the MCD framework lies in its modularity. Let us consider the MCDcorr output (Equation 6), which is computed as the pointwise product of transient inputs across modalities. Extending this to include a third modality, such as touch, is straightforward: MCD units would simply multiply the transient channels from all three modalities, effectively acting as trimodal coincidence detectors that respond when all inputs are aligned in time and space.

      By contrast, extending MCDlag is less intuitive, due to its reliance on opponency between two subunits (via subtraction). A plausible solution is to compute MCDlag in a pairwise fashion (e.g., AV, VT, AT), capturing relative timing across modality pairs.

      Importantly, the bulk of the spatial integration in our framework is carried by MCDcorr, which generalises naturally to more than two modalities. We have now formalised this extension and included a graphical representation in a supplementary section of the revised manuscript.

      Likely Impact and Usefulness:

      The work offers a compelling unification of multiple multisensory tasks - temporal order judgments, illusions, Bayesian causal inference, and overt visual attention - under a single, fully stimulus-driven framework. Its success with natural stimuli should interest computational neuroscientists, systems neuroscientists, and machine learning scientists. This paper thus makes an important contribution to the field by moving beyond minimalistic lab stimuli, illustrating how raw audio and video can be integrated using elementary correlation analyses.

      Reviewer #1 (Recommendations for the authors):

      Recommendations:

      My biggest concern is a lack of specificity about model fitting, which could be assuaged by the inclusion of sufficient detail to replicate the analysis completely or the inclusion of the analysis code. The code availability indicates a script for the population model will be included, but it is unclear if this code will provide the fitting details for the whole of the analysis.

      We thank the reviewer for raising this important point. A new methodological section has been added to the manuscript, detailing the model fitting procedures used throughout the study. In addition, the accompanying code repository now includes MATLAB scripts that allow full replication of the spatiotemporal MCD simulations.

      Perhaps it could be enlightening to re-evaluate the model with a measure of error rather than correlation? And I think many researchers would be interested in the model's performance on unseen data.

      The model has now been re-evaluated using mean squared error (MSE), and the results remain consistent with those obtained using Pearson correlation. Additionally, we have clarified which parts of the study involve testing the model on unseen data (i.e., data not used to fit the temporal constants of the units). These analyses are now included and discussed in the revised fitting section of the manuscript (pages 23-24).

      Otherwise, my concerns involve the interpretation of findings, and thus could be satisfied with minor rewording or tempering conclusions.

      The manuscript has been revised to address these interpretative concerns, with several conclusions reworded or tempered accordingly. All changes are marked in blue in the revised version.

      Miscellanea:

      Should b0 in equation 10 be bcrit to match the below text?

      Thank you for catching this inconsistency. We have corrected Equation 10 (and also Equation 21) to use the more transparent notation bcrit instead of b0, in line with the accompanying text.

      Equation 23, should time be averaged separately? For example, if multiple people are speaking, the average correlation for those frames will be higher than the average correlation across all times.

      We thank the reviewer for raising this thoughtful and important point. In response, we have clarified the notation of Equation 23 in the revised manuscript (page 20). Specifically, we now denote the averaging operations explicitly as spatial means and standard deviations across all pixel locations within each frame.

      This equation computes the z-score of the MCD correlation value at the current gaze location, normalized relative to the spatial distribution of correlation values in the same frame. That is, all operations are performed at the frame level, not across time. This ensures that temporally distinct events are treated independently and that the final measure reflects relative salience within each moment, not a global average over the stimulus. In other words, the spatial distribution of MCD activity is re-centered and rescaled at each frame, exactly to avoid the type of inflation or confounding the reviewer rightly cautioned against.
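      Schematically (illustrative notation, not Equation 23 verbatim), writing $r(x,t)$ for the MCD correlation at pixel $x$ in frame $t$ and $x_{\mathrm{gaze}}(t)$ for the current gaze position,

      $$z(t) = \frac{r\big(x_{\mathrm{gaze}}(t),\,t\big) - \mu_x\!\left[r(x,t)\right]}{\sigma_x\!\left[r(x,t)\right]},$$

      where $\mu_x$ and $\sigma_x$ are the mean and standard deviation taken over all pixel locations within frame $t$ only.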

      Reviewer #2 (Recommendations for the authors):

      The authors have done a great job of providing a stimulus computable model of cue combination. I had just a few suggestions to strengthen the theoretical part of the paper:

      (1) While the authors have shown a good match between MCD and cue combination, some theoretical justification or equivalence analysis would benefit readers on how the two relate to each other. Something like Zhang et al. 2019 (which is for motion cue combination) would add to the paper.

      We agree that it is important to clarify the theoretical relationship between the Multisensory Correlation Detector (MCD) and normative models of cue integration, such as Bayesian combination. In the revised manuscript, we have now modified the introduction and added a paragraph in the Discussion addressing this link more explicitly. In brief, we see the MCD as an algorithmic-level implementation (in Marr’s terms) that may approximate or instantiate aspects of Bayesian inference.

      (2) Simulating cue combination for tasks that require integration of more than two cues (visual, auditory, haptic cues) would more strongly relate the correlation model to Bayesian cue combination. If that is a lot of work, at least discussing this would benefit the paper.

      This point has now been addressed, and a new paragraph discussing the extension of the MCD model to tasks involving more than two sensory modalities has been added to the Discussion section.

    1. Histogram showing distribution of per-unit weights across all countries and all years (2007-2024), imports

      I didn't see much difference between years, or between exports and imports. All are positively skewed, with the weighted mean much closer to that of the other HS codes proposed to be part of the UNU Key.

      For me it does not make sense to add the heavier ones, though. At least from the boxplot, the other codes do not show as much variation in weight per unit as this one does. What do we think about applying a cut-off on weight per unit, so that only units lighter than some threshold x are considered? Surely the lifetime and composition of the heavier ones are not the same?

    1. The Haskell functions div and ^ are partial, meaning they can crash with a so-called imprecise exception (an exception that is not visible in the type, also sometimes called IO exceptions).

      partial functions
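      As a quick illustration of the quoted point (a minimal sketch; safeDiv is a hypothetical helper, not a standard library function):

      ```haskell
      -- Both calls type-check, yet crash at runtime with imprecise exceptions:
      --   ghci> 1 `div` 0    -- *** Exception: divide by zero
      --   ghci> 2 ^ (-1)     -- *** Exception: Negative exponent

      -- A total alternative makes the failure case visible in the type:
      safeDiv :: Integral a => a -> a -> Maybe a
      safeDiv _ 0 = Nothing
      safeDiv x y = Just (x `div` y)

      main :: IO ()
      main = print (safeDiv (10 :: Int) 0)  -- prints Nothing instead of crashing
      ```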

    1. eLife Assessment

      This study is a fundamental advance in the field of developmental biology and transcriptional regulation that demonstrates the use of hPSC-derived organoids as a reproducible system for studying the mechanisms that drive neural tube closure. The work is exceptional in its development of tools to use CRISPR interference to screen for genes that regulate morphogenesis in human PSC organoids. The additional characterization of the role of specific transcription factors in neural tube formation is solid. The work provides both technical advances and new knowledge on human development through embryo models.

    2. Reviewer #1 (Public review):

      Summary:

      This is a wonderful and landmark study in the field of human embryo modeling. It uses patterned human gastruloids to conduct a functional screen on neural tube closure, identifies positive and negative regulators, and defines the epistasis among them.

      Strengths:

      The above was achieved following optimization of the micro-pattern-based gastruloid protocol to achieve high efficiency, which was then further optimized to conduct and deliver CRISPRi without disrupting the protocol. This is a technical tour de force as well as one of the first studies to reveal new knowledge on human development through embryo models.

      The manuscript is very solid and well-written. The figures are clear, elegant, and meaningful. The conclusions are fully supported by the data shown. The methods are well-detailed, which is very important for such a study.

      Weaknesses:

      This reviewer did not identify any meaningful, major, or minor caveats that need addressing or correcting.

      A minor weakness is that one can never find out whether the findings made in vitro in human embryo models can be revalidated in humans in vivo. This is for obvious and justified ethical reasons. However, the authors acknowledge this point in the section of the manuscript detailing the limitations of their study.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript is a technical report on a new model of early neurogenesis, coupled to a novel platform for genetic screens. The model is more faithful than others published to date, and the screening platform is an advance over existing ones in terms of speed and throughput.

      Strengths:

      It is novel and useful.

      Weaknesses:

      The novelty of the results is limited in terms of biology: the work is mainly a proof of concept of the platform, together with a very good demonstration of the hierarchical interactions of the top regulators of GRNs.

      The value of the manuscript could be enhanced in two ways:

      (1) by showing its versatility: transforming the level of the neural tube to midbrain and hindbrain and looking at the transcriptional hierarchies there.

      (2) by relating the patterning of the organoids to the situation in vivo, in particular with the information in reference 49. The authors make a statement "To compare our findings with in vivo gene expression patterns, we applied the same approach to published scRNA-seq data from 4-week-old human embryos at the neurula stage" but it would be good to have a more nuanced reference: what stage, what genes are missing, what do they add to the information in that reference?

    1. Comparing the outputs with Eurostat data in tabular format

      Nice to see that the overall match is better. The differences are much higher at the EU level than at the World level. I think this is somewhat justifiable, since we "lock" the EU data from DGEnv and thus do not account for updated trade and production data in recent years. It could also just be confirmation bias on my side.

    1. MI PRESENTACIÓN EN HIVE, Mis primeros años

      MY PRESENTATION AT HIVE, My Early Years

      Adzael Tovar shares his journey as a young content creator in the Hive community. He reflects on his family background, his experiences with internet connectivity in Venezuela, and his aspiration to study engineering while engaging in blockchain and web3 opportunities. He expresses excitement about participating in community events and gratitude for the support he has received from friends and family in his new endeavors.

    1. 06_overwrite_EU.R if 03a1_use_WOT_EU_data.R is run?

      I don't think so, as long as 03a1 is called at the end of 03POM and you then run 4 and 5. From the main GEM, I haven't called 06.

    1. Set-off. Set-off is the discharge of obligations without money. This is done by balancing obligations across balance sheets so they offset each other. If Alice owes Bob and Bob owes Alice, they can do set-off. Set-off is more interesting when there are cycles of size greater than two – if Alice owes Bob and Bob owes Carol and Carol owes Alice, they can all set off the lowest amount.

      This reframes “we don’t have money to pay” into “we have a mutual trust system that can settle this.” It’s empowering: No need for extra debts for the SMEs.
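      A minimal sketch of the cycle case described in the quote (the function name is hypothetical; amounts are assumed to be listed in cycle order, e.g. Alice→Bob, Bob→Carol, Carol→Alice):

      ```haskell
      -- Set off a cycle of debts by cancelling the smallest amount everywhere:
      -- at least one obligation drops to zero and no money changes hands.
      setOffCycle :: [Double] -> [Double]
      setOffCycle []    = []
      setOffCycle debts = map (subtract (minimum debts)) debts

      main :: IO ()
      main = print (setOffCycle [100, 70, 40])  -- [60.0,30.0,0.0]
      ```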

  3. sellercentral.amazon.de
    1. With the handbook from the doctor, learn more about your condition, the available treatment options, and the best tips and exercises. Contains helpful information and exercises that will help you with your recovery!

      ;)

  4. pressbooks.library.torontomu.ca
    1. Two things everybody’s got tuh do fuh theyselves. They got tuh go tuh God, and they got tuh find out about livin’ fuh theyselves.”

      research

    2. “Ah know all dem sitters-and-talkers gointuh worry they guts into fiddle strings till dey find out whut we been talkin’ ’bout. Dat’s all right, Pheoby, tell ’em. Dey gointuh make ’miration ’cause mah love didn’t work lak they love, if dey ever had any. Then you must tell ’em dat love ain’t somethin’ lak uh grindstone dat’s de same thing everywhere and do de same thing tuh everything it touch. Love is lak de sea. It’s uh movin’ thing, but still and all, it takes its shape from de shore it meets, and it’s different with every shore.”

      research

    1. variable                         n_missing  complete_rate  mean  sd    p0    p25   p50   p75   p100  hist
       d_worker_sex_fully_male                  0           1.00  0.04  0.20  0.00  0.00  0.00  0.00  1.00  ▇▁▁▁▁
       d_worker_sex_fully_female                0           1.00  0.05  0.22  0.00  0.00  0.00  0.00  1.00  ▇▁▁▁▁
       d_unskilled_low_skilled_workers          0           1.00  0.02  0.13  0.00  0.00  0.00  0.00  1.00  ▇▁▁▁▁
       d_medium_high_skilled_workers            0           1.00  0.00  0.06  0.00  0.00  0.00  0.00  1.00  ▇▁▁▁▁
       d_only_blue_collar_workers               0           1.00  0.21  0.41  0.00  0.00  0.00  0.00  1.00  ▇▁▁▁▂
       d_only_white_collar_workers

      Put all of these into the regression file.

    2. "Individual" "Industry" [3] "Firm" "Establishment" [5] "Labor market" "Employment zone" [7] "Individual/Region" "Aggregate" [9] "Plant" "Aggregate (Business Sector)" [11] "Aggregate (One Sector)"

      Recode to micro/meso/macro:

      micro: "Individual", "Firm", "Establishment", "Plant"
      meso: "Industry", "Labor market", "Employment zone", "Aggregate (Business Sector)", "Aggregate (One Sector)"
      macro: "Aggregate"

      But what is "Individual/Region"?

    3. variable       n_missing  complete_rate     mean     sd       p0      p25      p50      p75     p100  hist
       wtr_begin             95           0.92  1989.73  11.64  1915.00  1985.00  1985.00  1997.00  2013.00  ▁▁▁▇▅
       wtr_end               95           0.92  1990.93  12.38  1920.00  1985.00  1985.00  2000.00  2013.00  ▁▁▁▇▆
       wtr_hours_old        130           0.89    41.75   3.09    39.00    40.00    40.00    44.00    59.00  ▇▂▂▁▁
       wtr_hours_new        110           0.91    39.09   2.24    35.00    38.50    38.50    40.00    48.00  ▁▇▁▁▁
       sample_begin          34           0.97  1988.14  10.97  1914.00  1985.00  1986.00  1994.00  2013.00  ▁▁▁▇▃
       sample_end            34

      Please check.

    4. “Germany (West)”, “Germany (East)”, “Germany (Baden-Württemberg)”, “Germany (Nordrhein-Westfalen)”, “Germany (Hessen)”, “Germany (Niedersachsen)”, “Germany (Hamburg)”, “Germany (Schleswig-Holstein)” & “Germany (Baden-Württemberg, Nordrhein-Westfalen, Hessen, Niedersachsen, Hamburg, Schleswig-Holstein)” → Germany

      Possibly split into BRD/DDR?

    5. Proposal:
       “State (Law)”, “State (Law), Tripartite Comission”, “State (Law), in discussion with employers and employees”, “Agreements between firms and the president”, “State (Law), agreement between trade unions and industry”, “State (Law) & collective agreements”, “State (Law), partly collective agreements” → State
       “Employers and unions (collective agreements)” & “Employers and unions (collective agreements) & plant-based (firm-based agreemenst)”, “Establishment” → Collective agreement
       “Establishment” → ?

      We need to discuss whether Establishment should really be part of Employers and employees, or whether we code it as its own dummy and instead rename Employers and employees back to Collective agreements.

    6. wtr_end

      Idea: take the difference between wtr_end and the year of the respective estimate and include it as an independent variable - ideally, this would let us measure short-, medium-, and long-term effects of AZV.

      Proposal: sample_end - wtr_end

      Caution: in 26 cases sample_end is smaller than wtr_end (and in only 20 of those cases is before_policy_implementation = Yes) - this needs to be checked.

    7. To-do: there is an erroneous entry of 20024 in here that still needs to be corrected. 1937, 1939, 1980, 1982, 1985, 1986, 1987, 1990, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006, 2007, 2012, 2013, 2017, 20024. Is that actually 2024?

      Same as before: check 20024.