1. Last 7 days
    1. Python is not a great language for data science. Part 1: The experience
      • The blog argues Python is not ideal for data science tasks due to performance issues and inefficiencies in libraries like Pandas.
      • Python often requires supplementary libraries such as NumPy for numerical calculations, which adds complexity.
      • The author feels Python is heavily promoted despite possibly better alternatives, such as R, for statistics and data analysis.
      • Python’s flexibility and dynamic typing can lead to slower code and difficulties in managing large-scale data science projects.
      • The article criticizes Python’s packaging ecosystem, type checking, and runtime performance.
      • There is a perception that Python’s popularity is partly due to team and community familiarity rather than technical superiority.
      • Overall, the blog emphasizes that Python is good for beginners but may not scale well for advanced data science needs.

      Hacker News Discussion

      • Many commenters agree Python has limitations in data science, particularly citing Pandas as inefficient and cumbersome for rapid data manipulation.
      • Some defend Python by highlighting NumPy’s effectiveness and community support, saying Python’s ecosystem overall is a strength despite some weaknesses.
      • Performance issues and the Global Interpreter Lock (GIL) are frequent complaints, leading to suggestions of other languages like R for some tasks.
      • Several users note Python’s packaging and dependency management remain problematic despite tools like Poetry.
      • The diversity of opinions includes those who appreciate Python’s readability and vast ecosystem versus those who find it limiting and inefficient for production-scale data science.
      • Some highlight the inertia behind Python’s use in teams, making switching to languages considered technically better difficult.
      • The discussion includes various technical nuances such as duck typing problems, difficulty with type checking, and the challenge of scaling beyond prototype-level work.
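None of the code below is from the article or the thread; it is a minimal sketch of the performance gap the commenters describe, using a simple reduction (sum of squares) as a stand-in for real NumPy/Pandas workloads:

```python
import timeit

import numpy as np

# Sum of squares over a million integers, computed two ways:
# a pure-Python loop and a single vectorized NumPy call.
data = list(range(1_000_000))
arr = np.array(data, dtype=np.int64)

def loop_sum_squares(xs):
    """Pure-Python loop: one bytecode dispatch and int boxing per element."""
    total = 0
    for x in xs:
        total += x * x
    return total

# np.dot on a 1-D int64 array performs the same reduction in one C-level call.
loop_time = timeit.timeit(lambda: loop_sum_squares(data), number=3)
numpy_time = timeit.timeit(lambda: int(np.dot(arr, arr)), number=3)

assert loop_sum_squares(data) == int(np.dot(arr, arr))
print(f"pure Python: {loop_time:.3f}s  NumPy: {numpy_time:.3f}s")
```

On a typical CPython build the vectorized call is dramatically faster, which is precisely why "plain" Python data code keeps reaching for supplementary libraries like NumPy in the first place.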
    1. Practicing piano scales is a boring grind, but even the world’s best pianists do it. What is your version of this? You should have an answer. Tyler Cowen has some thoughts here and here.
    2. I’d argue your north star should be building the relevant skills and expertise while letting people know that you have them, not just “getting a cool job.”

      Very actionable insight

    3. Get a job from the 80,000 Hours job board at a capital-E capital-A Effective Altruist organization right out of college, as fast as possible, otherwise feel like a failure, oh god, oh god...

      Yeah, college is not the real world; people need to experience some of the real world first before trying to change it. Though some get damaged by it, so it can go both ways.

    4. Obsessively improve at the rare and valuable skills to solve this problem and do so in a legible way for others to notice. Leverage this career capital to keep the flywheel going— skills allow you to solve more problems, which builds more skills. Rare and valuable roles require rare and valuable traits, so get so good they can’t ignore you.

      How do we identify a specific skill to get good at, and draw a strong enough boundary around it so that it can be measured?

    1. Announcing Azure Copilot agents and AI infrastructure innovations
      • Microsoft Azure showcased modernization of cloud infrastructure at Microsoft Ignite 2025, focusing on reliability, security, and AI-era performance.
      • The strategy includes strengthening Azure’s global foundation, modernizing workloads with advanced systems (Azure Cobalt, Azure Boost, AKS Automatic, HorizonDB), and transforming team workflows through AI agents like Azure Copilot and GitHub Copilot.
      • Azure Copilot introduces an agentic cloud operations model with specialized agents for migration, deployment, optimization, observability, resiliency, and troubleshooting to automate tasks and enhance innovation.
      • Azure’s AI infrastructure supports global scale with over 70 regions and datacenters, featuring Fairwater AI datacenters with liquid cooling, a dedicated AI WAN, and NVIDIA GB300 GPUs to deliver unmatched capacity and speed.
      • Innovations like Azure Boost offload virtualization tasks onto specialized hardware, improving disk throughput and network performance; AKS Automatic simplifies Kubernetes management for fast, reliable app deployment.
      • Azure HorizonDB for PostgreSQL offers scalable, AI-integrated databases and new partnerships (e.g., SAP Business Data Cloud Connect) facilitate data sharing across platforms.
      • Azure emphasizes operational excellence with zone-redundant networking, automated fault detection, and resilience co-engineered with customers via Azure Resiliency.
      • Security improvements include Azure Bastion Secure by Default for hardened VM access, Network Security Perimeter for centralized firewall control, and AI-powered defenses like Web Application Firewall with Captcha.
      • Modernization supports a mix of legacy and cloud-native systems with AI tools enabling faster migration and modernization for .NET, Java, SQL Server, Oracle, and PostgreSQL workloads.
      • Azure enables faster, more efficient modernization and management across infrastructure, improving governance, compliance, and cost-effectiveness.
      • The cloud future is agentic, intelligent, and human-centered, with continued innovation in Azure infrastructure, AI agents, and open-source contributions driving business growth.
    1. Sinclair is still reeling, too. In a column for the Winnipeg Free Press, he called the news “a gut punch, pipe bomb, and pulling out the rug from underneath the feet of the publishing world — even if King didn’t do anything out of malice.”

      This is really a gut punch and a bitter pill to swallow: to find out at 82 years old that the identity you have carried all your life is not so.

    2. For Anishinaabe writer and scholar Niigaan Sinclair, the revelation that author Thomas King is not Indigenous is a painful one.  King is an award-winning Canadian-American author who earned his living writing about his complex relationship with Indigenous identity, most notably in his 2012 book The Inconvenient Indian.

      i did it

    3. For Anishinaabe writer and scholar Niigaan Sinclair, the revelation that author Thomas King is not Indigenous is a painful one.  King is an award-winning Canadian-American author who earned his living writing about his complex relationship with Indigenous identity, most notably in his 2012 book The Inconvenient Indian. But on Monday, he revealed in a Globe and Mail editorial that he is “not an Indian at all.”

      Introduction to this passage

    1. The Latest: The FBI is leading the investigation into the shooting of two members of the National Guard in Washington, D.C. FBI director Kash Patel said the officers are in hospital with critical injuries. A suspect was also taken to hospital after being shot. Officials said it was too soon to speculate on a potential motive for what D.C. Metropolitan Police described as an ambush.

      the latest news

    1. In particular, the increasing inequality of wealth among merchants could have undermined political support for the system.

      Today, we are likely past the phase of moving to complete international laws and reliance on "impartial" laws - but these have become partial (ideologically captured), and so the rich once again aim to escape (they have the means) so they can once again be subject to impartial laws. Protests around UK/US bias in judgement are evident currently.

    2. it motivated the invention and enabled the use of such financial instruments as letters of credit, letters of exchange and insurance contracts that became widely used in Europe during that time.

      Massive financial upside from trust in the system, built slowly over time to allow creative freedom to invent various favourable "futures".

    3. Ironically, the system seems to have undermined itself; the processes it fostered were those that increased trade and urban growth—the causes of its decline.

      "The purpose of a system is what it does" - Stafford Beer

    1. The degree of air pollution can significantly influence individual schedules and life satisfaction, especially for vulnerable groups such as children, the older adult, and those with respiratory or cardiovascular conditions.

      Large-scale social media analyses and other studies show that higher air pollution reliably lowers happiness—even beyond its physical health effects—likely due to aesthetic unpleasantness, sensory discomfort, and health-related anxieties, with especially strong impacts on vulnerable groups such as children, older adults, and people with respiratory or heart conditions.

    2. Yuan et al. (59) demonstrated a significant negative correlation between the Air Quality Index (AQI) and individuals’ life satisfaction, and Song et al. (58) found a positive correlation between urban smog levels and subjective happiness.

      Life satisfaction is strongly and consistently harmed by poor air quality, with most studies showing that pollutants like CO₂, NO₂, PM10, and SO₂ lower perceived quality of life, and research linking higher AQI and urban smog to reduced well-being, though some results may be influenced by factors such as income differences across cities.

    3. Their findings indicated a negative correlation between air pollution and hedonic well-being, as well as a positive correlation with the incidence of depression symptoms. However, no significant association was found between air pollution and life satisfaction.

      Air quality strongly influences both physical and psychological well-being, with many studies showing that higher pollution levels reduce happiness and increase depression symptoms, though evidence is mixed regarding whether air pollution affects life satisfaction, suggesting its emotional impact may be stronger than its effect on cognitive evaluations of well-being.

    4. This suggests that calmer wind conditions are associated with higher life satisfaction, particularly among women.

      Lower wind speeds are linked to higher life satisfaction, especially for women, with research showing that a one–standard deviation drop in wind speed significantly increases reported life satisfaction.

    5. They found that precipitation in January had a significant negative correlation with happiness, indicating that rain was associated with lower levels of well-being.

      Cloud cover, rain, humidity, and wind speed generally reduce subjective well-being, with studies showing that cloudy or rainy days, high humidity, and related conditions consistently correlate with lower happiness and more negative emotional states.

    6. This suggests that while environmental factors like temperature may influence well-being, their effects might be overshadowed by individual characteristics.

      Research shows that high temperatures generally reduce life satisfaction and that people in hotter climates report lower happiness than those in cooler regions, but findings vary, as some studies suggest temperature effects disappear when individual traits are accounted for, highlighting that temperature can influence well-being, yet its impact may be overshadowed by personal characteristics and context.

    7. The study also showed that cooler temperatures helped reduce sadness and depression.

      Studies using large-scale Twitter data and daily diary reports show that well-being drops sharply at temperatures above 70°F (especially when paired with high humidity), while cooler temperatures (around 10–15°C) are linked to higher happiness and reduced sadness, with hourly temperature changes exerting a stronger emotional impact than daily averages.

    8. Specifically, when temperatures exceeded 70°F (21°C), individuals reported reduced positive emotions (e.g., joy, pleasure), increased negative emotions (e.g., stress, anger), and more fatigue (e.g., tiredness, lack of energy), all of which were linked to a decrease in subjective well-being. This effect was particularly pronounced in individuals with lower education levels and older age groups.

      Temperature strongly influences well-being, with research showing a non-linear pattern where both extreme heat and extreme cold reduce happiness; studies indicate that temperatures above about 70°F (21°C) lead to lower positive emotions, higher stress and fatigue, and overall reduced life satisfaction, especially among older adults and individuals with lower education levels.

    9. Previous research into ipRGC-influenced light (IIL) responses indicates that both illuminance and correlated color temperature (CCT) affect people’s well-being (15). Moreover, illuminance and CCT can also affect occupants’ satisfaction with and comfort in the environment (16, 17), improved satisfaction with environmental conditions is associated with improved daily life satisfaction.

      Overall, sunlight and other light characteristics, such as illuminance and color temperature, significantly influence happiness, emotional health, and life satisfaction, with brighter and more comfortable lighting improving well-being and environmental satisfaction, though these effects are moderated by individual behaviors and contextual factors, making the relationship complex and multifaceted.

    10. In these studies, higher IQ levels were associated with a reduced positive impact of sunlight on happiness (Add Health study, U.S., 1994–2008).

      The effects of sunlight on emotions and life satisfaction vary across individuals and contexts, with factors like increased outdoor activity and personal traits—including intelligence—modifying how strongly sunlight boosts happiness, demonstrating that light’s influence on well-being is not universal but shaped by individual differences.

    11. The effect of sunlight on mood is also reflected in the fact that light can either enhance or reduce feelings of joy and sadness, depending on its timing and intensity. Denissen et al. (12) found that sunshine increases both positive and negative emotions, indicating that the relationship between sunlight and mood is complex and not purely beneficial.

      Light conditions strongly influence well-being by shaping emotional states and life satisfaction, with sunlight generally improving mood but poorly timed or insufficient light contributing to negative emotions, and research shows that sunlight can intensify both positive and negative feelings, highlighting a complex, not purely positive, relationship between light exposure and emotional experience.

    12. The positive effect of sunlight on life satisfaction may be attributed to both its direct influence on mood and the fact that people are more likely to engage in outdoor activities when the weather is sunny, further contributing to well-being. However, studies like those by Buscha (56) have suggested that exposure to sunlight may have a negligible effect on well-being, particularly in cases where other factors, such as personal preferences or environmental stressors, influence mood more strongly.

      Sunlight generally enhances life satisfaction—people report feeling more satisfied on sunny, clear days, partly because sunlight boosts mood and encourages outdoor activity—but some research shows this effect can be minimal when personal preferences or other environmental stressors play a stronger role in shaping well-being.

    13. As a result, environmental psychology often considers how these multidimensional weather factors work together to shape an individual’s emotional state, social behavior, and the quality of social interactions, ultimately impacting subjective well-being.

      Overall, weather conditions such as light, temperature, rainfall, humidity, air quality, and wind speed directly influence well-being by shaping physiological responses, emotional states, and social behaviors; factors like sunlight and comfortable temperatures generally promote positive emotions, while gloomy weather, high humidity, poor air quality, and strong winds can trigger negative feelings or stress, and together these multidimensional weather elements interact to shape mental health, daily behavior, and overall subjective well-being.

    14. The above theories provide profound insights into how weather influences well-being through physiological, emotional, and cognitive pathways, revealing the complex relationship between weather conditions and individual well-being.

      These theories show that weather influences well-being through interconnected physiological, emotional, and cognitive processes rooted in both basic needs and evolutionary history.

    15. When individuals are in a non-threatening environment, they naturally produce positive emotional responses. These responses further influence attention, physiological reactions, and behavior, thereby enhancing well-being (6).

      Evolutionary psychology explains that humans are naturally inclined to feel positive in safe, non-threatening natural environments, meaning favorable weather conditions automatically boost emotional and physiological well-being.

    16. As a key component of the natural environment, weather conditions can significantly influence an individual’s emotions and overall well-being. Weather not only affects people’s living conditions and external environmental settings but can also impact happiness by altering emotional states.

      Maslow’s theory and Lyubomirsky’s happiness model suggest that pleasant weather fulfills basic needs and contributes to the environmental portion of well-being, thereby directly influencing emotions and overall happiness.

    17. The affective events theory suggests that an individual’s emotional state is often directly influenced by life events, with weather, as an inescapable external variable in daily life, typically representing the “atmospheric conditions” of daily living.

      Environmental psychology offers a key framework for understanding how weather—through factors like temperature, sunlight, precipitation, and air quality—affects emotional and cognitive well-being, but inconsistent findings across studies show that these effects depend not only on objective weather conditions but also on individual physiological, psychological, and social differences.

    18. Well-being is not only the core pursuit of human life but also an important predictor of various positive life outcomes, such as longevity, creativity, quality of interpersonal relationships, and work efficiency (1).

      Well-being is deeply intertwined with weather conditions, a relationship long noted in daily life and literature. Modern research on this topic has rapidly expanded, revealing through multidisciplinary studies how weather influences well-being via physiological, psychological, and social pathways.

    1. Developers do not build housing where it is needed most; they build where it is profitable. That often means luxury housing in gentrifying neighborhoods, not affordable housing in underserved ones.

      how to uproot this...? mattress must have ideas

    2. They’re against the kind of housing that they believe is not for them — units that are too expensive, too unwelcoming to residents who look the way they do, too indicative of a city slipping out of their grasp.

      so this is really just about not building more expensive apartment buildings in say, flatlands of oakland. if we are building affordable (on par w local market) housing this subset of marginalized community NIMBYs need not worry? (ofc?)

    3. These are not reactionary property owners clinging to racial segregation but marginalized communities trying to hold on to the tenuous roots they’ve established in cities that have long underserved them.

      so intrigued to see how this might manifest in sf/ oakland. not quite landing yet

    1. the Lord thundered with loud thunder against the Philistines and threw them into such a panic that they were routed before the Israelite

      Imagine this SOUND! Thunder can be so loud and intimidating at times; I imagine this shook the earth with how loud it was.

    2. ‘Do not stop crying out to the Lord our God for us, that he may rescue us from the hand of the Philistines’ ” (1 Samuel 7:8). In response, Samuel sacrificed a lamb to God, crying out to Him on Israel’s behalf, “and the Lord answered him” (v. 9).

      Even though the Israelites knew they were God's chosen people, they did not allow that to get in the way of genuinely seeking God. Whether rich, poor, or in between, if you genuinely seek God, He will show up.

    1. At the basic level of learning, we accumulate knowledge and assimilate it into our existing frameworks. But accumulated knowledge doesn’t necessarily help us in situations where we have to apply that knowledge

      It is important to do more than simply acquire cultural knowledge; competence means knowing, being able to apply, and exhibiting cognitive flexibility, where one assimilates new information into new categories rather than creating meaning through immediate, socialized ways of thinking. Intercultural encounters can be both formal (consciously pursued) and informal (through volunteering or working in diverse settings). The example of cooking a frozen pizza illustrates the need for cognitive flexibility: the author relied on an enculturated belief that Fahrenheit is the standard of measurement, but realized she should have been more informed about Swedish systems of measurement and adapted her knowledge to another cultural context.

    2. Cognitive flexibility

      Uncertainty can be prevalent in intercultural encounters, which leads to us doubting what we should or should not do; those with low tolerances for uncertainty are more likely to exhibit lower competence or avoid future encounters while those with high tolerance are more willing to "experientially learn" and may be more intrinsically motivated.

    3. other-knowledge

      Other-knowledge can be developed by making an effort to interact outside of one's own culture or shared identity, e.g. Swedes and Americans. This leads us to become more mindful. There are various ways to build self- and other-knowledge, but it is important to vet the sources we are engaging with; also, learning another language does not immediately lead to knowledge of the other culture.

    4. Members of dominant groups are often less motivated, intrinsically and extrinsically, toward intercultural communication than members of nondominant groups, because they don’t see the incentives for doing so

      Motivation (intrinsic and extrinsic) is part of ICC but cannot alone produce ICC. People with dominant identities, for example, may engage in intercultural communication, but once they obtain a reward or get what they need out of it, they may abandon the relationship altogether, with no relational maintenance. Furthermore, non-dominant peoples are more likely to communicate interculturally when they perceive power imbalances: for example, Black individuals may speak Standard English because it is more acceptable in a corporate setting, while others suppress their sexualities. People with dominant identities may also expect the other communicator to adjust to their communication styles; this is evident when international commerce is considered, e.g. Indian cell-phone receptionists were pressured to adopt American accents and names to avoid frustrating Western clients.

    1. How is it possible to endure difficult times? Where can you find comfort in God’s promises?

      Even in difficult times, if we trust God and hold firmly to His promises, He will come through for us, because He is always our present hope in times of need.

    2. The people were condemned more than thirty times in Jeremiah for not listening to Him. Seventy years might have felt like forever, but God would be with them, and He promised that the hard season would eventually end (29:10).

      God's grace has a limit. He punished the disobedience by having the King of Babylon hold them in bondage for 70 years, but He was still with them (so as not to allow them to be completely destroyed).

      God told them He would be with them even through the difficult time, as He always provides an opening for help if only we trust Him.

    1. omniscient

      Having infinite awareness, understanding, and insight

      I don't think Intelligence maps to the "Omniscient". We as humans all have distinct intelligences.

      Wait, unless you want to go full Schizo: the First of the Seven Hermetic Principles is Mentalism.

      "The All is Mind; the Universe is Mental." —The Kybalion

      Intelligence is a byproduct of the mind. And if the universe is a mind intelligence is Omniscient.

      I still don't think this use of the word intelligence maps to human sovereignty in a useful way. Intelligence is one's capacity to model the world to make predictions. It's the decision-making process we use as resource- and meaning-seeking agents.

    2. it is a complex systems generating function for high coherency across scalable groups (where the functional scalability is proportional to the sovereignty score distribution of the population, ie, greater sovereignty gives rise to higher Dunbar numbers.

      Okay I got lost in this one,

      Does it mean that the greater your sovereignty the more people you can deal with?

      Or the greater your sovereignty the greater your influence over people.

      That makes sense: Sovereignty scales with Power (Agency). Of course you can pull an Uncle Ted and live in the woods, but that is limited Sovereignty compared with Andrew Tate, who has multiple passports, houses on multiple continents, and shell corporations to shield his wealth. They can both be Sovereign, but they are not the same type of Sovereign.

    1. All living creatures born of the flesh shall sit at last in the boat of the West, andwhen it sinks, when the boat of Magilum sinks, they are gone; but we shall go forward and fixour eyes on this monster

      This quote shows Gilgamesh telling Enkidu that he will not have peace if Enkidu leaves this battle first. Gilgamesh was trying to make Enkidu feel guilty by reminding him that when the boat of Magilum sinks, those people's lives will be lost and it will be his fault. But if he commits to the fight, he will still have to go on and face the monster.

    1. Obverse: The Slaying of the Demon Pralamba. Reverse: The Fight between Arjuna and Karna

      This watercolor painting displays the epic battle between Karna and Arjuna. The painting shows the exact depiction of how Karna and Arjuna rode on chariots during the battle in the story. At the bottom of the painting, you can see the destroyed chariots and the people who died in the battle. The medium the artist used is watercolor on paper. In those times they did not have actual watercolor paints; I assume they got the color for the paintings from plants, flowers, food, vegetables, etc.

    1. Cylinder seal Near Eastern, Iranian, Elamite Proto-Elamite 3100–2900 B.C. Medium/Technique Black basalt

      After looking at other art pieces and objects from this era and location, I have noticed that black basalt is a common medium. This medium can be used in many ways; in this era it was popular to carve in designs and artwork that showed civilization during that time.

    1. receive the same standard of care regardless of jurisdiction;

      The report assumes universal “standard care” exists, but feminist methods challenge this by emphasizing that care must be situated and responsive to culture, trauma history, and lived experience. From my personal experience, care affects everyone differently. That is why feminist methods reject “universal” standards: Harding's standpoint theory states that real care must be rooted in culture, identity, and lived experience (Harding 1987).

    2. For years, we have known that sexual assaults are among the most under-reported crimes in Canada.

      Who is “we”? Is it referring to institutions, survivors, or the public? Feminist methods interrogate whose knowledge and whose voices get centered. The report also does not acknowledge that institutions themselves contribute to under-reporting because survivors don’t trust them.

    3. Once physical distancing measures are relaxed, the SAIRC training and implementation will resume.

      The document ends with control-oriented, institutional, and systemic procedures instead of centering survivors’ voices and experiences.

    4. Most of the remaining divisions were well on their way to have their respective SAIRCs established by spring/summer 2020;

      Sexual assault is happening no matter when or where. Deflecting responsibility due to COVID-19 seems justifiable, but minimizes the urgency for current survivors and the ongoing sexual violence regardless of the barriers.

    5. Budget 2018 included $10 million over five years, and $2 million per year ongoing, for the RCMP to establish the Sexual Assault Review Team

      Change is reflected through quality, not quantity. Relying on institutional numbers, such as the number of cases solved or reviewed, is not a show of progress. Survivors need safety and trust within the system itself. More money doesn’t immediately build trust.

    6. The RCMP and myself are committed to ensuring that victims are treated fairly by the justice system

      Systemic harm can't be resolved through individual accountability; it requires structural change addressing misogyny, racism, and victim blaming. Systemic harm needs systemic change, not just statements of commitment. (Smith 2005) WHO IS SMITH?

    7. Working collaboratively with victim advocates and other experts will strengthen the RCMP's response to sexual assault crimes

      Terminology matters. Calling individuals “victims” rather than “survivors” constructs them as less empowered and strong for coming forward. Language shapes power; by using “survivor,” we recognize agency. (Lamb 1999) WHO IS LAMB?

    8. The creation of an Advisory Committee for Sexual Assault Investigations also serves as an open forum to share information on good practices

      The review performs accountability in a way that reinforces the RCMP’s own legitimacy rather than centering survivor safety. Institutions can act accountable to shore up their own legitimacy without necessarily centering survivor safety or autonomy.

    9. The RCMP established the National Sexual Assault Review Team (SART) to conduct reviews of sexual assault investigations

      Whose knowledge is used to shape the review? Survivors, outside experts, or only institutional actors? Are survivors part of this review, or is the system reviewing itself?

    10. With the goal to strengthen public trust in policing, survivors are encouraged to report these serious crimes through whatever mechanism they are most comfortable with.

      Intersecting identities such as race, gender identity, and sexuality can shape an individual's choice and ability to report. Identity shapes who feels safe engaging with the police. For Indigenous, people of colour, or queer survivors, reporting is shaped by power, history, and mistrust. (Crenshaw 1991)

    11. The RCMP is working hard to ensure that all sexual assault survivors feel comfortable bringing their allegations forward;

      How? They claim to be doing a lot but don’t show how survivors are being supported, explaining how, when, etc. This feels vague. Trust and safety need more than statements; they need actual action.

    12. To date, Sexual Assault Investigations Review Committees (SAIRCs) have been established in six (6) RCMP divisions

      Does "established" mean in practice? However, they don’t explain what functioning actually looks like in practice. “Established” might only mean legally. but not clear if survivors are being helped or if the system just created a committee and called it progress. (RCMP 2021)

    1. For these types of papers, primary research is the main focus. If you are writing about a work (including non-print works, such as a movie or a painting), it is crucial to gather information and ideas from the original work, rather than relying solely on others’ interpretations

      Always focus on the original work itself rather than relying solely on others' interpretations.

    2. This technique is appropriate when only the major ideas are relevant to your paper or when you need to simplify complex information into a few key points for your readers.

      Summarize only the major ideas.

    3. presenting background information. From there, the writer builds toward a thesis, which is traditionally placed at the end of the introduction.

      Introduction: background information, then thesis at the end.

    1. When you finalize your conclusion, make sure your text is not too repetitive. While your goal is to reintroduce your argument,

      Don't make the conclusion a repeat of what you've already written.

    1. Each Works Cited entry has 9 components. You may not use each component in the reference; however, they all serve a function to help the reader find the source you have cited. Note the punctuation after each element: Author. Title of Source. Title of Container, Other Contributors, Version, Number, Publisher, Publication date, Location.

      You may not use every one of these components, but keep them all in mind when you are citing a source.

    2. However, hyperlinks are not very useful for academic papers. Here are some reasons: Links change: The internet changes every day. Websites add and remove articles, on-line magazines and newspapers change their links. If there is only a link to a source and if that link changes, then the reader cannot find the source. Inaccessible Databases: Some of the information you will use will be from CNM databases. The readers of your article may not have access to the same database; therefore, a link is not sufficient. The reader needs to know pertinent information, such as the author’s name, title, etc., to be able to find the source.

      It's better to cite the source fully instead of just adding a hyperlink. Consider that the internet changes, that some websites require sign-up to read, and that a reader may want to print the document to use as a reference. Set the reader up for success.

    3. Start the Works Cited page on a separate page. This should be the last page of your paper. Margins and pagination (last name and page number on the top right) remain the same as the rest of the paper. Title the page Works Cited. Center the title. Do not italicize the title. Only the title is centered; the rest of the page is left-justified. The entire Works Cited should be double-spaced. Do not add a space between citations (i.e., do not add an extra double space between citations). Citations should be in alphabetical order.

      Format guidelines to be aware of

  2. social-media-ethics-automation.github.io
    1. Hernán Cortés.

      Hernán Cortés was a Spanish conquistador who led an expedition that was responsible for the fall of the Aztec Empire. This expedition brought much of what has become mainland Mexico under the rule of Castile.

    2. Margaret Kohn and Kavita Reddy. Colonialism. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2023 edition, 2023. URL: https://plato.stanford.edu/archives/spr2023/entries/colonialism/ (visited on 2023-12-10).

      This reading made me think about how tech companies like Meta operate in ways that feel similar to modern forms of colonialism. They don’t take land, but they take data, attention, and control over communication in many countries that didn’t design these platforms. The power is very one-sided. It made me question whether global platforms are really “connecting the world,” or just creating a new digital version of old power structures.

    1. In what ways do you see capitalism, socialism, and other funding models show up in the country you are from or are living in?

      I see capitalism very frequently in the US. It can be seen especially well in the business models of companies right now, where everything becomes monetized to the point where one has to pay a lot of money to get an ad-free service.

    1. Here are examples: If the source has page numbers: (Pauling 113). If the source does not have page numbers: (Pauling). If the source has page numbers: ("Bilingual Minds" 113). If the source does not have page numbers: ("Bilingual Minds").

      Use this as a reference when citing your sources

    2. This material must always be cited: a direct quote; a statistic; an idea that is not your own; someone else's opinion; concrete facts not considered "common knowledge"; knowledge not considered "common".

      Keep in mind what needs to be cited.

    3. In-text citations are used throughout your paper to credit your sources of information. In MLA style, the in-text citation in the body of the essay links to the Works Cited page at the end. This way, the reader will know which item in the Works Cited is the source of the information.

      Make sure to cite your sources and do so properly. If you do NOT cite your sources, you may be seen as plagiarizing the information you give your reader.

    1. Consequently, I place the notion of the digital critical edition of theatre at the centre of my reflection. Starting from an in-depth study of the history of dramatic publishing in the digital environment, I intend to take up the paradigm of traditional editing and to propose a new way of editing that allows us to remain in the wake of tradition while producing, with the same tool, an augmented and complementary digital version. Robert Alessi (Alessi, 2020) developed this dynamic in particular through his tool ekdosis and applied it notably to classical texts. Although there are applications to theatre, they occur within fragmentary Latin literature (Debouy, 2021); consequently, this tool proves incomplete for editing classical theatre.

      Very clear conclusion

    2. how to reconcile the different issues, quantitative and qualitative, within an editorial team, as in the project for a complete edition of Thomas Corneille

      super important

    1. Consult your instructor because they will often specify what resources you are required to use.

      Read instructions carefully and refer back to them when needed.

    2. Your sources will include both primary sources and secondary sources. As you conduct research, you will want to take detailed, careful notes about your discoveries. These notes will help trigger your memory about each article’s key ideas and your initial response to the information when you return to your sources during the writing process. As you read each source, take a minute to evaluate the reliability of each source you find.

      Keep your annotations organized and separate.

    3. The following are examples of secondary sources: Magazine articles Biographical books Literary and scientific reviews Television documentaries

      Secondary sources you should be aware of. When quoting from them, do so correctly and make sure the information is accurate and reliable.

  3. milenio-nudos.github.io
    1. These items ask about different skills, from text editing in digital services to identifying the source of an error in software.

      We should identify in this paragraph the difference between PISA and ICILS in considering the new dimensions of DigComp...

  4. social-media-ethics-automation.github.io
    1. Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York, UNITED STATES, 2018. ISBN 978-1-4798-3364-1. URL: https://orbiscascade-washington.primo.exlibrisgroup.com/permalink/01ALLIANCE_UW/8iqusu/alma99162068349301452 (visited on 2023-12-10).

      Noble shows that technology isn’t neutral — it can carry and strengthen existing racism through algorithms like search engines. This connects directly to this chapter’s point that tech creators often don’t think enough about ethical consequences. Even if they don’t intend harm, their systems can still hurt marginalized groups. It made me realize that ethical responsibility in tech isn’t just about future risks, but about real harm that is already happening.

    1. 21.2. Ethics in Tech

      In the first chapter of our book we quoted actor Kumail Nanjiani on tech innovators' lack of consideration of the ethical implications of their work. Of course, concerns about the implications of technological advancement are nothing new. In Plato's Phaedrus [u1] (~370 BCE), Socrates tells (or makes up[1]) a story from Egypt critical of the invention of writing:

      Now in those days the god Thamus was the king of the whole country of Egypt, […] [then] came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; […] [W]hen they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: […] this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

      In England in the early 1800s, Luddites [u2] were upset that textile factories were using machines to replace them, leaving them unemployed, so they sabotaged the machines. The English government sent soldiers to stop them, killing and executing many. (See also Sci-Fi author Ted Chiang on Luddites and AI [u3])

      Fig. 21.1 The start of an xkcd comic [u4] compiling a hundred years of complaints about how technology has sped up the pace of life. (full transcript of comic available at explainxkcd [u5])

      Inventors ignoring the ethical consequences of their creations is nothing new as well, and gets critiqued regularly:

      Fig. 21.2 A major theme of the movie Jurassic Park (1993) [u6] is scientists not thinking through the implications of their creations.

      Fig. 21.3 Tweet parodying how tech innovators often do blatantly unethical things [u7]

      Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show [u8] (the one Kumail Nanjiani was on, though not in this clip). But even people who thought they were doing something good regretted the consequences of their creations, such as Eli Whitney [u9], who hoped his invention of the cotton gin would reduce slavery in the United States but only made it worse; or Alfred Nobel [u10], who invented dynamite (which could be used in construction or in war) and decided to create the Nobel prizes; or Albert Einstein, regretting his role in convincing the US government to invent nuclear weapons [u11]; or Aza Raskin, regretting his invention of infinite scroll.

      [1] In response to Socrates' story, his debate partner Phaedrus says, "Yes, Socrates, you can easily invent tales of Egypt, or of any other country."

      I like how this chapter connects modern tech problems with very old concerns. From Plato’s story about writing to today’s AI and social media, people have always worried that new technologies change how we think and live. What stood out to me is that many inventors didn’t have bad intentions, but their creations still caused harm. It shows that “good intentions” are not enough — ethical thinking has to be part of the design process from the beginning, not an afterthought.

    1. 19.3. Responses to Meta's Business Strategies

      Let's look at some responses to Meta's business plan.

      19.3.1. Competition

      When Facebook started, there were already other social media platforms in use that Facebook had to compete against, but Facebook became dominant. Since then other companies have tried to compete with Facebook, with different levels of success. Google+ tried to mimic much of what Facebook did, but it got little use and never took off (not enough people to benefit from the network effect). Other social media sites have used more unique features to distinguish themselves from Facebook and get a foothold, such as Twitter with its character limit (forcing short messages, so you can see lots of posts in quick succession), Vine and then TikTok based on short videos, etc. Mastodon [s48] (part of the Fediverse [s49], a set of connected social media platforms) distinguishes itself in a different way: it is an open-source, community-funded social media network (no ads), and hopes people will join to get away from corporate control. Other social media networks have focused on parts of the world where Facebook was less dominant, so they got a foothold there first and then spread, like the social media platforms in China (e.g., Sina Weibo, QQ, and TikTok).

      19.3.2. Privacy Concerns

      Another source of responses to Meta (and similar social media sites) is concern around privacy (especially in relation to surveillance capitalism). The European Union passed the General Data Protection Regulation (GDPR) [s50], a law which forces companies to protect user information in certain ways and give users a "right to be forgotten" [s51] online. Apple is also concerned about privacy, so it introduced app tracking transparency in 2021 [s52]. In response, Facebook says Apple iOS privacy change will result in $10 billion revenue hit this year [s53]. Note that Apple can afford to be concerned with privacy like this because it does not make much money off of behavioral data. Instead, Apple's profits [s54] are mostly from hardware (e.g., iPhone) and services (e.g., iCloud, Apple Music, Apple TV+).

      I find it interesting how Meta’s biggest threat isn’t just other apps, but privacy changes. Platforms like Mastodon show that some users really care about moving away from corporate control, while Apple’s tracking restrictions show that big tech companies can also limit each other. To me, this shows that Meta’s power isn’t absolute — it’s shaped by competition, laws, and other companies, not just by what it wants.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. Overall, I have only a few minor suggestions.

      We appreciate the reviewers’ support of our study.

      1) In wild-type, Pds1 levels are high during MI and AI, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.

      The higher levels of Pds1 in meiosis I compared to meiosis II have been observed previously using immunofluorescence and live imaging1–3. Although the reasons are not completely clear, we speculate that there is insufficient time between the two divisions to re-accumulate Pds1 prior to separase re-activation.

      We agree "slightly attenuated" was confusing and we have re-worded this sentence to read "Addition of ABA at the time of prophase release resulted in Pds1 (securin) stabilisation throughout the time course, consistent with delays in both metaphase I and II".

      We do not believe that either anaphase I or II occurs in the presence of high Pds1. Western blotting represents the amount of Pds1 in the population of cells at a given time point. The time between meiosis I and II is very short even when treated with ABA. For example, in Figure 2B, spindle morphology counts show that the anaphase I peak is around 40% at its maximum (105 min) and around 40% of cells are in either metaphase I or metaphase II, and will be Pds1 positive. In contrast, due to the better efficiency of meiosis II, anaphase II hardly occurs at all in these conditions, since anaphase II spindles (and the second nuclear division) are observed at very low frequency (maximum 10%) from 165 minutes onwards. Instead, metaphase II spindles partially or fully break down, without undergoing anaphase extension. Taking the Pds1 levels from the western blot together with the spindle data leads to the conclusion that at the end of the time course, these cells are biochemically in metaphase II, but unable to maintain a robust spindle. Spindle collapse is also observed in other situations where meiotic exit fails, and potentially reflects an uncoupling of the cell cycle from the programme governing gamete differentiation3–5. We will explain this point in a revised version while referring to representative images that form evidence for this, as also requested by the reviewer below.

      2) The figures with data characterizing the system are mostly graphs showing time course of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (ie. In the Figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?

      This is an excellent suggestion and will also help clarify the point above. We will provide images of cells at the different stages. For each timepoint, 100 cells were scored; we have already included this information in the figure legends.

      3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ, but this is an example of something that is SAC dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC affect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.

      In our view, the fact that SynSAC does not come from kinetochores is a major advantage as this allows the study of the kinetochore in an unperturbed state. It is also important to note that the canonical checkpoint components are all still present in the SynSAC strains, and perturbations in kinetochore-microtubule interactions would be expected to mount a kinetochore-driven checkpoint response as normal. Indeed, it would be interesting in future work to understand how disrupting kinetochore-microtubule attachments alters kinetochore composition (presumably checkpoint proteins will be recruited) and phosphorylation but this is beyond the scope of this work. In terms of the state at which we are arresting cells – this is a true metaphase because cohesion has not been lost but kinetochore-microtubule attachments have been established. This is evident from the enrichment of microtubule regulators but not checkpoint proteins in the kinetochore purifications from metaphase I and II. While this state is expected to occur only transiently in yeast, since the establishment of proper kinetochore-microtubule attachments triggers anaphase onset, the ability to capture this properly bioriented state will be extremely informative for future studies. We appreciate the reviewers’ insight in highlighting these interesting discussion points which we will include in a revised version.

      Reviewer #1 (Significance (Required)):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.

      We appreciate the reviewer’s enthusiasm for our work.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle assembly checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II and comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they compared their datasets also to kinetochores purified from cells arrested in prophase I and, with SynSAC, in mitosis).

      I have only a couple of minor comments:

      1) I would add the Suppl Figure 1A to main Figure 1A. What is really exciting here is the arrest in metaphase II, so I don't understand why the authors characterize metaphase I in the main figure, but not metaphase II. But this is only a suggestion.

      This is a good suggestion, we will do this in our full revision.

      2) Line 197, the authors state: "...SynSAC induced a more pronounced delay in metaphase II than in metaphase I." However, in lines 229 and 240 the authors talk about a "longer delay in metaphase ..."

      Thank you for pointing this out; this is indeed a typo and we have corrected it.

      3) The authors describe striking differences for both protein abundance and phosphorylation for key kinetochore associated proteins. I found one very interesting protein that seems to be very abundant and phosphorylated in metaphase I but not metaphase II, namely Sgo1. Do the authors think that Sgo1 is not required in metaphase II anymore? (Top hit in suppl Fig 8D).

      This is indeed an interesting observation, which we plan to investigate as part of another study in the future. Indeed, data from mouse indicates that shugoshin-dependent cohesin deprotection is already absent in meiosis II in mouse oocytes6, though whether this is also true in yeast is not known. Furthermore, this does not rule out other functions of Sgo1 in meiosis II (for example promoting biorientation). We will include this point in the discussion.

      Reviewer #2 (Significance (Required)):

      The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner.

      We are grateful to the reviewer for their support.

      Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed: 1.- In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?

      For many purposes the enrichment and extended time for sample collection are sufficient, as we demonstrate here. However, as pointed out by the reviewer below, the system can be improved by use of the 4A-RASA mutations to provide a stronger arrest (see our response below). We did not experiment with higher ABA concentrations or repeated addition, since the very robust arrest achieved with the 4A-RASA mutant made this unnecessary.

      2.- Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.

      We agree that the 4A-RASA mutant is the best tool to use for the arrest and going forward this will be our approach. We collected the proteomics data and the data on the SynSAC mutant variants concurrently, so we did not know about the improved arrest at the time the proteomics experiment was done. Because very good arrest was already achieved with the unmutated SynSAC construct, we could not justify repeating the proteomics experiment which is a large amount of work using significant resources. However, we will highlight the potential of the 4A-RASA mutant more prominently in our full revision.

      3.- The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.

      We agree these are intriguing findings that highlight key differences in the wiring of the spindle checkpoint between meiosis and mitosis, as well as potential for future studies; however, currently we can only speculate as to the underlying cause. The effect of the RASA mutation in mitosis is unexpected and unexplained. However, the fact that the 4A-RASA mutation causes a stronger delay in meiosis I compared to mitosis can be explained by a greater prominence of PP1 phosphatase in meiosis. Indeed, our data (Figure 4A) show that the PP1 phosphatase Glc7 and its regulatory subunit Fin1 are highly enriched on kinetochores at all meiotic stages compared to mitosis.

      We agree that the improved growth of the RVAF mutant is intriguing and points to a role of Aurora B-mediated phosphorylation, though previous work has not supported such a role7.

      We will include a discussion of these important points in a revised version.

      4.- To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association to kinetochores.

      While we agree with the reviewer that, at first glance, normalising to no tag makes the most sense, in practice there is very low background signal in the no tag sample, which means that any random fluctuations have a big impact on the final fold change. This approach therefore introduces artefacts into the data rather than improving normalisation.

      To provide reassurance that our kinetochore immunoprecipitations are specific, and that the background (no tag) signal is indeed very low, we will provide a new supplemental figure showing volcano plots comparing kinetochore purifications at each stage with their corresponding no tag control. These plots show very clearly that the major enriched proteins are kinetochore proteins and associated factors, in all cases.

      It is also important to note that our experiment looks at relative changes of the same protein over time, which we expect to be relatively small in the whole cell lysate. We previously documented proteins that change in abundance in whole cell lysates throughout meiosis8. In this study, we found that relatively few proteins significantly change in abundance, supporting this view.

      Our aim in the current study was to understand how the relative composition of the kinetochore changes and for this, we believe that a direct comparison to Dsn1, a central kinetochore protein which we immunoprecipitated, is the most appropriate normalisation.

      5.- Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.

      We strongly agree with this point and we will re-frame the discussion to focus on the novel findings, as also raised by the other reviewers.

      Finally, minor concerns are: 1.- Meiotic progression in SynSAC strains lacking Mad1, Mad2 or Mad3 is severely affected (Fig. 1D and Supp. Fig. 1), making it difficult to assess whether, as the authors state, the metaphase delays depend on the canonical SAC cascade. In addition, as a general note, graphs displaying meiotic time courses could be improved for clarity (e.g., thinner data lines, addition of axis gridlines and external tick marks, etc.).

      We will generate the data to include a checkpoint mutant +/- ABA for direct comparison. We will take steps to improve the clarity of presentation of the meiotic timecourse graphs, though our experience is that uncluttered graphs make it easier to compare trends.

      2.- Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.

      Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed. On the other hand, if any of the 16 chromosomes mis-segregates in a given meiosis, this would result in one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.

      3.- It is surprising that, although SAC activity is proposed to be weaker in metaphase I, the levels of CPC/SAC proteins seem to be higher at this stage of meiosis than in metaphase II or mitotic metaphase (Fig. 4A-B).

      We agree, this is surprising and we will point this out in the revised discussion. We speculate that the challenge of biorienting homologs, which are held together by chiasmata rather than by back-to-back kinetochores, results in a greater requirement for error correction in meiosis I. Interestingly, the data with the RASA mutant also point to increased PP1 activity in meiosis I, and we additionally observed increased levels of PP1 (Glc7 and Fin1) on meiotic kinetochores, consistent with the idea that cycles of error correction and silencing are elevated in meiosis I.

      4.- Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).

      We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.

      5.- Several typographical errors should be corrected (e.g., "Knetochores" in Fig. 4 legend, "250uM ABA" in Supp. Fig. 1 legend, etc.).

      Thank you for pointing these out, they have been corrected.

      Reviewer #3 (Significance (Required)):

      Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner. Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed:

      1.- In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?

      2.- Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.

      3.- The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports the idea, previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.

      4.- To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association to kinetochores.

      5.- Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.

      Finally, minor concerns are:

      1.- Meiotic progression in SynSAC strains lacking Mad1, Mad2 or Mad3 is severely affected (Fig. 1D and Supp. Fig. 1), making it difficult to assess whether, as the authors state, the metaphase delays depend on the canonical SAC cascade. In addition, as a general note, graphs displaying meiotic time courses could be improved for clarity (e.g., thinner data lines, addition of axis gridlines and external tick marks, etc.).

      2.- Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.

      3.- It is surprising that, although SAC activity is proposed to be weaker in metaphase I, the levels of CPC/SAC proteins seem to be higher at this stage of meiosis than in metaphase II or mitotic metaphase (Fig. 4A-B).

      4.- Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).

      5.- Several typographical errors should be corrected (e.g., "Knetochores" in Fig. 4 legend, "250uM ABA" in Supp. Fig. 1 legend, etc.).

      Significance

      Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II, and for comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they also compared their datasets to kinetochores purified from cells arrested in prophase I and, with SynSAC, in mitosis).

      I have only a couple of minor comments:

      1) I would add the Suppl Figure 1A to main Figure 1A. What is really exciting here is the arrest in metaphase II, so I don't understand why the authors characterize metaphase I in the main figure, but not metaphase II. But this is only a suggestion.

      2) Line 197, the authors state: "...SynSAC induced a more pronounced delay in metaphase II than in metaphase I". However, in lines 229 and 240 the authors talk about a "longer delay in metaphase I compared to metaphase II"... this seems to be a mix-up.

      3) The authors describe striking differences for both protein abundance and phosphorylation for key kinetochore associated proteins. I found one very interesting protein that seems to be very abundant and phosphorylated in metaphase I but not metaphase II, namely Sgo1. Do the authors think that Sgo1 is not required in metaphase II anymore? (Top hit in suppl Fig 8D).

      Significance

      The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. I have only a few minor suggestions.

      1) In wild type, Pds1 levels are high during MI and AI, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.

      2) The figures with data characterizing the system are mostly graphs showing time courses of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (i.e. in the figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?

      3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ but this is an example of something that is SAC dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC affect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.

      Significance

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.

    1. On the international stage, the crusade led by Jacques Chirac against the Anglo-American invasion of Iraq now looks like the last gasp of a dying creed. In 2009 Nicolas Sarkozy rescinded the Gaullian withdrawal of France from Nato command.

      ?!

    2. The secretary of state, Cordell Hull, was ‘conscientious’ but ‘hampered ... by his summary understanding of what was not American’.

      Not unfair

    1. eLife Assessment

      This study, from the group that pioneered migrasome research, describes a novel vaccine platform of engineered migrasomes that behave like natural migrasomes. Importantly, this platform has the potential to overcome obstacles associated with cold chain issues for vaccines such as mRNA. In the revised version, the authors have addressed previous concerns and the results from additional experiments provide compelling evidence that features methods, data, and analyses more rigorous than the current state-of-the-art. Although the findings are important with practical implications for vaccine technology, results from additional experiments would make this an outstanding study.

    2. Reviewer #1 (Public review):

      Summary:

      Outstanding fundamental phenomenon (migrasomes) en route to becoming translationally highly significant.

      Strengths:

      Innovative approach at several levels: Migrasomes, discovered by Dr. Yu's group, are an outstanding biological phenomenon of fundamental interest and now of potentially practical value.

      Weaknesses:

      I feel that the overemphasis on practical aspects (vaccine), however important, eclipses some of the fundamental aspects that may be just as important and actually more interesting. If this can be expanded, the study would be outstanding.

      Comments on revisions: This reviewer feels that the authors have addressed all issues.

    3. Reviewer #2 (Public review):

      Summary:

      The authors' report describes a novel vaccine platform derived from a newly discovered organelle called a migrasome. First, the authors address a technical hurdle for using migrasomes as a vaccine platform. Natural migrasome formation occurs at low levels and is labor intensive; however, by understanding the molecular underpinning of migrasome formation, the authors have designed a method to make engineered migrasomes from cultured cells at higher yields utilizing a robust process. These engineered migrasomes behave like natural migrasomes. Next, the authors immunized mice with migrasomes that either expressed a model peptide or the SARS-CoV-2 spike protein. Antibodies against the spike protein were raised that could be boosted by a 2nd vaccination and these antibodies were functional as assessed by an in vitro pseudoviral assay. This new vaccine platform has the potential to overcome obstacles such as cold chain issues for vaccines like messenger RNA that require very stringent storage conditions.

      Strengths:

      The authors present very robust studies detailing the biology behind migrasome formation, and this fundamental understanding was used to form engineered migrasomes, which makes it possible to utilize migrasomes as a vaccine platform. The characterization of engineered migrasomes is thorough and establishes comparability with naturally occurring migrasomes. The biophysical characterization of the migrasomes is well done, including thermal stability and characterization of the particle size (important characterizations for a good vaccine).

      Weaknesses:

      With a new vaccine platform technology, it would be nice to compare them head-to-head against a proven technology. The authors would improve the manuscript if they made some comparisons to other vaccine platforms such as a SARS-CoV-2 mRNA vaccine or even an adjuvanted recombinant spike protein. This would demonstrate a migrasome based vaccine could elicit responses comparable to a proven vaccine technology. Additionally, understanding the integrity of the antigens expressed in their migrasomes could be useful. This could be done by looking at functional monoclonal antibody binding to their migrasomes in a confocal microscopy experiment.

      Updates after revision:

      The revised manuscript has additional experiments that I believe improve the strength of evidence presented in the manuscript and address the weaknesses of the first draft. First, they provide a comparison of the antibody responses induced by their migrasome-based platform to recombinant protein formulated in an adjuvant and show the response is comparable. Second, they provide evidence that the spike protein incorporated into their migrasomes retains structural integrity by preserving binding to monoclonal antibodies. Together, these results strengthen the paper significantly and support the claims that the novel migrasome-based vaccine platform could be useful in the vaccine development field.

    4. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      This is an excellent study by a superb investigator who discovered and is championing the field of migrasomes. This study contains a hidden "gem" - the induction of migrasomes by hypotonicity and how that happens. In summary, an outstanding fundamental phenomenon (migrasomes) en route to becoming translationally highly significant.

      Strengths:

      Innovative approach at several levels. Migrasomes - discovered by Dr Yu's group - are an outstanding biological phenomenon of fundamental interest and now of potentially practical value.

      Weaknesses:

      I feel that the overemphasis on practical aspects (vaccine), however important, eclipses some of the fundamental aspects that may be just as important and actually more interesting. If this can be expanded, the study would be outstanding.

      We sincerely thank the reviewer for the encouraging and insightful comments. We fully agree that the fundamental aspects of migrasome biology are of great importance and deserve deeper exploration.

      In line with the reviewer’s suggestion, we have expanded our discussion on the basic biology of engineered migrasomes (eMigs). A recent study by the Okochi group at the Tokyo Institute of Technology demonstrated that hypoosmotic stress induces the formation of migrasome-like vesicles, involving cytoplasmic influx and requiring cholesterol for their formation (DOI: 10.1002/1873-3468.14816, February 2024). Building on this, our study provides a detailed characterization of hypoosmotic stress-induced eMig formation, and further compares the biophysical properties of natural migrasomes and eMigs. Notably, the inherent stability of eMigs makes them particularly promising as a vaccine platform.

      Finally, we would like to note that our laboratory continues to investigate multiple aspects of migrasome biology. In collaboration with our colleagues, we recently completed a study elucidating the mechanical forces involved in migrasome formation (DOI: 10.1016/j.bpj.2024.12.029), which further complements the findings presented here.

      Reviewer #2 (Public review):

      Summary:

      The authors' report describes a novel vaccine platform derived from a newly discovered organelle called a migrasome. First, the authors address a technical hurdle in using migrasomes as a vaccine platform. Natural migrasome formation occurs at low levels and is labor intensive; however, by understanding the molecular underpinning of migrasome formation, the authors have designed a method to make engineered migrasomes from cultured cells at higher yields utilizing a robust process. These engineered migrasomes behave like natural migrasomes. Next, the authors immunized mice with migrasomes that either expressed a model peptide or the SARS-CoV-2 spike protein. Antibodies against the spike protein were raised that could be boosted by a 2nd vaccination and these antibodies were functional as assessed by an in vitro pseudoviral assay. This new vaccine platform has the potential to overcome obstacles such as cold chain issues for vaccines like messenger RNA that require very stringent storage conditions.

      Strengths:

      The authors present very robust studies detailing the biology behind migrasome formation, and this fundamental understanding was used to form engineered migrasomes, which makes it possible to utilize migrasomes as a vaccine platform. The characterization of engineered migrasomes is thorough and establishes comparability with naturally occurring migrasomes. The biophysical characterization of the migrasomes is well done, including thermal stability and characterization of the particle size (important characterizations for a good vaccine).

      Weaknesses:

      With a new vaccine platform technology, it would be nice to compare them head-to-head against a proven technology. The authors would improve the manuscript if they made some comparisons to other vaccine platforms such as a SARS-CoV-2 mRNA vaccine or even an adjuvanted recombinant spike protein. This would demonstrate a migrasome-based vaccine could elicit responses comparable to a proven vaccine technology.

      We thank the reviewer for the thoughtful evaluation and constructive suggestions, which have helped us strengthen the manuscript. 

      Comparison with proven vaccine technologies:

      In response to the reviewer’s comment, we now include a direct comparison of the antibody responses elicited by eMig-Spike and a conventional recombinant S1 protein vaccine formulated with Alum. As shown in the revised manuscript (Author response image 1), the levels of S1-specific IgG induced by the eMig-based platform were comparable to those induced by the S1+Alum formulation. This comparison supports the potential of eMigs as a competitive alternative to established vaccine platforms. 

      Author response image 1.

eMigrasome-based vaccination showed efficacy similar to that of adjuvanted recombinant spike protein. The amount of S1-specific IgG in mouse serum was quantified by ELISA on day 14 after immunization. Mice were either intraperitoneally (i.p.) immunized with recombinant Alum/S1 or intravenously (i.v.) immunized with eM-NC, eM-S, or recombinant S1. The administered doses were 20 µg/mouse for eMigrasomes, 10 µg/mouse (i.v.) or 50 µg/mouse (i.p.) for recombinant S1, and 50 µl/mouse for aluminium adjuvant.

      Assessment of antigen integrity on migrasomes:

      To address the reviewer’s suggestion regarding antigen integrity, we performed immunoblotting using antibodies against both S1 and mCherry. Two distinct bands were observed: one at the expected molecular weight of the S-mCherry fusion protein, and a higher molecular weight band that may represent oligomerized or higher-order forms of the Spike protein (Figure 5b in the revised manuscript).

      Furthermore, we performed confocal microscopy using a monoclonal antibody against Spike (anti-S). Co-localization analysis revealed strong overlap between the mCherry fluorescence and anti-Spike staining, confirming the proper presentation and surface localization of intact S-mCherry fusion protein on eMigs (Figure 5c in the revised manuscript). These results confirm the structural integrity and antigenic fidelity of the Spike protein expressed on eMigs.

      Recommendations for the authors

      Reviewer #1 (Recommendations For The Authors):

      I feel that the overemphasis on practical aspects (vaccine), however important, eclipses some of the fundamental aspects that may be just as important and actually more interesting. If this can be expanded, the study would be outstanding.

      I know that the reviewers always ask for more, and this is not the case here. Can the abstract and title be changed to emphasize the science behind migrasome formation, and possibly add a few more fundamental aspects on how hypotonic shock induces migrasomes?

      Alternatively, if the authors desire to maintain the emphasis on vaccines, can immunological mechanisms be somewhat expanded in order to - at least to some extent - explain why migrasomes are a better vaccine vehicle?

      One way or another, this reviewer is highly supportive of this study and it is really up to the authors and the editor to decide whether my comments are of use or not.

      My recommendation is to go ahead with publishing after some adjustments as per above.

We’d like to thank the reviewer for the suggestion. We have changed the title of the manuscript and modified the abstract, emphasizing the fundamental science behind the development of eMigrasomes. To gain immunological information on eMig-elicited antibody responses, we characterized the IgG subtypes induced by eM-OVA in mice and compared them to those induced by Alum/OVA. The IgG response to Alum/OVA was dominated by IgG1. In contrast, eM-OVA induced an even distribution of IgG subtypes, including IgG1, IgG2b, IgG2c, and IgG3 (Figure 4i in the revised manuscript). The ratio between IgG1 and IgG2a/c indicates whether the humoral immune response is Th1- or Th2-type. Thus, eM-OVA immunization induces a balanced Th1/Th2 immune response.

      Reviewer #2 (Recommendations For The Authors):

      The study is a very nice exploration of a new vaccine platform. This reviewer believes that a more head-to-head comparison to the current vaccine SARS-CoV-2 vaccine platform would improve the manuscript. This comparison is done with OVA antigen, but this model antigen is not as exciting as a functional head-to-head with a SARS-CoV-2 vaccine.

      I think that two other discussion points should be included in the manuscript. First, was the host-cell protein evaluated? If not, I would include that point on how issues of host cell contamination of the migrasome could play a role in the responses and safety of a vaccine. Second, I would discuss antigen incorporation and localization into the platform. For example, the full-length spike being expressed has a native signal peptide and transmembrane domain. The authors point out that a transmembrane domain can be added to display an antigen that does not have one natively expressed, however, without a signal peptide this would not be secreted and localized properly. I would suggest adding a discussion of how a non-native signal peptide would be necessary in addition to a transmembrane domain.

      We thank the reviewer for these thoughtful suggestions and fully agree that the points raised are important for the translational development of eMig-based vaccines.

      (1) Host cell proteins and potential immunogenicity:

We appreciate the reviewer’s suggestion to consider host cell protein contamination. Considering the potential clinical application of eMigrasomes in the future, we will use human cells with low immunogenicity, such as HEK-293 cells or embryonic stem cells (ESCs), to generate eMigrasomes. We will also follow a quality-control (QC) process that meets the standards of validated EV-based vaccination techniques.

      (2) Antigen incorporation and localization—signal peptide and transmembrane domain:

We also agree with the reviewer’s point that proper surface display of antigens on eMigs requires both a transmembrane domain and a signal peptide for correct trafficking and membrane anchoring. For instance, in the case of the full-length Spike protein, the native signal peptide and transmembrane domain ensure proper localization to the plasma membrane and subsequent incorporation into eMigs. In the case of OVA, a secreted protein that contains a native signal peptide yet lacks a transmembrane domain, an engineered transmembrane domain is required. For antigens that naturally contain neither feature, both a non-native signal peptide and an artificial transmembrane domain are necessary. We have clarified this point in the revised discussion and explicitly noted the requirement for a signal peptide when engineering antigens for surface display on migrasomes.


    1. eLife Assessment

This paper reports the fundamental finding that Raman spectral patterns correlate with proteome profiles. Using Raman spectra of E. coli cells from different physiological conditions, the authors found global stoichiometric regulation of proteomes. The authors' findings provide compelling evidence that stoichiometric regulation of proteomes is general, through analysis of both bacterial and human cells. In the future, a similar methodology can be applied to various tissue types and microbial species for studying proteome composition with Raman spectral patterns.

    2. Reviewer #1 (Public review):

      Summary

This work performed Raman spectral microscopy on E. coli cells under 15 different culture conditions. The authors developed a theoretical framework to construct a regression matrix that predicts proteome composition from Raman data. Specifically, this regression matrix is obtained by statistical inference from the various experimental conditions. With this model, the authors categorized co-expressed genes and illustrated how proteome stoichiometry is regulated among different culture conditions. Co-expressed gene clusters were investigated and identified as homeostasis-core, carbon-source-dependent, and stationary-phase-dependent genes. Overall, the authors demonstrate a strong and comprehensive data analysis scheme for the joint analysis of Raman and proteome datasets.

      Strengths and major contributions

Major contributions: (1) Experimentally, the authors contributed Raman datasets of E. coli under various growth conditions. (2) In data analysis, the authors developed a scheme to compare proteome and Raman datasets. Protein co-expression clusters were identified, and their biological meaning was investigated.

      Discussion and impact for the field

The Raman signature contains both proteomic and metabolomic information and is an orthogonal method for inferring the composition of biomolecules. This work is a strong initiative for introducing this powerful technique to systems biology and provides a rigorous pipeline for future data analysis. The regression matrix can be used for cross-comparison among future experimental results on proteome-Raman datasets.

      Comments on revisions:

The authors addressed all my questions nicely. In particular, the subsampling test demonstrated that with enough "distinct" physiological conditions (even for m=5) one could already explore the major modes of proteome regulation and Raman signature. The main text has been streamlined and its clarity improved. I have a minor suggestion:

(i) For equation (1), it is important to emphasize that the formula holds for every j = 1, ..., 15, and that the regression matrix B is obtained by statistical inference summarizing data from all 15 conditions.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary

This work performed Raman spectral microscopy at the single-cell level for 15 different culture conditions in E. coli. The Raman signatures are systematically analyzed and compared with the proteome dataset for the same culture conditions. With a linear model, the authors revealed a correspondence between Raman patterns and proteome expression stoichiometry, indicating that Raman spectrometry could be used for inferring proteome composition in the future. With both Raman spectra and proteome datasets, the authors categorized co-expressed genes and illustrated how proteome stoichiometry is regulated among different culture conditions. Co-expressed gene clusters were investigated and identified as homeostasis-core, carbon-source-dependent, and stationary-phase-dependent genes. Overall, the authors demonstrate a strong and solid data analysis scheme for the joint analysis of Raman and proteome datasets.

      Strengths and major contributions

      (1) Experimentally, the authors contributed Raman datasets of E. coli with various growth conditions.

      (2) In data analysis, the authors developed a scheme to compare proteome and Raman datasets. Protein co-expression clusters were identified, and their biological meaning was investigated.

      Weaknesses

The experimental measurements of Raman microscopy were conducted at the single-cell level; however, the analysis was performed by averaging across cells. The authors did not discuss whether Raman microscopy can be used to detect cell-to-cell variability under the same condition.

      We thank the reviewer for raising this important point. Though this topic is beyond the scope of our study, some of our authors have addressed the application of single-cell Raman spectroscopy to characterizing phenotypic heterogeneity in individual Staphylococcus aureus cells in another paper (Kamei et al., bioRxiv, doi: 10.1101/2024.05.12.593718). Additionally, one of our authors demonstrated that single-cell RNA sequencing profiles can be inferred from Raman images of mouse cells (Kobayashi-Kirschvink et al., Nat. Biotechnol. 42, 1726–1734, 2024). Therefore, detecting cell-to-cell variability under the same conditions has been shown to be feasible. Whether averaging single-cell Raman spectra is necessary depends on the type of analysis and the available dataset. We will discuss this in more detail in our response to Comment (1) by Reviewer #1 (Recommendation for the authors).

      Discussion and impact on the field

      Raman signature contains both proteomic and metabolomic information and is an orthogonal method to infer the composition of biomolecules. It has the advantage that single-cell level data could be acquired and both in vivo and in vitro data can be compared. This work is a strong initiative for introducing the powerful technique to systems biology and providing a rigorous pipeline for future data analysis.

      Reviewer #2 (Public review):

      Summary and strengths:

Kamei et al. observe the Raman spectra of a population of single E. coli cells in diverse growth conditions. Using LDA, Raman spectra for the different growth conditions are separated. Using previously available protein abundance data for these conditions, a linear mapping from Raman spectra in LDA space to protein abundance is derived. Notably, this linear map is condition-independent and is consequently shown to be predictive for held-out growth conditions. This is a significant result and, in my understanding, extends the Raman-to-RNA connection that has been reported earlier.

They further show that this linear map reveals something akin to bacterial growth laws (à la Scott/Hwa): certain collections of proteins show stoichiometric conservation, i.e. each group (called an SCG - stoichiometrically conserved group) maintains its stoichiometry across conditions while the overall scale depends on the conditions. Analyzing the changes in protein mass and Raman spectra under these conditions, the abundance ratios of information-processing proteins (one of the large groups, in which many proteins belong to the "information storage and processing" (ISP) cluster of orthologous groups (COG) category) remain constant. The mass of these proteins, deemed the homeostatic core, increases linearly with growth rate. Other SCGs and other proteins are condition-specific.

Notably, beyond the ISP COG, the other SCGs were identified directly using the proteome data. Taking the analysis further, they then show how the centrality of a protein - roughly measured as how many proteins it is stoichiometric with - relates to function and evolutionary conservation. Again significant results, but I am not sure if these ideas have been reported earlier, for example by the community that built protein-protein interaction maps.

      As pointed out, past studies have revealed that the function, essentiality, and evolutionary conservation of genes are linked to the topology of gene networks, including protein-protein interaction networks. However, to the best of our knowledge, their linkage to stoichiometry conservation centrality of each gene has not yet been established.

      Previously analyzed networks, such as protein-protein interaction networks, depend on known interactions. Therefore, as our understanding of the molecular interactions evolves with new findings, the conclusions may change. Furthermore, analysis of a particular interaction network cannot account for effects from different types of interactions or multilayered regulations affecting each protein species.

      In contrast, the stoichiometry conservation network in this study focuses solely on expression patterns as the net result of interactions and regulations among all types of molecules in cells. Consequently, the stoichiometry conservation networks are not affected by the detailed knowledge of molecular interactions and naturally reflect the global effects of multilayered interactions. Additionally, stoichiometry conservation networks can easily be obtained for non-model organisms, for which detailed molecular interaction information is usually unavailable. Therefore, analysis with the stoichiometry conservation network has several advantages over existing methods from both biological and technical perspectives.

      We added a paragraph explaining this important point to the Discussion section, along with additional literature.

Finally, the paper built a lot of "machinery" to connect the Ω<sub>LE</sub> space, built directly from the proteome, and the Ω<sub>B</sub> space, built from Raman data. I am unsure how that helps and have not been able to digest the 50 or so pages devoted to this.

      The mathematical analyses in the supplementary materials form the basis of the argument in the main text. Without the rigorous mathematical discussions, Fig. 6E — one of the main conclusions of this study — and Fig. 7 could never be obtained. Therefore, we believe the analyses are essential to this study. However, we clarified why each analysis is necessary and significant in the corresponding sections of the Results to improve the manuscript's readability.

      Please see our responses to comments (2) and (7) by Reviewer #1 (Recommendations for the authors) and comments (5) and (6) by Reviewer #2 (Recommendations for the authors).

      Strengths:

      The rigorous analysis of the data is the real strength of the paper. Alongside this, the discovery of SCGs that are condition-independent and that are condition-dependent provides a great framework.

      Weaknesses:

      Overall, I think it is an exciting advance but some work is needed to present the work in a more accessible way.

      We edited the main text to make it more accessible to a broader audience. Please see our responses to comments (2) and (7) by Reviewer #1 (Recommendations for the authors) and comments (5) and (6) by Reviewer #2 (Recommendations for the authors).

      Reviewer #1 (Recommendations for the authors):

      (1) The Raman spectral data is measured from single-cell imaging. In the current work, most of the conclusions are from averaged data. From my understanding, once the correspondence between LDA and proteome data is established (i.e. the matrix B) one could infer the single-cell proteome composition from B. This would provide valuable information on how proteome composition fluctuates at the single-cell level.

      We can calculate single-cell proteomes from single-cell Raman spectra in the manner suggested by the reviewer. However, we cannot evaluate the accuracy of their estimation without single-cell proteome data under the same environmental conditions. Likewise, we cannot verify variations of estimated proteomes of single cells. Since quantitatively accurate single-cell proteome data is unavailable, we concluded that addressing this issue was beyond the scope of this study.

      Nevertheless, we agree with the reviewer that investigating how proteome composition fluctuates at the single-cell level based on single-cell Raman spectra is an intriguing direction for future research. In this regard, some of our authors have studied the phenotypic heterogeneity of Staphylococcus aureus cells using single-cell Raman spectra in another paper (Kamei et al., bioRxiv, doi: 10.1101/2024.05.12.593718), and one of our authors has demonstrated that single-cell RNA sequencing profiles can be inferred from Raman images of mouse cells (Kobayashi-Kirschvink et al., Nat. Biotechnol. 42, 1726–1734, 2024). Therefore, it is highly plausible that single-cell Raman spectroscopy can also characterize proteomic fluctuations in single cells. We have added a paragraph to the Discussion section to highlight this important point.

(2) The establishment of the matrix B is quite confusing for readers who only read the main text. I suggest adding a flow chart in Figure 1 to explain the data analysis pipeline, as well as stating explicitly the dimensions of B, the LDA matrix, and the proteome matrix.

      We thank the reviewer for the suggestion. Following the reviewer's advice, we have explicitly stated the dimensions of the vectors and matrices in the main text. We have also added descriptions of the dimensions of the constructed spaces. Rather than adding another flow chart to Figure 1, we added a new table (Table 1) to explain the various symbols representing vectors and matrices, thereby improving the accessibility of the explanation.
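To make the role of the matrix B concrete for readers, the condition-averaged regression setup can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' actual pipeline: the dimensions d and p, the random data, and the least-squares estimator are our assumptions (only m = 15 matches the study).

```python
import numpy as np

# Hypothetical dimensions: m conditions, d LDA dimensions (at most m - 1),
# p protein species. Only m = 15 reflects the actual study.
rng = np.random.default_rng(0)
m, d, p = 15, 14, 1000

R = rng.normal(size=(m, d))   # condition-averaged Raman scores in LDA space
P = rng.normal(size=(m, p))   # matching proteome profiles (e.g. log abundances)

# Least-squares estimate of the d x p regression matrix B with P ≈ R @ B.
B, *_ = np.linalg.lstsq(R, P, rcond=None)

# Leave-one-condition-out check: refit without condition 0, then
# predict the held-out condition's proteome from its Raman scores.
train = np.arange(m) != 0
B_loo, *_ = np.linalg.lstsq(R[train], P[train], rcond=None)
P_pred = R[0] @ B_loo
print(P_pred.shape)  # (1000,)
```

The key property discussed in the reviews (condition independence of B) corresponds to the leave-one-out prediction remaining accurate for held-out conditions; with random data as here, only the shapes are meaningful.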

(3) One of the main contributions of this work is to demonstrate how proteome stoichiometry is regulated across different conditions. A total of m=15 conditions were tested in this study, which limits the rank of the LDA matrix to 14. Therefore, at most 14 "modes" of differential composition in a proteome can be detected.

      As a general reader, I am wondering in the future if one increases or decreases the number of conditions (say m=5 or m=50) what information can be extracted? It is conceivable that increasing different conditions with distinct cellular physiology would be beneficial to "explore" different modes of regulation for cells. As proof of principle, I am wondering if the authors could test a lower number (by sub-sampling from m=15 conditions, e.g. picking five of the most distinct conditions) and see how this would affect the prediction of proteome stoichiometry inference.

      We thank the reviewer for bringing an important point to our attention. To address the issue raised, we conducted a new subsampling analysis (Fig. S14).

As we described in the main text (Fig. 6E) and the supplementary materials, the m × m orthogonal matrix Θ represents to what extent the two spaces Ω<sub>LE</sub> and Ω<sub>B</sub> are similar (m is the number of conditions; in our main analysis, m = 15). Thus, the low-dimensional correspondence between the two spaces connected by an orthogonal transformation, such as an m-dimensional rotation, can be evaluated by examining the elements of the matrix Θ. Specifically, large off-diagonal elements of the matrix Θ mix higher dimensions and lower dimensions, making the two spaces spanned by the first few major axes appear dissimilar. Based on this property, we evaluated the vulnerability of the low-dimensional correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub> to the reduced number of conditions by measuring how close Θ was to the identity matrix when the analysis was performed on the subsampled datasets.

In the new figure (Fig. S14), we first created all possible smaller condition sets by subsampling the conditions. Next, to evaluate the closeness between the matrix Θ and the identity matrix for each smaller condition set, we generated 10,000 random orthogonal matrices of the same size as Θ. We then evaluated the probability of obtaining a higher level of low-dimensional correspondence than that of the experimental data by chance (see section 1.8 of the Supplementary Materials). This analysis was already performed in the original manuscript for the non-subsampled case (m = 15) in Fig. S9C; the new analysis systematically evaluates the correspondence for the subsampled datasets.

      The results clearly show that low-dimensional correspondence is more likely to be obtained with more conditions (Fig. S14). In particular, when the number of conditions used in the analysis exceeds five, the median of the probability that random orthogonal matrices were closer to the identity matrix than the matrix Θ calculated from subsampled experimental data became lower than 10<sup>-4</sup>. This analysis provides insight into the number of conditions required to find low-dimensional correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub>.
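The null test described above can be sketched as follows. Sampling Haar-distributed orthogonal matrices via QR decomposition is standard; the specific closeness-to-identity measure and the random stand-in for the observed Θ are our illustrative assumptions (the actual measure is defined in section 1.8 of the Supplementary Materials).

```python
import numpy as np

def random_orthogonal(m, rng):
    """Haar-distributed random m x m orthogonal matrix via QR decomposition."""
    A = rng.normal(size=(m, m))
    Q, R = np.linalg.qr(A)
    # Fix column signs so the distribution is uniform over the orthogonal group.
    return Q * np.sign(np.diag(R))

def closeness_to_identity(Q):
    """One possible closeness measure: negative Frobenius distance to I."""
    return -np.linalg.norm(Q - np.eye(Q.shape[0]))

rng = np.random.default_rng(1)
m = 15
Theta = random_orthogonal(m, rng)  # stand-in for the observed matrix Θ

# Null distribution: probability that a random orthogonal matrix is
# closer to the identity than the observed one, by chance.
n_draws = 10_000
null = np.array([closeness_to_identity(random_orthogonal(m, rng))
                 for _ in range(n_draws)])
p_value = np.mean(null >= closeness_to_identity(Theta))
print(p_value)
```

With a real Θ exhibiting strong low-dimensional correspondence, this p-value would be very small (the response reports medians below 10<sup>-4</sup> for more than five conditions); with the random stand-in used here, it is of course unremarkable.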

Which conditions are used in the analysis can change the low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub>. Therefore, it is important to clarify whether including more conditions in the analysis reduces the dependence of the low-dimensional structures on the chosen conditions. We leave this issue as a subject for future study. This issue relates to the effective dimensionality of omics profiles needed to establish the diverse physiological states of cells across conditions. Determining the minimum number of conditions needed to attain the condition-independent low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub> would provide insight into this fundamental problem. Furthermore, such an analysis would identify the range of applications of Raman spectra as a tool for capturing macroscopic properties of cells at the system level.

      We now discuss this point in the Discussion section, referring to this analysis result (Fig. S14). Please also see our reply to the comment (1) by Reviewer #2 (Recommendations for the authors).

(4) In E. coli cells, the total proteome is at mM concentration, while total metabolites are between 10 and 100 mM. Since proteins are large molecules with more functional groups, they may contribute more Raman signal (per molecule) than metabolites. Still, the meaningful quantity here is the "differential Raman signal" across conditions, not the absolute signal. I am wondering what percentage of the differential Raman signature comes from the proteome and how much from the metabolome.

      It is an important and interesting question to what extent changes in the proteome and metabolome contribute to changes in Raman spectra. Though we concluded that answering this question is beyond the scope of this study, we believe it is an important topic for future research.

      Raman spectral patterns convey the comprehensive molecular composition spanning the various omics layers of target cells. Changes in the composition of these layers can be highly correlated, and identifying their contributions to changes in Raman spectra would provide insight into the mutual correlation of different omics layers. Addressing the issue raised by the reviewer would expand the applications of Raman spectroscopy and highlight the advantage of cellular Raman spectra as a means of capturing comprehensive multi-omics information.

      We note that some studies have evaluated the contributions of proteins, lipids, nucleic acids, and glycogen to the Raman spectra of mammalian cells and how these contributions change in different states (e.g., Mourant et al., J Biomed Opt, 10(3), 031106, 2005). Additionally, numerous studies have imaged or quantified metabolites in various cell types (see, for example, Cutshaw et al., Chemical Reviews, 123(13), 8297–8346, 2023, for a comprehensive review). Extending these approaches to multiple omics layers in future studies would help resolve the issue raised by the reviewer.

(5) It is known that E. coli cells in different conditions have different cell sizes, with cell width increasing with carbon source quality and growth rate. Is this effect normalized when processing the Raman signal?

      Each spectrum was normalized by subtracting the average and dividing it by the standard deviation. This normalization minimizes the differences in signal intensities due to different cell sizes and densities. This information is shown in the Materials and Methods section of the Supplementary Materials.
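A minimal sketch of this standardization, assuming each spectrum is a 1-D intensity array; the key property is that spectra differing only by overall intensity and offset (e.g. due to cell size or density) normalize to the same pattern.

```python
import numpy as np

def normalize_spectrum(spectrum):
    """Standardize one Raman spectrum: subtract the mean intensity and
    divide by the standard deviation (as described in the Methods)."""
    spectrum = np.asarray(spectrum, dtype=float)
    return (spectrum - spectrum.mean()) / spectrum.std()

# A brighter, offset copy of the same spectral shape (e.g. a larger cell)
# normalizes to the same pattern.
base = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
scaled = 2.5 * base + 7.0  # overall intensity and offset differences
print(np.allclose(normalize_spectrum(base), normalize_spectrum(scaled)))  # True
```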

(6) I have a question about the interpretation of the centrality index. A higher centrality indicates that the protein's expression pattern is more aligned with the "mainstream" of the other proteins in the proteome. However, it is possible that the proteome has multiple "mainstream modes" (possibly with different contributions in magnitude), and the centrality seems to capture only the "primary mode". A small group of proteins could all have low centrality but have very consistent patterns with high conservation of stoichiometry. I am wondering if the authors could discuss and clarify this.

      We thank the reviewer for drawing our attention to the insufficient explanation in the original manuscript. First, we note that stoichiometry conserving protein groups are not limited to those composed of proteins with high stoichiometry conservation centrality. The SCGs 2–5 are composed of proteins that strongly conserve stoichiometry within each group but have low stoichiometry conservation centrality (Fig. 5A, 5K, 5L, and 7A). In other words, our results demonstrate the existence of the "primary mainstream mode" (SCG 1, i.e., the homeostatic core) and condition-specific "non-primary mainstream modes" (SCGs 2–5). These primary and non-primary modes are distinguishable by their position along the axis of stoichiometry conservation centrality (Fig. 5A, 5K, and 5L).

      However, a single one-dimensional axis (centrality) cannot capture all characteristics of stoichiometry-conserving architecture. In our case, the "non-primary mainstream modes" (SCGs 2–5) were distinguished from each other by multiple csLE axes.

      To clarify this point, we modified the first paragraph of the section where we first introduce csLE (Revealing global stoichiometry conservation architecture of the proteomes with csLE). We also added a paragraph to the Discussion section regarding the condition-specific SCGs 2–5.
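To illustrate the intuition behind centrality discussed above, here is a deliberately crude toy: centrality taken as the number of other proteins a protein conserves stoichiometry with (Reviewer #2's rough phrasing), using cosine similarity of expression profiles with an arbitrary threshold. This is not the paper's actual centrality measure; the data, threshold, and similarity choice are all our assumptions.

```python
import numpy as np

# Hypothetical expression matrix: rows = proteins, columns = conditions.
rng = np.random.default_rng(2)
base = rng.normal(size=6)                      # a shared "mainstream" profile
X = np.vstack([
    base * s + rng.normal(scale=0.05, size=6)  # proteins tracking the mainstream
    for s in (1.0, 2.0, 0.5)
] + [rng.normal(size=(2, 6))])                 # two unrelated proteins

# Pairwise cosine similarity of expression profiles.
U = X / np.linalg.norm(X, axis=1, keepdims=True)
C = U @ U.T

# Crude "centrality": how many other proteins each protein conserves
# stoichiometry with, under an arbitrary similarity threshold.
threshold = 0.95
centrality = (np.abs(C) > threshold).sum(axis=1) - 1  # exclude self
print(centrality)
```

In this toy, proteins tracking the shared profile get high counts while the unrelated ones get low counts regardless of how consistent they are with each other, which is exactly the limitation the reviewer raises and the condition-specific SCGs 2-5 address.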

      (7) Figures 3, 4, and 5A-I are analyses on proteome data and are not related to Raman spectral data. I am wondering if this part of the analysis can be re-organized and not disrupt the mainline of the manuscript.

We agree that the structure of this manuscript is complicated. Before submitting this manuscript to eLife, we seriously considered reorganizing it. However, we concluded that this structure was most appropriate because our focus on stoichiometry conservation cannot be explained without analyzing the coefficients of the Raman-proteome correspondence using the COG classification (see Fig. 3; note that Fig. 3A relates to Raman data). This analysis led us to examine the global stoichiometry conservation architecture of proteomes (Figs. 4 and 5) and to discover the unexpected similarity between the low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub>.

      Therefore, we decided to keep the structure of the manuscript as it is. To partially resolve this issue, however, we added references to Fig. S1, the diagram of this paper’s mainline, to several places in the main text so that readers can more easily grasp the flow of the manuscript.

      (8) Supplementary Equation (2.6) could be wrong. From my understanding of the coordinate transformation definition here, it should be [w1 ... ws] X := RHS terms in big parenthesis.

      We checked the equation and confirmed that it is correct.

      Reviewer #2 (Recommendations for the authors):

      (1) The first main result or linear map between raman and proteome linked via B is intriguing in the sense that the map is condition-independent. A speculative question I have is if this relationship may become more complex or have more condition-dependent corrections as the number of conditions goes up. The 15 or so conditions are great but it is not clear if they are often quite restrictive. For example, they assume an abundance of most other nutrients. Now if you include a growth rate decrease due to nitrogen or other limitations, do you expect this to work?

      In our previous paper (Kobayashi-Kirschvink et al., Cell Systems 7(1): 104–117.e4, 2018), we statistically demonstrated a linear correspondence between cellular Raman spectra and transcriptomes for fission yeast under 10 environmental conditions. These conditions included nutrient-rich and nutrient-limited conditions, such as nitrogen limitation. Since the Raman-transcriptome correspondence was only statistically verified in that study, we analyzed the data from the standpoint of stoichiometry conservation in this study. The results (Fig. S11 and S12) revealed a correspondence in lower dimensions similar to that observed in our main results. In addition, similar correspondences were obtained even for different E. coli strains under common culture conditions (Fig. S11 and S12). Therefore, it is plausible that the stoichiometry-conservation low-dimensional correspondence between Raman and gene expression profiles holds for a wide range of external and internal perturbations.

      We agree with the reviewer that it is important to understand how Raman-omics correspondences change with the number of conditions. To address this issue, we examined how the correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub> changes when subsampling the conditions used in the analysis. We focused on Θ, which was introduced in Fig. 5E, because the closeness of Θ to the identity matrix represents correspondence precision. We found a general trend that the low-dimensional correspondence becomes more precise as the number of conditions increases (Fig. S14). This suggests that increasing the number of conditions generally improves the correspondence rather than disrupting it.
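
      The precision criterion above can be illustrated with a small sketch. This is not our actual computation; the matrices below are hypothetical, and the relative Frobenius distance is just one convenient way to quantify how close a matrix Θ is to the identity:

      ```python
      import math

      def frobenius_distance_to_identity(theta):
          """Relative Frobenius distance ||Theta - I||_F / ||I||_F.

          Smaller values mean Theta is closer to the identity matrix,
          i.e. a more precise low-dimensional correspondence.
          """
          n = len(theta)
          dev = sum((theta[i][j] - (1.0 if i == j else 0.0)) ** 2
                    for i in range(n) for j in range(n))
          return math.sqrt(dev) / math.sqrt(n)

      # Hypothetical 3x3 matrices: one close to I, one farther away
      theta_precise = [[0.98, 0.01, 0.00],
                       [0.02, 0.97, 0.01],
                       [0.00, 0.02, 0.99]]
      theta_loose = [[0.80, 0.15, 0.05],
                     [0.10, 0.75, 0.15],
                     [0.05, 0.20, 0.85]]

      # The more precise correspondence yields the smaller distance
      assert frobenius_distance_to_identity(theta_precise) < frobenius_distance_to_identity(theta_loose)
      ```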

      We added a paragraph to the Discussion section addressing this important point. Please also refer to our response to Comment (3) of Reviewer #1 (Recommendations for the authors).

      (2) A little more explanation in the text for 3C/D would help. I am imagining 3D is the control for 3C. Minor comment - 3B looks identical to S4F but the y-axis label is different.

      We thank the reviewer for pointing out the insufficient explanation of Fig. 3C and 3D in the main text. Following this advice, we added explanations of these plots to the main text. We also added labels ("ISP COG class" and "non-ISP COG class") to the top of these two figures.

      Fig. 3B and S4F are different. For simplicity, we used the Pearson correlation coefficient in Fig. 3B. However, cosine similarity is a more appropriate measure for evaluating the degree of conservation of abundance ratios. Thus, we presented the result using cosine similarity in a supplementary figure (Fig. S4F). Please note that each point in Fig. S4F is calculated between proteome vectors of two conditions. The dimension of each proteome vector is the number of genes in each COG class.
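
      The distinction between the two measures can be sketched in a few lines (the abundance vectors below are hypothetical, not taken from our data): cosine similarity is computed on the raw abundance vectors, so it equals 1 only when ratios are conserved, whereas Pearson correlation mean-centers first and can equal 1 even when an additive offset breaks the ratios.

      ```python
      import math

      def cosine_similarity(u, v):
          """Cosine of the angle between two abundance vectors.

          Equals 1 exactly when v is a positive scalar multiple of u,
          i.e. when the abundance ratios are perfectly conserved.
          """
          dot = sum(a * b for a, b in zip(u, v))
          norm_u = math.sqrt(sum(a * a for a in u))
          norm_v = math.sqrt(sum(b * b for b in v))
          return dot / (norm_u * norm_v)

      def pearson_correlation(u, v):
          """Pearson correlation: cosine similarity after mean-centering."""
          mu, mv = sum(u) / len(u), sum(v) / len(v)
          return cosine_similarity([a - mu for a in u], [b - mv for b in v])

      # Hypothetical proteome vectors (per-gene abundances) for two conditions.
      cond_a = [1.0, 2.0, 4.0, 8.0]
      cond_scaled = [2.0 * a for a in cond_a]   # ratios conserved
      cond_offset = [a + 5.0 for a in cond_a]   # additive shift breaks the ratios

      # Pearson is 1 for both comparisons; cosine similarity drops for the
      # offset case, which is why cosine similarity better reflects
      # conservation of abundance ratios.
      assert abs(cosine_similarity(cond_a, cond_scaled) - 1.0) < 1e-9
      assert abs(pearson_correlation(cond_a, cond_offset) - 1.0) < 1e-9
      assert cosine_similarity(cond_a, cond_offset) < 0.99
      ```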

      (3) Can we see a log-log version of 4C to see how the low-abundant proteins are behaving? In fact, the same is in part true for Figure 3A.

      We added the semi-log version of the graph for SCG1 (the homeostatic core) in Fig. 4C to make low-abundant proteins more visible. Please note that the growth rates under the two stationary-phase conditions were zero; therefore, plotting this graph in log-log format is not possible.

      Fig. 3A cannot be shown as a log-log plot because many of the coefficients are negative. The insets in the graphs clarify the points near the origin.

      (4) In 5L, how should one interpret the other dots that are close to the center but not part of the SCG1? And this theme continues in 6ACD and 7A.

      The SCGs were obtained by setting a cosine similarity threshold. Therefore, proteins that are close to SCG 1 (the homeostatic core) but do not belong to it do not have a cosine similarity above the threshold with any protein in SCG 1. Fig. 7 illustrates the expression patterns of the proteins in question.

      (5) Finally, I do not fully appreciate the whole analysis of connecting Ω<sub>csLE</sub> and Ω<sub>B</sub> and plots in 6 and 7. This corresponds to a lot of linear algebra in the 50 or so pages in section 1.8 in the supplementary. If the authors feel this is crucial in some way it needs to be better motivated and explained. I philosophically appreciate developing more formalism to establish these connections but I did not understand how this (maybe even if in the future) could lead to a new interpretation or analysis or theory.

      The mathematical analyses included in the supplementary materials are important for readers who are interested in understanding the mathematics behind our conclusions. However, when preparing the original submission we judged that these arguments were too detailed for many readers, and therefore placed them in the supplementary materials.

      To better explain the motivation behind the mathematical analyses, we revised the section “Representing the proteomes using the Raman LDA axes”.

      Please also see our reply to the comment (6) by Reviewer #2 (Recommendations for the authors) below.

      (6) Along the lines of the previous point, there seem to be two separate points being made: a) there is a correspondence between Raman and proteins, and b) we can use the protein data to look at centrality, generality, SCGs, etc. And the two don't seem to be linked until the formalism of the Ωs?

      The reviewer is correct that we can calculate and analyze some of the quantities introduced in this study, such as stoichiometry conservation centrality and expression generality, without Raman data. However, it is difficult to justify introducing these quantities without analyzing the correspondence between the Raman and proteome profiles. Moreover, the definition of expression generality was derived from the analysis of Raman-proteome correspondence (see section 2.2 of the Supplementary Materials). Therefore, point b) cannot stand alone without point a) from its initial introduction.

      To improve readability and partially resolve the issue of this manuscript’s complicated structure, we added references to Fig. S1, a diagram of the paper’s mainline, to several places in the main text. Please also see our reply to comment (7) by Reviewer #1 (Recommendations for the authors).

    1. eLife Assessment

      The authors analyzed spectral properties of neural activity recorded using laminar probes while mice engaged in a global/local visual oddball paradigm. They found solid evidence for an increase in gamma (and theta in some cases) for unpredictable versus predictable stimuli, and a reduction in alpha/beta, which they consider evidence towards a "predictive routing" scheme. The study is overall important because it addresses the basis of predictive processing in the cortex, but some of the analytical choices could be better motivated, and overall, the manuscript can be improved by performing additional analyses.

    2. Reviewer #1 (Public review):

      Summary:

      The authors recorded neural activity using laminar probes while mice engaged in a global/local visual oddball paradigm. The article focuses on oscillatory activity and reports activity differences in theta, alpha/beta, and gamma bands related to predictability and prediction error.

      I think this is an important paper, providing more direct evidence for the role of signals in different frequency bands related to predictability and surprise in the sensory cortex.

      Comments:

      Below are some comments that may hopefully help further improve the quality of this already very interesting manuscript.

      (1) Introduction:

      The authors write in their introduction: "H1 further suggests a role for θ oscillations in prediction error processing as well." Without being fleshed out further, it is unclear what role this would be, or why. Could the authors expand this statement?

      (2) Limited propagation of gamma band signals:

      Some recent work (e.g. https://www.cell.com/cell-reports/fulltext/S2211-1247(23)00503-X) suggests that gamma-band signals reflect mainly entrainment of the fast-spiking interneurons, and don't propagate from V1 to downstream areas. Could the authors connect their findings to these emerging findings, suggesting no role for gamma-band activity in communication outside of the cortical column?

      (3) Paradigm:

      While I agree that the paradigm tests whether a specific type of temporal prediction can be formed, it is not a type of prediction that one would easily observe in mice, or even humans. The regularity that must be learned, in order to be able to see a reflection of predictability, integrates over 4 stimuli, each shown for 500 ms with a 500 ms blank in between (and a 1000 ms interval separating the 4th stimulus from the 1st stimulus of the next sequence). In other words, the mouse must keep in working memory three stimuli, which partly occurred more than a second ago, in order to correctly predict the fourth stimulus (and signal a 1000 ms interval as evidence for starting a new sequence).

      A problem with this paradigm is that positive findings are easier to interpret than negative findings. If mice do not show a modulation to the global oddball, is it because "predictive coding" is the wrong hypothesis, or simply because the authors generated a design that operates outside of the boundary conditions of the theory? I think the latter is more plausible. Even in more complex animals, (eg monkeys or humans), I suspect that participants would have trouble picking up this regularity and sequence, unless it is directly task-relevant (which it is not, in the current setting). Previous experiments often used simple pairs (where transitional probability was varied, eg, Meyer and Olson, PNAS 2012) of stimuli that were presented within an intervening blank period. Clearly, these regularities would be a lot simpler to learn than the highly complex and temporally spread-out regularity used here, facilitating the interpretation of negative findings (especially in early cortical areas, which are known to have relatively small temporal receptive fields).

      I am, of course, not asking the authors to redesign their study. I would like to ask them to discuss this caveat more clearly, in the Introduction and Discussion, and situate their design in the broader literature. For example, Jeff Gavornik has used much more rapid stimulus designs and observed clear modulations of spiking activity in early visual regions. I realize that this caveat may be more relevant for the spiking paper (which does not show any spiking activity modulation in V1 by global predictability) than for the current paper, but I still think it is an important general caveat to point out.

      (4) Reporting of results:

      I did not see any quantification of the strength of evidence of any of the results, beyond a general statement that all reported results pass significance at an alpha=0.01 threshold. It would be informative to know, for all reported results, what exactly the p-value of the significant cluster is; as well as for which performed tests there was no significant difference.

      (5) Cluster test:

      The authors use a three-dimensional cluster test, clustering across time, frequency, and location/channel. I am wondering how meaningful this analytical approach is. For example, there could be clusters that show an early difference at some location in low frequencies, and then a later difference in a different frequency band at another (adjacent) location. It seems a priori illogical to me to want to cluster across all these dimensions together, given that this kind of clustering appears neurophysiologically implausible/not meaningful. Can the authors motivate their choice of three-dimensional clustering, or better, facilitating interpretability, cluster eg at space and time within specific frequency bands (2d clustering)?

    3. Reviewer #2 (Public review):

      Summary:

      Sennesh and colleagues analyzed LFP data from 6 regions of rodents while they were habituated to a stimulus sequence containing a local oddball (xxxy) and later exposed to either the same (xxxY) or a deviant global oddball (xxxX). Subsequently, they were exposed to a controlled random sequence (XXXY) or a controlled deterministic sequence (xxxx or yyyy). From these, the authors looked for differences in spectral properties (both oscillatory and aperiodic) between three contrasts (only for the last stimulus of the sequence).

      (1) Deviance detection: unpredictable random (XXXY) versus predictable habituation (xxxy)

      (2) Global oddball: unpredictable global oddball (xxxX) versus predictable deterministic (xxxx), and

      (3) "Stimulus-specific adaptation:" locally unpredictable oddball (xxxY) versus predictable deterministic (yyyy).

      They found evidence for an increase in gamma (and theta in some cases) for unpredictable versus predictable stimuli, and a reduction in alpha/beta, which they consider evidence towards the "predictive routing" scheme.

      While the dataset and analyses are well-suited to test evidence for predictive coding versus alternative hypotheses, I felt that the formulation was ambiguous, and the results were not very clear. My major concerns are as follows:

      (1) The authors set up three competing hypotheses, in which H1 and H2 make directly opposite predictions. However, it must be noted that H2 is proposed for spatial prediction, where the predictability is computed from the part of the image outside the RF. This is different from the temporal prediction that is tested here. Evidence in favor of H2 is readily observed when large gratings are presented, for which there is substantially more gamma than in small images. Actually, there are multiple features in the spectral domain that should not be conflated, namely (i) the transient broadband response, which includes all frequencies, (ii) contribution from the evoked response (ERP), which is often in frequencies below 30 Hz, (iii) narrow-band gamma oscillations which are produced by large and continuous stimuli (which happen to be highly predictive), and (iv) sustained low-frequency rhythms in theta and alpha/beta bands which are prominent before stimulus onset and reduce after ~200 ms of stimulus onset. The authors should be careful to incorporate these in their formulation of PC, and in particular should not conflate narrow-band and broadband gamma.

      (2) My understanding is that any aspect of predictive coding must be present before the onset of stimulus (expected or unexpected). So, I was surprised to see that the authors have shown the results only after stimulus onset. For all figures, the authors should show results from -500 ms to 500 ms instead of zero to 500 ms.

      (3) In many cases, some change is observed in the initial ~100 ms of stimulus onset, especially for the alpha/beta and theta ranges. However, the evoked response contributes substantially in the transient period in these frequencies, and this evoked response could be different for different conditions. The authors should show the evoked responses to confirm the same, and if the claim really is that predictions are carried by genuine "oscillatory" activity, show the results after removing the ERP (as they had done for the CSD analysis).

      (4) I was surprised by the statistics used in the plots. Anything that is even slightly positive or negative is turning out to be significant. Perhaps the authors could use a more stringent criterion for multiple comparisons?

      (5) Since the design is blocked, there might be changes in global arousal levels. This is particularly important because the more predictive stimuli in the controlled deterministic stimuli were presented towards the end of the session, when the animal is likely less motivated. One idea to check for this is to do the analysis on the 3rd stimulus instead of the 4th? Any general effect of arousal/attention will be reflected in this stimulus.

      (6) The authors should also acknowledge/discuss that typical stimulus presentation/attention modulation involves both (i) an increase in broadband power early on and (ii) a reduction in low-frequency alpha/beta power. This could be just a sensory response, without having a role in sending prediction signals per se. So the predictive routing hypothesis should involve testing for signatures of prediction while ruling out other confounds related to stimulus/cognition. It is, of course, very difficult to do so, but at the same time, simply showing a reduction in low-frequency power coupled with an increase in high-frequency power is not sufficient to prove PR.

      (7) The CSD results need to be explained better - you should explain on what basis they are being called feedforward/feedback. Was LFP taken from Layer 4 LFP (as was done by van Kerkoerle et al, 2014)? The nice ">" and "<" CSD patterns (Figure 3B and 3F of their paper) in that paper are barely observed in this case, especially for the alpha/beta range.

      (8) Figure 4a-c, I don't see a reduction in the broadband signal in a compared to b in the initial segment. Maybe change the clim to make this clearer?

      (9) Figure 5 - please show the same for all three frequency ranges, show all bars (including the non-significant ones), and indicate the significance (p-values or by *, **, ***, etc) as done usually for bar plots.

      (10) Their claim of alpha/beta oscillations being suppressed for unpredictable conditions is not as evident. A figure akin to Figure 5 would be helpful to see if this assertion holds.

      (11) To investigate the prediction and violation or confirmation of expectation, it would help to look at both the baseline and stimulus periods in the analyses.

    4. Reviewer #3 (Public review):

      Summary:

      In their manuscript entitled "Ubiquitous predictive processing in the spectral domain of sensory cortex", Sennesh and colleagues perform spectral analysis across multiple layers and areas in the visual system of mice. Their results are timely and interesting as they provide a complement to a study from the same lab focussed on firing rates, instead of oscillations. Together, the present study argues for a hypothesis called predictive routing, which argues that non-predictable stimuli are gated by Gamma oscillations, while alpha/beta oscillations are related to predictions.

      Strengths:

      (1) The study contains a clear introduction, which provides a clear contrast between a number of relevant theories in the field, including their hypotheses in relation to the present data set.

      (2) The study provides a systematic analysis across multiple areas and layers of the visual cortex.

      Weaknesses:

      (1) It is claimed in the abstract that the present study supports predictive routing over predictive coding; however, this claim is nowhere in the manuscript directly substantiated. Not even the differences are clearly laid out, much less tested explicitly. While this might be obvious to the authors, it remains completely opaque to the reader, e.g., as it is also not part of the different hypotheses addressed. I guess this result is meant in contrast to reference 17, by some of the same authors, which argues against predictive coding, while the present work finds differences in the results, which they relate to spectral vs firing rate analysis (although without direct comparison).

      (2) Most of the claims about a direction of propagation of certain frequency-related activities (made in the context of Figures 2-4) are - to the eyes of the reviewer - not supported by actual analysis but glimpsed from the pictures, sometimes, with very little evidence/very small time differences to go on. To keep these claims, proper statistical testing should be performed.

      (3) Results from different areas are barely presented. While I can see that presenting them in the same format as Figures 2-4 would be quite lengthy, it might be a good idea to contrast the right columns (difference plots) across areas, rather than just the overall averages.

      (4) Statistical testing is treated very generally, which can help to improve the readability of the text; however, in the present case, this is a bit extreme, with even obvious tests not reported or not even performed (in particular in Figure 5).

      (5) The description of the analysis in the methods is rather short and, to my eye, was missing one of the key descriptions, i.e., how the CSD plots were baselined (which was hinted at in the results, but, as far as I know, not clearly described in the analysis methods). Maybe the authors could section the methods more to point out where this is discussed.

      (6) While I appreciate the efforts of the authors to formulate their hypotheses and test them clearly, the text is quite dense at times. Partly this is due to the compared conditions in this paradigm; however, it would help a lot to show a visualization of what is being compared in Figures 2-4, rather than just showing the results.

    5. Author response:

      We would like to thank the three Reviewers for their thoughtful comments and detailed feedback. We are pleased to hear that the Reviewers found our paper to be “providing more direct evidence for the role of signals in different frequency bands related to predictability and surprise” (R1), “well-suited to test evidence for predictive coding versus alternative hypotheses” (R2), and “timely and interesting” (R3).

      We perceive that the reviewers have an overall positive impression of the experiments and analyses, but find the text somewhat dense and would like to see additional statistical rigor, as well as, in some cases, additional analyses included in supplementary material. We therefore provide here a provisional letter addressing revisions we have already performed and outlining, point by point, the revisions we are planning. Each enumerated point begins with the Reviewer’s quoted text, followed by our response.

      Reviewer 1:

      (1) Introduction:

      The authors write in their introduction: "H1 further suggests a role for θ oscillations in prediction error processing as well." Without being fleshed out further, it is unclear what role this would be, or why. Could the authors expand this statement?

      We have edited the text to indicate that theta-band activity has been related to prediction error processing as an empirical observation, and must regrettably leave drawing inferences about its functional role to future work, with experiments designed specifically to draw out theta-band activity.

      (2) Limited propagation of gamma band signals:

      Some recent work (e.g. https://www.cell.com/cell-reports/fulltext/S2211-1247(23)00503-X) suggests that gamma-band signals reflect mainly entrainment of the fast-spiking interneurons, and don't propagate from V1 to downstream areas. Could the authors connect their findings to these emerging findings, suggesting no role for gamma-band activity in communication outside of the cortical column?

      We have not specifically claimed that gamma propagates between columns/areas in our recordings, only that it synchronizes synaptic current flows between laminar layers within a column/area. We nonetheless suggest that gamma can locally synchronize a column, and potentially local columns within an area via entrainment of local recurrent spiking, to update an internal prediction/representation upon onset of a prediction error. We also point the Reviewer to our Discussion section, where we state that our results fit with a model “whereby θ oscillations synchronize distant areas, enabling them to exchange relevant signals during cognitive processing.” In our present work, we therefore remain agnostic about whether theta or gamma or both (or alternative mechanisms) are at play in terms of how prediction error signals are transmitted between areas.

      (3) Paradigm:

      While I agree that the paradigm tests whether a specific type of temporal prediction can be formed, it is not a type of prediction that one would easily observe in mice, or even humans. The regularity that must be learned, in order to be able to see a reflection of predictability, integrates over 4 stimuli, each shown for 500 ms with a 500 ms blank in between (and a 1000 ms interval separating the 4th stimulus from the 1st stimulus of the next sequence). In other words, the mouse must keep in working memory three stimuli, which partly occurred more than a second ago, in order to correctly predict the fourth stimulus (and signal a 1000 ms interval as evidence for starting a new sequence).

      A problem with this paradigm is that positive findings are easier to interpret than negative findings. If mice do not show a modulation to the global oddball, is it because "predictive coding" is the wrong hypothesis, or simply because the authors generated a design that operates outside of the boundary conditions of the theory? I think the latter is more plausible. Even in more complex animals, (eg monkeys or humans), I suspect that participants would have trouble picking up this regularity and sequence, unless it is directly task-relevant (which it is not, in the current setting). Previous experiments often used simple pairs (where transitional probability was varied, eg, Meyer and Olson, PNAS 2012) of stimuli that were presented within an intervening blank period. Clearly, these regularities would be a lot simpler to learn than the highly complex and temporally spread-out regularity used here, facilitating the interpretation of negative findings (especially in early cortical areas, which are known to have relatively small temporal receptive fields).

      I am, of course, not asking the authors to redesign their study. I would like to ask them to discuss this caveat more clearly, in the Introduction and Discussion, and situate their design in the broader literature. For example, Jeff Gavornik has used much more rapid stimulus designs and observed clear modulations of spiking activity in early visual regions. I realize that this caveat may be more relevant for the spiking paper (which does not show any spiking activity modulation in V1 by global predictability) than for the current paper, but I still think it is an important general caveat to point out.

      We appreciate the Reviewer’s concern about working memory limitations in mice. Our paradigm and training followed on from previous paradigms such as Gavornik and Bear (2014), in which predictive effects were observed in mouse V1 with presentation times of 150ms and interstimulus intervals of 1500ms. In addition, we note that Jamali et al. (2024) recently utilized a similar global/local paradigm in the auditory domain with inter-sequence intervals as long as 28-30 seconds, and still observed effects of a predicted sequence (https://elifesciences.org/articles/102702). For the revised manuscript, we plan to expand on this in the Discussion section.

      That being said, as the Reviewer also pointed out, this would be a greater concern had we not found any positive findings in our study. However, even with the rather long sequence periods we used, we did find positive evidence for predictive effects, supporting the use of our current paradigm. We agree with the reviewer that these positive effects are easier to interpret than negative effects, and plan to expand upon this in the Discussion when we resubmit.

      (4) Reporting of results:

      I did not see any quantification of the strength of evidence of any of the results, beyond a general statement that all reported results pass significance at an alpha=0.01 threshold. It would be informative to know, for all reported results, what exactly the p-value of the significant cluster is; as well as for which performed tests there was no significant difference.

      For the revised manuscript, we can include the p-values after cluster-based testing for each significant cluster, as well as show data that passes a more stringent threshold of p<0.001 (1/1000) or p<0.005 (1/200) rather than our present p<0.01 (1/100).

      (5) Cluster test:

      The authors use a three-dimensional cluster test, clustering across time, frequency, and location/channel. I am wondering how meaningful this analytical approach is. For example, there could be clusters that show an early difference at some location in low frequencies, and then a later difference in a different frequency band at another (adjacent) location. It seems a priori illogical to me to want to cluster across all these dimensions together, given that this kind of clustering appears neurophysiologically implausible/not meaningful. Can the authors motivate their choice of three-dimensional clustering, or better, facilitating interpretability, cluster eg at space and time within specific frequency bands (2d clustering)?

      We are happy to include a 3D plot of a time-channel-frequency cluster in the revised manuscript to clarify our statistical approach for the reviewer. We consider our current three-dimensional cluster-testing an “unsupervised” way of uncovering significant contrasts with no theory-driven assumptions about which bounded frequency bands or layers do what.
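
      As a rough illustration of the logic of cluster-based permutation testing (reduced to one dimension for brevity; our actual test clusters jointly over time, frequency, and channel, and the data below are synthetic):

      ```python
      import random

      def time_clusters(stat, threshold):
          """Contiguous runs of time points where |stat| exceeds threshold.

          Returns a list of (start, end, mass) tuples; mass is the summed
          statistic. In the full analysis, adjacency extends over frequency
          and channel as well as time.
          """
          clusters, start = [], None
          for t, s in enumerate(stat):
              if abs(s) > threshold and start is None:
                  start = t
              elif abs(s) <= threshold and start is not None:
                  clusters.append((start, t, sum(stat[start:t])))
                  start = None
          if start is not None:
              clusters.append((start, len(stat), sum(stat[start:])))
          return clusters

      def cluster_permutation_p(cond_a, cond_b, threshold=1.0, n_perm=500, seed=0):
          """Monte-Carlo p-value for the largest observed cluster mass.

          cond_a, cond_b: lists of trials, each trial a list of per-time values.
          """
          rng = random.Random(seed)

          def mean_diff(a_trials, b_trials):
              n_t = len(a_trials[0])
              return [sum(tr[t] for tr in a_trials) / len(a_trials)
                      - sum(tr[t] for tr in b_trials) / len(b_trials)
                      for t in range(n_t)]

          obs = max((abs(m) for *_, m in
                     time_clusters(mean_diff(cond_a, cond_b), threshold)),
                    default=0.0)
          pooled = cond_a + cond_b
          count = 0
          for _ in range(n_perm):
              # Shuffle condition labels and recompute the largest cluster mass
              rng.shuffle(pooled)
              perm_a, perm_b = pooled[:len(cond_a)], pooled[len(cond_a):]
              null = max((abs(m) for *_, m in
                          time_clusters(mean_diff(perm_a, perm_b), threshold)),
                         default=0.0)
              if null >= obs:
                  count += 1
          return (count + 1) / (n_perm + 1)

      # Synthetic example: condition A has a deflection at t = 5..9, B does not
      cond_a = [[2.0 if 5 <= t < 10 else 0.0 for t in range(20)] for _ in range(8)]
      cond_b = [[0.0] * 20 for _ in range(8)]
      p = cluster_permutation_p(cond_a, cond_b)
      # p is small: almost no relabeling produces a cluster mass this large
      ```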

      Reviewer 2:

      Sennesh and colleagues analyzed LFP data from 6 regions of rodents while they were habituated to a stimulus sequence containing a local oddball (xxxy) and later exposed to either the same (xxxY) or a deviant global oddball (xxxX). Subsequently, they were exposed to a controlled random sequence (XXXY) or a controlled deterministic sequence (xxxx or yyyy). From these, the authors looked for differences in spectral properties (both oscillatory and aperiodic) between three contrasts (only for the last stimulus of the sequence).

      (1) Deviance detection: unpredictable random (XXXY) versus predictable habituation (xxxy)

      (2) Global oddball: unpredictable global oddball (xxxX) versus predictable deterministic (xxxx), and

      (3) "Stimulus-specific adaptation:" locally unpredictable oddball (xxxY) versus predictable deterministic (yyyy).

      They found evidence for an increase in gamma (and theta in some cases) for unpredictable versus predictable stimuli, and a reduction in alpha/beta, which they consider evidence towards the "predictive routing" scheme.

      While the dataset and analyses are well-suited to test evidence for predictive coding versus alternative hypotheses, I felt that the formulation was ambiguous, and the results were not very clear. My major concerns are as follows:

      We appreciate the reviewer’s concerns and outline how we will address them below:

      (1) The authors set up three competing hypotheses, in which H1 and H2 make directly opposite predictions. However, it must be noted that H2 is proposed for spatial prediction, where the predictability is computed from the part of the image outside the RF. This is different from the temporal prediction that is tested here. Evidence in favor of H2 is readily observed when large gratings are presented, for which there is substantially more gamma than in small images. Actually, there are multiple features in the spectral domain that should not be conflated, namely (i) the transient broadband response, which includes all frequencies, (ii) contribution from the evoked response (ERP), which is often in frequencies below 30 Hz, (iii) narrow-band gamma oscillations which are produced by large and continuous stimuli (which happen to be highly predictive), and (iv) sustained low-frequency rhythms in theta and alpha/beta bands which are prominent before stimulus onset and reduce after ~200 ms of stimulus onset. The authors should be careful to incorporate these in their formulation of PC, and in particular should not conflate narrow-band and broadband gamma.

      We have clarified in the manuscript that while the gamma-as-prediction hypothesis (our H2) was originally proposed in a spatial prediction domain, further work (specifically Singer (2021)) has extended the hypothesis to cover temporal-domain predictions as well.

      To address the reviewer’s point about multiple features in the spectral domain: Our analysis has specifically separated aperiodic components using FOOOF analysis (Supp. Fig. 1) and explicitly fit and tested aperiodic vs. periodic components (Supp. Figs 1&2). We did not find strong effects in the aperiodic components but did in the periodic components (Supp. Fig. 2), allowing us to be more confident in our conclusions in terms of genuine narrow-band oscillations. In the revised manuscript, we will include analysis of the pre-stimulus time window to address the reviewer’s point (iv) on sustained low frequency oscillations.
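
      For readers unfamiliar with the approach, the core idea of separating aperiodic from periodic spectral components can be sketched as follows. This toy version fits only a straight line in log-log space and omits FOOOF's iterative peak modeling; the spectrum below is synthetic:

      ```python
      import math

      def fit_aperiodic(freqs, powers):
          """Least-squares line log10(power) = offset - exponent * log10(freq).

          This captures the spirit of FOOOF's aperiodic fit (without its
          iterative peak-removal refinements).
          """
          xs = [math.log10(f) for f in freqs]
          ys = [math.log10(p) for p in powers]
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                  / sum((x - mx) ** 2 for x in xs))
          offset = my - beta * mx
          return offset, -beta  # slope reported as a positive 1/f exponent

      def periodic_residual(freqs, powers):
          """log10-power residual after removing the fitted aperiodic component.

          Peaks in the residual approximate the periodic (oscillatory) part.
          """
          offset, exponent = fit_aperiodic(freqs, powers)
          return [math.log10(p) - (offset - exponent * math.log10(f))
                  for f, p in zip(freqs, powers)]

      # Synthetic spectrum: 1/f^2 background plus a narrow-band bump near 40 Hz
      freqs = [float(f) for f in range(2, 101)]
      powers = [1.0 / f**2 * (10.0 if 38 <= f <= 42 else 1.0) for f in freqs]

      resid = periodic_residual(freqs, powers)
      peak_freq = freqs[max(range(len(resid)), key=lambda i: resid[i])]
      # peak_freq falls inside the 38-42 Hz bump
      ```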

      (2) My understanding is that any aspect of predictive coding must be present before the onset of stimulus (expected or unexpected). So, I was surprised to see that the authors have shown the results only after stimulus onset. For all figures, the authors should show results from -500 ms to 500 ms instead of zero to 500 ms.

      In our revised manuscript we will include a pre-stimulus analysis and supplementary figures with time ranges from -500 ms to 500 ms. We refrained from doing so in the initial manuscript only because our paradigm’s short interstimulus interval makes it difficult to interpret whether activity in the ISI reflects post-stimulus dynamics or pre-stimulus prediction. Nonetheless, we can readily show that in our paradigm, alpha/beta-band activity is elevated in the interstimulus interval after the offset of the previous stimulus, provided that we baseline to the pre-trial period.

      (3) In many cases, some change is observed in the initial ~100 ms of stimulus onset, especially for the alpha/beta and theta ranges. However, the evoked response contributes substantially in the transient period in these frequencies, and this evoked response could be different for different conditions. The authors should show the evoked responses to confirm the same, and if the claim really is that predictions are carried by genuine "oscillatory" activity, show the results after removing the ERP (as they had done for the CSD analysis).

      We have included an extra sentence in our Materials and Methods section clarifying that the evoked potential/ERP was removed in our existing analyses, prior to performing the spectral decomposition of the LFP signal. We also note that the FOOOF analysis we applied separates aperiodic components of the spectral signal from the strictly oscillatory ones.

      In our revised manuscript we will include an analysis of the evoked responses as suggested by the reviewer.
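      The ERP-removal step mentioned above is conventionally implemented by subtracting the trial-averaged evoked response from each single trial before spectral decomposition, leaving only induced (non-phase-locked) activity. A minimal sketch with synthetic data (illustrative only, not the authors' pipeline):

```python
import numpy as np

# Assumed shape: trials x time. Removing the ERP means subtracting the
# trial-averaged evoked response from every single trial.
rng = np.random.default_rng(1)
t = np.linspace(0, 0.5, 500)
erp = np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))          # phase-locked component
trials = erp + 0.1 * rng.standard_normal((100, t.size))     # 100 noisy trials

evoked = trials.mean(axis=0)                                # the ERP estimate
induced = trials - evoked                                   # ERP removed per trial
```

After this subtraction, each timepoint has zero mean across trials, so any remaining spectral power reflects non-phase-locked (induced) dynamics.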

      (4) I was surprised by the statistics used in the plots. Anything that is even slightly positive or negative is turning out to be significant. Perhaps the authors could use a more stringent criterion for multiple comparisons?

      As noted above to Reviewer 1 (point 4), we are happy to include supplemental figures in our resubmission showing the effects on our results of setting the statistical significance threshold with considerably greater stringency.
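      As a generic illustration of what greater stringency entails (this is not the cluster-based pipeline used in the manuscript, and the p-values are made up), compare a Bonferroni threshold with Benjamini-Hochberg FDR control:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of p-values surviving Benjamini-Hochberg FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy p-values: Bonferroni is the most stringent correction, BH less so
pvals = np.array([0.01, 0.02, 0.03, 0.5])
bh_mask = benjamini_hochberg(pvals)            # keeps the three small p-values
bonf_mask = pvals < 0.05 / len(pvals)          # keeps only p = 0.01
```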

      (5) Since the design is blocked, there might be changes in global arousal levels. This is particularly important because the more predictive stimuli in the controlled deterministic stimuli were presented towards the end of the session, when the animal is likely less motivated. One idea to check for this is to do the analysis on the 3rd stimulus instead of the 4th? Any general effect of arousal/attention will be reflected in this stimulus.

      In order to check for brain-wide effects of arousal, we plan to perform analyses similar to our existing ones on the 3rd stimulus in each block, rather than just the 4th “oddball” stimulus. Clusters that differ significantly for both the 3rd and 4th stimuli may be attributable to arousal. We will also analyze pupil size as an index of arousal to check for arousal differences between conditions in our contrasts, possibly stratifying our data before performing comparisons to equalize pupil size within contrasts. We plan to include these analyses in our resubmission.

      (6) The authors should also acknowledge/discuss that typical stimulus presentation/attention modulation involves both (i) an increase in broadband power early on and (ii) a reduction in low-frequency alpha/beta power. This could be just a sensory response, without having a role in sending prediction signals per se. So the predictive routing hypothesis should involve testing for signatures of prediction while ruling out other confounds related to stimulus/cognition. It is, of course, very difficult to do so, but at the same time, simply showing a reduction in low-frequency power coupled with an increase in high-frequency power is not sufficient to prove PR.

      Since different predictive coding and predictive processing theories make very different predictions about how predictions might be encoded in neurophysiological recordings, we have focused on prediction-error encoding in this paper.

      For the hypothesis space we have considered (H1-H3), each hypothesis makes clearly distinguishable predictions about the spectral response during the time period in the task when prediction errors should be present. As noted by the reviewer, a transient increase in broadband frequencies would be a signature of H3. Changes to oscillatory power in the gamma band would support either H1 or H2, depending on the direction of change (i.e., increasing or decreasing with prediction error). We believe our data, especially our use of FOOOF analysis and separation of periodic from aperiodic components, coupled with the three experimental contrasts, speak clearly in favor of the Predictive Routing model, but we do not claim to have “proved” it. This study provides just one datapoint, and we will acknowledge this in our revised Discussion in our resubmission.

      (7) The CSD results need to be explained better - you should explain on what basis they are being called feedforward/feedback. Was LFP taken from Layer 4 LFP (as was done by van Kerkoerle et al, 2014)? The nice ">" and "<" CSD patterns (Figure 3B and 3F of their paper) in that paper are barely observed in this case, especially for the alpha/beta range.

      We consider a feedforward pattern as flowing from L4 outwards to L2/3 and L5/6, and a feedback pattern as flowing in the opposite direction, from L1 and L6 to the middle layers. We will clarify this in the revised manuscript.

      Since gamma-band oscillations are strongest in L2/3, we re-epoched LFPs to the oscillation troughs in L2/3 in the initial manuscript. We can include in the revised manuscript equivalent plots after finding oscillation troughs in L4 instead, as well as calculating the difference in trough times within-band between layers to quantify the transmission delay and add additional rigor to our feedforward vs. feedback interpretation of the CSD data.
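      One simple way to quantify an inter-laminar transmission delay of the kind described is to find the lag that maximizes the cross-correlation between band-limited signals from two layers. The example below is purely illustrative, using synthetic narrow-band signals and cross-correlation rather than the trough re-epoching used in the manuscript:

```python
import numpy as np

def lag_ms(sig_a, sig_b, fs):
    """Delay (ms) of sig_b relative to sig_a, from the lag that
    maximizes their cross-correlation."""
    n = len(sig_a)
    xc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    return 1000.0 * (np.argmax(xc) - (n - 1)) / fs

fs = 1000.0                                   # 1 kHz sampling
t = np.arange(0, 1, 1 / fs)
l4 = np.sin(2 * np.pi * 50 * (t + 0.004))     # 50 Hz "gamma", L4 leads by 4 ms
l23 = np.sin(2 * np.pi * 50 * t)              # same rhythm, later in L2/3
delay = lag_ms(l4, l23, fs)                   # positive: L4 -> L2/3 direction
```

Note that for narrow-band signals the lag is only identifiable up to one oscillation cycle, which is why trough-based epoching or within-band delay estimates need short, well-chosen analysis windows.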

      (8) Figure 4a-c, I don't see a reduction in the broadband signal in a compared to b in the initial segment. Maybe change the clim to make this clearer?

      We are looking into the clim/colorbar and plot-generation code to figure out the visibility issue that the Reviewer has kindly pointed out to us.

      (9) Figure 5 - please show the same for all three frequency ranges, show all bars (including the non-significant ones), and indicate the significance (p-values or by *, **, ***, etc) as done usually for bar plots.

      We will add the requested bar-plots for all frequency ranges, though we note that the bars given here are the results of adding up the spectral power in the channel-time-frequency clusters that already passed significance tests and that adding secondary significance tests here may not prove informative.

      (10) Their claim of alpha/beta oscillations being suppressed for unpredictable conditions is not as evident. A figure akin to Figure 5 would be helpful to see if this assertion holds.

      As noted above, we will include the requested bar plot, as well as examining alpha/beta in the pre-stimulus time-series rather than after the onset of the oddball stimulus.

      (11) To investigate the prediction and violation or confirmation of expectation, it would help to look at both the baseline and stimulus periods in the analyses.

      We will include a supplementary figure showing the spectrograms for the baseline and full-trial periods, to examine the difference between baseline and pre-stimulus expectation.

      Reviewer 3:

      Summary:

      In their manuscript entitled "Ubiquitous predictive processing in the spectral domain of sensory cortex", Sennesh and colleagues perform spectral analysis across multiple layers and areas in the visual system of mice. Their results are timely and interesting as they provide a complement to a study from the same lab focussed on firing rates instead of oscillations. Together, these studies argue for a hypothesis called predictive routing, which holds that non-predictable stimuli are gated by gamma oscillations, while alpha/beta oscillations are related to predictions.

      Strengths:

      (1) The study contains a clear introduction, which provides a clear contrast between a number of relevant theories in the field, including their hypotheses in relation to the present data set.

      (2) The study provides a systematic analysis across multiple areas and layers of the visual cortex.”

      We thank the Reviewer for their kind comments.

      Weaknesses:

      (1) It is claimed in the abstract that the present study supports predictive routing over predictive coding; however, this claim is nowhere in the manuscript directly substantiated. Not even the differences are clearly laid out, much less tested explicitly. While this might be obvious to the authors, it remains completely opaque to the reader, e.g., as it is also not part of the different hypotheses addressed. I guess this result is meant in contrast to reference 17, by some of the same authors, which argues against predictive coding, while the present work finds differences in the results, which they relate to spectral vs firing rate analysis (although without direct comparison).

      We agree that in this manuscript we should restrict ourselves to the hypotheses that were directly tested. We have revised our abstract accordingly, and softened our claim to note only that our LFP results are compatible with predictive routing.

      (2) Most of the claims about a direction of propagation of certain frequency-related activities (made in the context of Figures 2-4) are - to the eyes of the reviewer - not supported by actual analysis but glimpsed from the pictures, sometimes, with very little evidence/very small time differences to go on. To keep these claims, proper statistical testing should be performed.

      In our revised manuscript, we will either substantiate (with quantification of CSD delays between layers) or soften the claims about feedforward/feedback direction of flow within the cortical column.

      (3) Results from different areas are barely presented. While I can see that presenting them in the same format as Figures 2-4 would be quite lengthy, it might be a good idea to contrast the right columns (difference plots) across areas, rather than just the overall averages.

      In our revised manuscript we will gladly include a supplementary figure showing the right-column difference plots across areas, in order to make sure to include aspects of our dataset that span up and down the cortical hierarchy.

      (4) Statistical testing is treated very generally, which can help to improve the readability of the text; however, in the present case, this is a bit extreme, with even obvious tests not reported or not even performed (in particular in Figure 5).

      We appreciate the Reviewer’s concern for statistical rigor, and as noted to the other reviewers, we can add different levels of statistical description and report the p-values associated with specific clusters. Regarding Figure 5, we must point out that the bar heights were computed from clusters that had already been subjected to statistical testing and found significant. We could add a supplementary figure that considers untested narrowband activity and tests it only in the “bar height” domain, if the Reviewer would like.

      (5) The description of the analysis in the methods is rather short and, to my eye, was missing one of the key descriptions, i.e., how the CSD plots were baselined (which was hinted at in the results, but, as far as I know, not clearly described in the analysis methods). Maybe the authors could section the methods more to point out where this is discussed.

      We have added some elaboration to our Materials and Methods section, especially to specify that CSD, having physical rather than arbitrary units, does not require baselining.

      (6) While I appreciate the efforts of the authors to formulate their hypotheses and test them clearly, the text is quite dense at times. Partly this is due to the compared conditions in this paradigm; however, it would help a lot to show a visualization of what is being compared in Figures 2-4, rather than just showing the results.

      In the revised manuscript we will add a visual aid for the three contrasts we consider.

      We are happy to inform the editors that we have implemented, for the Reviewed Preprint, the direct textual Recommendations for the Authors given by Reviewers 2 and 3. We will implement the suggested Figure changes in our revised manuscript. We thank them for their feedback in strengthening our manuscript.

  5. jus-mer.github.io
    1. Research strategy

      Add the ISSP as the central study for this topic in international comparison, track cumulative questions, and mention that this has been a fundamental element of the market justice agenda.

    1. Brain has five ‘eras’, scientists say – with adult mode not starting until early 30s
      • A new study from Cambridge scientists identifies five distinct ages or structural eras of the human brain throughout the average lifespan.
      • Four major brain development turning points occur around the ages of 9, 32, 66, and 83 years.
      • These eras represent different phases of neural network organization, integration, and segregation, correlating with key cognitive, behavioral, and mental health outcomes.
      • The first stage, from birth to about 9 years, involves decreasing global integration and increasing local segregation in the brain's networks.
      • The second stage, spanning ages 9 to 32, labeled "adolescence," shows increasing network integration and efficiency and is when "adult mode" of brain wiring begins.
      • From 32 to 66 years ("adulthood"), there is a transition to decreased integration but increased segregation.
      • The study sheds light on why adolescence may last until the early 30s and how brain efficiency and topology change across life.
      • Understanding these phases may inform about critical periods for cognitive development and age-related cognitive decline.

      Hacker News Discussion

      • The discussion briefly mentions socioeconomic impacts, noting that median income almost doubles between ages 23 and 35, which aligns with the brain development "adult mode" onset around early 30s.
      • Other comments are limited and fragmented, mostly consisting of quick reactions and some contextual mentions without deep analysis of the study.
      • There is a general acknowledgment of the relevance of the study's findings for understanding cognitive and life milestone transitions, but no extended debate or critique.
    1. eLife Assessment

      Mark and colleagues developed and validated a valuable method for examining subspace generalization in fMRI data and applied it to understand whether the entorhinal cortex uses abstract representations that generalize across different environments with the same structure. The manuscript presents convincing evidence for the conclusion that abstract entorhinal representations of hexagonal associative structures generalize across different stimulus sets.

    2. Reviewer #1 (Public review):

      Summary:

      This study develops and validates a neural subspace similarity analysis for testing whether neural representations of graph structures generalize across graph size and stimulus sets. The authors show the method works in rat grid and place cell data, finding that grid but not place cells generalize across different environments, as expected. The authors then perform additional analyses and simulations to show that this method should also work on fMRI data. Finally, the authors test their method on fMRI responses from entorhinal cortex (EC) in a task that involves graphs that vary in size (and stimulus set) and statistical structure (hexagonal and community). They find neural representations of stimulus sets in lateral occipital complex (LOC) generalize across statistical structure and that EC activity generalizes across stimulus sets/graph size, but only for the hexagonal structures.

      Strengths:

      (1) The overall topic is very interesting and timely and the manuscript is well written.

      (2) The method is clever and powerful. It could be important for future research testing whether neural representations are aligned across problems with different state manifestations.

      (3) The findings provide new insights into generalizable neural representations of abstract task states in entorhinal cortex.

      Weaknesses:

      (1) There are two design confounds that are not sufficiently discussed.

      (1.1) First, hexagonal and community structures are confounded by training order. All subjects learned the hexagonal graph always before the community graph. As such, any differences between the two graphs could be explained (in theory) by order effects (although this is unlikely). However, because community and hexagonal structures shared the same stimuli, it is possible that subjects had to find ways to represent the community structures separately from the hexagonal structures. This could potentially explain why there was no generalization across graph size for community structures.

      (1.2) Second, subjects had more experience with the hexagonal and community structures before and during fMRI scanning. This is another possible reason why there was no generalization for the community structure.

      (2) The authors include the results from a searchlight analysis to show specificity of the effects for EC. A more convincing way (in my opinion) to show specificity would be to test for (and report the results) of a double dissociation between the visual and structural contrast in two independently defined regions (e.g., anatomical ROIs of LOC and EC). This would substantiate the point that EC activity generalizes across structural similarity while sensory regions like LOC generalize across visual similarity.

    3. Reviewer #2 (Public review):

      Summary:

      Mark and colleagues test the hypothesis that entorhinal cortical representations may contain abstract structural information that facilitates generalization across structurally similar contexts. To do so, they use a method called "subspace generalization" designed to measure abstraction of representations across different settings. The authors validate the method using hippocampal place cells and entorhinal grid cells recorded in a spatial task, then perform simulations that support that it might be useful in aggregated responses such as those measured with fMRI. Then the method is applied to fMRI data that required participants to learn relationships between images in one of two structural motifs (hexagonal grids versus community structure). They show that the BOLD signal within an entorhinal ROI shows increased measures of subspace generalization across different tasks with the same hexagonal structure (as compared to tasks with different structures) but that there was no evidence for the complementary result (ie. increased generalization across tasks that share community structure, as compared to those with different structures). Taken together, this manuscript describes and validates a method for identifying fMRI representations that generalize across conditions and applies it to reveal entorhinal representations that emerge across specific shared structural conditions.

      Strengths:

      I found this paper interesting both in terms of its methods and its motivating questions. The question asked is novel and the methods employed are new - and I believe this is the first time that they have been applied to fMRI data. I also found the iterative validation of the methodology to be interesting and important - showing persuasively that the method could detect a target representation - even in the face of random combination of tuning and with the addition of noise, both being major hurdles to investigating representations using fMRI.

      Weaknesses:

      The primary weakness of the paper in terms of empirical results is that the representations identified in EC had no clear relationship to behavior, raising questions about their functional importance.

      The method developed is a clearly valuable tool that can serve as part of a larger battery of analysis techniques, but a small weakness on the methodological side is that for a given dataset, it might be hard to determine whether the method developed here would be better or worse than alternative methods.

    4. Reviewer #3 (Public review):

      Summary:

      The article explores the brain's ability to generalize information, with a specific focus on the entorhinal cortex (EC) and its role in learning and representing structural regularities that define relationships between entities in networks. The research provides empirical support for the longstanding theoretical and computational neuroscience hypothesis that the EC is crucial for structure generalization. It demonstrates that EC codes can generalize across non-spatial tasks that share common structural regularities, regardless of the similarity of sensory stimuli and network size.

      Strengths:

      At first glance, a potential limitation of this study appears to be its application of analytical methods originally developed for high-resolution animal electrophysiology (Samborska et al., 2022) to the relatively coarse and noisy signals of human fMRI. Rather than sidestepping this issue, however, the authors embrace it as a methodological challenge. They provide compelling empirical evidence and biologically grounded simulations to show that key generalization properties of entorhinal cortex representations can still be robustly detected. This not only validates their approach but also demonstrates how far non-invasive human neuroimaging can be pushed. The use of multiple independent datasets and carefully controlled permutation tests further underscores the reliability of their findings, making a strong case that structural generalization across diverse task environments can be meaningfully studied even in abstract, non-spatial domains that are otherwise difficult to investigate in animal models.

      Weaknesses:

      While this study provides compelling evidence for structural generalization in the entorhinal cortex (EC), several limitations remain that pave the way for promising future research. One issue is that the generalization effect was statistically robust in only one task condition, with weaker effects observed in the "community" condition. This raises the question of whether the null result genuinely reflects a lack of EC involvement, or whether it might be attributable to other factors such as task complexity, training order, or insufficient exposure, possibilities that the authors acknowledge as open questions. Moreover, although the study leverages fMRI to examine EC representations in humans, it does not clarify which specific components of EC coding (such as grid cells versus other spatially tuned but non-grid codes) underlie the observed generalization. While electrophysiological data in animals have begun to address this, the human experiments do not disentangle the contributions of these different coding types. This leaves unresolved the important question of what makes EC representations uniquely suited for generalization, particularly given that similar effects were not observed in other regions known to contain grid cells, such as the medial prefrontal cortex (mPFC) or posterior cingulate cortex (PCC). These limitations point to important future directions for better characterizing the computational role of the EC and its distinctiveness within the broader network supporting learning and decision making based on cognitive maps.

    5. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study develops and validates a neural subspace similarity analysis for testing whether neural representations of graph structures generalize across graph size and stimulus sets. The authors show the method works in rat grid and place cell data, finding that grid but not place cells generalize across different environments, as expected. The authors then perform additional analyses and simulations to show that this method should also work on fMRI data. Finally, the authors test their method on fMRI responses from the entorhinal cortex (EC) in a task that involves graphs that vary in size (and stimulus set) and statistical structure (hexagonal and community). They find neural representations of stimulus sets in lateral occipital complex (LOC) generalize across statistical structure and that EC activity generalizes across stimulus sets/graph size, but only for the hexagonal structures.

      Strengths:

      (1) The overall topic is very interesting and timely and the manuscript is well-written.

      (2) The method is clever and powerful. It could be important for future research testing whether neural representations are aligned across problems with different state manifestations.

      (3) The findings provide new insights into generalizable neural representations of abstract task states in the entorhinal cortex.

      We thank the reviewer for their kind comments and clear summary of the paper and its strengths.

      Weaknesses:

      (1) The manuscript would benefit from improving the figures. Moreover, the clarity could be strengthened by including conceptual/schematic figures illustrating the logic and steps of the method early in the paper. This could be combined with an illustration of the remapping properties of grid and place cells and how the method captures these properties.

      We agree with the reviewer and have added a schematic figure of the method (figure 1a).

      (2) Hexagonal and community structures appear to be confounded by training order. All subjects learned the hexagonal graph always before the community graph. As such, any differences between the two graphs could thus be explained (in theory) by order effects (although this is practically unlikely). However, given community and hexagonal structures shared the same stimuli, it is possible that subjects had to find ways to represent the community structures separately from the hexagonal structures. This could potentially explain why the authors did not find generalizations across graph sizes for community structures.

      We thank the reviewer for their comments. We agree that the null result regarding the community structures does not mean that EC doesn’t generalise over these structures, and that the training order could in theory contribute to the lack of an effect. The decision to keep the asymmetry of the training order was deliberate: we chose this order based on our previous study (Mark et al. 2020), where we showed that learning a community structure first changes the learning strategy for subsequent graphs. We could perhaps have overcome this by increasing the training periods, but (1) the training period is already very long; and (2) there would still be an asymmetry, because the group that first learns the community structure will struggle more in learning the hexagonal graph than vice versa, as shown in Mark et al. 2020.

      We have added the following sentences on this decision to the Methods section:

      “We chose to first teach the hexagonal graphs to all participants, rather than randomizing the order, because of previous results showing that first learning a community structure changes participants’ learning strategy (Mark et al. 2020).”

      (3) The authors include the results from a searchlight analysis to show the specificity of the effects of EC. A better way to show specificity would be to test for a double dissociation between the visual and structural contrast in two independently defined regions (e.g., anatomical ROIs of LOC and EC).

      Thanks for this suggestion. We indeed tried to run the analysis in a whole-ROI approach, but this did not result in a significant effect in EC. Importantly, we disagree with the reviewer that this is a “better way to show specificity” than the searchlight approach. In our view, the two analyses differ with respect to the spatial extent of the representation they test for. The searchlight approach is testing for a highly localised representation on the scale of small spheres with only 100 voxels. The signal of such a localised representation is likely to be drowned in the noise in an analysis that includes thousands of voxels which mostly don’t show the effect - as would be the case in the whole-ROI approach.

      (4) Subjects had more experience with the hexagonal and community structures before and during fMRI scanning. This is another confound, and possible reason why there was no generalization across stimulus sets for the community structure.

      See our response to comment (2).

      Reviewer #2 (Public review):

      Summary:

      Mark and colleagues test the hypothesis that entorhinal cortical representations may contain abstract structural information that facilitates generalization across structurally similar contexts. To do so, they use a method called "subspace generalization" designed to measure abstraction of representations across different settings. The authors validate the method using hippocampal place cells and entorhinal grid cells recorded in a spatial task, then perform simulations that support that it might be useful in aggregated responses such as those measured with fMRI. Then the method is applied to fMRI data that required participants to learn relationships between images in one of two structural motifs (hexagonal grids versus community structure). They show that the BOLD signal within an entorhinal ROI shows increased measures of subspace generalization across different tasks with the same hexagonal structure (as compared to tasks with different structures) but that there was no evidence for the complementary result (ie. increased generalization across tasks that share community structure, as compared to those with different structures). Taken together, this manuscript describes and validates a method for identifying fMRI representations that generalize across conditions and applies it to reveal entorhinal representations that emerge across specific shared structural conditions.

      Strengths:

      I found this paper interesting both in terms of its methods and its motivating questions. The question asked is novel and the methods employed are new - and I believe this is the first time that they have been applied to fMRI data. I also found the iterative validation of the methodology to be interesting and important - showing persuasively that the method could detect a target representation - even in the face of a random combination of tuning and with the addition of noise, both being major hurdles to investigating representations using fMRI.

      We thank the reviewer for their kind comments and the clear summary of our paper.

      Weaknesses:

      In part because of the thorough validation procedures, the paper came across to me as a bit of a hybrid between a methods paper and an empirical one. However, I have some concerns, both on the methods development/validation side, and on the empirical application side, which I believe limit what one can take away from the studies performed.

      We thank the reviewer for the comment. We agree that the paper comes across as a bit of a methods-empirical hybrid. We chose to do this because we believe (as the reviewer also points out) that there is value in both aspects of the paper.

      Regarding the methods side, while I can appreciate that the authors show how the subspace generalization method "could" identify representations of theoretical interest, I felt like there was a noticeable lack of characterization of the specificity of the method. Based on the main equation in the results section of the paper, it seems like the primary measure used here would be sensitive to overall firing rates/voxel activations, variance within specific neurons/voxels, and overall levels of correlation among neurons/voxels. While I believe that reasonable pre-processing strategies could deal with the first two potential issues, the third seems a bit more problematic - as obligate correlations among neurons/voxels surely exist in the brain and persist across context boundaries that are not achieving any sort of generalization (for example neurons that receive common input, or voxels that share spatial noise). The comparative approach (ie. computing difference in the measure across different comparison conditions) helps to mitigate this concern to some degree - but not completely - since if one of the conditions pushes activity into strongly spatially correlated dimensions, as would be expected if univariate activations were responsive to the conditions, then you'd expect generalization (driven by shared univariate activation of many voxels) to be specific to that set of conditions.

      We thank the reviewer for their comments. We would like to point out that we demean each voxel across all states/piles (3-picture sequences) within a given graph/task (what the reviewer is calling “a condition”). Hence no shared univariate activation of many voxels in response to a graph enters the computation, and there is no sensitivity to the overall firing rate/voxel activation. Our calculation captures the variance across states within a task (here, a graph), over and above the univariate effect of graph activity. In addition, we spatially pre-whiten the data within each searchlight, meaning that voxels with high noise variance are downweighted and noise correlations between voxels are removed prior to applying our method.
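      A minimal sketch of the demeaning and pre-whitening steps just described, with random data standing in for one searchlight's betas; the matrix sizes and the diagonal noise-covariance estimate are illustrative assumptions, not the actual pipeline:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical activation matrix for one searchlight and one graph:
      # rows = voxels, columns = states/piles.
      B = rng.normal(size=(100, 10))

      # 1) Demean each voxel across the states of this graph, removing any
      #    shared univariate response to the graph as a whole.
      B_demeaned = B - B.mean(axis=1, keepdims=True)

      # 2) Spatially pre-whiten with an estimated voxel noise covariance
      #    (a toy diagonal estimate here): high-noise voxels are
      #    down-weighted and noise correlations between voxels are removed.
      noise_cov = np.diag(rng.uniform(0.5, 2.0, size=100))
      whitener = np.linalg.inv(np.linalg.cholesky(noise_cov))
      B_white = whitener @ B_demeaned
      ```

      With a full (non-diagonal) noise-covariance estimate the same Cholesky-based whitener would also remove spatial noise correlations, which is the property relevant to the reviewer's concern.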

      A second issue in terms of the method is that there is no comparison to simpler available methods. For example, given the aims of the paper, and the introduction of the method, I would have expected the authors to take the Neuron-by-Neuron correlation matrices for two conditions of interest, and examine how similar they are to one another, for example by correlating their lower triangle elements. Presumably, this method would pick up on most of the same things - although it would notably avoid interpreting high overall correlations as "generalization" - and perhaps paint a clearer picture of exactly what aspects of correlation structure are shared. Would this method pick up on the same things shown here? Is there a reason to use one method over the other?

      We thank the reviewer for this important and interesting point. We agree that calculating the correlation between the upper-triangular elements of the covariance or correlation matrices picks up similar, but not identical, aspects of the data (see the mathematical explanation added to the supplementary material). When we repeated the searchlight analysis and calculated the correlation between the upper-triangular entries of the Pearson correlation matrices, we obtained an effect in the EC, though weaker than with our subspace generalization method (t=3.9; the effect did not survive multiple comparisons). Similar results were obtained with the correlation between the upper-triangular elements of the covariance matrices (t=3.8; the effect did not survive multiple comparisons).

      The difference between the two methods is twofold: 1) Our method is based on the covariance matrix and not the correlation matrix, i.e. a difference in normalisation. We realised that in the main text of the original paper we mistakenly wrote “correlation matrix” rather than “covariance matrix” (though our equations did correctly show the covariance matrix); we have corrected this mistake in the revised manuscript. 2) The weighting of the variance explained in the direction of each eigenvector differs between the methods, with some benefits of our method for identifying low-dimensional representations and for robustness to strong spatial correlations. We have added a section, “Subspace Generalisation vs correlating the Neuron-by-Neuron correlation matrices”, to the supplementary information with a mathematical explanation of these differences.
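      The two measures being compared can be sketched as follows. This is a minimal numpy illustration on random data; the matrix sizes, function names, and the discrete area-under-the-curve approximation are assumptions for illustration, not the paper's code:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Toy demeaned activation matrices (voxels x states) for two graphs.
      B1 = rng.normal(size=(100, 10))
      B2 = rng.normal(size=(100, 10))

      def subspace_generalisation_auc(Ba, Bb):
          """Area under the cumulative curve of Bb's variance explained by
          the eigenvectors of Ba's voxel-by-voxel covariance."""
          cov_a = np.cov(Ba)                         # (100, 100) covariance
          eigvals, eigvecs = np.linalg.eigh(cov_a)
          eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]  # descending order
          proj_var = np.var(eigvecs.T @ Bb, axis=1)  # Bb variance along each PC
          cum = np.cumsum(proj_var) / proj_var.sum()
          return cum.mean()                          # discrete AUC approximation

      def upper_triangle_correlation(Ba, Bb):
          """The simpler alternative: correlate the upper-triangular entries
          of the two voxel-by-voxel covariance matrices."""
          iu = np.triu_indices(Ba.shape[0], k=1)
          return np.corrcoef(np.cov(Ba)[iu], np.cov(Bb)[iu])[0, 1]

      auc = subspace_generalisation_auc(B1, B2)
      r = upper_triangle_correlation(B1, B2)
      ```

      The first measure weights eigenvectors by their rank-ordered contribution, which is what gives it the stated advantage for low-dimensional structure; the second treats all matrix entries equally.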

      Regarding the fMRI empirical results, I have several concerns, some of which relate to concerns with the method itself described above. First, the spatial correlation patterns in fMRI data tend to be broad and will differ across conditions depending on variability in univariate responses (ie. if a condition contains some trials that evoke large univariate activations and others that evoke small univariate activations in the region). Are the eigenvectors that are shared across conditions capturing spatial patterns in voxel activations? Or, related to another concern with the method, are they capturing changing correlations across the entire set of voxels going into the analysis? As you might expect if the dynamic range of activations in the region is larger in one condition than the other?

      This is a searchlight analysis, so it captures the activity patterns within nearby voxels. Indeed, as we show in our simulations, areas with high activity, and therefore high signal-to-noise, will yield a stronger signal with our method as well; note that this is true of most measures.

      My second concern is that, beyond the specificity of the results, they provide only modest evidence for the key claims in the paper. The authors show a statistically significant result in the Entorhinal Cortex in one out of two conditions that they hypothesized they would see it. However, the effect is not particularly large. There is currently no examination of what the actual eigenvectors that transfer are doing/look like/are representing, nor how the degree of subspace generalization in EC may relate to individual differences in behavior, making it hard to assess the functional role of the relationship. So, at the end of the day, while the methods developed are interesting and potentially useful, I found the contributions to our understanding of EC representations to be somewhat limited.

      We agree with this point, yet believe that the results still shed light on EC functionality. Unfortunately, we could not find a correlation between behavioral measures and the fMRI effect.

      Reviewer #3 (Public review):

      Summary:

      The article explores the brain's ability to generalize information, with a specific focus on the entorhinal cortex (EC) and its role in learning and representing structural regularities that define relationships between entities in networks. The research provides empirical support for the longstanding theoretical and computational neuroscience hypothesis that the EC is crucial for structure generalization. It demonstrates that EC codes can generalize across non-spatial tasks that share common structural regularities, regardless of the similarity of sensory stimuli and network size.

      Strengths:

      (1) Empirical Support: The study provides strong empirical evidence for the theoretical and computational neuroscience argument about the EC's role in structure generalization.

      (2) Novel Approach: The research uses an innovative methodology and applies the same methods to three independent data sets, enhancing the robustness and reliability of the findings.

      (3) Controlled Analysis: The results are robust when tested against well-controlled data and/or permutations.

      (4) Generalizability: By integrating data from different sources, the study offers a comprehensive understanding of the EC's role, strengthening the overall evidence supporting structural generalization across different task environments.

      Weaknesses:

      A potential criticism might arise from the fact that the authors applied innovative methods originally used in animal electrophysiology data (Samborska et al., 2022) to noisy fMRI signals. While this is a valid point, it is noteworthy that the authors provide robust simulations suggesting that the generalization properties in EC representations can be detected even in low-resolution, noisy data under biologically plausible assumptions. I believe this is actually an advantage of the study, as it demonstrates the extent to which we can explore how the brain generalizes structural knowledge across different task environments in humans using fMRI. This is crucial for addressing the brain's ability in non-spatial abstract tasks, which are difficult to test in animal models.

      While focusing on the role of the EC, this study does not extensively address whether other brain areas known to contain grid cells, such as the mPFC and PCC, also exhibit generalizable properties. Additionally, it remains unclear whether the EC encodes unique properties that differ from those of other systems. As the authors noted in the discussion, I believe this is an important question for future research.

      We thank the reviewer for their comments. We agree with the reviewer that this is a very interesting question. We tried to look for effects in the mPFC; we did not obtain results strong enough to report in the main manuscript, but we do report a small effect in the supplementary material.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) I wonder how important the PCA on B1 (voxel-by-state matrix from environment 1) and the computation of the AUC (from the projection on B2 [voxel-by-state matrix from environment 2]) is for the analysis to work. Would you not get the same result if you correlated the voxel-by-voxel correlation matrix based on B1 (C1) with the voxel-by-voxel correlation matrix based on B2 (C2)? I understand that you would not have the subspace-by-subspace resolution that comes from the individual eigenvectors, but would the AUC not strongly correlate with the correlation between C1 and C2?

      We agree with the reviewer's comment - see our response to Reviewer #2's second issue above.

      (2) There is a subtle difference between how the method is described for the neural recording and fMRI data. Line 695 states that principal components of the neuron x neuron intercorrelation matrix are computed, whereas line 888 implies that principal components of the data matrix B are computed. Of note, B is a voxel x pile rather than a pile x voxel matrix. Wouldn't this result in U being pile x pile rather than voxel x voxel?

      The PCs are calculated on the neuron x neuron (or voxel x voxel) covariance matrix of the activation matrix. We’ve added the following clarification to the relevant part of the Methods:

      “We calculated noise normalized GLM betas within each searchlight using the RSA toolbox. For each searchlight and each graph, we had a nVoxels (100) by nPiles (10) activation matrix (B) that describes the activation of a voxel as a result of a particular pile (three pictures’ sequence). We exploited the (voxel x voxel) covariance matrix of this matrix to quantify the manifold alignment within each searchlight.”
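      To make the dimensions concrete, a minimal sketch with random data standing in for the GLM betas (sizes follow the description above):

      ```python
      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical searchlight activation matrix B: nVoxels (100) x nPiles (10).
      B = rng.normal(size=(100, 10))

      # Demean each voxel across piles, then form the voxel-by-voxel covariance.
      Bc = B - B.mean(axis=1, keepdims=True)
      cov = Bc @ Bc.T / (B.shape[1] - 1)     # shape (100, 100)

      # The eigenvectors of this covariance live in voxel space (length 100),
      # so the principal-component matrix U is voxel x voxel, not pile x pile.
      eigvals, U = np.linalg.eigh(cov)
      ```

      Because the PCA is taken on the voxel-by-voxel covariance of B rather than on B itself, U is voxel x voxel, which resolves the apparent discrepancy the reviewer raises.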

      (3) It would be very helpful to the field if the authors would make the code and data publicly available. Please consider depositing the code for data analysis and simulations, as well as the preprocessed/extracted data for the key results (rat data/fMRI ROI data) into a publicly accessible repository.

      The code is publicly available on GitHub (https://github.com/ShirleyMgit/subspace_generalization_paper_code/tree/main).

      (4) Line 219: "Kolmogorov Simonov test" should be "Kolmogorov Smirnov test".

      Thank you! This has been corrected.

      (5) Please put plots in Figure 3F on the same y-axis.

      (6) Were large and small graphs of a given statistical structure learned on the same days, and if so, sequentially or simultaneously? This could be clarified.

      The graphs were learned on the same day. We have clarified this in the Methods section.

      Reviewer #2 (Recommendations for the authors):

      Perhaps the advantage of the method described here is that you could narrow things down to the specific eigenvector that is doing the heavy lifting in terms of generalization... and then you could look at that eigenvector to see what aspect of the covariance structure persists across conditions of interest. For example, is it just the highest eigenvalue eigenvector that is likely picking up on correlations across the entire neural population? Or is there something more specific going on? One could start to get at this by looking at Figures 1A and 1C - for example, the primary difference for within/between condition generalization in 1C seems to emerge with the first component, and not much changes after that, perhaps suggesting that in this case, the analysis may be picking up on something like the overall level of correlations within different conditions, rather than a more specific pattern of correlations.

      The nature of the analysis means the eigenvectors are ordered by their contribution to the variance; the first eigenvector therefore accounts for more variance than the others. We did not check rigorously whether the remaining variance is split equally among the remaining eigenvectors, but this does not seem to be the case.

      Why is variance explained above zero for fraction EVs = 0 for figure 1C (but not 1A) ? Is there some plotting convention that I'm missing here?

      There was a small bug in this plot and it was corrected - thank you very much!

      The authors say:

      "Interestingly, the difference in AUCs was also 190 significantly smaller than chance for place cells (Figure 1a, compare dotted and solid green 191 lines, p<0.05 using permutation tests, see statistics and further examples in supplementary 192 material Figure S2), consistent with recent models predicting hippocampal remapping that is 193 not fully random (Whittington et al. 2020)."

      But my read of the Whittington model is that it would predict slight positive relationships here, rather than the observed negative ones, akin to what one would expect if hippocampal neurons reflect a nonlinear summation of a broad swath of entorhinal inputs.

      Differences in AUCs that are smaller than chance imply that the remapping of place cells is not completely random.

      Figure 2:

      I didn't see any description of where the noise amplitude values came from - or any justification at all in that section. Clearly, the amount of noise will be critical for putting limits on what can and cannot be detected with the method - I think this is worthy of characterization and explanation. In general, more information about the simulations is necessary to understand what was done in the pseudovoxel simulations. I get the gist of what was done, but these methods should be clear enough that someone could repeat them, and they currently are not.

      Thanks, we added noise amplitude to the figure legend and Methods.

      What does flexible mean in the title? The analysis only worked for the hexagonal grid - doesn't that suggest that whatever representations are uncovered here are not flexible in the sense of being able to encode different things?

      Flexible here means flexible over characteristics that are not related to the structural form, such as the specific stimuli, the size of the graph, etc.

      Reviewer #3 (Recommendations for the authors):

      I have noticed that the authors have updated the previous preprint version to include extensive simulations. I believe this addition helps address potential criticisms regarding the signal-to-noise ratio. If the authors could share the code for the fMRI data and the simulations in an open repository, it would enhance the study's impact by reaching a broader readership across various research fields. Other than that, I have nothing further to request for this revision.

      Thank you, the code will be publicly available at: (https://github.com/ShirleyMgit/subspace_generalization_paper_code/tree/main).

    1. Generation Z consoles itself with Happy Meal sets and uses ChatGPT as a psychologist. An alarming report
      • 80% of Generation Z (Gen Z) admit they could not financially sustain themselves for a month after losing their main income source; 27% have no emergency savings.
      • Financial insecurity is widespread: 56% live paycheck to paycheck, and 30% feel financially unsafe globally; in Poland, 43% fear lacking financial independence, and one-third doubt owning a home.
      • Job security fears are high: only 43% of junior employees (mainly Gen Z) trust their employers' stability in the next six months—the lowest in years.
      • Gen Z is cutting spending drastically; instead of small luxuries, they share "no-buy lists," use coupons, opt for children's meal deals, or salvage discarded items to save money.
      • ChatGPT is used by many young adults as a free alternative to therapy, serving as a conversational partner or interactive diary amid mental health and financial stress.
      • About 47% of Polish Gen Z women consider starting their own business, and over half of European Gen Z plan side hustles within three years, motivated by financial independence and work flexibility.
      • This younger generation is adapting by reducing expenses, seeking free mental health tools, and pursuing entrepreneurship as a response to economic uncertainty and recession fears.
    1. planning period.

      The planning period needs to be defined with some letter ... is one already defined in later chapters??

    2. belong to a discrete space

      Is this the same space X as above? It should be referred to by its name ... or is it a different one that does not need to be specified?

    3. the binary variable

      Call it a warehouse, not a facility. Objects must be referred to in the same way throughout the work, not given synonymous names, because precision is lost. All the more so in this thesis, which has many similar elements.

    4. reflects the multilevel nature of the problem

      Explain in more depth what is meant by levels in this problem: are they classifications of the types of decision that define the problem? This needs to be clarified.

    5. space is hybrid,

      Which space? The space the decision variables belong to? The feasible space containing the problem's solutions?

    6. In the integrated location-inventory problem

      Which problem is this? Has it already been defined? Add a reference. From what I have read so far, I do not see that these 3 characteristics have been attached to any problem. If it has already been defined, it must be clear and precise.

    7. The sets I and J constitute essential mathematical structures that determine the scale, connectivity, and complexity of the model. Their correct definition is crucial for the subsequent formalization of the system's variables, constraints, and dependencies.

      Superfluous ... reads very much like AI-generated text.

    8. directly determines the combinatorial complexity of the problem

      Explain why there is combinatorial complexity and how it will affect the problem.

    9. sets I and J are assumed disjoint in function,

      These additional hypotheses belong in the definition. What does it mean for two sets to be "disjoint in function"?? That sounds odd. Is it some other kind of operation between sets?

    10. J:={1,2,…,n}, n∈N, n≥1, the finite, countable set of affected or potential demand zones. C

      The same issue ... this is already defined earlier. Define it clearly and completely once, and then cross-reference that definition.

    11. architecture

      What is an architecture? Is it the same as a topology? Is it the same as the components of the problem? These ambiguities must be avoided.

    12. the topological structure

      Rewrite ... what is a topology here? What is a semantics? Any new concept must be introduced beforehand, because otherwise it leaves more questions than clarity.

    13. Lewis & Overton, 2013

      Citation ... and is this a theorem? It needs to be looked up ... next time we will see whether it should be included, but in any case it must be clarified.

    14. structural sensitivity

      This concept must be defined ... or stated in simple words ... do not drop in loose concepts that raise more questions than they answer.

    1. Collective invention

      Collective invention is an economic and innovation concept where firms or individuals freely share their technological discoveries, improvements, and know-how with each other—even when they are competitors.

    1. Because governments in the region are starting to establish transformative agreements with commercial publishers, high APCs are becoming increasingly visible.

      For the research communities negotiating them, transformative agreements are not only a mechanism to understand and redirect spending on publication outputs and APCs; they are, perhaps more importantly, a way of bringing libraries directly into the mechanics of scholarly publishing. Negotiations expose the true costs of the system, revealing that the subscription fees institutions have been paying for years are often higher than the publishing costs that are now under scrutiny. This shift in visibility is transformative in itself. It enables libraries, funders, and scholars to make informed decisions, to question entrenched assumptions, and to draw on models such as SciELO's that align investments with open access objectives. The more we understand the real economics of publishing, the more agency we gain in reshaping it.