1. Oct 2024
    1. These Cards (if used only once) should be labelled and catalogued very carefully.

      How does he define "labelled" and "catalogued"?

      Presumably he means a version of tagging/categorization and possibly indexing them to be able to easily find them again?

    2. A great help towards Arrangement and Clearness is to have Cards of different sizes and shapes, and of different colours, or with different marks on them

      Miles goes against the grain of using "cards of equal size", but does so to emphasize the affordance of using them for "Arrangement and Clearness".

    3. The Cards can be turned afterwards.

      Miles admits that one can use both sides of index cards in a card system, but primarily because he's writing at a time (1899) when, although paper is cheap (which he mentions earlier), some people may have an objection to the system's use due to the expense, which he places at the top of his list of objections. (And he does this in a book in which he emphasizes multiple times the ideas of selection and ordering!)

    4. and of course writing only on one side of the Card at a time.
    5. And the same will apply to the objection that the System is unusual. Seldom have there been any new suggestions which have not been condemned as 'unusual'.
    6. Objections to the Card-System,

      Miles lists the following objections:

      • expense
      • inconvenience
      • unusual (new, novel)

      Notice that he starts not with benefits or affordances, but with the objections.

      What would a 2024 list of objections look like?

      • anachronism
      • harder than digital methods
      • lack of easy search
      • complexity
      • … others?

    7. At first, also, it might be thought that the Cards would be inconvenient to use, but the personal experience of thousands shows that, at any rate for business-purposes, exactly the reverse is true

      Miles uses the ubiquity of card systems within business (even at the time of writing in 1899, prior to publication) as evidence for bolstering their use in writing and composition.

      (Recall that he's also writing in the UK.)

    8. Good Practice for this will be to study Loisette's System of Memory, e.g. in "How to Remember" (see p. 264); in fact Loisette's System might be called the Link-System; and Comparisons and Contrasts will very often be a great help as Links.

      Interesting to see a mention of Alphonse Loisette here!

      But also nice to see the concept of linking ideas and association (associative memory) pop up here in the context of note making, writing, and creating card systems.

    9. include anything which links one Idea to another. See further "How to Remember" (to be published in February, 1900, by Warne & Co.).

      This book was finally published in 1905. The introduction was written in 1899 and the mentioned Feb 1900 publication of How to Remember didn't happen until 1901.

      Miles, Eustace Hamilton. How to Remember: Without Memory Systems or with Them. Frederick Warne & Co., 1901.

    10. If the Letter is important, especially if it be a Business-Letter, there should be as long an interval as is feasible between the writing and the sending off.

      Writing and then waiting is useful in many instances, particularly for clarity of expression.

      see also:

      • angry letter https://hypothes.is/a/6OoqHofyEe-1mtOohGA63w
      • diffuse thinking
      • typewriter (waiting)
      • editing (waiting) https://hypothes.is/a/VxRNeofvEe-5n1dpCEM48Q

    11. After the Letter has been done it should be read through, and should (if possible) be read out loud, and you should ask yourself, as you read it, whether it is clear, whether it is fair and true, and (last but not least) whether it is kind. Putting it in another way, you might ask yourself, 'What will the person feel and think on reading this?' or, 'Should I eventually be sorry to have received such a Letter myself?' or, again, 'Should I be sorry to have written it, say a year hence?'

      Recall: Abraham Lincoln's angry letter - put it in a drawer

    12. You can prepare your Letters anywhere, even in the train, and so save a great deal of time; and it may be noticed here that the idleness of people, during that great portion of their lives which they spend in travelling and waiting, can easily be avoided in this way.

      Using a card system can help one use time more efficiently, preventing idleness while travelling and waiting.

    13. As we have often said before, paper is so cheap that there is no need for such economy.

      Compare this with the reference in @Kimmerer2013 about responsibility to the tree and not wasting paper: https://hypothes.is/a/pvQ_4ofxEe-NfSOv5wMFGw

      where is the balance?

    14. How to Express Ideas : Style.

      It could be interesting/useful to create a checklist or set of procedures (perhaps à la "Oblique Strategies") for editing a major work.

      Sections in this TOC could be useful for creating such a checklist.

    15. The third reading should again be a slow reading,

      relationship to Adler's levels of reading?

    16. But in my opinion nothing can excuse the laziness of a great number of Editors. When the Writers are poor and have staked a great deal on their Writings, then the laziness is simply disgusting: in fact, it amounts to cruelty. It is concerned with some of the very saddest tragedies that the world has ever seen, and I only mention it because it is very common and because it is as well that the novice should know what to expect.
    17. Another Article I sent to a Paper, and after twenty weeks, and after many letters (which enclosed stamped and addressed envelopes), I was told that the Article was unsuitable for the Paper.

      Even in 1905 writers had to wait interminably after submitting their writing...

      it's only gotten worse since then...

    18. Very few have the strength of mind to keep back for a whole week a piece of Writing which they have finished. Type-writing sometimes necessitates this interval, or at any rate a certain interval.

      The process of having a work typewritten forced the affordance of creating time away from the writing of a piece. This allows for both active and diffuse thinking on the piece as well as the ability to re-approach it with fresh eyes days or weeks later.

    19. there is a great distinction between a thing which is heard, and a thing which is read in ordinary writing, and a thing which is read in print. In fact these differences almost necessitate certain differences in Style. Now Type-writing is far nearer to print than ordinary writing is.
    20. When an Article or Book has been written, it must be type-written before it is sent to the Editor or Publisher, that is to say, unless it has been ordered beforehand or unless you are well known. The reason is not simply that Type-writing looks better than ordinary writing, and that it is easier to read, but it actually is a fact that few Editors or Publishers will read anything that is not Type-written.

      Even as early as 1905 (or 1899 if we go by the dating of the introduction), typewritten manuscripts were de rigueur for submission to editors and publishers.

    21. Type-writing (see p. 369) is becoming more and more commonly used, and for certain purposes it is indispensable.

      Note that he's writing in 1899 (via the introduction), and certainly not later than 1905 (publication date).

    22. Carlyle

      One of the major values of fame is that it often allows the dropping of context in communication between people.

      Example: Carlyle references in @Miles1905

    23. Carlyle

      It bears noting that in this book on writing and composition, neither Miles (nor the indexer, if it was done by someone else) ever uses Carlyle's first name (Thomas) in any of the eleven instances in which the name appears; Carlyle was famous enough in that context (space, time) to need only a single name.

    24. General Hints on Preparing Essays etc., in Rhyme.

      One ought to ask what purpose this Rhyme serves:

      • providing emphasis for the material in the chapter;
      • scaffolding for hanging the rest of the material of the book upon; and
      • potentially a piece to be memorized as a sort of outline of the book and its material.
    25. WITH A RHYME.

      Did I miss the "rhyme" in this section, or is he using a more figurative sense (as in "rhyme or reason")?

      Ha! I just hadn't read far enough: it's on page 36, though it also works the other way.

    26. IN this Chapter I shall try to summarise the main part of this work, so that those who have not the time or the inclination to go right through it may at any rate grasp the general plan of it, and may be able to refer to any particular Chapter or page for further information on any particular topic.

      This chapter is essentially what one ought to glean from skimming the TOC, the Index, and doing a brief inspectional read (Adler, 1972).

    27. In these two latter sections it is as well to emphasise the general advice, "Try a thing for yourself before you go to anything or anyone for information." You should try (if there is time) to work out the subject beforehand; and then, after you have read or listened to the information, you should note it down in a special Note-book, and if possible make certain of understanding it, of remembering it, and of using it.

      Echoes of my own advice to "practice, practice, practice".

    28. Interest is required especially in the Beginning,
    29. But, the more he examines the subject, and the more he goes by his personal experience, the more he will find it worth while to spend time on, and to practise carefully, this first department of Composition, as opposed to the mere Expression. Indeed one might almost say that, if this first department has been thoroughly well done, that is to say, if the Scheme of Headings and Sub-Headings has been well prepared, the Expression will be a comparatively easy matter.

      Definition of the "first department of composition": the preparation (mise en place) for writing, as opposed to the actual expression of the writing. By this he likely means the actions of Part II (collecting, selecting, arranging) of this book versus Part III.

    30. Humour is to be classed as a Rhetoricalweapon, and indeed as one of the most powerful.
    31. as Carlyle's writings show. Proverb, Paradox, Epigram, exaggeration, humour, and unexpected order of words, all these can be means of Emphasis.
    32. One might think at first that it was a Universal Law that all Writing or Speaking should be so clear as to be transparent. And yet, as we have seen, no reader of Carlyle can doubt that a great deal of his Force would be gone if one made his Writings transparent. If one took some of Carlyle's most typical works and paraphrased them in simple English, the effect would not be a quarter as good as it is.

      How is this accomplished exactly? How could one imitate this effect?

      How do we break down his material and style to re-create it?

    33. as Vigour, but the two generally go hand in hand.

      "Brevity is not always the same as Vigour, but the two generally go hand in hand." -Miles

    34. As to the other extreme, it is a question whether a sentence can be too clear, whether the Idea can be too simply expressed; and, if we once admit that Carlyle's writings produced a greater effect and a better effect than they would have done if they had been perfectly clear, then we must admit that for certain purposes absolute Clearness is a Fault.
    35. No Writer seems to be going off the point, and to be violating the Law of 'Unity' and Economy, more than Carlyle does. As we read his "Frederick the Great", the characters at first appear to us to have no more connexion with one another than the characters
    36. The reader will doubtless be amazed at the amount of time which has to be spent before he arrives at the stage of Expressing his Ideas at all.
    37. In order to give the reader some chance of having a good Collection of Headings, and less chance of omitting the important Headings, I have offered (e.g. on pp. 83, 92) a few General Lists, which are not quite complete but yet approach to completeness; two of these Lists will be found sufficient for most purposes. One of these is called the List of Period-Headings, such as Geography, Religion, Education, Commerce, War, etc. (see p. 83); the other is called the List of General Headings, and includes Instances, Causes and Hindrances, Effects, Aims, etc.: this latter List will be found on p. 92.
    38. Rhythm, Grammar, Vocabulary, Punctuation, etc. It was hard to break the faggots when they were in a bundle, but it was easy to break them when they were taken one by one.

      Notice that again he's emphasizing breaking down the problem into steps, and he's using a little analogy to do so, just like he had described previously.

      (see: https://hypothes.is/a/NDArGoemEe-9BXcYJSUyMQ)

    39. I shall try to give the Chief Faults in Composition. The reader will see that the list is long: and that, if he merely tries to write whole Essays all at one 'sitting', he is little likely to escape them all.

      The sheer length of the list of potential "Chief Faults in Composition" is a solid reason not to try to cram writing a paper or essay into a single night or day.

    40. Teaching is one of the best means of Learning, not only because it forces one to prepare one's work carefully, and to be criticised whether one wishes it or not, but also because it gives one a sense of responsibility: it reminds one that one is no longer working for self alone.
    41. whether you are Writing or Speaking, the general principle to remember is that you must appeal, in nearly everything you say, to the very stupidest people possible.
    42. It is important to learn as much and at the same time as little as possible.

      By abstracting and concatenating portions of material, one can more efficiently learn material that would otherwise take more time.

    43. But of all methods of Learning none is better than the attempt to teach others


    1. SOTs generated by the anomalous Hall effect in FM/NM/FM multilayers were predicted [13] and experimentally realized [14]

      Is this normal?


    1. Marrim thinks they will still find a way to smoke. “Kids break the rules — that’s the way of the world,” she said. “We were all kids and we tried it for the first time,” she added. “Might as well do it in the safety of a lounge.”

      Marrim feels that hookah is a big part of her life because it helped her feel liberated. Even though she was seen as shameful for smoking because she is a woman, that did not stop her: she would make her own hookah to smoke when she was younger. She's not wrong that kids like to break the rules.

    2. the chemicals in hookah smoke are similar to those found in cigarette smoke.

      Because hookah is tobacco that you inhale into your lungs, it's still a health problem: you're getting smoke into your lungs either way.

    3. birthdays, graduations, that time you cried over the crush who didn’t like you back or showed off your smoke ring skills to your friends. “It’s like a rite of passage here when you start smoking hookah,” Marrim said.

      The hookah lounge is more than a place to smoke; it's a place where people get together to celebrate special events like birthdays, to relieve some stress, or to hang out with friends.

    4. “And it’s something you have to create for yourself when you’re displaced, and you might not ever be able to go back home because you don’t really know what home is anymore.”

      Hookah is a sacred tradition for these Muslim people; they don't know if they will ever go home someday, so hookah is important to them as a way of keeping their tradition.

    1. on the other side

      The water that has only recently brought about death to an unfortunate sailor and has seemingly threatened "Gentile[s]" and "Jew[s]" and even us, the readers, now becomes the force whose absence leads to death. What if the "death by water" that Madame Sosostris warned about was not the drowning but the death brought by its lack? This absence, spiritual and physical, defines the drought that pervades society.

      In essence, that warning has already come true. In the search for meaning, earthly desires have drowned humanity. What comes after is stillness: a period of profound spiritual drought. This lack of spirituality induces apocalypse: the cycle of life seemingly breaks. The silence in the mountains does not give way to voice, and the stillness, described in "Death by Water," that follows the storm does not imply recovery; instead, it leads to further destruction. There is no resurgence after the storm, only desolation.

      This desolation is no less overwhelming than the indulgence that preceded it. The absence of water, a metaphor for spiritual sustenance, is inescapable. The mountains, once symbols of "solitude," "silence," and reflection, are now dry and barren. The use of "even" in these lines underscores the totality of this spiritual drought. There is no refuge, no shelter, not "even" in the mountains.

      Eliot further juxtaposes biblical light, associated with Christ as "the sun shineth in his strength" in Revelation, with thunder, transforming it into a symbol of apocalypse; the thunder itself represents a loud rumbling or crashing noise after a lightning flash. This choice of title and imagery seems to suggest that divine intervention may have already occurred, unrecognized and unheeded, leaving only a loud noise as its product. What if Jesus is already here, "walking beside" us? Left unrecognized, however, he does not intervene. This notion is underscored by the repetition of the question regarding the identity of this third figure: "Who is the third who walks always beside you?" In the second iteration of the question, however, "beside" changes to "on the other side." This divine figure, most likely Christ, is present, yet now isolated by the walls of mountains we ourselves have built.

      The tragedy of this drought, thus, seems to lie not in the absence of divine intervention, but in humanity’s inability to recognize it. In this contemporary world, it is not the storm that destroys; it is the stillness after, where the absence of recognition leads to a deeper decay. The apocalypse has already begun (or potentially has almost reached its culmination), not in fire or flood, but in silence and spiritual blindness.

    1. guidance for constructing an e-activity

      There are important premises that help structure one's thinking in the design phase of e-activities. I would like to underline the importance of elements such as the proper fit of the content and the resulting learning objectives being logically aligned. Clarity of instructions/guidance must be a fundamental condition, so that there is a logical progression through the different stages/phases of the e-activity, thereby ensuring effective learning on the students' part. It will also be important for the teacher to share the results and take stock of a given e-activity, ensuring that students become aware of how far the proposed objectives were achieved, and thereby validating any opportunities for improvement. António Costa

    2. In a context of digital distance education, planning e-activities gives the learner a better sense of their assimilation of concepts and content, and constitutes a method of assessing learning. They are without doubt a dynamic and interactive way of promoting active, autonomous learning in which critical thinking is privileged. According to Almenara, Osuna & Cejudo (2014), carrying this into a virtual environment, e-activities are the element that facilitates the interrelation between Teaching and Learning. From the trainer's perspective, the challenges in designing an e-activity are several: being objective and clear, matching the content, knowing the audience, defining the time, providing resources, selecting the most suitable format, diversifying, and assessing.

      In conclusion, e-activities enable active, participatory, collaborative online learning, whether carried out individually or in a group, whose main objective is centred on learning.

    3. As we saw earlier, there is a panoply of e-activity typologies. The question that arises is how to select the e-activity best suited to our purpose.

      This task that falls to us is not easy, because we have to take several factors into account, such as the objective and purpose of the course, the age group, the practical applicability, etc.

    4. Table 3.2 | Model for designing e-activities according to Almenara, Osuna & Cejudo (2014)

      I find this model for designing an e-activity according to Almenara, Osuna & Cejudo interesting, but I also liked Maina's proposal.

    5. Provide feedback: After the activity has been completed, constructive feedback should be given to the students, highlighting what they did well and where they can improve

      Feedback is very important; it helps students better understand their mistakes and successes, giving them clear direction on how to improve their skills and knowledge.

    6. note that these e-activities can be designed to address each of the five stages of the model and, in doing so, help students build virtual learning communities and achieve their online learning objectives

      Based on my experience teaching IT/programming to teenagers, I strongly encouraged group work: first because it is an essential condition for being in society, and because they showed considerable interest and put in more effort than they did on individual work. I felt there was mutual support among them, and they strove to make the final result and the presentation of the work as good as possible.

    7. This is because these activities can be used to diversify the forms of learning and engage students in more dynamic and interactive processes.

      I didn't know this tool, Hypothes.is, software whose purpose is to collect comments on statements made in any web-accessible content. This passage calls for diversification in order to capture students'/trainees' attention and interest. I taught programming to teenagers and had some success when I used tools such as Kahoot for building quizzes; Scratch, a visual programming platform that lets students create interactive projects using code blocks in a playful way; and CoSpaces, an application for creating virtual-reality content.

    8. it is important to ensure that e-activities are inclusive and accessible to all students, regardless of their abilities and the technological resources available to them.

      Inclusion represents an act of equality among the different individuals in society, allowing everyone the right to take part and participate in the various dimensions of their environment without suffering any kind of discrimination or prejudice. Thus, in my opinion, the importance of inclusive e-activities is directly linked to the need to ensure that all students, regardless of their physical, cognitive, cultural, or socio-economic conditions, have equal opportunities in the learning process. Inclusion is a fundamental principle in educational settings, in-person or digital, and it becomes even more relevant on online platforms, where technological barriers can widen inequalities if not properly taken into account. Regards, Rui Ventura

    9. activities carried out by means of electronic devices play an important role in the design of learning strategies. This is because these activities can be used to diversify the forms of learning and engage students in more dynamic and interactive processes.

      In this passage, annotated on p. 32, the author reminds us how important electronic devices are in the development of teaching and learning strategies. When we integrate technologies into teaching and learning processes, we move toward diversifying teaching methods, making learning more dynamic and interactive. This stimulates students' engagement and enables more personalised approaches, adapting to each learner's different ways of learning. Moreover, the use of activities mediated by digital technologies in education facilitates access to varied resources; consequently, the teacher promotes students' autonomy, expanding their learning opportunities inside and outside the classroom and throughout life. Maria Barreto

    1. Thanksgiving is a time to reflect on the things we’re grateful for and to share that gratitude with the people who matter most. Along with gathering around the table, sending a heartfelt card is a meaningful way to reach out to friends, family, and co-workers—especially those who can’t join you in person. A thoughtful message can remind them how much they’re loved and appreciated. Whether you’re sending Thanksgiving cards or inviting loved ones to celebrate with you, these Thanksgiving messages and well wishes will help express your gratitude this season.

      Well-written paragraph; it reads very smoothly.

      1. The first sentence states what Thanksgiving is all about.
      2. The second and third smoothly transition from the first into the need for sending Thanksgiving messages.
      3. The last hints at some of the tangible options to be discussed, then summarizes the value of Thanksgiving messages.

    1. "You guys are no help. Literally no help. Why do you guys have me in here?" she protested. Sofia's step-grandfather was so angry with the school administrators (and perhaps intimidated by them) that Lola tried to intervene. (He tells us that when he was growing up here in the 1950s, all the parents were involved in the schools, but now they are completely uninterested. "They would rather let others do it, but then no one gets involved."

      Nowadays, I think that's the case for a lot of schools in the U.S. Many parents aren't as involved in school affairs as they were back in the day. Parents used to care about their children's education and the material being taught, but now many just send their kids to school and are not involved whatsoever.

    1. Cyber-EnabledNetworks
      • Has the capacity to be entwined with other crimes such as extortion

      • Intensifies amid the rise of AI

    2. Mafia networks
      • Only 18 of them in Canada

      • They're mainly based in Ontario and Quebec but have connections to more than 10 countries

      • They're very violent

      • Active in the private business sector where they commit money laundering

    3. Extortion
      • Force or threats are used to obtain money

      E.g., the Co-op extortion case: the perpetrator threatened to release sensitive data to the public if certain demands weren't met

    4. Money Laundering
      • Money is obtained illegally but is disguised as legit

      • 30% of OCGs are involved in money laundering

      • 40 billion dollars is laundered annually in Canada

    5. Piracy
      • The crime of stealing intellectual property and distributing it either for a reduced price or for free

      • E.g., Z-Library

      • Causes financial consequences for the movie producer, its promoter, and others associated with the production of the movie

      • What would be the difference between piracy and buying something and selling it at a thrifting site at a reduced price?

      • Mangamura

    6. What are Loan Sharks? (Beware!)
      • Happens frequently in social media
      • Professionals who lend money to clients at extremely high rates and collect by means of threats and violence
      • You can still experience loan sharking even from legitimate-seeming places (e.g., payday lenders)


    1. Responsibility to the tree makes everyone pause before beginning. Sometimes I have that same sense when I face a blank sheet of paper. For me, writing is an act of reciprocity with the world; it is what I can give back in return for everything that has been given to me. And now there’s another layer of responsibility, writing on a thin sheet of tree and hoping the words are worth it. Such a thought could make a person set down her pen.
    1. Welcome back and in this lesson I want to talk through another type of storage.

      This time: instance store volumes.

      It's essential for all of the AWS exams and real-world usage that you understand the pros and cons for this type of storage.

      It can save money, improve performance or it can cause significant headaches so you have to appreciate all of the different factors.

      So let's just jump in and get started because we've got a lot to cover.

      Instance store volumes provide block storage devices: raw volumes which can be attached to an instance, presented to the operating system on that instance, and used as the basis for a file system which can then in turn be used by applications.

      So far they're just like EBS only local instead of being presented over the network.

      These volumes are physically connected to one EC2 host and that's really important.

      Each EC2 host has its own instance store volumes and they're isolated to that one particular host.

      Instances which are on that host can access those volumes and because they're locally attached they offer the highest storage performance available within AWS much higher than EBS can provide and more on why this is relevant very soon.

      They're also included in the price of any instances which they come with.

      Different instance types come with different selections of instance store volumes, and for any instances which include instance store volumes they're included in the price of that instance, so it comes down to use it or lose it.

      One really important thing about instance store volumes is that you have to attach them at launch time, and unlike EBS you can't attach them afterwards.

      I've seen this question come up a few times in various AWS exams about adding new instance store volumes after instance launch, and it's important that you remember that you can't do this; it's launch time only.

      Depending on the instance type you're going to be allocated a certain number of instance store volumes; you can choose to use them or not, but if you don't you can't adjust this later.
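      As a concrete illustration of the launch-time-only point, here is a hypothetical boto3 sketch (the AMI ID is a placeholder, and the instance type is just an example of one that includes instance store). The ephemeral mapping is part of the RunInstances call itself; there is no later attach operation for instance store volumes, unlike EBS. Note that newer NVMe-based instance types attach their instance store volumes automatically.

      ```python
      import boto3

      ec2 = boto3.client("ec2")

      # Instance store volumes are mapped in the launch request itself; there
      # is no later "attach" call for ephemeral volumes (unlike EBS).
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",  # placeholder AMI
          InstanceType="c3.large",          # example type with instance store
          MinCount=1,
          MaxCount=1,
          BlockDeviceMappings=[
              # Expose the first instance store volume to the OS as /dev/sdb.
              {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
          ],
      )
      print(response["Instances"][0]["InstanceId"])
      ```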

      This is how instance store architecture looks.

      Each instance can have a collection of volumes which are backed by physical devices on the EC2 host which that instance is running on.

      So in this case host A has three physical devices and these are presented as three instance store volumes, and host B has the same three physical devices.

      Now in reality EC2 hosts will have many more but this is a simplified diagram.

      Now on host A instance 1 and 2 are running instance 1 is using one volume and instance 2 is using the other two volumes and the volumes are named ephemeral 0, 1 and 2.

      Roughly the same architecture is present on host B but instance 3 is the only instance running on that host and it's using ephemeral 1 and ephemeral 2 volumes.

      Now these are ephemeral volumes; they're temporary storage, and as a solutions architect or a developer or an engineer you need to think of them as such.

      If instance 1 stored some data on ephemeral volume 0 on EC2 host A let's say a cat picture and then for some reason the instance migrated from host A through to host B then it would still have access to an ephemeral 0 volume but it would be a new physical volume a blank block device.

      So this is important: if an instance moves between hosts then any data that was present on the instance store volumes is lost, and instances can move between hosts for many reasons.

      If they're stopped and started this causes a migration between hosts; or, as another example, if host A was undergoing maintenance then instances would be migrated to a different host.

      When instances move between hosts they're given new blank ephemeral volumes. Data on the old volumes is lost; they're wiped before being reassigned, but the data is gone. And even if you do something like change an instance type, this will cause an instance to move between hosts, and that instance will no longer have access to the same instance store volumes.

      This is another risk to keep in mind: you should view all instance store volumes as ephemeral.

      The other danger to keep in mind is hardware failure: if a physical volume fails, say the ephemeral 1 volume on EC2 host A, then instance 2 would lose whatever data was on that volume.

      These are ephemeral volumes; treat them as such. They're temporary data and they should not be used for anything where persistence is required.

      Now the size of instance store volumes and the number of volumes available to an instance vary depending on the type of instance and the size of instance.

      Some instance types don't support instance store volumes, different instance types have different types of instance store volumes, and as you increase in size you're generally allocated larger numbers of these volumes, so that's something that you need to keep in mind.

      One of the primary benefits of instance store volumes is performance you can achieve much higher levels of throughput and more IOPS by using instance store volumes versus EBS.

      I won't consume your time by going through every example but some of the higher-end figures that you need to consider are things like if you use a D3 instance which is storage optimized then you can achieve 4.6 GB per second of throughput and this instance type provides large amounts of storage using traditional hard disks so it's really good value for large amounts of storage.

      It provides much higher levels of throughput than the maximums available when using HDD-based EBS volumes.

      The I3 series which is another storage optimized family of instances these provide NVMe SSDs and this provides up to 16 GB per second of throughput and this is significantly higher than even the most high performance EBS volumes can provide and the difference in IOPS is even more pronounced versus EBS with certain I3 instances able to provide 2 million read IOPS and 1.6 million write IOPS when optimally configured.

      In general instance store volumes perform to a much higher level versus the equivalent storage in EBS.

      I'll be doing a comparison of EBS versus instance store elsewhere in this section which will help you in situations where you need to assess suitability but these are some examples of the raw figures.

      Now before we finish this lesson just a number of exam power-ups.

      Instance store volumes are local to an EC2 host, so if an instance does move between hosts you lose access to the data on that volume. You can only add instance store volumes to an instance at launch time; if you don't add them, you cannot come back later and add additional instance store volumes. And any data on instance store volumes is lost if that instance moves between hosts, if it gets resized, or if you have either host failure or specific volume hardware failure.

      Now in exchange for all these restrictions of course instance store volumes provide high performance so it's the highest data performance that you can achieve within AWS you just need to be willing to accept all of the shortcomings around the risk of data loss its temporary nature and the fact that it can't survive through restarts or moves or resizes.

      It's essentially a performance trade-off you're getting much faster storage as long as you can tolerate all of the restrictions.

      Now with instance store volumes you pay for it anyway it's included in the price of an instance so generally when you're provisioning an instance which does come with instance store volumes there is no advantage to not utilizing them you can decide not to use them inside the OS but you can't physically add them to the instance at a later date.

      Just to reiterate, and I'm going to keep repeating this throughout this section of the course: instance store volumes are temporary. You cannot use them for any data that you rely on or data which is not replaceable, so keep that in mind. They give you amazing performance, but they are not for the persistent storage of data. At this point that's all of the theory that I wanted to cover, so that's the architecture and some of the performance trade-offs and benefits that you get with instance store volumes. Go ahead and complete this video and when you're ready join me in the next, which will be an architectural comparison of EBS and instance store that will help you in exam situations to pick between the two.

    1. Welcome back and in this lesson I want to talk about the Hard Disk Drive or HDD-based volume types provided by EBS.

      HDD-based means they have moving bits: platters which spin, and little robot arms known as heads which move across those spinning platters.

      Moving parts means slower which is why you'd only want to use these volume types in very specific situations.

      Now let's jump straight in and look at the types of situations where you would want to use HDD-based storage.

      Now there are two types of HDD-based storage within EBS.

      Well that's not true, there are actually three but one of them is legacy.

      So I'll be covering the two ones which are in general usage.

      And those are ST1 which is throughput optimized HDD and SC1 which is cold HDD.

      So think about ST1 as the fast hard drive, not very agile but pretty fast, and think about SC1 as cold.

      ST1 is cheap, it's less expensive than the SSD volumes which makes it ideal for any larger volumes of data.

      SC1 is even cheaper but it comes with some significant trade-offs.

      Now ST1 is designed for data which is sequentially accessed because it's HDD-based it's not great at random access.

      It's more designed for data which needs to be written or read in a fairly sequential way.

      Applications where throughput and economy is more important than IOPS or extreme levels of performance.

      ST1 volumes range from 125 GB to 16 TB in size and you have a maximum of 500 IOPS.

      But, and this is important, IO on HDD-based volumes is measured as 1 MB blocks.

      So 500 IOPS means 500 MB per second.

      Now, these are maximums. HDD-based storage works in a similar way to how GP2 volumes work, with a credit bucket.

      Only with HDD-based volumes it's done around MB per second rather than IOPS.

      So with ST1 you have a baseline performance of 40 MB per second for every 1 TB of volume size.

      And you can burst to a maximum of 250 MB per second for every TB of volume size.

      Obviously up to the maximum of 500 IOPS and 500 MB per second.

      ST1 is designed for when cost is a concern but you need frequent access storage for throughput intensive sequential workloads.

      So things like big data, data warehouses and log processing.

      Now SC1 on the other hand is designed for infrequent workloads.

      It's geared towards maximum economy when you just want to store lots of data and don't care about performance.

      So it offers a maximum of 250 IOPS.

      Again this is with a 1 MB IO size.

      So this means a maximum of 250 MB per second of throughput.

      And just like with ST1 this is based on the same credit pool architecture.

      So it has a baseline of 12 MB per TB of volume size and a burst of 80 MB per second per TB of volume size.

      So you can see that this offers significantly less performance than ST1 but it's also significantly cheaper.

      And just like with ST1, volumes can range from 125 GB to 16 TB in size.

      This storage type is the lowest cost EBS storage available.

      It's designed for less frequently accessed workloads.

      So if you have colder data, archives or anything which requires less than a few loads or scans per day then this is the type of storage volume to pick.
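      As a quick check on those figures, here is a minimal sketch (Python, purely illustrative; the constants are simply the per-TB baselines, bursts, and per-volume caps quoted in this lesson) of how ST1 and SC1 throughput scales with volume size:

      ```python
      # Baseline and burst throughput scale per TB of volume size and are
      # capped at a per-volume maximum (figures as quoted in the lesson).
      SPECS = {
          "st1": {"base_per_tb": 40, "burst_per_tb": 250, "cap_mb_s": 500},
          "sc1": {"base_per_tb": 12, "burst_per_tb": 80,  "cap_mb_s": 250},
      }

      def throughput_mb_s(volume_tb: float, volume_type: str) -> tuple[float, float]:
          """Return (baseline, burst) throughput in MB/s for an HDD volume."""
          s = SPECS[volume_type]
          return (min(s["base_per_tb"] * volume_tb, s["cap_mb_s"]),
                  min(s["burst_per_tb"] * volume_tb, s["cap_mb_s"]))

      for vtype in ("st1", "sc1"):
          for size_tb in (1, 4, 16):
              base, burst = throughput_mb_s(size_tb, vtype)
              print(f"{size_tb:>2} TB {vtype}: baseline {base:>3.0f} MB/s, "
                    f"burst {burst:>3.0f} MB/s")
      ```

      Note how a 2 TB ST1 volume already reaches the 500 MB/s burst cap, while SC1 needs just over 3 TB to reach its 250 MB/s cap.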

      And that's it for HDD based storage.

      Both of these are lower cost and lower performance versus SSD.

      Designed for when you need economy of data storage.

      Picking between them is simple.

      If you can tolerate the trade-offs of SC1 then use that.

      It's super cheap and for anything which isn't day to day accessed it's perfect.

      Otherwise choose ST1.

      And if you have a requirement for anything IOPS based then avoid both of these and look at SSD based storage.

      With that being said though that's everything that I wanted to cover in this lesson.

      Thanks for watching.

      Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.

    1. The conclusion we would draw is apparent; — if there is a similarity of minds discernible in the whole human race, can dissimilitude of forms or the gradations of complexion prove that the earth is peopled by many different species of men?

      The question: what matters more? Hearts/minds? Or Complexion?

    1. Staff compensation (Rémunération du personnel), €65,000

      Income statement, expense, account 641

    2. Sales of finished goods (Ventes de produits finis), €511,200

      Income statement, revenue, account 701

    3. Raw materials inventory at 31/12/N (Stock de matières premières), €35,000

      Balance sheet, asset, account 31

    4. Purchases of raw materials (Achats de matières premières), €284,000

      Income statement, expense, account 601
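      The pattern behind these classifications is the French chart of accounts (PCG): class 1–5 accounts feed the balance sheet, class 6 accounts are income-statement expenses, and class 7 accounts are income-statement revenue. A minimal sketch of that mapping (Python, for illustration only):

      ```python
      # PCG rule of thumb: the account number's first digit says where it goes.
      def classify(account: int) -> str:
          first_digit = int(str(account)[0])
          if 1 <= first_digit <= 5:       # classes 1-5
              return "balance sheet"
          if first_digit == 6:            # class 6
              return "income statement (expense)"
          if first_digit == 7:            # class 7
              return "income statement (revenue)"
          raise ValueError(f"unexpected account class: {account}")

      for account, label in [(641, "staff compensation"),
                             (701, "sales of finished goods"),
                             (31, "raw materials inventory"),
                             (601, "purchases of raw materials")]:
          print(f"{account} ({label}): {classify(account)}")
      ```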

    1. Welcome back and in this lesson I want to continue my EBS series and talk about provisioned IOPS SSD.

      So that means IO1 and IO2.

      Let's jump in and get started straight away because we do have a lot to cover.

      Strictly speaking there are now three types of provisioned IOPS SSD.

      Two which are in general release IO1 and its successor IO2 and one which is in preview which is IO2 Block Express.

      Now they all offer slightly different performance characteristics and different prices but the common factors is that IOPS are configurable independent of the size of the volume and they're designed for super high performance situations where low latency and consistency of that low latency are both important characteristics.

      With IO1 and IO2 you can achieve a maximum of 64,000 IOPS per volume, and that's four times the maximum for GP2 and GP3. And with IO1 and IO2 you can achieve 1,000 MB per second of throughput.

      This is the same as GP3 and significantly more than GP2.

      Now IO2 Block Express takes this to another level.

      With Block Express you can achieve 256,000 IOPS per volume and 4000 MB per second of throughput per volume.

      In terms of the volume sizes that you can use with provisioned IOPS SSDs with IO1 and IO2 it ranges from 4 GB to 16 TB and with IO2 Block Express you can use larger up to 64 TB volumes.

      Now I mentioned that with these volumes you can allocate IOPS performance values independently of the size of the volume.

      Now this is useful for when you need extreme performance for smaller volumes, or when you just need extreme performance in general, but there is a maximum size-to-performance ratio.

      For IO1 it's 50 IOPS per GB of size so this is more than the 3 IOPS per GB for GP2.

      For IO2 this increases to 500 IOPS per GB of volume size and for Block Express this is 1000 IOPS per GB of volume size.

      Now these are all maximums and with these types of volumes you pay for both the size and the provisioned IOPS that you need.
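      To make those ratios concrete, here is a minimal sketch (Python, purely illustrative) combining the per-GB ratios with the per-volume caps quoted above, to find the maximum IOPS a single volume can be provisioned with and the smallest volume that can reach each cap:

      ```python
      import math

      # Per-GB IOPS ratio and per-volume IOPS cap, as quoted in the lesson.
      SPECS = {
          "io1":               (50,    64_000),
          "io2":               (500,   64_000),
          "io2 block express": (1_000, 256_000),
      }

      def max_provisionable_iops(volume_gb: int, volume_type: str) -> int:
          """Highest IOPS a single volume of this size and type supports."""
          per_gb, cap = SPECS[volume_type]
          return min(per_gb * volume_gb, cap)

      for vtype, (per_gb, cap) in SPECS.items():
          smallest_gb = math.ceil(cap / per_gb)  # smallest volume hitting the cap
          print(f"{vtype}: {max_provisionable_iops(100, vtype):>7,} IOPS on 100 GB; "
                f"{smallest_gb:,} GB reaches the {cap:,} IOPS cap")
      ```

      So a 100 GB IO1 volume tops out at 5,000 IOPS, whereas a 100 GB IO2 volume can be provisioned to 50,000, and IO2 Block Express reaches its 256,000 IOPS cap at only 256 GB.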

      Now because with these volume types you're dealing with extreme levels of performance there is also another restriction that you need to be aware of and that's the per instance performance.

      There is a maximum performance which can be achieved between the EBS service and a single EC2 instance.

      Now this is influenced by a few things.

      The type of volumes so different volumes have a different maximum per instance performance level, the type of the instance and then finally the size of the instance.

      You'll find that only the most modern and largest instances support the highest levels of performance and these per instance maximums will also be more than one volume can provide on its own and so you're going to need multiple volumes to saturate this per instance performance level.

      With IO1 volumes you can achieve a maximum of 260,000 IOPS per instance and a throughput of 7,500 MB per second.

      It means you'll need just over four volumes of performance operating at maximum to achieve this per instance limit.

      Oddly enough IO2 is slightly less at 160,000 IOPS for an entire instance and 4,750 MB per second and that's because AWS have split these new generation volume types.

      They've added block express which can achieve 260,000 IOPS and 7,500 MB per second for an instance maximum.

      So it's important that you understand that these are per instance maximums so you need multiple volumes all operating together and think of this as a performance cap for an individual EC2 instance.

      Now these are the maximums for the volume types but you also need to take into consideration any maximums for the type and size of the instance so all of these things need to align in order to achieve maximum performance.

      Now keep these figures locked in your mind it's not so much about the exact numbers but having a good idea about the levels of performance that you can achieve with GP2 or GP3 and then IO1, IO2 and IO2 block express will really help you in real-world situations and in the exam.

      Instance store volumes which we're going to be covering elsewhere in this section can achieve even higher performance levels but this comes with a serious limitation in that it's not persistent but more on that soon.

      Now as a comparison, the per instance maximums for GP2 and GP3 are 260,000 IOPS and 7,000 MB per second per instance.

      Again don't focus too much on the exact numbers but you need to have a feel for the ranges that these different types of storage volumes occupy versus each other and versus instance store.

      Now you'll be using provisioned IOPS SSD for anything which needs really low latency or sub millisecond latency, consistent latency and higher levels of performance.

      One common use case is when you have smaller volumes but need super high performance and that's only achievable with IO1, IO2 and IO2 block express.

      Now that's everything that I wanted to cover in this lesson.

      Again if you're doing the sysops or developer streams there's going to be a demo lesson where you'll experience the storage performance levels.

      For the architecture stream this theory is enough.

      At this point though thanks for watching that's everything I wanted to cover go ahead and complete the video and when you're ready I look forward to you joining me in the next.

    1. Disease: Von Willebrand Disease (VWD) type 1

      Patient(s): 13 yo, female and 14 yo, female, both Italian

      Variant: VWF NM_000552.5:c.820A>C p.(Thr274Pro)

      Dominant negative effect

      Heterozygous carrier

      Variant located in the D1 domain on VWF

      Phenotypes:

      heterozygous carriers have no bleeding history

      reduced VWF levels compatible with diagnosis of VWD type 1

      increased FVIII:C/VWF:Ag ratio, suggesting reduced VWF synthesis/secretion as a possible pathophysiological mechanism

      Normal VWFpp/VWF:Ag ratio

      Modest alteration of multimeric pattern in plasma and platelet multimers

      plasma VWF showed slight increase of LMWM and decrease of IMWM and HMWM

      Platelet VWF showed quantitative decrease of IMWM, HMWM, and UL multimers

      In silico analysis:

      SIFT, Align-GVGD, PolyPhen-2.0, SNPs&GO, MutationTaster, and PMut all suggest damaging consequences.

      PROVEAN and Effect suggest neutral effect

      according to ACMG guidelines this variant was classified as pathogenic

    1. Sorry boy, but I've been hit by purple rain

      Ventura Highway, track 14 on the album Here & Now by America (1972-11-04)

      It’s unclear whether a connection exists between this lyric and the famous Prince song (which was released 12 years after “Ventura Highway”), but at least two journalists, from The San Diego Union and the Post-Tribune, wrote that Prince got the phrase “Purple Rain” from here.

      Asked to explain the phrase “purple rain” in “Ventura Highway,” Gerry Beckley responded: “You got me.”

    1. Welcome back and in this lesson I want to talk about two volume types available within AWS GP2 and GP3.

      Now GP2 is the default general purpose SSD based storage provided by EBS.

      GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon.

      Now let's just jump in and get started.

      General Purpose SSD storage provided by EBS was a game changer when it was first introduced.

      It's high performance storage for a fairly low price.

      Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture.

      So I want to get this out of the way first because it will help you understand the different storage types.

      When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB.

      And when you create it the volume is created with an I/O credit allocation.

      Think of this like a bucket.

      So an I/O is one input output operation.

      An I/O credit is a 16 KB chunk of data.

      So an I/O is one chunk of 16 KB in one second.

      If you're transferring a 160 KB file, that represents 10 I/O blocks of data.

      So 10 blocks of 16 KB.

      And if you do that all in one second, that's 10 credits in one second.

      So 10 IOPS.
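
      As a rough sketch of that arithmetic (illustrative only, not anything AWS-specific), the mapping from a transfer size to I/O credits is just a ceiling division by the 16 KB chunk size:

      ```python
      import math

      def credits_needed(transfer_kb, chunk_kb=16):
          """One I/O credit covers one 16 KB chunk of data."""
          return math.ceil(transfer_kb / chunk_kb)

      print(credits_needed(160))  # 10 credits; done within one second, that's 10 IOPS
      ```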

      When you aren't using the volume much, you aren't using many IOPS, and so you aren't using many credits.

      During periods of high disk load you're going to be pushing a volume hard, and because of that it's consuming more credits.

      For example, during system boots, backups or heavy database work.

      Now if you have no credits in this I/O bucket, you can't perform any I/O on the disk.

      The I/O bucket has a capacity of 5.4 million I/O credits.

      And it fills at the baseline performance rate of the volume.

      So what does this mean?

      Well every volume has a baseline performance based on its size with a minimum.

      So streaming into the bucket at all times is a minimum refill rate of 100 I/O credits per second.

      This means that, as an absolute minimum, regardless of anything else, you can consume 100 I/O credits per second, which is 100 IOPS.

      Now the actual baseline rate which you get with GP2 is based on the volume size.

      You get 3 I/O credits per second per GB of volume size.

      This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket.

      Anything below 33.33 recurring GB gets this 100 I/O minimum.

      Anything above 33.33 recurring GB gets 3 times the size of the volume as its baseline performance rate.
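
      Putting those two rules together, here's a minimal sketch of the baseline rate as described in this lesson (not an AWS API; the 16,000 IOPS cap is covered in a moment):

      ```python
      def gp2_baseline_iops(size_gb):
          """GP2 refill rate: 3 I/O credits per second per GB,
          floored at 100 and capped at the GP2 maximum of 16,000."""
          return min(16_000, max(100, 3 * size_gb))

      print(gp2_baseline_iops(10))     # 100   - below ~33.33 GB the minimum applies
      print(gp2_baseline_iops(100))    # 300   - 3 credits per second per GB
      print(gp2_baseline_iops(6_000))  # 16000 - very large volumes hit the cap
      ```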

      Now you aren't limited to only consuming at this baseline rate.

      By default GP2 can burst up to 3,000 IOPS, so you can do up to 3,000 input/output operations of 16 KB in one second.

      And that's referred to as your burst rate.

      It means that if you have heavy workloads which aren't constant, you aren't limited by your baseline performance rate of 3 times the GB size of the volume.

      So you can have a small volume which has periodic heavy workloads and that's OK.

      What's even better is that the credit bucket starts off full, so 5.4 million I/O credits.

      And this means that you could run it at 3,000 IOPS, so 3,000 I/O per second, for a full 30 minutes.

      And that assumes that your bucket isn't filling up with new credits which it always is.

      So in reality you can run at full burst for much longer.

      And this is great if your volumes are used initially for any really heavy workloads because this initial allocation is a great buffer.

      The key takeaway at this point is that if you're consuming more I/O credits than the rate at which your bucket is refilling, then you're depleting the bucket.

      So if you burst up to 3,000 IOPS and your baseline performance is lower, then over time you're draining your credit bucket.

      If you're consuming less than your baseline performance then your bucket is replenishing.

      And one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes.

      So you need to ensure that they're staying replenished and not depleting down to zero.
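
      To make the deplete-versus-replenish behaviour concrete, here's a rough per-second simulation of the credit bucket (illustrative only; AWS's actual accounting is more granular):

      ```python
      BUCKET_MAX = 5_400_000  # I/O credits; the bucket starts full
      BURST_IOPS = 3_000      # default GP2 burst ceiling

      def bucket_after(size_gb, demand_iops, seconds, bucket=BUCKET_MAX):
          """Return the credit balance after a sustained workload."""
          refill = max(100, 3 * size_gb)  # baseline credits per second
          for _ in range(seconds):
              bucket = min(BUCKET_MAX, bucket + refill)
              bucket -= min(demand_iops, BURST_IOPS, bucket)  # spend what's allowed
          return bucket

      # A 100 GB volume at full burst refills at 300/s but spends 3,000/s,
      # so the full bucket drains in roughly 5,400,000 / 2,700 = 2,000 seconds.
      print(bucket_after(100, 3_000, 2_000))  # 0 - bucket exhausted
      ```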

      Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes up to 1 TB in size use this I/O credit architecture.

      But for volumes larger than 1 TB they will have a baseline equal to or exceeding the burst rate of 3000.

      And so they will always achieve their baseline performance as standard.

      They don't use this credit system.

      The maximum I/O per second for GP2 is currently 16,000.

      So any volume above 5.33 recurring TB in size achieves this maximum rate constantly.

      GP2 is a really flexible type of storage which is good for general usage.

      At the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next.

      GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments.

      Anything where you don't have a reason to pick something else.

      It can be used for boot volumes and as I've mentioned previously it is currently the default.

      Again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.

      You can also use the elastic volume feature to change the storage type between GP2 and all of the others.

      And I'll be showing you how that works in an upcoming lesson if you're doing the SysOps or Developer Associate courses.

      If you're doing the architecture stream then this architecture theory is enough.

      At this point I want to move on and explain exactly how GP3 is different.

      GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler.

      Every GP3 volume, regardless of size, starts with a standard 3,000 IOPS, so 3,000 16 KB operations per second, and it can transfer 125 MB per second.

      That's standard regardless of volume size, and just like GP2, volumes can range from 1 GB through to 16 TB.

      Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2.

      So if you only intend to use up to 3,000 IOPS then it's a no-brainer.

      You should pick GP3 rather than GP2.

      If you need more performance, then you can pay for up to 16,000 IOPS and up to 1,000 MB per second of throughput.

      And even with those extras generally it works out to be more economical than GP2.

      GP3 offers a higher maximum throughput as well, so you can get up to 1,000 MB per second versus the 250 MB per second maximum of GP2.

      So GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default.

      For now though at the time of creating this lesson GP2 is still the default.

      In summary, GP3 is like what you'd get if GP2 and IO1 (which I'll cover soon) had a baby.

      You get some of the benefits of both in a new type of general purpose SSD storage.

      Now the usage scenarios for GP3 are also much the same as GP2.

      So virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.

      You can safely swap GP2 to GP3 at any point, but just be aware that for anything above 3,000 IOPS the performance doesn't get added automatically as it does with GP2, which scales on size; see the sketch below.

      With GP3 you would need to add those extra IOPS, which come at an extra cost, and the same applies to any additional throughput.

      Beyond the 125 MB per second standard it's an additional extra, but even including those extras, for most things this storage type is more economical than GP2.
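
      Here's that difference as a small sketch (numbers from this lesson, not an AWS API): GP2's performance scales with volume size, while GP3 stays at 3,000 IOPS unless you explicitly provision (and pay for) more:

      ```python
      def gp2_iops(size_gb):
          """GP2: scales with size - 3 IOPS per GB, min 100, max 16,000."""
          return min(16_000, max(100, 3 * size_gb))

      def gp3_iops(extra_provisioned=0):
          """GP3: fixed 3,000 IOPS standard; extra must be provisioned."""
          return min(16_000, 3_000 + extra_provisioned)

      print(gp2_iops(4_000))  # 12000 - grows automatically with volume size
      print(gp3_iops())       # 3000  - unchanged regardless of size
      print(gp3_iops(9_000))  # 12000 - only if you provision the extra IOPS
      ```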

      At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson.

      Go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.

    1. A law in the formal sense that is not a law in the material sense:

      A law in the material sense thus has a general character and is abstract, because it is not directed at one specific person or event but applies generally to a group of persons.

    1. At this point, it is no longer clear who trains whom, who is the master and who is the servant

      I keep thinking about the influence of this behavior on psychoanalysis, especially in Lacan's discourses, principally the Discourse of the Master and the Discourse of the Capitalist (the fifth discourse).

    1. Welcome back and in this lesson I want to quickly step through the basics of the Elastic Block Store service known as EBS.

      You'll be using EBS directly or indirectly almost constantly as you make use of the wider AWS platform, and as such you need to understand what it does, how it does it and the product's limitations.

      So let's jump in and get started straight away as we have a lot to cover.

      EBS is a service which provides block storage.

      Now you should know what that is by now.

      It's storage which can be addressed using block IDs.

      So EBS takes raw physical disks and presents an allocation of those disks, known as a volume, and these volumes can be written to or read from using a block number on that volume.

      Now volumes can be unencrypted or you can choose to encrypt the volume using KMS and I'll be covering that in a separate lesson.

      Now if you have two instances, when you attach a volume to them they see a block device, raw storage, and they can use this to create a file system on top of it, such as EXT3, EXT4 or XFS and many more in the case of Linux, or alternatively NTFS in the case of Windows.

      The important thing to grasp is that EBS volumes appear just like any other storage device to an EC2 instance.

      Now storage is provisioned in one availability zone.

      I can't stress enough the importance of this.

      EBS in one availability zone is different from EBS in another availability zone, and different again from EBS in an AZ in another region.

      EBS is an availability zone service.

      It's separate and isolated within that availability zone.

      It's also resilient within that availability zone, so if a physical storage device fails there's some built-in resiliency; but if you do have a major AZ failure, then the volumes created within that availability zone will likely fail, as will the instances in that availability zone.

      Now with EBS you create a volume and you generally attach it to one EC2 instance over a storage network.

      With some storage types you can use a feature called Multi-Attach, which lets you attach a volume to multiple EC2 instances at the same time. This is used for clusters, but if you do it, the cluster application has to manage access so that you don't overwrite data and cause corruption through multiple simultaneous writes.

      You should by default think of EBS volumes as things which are attached to one instance at a time but they can be detached from one instance and then reattached to another.

      EBS volumes are not linked to the instance lifecycle of one instance.

      They're persistent.

      If an instance moves between different EC2 hosts then the EBS volume follows it.

      If an instance stops and starts or restarts the volume is maintained.

      An EBS volume is created, it has data added to it and it's persistent until you delete that volume.

      Now even though EBS is an availability zone based service you can create a backup of a volume into S3 in the form of a snapshot.

      Now I'll be covering these in a dedicated lesson but snapshots in S3 are now regionally resilient so the data is replicated across availability zones in that region and it's accessible in all availability zones.

      So you can take a snapshot of a volume in availability zone A and when you do so EBS stores that data inside a portion of S3 that it manages and then you can use that snapshot to create a new volume in a different availability zone.

      For example availability zone B and this is useful if you want to migrate data between availability zones.

      Now don't worry I'll be covering how snapshots work in detail including a demo later in this section.

      For now I'm just introducing them.

      EBS can provision volumes based on different physical storage types (SSD-based, high-performance SSD, and volumes based on mechanical disks), and it can also provision different sizes of volumes and volumes with different performance profiles, all of which I'll be covering in the upcoming lessons.

      For now again this is just an introduction to the service.

      The last point which I want to cover about EBS is that you'll be billed using a gigabyte-month metric, so the price of one gig for one month would be the same as two gig for half a month, and the same as half a gig for two months.

      Now there are some extras for certain types of volumes for certain enhanced performance characteristics but I'll be covering that in the dedicated lessons which are coming up next.
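
      As a quick sketch of that gigabyte-month metric (the rate here is a made-up placeholder, not a real AWS price):

      ```python
      def ebs_storage_cost(size_gb, months, rate_per_gb_month=0.10):
          """Billing is linear in GB x months, so these all cost the same."""
          return size_gb * months * rate_per_gb_month

      # 1 GB for 1 month == 2 GB for half a month == 0.5 GB for 2 months
      assert ebs_storage_cost(1, 1) == ebs_storage_cost(2, 0.5) == ebs_storage_cost(0.5, 2)
      ```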

      For now before we finish this service introduction let's take a look visually at how this architecture fits together.

      So we're going to start with two regions in this example, us-east-1 and ap-southeast-2, and then in those regions we've got some availability zones, AZ A and AZ B, and then another availability zone in ap-southeast-2, and then finally the S3 service, which is running in all availability zones in both of those regions.

      Now EBS, as I keep stressing (and I will stress this more), is availability zone based, so in the cut-down example which I'm showing in us-east-1 you've got two availability zones, and so two separate deployments of EBS, one in each availability zone, and that's just the same architecture as you have with EC2.

      You have different sets of EC2 hosts in every availability zone.

      Now visually let's say that you have an EC2 instance in availability zone A.

      You might create an EBS volume within that same availability zone and then attach that volume to the instance so critically both of these are in the same availability zone.

      You might have another instance which this time has two volumes attached to it and over time you might choose to detach one of those volumes and then reattach it to another instance in the same availability zone and that's doable because EBS volumes are separate from EC2 instances.

      It's a separate product with separate life cycles.

      Now you can have the same architecture in availability zone B where volumes can be created and then attached to instances in that same availability zone.

      What you cannot do, and I'm stressing this for the 57th time (small print: it might not actually be 57, but it's close), is communicate cross-availability-zone with storage.

      So the instance in availability zone B cannot communicate with, and so logically cannot attach to, any volumes in availability zone A.

      It's an availability zone service so no cross AZ attachments are possible.

      Now EBS replicates data within an availability zone, so the data on a volume is replicated across multiple physical devices in that AZ; but, and this is important again, the failure of an entire availability zone is going to impact all volumes within that availability zone.

      Now to resolve that you can snapshot volumes to S3 and this means that the data is now replicated as part of that snapshot across AZs in that region so that gives you additional resilience and it also gives you the ability to create an EBS volume in another availability zone from this snapshot.

      You can even copy the snapshot to another AWS region, in this example ap-southeast-2, and once you've copied the snapshot it can be used in that other region to create a volume, and that volume can then be attached to an EC2 instance in an availability zone of that region.
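
      As a minimal boto3 sketch of that snapshot flow (the volume ID and the target AZ are placeholders; error handling omitted):

      ```python
      import boto3

      src = boto3.client("ec2", region_name="us-east-1")
      dst = boto3.client("ec2", region_name="ap-southeast-2")

      # Snapshot the volume; EBS stores the data in S3 behind the scenes.
      snap = src.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                 Description="example cross-region copy")
      src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

      # Copy the snapshot into the destination region.
      copy = dst.copy_snapshot(SourceRegion="us-east-1",
                               SourceSnapshotId=snap["SnapshotId"])
      dst.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

      # Create a new volume from the copy in an AZ of that region, ready to attach.
      vol = dst.create_volume(SnapshotId=copy["SnapshotId"],
                              AvailabilityZone="ap-southeast-2a")
      print(vol["VolumeId"])
      ```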

      So that at a high level is the architecture of EBS.

      Now depending on which course you're studying, there will be other areas that you need to deep-dive on, so over the coming section of the course we're going to be stepping through the features of EBS which you'll need to understand. These will differ depending on the exam, but you will be learning everything you need for the particular exam that you're studying for.

      At this point that's everything I wanted to cover, so go ahead, finish this lesson, and when you're ready I look forward to you joining me in the next.

    1. Chomsky has long been an opponent of the statistical learning tradition of language modeling, essentially claiming that it does not provide insight about what humans know about languages, and that engineering success probably can’t be achieved without explicitly incorporating important mathematical facts about the underlying structure of language
    1. also Music, whether vocal, or instrumental: herein the ancient Philosophers did so exercise themselves, that he was reputed unlearned, and forced to sing to the Myrtle, who refused the Harp in festivals, as is declared of Themistocles: in Music was Socrates instructed, and Plato himself, who concluded him not harmoniously compounded, that delighted not in Musical harmony: Pythagoras was very famous in the same, who is said to have used the symphony of music morning, and evening to compose the minds of his disciples: for this is a peculiar virtue of Music, to quicken or refresh the affections by the different musical measures: So the Phrygian tune was by the Greeks termed warlike, because it was sung in war, and upon engagement, and had a singular virtue in stirring up the Spirits of the Soldiers; instead of which the Ionic is sometimes used for the same purpose, which was formerly esteemed

      It appears that we are still in the period where all the intellectual arts (music, mathematics, war tactics, etc.) are expressions of one and the same phenomenon of the mind, and work off of each other, rather than the artificial separations of Chemistry and other disciplines we see later.

    1. i.e. an ethical pedagogy must be a critical one

      There are a variety of important ethical pedagogies that don't involve imposing one's political views on one's students, as this author suggests.

    2. Critical Pedagogy is an approach to teaching and learning predicated on fostering agency and empowering learners (implicitly and explicitly critiquing oppressive power structures).

      This seems narrow to me: teaching contributes to agency and learning in many ways beyond critiquing power structures, e.g. by enhancing attention, calling implicit cognitive biases into question, and equipping students with habits and tools that allow them to extract greater meaning from all kinds of texts or probe their hidden assumptions. In my view, it would be a consequential reduction to understand all of this only in terms of critiquing oppressive power structures.

    3. rites, “It doesn’t matter to me if my classroom is a little rectangle in a building or a little rectangle above my keyboard. Doors are rectangles; rectangles are portals.

      This terrifies me! I always have screens in the classroom because we are so often watching clips, but I am afraid of all our screenified minds and want to resist the dissolution of rectangles in general...

    4. How can we build platforms that support learning across age, race, culture, gender, ability, geography?

      Interesting that class is missing here, when the digital divide remains a real challenge to online access....

    5. objective, quantifiable, apolitical

      of course education is not alone - almost every sphere of humanistic knowledge has been eclipsed by the logic of data analytics.

    6. Paulo Freire, Pedagogy of the Oppressed

      As a historian, I always want to know what year something was published!

    7. “content

      Or "coverage"

    1. American attitudes toward international affairs followed the advice given by President George Washington in his 1796 Farewell Address. Washington had urged his countrymen to avoid “foreign alliances, attachments, and intrigues”,

      It’s interesting that George Washington warned against involvement in foreign affairs, considering that today the United States is more involved with other countries than any other country in the world.

    1. Post-conventional

      Not a universally attained stage, especially within collectivist cultures.

    2. zone of proximal development

      Being able to carry something out after all, by means of help.

    3. scaffolding

      Helping, and thereby stimulating each other toward higher-level thinking.

    4. Peers are a powerful agent of enculturation

      Age-mates (peers).

    5. The study found that the economic/utilitarian value of having children decreased as socioeconomic development increased. However, the psychological value did not change

      Material independence is not incompatible with emotional interdependence; it is possible to be economically self-sufficient while still being emotionally connected to others and maintaining close relationships.

    6. Reciprocity

      Reciprocating: mutual exchange (Dutch: wederkerigheid).

    7. Permissive

      Permissive ('permissief'): the f stands for 'lief' (sweet), i.e. responsiveness.

    8. Authoritative

      Authoritative ('autoritatief'): the f stands for 'lief' (sweet), i.e. high in responsiveness.

    1. Argent vive

      Cambridge's "Dictionary of Alchemical Imagery" asserts that argent vive is synonymous with mercury and must be combined with sulfur to produce the philosopher's stone. Interestingly, sulfur is a popular snake repellent, so perhaps there is something about these two substances being oppositional that makes them more powerful together.

    1. 4. Conclusions

      Two things are missing: 1) Cramér's V / Phi; 2) a test of proportions. A sketch of both follows below.
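
      For reference, a minimal Python sketch of the two missing pieces (the contingency table and variable split are hypothetical; the course may well use R instead):

      ```python
      import numpy as np
      from scipy.stats import chi2_contingency
      from statsmodels.stats.proportion import proportions_ztest

      # Hypothetical 2x2 table: injustice (low/high) x justifies violence (no/yes)
      table = np.array([[120, 30],
                        [90, 60]])

      # Cramér's V (identical to Phi for a 2x2 table)
      chi2, p, dof, _ = chi2_contingency(table)
      n = table.sum()
      cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
      print(f"Cramér's V = {cramers_v:.3f} (chi-square p = {p:.4f})")

      # Two-sample test of proportions: share justifying violence per group
      count = table[:, 1]        # "yes" counts in each group
      nobs = table.sum(axis=1)   # group sizes
      z, p_prop = proportions_ztest(count, nobs)
      print(f"z = {z:.2f}, p = {p_prop:.4f}")
      ```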

    2. We see that the correlation between the recoded sense of distributive injustice (sj_gerente_rec) and the justification of violence for social change (jv_cambio_rec) is positive, very small and statistically significant (r = 0.11; p < 0.05).

      So, in terms of answering the research question ...

    3. We see that the correlation between the sense of distributive injustice (sj_gerente) and the justification of violence for social change (jv_cambio_rec) is positive, small and statistically significant (r = 0.11; p < 0.05)

      The substantive interpretation of this is still missing.

    4. On the other hand, the sense of distributive injustice is measured with an indicator called the justice evaluation (Jasso, 1980). It represents how much justice people see in the distribution of rewards in a situation. In this case, it represents the evaluation of how just the income distribution of a hypothetical manager is, insofar as the manager represents the upper extreme of the occupational spectrum. The indicator (in a simplified version) reads as follows:

      This is hard to follow; in summarizing and trying to simplify, the meaning is lost. Give details of how this is constructed, or choose another item for the practical that does not require so much explanation; I would go for something simpler. The justification-of-violence measure already requires explanation, and that is enough. That said, you can still try to explain it ...

    5. Originally, this variable is ordinal; however, for the purposes of this practical's example, we will work with the variable recoded as follows:

      Before this, show a plot of the distribution of responses across the different values.

    6. jv_control Justification of violence for social control 3407 0.2926544 1.4029938 0.8712963 4 (1-5) 1 jv_cambio

      Order the rows; do not interleave them, to avoid confusion.

    7. Question 1: To what extent are the sense of distributive injustice and the justification of violence for social change related in Chile in 2019? H1: The greater the sense of distributive injustice, the greater the justification of violence for social change. Question 2: To what extent are the sense of distributive injustice and the justification of violence for social control related in Chile in 2019? H2: The greater the sense of distributive injustice, the lower the justification of violence for social control.

      Start with the questions, and shorten the introductory paragraph considerably, 150 words at most. Besides summarizing, sharpen the focus: despite its length, it does not draw the key distinctions tied to the exercise. Do not include concepts that will not be defined and that may add confusion (dominance, etc.). Define what the justification of violence is in its two main variants, and then why it would relate to distributive justice. Then preview the operationalization, since otherwise, for example, it is not clear what the business about managers has to do with all this.

    8. ijusticia

      injusticia

    1. eLife Assessment

      Wittkamp et al. investigated the spatiotemporal dynamics of the expectation of pain using an original fMRI-EEG approach. The methods are solid, and the evidence for a substantially different neural representation between the anticipatory and the actual pain period is convincing. These important findings are discussed within a general framework that encompasses the research questions, hypotheses, and analysis of results. Although the choice of conditions and their influence on the results may be open to different interpretations, the manuscript is strong and contributes beneficial insights to the field.

    2. Reviewer #1 (Public review):

      Summary:

      In this important paper the authors investigate the temporal dynamics of expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis, they report that expectations largely share the same set of activations before the administration of the painful stimulus and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the dorsolateral prefrontal cortex (DLPFC), the anterior insula and the anterior cingulate cortex. The strength of evidence is convincing and the methods are solid, but a few alternative interpretations of the findings related to the control group, as well as a more in-depth discussion of the correlations between the BOLD and EEG signals, would strengthen the manuscript.

      Strengths:

      In line with open science principles, the article presents the data and the results in a complete and transparent fashion.

      From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences and mapped on a combination of what we expect and what we get. New research methods, questions and analyses are needed to capture this evolving process.

      Weaknesses:

      The authors have addressed my concerns about the control condition and made some adjustments, namely acknowledging that participants cannot be "expectations" free and investigating whether scores in the control condition are simply due to a "regression to the mean".

      General considerations and reflections

      Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors' choice of having 3 groups of positive, negative and "neutral" expectations is to be praised. On the other hand, control groups also form expectations, and this can constitute a confounder in every experiment using expectation manipulation if not appropriately investigated. The authors have addressed this element in their revised submission.

      In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations of not only the temporal but even the spatial resolution should be acknowledged. This has been done. Given the anatomical and physiological complexity of cortical connections, as we know from the animal world, it is still quite possible that sub-circuits are activated also for positive and negative expectations but cannot be observed due to the limitations of our techniques. Indeed, on an empirical/evolutionary basis, it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses.

      Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula is a center for coding potential pain; it is perhaps more of a node in a system that signals potential dangers to the integrity of the body.

      The rationale for the choice of their EEG band has been outlined.

    3. Reviewer #2 (Public review):

      I appreciate the authors' thorough revision of the manuscript, which has significantly improved its quality. I have no additional comments or requests for further changes.

      However, I remain in slight disagreement regarding the characterization of the neutral condition. My perspective is that it resembles more of a "medium" condition, making it challenging to understand what would be common to "high-medium" and "low-medium" contrasts. I suspect that the neutral condition might represent a state of high uncertainty since participants are informed that the algorithm cannot provide a prediction. From this viewpoint, the observed similarities in effects for both positive and negative expectations may actually reflect differences between certainty and uncertainty rather than the specific expectations themselves.

      Nevertheless, the authors have addressed alternative interpretations of their discussion section, and I have no further requests. The paper is well-executed and demonstrates several strengths: the procedure effectively induced varying levels of expectations with clear impacts on pain ratings. Additionally, the integration of fMRI with EEG is commendable for tracking the transition from anticipatory to pain periods. Overall, the manuscript is strong and contributes valuable insights to the field.

    4. Author response:

      The following is the authors’ response to the original reviews.

      We thank the reviewers for their careful and overall positive evaluation of our work and the constructive feedback! To address the main concerns, we have:

      – Clarified a major misunderstanding of our instructions: Participants were only informed that they would receive different stimuli of medium intensity and were thus not aware that the stimulation temperature remained constant

      – Implemented a new analysis to evaluate how participants rated their expectation and pain levels in the control condition

      – Added a paragraph in the discussion in which we argue that our paradigm is comparable to previous studies

      Below, we provide responses to each of the reviewers’ comments on our manuscript.

      Reviewer #1 (Public Review):

      Summary:  

      In this important paper, the authors investigate the temporal dynamics of expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis, they report that expectations largely share the same set of activations before the administration of the painful stimulus, and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the Dorsolateral prefrontal cortex (DLPFC), the anterior insula, and the anterior cingulate cortex. The strength of evidence is convincing, and the methods are solid, but a few alternative interpretations about the findings related to the control group, as well as a more in-depth discussion on the correlations between the BOLD and EEG signals, would strengthen the manuscript.

      Thank you for your positive evaluation! In the revised version of the manuscript, we elaborated on the control condition and the BOLD-EEG correlations in more detail.

      Strengths:  

      In line with open science principles, the article presents the data and the results in a complete and transparent fashion. 

      From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences, and mapped on a combination of what we expect and what we get. New research methods, questions, and analyses are needed to capture these evolving processes.  

      Thank you very much for these positive comments!

      Weaknesses:  

      The control condition is not so straightforward. Across the manuscript it is defined as "no expectation", and in the legend of Figure 1 it is mentioned that the third state would be "no prediction". However, it is difficult to conceive that participants would not have any expectations or predictions. Indeed, in the description of the task it is mentioned that participants were instructed that they would receive stimuli during "intermediate sensitive states". The results of the pain scores and expectations might support the idea that the control condition is situated in between the placebo and nocebo conditions. However, since this control condition was not part of the initial conditioning, and participants had no reference to previous stimuli, one might expect that some ratings might have simply "regressed to the mean" for a lack of previous experience. 

      General considerations and reflections:  

      Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors' choice of having 3 groups of positive, negative, and "neutral" expectations is to be praised. On the other hand, also control groups form their expectations, and this can constitute a confounder in every experiment using expectation manipulation, if not appropriately investigated. 

      Thank you for raising these important concerns! Firstly, as it seems that we did not explain the experimental procedure in a clear fashion, there appeared to be a general misunderstanding regarding our instructions. We want to emphasize that we did not tell participants that the stimulus intensity would always be the same, but that pain stimuli would be different temperatures of medium intensity. Furthermore, our instruction did not necessarily imply that our algorithm detected a state of medium sensitivity, but that the algorithm would not make any prediction, e.g., due to highly fluctuating states of pain sensitivity, or no clear-cut state of high or low pain sensitivity. We changed this in the Methods (ll. 556-560, 601-606, 612-614) and Results (ll. 181-192) sections of the manuscript to clarify these important features of our procedure.

      Then, we absolutely agree that participants explicitly and implicitly form expectations regarding all conditions over time, including the control condition. We carefully considered your feedback and rephrased the control condition, no longer framing it as eliciting “no expectations” but as “neutral expectations” in the revised version of the manuscript. This follows the more common phrasing in the literature and acknowledges that participants indeed build up expectations in the control condition. However, we do still think that we can meaningfully compare the placebo and nocebo condition to the control condition to investigate the neuronal underpinnings of expectation effects. Independently of whether participants build up an expectation of “medium” intensities in the control condition, which caused them to perceive stimuli in line with this expectation, or if they simply perceived the stimuli as they were (of medium intensity) with limited effects of expectations, the crucial difference to the placebo and nocebo conditions is that there was no alteration of perception due to previous experiences or verbal information and no shift of perception from the actual stimulus intensity towards any direction in the control condition. This allowed us to compare the neural basis of a modulation of pain perception in either direction to a condition in which this modulation did not take place. 

      Author response image 1.

      Variability within conditions over time. Relative variability index for expectation (left) and pain ratings (right) per condition and measurement block. 

      Lastly, we want to highlight that our finding of the control condition being rated in between the placebo and nocebo conditions is in line with many previous studies that included similar control conditions and advanced our understanding of pain-related expectations (Bingel et al., 2011; Colloca et al., 2010; Shih et al., 2019). We thank the reviewer for the very interesting idea to evaluate the development of ratings in the control condition in more detail and added a new analysis to the manuscript in which we compared how much intra-subject variance was within the ratings of each of the three conditions and how much this variance changed over time. To this end, we computed the relative variability index (Mestdagh et al., 2018), a measure that quantifies intra-subject variation over multiple ratings, and compared it between the three conditions and the three measurement blocks. We observed differences in variances between conditions for both expectation (F(2,96) = 8.14, p < .001) and pain ratings (F(2,96) = 3.41, p = .037). For both measures, post-hoc tests revealed that there was significantly more variance in the placebo compared to the control condition (both p_holm < .05), but no difference between control and nocebo. The substantial and comparable variation in pain and expectation ratings in all three conditions (or at least between control and nocebo) shows that participants did not always expect and perceive the same intensity within conditions. Variance in expectation ratings decreased from the first block compared to the other two blocks (F(1.35,64.64) = 5.69, p = .012; both p_holm < .05), which was not the case for pain ratings. Most importantly, there was no interaction effect of block and condition for either expectation (F(2.65,127.06) = 0.40, p = .728) or pain ratings (F(4,192) = 0.48, p = .748), which implies that expectations were similarly dynamically updated in all conditions over the course of the experiment. This speaks against a "regression to the mean" in the control condition and shows that control ratings fluctuated from trial to trial. We included this analysis and a more in-depth discussion of the choice of conditions in the Results (ll. 219-232) and Discussion (ll. 452-486) sections of the revised manuscript.

      In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations about not only the temporal but even the spatial resolution should be acknowledged. Given the anatomical and physiological complexity of the cortical connections, as we know from the animal world, it is still well possible that subcircuits are activated also for positive and negative expectations, but cannot be observed due to the limitation of our techniques. Indeed, on an empirical/evolutionary basis it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses. 

      We agree that the spatial resolution of fMRI is limited and that our signal is often not able to dissociate different subcircuits. Whether on this basis differential processes occurred cannot be observed in fMRI but is indeed possible. We now include this reasoning in our Discussion (ll. 373-377):

      “Importantly, the spatial resolution of fMRI is limited when it comes to discriminating whether the same pattern of activity is due to identical activation or to activation in different sub-circuits within the same area. Nonetheless, the overlap of areas is an indicator for similar processes involved in a more general preparation process.”

      Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula is a center for coding potential pain, perhaps more of a node in a system that signals potential dangers for the integrity of the body. 

      We appreciate the feedback on our interpretation of our results and agree that the overall network activity most likely determines how a large part of expectations and pain are coded. We therefore adjusted the Discussion, embedding the results in an interpretation that considers networks (ll. 427-430, 432-435, 438-442).

      The authors analyze the EEG signal between 0.5 to 128 Hz, finding significant results in the correlation between single-trial BOLD and EEG activity in the higher gamma range (see Figure 6 panel C). It would be interesting to understand the rationale for including such high frequencies in the signal, and the interpretation of the significant correlation in the high gamma range. 

      On a technical level, we adapted our EEG processing pipeline from Hipp et al. (2011), who similarly investigated signals up to 128 Hz. Of note, the spectral smoothing was adjusted to match 3/4 octave, meaning that the frequency resolution at 128 Hz is rather broad and does not only contain oscillations at 128 Hz sharp. Gamma oscillations in general have repeatedly been reported in relation to pain and feedforward signals reflecting noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). Strube et al. (2021) reported the highest effects of pain stimulus intensity and prediction error processing at high gamma frequencies (100 and 98 Hz, respectively). These findings could also serve as a basis to interpret our results in this frequency range: if anticipatory activation in the ACC is linked to high gamma oscillations, which appear to play an important role in feedforward signaling of pain intensity and prediction errors, this could indicate that later processing of intensity in this area is already pre-modulated before the stimulus actually occurs. Of note: although not significant, it looks as if the cluster extends further into pain processing on a descriptive level. We added additional explanation regarding the interpretation of the correlation in the Discussion (ll. 414-425):

      “The link between anticipatory activity in the ACC and EEG oscillatory activity was observed in the high gamma band, which is consistent with findings that demonstrate a connection between increased fMRI BOLD signals and a relative shift from lower to higher frequencies (Kilner et al., 2005). Gamma oscillations have been repeatedly reported in the context of pain and expectations and have been interpreted as reflecting feedforward signals of noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). In combination with our findings, this might imply that high frequency oscillations may not only signal higher actual or perceived pain intensity during pain processing (Nickel et al., 2022; Ploner et al., 2017; Strube et al., 2021; Tu et al., 2016), but might also be instrumental in the transfer of directed expectations from anticipation into pain processing.”

      Reviewer #2 (Public Review):  

      I think this is a very promising paper. The combination of EEG and fMRI is unique and original. However, I also have some suggestions that I think could help improve the manuscript. 

      This manuscript reports the findings of an EEG-fMRI study (n = 50) on the effects of expectations on pain. The combination of EEG with fMRI is extremely original and well-suited to study the transition from expectation to perception. However, I think that the current treatment of the data, as well as the way that the manuscript is currently written, does not fully capitalize on the potential of this unique dataset. Several findings are presented but there is currently no clear message coming out of this manuscript. 

      First, one positive point is that the experimental manipulation clearly worked. However, it should be noted that the instructions used are not typical of studies on placebo/nocebo. Participants were not told that the stimulations would be of higher/lower intensity. Rather, they were told that objective intensities were held constant, but that EEG recordings could be used to predict whether they would perceive the stimulus as more or less intense. I think that this is an interesting way to manipulate expectations, but there could have been more justification in the introduction for why the authors have chosen this unusual procedure. 

      Most importantly, we want to emphasize again that participants were not aware that the stimulation temperature was always the same but were informed that they would receive different stimuli of medium intensity. We now clarify this in the revised Results (ll. 190-192) and Methods (ll. 612-614) sections.

      While we agree that our procedure was not typical, we do think the manipulation is comparable to previous studies on pain-related expectations. To our knowledge, either expectations regarding a treatment that changes pain perception (treatment expectancy) or expectations regarding stimulus intensities (stimulus expectancy) are manipulated (see Atlas & Wager, 2014). In our study, participants received a cue that induced expectations with regard to a “treatment”, although in this case the “treatment” came from changes in their own brain activity. This is comparable to studies using TENS devices that supposedly change peripheral pain transmission (Skvortsova et al., 2020). Thus, although not typical, our paradigm can be classified as targeting treatment expectancies, and it allowed us to examine effects on a trial-by-trial level within subjects. We added a paragraph regarding the comparability of our paradigm with previous studies in the Discussion of the revised manuscript (ll. 452-464).

      Also, the introduction mentions that little is known about potential cerebral differences between expectations of high vs. low pain expectations. I think the fear conditioning literature could be cited here. Activations in ACC, SMA, Ins, parahippocampal gyrus, PAG, etc. are often associated with upcoming threat, whereas activations vmPFC/default mode network are associated with safety. 

      We thank you for your suggestion to add literature on fear conditioning. We agree there is some overlap between fear conditioning and expectation effects in humans, but we also believe there are fundamental differences regarding their underlying processes and paradigms, e.g. expectation effects are not driven by classical learning algorithms but act to a large extent as self-fulfilling prophecies (see e.g. Jepma et al., 2018). However, we now acknowledge the similarities between the modalities, e.g. in the recruitment of the insula and the vmPFC, in our Introduction (ll. 132-136).

      The fact that the authors didn't observe a clearer distinction between high and low expectations here could be related to their specific instructions that imply that the stimulus is the same and that it is the subjective perception that is expected to change. In any case, this is a relatively minor issue that is easy to address. 

      We apologize again for the lack of clarity in our instructions: participants were unaware that they would receive the exact same stimulus. The clear effects of the different conditions on expectation and pain ratings also challenge the notion that participants always expected the same level of stimulation and/or perception. Additionally, if participants were indeed expecting a consistent level of intensity in all conditions, one would also expect to see the same anticipatory activation in the control condition as in the placebo and nocebo conditions, which is not the case. Thus, we respectfully disagree that the common effects might be explained by our instructions but would argue that they indeed reflect common (anticipatory) processes of positive and negative expectations.

      Towards the end of the introduction, the authors present the aims of the study in mainly exploratory terms: 

      (1) What are the differences between anticipation and perception? 

      (2) What regions display a difference between high and low expectations (high > low or low < high) vs. an effect of expectation regardless of the direction (high and low different than neutral)? 

      I think these are good questions, but the authors should provide more justification, or framework, for these questions. More specifically, what will they be able to conclude based on their observations? 

      For instance (note that this is just an example to illustrate my point. I encourage the authors to come up with their own framework/predictions) : 

      (1) Possibility #1: A certain region encodes expectations in a directed fashion (high > low) and that same region also responds to perception in the same direction (high > low). This region would therefore modulate pain by assimilating perception towards expectations. 

      (2) Possibility # 2: different regions are involved in expectation and perception. Perhaps this could mean that certain regions influence pain processing through descending facilitation for instance...  

      Thank you for pointing out that our hypotheses were not crafted carefully enough. We tried to give better explanations of the possible interpretations of our hypotheses. Additionally, we interpreted our results against the background of a broader framework for placebo and nocebo effects (predictive coding) to derive possible functions of the described brain areas. We embedded this in our Introduction (ll. 74-86, 158-175) and Discussion (ll. 384-388), interpreting the anticipatory activity and the activity during pain processing in the context of expectation formation as described in Büchel et al. (2014).

      Interpretation derived from our framework (ll. 384-388):

      e.g.: “Following the framework of predictive coding, our results would suggest that the DPMS is the network responsible for integrating ascending signals with descending signals in the pain domain and that this process is similar for positive and negative valences during anticipation of pain but differentiates during pain processing.”

      Regarding analyses, I think that examining the transition from expectations to perception is a strong angle of the manuscript given the EEG-fMRI nature of the study. However, I feel that more could have been done here. One problem is that the sequence of analyses starts by identifying an fMRI signal of interest and then attempts to find its EEG correlates. The problem is that the low temporal resolution of fMRI makes it difficult to differentiate expectation from perception, which doesn't make this analysis a good starting point in my opinion. Why not start by identifying an EEG signal that differentiates perception vs expectation, and then look for its fMRI correlates?

      We appreciate your feedback on the transition from expectations to perceptions and also think that additional questions could be answered with our data set. However, based on the literature we had specific hypotheses regarding specific brain areas, and we therefore decided to start from the fMRI data with its superior spatial resolution; EEG was used to focus on the temporal dynamics within the areas important for anticipatory processes. We share the view that many different approaches to analyzing our data are possible. On the other hand, identifying relevant areas based on EEG characteristics carries even more uncertainty due to the spatial filtering of the EEG signal. For the research question of this study, a more accurate evaluation of the involved areas and the related representation was more important. We therefore decided to implement only the procedure already present in the manuscript.

      Finally, I found the hypotheses on "valenced" vs. "absolute" effects a little bit more difficult to follow. This is because "neutral" is not really neutral: it falls in between low and high. If I follow correctly, participants know that the temperature is always the same. Therefore, if they are told that the machine cannot predict whether their perception is going to be low or high, then it must be because it is likely to be in between. Ratings of expectation and pain ratings confirm that. The neutral condition is not "devoid" of expectations as the authors suggest.

      Therefore, it would make sense to look at regions with the following pattern low > neutral > high, or vice-versa, low < neutral < high. Low & high being different than neutral is more difficult to interpret. I don't think that you can say that it reflects "absolute" expectations because neutral is also the expectation of a medium temperature. Perhaps it reflects "certainty/uncertainty" or something like that, but it is not clear that it reflects "expectations". 

      Thank you for your valuable feedback! We considered your concerns about the interpretation of our results and completely agree that the control condition cannot be interpreted as void of expectations (ll. 119-123). We therefore evaluated the control condition in more detail in a separate analysis (ll. 219-232) and integrated a new assessment of the conditions into the Discussion (ll. 465-486). We changed the phrasing of our control condition to “neutral expectations”, as we agree that the control condition is not void of expectations and this phrasing is more in line with other studies (e.g. Colloca et al., 2010; Freeman et al., 2015; Schmid et al., 2015). We would argue that the neutral expectations can still be meaningfully compared to positive and negative expectations because only the latter shift expectations and perception in one direction. Thus, we changed our wording throughout the manuscript to acknowledge that we indeed did not test for general effects of expectations vs. no expectations, but for effects of directed expectations. Please also see our reasoning regarding the control condition in response to Reviewer 1, in which we addressed the interpretation of the control condition. We therefore still believe that the contrasts that we calculated between conditions are valid. The proposed new contrast largely overlaps with our differential contrast low>high and vice versa already reported in the manuscript (for additional results also see Supplements).

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Figure 6, panel C. The figure mentions Anterior Cingulate Cortex R, whereas the legend mentions left ACC. Please check. 

      Thanks for catching this, we changed the figure legend accordingly.

      Reviewer #2 (Recommendations For The Authors):  

      - I don't think that activity during the rating of expectations is easily interpretable. I think I would recommend not reporting it. 

      The majority of participants completed the expectation rating relatively quickly (M = 2.17 s, SD = 0.35 s), which resulted in the overlap between the DLPFC EEG cluster and the expectation rating encompassing only a limited portion of the cluster (~1 s). We agree that this activity is still more difficult to interpret, yet we have decided to report it for reasons of completeness.

      - The effects on SIIPS are interesting. I think that it is fine to present them as a "validation" of what was observed with pain ratings, but it also seems to give a direction to the analyses that the authors don't end up following. For instance, why not try other "signatures" like the NPS or signatures of pain anticipation? Also, why not try to look at EEG correlates of SIIPS? I don't think that the authors "need" to do any of that, but I just wanted to let them know that SIIPS results may stir that kind of curiosity in the readers.  

      While this would be indeed very interesting, these additional analyses are not directly related to our current research question. We fear that too many analyses could be confusing for the readers. Nonetheless, we are grateful for your suggestion and will implement additional brain signatures in future studies. 

      - The shock was calibrated to be 60%. Why not have high (70%) and low (30%) conditions at equal distances from neutral, like 80% and 40% for instance? The current design makes it hard to distinguish high from control. Perhaps the "common" effects of high + low are driven by a deactivation for low (30%)?  

      We appreciate your feedback! We adjusted the temperature during the test phase to counteract the habituation that typically occurs with heat stimuli. We believe this was a good measure, as participants rated the control condition at roughly VAS 50 (M = 51.40), which was our target temperature and is equidistant from the VAS 70 and VAS 30 used during conditioning, when no habituation should yet have taken place. We further tested whether participants rated placebo and nocebo trials at equal distances from the control condition and found no bias for either condition. To do this, we computed the individual placebo effect (control minus placebo) and nocebo effect (nocebo minus control) for each participant during the test phase and statistically compared whether they differed in magnitude. There was no significant difference between placebo and nocebo effects for either expectation (placebo effect M = 14.25 vs. nocebo effect M = 17.22, t(49) = 1.92, p = .061) or pain ratings (placebo effect M = 6.52 vs. nocebo effect M = 5.40, t(49) = -1.11, p = .274). This suggests that our expectation manipulation shifted expectation and pain ratings away from the control condition comparably for the placebo and nocebo conditions, and thus argues against any bias of the conditioning temperatures. Please also note that the analysis of the common effects was masked for differences between the high and low conditions; therefore, the effects cannot be driven by one condition alone.
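      As an aside, the comparison described here boils down to a paired t-test on per-participant effect magnitudes. A minimal sketch of that computation (hypothetical data and variable names, not the authors' code):

      ```python
      import numpy as np
      from scipy import stats

      # Hypothetical per-participant mean ratings (VAS 0-100), one value per subject.
      rng = np.random.default_rng(0)
      control = rng.normal(51, 8, size=50)  # neutral-expectation trials
      placebo = rng.normal(45, 8, size=50)  # positive-expectation trials
      nocebo = rng.normal(57, 8, size=50)   # negative-expectation trials

      # Individual effect magnitudes, both defined so that larger = stronger effect.
      placebo_effect = control - placebo    # pain relief relative to control
      nocebo_effect = nocebo - control      # pain increase relative to control

      # Paired t-test: do the two effect magnitudes differ?
      t, p = stats.ttest_rel(placebo_effect, nocebo_effect)
      print(f"t({len(control) - 1}) = {t:.2f}, p = {p:.3f}")
      ```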

      - If I understand correctly, all fMRI contrasts were thresholded with FWE. This is fine, but very strict. The authors could have opted for FDR. Maybe I missed something here....  

      While it is true that FDR is the more liberal approach, it is not valid for spatially correlated fMRI data and is no longer available in SPM for the correction of multiple comparisons. The newly implemented topological peak-based FDR correction is comparably sensitive to the FWE correction (see Chumbley et al.). We opted for the slightly more conservative approach in our preregistration (_p_FWE < .05); therefore, a change of the correction is not possible.
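      For context, nonparametric FWE control of this kind is typically implemented with a max-statistic permutation scheme: group labels are shuffled, the maximum statistic across voxels is recorded on each shuffle, and its (1 - alpha) quantile becomes the family-wise threshold. A minimal sketch of that generic procedure (our illustration, not the authors' SPM pipeline):

      ```python
      import numpy as np

      def fwe_threshold(group_a, group_b, n_perm=10000, alpha=0.05, seed=0):
          """Family-wise error threshold via max-statistic permutations.

          group_a, group_b: (subjects x voxels) arrays for a two-sample t contrast.
          Returns the critical t value controlling FWE at `alpha` across voxels.
          """
          rng = np.random.default_rng(seed)
          pooled = np.vstack([group_a, group_b])
          n_a = len(group_a)
          max_t = np.empty(n_perm)
          for k in range(n_perm):
              idx = rng.permutation(len(pooled))  # shuffle group labels
              a, b = pooled[idx[:n_a]], pooled[idx[n_a:]]
              t = (a.mean(0) - b.mean(0)) / np.sqrt(
                  a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
              max_t[k] = t.max()  # the max over voxels is what controls FWE
          return np.quantile(max_t, 1 - alpha)
      ```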

      Altogether, I think that this is a great study. The combination of EEG and fMRI is truly unique and affords many opportunities to examine the transition from expectations to perception. The experimental manipulation of expectations seems to have worked well, and there seem to be very promising results. However, I think that more could have been done. At least, I would recommend trying to give more of a theoretical framework to help interpret the results.  

      We are very grateful for your positive feedback. We took your suggestion seriously and tried to implement a more general framework from the literature (see Büchel et al., 2014) to provide a better explanation for our results.

      References

      Atlas, L. Y., & Wager, T. D. (2014). A meta-analysis of brain mechanisms of placebo analgesia: Consistent findings and unanswered questions. Handbook of Experimental Pharmacology, 225, 37–69. https://doi.org/10.1007/978-3-662-44519-8_3

      Bingel, U., Wanigasekera, V., Wiech, K., Ni Mhuircheartaigh, R., Lee, M. C., Ploner, M., & Tracey, I. (2011). The effect of treatment expectation on drug efficacy: Imaging the analgesic benefit of the opioid remifentanil. Science Translational Medicine, 3(70), 70ra14. https://doi.org/10.1126/scitranslmed.3001244

      Büchel, C., Geuter, S., Sprenger, C., & Eippert, F. (2014). Placebo analgesia: A predictive coding perspective. Neuron, 81(6), 1223–1239. https://doi.org/10.1016/j.neuron.2014.02.042

      Colloca, L., Petrovic, P., Wager, T. D., Ingvar, M., & Benedetti, F. (2010). How the number of learning trials affects placebo and nocebo responses. Pain, 151(2), 430–439. https://doi.org/10.1016/j.pain.2010.08.007

      Freeman, S., Yu, R., Egorova, N., Chen, X., Kirsch, I., Claggett, B., Kaptchuk, T. J., Gollub, R. L., & Kong, J. (2015). Distinct neural representations of placebo and nocebo effects. NeuroImage, 112, 197–207. https://doi.org/10.1016/j.neuroimage.2015.03.015

      Hipp, J. F., Engel, A. K., & Siegel, M. (2011). Oscillatory synchronization in large-scale cortical networks predicts perception. Neuron, 69(2), 387–396. https://doi.org/10.1016/j.neuron.2010.12.027

      Jepma, M., Koban, L., van Doorn, J., Jones, M., & Wager, T. D. (2018). Behavioural and neural evidence for self-reinforcing expectancy effects on pain. Nature Human Behaviour, 2(11), 838–855. https://doi.org/10.1038/s41562-018-0455-8

      Kilner, J. M., Mattout, J., Henson, R., & Friston, K. J. (2005). Hemodynamic correlates of EEG: A heuristic. NeuroImage, 28(1), 280–286. https://doi.org/10.1016/j.neuroimage.2005.06.008

      Nickel, M. M., Tiemann, L., Hohn, V. D., May, E. S., Gil Ávila, C., Eippert, F., & Ploner, M. (2022). Temporal-spectral signaling of sensory information and expectations in the cerebral processing of pain. Proceedings of the National Academy of Sciences of the United States of America, 119(1). https://doi.org/10.1073/pnas.2116616119

      Ploner, M., Sorg, C., & Gross, J. (2017). Brain Rhythms of Pain. Trends in Cognitive Sciences, 21(2), 100–110. https://doi.org/10.1016/j.tics.2016.12.001

      Schmid, J., Bingel, U., Ritter, C., Benson, S., Schedlowski, M., Gramsch, C., Forsting, M., & Elsenbruch, S. (2015). Neural underpinnings of nocebo hyperalgesia in visceral pain: A fMRI study in healthy volunteers. NeuroImage, 120, 114–122. https://doi.org/10.1016/j.neuroimage.2015.06.060

      Shih, Y.‑W., Tsai, H.‑Y., Lin, F.‑S., Lin, Y.‑H., Chiang, C.‑Y., Lu, Z.‑L., & Tseng, M.‑T. (2019). Effects of Positive and Negative Expectations on Human Pain Perception Engage Separate But Interrelated and Dependently Regulated Cerebral Mechanisms. Journal of Neuroscience, 39(7), 1261–1274. https://doi.org/10.1523/JNEUROSCI.2154-18.2018

      Skvortsova, A., Veldhuijzen, D. S., van Middendorp, H., Colloca, L., & Evers, A. W. M. (2020). Effects of Oxytocin on Placebo and Nocebo Effects in a Pain Conditioning Paradigm: A Randomized Controlled Trial. The Journal of Pain, 21(3-4), 430–439. https://doi.org/10.1016/j.jpain.2019.08.010

      Strube, A., Rose, M., Fazeli, S., & Büchel, C. (2021). The temporal and spectral characteristics of expectations and prediction errors in pain and thermoception. ELife, 10. https://doi.org/10.7554/eLife.62809

      Tu, Y., Zhang, Z., Tan, A., Peng, W., Hung, Y. S., Moayedi, M., Iannetti, G. D., & Hu, L. (2016). Alpha and gamma oscillation amplitudes synergistically predict the perception of forthcoming nociceptive stimuli. Human Brain Mapping, 37(2), 501–514. https://doi.org/10.1002/hbm.23048

    1. Machine learning is a young field,

      Young? The author is in their 20s; a case of 'my first encounter with something means it is globally new'?

    2. I expect AI to get much better than it is today. Research on AI systems has shown that they predictably improve given better algorithms, more and better quality data, and more computational power. Labs are in the process of further scaling up their clusters—the groupings of computers that the algorithms run on.

      Ah, an article based on the assumption of future improvement. Compute and data are limiting factors, and you end up weighing whether the compute footprint is more efficient than doing it yourself. Data is even more limiting, as the most meaningful stuff is qualitative rather than quantitative, and statistics on the qualitative stuff won't give you meaning (LLMs being a case in point).

    3. The shared goal of the field of artificial intelligence is to create a system that can do anything. I expect us to soon reach it.

      Is it though? Wrt GAI that is as far away as before imo. The rainbow never gets nearer, because it is dependent on your position.

    4. The economically and politically relevant comparison on most tasks is not whether the language model is better than the best human, it is whether they are better than the human who would otherwise do that task

      True, and that is where this fails outside of bullshit tasks. The unmentioned assumption here is that algogen output can have meaning, rather than just coherence and plausibility.

    5. The general reaction to language models among knowledge workers is one of denial.

      equates 'content production' with knowledge work

    6. my ability to write large amounts of content quickly

      right. 'content production' where the actual meaning isn't relevant?

    7. it can competently generate cogent content on a wide range of topics. It can summarize and analyze texts passably well

      cogent content / passably well isn't the quality benchmark for K-work though.

    1. eLife Assessment

      This valuable study provides convincing evidence that white matter diffusion imaging of the right superior longitudinal fasciculus might help to develop a predictive biomarker of back pain chronicity. The results are based on a discovery-replication approach with different cohorts, but the sample size is limited. The findings will interest researchers studying the brain mechanisms of chronic pain and the development of brain-based biomarkers of chronic pain.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      In this paper, Misic et al showed that white matter properties can be used to classify subacute back pain patients who will develop persistent pain.

      Strengths:

      Compared to most previous papers studying associations between white matter properties and chronic pain, the strength of the method is to perform a prediction in unseen data. Another strength of the paper is the use of three different cohorts. This is an interesting paper that provides a valuable contribution to the field.

      We thank the reviewer for emphasizing the strength of our paper and the importance of validation on multiple unseen cohorts.

      Weaknesses:

      The authors imply that their biomarker could outperform traditional questionnaires to predict pain: "While these models are of great value showing that few of these variables (e.g. work factors) might have significant prognostic power on the long-term outcome of back pain and provide easy-to-use brief questionnaires-based tools, (21, 25) parameters often explain no more than 30% of the variance (28-30) and their prognostic accuracy is limited.(31)". I don't think this is correct; questionnaire-based tools can achieve far greater prediction than their model in about half a million individuals from the UK Biobank (Tanguay-Sabourin et al., A prognostic risk score for the development and spread of chronic pain, Nature Medicine 2023).

      We agree with the reviewer that we might have underestimated the prognostic accuracy of questionnaire-based tools, especially the strong predictive accuracy shown by Tanguay-Sabourin et al. (2023). In this revised version, we have changed both the introduction and the discussion to reflect the questionnaire-based prognostic accuracy reported in the seminal work by Tanguay-Sabourin et al.

      In the introduction (page 4, lines 3-18), we now write:

      “Some studies have addressed this question with prognostic models incorporating demographic, pain-related, and psychosocial predictors.1-4 While these models are of great value showing that a few of these variables (e.g. work factors) might have significant prognostic power on the long-term outcome of back pain, their prognostic accuracy is limited,5 with parameters often explaining no more than 30% of the variance.6-8 A recent notable study in this regard developed a model based on easy-to-use brief questionnaires to predict the development and spread of chronic pain in a variety of pain conditions, capitalizing on a large dataset obtained from the UK Biobank.9 This work demonstrated that only a few features related to the assessment of sleep, neuroticism, mood, stress, and body mass index were enough to predict the persistence and spread of pain with an area under the curve of 0.53-0.73. Yet, this study is unique in showing such a predictive value of questionnaire-based tools. Neurobiological measures could therefore complement existing prognostic models based on psychosocial variables to improve overall accuracy and discriminative power. More importantly, neurobiological factors such as brain parameters can provide a mechanistic understanding of chronicity and its central processing.”

      And in the conclusion (page 22, lines 5-9), we write:

      “Integrating findings from studies that used questionnaire-based tools and showed remarkable predictive power9 with neurobiological measures that can offer mechanistic insights into chronic pain development, could enhance predictive power in CBP prognostic modeling.”

      Moreover, the main weakness of this study is the sample size. It remains small despite having 3 cohorts. This is problematic because results are often overfitted in such a small sample size brain imaging study, especially when all the data are available to the authors at the time of training the model (Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research, Nature Reviews in Neuroscience 2017). Thus, having access to all the data, the authors have a high degree of flexibility in data analysis, as they can retrain their model any number of times until it generalizes across all three cohorts. In this case, the testing set could easily become part of the training making it difficult to assess the real performance, especially for small sample size studies.

      The reviewer raises a very important point of limited sample size and of the methodology intrinsic of model development and testing. We acknowledge the small sample size in the “Limitations” section of the discussion.   In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA based model is a simple cut-off approach that does not include any learning or hidden layers and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site.  Regarding our SVC approach we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitation section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):

      “In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development.  We believe that the data presented here are nevertheless robust since multisite validated but need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model”. 

      Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data is Cohen’s d > 1.
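      The train/test discipline described in this reply follows the standard pattern: hyperparameters are tuned by cross-validation inside the training cohort only, and the held-out cohort is scored exactly once. A minimal sketch with scikit-learn (random placeholder data; illustrative, not the authors' code):

      ```python
      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import GridSearchCV, StratifiedKFold
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Placeholder arrays standing in for the discovery and held-out cohorts.
      rng = np.random.default_rng(0)
      X_train, y_train = rng.normal(size=(28, 50)), rng.integers(0, 2, 28)
      X_test, y_test = rng.normal(size=(24, 50)), rng.integers(0, 2, 24)

      # Hyperparameters are selected by cross-validation *within* the training data.
      model = GridSearchCV(
          make_pipeline(StandardScaler(), SVC(kernel="linear")),
          param_grid={"svc__C": [0.01, 0.1, 1, 10]},
          cv=StratifiedKFold(5),
          scoring="roc_auc",
      )
      model.fit(X_train, y_train)

      # The held-out cohort is touched exactly once, for the final evaluation.
      auc = roc_auc_score(y_test, model.decision_function(X_test))
      print(f"test AUC = {auc:.2f}")
      ```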

      Even if the performance was properly assessed, their models show AUCs between 0.65-0.70, which is usually considered poor, and most likely without potential clinical use. Despite this, their conclusion was: "This biomarker is easy to obtain (~10 min of scanning time) and opens the door for translation into clinical practice." One may ask who is really willing to use an MRI signature with relatively poor performance that can be outperformed by self-report questionnaires?

      The reviewer is correct: the model performance is fair, which limits its usefulness for clinical translation. We wanted to emphasize that obtaining diffusion images can be done in a short period of time and, hence, as such models' predictive accuracy improves, clinical translation becomes closer to reality. In addition, our findings are based on older diffusion data and limited sample sizes coming from different sites and different acquisition sequences. This by itself would limit the accuracy, especially since the evidence shows that sample size also affects model performance (i.e., testing AUC)10. In the revision, we re-worded the sentence mentioned by the reviewer to reflect the points discussed here. This also motivates us to collect a more homogeneous and larger sample. In the limitations section of the discussion, we now write (page 21, lines 6-9):

      “Even though our model performance is fair, which currently limits its usefulness for clinical translation, we believe that future models would further improve accuracy by using larger homogenous sample sizes and uniform acquisition sequences.”

      Overall, these criticisms are more about the wording sometimes used and the inferences made. I think the strength of the evidence is incomplete to support the main claims of the paper.

      Despite these limitations, I still think this is a very relevant contribution to the field. Showing predictive performance through cross-validation and testing in multiple cohorts is not an easy task and this is a strong effort by the team. I strongly believe this approach is the right one and I believe the authors did a good job.

      We thank the reviewer for acknowledging that our effort and approach were useful.

      Minor points:

      Methods:

      I get the voxel-wise analysis, but I don't understand the methods for the structural connectivity analysis between the 88 ROIs. Have the authors run tractography or have they used a predetermined streamlined form of 'population-based connectome'? They report that models of AUC above 0.75 were considered and tested in the Chicago dataset, but we have no information about what the model actually learned (although this can be tricky for decision tree algorithms). 

      We apologize for the lack of clarity; we did run tractography and we did not use a pre-determined streamlined form of the connectome.

      Finding which connections are important for the classification of SBPr and SBPp is difficult because of our choices during data preprocessing and SVC model development: (1) preprocessing steps which included TNPCA for dimensionality reduction, and regressing out the confounders (i.e., age, sex, and head motion); (2) the harmonization for effects of sites; and (3) the Support Vector Classifier which is a hard classification model11.

      In the methods section (page 30, lines 21-23) we added: “Of note, such models cannot tell us which features are important in classifying the groups. Hence, our model is considered a black-box predictive model, like neural networks.”
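      As background for step (1) above, regressing out confounders is commonly done by residualizing each feature on the nuisance variables. A minimal sketch of that single step (our illustration; in practice the regression should be fit on the training set only, to avoid leakage):

      ```python
      import numpy as np

      def residualize(features, confounds):
          """Remove the least-squares fit of confounds from each feature column.

          features: (n_subjects, n_features); confounds: (n_subjects, n_confounds),
          e.g. columns for age, sex, and head motion.
          """
          C = np.column_stack([np.ones(len(confounds)), confounds])  # add intercept
          beta, *_ = np.linalg.lstsq(C, features, rcond=None)        # fit per feature
          return features - C @ beta                                 # keep residuals
      ```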

      Minor:

      What results are shown in Figure 7? It looks more descriptive than the actual results.

      The reviewer is correct; Figure 7 and Supplementary Figure 4 were both qualitatively illustrating the shape of the SLF. We have now changed both figures in response to this point and a point raised by reviewer 3.  We now show a 3D depiction of different sub-components of the right SLF (Figure 7) and left SLF (Now Supplementary Figure 11 instead of Supplementary Figure 4) with a quantitative estimation of the FA content of the tracts, and the number of tracts per component.  The results reinforce the TBSS analysis in showing asymmetry in the differences between left and right SLF between the groups (i.e. SBPp and SBPr) in both FA values and number of tracts per bundle.

      Reviewer #2 (Public Review):

      The present study aims to investigate brain white matter predictors of back pain chronicity. To this end, a discovery cohort of 28 patients with subacute back pain (SBP) was studied using white matter diffusion imaging. The cohort was investigated at baseline and one-year follow-up, when 16 patients had recovered (SBPr) and 12 had persistent back pain (SBPp). A comparison of baseline scans revealed that SBPr patients had higher fractional anisotropy values in the right superior longitudinal fasciculus (SLF) than SBPp patients and that FA values predicted changes in pain severity. Moreover, the FA values of SBPr patients were larger than those of healthy participants, suggesting a role of FA of the SLF in resilience to chronic pain. These findings were replicated in two other independent datasets. The authors conclude that the right SLF might be a robust predictive biomarker of CBP development with the potential for clinical translation.

      Developing predictive biomarkers for pain chronicity is an interesting, timely, and potentially clinically relevant topic. The paradigm and the analysis are sound, the results are convincing, and the interpretation is adequate. A particular strength of the study is the discovery-replication approach with replications of the findings in two independent datasets.

      We thank reviewer 2 for pointing to the strength of our study.

      The following revisions might help to improve the manuscript further.

      - Definition of recovery. In the New Haven and Chicago datasets, SBPr and SBPp patients are distinguished by reductions of >30% in pain intensity. In contrast, in the Mannheim dataset, both groups are distinguished by reductions of >20%. This should be harmonized. Moreover, as there is no established definition of recovery (reference 79 does not provide a clear criterion), it would be interesting to know whether the results hold for different definitions of recovery. Control analyses for different thresholds could strengthen the robustness of the findings.

      The reviewer raises an important point regarding the definition of recovery.  To address the reviewers’ concern we have added a supplementary figure (Fig. S6) showing the results in the Mannheim data set if a 30% reduction is used as a recovery criterion, and in the manuscript (page 11, lines 1,2) we write: “Supplementary Figure S6 shows the results in the Mannheim data set if a 30% reduction is used as a recovery criterion in this dataset (AUC= 0.53)”.

      We would like to emphasize here several points that support the use of different recovery thresholds between New Haven and Mannheim. The New Haven primary pain ratings relied on a visual analogue scale (VAS), while the Mannheim data relied on the German version of the West Haven-Yale Multidimensional Pain Inventory. In addition, the Mannheim data were pre-registered with a definition of recovery at 20% and are part of a larger subacute-to-chronic pain study with prior publications from this cohort using the 20% cut-off12. Finally, a more recent consensus publication13 from IMMPACT indicates that a change of at least 30% is needed for a moderate improvement in pain on the 0-10 Numerical Rating Scale, but that this percentage depends on baseline pain levels.
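      The control analysis amounts to re-deriving the group labels under a different cut-off. A toy sketch (hypothetical ratings, chosen so that one patient changes label between the two criteria):

      ```python
      import numpy as np

      def label_recovery(baseline, followup, threshold=0.30):
          """Mark patients as recovered (SBPr) if pain dropped by more than `threshold`."""
          reduction = (baseline - followup) / baseline
          return reduction > threshold

      baseline = np.array([60.0, 50.0, 70.0, 40.0])
      followup = np.array([30.0, 37.0, 60.0, 25.0])
      print(label_recovery(baseline, followup, 0.20))  # 20% criterion (Mannheim)
      print(label_recovery(baseline, followup, 0.30))  # 30% criterion (New Haven/Chicago)
      ```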

      - Analysis of the Chicago dataset. The manuscript includes results on FA values and their association with pain severity for the New Haven and Mannheim datasets but not for the Chicago dataset. It would be straightforward to show figures like Figures 1 - 4 for the Chicago dataset, as well.

      We welcome the reviewer’s suggestion; we added these analyses to the results section of the resubmitted manuscript (page 11, lines 13-16): “The correlation between FA values in the right SLF and pain severity in the Chicago data set showed marginal significance (p = 0.055) at visit 1 (Fig. S8A) and higher FA values were significantly associated with a greater reduction in pain at visit 2 (p = 0.035) (Fig. S8B).”

      - Data sharing. The discovery-replication approach of the present study distinguishes the present from previous approaches. This approach enhances the belief in the robustness of the findings. This belief would be further enhanced by making the data openly available. It would be extremely valuable for the community if other researchers could reproduce and replicate the findings without restrictions. It is not clear why the fact that the studies are ongoing prevents the unrestricted sharing of the data used in the present study.

      We greatly appreciate the reviewer's suggestion to share our data sets, as we strongly support the Open Science initiative. The Chicago data set is already publicly available. The New Haven data set will be shared on the Open Pain repository, and the Mannheim data set will be uploaded to heiDATA or heiARCHIVE at Heidelberg University in the near future. We cannot share the data immediately because this project is part of the Heidelberg pain consortium, “SFB 1158: From nociception to chronic pain: Structure-function properties of neural pathways and their reorganization.” Within this consortium, all data must be shared following a harmonized structure across projects, and no study will be published openly until all projects have completed initial analysis and quality control.

      Reviewer #3 (Public Review):

      Summary:

      The authors suggest a new biomarker of chronic back pain with the option to predict the result of treatment. The authors found a significant difference in a fractional anisotropy measure in the superior longitudinal fasciculus for recovered patients with chronic back pain.

      Strengths:

      The results were reproduced in three different groups at different studies/sites.

      Weaknesses:

      - The number of participants is still low.

      The reviewer raises a very important point of limited sample size. As discussed in our replies to reviewer number 1:

      We acknowledge the small sample size in the “Limitations” section of the discussion.   In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA based model is a simple cut-off approach that does not include any learning or hidden layers and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site.  Regarding our SVC approach we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitation section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):

      “In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development.  We believe that the data presented here are nevertheless robust since multisite validated but need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model”. 

      Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data is Cohen’s d > 1.

      - An explanation of microstructure changes was not given.

      The reviewer points to an important gap in our discussion. While we cannot directly study the actual tissue microstructure, we further explored the changes observed in the SLF by calculating diffusivity measures. We have now performed the analysis of mean, axial, and radial diffusivity.

      In the results section we added (page 7, lines 12-19): “We also examined mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) extracted from the right SLF shown in Fig. 1 to further understand which diffusion component differs between the groups. The right SLF MD is significantly increased (p < 0.05) in the SBPr compared to SBPp patients (Fig. S3), while the right SLF RD is significantly decreased (p < 0.05) in the SBPr compared to SBPp patients in the New Haven data (Fig. S4). Axial diffusivity extracted from the right SLF mask did not show a significant difference between SBPr and SBPp (p = 0.28) (Fig. S5).”

      In the discussion, we write (page 15, lines 10-20):

      “Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14.  Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts,15 our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”
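      For reference, all four scalar maps discussed here are simple functions of the diffusion tensor eigenvalues; a short sketch of the standard definitions:

      ```python
      import numpy as np

      def dti_scalars(eigvals):
          """Standard DTI scalars from tensor eigenvalues (l1 >= l2 >= l3)."""
          l1, l2, l3 = eigvals
          md = (l1 + l2 + l3) / 3.0  # mean diffusivity
          ad = l1                    # axial diffusivity (along the principal axis)
          rd = (l2 + l3) / 2.0       # radial diffusivity (perpendicular)
          fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                       / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
          return fa, md, ad, rd

      # Example: a moderately anisotropic white matter voxel (units: 1e-3 mm^2/s)
      print(dti_scalars((1.6, 0.5, 0.3)))
      ```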

      - Some technical drawbacks are presented.

      We are uncertain if the reviewer is suggesting that we have acknowledged certain technical drawbacks and expects further elaboration on our part. We kindly request that the reviewer specify what particular issues need to be addressed so that we can respond appropriately.

      Recommendations For The Authors:

      We thank the reviewers for their constructive feedback, which has significantly improved our manuscript. We have done our best to answer the criticisms that they raised point-by-point.

      Reviewer #2 (Recommendations For The Authors):

      The discovery-replication approach of the current study justifies the use of the term 'robust.' In contrast, previous studies on predictive biomarkers using functional and structural brain imaging did not pursue similar approaches and have not been replicated. Still, the respective biomarkers are repeatedly referred to as 'robust.' Throughout the manuscript, it would, therefore, be more appropriate to remove the label 'robust' from those studies.

      We thank the reviewer for this valuable suggestion. We removed the label 'robust' throughout the manuscript when referring to previous studies that did not follow the same approach and have not yet been replicated.

      Reviewer #3 (Recommendations For The Authors):

      This is, indeed, quite a well-written manuscript with very interesting findings and an interesting patient group. There are a few comments that weaken the findings.

      (1) It is a bit frustrating to read at the beginning how important chronic back pain is, given the small number of patients in the studies used. At least the number of healthy subjects could be higher.

      The reviewer raises an important point regarding the number of pain-free healthy controls (HC) in our samples. We first note that our primary statistical analysis focused on comparing recovered and persistent patients at baseline and validating these findings across sites without directly comparing them to HCs. Nevertheless, the data from New Haven included 28 HCs at baseline, and the data from Mannheim included 24 HCs. Although these sample sizes are not large, they have enabled us to clearly establish that the recovered SBPr patients generally have larger FA values in the right superior longitudinal fasciculus compared to the HCs, a finding consistent across sites (see Figs. 1 and 3). This suggests that the general pain-free population includes individuals with both low and high-risk potential for chronic pain. It also offers one explanation for the reported lack of differences or inconsistent differences between chronic low-back pain patients and HCs in the literature, as these differences likely depend on the (unknown) proportion of high- and low-risk individuals in the control groups. Therefore, if the high-risk group is more represented by chance in the HC group, comparisons between HCs and chronic pain patients are unlikely to yield statistically significant results. Thus, while we agree with the reviewer that the sample sizes of our HCs are limited, this limitation does not undermine the validity of our findings.

      (2) Pain reaction in the brain is in general a quite popular topic and could be connected to the findings or mentioned in the introduction.

      We thank the reviewer for this suggestion. We have now added a summary of brain responses to pain in general. In the introduction, we now write (page 4, lines 19-22 and page 5, lines 1-5):

      “Neuroimaging research on chronic pain has uncovered a shift in brain responses to pain when acute and chronic pain are compared. The thalamus, primary somatosensory and motor areas, insula, and mid-cingulate cortex most often respond to acute pain and can predict the perception of acute pain16-19. Conversely, limbic brain areas are more frequently engaged when patients report the intensity of their clinical pain20, 21. Consistent findings have demonstrated that increased prefrontal-limbic functional connectivity during episodes of heightened subacute ongoing back pain or during a reward learning task is a significant predictor of CBP.12, 22 Furthermore, low somatosensory cortex excitability in the acute stage of low back pain was identified as a predictor of CBP chronicity.23”

      (3) There is clearly observed structural asymmetry in the brain; why not elaborate on this finding further? Would the SLF be a hub in a connectivity analysis? Would FA changes have along-tract features? etc.

      The reviewer raises an important point. Our data give grounds to suggest an asymmetry in the role of the SLF in resilience to chronic pain. We discuss this at length in the Discussion section. In addition, we have elaborated on our data analysis using our Population-Based Structural Connectome pipeline on the New Haven dataset. Following that approach, we studied the number of fiber tracts making up different parts of the SLF on the right and left sides. In addition, we extracted FA values along fiber tracts and compared the averages across groups. Our new analyses are presented in the modified Figure 7 and Supplementary Figure S11. These results indeed support the asymmetry hypothesis. The SLF could be a hub of structural connectivity. Please note, however, that given the nature of our discovery-and-validation design, the study of structural connectivity of the SLF is beyond the scope of this paper, because tract-based connectivity is very sensitive to data collection parameters and is less accurate with single-shell DWI acquisition. Therefore, we will pursue the study of connectivity of the SLF in the future with well-powered and more harmonized data.

      (4) Only FA is mentioned; did the authors work with MD, RD, and AD metrics?

      We thank the reviewer for this suggestion that helps in providing a clearer picture of the differences in the right SLF between SBPr and SBPp. We have now extracted MD, AD, and RD for the predictive mask we discovered in Figure 1 and plotted the values comparing SBPr to SBPp patients in Fig. S3, Fig. S4, and Fig. S5 across all sites using one comprehensive harmonized analysis. We have added in the discussion: “Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14.  Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts15, our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”

      (5) There are many speculations in the Discussion, however, some of them are not supported by the results.

      We agree with the reviewer and thank them for pointing this out. We have now made several changes across the discussion related to the wording where speculations were not supported by the data. For example, instead of writing (page 16, lines 7-9): “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain is a top-down phenomenon related to visuospatial and body awareness.”, We write: “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain might be related to a top-down phenomenon involving visuospatial and body awareness.”

      (6) The methods section was written quite roughly. In order to obtain all the details needed for a potential replication, one needs to jump around the text.

      The reviewer is correct; our methodology may have lacked sufficiently detailed descriptions. Therefore, we have clarified our methodology more extensively. Under “Estimation of structural connectivity”, we now write (page 28, lines 20,21 and page 29, lines 1-19):

      “Structural connectivity was estimated from the diffusion tensor data using a population-based structural connectome (PSC) detailed in a previous publication.24 PSC can utilize the geometric information of streamlines, including shape, size, and location for a better parcellation-based connectome analysis. It, therefore, preserves the geometric information, which is crucial for quantifying brain connectivity and understanding variation across subjects. We have previously shown that the PSC pipeline is robust and reproducible across large data sets.24 PSC output uses the Desikan-Killiany atlas (DKA) 25 of cortical and sub-cortical regions of interest (ROI). The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S6.  PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs in addition to measures of fractional anisotropy and mean diffusivity.”
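      Schematically, the final aggregation step of such a pipeline fills ROI-by-ROI matrices of streamline counts and mean along-tract FA once each retained streamline has been assigned to an ROI pair. A simplified illustration of that output format (our sketch, not the PSC implementation itself):

      ```python
      import numpy as np

      N_ROIS = 87  # 68 cortical + 19 subcortical Desikan-Killiany regions

      def build_connectome(endpoints, streamline_fa, n_rois=N_ROIS):
          """Aggregate per-streamline results into count and mean-FA matrices.

          endpoints: list of (roi_i, roi_j) index pairs, one per retained streamline;
          streamline_fa: mean FA sampled along each corresponding streamline.
          """
          counts = np.zeros((n_rois, n_rois))
          fa_sum = np.zeros((n_rois, n_rois))
          for (i, j), fa in zip(endpoints, streamline_fa):
              counts[i, j] += 1; counts[j, i] += 1
              fa_sum[i, j] += fa; fa_sum[j, i] += fa
          mean_fa = np.divide(fa_sum, counts,
                              out=np.zeros_like(fa_sum), where=counts > 0)
          return counts, mean_fa
      ```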

      (7) Why not join all the data with harmonisation in order to reproduce the results (TBSS)

      We have followed the reviewer’s suggestion; we used neuroCombat harmonization after pooling all the diffusion weighted data into one TBSS analysis. Our results remain the same after harmonization. 

      In the Supplementary Information we added a paragraph explaining the method for harmonization; we write (SI, page 3, lines 25-34):

      “Harmonization of DTI data using neuroCombat. Because the 3 data sets originated from different sites using different MR data acquisition parameters and slightly different recruitment criteria, we applied neuroCombat 29 to correct for site effects and then repeated the TBSS analysis shown in Figure 1 and the validation analyses shown in Figures 5 and 6. First, the FA maps derived using the FDT toolbox were pooled into one TBSS analysis, where registration to a standard FA template (FMRIB58_FA_1mm.nii.gz, part of FSL) was performed. Next, neuroCombat was applied to the FA maps as implemented in Python, with batch (i.e., site) effects modeled with a vector containing 1 for New Haven, 2 for Chicago, and 3 for Mannheim originating maps, respectively. The harmonized maps were then skeletonized to allow for TBSS.”

      And in the results section, we write (page 12, lines 2-21):

      “Validation after harmonization

      Because the DTI data sets originated from 3 sites with different MR acquisition parameters, we repeated our TBSS and validation analyses after correcting for variability arising from site differences using DTI data harmonization as implemented in neuroCombat.29 The method of harmonization is described in detail in the Supplementary Methods. The whole-brain unpaired t-test depicted in Figure 1 was repeated after neuroCombat and yielded very similar results (Fig. S9A), showing significantly increased FA in the SBPr compared to SBPp patients in the right superior longitudinal fasciculus (MNI coordinates of peak voxel: x = 40; y = -42; z = 18 mm; t(max) = 2.52; p < 0.05, corrected against 10,000 permutations). We again tested the accuracy of local diffusion properties (FA) of the right SLF extracted from the mask of voxels passing threshold in the New Haven data (Fig. S9A) in classifying the Mannheim and the Chicago patients, respectively, into persistent and recovered. FA values corrected for age, gender, and head displacement accurately classified SBPr and SBPp patients from the Mannheim data set with an AUC = 0.67 (p = 0.023, tested against 10,000 random permutations, Fig. S9B and S7D), and patients from the Chicago data set with an AUC = 0.69 (p = 0.0068) at baseline (Fig. S9C and S7E) and an AUC = 0.67 (p = 0.0098) at follow-up (Fig. S9D and S7F), confirming the predictive cluster from the right SLF across sites. The application of neuroCombat significantly changes the FA values, as shown in Fig. S10, but does not change the results between groups.”
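      The permutation p-values quoted for these AUCs can be obtained by shuffling the outcome labels and re-computing the AUC on each shuffle. A minimal sketch of that generic test (our illustration, not the authors' code):

      ```python
      import numpy as np
      from sklearn.metrics import roc_auc_score

      def auc_permutation_p(scores, labels, n_perm=10000, seed=0):
          """Permutation p-value for an observed AUC (shuffle labels, re-score)."""
          rng = np.random.default_rng(seed)
          observed = roc_auc_score(labels, scores)
          null = np.array([roc_auc_score(rng.permutation(labels), scores)
                           for _ in range(n_perm)])
          p = (np.sum(null >= observed) + 1) / (n_perm + 1)
          return observed, p
      ```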

      Minor comments

      (1) In the case of the New Haven data, MB 4 and GRAPPA 2 were used; these two factors accelerate the imaging 8-fold and often lead to quite poor quality. Any kind of QA?

      We thank the reviewer for identifying this error. GRAPPA 2 was in fact used for our T1-MPRAGE image acquisition but not during the diffusion data acquisition. The diffusion data were acquired with a multi-band acceleration factor of 4.  We have now corrected this mistake.

      (2) Why not include MPRAGE data into the analysis, in particular, for predictions?

      We thank the reviewer for the suggestion. The collaboration on this paper was set up around diffusion data. In addition, the MPRAGE data from New Haven related to prediction are already published (10.1073/pnas.1918682117), and the MPRAGE data of the Mannheim data set are part of a larger project and will be published elsewhere.

      (3) In preprocessing, the authors wrote: "Eddy current corrects for image distortions due to susceptibility-induced distortions and eddy currents in the gradient coil". However, they did not mention that they acquired phase-opposite b0 data. It means eddy_openmp likely works only as an alignment tool, not as a susceptibility corrector.

      We kindly thank the reviewer for bringing this to our attention. We indeed did not collect b0 data in the phase-opposite direction; however, eddy_openmp can still be used to correct for eddy-current distortions and perform motion correction, although the absence of phase-opposite b0 data may limit its ability to fully address susceptibility artifacts. This is now noted in the Supplementary Methods under the Preprocessing section (SI, page 3, lines 16-18): “We do note, however, that as we did not acquire data in the phase-opposite direction, the susceptibility-induced distortions may not be fully corrected.”

      (4) Version of FSL?

      We thank the reviewer for raising this point, which we have now addressed under the Supplementary Methods (SI, page 3, lines 10-11): “Preprocessing of all data sets was performed employing the same procedures and the FMRIB diffusion toolbox (FDT) running on FSL version 6.0.”

      (5) Some short sketches about the connectivity analysis could be useful, at least in SI.

      We are grateful for this suggestion, which improves our work. We added sketches of the connectivity analysis; please see Figure 7 and Supplementary Figure 11.

      (6) Machine learning: functions, language, version?

      We thank the reviewer for raising these points, which we have now addressed in the resubmission by adding a detailed description of the structural connectivity analysis to the Methods section. We added: “The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S7. PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs in addition to measures of fractional anisotropy and mean diffusivity.”

      The script is described and provided at: https://github.com/MISICMINA/DTI-Study-Resilience-to-CBP.git.

      (7) Ethical approval?

      The New Haven data are part of a study that was approved by the Yale University Institutional Review Board. This is mentioned under the description of the “New Haven (Discovery) data set” (page 23, lines 1,2). Likewise, the Mannheim data are part of a study approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and conducted in accordance with the Declaration of Helsinki in its most recent form. This is also mentioned under “Mannheim data set” (page 26, lines 2-5): “The study was approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and was conducted in accordance with the declaration of Helsinki in its most recent form.”

      (1) Traeger AC, Henschke N, Hubscher M, et al. Estimating the Risk of Chronic Pain: Development and Validation of a Prognostic Model (PICKUP) for Patients with Acute Low Back Pain. PLoS Med 2016;13:e1002019.

      (2) Hill JC, Dunn KM, Lewis M, et al. A primary care back pain screening tool: identifying patient subgroups for initial treatment. Arthritis Rheum 2008;59:632-641.

      (3) Hockings RL, McAuley JH, Maher CG. A systematic review of the predictive ability of the Orebro Musculoskeletal Pain Questionnaire. Spine (Phila Pa 1976) 2008;33:E494-500.

      (4) Chou R, Shekelle P. Will this patient develop persistent disabling low back pain? JAMA 2010;303:1295-1302.

      (5) Silva FG, Costa LO, Hancock MJ, Palomo GA, Costa LC, da Silva T. No prognostic model for people with recent-onset low back pain has yet been demonstrated to be suitable for use in clinical practice: a systematic review. J Physiother 2022;68:99-109.

      (6) Kent PM, Keating JL. Can we predict poor recovery from recent-onset nonspecific low back pain? A systematic review. Man Ther 2008;13:12-28.

      (7) Hruschak V, Cochran G. Psychosocial predictors in the transition from acute to chronic pain: a systematic review. Psychol Health Med 2018;23:1151-1167.

      (8) Hartvigsen J, Hancock MJ, Kongsted A, et al. What low back pain is and why we need to pay attention. Lancet 2018;391:2356-2367.

      (9) Tanguay-Sabourin C, Fillingim M, Guglietti GV, et al. A prognostic risk score for development and spread of chronic pain. Nat Med 2023;29:1821-1831.

      (10) Spisak T, Bingel U, Wager TD. Multivariate BWAS can be replicable with moderate sample sizes. Nature 2023;615:E4-E7.

      (11) Liu Y, Zhang HH, Wu Y. Hard or Soft Classification? Large-margin Unified Machines. J Am Stat Assoc 2011;106:166-177.

      (12) Loffler M, Levine SM, Usai K, et al. Corticostriatal circuits in the transition to chronic back pain: The predictive role of reward learning. Cell Rep Med 2022;3:100677.

      (13) Smith SM, Dworkin RH, Turk DC, et al. Interpretation of chronic pain clinical trial outcomes: IMMPACT recommended considerations. Pain 2020;161:2446-2461.

      (14) Lieberman G, Shpaner M, Watts R, et al. White Matter Involvement in Chronic Musculoskeletal Pain. The Journal of Pain 2014;15:1110-1119.

      (15) Mansour AR, Baliki MN, Huang L, et al. Brain white matter structural properties predict transition to chronic pain. Pain 2013;154:2160-2168.

      (16) Wager TD, Atlas LY, Lindquist MA, Roy M, Woo CW, Kross E. An fMRI-based neurologic signature of physical pain. N Engl J Med 2013;368:1388-1397.

      (17) Lee JJ, Kim HJ, Ceko M, et al. A neuroimaging biomarker for sustained experimental and clinical pain. Nat Med 2021;27:174-182.

      (18) Becker S, Navratilova E, Nees F, Van Damme S. Emotional and Motivational Pain Processing: Current State of Knowledge and Perspectives in Translational Research. Pain Res Manag 2018;2018:5457870.

      (19) Spisak T, Kincses B, Schlitt F, et al. Pain-free resting-state functional brain connectivity predicts individual pain sensitivity. Nat Commun 2020;11:187.

      (20) Baliki MN, Apkarian AV. Nociception, Pain, Negative Moods, and Behavior Selection. Neuron 2015;87:474-491.

      (21) Elman I, Borsook D. Common Brain Mechanisms of Chronic Pain and Addiction. Neuron 2016;89:11-36.

      (22) Baliki MN, Petre B, Torbey S, et al. Corticostriatal functional connectivity predicts transition to chronic back pain. Nat Neurosci 2012;15:1117-1119.

      (23) Jenkins LC, Chang WJ, Buscemi V, et al. Do sensorimotor cortex activity, an individual's capacity for neuroplasticity, and psychological features during an episode of acute low back pain predict outcome at 6 months: a protocol for an Australian, multisite prospective, longitudinal cohort study. BMJ Open 2019;9:e029027.

      (24) Zhang Z, Descoteaux M, Zhang J, et al. Mapping population-based structural connectomes. Neuroimage 2018;172:130-145.

      (25) Desikan RS, Segonne F, Fischl B, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 2006;31:968-980.

      (26) Maier-Hein KH, Neher PF, Houde J-C, et al. The challenge of mapping the human connectome based on diffusion tractography. Nature Communications 2017;8:1349.

      (27) Chiang MC, McMahon KL, de Zubicaray GI, et al. Genetics of white matter development: a DTI study of 705 twins and their siblings aged 12 to 29. Neuroimage 2011;54:2308-2317.

      (28) Zhao B, Li T, Yang Y, et al. Common genetic variation influencing human white matter microstructure. Science 2021;372.

      (29) Fortin JP, Parker D, Tunc B, et al. Harmonization of multi-site diffusion tensor imaging data. Neuroimage 2017;161:149-170.

    3. Reviewer #1 (Public review):

      Summary:

      In this paper, Misic et al showed that white matter properties can be used to classify subacute back pain patients who will develop persistent pain.

      Strengths:

      Compared to most previous papers studying associations between white matter properties and chronic pain, the strength of the method is to perform a prediction in unseen data. Another strength of the paper is the use of three different cohorts. This is an interesting paper that provides a valuable contribution to the field.

      Weaknesses:

      The main weakness of this study is the sample size. It remains small despite having 3 cohorts. This is problematic because results are often overfitted in such a small sample size brain imaging study, especially when all the data are available to the authors at the time of training the model (Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research, Nature Reviews in Neuroscience 2017). Thus, having access to all the data, the authors have a high degree of flexibility in data analysis, as they can retrain their model any number of times until it generalizes across all three cohorts. In this case, the testing set could easily become part of the training making it difficult to assess the real performance, especially for small sample size studies.

      Even if the performance was properly assessed, their models show AUCs between 0.65-0.70, which is usually considered poor, and most likely without potential clinical use. Despite this, their conclusion was: "This biomarker is easy to obtain (~10 min of scanning time) and opens the door for translation into clinical practice." One may ask who is really willing to use an MRI signature with a relatively poor performance that can be outperformed by self-report questionnaires?

      Overall, these criticisms are more about the wording sometimes used and the inferences made. I still think this is a very relevant contribution to the field. Showing predictive performance through cross-validation and testing in multiple cohorts is not an easy task and this is a strong effort by the team. I strongly believe this approach is the right one and I believe the authors did a good job.

    4. Reviewer #2 (Public review):

      The present study aims to investigate brain white matter predictors of back pain chronicity. To this end, a discovery cohort of 28 patients with subacute back pain (SBP) was studied using white matter diffusion imaging. The cohort was investigated at baseline and one-year follow-up, when 16 patients had recovered (SBPr) and 12 had persistent back pain (SBPp). A comparison of baseline scans revealed that SBPr patients had higher fractional anisotropy values in the right superior longitudinal fasciculus (SLF) than SBPp patients and that FA values predicted changes in pain severity. Moreover, the FA values of SBPr patients were larger than those of healthy participants, suggesting a role of FA of the SLF in resilience to chronic pain. These findings were replicated in two other independent datasets. The authors conclude that the right SLF might be a robust predictive biomarker of CBP development with the potential for clinical translation.

      Developing predictive biomarkers for pain chronicity is an interesting, timely, and potentially clinically relevant topic. The paradigm and the analysis are sound, the results are convincing, and the interpretation is adequate. A particular strength of the study is the discovery-replication approach with replications of the findings in two independent datasets.

    5. Reviewer #3 (Public review):

      Summary:

      The authors suggest a new biomarker of chronic back pain with the option to predict the result of treatment.

      Strengths:

      The results were reproduced in three studies.

      Weaknesses:

      The number of participants is still low, an explanation of the microstructural changes was not given, and some technical drawbacks are present.

    1. We know that eyewitness identifications are fallible.

      Some people could misremember what they saw, or lie about what happened.

    2. The brain abhors a vacuum. Under the best of observation conditions, the absolute best, we only detect, encode and store in our brains bits and pieces of the entire experience in front of us, and they're stored in different parts of the brain.

      The brain can only record part of the information, and it fills in the missing parts when recalling it, which explains why people's memories can be unreliable.

    1. eLife Assessment

      By combining psychophysics and computational modelling based on the Theory of Visual Attention, this study examines the mechanisms underlying self-prioritization by revealing the influence of self-associations on early attentional selection. While the findings are important, the experimental evidence is incomplete. The relationship between consciousness (awareness) and attention, the potential contamination by arousal, the inconsistent and unexpected results, and the distinction between social and perceptual tasks need to be addressed or improved. The work will be of interest to researchers in psychology, cognitive science, and neuroscience.

    2. Reviewer #1 (Public review):

      Summary:

      The authors intended to investigate the earliest mechanisms enabling self-prioritization, especially in attention. Combining a temporal order judgement task with computational modelling based on the Theory of Visual Attention (TVA), the authors suggested that shapes associated with the self can fundamentally alter the attentional selection of sensory information into awareness. This self-prioritization in attentional selection occurs automatically at early perceptual stages. Furthermore, the processing benefits obtained from attentional selection via self-relatedness and physical salience were separable from each other.

      Strengths:

      The manuscript is written in a way that is easy to follow. The methods of the paper are very clear and appropriate.

      Weaknesses:

      There are two main concerns:

      (1) The authors had too strong a pre-hypothesis that self-prioritization is associated with attention. They used prior entry to consciousness (awareness) as an index of attention, which is not appropriate. Processes other than attention (e.g. high arousal, high sensitivity) may make a stimulus enter consciousness earlier, and the self-related/associated stimulus may engage such processes, rather than attention, to make the stimulus more easily caught. Perhaps the authors could include other methods such as EEG or MEG to answer this question.

      (2) The authors suggested that there are two independent attention processes. I doubt that the brain needs two attention systems. Is it possible that social and perceptual salience (the physical properties of the stimulus) engaged the same attentional process through different routes?

    3. Reviewer #2 (Public review):

      Summary:

      The main aim of this research was to explore whether and how self-associations (as opposed to other associations) bias early attentional selection, and whether this can explain well-known self-prioritization phenomena, such as the self-advantage in perceptual matching tasks. The authors adopted the Theory of Visual Attention (TVA), estimating its parameters with a hierarchical Bayesian model from the field of attention, and applied it to investigate the mechanisms underlying self-prioritization. They also discussed the constraints on the self-prioritization effect in attentional selection. The key conclusions reported were:

      (1) Self-association enhances both attentional weights and processing capacity

      (2) Self-prioritization in attentional selection occurs automatically but diminishes when active social decoding is required, and

      (3) Social and perceptual salience capture attention through distinct mechanisms.

      Strengths:

      Transferring the Theory of Visual Attention parameters estimated by a hierarchical Bayesian model to investigate self-prioritization in attentional selection was a smart approach. This method provides a valuable tool for accessing the very early stages of self-processing, i.e., attentional selection. The authors conclude that self-associations can bias visual attention by enhancing both attentional weights and processing capacity and that this process occurs automatically. These findings offer new insights into self-prioritization from the perspective of the early stage of attentional selection.

      Weaknesses:

      (1) The results are not convincing enough to definitively support their conclusions. This is due to inconsistent findings (e.g., the model selection suggested condition-specific C parameters, but the increase in processing capacity was only slight; the correlations between attentional selection bias and SPE were inconsistent across experiments), unexpected results (e.g., when examining the impact of social association on processing rates, the other-associated stimuli were processed faster after social association, while the self-associated stimuli were processed more slowly), and weak correlations between attentional bias and behavioral SPE, which were reported without any p-value corrections. Additionally, the reasons why the attentional bias of self-association occurs automatically but disappears during active social decoding remain difficult to explain. It is also possible that the self-association with shapes was simply not strong enough to produce an attentional bias, rather than the process being automatic as the authors suggest. Although these inconsistencies and unexpected results were discussed, all were post hoc explanations. To convince readers, empirical evidence is needed to support these unexpected findings.

      (2) The generalization of the findings needs further examination. The current results seem to rely heavily on the perceptual matching task. Whether this attentional selection mechanism of self-prioritization can be generalized to other stimuli, such as self-name, self-face, or other domains of self-association advantages, remains to be tested. In other words, more converging evidence is needed.

      (3) The comparison between the "social" and "perceptual" tasks remains debatable, as it is challenging to equate the levels of social salience and perceptual salience. In addition, these two tasks differ not only in terms of social decoding processes but also in other aspects such as task difficulty. Whether the observed differences between the tasks can definitively suggest the specificity of social decoding, as the authors claim, needs further confirmation.

    4. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The authors intended to investigate the earliest mechanisms enabling self-prioritization, especially in attention. Combining a temporal order judgement task with computational modelling based on the Theory of Visual Attention (TVA), the authors suggested that shapes associated with the self can fundamentally alter the attentional selection of sensory information into awareness. This self-prioritization in attentional selection occurs automatically at early perceptual stages. Furthermore, the processing benefits obtained from attentional selection via self-relatedness and physical salience were separable from each other.

      Strengths:

      The manuscript is written in a way that is easy to follow. The methods of the paper are very clear and appropriate.

      Thank you for your valuable feedback and helpful suggestions. Please see specific answers below.

      Weaknesses:

      There are two main concerns:

      (1) The authors had too strong a pre-hypothesis that self-prioritization is associated with attention. They used prior entry to consciousness (awareness) as an index of attention, which is not appropriate. Processes other than attention (e.g. high arousal, high sensitivity) may make a stimulus enter consciousness earlier, and the self-related/associated stimulus may engage such processes, rather than attention, to make the stimulus more easily caught. Perhaps the authors could include other methods such as EEG or MEG to answer this question.

      We also found the possibility that other mechanisms are responsible for “prior entry” interesting, but we believe there are solid grounds for the hypothesis that it is indicative of attention:

      First, prior entry has a long-standing history as an index of attention (e.g., Titchener, 1903; Shore et al., 2001; Yates and Nicholls, 2009; Olivers et al. 2011; see Spence & Parise, 2010, for a review). Of course, other factors (like the ones mentioned) can contribute to encoding speed. However, for the perceptual condition, we systematically varied a stimulus feature that is associated with selective attention (salience, see e.g. Wolfe, 2021) and kept other features that are known to be associated with other factors such as arousal and sensitivity constant across the two variants (e.g. clear above-threshold visibility) or varied them between participants (e.g. the colours / shapes used).

      Second, in the social salience condition we used a manipulation that has repeatedly been used to establish social salience effects in other paradigms (e.g., Li et al., 2022; Liu & Sui, 2016; Scheller et al., 2024; Sui et al., 2015; see Humphreys & Sui, 2016, for a review). We assume that the reviewer’s comment suggests that changes in arousal or sensitivity may be responsible for social salience effects, specifically. We have several reasons to interpret the social salience effects as an alteration in attentional selection, rather than a result of arousal or sensitivity:

      Arousal and attention are closely linked. However, within the present model, arousal is more likely linked to the availability of processing resources (capacity parameter C). That is, enhanced arousal is typically not stimulus-specific, and is therefore unlikely to affect the *relative* advantage in processing weights/rates of the self-associated (vs other-associated) stimuli. Indeed, a recent study showed that arousal does not modulate the relative division of attentional resources (as modelled by the Theory of Visual Attention; Asgeirsson & Nieuwenhuis, 2017). As such, it is unlikely that arousal can explain the observed results in relative processing changes for the self and other identities.

      Further, there is little reason to assume that presenting a different shape enhances perceptual sensitivity. Firstly, all stimuli were presented well above threshold, which would shrink any effects resulting from increases in sensitivity alone. Secondly, shape-associations were counterbalanced across participants, reducing the possibility that specific features present in the stimulus display led to the measurable change in processing rates as a result of enhanced shape-sensitivity.

      Taken together, both the wealth of literature suggesting that prior entry indexes attention and the specific design choices within our study strongly support the notion that the observed changes in processing rates are indicative of changes in attentional selection, rather than other mechanisms (e.g. arousal, sensitivity).
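
      To make the prior-entry logic discussed above concrete for readers: in a TOJ paradigm, prior entry is usually quantified as a shift of the point of subjective simultaneity (PSS), the SOA at which each stimulus is reported "first" equally often. A minimal sketch with SciPy and simulated data (the numbers are illustrative, not from the study):

      ```python
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      # SOA in ms (probe relative to reference) and simulated proportion
      # of "probe first" reports at each SOA.
      soa = np.array([-90, -60, -30, 0, 30, 60, 90])
      p_first = np.array([0.08, 0.15, 0.35, 0.62, 0.80, 0.93, 0.97])

      def psychometric(x, pss, sigma):
          # Cumulative Gaussian: P("probe first") crosses 0.5 at the PSS.
          return norm.cdf((x - pss) / sigma)

      (pss, sigma), _ = curve_fit(psychometric, soa, p_first, p0=(0.0, 30.0))
      # A negative PSS means the probe is perceived earlier than presented,
      # i.e. prior entry for the probe.
      print(f"PSS = {pss:.1f} ms")
      ```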

      (2) The authors suggested that there are two independent attention processes. I doubt that the brain needs two attention systems. Is it possible that social and perceptual salience (the physical properties of the stimulus) engaged the same attentional process through different routes?

      We appreciate this thought-provoking comment. We conceptualize attention as a process that can facilitate different levels of representation, rather than as separate systems tuned to specific types of information. Different forms of representation, such as the perceptual shape, or the associated social identity, may be impacted by the same attentional process at different levels of representation. Indeed, our findings suggest that both social and perceptual salience effects may result from the same attentional system, albeit at different levels of representation. This is further supported by the additivity of perceptual and social salience effects and the negative correlation of processing facilitations between perceptually and socially salient cues. These results may reflect a trade-off in how attentional resources are distributed between either perceptually or socially salient stimuli.

      Reviewer #2 (Public review):

      Summary:

      The main aim of this research was to explore whether and how self-associations (as opposed to other associations) bias early attentional selection, and whether this can explain well-known self-prioritization phenomena, such as the self-advantage in perceptual matching tasks. The authors adopted the Theory of Visual Attention (TVA), estimating its parameters with a hierarchical Bayesian model from the field of attention, and applied it to investigate the mechanisms underlying self-prioritization. They also discussed the constraints on the self-prioritization effect in attentional selection. The key conclusions reported were:

      (1) Self-association enhances both attentional weights and processing capacity

      (2) Self-prioritization in attentional selection occurs automatically but diminishes when active social decoding is required, and

      (3) Social and perceptual salience capture attention through distinct mechanisms.

      Strengths:

      Transferring the Theory of Visual Attention parameters estimated by a hierarchical Bayesian model to investigate self-prioritization in attentional selection was a smart approach. This method provides a valuable tool for accessing the very early stages of self-processing, i.e., attentional selection. The authors conclude that self-associations can bias visual attention by enhancing both attentional weights and processing capacity and that this process occurs automatically. These findings offer new insights into self-prioritization from the perspective of the early stage of attentional selection.

      Thank you for your valuable feedback and helpful suggestions. Please see specific answers below.

      Weaknesses:

      (1) The results are not convincing enough to definitively support their conclusions. This is due to inconsistent findings (e.g., the model selection suggested condition-specific C parameters, but the increase in processing capacity was only slight; the correlations between attentional selection bias and SPE were inconsistent across experiments), unexpected results (e.g., when examining the impact of social association on processing rates, the other-associated stimuli were processed faster after social association, while the self-associated stimuli were processed more slowly), and weak correlations between attentional bias and behavioral SPE, which were reported without any p-value corrections. Additionally, the reasons why the attentional bias of self-association occurs automatically but disappears during active social decoding remain difficult to explain. It is also possible that the self-association with shapes was simply not strong enough to produce an attentional bias, rather than the process being automatic as the authors suggest. Although these inconsistencies and unexpected results were discussed, all were post hoc explanations. To convince readers, empirical evidence is needed to support these unexpected findings.

      Thank you for outlining the specific points that raise your concern. We are happy to address these points as follows:

      a. Replications and Consistency: In our study, we consistently observed trends (relative reduction in processing speed of the self-associated stimulus) in the social salience conditions across experiments. While Experiment 2 demonstrated a significant reduction in processing rate towards self-stimuli, there was a notable trend in Experiment 1 as well.

      b. Condition-specific parameters: The condition-specific C parameters, though presenting a small effect size, significantly improved model fit. Inspecting the HDI ranges of our estimated C parameters indicates a high probability (85-89%) that processing capacity increased due to social associations, suggesting that even small changes (~2 Hz) can hold meaningful implications within the context of attentional selection.
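
      For readers unfamiliar with reading HDIs this way, a minimal sketch of the computation, assuming posterior draws of the capacity parameter C for two conditions are available as arrays (the draws below are simulated for illustration only):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical posterior draws of C (Hz) per condition.
      c_baseline = rng.normal(40.0, 2.0, 10_000)
      c_social = rng.normal(42.0, 2.0, 10_000)

      diff = c_social - c_baseline
      # Posterior probability that capacity increased with social association.
      p_increase = (diff > 0).mean()

      def hdi(samples, mass=0.95):
          # Shortest interval containing `mass` of the posterior draws.
          s = np.sort(samples)
          n = int(np.ceil(mass * len(s)))
          widths = s[n - 1:] - s[: len(s) - n + 1]
          i = widths.argmin()
          return s[i], s[i + n - 1]

      print(p_increase, hdi(diff))
      ```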

      Please also note that the main conclusions about relative salience (self/other, salient/non-salient) are based on the relative processing rates. Processing rates are the product of the processing capacity (condition- but not stimulus-dependent) and the attentional weight (condition- and stimulus-dependent). The latter is crucial to judge the *relative* advantage of the salient stimulus. Hence, the self-/salient-stimulus advantage that is reflected in the ‘processing rate difference’ is automatically also reflected in the relative attentional weights attributed to the self/other and salient/non-salient stimuli. As such, the overall results of an automatic relative advantage of self-associated stimuli hold, independently of the change in overall processing capacity.
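
      In compact form (standard TVA notation; the symbols are ours, for illustration), the argument is:

      ```latex
      % Processing rate of stimulus i: capacity C (condition-dependent)
      % scaled by its relative attentional weight (stimulus-dependent).
      v_i = C \cdot \frac{w_i}{\sum_j w_j},
      \qquad
      \frac{v_{\text{self}}}{v_{\text{other}}} = \frac{w_{\text{self}}}{w_{\text{other}}}
      ```

      Because C and the weight normalization cancel in the ratio, the relative self/other advantage is independent of any change in overall capacity, which is exactly the point made above.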

      c. Correlations: Regarding the correlations the reviewer noted, we wish to clarify that these were exploratory, and not the primary focus of our research. The aim of these exploratory analyses was to gauge the contribution of attentional selection to matching-based SPEs. As SPEs measured via the matching task are typically based on multiple different levels of processing, the contribution of early attentional selection to their overall magnitude was unclear. Since the possible effect sizes could not be gauged in advance, corrected analyses might have prevented the detection of small but meaningful effects. As such, the effect sizes reported serve future studies to estimate power a priori and conduct well-powered replications of such exploratory effects. Additionally, Bayes factors were provided to give an appreciation of the strength of the evidence, all suggesting at least moderate evidence in favour of a correlation. Lastly, please note that effects measured within individuals and within task (processing rate increase in social and perceptual decision dimensions in the TOJ task) showed consistent patterns, suggesting that the modulations within tasks were highly predictive of each other, while the modulations between tasks were not as clearly linked. We will add this clarification to the revised manuscript.

      d. Unexpected results: The unexpected results concerning the processing rates of other-associated versus self-associated stimuli certainly warrant further discussion. We believe that the additional processing steps required for social judgments, reflected in enhanced reaction times, may explain the slower processing of self-associated stimuli in that dimension. We agree that not all findings will align with initial hypotheses, and this variability presents avenues for further research. We have added this to the discussion of social salience effects.

      e. Whether association strength can account for the findings: We appreciate the scepticism regarding the strength of self-association with shapes. However, our within-participant design and control matching task indicate that the relative processing advantage for self-associated stimuli holds across conditions. This makes the scenario that “the self-association with shapes was not strong enough to demonstrate attention bias” very unlikely. Firstly, the relative processing advantage of self-associated stimuli in the perceptual decision condition, and the absence of such advantage in the social decision condition, were evidenced in the same participants. Hence, the strength of association between shapes and social identities was the same for both conditions. However, we only find an advantage for the self-associated shape when participants make perceptual (shape) judgements. It is therefore highly unlikely that the “association strength” can account for the difference in the outcomes between the conditions in experiment 1. Also, note that the order in which these conditions were presented was counter-balanced across participants, reducing the possibility that the automatic self-advantage was merely a result of learning or fatigue. Secondly, all participants completed the standard matching task to ascertain that the association between shapes and identities did indeed lead to processing advantages (across different levels).

      In summary, we believe that the evidence we provide supports the final conclusions. We do, of course, welcome any further empirical evidence that could enhance our understanding of the contribution of different processing levels to the SPE and are committed to exploring these areas in future work.

      (2) The generalization of the findings needs further examination. The current results seem to rely heavily on the perceptual matching task. Whether this attentional selection mechanism of self-prioritization can be generalized to other stimuli, such as self-name, self-face, or other domains of self-association advantages, remains to be tested. In other words, more converging evidence is needed.

      The reviewer indicates that the current findings heavily rely on the perceptual matching task, and it would be more convincing to include other paradigm(s) and different types of stimuli. We are happy to address these points here: first, we specifically used a temporal order paradigm to tap into specific processes, rather than merely relying on the matching task. Attentional selection is, along with other processes, involved in matching, but the TOJ-TVA approach allows tapping into attentional selection specifically.  Second, self-prioritization effects have been replicated across a wide range of stimuli (e.g. faces: Wozniak et al., 2018; names or owned objects: Scheller & Sui, 2022a, or even fully unfamiliar stimuli: Wozniak & Knoblich, 2019) and paradigms (e.g. matching task: Sui et al., 2012; cross-modal cue integration: e.g. Scheller & Sui, 2022b; Scheller et al., 2023; continuous flash suppression: Macrae et al., 2017; temporal order judgment: Constable et al., 2019; Truong et al., 2017). Using neutral geometric shapes, rather than faces and names, addresses a key challenge in self research: mitigating the influence of stimulus familiarity on results. In addition, these newly learned, simple stimuli can be combined with other paradigms, such as the TOJ paradigm in the current study, to investigate the broader impact of self-processing on perception and cognition.

      To the best of our knowledge, this is the first study showing evidence about the mechanisms that are involved in early attentional selection of socially salient stimuli. Future replications and extensions would certainly be useful, as with any experimental paradigm.

      (3) The comparison between the "social" and "perceptual" tasks remains debatable, as it is challenging to equate the levels of social salience and perceptual salience. In addition, these two tasks differ not only in terms of social decoding processes but also in other aspects such as task difficulty. Whether the observed differences between the tasks can definitively suggest the specificity of social decoding, as the authors claim, needs further confirmation.

      Equating the levels of social and perceptual salience is indeed challenging, but not an aim of the present study. Instead, the present study directly compares the mechanisms and effects of social and perceptual salience, specifically in experiment 2. By manipulating perceptual salience (relative colour) and social salience (relative shape association) independently and jointly, and quantifying the effects on processing rates, our study allows us to directly delineate the contributions of each of these types of salience. The results suggest additive effects (see also Figure 7). Indeed, the possibility remains that these effects are additive because of the use of different perceptual features, so it would be helpful for future studies to explore whether similar perceptual features lead to (supra-/sub-) additive effects. In either case, the study design allows us to directly compare the effects and mechanisms of social and perceptual salience.

      Regarding the social and perceptual decision dimensions, they were not expected to be equated. Indeed, the social decision dimension requires additional retrieval of the associated identity, making it likely more challenging. This additional retrieval is also likely responsible for the slower responses towards the social association compared to the shape itself. However, the motivation to compare the effects of these two decisional dimensions lies in the assumption that the self needs to be task-relevant. Some evidence suggests that the self needs to be task-relevant to induce self-prioritization effects (e.g., Woźniak & Knoblich, 2022). However, these studies typically used matching tasks and were powered to detect large effects only (e.g. f = 0.4, n = 18). As the lacking contribution of decisional processing levels (which interact with task-relevance) is likely to reduce the SPE, smaller self-prioritization effects that result from earlier processing levels may not have been detectable with sufficient statistical power. Targeting specific processing levels, especially those with relatively early contributions or small effect sizes, requires larger samples (here: n = 70) to provide sufficient power. Indeed, by contrasting the relative attentional selection effects in the present study, we find that the self does not need to be task-relevant to produce self-prioritization effects. This is in line with recent findings of prior entry of self-faces (Jubile & Kumar, 2021).

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      The authors show that SVZ-derived astrocytes respond to a middle cerebral artery occlusion (MCAO) hypoxia lesion by secreting and modulating hyaluronan at the edge of the lesion (penumbra) and that hyaluronan is a chemoattractant to SVZ astrocytes. They use lineage tracing of SVZ cells to determine their origin. They also find that SVZ-derived astrocytes express Thbs-4 but astrocytes at the MCAO-induced scar do not. Also, they demonstrate that decreased HA in the SVZ is correlated with gliogenesis. While much of the paper is descriptive/correlative, they do overexpress hyaluronan synthase 2 via viral vectors and show this is sufficient to recruit astrocytes to the injury. Interestingly, astrocytes preferred to migrate to the MCAO lesion rather than to the region of overexpressed HAS2.

      Strengths:

      The field has largely ignored the gliogenic response of the SVZ, especially with regard to astrocytic function. These cells and especially newborn cells may provide support for regeneration. Emigrated cells from the SVZ have been shown to be neuroprotective via creating pro-survival environments, but their expression and deposition of beneficial extracellular matrix molecules are poorly understood. Therefore, this study is timely and important. The paper is very well written and the flow of results is logical.

      Weaknesses:

      The main problem is that they do not show that hyaluronan is necessary for SVZ astrogenesis and/or migration to MCAO lesions. Such loss-of-function studies have been carried out in the work they cite (e.g. Girard et al., 2014 and Benner et al., 2013). Similar approaches seem to be necessary in this work.

      We appreciate the comments by the reviewer. The article is, indeed, largely descriptive since we attempt to describe in detail what happens to newborn astrocytes after MCAO. Still, we have not attempted any modification to the model, such as amelioration of ischemic damage. This is a limitation of the study that we do not hide. However, we use several experimental approaches, such as lineage tracing and hyaluronan modification, to strengthen our conclusions.

      Regarding the weaknesses found by the reviewer, we do not claim that hyaluronan is necessary for SVZ astrogenesis. Indeed, we observe that when the MCAO stimulus (i.e. inflammation) is present, the HMW-HA (AAV-Has2) stimulus is less powerful (we discuss this in lines 330-332). We do claim, and we believe we successfully demonstrate, the reverse situation: that SVZ astrocytes modulate hyaluronan, not at the SVZ but at the site of MCAO, i.e. the scar. However, regarding whether hyaluronan is necessary for SVZ astrogenesis, we only show a correlation between its degradation and the time-course of astrogenesis. We suggest this result as a starting point for a follow-up study. We have included a phrase in the discussion (line 310), stating that further experiments are needed to fully establish a link between hyaluronan and astrogenesis in the SVZ.

      Major points:

      (1) How good of a marker for newborn astrocytes is Thbs4? Did you co-label with B cell markers like EGFr? Is the Thbs4 gene expressed in B cells? Do scRNAseq papers show it is expressed in B cells? Are they B1 or B2 cells?

      We chose Thbs4 as a marker of newborn astrocytes based on published research (Beckervordersanforth et al., 2010; Benner et al., 2013; Llorens-Bobadilla et al. 2015; Codega et al., 2014; Basak et al., 2018; Mizrak et al., 2019; Kjell et al., 2020; Cebrian-Silla et al., 2021). Of those studies, at least three associate Thbs4 with B-type cells based on scRNAseq data (Llorens-Bobadilla et al. 2015; Cebrian-Silla et al., 2021; Basak et al., 2018). We have included a sentence about this, and the associated references, in line 92.

      We did co-label Thbs4 with EGFR, but in the context of MCAO. We observed an increase in EGFR expression with MCAO, similar to the increase in Thbs4 after ischemia (see Author response image 1). We did not include this figure in the manuscript since we did not have available tissue from all the time points we used (7d, 60d post-ischemia).

      Author response image 1.

      Thbs4 cells, in basal and ischemic conditions, represent only a small fraction of IdU-positive cells (Fig 3F), suggesting that they are mostly quiescent cells, i.e., B1 cells. However, the scRNAseq literature is not consistent on this point.

      (2) It is curious that there was no increase in Type C cells after MCAO - do the authors propose a direct NSC-astrocyte differentiation?

      Type C cells are fast-proliferating cells, and our BrdU/IdU experiment (Fig. 3) suggests that Thbs4 cells are slow-proliferating cells. Some authors (Encinas lab, Spain) suggest that when the hippocampus is challenged by a harsh stimulus, such as kainate-induced epilepsy, the NSCs differentiate directly into reactive astrocytes and deplete the DG neurogenic niche (Encinas et al., 2011, Cell Stem Cell; Sierra et al., 2015, Cell Stem Cell). We believe this might be the case in our MCAO model and the SVZ niche, since we observe a decrease in DCX labeling in the olfactory bulb (Fig S5) and an increase in astrocytes in the SVZ, which migrate to the ischemic lesion. We did not want to overcomplicate an already complicated paper by dwelling on direct NSC-astrocyte differentiation or on the reactive status of these newborn astrocytes.

      (3) The paper would be strengthened with orthogonal views of z projections to show colocalization.

      We thank the reviewer for this observation. We have now included orthogonal projections in the critical colocalization IF of CD44 and hyaluronan (hyaluronan internalization) in Fig S6D, and a zoomed-in inset. Hyaluronan membrane synthesis is already depicted with orthogonal projection in Fig 6F.

      (4) It is not clear why the dorsal SVZ is analysed and focused on in Figure 4. This region emanates from the developmental pallium (cerebral cortex anlagen). It generates some excitatory neurons early postnatally and is thought to have differential signalling such as Wnt (Raineteau group).

      We decided to analyze the dorsal SVZ in depth after the BrdU experiment (Fig S3), where we observed an increase in BrdU+/Thbs4+ cells mostly in the dorsal area. Hence, the electrodes for electroporation were oriented in such a way as to label the dorsal area. We appreciate the paper by the Raineteau lab, but we assume that this region may potentially play other roles (apart from generating excitatory neurons early postnatally) depending on the developmental stage (our model is in adults) and/or pathological conditions (MCAO).

      (5) Several of the images show the lesion and penumbra as being quite close to the SVZ. Did any of the lesions contact the SVZ? If so, I would strongly recommend excluding them from the analysis as such contact is known to hyperactivate the SVZ.

      We thank the referee for the suggestion to exclude the more severely MCAO-lesioned animals from the analysis. Indeed, the MCAO ischemia, methodologically, can generate different tissue damage that cannot be easily controlled. Thus, based on TTC staining, we had already excluded the animals with more severe tissue damage that contacted the SVZ.

      (6) The authors switch to a rat in vitro analysis towards the end of the study. This needs to be better justified. How similar are the molecules involved between mouse and rat?

      We chose the rat culture since it is a culture that we have already established in our lab and that, in our own hands, is much more reproducible than the mouse brain cell culture that we occasionally use (for transgenic animals only) (Benito-Muñoz et al., Glia, 2016; Cavaliere et al., Front Cell Neurosci, 2013). It is true that there could be differences between rat and mouse Thbs4-cell physiology, despite a 96% identity between the rat and mouse Thbs4 protein sequences (BLASTp). In vitro, we only confirm the capacity of astrocytes to internalize hyaluronan, which was a finding that we did not expect from our in vivo experiments. Indeed, these observations, notwithstanding the obvious differences between in vivo and in vitro scenarios, suggest that HA internalization by astrocytes is a cross-species event, at least in rodents. Regarding HA, hyaluronan is similar in all species, since it is a glycan (this is why there are no antibodies against HA, and one has to rely on binding proteins such as HABP to label it).

      (7) Similar comment for overexpression of naked mole rat HA.

      We chose the naked mole rat hyaluronan synthase (HAS) because it produces HA of very high molecular weight, similar to the HA found accumulated in the glial scar at the lesion border. The naked mole rat HAS used in mice (Gorbunova lab) is a known tool in the ECM field (Zhang et al., 2023, Nature; Tian et al., 2013, Nature).

      Reviewer 1 (Recommendation to authors):

      (1) Line 22: most of the cells that migrate out of the SVZ are not stem cells but cells further along in the lineage - neuroblasts and glioblasts.

      We thank the reviewer for this clarification. We have modified the abstract accordingly. 

      (2) In Figure 3d the MCAO group staining with GFAP looks suspiciously like ependymal cells which have been shown to be dramatically activated by stroke models.

      The picture does show ependymal cells, which are located next to the ventricle and are indeed very proliferative in stroke. However, these cells do not express Thbs4 (Shah et al., 2018, Cell). In the quantifications from the SVZ of BrdU- and IdU-injected animals (Fig 3e and f), we only take into account Thbs4+ GFAP+ cells, not GFAP+-only cells.

      (3) The TTC injury shown in Figure 5c is too low mag.

      We apologize for the low mag. We have increased the magnification two-fold without compromising resolution. The problem might also have arisen from the compression of TIF into JPEG in the PDF export process. We will address this in the revised version by carefully selecting export settings. The images we used are all publication quality (300 ppi).

      (4) How specific to HA is HABP?

      Hyaluronic Acid Binding Protein is a canonical marker for hyaluronan that is used also in ELISA to quantify it specifically, since it does not bind other glycosaminoglycans. The label has been used for years in the field for immunochemistry, and some controls and validations have been published: Deepa et al., 2006, JBC performed appropriate controls of HABP-biotin labeling using hyaluronidase (destroys labeling) and chondroitinase (preserves labeling). Soria et al., 2020, Nat Commun checked that (i) streptavidin does not label unspecifically, and (ii) that HABP staining is reduced after hyaluronan depletion in vivo with HAS inhibitor 4MU.

      (5) A number of images are out of focus and thus difficult to interpret (e.g. SFig. 4e).

      This is true. We realized that the PDF conversion process for the preprint version has severely compressed the larger images, such as the one found in Fig. S4e. We have submitted a revised version in a better-quality PDF (the final paper will have the original TIFF files). We apologize for the technical problem.

      (6) "restructuration" is not a word.

      We apologize for the mistake and thank the reviewer for the correction. We replaced “restructuration” with “reorganization” in line 67.

      (7) While much of the manuscript is well-written and logical it could use an in-depth edit to remove awkward words and phrasings.

      A native English speaker has revised the manuscript to correct these awkward phrases. All changes are labeled in red in the revised version.

      (8) Please describe why and how you used skeleton analysis for HABP in the methods, this will be unfamiliar to most readers. The one-sentence description in the methods is insufficient.

      We have modified the text accordingly, explaining in depth the logic behind the skeleton analysis (line 204). We also added several lines of text describing the image analysis in detail (CD44/HABP spots, fractal dimension, masks for membrane-bound HABP, among others; lines 484-494).

      Reviewer #2 (Public Review)

      Summary:

      In their manuscript, Ardaya et al have addressed the impact of ischemia-induced gliogenesis from the adult SVZ and its effect on the remodeling of the extracellular matrix (ECM) in the glial scar. They use Thbs4, a marker previously identified to be expressed in astrocytes of the SVZ, to understand its role in ischemia-induced gliogenesis. First, the authors show that Thbs4 is expressed in the SVZ and that its expression levels increase upon ischemia. Next, they claim that ischemia induces the generation of newborn astrocytes from SVZ neural stem cells (NSCs), which migrate toward the ischemic regions to accumulate at the glial scar. Thbs4-expressing astrocytes are recruited to the lesion by hyaluronan, where they modulate ECM homeostasis.

      Strengths:

      The findings of these studies are in principle interesting and the experiments are in principle good.

      Weaknesses:

      The manuscript suffers from an evident lack of clarity and precision in regard to their findings and their interpretation.

      We thank the reviewer for the valuable feedback. We hope the changes proposed improve clarity and precision throughout the manuscript.

      (1) The authors talk about Thbs4 expression in NSCs and astrocytes, but neither is shown in Figure 1, nor have they used cell type-specific markers.

      As we reported also to Referee #1 (major point 1), Thbs4 is widely considered in the literature as a valid marker for newly formed astrocytes (Beckervordersanforth et al., 2010; Benner et al., 2013; Llorens-Bobadilla et al. 2015; Codega et al., 2014; Basak et al., 2018; Mizrak et al., 2019; Kjell et al., 2020; Cebrian-Silla et al., 2021). Some of the studies mentioned here and discussed in the manuscript text also associate Thbs4 with B-type cells based on scRNAseq data (Llorens-Bobadilla et al. 2015; Cebrian-Silla et al., 2021; Basak et al., 2018). Moreover, we also showed colocalization of Thbs4 with the activated stem cell marker nestin (Fig. 2), the glial marker GFAP (Fig. 3) and the dorsal NSC marker tdTom (from electroporation, Fig. 4).

      (2) Very important for all following experiments is to show that Thbs4 is not expressed outside of the SVZ, specifically in the areas where the lesion will take place. If Thbs4 was expressed there, the conclusion that Thbs4+ cells come from the SVZ to migrate to the lesion would be entirely wrong.

      In Figure 1a, we show that Thbs4 is expressed in the telencephalon exclusively in the neurogenic regions (SVZ, RMS and OB), together with the cerebellum and VTA, which are likely not directly topographically connected to the damaged area (cortex and striatum). Regarding the origin of Thbs4+ cells, we demonstrated their SVZ origin by lineage-tracing experiments after in vivo cell labeling (Fig. 4).

      (3) Next, the authors want to confirm the expression level of Thbs4 by electroporation of pThbs4-eGFP at P1 and write that this results in 20% of total cells expressing GFP, especially in the rostral SVZ. I do not understand the benefit of this sentence. This may be a confirmation of expression, but it also shows that the GFP+ cells derive from early postnatal NSCs.

      Furthermore, these cells look all like astrocytes, so the authors could have made a point here that indeed early postnatal NSCs expressing Thbs4 generate astrocytes alongside development. Here, it would have been interesting to see how many of the GFP+ cells are still NSCs.

      We thank the reviewer for this useful remark. We have rephrased this paragraph in the results section (Line 99).

      (4) In the next chapter, the authors show that Thbs4 increases in expression after brain injury. I do not understand the meaning of the graphs showing expression levels of distinct cell types of the neuronal lineage. Please specify why this is interesting and what to conclude from that.

      Also here, the expression of Thbs4 should be shown outside of the SVZ as well.

      In Fig 2, we show the temporal expression of two markers (besides Thbs4) in the SVZ. Nestin and DCX are the gold-standard markers for NSCs, with DCX present in neuroblasts. This is already explained in line 119. What we did not explain, and now state in line 124, is that nestin and DCX decrease immediately after ischemia (7d time-point). This probably means that the NSCs stop differentiating into neuroblasts in favor of glioblast formation. This is also supported by the experiments in the olfactory bulb depicted in Fig. S5C-H.

      (5) Next, the origin of newborn astrocytes from the SVZ upon ischemia is revealed. The graphs indicate that the authors perfused at different time points after tMCAO. Did they also show the data of the early time points? If only of the 30dpi, they should remove the additional time points indicated in the graph. In line 127 they talk about the origin of newborn astrocytes. Until now they have not even mentioned that new astrocytes are generated. Furthermore, the following sentences are imprecise: first they write that the number of slow proliferation NSCs is increased, then they talk about astrocytes. How exactly did they identify astrocytes and separate them from NSCs? Morphologically? Because both cell types express GFAP and Thbs4.

      The same problem also occurs throughout the next chapter.

      We thank the reviewer for this interesting comment. The experiment in Fig 3 combines BrdU and IdU. This is a tricky experiment: chronic BrdU is normally analyzed after 30d, because the experimenter must wait for the washout of BrdU (it labels slow-proliferating cells). Since we also wanted to label fast-proliferating cells with IdU, we used IP injections of this nucleotide at the different time points and perfused the day after. It would not make sense to show BrdU at earlier time points. We do so in Fig 3e only to colocalize with Thbs4 and read the tendency of the experiment. However, the quantification of BrdU (not of IdU) is done only at 30 DPI, which is explained in the methods (line 407).

      “In line 127, they talk about the origin of newborn astrocytes…” 

      Indeed, we wanted the paragraph title to convey that ischemia induced the generation of new astrocytes, which is more clearly described in the text. We changed the paragraph title to “Characterization of Ischemia-induced cell populations”.

      “How exactly did they identify astrocytes and separate them from NSC?” 

      With this experiment, using two different protocols to label proliferating cells (BrdU vs IdU), we wanted to track the precursor cells that give rise to astrocytes and already expressed the marker Thbs4. Indeed, the different increases and rates of proliferation relate only to the progenitor cells that will later differentiate into astrocytes. In this experiment we only referred to the astrocytes in the last sentence: “These results suggest that, after ischemia, Thbs4-positive astrocytes derive from the slow proliferative type B cells”.

      (6) "These results suggest that ischemia-induced astrogliogenesis in the SVZ occurs in type B cells from the dorsal region, and that these newborn Thbs4-positive astrocytes migrate to the ischemic areas." This sentence is a bit dangerous and bares at least one conceptual difficulty: if NSCs generate astrocytes under normal conditions and along the cause of postnatal development (which they do), then local astrocytes  (expressing the tdTom because they stem from a postnatal NSC ), may also react to MCAO and proliferate locally. So the astrocytes along the scar do not necessarily come from adult NSCs upon injury but from local astrocytes.  If the authors state that NSCs generate astrocytes that migrate to the lesion, I would like to see that no astrocytes inside the striatum carry the tdTom reporter before MCAO is committed.

      We understand the referee’s concern about the postnatal origin of astrocytes that can also be labeled with tdTom. Our hypothesis, tested at the beginning of the paper, is that SVZ-derived astrocytes derive from slow proliferative NSCs. Thus, it is reasonable that tdTom+ cells can reach the cortical region in such a short time frame. This is why we assumed that local astrocytes cannot be positive for tdTom. We characterized the expression of tdTom in sham animals and observed few tdTom+ cells in the cortex and striatum (Author response image 2 and Figure S4). The expression of tdTom mainly remains in the SVZ and the corpus callosum under physiological conditions. However, proliferation of local tdTom-labeled astrocytes (astrocytes labeled early postnatally) could explain the small percentage of tdTom+ cells in the ischemic regions that do not express Thbs4, even though this percentage could also represent other cell types such as OPCs or oligodendrocytes.

      Author response image 2.

      (7) If astrocytes outside the SVZ do not express Thbs4, I would like to see it. Otherwise, the discrimination of SVZ-derived GFAP+/Thbs4+ astrocytes and local astrocytes expressing only GFAP is shaky.

      Regarding Thbs4 outside the SVZ, we already answered this in point 2 (please refer to Fig 1A). We also quantified the expression of Thbs4+/GFAP+ astrocytes in the corpus callosum, cortex and striatum of sham and MCAO mice (Figure S5a-b) and we did not observe that local astrocytes express Thbs4 under physiological conditions.

      (8) Please briefly explain what a Skeleton analysis and a Fractal dimension analysis are, and what they are good for.

      We apologize for the brief information on the skeleton and fractal dimension analyses. We have included a detailed explanation of these analyses in the methods (lines 484-494).
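
      For orientation, the fractal dimension of cell morphology is commonly estimated by box counting; a minimal sketch of that estimator follows (whether the authors' FIJI routine uses exactly this variant is an assumption on our part):

      ```python
      import numpy as np

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          # `mask` is a 2D boolean image of the segmented cell.
          counts = []
          for s in sizes:
              # Trim so the image tiles evenly, then count boxes that
              # contain at least one foreground pixel.
              h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
              tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(tiles.any(axis=(1, 3)).sum())
          # Slope of log(count) vs log(1/size) estimates the dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                                np.log(counts), 1)
          return slope

      # Sanity check: a filled square has dimension ~2.
      print(box_counting_dimension(np.ones((64, 64), dtype=bool)))
      ```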

      (9) The chapter on HA is again a bit difficult to follow. Please rewrite to clarify who produces HA and who removes it by again showing all astrocyte subtypes (GFAP+/Thbs4+ and GFAP+/Thbs4-).

      We apologize for the lack of clarity. We rewrote some passages of those chapters (changes in red), trying to convey the ideas more clearly. We also changed a panel in Figure S6b-c to clarify all astrocyte subtypes that internalize hyaluronan (Thbs4+/GFAP+ and Thbs4-/GFAP+). See Author response image 3.

      Author response image 3.

      (10) Why did the authors separate dorsal, medial, and ventral SVZ so carefully? Do they comment on it? As far as I remember, astrogenesis in physiological conditions has some local preferences (dorsal?)

      We performed the electroporation protocol in the dorsal SVZ based on previous results (Figure 3 and Figure S3). NSCs produce specific neurons in the olfactory bulb according to their location in the SVZ. However, postnatal production of astrocytes mainly occurs through local astrocyte proliferation, and the SVZ contribution is very limited at this time point.

      Reviewer #3 (Public Review)

      Summary:

      The authors aimed to study the activation of gliogenesis and the role of newborn astrocytes in a post-ischemic scenario. Combining immunofluorescence, BrdU tracing, and genetic cellular labelling, they tracked the migration of newborn astrocytes (expressing Thbs4) and found that Thbs4-positive astrocytes modulate the extracellular matrix at the lesion border by both synthesis and degradation of hyaluronan. Their results point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. This work's major strength is that it tackles the function of SVZ newborn astrocytes, whose role has so far remained undisclosed.

      Strengths:

      The article is innovative, of good quality, and clearly written, with properly described Materials and Methods, data analysis, and presentation. In general, the methods are designed properly to answer the main question of the authors, being a major strength. Interpretation of the data is also in general well done, with results supporting the main conclusions of this article.

      Weaknesses:

      However, there are some points of this article that still need clarification to further improve this work.

      (1) As a first general comment, is it possible that the increase in Thbs4-positive astrocytes also happens locally, close to the glial scar, through the proliferation of local astrocytes or even from local astrocytes at the SVZ? As shown in published articles, most of the newborn astrocytes in the adult brain actually derive from proliferating astrocytes, and a smaller percentage is derived from NSCs. How can the authors rule out a contribution of local astrocytes to the increase of Thbs4-positive astrocytes? The authors also observed that only about one-third of the astrocytes in the glial scar derived from the SVZ.

      We thank the reviewer for the interesting comment. We have extended the discussion of this topic in the manuscript (lines 333-342), including the statement that about a third of glial scar astrocytes come from the SVZ, and without downplaying the role of local astrocytes. Whether the glial scar is populated by newborn astrocytes derived from the SVZ or from local astrocytes is under debate: some groups found a contribution from local astrocytes (Frisén group, Magnusson et al., 2014), while others observed the opposite (Li et al., 2010; Benner et al., 2013; Faiz et al., 2015; Laug et al., 2019 & Pous et al., 2020).

      In our study we observed that Thbs4 expression is almost absent in the cortex and striatum of sham mice. To demonstrate that newborn astrocytes derive from the SVZ we used two techniques: chronic BrdU treatment and cell tracing, which mainly labels SVZ neural stem cells. Fast-proliferating cells lose BrdU quickly, so local astrocytes under ischemic conditions do not express BrdU. In addition, we injected IdU the day before perfusion in order to see whether local astrocytes express Thbs4 when they respond to brain ischemia. However, we did not observe proliferating local astrocytes expressing Thbs4 after MCAO (see Author response image 4).

      Author response image 4.

      As mentioned in the response to reviewer 2, the cell-tracing technique could label early postnatal astrocytes. We characterized the technique, and only a small percentage of tdTom expression was found in the cortex and striatum of sham animals. This tdTom population could explain the percentage of tdTom+ cells in the ischemic regions that do not express Thbs4, even though this percentage could also represent other cell types such as OPCs or oligodendrocytes. Taken together, the evidence suggests that the Thbs4+ astrocyte population derives from the SVZ.

      We indeed observed a small contribution of Thbs4+ astrocytes to the glial scar. However, Thbs4+ astrocytes arrive at the lesion at a critical temporal window - when local hyper-reactive astrocytes die or lose their function. We hypothesized that Thbs4+ astrocytes could help local astrocytes or replace them in reorganizing the extracellular space and the glial scar, an instrumental process for the recovery of the ischemic area. 

      (2) It is known that the local, GFAP-reactive astrocytes at the scar can form the required ECM. The authors propose a role of Thbs4-positive astrocytes in the modulation, and perhaps maintenance, of the ECM at the scar, thus participating in scar formation likewise. So, this means that the function of newborn astrocytes is only to help the local astrocytes in the scar formation and thus contribute to tissue regeneration. Why do we specifically need the Thbs4-positive astrocytes migrating from the SVZ to help the local astrocytes? Can you discuss this further?

      Unfortunately, we could not demonstrate which molecular machinery is involved in these mechanisms, and we can only speculate about the functional meaning of a second wave of glial activation. We have added a lengthy discussion in lines 333-342.

      (3) The authors observed that the number of BrdU- and DCX-positive cells decreased 15 dpi in all OB layers (Fig. S5). They further suggest that ischemia-induced a change in the neuroblasts ectopic migratory pathway, depriving the OB layers of the SVZ newborn neurons. Are the authors suggesting that these BrdU/DCX-positive cells now migrate also to the ischemic scar, or do they die? In fact, they see an increase in caspase-3 positive cells in the SVZ after ischemia, but they do not analyse which type of cells are dying. Alternatively, is there a change in the fate of the cells, and astrogliogenesis is increased at the expense of neurogenesis?  The authors should understand which cells are Cleaved-caspase-3 positive at the SVZ and clarify if there is a change in cell fate. Also please clarify what happens to the BrdU/DCX-positive cells that are born at the SVZ but do not migrate properly to the OB layers.

      Actually, we cannot demonstrate the fate of the missing BrdU/DCX cells in the OB. We can reasonably speculate that, following the ischemic insult, the neurogenic machinery steers toward investing more energy in generating glial cells to support the lesion. We did not analyze the fate of the DCX+ cells that should originally have migrated to and differentiated in the OB, whether they die or whether there is a shift in the differentiation program in the SVZ, since we consider that question to be out of the study's scope.

      (4) The authors showed decreased Nestin protein levels at 15 dpi by western blot, and immunostaining shows a decrease already at 7 dpi (Figure 2). These results mean that there is at least a transient depletion of NSCs due to the promotion of astrogliogenesis. However, the authors show that at 30 dpi there is an increase of slow proliferating NSCs (Figure 3). Does this mean that there is a reestablishment of the SVZ cytogenic process? How does it happen; more specifically, how is the NSC number restored at 30 dpi? Please explain how the NSCs are modulated over time after ischemia induction and the impact of this on the cytogenic process.

      Based on the chronic BrdU treatment, the results suggested a restoration of the SVZ cytogenic process (also observed in nestin and DCX protein expression at 30 dpi). However, we did not analyze how it happens (from asymmetric or symmetric divisions). As suggested by the Encinas group, we hypothesized that brain ischemia induces the exhaustion of the neurogenic niche of the SVZ by symmetric divisions of NSCs into reactive astrocytes.

      (5) The authors performed a classification of Thbs4-positive cells in the SVZ according to their morphology. This should be confirmed with markers expressed by each of the cell subtypes.

      We thank the referee for the comment. Classifying NSCs based on different markers can also be tricky because different NSC cell types share markers. This classification was made considering the specific morphology of each NSC cell type. In addition, Thbs4 expression in B-type cells is also observed in other studies (Llorens-Bobadilla et al. 2015; Cebrian-Silla et al., 2021; Basak et al., 2018).

      (6) In Figure S6, the authors quantified HABP spots inside Thbs4-positive astrocytes. Please show a higher magnification picture to show how this quantification was done.

      We quantified HABP area and HABP spots inside Thbs4+ astrocytes with a custom FIJI script.

      The Thbs4 cell mask was generated by automatic thresholding within the GFAP cell mask. The HABP channel was then thresholded, and the binary image was processed with a 1-pixel median filter (to eliminate 1 px noise-related spots). The “Analyze Particles” tool was used to sort HABP spots in the cell ROI. The HABP spot number per compartment and population was exported to Excel, and the data were normalized by dividing HABP spots per ROI by total HABP spots. See Author response image 5.

      Author response image 5.
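      For readers who want to reproduce this kind of analysis, below is a minimal Python sketch of the described pipeline, with scikit-image standing in for FIJI's auto-threshold, median filter, and "Analyze Particles" steps. The function name, the choice of Otsu thresholding, and the centroid-based assignment of spots to the ROI are illustrative assumptions, not the authors' actual script.

```python
# Illustrative re-implementation of the described FIJI pipeline (assumed
# details marked below); expects single-plane grayscale numpy arrays with
# non-empty masks.
import numpy as np
from skimage.filters import threshold_otsu, median
from skimage.measure import label, regionprops
from skimage.morphology import square

def habp_spots_in_thbs4_astrocytes(gfap, thbs4, habp):
    # GFAP cell mask by automatic thresholding (Otsu assumed)
    gfap_mask = gfap > threshold_otsu(gfap)
    # Thbs4 cell mask, thresholded only within the GFAP cell mask
    thbs4_mask = gfap_mask & (thbs4 > threshold_otsu(thbs4[gfap_mask]))
    # Binary HABP image, cleaned with a 1-pixel (3x3) median filter
    habp_mask = habp > threshold_otsu(habp)
    habp_mask = median(habp_mask.astype(np.uint8), square(3)).astype(bool)
    # "Analyze Particles" equivalent: connected-component HABP spots
    spots = regionprops(label(habp_mask))
    total = len(spots)
    # Count spots whose centroid falls inside the Thbs4+ cell ROI
    inside = sum(
        1 for s in spots
        if thbs4_mask[tuple(np.round(s.centroid).astype(int))]
    )
    # Normalization as described: spots per ROI / total spots in the field
    return inside / total if total else 0.0
```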

    2. eLife Assessment

      This work shows that newborn Thbs4-positive astrocytes generated in the adult subventricular zone (SVZ) respond to middle cerebral artery occlusion (MCAO) by secreting hyaluronan at the lesion penumbra, and that hyaluronan is a chemoattractant for SVZ astrocytes. These findings are important, despite being mostly descriptive, as they point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. The methods, data and analyses are convincing and broadly support the claims made by the authors, with only some weaknesses.

    3. Reviewer #1 (Public review):

      Summary:

      The authors show that SVZ-derived astrocytes respond to a middle cerebral artery occlusion (MCAO) hypoxic lesion by secreting and modulating hyaluronan at the edge of the lesion (penumbra), and that hyaluronan is a chemoattractant for SVZ astrocytes. They use lineage tracing of SVZ cells to determine their origin. They also find that SVZ-derived astrocytes express Thbs4 but astrocytes at the MCAO-induced scar do not. Also, they demonstrate that decreased HA in the SVZ is correlated with gliogenesis. While much of the paper is descriptive/correlative, they do overexpress hyaluronan synthase 2 via viral vectors and show this is sufficient to recruit astrocytes to the injury. Interestingly, astrocytes preferred to migrate to the MCAO lesion rather than to the region of overexpressed HAS2.

      Strengths:

      The field has largely ignored the gliogenic response of the SVZ, especially with regard to astrocytic function. These cells, and especially newborn cells, may provide support for regeneration. Emigrated cells from the SVZ have been shown to be neuroprotective via creating pro-survival environments, but their expression and deposition of beneficial extracellular matrix molecules are poorly understood. Therefore, this study is timely and important. The paper is very well written and the flow of results logical.

      Comments on revised version:

      The authors have addressed my points and the paper is much improved. Here are the salient remaining issues that I suggest be addressed.

      The authors have still not shown, using loss-of-function studies, that hyaluronan is necessary for SVZ astrogenesis and/or migration to MCAO lesions.

      (1) The co-expression of EGFr with Thbs4 and the literature examination is useful.

      (2) Too bad they cannot explain the lack of effect of the MCAO on type C cells. The comparison with kainate-induced epilepsy in the hippocampus may or may not be relevant.

      (3) Thanks for including the orthogonal confocal views in Fig S6D.

      (4) The statement that they found "BrdU+/Thbs4+ cells mostly in the dorsal area" and therefore mostly focused on that region is strange. Figure 8 clearly shows Thbs4 staining all along the striatal SVZ. Do they mean the dorsal segment of the striatal SVZ or the subcallosal SVZ? Fig. 4b and Fig. 4f clearly show the "subcallosal" area as the one analysed, but other figures show the dorsal striatal region (Fig. 2a). This is important because of the well-known embryological and neurogenic differences between the regions.

      (5) It is good to know that the harsh MCAO's had already been excluded.

      (6) Sorry for the lack of clarity - in addition to Thbs4, I was referring to mouse versus rat hyaluronan degradation genes (Hyal1, Hyal2 and Hyal3) and hyaluronan synthase genes (HAS1 and HAS2), in order to address the overall species differences in hyaluronan biology, thus justifying the "shift" from mouse to rat. You examine these in the (weirdly positioned) Fig. 8h,i. Please add a few sentences on mouse vs rat Thbs4 and hyaluronan-relevant genes.

      (7) Thank you for the better justification of using the naked mole rat HA synthase.

    4. Reviewer #3 (Public review):

      Summary:

      The authors aimed to study the activation of gliogenesis and the role of newborn astrocytes in a post-ischemic scenario. Combining immunofluorescence, BrdU tracing and genetic cellular labelling, they tracked the migration of newborn astrocytes (expressing Thbs4) and found that Thbs4-positive astrocytes modulate the extracellular matrix at the lesion border by synthesis but also degradation of hyaluronan. Their results point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. This work's major strength is the fact that it tackles the function of SVZ newborn astrocytes, whose role has so far remained undisclosed.

      Strengths:

      The article is innovative, of good quality, and clearly written, with properly described Materials and Methods, data analysis and presentation. In general, the methods are designed properly to answer the authors' main question, which is a major strength. Interpretation of the data is also in general well done, with results supporting the main conclusions of this article.

      In this revised version, the points raised/weaknesses were clarified and discussed in the article.

    1. But at far lower cost, through a rational transport policy, it could remove millions of real cars from the roads, while improving our mobility, cutting air pollution and releasing land for green spaces and housing.

      From link:

      • Prioritise investment in public transport, walking and cycling instead of road building
      • Reinstate the annual inflation-linked rise and end the 5p cut in fuel duty, and use the £4.2 billion a year proceeds to make rail fares more affordable
      • Require all new developments to provide frequent public transport services and safe walking and cycling networks from the start
      • Commit to a target for modal shift to public transport and active travel
      • Facilitate further expansion of rail freight to reduce congestion on the road network
      • Require local authorities to meet specific carbon reduction budgets through the next round of Local Transport Plans

      Reinstate the annual inflation-linked rise and end the 5p cut in fuel duty, and use the £4.2 billion a year proceeds to make rail fares more affordable

      Cars are going green anyway, and could go more green with hydrogen! The fuel duty is regressive and would hurt poorer families the most.

      Facilitate further expansion of rail freight to reduce congestion on the road network

      They literally say at the start of the article how HS2 has been a disaster, they want to repeat that?

    2. cheaper and more effective projects had already been committed

      more effective? Insulating homes, sure, but there needs to be some carbon capture, and it needs investing where the UK can have the biggest impact.

    3. The government’s plan for carbon capture and storage (CCS) – catching carbon dioxide from major industry and pumping it into rocks under the North Sea – is a fossil fuel-driven boondoggle that will accelerate climate breakdown.

      So from what I can see, these are two blue hydrogen projects as opposed to green hydrogen? Green hydrogen just uses water so is obviously better, but takes longer and is seen as viable by 2040, while blue hydrogen is compatible with current infrastructure so works in the short term and can help speed up green hydrogen too. https://www.abdn.ac.uk/news/opinion/is-there-a-role-for-blue-hydrogen-in-a-green-energy-transition/.

      Also, the Guardian linking to itself as a source of fact? Great

    4. George Monbiot

      Famed Scientist

    1. other words

      sum

    2. he evidence indicated

      Results suggested (evidence indicate sounds very strong to me)

    3. )

      add comma

    4. igure 8 illustrates the median and interquartile range proportion of faces categorized as women (in this data set, with categorizations beyond the binary removed, any face not categorized as a woman was categorized as a man)

      this sentence first before you tell about actual content

    5. expense of man categorization

      can you dumb it down

    6. seems to

      del

    7. he difference is so stark, we do not feel that inferential statistics add any more information, but the curious reader may find these in the supplemental materi

      sounds odd. Can you at least have one analysis? And say rest is in supplementary?

    8. e Figure 6 ). Participants who only categorized faces as women or men are not represented in figure Figure 6.

      get rid of Figure 6 reps. If you start "As shown in Figure 6, ..." I would read this to mean that everything is from the figure until you tell me differently.

    9. illustrates how many categorizations (y-axis) beyond the binary participants made. Each bar represents how many participants (y-axis) made a certain number of categorizations (x-axis). The different colors denote the different categorizations

      swap previous and these sentences

    10. illustrates how many participants (x-axis) categorized how many faces (y-axis) according to the categories “other” and “don’t know” (different colors) across the two experimental conditio

      swap order of 1st and 2nd sentence. First, say what is shown. Second, what is the take home msg

    11. responses

      not italics

    12. variations

      so that

    13. “woman” a

      maybe get rid of "" altogether? You introduced these labels in Study 1

    14. odo: change order to be consistent

      yes, plz

    15. all participants were informed that participation was voluntary and gave written consent to participate in the study

      same as 2 sentences before

    16. N~free

      formatting?

    17. and

      . All

    18. 2 pa

      and 2

    19. control

      remove

    20. baseline

      control

    21. “other”

      reads odd with "". I would remove

    22. suggests that categorical perception was not reduced by two-dimensional response options

      bake together with previous sentence? "Results suggested that cat perception [your text] was not reduced ... (numbers here)"

    23. R = NA)

      ???

    24. Results

      Well written! You tell the reader what to look at and what to conclude.

    25. were not meaningfully

      Results suggested no differences between conditions

    26. test this,

      spell out. Right now, it sounds as if you want to test why there are twice as many lines :)

    27. Thus there are

      sentence could look better

    28. “woman”

      APA wants italics for labels (I think)

      also: once you italicized a label, you are not supposed to do it again

    29. The pattern of scores was non-linear

      unclear what "pattern of scores" refers to. I guess you mean something with morphing steps. plz clarify

      Unclear if this has to be true or whether it was something observed in the data

    30. We fit the data to Bayesian mixed-effects models to test the categorical effects. In a

      It would be nice if you first describe the different variables. Morph level (seven steps from 0 to 100%), response options (one-dimensional, two-dimensional). It makes it easier to follow.

    31. different trials, and the order of trials was completely rand

      not sure I get this. You mean that each face was rated twice? Once on the woman continuum and once on the man continuum, with the order of all trials random? If so, please clarify

    32. i

      I would use a separate sentence

    33. one-dimensional

      italicize these labels

      do you need to say "one-dimensional control". How about "one-dimensional" and explain in a sentence that this is control.

    34. tilted

      WC (word choice) sounds odd to me, but maybe this is how to write it?

    35. The morphs were made in 7 steps, from completely feminine to completely masculine

      can you share these images? Would be concrete. Also, would save time for others instead of reinventing the wheel.

    1. eLife Assessment

      This study describes a useful technique to improve imaging depth using confocal microscopy for imaging large, cleared samples. It is as yet unclear if their proposed technique presents a significant advance to the field since their comparisons to existing techniques remain incomplete. However, the work will be of broad interest to many researchers in different fields.

    2. Reviewer #1 (Public review):

      Summary:

      Liu et al., present an immersion objective adapter design called RIM-Deep, which can be utilized for enhancing axial resolution and reducing spherical aberrations during inverted confocal microscopy of thick cleared tissue.

      Strengths:

      RI mismatches present a significant challenge to deep tissue imaging, and developing a robust immersion method is valuable in preventing losses in resolution. Liu et al., present data showing that RIM-Deep is suitable for tissue cleared with two different clearing techniques, demonstrating the adaptability and versatility of the approach.

      Weaknesses:

      Liu et al., claim to have developed a useful technique for deep tissue imaging, but in its current form, the paper does not provide sufficient evidence that their technique performs better than existing ones.

    3. Reviewer #2 (Public review):

      Summary:

      Liu et al. investigated the performance of a novel imaging technique called RIM-Deep to enhance the imaging depth for cleared samples. Usually, the imaging depth using the classical confocal microscopy sample chamber is limited due to optical aberrations, resulting in loss of resolution and image quality. To overcome this limitation and increase depth, they generated a special imaging chamber that is affixed to the objective and filled with a solution matching the refractive indices to reduce aberrations. Importantly, the study was conducted using a standard confocal microscope that had not been modified apart from exchanging the standard sample chamber with the RIM-Deep sample holder. Upon analysing the imaging depth, the authors claim that the RIM-Deep method increased the depth from 2 mm to 5 mm. In summary, RIM-Deep has the potential to significantly enhance the imaging quality of thick samples on a low budget, making in-depth measurements possible for a wide range of researchers who have access to an inverted confocal microscope.

      Strengths:

      The authors used different clearing methods to demonstrate the suitability of RIM-Deep for various sample preparation protocols with clearing solutions of different refractive indices. They clearly demonstrate that the RIM-Deep chamber is compatible with all 3 methods. Brain samples are characterized by complex networks of cells and are often hard to visualize. Despite the dense, complex structure of brain tissue, the RIM-Deep method generated high-quality images of all 3 samples. As the authors already stated, increasing imaging depth often goes hand in hand with purchasing expensive new equipment, exchanging several microscopy parts or purchasing a new microscopy set-up. Innovations such as the RIM-Deep chamber, hence, might pave the way for cost-effective imaging and expand the applicability of an inverted confocal microscope.

      Weaknesses:

      (1) However, since this study introduces a novel imaging technique and therefore aims to revolutionize the imaging of large samples, additional control experiments would strengthen the data. Of the 3 clearing protocols used (CUBIC, MACS and iDISCO), only the brain section from Macaca fascicularis cleared with iDISCO was imaged with both the standard chamber and the RIM-Deep method. This comparison indeed shows that the imaging depth thereby increases more than 2-fold, which is a significant enhancement in terms of microscopy. However, it would have been important to evaluate and show the difference in imaging depth also for the other two samples, since they were cleared with different protocols and thus treated with clearing solutions of different refractive indices compared to iDISCO.

      (2) The description of the figures and figure panels should be improved for a better understanding of the experiments performed and the resulting images/data.

      (3) While the authors used a Nikon AX inverted laser scanning confocal microscope, the study would highly benefit from evaluating the performance of the RIM-Deep method using other inverted confocal microscopes or even wide-field microscopes.

    4. Author response:

      Reviewer #1 (Public review):

      Summary:

      Liu et al., present an immersion objective adapter design called RIM-Deep, which can be utilized for enhancing axial resolution and reducing spherical aberrations during inverted confocal microscopy of thick cleared tissue.

      Strengths:

      RI mismatches present a significant challenge to deep tissue imaging, and developing a robust immersion method is valuable in preventing losses in resolution. Liu et al., present data showing that RIM-Deep is suitable for tissue cleared with two different clearing techniques, demonstrating the adaptability and versatility of the approach.

      Greetings, we greatly appreciate your feedback. In fact, we utilized three distinct clearing techniques (iDISCO, CUBIC, and MACS) to substantiate the adaptability and versatility of the RIM-Deep adapter.

      Weaknesses:

      Liu et al., claim to have developed a useful technique for deep tissue imaging, but in its current form, the paper does not provide sufficient evidence that their technique performs better than existing ones.

      We are in complete agreement with your recommendation. In additional experiments we will conduct a thorough comparison of the efficacy of the RIM-Deep adapter and the official adapter in fluorescence bead experiments, along with their performance with the CUBIC and MACS tissue clearing techniques.

      Reviewer #2 (Public review):

      The authors used different clearing methods to demonstrate the suitability of RIM-Deep for various sample preparation protocols with clearing solutions of different refractive indices. They clearly demonstrate that the RIM-Deep chamber is compatible with all three methods. Brain samples are characterized by complex networks of cells and are often hard to visualize. Despite the dense, complex structure of brain tissue, the RIM-Deep method generated high-quality images of all three samples. As the authors stated, increasing imaging depth often goes hand in hand with purchasing expensive new equipment, exchanging several microscopy parts, or purchasing a new microscopy setup. Innovations like the RIM-Deep chamber might pave the way for cost-effective imaging and expand the applicability of inverted confocal microscopy.

      Weaknesses:

      (1) However, since this study introduces a novel imaging technique aiming to revolutionize imaging of large samples, additional control experiments would strengthen the data. From the three clearing protocols used (CUBIC, MACS, and iDISCO), only the brain section from Macaca fascicularis cleared with iDISCO was imaged with the standard chamber and the RIM-Deep method. This comparison indeed shows a more than 2-fold increase in imaging depth, a significant enhancement in microscopy. However, it would have been important to evaluate and show the imaging depth differences in the other two samples, as they were cleared with different protocols and treated with clearing solutions of different refractive indices compared to iDISCO.

      Thank you for your suggestion. We will investigate the imaging performance of brain tissue using the other two clearing protocols with both the official adapter and the RIM-Deep method.

      (2) The description of the figures and figure panels should be improved for a better understanding of the experiments performed and the resulting images/data.

      Thank you for your suggestion. We will revise the figure legends in detail.

      (3) While the authors used a Nikon AX inverted laser scanning confocal microscope, the study would benefit from evaluating the performance of the RIM-Deep method using other inverted confocal microscopes or even wide-field microscopes.

      Thank you for your suggestion. We also recognize that evaluating the performance of the RIM-Deep method on other inverted confocal microscopes will help further validate its applicability and robustness. We will supplement these experiments to expand the scope and reliability of RIM-Deep.

    1. Over the years, forums did not really get smaller, so much as the rest of the internet just got bigger. Reddit, Discord and Facebook groups have filled a lot of that space, but there is just certain information that requires the dedication of adults who have specifically signed up to be in one kind of community. This blog is a salute to those forums that are either worth participating in or at least looking at in bewilderment.

      It's just nice to see people be interested in stuff, and have a group of like-minded people that's also interested in the same stuff! What else is there to it all

    2. What follows is a list of forums that range from at least interesting to good. I will attempt to contextualize the ones I know well. This post is by no means supposed to be complete and will be updated whenever I find more good forums.

      Digital public service - thank you!!

    1. eLife Assessment

      This valuable study investigates how biologically plausible learning mechanisms can support assembly formation that encodes statistics of the environment, by enabling neural sampling that is based on within-assembly connectivity strength. It convincingly shows that assembly formation can emerge from predictive plasticity in excitatory synapses, while two types of plasticity in inhibitory synapses are required: inhibitory homeostatic (predictive) plasticity and inhibitory competitive (anti-predictive) plasticity.

    2. Reviewer #1 (Public review):

      The authors have successfully addressed most of the issues raised in the first review. Nevertheless, some of the mentioned problems require further attention, mostly regarding the formal derivation of the learning rules, as well as connections to previous research.

      Regarding the derivations of learning rules: The authors have provided goal functions for each of the plastic neural connections to give some insight into what these connections do. However, as I understand it, this does not address the main concern raised in the previous review: Why do these rules lead to overall network dynamics that sample from the input distribution? Virtually all other work on neural sampling that I am aware of (e.g., from the Maass lab, Lengyel lab, etc.) starts from a single goal function for all connections that somehow quantifies the difference of the network dynamics from the target distribution. In the presented work the authors specify different goal functions for the different weights, which does not make clear how the desired network dynamics are ultimately achieved.

      This becomes especially evident looking at the two different recurrent connections (M and G). M minimizes the difference between network activity f and the recurrent prediction, DKL[f|phi(My)], but why is this alone not enough to ensure good sampling? G minimizes the squared error [f-phi(Gy)]^2, but what does that mean? The problem is that the goal functions are self-consistent in the sense that both f and phi(Gy) depend on G, which makes an interpretation very difficult. Ultimately it's easier to interpret this by looking at the plasticity rule and seeing that it leads to a balance. For G the authors furthermore actually ignore the derived plasticity rule and switch to a rule similar to the one for M, meaning that the actual goal function for G is also something like DKL[f|phi(Gy)]. Overall, an overarching optimization goal for the entire network is missing, which makes the interpretation very difficult. I understand that this might be very difficult to provide at this stage, but the authors should at least point out this shortcoming as an open question for the proposed framework. For reference, the per-connection goal functions named in this paragraph are collected in the sketch below.
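      As a reading aid, here are the per-connection goal functions in one place, set in LaTeX. The review names the KL goal for M and the squared error for G explicitly; the feedforward term for W is an assumed analogue extrapolated from the authors' description of feedforward prediction, and y denotes presynaptic output as in the review's notation.

```latex
% Goal functions per connection, as named in the review; L_W is an
% assumed analogue for the feedforward weights, not stated explicitly.
\begin{align*}
  L_W &= D_{\mathrm{KL}}\bigl[f \,\|\, \phi(Wx)\bigr] \\
  L_M &= D_{\mathrm{KL}}\bigl[f \,\|\, \phi(My)\bigr] \\
  L_G &= \bigl\|f - \phi(Gy)\bigr\|^{2}
  \quad \text{(effectively } D_{\mathrm{KL}}\bigl[f \,\|\, \phi(Gy)\bigr] \text{ after the switch)}
\end{align*}
```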

      Regarding the relation to previous work, the authors have provided a lot more detailed discussion, which very much clears up the contributions and novel ideas in their work. Still, there are some claims that are not consistent with the literature. Especially, in lines 767 ff. the authors state that Kappel et al "assumed plasticity only at recurrent synapses projecting onto the excitatory neurons. In addition, unlike our model, the cell assembly memberships need to be preconfigured in the [...] model." This is not correct, as Kappel et al learn both the feed-forward and recurrent connections; hence the main difference is that in Kappel et al sampling is sequential and not random. This is why I mentioned this work in the first review, as it speaks against the authors' claims of novelty (719 ff.), which should be adjusted accordingly.

    3. Reviewer #2 (Public review):

      Summary:

      The paper reconsiders the formation of Hebbian-type assemblies, with their spontaneous reactivation representing the statistics of the sensory inputs, in the light of predictive synaptic plasticity. It convincingly shows that not all plasticity rules can be predictive in the narrow sense. While plasticity for the excitatory synapses (the forward-projecting and recurrent ones) is predictive, two types of plasticity in the recurrent inhibition are required: a homeostatic and a competitive one.

      Details:

      Besides the excitatory forward and recurrent connections that are learned based on predictive synaptic plasticity, two types of inhibitory plasticity are considered. The first type of inhibition is homeostatic and roughly balances excitation within the cell assemblies. Plasticity in this type 1 inhibition is also predictive, analogous to the plasticity of the excitatory synapses. However, plasticity in type 2 inhibition is competitive and has a switched sign. Both types of inhibitory plasticity, the predictive (homeostatic) and the anti-predictive (competitive) one, work together with the predictive excitatory plasticity to form cell assemblies representing sensory stimuli. Only if both types of homeostatic and competitive inhibitory plasticity are present will the spontaneous replay of the assemblies reflect the statistics of the stimulus presentation.

      Critical review:

      The simulations include Dale's law, making them more biologically realistic. The paper emphasizes predictive plasticity and introduces type 1 inhibitory plasticity that, by construction, tries to fully explain away the excitatory input. In the absence of external inputs, however, due to the symmetry between the excitatory and inhibitory-type-1 plasticity rules, excitation and inhibition tend to fully cancel each other. Multiple options may solve the dilemma:

      (1) As other predictive dendritic plasticity models assume, the presynaptic source for recurrent inhibition is typically less informative than the presynaptic source of excitation, so that inhibition is not able to fully explain away excitation.

      (2) Besides the inhibitory predictive plasticity that mirrors the analogous excitatory predictive plasticity, an additional competitive plasticity can be introduced.

      The paper chooses solution (2) and suggests an additional inhibitory recurrent pathway that is not predictive, but instead anti-predictive with a reversed sign. The combination of the two types of inhibitory plasticity leads to a stable formation of cell assemblies. The stable target activity of the plasticity rules in a memory recall is no longer 0, as it would be with only type-1 inhibitory plasticity. Instead, the target activity of plasticity is now enhanced within a winning assembly, and also positive but reduced in the losing assemblies.

    4. Reviewer #3 (Public review):

      Summary:

      The work shows how learned assembly structure and its influence on replay during spontaneous activity can reflect the statistics of stimulus input. In particular, stimuli that are more frequent during training elicit stronger wiring and more frequent activation during replay. Past works (Litwin-Kumar and Doiron, 2014; Zenke et al., 2015) have not addressed this specific question, as classic homeostatic mechanisms forced activity to be similar across all assemblies. Here, the authors use a dynamic gain and threshold mechanism to circumvent this issue and link this mechanism to a cellular monitoring of membrane potential history.

      Strengths:

      (1) This is an interesting advance, and the authors link this to experimental work in sensory learning in environments with non-uniform stimulus probabilities.

      (2) The authors consider their mechanism in a variety of models of increasing complexity (simple stimuli, complex stimuli; ignoring Dale's law, incorporating Dale's law).

      (3) Links a cellular mechanism of internal gain control (their variable h) to assembly formation and the non-uniformity of spontaneous replay activity. Offers a promise of relating cellular and synaptic plasticity mechanisms under a common goal of assembly formation.

      Weaknesses:

      (1) However, while the manuscript does show that assembly wiring does follow stimulus likelihood, it is not clear how the assembly-specific statistics of h reflect these likelihoods. I find this to be a key issue.

      (2) The authors' model does take advantage of the sigmoidal transfer function, and after learning an assembly is either fully active or nearly fully silent (Fig. 2a). This somewhat artificial saturation may be the reason that classic homeostasis is not required, since runaway activity is not as damaging to network activity.

      (3) Classic mechanisms of homeostatic regulation (synaptic scaling, inhibitory plasticity) try to ensure that firing rates match a target rate (on average). If the target rate is the same for all neurons then having elevated firing rates for one assembly compared to others during spontaneous activity would be difficult. If these homeostatic mechanisms were incorporated, how would they permit the elevated firing rates for assemblies that represent more likely stimuli?

    5. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews: 

      Reviewer #1 (Public Review): 

      In their manuscript, the authors propose a learning scheme to enable spiking neurons to learn the appearance probability of inputs to the network. To this end, the neurons rely on error-based plasticity rules for feedforward and recurrent connections. The authors show that this enables the networks to spontaneously sample assembly activations according to the occurrence probability of the input patterns they respond to. They also show that the learning scheme could explain biases in decision-making, as observed in monkey experiments. While the task of neural sampling has been solved before in other models, the novelty here is the proposal that the main drivers of sampling are within-assembly connections, and not between-assembly (Markov chains) connections as in previous models. This could provide a new understanding of how spontaneous activity in the cortex is shaped by synaptic plasticity. 

      The manuscript is well written and the results are presented in a clear and understandable way. The main results are convincing, concerning the dependence of assemblies' spontaneous firing rates on input probability, as well as the replication of biases in the decision-making experiment. Nevertheless, the manuscript and model leave open several important questions. The main problem is the unclarity, both in theory and intuitively, of how exactly the sampling works. This also makes it difficult to assess the claims of novelty the authors make, as it is not clear how their work relates to previous models of neural sampling.

      We agree with the reviewer that our previous manuscript was not clear regarding the mechanism of the model. We have performed additional simulations and included a derivation of the learning rule to address this, which we explain below.

      Regarding the unclarity of the sampling mechanism, the authors state that within-assembly excitatory connections are responsible for activating the neurons according to stimulus probability. However, the intuition for this process is not made clear anywhere in the manuscript. How do the recurrent connections lead to the observed effect of sampling? How exactly do assemblies form from feedforward plasticity? This intuitive unclarity is accompanied by a lack of formal justification for the plasticity rules. The authors refer to a previous publication from the same lab, but it is difficult to connect these previous results and derivations to the current manuscript. The manuscript should include a clear derivation of the learning rules, as well as an (ideally formal) intuition of how this leads to the sampling dynamics in the simulation.

      We have included a derivation of our plasticity rules in lines 871-919 in the revised manuscript. Consistent with our claim that predictive plasticity updates the feedforward and the recurrent synapses to predict output firing rates, we have shown that the corresponding cost function measures the discrepancy among the recurrent prediction, the feedforward prediction, and the output firing rate. The resultant feedforward plasticity is the same as our previous rule (Asabuki and Fukai, 2020), which segments the salient patterns embedded in the input sequence. The recurrent plasticity rule suggests that the recurrent prediction learns the statistical model of the evoked activity, enabling the network to replay the learned internal model.

      Similarly, for the inhibitory plasticity, we defined a cost function that evaluates the difference between the firing rate and inhibitory potential within each neuron. This rule is crucial for maintaining balanced network dynamics. See our response below for more details on the role of inhibitory plasticity.

      Some of the model details should furthermore be cleared up. First, recurrent connections transmit signals instantaneously, which is implausible. Is this required, would the network dynamics change significantly if, e.g., excitation arrives slightly delayed? Second, why is the homeostasis on h required for replay? The authors show that without it the probabilities of sampling are not matched, but it is not clear why, nor how homeostasis prevents this. Third, G and M have the same plasticity rule except for G being confined to positive values, but there is no formal justification given for this quite unusual rule. The authors should clearly justify (ideally formally) the introduction of these inhibitory weights G, which is also where the manuscript deviates from their previous 2020 work. My feeling is that inhibitory weights have to be constrained in the current model because they have a different goal (decorrelation, not prediction) and thus should operate with a completely different plasticity mechanism. The current manuscript doesn't address this, as there is no overall formal justification for the learning algorithm. 

      First, while the reviewer's suggestion to test with delayed excitation is intriguing and crucial for a more biologically detailed spiking neuron model, we have chosen to maintain the current model configuration. Our use of Poisson spiking neurons, which generate spikes based on instantaneous firing rates, does not heavily depend on precise spike timing information. Therefore, to preserve the simplicity of our results, we kept the model unchanged.

      Second, we agree that our previous claim regarding the importance of the memory trace h for sampling may have been confusing. As shown in Supplementary Figure 7b in the revised manuscript, when we eliminated the dynamics of the memory trace, sampling performance did indeed decrease. However, we also observed that the assembly activity ratio continued to show a linear relationship with stimulus probabilities. Based on these findings, we have revised our claim in the manuscript to clarify that the memory trace is primarily critical for firing rate homeostasis, rather than directly influencing sampling within the learned network. We have explained this in ll. 446-448 in the revised manuscript.

      Third, we explored a new architecture where all recurrent connections are either exclusively excitatory or inhibitory, keeping their sign throughout the learning process. This change addresses the reviewer's concern about our initial assumption that only the inhibitory connection G was constrained to non-negative values. We found that inhibition plays a crucial role in decorrelation and prediction, helping activate specific assemblies through competition while preventing runaway excitation within active assemblies. We have explained this in ll. 560-593 in the revised manuscript.

      Finally, the authors should make the relation to previous models of sampling and error-based plasticity more clear. Since there is no formal derivation of the sampling dynamics, it is difficult to assess how they differ exactly from previous (Markov-based) approaches, which should be made more precise. Especially, it would be important to have concrete (ideally experimentally testable) predictions on how these two ideas differ. As a side note, especially in the introduction (line 90), this unclarity about the sampling made it difficult to understand the contrast to Markovian transition models. 

      As the reviewer pointed out, previous computational models have demonstrated that recurrent networks with Hebbian-like plasticity can learn appropriate Markovian statistics (Kappel et al., 2014; Asabuki and Clopath, 2024). However, our model differs conceptually from these previous models. While Kappel et al. showed that STDP in winner-take-all circuits can approximate online learning of hidden Markov models (HMMs), a key difference from our model is that their neural representations acquire sequences using Markovian sampling dynamics, whereas our model does not depend on such ordered sampling. Specifically, in their model, sequential sampling arises from learned structures in the off-diagonal elements of the recurrent connections (i.e., between-assembly connections). In contrast, our network learns to stochastically generate recurrent cell assemblies by relying solely on within-assembly connections. A similar argument can be made for the Asabuki and Clopath paper as well. Further, while our model introduces plasticity rules for all types of connections, the Asabuki and Clopath model introduced plasticity only for recurrent synapses projecting onto the excitatory neurons, and the cell assembly memberships were preconfigured, unlike in our model. We have added additional clarifying sentences in ll. 757-772 of the revised manuscript to elaborate on this point.

      There are also several related models that have not been mentioned and should be discussed. In 663 ff. the authors discuss the contributions of their model which they claim are novel, but in Kappel et al (STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning) similar elements seem to exist as well, and the difference should be clarified. There is also a range of other models with lateral inhibition that make use of error-based plasticity (most recently reviewed in Mikulasch et al, Where is the error? Hierarchical predictive coding through dendritic error computation), and it should be discussed how the proposed model differs from these. 

      We have clarified the difference from previously proposed recurrent network model to perform Markovian sampling. Please see our reply above.

      We have also included an additional sentence in ll. 704-709 in the revised manuscript to discuss how our model differs from similar predictive learning models: “It should be noted that while several network models that perform error-based computations like ours exploit only inhibitory recurrent plasticity (Mikulasch et al., 2021; Mackwood et al., 2021; Hertäg and Clopath, 2022; Mikulasch et al., 2023), our model learns the structured spontaneous activity to reproduce the evoked statistics by modifying both excitatory and inhibitory recurrent connections.”

      Reviewer #2 (Public Review):

      Summary: 

      The paper considers a recurrent network with neurons driven by external input. During external stimulation, predictive synaptic plasticity adapts the forward and recurrent weights. It is shown that after the presentation of constant stimuli, the network spontaneously samples the states imposed by these stimuli. The probability of sampling stimulus x^(i) is proportional to the relative frequency of presenting stimulus x^(i) among all stimuli i=1,...,5.

      Methods: 

      Neuronal dynamics: 

      For the main simulation (Figure 3), the network had 500 neurons, and 5 non-overlapping stimuli, each activating 100 different neurons, were presented. The voltage u of the neurons is driven by the forward weights W via input rates x; the inhibitory recurrent weights G are restricted to non-negative values (Dale's law), and the other recurrent weights M have no sign restrictions. Neurons were spiking with an instantaneous Poisson firing rate, and each spike triggered an exponentially decaying postsynaptic voltage deflection. Neglecting the time constants of the postsynaptic responses, the expected postsynaptic voltage reads (in vectorial form) as

      u = W x + (M - G) f (Eq. 5) 

      where f ≈ phi(u) represents the instantaneous Poisson rate, and phi a sigmoidal nonlinearity. The rate f is only an approximation (symbolized by ≈) of phi(u) since an additional regularization variable h enters (taken up in Point 4 below). The initialisation of W and M is Gaussian with mean 0 and variance 1/sqrt(N), N the number of neurons in the network. The initial entries of G are all set to 1/sqrt(N).

      Predictive synaptic plasticity: 

      The 3 types of synapses were each adapted so that they individually predict the postsynaptic firing rate f, in matrix form 

      ΔW ≈ (f - phi( W x ) ) x^T 

      ΔM ≈ (f - phi( M f ) ) f^T 

      ΔG ≈ (f - phi( M f ) ) f^T but confined to non-negative values of G (Dale's law). 

      The ^T tells us to take the transpose, and the ≈ again refers to the fact that the ϕ entering in the learning rule is not exactly the ϕ determining the rate, only up to the regularization (see Point 4). 

      Main formal result: 

      As the authors explain, the forward weight W and the unconstrained weight M develop such that, in expectations, 

      f ≈ phi( W x ) ≈ phi( M f ) ≈ phi( G f ) ,

      consistent with the above plasticity rules. Some elements of M remain negative. In this final state, the network displays the behaviour as explained in the summary. 
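      To make this summary concrete, here is a minimal NumPy sketch of the dynamics (Eq. 5) and the three plasticity rules as written above. It is a rate-based illustration under stated simplifications, not the authors' implementation: spiking, the regularization variable h, and the dynamic gain are omitted, and the sizes, learning rate, and one-hot stimuli are illustrative. Where the summary writes ΔG with phi(M f), the sketch uses G's own prediction phi(G f), the rule that the review elsewhere suggests is effectively used.

```python
# Minimal rate-based sketch: u = Wx + (M - G)f, f = phi(u), with each
# weight matrix nudged to predict the output rate f (all values assumed).
import numpy as np

rng = np.random.default_rng(0)
N, D, eta = 500, 50, 1e-3                      # neurons, input dim, learning rate
phi = lambda u: 1.0 / (1.0 + np.exp(-np.clip(u, -30, 30)))  # sigmoid

W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, D))  # feedforward, zero-mean init
M = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent, sign-unconstrained
G = np.full((N, N), 1.0 / np.sqrt(N))          # inhibitory, non-negative

f = np.zeros(N)
for t in range(5000):
    x = np.zeros(D)
    x[rng.integers(D)] = 1.0                   # toy one-hot stimulus
    u = W @ x + (M - G) @ f                    # Eq. 5 (expected voltage)
    f = phi(u)
    W += eta * np.outer(f - phi(W @ x), x)     # dW ~ (f - phi(Wx)) x^T
    M += eta * np.outer(f - phi(M @ f), f)     # dM ~ (f - phi(Mf)) f^T
    G += eta * np.outer(f - phi(G @ f), f)     # same form for G ...
    np.maximum(G, 0.0, out=G)                  # ... clipped (Dale's law)
```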

      Major issues: 

      Point 1: Conceptual inconsistency 

      The main results seem to arise from unilaterally applying Dale's law only to the inhibitory recurrent synapses G, but not to the excitatory recurrent synapses M. 

      In fact, if the same non-negativity restriction were also imposed on M (as it is on G), then their learning rules would become identical, likely leading to M=G. But in this case, the network becomes purely forward, u = W x, and no spontaneous recall would arise. Of course, this should be checked in simulations. 

      Because Dale's law was only applied to G, however, M and G cannot become equal, and the remaining differences seem to cause the effect. 

      Predictive learning rules are certainly powerful, and it is reasonable to consider the same type of error-correcting predictive learning rule, for instance, for different dendritic branches that both should predict the somatic activity. Or one may postulate the same type of error-correcting predictive plasticity for inhibitory and excitatory synapses, but then the presynaptic neurons should not be identical, as is assumed here. Both these types of error-correcting and error-forming learning rules for same-branch and inhibitory/excitatory inputs have been considered already (but with inhibitory input being itself restricted to local input, for instance).

      The model presented above lacked biological plausibility in several key aspects. Specifically, we assumed that the recurrent connection M could change sign through plasticity and be either excitatory or inhibitory, while the inhibitory connection G was restricted to being inhibitory only. This initial setting does not reflect the biological constraint that synapses typically maintain a consistent excitatory or inhibitory type. Furthermore, due to this unconstrained recurrent connectivity M, the original model had two types of inhibitory connections (i.e., the negative part of M and the inhibitory connection G) without providing a clear computational role for each type of inhibition.

      To address these limitations and to understand the role of the two types of inhibition, we explored a new architecture where all recurrent connections are either exclusively excitatory or inhibitory, keeping their sign throughout the learning process. This change addresses the reviewer's concern about our initial assumption that only the inhibitory connection G was constrained to non-negative values. We found that inhibition plays a crucial role in prediction and decorrelation, helping activate specific assemblies through competition while preventing runaway excitation within active assemblies. We have explained this in ll. 561-593 in the revised manuscript.

      Point 2: Main result as an artefact of an inconsistently applied Dale's law? 

      The main result shows that the probability of a spontaneous recall for the 5 non-overlapping stimuli is proportional to the relative time the stimulus was presented. This is roughly explained as follows: each stimulus pushes the activity from 0 up towards f ≈ phi( W x ) by the learning rule (roughly). Because the mean weights W are initialized to 0, a stimulus that is presented longer will have more time to push W up so that positive firing rates are reached (assuming x is non-negative). The recurrent weights M learn to reproduce these firing rates too, while the plasticity in G tries to prevent that (by its negative sign, but with the restriction to non-negative values). Stimuli that are presented more often, on average, will have more time to reach the positive target and hence will form a stronger and wider attractor. In spontaneous recall, the size of the attractor reflects the time of the stimulus presentation. This mechanism so far is fine; the only problem is that it is based on restricting G, but not M, to non-negative values.

      As mentioned above, we have included an additional simulation where all weights are non-negative. We have demonstrated the new results in Figure 6 before presenting the two-population model in the revised manuscript (Figure 7), so that readers can follow the importance of two pathways of inhibitory connections.

      Point 3: Comparison of rates between stimulation and recall. 

      The firing rates with external stimulations will be considerably larger than during replay (unless the rates are saturated). 

      This is a prediction that should be tested in simulations. In fact, since the voltage roughly reads as  u = W x + (M - G) f,  and the learning rules are such that eventually M ≈ G, the recurrences roughly cancel and the voltage is mainly driven by the external input x. In the state of spontaneous activity without external drive, one has  u = (M - G) f ,  and this should generate considerably smaller instantaneous rates f ≈ phi(u) than in the case of the feedforward drive (unless f is in both cases at the upper or lower ceiling of phi). This is a prediction that can also be tested.

      Because the figures mostly show activity ratios or normalized activities, it was not possible for me to check this hypothesis with the current figures. So please show non-normalized activities for comparing stimulation and recall for the same patterns. 

      We agree with the reviewer that the activity levels of spontaneous and induced activity should be compared. We have shown the distributions of activity levels for both in our new Figure 2d. As expected, we found that the evoked activity was stronger than the spontaneous activity.

      Point 4: Unclear definition of the variable h. 

      The formal definition of h = hi is given by (suppressing here the neuron index i and the h-index of tau) 

      tau dh/dt = -h if h>u, (Eq. 10)  h = u otherwise. 

      But if it is only Equation 10 (nothing else is said), h will always become equal to u, or will vanish, i.e. either h=u or h=0 after some initial transient. In fact, as soon as h>u, h is decaying to 0 according to the first line. If u is >0, then it stops at u=h according to the second line. No reason to change h=u further. If u<=0 while h>u, then h is converging to 0 according to the first line and will stay there. I guess the authors had issues with the recurrent spiking simulations and tried to fix this with some regularization. However, as presented, it does not become clear how their regularization works.
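      To make this reading of Eq. 10 concrete, here is a minimal discrete-time sketch for a single neuron; the Euler step, time constant, and toy input are illustrative assumptions. With a constant positive u the trace pins to u, and with u <= 0 it decays toward 0 and stays there, exactly as argued above; a fluctuating u instead yields a decaying envelope of the recent peaks of u.

```python
# Literal discrete-time reading of Eq. 10 for one neuron: while h > u the
# trace decays toward 0; otherwise it is pinned to u.
def update_h(h, u, dt=1e-3, tau_h=10.0):
    if h > u:
        h += dt * (-h / tau_h)   # first line: tau dh/dt = -h
    return max(h, u)             # second line: h = u whenever h <= u

# Toy trace: a brief depolarization followed by a negative potential;
# h jumps to u during the pulse, then decays toward 0 and never reaches u.
h, trace = 0.0, []
for t in range(5000):
    u = 1.0 if 1000 <= t < 1500 else -0.2
    h = update_h(h, u)
    trace.append(h)
```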

      We apologize that our definition of h was unclear. As the reviewer pointed out, since the memory trace is always positive and larger than (or equal to) the membrane potential, it is in principle possible that the membrane potential stays negative and the memory trace settles at 0. However, since the network is always balanced between excitatory and inhibitory inputs, the membrane potential does not diverge negatively. In fact, we trained the network without any manipulations other than the memory trace described in the manuscript, and it learned the assembly structure stably.

      BTW: In Eq. 11 the authors set the gain beta to beta = beta0/h which could become infinite and, putatively more problematic, negative, depending on the value of h. Maybe some remark would convince a reader that no issues emerge from this. 

      We have mentioned in ll. 864-866 in the revised manuscript that no issues emerge from the slope parameter.

      Added from discussions with the editor and the other reviewers: 

      Thanks for alerting me to this Supplementary Figure 8. Yes, it looks like the authors did apply Dale's law there for both the excitatory and inhibitory synapses. Yet, they also introduced two types of inhibitory pathways converging both onto the excitatory and the inhibitory neurons. For me, this is a confirmation that applying Dale's law to both excitatory and inhibitory synapses, with identical learning rules as explained in the main part of the paper, does not work.

      Adding two such pathways is a strong change from the original model as introduced before, on which all the figures in the main text are based. Supplementary Figure 8 should come with an analysis of why a single inhibitory pathway does not work. I guess I gave the reason in my Points 1-3. Some form of symmetry breaking between the recurrent excitation and recurrent inhibition is required so that, eventually, the recurrent excitatory connection will dominate.

      Making the inhibitory plasticity less expressive by applying Dale's law to only those inhibitory synapses seems to be the answer chosen in the figures of the main text (but this invites the criticism of unilaterally applying Dale's law).

      Applying Dale's law to both types of synapses, but dividing the labor of inhibition into two strictly separate and asymmetric pathways, and hence an asymmetric development of excitatory and inhibitory weights, seems to be another option. However, introducing such two separate inhibitory pathways, just to rescue the fact that Dale's law is applied to both types of synapses, is a bold assumption. Is there some biological evidence for such two pathways in the inhibitory, but not the excitatory, connections? And what is the computational reasoning for such a separation, apart from some form of symmetry breaking between excitation and inhibition? I guess simpler solutions could be found, for instance by breaking the symmetry between the plasticity rules for the excitatory and inhibitory neurons. All these questions, in my view, need to be addressed to give some insight into why the simulations do work.

      The reviewer’s intuition is correct. To effectively learn cell assembly structures and replay their activities, our model indeed requires two types of inhibitory connections. Please refer to our response above for further details. 

      Overall, Supplementary Figure 8 seems to me too important to be deferred to the Supplement. The reasoning behind the two inhibitory pathways should appear more prominently in the main text. Without this, important questions remain. For instance, when thinking in a rate-based framework, the two inhibitory pathways twice try to explain the somatic firing rate away. Doesn't this lead to overly strong inhibition? Can some steady state with a positive firing rate caused by the recurrence, in the absence of an external drive, be proven? The argument must include the separation into Path 1 and Path 2. So far, this reasoning has not been provided.

      In fact, it might be that, in a spiking implementation, some sparse spikes will survive. I wonder whether at least some of these spikes survive because of the other rescuing construction with the dynamic variable h (Equation 10, which is not transparent, and which is not taken up in the reasoning either; see my Point 4).

      Perhaps it is helpful for the authors to add this text in the reply to them. 

      We have moved the former Supplemental Figure 8 to the main Figure 7. Please see our response above about the role of dual inhibitory connection types.

      Reviewer #3 (Public Review): 

      Summary: 

      The work shows how learned assembly structure and its influence on replay during spontaneous activity can reflect the statistics of stimulus input. In particular, stimuli that are more frequent during training elicit stronger wiring and more frequent activation during replay. Past works (Litwin-Kumar and Doiron, 2014; Zenke et al., 2015) have not addressed this specific question, as classic homeostatic mechanisms forced activity to be similar across all assemblies. Here, the authors use a dynamic gain and threshold mechanism to circumvent this issue and link this mechanism to cellular monitoring of membrane potential history.

      Strengths: 

      (1) This is an interesting advance, and the authors link this to experimental work in sensory learning in environments with non-uniform stimulus probabilities. 

      (2) The authors consider their mechanism in a variety of models of increasing complexity (simple stimuli, complex stimuli; ignoring Dale's law, incorporating Dale's law). 

      (3) Links a cellular mechanism of internal gain control (their variable h) to assembly formation and the non-uniformity of spontaneous replay activity. Offers a promise of relating cellular and synaptic plasticity mechanisms under a common goal of assembly formation. 

      Weaknesses: 

      (1) However, while the manuscript does show that assembly wiring does follow stimulus likelihood, it is not clear how the assembly-specific statistics of h reflect these likelihoods. I find this to be a key issue. 

      We agree that our previous claim regarding the importance of the memory trace h for sampling may have been confusing. As shown in Supplementary Figure 7b, when we eliminated the dynamics of the memory trace, sampling performance did indeed decrease. However, we also observed that the assembly activity ratio continued to show a linear relationship with stimulus probabilities. Based on these findings, we revised our claim in the manuscript to clarify that the memory trace is primarily critical for learning to avoid trivial solutions, rather than directly influencing sampling within the learned network. We have explained this in ll. 446-448 in the revised manuscript.

      (2) The authors' model does take advantage of the sigmoidal transfer function, and after learning an assembly is either fully active or nearly fully silent (Figure 2a). This somewhat artificial saturation may be the reason that classic homeostasis is not required since runaway activity is not as damaging to network activity. 

      The reviewer's intuition is correct. The saturating nonlinearity is important for the network to form stable assembly structures. We have added an additional sentence in ll. 866-868 to mention this.
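
      The stabilizing role of saturation is easy to demonstrate in isolation. In the toy comparison below (not the paper's network; parameters are arbitrary), the same recurrent loop diverges under a non-saturating threshold-linear transfer function but settles at a bounded rate under a sigmoid:

```python
import numpy as np

def f_sat(u):                 # saturating sigmoid
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def f_lin(u):                 # non-saturating threshold-linear, for contrast
    return np.maximum(u, 0.0)

for f, name in [(f_sat, "sigmoid"), (f_lin, "threshold-linear")]:
    r, w, dt_over_tau = 0.6, 2.0, 0.01
    for _ in range(5000):
        r += dt_over_tau * (-r + f(w * r))   # recurrent loop, no external drive
    print(f"{name}: r -> {r:.3g}")           # bounded vs. runaway
```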

      (3) Classic mechanisms of homeostatic regulation (synaptic scaling, inhibitory plasticity) try to ensure that firing rates match a target rate (on average). If the target rate is the same for all neurons then having elevated firing rates for one assembly compared to others during spontaneous activity would be difficult. If these homeostatic mechanisms were incorporated, how would they permit the elevated firing rates for assemblies that represent more likely stimuli? 

      Spiking implementations (e.g., networks of LIF neurons) may solve this problem by utilizing spike-timing statistics.
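
      The reviewer's concern can be illustrated with a minimal, hypothetical example: under multiplicative synaptic scaling toward a single shared target rate, assemblies driven by stimuli of different probabilities end up firing at the same rate, so the probability information is erased:

```python
import numpy as np

drives   = np.array([1.0, 3.0])   # assembly 2 encodes a 3x more likely stimulus
gains    = np.array([1.0, 1.0])   # multiplicative synaptic scaling factors
r_target, eta = 1.0, 0.01

for _ in range(10000):
    rates = gains * drives                     # toy assembly "firing rates"
    gains += eta * (r_target - rates) * gains  # scale toward the common target

print(gains * drives)   # both rates ~= r_target: the 3x bias is scaled away
```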

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors): 

      Minor issues: 

      Figure 1: It would be helpful to display the equation for output rate here as well. 

      We have included the equation in the revised Figure 1a.

      Figure 3c: Typo "indivisual neurons". 

      We have corrected the typo. We thank the reviewer for their careful review.

      Line 325: Do you mean Figure 3f,g? 

      We repeated the task with different numbers of stimuli in Supplementary Figure 1c,d.

      Line 398: Winner-take-all can be misunderstood, as it typically stands for competition in inference, not in learning. 

      We have rephrased it as “unstable dynamics” in l. 400.

      Line 429: Are intra-assembly and within-assembly the same? If so please use consistent terminology. 

      We have made the terminology consistent.

      Line 792 ff.: Please mention that (t) was omitted. 

      We have included a sentence to mention it in ll. 847-848 in the revised manuscript.

      Line 817: Should u_i be v_i? 

      We have modified the term.

      Methods: What is the value of tau_h? 

      We have used τ_h = 10 s, which is mentioned in l. 853.

    1. For me, it was always a way to build community at scale.

      yup

    2. The web sits apart from the rest of technology; to me, it’s inherently more interesting. Silicon Valley’s origins (including the venture capital ecosystem) lie in defense technology. In contrast, the web was created in service of academic learning and mutual discovery, and both built and shared in a spirit of free and open access. Tim Berners-Lee, Robert Cailliau, and CERN did a wonderful thing by building a prototype and setting it free.

      Ben Werdmüller makes an interesting distinction. Internet tech, and thus Silicon Valley, originated in defense (ARPA etc.), whereas the web originated in academia in a spirit of open academic debate (CERN). Of course, ARPA etc. had deep ties with academia too, and much of that academic work ran on defense funding. Still, there may be something to this distinction. You could also frame it as an Atlantic divide: the web originated at CERN, in Europe.

    1. A dynamic concept graph consisting of nodes, each representing an idea, and edges showing the hierarchical structure among them.
       - LLMs generate the hierarchical structure automatically, but the structure is editable through our gestures as we see fit
       - attraction and repulsion forces between nodes reflect the proximity of the ideas they contain
       - nodes can be merged, split, or grouped to generate new ideas

       A data landscape where we can navigate on various scales (micro- and macro views).
       - each data entry turns into a landform or structure, with its physical properties (size, color, elevation, etc.) mirroring its attributes
       - apply sort, group, or filter on data entries to reshape the landscape and look for patterns

      Network graphs, maps - it's why canvas is the UI du jour, to go beyond linearity, lists and trees
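
      A minimal sketch of the "attract and repulse" mechanic the quote describes, assuming hypothetical node embeddings; the real system presumably derives proximity from LLM output, and the similarity measure, constants, and layout loop below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each idea/node gets an embedding; proximity = similarity.
n = 6
emb = rng.normal(size=(n, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = emb @ emb.T                      # cosine similarity between ideas
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

pos = rng.normal(size=(n, 2))          # 2-D layout positions

for _ in range(500):
    force = np.zeros_like(pos)
    for i, j in edges:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        rest = 2.0 - sim[i, j]         # spring rest length shrinks as ideas get closer
        f = 0.05 * (dist - rest) * d / dist
        force[i] += f
        force[j] -= f
    for i in range(n):                 # uniform repulsion keeps nodes from collapsing
        for j in range(n):
            if i != j:
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d) + 1e-9
                force[i] += 0.02 * d / dist**2
    pos += force

print(np.round(pos, 2))   # similar ideas end up closer in the layout
```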

    1. Gao, Murray, Kotabe & Lu (2010). A “Strategy Tripod” Perspective on Export Behaviors: Evidence from domestic and foreign firms based in an emerging economy.

      Not needed, not learning