1. Oct 2024
    1. “With the activities we refer to the different actions that students carry out in complete relation to the contents and information they have been given. If these activities are presented, carried out or transferred through the network, then we can consider them e-activities” (p. 11).

      These strategies can be taught and should be adapted to students' needs and styles, and they are fundamental for autonomous learning. In turn, teaching strategies refer to the techniques and methods that teachers use to convey concepts and skills, varying according to the content and the educational objectives.

    2. E-activities are centred on students, so that they can build, work on and share knowledge. They rest on the idea that knowledge is constructed by students in a collaborative, active and participatory way. They are a path for building knowledge. Thinking this way greatly facilitates the teacher's pedagogical work.

      Notwithstanding this idea about the importance of an e-activity for knowledge construction, from my point of view another aspect worth highlighting is assessment. I consider that assessment plays a central role in an e-activity, since it is through assessment that students' progress is measured, the teaching-learning process is adjusted, and reflection on the acquired knowledge is promoted. In digital environments, as in others, formative assessment is especially relevant given the autonomy and flexibility that characterize online education. Assessment is therefore an essential "tool" for ensuring the effectiveness of e-activities and should be designed on the basis of sound pedagogical principles. Academic regards

    3. five stages of the model

      The evident transversality of this working instrument across the different stages of Gilly Salmon's model clearly conveys the importance that e-activities have in the design and structuring of teaching-learning processes in virtual environments. António Costa

    4. guidance for the construction of an e-activity

      There are important premises that help structure one's thinking in the design phase of e-activities. I would like to underline the importance of logically aligning elements such as the proper fit of the contents and the resulting learning objectives. Clarity of instructions/guidance must be a fundamental condition, so that there is a logical progression across the different stages/phases of the e-activity, thereby ensuring effective learning by the students. It is also important that the teacher share the results and take stock of a given e-activity, so that students become aware of the extent to which the proposed objectives were achieved, thereby identifying possible opportunities for improvement. António Costa

    5. In a digital distance-learning context, planning e-activities gives learners a clearer sense of how well they are assimilating concepts and contents, and it serves as a method for assessing learning. E-activities are undoubtedly a dynamic and interactive way of promoting active, autonomous learning in which critical thinking is privileged. According to Almenara, Osuna & Cejudo (2014), and moving into a virtual environment, e-activities are the element that facilitates the interrelation between Teaching and Learning. From the trainer's perspective, the challenges in designing an e-activity are many: being objective and clear, adapting the contents, knowing the audiences, defining the time, presenting resources, selecting the most suitable format, diversifying, and assessing.

      In conclusion, e-activities enable active, participatory, collaborative online learning, whether carried out individually or in groups, whose main goal is centred on learning.

    6. As we saw earlier, there is a panoply of e-activity typologies. The question that arises is how to select the e-activity best suited to our purpose.

      This task that falls to us is not easy, because we have to take several factors into consideration, such as the aim and purpose of the course, the age group, practical applicability, etc.

    7. Table 3.2 | E-activity design model according to Almenara, Osuna & Cejudo (2014)

      I find this e-activity design model according to Almenara, Osuna & Cejudo interesting, but I also liked Maina's proposal.

    8. Provide feedback: after the activity is completed, constructive feedback should be given to the students, highlighting what they did well and where they can improve.

      Feedback is very important: it helps students better understand their mistakes and successes, giving them clear direction on how to improve their skills and knowledge.

    9. note that these e-activities can be designed to address each of the five stages of the model and, in doing so, help students build virtual learning communities and achieve their online learning objectives

      Drawing on my experience as an IT/programming trainer for teenagers, I strongly encouraged group work, first because it is an essential condition for living in society, and because the students showed a lot of interest and put in more effort than they did with individual work. I felt there was mutual support among them, and they strove to make the final result and the presentation of the work as good as possible.

    10. This is because these activities can be used to diversify the forms of learning and engage students in more dynamic and interactive processes.

      I did not know this tool, the hypothes.is software, whose aim is to collect comments on statements made in any content accessible on the web. This passage calls for diversification in order to capture the attention and interest of students/trainees. I taught programming to teenagers and had some success using tools such as Kahoot to create quizzes; Scratch, a visual programming platform that lets students create interactive projects using blocks of code in a playful way; and **CoSpaces**, an application for creating virtual reality content.

    11. it is important to ensure that e-activities are inclusive and accessible to all students, regardless of their abilities and the technological resources available to them.

      Inclusion represents an act of equality among the different individuals in society, giving everyone the right to integrate into and participate in the various dimensions of their environment without suffering any kind of discrimination or prejudice. Thus, in my opinion, the importance of inclusive e-activities is directly linked to the need to ensure that all students, regardless of their physical, cognitive, cultural or socio-economic conditions, have equal opportunities in the learning process. Inclusion is a fundamental principle in education, whether face-to-face or digital, and it becomes even more relevant on online platforms, where technological barriers can widen inequalities if they are not properly taken into account. Regards, Rui Ventura

    12. activities carried out by means of electronic devices play an important role in the design of learning strategies. This is because these activities can be used to diversify the forms of learning and engage students in more dynamic and interactive processes.

      In this passage, annotated on p. 32, the author reminds us how important electronic devices are in developing teaching and learning strategies. When we integrate technologies into teaching and learning processes, we move towards diversifying teaching methods, making learning more dynamic and interactive. This stimulates student engagement and enables more personalized approaches, adapted to each learner's way of learning. Moreover, the use of activities mediated by digital technologies in education facilitates access to varied resources; as a consequence, the teacher promotes student autonomy, expanding learning opportunities inside and outside the classroom and throughout life. Maria Barreto

    1. The Amontillado!” ejaculated my friend, not yet recovered from his astonishment

      first EJACULATED HUH????? Also he really just cares about the wine

    2. Throwing the links about his waist

      wow yeah he just chained him up what on earth

    3. fettered

      restrained with chains or manacles, typically around the ankles. Woah you just straight up chained your friend?

    4. flambeaux rather to glow than flame

      It's cold, dark, the air is awful, Fortunato is barely able to walk, it feels grim

    5. offering him my arm. He leaned upon it heavily.

      Fortunato is struggling to keep himself up

    6. You? Impossible! A mason?

      Narrator is a mason? also, what is the brotherhood?

    7. had I given Fortunato cause to doubt my good will

      Is this more like a karma situation? Do good and you will be lucky? Is Fortunato a someone?
      I wrote that before finding out it was just a character. Maybe this is just saying Fortunato was not justified in doing harm to the narrator.

    8. “The nitre!” I said; “see, it increases. It hangs like moss upon the vaults. We are below the river’s bed. The drops of moisture trickle among the bones. Come, we will go back ere it is too late. Your cough—”

      Narrator yet again telling Fortunato to turn back


    1. A great operating model on its own, for instance, won’t bring results without the right talent or data in place.

      This shows how AI isn't just a quick fix that instantly gets you results; rather, you have to work on it so that it becomes more productive in the long run.

    2. Generative AI (gen AI) is revolutionizing the banking industry as financial institutions use the technology to supercharge customer-facing chatbots, prevent fraud, and speed up time-consuming tasks such as developing code, preparing drafts of pitch books, and summarizing regulatory reports.

      It seems to already be having a positive effect on the banking community.

    3. gen AI could add between $200 billion and $340 billion in value annually, or 2.8 to 4.7 percent of total industry revenues, largely through increased productivity.1

      This shows the massive impact AI could have and how much money could be made because of it, citing an estimate of $200 billion to $340 billion annually, or 2.8 to 4.7 percent of total industry revenues.

    1. the rewards are divided through by the standard deviation of a rolling discounted sum of the reward

      big reward shaping
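
      For context, the trick described in the quoted sentence is usually implemented roughly as follows. This is a minimal sketch in the spirit of common PPO codebases, not the paper's own code; the class name and details are illustrative.

      ```python
      import numpy as np

      class RewardScaler:
          """Keeps a discounted running return and scales rewards by its running std."""

          def __init__(self, gamma=0.99, eps=1e-8):
              self.gamma = gamma
              self.eps = eps
              self.ret = 0.0      # rolling discounted sum of rewards
              self.count = 0      # number of returns seen (Welford's online std)
              self.mean = 0.0
              self.m2 = 0.0

          def scale(self, reward, done):
              # Update the rolling discounted return, then its running statistics.
              self.ret = self.gamma * self.ret + reward
              self.count += 1
              delta = self.ret - self.mean
              self.mean += delta / self.count
              self.m2 += delta * (self.ret - self.mean)
              std = np.sqrt(self.m2 / self.count)
              if done:
                  self.ret = 0.0  # reset at episode boundaries
              return reward / (std + self.eps)  # reward is scaled, not mean-centred

      scaler = RewardScaler()
      scaled_r = scaler.scale(reward=1.0, done=False)
      ```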

    2. we find that they dramatically affect the performance of PPO. To demonstrate this, we start by performing a full ablation study on the four optimizations mentioned above

      All these little optimizations in the implementation of PPO have a big impact on its performance.

    1. graph

      There is no descriptive caption for this graph, and it should also be labeled as figure 1.

    1. description

      Change to description_unified

    2. nt_desc <- filter(variable_list, str_detect(variable, "no_travel")) %>% select(variable, description)

      The columns have been renamed from variable to variable_unified and from description to description_unified.

    3. weighted_hhs <- tbi$hh[!is.na(hh_weight) & hh_weight > 0, hh_id] tbi <- lapply( tbi, function(dt) { dt <- dt[hh_id %in% weighted_hhs] } )

      The new data includes 10 elements instead of 8. To align them, first create variable_list <- tbi$metaData_variables and values_list <- tbi$metaData_values, then delete the two additional elements with tbi <- tbi[1:8].

    4. Getting Started

      It would be nice to have a list of all the packages that will be used so we can load them up front.

    1. introverts will stop belittling themselves.

      This article angers me far beyond what it should. It is supposed to be a call to action for society to stop treating introverts like they're inferior. As an introvert myself I feel belittled reading this. Like I'm the victim and that it is a negative trait to be an introvert. While it's supposed to be a positive trait according to the article (Oh look, I'm introverted and that makes me a GREAT leader). All it does is just list the author's problems with being an introvert, but all of what she did list is barely traits of being an introvert, it's traits of being a coward with no self-confidence and a victim complex. Being an introvert or an extrovert is neither good nor bad, it's just what it is.

    2. Being introverted and shy do not go hand in hand. The so-called “shyness” I experienced from an early age was truly just anxiety surrounding meeting new people. I, as a person, was not shy, but rather introverted.

      This shows how the way she is labelled doesn't define her. The outside appearance isn't always what is true on the inside.

    3. I never thought of my introverted qualities as something negative until elementary school.

      Implies some sort of social conditioning.

      (Natasha)

    4. While her words were meant to calm me, they, in fact, did the opposite.  Why can’t I just be normal?

      the effect of the label of being shy on the author

    5. “I see you’re shy!

      labeling her with something against her will

    6. To some, I was “the shy kid.” To others, I was

      Diction and perspective: shows how other people's opinions affect how the author views herself.

    1. This failure has also created constituencies among various types of domestic manufacturers opposed to the kind of market liberalization inherent to sanctions relief—undermining a core belief held by Western policymakers that sanctions can spur behavior changes in countries like Iran through bottom-up pressure, including from business lobbies.

      Nice point

    1. Littler’s work shows in great detail how the narrative of ‘hard work’ and ‘making it’ I note above has become so present and alive in Global North societies (2) – and it’s by drawing this kind of sharp attention to the way such destructive narratives are mobilised, and who they work for, that we position ourselves to challenge and reject them.

      The Tirukkuṟaḷ, a Tamil text from the "Global South" that is at least 1,500 years old, contains "narratives of 'hard work'". The idea that this is somehow a Global North concept is woefully ignorant.

      Couplet 620:

      Who strive with undismayed, unfaltering mind, At length shall leave opposing fate behind.

    1. Connecting Link between two Sentences or Paragraphs,

      Miles, 1905 uses an arrow symbol with a hash on it to indicate a "connecting link between two Sentences or Paragraphs, etc."

      It's certainly an early example of what we would now consider a hyperlink. It actively uses a "pointer" in its incarnation.

      Are there earlier examples of these sorts of idea links in the historical record? Surely there were circles and arrows on a contiguous page, but what about links from one place to separate places (possibly using page numbers?) Indexing methods from 11/12C certainly acted as explicit sorts of pointers.

    2. An omission, e.g. to be filled in afterwards.

      When was the use of the caret first made for indicating the insertion of material?

      Eustace Miles has an example from 1905.

    3. Special Marks on Cards

      Eustace Miles suggests the use of "special marks on cards" (annotations) in the top left corners, though he doesn't provide specific examples of how they might be used in practice. He does mention "The Abbreviations and Marks need be clear only to the Writer [sic] himself. They save ever so much time."

      • "X": As contrasted with—
      • "Q": Quotation
      • Black triangle in corner: important
      • Arrow pointing to corner of card: As compared with
      • Angled parallel lines in the bottom right corner of card: End of Paragraph (or Chapter).
      • Arrow pointing to the corner of card with hash mark: Connecting Link between two Sentences or Paragraphs, etc.
      • Upside down V (or caret): An omission, e.g. to be filled in afterwards
      • ?: A doubtful point
    4. Special Marks on Cards

      In Miles' visual examples of cards, he presents them in portrait (rather than landscape) orientation.

      This goes against the broad grain of most standard card index filing systems of the time, but may be more in line with the earlier French use of playing cards orientation.

      His portrait orientation also matches with the size ratios seen in his Card-Tray suggestion on p187. https://hypothes.is/a/llEgpIf4Ee-dVfcaIGUryQ

    5. no false economy r

      He's repeating (and thus emphasizing) the admonition that a card system is not expensive, particularly in relation to the savings in time and effort.

    6. There should also be a Card-Tray, or a box with compartments in it, such as shown in the following illustration. Of course the Tray might have an open top.

      Miles suggests using a Card-Tray (in 1899) with various compartments and potentially an open top rather than some of the individual trays or card index boxes which may have been more ubiquitous

      This shows a slight difference at the time in how an individual would use one of these in writing versus how a business might use them in drawers of 1, 2, 3 or cabinets with many more.

      The image he shows seems more reminiscent of a 5x3" library charging tray than of some of the business filing appliances of the day and the decade following.


      very similar to the self-made version at https://hypothes.is/a/DHU_-If6Ee-mGieKOjg8ZQ

    7. These Cards (if used only once) should be labelled and catalogued very carefully.

      How does he define "labelled" and "catalogued"?

      Presumably he means a version of tagging/categorization and possibly indexing them to be able to easily find them again?

    8. A great help towards Arrangement and Clearness is to have Cards of different sizes and shapes, and of different colours, or with different marks on them

      Miles goes against the grain of using "cards of equal size", but does so to emphasize the affordance of using them for "Arrangement and Clearness".

    9. The Cards can be turned afterwards.

      Miles admits that one can use both sides of index cards in a card system, but primarily because he's writing at a time (1899) when, although paper is cheap (which he mentions earlier), some people may have an objection to the system's use due to the expense, which he places at the top of his list of objections. (And he does this in a book in which he emphasizes multiple times the ideas of selection and ordering!)

    10. and of course writing only on one side of the Card at a time.
    11. And the same will apply to the objection that the System is unusual. Seldom have there been any new suggestions which have not been condemned as 'unusual'
    12. Objections to the Card-System,

      Miles lists the following objections:

      • expense
      • inconvenience
      • unusual (new, novel)

      Notice that he starts not with benefits or affordances, but with the objections.

      What would a 2024 list of objections look like?

      • anachronism
      • harder than digital methods
      • lack of easier search
      • complexity
      • ... others?

    13. At first, also, it might be thought that the Cards would be inconvenient to use, but the personal experience of thousands shows that, at any rate for business-purposes, exactly the reverse is true

      Miles uses the ubiquity of card systems (even at the time of writing in 1899, prior to publication) within business as evidence for bolstering their use in writing and composition.

      (Recall that he's also writing in the UK.)

    14. Good Practice for this will be to study Loisette's System of Memory, e.g. in "How to Remember" (see p. 264); in fact Loisette's System might be called the Link-System; and Comparisons and Contrasts will very often be a great help as Links.

      Interesting to see a mention of Alphonse Loisette here!

      But also nice to see the concept of linking ideas and association (associative memory) pop up here in the context of note making, writing, and creating card systems.

    15. include anything which links one Idea to another. See further "How to Remember" (to be published in February, 1900, by Warne & Co.).

      This book was finally published in 1905. The introduction was written in 1899 and the mentioned Feb 1900 publication of How to Remember didn't happen until 1901.

      Miles, Eustace Hamilton. How to Remember: Without Memory Systems or with Them. Frederick Warne & Co., 1901.

    16. If the Letter is important, especially if it be a Business-Letter, there should be as long an interval as is feasible between the writing and the sending off.

      writing and waiting is useful in many instances, and particularly for clarity of expression.

      see also:

      • angry letter https://hypothes.is/a/6OoqHofyEe-1mtOohGA63w
      • diffuse thinking
      • typewriter (waiting)
      • editing (waiting) https://hypothes.is/a/VxRNeofvEe-5n1dpCEM48Q

    17. After the Letter has been done it should be read through, and should (if possible) be read out loud, and you should ask yourself, as you read it, whether it is clear, whether it is fair and true, and (last but not least) whether it is kind. Putting it in another way, you might ask yourself, 'What will the person feel and think on reading this?' or, 'Should I eventually be sorry to have received such a Letter myself?' or, again, 'Should I be sorry to have written it, say a year hence

      Recall: Abraham Lincoln's angry letter - put it in a drawer

    18. You can prepare your Letters anywhere, even in the train, and so save a great deal of time; and it may be noticed here that the idleness of people, during that great portion of their lives which they spend in travelling and waiting, can easily be avoided in this way.

      Using a card system, particularly while travelling, can help to more efficiently use one's time in preventing idleness while travelling and waiting.

    19. As we have often said before, paper is so cheap that there is no need for such economy.

      Compare this with the reference in @Kimmerer2013 about responsibility to the tree and not wasting paper: https://hypothes.is/a/pvQ_4ofxEe-NfSOv5wMFGw

      where is the balance?

    20. How to Express Ideas : Style.

      It could be interesting/useful to create a checklist or set of procedures (perhaps à la Oblique Strategies) for editing a major work.

      Sections in this TOC could be useful for creating such.

    21. The third reading should again be a slow reading,

      relationship to Adler's levels of reading?

    22. But in my opinion nothing can excuse the laziness of a great number of Editors. When the Writers are poor and have staked a great deal on their Writings, then the laziness is simply disgusting: in fact, it amounts to cruelty. It is concerned with some of the very saddest tragedies that the world has ever seen, and I only mention it because it is very common and because it is as well that the novice should know what to expect.
    23. Another Article I sent to a Paper, and after twenty weeks, and after many letters (which enclosed stamped and addressed envelopes), I was told that the Article was unsuitable for the Paper.

      Even in 1905 writers had to wait interminably after submitting their writing...

      it's only gotten worse since then...

    24. Very few have the strength of mind to keep back for a whole week a piece of Writing which they have finished. Type-writing sometimes necessitates this interval, or at any rate a certain interval.

      The process of having a work typewritten forced the affordance of creating time away from the writing of a piece. This allows for both active and diffuse thinking on the piece as well as the ability to re-approach it with fresh eyes days or weeks later.

    25. there is a great distinction between a thing which is heard, and a thing which is read in ordinary writing, and a thing which is read in print. In fact these differences almost necessitate certain differences in Style. Now Type-writing is far nearer to print than ordinary writing is.
    26. When an Article or Book has been written, it must be type-written before it is sent to the Editor or Publisher, that is to say, unless it has been ordered beforehand or unless you are well known. The reason is not simply that Type-writing looks better than ordinary writing, and that it is easier to read, but it actually is a fact that few Editors or Publishers will read anything that is not Type-written.

      Even as early as 1905 (or 1899 if we go by the dating of the introduction), typewritten manuscripts were de rigueur for submission to editors and publishers.

    27. Type-writing (see p. 369) is becoming more and more commonly used, and for certain purposes it is indispensable

      Note that he's writing in 1899 (via the introduction), and certainly not later than 1905 (publication date).

    28. Carlyle

      One of the major values of fame is that it often allows the dropping of context in communication between people.

      Example: Carlyle references in @Miles1905

    29. Carlyle

      It bears noting that in this book on writing and composition, neither Miles nor the indexer (if the index was done by someone else) ever uses Carlyle's first name (Thomas) in any of the eleven instances in which it appears, as he's famous enough in the context (space, time) to need only a single name.

    30. General Hints on Preparing Essays etc., in Rhyme.

      One ought to ask what purpose this Rhyme serves?

      • Providing emphasis of the material in the chapter;
      • scaffolding for hanging the rest of the material of the book upon, and
      • potentially meant to be memorized as a sort of outline of the book and the material.
    31. WITH A RHYME.

      did I miss the "rhyme" in this section or is he using a more figurative sense (as in "rhyme or reason")?

      Ha! Didn't get far enough, it's on page 36, but also works the other way as well.

    32. IN this Chapter I shall try to summarise the main part of this work, so that those who have not the time or the inclination to go right through it may at any rate grasp the general plan of it, and may be able to refer to any particular Chapter or page for further information on any particular topic.

      This chapter is essentially what one ought to glean from skimming the TOC, the Index, and doing a brief inspectional read (Adler, 1972).

    33. In these two latter sections it is as well to emphasise the general advice, "Try a thing for yourself before you go to anything or anyone for information." You should try (if there is time) to work out the subject beforehand; and then, after you have read or listened to the information, you should note it down in a special Note-book, and if possible make certain of understanding it, of remembering it, and of using it.

      Echoes of my own advice to "practice, practice, practice".

    34. Interest is required especially in the Beginning,
    35. But, the more he examines the subject, and the more he goes by his personal experience, the more he will find it worth while to spend time on, and to practise carefully, this first department of Composition, as opposed to the mere Expression. Indeed one might almost say that, if this first department has been thoroughly well done, that is to say, if the Scheme of Headings and Sub-Headings has been well prepared, the Expression will be a comparatively easy matter.

      Definition of the "first department of composition": the preparation (mise en place) for writing as opposed to the actual expression of the writing. By this he likely means the actions of Part II (collecting, selecting, arranging) of this book versus Part III.

    36. Humour is to be classed as a Rhetoricalweapon, and indeed as one of the most powerful.
    37. sCarlyle's writings show. Proverb, Paradox, Epigram,exaggeration, humour, and unexpected order of words,all these can be means of Emphasis.
    38. One might think at first that it was a Universal Lawthat all Writing or Speaking should be so clear as tobe transparent. And yet, as we have seen, no readerof Carlyle can doubt that a great deal of his Forcewould be gone if one made his Writings transparent.If one took some of Carlyle's most typical works andparaphrased them in simple English, the effect wouldnot be a quarter as good as it is.

      How is this accomplished exactly? How could one imitate this effect?

      How do we break down his material and style to re-create it?

    39. as Vigour, but the two generally go hand in hand.

      "Brevity is not always the same as Vigour, but the two generally go hand in hand." -Miles

    40. As to the other extreme, it is a question whether a sentence can be too clear, whether the Idea can be too simply expressed; and, if we once admit that Carlyle's writings produced a greater effect and a better effect than they would have done if they had been perfectly clear, then we must admit that for certain purposes absolute Clearness is a Fault.
    41. No Writer seems to be going off the point, and to be violating the Law of 'Unity' and Economy, more than Carlyle does. As we read his "Frederick the Great", the characters at first appear to us to have no more connexion with one another than the characters
    42. The reader will doubtless be amazed at the amount of time which has to be spent before he arrives at the stage of Expressing his Ideas at all.
    43. In order to give the reader some chance of having a good Collection of Headings, and less chance of omitting the important Headings, I have offered (e.g. on pp. 83, 92) a few General Lists, which are not quite complete but yet approach to completeness; two of these Lists will be found sufficient for most purposes. One of these is called the List of Period-Headings, such as Geography, Religion, Education, Commerce, War, etc. (see p. 83); the other is called the List of General Headings, and includes Instances, Causes and Hindrances, Effects, Aims, etc.: this latter List will be found on p. 92.
    44. Rhythm, Grammar, Vocabulary, Punctuation, etc. It was hard to break the faggots when they were in a bundle, but it was easy to break them when they were taken one by one.

      Notice that again he's emphasizing breaking down the problem into steps, and he's using a little analogy to do so, just like he had described previously.

      (see: https://hypothes.is/a/NDArGoemEe-9BXcYJSUyMQ)

    45. I shall try to give the Chief Faults in Composition. The reader will see that the list is long: and that, if he merely tries to write whole Essays all at one 'sitting', he is little likely to escape them all.

      Attempting to escape the huge list of potential "Chief faults in composition" is a solid reason not to try to cram a paper or essay in a single night/day.

    46. Teaching is one of the best means of Learning, not only because it forces one to prepare one's work carefully, and to be criticised whether one wishes it or not, but also because it gives one a sense of responsibility: it reminds one that one is no longer working for self alone.
    47. whether you are Writing or Speaking, the general principle to remember is that you must appeal, in nearly everything you say, to the very stupidest people possible.
    48. It is important to learn as much and at the same time as little as possible.

      By abstracting and concatenating portions of material, one can more efficiently learn material that would otherwise take more time.

    49. But of all methods of Learning none is better than the attempt to teach others


    1. Enslavers and the courts did not honor kinship ties to mothers, siblings, cousins. In most courts, they had no legal standing. Enslavers could rape or murder their

      This shows how little regard enslavers had for enslaved people: they refused to let them have families and could do anything to them, as if they owned them outright, which is very cruel.

    2. Hundreds of black veterans were beaten, maimed, shot and lynched.

      This just shows the amount of hate for people with a darker skin color, which is completely inhuman.

    3. Despite the guarantees of equality in the 14th Amendment, the Supreme Court’s landmark Plessy v. Ferguson decision in 1896 declared that the racial segregation of black Americans was constitutional.

      Even though slavery ended, this quote explains that one of its lasting effects was racism, which seemed never-ending or would take a very long time to end.

    4. They had no claim to their own children, who could be bought, sold and traded away from them on auction blocks alongside furniture and cattle or behind storefronts that advertised ‘‘Negroes for Sale.’

      This quote shows how enslaved Africans were treated, especially with regard to their children, who were separated from them and sold to other people to work or one day become slaves themselves; those children would know nothing about their parents or their families.

    1. High engagement in gambling and gaming activities might serve as a way to mitigate loneliness and related distress

      Important to discuss

    2. The widespread availability and addictive nature of the loot box system makes it crucial to regulate such monetization practices to protect vulnerable individuals such as young people, lonely individuals, and problem gamblers.

      This last section may be a good solution to the issue of loot box addiction.

    3. Interestingly, a study by Etchells et al. (2022) did not find associations between mental wellbeing and loot box purchasing,

      An interesting study to look at

    4. the study looked at financial consequences and the role of problem gambling in these associations.

      Both problem gambling and financial consequences are good aspects through which to look at the issue of online gambling and loot boxes.

    5. With respect to H1, Loneliness had a positive association with Loot Box Purchasing

      Helps strengthen the previous claim of lonely individuals being vulnerable to loot boxes.

    6. we will conclude that we have no evidence of metric invariance between models for different genders, nationalities, and age groups.

      Could potentially be used to show how loot boxes, and addiction to them, are indiscriminate of gender, nationality, and age group.

    7. Therefore, the positive association between Loot Box Purchasing and Indebtedness was indirect and mediated through Problem gambling.

      The connection between loot boxes and indebtedness may be indirect. However, researching problem gambling further may help produce claims that strengthen the research question.

    8. Loot box purchasing was measured with a single-item “How have your online consumer habits changed during the coronavirus pandemic regarding the following services in comparison to your previous habits: Loot box purchases in digital games”

      Need to find further studies from after the pandemic, but this can be a good baseline for how individuals can be affected by the 'predatory monetization schemes' previously mentioned in the journal.

    9. Increased loot box purchasing is positively associated with indebtedness. Given that gambling activities and loot box purchasing often co-occur

      This overall feeling of indebtedness can help strengthen the claim that loot boxes produce a hopelessness similar to what gambling produces.

    10. Loot box expenditure can add to financial strain caused by excessive gambling (Hing et al., 2022), but it might be problem gambling that plays a major role in debt problems among loot box buyers

      Looking for cases of severe loot box addiction could help support the claim that loot boxes mask the true nature of online gambling.

    11. Loot box prices typically vary from a few to tens of dollars, and high-spenders use over $100 per month on loot boxes

      It is good to bring up how much these individuals may spend.

    12. ‘predatory monetization schemes’ are designed to make players both financially and psychologically committed to a game with a purpose of spending more and more money.

      Worth looking into further: these 'predatory monetization schemes' could be a point to raise about how loot boxes change the perception of online gambling by posing as a lighter form of it.

    13. Loot boxes are commonly juxtaposed with forms of gambling and generally perceived as a gambling-like activity

      The term gambling-like activity can be key in placing loot boxes as an activity closely linked to gambling.

    14. Problem gambling is more common among those of lower income (Hahmann et al., 2021), but gambling can further worsen the situation leading to severe financial problems such as indebtedness

      Good point to bring up regarding the dangers of gambling.

    15. Studies have found that loneliness is a risk factor for problem gambling

      Potentially a good angle to describe the individuals that might be the most vulnerable to loot boxes and online gambling in general.

    16. Several studies have found associations between loot box purchasing and poorer mental health and distress

      These studies would be helpful in strengthening the claim that those who are mentally struggling are the most vulnerable. They can also support the potential connection between loot box purchasing and adverse effects on mental health. However, this claim will require additional research using multiple sources and studies.

    17. Concerns have been raised particularly in relation to ‘loot boxes’ that present a controversial form of in-game purchases in pursuit of randomized rewards such as weapons or cosmetic features

      Good definition for what a loot box is. Also helps present the loot box as a form of in-game purchase based on luck.

    18. The chance-based nature of loot boxes is often juxtaposed with mechanisms of gambling, and these gambling-like mechanisms make them potentially addictive for players

      Strong point that can be used as an argument for why loot boxes can hurt individuals, given their similarities to gambling. This can also help show how loot boxes promote a form of online gambling.

    1. Hello there, folks.

      Thanks once again for joining.

      Now that we've got a little bit of an understanding of what problem cloud is solving, let's actually go ahead and define it.

      So what we'll talk about is technology on tap, a common phrase that you might have heard about when talking about cloud.

      What is it and why would we say that?

      Then what we're actually going to do is walk through the NIST definition of cloud.

      So there are five key properties that the National Institute of Standards and Technology does use to determine whether or not something is cloud.

      So we'll walk through that.

      So we've got a good understanding of what cloud is and what cloud is not.

      So first things first, technology on tap.

      Why would we refer to cloud as technology on tap?

      Well, let's have a think about the taps we do know about.

      When you want access to water, if you're lucky enough to have access to a nice and easy supply of water, all you really need to do is turn on your tap and get access to as little or as much water as you want.

      You can turn that on and off as you require.

      Now, we know that that's easy for us.

      All we have to worry about is the tap and paying the bill for the amount of water that we consume.

      But what we don't really have to worry about is everything that goes in behind the scenes.

      So the treatment of the water to bring it up to drinking standards, the actual storage of that treated water, and then the transportation of that through the piping network to actually get to our tap.

      All of that is managed for us.

      We don't need to really worry about what happens behind the scenes.

      All we do is focus on that tap.

      We turn it on if we want more.

      We turn it off when we are finished.

      We only pay for what we consume.

      So you might be able to see where I'm going with this.

      This is exactly what we are talking about with cloud.

      With cloud, however, it's not water that we're getting access to, it is technology.

      So if we want access to technology, we use the cloud.

      We push some buttons, we click on an interface, we use whatever tool we require, and we get access to those servers, that storage, that database, whatever it might be that we require in the cloud.

      Now again, behind the scenes, we don't have to worry about the data centers that host all of this technology, all of these services that we want access to.

      We don't worry about the physical infrastructure, the hosting infrastructure, the storage, all the different bits and pieces that actually get that technology to us, we don't need to worry about.

      And how does it get to us?

      How is it available all across the globe?

      Well, we don't need to worry about that connectivity and delivery as well.

      All of this behind the scenes when we use cloud is managed for us.

      All we have to worry about is turning on or off services as we require.

      And this is why you can hear cloud being referred to as technology on tap, because it is very similar to the water utility service.

      Utility service is another name you might hear cloud being referred to, because it's like water or electricity.

      Cloud is like these utility services where you don't have to worry about all the infrastructure behind the scenes.

      You just worry about the thing that you want access to.

      And really importantly, you only have to pay for what you use.

      You turn it on if you need it, you turn it off if you don't, you create things when you need them, delete them when you don't, and you only pay for those services when you have them, even though they are constantly available at your fingertips.

      Now, compare this to the scenario we walked through earlier.

      Traditionally, we would have to buy all of the infrastructure, have it sitting there idly, even if we weren't using it, we would still have had to pay for it, set it up, power it and keep it all running.

      So this is a high level of what we are talking about with cloud.

      Easy access to servers when you need them, turn them off when you don't, don't worry about all that infrastructure behind the scenes.

      But that's a high level definition.

      So let's now walk through what the NIST use as the key properties to define cloud.

      One of the first properties you can use to understand whether something is or is not cloud is understanding whether or not it provides you on demand self service access, where you can easily go ahead and get that technology without even having to talk to humans.

      So what do I really mean by that?

      Well, let's say you're a cloud administrator, you want to go ahead and access some resources in the cloud.

      Now, if you do want access to some services, some data, some storage, an application, whatever it might be, well, you're probably going to have some sort of admin interface that you can use, whether that's a command line tool or some sort of graphical user interface, and you can easily use that to turn on any of the services that you need: web applications, data, storage, compute and much, much more.

      And you don't have to go ahead, talk to another human, procure all of the infrastructure that runs behind the scenes.

      You use your tool, it is self service, it is on demand, create it when you want it, delete it when you don't.

      So that's on demand self service access and one of the key properties of the cloud.

      Next, what I want to talk to you about is broad network access.

      Now, this is where we're just saying, if something is cloud, it should be easy for you to access through standard capabilities.

      So for example, if we are the cloud administrator, it's pretty common when you're working with technology to expect that you would have command line tools, web based tools and so on.

      But even when we're not talking about cloud administrators and we're actually talking about the end users, maybe for example, accessing storage, it should be easy for them to do so through standard tools as well, such as a desktop application, a web browser or something similar.

      Or maybe you've gone ahead and deployed a reporting solution in the cloud, like we spoke of in the previous lesson.

      Well, you would commonly expect for that sort of solution that maybe there's also a mobile application to go and access all of that reporting data.

      The key point here is that if you are using cloud, it is expected that all of the common standard sorts of accessibility options are available to you, public access, private access, desktop applications, mobile applications and so on.

      So if that's what cloud is and how we access it, where actually is it?

      That's a really important part of the definition of cloud.

      And that's where we're referring to resource pooling, this idea that you don't really know exactly where the cloud is that you are going to access.

      So let's say for example, you've got your Aussie Mart company.

      If they want to deploy their solution to be available across the globe, well, it should be pretty easy for them to actually go ahead and do that.

      Now, we don't know necessarily where that is.

      We can get access to it.

      We might say, I want my solution available in Australia East for example, or Europe or India or maybe central US for example.

      All of these refer to general locations where we want to deploy our services.

      When you use cloud, you are not going to go ahead and say, I want one server and I want it deployed to the data center at 123 data center street.

      Okay, you don't know the physical address exactly or at least you shouldn't really have to.

      All you need to know about is generally where you are going to go and deploy that.

      Now, you will also see that for most cloud providers, you've got that global access in terms of all the different locations you can deploy to.

      And really importantly, in terms of all of these pooled resources, understand that it's not just for you to use.

      There will be other customers all across the globe who are using that as well.

      So when you're using cloud, there are lots of resources.

      They might be in lots of different physical locations and lots of different physical infrastructure and in use by lots of different customers.

      And you don't really need to worry about that or know too much about it.

      Another really important property of the cloud is something referred to as rapid elasticity.

      Now elasticity is the idea that you can easily get access to more or less resources.

      And when you work with cloud, you're actually going to commonly hear this being referred to as scaling out and in rather than just scaling up and down.

      So what do I mean by that?

      Well, let's say we've got our users that need to access our Aussie Mart store.

      We might decide to use cloud to host our Aussie Mart web application.

      And perhaps that's hosted on a server and a database.

      Now, when that application gets really busy, for example, if we have lots of different users going to access it at the same time, we might want to scale out to meet demand.

      That is to say, rather than having one server that hosts our web application, we might actually have three.

      And if that demand for our application decreases, we might actually go ahead and decrease the underlying resources that power it as well.

      What we are talking about here is scaling in and out by adding or decreasing the number of resources that host our application.

      This is different from the traditional approach to scalability, where what we would normally do is just add CPU or add memory, for example.

      We would increase the size of one individual resource that was hosting our solution.

      So that's just elasticity at a high level and it's a really key property of cloud.

      Now, we'll just say here that if you are worried about how that actually works behind the scenes in terms of how you host that application across duplicate resources, how you provide connectivity to that, that's all outside the scope of this beginners course, but it's definitely covered in other content as well.

      So when you're using cloud, you get easy access to scale in and out and you should never feel like there are not enough resources to meet your demand.

      To you, it should just feel like if you want a hundred servers, for example, then you can easily get a hundred servers.
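
      To make the contrast concrete, here is a tiny illustrative Python sketch; the function names and resource numbers are made up for this lesson and are not tied to any particular cloud provider's API.

      ```python
      # Hypothetical illustration: vertical scaling (scale up) vs horizontal scaling (scale out).

      def scale_up(server, extra_cpu=2, extra_ram_gb=8):
          """Traditional approach: make the single server bigger."""
          server["cpu"] += extra_cpu
          server["ram_gb"] += extra_ram_gb
          return server

      def scale_out(fleet, target_count, template):
          """Cloud-style approach: change how many identical servers are running."""
          while len(fleet) < target_count:
              fleet.append(dict(template))      # add instances to meet demand
          del fleet[target_count:]              # remove instances when demand drops
          return fleet

      template = {"cpu": 2, "ram_gb": 8}
      fleet = [dict(template)]                  # start with one web server

      fleet = scale_out(fleet, 3, template)     # month-end rush: 1 -> 3 servers
      fleet = scale_out(fleet, 1, template)     # demand drops: back to 1 server
      ```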

      All right, now the last property of cloud that I want to talk to you about is that of measuring service.

      When we're talking about measuring service, what we're talking about is the idea that if you are using cloud to host your solutions, it should be really easy for you to go and say, I know what this is costing, I know where my resources are, how they are performing and whether there are any issues and I can control the types of resources and the configuration that I use that I'm going to deploy.

      So for example, it should be easy for you to say, how much is it going to cost me for five gigabytes of storage?

      What does my bill look like currently and what am I forecasted to be using over the remainder of the month?

      Or maybe you want to say that certain services should not be allowed to be deployed across all regions.

      Yes, cloud can be accessed across the globe, but maybe your organization only works in one part of a specific country and that's the only location you should be able to use.

      These are the standard notions of measuring and controlling service and it's really common to all of the cloud providers.
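
      As a rough illustration of the measuring side (using entirely made-up prices, since real pricing varies by provider, region and service tier), a simple spend-and-forecast calculation might look like this:

      ```python
      # Hypothetical unit prices -- real values differ per provider, region and service tier.
      PRICE_PER_GB_MONTH = 0.02      # storage, USD per GB per month
      PRICE_PER_SERVER_HOUR = 0.05   # one small server, USD per hour

      def estimate_bill(storage_gb, server_hours_so_far, days_elapsed, days_in_month=30):
          """Return (spend so far, simple linear forecast for the full month)."""
          spend = (storage_gb * PRICE_PER_GB_MONTH * days_elapsed / days_in_month
                   + server_hours_so_far * PRICE_PER_SERVER_HOUR)
          forecast = spend * days_in_month / days_elapsed
          return spend, forecast

      spend, forecast = estimate_bill(storage_gb=5, server_hours_so_far=240, days_elapsed=10)
      print(f"Spent so far: ${spend:.2f}; forecast for the month: ${forecast:.2f}")
      ```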

      All right, everybody.

      So now you've got an understanding of what cloud is and how you can define it.

      If you'd like to see more about this definition from the NIST, then be sure to check out the link that I've included for this lesson.

      So thanks for joining me, folks.

      I'll see you in the next lesson.

    1. IBMT involves learning that requires experience and explicit instruction. To ensure appropriate experience, coaches (qualified instructors) are trained to help novices practice IBMT properly. Instructors received training on how to interact with experimental and control groups to make sure they understand the training program exactly.

      I would be interested to see if traditional focused meditation would have similar results.

    2. Although no direct measures of brain changes were used in this study, some previous studies suggest that changes in brain networks can occur. Thomas et al. (40) showed that, in rats, one short experience of acute exposure to psychosocial stress reduced both short- and long-term survival of newborn hippocampal neurons. Similarly, the human brain is sensitive to short experience. Naccache et al. (41) showed that the subliminal presentation of emotional words (<100 ms) modulates the activity of the amygdala at a long latency and triggers long-lasting cerebral processes (41).

      Let's review these other studies as well.

    3. However, the lengthy training required has made it difficult to use random assignment of participants to conditions to confirm these findings.

      Interesting.

    4. shows significantly better attention and control of stress

      How is this measured? Edit: Addressed further down.

    5. a group

      What is the size and composition of the group? Edit: Addressed further down.

    6. showed greater improvement

      Is this quantified?

    7. may be easier to teach to novices because they would not have to struggle so hard to control their thoughts.

      Very interesting. I thought the whole purpose of meditation WAS the struggle to control the thoughts. I thought that is where the benefits came from.

    8. Thought control is achieved gradually through posture and relaxation, body–mind harmony, and balance with the help of the coach rather than by making the trainee attempt an internal struggle to control thoughts in accordance with instruction.

      Certainly would make it more approachable.

    9. The main effect of the training session was significant only for the executive network [F(1,78) = 9.859; P < 0.01]. More importantly, the group × session interaction was significant for the executive network [F(1,78) = 10.839; P < 0.01], indicating that the before vs. after difference in the conflict resolution score was significant only for the trained group

      This would imply improved equanimity, but perhaps not long term focus improvement.

    10. Performance of the ANT after 5 days of IBMT or control. Error bars indicate 1 SD. Vertical axis indicates the difference in mean reaction time between the congruent and incongruent flankers. The higher scores show less efficient resolution of conflict.

      This particular study seems to show that the change in focus efficiency was actually better in the control group than the experiment group.

    1. Hey there everybody, thanks for joining.

      It's great to have you with me in this lesson where we're going to talk about why cloud matters.

      Now to help answer that question, what I want to do firstly is talk to you about the traditional IT infrastructure.

      How did we used to do things?

      What sort of challenges and issues did we face?

      And therefore we'll get a better understanding of what cloud is actually doing to help.

      We can look at how things used to be and how things are now.

      So what we're going to do throughout this lesson is walk through a little bit of a scenario with a fictitious company called Ozzymart.

      So let's go ahead now, jump in and have a chat about the issues that they're currently facing.

      Ozzymart is a fictitious company that works across the globe selling a range of different Australia related paraphernalia.

      Maybe stuffed toys for kangaroos, koalas and that sort of thing.

      Now they've currently got several different applications that they use that they provide access to for their users.

      And currently the Ozzymart team do not use the cloud.

      So when we have a look at the infrastructure hosting these applications, we'll learn that Ozzymart have a couple of servers, one server for each of the applications that they've got configured.

      Now the Ozzymart IT team have had to go and set up these servers with Windows, the applications and all the different data that they need for these applications to work.

      And what it's also important to understand about the Ozzymart infrastructure is all of this is currently hosted on their on-premises customer managed infrastructure.

      So yes, the Ozzymart team could have gone out and maybe used a data center provider.

      But the key point here is that the Ozzymart IT team have had to set up servers, operating systems, applications and a range of other infrastructure to support all of this storage, networking, power, cooling.

      Okay, these are the sorts of things that we have to manage traditionally before we were able to use cloud.

      Now to help understand what sort of challenges that might introduce, let's walk through a scenario.

      We're going to say that the Ozzymart CEO has gone and identified the need for reporting to be performed across these two applications.

      And the CEO wants those reports to be up and ready by the end of this month.

      Let's say that's only a week away.

      So the CEO has instructed the finance manager and the finance manager has said, "Hey, awesome.

      You know what?

      I've found this great app out there on the internet called Reports For You.

      We can buy it, download it and install it.

      I'm going to go tell the IT team to get this up and running straight away."

      So this might sound a little bit familiar to some of you who have worked in traditional IT where sometimes demands can come from the top of the organization and they filter down with really tight timelines.

      So let's say for example, the finance manager is going to go along, talk to the IT team and say, "We need this Reports For You application set up by the end of month."

      Now the IT team might be a little bit scared because, hey, when we look at the infrastructure we've got, it's supporting those two servers and applications okay, but maybe we don't have much more space.

      Maybe we don't have enough storage.

      Maybe we are using something like virtualization.

      So we might not need to buy a brand new physical server and we can run up a virtual Windows server for the Reports For You application.

      But there might just not be enough resources in general.

      CPU, memory, storage, whatever it might be to be able to meet the demands of this Reports For You application.

      But you've got a timeline.

      So you go ahead, you get that server up and running.

      You install the applications, the operating system data, all there as quickly as you can to meet these timelines that you've been given by the finance manager.

      Now maybe it's not the best server that you've ever built.

      It might be a little bit rushed and a little bit squished, but you've managed to get that server up and running with the Reports For You application and you've been able to meet those timelines and provide access to your users.

      Now let's say that you've given access to your users for this Reports For You application.

      Now let's say when they start that monthly reporting job, the Reports For You application needs to talk to the data across your other two applications, the Ozzymart Store and the Ozzymart Comply application.

      And it's going to use that data to perform the reporting that the CEO has requested.

      So you kick off this report job on a Friday.

      You hope that it's going to be complete on a Saturday, but maybe it's not.

      You check again on a Sunday and things are starting to get a little bit scary.

      And uh-oh, Monday rolls around, the Reports For You report is still running.

      It has not yet completed.

      And that might not be so great because you don't have a lot of resources on-premises.

      And now all of your applications are starting to perform really poorly.

      So that Reports For You application is still running.

      It's still trying to read data from those other two applications.

      And maybe they're getting really, really slow and let's hope not, but maybe the applications even go offline entirely.

      Now those users are going to become pretty angry.

      You're going to get a lot of calls to the help desk saying that things are offline.

      And you're probably going to have the finance manager and every other manager reaching out to you saying, this needs to be fixed now.

      So let's say you managed to push through, perhaps through the rest of Monday, and that report finally finishes.

      You clearly need more resources to be able to run this report much more quickly at the end of each month so that you don't have angry users.

      So what are you going to do to fix this for the next month when you need to run the report again?

      Well, you might have a think about ordering some new software and hardware because you clearly don't have enough hardware on-premises right now.

      You're going to have to wait some time for all of that to be delivered.

      And then you're going to have to physically receive and store it, set it up, get it running, and make sure that you've got everything you need for Reports For You to be running with more CPU and resources next time.

      There's a lot of different work that you need to do.

      This is one of the traditional IT challenges that we might face when the business has demands and expectations for things to happen quickly.

      And it's not really necessarily the CEO or the finance manager's fault.

      They are focused on what the business needs.

      And when you work in the technology teams, you need to do what you can to support them so that the business can succeed.

      So how might we do that a little bit differently with cloud?

      Well, with cloud, we could sign up for a cloud provider, we could turn on and off servers as needed, and we could scale up and scale down, scale in and scale out resources, all to meet those demands on a monthly basis.

      So that could be a lot less work to do and it could certainly provide you the ability to respond much more quickly to the demands that come from the business.

      And rather than having to go out and buy all of this new infrastructure that you are only going to use once a month, well, as we're going to learn throughout this course, one of the many benefits of cloud is that you can turn things on and off really quickly and only pay for what you need.

      So what might this look like with cloud?

      Well, with cloud, what we might do is no longer have that on-premises rushed server that we were using for Reports For You.

      Instead of that, we can go out to a public cloud provider like AWS, GCP or hopefully Azure, and you can set up those servers once again using a range of different features, products that are all available through the various public cloud providers.

      Now, yes, in this scenario, we are still talking about setting up a server.

      So that is going to take you some time to configure Windows, set up the application, all of the data and configuration that you require, but at least you don't need to worry about the actual physical infrastructure that is supporting that server.

      You don't have to go out, talk to your procurement team, talk to different providers, and wait for physical infrastructure, licensing, software and other assets to be delivered.

      With cloud, as we will learn, you can really quickly get online and up and running.

      And also, if we had that need to ensure that the Reports For You application was running with lots of different resources at the end of the month, it's much easier when we use cloud to just go and turn some servers on and then maybe turn them off at the end of the month when they are no longer required.

      This is the sort of thing that we are talking about with cloud.

      We're only really just scratching the surface of what cloud can do and what cloud actually is.

      But my hope is that through this lesson, you can understand how cloud changes things.

      Cloud allows us to work with technology in a much different way than we traditionally would work with our on-premises infrastructure.

      Another example that shows how cloud is different is that rather than using the Reports For You application, what we might in fact actually choose to do is go to a public cloud provider and go to someone that actually has an equivalent Reports For You solution that's entirely built in the cloud ready to go.

      In this way, not only do we no longer have to manage the underlying physical infrastructure, we don't actually have to manage the application software installation, configuration, and all of that service setup.

      With something like a reporting software that's built in the cloud, we would just provide access to our users and only have to pay on a per user basis.

      So if you've used something like zoom for meetings or Dropbox for data sharing, that's the sort of solution we're talking about.

      So if we consider this scenario for Ozzymart, let's have a think about the benefits that they might access when they use the cloud.

      Well, we can much more quickly get access to resources to respond to demand.

      If we need to have a lot of different compute capacity working at the end of the month with cloud, like you'll learn, we can easily get access to that.

      If we wanted to add lots of users, we could do that much more simply as well.

      And something that the finance manager might really be happy about in this scenario is that we aren't going to go back and suggest to them that we need to buy a whole heap of new physical infrastructure right now.

      When we think about traditionally how Ozzymart would have worked with this scenario, they would have to go and buy some new physical servers, resources, storage, networking, whatever that might be, to meet the needs of this Reports For You application.

      And really, they're probably going to have to strike a balance between having enough infrastructure to ensure that the Reports For You application completes its job quickly and not buying too much infrastructure that's just going to be sitting there unused whilst the Reports For You application is not working.

      And really importantly, when we go to cloud, this difference of not having to buy lots of physical infrastructure upfront is what's referred to as capital expenditure versus operational expenditure.

      Really, what we're just saying here is rather than spending a whole big lump sum all at once to get what you need, you can just pay on a monthly basis for what you need when you need it.

      And finally, one of the other benefits that you'll also see is that we're getting a reduction in the amount of different tasks that we have to complete in terms of IT administration, set up of operating systems, management of physical infrastructure, what the procurement team has to manage, and so on.

      Again, right now we're just talking really high level about a fictitious scenario for Ozzymart to help you to understand the types of things and the types of benefits that we can get access to with cloud.

      So hopefully if you're embarking on a cloud journey, you're gonna have a happy finance manager, CEO, and other team members that you're working with as well.

      Okay, everybody, so that's a wrap to this lesson on why cloud matters.

      As I've said, we're really only just scratching the surface.

      This is just to introduce you to a scenario that can help you to understand the types of benefits we get access to with cloud.

      As we move throughout this course, we'll progressively dive deeper in terms of what cloud is, how you define it, the features you get access to, and other common concepts and terms.

      So thanks for joining me, I'll see you there.

    1. “I do not wantmy wife to take up with any otherman; if she does, this real estate goesto my estate.” The wife re-married.Does she own the realty in fee simple?

      The more apparent search item here is "fee simple". However, another search item is whether a promise not to remarry, which could come at a detriment, is supported by consideration and is legally enforceable.

    1. SOTs generated by the anomalous Hall effect inFM/NM/FM multilayers were predicted 13 and experimentallyrealized14

      Is this normal?


    1. Marrim thinks they will still find a way to smoke. “Kids break the rules — that’s the way of the world,” she said. “We were all kids and we tried it for the first time,” she added. “Might as well do it in the safety of a lounge.”

      Marrim feels that hookah is a big part of her life because it helped her feel liberated, even though she was looked at as shameful because she is a woman. That did not stop her; she would make her own hookah when she was younger so she could smoke. She's not wrong that kids like to break the rules.

    2. the chemicals in hookah smoke are similar to those found in cigarette smoke.

      Hookah is tobacco that you inhale into your lungs, so it's still a health problem because you're getting smoke in your lungs.

    3. birthdays, graduations, that time you cried over the crush who didn’t like you back or showed off your smoke ring skills to your friends. “It’s like a rite of passage here when you start smoking hookah,” Marrim said.

      The hookah lounge is more than a place to smoke; it's a place where people get together to celebrate special events like birthdays, to relieve some stress, or to hang out with friends.

    4. “And it’s something you have to create for yourself when you’re displaced, and you might not ever be able to go back home because you don’t really know what home is anymore.”

      Hookah is a sacred tradition for Muslim people; they don't know if they will ever go home someday, so keeping the tradition of hookah is important to them.

    1. on the other side

      The water that has only recently brought about death to an unfortunate sailor and has seemingly threatened “Gentile[s]” and “Jew[s]” and even us, the readers, now becomes the force whose absence leads to death. What if the “death by water” that Madame Sosostris warned about was not the drowning but the death brought by its lack? This absence—spiritual and physical—defines the drought that pervades society.

      In essence, that warning has already come true. In the search for meaning, earthly desires have drowned humanity. What comes after is stillness: a period of profound spiritual drought. This lack of spirituality induces apocalypse: the cycle of life seemingly becomes broken. The silence in the mountains does not give way to voice, and the stillness described in Death by Water that follows the storm does not imply recovery; instead, it leads to further destruction. There is no resurgence after the storm, only desolation.

      This desolation is no less overwhelming than the indulgence that preceded it. The absence of water—a metaphor for spiritual sustenance—is inescapable. The mountains, once symbols of “solitude,” “silence” and reflection, are now dry and barren. The use of “even” in these lines underscores the totality of this spiritual drought. There is no refuge, no shelter, not “even” in the mountains.

      Eliot further juxtaposes biblical light, associated with Christ as “the sun shineth in his strength,” as described in Revelation, with thunder, transforming it into a symbol of apocalypse – the thunder itself represents a loud rumbling or crashing noise after a lightning flash. This choice of title and imagery seems to suggest that divine intervention may have already occurred—unrecognized and unheeded, leaving only a loud noise as its product. What if Jesus is already here “walking beside” us? Left unrecognized, however, he does not intervene. This notion is underscored by the repetition of the question regarding the identity of this third figure: “Who is the third who walks always beside you?” In the second reiteration of the question, however, “beside” changes to “on the other side.” This divine figure, most likely Christ, is present, yet now isolated by the walls of mountains we ourselves have built.

      The tragedy of this drought, thus, seems to lie not in the absence of divine intervention, but in humanity’s inability to recognize it. In this contemporary world, it is not the storm that destroys; it is the stillness after, where the absence of recognition leads to a deeper decay. The apocalypse has already begun (or potentially has almost reached its culmination), not in fire or flood, but in silence and spiritual blindness.

    1. Thanksgiving is a time to reflect on the things we’re grateful for and to share that gratitude with the people who matter most. Along with gathering around the table, sending a heartfelt card is a meaningful way to reach out to friends, family, and co-workers—especially those who can’t join you in person. A thoughtful message can remind them how much they’re loved and appreciated. Whether you’re sending Thanksgiving cards or inviting loved ones to celebrate with you, these Thanksgiving messages and well wishes will help express your gratitude this season.

      Well-written paragraph, reads very smoothly. 1. First sentence states what Thanksgiving is all about. 2. Second and third smoothly transition from the first into the need for sending messages for Thanksgiving. 3. Last hints at some of the tangible options to be discussed, then summarizes the value of Thanksgiving messages.

  2. docdrop.org
    1. "You guys are no help. Literally no help. Why do you guys have me in here?" she protested. Sofia's step-grandfather was so angry with the school administrators (and perhaps intimidated by them) that Lola tried to intervene. (He tells us that when he was growing up here in the 1950s, all the parents were involved in the schools, but now they are completely uninterested. "They would rather let others do it, but then no one gets involved."

      Nowadays, I think that's the case for a lot of schools in the U.S. Many parents aren't involved in school affairs like they were back in the day. Parents used to actually care about their children's education and the material they were being taught, but now parents just send kids to these schools and are not involved whatsoever.

    1. Cyber-EnabledNetworks
      • Has the capacity to be entwined with other crimes such as extortion

      • Intensifies amid the rise of AI

    2. Mafia networks
      • Only 18 of them in Canada

      • They're mainly held in Ontario and Quebec but have connections to more than 10 countries

      • They're very violent

      • Active in the private business sector where they commit money laundering

    3. Extortion
      • Force or threats are used to obtain money

      Eg. Co-op extortion: the perpetrator threatened to release sensitive data to the public if certain demands weren't met

    4. Money Laundering
      • Money is obtained illegally but is disguised as legit

      • 30% of OCGs are involved in money laundering

      • 40 billion dollars is laundered annually in Canada

    5. Piracy
      • The crime of stealing intellectual property and distributing it either for a reduced price or for free

      • Eg. Zlibrary

      • Causes financial consequences for the movie producer, its promoter, and others associated with the production of the movie

      • What would be the difference between piracy and buying something and selling it at a thrifting site at a reduced price?

      • Manga mura

    6. What are Loan Sharks? (Beware!)
      • Happens frequently on social media
      • Professionals or experts who lend money to clients at extremely high rates and collect by means of threats and violence
      • You can still experience loan sharking even in legitimate places (eg. payday lenders)


    1. Responsibility to the tree makes everyone pause before beginning. Sometimes I have that same sense when I face a blank sheet of paper. For me, writing is an act of reciprocity with the world; it is what I can give back in return for everything that has been given to me. And now there’s another layer of responsibility, writing on a thin sheet of tree and hoping the words are worth it. Such a thought could make a person set down her pen.
    1. Welcome back and in this lesson I want to talk through another type of storage.

      This time: instance store volumes.

      It's essential for all of the AWS exams and real-world usage that you understand the pros and cons for this type of storage.

      It can save money, improve performance or it can cause significant headaches so you have to appreciate all of the different factors.

      So let's just jump in and get started because we've got a lot to cover.

      Instance store volumes provide block storage devices, so raw volumes which can be attached to an instance, presented to the operating system on that instance and used as the basis for a file system which can then in turn be used by applications.

      So far they're just like EBS only local instead of being presented over the network.

      These volumes are physically connected to one EC2 host and that's really important.

      Each EC2 host has its own instance store volumes and they're isolated to that one particular host.

      Instances which are on that host can access those volumes and because they're locally attached they offer the highest storage performance available within AWS much higher than EBS can provide and more on why this is relevant very soon.

      They're also included in the price of any instances which they come with.

      Different instance types come with different selections of instance store volumes, and for any instances which include instance store volumes they're included in the price of that instance, so it comes down to use it or lose it.

      One really important thing about instance store volumes is that you have to attach them at launch time and unlike EBS you can't attach them afterwards.

      I've seen this question come up a few times in various AWS exams about adding new instance store volumes after instance launch and it's important that you remember that you can't do this, it's launch time only.

      Depending on the instance type you're going to be allocated a certain number of instance store volumes. You can choose to use them or not, but if you don't you can't adjust this later.
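      As a rough illustration of that launch-time-only rule, here's a minimal boto3 sketch that maps instance store volumes in at launch using the classic ephemeral virtual names. The AMI ID is a placeholder, the instance type is just one example of an older family that exposes instance store through block device mappings, and modern NVMe-based types present their volumes automatically without any mapping.

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Instance store volumes can only be requested at launch time.
      # VirtualName refers to the ephemeral volumes this instance type provides.
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder AMI
          InstanceType="d2.xlarge",          # an instance type which includes instance store
          MinCount=1,
          MaxCount=1,
          BlockDeviceMappings=[
              {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
              {"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"},
          ],
      )
      print(response["Instances"][0]["InstanceId"])
      ```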

      This is how instance store architecture looks.

      Each instance can have a collection of volumes which are backed by physical devices on the EC2 host which that instance is running on.

      So in this case host A has three physical devices and these are presented as three instance store volumes, and host B has the same three physical devices.

      Now in reality EC2 hosts will have many more but this is a simplified diagram.

      Now on host A instance 1 and 2 are running. Instance 1 is using one volume and instance 2 is using the other two volumes, and the volumes are named ephemeral 0, 1 and 2.

      Roughly the same architecture is present on host B but instance 3 is the only instance running on that host and it's using ephemeral 1 and ephemeral 2 volumes.

      Now these are ephemeral volumes, they're temporary storage, and as a solutions architect or a developer or an engineer you need to think of them as such.

      If instance 1 stored some data on ephemeral volume 0 on EC2 host A let's say a cat picture and then for some reason the instance migrated from host A through to host B then it would still have access to an ephemeral 0 volume but it would be a new physical volume a blank block device.

      So this is important: if an instance moves between hosts then any data that was present on the instance store volumes is lost, and instances can move between hosts for many reasons.

      If they're stopped and started this causes a migration between hosts or another example is if host A was undergoing maintenance then instances would be migrated to a different host.

      When instances move between hosts they're given new blank ephemeral volumes. Data on the old volumes is lost, they're wiped before being reassigned, but the data is gone. And even if you do something like change an instance type this will cause an instance to move between hosts, and that instance will no longer have access to the same instance store volumes.

      This is another risk to keep in mind; you should view all instance store volumes as ephemeral.

      The other danger to keep in mind is hardware failure if a physical volume fails say the ephemeral 1 volume on EC2 host A then instance 2 would lose whatever data was on that volume.

      These are ephemeral volumes, treat them as such. They hold temporary data and they should not be used for anything where persistence is required.

      Now the size of instance store volumes and the number of volumes available to an instance vary depending on the type of instance and the size of instance.

      Some instance types don't support instance store volumes, different instance types have different types of instance store volumes, and as you increase in size you're generally allocated larger numbers of these volumes, so that's something that you need to keep in mind.

      One of the primary benefits of instance store volumes is performance you can achieve much higher levels of throughput and more IOPS by using instance store volumes versus EBS.

      I won't consume your time by going through every example but some of the higher-end figures that you need to consider are things like if you use a D3 instance which is storage optimized then you can achieve 4.6 GB per second of throughput and this instance type provides large amounts of storage using traditional hard disks so it's really good value for large amounts of storage.

      It provides much higher levels of throughput than the maximums available when using HDD based EBS volumes.

      The I3 series which is another storage optimized family of instances these provide NVMe SSDs and this provides up to 16 GB per second of throughput and this is significantly higher than even the most high performance EBS volumes can provide and the difference in IOPS is even more pronounced versus EBS with certain I3 instances able to provide 2 million read IOPS and 1.6 million write IOPS when optimally configured.

      In general instance store volumes perform to a much higher level versus the equivalent storage in EBS.

      I'll be doing a comparison of EBS versus instance store elsewhere in this section which will help you in situations where you need to assess suitability but these are some examples of the raw figures.

      Now before we finish this lesson just a number of exam power-ups.

      Instance store volumes are local to an EC2 host, so if an instance does move between hosts you lose access to the data on that volume. You can only add instance store volumes to an instance at launch time; if you don't add them you cannot come back later and add additional instance store volumes. And any data on instance store volumes is lost if that instance moves between hosts, if it gets resized, or if you have either host failure or specific volume hardware failure.

      Now in exchange for all these restrictions of course instance store volumes provide high performance so it's the highest data performance that you can achieve within AWS you just need to be willing to accept all of the shortcomings around the risk of data loss its temporary nature and the fact that it can't survive through restarts or moves or resizes.

      It's essentially a performance trade-off you're getting much faster storage as long as you can tolerate all of the restrictions.

      Now with instance store volumes you pay for it anyway it's included in the price of an instance so generally when you're provisioning an instance which does come with instance store volumes there is no advantage to not utilizing them you can decide not to use them inside the OS but you can't physically add them to the instance at a later date.

      Just to reiterate, and I'm going to keep repeating this throughout this section of the course, instance store volumes are temporary. You cannot use them for any data that you rely on or data which is not replaceable, so keep that in mind. It does give you amazing performance, but it is not for the persistent storage of data. At this point that's all of the theory that I wanted to cover, so that's the architecture and some of the performance trade-offs and benefits that you get with instance store volumes. Go ahead and complete this video and when you're ready join me in the next, which will be an architectural comparison of EBS and instance store which will help you in exam situations to pick between the two.

    1. Welcome back and in this lesson I want to talk about the Hard Disk Drive or HDD-based volume types provided by EBS.

      HDD-based means they have moving parts: platters which spin, and little robot arms known as heads which move across those spinning platters.

      Moving parts means slower which is why you'd only want to use these volume types in very specific situations.

      Now let's jump straight in and look at the types of situations where you would want to use HDD-based storage.

      Now there are two types of HDD-based storage within EBS.

      Well that's not true, there are actually three but one of them is legacy.

      So I'll be covering the two ones which are in general usage.

      And those are ST1 which is throughput optimized HDD and SC1 which is cold HDD.

      So think about ST1 as the fast hard drive not very agile but pretty fast and think about SC1 as cold.

      ST1 is cheap, it's less expensive than the SSD volumes which makes it ideal for any larger volumes of data.

      SC1 is even cheaper but it comes with some significant trade-offs.

      Now ST1 is designed for data which is sequentially accessed because it's HDD-based it's not great at random access.

      It's more designed for data which needs to be written or read in a fairly sequential way.

      Applications where throughput and economy is more important than IOPS or extreme levels of performance.

      ST1 volumes range from 125 GB to 16 TB in size and you have a maximum of 500 IOPS.

      But and this is important IO on HDD-based volumes is measured as 1 MB blocks.

      So 500 IOPS means 500 MB per second.

      Now those are maximums; HDD-based storage works in a similar way to how GP2 volumes work, with a credit bucket.

      Only with HDD-based volumes it's done around MB per second rather than IOPS.

      So with ST1 you have a baseline performance of 40 MB per second for every 1 TB of volume size.

      And you can burst to a maximum of 250 MB per second for every TB of volume size.

      Obviously up to the maximum of 500 IOPS and 500 MB per second.

      ST1 is designed for when cost is a concern but you need frequent access storage for throughput intensive sequential workloads.

      So things like big data, data warehouses and log processing.

      Now SC1 on the other hand is designed for infrequent workloads.

      It's geared towards maximum economy when you just want to store lots of data and don't care about performance.

      So it offers a maximum of 250 IOPS.

      Again this is with a 1 MB IO size.

      So this means a maximum of 250 MB per second of throughput.

      And just like with ST1 this is based on the same credit pool architecture.

      So it has a baseline of 12 MB per second per TB of volume size and a burst of 80 MB per second per TB of volume size.

      So you can see that this offers significantly less performance than ST1 but it's also significantly cheaper.

      And just like with ST1 volumes can range from 125 GB to 16 TB in size.

      This storage type is the lowest cost EBS storage available.

      It's designed for less frequently accessed workloads.

      So if you have colder data, archives or anything which requires less than a few loads or scans per day then this is the type of storage volume to pick.
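      To make those baseline and burst figures concrete, here's a small sketch that simply applies the per-TB numbers from this lesson to a given volume size. The helper name is mine; the rates and caps are the ones quoted above.

      ```python
      def hdd_throughput(volume_gib, volume_type="st1"):
          """Approximate baseline and burst throughput (MB/s) for ST1/SC1,
          using the per-TB figures quoted in this lesson."""
          size_tib = volume_gib / 1024
          if volume_type == "st1":
              baseline = min(40 * size_tib, 500)   # 40 MB/s per TB, capped at 500 MB/s
              burst = min(250 * size_tib, 500)     # 250 MB/s per TB, capped at 500 MB/s
          else:  # sc1
              baseline = min(12 * size_tib, 250)   # 12 MB/s per TB, capped at 250 MB/s
              burst = min(80 * size_tib, 250)      # 80 MB/s per TB, capped at 250 MB/s
          return baseline, burst

      print(hdd_throughput(2048, "st1"))  # a 2 TB ST1 volume: (80.0, 500.0)
      print(hdd_throughput(2048, "sc1"))  # a 2 TB SC1 volume: (24.0, 160.0)
      ```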

      And that's it for HDD based storage.

      Both of these are lower cost and lower performance versus SSD.

      Designed for when you need economy of data storage.

      Picking between them is simple.

      If you can tolerate the trade-offs of SC1 then use that.

      It's super cheap and for anything which isn't day to day accessed it's perfect.

      Otherwise choose ST1.

      And if you have a requirement for anything IOPS based then avoid both of these and look at SSD based storage.

      With that being said though that's everything that I wanted to cover in this lesson.

      Thanks for watching.

      Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.

    1. The conclusion we would draw is apparent; — if there is a similarity of minds discernible in the whole human race, can dissimilitude of forms or the gradations of complexion prove that the earth is peopled by many different species of men?

      The question: what matters more? Hearts/minds? Or Complexion?

    1. Rémunération du personnel, 65 000 €

      Income statement, expenses, account 641

    2. Ventes de produits finis, 511 200 €

      Income statement, revenue, account 701

    3. Stock de matières 1ères du 31/12/N, 35 000 €

      Balance sheet, assets, account 31

    4. Achats de matières premières, 284 000 €

      Income statement, expenses, account 601

    1. Welcome back and in this lesson I want to continue my EBS series and talk about provisioned IOPS SSD.

      So that means IO1 and IO2.

      Let's jump in and get started straight away because we do have a lot to cover.

      Strictly speaking there are now three types of provisioned IOPS SSD.

      Two which are in general release IO1 and its successor IO2 and one which is in preview which is IO2 Block Express.

      Now they all offer slightly different performance characteristics and different prices, but the common factor is that IOPS are configurable independent of the size of the volume and they're designed for super high performance situations where low latency and consistency of that low latency are both important characteristics.

      With IO1 and IO2 you can achieve a maximum of 64,000 IOPS per volume and that's four times the maximum for GP2 and GP3, and with IO1 and IO2 you can achieve 1,000 MB per second of throughput.

      This is the same as GP3 and significantly more than GP2.

      Now IO2 Block Express takes this to another level.

      With Block Express you can achieve 256,000 IOPS per volume and 4000 MB per second of throughput per volume.

      In terms of the volume sizes that you can use with provisioned IOPS SSDs with IO1 and IO2 it ranges from 4 GB to 16 TB and with IO2 Block Express you can use larger up to 64 TB volumes.

      Now I mentioned that with these volumes you can allocate IOPS performance values independently of the size of the volume.

      Now this is useful for when you need extreme performance for smaller volumes or when you just need extreme performance in general but there is a maximum of the size to performance ratio.

      For IO1 it's 50 IOPS per GB of size so this is more than the 3 IOPS per GB for GP2.

      For IO2 this increases to 500 IOPS per GB of volume size and for Block Express this is 1000 IOPS per GB of volume size.

      Now these are all maximums and with these types of volumes you pay for both the size and the provisioned IOPS that you need.
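      As a hedged example of provisioning IOPS independently of size, this boto3 sketch creates a 200 GB io2 volume with 10,000 provisioned IOPS, which sits comfortably under the 500 IOPS per GB ceiling mentioned above. The availability zone is a placeholder.

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # 200 GB io2 volume with 10,000 provisioned IOPS (well under 200 GB x 500 IOPS/GB).
      volume = ec2.create_volume(
          AvailabilityZone="us-east-1a",   # placeholder AZ
          Size=200,
          VolumeType="io2",
          Iops=10000,
      )
      print(volume["VolumeId"], volume["Iops"])
      ```

      You pay for both the 200 GB of storage and the 10,000 IOPS that you provision.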

      Now because with these volume types you're dealing with extreme levels of performance there is also another restriction that you need to be aware of and that's the per instance performance.

      There is a maximum performance which can be achieved between the EBS service and a single EC2 instance.

      Now this is influenced by a few things.

      The type of volumes so different volumes have a different maximum per instance performance level, the type of the instance and then finally the size of the instance.

      You'll find that only the most modern and largest instances support the highest levels of performance and these per instance maximums will also be more than one volume can provide on its own and so you're going to need multiple volumes to saturate this per instance performance level.

      With IO1 volumes you can achieve a maximum of 260,000 IOPS per instance and a throughput of 7,500 MB per second.

      That means you'll need just over four volumes operating at maximum performance to achieve this per-instance limit.

      Oddly enough IO2 is slightly less at 160,000 IOPS for an entire instance and 4,750 MB per second and that's because AWS have split these new generation volume types.

      They've added block express which can achieve 260,000 IOPS and 7,500 MB per second for an instance maximum.

      So it's important that you understand that these are per instance maximums so you need multiple volumes all operating together and think of this as a performance cap for an individual EC2 instance.

      Now these are the maximums for the volume types but you also need to take into consideration any maximums for the type and size of the instance so all of these things need to align in order to achieve maximum performance.

      Now keep these figures locked in your mind it's not so much about the exact numbers but having a good idea about the levels of performance that you can achieve with GP2 or GP3 and then IO1, IO2 and IO2 block express will really help you in real-world situations and in the exam.

      Instance store volumes which we're going to be covering elsewhere in this section can achieve even higher performance levels but this comes with a serious limitation in that it's not persistent but more on that soon.

      Now as a comparison the per instance maximums for GP2 and GP3 is 260,000 IOPS and 7,000 MB per second per instance.

      Again don't focus too much on the exact numbers but you need to have a feel for the ranges that these different types of storage volumes occupy versus each other and versus instance store.

      Now you'll be using provisioned IOPS SSD for anything which needs really low latency or sub millisecond latency, consistent latency and higher levels of performance.

      One common use case is when you have smaller volumes but need super high performance and that's only achievable with IO1, IO2 and IO2 block express.

      Now that's everything that I wanted to cover in this lesson.

      Again if you're doing the sysops or developer streams there's going to be a demo lesson where you'll experience the storage performance levels.

      For the architecture stream this theory is enough.

      At this point though thanks for watching that's everything I wanted to cover go ahead and complete the video and when you're ready I look forward to you joining me in the next.

    1. Disease: Von Willebrand Disease (VWD) type 1

      Patient(s): 13 yo, female and 14 yo, female, both Italian

      Variant: VWF NM_000552.5: c.820A>C p. (Thr274Pro)

      Dominant negative effect

      Heterozygous carrier

      Variant located in the D1 domain on VWF

      Phenotypes:

      heterozygous carriers have no bleeding history

      reduced VWF levels compatible with diagnosis of VWD type 1

      increased FVIII:C/VWF:Ag ratio, suggests reduced VWF synthesis/secretion as a possible pathophysiological mechanism

      Normal VWFpp/VWF:Ag ratio

      Modest alteration of multimeric pattern in plasma and platelet multimers

      plasma VWF showed slight increase of LMWM and decrease of IMWM and HMWM

      Platelet VWF showed quantitative decrease of IMWM, HMWM, and UL multimers

      In silico analysis:

      SIFT, Align-GVGD, PolyPhen-2.0, SNPs&GO, MutationTaster, and PMut all suggest damaging consequences.

      PROVEAN and Effect suggest neutral effect

      according to ACMG guidelines this variant was classified as pathogenic

    1. Sorry boy, but I've been hit by purple rain

      Ventura Highway, track 14 on the album Here & Now by America (1972-11-04)

      It’s unclear whether a connection between this lyric and the famous Prince song (which was released 12 years after “Ventura Highway”) exists, but at least two journalists, from The San Diego Union and the Post-Tribune, wrote that Prince got the phrase “Purple Rain” from here.

      Asked to explain the phrase “purple rain” in “Ventura Highway,” Gerry Beckley responded: “You got me.”

    1. Welcome back and in this lesson I want to talk about two volume types available within AWS GP2 and GP3.

      Now GP2 is the default general purpose SSD based storage provided by EBS.

      GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon.

      Now let's just jump in and get started.

      General Purpose SSD storage provided by EBS was a game changer when it was first introduced.

      It's high performance storage for a fairly low price.

      Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture.

      So I want to get this out of the way first because it will help you understand the different storage types.

      When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB.

      And when you create it the volume is created with an I/O credit allocation.

      Think of this like a bucket.

      So an I/O is one input output operation.

      An I/O credit is a 16 kb chunk of data.

      So an I/O is one chunk of 16 kilobytes in one second.

      If you're transferring a 160 kb file that represents 10 I/O blocks of data.

      So 10 blocks of 16 kb.

      And if you do that all in one second that's 10 credits in one second.

      So 10 I/Ops.

      When you aren't using the volume much you aren't using many I/Ops and you aren't using many credits.

      During periods of high disc load you're going to be pushing a volume hard and because of that it's consuming more credits.

      For example during system boots or backups or heavy database work.

      Now if you have no credits in this I/O bucket you can't perform any I/O on the disc.

      The I/O bucket has a capacity of 5.4 million I/O credits.

      And it fills at the baseline performance rate of the volume.

      So what does this mean?

      Well every volume has a baseline performance based on its size with a minimum.

      So streaming into the bucket at all times is a 100 I/O credits per second refill rate.

      This means as an absolute minimum regardless of anything else you can consume 100 I/O credits per second which is 100 I/Ops.

      Now the actual baseline rate which you get with GP2 is based on the volume size.

      You get 3 I/O credits per second per GB of volume size.

      This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket.

      Anything below 33.33 recurring GB gets this 100 I/O minimum.

      Anything above 33.33 recurring gets 3 times the size of the volume as a baseline performance rate.

      Now you aren't limited to only consuming at this baseline rate.

      By default GP2 can burst up to 3000 I/Ops so you can do up to 3000 input output operations of 16 kb in one second.

      And that's referred to as your burst rate.

      It means that if you have heavy workloads which aren't constant you aren't limited by your baseline performance rate of 3 times the GB size of the volume.

      So you can have a small volume which has periodic heavy workloads and that's OK.

      What's even better is that the credit bucket starts off full, so 5.4 million I/O credits.

      And this means that you could run it at 3000 I/Ops so 3000 I/O per second for a full 30 minutes.

      And that assumes that your bucket isn't filling up with new credits which it always is.

      So in reality you can run at full burst for much longer.

      And this is great if your volumes are used initially for any really heavy workloads because this initial allocation is a great buffer.

      The key takeaway at this point is if you're consuming more I/O credits than the rate at which your bucket is refilling then you're depleting the bucket.

      So if you burst up to 3000 I/Ops and your baseline performance is lower then over time you're decreasing your credit bucket.

      If you're consuming less than your baseline performance then your bucket is replenishing.

      And one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes.

      So you need to ensure that they're staying replenished and not depleting down to zero.

      Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes which are up to 1 TB in size will use this I/O credit architecture.

      But for volumes larger than 1 TB they will have a baseline equal to or exceeding the burst rate of 3000.

      And so they will always achieve their baseline performance as standard.

      They don't use this credit system.

      The maximum I/O per second for GP2 is currently 16000.

      So any volumes above 5.33 recurring TB in size achieves this maximum rate constantly.
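      If it helps to see the GP2 rules as arithmetic, here's a small sketch that just encodes the figures from this lesson: a 100 IOPS floor, 3 IOPS per GB, a 3,000 IOPS burst for smaller volumes and a 16,000 IOPS ceiling. The function name is mine; the numbers are the ones above.

      ```python
      def gp2_iops(volume_gib):
          """Baseline and effective burst IOPS for a GP2 volume of the given size."""
          baseline = max(100, min(3 * volume_gib, 16000))  # 3 IOPS/GB, floor 100, ceiling 16,000
          burst = max(baseline, 3000)                      # smaller volumes can burst to 3,000
          return baseline, burst

      print(gp2_iops(8))      # (100, 3000)    - small boot volume, minimum baseline
      print(gp2_iops(100))    # (300, 3000)
      print(gp2_iops(2000))   # (6000, 6000)   - above 1 TB the baseline exceeds the burst
      print(gp2_iops(6000))   # (16000, 16000) - capped at the GP2 maximum
      ```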

      GP2 is a really flexible type of storage which is good for general usage.

      At the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next.

      GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments.

      Anything where you don't have a reason to pick something else.

      It can be used for boot volumes and as I've mentioned previously it is currently the default.

      Again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.

      You can also use the elastic volume feature to change the storage type between GP2 and all of the others.

      And I'll be showing you how that works in an upcoming lesson if you're doing the SysOps or developer associate courses.

      If you're doing the architecture stream then this architecture theory is enough.

      At this point I want to move on and explain exactly how GP3 is different.

      GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler.

      Every GP3 volume regardless of size starts with a standard 3000 IOPS so 3000 16 kB operations per second and it can transfer 125 MB per second.

      That's standard regardless of volume size, and just like GP2, volumes can range from 1 GB through to 16 TB.

      Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2.

      So if you only intend to use up to 3000 IOPS then it's a no brainer.

      You should pick GP3 rather than GP2.

      If you need more performance then you can pay for up to 16000 IOPS and up to 1000 MB per second of throughput.

      And even with those extras generally it works out to be more economical than GP2.

      GP3 offers a higher max throughput as well so you can get up to 1000 MB per second versus the 250 MB per second maximum of GP2.

      So GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default.

      For now though at the time of creating this lesson GP2 is still the default.

      In summary, GP3 is like if GP2 and IO1, which I'll cover soon, had a baby.

      You get some of the benefits of both in a new type of general purpose SSD storage.

      Now the usage scenarios for GP3 are also much the same as GP2.

      So virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.

      You can safely swap GP2 to GP3 at any point but just be aware that for anything above 3000 IOPS the performance doesn't get added automatically like with GP2 which scales on size.

      With GP3 you would need to add these extra IOPS which come at an extra cost and that's the same with any additional throughput.

      Beyond the 125 MB per second standard it's an additional extra but still even including those extras for most things this storage type is more economical than GP2.
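      As a hedged illustration of paying for those extras, this boto3 sketch moves an existing volume to GP3 and provisions IOPS and throughput beyond the included 3,000 IOPS and 125 MB/s. The volume ID is a placeholder.

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Migrate a volume to GP3 and pay for performance above the included baseline.
      ec2.modify_volume(
          VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
          VolumeType="gp3",
          Iops=6000,        # anything above the included 3,000 IOPS is an extra cost
          Throughput=500,   # MB/s, anything above the included 125 MB/s is an extra cost
      )
      ```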

      At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson.

      Go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.

    1. Wet in formele zin die geen wet in materiële zin is:

      A law in the material sense thus has a general character and is abstract, because it is not directed at one specific person or event but applies generally to a group of persons.

    1. Nesse ponto, não está mais claro quem treina quem, quem é o mestre e quem é o servo

      I keep thinking about the influence of this behaviour on psychoanalysis, especially on Lacan's Discourses, particularly the Discourse of the Master and the Discourse of the Capitalist (the fifth discourse).

    1. Welcome back and in this lesson I want to quickly step through the basics of the Elastic Block Store service known as EBS.

      You'll be using EBS directly or indirectly, constantly as you make use of the wider AWS platform and as such you need to understand what it does, how it does it and the product's limitations.

      So let's jump in and get started straight away as we have a lot to cover.

      EBS is a service which provides block storage.

      Now you should know what that is by now.

      It's storage which can be addressed using block IDs.

      So EBS takes raw physical disks and it presents an allocation of those physical disks and this is known as a volume and these volumes can be written to or read from using a block number on that volume.

      Now volumes can be unencrypted or you can choose to encrypt the volume using KMS and I'll be covering that in a separate lesson.

      Now picture two instances: when you attach a volume to them they see a block device, raw storage, and they can use this to create a file system on top of it such as EXT3, EXT4 or XFS and many more in the case of Linux, or alternatively NTFS in the case of Windows.

      The important thing to grasp is that EBS volumes appear just like any other storage device to an EC2 instance.

      Now storage is provisioned in one availability zone.

      I can't stress enough the importance of this.

      EBS in one availability zone is different from EBS in another availability zone, and different again from EBS in an availability zone of another region.

      EBS is an availability zone service.

      It's separate and isolated within that availability zone.

      It's also resilient within that availability zone so if a physical storage device fails there's some built-in resiliency but if you do have a major AZ failure then the volumes created within that availability zone will likely fail as will instances also in that availability zone.

      Now with EBS you create a volume and you generally attach it to one EC2 instance over a storage network.

      With some storage types you can use a feature called Multi-Attach which lets you attach it to multiple EC2 instances at the same time and this is used for clusters but if you do this the cluster application has to manage it so you don't overwrite data and cause data corruption by multiple writes at the same time.

      You should by default think of EBS volumes as things which are attached to one instance at a time but they can be detached from one instance and then reattached to another.

      EBS volumes are not linked to the instance lifecycle of one instance.

      They're persistent.

      If an instance moves between different EC2 hosts then the EBS volume follows it.

      If an instance stops and starts or restarts the volume is maintained.

      An EBS volume is created, it has data added to it and it's persistent until you delete that volume.
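
      To make that attach, detach and reattach lifecycle concrete, here is a minimal boto3 sketch; the instance IDs are hypothetical placeholders and both instances are assumed to sit in the same availability zone as the volume:

      ```python
      # Hypothetical example: create a volume in one AZ, attach it to an
      # instance in that AZ, then detach it and reattach it to another instance.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
      vol_id = vol["VolumeId"]
      ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

      # Attach to the first instance (same AZ as the volume).
      ec2.attach_volume(VolumeId=vol_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf")

      # Later: detach, wait until available, then reattach to a second instance.
      ec2.detach_volume(VolumeId=vol_id)
      ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
      ec2.attach_volume(VolumeId=vol_id, InstanceId="i-0fedcba9876543210", Device="/dev/sdf")
      ```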

      Now even though EBS is an availability zone based service you can create a backup of a volume into S3 in the form of a snapshot.

      Now I'll be covering these in a dedicated lesson but snapshots in S3 are now regionally resilient so the data is replicated across availability zones in that region and it's accessible in all availability zones.

      So you can take a snapshot of a volume in availability zone A and when you do so EBS stores that data inside a portion of S3 that it manages and then you can use that snapshot to create a new volume in a different availability zone.

      For example availability zone B and this is useful if you want to migrate data between availability zones.

      Now don't worry I'll be covering how snapshots work in detail including a demo later in this section.

      For now I'm just introducing them.
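
      As a minimal sketch of that flow, assuming boto3 and hypothetical IDs, a volume in one availability zone can be snapshotted, restored into another availability zone, and copied to another region like this:

      ```python
      # Hypothetical example: snapshot a volume, restore it in a different AZ,
      # then copy the snapshot to another region.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                 Description="example snapshot")
      snap_id = snap["SnapshotId"]
      ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap_id])

      # New volume in a different AZ of the same region, built from the snapshot.
      ec2.create_volume(AvailabilityZone="us-east-1b", SnapshotId=snap_id,
                        VolumeType="gp3")

      # Copy the snapshot to another region (the copy is initiated from the
      # destination region).
      ec2_syd = boto3.client("ec2", region_name="ap-southeast-2")
      ec2_syd.copy_snapshot(SourceRegion="us-east-1",
                            SourceSnapshotId=snap_id,
                            Description="cross-region copy")
      ```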

      EBS can provision volumes based on different physical storage types, SSD based, high performance SSD and volumes based on mechanical disks and it can also provision different sizes of volumes and volumes with different performance profiles all things which I'll be covering in the upcoming lessons.

      For now again this is just an introduction to the service.

      The last point which I want to cover about EBS is that you're billed using a gigabyte-per-month metric, so the price of one gig for one month would be the same as two gig for half a month and the same as half a gig for two months.

      Now there are some extras for certain types of volumes for certain enhanced performance characteristics but I'll be covering that in the dedicated lessons which are coming up next.
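
      As a tiny illustration of that metric (the per-GB-month rate below is an assumed placeholder, not a real AWS price), the three allocations mentioned above all cost the same:

      ```python
      # Hypothetical example of the GB-month billing metric: cost scales with
      # size multiplied by time, so these three allocations are equivalent.
      PRICE_PER_GB_MONTH = 0.08  # assumed example rate, not an actual AWS price

      def ebs_cost(size_gb: float, months: float) -> float:
          return size_gb * months * PRICE_PER_GB_MONTH

      print(ebs_cost(1, 1))    # 1 GB for one month
      print(ebs_cost(2, 0.5))  # 2 GB for half a month -> same cost
      print(ebs_cost(0.5, 2))  # 0.5 GB for two months -> same cost
      ```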

      For now before we finish this service introduction let's take a look visually at how this architecture fits together.

      So we're going to start with two regions in this example, that's US-EAST-1 and AP-SOUTHEAST-2, and then in those regions we've got some availability zones, AZ A and AZ B, and then another availability zone in AP-SOUTHEAST-2, and then finally the S3 service which is running in all availability zones in both of those regions.

      Now EBS as I keep stressing and I will stress this more is availability zone based so in the cut-down example which I'm showing in US-EAST-1 you've got two availability zones and so two separate deployments of EBS one in each availability zone and that's just the same architecture as you have with EC2.

      You have different sets of EC2 hosts in every availability zone.

      Now visually let's say that you have an EC2 instance in availability zone A.

      You might create an EBS volume within that same availability zone and then attach that volume to the instance so critically both of these are in the same availability zone.

      You might have another instance which this time has two volumes attached to it and over time you might choose to detach one of those volumes and then reattach it to another instance in the same availability zone and that's doable because EBS volumes are separate from EC2 instances.

      It's a separate product with separate life cycles.

      Now you can have the same architecture in availability zone B where volumes can be created and then attached to instances in that same availability zone.

      What you cannot do, and I'm stressing this for the 57th time (small print: it might not actually be 57, but it's close), is the following.

      What I'm stressing is that you cannot communicate cross availability zone with storage.

      So the instance in availability zone B cannot communicate with and so logically cannot attach to any volumes in availability zone A.

      It's an availability zone service so no cross AZ attachments are possible.

      Now EBS replicates data within an availability zone so the data on a volume it's replicated across multiple physical devices in that AZ but and this is important again the failure of an entire availability zone is going to impact all volumes within that availability zone.

      Now to resolve that you can snapshot volumes to S3 and this means that the data is now replicated as part of that snapshot across AZs in that region so that gives you additional resilience and it also gives you the ability to create an EBS volume in another availability zone from this snapshot.

      You can even copy the snapshot to another AWS region, in this example AP-SOUTHEAST-2, and once you've copied the snapshot it can be used in that other region to create a volume, and that volume can then be attached to an EC2 instance in that same availability zone in that region.

      So that at a high level is the architecture of EBS.

      Now depending on what course you're studying there will be other areas that you need to deep dive on so over the coming section of the course we're going to be stepping through the features of EBS which you'll need to understand and these will differ depending on the exam but you will be learning everything you need for the particular exam that you're studying for.

      At this point that's everything I wanted to cover so go ahead finish this lesson and when you're ready I look forward to you joining me in the next.

    1. Chomsky has long been an opponent of the statistical learning tradition of language modeling, essentially claiming that it does not provide insight about what humans know about languages, and that engineering success probably can’t be achieved without explicitly incorporating important mathematical facts about the underlying structure of language
    1. also Music, whether vocal, or instrumental: herein the ancient Philosophers did so exercise themselves, that he was reputed unlearned, and forced to sing to the Myrtle, who refused the Harp in festivals, as is declared of Themistocles: in Music was Socrates instructed, and Plato himself, who concluded him not harmoniously compounded, that delighted not in Musical harmony: Pythagoras was very famous in the same, who is said to have used the symphony of music morning, and evening to compose the minds of his disciples: for this is a peculiar virtue of Music, to quicken or refresh the affections by the different musical measures: So the Phrygian tune was by the Greeks termed warlike, because it was sung in war, and upon engagement, and had a singular virtue in stirring up the Spirits of the Soldiers; instead of which the Ionic is sometimes used for the same purpose, which was formerly esteemed

      It appears that we are still in the period where all intellectual arts (music, mathematics, war tactics, etc.) are expressions of one and the same phenomenon of the mind, and work off of each other, rather than the artificial separations of Chemistry and other disciplines we see later.

    1. i.e. an ethical pedagogy must be a critical one

      There are a variety of important, ethical pedagogies that don't involve imposing one's political views on one's students, as this author suggests.

    2. Critical Pedagogy is an approach to teaching and learning predicated on fostering agency and empowering learners (implicitly and explicitly critiquing oppressive power structures).

      This seems narrow to me: teaching contributes to agency and learning in many ways beyond critiquing power structures, e.g. by enhancing attention, calling into question implicit cognitive biases, and equipping students with habits and tools that allow them to extract greater meaning from, or probe hidden assumptions in, all kinds of texts. In my view, it would be a consequential reduction to understand all of this only in terms of critiquing oppressive power structures.

    3. rites, “It doesn’t matter to me if my classroom is a little rectangle in a building or a little rectangle above my keyboard. Doors are rectangles; rectangles are portals.

      This terrifies me! I always have screens in the classroom because we are so often watching clips, but I am afraid of all our screenified minds and want to resist the dissolution of rectangles in general...

    4. How can we build platforms that support learning across age, race, culture, gender, ability, geography?

      Interesting that class is missing here, when the digital divide remains a real challenge to online access....

    5. objective, quantifiable, apolitical

      of course education is not alone - almost every sphere of humanistic knowledge has been eclipsed by the logic of data analytics.

    6. Paulo Freire, Pedagogy of the Oppressed

      As a historian, I always want to know what year something was published!

    7. “content

      Or "coverage"

    1. American attitudes toward international affairs followed the advice given by President George Washington in his 1796 Farewell Address. Washington had urged his countrymen to avoid “foreign alliances, attachments, and intrigues”,

      It’s interesting that George Washington warned to stay out of foreign affairs, considering that today we are more involved with other countries than any other country in the world.

    1. Post-conventional

      Not a universal experience that is attained; especially within collectivist cultures.

    2. zone of proximal development

      Being able to carry something out after all, by means of help.

    3. scaffolding

      Helping, and in doing so stimulating each other toward higher-level thinking.

    4. Peers are a powerful agent of enculturation

      Age-mates (peers).

    5. The study found that the economic/utilitarian value of having children decreased as socioeconomic development increased. However, the psychological value did not change

      Material independence is not incompatible with emotional interdependence; it is possible to be economically self-sufficient while still being emotionally connected to others and maintaining close relationships.

    1. Argent vive

      Cambridge's "Dictionary of Alchemical Imagery" asserts that Argent vivre is synonymous with mercury and must be combined with Sulfur to produce the philoshopher's stone. Interestingly, Sulfur is a popular snake repellent, so perhaps there is something about these two metals being oppositional that makes them more powerful together.

    1. 4. Conclusions

      Two things are missing: 1) Cramér's V / Phi, 2) a test of proportions
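
      As a minimal sketch of those two missing pieces (in Python for illustration; the practical itself may use other tooling, and the counts below are hypothetical placeholders), Phi / Cramér's V and a two-sample test of proportions could be computed like this:

      ```python
      # Hypothetical example: association strength (Phi / Cramér's V) and a
      # test of proportions for two recoded dichotomous variables.
      import numpy as np
      from scipy.stats import chi2_contingency
      from statsmodels.stats.proportion import proportions_ztest

      # Rows: sense of injustice (low / high); columns: justifies violence (no / yes).
      table = np.array([[420, 180],
                        [350, 260]])

      chi2, p, dof, _ = chi2_contingency(table)
      n = table.sum()
      phi = np.sqrt(chi2 / n)                                   # Phi (2x2 case)
      cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # Cramér's V

      # Test of proportions: share justifying violence in each injustice group.
      successes = table[:, 1]
      totals = table.sum(axis=1)
      z, p_prop = proportions_ztest(successes, totals)
      print(phi, cramers_v, p, z, p_prop)
      ```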

    2. We see that the correlation between the recoded sense of distributive injustice (sj_gerente_rec) and the justification of violence for social change (jv_cambio_rec) is positive, very small, and statistically significant (r = 0.11; p < 0.05).

      So, in terms of answering the research question ...

    3. We see that the correlation between the sense of distributive injustice (sj_gerente) and the justification of violence for social change (jv_cambio_rec) is positive, small, and statistically significant (r = 0.11; p < 0.05)

      the substantive interpretation of this is still missing

    4. On the other hand, the sense of distributive injustice is measured with an indicator called the justice evaluation (Jasso, 1980). It represents how much justice people perceive in the distribution of rewards in a given situation. In this case, it represents the evaluation of how fair the income distribution for a hypothetical manager is, insofar as the manager represents the upper end of the occupational spectrum. The indicator (in a simplified version) reads as follows:

      This is hard to follow; in summarizing and trying to simplify, the meaning gets lost. Give details of how this is constructed, or choose another item for the practical that does not require so much explanation; I would go for something simpler. The justification-of-violence part already requires explanation, and that is enough. That said, you can still try to explain it ...

    5. Originally, this variable is ordinal; however, for the purposes of this practical's example, we will work with the variable recoded as follows:

      before this, show a plot of the distribution of responses across the different values

    6. jv_control Justification of violence for social control 3407 0.2926544 1.4029938 0.8712963 4 (1-5) 1 jv_cambio

      put these in order rather than interleaving them, to avoid confusion.

    7. Question 1: To what extent are the sense of distributive injustice and the justification of violence for social change related in Chile in 2019? H1: The greater the sense of distributive injustice, the greater the justification of violence for social change. Question 2: To what extent are the sense of distributive injustice and the justification of violence for social control related in Chile in 2019? H2: The greater the sense of distributive injustice, the lower the justification of violence for social control.

      Start with the questions, and shorten the introductory paragraph a lot, 150 words at most. Besides shortening it, sharpen the focus, since despite its length it does not draw the key distinctions tied to the exercise. Do not include concepts that will not be defined and that may add confusion (dominance, etc.). Define what justification of violence is in its two main variants, and then why it would be related to distributive justice. Then anticipate the operationalization, since otherwise it is unclear, for example, what the managers have to do with any of this.

    8. ijusticia

      injusticia

    1. eLife Assessment

      Wittkamp et al. investigated the spatiotemporal dynamics of expectation of pain using an original fMRI-EEG approach. The methods are solid and the evidence for a substantially different neural representation between the anticipatory and the actual pain period is convincing. These important findings are discussed within a general framework that encompasses their research questions, hypotheses, and analysis of results. Although the choice of conditions and their influence on the results may admit different interpretations, the manuscript is strong and contributes beneficial insights to the field.

    2. Reviewer #1 (Public review):

      Summary:

      In this important paper the authors investigate the temporal dynamics of expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis they report that expectations largely share the same set of activations before the administration of the painful stimulus and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the dorsolateral prefrontal cortex (DLPFC), the anterior insula and the anterior cingulate cortex. The strength of evidence is convincing, the methods are solid, but a few alternative interpretations about the findings related to the control group, as well as a more in-depth discussion on the correlations between the BOLD and EEG signals, would strengthen the manuscript.

      Strengths:

      In line with open science principles, the article presents the data and the results in a complete and transparent fashion. From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences and mapped on a combination of what we expect and what we get. New research methods, questions and analyses are needed to capture this evolving process.

      Weaknesses:

      The authors have addressed my concerns about the control condition and made some adjustments, namely acknowledging that participants cannot be "expectations" free and investigating whether scores in the control condition are simply due to a "regression to the mean".

      General considerations and reflections

      Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors' choice of having 3 groups of positive, negative and "neutral" expectations is to be praised. On the other hand, also control groups form their expectations, and this can constitute a confounder in every experiment using expectation manipulation, if not appropriately investigated. The authors have addressed this element in their revised submission.

      In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations about not only the temporal but even the spatial resolution should be acknowledged. This has been done. Given the anatomical and physiological complexity of the cortical connections, as we know from the animal world, it is still well possible that subcircuits are activated also for positive and negative expectations, but cannot be observed due to the limitation of our techniques. Indeed, on an empirical/evolutionary basis, it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses. Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula as a center for coding potential pain, perhaps more of a node in a system that signals potential dangers for the integrity of the body. The rationale for the choice of their EEG band has been outlined.

    3. Reviewer #2 (Public review):

      I appreciate the authors' thorough revision of the manuscript, which has significantly improved its quality. I have no additional comments or requests for further changes.

      However, I remain in slight disagreement regarding the characterization of the neutral condition. My perspective is that it resembles more of a "medium" condition, making it challenging to understand what would be common to "high-medium" and "low-medium" contrasts. I suspect that the neutral condition might represent a state of high uncertainty since participants are informed that the algorithm cannot provide a prediction. From this viewpoint, the observed similarities in effects for both positive and negative expectations may actually reflect differences between certainty and uncertainty rather than the specific expectations themselves.

      Nevertheless, the authors have addressed alternative interpretations of their discussion section, and I have no further requests. The paper is well-executed and demonstrates several strengths: the procedure effectively induced varying levels of expectations with clear impacts on pain ratings. Additionally, the integration of fMRI with EEG is commendable for tracking the transition from anticipatory to pain periods. Overall, the manuscript is strong and contributes valuable insights to the field.

    4. Author response:

      The following is the authors’ response to the original reviews.

      We thank the reviewers for their careful and overall positive evaluation of our work and the constructive feedback! To address the main concerns, we have:

      – Clarified a major misunderstanding of our instructions: Participants were only informed that they would receive different stimuli of medium intensity and were thus not aware that the stimulation temperature remained constant

      – Implemented a new analysis to evaluate how participants rated their expectation and pain levels in the control condition

      – Added a paragraph in the discussion in which we argue that our paradigm is comparable to previous studies

      Below, we provide responses to each of the reviewers’ comments on our manuscript.

      Reviewer #1 (Public Review):

      Summary:  

      In this important paper, the authors investigate the temporal dynamics of expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis, they report that expectations largely share the same set of activations before the administration of the painful stimulus, and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the dorsolateral prefrontal cortex (DLPFC), the anterior insula, and the anterior cingulate cortex. The strength of evidence is convincing, and the methods are solid, but a few alternative interpretations about the findings related to the control group, as well as a more in-depth discussion on the correlations between the BOLD and EEG signals, would strengthen the manuscript.

      Thank you for your positive evaluation! In the revised version of the manuscript, we elaborated on the control condition and the BOLD-EEG correlations in more detail.

      Strengths:  

      In line with open science principles, the article presents the data and the results in a complete and transparent fashion. 

      From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences, and mapped on a combination of what we expect and what we get. New research methods, questions, and analyses are needed to capture these evolving processes.  

      Thank you very much for these positive comments!

      Weaknesses:  

      The control condition is not so straightforward. Across the manuscript it is defined as "no expectation", and in the legend of Figure 1 it is mentioned that the third state would be "no prediction". However, it is difficult to conceive that participants would not have any expectations or predictions. Indeed, in the description of the task it is mentioned that participants were instructed that they would receive stimuli during "intermediate sensitive states". The results of the pain scores and expectations might support the idea that the control condition is situated in between the placebo and nocebo conditions. However, since this control condition was not part of the initial conditioning, and participants had no reference to previous stimuli, one might expect that some ratings might have simply "regressed to the mean" for a lack of previous experience. 

      General considerations and reflections:  

      Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors' choice of having 3 groups of positive, negative, and "neutral" expectations is to be praised. On the other hand, also control groups form their expectations, and this can constitute a confounder in every experiment using expectation manipulation, if not appropriately investigated. 

      Thank you for raising these important concerns! Firstly, as it seems that we did not explain the experimental procedure in a clear fashion, there appeared to be a general misunderstanding regarding our instructions. We want to emphasize that we did not tell participants that the stimulus intensity would always be the same, but that pain stimuli would be different temperatures of medium intensity. Furthermore, our instruction did not necessarily imply that our algorithm detected a state of medium sensitivity, but that the algorithm would not make any prediction, e.g., due to highly fluctuating states of pain sensitivity, or no clear-cut state of high or low pain sensitivity. We changed this in the Methods (ll. 556-560, 601-606, 612-614) and Results (ll. 181-192) sections of the manuscript to clarify these important features of our procedure.

      Then, we absolutely agree that participants explicitly and implicitly form expectations regarding all conditions over time, including the control condition. We carefully considered your feedback and rephrased the control condition, no longer framing it as eliciting “no expectations” but as “neutral expectations” in the revised version of the manuscript. This follows the more common phrasing in the literature and acknowledges that participants indeed build up expectations in the control condition. However, we do still think that we can meaningfully compare the placebo and nocebo condition to the control condition to investigate the neuronal underpinnings of expectation effects. Independently of whether participants build up an expectation of “medium” intensities in the control condition, which caused them to perceive stimuli in line with this expectation, or if they simply perceived the stimuli as they were (of medium intensity) with limited effects of expectations, the crucial difference to the placebo and nocebo conditions is that there was no alteration of perception due to previous experiences or verbal information and no shift of perception from the actual stimulus intensity towards any direction in the control condition. This allowed us to compare the neural basis of a modulation of pain perception in either direction to a condition in which this modulation did not take place. 

      Author response image 1.

      Variability within conditions over time. Relative variability index for expectation (left) and pain ratings (right) per condition and measurement block. 

      Lastly, we want to highlight that our finding of the control condition being rated in between the placebo and nocebo condition is in line with many previous studies that included similar control conditions and advanced our understanding of pain-related expectations (Bingel et al., 2011; Colloca et al., 2010; Shih et al., 2019). We thank the reviewer for the very interesting idea to evaluate the development of ratings in the control condition in more detail and added a new analysis to the manuscript in which we compared how much intra-subject variance was within the ratings of each of the three conditions and how much this variance changed over time. For this aim, we computed the relative variability index (Mestdagh et al., 2018), a measure that quantifies intra-subject variation over multiple ratings, and compared it between the three conditions and the three measurement blocks. We observed differences in variances between conditions for both expectation (F(2,96) = 8.14, p < .001) and pain ratings (F(2,96) = 3.41, p = .037). For both measures, post-hoc tests revealed that there was significantly more variance in the placebo compared to the control condition (both Holm-corrected p < .05), but no difference between control and nocebo. The substantial and comparable variation in pain and expectation ratings in all three conditions (or at least between control and nocebo) shows that participants did not always expect and perceive the same intensity within conditions. Variance in expectation ratings decreased from the first block compared to the other two blocks (F(1.35,64.64) = 5.69, p = .012; both Holm-corrected p < .05), which was not the case for pain ratings. Most importantly, there was no interaction effect of block and condition for either expectation (F(2.65,127.06) = 0.40, p = .728) or pain ratings (F(4,192) = 0.48, p = .748), which implies that expectations were similarly dynamically updated in all conditions over the course of the experiment. This speaks against a "regression to the mean" in the control condition and shows that control ratings fluctuated from trial to trial. We included this analysis and a more in-depth discussion of the choice of conditions in the Results (ll. 219-232) and Discussion (ll. 452-486) sections of the revised manuscript.

      In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations about not only the temporal but even the spatial resolution should be acknowledged. Given the anatomical and physiological complexity of the cortical connections, as we know from the animal world, it is still well possible that subcircuits are activated also for positive and negative expectations, but cannot be observed due to the limitation of our techniques. Indeed, on an empirical/evolutionary basis it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses. 

      We agree that the spatial resolution of fMRI is limited and that our signal is often not able to dissociate different subcircuits. Whether on this basis differential processes occurred cannot be observed in fMRI but is indeed possible. We now include this reasoning in our Discussion (ll. 373-377):

      “Importantly, the spatial resolution of fMRI is limited when it comes to discriminating whether the same pattern of activity is due to identical activation or to activation in different sub-circuits within the same area. Nonetheless, the overlap of areas is an indicator for similar processes involved in a more general preparation process.”

      Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula is a center for coding potential pain, perhaps more of a node in a system that signals potential dangers for the integrity of the body. 

      We appreciate the feedback on our interpretation of our results and agree that the overall network activity most likely determines how a large part of expectations and pain are coded. We therefore adjusted the Discussion, embedding the results in an interpretation considering networks (ll. 427-430, 432-435,438-442 ). 

      The authors analyze the EEG signal between 0.5 to 128 Hz, finding significant results in the correlation between single-trial BOLD and EEG activity in the higher gamma range (see Figure 6 panel C). It would be interesting to understand the rationale for including such high frequencies in the signal, and the interpretation of the significant correlation in the high gamma range. 

      On a technical level, we adapted our EEG processing pipeline from Hipp et al. (2011) who similarly investigated signals up to 128 Hz. Of note, the spectral smoothing was adjusted to match 3/4 octave, meaning that the frequency resolution at 128 Hz is rather broad and does not only contain oscillations at 128 Hz sharp. Gamma oscillations in general have repeatedly been reported in relation to pain and feedforward signals reflecting noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). Strube et al. (2021) reported the highest effects of pain stimulus intensity and prediction error processing at high gamma frequencies (100 and 98 Hz, respectively). These findings could also serve as a basis to interpret our results in this frequency range: If anticipatory activation in the ACC is linked to high gamma oscillations, which appear to play an important role in feedforward signaling of pain intensity and prediction errors, this could indicate that later processing of intensity in this area is already pre-modulated before the stimulus actually occurs. Of note: although not significant, it looks as if the cluster extends further into pain processing on a descriptive level. We added additional explanation regarding the interpretation of the correlation in the Discussion (ll. 414-425):

      “The link between anticipatory activity in the ACC and EEG oscillatory activity was observed in the high gamma band, which is consistent with findings that demonstrate a connection between increased fMRI BOLD signals and a relative shift from lower to higher frequencies (Kilner et al., 2005). Gamma oscillations have been repeatedly reported in the context of pain and expectations and have been interpreted as reflecting feedforward signals of noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). In combination with our findings, this might imply that high frequency oscillations may not only signal higher actual or perceived pain intensity during pain processing (Nickel et al., 2022; Ploner et al., 2017; Strube et al., 2021; Tu et al., 2016), but might also be instrumental in the transfer of directed expectations from anticipation into pain processing.”

      Reviewer #2 (Public Review):  

      I think this is a very promising paper. The combination of EEG and fMRI is unique and original. However, I also have some suggestions that I think could help improve the manuscript. 

      This manuscript reports the findings of an EEG-fMRI study (n = 50) on the effects of expectations on pain. The combination of EEG with fMRI is extremely original and well-suited to study the transition from expectation to perception. However, I think that the current treatment of the data, as well as the way that the manuscript is currently written, does not fully capitalize on the potential of this unique dataset. Several findings are presented but there is currently no clear message coming out of this manuscript. 

      First, one positive point is that the experimental manipulation clearly worked. However, it should be noted that the instructions used are not typical of studies on placebo/nocebo. Participants were not told that the stimulations would be of higher/lower intensity. Rather, they were told that objective intensities were held constant, but that EEG recordings could be used to predict whether they would perceive the stimulus as more or less intense. I think that this is an interesting way to manipulate expectations, but there could have been more justification in the introduction for why the authors have chosen this unusual procedure. 

      Most importantly, we want to emphasize again that participants were not aware that the stimulation temperature was always the same but were informed that they would receive different stimuli of medium intensity. We now clarify this in the revised Results (ll. 190-192) and Methods (ll. 612-614) sections.

      While we agree that our procedure was not typical, we do not think that the manipulation is incomparable to previous studies on pain-related expectations. To our knowledge, either expectations regarding a treatment that changes pain perception (treatment expectancy) or expectations regarding stimulus intensities (stimulus expectancy) are manipulated (see Atlas & Wager, 2014). In our study, participants received a cue that induced expectations in regard to a “treatment”, although in this case the “treatment” came from changes in their own brain activity. This is comparable to studies using TENS devices that are supposedly changing peripheral pain transmission (Skvortsova et al., 2020). Thus, although not typical, our paradigm could be classified as targeting treatment expectancies and allowed us to examine effects on a trial-by-trial level within subjects. We added a paragraph regarding the comparability of our paradigm with previous studies in the Discussion of the revised manuscript (ll. 452-464).

      Also, the introduction mentions that little is known about potential cerebral differences between expectations of high vs. low pain expectations. I think the fear conditioning literature could be cited here. Activations in ACC, SMA, Ins, parahippocampal gyrus, PAG, etc. are often associated with upcoming threat, whereas activations vmPFC/default mode network are associated with safety. 

      We thank you for the suggestion to add literature on fear conditioning. We agree there is some overlap between fear conditioning and expectation effects in humans, but we also believe there are fundamental differences regarding their underlying processes and paradigms, e.g. expectation effects are not driven by classical learning algorithms but act to a large extent as self-fulfilling prophecies (see e.g. Jepma et al., 2018). However, we now acknowledge the similarities between the modalities, e.g. in the recruitment of the insula and the vmPFC, in our Introduction (ll. 132-136).

      The fact that the authors didn't observe a clearer distinction between high and low expectations here could be related to their specific instructions that imply that the stimulus is the same and that it is the subjective perception that is expected to change. In any case, this is a relatively minor issue that is easy to address. 

      We apologize again for the lack of clarity in our instructions: Participants were unaware that they would receive the exact same stimulus. The clear effects of the different conditions on expectation and pain ratings also challenge the notion that participants always expected the same level of stimulation and/or perception. Additionally, if participants were indeed expecting a consistent level of intensity in all conditions, one would also assume to see the same anticipatory activation in the control condition as in the placebo and nocebo conditions, which is not the case. Thus, we respectfully disagree that the common effects might be explained by our instructions but would argue that they indeed reflect common (anticipatory) processes of positive and negative expectations.

      Towards the end of the introduction, the authors present the aims of the study in mainly exploratory terms: 

      (1) What are the differences between anticipation and perception? 

      (2) What regions display a difference between high and low expectations (high > low or low < high) vs. an effect of expectation regardless of the direction (high and low different than neutral)? 

      I think these are good questions, but the authors should provide more justification, or framework, for these questions. More specifically, what will they be able to conclude based on their observations? 

      For instance (note that this is just an example to illustrate my point. I encourage the authors to come up with their own framework/predictions) : 

      (1) Possibility #1: A certain region encodes expectations in a directed fashion (high > low) and that same region also responds to perception in the same direction (high > low). This region would therefore modulate pain by assimilating perception towards expectations. 

      (2) Possibility # 2: different regions are involved in expectation and perception. Perhaps this could mean that certain regions influence pain processing through descending facilitation for instance...  

      Thank you for pointing out that our hypotheses were not crafted carefully enough. We tried to give better explanations for the possible interpretations of our hypotheses. Additionally, we interpreted our results on the background of a broader framework for placebo and nocebo effects (predictive coding) to derive possible functions of the described brain areas. We embedded this in our Introduction (ll. 74-86, 158-175) and Discussion (ll. 384-388), interpreting the anticipatory activity and the activity during pain processing in the context of expectation formation as described in Büchel et al. (2014).

      Interpretation derived from our framework (ll. 384-388):

      e.g.: “Following the framework of predictive coding, our results would suggest that the DPMS is the network responsible for integrating ascending signals with descending signals in the pain domain and that this process is similar for positive and negative valences during anticipation of pain but differentiates during pain processing.”

      Regarding analyses, I think that examining the transition from expectations to perception is a strong angle of the manuscript given the EGG-fMRI nature of the study. However, I feel that more could have been done here. One problem is that the sequence of analyses starts by identifying an fMRI signal of interest and then attempts to find its EEG correlates. The problem is that the low temporal resolution of fMRI makes it difficult to differentiate expectation from perception, which doesn't make this analysis a good starting point in my opinion. Why not start by identifying an EEG signal that differentiates perception vs expectation, and then look for its fMRI correlates?  

      We appreciate your feedback on the transition from expectations to perceptions and also think that additional questions could be answered with our data set. However, based on the literature we had specific hypotheses regarding specific brain areas, and we therefore decided to start from the fMRI data with the superior spatial resolution; EEG was used to focus on the temporal dynamics within the areas important for anticipatory processes. We share the view that many different approaches to analyzing our data are possible. On the other hand, identifying relevant areas based on EEG characteristics carries even more uncertainty due to the spatial filtering of the EEG signal. For the research question of this study, a more accurate evaluation of the involved areas and the related representation was more important. We therefore decided to only implement the procedure already present in the manuscript.

      Finally, I found the hypotheses on "valenced" vs. "absolute" effects a little bit more difficult to follow. This is because "neutral" is not really neutral: it falls in between low and high. If I follow correctly, participants know that the temperature is always the same. Therefore, if they are told that the machine cannot predict whether their perception is going to be low or high, then it must be because it is likely to be in between. Ratings of expectation and pain ratings confirm that. The neutral condition is not "devoid" of expectations as the authors suggest.

      Therefore, it would make sense to look at regions with the following pattern low > neutral > high, or vice-versa, low < neutral < high. Low & high being different than neutral is more difficult to interpret. I don't think that you can say that it reflects "absolute" expectations because neutral is also the expectation of a medium temperature. Perhaps it reflects "certainty/uncertainty" or something like that, but it is not clear that it reflects "expectations". 

      Thank you for your valuable feedback! We considered your concerns about the interpretation of our results and completely agree that the control condition cannot be interpreted as void of expectations (ll. 119-123). We therefore evaluated the control condition in more detail in a separate analysis (ll. 219-232) and integrated a new assessment of the conditions into the Discussion (ll. 465-486). We changed the phrasing of our control condition to “neutral expectations”, as we agree that the control condition is not void of expectations and this phrasing is more in line with other studies (e.g. Colloca et al., 2010; Freeman et al., 2015; Schmid et al., 2015). We would argue that the neutral expectations can still be meaningfully compared to positive and negative expectations because only the latter shift expectations and perception in one direction. Thus, we changed our wording throughout the manuscript to acknowledge that we indeed did not test for general effects of expectations vs. no expectations, but for effects of directed expectations. Please also see our reasoning regarding the control condition in response to Reviewer 1, in which we addressed the interpretation of the control condition. We therefore still believe that the contrasts that we calculated between conditions are valid. The proposed new contrast largely overlaps with our differential contrast low>high and vice versa already reported in the manuscript (for additional results also see Supplements).

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Figure 6, panel C. The figure mentions Anterior Cingulate Cortex R, whereas the legend mentions left ACC. Please check. 

      Thanks for catching this, we changed the figure legend accordingly.

      Reviewer #2 (Recommendations For The Authors):  

      - I don't think that activity during the rating of expectations is easily interpretable. I think I would recommend not reporting it. 

      The majority of participants completed the expectation rating relatively quickly (M = 2.17 s, SD = 0.35 s), which resulted in the overlap between the DLPFC EEG cluster and the expectation rating encompassing only a limited portion of the cluster (~ 1 s). We agree that this activity still is more difficult to interpret, yet we have decided to report it for reasons of completeness.

      - The effects on SIIPS are interesting. I think that it is fine to present them as a "validation" of what was observed with pain ratings, but it also seems to give a direction to the analyses that the authors don't end up following. For instance, why not try other "signatures" like the NPS or signatures of pain anticipation? Also, why not try to look at EEG correlates of SIIPS? I don't think that the authors "need" to do any of that, but I just wanted to let them know that SIIPS results may stir that kind of curiosity in the readers.  

      While this would be indeed very interesting, these additional analyses are not directly related to our current research question. We fear that too many analyses could be confusing for the readers. Nonetheless, we are grateful for your suggestion and will implement additional brain signatures in future studies. 

      - The shock was calibrated to be 60%. Why not have high (70%) and low (30%) conditions at equal distances from neutral, like 80% and 40% for instance? The current design makes it hard to distinguish high from control. Perhaps the "common" effects of high + low are driven by a deactivation for low (30%)?  

      We appreciate your feedback! We adjusted the temperature during the test phase to counteract the habituation typically happening with heat stimuli. We believe that this was a good measure, as participants rated the control condition at roughly VAS 50 (M = 51.40), which was our target temperature and would then be equidistant to the VAS 70 and VAS 30 used during conditioning, when no habituation should have taken place yet. We further tested whether participants rated placebo and nocebo trials at equal distances from the control condition and found no evident bias for either of the conditions. To do this, we computed the individual placebo effect (control minus placebo) and nocebo effect (nocebo minus control) for each participant during the test phase and statistically compared whether they differed in terms of magnitude. There was no significant difference between placebo and nocebo effects for both expectation (placebo effect M = 14.25 vs. nocebo effect M = 17.22, t(49) = 1.92, p = .061) and pain ratings (placebo effect M = 6.52 vs. nocebo effect M = 5.40, t(49) = -1.11, p = .274). This suggests that our expectation manipulation resulted in comparable shifts in expectation and pain ratings away from the control condition for both the placebo and nocebo condition, and thus hints against any bias of the conditioning temperatures. Please also note that the analysis of the common effects was masked for differences between the high and low conditions, therefore the effects cannot be driven by one condition by itself.
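
      As a minimal sketch of that symmetry check (with simulated placeholder ratings, not the study data), the per-participant placebo and nocebo effects can be compared with a paired t-test like this:

      ```python
      # Hypothetical example: compare the magnitude of individual placebo and
      # nocebo effects with a paired t-test (simulated ratings, one value per
      # participant and condition).
      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(0)
      control = rng.normal(51, 10, 50)            # simulated control ratings
      placebo = control - rng.normal(6, 5, 50)    # simulated placebo ratings
      nocebo = control + rng.normal(5, 5, 50)     # simulated nocebo ratings

      placebo_effect = control - placebo          # larger = stronger relief
      nocebo_effect = nocebo - control            # larger = stronger increase

      t, p = ttest_rel(placebo_effect, nocebo_effect)
      print(t, p)
      ```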

      - If I understand correctly, all fMRI contrasts were thresholded with FWE. This is fine, but very strict. The authors could have opted for FDR. Maybe I missed something here....  

      While it is true that FDR is the more liberal approach, it is not valid for spatially correlated fMRI data and is no longer available in SPM for the correction of multiple comparisons. The newly implemented topological peak-based FDR correction is comparably sensitive to the FWE correction (see Chumbley et al.). We opted for the slightly more conservative approach in our preregistration (pFWE < .05), therefore a change of the correction is not possible.

      Altogether, I think that this is a great study. The combination of EEG and fMRI is truly unique and affords many opportunities to examine the transition from expectations to perception. The experimental manipulation of expectations seems to have worked well, and there seem to be very promising results. However, I think that more could have been done. At least, I would recommend trying to give more of a theoretical framework to help interpret the results.  

      We are very grateful for your positive feedback. We took your suggestion seriously and tried to implement a more general framework from the literature (see Büchel et al., 2014) to provide a better explanation for our results.

      References

      Atlas, L. Y., & Wager, T. D. (2014). A meta-analysis of brain mechanisms of placebo analgesia: Consistent findings and unanswered questions. Handbook of Experimental Pharmacology, 225, 37–69. https://doi.org/10.1007/978-3-662-44519-8_3

      Bingel, U., Wanigasekera, V., Wiech, K., Ni Mhuircheartaigh, R., Lee, M. C., Ploner, M., & Tracey, I. (2011). The effect of treatment expectation on drug efficacy: Imaging the analgesic benefit of the opioid remifentanil. Science Translational Medicine, 3(70), 70ra14. https://doi.org/10.1126/scitranslmed.3001244

      Büchel, C., Geuter, S., Sprenger, C., & Eippert, F. (2014). Placebo analgesia: A predictive coding perspective. Neuron, 81(6), 1223–1239. https://doi.org/10.1016/j.neuron.2014.02.042

      Colloca, L., Petrovic, P., Wager, T. D., Ingvar, M., & Benedetti, F. (2010). How the number of learning trials affects placebo and nocebo responses. Pain, 151(2), 430–439. https://doi.org/10.1016/j.pain.2010.08.007

      Freeman, S., Yu, R., Egorova, N., Chen, X., Kirsch, I., Claggett, B., Kaptchuk, T. J., Gollub, R. L., & Kong, J. (2015). Distinct neural representations of placebo and nocebo effects. NeuroImage, 112, 197–207. https://doi.org/10.1016/j.neuroimage.2015.03.015

      Hipp, J. F., Engel, A. K., & Siegel, M. (2011). Oscillatory synchronization in large-scale cortical networks predicts perception. Neuron, 69(2), 387–396. https://doi.org/10.1016/j.neuron.2010.12.027

      Jepma, M., Koban, L., van Doorn, J., Jones, M., & Wager, T. D. (2018). Behavioural and neural evidence for self-reinforcing expectancy effects on pain. Nature Human Behaviour, 2(11), 838–855. https://doi.org/10.1038/s41562-018-0455-8

      Kilner, J. M., Mattout, J., Henson, R., & Friston, K. J. (2005). Hemodynamic correlates of EEG: A heuristic. NeuroImage, 28(1), 280–286. https://doi.org/10.1016/j.neuroimage.2005.06.008

      Nickel, M. M., Tiemann, L., Hohn, V. D., May, E. S., Gil Ávila, C., Eippert, F., & Ploner, M. (2022). Temporal-spectral signaling of sensory information and expectations in the cerebral processing of pain. Proceedings of the National Academy of Sciences of the United States of America, 119(1). https://doi.org/10.1073/pnas.2116616119

      Ploner, M., Sorg, C., & Gross, J. (2017). Brain Rhythms of Pain. Trends in Cognitive Sciences, 21(2), 100–110. https://doi.org/10.1016/j.tics.2016.12.001

      Schmid, J., Bingel, U., Ritter, C., Benson, S., Schedlowski, M., Gramsch, C., Forsting, M., & Elsenbruch, S. (2015). Neural underpinnings of nocebo hyperalgesia in visceral pain: A fMRI study in healthy volunteers. NeuroImage, 120, 114–122. https://doi.org/10.1016/j.neuroimage.2015.06.060

      Shih, Y.‑W., Tsai, H.‑Y., Lin, F.‑S., Lin, Y.‑H., Chiang, C.‑Y., Lu, Z.‑L., & Tseng, M.‑T. (2019). Effects of Positive and Negative Expectations on Human Pain Perception Engage Separate But Interrelated and Dependently Regulated Cerebral Mechanisms. Journal of Neuroscience, 39(7), 1261–1274. https://doi.org/10.1523/JNEUROSCI.2154-18.2018

      Skvortsova, A., Veldhuijzen, D. S., van Middendorp, H., Colloca, L., & Evers, A. W. M. (2020). Effects of Oxytocin on Placebo and Nocebo Effects in a Pain Conditioning Paradigm: A Randomized Controlled Trial. The Journal of Pain, 21(3-4), 430–439. https://doi.org/10.1016/j.jpain.2019.08.010

      Strube, A., Rose, M., Fazeli, S., & Büchel, C. (2021). The temporal and spectral characteristics of expectations and prediction errors in pain and thermoception. ELife, 10. https://doi.org/10.7554/eLife.62809

      Tu, Y., Zhang, Z., Tan, A., Peng, W., Hung, Y. S., Moayedi, M., Iannetti, G. D., & Hu, L. (2016). Alpha and gamma oscillation amplitudes synergistically predict the perception of forthcoming nociceptive stimuli. Human Brain Mapping, 37(2), 501–514. https://doi.org/10.1002/hbm.23048

    1. Machine learning is a young field,

      ? young? Author is in their 20s, case of 'my first encounter with something means it is globally new'?

    2. I expect AI to get much better than it is today. Research on AI systems has shown that they predictably improve given better algorithms, more and better quality data, and more computational power. Labs are in the process of further scaling up their clusters—the groupings of computers that the algorithms run on.

      Ah, an article based on the assumption of future improvement. Compute and data are limiting factors, and you will end up weighing whether the compute footprint is more efficient than doing the task yourself. Data is even more limiting, as the most meaningful stuff is qualitative rather than quantitative, and stats on the qualitative stuff won't give you meaning (LLMs case in point).

    3. The shared goal of the field of artificial intelligence is to create a system that can do anything. I expect us to soon reach it.

      Is it though? With respect to general AI, that is as far away as before, imo. The rainbow never gets nearer, because it is dependent on your position.

    4. The economically and politically relevant comparison on most tasks is not whether the language model is better than the best human, it is whether they are better than the human who would otherwise do that task

      True, and that is where this fails outside of bullshit tasks. The unmentioned assumption here is that algogen output can have meaning, rather than just coherence and plausibility.

    5. The general reaction to language models among knowledge workers is one of denial.

      equates 'content production' with knowledge work

    6. my ability to write large amounts of content quickly

      right. 'content production' where the actual meaning isn't relevant?

    1. eLife Assessment

      This valuable study provides convincing evidence that white matter diffusion imaging of the right superior longitudinal fasciculus might help to develop a predictive biomarker of back pain chronicity. The results are based on a discovery-replication approach with different cohorts, but the sample size is limited. The findings will interest researchers interested in the brain mechanisms of chronic pain and in developing brain-based biomarkers of chronic pain.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      In this paper, Misic et al showed that white matter properties can be used to classify subacute back pain patients that will develop persisting pain.

      Strengths:

      Compared to most previous papers studying associations between white matter properties and chronic pain, the strength of the method is to perform a prediction in unseen data. Another strength of the paper is the use of three different cohorts. This is an interesting paper that provides a valuable contribution to the field.

      We thank the reviewer for emphasizing the strength of our paper and the importance of validation on multiple unseen cohorts.

      Weaknesses:

      The authors imply that their biomarker could outperform traditional questionnaires to predict pain: "While these models are of great value showing that few of these variables (e.g. work factors) might have significant prognostic power on the long-term outcome of back pain and provide easy-to-use brief questionnaires-based tools, (21, 25) parameters often explain no more than 30% of the variance (28-30) and their prognostic accuracy is limited.(31)". I don't think this is correct; questionnaire-based tools can achieve far greater prediction than their model in about half a million individuals from the UK Biobank (Tanguay-Sabourin et al., A prognostic risk score for the development and spread of chronic pain, Nature Medicine 2023).

      We agree with the reviewer that we might have underestimated the prognostic accuracy of questionnaire-based tools, especially the strong predictive accuracy shown by Tanguay-Sabourin et al. (2023). In this revised version, we have changed both the introduction and the discussion to reflect the questionnaire-based prognostic accuracy reported in that seminal work.

      In the introduction (page 4, lines 3-18), we now write:

      “Some studies have addressed this question with prognostic models incorporating demographic, pain-related, and psychosocial predictors.1-4 While these models are of great value, showing that a few of these variables (e.g., work factors) might have significant prognostic power on the long-term outcome of back pain, their prognostic accuracy is limited,5 with parameters often explaining no more than 30% of the variance.6-8 A recent notable study in this regard developed a model based on easy-to-use brief questionnaires to predict the development and spread of chronic pain in a variety of pain conditions, capitalizing on a large dataset obtained from the UK-BioBank.9 This work demonstrated that only a few features related to the assessment of sleep, neuroticism, mood, stress, and body mass index were enough to predict persistence and spread of pain with an area under the curve of 0.53-0.73. Yet, this study is unique in showing such a predictive value of questionnaire-based tools. Neurobiological measures could therefore complement existing prognostic models based on psychosocial variables to improve overall accuracy and discriminative power. More importantly, neurobiological factors such as brain parameters can provide a mechanistic understanding of chronicity and its central processing.”

      And in the conclusion (page 22, lines 5-9), we write:

      “Integrating findings from studies that used questionnaire-based tools and showed remarkable predictive power9 with neurobiological measures that can offer mechanistic insights into chronic pain development, could enhance predictive power in CBP prognostic modeling.”

      Moreover, the main weakness of this study is the sample size. It remains small despite having 3 cohorts. This is problematic because results are often overfitted in such a small sample size brain imaging study, especially when all the data are available to the authors at the time of training the model (Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research, Nature Reviews in Neuroscience 2017). Thus, having access to all the data, the authors have a high degree of flexibility in data analysis, as they can retrain their model any number of times until it generalizes across all three cohorts. In this case, the testing set could easily become part of the training making it difficult to assess the real performance, especially for small sample size studies.

      The reviewer raises a very important point regarding the limited sample size and the methodology intrinsic to model development and testing. We acknowledge the small sample size in the “Limitations” section of the discussion. In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA-based model is a simple cut-off approach that does not include any learning or hidden layers, and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site. Regarding our SVC approach, we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data, with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitations section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):

      “In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development. We believe that the data presented here are nevertheless robust, since they were validated across multiple sites, but they need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data, with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model.”
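
      For illustration, the train/test separation described above can be sketched as follows. This is a minimal sketch assuming scikit-learn, with toy data and a hypothetical hyperparameter grid; it is not the authors' code, and in the study the held-out set corresponds to the unseen validation cohorts.

      ```python
      # Minimal sketch of strict train/test separation with an SVC:
      # hyperparameters are tuned by cross-validation *within* the training data,
      # and the held-out set is used exactly once for the final evaluation.
      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import GridSearchCV
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_train, y_train = rng.normal(size=(28, 10)), np.tile([0, 1], 14)  # toy "discovery" data
      X_test, y_test = rng.normal(size=(24, 10)), np.tile([0, 1], 12)    # toy held-out cohort

      model = GridSearchCV(
          make_pipeline(StandardScaler(), SVC(kernel="linear")),
          param_grid={"svc__C": [0.01, 0.1, 1, 10]},  # hypothetical grid
          cv=5,
          scoring="roc_auc",
      )
      model.fit(X_train, y_train)  # the test set plays no role in fitting or tuning

      test_auc = roc_auc_score(y_test, model.decision_function(X_test))
      print(f"held-out AUC = {test_auc:.2f}")
      ```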

      Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data is Cohen’s d > 1.

      Even if the performance was properly assessed, their models show AUCs between 0.65-0.70, which is usually considered as poor, and most likely without potential clinical use. Despite this, their conclusion was: "This biomarker is easy to obtain (~10 min of scanning time) and opens the door for translation into clinical practice." One may ask who is really willing to use an MRI signature with a relatively poor performance that can be outperformed by self-report questionnaires?

      The reviewer is correct; the model performance is only fair, which limits its usefulness for clinical translation. We wanted to emphasize that obtaining diffusion images can be done in a short period of time and, hence, as such models’ predictive accuracy improves, clinical translation becomes closer to reality. In addition, our findings are based on older diffusion data and limited sample sizes coming from different sites and different acquisition sequences. This by itself would limit the accuracy, especially since the evidence shows that sample size also affects model performance (i.e., testing AUC).10 In the revision, we re-worded the sentence mentioned by the reviewer to reflect the points discussed here. This also motivates us to collect a larger and more homogeneous sample. In the limitations section of the discussion, we now write (page 21, lines 6-9):

      “Even though our model performance is fair, which currently limits its usefulness for clinical translation, we believe that future models would further improve accuracy by using larger homogenous sample sizes and uniform acquisition sequences.”

      Overall, these criticisms are more about the wording sometimes used and the inference they made. I think the strength of the evidence is incomplete to support the main claims of the paper.

      Despite these limitations, I still think this is a very relevant contribution to the field. Showing predictive performance through cross-validation and testing in multiple cohorts is not an easy task and this is a strong effort by the team. I strongly believe this approach is the right one and I believe the authors did a good job.

      We thank the reviewer for acknowledging that our effort and approach were useful.

      Minor points:

      Methods:

      I get the voxel-wise analysis, but I don't understand the methods for the structural connectivity analysis between the 88 ROIs. Have the authors run tractography or have they used a predetermined streamlined form of 'population-based connectome'? They report that models of AUC above 0.75 were considered and tested in the Chicago dataset, but we have no information about what the model actually learned (although this can be tricky for decision tree algorithms). 

      We apologize for the lack of clarity; we did run tractography and we did not use a pre-determined streamlined form of the connectome.

      Finding which connections are important for the classification of SBPr and SBPp is difficult because of our choices during data preprocessing and SVC model development: (1) preprocessing steps which included TNPCA for dimensionality reduction, and regressing out the confounders (i.e., age, sex, and head motion); (2) the harmonization for effects of sites; and (3) the Support Vector Classifier which is a hard classification model11.
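
      As an aside, the confound-regression step in point (1) above is the standard residualization approach; a minimal sketch, assuming scikit-learn (variable names and dimensions are hypothetical, not the actual pipeline code):

      ```python
      # Minimal sketch: remove the linear effect of confounders (age, sex, head motion)
      # from connectome-derived features before classification.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      def residualize(features: np.ndarray, confounds: np.ndarray) -> np.ndarray:
          """Return features with the linear contribution of the confounds removed."""
          reg = LinearRegression().fit(confounds, features)
          return features - reg.predict(confounds)

      rng = np.random.default_rng(1)
      features = rng.normal(size=(52, 20))   # subjects x components (e.g., TNPCA scores)
      confounds = rng.normal(size=(52, 3))   # subjects x [age, sex, mean head motion]
      features_clean = residualize(features, confounds)
      # In a prediction setting, the regression would be fit on the training subjects
      # only and then applied to the held-out subjects to avoid leakage.
      ```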

      In the methods section (page 30, lines 21-23) we added: “Of note, such models cannot tell us the features that are important in classifying the groups.  Hence, our model is considered a black-box predictive model like neural networks.”

      Minor:

      What results are shown in Figure 7? It looks more descriptive than the actual results.

      The reviewer is correct; Figure 7 and Supplementary Figure 4 both illustrated the shape of the SLF only qualitatively. We have now changed both figures in response to this point and a point raised by reviewer 3. We now show a 3D depiction of different sub-components of the right SLF (Figure 7) and left SLF (now Supplementary Figure 11 instead of Supplementary Figure 4), with a quantitative estimation of the FA content of the tracts and the number of tracts per component. The results reinforce the TBSS analysis in showing asymmetry between the groups (i.e., SBPp and SBPr) in the differences between the left and right SLF, in both FA values and the number of tracts per bundle.

      Reviewer #2 (Public Review):

      The present study aims to investigate brain white matter predictors of back pain chronicity. To this end, a discovery cohort of 28 patients with subacute back pain (SBP) was studied using white matter diffusion imaging. The cohort was investigated at baseline and at one-year follow-up, when 16 patients had recovered (SBPr) and 12 had persistent back pain (SBPp). A comparison of baseline scans revealed that SBPr patients had higher fractional anisotropy values in the right superior longitudinal fasciculus (SLF) than SBPp patients and that FA values predicted changes in pain severity. Moreover, the FA values of SBPr patients were larger than those of healthy participants, suggesting a role of FA of the SLF in resilience to chronic pain. These findings were replicated in two other independent datasets. The authors conclude that the right SLF might be a robust predictive biomarker of CBP development with the potential for clinical translation.

      Developing predictive biomarkers for pain chronicity is an interesting, timely, and potentially clinically relevant topic. The paradigm and the analysis are sound, the results are convincing, and the interpretation is adequate. A particular strength of the study is the discovery-replication approach with replications of the findings in two independent datasets.

      We thank reviewer 2 for pointing to the strength of our study.

      The following revisions might help to improve the manuscript further.

      - Definition of recovery. In the New Haven and Chicago datasets, SBPr and SBPp patients are distinguished by reductions of >30% in pain intensity. In contrast, in the Mannheim dataset, both groups are distinguished by reductions of >20%. This should be harmonized. Moreover, as there is no established definition of recovery (reference 79 does not provide a clear criterion), it would be interesting to know whether the results hold for different definitions of recovery. Control analyses for different thresholds could strengthen the robustness of the findings.

      The reviewer raises an important point regarding the definition of recovery.  To address the reviewers’ concern we have added a supplementary figure (Fig. S6) showing the results in the Mannheim data set if a 30% reduction is used as a recovery criterion, and in the manuscript (page 11, lines 1,2) we write: “Supplementary Figure S6 shows the results in the Mannheim data set if a 30% reduction is used as a recovery criterion in this dataset (AUC= 0.53)”.

      We would like to emphasize here several points that support the use of different recovery thresholds between New Haven and Mannheim. The New Haven primary pain ratings relied on a visual analogue scale (VAS), while the Mannheim data relied on the German version of the West Haven-Yale Multidimensional Pain Inventory. In addition, the Mannheim data were pre-registered with a definition of recovery at 20% and are part of a larger subacute-to-chronic pain study with prior publications from this cohort using the 20% cut-off.12 Finally, a more recent consensus publication13 from IMMPACT indicates that a change of at least 30% is needed for a moderate improvement in pain on the 0-10 Numerical Rating Scale, but that this percentage depends on baseline pain levels.
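
      For concreteness, the recovery labels depend directly on the chosen percent-reduction threshold; a toy illustration (values are hypothetical, not study data):

      ```python
      # Toy illustration: labelling recovered (SBPr) vs persistent (SBPp) patients
      # from baseline and follow-up pain ratings under different reduction criteria.
      def percent_reduction(baseline: float, followup: float) -> float:
          return 100.0 * (baseline - followup) / baseline

      def label(baseline: float, followup: float, threshold: float) -> str:
          return "SBPr" if percent_reduction(baseline, followup) > threshold else "SBPp"

      # A patient going from 6.0 to 4.5 on a 0-10 scale (a 25% reduction) counts as
      # recovered under a 20% criterion but as persistent under a 30% criterion.
      print(label(6.0, 4.5, threshold=20))  # SBPr
      print(label(6.0, 4.5, threshold=30))  # SBPp
      ```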

      - Analysis of the Chicago dataset. The manuscript includes results on FA values and their association with pain severity for the New Haven and Mannheim datasets but not for the Chicago dataset. It would be straightforward to show figures like Figures 1 - 4 for the Chicago dataset, as well.

      We welcome the reviewer’s suggestion; we added these analyses to the results section of the resubmitted manuscript (page 11, lines 13-16): “The correlation between FA values in the right SLF and pain severity in the Chicago data set showed marginal significance (p = 0.055) at visit 1 (Fig. S8A) and higher FA values were significantly associated with a greater reduction in pain at visit 2 (p = 0.035) (Fig. S8B).”

      - Data sharing. The discovery-replication approach of the present study distinguishes the present from previous approaches. This approach enhances the belief in the robustness of the findings. This belief would be further enhanced by making the data openly available. It would be extremely valuable for the community if other researchers could reproduce and replicate the findings without restrictions. It is not clear why the fact that the studies are ongoing prevents the unrestricted sharing of the data used in the present study.

      We greatly appreciate the reviewer's suggestion to share our data sets, as we strongly support the Open Science initiative. The Chicago data set is already publicly available. The New Haven data set will be shared on the Open Pain repository, and the Mannheim data set will be uploaded to heiDATA or heiARCHIVE at Heidelberg University in the near future. We cannot share the data immediately because this project is part of the Heidelberg pain consortium, “SFB 1158: From nociception to chronic pain: Structure-function properties of neural pathways and their reorganization.” Within this consortium, all data must be shared following a harmonized structure across projects, and no study will be published openly until all projects have completed initial analysis and quality control.

      Reviewer #3 (Public Review):

      Summary:

      Authors suggest a new biomarker of chronic back pain with the option to predict the result of treatment. The authors found a significant difference in a fractional anisotropy measure in superior longitudinal fasciculus for recovered patients with chronic back pain.

      Strengths:

      The results were reproduced in three different groups at different studies/sites.

      Weaknesses:

      - The number of participants is still low.

      The reviewer raises a very important point of limited sample size. As discussed in our replies to reviewer number 1:

      We acknowledge the small sample size in the “Limitations” section of the discussion. In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA-based model is a simple cut-off approach that does not include any learning or hidden layers, and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site. Regarding our SVC approach, we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data, with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitations section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):

      “In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development. We believe that the data presented here are nevertheless robust, since they were validated across multiple sites, but they need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data, with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model.”

      Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data is Cohen’s d > 1.
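
      For reference, the effect size quoted here is the standard two-sample Cohen's d, i.e. the difference in group means scaled by the pooled standard deviation:

      ```latex
      d = \frac{\bar{x}_{\mathrm{SBPr}} - \bar{x}_{\mathrm{SBPp}}}{s_p},
      \qquad
      s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
      ```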

      - An explanation of microstructure changes was not given.

      The reviewer points to an important gap in our discussion. While we cannot directly study the actual tissue microstructure, we further explored the changes observed in the SLF by calculating diffusivity measures. We have now performed the analysis of mean, axial, and radial diffusivity.

      In the results section we added (page 7, lines 12-19): “We also examined mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) extracted from the right SLF shown in Fig. 1 to further understand which diffusion component differs between the groups. The right SLF MD is significantly increased (p < 0.05) in SBPr compared to SBPp patients (Fig. S3), while the right SLF RD is significantly decreased (p < 0.05) in SBPr compared to SBPp patients in the New Haven data (Fig. S4). Axial diffusivity extracted from the RSLF mask did not show a significant difference between SBPr and SBPp (p = 0.28) (Fig. S5).”
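
      For readers less familiar with these tensor-derived metrics, they are defined from the eigenvalues λ1 ≥ λ2 ≥ λ3 of the diffusion tensor as:

      ```latex
      \mathrm{AD} = \lambda_1, \qquad
      \mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2}, \qquad
      \mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad
      \mathrm{FA} = \sqrt{\tfrac{3}{2}}
      \sqrt{\frac{(\lambda_1 - \mathrm{MD})^2 + (\lambda_2 - \mathrm{MD})^2 + (\lambda_3 - \mathrm{MD})^2}
      {\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
      ```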

      In the discussion, we write (page 15, lines 10-20):

      “Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14.  Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts,15 our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”

      - Some technical drawbacks are presented.

      We are uncertain if the reviewer is suggesting that we have acknowledged certain technical drawbacks and expects further elaboration on our part. We kindly request that the reviewer specify what particular issues need to be addressed so that we can respond appropriately.

      Recommendations For The Authors:

      We thank the reviewers for their constructive feedback, which has significantly improved our manuscript. We have done our best to answer the criticisms that they raised point-by-point.

      Reviewer #2 (Recommendations For The Authors):

      The discovery-replication approach of the current study justifies the use of the term 'robust.' In contrast, previous studies on predictive biomarkers using functional and structural brain imaging did not pursue similar approaches and have not been replicated. Still, the respective biomarkers are repeatedly referred to as 'robust.' Throughout the manuscript, it would, therefore, be more appropriate to remove the label 'robust' from those studies.

      We thank the reviewer for this valuable suggestion. We removed the label 'robust' throughout the manuscript when referring to the previous studies which didn’t follow the same approach and have not yet been replicated.

      Reviewer #3 (Recommendations For The Authors):

      This is, indeed, quite a well-written manuscript with very interesting findings and a very interesting patient group. There are a few comments that weaken the findings.

      (1) It is a bit frustrating to read at the beginning how important chronic back pain is and the number of patients in the used studies. At least the number of healthy subjects could be higher.

      The reviewer raises an important point regarding the number of pain-free healthy controls (HC) in our samples. We first note that our primary statistical analysis focused on comparing recovered and persistent patients at baseline and validating these findings across sites without directly comparing them to HCs. Nevertheless, the data from New Haven included 28 HCs at baseline, and the data from Mannheim included 24 HCs. Although these sample sizes are not large, they have enabled us to clearly establish that the recovered SBPr patients generally have larger FA values in the right superior longitudinal fasciculus compared to the HCs, a finding consistent across sites (see Figs. 1 and 3). This suggests that the general pain-free population includes individuals with both low and high-risk potential for chronic pain. It also offers one explanation for the reported lack of differences or inconsistent differences between chronic low-back pain patients and HCs in the literature, as these differences likely depend on the (unknown) proportion of high- and low-risk individuals in the control groups. Therefore, if the high-risk group is more represented by chance in the HC group, comparisons between HCs and chronic pain patients are unlikely to yield statistically significant results. Thus, while we agree with the reviewer that the sample sizes of our HCs are limited, this limitation does not undermine the validity of our findings.

      (2) Pain reaction in the brain is in general a quite popular topic and could be connected to the findings or mentioned in the introduction.

      We thank the reviewer for this suggestion. We have now added a summary of brain responses to pain in general. In the introduction, we now write (page 4, lines 19-22 and page 5, lines 1-5):

      “Neuroimaging research on chronic pain has uncovered a shift in brain responses to pain when acute and chronic pain are compared. The thalamus, primary somatosensory, motor areas, insula, and mid-cingulate cortex most often respond to acute pain and can predict the perception of acute pain16-19. Conversely, limbic brain areas are more frequently engaged when patients report the intensity of their clinical pain20, 21. Consistent findings have demonstrated that increased prefrontal-limbic functional connectivity during episodes of heightened subacute ongoing back pain or during a reward learning task is a significant predictor of CBP.12, 22. Furthermore, low somatosensory cortex excitability in the acute stage of low back pain was identified as a predictor of CBP chronicity.23”

      (3) There is clearly observed structural asymmetry in the brain; why not elaborate on this finding further? Would the SLF be a hub in a connectivity analysis? Would FA changes show along-tract features? Etc.

      The reviewer raises an important point. There are grounds to suggest from our data that there is an asymmetry to the role of the SLF in resilience to chronic pain. We discuss this at length in the Discussion section. In addition, we have elaborated further on our data analysis using our Population-Based Structural Connectome pipeline on the New Haven dataset. Following that approach, we studied the number of fiber tracts making up different parts of the SLF on the right and left sides. In addition, we extracted FA values along fiber tracts and compared the averages across groups. Our new analyses are presented in the modified Figure 7 and Fig. S11. These results indeed support the asymmetry hypothesis. The SLF could be a hub of structural connectivity. Please note, however, that given the discovery-and-validation nature of our design, the study of the structural connectivity of the SLF is beyond the scope of this paper, because tract-based connectivity is very sensitive to data collection parameters and is less accurate with single-shell DWI acquisitions. Therefore, we will pursue the study of the connectivity of the SLF in the future with well-powered and more harmonized data.

      (4) Only FA is mentioned; did the authors work with MD, RD, and AD metrics?

      We thank the reviewer for this suggestion, which helps in providing a clearer picture of the differences in the right SLF between SBPr and SBPp. We have now extracted MD, AD, and RD for the predictive mask we discovered in Figure 1 and plotted the values comparing SBPr to SBPp patients in Fig. S3, Fig. S4, and Fig. S5 across all sites using one comprehensive harmonized analysis. We have added in the discussion: “Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14.  Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts15, our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”

      (5) There are many speculations in the Discussion; however, some of them are not supported by the results.

      We agree with the reviewer and thank them for pointing this out. We have now made several changes across the discussion related to the wording where speculations were not supported by the data. For example, instead of writing (page 16, lines 7-9): “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain is a top-down phenomenon related to visuospatial and body awareness.”, We write: “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain might be related to a top-down phenomenon involving visuospatial and body awareness.”

      (6) The methods section was written quite roughly. In order to obtain all the details needed for a potential replication, one needs to jump around the text.

      The reviewer is correct; our methods may have lacked sufficiently detailed descriptions. Therefore, we have described our methodology more extensively. Under “Estimation of structural connectivity”, we now write (page 28, lines 20-21 and page 29, lines 1-19):

      “Structural connectivity was estimated from the diffusion tensor data using a population-based structural connectome (PSC) detailed in a previous publication.24 PSC can utilize the geometric information of streamlines, including shape, size, and location for a better parcellation-based connectome analysis. It, therefore, preserves the geometric information, which is crucial for quantifying brain connectivity and understanding variation across subjects. We have previously shown that the PSC pipeline is robust and reproducible across large data sets.24 PSC output uses the Desikan-Killiany atlas (DKA) 25 of cortical and sub-cortical regions of interest (ROI). The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S6.  PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs in addition to measures of fractional anisotropy and mean diffusivity.”
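
      As a minimal illustration of the two measures examined here (the count of streamlines and the mean FA along streamlines for each ROI pair), a toy sketch assuming FA has already been sampled along each streamline (the ROI names and arrays are hypothetical, not PSC output):

      ```python
      # Toy sketch: per-connection streamline count and mean FA along streamlines.
      # Each connection (ROI pair) holds a list of streamlines; each streamline is
      # an array of FA values sampled at points along its path.
      import numpy as np

      connection_fa_profiles = {
          ("precentral_R", "superiorparietal_R"): [np.array([0.45, 0.52, 0.48]),
                                                   np.array([0.50, 0.47])],
          ("insula_R", "putamen_R"): [np.array([0.38, 0.41])],
      }

      for (roi_a, roi_b), streamlines in connection_fa_profiles.items():
          count = len(streamlines)                                   # streamline count for the ROI pair
          mean_fa = float(np.mean([s.mean() for s in streamlines]))  # mean FA along streamlines
          print(f"{roi_a} -- {roi_b}: {count} streamlines, mean FA = {mean_fa:.2f}")
      ```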

      (7) Why not join all the data with harmonisation in order to reproduce the results (TBSS)?

      We have followed the reviewer’s suggestion; we used neuroCombat harmonization after pooling all the diffusion weighted data into one TBSS analysis. Our results remain the same after harmonization. 

      In the Supplementary Information we added a paragraph explaining the method for harmonization; we write (SI, page 3, lines 25-34):

      “Harmonization of DTI data using neuroCombat. Because the 3 data sets originated from different sites using different MR data acquisition parameters and slightly different recruitment criteria, we applied neuroCombat29 to correct for site effects and then repeated the TBSS analysis shown in Figure 1 and the validation analyses shown in Figures 5 and 6. First, the FA maps derived using the FDT toolbox were pooled into one TBSS analysis, in which registration to a standard FA template (FMRIB58_FA_1mm.nii.gz, part of FSL) was performed. Next, neuroCombat was applied to the FA maps as implemented in Python, with the batch (i.e., site) effect modeled as a vector containing 1, 2, and 3 for maps originating from New Haven, Chicago, and Mannheim, respectively. The harmonized maps were then skeletonized to allow for TBSS.”
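
      For illustration, such a site correction might be applied with the neuroCombat Python package roughly as follows; this is a minimal sketch with toy data, and the exact import path and call signature may differ between package versions.

      ```python
      # Minimal sketch (not the study code) of ComBat-style site harmonization with
      # the neuroCombat Python package; data shapes and sample counts are toy values.
      import numpy as np
      import pandas as pd
      from neuroCombat import neuroCombat  # assumed import; check the installed version

      # dat: features x subjects (e.g., skeletonized FA values flattened per subject)
      dat = np.random.default_rng(2).normal(loc=0.45, scale=0.05, size=(1000, 72))
      covars = pd.DataFrame({
          "batch": [1] * 28 + [2] * 20 + [3] * 24,  # 1 = New Haven, 2 = Chicago, 3 = Mannheim
      })

      # The returned dict's "data" entry holds the harmonized feature matrix.
      harmonized = neuroCombat(dat=dat, covars=covars, batch_col="batch")["data"]
      ```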

      And in the results section, we write (page 12, lines 2-21):

      “Validation after harmonization

      Because the DTI data sets originated from 3 sites with different MR acquisition parameters, we repeated our TBSS and validation analyses after correcting for variability arising from site differences using DTI data harmonization as implemented in neuroCombat.29 The method of harmonization is described in detail in the Supplementary Methods. The whole brain unpaired t-test depicted in Figure 1 was repeated after neuroCombat and yielded very similar results (Fig. S9A), showing significantly increased FA in the SBPr compared to SBPp patients in the right superior longitudinal fasciculus (MNI-coordinates of peak voxel: x = 40; y = -42; z = 18 mm; t(max) = 2.52; p < 0.05, corrected against 10,000 permutations). We again tested the accuracy of local diffusion properties (FA) of the right SLF extracted from the mask of voxels passing threshold in the New Haven data (Fig. S9A) in classifying the Mannheim and the Chicago patients, respectively, into persistent and recovered. FA values corrected for age, gender, and head displacement accurately classified SBPr and SBPp patients from the Mannheim data set with an AUC = 0.67 (p = 0.023, tested against 10,000 random permutations, Fig. S9B and S7D), and patients from the Chicago data set with an AUC = 0.69 (p = 0.0068) (Fig. S9C and S7E) at baseline and an AUC = 0.67 (p = 0.0098) (Fig. S9D and S7F) at follow-up, confirming the predictive cluster from the right SLF across sites. The application of neuroCombat significantly changes the FA values as shown in Fig. S10 but does not change the results between groups.”
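
      As an illustration of how an observed AUC can be tested against label permutations (as in the 10,000-permutation tests quoted above), a minimal sketch with toy data, assuming scikit-learn:

      ```python
      # Minimal sketch: permutation test of an observed AUC against a label-shuffled null.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      def permutation_p(scores, labels, n_perm=10_000, seed=0):
          """Return the observed AUC and its p-value under a label-permutation null."""
          rng = np.random.default_rng(seed)
          observed = roc_auc_score(labels, scores)
          null = np.array([roc_auc_score(rng.permutation(labels), scores) for _ in range(n_perm)])
          p = (1 + np.sum(null >= observed)) / (1 + n_perm)
          return observed, p

      rng = np.random.default_rng(3)
      labels = np.array([0] * 20 + [1] * 20)            # SBPp = 0, SBPr = 1 (toy labels)
      scores = rng.normal(loc=0.5 * labels, scale=1.0)  # toy FA-derived decision values
      auc, p = permutation_p(scores, labels)
      print(f"AUC = {auc:.2f}, permutation p = {p:.4f}")
      ```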

      Minor comments

      (1) In the case of the New Haven data, MB 4 and GRAPPA 2 were used; these two factors accelerate the imaging 8 times and often lead to quite poor quality. Any kind of QA?

      We thank the reviewer for identifying this error. GRAPPA 2 was in fact used for our T1-MPRAGE image acquisition but not during the diffusion data acquisition. The diffusion data were acquired with a multi-band acceleration factor of 4.  We have now corrected this mistake.

      (2) Why not include MPRAGE data into the analysis, in particular, for predictions?

      We thank the reviewer for the suggestion. The collaboration on this paper was set around diffusion data. In addition, MPRAGE data from New Haven related to prediction is already published (10.1073/pnas.1918682117) and MPRAGE data of the Mannheim data set is a part of the larger project and will be published elsewhere.

      (3) In preprocessing, the authors wrote: "Eddy current corrects for image distortions due to susceptibility-induced distortions and eddy currents in the gradient coil." However, they did not mention that they acquired phase-opposite b0 data. This means eddy_openmp likely works only as an alignment tool, not as a susceptibility corrector.

      We kindly thank the reviewer for bringing this to our attention. We indeed did not collect b0 data in the phase-opposite direction; eddy_openmp can still be used to correct for eddy-current distortions and perform motion correction, but the absence of phase-opposite b0 data may limit its ability to fully address susceptibility artifacts. This is now noted in the Supplementary Methods under the Preprocessing section (SI, page 3, lines 16-18): “We do note, however, that as we did not acquire data in the phase-opposite direction, the susceptibility-induced distortions may not be fully corrected.”

      (4) Version of FSL?

      We thank the reviewer for raising this point, which we have now addressed in the Supplementary Methods (SI, page 3, lines 10-11): “Preprocessing of all data sets was performed employing the same procedures and the FMRIB diffusion toolbox (FDT) running on FSL version 6.0.”

      (5) Some short sketches about the connectivity analysis could be useful, at least in SI.

      We are grateful for this suggestion, which improves our work. We added sketches of the connectivity analysis; please see Figure 7 and Supplementary Figure 11.

      (6) Machine learning: functions, language, version?

      We thank the reviewer for raising these points, which we have now addressed in the Methods section of the resubmission by adding a detailed description of the structural connectivity analysis. We added: “The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S7.  PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs in addition to measures of fractional anisotropy and mean diffusivity.”

      The script is described and provided at: https://github.com/MISICMINA/DTI-Study-Resilience-to-CBP.git.

      (7) Ethical approval?

      The New Haven data is part of a study that was approved by the Yale University Institutional Review Board. This is mentioned under the description of the “New Haven (Discovery) data set” (page 23, lines 1-2). Likewise, the Mannheim data is part of a study approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and was conducted in accordance with the Declaration of Helsinki in its most recent form. This is also mentioned under “Mannheim data set” (page 26, lines 2-5): “The study was approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and was conducted in accordance with the declaration of Helsinki in its most recent form.”

      (1) Traeger AC, Henschke N, Hubscher M, et al. Estimating the Risk of Chronic Pain: Development and Validation of a Prognostic Model (PICKUP) for Patients with Acute Low Back Pain. PLoS Med 2016;13:e1002019.

      (2) Hill JC, Dunn KM, Lewis M, et al. A primary care back pain screening tool: identifying patient subgroups for initial treatment. Arthritis Rheum 2008;59:632-641.

      (3) Hockings RL, McAuley JH, Maher CG. A systematic review of the predictive ability of the Orebro Musculoskeletal Pain Questionnaire. Spine (Phila Pa 1976) 2008;33:E494-500.

      (4) Chou R, Shekelle P. Will this patient develop persistent disabling low back pain? JAMA 2010;303:1295-1302.

      (5) Silva FG, Costa LO, Hancock MJ, Palomo GA, Costa LC, da Silva T. No prognostic model for people with recent-onset low back pain has yet been demonstrated to be suitable for use in clinical practice: a systematic review. J Physiother 2022;68:99-109.

      (6) Kent PM, Keating JL. Can we predict poor recovery from recent-onset nonspecific low back pain? A systematic review. Man Ther 2008;13:12-28.

      (7) Hruschak V, Cochran G. Psychosocial predictors in the transition from acute to chronic pain: a systematic review. Psychol Health Med 2018;23:1151-1167.

      (8) Hartvigsen J, Hancock MJ, Kongsted A, et al. What low back pain is and why we need to pay attention. Lancet 2018;391:2356-2367.

      (9) Tanguay-Sabourin C, Fillingim M, Guglietti GV, et al. A prognostic risk score for development and spread of chronic pain. Nat Med 2023;29:1821-1831.

      (10) Spisak T, Bingel U, Wager TD. Multivariate BWAS can be replicable with moderate sample sizes. Nature 2023;615:E4-E7.

      (11) Liu Y, Zhang HH, Wu Y. Hard or Soft Classification? Large-margin Unified Machines. J Am Stat Assoc 2011;106:166-177.

      (12) Loffler M, Levine SM, Usai K, et al. Corticostriatal circuits in the transition to chronic back pain: The predictive role of reward learning. Cell Rep Med 2022;3:100677.

      (13) Smith SM, Dworkin RH, Turk DC, et al. Interpretation of chronic pain clinical trial outcomes: IMMPACT recommended considerations. Pain 2020;161:2446-2461.

      (14) Lieberman G, Shpaner M, Watts R, et al. White Matter Involvement in Chronic Musculoskeletal Pain. The Journal of Pain 2014;15:1110-1119.

      (15) Mansour AR, Baliki MN, Huang L, et al. Brain white matter structural properties predict transition to chronic pain. Pain 2013;154:2160-2168.

      (16) Wager TD, Atlas LY, Lindquist MA, Roy M, Woo CW, Kross E. An fMRI-based neurologic signature of physical pain. N Engl J Med 2013;368:1388-1397.

      (17) Lee JJ, Kim HJ, Ceko M, et al. A neuroimaging biomarker for sustained experimental and clinical pain. Nat Med 2021;27:174-182.

      (18) Becker S, Navratilova E, Nees F, Van Damme S. Emotional and Motivational Pain Processing: Current State of Knowledge and Perspectives in Translational Research. Pain Res Manag 2018;2018:5457870.

      (19) Spisak T, Kincses B, Schlitt F, et al. Pain-free resting-state functional brain connectivity predicts individual pain sensitivity. Nat Commun 2020;11:187.

      (20) Baliki MN, Apkarian AV. Nociception, Pain, Negative Moods, and Behavior Selection. Neuron 2015;87:474-491.

      (21) Elman I, Borsook D. Common Brain Mechanisms of Chronic Pain and Addiction. Neuron 2016;89:11-36.

      (22) Baliki MN, Petre B, Torbey S, et al. Corticostriatal functional connectivity predicts transition to chronic back pain. Nat Neurosci 2012;15:1117-1119.

      (23) Jenkins LC, Chang WJ, Buscemi V, et al. Do sensorimotor cortex activity, an individual's capacity for neuroplasticity, and psychological features during an episode of acute low back pain predict outcome at 6 months: a protocol for an Australian, multisite prospective, longitudinal cohort study. BMJ Open 2019;9:e029027.

      (24) Zhang Z, Descoteaux M, Zhang J, et al. Mapping population-based structural connectomes. Neuroimage 2018;172:130-145.

      (25) Desikan RS, Segonne F, Fischl B, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 2006;31:968-980.

      (26) Maier-Hein KH, Neher PF, Houde J-C, et al. The challenge of mapping the human connectome based on diffusion tractography. Nature Communications 2017;8:1349.

      (27) Chiang MC, McMahon KL, de Zubicaray GI, et al. Genetics of white matter development: a DTI study of 705 twins and their siblings aged 12 to 29. Neuroimage 2011;54:2308-2317.

      (28) Zhao B, Li T, Yang Y, et al. Common genetic variation influencing human white matter microstructure. Science 2021;372.

      (29) Fortin JP, Parker D, Tunc B, et al. Harmonization of multi-site diffusion tensor imaging data. Neuroimage 2017;161:149-170.