- Oct 2024
-
docdrop.org
-
Questions 1-5 in the lab packet were then completed using analytical thinking
Mention of the lab packet is unnecessary, as it doesn't relate to procedure. It is also too vague and would not help future chemists working through the lab.
-
The
This paragraph should have an indentation.
-
-
surfingcomplexity.blog
-
If we want people to see these things as real, we have to integrate them into narrative descriptions of incidents.
Who are the best storytellers of our time?
-
Without a narrative schema to anchor it, the pandemic all but vanished from public discourse soon after it ended.
-
In The 1918 Flu Faded in Our Collective Memory: We Might ‘Forget’ the Coronavirus, Too, Scott Hershberger speculated in Scientific American along similar lines about why historians paid little attention to the Spanish Flu epidemic, even though it killed more people than World War I (emphasis mine):
There seems to be an inherent value hierarchy here for what makes an epic story: Corona was sold as a very epic story; the 1918 Spanish flu, not so much
-
We use stories to make sense of the world. What that means is that when events occur that don’t fit neatly into a narrative, we can’t make sense of them. As a consequence, these sorts of events are less salient, which means they’re less real.
Ya, Yuval Noah Harari, author of Homo Deus: A Brief History of Tomorrow, talks about this a lot
-
-
Local file: Psyc 2536
-
No pressure/outside motivation:
- No motivating instructions during study, then told there was a big cash prize for best performance
- Told at the beginning about the cash prize
- There was no difference between these two groups
-
Does feedback need to be immediate?
- The feedback doesn't have to be immediate; what's important is that you review it regularly and take the time to learn from it
-
Testing
- Turn the learning objectives into questions and answer them
-
Gradually increasing practice-test intervals
- When studying flashcards, say it out loud or write it down so you'll be able to identify the error
-
Total Time Hypothesis
- The more time you spend studying something, the better you'll learn it
-
Anesthesia has 3 components
- Anesthesia doesn't resemble sleep, since the general feeling of time having passed doesn't appear after waking up from anesthesia
-
-
www.youtube.com
-
28:08 UMKC is one kilometre from our location in Kansas - literally at the end of 53rd street where we live :-)
-
26:30 Bernard Lietaer - founder of the EURO
-
25:38 MMT changes our view on the nature of money
-
22:32 In early colonial times, once taxes were paid in paper money, the money was burned
-
18:59 Warren Mosler
-
19:49 Government does not need dollars, citizens need dollars
-
20:18 Warren is not an economist - he is not trying to defend economic theory - he is a financial trader watching the operation of money
-
18:47 Stephanie Kelton was sceptical at the start too
-
16:08 During the war, the USA moved 50% of the nation's production to WAR. It would take a simple political decision to move 50% of the nation's production to peace, if peace is as profitable as war
-
14:22 The government spends money into existence
-
11:42 For a currency issuer, funding the money is NEVER the problem
-
9:31 Jared Bernstein fails to answer this question coherently. "I don't get it"!!
-
5:49 95% of the problems with policy are caused by the language used to describe the policy
-
5:17 Let us evolve beyond the BATTLE FOR IDEAS through Dialogue
-
4:28 Let us create the money at community level
Tags
- MMT changes our view on the nature of money
- In early colonial times, once taxes are paid in paper money, the money was burned
- Warren Mosler
- 95% of problems is WORDS
- Bernard Lietaer - founder of the EURO
- USA moved 50% of production to War
- BATTLE FOR IDEAS THROUGH DIALOGUE
- 1996
- create the money
- Government does not need dollars, citizens need dollars
- Jared Bernstein does not know why the government borrows its own money
- UMKC is one kilometre from our location in Kansas - literally at the end of 53rd street where we live :-)
- Demurrage
- Warren is not an economist - he is not trying to defend economic theory - he is a financial trader watching the operation of money
- Stephanie Kelton was sceptical at the start too
- For a currency issuer, funding the money is NEVER the problem
- Move 50% of production to peace if peace is as profitable as war
- The government spends money into existence
-
-
pdx.pressbooks.pub
-
Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen?
Here's what a public annotation looks like. You can add links to these annotations too: OAI at PSU
-
-
ecampusontario.pressbooks.pub
-
this will also ensure this part of l’acadie against enemy attacks, as all these sauvages could storm at the right time on those who would dare
This speaks to what we discussed in class that missionaries did operate as imperial agents. They saw Indigenous people as potential allies, particularly in a potential battle or war. They saw converting "all these sauvages" as a way to ensure Acadie. By converting them they thought that they could create allies for the French crown, strong enough allies to fight alongside them.
-
-
uh.edu
-
Each of these domains have also shown unique relevance
Important to differentiate physical, cognitive, and social concerns that are caused by anxiety and depression.
For AUD/SUD in particular, examples of these could be... 1. Fear of becoming an alcoholic 2. Fear of physical symptoms and health defects related to alcoholism 3. Fear of being seen by others as an alcoholic
-
-
Local file
-
astounded to resist
astounded with the fact that he sees wine, or that his friend is chaining him up? Seems to be the wine, but I first thought it was the fact that his friend was chaining him up
-
which I had difficulty in recognising as that of the noble Fortunato.
jesus. This reminds me of the Nutty Putty cave incident. Someone was trapped upside down for 27 hours, and you couldn't even make out his final words since it was basically just gurgling.
-
A succession of loud and shrill screams, bursting suddenly from the throat of the chained form,
This is terrifying. Fortunato has no escape
-
I heard the furious vibrations of thechain
Okay man, Fortunato's tryna leave now. I thought he was your friend
-
The Amontillado!” ejaculated my friend, not yet recovered from his astonishment
first EJACULATED HUH????? Also he really just cares about the wine
-
Throwing the links about his waist
wow yeah he just chained him up what on earth
-
fettered
restrained with chains or manacles, typically around the ankles. Woah you just straight up chained your friend?
-
flambeaux rather to glow than flame
It's cold, dark, the air is awful, Fortunato is barely able to walk, it feels grim
-
You? Impossible! A mason?
Narrator is a mason? Also, what is the brotherhood?
-
had I given Fortunato cause to doubt my good will
Is this more like a karma situation? Do good and you will be lucky? Is Fortunato a someone?
I wrote that before finding out it was just a character. Maybe this is just saying Fortunato was not justified in doing harm to the narrator.
-
“The nitre!” I said; “see, it increases. It hangs like moss upon the vaults. We are below the river’s bed. The drops of moisture trickle among the bones. Come, we will go back ere it is too late. Your cough—”
Narrator yet again telling Fortunato to turn back
-
-
trailhead.salesforce.com
-
you want to limit the results when the query is very broad to one or two letters
This limit clause won't go away even if the user types more than 2 letters, though.
-
as the user types into the field, letter by letter
Is there a way to define a debounce timer?
-
-
Local file
-
neoliberalism
Neoliberalism is both a political philosophy and a term used to signify the late-20th-century political reappearance of 19th-century ideas associated with free-market capitalism. The term has multiple, competing definitions, and is often used pejoratively. (Wiki)
-
Given chronic problems with mould, mice, overcrowding and under-maintained buildings, most residents of Cayce Homes were in favour of the redevelopment. Yet, many were concerned with how the redevelopment would impact their families, and whether they in fact were the intended beneficiaries. One resident, Ms Audrey mentions: ‘the plan they got is good. But is it for us? That’s the main thing.’ Another resident adds: ‘Or is it just for them?’ Observed during the course of this study, a core group of six to eight Cayce United residents organized around three primary goals related to the redevelopment: 1) no resident displacement, 2) the creation of job opportunities and 3) the integration of needed social supports. As Cayce United worked to mobilize their neighbours, educate the community, shape the public narrative of the redevelopment and win resident goals, resident organizers – and those who worked alongside them (myself included) – were often stymied by the same questions that challenge many scholars of neighbourhood inequality. What will produce more equitable outcomes in urban communities? How can positive social change occur? Who can (and ought to) be involved in transforming urban neighbourhoods? These are theoretical questions, and the answers vary based upon the theoretical perspectives used.
Very similar to the circumstances of the Dudley Triangle.
-
-
www.mckinsey.com
-
Without the right gen AI operating model in place, it is tough to incorporate enough structure and move quickly enough to generate enterprise-wide impact.
Without gen AI being set up properly, it could set any of these institutions up for failure; set up properly, it could have an enterprise-wide impact.
-
The financial-services companies that have best managed the transition to gen AI already had a high level of organizational agility, allowing them to quickly rework processes and flexibly pool resources, either by locating them in a central hub or by creating ad hoc, centrally coordinated, agile squads to execute use cases.
It is already having an increasing effect on productivity, and more specifically on how these organizations are structured.
-
The nascent nature of gen AI has led financial-services companies to rethink their operating models to address the technology’s rapidly evolving capabilities, uncharted risks, and far-reaching organizational implications
These companies are realizing that there are more positives than negatives in adding AI, and that it could meaningfully increase their productivity and their numbers.
-
where a central team is in charge of gen AI solutions, from design to execution, with independence from the rest of the enterprise—can allow for the fastest skill and capability building for the gen AI team.
This centralized model can build gen AI skills, and thus increase productivity, the fastest.
-
A great operating model on its own, for instance, won’t bring results without the right talent or data in place.
This shows that AI isn't a quick fix that instantly gets you results; you have to work on it (with the right talent and data in place) so that it can be more productive in the long run.
-
Generative AI (gen AI) is revolutionizing the banking industry as financial institutions use the technology to supercharge customer-facing chatbots, prevent fraud, and speed up time-consuming tasks such as developing code, preparing drafts of pitch books, and summarizing regulatory reports.
It seems to already be having a positive effect on the banking community
-
gen AI could add between $200 billion and $340 billion in value annually, or 2.8 to 4.7 percent of total industry revenues, largely through increased productivity.1
This shows the massive potential impact of gen AI and how much money could be made because of it: an estimated $200 billion to $340 billion annually, or 2.8 to 4.7 percent of total industry revenues.
-
-
openreview.net
-
the rewards are divided through by the standard deviation of a rolling discounted sum of the reward
big reward shaping
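A minimal R sketch of what the excerpt describes, as I read it; this mirrors the description only, not the paper's actual implementation, and the function name, gamma value, and guard logic here are my own assumptions:

# Sketch of PPO-style reward scaling (assumption: follows the excerpt's
# description, not the authors' reference code). Each incoming reward is
# divided by the standard deviation of a rolling discounted sum of rewards.
scale_rewards <- function(rewards, gamma = 0.99) {
  running_return <- 0
  discounted_sums <- numeric(length(rewards))
  scaled <- numeric(length(rewards))
  for (t in seq_along(rewards)) {
    # rolling discounted sum of rewards seen so far
    running_return <- gamma * running_return + rewards[t]
    discounted_sums[t] <- running_return
    # divide the raw reward by the std of the rolling sums observed so far
    s <- sd(discounted_sums[1:t])
    scaled[t] <- if (t > 1 && s > 0) rewards[t] / s else rewards[t]
  }
  scaled
}

# Example: noisy rewards get rescaled to a more stable magnitude
set.seed(1)
scale_rewards(rnorm(10, mean = 1, sd = 5))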
-
we find that they dramatically affect the performance of PPO. To demonstrate this, we start by performing a full ablation study on the four optimizations mentioned above
All these little optimizations in the implementation of PPO have a big impact on its performance.
-
-
rsginc.github.io
-
nt_desc <- filter(variable_list, str_detect(variable, "no_travel")) %>% select(variable, description)
The columns have been renamed from variable to variable_unified and from description to description_unified.
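Assuming those renames, the updated call would presumably look like this (an untested sketch against the newer data):

# Sketch: same filter/select with the renamed columns
# (variable -> variable_unified, description -> description_unified).
library(dplyr)
library(stringr)

nt_desc <- filter(variable_list, str_detect(variable_unified, "no_travel")) %>%
  select(variable_unified, description_unified)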
-
weighted_hhs <- tbi$hh[!is.na(hh_weight) & hh_weight > 0, hh_id]
tbi <- lapply(tbi, function(dt) {
  dt <- dt[hh_id %in% weighted_hhs]
})
The new data includes 10 elements instead of 8. To align them, first create variable_list <- tbi$metaData_variables and values_list <- tbi$metaData_values before deleting the two additional elements with tbi <- tbi[1:8].
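Put together, the alignment step described in this note might look like the following sketch, assuming the element names metaData_variables and metaData_values given above:

# Sketch of the alignment described above: pull out the two metadata
# elements, then trim tbi back to its original 8 elements.
variable_list <- tbi$metaData_variables
values_list <- tbi$metaData_values
tbi <- tbi[1:8]

# The weighting filter from the excerpt then applies as before
weighted_hhs <- tbi$hh[!is.na(hh_weight) & hh_weight > 0, hh_id]
tbi <- lapply(tbi, function(dt) dt[hh_id %in% weighted_hhs])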
-
-
-
introverts will stop belittling themselves.
This article angers me far beyond what it should. It is supposed to be a call to action for society to stop treating introverts like they're inferior. As an introvert myself I feel belittled reading this. Like I'm the victim and that it is a negative trait to be an introvert. While it's supposed to be a positive trait according to the article (Oh look, I'm introverted and that makes me a GREAT leader). All it does is just list the author's problems with being an introvert, but all of what she did list is barely traits of being an introvert, it's traits of being a coward with no self-confidence and a victim complex. Being an introvert or an extrovert is neither good nor bad, it's just what it is.
-
Being introverted and shy do not go hand in hand. The so-called “shyness” I experienced from an early age was truly just anxiety surrounding meeting new people. I, as a person, was not shy, but rather introverted.
this shows how the way she is labelled doesn't define her. The outside appearance isn't always what is true on the inside
-
While her words were meant to calm me, they, in fact, did the opposite. Why can’t I just be normal?
the effect of the label of being shy on the author
-
To some, I was “the shy kid.” To others, I was…
diction and perspective: shows how other people's opinions affect how the author views herself
-
-
www.phenomenalworld.org
-
This failure has also created constituencies among various types of domestic manufacturers opposed to the kind of market liberalization inherent to sanctions relief—undermining a core belief held by Western policymakers that sanctions can spur behavior changes in countries like Iran through bottom-up pressure, including from business lobbies.
Nice point
-
-
blogs.lse.ac.uk
-
Littler’s work shows in great detail how the narrative of ‘hard work’ and ‘making it’ I note above has become so present and alive in Global North societies (2) – and it’s by drawing this kind of sharp attention to the way such destructive narratives are mobilised, and who they work for, that we position ourselves to challenge and reject them.
The Tirukkuṟaḷ, a Tamil text from the "Global South" that is at least 1500 years old, contains "narratives of 'hard work'". The idea that this is somehow a Global North concept is woefully ignorant.
Couplet 620:
Who strive with undismayed, unfaltering mind, At length shall leave opposing fate behind.
-
-
Local file
-
Connecting Link between two Sentences or Paragraphs,
Miles (1905) uses an arrow symbol with a hash on it to indicate a "connecting link between two Sentences or Paragraphs, etc."
It's certainly an early example of what we would now consider a hyperlink. It actively uses a "pointer" in its incarnation.
Are there earlier examples of these sorts of idea links in the historical record? Surely there were circles and arrows on a contiguous page, but what about links from one place to separate places (possibly using page numbers?) Indexing methods from 11/12C certainly acted as explicit sorts of pointers.
-
An omission, e.g. to be filled in afterwards.
When was the use of the caret first made for indicating the insertion of material?
Eustace Miles has an example from 1905.
-
Special Marks on Cards
Eustace Miles suggests the use of "special marks on cards" (annotations) in the top left corners, though he doesn't provide specific examples of how they might be used in practice. He does mention "The Abbreviations and Marks need be clear only to the Writer [sic] himself. They save ever so much time."
- "X": As contrasted with—
- "Q": Quotation
- Black triangle in corner: important
- Arrow pointing to corner of card: As compared with
- Angled parallel lines in the bottom right corner of card: End of Paragraph (or Chapter).
- Arrow pointing to the corner of card with hash mark: Connecting Link between two Sentences or Paragraphs, etc.
- Upside down V (or caret): An omission, e.g. to be filled in afterwards
- ?: A doubtful point
-
Special Marks on Cards
In Miles' visual examples of cards, he presents them in portrait (rather than landscape) orientation.
This goes against the broad grain of most standard card index filing systems of the time, but may be more in line with the orientation of the earlier French use of playing cards.
His portrait orientation also matches with the size ratios seen in his Card-Tray suggestion on p187. https://hypothes.is/a/llEgpIf4Ee-dVfcaIGUryQ
-
no false economy
He's repeating (and thus emphasizing) the admonition that a card system is not expensive, particularly in relation to the savings in time and effort.
-
There should also be a Card-Tray, or a box with compartments in it, such as shown in the following illustration. Of course the Tray might have an open top.
Miles suggests using a Card-Tray (in 1899) with various compartments and potentially an open top rather than some of the individual trays or card index boxes which may have been more ubiquitous
This shows a slight difference at the time in how an individual would use one of these in writing versus how a business might use them in drawers of 1, 2, 3 or cabinets with many more.
The image he shows seems more reminiscent of a 5x3" library charging tray than of some of the business filing appliances of the day and the decade following.
very similar to the self-made version at https://hypothes.is/a/DHU_-If6Ee-mGieKOjg8ZQ
-
A great help towards Arrangement and Clearness is to have Cards of different sizes and shapes, and of different colours, or with different marks on them
Miles goes against the grain of using "cards of equal size", but does so to emphasize the affordance of using them for "Arrangement and Clearness".
-
The Cards can be turned afterwards.
Miles admits that one can use both sides of index cards in a card system, but primarily because he's writing at a time (1899) when, although paper is cheap (which he mentions earlier), some people may have an objection to the system's use due to the expense, which he places at the top of his list of objections. (And he does this in a book in which he emphasizes multiple times the ideas of selection and ordering!)
-
and of course writing only on one side of the Card at a time.
-
And the same will apply to the objection that the System is unusual. Seldom have there been any new suggestions which have not been condemned as 'unusual'.
-
Objections to the Card-System,
Miles lists the following objections:
- expense
- inconvenience
- unusual (new, novel)
Notice that he starts not with benefits or affordances, but with the objections.
What would a 2024 list of objections look like?
- anachronism
- harder than digital methods
- lack of easier search
- complexity
- ... others?
-
At first, also, it might be thought that the Cards would be inconvenient to use, but the personal experience of thousands shows that, at any rate for business-purposes, exactly the reverse is true
Miles uses the ubiquity of card systems within business (even at the time of writing in 1899, prior to publication) as evidence for bolstering their use in writing and composition.
(Recall that he's also writing in the UK.)
-
Good Practice for this will be to study Loisette's System of Memory, e.g. in "How to Remember" (see p. 264); in fact Loisette's System might be called the Link-System; and Comparisons and Contrasts will very often be a great help as Links.
Interesting to see a mention of Alphonse Loisette here!
But also nice to see the concept of linking ideas and association (associative memory) pop up here in the context of note making, writing, and creating card systems.
-
include anything which links one Idea to another. See further "How to Remember" (to be published in February, 1900, by Warne & Co.).
This book was finally published in 1905. The introduction was written in 1899 and the mentioned Feb 1900 publication of How to Remember didn't happen until 1901.
Miles, Eustace Hamilton. How to Remember: Without Memory Systems or with Them. Frederick Warne & Co., 1901.
-
If the Letter is important, especially if it be a Business-Letter, there should be as long an interval as is feasible between the writing and the sending off.
writing and waiting is useful in many instances, and particularly for clarity of expression.
see also:
- angry letter https://hypothes.is/a/6OoqHofyEe-1mtOohGA63w
- diffuse thinking
- typewriter (waiting)
- editing (waiting) https://hypothes.is/a/VxRNeofvEe-5n1dpCEM48Q
-
After the Letter has been done it should be read through, and should (if possible) be read out loud, and you should ask yourself, as you read it, whether it is clear, whether it is fair and true, and (last but not least) whether it is kind. Putting it in another way, you might ask yourself, 'What will the person feel and think on reading this?' or, 'Should I eventually be sorry to have received such a Letter myself?' or, again, 'Should I be sorry to have written it, say a year hence?'
Recall: Abraham Lincoln's angry letter - put it in a drawer
-
You can prepare your Letters anywhere, even in the train, and so save a great deal of time; and it may be noticed here that the idleness of people, during that great portion of their lives which they spend in travelling and waiting, can easily be avoided in this way.
Using a card system, particularly while travelling, can help to more efficiently use one's time in preventing idleness while travelling and waiting.
-
As we have often said before, paper is so cheap that there is no need for such economy.
Compare this with the reference in @Kimmerer2013 about responsibility to the tree and not wasting paper: https://hypothes.is/a/pvQ_4ofxEe-NfSOv5wMFGw
where is the balance?
-
The third reading should again be a slow reading,
relationship to Adler's levels of reading?
-
But in my opinion nothing can excuse the laziness of a great number of Editors. When the Writers are poor and have staked a great deal on their Writings, then the laziness is simply disgusting: in fact, it amounts to cruelty. It is concerned with some of the very saddest tragedies that the world has ever seen, and I only mention it because it is very common and because it is as well that the novice should know what to expect.
-
Another Article I sent to a Paper, and after twenty weeks, and after many letters (which enclosed stamped and addressed envelopes), I was told that the Article was unsuitable for the Paper.
Even in 1905 writers had to wait interminably after submitting their writing...
it's only gotten worse since then...
-
Very few have the strength of mind to keep back for a whole week a piece of Writing which they have finished. Type-writing sometimes necessitates this interval, or at any rate a certain interval.
The process of having a work typewritten provided the affordance of time away from the writing of a piece. This allows for both active and diffuse thinking on the piece, as well as the ability to re-approach it with fresh eyes days or weeks later.
-
When an Article or Book has been written, it must be type-written before it is sent to the Editor or Publisher, that is to say, unless it has been ordered beforehand or unless you are well known. The reason is not simply that Type-writing looks better than ordinary writing, and that it is easier to read, but it actually is a fact that few Editors or Publishers will read anything that is not Type-written.
Even as early as 1905 (or 1899 if we go by the dating of the introduction), typewritten manuscripts were de rigueur for submission to editors and publishers.
-
Type-writing (see p. 369) is becoming more and more commonly used, and for certain purposes it is indispensable
Note that he's writing in 1899 (via the introduction), and certainly not later than 1905 (publication date).
-
Carlyle
One of the major values of fame is that it often allows the dropping of context in communication between people.
Example: Carlyle references in @Miles1905
-
Carlyle
It bears noting that in this book on writing and composition, neither Miles nor the indexer (if the indexing was done by someone else) ever uses Carlyle's first name (Thomas) in any of the eleven instances in which it appears, as he's famous enough in the context (space, time) to need only a single name.
-
General Hints on Preparing Essays etc., in Rhyme.
One ought to ask what purpose this Rhyme serves?
- Providing emphasis of the material in the chapter;
- scaffolding for hanging the rest of the material of the book upon, and
- potentially meant to be memorized as a sort of outline of the book and the material.
-
WITH A RHYME.
did I miss the "rhyme" in this section or is he using a more figurative sense (as in "rhyme or reason")?
Ha! Didn't get far enough, it's on page 36, but also works the other way as well.
-
In this Chapter I shall try to summarise the main part of this work, so that those who have not the time or the inclination to go right through it may at any rate grasp the general plan of it, and may be able to refer to any particular Chapter or page for further information on any particular topic.
This chapter is essentially what one ought to glean from skimming the TOC, the Index, and doing a brief inspectional read (Adler, 1972).
-
In these two latter sections it is as well to emphasise the general advice, "Try a thing for yourself before you go to anything or anyone for information." You should try (if there is time) to work out the subject beforehand; and then, after you have read or listened to the information, you should note it down in a special Note-book, and if possible make certain of understanding it, of remembering it, and of using it.
Echoes of my own advice to "practice, practice, practice".
-
Interest is required especially in the Beginning,
-
But, the more he examines the subject, and the more he goes by his personal experience, the more he will find it worthwhile to spend time on, and to practise carefully, this first department of Composition, as opposed to the mere Expression. Indeed one might almost say that, if this first department has been thoroughly well done, that is to say, if the Scheme of Headings and Sub-Headings has been well prepared, the Expression will be a comparatively easy matter.
Definition of the "first department of composition":
The preparation (mise en place) for writing, as opposed to the actual expression of the writing. By this he likely means the actions of Part II (collecting, selecting, arranging) of this book versus Part III.
-
Humour is to be classed as a Rhetorical weapon, and indeed as one of the most powerful.
-
One might think at first that it was a Universal Law that all Writing or Speaking should be so clear as to be transparent. And yet, as we have seen, no reader of Carlyle can doubt that a great deal of his Force would be gone if one made his Writings transparent. If one took some of Carlyle's most typical works and paraphrased them in simple English, the effect would not be a quarter as good as it is.
How is this accomplished exactly? How could one imitate this effect?
How do we break down his material and style to re-create it?
-
as Vigour, but the two generally go hand in hand.
"Brevity is not always the same as Vigour, but the two generally go hand in hand." -Miles
-
As to the other extreme, it is a question whether a sentence can be too clear, whether the Idea can be too simply expressed; and, if we once admit that Carlyle's writings produced a greater effect and a better effect than they would have done if they had been perfectly clear, then we must admit that for certain purposes absolute Clearness is a Fault.
-
No Writer seems to be going off the point, and to be violating the Law of 'Unity' and Economy, more than Carlyle does. As we read his "Frederick the Great", the characters at first appear to us to have no more connexion with one another than the characters…
-
The reader will doubtless be amazed at the amount of time which has to be spent before he arrives at the stage of Expressing his Ideas at all.
-
In order to give the reader some chance of having a good Collection of Headings, and less chance of omitting the important Headings, I have offered (e.g. on pp. 83, 92) a few General Lists, which are not quite complete but yet approach to completeness; two of these Lists will be found sufficient for most purposes. One of these is called the List of Period-Headings, such as Geography, Religion, Education, Commerce, War, etc. (see p. 83); the other is called the List of General Headings, and includes Instances, Causes and Hindrances, Effects, Aims, etc.: this latter List will be found on p. 92.
-
Rhythm, Grammar, Vocabulary, Punctuation, etc. It was hard to break the faggots when they were in a bundle, but it was easy to break them when they were taken one by one.
Notice that again he's emphasizing breaking down the problem into steps, and he's using a little analogy to do so, just like he had described previously.
-
I shall try to give the Chief Faults in Composition. The reader will see that the list is long: and that, if he merely tries to write whole Essays all at one 'sitting', he is little likely to escape them all.
Attempting to escape the huge list of potential "Chief faults in composition" is a solid reason not to try to cram a paper or essay in a single night/day.
-
Teaching is one of the best means of Learning, not only because it forces one to prepare one's work carefully, and to be criticised whether one wishes it or not, but also because it gives one a sense of responsibility: it reminds one that one is no longer working for self alone.
-
whether you are Writing or Speaking, the general principle to remember is that you must appeal, in nearly everything you say, to the very stupidest people possible.
-
It is important to learn as much and at the same time as little as possible.
By abstracting and concatenating portions of material, one can more efficiently learn material that would otherwise take more time.
Tags
- editors' marks
- teaching to the bottom
- card index for business
- card index
- time away
- first department of composition
- unity
- objections
- composition
- card system
- 1905
- letter-writing
- note taking affordances
- humor
- associative memory
- write only on one side
- technology in the classroom
- charging trays
- general lists
- reading practices
- being cheap
- 1899
- quotes
- empathy
- humanity
- diffuse thinking
- beginning
- intellectual history
- list of general headings
- analogies
- vigor
- rhyme for memory
- annotations
- card index for writing
- mise en place
- paper
- rhyme
- brevity
- breaking down complex processes
- publishing timelines
- distillation
- inspectional reading
- idea links
- media studies
- portrait vs. landscape orientations
- kindness
- definitions
- chief faults in composition
- practice, practice, practice
- emphasis
- style (writing)
- laziness
- step-by-step
- technopanic
- abstraction
- insertions
- submissions (writing)
- expectations
- idleness
- card system for business
- interest
- zettelkasten boxes
- typewriters (adoption)
- toxic capitalism
- rhetorical weapons
- clarity of expression
- weaponization of language
- writing advice
- caret
- rhetoric
- proofreading
- editors (publishing)
- technology
- color codes
- fear
- fresh eyes
- typewriter affordances
- teaching as learning
- put the letter in a drawer
- travelling
- cruelty
- preparation
- concatenation
- card index as productivity system
- indexing methods
- productivity
- the unknown
- writing and waiting
- cards of equal size
- patience
- efficiency
- hyperlinks
- rhyme or reason
- waiting
- How to Read a Book
- context
- fame
- Thomas Carlyle
- writing affordances
- Alphonse Loisette
- list of period-headings
- novelty
- economies of scale
- learning
- open questions
- writing with empathy
- Eustace Hamilton Miles
-
-
pulitzercenter.org
-
Enslavers and the courts did not honor kinship ties to mothers, siblings, cousins. In most courts, they had no legal standing. Enslavers could rape or murder their…
This shows how much enslavers despised enslaved people: they didn't want them to have families, and they could do anything to their slaves as if they owned them, which is very cruel.
-
Hundreds of black veterans were beaten, maimed, shot and lynched.
This just shows the amount of hate for people with a darker skin color, which is completely inhuman.
-
Despite the guarantees of equality in the 14th Amendment, the Supreme Court’s landmark Plessy v. Ferguson decision in 1896 declared that the racial segregation of black Americans was constitutional.
Even though slavery ended, this quote explains that one of its effects was racism, which was never-ending, or at least would take a very long time to end.
-
They had no claim to their own children, who could be bought, sold and traded away from them on auction blocks alongside furniture and cattle or behind storefronts that advertised ‘‘Negroes for Sale.’’
This quote shows how enslaved Africans were treated, especially with their children, who were separated from them and sold to other people to work or one day become slaves themselves; those kids would grow up knowing nothing about their parents or their families.
-
-
www.sciencedirect.com
-
The widespread availability and addictive nature of the loot box system makes it crucial to regulate such monetization practices to protect vulnerable individuals such as young people, lonely individuals, and problem gamblers.
This last section may offer a good solution to the issue of loot box addiction.
-
the study looked at financial consequences and the role of problem gambling in these associations.
Both problem gambling and financial consequences are good angles from which to look at the issue of online gambling and loot boxes.
-
With respect to H1, Loneliness had a positive association with Loot Box Purchasing
Helps strengthen the previous claim that lonely individuals are vulnerable to loot boxes.
-
Therefore, the positive association between Loot Box Purchasing and Indebtedness was indirect and mediated through Problem gambling.
The connection between loot boxes and indebtedness may be indirect. However, researching problem gambling further may produce claims that help strengthen the research question.
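To make the "mediated through problem gambling" idea concrete, here is a minimal Baron & Kenny-style sketch in R with simulated data; the variable names and effect sizes are hypothetical, and the study itself likely used more sophisticated survey modelling:

# Sketch of a simple mediation check for the indirect path in the excerpt:
# loot box purchasing -> problem gambling -> indebtedness.
# Simulated placeholder data, not the study's data.
set.seed(42)
n <- 500
loot_box <- rnorm(n)
gambling <- 0.5 * loot_box + rnorm(n)        # mediator
indebted <- 0.6 * gambling + rnorm(n)        # outcome driven by the mediator

summary(lm(indebted ~ loot_box))             # total effect
summary(lm(gambling ~ loot_box))             # path a
summary(lm(indebted ~ loot_box + gambling))  # paths b and c' (direct)
# If loot_box's coefficient shrinks toward zero once gambling is included,
# the association is consistent with mediation through problem gambling.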
-
Loot box purchasing was measured with a single-item “How have your online consumer habits changed during the coronavirus pandemic regarding the following services in comparison to your previous habits: Loot box purchases in digital games”
Need to find further studies from after the pandemic, but this can be a good baseline for how individuals can be affected by the 'predatory monetization schemes' previously mentioned in the journal.
-
Increased loot box purchasing is positively associated with indebtedness. Given that gambling activities and loot box purchasing often co-occur
This overall feeling of indebtedness can help strengthen the claim that loot boxes produce a hopelessness similar to what gambling produces.
-
Loot box expenditure can add to financial strain caused by excessive gambling (Hing et al., 2022), but it might be problem gambling that plays a major role in debt problems among loot box buyers
Potentially, looking for cases of severe loot box addiction can help support the claim that these loot boxes mask the true nature of online gambling.
-
‘predatory monetization schemes’ are designed to make players both financially and psychologically committed to a game with a purpose of spending more and more money.
Good to look into further, as these 'predatory monetization schemes' could be a point to bring up about how loot boxes change the perception of online gambling by posing as a lighter form of it.
-
Loot boxes are commonly juxtaposed with forms of gambling and generally perceived as a gambling-like activity
The term gambling-like activity can be key in placing loot boxes as an activity closely linked to gambling.
-
Problem gambling is more common among those of lower income (Hahmann et al., 2021), but gambling can further worsen the situation, leading to severe financial problems such as indebtedness
Good point to bring up in the dangers of gambling.
-
Studies have found that loneliness is a risk factor for problem gambling
Potentially a good angle to describe the individuals that might be the most vulnerable to loot boxes and online gambling in general.
-
Several studies have found associations between loot box purchasing and poorer mental health and distress
These studies would be helpful in strengthening the claim that those who are mentally struggling are the most vulnerable. These studies can also support the potential connection between loot box purchasing and adverse effects on mental health. However, this claim will require additional research using multiple sources and studies.
-
Concerns have been raised particularly in relation to ‘loot boxes’ that present a controversial form of in-game purchases in pursuit of randomized rewards such as weapons or cosmetic features
Good definition for what a loot box is. Also helps present the loot box as a form of in-game purchase based on luck.
-
The chance-based nature of loot boxes is often juxtaposed with mechanisms of gambling, and these gambling-like mechanisms make them potentially addictive for players
Strong point that can be used as an argument for why loot boxes can hurt individuals, given their similarities to gambling. This can also help show how loot boxes promote a form of online gambling.
-
-
learn.cantrill.io
-
Hello there, folks.
Thanks once again for joining.
Now that we've got a little bit of an understanding of what problem cloud is solving, let's actually go ahead and define it.
So what we'll talk about is technology on tap, a common phrase that you might have heard about when talking about cloud.
What is it and why would we say that?
Then what we're actually going to do is walk through the NIST definition of cloud.
So there are five key properties that the National Institute of Standards and Technology uses to determine whether or not something is cloud.
So we'll walk through that.
So we've got a good understanding of what cloud is and what cloud is not.
So first things first, technology on tap.
Why would we refer to cloud as technology on tap?
Well, let's have a think about the taps we do know about.
When you want access to water, if you're lucky enough to have access to a nice and easy supply of water, all you really need to do is turn on your tap and get access to as little or as much water as you want.
You can turn that on and off as you require.
Now, we know that that's easy for us.
All we have to worry about is the tap and paying the bill for the amount of water that we consume.
But what we don't really have to worry about is everything that goes in behind the scenes.
So the treatment of the water to bring it up to drinking standards, the actual storage of that treated water, and then the transportation of that through the piping network to actually get to our tap.
All of that is managed for us.
We don't need to really worry about what happens behind the scenes.
All we do is focus on that tap.
We turn it on if we want more.
We turn it off when we are finished.
We only pay for what we consume.
So you might be able to see where I'm going with this.
This is exactly what we are talking about with cloud.
With cloud, however, it's not water that we're getting access to, it is technology.
So if we want access to technology, we use the cloud.
We push some buttons, we click on an interface, we use whatever tool we require, and we get access to those servers, that storage, that database, whatever it might be that we require in the cloud.
Now again, behind the scenes, we don't have to worry about the data centers that host all of this technology, all of these services that we want access to.
We don't worry about the physical infrastructure, the hosting infrastructure, the storage, all the different bits and pieces that actually get that technology to us, we don't need to worry about.
And how does it get to us?
How is it available all across the globe?
Well, we don't need to worry about that connectivity and delivery as well.
All of this behind the scenes when we use cloud is managed for us.
All we have to worry about is turning on or off services as we require.
And this is why you can hear cloud being referred to as technology on tap, because it is very similar to the water utility service.
Utility service is another name you might hear cloud referred to by, because it's like water or electricity.
Cloud is like these utility services where you don't have to worry about all the infrastructure behind the scenes.
You just worry about the thing that you want access to.
And really importantly, you only have to pay for what you use.
You turn it on if you need it, you turn it off if you don't, you create things when you need them, delete them when you don't, and you only pay for those services when you have them, even though they are constantly available at your fingertips.
Now, compare this to the scenario we walked through earlier.
Traditionally, we would have to buy all of the infrastructure, have it sitting there idly, even if we weren't using it, we would still have had to pay for it, set it up, power it and keep it all running.
So this is a high level of what we are talking about with cloud.
Easy access to servers when you need them, turn them off when you don't, don't worry about all that infrastructure behind the scenes.
But that's a high level definition.
So let's now walk through what the NIST uses as the key properties to define cloud.
One of the first properties you can use to understand whether something is or is not cloud is understanding whether or not it provides you on demand self service access, where you can easily go ahead and get that technology without even having to talk to humans.
So what do I really mean by that?
Well, let's say you're a cloud administrator, you want to go ahead and access some resources in the cloud.
Now, if you do want access to some services, some data, some storage, an application, whatever it might be, well, you're probably going to have some sort of admin interface that you can use, whether that's a command line tool or some sort of graphical user interface, and you can easily use that to turn on any of the services that you need, web applications, data, storage, compute and much, much more.
And you don't have to go ahead, talk to another human, procure all of the infrastructure that runs behind the scenes.
You use your tool, it is self service, it is on demand, create it when you want it, delete it when you don't.
So that's on demand self service access and one of the key properties of the cloud.
Next, what I want to talk to you about is broad network access.
Now, this is where we're just saying, if something is cloud, it should be easy for you to access through standard capabilities.
So for example, if we are the cloud administrator, it's pretty common when you're working with technology to expect that you would have command line tools, web based tools and so on.
But even when we're not talking about cloud administrators and we're actually talking about the end users, maybe for example, accessing storage, it should be easy for them to do so through standard tools as well, such as a desktop application, a web browser or something similar.
Or maybe you've gone ahead and deployed a reporting solution in the cloud, like we spoke of in the previous lesson.
Well, you would commonly expect for that sort of solution that maybe there's also a mobile application to go and access all of that reporting data.
The key point here is that if you are using cloud, it is expected that all of the common standard sorts of accessibility options are available to you, public access, private access, desktop applications, mobile applications and so on.
So if that's what cloud is and how we access it, where actually is it?
That's a really important part of the definition of cloud.
And that's where we're referring to resource pooling, this idea that you don't really know exactly where the cloud is that you are going to access.
So let's say for example, you've got your Aussie Mart company.
If they want to deploy their solution to be available across the globe, well, it should be pretty easy for them to actually go ahead and do that.
Now, we don't know necessarily where that is.
We can get access to it.
We might say, I want my solution available in Australia East for example, or Europe or India or maybe central US for example.
All of these refer to general locations where we want to deploy our services.
When you use cloud, you are not going to go ahead and say, I want one server and I want it deployed to the data center at 123 data center street.
Okay, you don't know the physical address exactly or at least you shouldn't really have to.
All you need to know about is generally where you are going to go and deploy that.
Now, you will also see that for most cloud providers, you've got that global access in terms of all the different locations you can deploy to.
And really importantly, in terms of all of these pooled resources, understand that it's not just for you to use.
There will be other customers all across the globe who are using that as well.
So when you're using cloud, there are lots of resources.
They might be in lots of different physical locations and lots of different physical infrastructure and in use by lots of different customers.
And you don't really need to worry about that or know too much about it.
Another really important property of the cloud is something referred to as rapid elasticity.
Now elasticity is the idea that you can easily get access to more or less resources.
And when you work with cloud, you're actually going to commonly hear this being referred to as scaling out and in rather than just scaling up and down.
So what do I mean by that?
Well, let's say we've got our users that need to access our Aussie Mart store.
We might decide to use cloud to host our Aussie Mart web application.
And perhaps that's hosted on a server and a database.
Now, when that application gets really busy, for example, if we have lots of different users going to access it at the same time, we might want to scale out to meet demand.
That is to say, rather than having one server that hosts our web application, we might actually have three.
And if that demand for our application decreases, we might actually go ahead and decrease the underlying resources that power it as well.
What we are talking about here is scaling in and out by adding or decreasing the number of resources that host our application.
This is different from the traditional approach to scalability, where what we would normally do is just add CPU or add memory, for example.
We would increase the size of one individual resource that was hosting our solution.
So that's just elasticity at a high level and it's a really key property of cloud.
Now, we'll just say here that if you are worried about how that actually works behind the scenes in terms of how you host that application across duplicate resources, how you provide connectivity to that, that's all outside the scope of this beginners course, but it's definitely covered in other content as well.
So when you're using cloud, you get easy access to scale in and out and you should never feel like there are not enough resources to meet your demand.
To you, it should just feel like if you want a hundred servers, for example, then you can easily get a hundred servers.
All right, now the last property of cloud that I want to talk to you about is that of measured service.
When we're talking about measured service, what we're talking about is the idea that if you are using cloud to host your solutions, it should be really easy for you to go and say: I know what this is costing, I know where my resources are, how they are performing and whether there are any issues, and I can control the types of resources and the configuration that I'm going to deploy.
So for example, it should be easy for you to say, how much is it going to cost me for five gigabytes of storage?
What does my bill look like currently and what am I forecasted to be using over the remainder of the month?
Or maybe you want to say that certain services should not be allowed to be deployed across all regions.
Yes, cloud can be accessed across the globe, but maybe your organization only works in one part of a specific country and that's the only location you should be able to use.
These are the standard notions of measuring and controlling service and it's really common to all of the cloud providers.
All right, everybody.
So now you've got an understanding of what cloud is and how you can define it.
If you'd like to see more about this definition from the NIST, then be sure to check out the link that I've included for this lesson.
So thanks for joining me, folks.
I'll see you in the next lesson.
-
-
www.ncbi.nlm.nih.gov
-
IBMT involves learning that requires experience and explicit instruction. To ensure appropriate experience, coaches (qualified instructors) are trained to help novices practice IBMT properly. Instructors received training on how to interact with experimental and control groups to make sure they understand the training program exactly.
I would be interested to see if traditional focused meditation would have similar results.
-
Although no direct measures of brain changes were used in this study, some previous studies suggest that changes in brain networks can occur. Thomas et al. (40) showed that, in rats, one short experience of acute exposure to psychosocial stress reduced both short- and long-term survival of newborn hippocampal neurons. Similarly, the human brain is sensitive to short experience. Naccache et al. (41) showed that the subliminal presentation of emotional words (<100 ms) modulates the activity of the amygdala at a long latency and triggers long-lasting cerebral processes (41).
Let's review these other studies as well.
-
However, the lengthy training required has made it difficult to use random assignment of participants to conditions to confirm these findings.
Interesting.
-
a group
What is the size and composition of the group? Edit: Addressed further down.
-
may be easier to teach to novices because they would not have to struggle so hard to control their thoughts.
Very interesting. I thought the whole purpose of meditation WAS the struggle to control the thoughts. I thought that is where the benefits came from.
-
Thought control is achieved gradually through posture and relaxation, body–mind harmony, and balance with the help of the coach rather than by making the trainee attempt an internal struggle to control thoughts in accordance with instruction.
Certainly would make it more approachable.
-
The main effect of the training session was significant only for the executive network [F(1,78) = 9.859; P < 0.01]. More importantly, the group × session interaction was significant for the executive network [F(1,78) = 10.839; P < 0.01], indicating that the before vs. after difference in the conflict resolution score was significant only for the trained group
This would imply improved equanimity, but perhaps not long term focus improvement.
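For reference, the group × session interaction reported in the excerpt (F(1,78)) has the form of a standard mixed ANOVA. A minimal R sketch of that test follows, using simulated placeholder data and hypothetical column names; this is not the authors' analysis code:

# Sketch: the shape of the group x session interaction test from the excerpt,
# assuming a long-format data frame with one conflict score per subject per
# session (40 subjects per group gives the between-subjects df of 78).
df <- data.frame(
  subject = factor(rep(1:80, each = 2)),
  group = factor(rep(c("IBMT", "control"), each = 2, times = 40)),
  session = factor(rep(c("before", "after"), times = 80)),
  conflict_score = rnorm(160, mean = 100, sd = 15)  # placeholder data
)

# Mixed ANOVA: group is between-subjects, session is within-subjects
fit <- aov(conflict_score ~ group * session + Error(subject/session), data = df)
summary(fit)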
-
Performance of the ANT after 5 days of IBMT or control. Error bars indicate 1 SD. Vertical axis indicates the difference in mean reaction time between the congruent and incongruent flankers. The higher scores show less efficient resolution of conflict.
This particular study seems to show that the change in focus efficiency was actually better in the control group than in the experimental group.
-
-
learn.cantrill.io
-
Hey there everybody, thanks for joining.
It's great to have you with me in this lesson where we're going to talk about why cloud matters.
Now to help answer that question, what I want to do firstly is talk to you about the traditional IT infrastructure.
How did we used to do things?
What sort of challenges and issues did we face?
And therefore we'll get a better understanding of what cloud is actually doing to help.
We can look at how things used to be and how things are now.
So what we're going to do throughout this lesson is walk through a little bit of a scenario with a fictitious company called Aussie Mart.
So let's go ahead now, jump in and have a chat about the issues that they're currently facing.
Aussie Mart is a fictitious company that works across the globe selling a range of different Australia-related paraphernalia.
Maybe stuffed toys for kangaroos, koalas and that sort of thing.
Now they've currently got several different applications that they use that they provide access to for their users.
And currently the Aussie Mart team do not use the cloud.
So when we have a look at the infrastructure hosting these applications, we'll learn that Aussie Mart have a couple of servers, one server for each of the applications that they've got configured.
Now the Aussie Mart IT team have had to go and set up these servers with Windows, the applications and all the different data that they need for these applications to work.
And what's also important to understand about the Aussie Mart infrastructure is that all of this is currently hosted on their on-premises, customer-managed infrastructure.
So yes, the Aussie Mart team could have gone out and maybe used a data center provider.
But the key point here is that the Aussie Mart IT team have had to set up servers, operating systems, applications and a range of other infrastructure to support all of this: storage, networking, power, cooling.
Okay, these are the sorts of things that we have to manage traditionally before we were able to use cloud.
Now to help understand what sort of challenges that might introduce, let's walk through a scenario.
We're going to say that the Ozzymart CEO has gone and identified the need for reporting to be performed across these two applications.
And the CEO wants those reports to be up and ready by the end of this month.
Let's say that's only a week away.
So the CEO has instructed the finance manager and the finance manager has said, "Hey, awesome.
You know what?
I've found this great app out there on the internet called Reports For You.
We can buy it, download it and install it.
I'm going to go tell the IT team to get this up and running straight away."
So this might sound a little bit familiar to some of you who have worked in traditional IT where sometimes demands can come from the top of the organization and they filter down with really tight timelines.
So let's say for example, the finance manager is going to go along, talk to the IT team and say, "We need this Reports For You application set up by the end of month."
Now the IT team might be a little bit scared because, hey, when we look at the infrastructure we've got, it's supporting those two servers and applications okay, but maybe we don't have much more space.
Maybe we don't have enough storage.
Maybe we are using something like virtualization.
So we might not need to buy a brand new physical server and we can run up a virtual Windows server for the Reports For You application.
But there might just not be enough resources in general.
CPU, memory, storage, whatever it might be to be able to meet the demands of this Reports For You application.
But you've got a timeline.
So you go ahead, you get that server up and running.
You install the applications, the operating system data, all there as quickly as you can to meet these timelines that you've been given by the finance manager.
Now maybe it's not the best server that you've ever built.
It might be a little bit rushed and a little bit squished, but you've managed to get that server up and running with the Reports For You application and you've been able to meet those timelines and provide access to your users.
Now let's say that you've given access to your users for this Reports For You application.
Now let's say when they start that monthly reporting job, the Reports For You application needs to talk to the data across your other two applications, the Ozzymart Store and the Ozzymart Comply application.
And it's going to use that data to perform the reporting that the CEO has requested.
So you kick off this report job on a Friday.
You hope that it's going to be complete on a Saturday, but maybe it's not.
You check again on a Sunday and things are starting to get a little bit scary.
And uh-oh, Monday rolls around, the Reports For You report is still running.
It has not yet completed.
And that might not be so great because you don't have a lot of resources on-premises.
And now all of your applications are starting to perform really poorly.
So that Reports For You application is still running.
It's still trying to read data from those other two applications.
And maybe they're getting really, really slow and, let's hope not, but maybe the applications even go offline entirely.
Now those users are going to become pretty angry.
You're going to get a lot of calls to the help desk saying that things are offline.
And you're probably going to have the finance manager and every other manager reaching out to you saying, this needs to be fixed now.
So let's say you managed to push through, perhaps through the rest of Monday, and that report finally finishes.
You clearly need more resources to be able to run this report much more quickly at the end of each month so that you don't have angry users.
So what are you going to do to fix this for the next month when you need to run the report again?
Well, you might have a think about ordering some new software and hardware because you clearly don't have enough hardware on-premises right now.
You're going to have to wait some time for all of that to be delivered.
And then you're going to have to physically install and store it, set it up, get it running, and make sure that you've got everything you need for Reports For You to be running with more CPU and resources next time.
There's a lot of different work that you need to do.
This is one of the traditional IT challenges that we might face when the business has demands and expectations for things to happen quickly.
And it's not really necessarily the CEO or the finance manager's fault.
They are focused on what the business needs.
And when you work in the technology teams, you need to do what you can to support them so that the business can succeed.
So how might we do that a little bit differently with cloud?
Well, with cloud, we could sign up for a cloud provider, we could turn on and off servers as needed, and we could scale up and scale down, scale in and scale out resources, all to meet those demands on a monthly basis.
So that could be a lot less work to do and it could certainly provide you the ability to respond much more quickly to the demands that come from the business.
And rather than having to go out and buy all of this new infrastructure that you are only going to use once a month, well, as we're going to learn throughout this course, one of the many benefits of cloud is that you can turn things on and off really quickly and only pay for what you need.
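To make that concrete, here is a minimal sketch of turning a server on and off on demand, using Python with AWS's boto3 SDK as one example. The instance ID is a hypothetical placeholder, and the other providers' SDKs offer equivalent calls.

```python
# A minimal sketch of "turn servers on and off as needed", using Python and
# AWS's boto3 SDK as one example; the instance ID is a hypothetical
# placeholder, and Azure or GCP SDKs offer equivalent calls.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")
REPORTING_SERVER = "i-0123456789abcdef0"  # hypothetical instance ID

def start_for_month_end():
    """Start the reporting server just before the monthly job runs."""
    ec2.start_instances(InstanceIds=[REPORTING_SERVER])

def stop_after_reports():
    """Stop it afterwards, so you only pay while it's needed."""
    ec2.stop_instances(InstanceIds=[REPORTING_SERVER])
```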
So what might this look like with cloud?
Well, with cloud, what we might do is no longer have that rushed on-premises server that we were using for Reports For You.
Instead of that, we can go out to a public cloud provider like AWS, GCP or hopefully Azure, and you can set up those servers once again using a range of different features, products that are all available through the various public cloud providers.
Now, yes, in this scenario, we are still talking about setting up a server.
So that is going to take you some time to configure Windows, set up the application, all of the data and configuration that you require, but at least you don't need to worry about the actual physical infrastructure that is supporting that server.
You don't have to go out, talk to your procurement team, talk to different providers, or wait for physical infrastructure, licensing, software and other assets to be delivered.
With cloud, as we will learn, you can really quickly get online and up and running.
And also, if we had that need to ensure that the Reports For You application was running with lots of different resources at the end of the month, it's much easier when we use cloud to just go and turn some servers on and then maybe turn them off at the end of the month when they are no longer required.
This is the sort of thing that we are talking about with cloud.
We're really only just scratching the surface of what cloud can do and what cloud actually is.
But my hope is that through this lesson, you can understand how cloud changes things.
Cloud allows us to work with technology in a much different way than we traditionally would work with our on-premises infrastructure.
Another example that shows how cloud is different is that rather than using the Reports For You application, what we might actually choose to do is go to a public cloud provider that has an equivalent Reports For You solution, entirely built in the cloud and ready to go.
In this way, not only do we no longer have to manage the underlying physical infrastructure, we don't actually have to manage the application software installation, configuration, and all of that service setup.
With something like reporting software that's built in the cloud, we would just provide access to our users and only have to pay on a per-user basis.
So if you've used something like Zoom for meetings or Dropbox for data sharing, that's the sort of solution we're talking about.
So if we consider this scenario for Ozzymart and have a think about the benefits that they might access when they use the cloud:
Well, we can much more quickly get access to resources to respond to demand.
If we need to have a lot of different compute capacity working at the end of the month with cloud, like you'll learn, we can easily get access to that.
If we wanted to add lots of users, we could do that much more simply as well.
And something that the finance manager might really be happy about in this scenario is that we aren't going to go back and suggest to them that we need to buy a whole heap of new physical infrastructure right now.
When we think about how Ozzymart would traditionally have handled this scenario, they would have to go and buy some new physical servers, resources, storage, networking, whatever that might be, to meet the needs of this Reports For You application.
And really, they're probably going to have to strike a balance between having enough infrastructure to ensure that the Reports For You application completes its job quickly and not buying too much infrastructure that's just going to be sitting there unused whilst the Reports For You application is not running.
And really importantly, when we go to cloud, this difference of not having to buy lots of physical infrastructure upfront is referred to as capital expenditure versus operational expenditure.
Really, what we're just saying here is rather than spending a whole big lump sum all at once to get what you need, you can just pay on a monthly basis for what you need when you need it.
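As a toy illustration of that CapEx versus OpEx difference, here is a short Python sketch. Every number in it is a hypothetical illustration value, not a real hardware or cloud price.

```python
# A toy CapEx vs OpEx comparison; every number here is a hypothetical
# illustration value, not a real cloud or hardware price.
server_capex = 12_000        # buy a physical server upfront (CapEx)
cloud_hourly = 0.50          # rent equivalent cloud capacity (OpEx)
hours_needed_per_month = 24  # the report only needs extra capacity briefly

monthly_opex = cloud_hourly * hours_needed_per_month   # $12 per month
five_year_opex = monthly_opex * 12 * 5                 # $720 over 5 years

print(f"CapEx: ${server_capex} upfront")
print(f"OpEx:  ${monthly_opex}/month (${five_year_opex} over five years)")
```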
And finally, one of the other benefits that you'll also see is that we're getting a reduction in the amount of different tasks that we have to complete in terms of IT administration, set up of operating systems, management of physical infrastructure, what the procurement team has to manage, and so on.
Again, right now we're just talking really high level about a fictitious scenario for Ozzymart to help you to understand the types of things and the types of benefits that we can get access to with cloud.
So hopefully if you're embarking on a cloud journey, you're gonna have a happy finance manager, CEO, and other team members that you're working with as well.
Okay, everybody, so that's a wrap to this lesson on why cloud matters.
As I've said, we're really only just scratching the surface.
This is just to introduce you to a scenario that can help you to understand the types of benefits we get access to with cloud.
As we move throughout this course, we'll progressively dive deeper in terms of what cloud is, how you define it, the features you get access to, and other common concepts and terms.
So thanks for joining me, I'll see you there.
-
-
-
“I do not want my wife to take up with any other man; if she does, this real estate goes to my estate.” The wife re-married. Does she own the realty in fee simple?
The more apparent search item here is "fee simple". However, another search item is whether a promise not to remarry, which could come at a detriment, is supported by consideration and is legally enforceable.
-
-
-
SOTs generated by the anomalous Hall effect in FM/NM/FM multilayers were predicted [13] and experimentally realized [14]
Is this normal?
-
-
www.nytimes.com www.nytimes.com
-
Marrim thinks they will still find a way to smoke. “Kids break the rules — that’s the way of the world,” she said. “We were all kids and we tried it for the first time,” she added. “Might as well do it in the safety of a lounge.”
Marrim feels that hookah is a big part of her life because it helped her feel liberated, even though she was looked at as shameful because she is a woman. That did not stop her; she would make her own hookah when she was younger so she could smoke. She's not wrong: kids like to break the rules.
-
the chemicals in hookah smoke are similar to those found in cigarette smoke.
This is due to hookah being tobacco that you inhale into your lungs, so it's still a health problem because you get smoke in your lungs.
-
birthdays, graduations, that time you cried over the crush who didn’t like you back or showed off your smoke ring skills to your friends. “It’s like a rite of passage here when you start smoking hookah,” Marrim said.
The hookah lounge is more than a place to smoke; it is a place where people get together to celebrate special events like birthdays, relieve some stress, or hang out with friends.
-
-
thewasteland.info thewasteland.info
-
on the other side
The water that has only recently brought about death to an unfortunate sailor and has seemingly threatened “Gentile[s]” and “Jew[s]” and even us, the readers, now becomes the force whose absence leads to death. What if the “death by water” that Madame Sosostris warned about was not the drowning but the death brought by its lack? This absence—spiritual and physical—defines the drought that pervades society.
In essence, that warning has already come true. In the search for meaning, earthly desires have drowned humanity. What comes after is stillness: a period of profound spiritual drought. This lack of spirituality induces apocalypse: the cycle of life seemingly becomes broken. The silence in the mountains does not give way to voice, and the stillness described in Death by Water that follows the storm does not imply recovery; instead, it leads to further destruction. There is no resurgence after the storm, only desolation.
This desolation is no less overwhelming than the indulgence that preceded it. The absence of water—a metaphor for spiritual sustenance—is inescapable. The mountains, once symbols of “solitude,” “silence” and reflection are now dry and barren. The use of “even” in these lines underscores the notion of the totality of this spiritual drought. There is no refuge, no shelter, not “even” in the mountains.
Eliot further juxtaposes biblical light, associated with Christ as “the sun shineth in his strength,” as described in Revelation, with thunder, transforming it into a symbol of apocalypse; the thunder itself represents a loud rumbling or crashing noise after a lightning flash. This choice of title and imagery seems to suggest that divine intervention may have already occurred—unrecognized and unheeded, leaving only a loud noise as its product. What if Jesus is already here “walking beside” us? Left unrecognized, however, he does not intervene. This notion is underscored by the repetition of the question regarding the identity of this third figure: “Who is the third who walks always beside you?” In the second reiteration of the question, however, “beside” changes to “on the other side.” This divine figure, most likely Christ, is present, yet now isolated by the walls of mountains we ourselves have built.
The tragedy of this drought, thus, seems to lie not in the absence of divine intervention, but in humanity’s inability to recognize it. In this contemporary world, it is not the storm that destroys; it is the stillness after, where the absence of recognition leads to a deeper decay. The apocalypse has already begun (or potentially has almost reached its culmination), not in fire or flood, but in silence and spiritual blindness.
-
-
www.shutterfly.com www.shutterfly.com
-
Thanksgiving is a time to reflect on the things we’re grateful for and to share that gratitude with the people who matter most. Along with gathering around the table, sending a heartfelt card is a meaningful way to reach out to friends, family, and co-workers—especially those who can’t join you in person. A thoughtful message can remind them how much they’re loved and appreciated. Whether you’re sending Thanksgiving cards or inviting loved ones to celebrate with you, these Thanksgiving messages and well wishes will help express your gratitude this season.
Well-written paragraph, reads very smoothly. 1. First sentence states what Thanksgiving is all about. 2. Second and third smoothly transition from the first into the need for sending messages for Thanksgiving. 3. Last hints at some of the tangible options to be discussed, then summarizes the value of Thanksgiving messages.
-
-
docdrop.org docdrop.orgview1
-
"You guys are no help. Literally no help. Why do you guys have me in here?" she protested. Sofia's step-grandfather was so angry with the school administrators (and perhaps intimidated by them) that Lola tried to intervene. (He tells us that when he was growing up here in the 1950s, all the parents were involved in the schools, but now they are completely uninterested. "They would rather let others do it, but then no one gets involved."
Nowadays, I think that's the case for a lot of schools in the U.S. Many parents aren't as involved in school affairs as they were back in the day. Back then, parents actually cared about their children's education and the material they were being taught, but now parents just send their kids to these schools and are not involved whatsoever.
-
-
Local file Local file
-
Cyber-EnabledNetworks
-
Has the capacity to be entwined with other crimes such as extortion
-
Intensifies amid the rise of AIs
-
-
Mafia networks
-
Only 18 of them in Canada
-
They're mainly based in Ontario and Quebec but have connections to more than 10 countries
-
They're very violent
-
Active in the private business sector where they commit money laundering
-
-
Extortion
- Force or threats are used to obtain money
Eg. Co-op extortion: the perpetrator threatened to release sensitive data to the public if certain demands weren't met
-
Piracy
-
The crime of stealing intellectual property and distributing it either for a reduced price or for free
-
Eg. Zlibrary
-
Causes financial consequences for the movie producer, its promoter, and others associated with the production of the movie
-
What would be the difference between piracy and buying something and selling it at a thrifting site at a reduced price?
-
Eg. Mangamura
-
-
-
inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
-
Responsibility to the tree makes everyone pause before beginning. Sometimes I have that same sense when I face a blank sheet of paper. For me, writing is an act of reciprocity with the world; it is what I can give back in return for everything that has been given to me. And now there’s another layer of responsibility, writing on a thin sheet of tree and hoping the words are worth it. Such a thought could make a person set down her pen.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk through another type of storage.
This time, instance store volumes.
It's essential for all of the AWS exams and real-world usage that you understand the pros and cons for this type of storage.
It can save money, improve performance or it can cause significant headaches so you have to appreciate all of the different factors.
So let's just jump in and get started because we've got a lot to cover.
Instance store volumes provide block storage devices, so raw volumes which can be attached to an instance, presented to the operating system on that instance, and used as the basis for a file system which can then in turn be used by applications.
So far they're just like EBS, only local instead of being presented over the network.
These volumes are physically connected to one EC2 host and that's really important.
Each EC2 host has its own instance store volumes and they're isolated to that one particular host.
Instances which are on that host can access those volumes, and because they're locally attached they offer the highest storage performance available within AWS, much higher than EBS can provide, and more on why this is relevant very soon.
They're also included in the price of any instances which they come with.
Different instance types come with different selections of instance store volumes, and for any instances which include instance store volumes they're included in the price of that instance, so it comes down to use it or lose it.
One really important thing about instance store volumes is that you have to attach them at launch time, and unlike EBS you can't attach them afterwards.
I've seen this question come up a few times in various AWS exams about adding new instance store volumes after instance launch, and it's important that you remember that you can't do this: it's launch time only.
Depending on the instance type you're going to be allocated a certain number of instance store volumes. You can choose to use them or not, but if you don't, you can't adjust this later.
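As a hedged sketch of what "launch time only" means in practice: instance store volumes are declared in the block device mapping passed when the instance is launched. The AMI ID below is a placeholder, and exact behavior varies by instance generation.

```python
# A hedged sketch of "launch time only": instance store volumes are declared
# in the block device mapping passed to RunInstances. The AMI ID is a
# placeholder; exact behavior varies by generation (NVMe instance store on
# newer Nitro types is attached automatically without a mapping).
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="d2.xlarge",          # an instance type with instance store
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        # Map the host's first instance store volume into the instance.
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
    ],
)
```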
This is how instance store architecture looks.
Each instance can have a collection of volumes which are backed by physical devices on the EC2 host which that instance is running on.
So in this case host A has three physical devices and these are presented as three instant store volumes and host B has the same three physical devices.
Now in reality EC2 hosts will have many more but this is a simplified diagram.
Now on host A instance 1 and 2 are running instance 1 is using one volume and instance 2 is using the other two volumes and the volumes are named ephemeral 0, 1 and 2.
Roughly the same architecture is present on host B but instance 3 is the only instance running on that host and it's using ephemeral 1 and ephemeral 2 volumes.
Now these are ephemeral volumes, they're temporary storage, and as a solutions architect or a developer or an engineer you need to think of them as such.
If instance 1 stored some data on ephemeral volume 0 on EC2 host A, let's say a cat picture, and then for some reason the instance migrated from host A through to host B, then it would still have access to an ephemeral 0 volume, but it would be a new physical volume: a blank block device.
So this is important: if an instance moves between hosts, then any data that was present on the instance store volumes is lost, and instances can move between hosts for many reasons.
If they're stopped and started, this causes a migration between hosts; another example is if host A was undergoing maintenance, then instances would be migrated to a different host.
When instances move between hosts they're given new blank ephemeral volumes. Data on the old volumes is lost; they're wiped before being reassigned. And even if you do something like change an instance type, this will cause an instance to move between hosts, and that instance will no longer have access to the same instance store volumes.
This is another risk to keep in mind: you should view all instance store volumes as ephemeral.
The other danger to keep in mind is hardware failure. If a physical volume fails, say the ephemeral 1 volume on EC2 host A, then instance 2 would lose whatever data was on that volume.
These are ephemeral volumes, treat them as such. They're temporary and should not be used for anything where persistence is required.
Now the size of instance store volumes and the number of volumes available to an instance vary depending on the type of instance and the size of instance.
Some instance types don't support instance store volumes, different instance types have different types of instance store volumes, and as you increase in size you're generally allocated larger numbers of these volumes, so that's something to keep in mind.
One of the primary benefits of instance store volumes is performance you can achieve much higher levels of throughput and more IOPS by using instance store volumes versus EBS.
I won't consume your time by going through every example but some of the higher-end figures that you need to consider are things like if you use a D3 instance which is storage optimized then you can achieve 4.6 GB per second of throughput and this instance type provides large amounts of storage using traditional hard disks so it's really good value for large amounts of storage.
It provides much higher levels of throughput than the maximums available when using HDD-based EBS volumes.
The I3 series which is another storage optimized family of instances these provide NVMe SSDs and this provides up to 16 GB per second of throughput and this is significantly higher than even the most high performance EBS volumes can provide and the difference in IOPS is even more pronounced versus EBS with certain I3 instances able to provide 2 million read IOPS and 1.6 million write IOPS when optimally configured.
In general instance store volumes perform to a much higher level versus the equivalent storage in EBS.
I'll be doing a comparison of EBS versus instance store elsewhere in this section which will help you in situations where you need to assess suitability but these are some examples of the raw figures.
Now before we finish this lesson just a number of exam power-ups.
Instance store volumes are local to an EC2 host, so if an instance moves between hosts you lose access to the data on those volumes. You can only add instance store volumes to an instance at launch time; if you don't add them then, you cannot come back later and add additional instance store volumes. And any data on instance store volumes is lost if the instance moves between hosts, if it gets resized, or if you have either a host failure or a specific volume hardware failure.
Now in exchange for all of these restrictions, instance store volumes provide high performance. It's the highest data performance that you can achieve within AWS; you just need to be willing to accept all of the shortcomings around the risk of data loss, its temporary nature, and the fact that it can't survive through restarts, moves or resizes.
It's essentially a performance trade-off: you're getting much faster storage as long as you can tolerate all of the restrictions.
Now with instance store volumes you pay for them anyway; they're included in the price of an instance. So generally, when you're provisioning an instance which does come with instance store volumes, there is no advantage to not utilizing them. You can decide not to use them inside the OS, but you can't physically add them to the instance at a later date.
Just to reiterate, and I'm going to keep repeating this throughout this section of the course: instance store volumes are temporary. You cannot use them for any data that you rely on or data which is not replaceable. They give you amazing performance, but they are not for the persistent storage of data. At this point that's all of the theory that I wanted to cover, so that's the architecture and some of the performance trade-offs and benefits that you get with instance store volumes. Go ahead and complete this video, and when you're ready, join me in the next, which will be an architectural comparison of EBS and instance store to help you in exam situations pick between the two.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about the Hard Disk Drive or HDD-based volume types provided by EBS.
HDD-based means they have moving parts: platters which spin, and little robot arms known as heads which move across those spinning platters.
Moving parts mean slower performance, which is why you'd only want to use these volume types in very specific situations.
Now let's jump straight in and look at the types of situations where you would want to use HDD-based storage.
Now there are two types of HDD-based storage within EBS.
Well that's not true, there are actually three but one of them is legacy.
So I'll be covering the two ones which are in general usage.
And those are ST1 which is throughput optimized HDD and SC1 which is cold HDD.
So think about ST1 as the fast hard drive not very agile but pretty fast and think about SC1 as cold.
ST1 is cheap, it's less expensive than the SSD volumes which makes it ideal for any larger volumes of data.
SC1 is even cheaper but it comes with some significant trade-offs.
Now ST1 is designed for data which is sequentially accessed because it's HDD-based it's not great at random access.
It's more designed for data which needs to be written or read in a fairly sequential way.
Applications where throughput and economy is more important than IOPS or extreme levels of performance.
ST1 volumes range from 125 GB to 16 TB in size and you have a maximum of 500 IOPS.
But and this is important IO on HDD-based volumes is measured as 1 MB blocks.
So 500 IOPS means 500 MB per second.
Now these are maximums; HDD-based storage works in a similar way to how GP2 volumes work, with a credit bucket.
Only with HDD-based volumes it's done around MB per second rather than IOPS.
So with ST1 you have a baseline performance of 40 MB per second for every 1 TB of volume size.
And you can burst to a maximum of 250 MB per second for every TB of volume size.
Obviously up to the maximum of 500 IOPS and 500 MB per second.
ST1 is designed for when cost is a concern but you need frequent access storage for throughput intensive sequential workloads.
So things like big data, data warehouses and log processing.
Now SC1 on the other hand is designed for infrequent workloads.
It's geared towards maximum economy when you just want to store lots of data and don't care about performance.
So it offers a maximum of 250 IOPS.
Again this is with a 1 MB IO size.
So this means a maximum of 250 MB per second of throughput.
And just like with ST1 this is based on the same credit pool architecture.
So it has a baseline of 12 MB per second per TB of volume size and a burst of 80 MB per second per TB of volume size.
So you can see that this offers significantly less performance than ST1 but it's also significantly cheaper.
And just like with ST1 volumes can range from 125 GB to 16 TB in size.
This storage type is the lowest cost EBS storage available.
It's designed for less frequently accessed workloads.
So if you have colder data, archives or anything which requires less than a few loads or scans per day then this is the type of storage volume to pick.
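A small Python sketch of the throughput model just described, using only the per-TB rates and per-volume caps quoted in this lesson:

```python
# A small calculator for the HDD throughput model above: baseline and burst
# scale per TB of volume size, capped at the per-volume maximums quoted in
# this lesson (ST1: 40/250 MB/s per TB up to 500 MB/s; SC1: 12/80 MB/s per
# TB up to 250 MB/s).
def hdd_throughput_mb_s(size_gb, baseline_per_tb, burst_per_tb, cap):
    tb = size_gb / 1024
    return min(baseline_per_tb * tb, cap), min(burst_per_tb * tb, cap)

print(hdd_throughput_mb_s(2048, 40, 250, 500))   # ST1 2 TB: (80.0, 500.0)
print(hdd_throughput_mb_s(2048, 12, 80, 250))    # SC1 2 TB: (24.0, 160.0)
```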
And that's it for HDD based storage.
Both of these are lower cost and lower performance versus SSD.
Designed for when you need economy of data storage.
Picking between them is simple.
If you can tolerate the trade-offs of SC1 then use that.
It's super cheap and for anything which isn't day to day accessed it's perfect.
Otherwise choose ST1.
And if you have a requirement for anything IOPS based then avoid both of these and look at SSD based storage.
With that being said though that's everything that I wanted to cover in this lesson.
Thanks for watching.
Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
-
pm.nlx.com.proxy.library.vanderbilt.edu pm.nlx.com.proxy.library.vanderbilt.edu
-
The conclusion we would draw is apparent; — if there is a similarity of minds discernible in the whole human race, can dissimilitude of forms or the gradations of complexion prove that the earth is peopled by many different species of men?
The question: what matters more? Hearts/minds? Or Complexion?
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to continue my EBS series and talk about provisioned IOPS SSD.
So that means IO1 and IO2.
Let's jump in and get started straight away because we do have a lot to cover.
Strictly speaking there are now three types of provisioned IOPS SSD.
Two which are in general release IO1 and its successor IO2 and one which is in preview which is IO2 Block Express.
Now they all offer slightly different performance characteristics and different prices, but the common factor is that IOPS are configurable independent of the size of the volume, and they're designed for super high performance situations where low latency and consistency of that low latency are both important characteristics.
With IO1 and IO2 you can achieve a maximum of 64,000 IOPS per volume, and that's four times the maximum for GP2 and GP3, and with IO1 and IO2 you can achieve 1,000 MB per second of throughput.
This is the same as GP3 and significantly more than GP2.
Now IO2 Block Express takes this to another level.
With Block Express you can achieve 256,000 IOPS per volume and 4,000 MB per second of throughput per volume.
In terms of the volume sizes that you can use with provisioned IOPS SSDs with IO1 and IO2 it ranges from 4 GB to 16 TB and with IO2 Block Express you can use larger up to 64 TB volumes.
Now I mentioned that with these volumes you can allocate IOPS performance values independently of the size of the volume.
Now this is useful for when you need extreme performance for smaller volumes or when you just need extreme performance in general but there is a maximum of the size to performance ratio.
For IO1 it's 50 IOPS per GB of size so this is more than the 3 IOPS per GB for GP2.
For IO2 this increases to 500 IOPS per GB of volume size and for Block Express this is 1000 IOPS per GB of volume size.
Now these are all maximums and with these types of volumes you pay for both the size and the provisioned IOPS that you need.
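To see how those ratios and caps interact, here is a small Python sketch using only the figures quoted in this lesson:

```python
# The size-to-performance ratios quoted above: the IOPS you can provision on
# a volume are capped both by the per-GB ratio and by the per-volume maximum
# for that volume type.
RATIO_IOPS_PER_GB = {"io1": 50, "io2": 500, "io2-block-express": 1_000}
PER_VOLUME_MAX = {"io1": 64_000, "io2": 64_000, "io2-block-express": 256_000}

def max_provisionable_iops(volume_type, size_gb):
    return min(RATIO_IOPS_PER_GB[volume_type] * size_gb,
               PER_VOLUME_MAX[volume_type])

print(max_provisionable_iops("io1", 100))   # 5,000  -> ratio-bound
print(max_provisionable_iops("io2", 100))   # 50,000 -> ratio-bound
print(max_provisionable_iops("io2", 200))   # 64,000 -> per-volume cap
```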
Now because with these volume types you're dealing with extreme levels of performance there is also another restriction that you need to be aware of and that's the per instance performance.
There is a maximum performance which can be achieved between the EBS service and a single EC2 instance.
Now this is influenced by a few things.
The type of volumes so different volumes have a different maximum per instance performance level, the type of the instance and then finally the size of the instance.
You'll find that only the most modern and largest instances support the highest levels of performance and these per instance maximums will also be more than one volume can provide on its own and so you're going to need multiple volumes to saturate this per instance performance level.
With IO1 volumes you can achieve a maximum of 260,000 IOPS per instance and a throughput of 7,500 MB per second.
It means you'll need just over four volumes of performance operating at maximum to achieve this per instance limit.
Oddly enough IO2 is slightly less at 160,000 IOPS for an entire instance and 4,750 MB per second and that's because AWS have split these new generation volume types.
They've added block express which can achieve 260,000 IOPS and 7,500 MB per second for an instance maximum.
So it's important that you understand that these are per instance maximums so you need multiple volumes all operating together and think of this as a performance cap for an individual EC2 instance.
Now these are the maximums for the volume types but you also need to take into consideration any maximums for the type and size of the instance so all of these things need to align in order to achieve maximum performance.
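As a quick sanity check on those per-instance maximums, here is a small Python sketch of how many volumes, each running at its per-volume maximum, it takes to saturate the instance cap:

```python
import math

# How many provisioned-IOPS volumes, each at its per-volume maximum, does it
# take to saturate the per-instance IOPS caps quoted above?
def volumes_to_saturate(instance_max_iops, per_volume_max_iops):
    exact = instance_max_iops / per_volume_max_iops
    return exact, math.ceil(exact)

print(volumes_to_saturate(260_000, 64_000))    # io1: (4.0625, 5) -> "just over four"
print(volumes_to_saturate(160_000, 64_000))    # io2: (2.5, 3)
print(volumes_to_saturate(260_000, 256_000))   # io2 Block Express: (~1.02, 2)
```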
Now keep these figures locked in your mind it's not so much about the exact numbers but having a good idea about the levels of performance that you can achieve with GP2 or GP3 and then IO1, IO2 and IO2 block express will really help you in real-world situations and in the exam.
Instance store volumes which we're going to be covering elsewhere in this section can achieve even higher performance levels but this comes with a serious limitation in that it's not persistent but more on that soon.
Now as a comparison the per instance maximums for GP2 and GP3 is 260,000 IOPS and 7,000 MB per second per instance.
Again don't focus too much on the exact numbers but you need to have a feel for the ranges that these different types of storage volumes occupy versus each other and versus instance store.
Now you'll be using provisioned IOPS SSD for anything which needs really low latency or sub millisecond latency, consistent latency and higher levels of performance.
One common use case is when you have smaller volumes but need super high performance and that's only achievable with IO1, IO2 and IO2 block express.
Now that's everything that I wanted to cover in this lesson.
Again if you're doing the sysops or developer streams there's going to be a demo lesson where you'll experience the storage performance levels.
For the architecture stream this theory is enough.
At this point though thanks for watching that's everything I wanted to cover go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Disease: Von Willebrand Disease (VWD) type 1
Patient(s): 13 yo, female and 14 yo, female, both Italian
Variant: VWF NM_000552.5: c.820A>C, p.(Thr274Pro)
Dominant negative effect
Heterozygous carrier
Variant located in the D1 domain on VWF
Phenotypes:
heterozygous carriers have no bleeding history
reduced VWF levels compatible with diagnosis of VWD type 1
increased FVIII:C/VWF:Ag ratio suggests reduced VWF synthesis/secretion as a possible pathophysiological mechanism
Normal VWFpp/VWF:Ag ratio
Modest alteration of multimeric pattern in plasma and platelet multimers
plasma VWF showed slight increase of LMWM and decrease of IMWM and HMWM
Platelet VWF showed quantitative decrease of IMWM, HMWM, and UL multimers
In silico analysis:
SIFT, Align-GVGD, PolyPhen-2.0, SNP&GO, MutationTaster, and PMut all suggest damaging consequences.
PROVEAN and Effect suggest neutral effect
according to ACMG guidelines this variant was classified as pathogenic
-
-
genius.com genius.com
-
Sorry boy, but I've been hit by purple rain
Ventura Highway, track 14 on the album Here & Now by America (1972-11-04)
It’s unsure whether a connection between this lyric and the famous Prince song (which was released 12 years after “Ventura Highway”) exists, but at least two journalists from The San Diego Union and the Post-Tribune wrote that Prince got the phrase “Purple Rain” from here.
Asked to explain the phrase “purple rain” in “Ventura Highway,” Gerry Beckley responded: “You got me.”
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about two volume types available within AWS: GP2 and GP3.
Now GP2 is the default general purpose SSD based storage provided by EBS.
GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon.
Now let's just jump in and get started.
General Purpose SSD storage provided by EBS was a game changer when it was first introduced.
It's high performance storage for a fairly low price.
Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture.
So I want to get this out of the way first because it will help you understand the different storage types.
When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB.
And when you create it the volume is created with an I/O credit allocation.
Think of this like a bucket.
So an I/O is one input output operation.
An I/O credit is a 16 KB chunk of data.
So one I/OP is one chunk of 16 KB in one second.
If you're transferring a 160 KB file, that represents 10 I/O blocks of data.
So 10 blocks of 16 KB.
And if you do that all in one second that's 10 credits in one second.
So 10 I/Ops.
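That arithmetic is simple enough to sketch in a few lines of Python:

```python
import math

# The I/O credit arithmetic from the example above: credits are 16 KB chunks.
IO_CHUNK_KB = 16

def credits_for_transfer(file_size_kb):
    """Number of I/O credits (16 KB chunks) a transfer consumes."""
    return math.ceil(file_size_kb / IO_CHUNK_KB)

# A 160 KB file is 10 chunks; done in one second, that's 10 IOPS.
print(credits_for_transfer(160))   # 10
```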
When you aren't using the volume much you aren't using many I/Ops and you aren't using many credits.
During periods of high disc load you're going to be pushing a volume hard and because of that it's consuming more credits.
For example during system boots or backups or heavy database work.
Now if you have no credits in this I/O bucket you can't perform any I/O on the disc.
The I/O bucket has a capacity of 5.4 million I/O credits.
And it fills at the baseline performance rate of the volume.
So what does this mean?
Well every volume has a baseline performance based on its size with a minimum.
So streaming into the bucket at all times is a refill rate of 100 I/O credits per second.
This means as an absolute minimum regardless of anything else you can consume 100 I/O credits per second which is 100 I/Ops.
Now the actual baseline rate which you get with GP2 is based on the volume size.
You get 3 I/O credits per second per GB of volume size.
This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket.
Anything below 33.33 recurring GB gets this 100 I/O minimum.
Anything above 33.33 recurring gets 3 times the size of the volume as a baseline performance rate.
Now you aren't limited to only consuming at this baseline rate.
By default GP2 can burst up to 3000 I/Ops so you can do up to 3000 input output operations of 16 kb in one second.
And that's referred to as your burst rate.
It means that if you have heavy workloads which aren't constant you aren't limited by your baseline performance rate of 3 times the GB size of the volume.
So you can have a small volume which has periodic heavy workloads and that's OK.
What's even better is that the credit bucket it starts off full so 5.4 million I/O credits.
And this means that you could run it at 3000 I/Ops so 3000 I/O per second for a full 30 minutes.
And that assumes that your bucket isn't filling up with new credits which it always is.
So in reality you can run at full burst for much longer.
And this is great if your volumes are used initially for any really heavy workloads because this initial allocation is a great buffer.
The key takeaway at this point is if you're consuming more I/O credits than the rate at which your bucket is refilling then you're depleting the bucket.
So if you burst up to 3000 I/Ops and your baseline performance is lower then over time you're decreasing your credit bucket.
If you're consuming less than your baseline performance then your bucket is replenishing.
And one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes.
So you need to ensure that they're staying replenished and not depleting down to zero.
Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes which are up to 1 TB in size will use this I/O credit architecture.
But for volumes larger than 1 TB they will have a baseline equal to or exceeding the burst rate of 3000.
And so they will always achieve their baseline performance as standard.
They don't use this credit system.
The maximum I/O per second for GP2 is currently 16,000.
So any volume above 5.33 recurring TB in size achieves this maximum rate constantly.
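Here is a toy Python model of the credit bucket mechanics just described, using only the figures from this lesson; it also shows why real burst durations exceed the 30-minute back-of-envelope number, since the bucket keeps refilling:

```python
# A toy model of the GP2 credit system described above, assuming the figures
# from this lesson: a 5.4 million credit bucket, a baseline refill rate of
# max(100, 3 * size_gb) credits per second, a 3,000 IOPS burst limit, and a
# 16,000 IOPS hard maximum.
BUCKET_CAPACITY = 5_400_000
BURST_IOPS = 3_000

def baseline_iops(size_gb):
    return min(max(100, 3 * size_gb), 16_000)

def burst_duration_seconds(size_gb, demand_iops=BURST_IOPS):
    """Seconds a full bucket sustains a constant demand above baseline."""
    net_drain = demand_iops - baseline_iops(size_gb)
    return float("inf") if net_drain <= 0 else BUCKET_CAPACITY / net_drain

print(baseline_iops(100))                 # 300 credits/second refill
print(BUCKET_CAPACITY / BURST_IOPS / 60)  # 30 minutes, ignoring refill
print(burst_duration_seconds(100) / 60)   # ~33 minutes including refill
```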
GP2 is a really flexible type of storage which is good for general usage.
At the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next.
GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments.
Anything where you don't have a reason to pick something else.
It can be used for boot volumes and as I've mentioned previously it is currently the default.
Again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.
You can also use the elastic volume feature to change the storage type between GP2 and all of the others.
And I'll be showing you how that works in an upcoming lesson if you're doing the CIS Ops or developer associate courses.
If you're doing the architecture stream then this architecture theory is enough.
At this point I want to move on and explain exactly how GP3 is different.
GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler.
Every GP3 volume regardless of size starts with a standard 3,000 IOPS, so 3,000 16 KB operations per second, and it can transfer 125 MB per second.
That's standard regardless of volume size, and just like GP2, volumes can range from 1 GB through to 16 TB.
Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2.
So if you only intend to use up to 3000 IOPS then it's a no brainer.
You should pick GP3 rather than GP2.
If you need more performance then you can pay for up to 16000 IOPS and up to 1000 MB per second of throughput.
And even with those extras generally it works out to be more economical than GP2.
GP3 offers a higher max throughput as well so you can get up to 1000 MB per second versus the 250 MB per second maximum of GP2.
So GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default.
For now though at the time of creating this lesson GP2 is still the default.
In summary, GP3 is like if GP2 and IO1 (which I'll cover soon) had a baby.
You get some of the benefits of both in a new type of general purpose SSD storage.
Now the usage scenarios for GP3 are also much the same as GP2.
So virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.
You can safely swap GP2 to GP3 at any point, but just be aware that for anything above 3,000 IOPS the performance doesn't get added automatically like with GP2, which scales on size.
With GP3 you would need to add these extra IOPS, which come at an extra cost, and that's the same with any additional throughput.
Beyond the 125 MB per second standard it's an additional extra, but still, even including those extras, for most things this storage type is more economical than GP2.
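As a rough decision sketch, and only using the figures quoted in this lesson, the GP2-versus-GP3 choice can be summarized like this; treat it as a heuristic, not official guidance:

```python
# A rough decision sketch, based only on the figures in this lesson: GP3
# starts at 3,000 IOPS and 125 MB/s regardless of size, while GP2's IOPS
# scale with size at 3 per GB (minimum 100, maximum 16,000).
def suggest_general_purpose_type(size_gb, needed_iops, needed_mb_s):
    gp2_iops = min(max(100, 3 * size_gb), 16_000)
    if needed_iops <= 3_000 and needed_mb_s <= 125:
        return "gp3"   # cheaper base price, already fast enough
    if needed_iops <= gp2_iops and needed_mb_s <= 250:
        return "gp2 (size alone provides the IOPS)"
    return "gp3 with extra provisioned IOPS/throughput"

print(suggest_general_purpose_type(500, 2_000, 100))   # gp3
```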
At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson.
Go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to quickly step through the basics of the Elastic Block Store service known as EBS.
You'll be using EBS directly or indirectly, constantly as you make use of the wider AWS platform and as such you need to understand what it does, how it does it and the product's limitations.
So let's jump in and get started straight away as we have a lot to cover.
EBS is a service which provides block storage.
Now you should know what that is by now.
It's storage which can be addressed using block IDs.
So EBS takes raw physical disks and it presents an allocation of those physical disks and this is known as a volume and these volumes can be written to or read from using a block number on that volume.
Now volumes can be unencrypted or you can choose to encrypt the volume using KMS and I'll be covering that in a separate lesson.
Now, to EC2 instances, when you attach a volume to them they see a block device, raw storage, and they can use this to create a file system on top of it, such as EXT3, EXT4 or XFS and many more in the case of Linux, or alternatively NTFS in the case of Windows.
The important thing to grasp is that EBS volumes appear just like any other storage device to an EC2 instance.
Now storage is provisioned in one availability zone.
I can't stress enough the importance of this.
EBS in one availability zone is different than EBS in another availability zone and different from EBS in another AZ in another region.
EBS is an availability zone service.
It's separate and isolated within that availability zone.
It's also resilient within that availability zone so if a physical storage device fails there's some built-in resiliency but if you do have a major AZ failure then the volumes created within that availability zone will likely fail as will instances also in that availability zone.
Now with EBS you create a volume and you generally attach it to one EC2 instance over a storage network.
With some storage types you can use a feature called Multi-Attach which lets you attach it to multiple EC2 instances at the same time and this is used for clusters but if you do this the cluster application has to manage it so you don't overwrite data and cause data corruption by multiple writes at the same time.
You should by default think of EBS volumes as things which are attached to one instance at a time but they can be detached from one instance and then reattached to another.
EBS volumes are not linked to the instance lifecycle of one instance.
They're persistent.
If an instance moves between different EC2 hosts then the EBS volume follows it.
If an instance stops and starts or restarts the volume is maintained.
An EBS volume is created, it has data added to it and it's persistent until you delete that volume.
Now even though EBS is an availability zone based service you can create a backup of a volume into S3 in the form of a snapshot.
Now I'll be covering these in a dedicated lesson but snapshots in S3 are now regionally resilient so the data is replicated across availability zones in that region and it's accessible in all availability zones.
So you can take a snapshot of a volume in availability zone A and when you do so EBS stores that data inside a portion of S3 that it manages and then you can use that snapshot to create a new volume in a different availability zone.
For example availability zone B and this is useful if you want to migrate data between availability zones.
Now don't worry I'll be covering how snapshots work in detail including a demo later in this section.
For now I'm just introducing them.
EBS can provision volumes based on different physical storage types, SSD based, high performance SSD and volumes based on mechanical disks and it can also provision different sizes of volumes and volumes with different performance profiles all things which I'll be covering in the upcoming lessons.
For now again this is just an introduction to the service.
The last point which I want to cover about EBS is that you'll be billed using a gigabyte-month metric, so the price of one gig for one month would be the same as two gig for half a month, and the same as half a gig for two months.
Now there are some extras for certain types of volumes for certain enhanced performance characteristics but I'll be covering that in the dedicated lessons which are coming up next.
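The gigabyte-month arithmetic is easy to sketch; the per-GB-month price below is a hypothetical placeholder:

```python
# The gigabyte-month billing metric from above: these three allocations all
# cost the same. The per-GB-month price is a hypothetical placeholder.
def storage_cost(size_gb, months, price_per_gb_month=0.10):
    return size_gb * months * price_per_gb_month

print(storage_cost(1, 1))      # 1 GB for one month      -> 0.10
print(storage_cost(2, 0.5))    # 2 GB for half a month   -> 0.10
print(storage_cost(0.5, 2))    # 0.5 GB for two months   -> 0.10
```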
For now before we finish this service introduction let's take a look visually at how this architecture fits together.
So we're going to start with two regions in this example, that's us-east-1 and ap-southeast-2, and then in those regions we've got some availability zones, AZ-A and AZ-B, and then another availability zone in ap-southeast-2, and then finally the S3 service, which is running in all availability zones in both of those regions.
Now EBS as I keep stressing and I will stress this more is availability zone based so in the cut-down example which I'm showing in US-EAST-1 you've got two availability zones and so two separate deployments of EBS one in each availability zone and that's just the same architecture as you have with EC2.
You have different sets of EC2 hosts in every availability zone.
Now visually let's say that you have an EC2 instance in availability zone A.
You might create an EBS volume within that same availability zone and then attach that volume to the instance so critically both of these are in the same availability zone.
You might have another instance which this time has two volumes attached to it and over time you might choose to detach one of those volumes and then reattach it to another instance in the same availability zone and that's doable because EBS volumes are separate from EC2 instances.
It's a separate product with separate life cycles.
Now you can have the same architecture in availability zone B where volumes can be created and then attached to instances in that same availability zone.
What you cannot do, and I'm stressing this for the 57th time (small print: it might not actually be 57, but it's close).
What I'm stressing is that you cannot communicate cross availability zone with storage.
So the instance in availability zone B cannot communicate with and so logically cannot attach to any volumes in availability zone A.
It's an availability zone service so no cross AZ attachments are possible.
Now EBS replicates data within an availability zone so the data on a volume it's replicated across multiple physical devices in that AZ but and this is important again the failure of an entire availability zone is going to impact all volumes within that availability zone.
Now to resolve that, you can snapshot volumes to S3, and this means that the data is now replicated as part of that snapshot across AZs in that region, so that gives you additional resilience, and it also gives you the ability to create an EBS volume in another availability zone from this snapshot. You can even copy the snapshot to another AWS region, in this example AP-SOUTHEAST-2, and once you've copied the snapshot it can be used in that other region to create a volume, and that volume can then be attached to an EC2 instance in that same availability zone in that region.
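A hedged sketch of that cross-region flow, again with placeholder IDs; note that copy_snapshot is called from a client in the destination region:

```python
import boto3

# Client in the *destination* region (AP-SOUTHEAST-2).
ec2_syd = boto3.client("ec2", region_name="ap-southeast-2")

copy = ec2_syd.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    Description="Cross-region copy",
)
ec2_syd.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# Create a volume from the copied snapshot in an AZ of the destination
# region, ready to attach to an EC2 instance in that same AZ.
vol = ec2_syd.create_volume(
    AvailabilityZone="ap-southeast-2a",
    SnapshotId=copy["SnapshotId"],
)
```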
So that at a high level is the architecture of EBS.
Now depending on what course you're studying, there will be other areas that you need to deep dive on, so over the coming section of the course we're going to be stepping through the features of EBS which you'll need to understand. These will differ depending on the exam, but you will be learning everything you need for the particular exam that you're studying for.
At this point that's everything I wanted to cover, so go ahead and finish this lesson, and when you're ready, I look forward to you joining me in the next.
-
-
www.astralcodexten.com www.astralcodexten.com
-
Chomsky has long been an opponent of the statistical learning tradition of language modeling, essentially claiming that it does not provide insight about what humans know about languages, and that engineering success probably can’t be achieved without explicitly incorporating important mathematical facts about the underlying structure of language
-
-
furnaceandfugue.org furnaceandfugue.org
-
also Music, whether vocal or instrumental: herein the ancient Philosophers did so exercise themselves, that he was reputed unlearned, and forced to sing to the Myrtle, who refused the Harp in festivals, as is declared of Themistocles: in Music was Socrates instructed, and Plato himself, who concluded him not harmoniously compounded, that delighted not in Musical harmony: Pythagoras was very famous in the same, who is said to have used the symphony of music morning and evening to compose the minds of his disciples: for this is a peculiar virtue of Music, to quicken or refresh the affections by the different musical measures: So the Phrygian tune was by the Greeks termed warlike, because it was sung in war, and upon engagement, and had a singular virtue in stirring up the Spirits of the Soldiers; instead of which the Ionic is sometimes used for the same purpose, which was formerly esteemed
It appears that we are still in the period where all intellectual arts (music, mathematics, war tactics, etc.) are expressions of one and the same phenomenon of the mind and work off of each other, rather than the artificially separated disciplines of Chemistry and the like that we see later.
-
-
academic.oup.com academic.oup.com
-
John Jumper,
Joint winner of the 2024 Nobel Prize in Chemistry https://x.com/NobelPrize/status/1843951197960777760
-
Demis Hassabis,
Joint winner of the 2024 Nobel Prize in Chemistry https://x.com/NobelPrize/status/1843951197960777760
-
-
hybridpedagogy.org hybridpedagogy.org
-
writes, “It doesn’t matter to me if my classroom is a little rectangle in a building or a little rectangle above my keyboard. Doors are rectangles; rectangles are portals.”
This terrifies me! I always have screens in the classroom because we are so often watching clips, but I am afraid of all our screenified minds and want to resist the dissolution of rectangles in general...
-
How can we build platforms that support learning across age, race, culture, gender, ability, geography?
Interesting that class is missing here, when the digital divide remains a real challenge to online access....
-
objective, quantifiable, apolitical
of course education is not alone - almost every sphere of humanistic knowledge has been eclipsed by the logic of data analytics.
-
Paulo Freire, Pedagogy of the Oppressed
As a historian, I always want to know what year something was published!
-
-
mlpp.pressbooks.pub mlpp.pressbooks.pub
-
American attitudes toward international affairs followed the advice given by President George Washington in his 1796 Farewell Address. Washington had urged his countrymen to avoid “foreign alliances, attachments, and intrigues”,
It’s interesting that George Washington warned against getting involved in foreign affairs, considering that today we are more involved with other countries than any other country in the world.
-
-
furnaceandfugue.org furnaceandfugue.org
-
Argent vive
Cambridge's "Dictionary of Alchemical Imagery" asserts that Argent vive is synonymous with mercury and must be combined with Sulfur to produce the philosopher's stone. Interestingly, Sulfur is a popular snake repellent, so perhaps there is something about these two substances being oppositional that makes them more powerful together.
-
-
-
eLife Assessment
Wittkamp et al. investigated the spatiotemporal dynamics of the expectation of pain using an original fMRI-EEG approach. The methods are solid, and the evidence for a substantially different neural representation between the anticipatory and the actual pain period is convincing. These important findings are discussed within a general framework that encompasses their research questions, hypotheses, and analysis of results. Although the choice of conditions and their influence on the results might admit different interpretations, the manuscript is strong and contributes valuable insights to the field.
-
Reviewer #1 (Public review):
Summary:
In this important paper the authors investigate the temporal dynamics of the expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis, they report that expectations largely share the same set of activations before the administration of the painful stimulus and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the dorsolateral prefrontal cortex (DLPFC), the anterior insula, and the anterior cingulate cortex. The strength of evidence is convincing and the methods are solid, but a few alternative interpretations of the findings related to the control group, as well as a more in-depth discussion of the correlations between the BOLD and EEG signals, would strengthen the manuscript.
Strengths:
In line with open science principles, the article presents the data and the results in a complete and transparent fashion.

From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences and mapped on a combination of what we expect and what we get. New research methods, questions and analyses are needed to capture this evolving process.
Weaknesses:
The authors have addressed my concerns about the control condition and made some adjustments, namely acknowledging that participants cannot be "expectations" free and investigating whether scores in the control condition are simply due to a "regression to the mean".
General considerations and reflections
Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors choice of having 3 groups of positive, negative and "neutral" expectations is to be praised. On the other hand, also control groups form their expectations, and this can constitute a confounder in every experiment using expectation manipulation, if not appropriately investigated. The authors have addressed this element in their revised submission.
In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations about not only the temporal but even the spatial resolution should be acknowledged. This has been done. Given the anatomical and physiological complexity of the cortical connections, as we know from the animal world, it is still well possible that sub circuits are activated also for positive and negative expectations, but cannot be observed due to the limitation of our techniques. Indeed, on an empirical/evolutionary bases, it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses.<br /> Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would more integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula as a center for coding potential pain, perhaps more of a node in a system that signals potential dangers for the integrity of the body.<br /> The rationale for the choice of their EEG band has been outlined.
-
Reviewer #2 (Public review):
I appreciate the authors' thorough revision of the manuscript, which has significantly improved its quality. I have no additional comments or requests for further changes.
However, I remain in slight disagreement regarding the characterization of the neutral condition. My perspective is that it resembles more of a "medium" condition, making it challenging to understand what would be common to "high-medium" and "low-medium" contrasts. I suspect that the neutral condition might represent a state of high uncertainty since participants are informed that the algorithm cannot provide a prediction. From this viewpoint, the observed similarities in effects for both positive and negative expectations may actually reflect differences between certainty and uncertainty rather than the specific expectations themselves.
Nevertheless, the authors have addressed alternative interpretations of their discussion section, and I have no further requests. The paper is well-executed and demonstrates several strengths: the procedure effectively induced varying levels of expectations with clear impacts on pain ratings. Additionally, the integration of fMRI with EEG is commendable for tracking the transition from anticipatory to pain periods. Overall, the manuscript is strong and contributes valuable insights to the field.
-
Author response:
The following is the authors’ response to the original reviews.
We thank the reviewers for their careful and overall positive evaluation of our work and the constructive feedback! To address the main concerns, we have:
– Clarified a major misunderstanding of our instructions: Participants were only informed that they would receive different stimuli of medium intensity and were thus not aware that the stimulation temperature remained constant
– Implemented a new analysis to evaluate how participants rated their expectation and pain levels in the control condition
– Added a paragraph in the discussion in which we argue that our paradigm is comparable to previous studies
Below, we provide responses to each of the reviewers’ comments on our manuscript.
Reviewer #1 (Public Review):
Summary:
In this important paper, the authors investigate the temporal dynamics of expectation of pain using a combined fMRI-EEG approach. More specifically, by modifying the expectations of higher or lower pain on a trial-to-trial basis, they report that expectations largely share the same set of activations before the administration of the painful stimulus, and that the coding of the valence of the stimulus is observed only after the nociceptive input has been presented. fMRI-informed EEG analysis suggested that the temporal sequence of information processing involved the Dorsolateral prefrontal cortex (DLPFC), the anterior insula, and the anterior cingulate cortex. The strength of evidence is convincing, and the methods are solid, but a few alternative interpretations about the findings related to the control group, as well as a more in-depth discussion on the correlations between the BOLD and EEG signals, would strengthen the manuscript.
Thank you for your positive evaluation! In the revised version of the manuscript, we elaborated on the control condition and the BOLD-EEG correlations in more detail.
Strengths:
In line with open science principles, the article presents the data and the results in a complete and transparent fashion.
From a theoretical standpoint, the authors make a step forward in our understanding of how expectations modulate pain by introducing a combination of spatial and temporal investigation. It is becoming increasingly clear that our appraisal of the world is dynamic, guided by previous experiences, and mapped on a combination of what we expect and what we get. New research methods, questions, and analyses are needed to capture these evolving processes.
Thank you very much for these positive comments!
Weaknesses:
The control condition is not so straightforward. Across the manuscript it is defined as "no expectation", and in the legend of Figure 1 it is mentioned that the third state would be "no prediction". However, it is difficult to conceive that participants would not have any expectations or predictions. Indeed, in the description of the task it is mentioned that participants were instructed that they would receive stimuli during "intermediate sensitive states". The results of the pain scores and expectations might support the idea that the control condition is situated in between the placebo and nocebo conditions. However, since this control condition was not part of the initial conditioning, and participants had no reference to previous stimuli, one might expect that some ratings might have simply "regressed to the mean" for a lack of previous experience.
General considerations and reflections:
Inducing expectations in the desired direction is not a straightforward task, and results might depend on the exact experimental conditions and the comparison group. In this sense, the authors' choice of having 3 groups of positive, negative, and "neutral" expectations is to be praised. On the other hand, also control groups form their expectations, and this can constitute a confounder in every experiment using expectation manipulation, if not appropriately investigated.
Thank you for raising these important concerns! Firstly, it seems that we did not explain the experimental procedure clearly, as there appeared to be a general misunderstanding regarding our instructions. We want to emphasize that we did not tell participants that the stimulus intensity would always be the same, but that pain stimuli would be different temperatures of medium intensity. Furthermore, our instruction did not necessarily imply that our algorithm detected a state of medium sensitivity, but that the algorithm would not make any prediction, e.g., due to highly fluctuating states of pain sensitivity, or no clear-cut state of high or low pain sensitivity. We changed this in the Methods (ll. 556-560, 601-606, 612-614) and Results (ll. 181-192) sections of the manuscript to clarify these important features of our procedure.
Secondly, we absolutely agree that participants explicitly and implicitly form expectations regarding all conditions over time, including the control condition. We carefully considered your feedback and rephrased the control condition, no longer framing it as eliciting “no expectations” but as “neutral expectations” in the revised version of the manuscript. This follows the more common phrasing in the literature and acknowledges that participants indeed build up expectations in the control condition. However, we do still think that we can meaningfully compare the placebo and nocebo condition to the control condition to investigate the neuronal underpinnings of expectation effects. Independently of whether participants built up an expectation of “medium” intensities in the control condition, which caused them to perceive stimuli in line with this expectation, or simply perceived the stimuli as they were (of medium intensity) with limited effects of expectations, the crucial difference to the placebo and nocebo conditions is that in the control condition there was no alteration of perception due to previous experiences or verbal information and no shift of perception away from the actual stimulus intensity in any direction. This allowed us to compare the neural basis of a modulation of pain perception in either direction to a condition in which this modulation did not take place.
Author response image 1.
Variability within conditions over time. Relative variability index for expectation (left) and pain ratings (right) per condition and measurement block.
Lastly, we want to highlight that our finding of the control condition being rated in between the placebo and nocebo condition is in line with many previous studies that included similar control conditions and advanced our understanding of pain-related expectations (Bingel et al., 2011; Colloca et al., 2010; Shih et al., 2019). We thank the reviewer for the very interesting idea to evaluate the development of ratings in the control condition in more detail and added a new analysis to the manuscript in which we compared how much intra-subject variance was within the ratings of each of the three conditions and how much this variance changed over time. For this aim, we computed the relative variability index (Mestdagh et al., 2018), a measure that quantifies intra-subject variation over multiple ratings, and compared it between the three conditions and the three measurement blocks. We observed differences in variances between conditions for both expectation (F(2,96) = 8.14, p < .001) and pain ratings (F(2,96) = 3.41, p = .037). For both measures, post-hoc tests revealed that there was significantly more variance in the placebo compared to the control condition (both p_holm < .05), but no difference between control and nocebo. The substantial and comparable variation in pain and expectation ratings in all three conditions (or at least between control and nocebo) shows that participants did not always expect and perceive the same intensity within conditions. Variance in expectation ratings decreased from the first block compared to the other two blocks (F(1.35,64.64) = 5.69, p = .012; both p_holm < .05), which was not the case for pain ratings. Most importantly, there was no interaction effect of block and condition for either expectation (F(2.65,127.06) = 0.40, p = .728) or pain ratings (F(4,192) = 0.48, p = .748), which implies that expectations were similarly dynamically updated in all conditions over the course of the experiment. This speaks against a “regression to the mean” in the control condition and shows that control ratings fluctuated from trial to trial. We included this analysis and a more in-depth discussion of the choice of conditions in the Results (ll. 219-232) and Discussion (ll. 452-486) sections of the revised manuscript.
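For readers unfamiliar with the index, here is a minimal sketch of the idea in Python; it uses the continuous upper bound on the standard deviation for a bounded scale, whereas Mestdagh et al. (2018) derive an exact maximum for a given sample size, so treat this as an approximation of, not a substitute for, their published algorithm.

```python
import numpy as np

def relative_variability(ratings, lo=0.0, hi=100.0):
    """Approximate relative variability index for bounded ratings (e.g., VAS 0-100).

    Observed SD divided by the largest SD any sample with the same mean
    could attain on the [lo, hi] scale (ratings piled at the endpoints).
    """
    x = np.asarray(ratings, dtype=float)
    m = x.mean()
    max_sd = np.sqrt((m - lo) * (hi - m))  # continuous upper bound given the mean
    return float(x.std() / max_sd) if max_sd > 0 else 0.0

print(relative_variability([50, 50, 50]))      # 0.0 - no trial-to-trial variation
print(relative_variability([0, 100, 0, 100]))  # 1.0 - maximal variation for this mean
```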
In addition, although fMRI is still (probably) the best available tool we have to understand the spatial representation of cortical processing, limitations about not only the temporal but even the spatial resolution should be acknowledged. Given the anatomical and physiological complexity of the cortical connections, as we know from the animal world, it is still well possible that subcircuits are activated also for positive and negative expectations, but cannot be observed due to the limitation of our techniques. Indeed, on an empirical/evolutionary basis it would remain unclear why we should have a system that waits for the valence of a stimulus to show differential responses.
We agree that the spatial resolution of fMRI is limited and that our signal is often not able to dissociate different sub-circuits. It is thus possible that differential processes for positive and negative expectations occurred but could not be observed with fMRI. We now include this reasoning in our Discussion (ll. 373-377):
“Importantly, the spatial resolution of fMRI is limited when it comes to discriminating whether the same pattern of activity is due to identical activation or to activation in different sub-circuits within the same area. Nonetheless, the overlap of areas is an indicator for similar processes involved in a more general preparation process.”
Also, moving in a dimension of network and graph theory, one would not expect single areas to be responsible for distinct processes, but rather that they would integrate information in a shared way, potentially with different feedback and feedforward communications. As such, it becomes more difficult to assume the insula is a center for coding potential pain, perhaps more of a node in a system that signals potential dangers for the integrity of the body.
We appreciate the feedback on our interpretation of our results and agree that the overall network activity most likely determines how a large part of expectations and pain is coded. We therefore adjusted the Discussion, embedding the results in an interpretation considering networks (ll. 427-430, 432-435, 438-442).
The authors analyze the EEG signal between 0.5 and 128 Hz, finding significant results in the correlation between single-trial BOLD and EEG activity in the higher gamma range (see Figure 6 panel C). It would be interesting to understand the rationale for including such high frequencies in the signal, and the interpretation of the significant correlation in the high gamma range.
On a technical level, we adapted our EEG processing pipeline from Hipp et al. (2011), who similarly investigated signals up to 128 Hz. Of note, the spectral smoothing was adjusted to match 3/4 octave, meaning that the frequency resolution at 128 Hz is rather broad and does not only contain oscillations at exactly 128 Hz. Gamma oscillations in general have repeatedly been reported in relation to pain and feedforward signals reflecting noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). Strube et al. (2021) reported the highest effects of pain stimulus intensity and prediction error processing at high gamma frequencies (100 and 98 Hz, respectively). These findings could also serve as a basis to interpret our results in this frequency range: If anticipatory activation in the ACC is linked to high gamma oscillations, which appear to play an important role in feedforward signaling of pain intensity and prediction errors, this could indicate that later processing of intensity in this area is already pre-modulated before the stimulus actually occurs. Of note: although not significant, on a descriptive level it looks as if the cluster extends further into pain processing. We added additional explanation regarding the interpretation of the correlation in the Discussion (ll. 414-425):
“The link between anticipatory activity in the ACC and EEG oscillatory activity was observed in the high gamma band, which is consistent with findings that demonstrate a connection between increased fMRI BOLD signals and a relative shift from lower to higher frequencies (Kilner et al., 2005). Gamma oscillations have been repeatedly reported in the context of pain and expectations and have been interpreted as reflecting feedforward signals of noxious information (e.g. Ploner et al., 2017; Strube et al., 2021). In combination with our findings, this might imply that high frequency oscillations may not only signal higher actual or perceived pain intensity during pain processing (Nickel et al., 2022; Ploner et al., 2017; Strube et al., 2021; Tu et al., 2016), but might also be instrumental in the transfer of directed expectations from anticipation into pain processing.”
Reviewer #2 (Public Review):
I think this is a very promising paper. The combination of EEG and fMRI is unique and original. However, I also have some suggestions that I think could help improve the manuscript.
This manuscript reports the findings of an EEG-fMRI study (n = 50) on the effects of expectations on pain. The combination of EEG with fMRI is extremely original and well-suited to study the transition from expectation to perception. However, I think that the current treatment of the data, as well as the way that the manuscript is currently written, does not fully capitalize on the potential of this unique dataset. Several findings are presented but there is currently no clear message coming out of this manuscript.
First, one positive point is that the experimental manipulation clearly worked. However, it should be noted that the instructions used are not typical of studies on placebo/nocebo. Participants were not told that the stimulations would be of higher/lower intensity. Rather, they were told that objective intensities were held constant, but that EEG recordings could be used to predict whether they would perceive the stimulus as more or less intense. I think that this is an interesting way to manipulate expectations, but there could have been more justification in the introduction for why the authors have chosen this unusual procedure.
Most importantly, we want to emphasize again that participants were not aware that the stimulation temperature was always the same but were informed that they would receive different stimuli of medium intensity. We now clarify this in the revised Results (ll. 190-192) and Methods (ll. 612-614) sections.
While we agree that our procedure was not typical, we nevertheless consider the manipulation comparable to previous studies on pain-related expectations. To our knowledge, either expectations regarding a treatment that changes pain perception (treatment expectancy) or expectations regarding stimulus intensities (stimulus expectancy) are manipulated (see Atlas & Wager, 2014). In our study, participants received a cue that induced expectations in regard to a “treatment”, although in this case the “treatment” came from changes in their own brain activity. This is comparable to studies using TENS devices that supposedly change peripheral pain transmission (Skvortsova et al., 2020). Thus, although not typical, our paradigm could be classified as targeting treatment expectancies, and it allowed us to examine effects on a trial-by-trial level within subjects. We added a paragraph regarding the comparability of our paradigm with previous studies in the Discussion of the revised manuscript (ll. 452-464).
Also, the introduction mentions that little is known about potential cerebral differences between expectations of high vs. low pain expectations. I think the fear conditioning literature could be cited here. Activations in ACC, SMA, Ins, parahippocampal gyrus, PAG, etc. are often associated with upcoming threat, whereas activations vmPFC/default mode network are associated with safety.
We thank you for your suggestion to add literature on fear conditioning. We agree there is some overlap between fear conditioning and expectation effects in humans, but we also believe there are fundamental differences regarding their underlying processes and paradigms. For example, expectation effects are not driven by classical learning algorithms but act to a large extent as self-fulfilling prophecies (see e.g. Jepma et al., 2018). However, we now acknowledge the similarities between the modalities, e.g., in the recruitment of the insula and the vmPFC, in our Introduction (ll. 132-136).
The fact that the authors didn't observe a clearer distinction between high and low expectations here could be related to their specific instructions that imply that the stimulus is the same and that it is the subjective perception that is expected to change. In any case, this is a relatively minor issue that is easy to address.
We apologize again for the lack of clarity in our instructions: Participants were unaware that they would receive the exact same stimulus. The clear effects of the different conditions on expectation and pain ratings also challenge the notion that participants always expected the same level of stimulation and/or perception. Additionally, if participants were indeed expecting a consistent level of intensity in all conditions, one would also expect to see the same anticipatory activation in the control condition as in the placebo and nocebo conditions, which is not the case. Thus, we respectfully disagree that the common effects might be explained by our instructions and would argue that they indeed reflect common (anticipatory) processes of positive and negative expectations.
Towards the end of the introduction, the authors present the aims of the study in mainly exploratory terms:
(1) What are the differences between anticipation and perception?
(2) What regions display a difference between high and low expectations (high > low or low < high) vs. an effect of expectation regardless of the direction (high and low different than neutral)?
I think these are good questions, but the authors should provide more justification, or framework, for these questions. More specifically, what will they be able to conclude based on their observations?
For instance (note that this is just an example to illustrate my point. I encourage the authors to come up with their own framework/predictions) :
(1) Possibility #1: A certain region encodes expectations in a directed fashion (high > low) and that same region also responds to perception in the same direction (high > low). This region would therefore modulate pain by assimilating perception towards expectations.
(2) Possibility #2: Different regions are involved in expectation and perception. Perhaps this could mean that certain regions influence pain processing through descending facilitation, for instance...
Thank you for pointing out that our hypotheses were not crafted carefully enough. We tried to give better explanations for the possible interpretations of our hypotheses. Additionally, we interpreted our results against the background of a broader framework for placebo and nocebo effects (predictive coding) to derive possible functions of the described brain areas. We embedded this in our Introduction (ll. 74-86, 158-175) and Discussion (ll. 384-388), interpreting the anticipatory activity and the activity during pain processing in the context of expectation formation as described in Büchel et al. (2014).
Interpretation derived from our framework (ll. 384-388):
e.g.: “Following the framework of predictive coding, our results would suggest that the DPMS is the network responsible for integrating ascending signals with descending signals in the pain domain and that this process is similar for positive and negative valences during anticipation of pain but differentiates during pain processing.”
Regarding analyses, I think that examining the transition from expectations to perception is a strong angle of the manuscript given the EEG-fMRI nature of the study. However, I feel that more could have been done here. One problem is that the sequence of analyses starts by identifying an fMRI signal of interest and then attempts to find its EEG correlates. The problem is that the low temporal resolution of fMRI makes it difficult to differentiate expectation from perception, which doesn't make this analysis a good starting point in my opinion. Why not start by identifying an EEG signal that differentiates perception vs expectation, and then look for its fMRI correlates?
We appreciate your feedback on the transition from expectations to perceptions and agree that additional questions could be answered with our data set. However, based on the literature we had specific hypotheses regarding specific brain areas, and we therefore decided to start from the fMRI data, with its superior spatial resolution, and to use EEG to focus on the temporal dynamics within the areas important for anticipatory processes. We share the view that many different approaches to analyzing our data are possible. On the other hand, identifying relevant areas based on EEG characteristics carries even more uncertainty due to the spatial filtering of the EEG signal. For the research question of this study, a more accurate evaluation of the involved areas and the related representation was more important. We therefore decided to implement only the procedure already present in the manuscript.
Finally, I found the hypotheses on "valenced" vs. "absolute" effects a little bit more difficult to follow. This is because "neutral" is not really neutral: it falls in between low and high. If I follow correctly, participants know that the temperature is always the same. Therefore, if they are told that the machine cannot predict whether their perception is going to be low or high, then it must be because it is likely to be in between. Ratings of expectation and pain ratings confirm that. The neutral condition is not "devoid" of expectations as the authors suggest.
Therefore, it would make sense to look at regions with the following pattern low > neutral > high, or vice-versa, low < neutral < high. Low & high being different than neutral is more difficult to interpret. I don't think that you can say that it reflects "absolute" expectations because neutral is also the expectation of a medium temperature. Perhaps it reflects "certainty/uncertainty" or something like that, but it is not clear that it reflects "expectations".
Thank you for your valuable feedback! We considered your concerns about the interpretation of our results and completely agree that the control condition cannot be interpreted as void of expectations (ll. 119-123). We therefore evaluated the control condition in more detail in a separate analysis (ll. 219-232) and integrated a new assessment of the conditions into the Discussion (ll. 465-486). We changed the phrasing of our control condition to “neutral expectations”, as we agree that the control condition is not void of expectations and this phrasing is more in line with other studies (e.g. Colloca et al., 2010; Freeman et al., 2015; Schmid et al., 2015). We would argue that the neutral expectations can still be meaningfully compared to positive and negative expectations because only the latter shift expectations and perception in one direction. Thus, we changed our wording throughout the manuscript to acknowledge that we indeed did not test for general effects of expectations vs. no expectations, but for effects of directed expectations. Please also see our reasoning regarding the control condition in response to Reviewer 1, in which we addressed the interpretation of the control condition. We therefore still believe that the contrasts that we calculated between conditions are valid. The proposed new contrast largely overlaps with our differential contrast low>high and vice versa already reported in the manuscript (for additional results also see Supplements).
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
Figure 6, panel C. The figure mentions Anterior Cingulate Cortex R, whereas the legend mentions left ACC. Please check.
Thanks for catching this, we changed the figure legend accordingly.
Reviewer #2 (Recommendations For The Authors):
- I don't think that activity during the rating of expectations is easily interpretable. I think I would recommend not reporting it.
The majority of participants completed the expectation rating relatively quickly (M = 2.17 s, SD = 0.35 s), which resulted in the overlap between the DLPFC EEG cluster and the expectation rating encompassing only a limited portion of the cluster (~ 1 s). We agree that this activity still is more difficult to interpret, yet we have decided to report it for reasons of completeness.
- The effects on SIIPS are interesting. I think that it is fine to present them as a "validation" of what was observed with pain ratings, but it also seems to give a direction to the analyses that the authors don't end up following. For instance, why not try other "signatures" like the NPS or signatures of pain anticipation? Also, why not try to look at EEG correlates of SIIPS? I don't think that the authors "need" to do any of that, but I just wanted to let them know that SIIPS results may stir that kind of curiosity in the readers.
While this would be indeed very interesting, these additional analyses are not directly related to our current research question. We fear that too many analyses could be confusing for the readers. Nonetheless, we are grateful for your suggestion and will implement additional brain signatures in future studies.
- The shock was calibrated to be 60%. Why not have high (70%) and low (30%) conditions at equal distances from neutral, like 80% and 40% for instance? The current design makes it hard to distinguish high from control. Perhaps the "common" effects of high + low are driven by a deactivation for low (30%)?
We appreciate your feedback! We adjusted the temperature during the test phase to counteract the habituation typically seen with heat stimuli. We believe that this was a good measure, as participants rated the control condition at roughly VAS 50 (M = 51.40), which was our target temperature and would then be equidistant to the VAS 70 and VAS 30 during conditioning, when no habituation should have taken place yet. We further tested whether participants rated placebo and nocebo trials at equal distances from the control condition and found no bias for either of the conditions. To do this, we computed the individual placebo effect (control minus placebo) and nocebo effect (nocebo minus control) for each participant during the test phase and statistically compared whether they differed in terms of magnitude. There was no significant difference between placebo and nocebo effects for both expectation (placebo effect M = 14.25 vs. nocebo effect M = 17.22, t(49) = 1.92, p = .061) and pain ratings (placebo effect M = 6.52 vs. nocebo effect M = 5.40, t(49) = -1.11, p = .274). This suggests that our expectation manipulation resulted in comparable shifts in expectation and pain ratings away from the control condition for both the placebo and nocebo condition and thus argues against any bias of the conditioning temperatures. Please also note that the analysis of the common effects was masked for differences between the high and low conditions; therefore, the effects cannot be driven by one condition alone.
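As a hedged sketch of that magnitude comparison (not the authors' code; the arrays below are random placeholders standing in for per-participant ratings):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
control = rng.normal(51, 10, 50)          # placeholder mean ratings, n = 50
placebo = control - rng.normal(6, 4, 50)  # placebo lowers ratings
nocebo = control + rng.normal(5, 4, 50)   # nocebo raises ratings

placebo_effect = control - placebo  # magnitude of the placebo shift
nocebo_effect = nocebo - control    # magnitude of the nocebo shift

# Paired test: do the two shifts differ in magnitude within participants?
t, p = ttest_rel(placebo_effect, nocebo_effect)
print(f"t = {t:.2f}, p = {p:.3f}")
```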
- If I understand correctly, all fMRI contrasts were thresholded with FWE. This is fine, but very strict. The authors could have opted for FDR. Maybe I missed something here....
While it is true that FDR is the more liberal approach, it is not valid for spatially correlated fMRI data and is no longer available in SPM for the correction of multiple comparisons. The newly implemented topological peak-based FDR correction is comparably sensitive to the FWE correction (see Chumbley et al.). We opted for the slightly more conservative approach in our preregistration (p_FWE < .05); therefore, a change of the correction is not possible.
Altogether, I think that this is a great study. The combination of EEG and fMRI is truly unique and affords many opportunities to examine the transition from expectations to perception. The experimental manipulation of expectations seems to have worked well, and there seem to be very promising results. However, I think that more could have been done. At least, I would recommend trying to give more of a theoretical framework to help interpret the results.
We are very grateful for your positive feedback. We took your suggestion seriously and tried to implement a more general framework from the literature (see Büchel et al., 2014) to provide a better explanation for our results.
References
Atlas, L. Y., & Wager, T. D. (2014). A meta-analysis of brain mechanisms of placebo analgesia: Consistent findings and unanswered questions. Handbook of Experimental Pharmacology, 225, 37–69. https://doi.org/10.1007/978-3-662-44519-8_3
Bingel, U., Wanigasekera, V., Wiech, K., Ni Mhuircheartaigh, R., Lee, M. C., Ploner, M., & Tracey, I. (2011). The effect of treatment expectation on drug efficacy: Imaging the analgesic benefit of the opioid remifentanil. Science Translational Medicine, 3(70), 70ra14. https://doi.org/10.1126/scitranslmed.3001244
Büchel, C., Geuter, S., Sprenger, C., & Eippert, F. (2014). Placebo analgesia: A predictive coding perspective. Neuron, 81(6), 1223–1239. https://doi.org/10.1016/j.neuron.2014.02.042
Colloca, L., Petrovic, P., Wager, T. D., Ingvar, M., & Benedetti, F. (2010). How the number of learning trials affects placebo and nocebo responses. Pain, 151(2), 430–439. https://doi.org/10.1016/j.pain.2010.08.007
Freeman, S., Yu, R., Egorova, N., Chen, X., Kirsch, I., Claggett, B., Kaptchuk, T. J., Gollub, R. L., & Kong, J. (2015). Distinct neural representations of placebo and nocebo effects. NeuroImage, 112, 197–207. https://doi.org/10.1016/j.neuroimage.2015.03.015
Hipp, J. F., Engel, A. K., & Siegel, M. (2011). Oscillatory synchronization in large-scale cortical networks predicts perception. Neuron, 69(2), 387–396. https://doi.org/10.1016/j.neuron.2010.12.027
Jepma, M., Koban, L., van Doorn, J., Jones, M., & Wager, T. D. (2018). Behavioural and neural evidence for self-reinforcing expectancy effects on pain. Nature Human Behaviour, 2(11), 838–855. https://doi.org/10.1038/s41562-018-0455-8
Kilner, J. M., Mattout, J., Henson, R., & Friston, K. J. (2005). Hemodynamic correlates of EEG: A heuristic. NeuroImage, 28(1), 280–286. https://doi.org/10.1016/j.neuroimage.2005.06.008
Nickel, M. M., Tiemann, L., Hohn, V. D., May, E. S., Gil Ávila, C., Eippert, F., & Ploner, M. (2022). Temporal-spectral signaling of sensory information and expectations in the cerebral processing of pain. Proceedings of the National Academy of Sciences of the United States of America, 119(1). https://doi.org/10.1073/pnas.2116616119
Ploner, M., Sorg, C., & Gross, J. (2017). Brain Rhythms of Pain. Trends in Cognitive Sciences, 21(2), 100–110. https://doi.org/10.1016/j.tics.2016.12.001
Schmid, J., Bingel, U., Ritter, C., Benson, S., Schedlowski, M., Gramsch, C., Forsting, M., & Elsenbruch, S. (2015). Neural underpinnings of nocebo hyperalgesia in visceral pain: A fMRI study in healthy volunteers. NeuroImage, 120, 114–122. https://doi.org/10.1016/j.neuroimage.2015.06.060
Shih, Y.‑W., Tsai, H.‑Y., Lin, F.‑S., Lin, Y.‑H., Chiang, C.‑Y., Lu, Z.‑L., & Tseng, M.‑T. (2019). Effects of Positive and Negative Expectations on Human Pain Perception Engage Separate But Interrelated and Dependently Regulated Cerebral Mechanisms. Journal of Neuroscience, 39(7), 1261–1274. https://doi.org/10.1523/JNEUROSCI.2154-18.2018
Skvortsova, A., Veldhuijzen, D. S., van Middendorp, H., Colloca, L., & Evers, A. W. M. (2020). Effects of Oxytocin on Placebo and Nocebo Effects in a Pain Conditioning Paradigm: A Randomized Controlled Trial. The Journal of Pain, 21(3-4), 430–439. https://doi.org/10.1016/j.jpain.2019.08.010
Strube, A., Rose, M., Fazeli, S., & Büchel, C. (2021). The temporal and spectral characteristics of expectations and prediction errors in pain and thermoception. eLife, 10. https://doi.org/10.7554/eLife.62809
Tu, Y., Zhang, Z., Tan, A., Peng, W., Hung, Y. S., Moayedi, M., Iannetti, G. D., & Hu, L. (2016). Alpha and gamma oscillation amplitudes synergistically predict the perception of forthcoming nociceptive stimuli. Human Brain Mapping, 37(2), 501–514. https://doi.org/10.1002/hbm.23048
-
-
www.palladiummag.com www.palladiummag.com
-
I expect AI to get much better than it is today. Research on AI systems has shown that they predictably improve given better algorithms, more and better quality data, and more computational power. Labs are in the process of further scaling up their clusters—the groupings of computers that the algorithms run on.
Ah, an article based on the assumption of future improvement. Compute and data are limiting factors, and you end up doing the math on whether the compute footprint is more efficient than doing it yourself. Data is even more limiting, as the most meaningful stuff is qualitative rather than quantitative, and stats on the qualitative stuff won't give you meaning (LLMs case in point).
-
The shared goal of the field of artificial intelligence is to create a system that can do anything. I expect us to soon reach it.
Is it though? Wrt GAI that is as far away as before imo. The rainbow never gets nearer, because it is dependent on your position.
-
The economically and politically relevant comparison on most tasks is not whether the language model is better than the best human, it is whether they are better than the human who would otherwise do that task
True, and that is where this fails outside of bullshit tasks. The unmentioned assumption here is that algogen output can have meaning, rather than just coherence and plausibility.
-
The general reaction to language models among knowledge workers is one of denial.
Equates 'content production' with knowledge work.
-
my ability to write large amounts of content quickly
right. 'content production' where the actual meaning isn't relevant?
-
it can competently generate cogent content on a wide range of topics. It can summarize and analyze texts passably well
'Cogent content' and 'passably well' aren't the quality benchmark for knowledge work, though.
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
The study found that the economic/utilitarian value of having children decreased as socioeconomic development increased. However, the psychological value did not change
Material independence is not incompatible with emotional interdependence; it is possible to be economically self-sufficient while still being emotionally connected to others and maintaining close relationships.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This valuable study provides convincing evidence that white matter diffusion imaging of the right superior longitudinal fasciculus might help to develop a predictive biomarker of back pain chronicity. The results are based on a discovery-replication approach with different cohorts, but the sample size is limited. The findings will interest researchers interested in the brain mechanisms of chronic pain and in developing brain-based biomarkers of chronic pain.
-
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
In this paper, Misic et al. showed that white matter properties can be used to classify subacute back pain patients who will develop persisting pain.
Strengths:
Compared to most previous papers studying associations between white matter properties and chronic pain, the strength of the method is to perform a prediction in unseen data. Another strength of the paper is the use of three different cohorts. This is an interesting paper that provides a valuable contribution to the field.
We thank the reviewer for emphasizing the strength of our paper and the importance of validation on multiple unseen cohorts.
Weaknesses:
The authors imply that their biomarker could outperform traditional questionnaires to predict pain: "While these models are of great value showing that few of these variables (e.g. work factors) might have significant prognostic power on the long-term outcome of back pain and provide easy-to-use brief questionnaires-based tools, (21, 25) parameters often explain no more than 30% of the variance (28-30) and their prognostic accuracy is limited.(31)". I don't think this is correct; questionnaire-based tools can achieve far greater prediction than their model in about half a million individuals from the UK Biobank (Tanguay-Sabourin et al., A prognostic risk score for the development and spread of chronic pain, Nature Medicine 2023).
We agree with the reviewer that we might have underestimated the prognostic accuracy of questionnaire-based tools, especially the strong predictive accuracy shown by Tanguay-Sabourin et al. (2023). In this revised version, we have changed both the introduction and the discussion to reflect the questionnaire-based prognostic accuracy reported in the seminal work by Tanguay-Sabourin et al.
In the introduction (page 4, lines 3-18), we now write:
“Some studies have addressed this question with prognostic models incorporating demographic, pain-related, and psychosocial predictors.1-4 While these models are of great value showing that few of these variables (e.g. work factors) might have significant prognostic power on the long-term outcome of back pain, their prognostic accuracy is limited,5 with parameters often explaining no more than 30% of the variance.6-8 A recent notable study in this regard developed a model based on easy-to-use brief questionnaires to predict the development and spread of chronic pain in a variety of pain conditions, capitalizing on a large dataset obtained from the UK Biobank.9 This work demonstrated that only a few features related to assessment of sleep, neuroticism, mood, stress, and body mass index were enough to predict persistence and spread of pain with an area under the curve of 0.53-0.73. Yet, this study is unique in showing such a predictive value of questionnaire-based tools. Neurobiological measures could therefore complement existing prognostic models based on psychosocial variables to improve overall accuracy and discriminative power. More importantly, neurobiological factors such as brain parameters can provide a mechanistic understanding of chronicity and its central processing.”
And in the conclusion (page 22, lines 5-9), we write:
“Integrating findings from studies that used questionnaire-based tools and showed remarkable predictive power9 with neurobiological measures that can offer mechanistic insights into chronic pain development, could enhance predictive power in CBP prognostic modeling.”
Moreover, the main weakness of this study is the sample size. It remains small despite having 3 cohorts. This is problematic because results are often overfitted in such a small sample size brain imaging study, especially when all the data are available to the authors at the time of training the model (Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research, Nature Reviews in Neuroscience 2017). Thus, having access to all the data, the authors have a high degree of flexibility in data analysis, as they can retrain their model any number of times until it generalizes across all three cohorts. In this case, the testing set could easily become part of the training making it difficult to assess the real performance, especially for small sample size studies.
The reviewer raises a very important point about the limited sample size and about the methodology intrinsic to model development and testing. We acknowledge the small sample size in the “Limitations” section of the discussion. In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA based model is a simple cut-off approach that does not include any learning or hidden layers, and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site. Regarding our SVC approach, we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitation section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):
“In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development. We believe that the data presented here are nevertheless robust, since they are validated across multiple sites, but they need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model.”
Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data is Cohen's d > 1.
Even if the performance was properly assessed, their models show AUCs between 0.65-0.70, which is usually considered as poor, and most likely without potential clinical use. Despite this, their conclusion was: "This biomarker is easy to obtain (~10 min of scanning time) and opens the door for translation into clinical practice." One may ask who is really willing to use an MRI signature with a relatively poor performance that can be outperformed by self-report questionnaires?
The reviewer is correct; the model performance is fair, which limits its usefulness for clinical translation. We wanted to emphasize that obtaining diffusion images can be done in a short period of time and hence, as such models' predictive accuracy improves, clinical translation becomes closer to reality. In addition, our findings are based on older diffusion data and limited sample sizes coming from different sites and different acquisition sequences. This by itself would limit the accuracy, especially since the evidence shows that sample size also affects model performance (i.e., testing AUC).10 In the revision, we re-worded the sentence mentioned by the reviewer to reflect the points discussed here. This also motivates us to collect a more homogeneous and larger sample. In the limitations section of the discussion, we now write (page 21, lines 6-9):
“Even though our model performance is fair, which currently limits its usefulness for clinical translation, we believe that future models would further improve accuracy by using larger homogenous sample sizes and uniform acquisition sequences.”
Overall, these criticisms are more about the wording sometimes used and the inference they made. I think the strength of the evidence is incomplete to support the main claims of the paper.
Despite these limitations, I still think this is a very relevant contribution to the field. Showing predictive performance through cross-validation and testing in multiple cohorts is not an easy task and this is a strong effort by the team. I strongly believe this approach is the right one and I believe the authors did a good job.
We thank the reviewer for acknowledging that our effort and approach were useful.
Minor points:
Methods:
I get the voxel-wise analysis, but I don't understand the methods for the structural connectivity analysis between the 88 ROIs. Have the authors run tractography or have they used a predetermined streamlined form of 'population-based connectome'? They report that models of AUC above 0.75 were considered and tested in the Chicago dataset, but we have no information about what the model actually learned (although this can be tricky for decision tree algorithms).
We apologize for the lack of clarity; we did run tractography and we did not use a pre-determined streamlined form of the connectome.
Finding which connections are important for the classification of SBPr and SBPp is difficult because of our choices during data preprocessing and SVC model development: (1) preprocessing steps, which included TNPCA for dimensionality reduction and regressing out the confounders (i.e., age, sex, and head motion); (2) the harmonization for effects of sites; and (3) the Support Vector Classifier, which is a hard classification model.11
In the methods section (page 30, lines 21-23) we added: “Of note, such models cannot tell us the features that are important in classifying the groups. Hence, our model is considered a black-box predictive model like neural networks.”
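As a schematic of that kind of pipeline (not the authors' actual code), here is a hedged scikit-learn sketch that substitutes ordinary PCA for TNPCA, uses linear regression for confound removal, and omits the site-harmonization step; all data are random placeholders, with 3828 features standing in for the 88x87/2 ROI-pair connections.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3828))   # placeholder connectivity features
conf = rng.normal(size=(40, 3))   # placeholder confounds: age, sex, head motion
y = rng.integers(0, 2, size=40)   # placeholder labels: recovered vs persistent

# (1) Dimensionality reduction (ordinary PCA standing in for TNPCA).
X_red = PCA(n_components=10, random_state=0).fit_transform(X)

# (2) Regress the confounds out of each component and keep the residuals.
X_res = X_red - LinearRegression().fit(conf, X_red).predict(conf)

# (3) Hard classifier: it predicts labels but exposes no per-connection
# weights, which is why feature importance is hard to recover here.
clf = SVC(kernel="rbf").fit(X_res, y)
```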
Minor:
What results are shown in Figure 7? It looks more descriptive than the actual results.
The reviewer is correct; Figure 7 and Supplementary Figure 4 were both qualitatively illustrating the shape of the SLF. We have now changed both figures in response to this point and a point raised by reviewer 3. We now show a 3D depiction of different sub-components of the right SLF (Figure 7) and left SLF (Now Supplementary Figure 11 instead of Supplementary Figure 4) with a quantitative estimation of the FA content of the tracts, and the number of tracts per component. The results reinforce the TBSS analysis in showing asymmetry in the differences between left and right SLF between the groups (i.e. SBPp and SBPr) in both FA values and number of tracts per bundle.
Reviewer #2 (Public Review):
The present study aims to investigate brain white matter predictors of back pain chronicity. To this end, a discovery cohort of 28 patients with subacute back pain (SBP) was studied using white matter diffusion imaging. The cohort was investigated at baseline and one-year follow-up when 16 patients had recovered (SBPr) and 12 had persistent back pain (SBPp). A comparison of baseline scans revealed that SBPr patients had higher fractional anisotropy values in the right superior longitudinal fasciculus (SLF) than SBPp patients and that FA values predicted changes in pain severity. Moreover, the FA values of SBPr patients were larger than those of healthy participants, suggesting a role of FA of the SLF in resilience to chronic pain. These findings were replicated in two other independent datasets. The authors conclude that the right SLF might be a robust predictive biomarker of CBP development with the potential for clinical translation.
Developing predictive biomarkers for pain chronicity is an interesting, timely, and potentially clinically relevant topic. The paradigm and the analysis are sound, the results are convincing, and the interpretation is adequate. A particular strength of the study is the discovery-replication approach with replications of the findings in two independent datasets.
We thank reviewer 2 for pointing to the strength of our study.
The following revisions might help to improve the manuscript further.
- Definition of recovery. In the New Haven and Chicago datasets, SBPr and SBPp patients are distinguished by reductions of >30% in pain intensity. In contrast, in the Mannheim dataset, both groups are distinguished by reductions of >20%. This should be harmonized. Moreover, as there is no established definition of recovery (reference 79 does not provide a clear criterion), it would be interesting to know whether the results hold for different definitions of recovery. Control analyses for different thresholds could strengthen the robustness of the findings.
The reviewer raises an important point regarding the definition of recovery. To address the reviewer’s concern, we have added a supplementary figure (Fig. S6) showing the results in the Mannheim data set if a 30% reduction is used as a recovery criterion, and in the manuscript (page 11, lines 1,2) we write: “Supplementary Figure S6 shows the results in the Mannheim data set if a 30% reduction is used as a recovery criterion in this dataset (AUC = 0.53)”.
We would like to emphasize several points that support the use of different recovery thresholds between New Haven and Mannheim. The New Haven primary pain ratings relied on a visual analogue scale (VAS), while the Mannheim data relied on the German version of the West Haven-Yale Multidimensional Pain Inventory. In addition, the Mannheim data were pre-registered with a definition of recovery at 20% and are part of a larger sub-acute to chronic pain study with prior publications from this cohort using the 20% cut-off12. Finally, a more recent consensus publication13 from IMMPACT indicates that a change of at least 30% is needed for a moderate improvement in pain on the 0-10 Numerical Rating Scale, but that this percentage depends on baseline pain levels.
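For illustration, both recovery criteria reduce to a percent-change cut-off on the pain ratings; a minimal sketch with hypothetical values:

import numpy as np

baseline = np.array([6.0, 5.0, 7.5, 4.0])   # pain at the subacute visit
followup = np.array([2.0, 4.5, 5.5, 3.9])   # pain at one-year follow-up

pct_reduction = 100 * (baseline - followup) / baseline
for cutoff in (20, 30):                     # Mannheim vs. New Haven/Chicago criteria
    print(f">={cutoff}% reduction -> recovered:", pct_reduction >= cutoff)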
- Analysis of the Chicago dataset. The manuscript includes results on FA values and their association with pain severity for the New Haven and Mannheim datasets but not for the Chicago dataset. It would be straightforward to show figures like Figures 1 - 4 for the Chicago dataset, as well.
We welcome the reviewer’s suggestion; we added these analyses to the results section of the resubmitted manuscript (page 11, lines 13-16): “The correlation between FA values in the right SLF and pain severity in the Chicago data set showed marginal significance (p = 0.055) at visit 1 (Fig. S8A) and higher FA values were significantly associated with a greater reduction in pain at visit 2 (p = 0.035) (Fig. S8B).”
- Data sharing. The discovery-replication approach of the present study distinguishes the present from previous approaches. This approach enhances the belief in the robustness of the findings. This belief would be further enhanced by making the data openly available. It would be extremely valuable for the community if other researchers could reproduce and replicate the findings without restrictions. It is not clear why the fact that the studies are ongoing prevents the unrestricted sharing of the data used in the present study.
We greatly appreciate the reviewer's suggestion to share our data sets, as we strongly support the Open Science initiative. The Chicago data set is already publicly available. The New Haven data set will be shared on the Open Pain repository, and the Mannheim data set will be uploaded to heiDATA or heiARCHIVE at Heidelberg University in the near future. We cannot share the data immediately because this project is part of the Heidelberg pain consortium, “SFB 1158: From nociception to chronic pain: Structure-function properties of neural pathways and their reorganization.” Within this consortium, all data must be shared following a harmonized structure across projects, and no study will be published openly until all projects have completed initial analysis and quality control.
Reviewer #3 (Public Review):
Summary:
The authors suggest a new biomarker of chronic back pain with the option to predict treatment outcome. The authors found a significant difference in a fractional anisotropy measure in the superior longitudinal fasciculus for recovered patients with chronic back pain.
Strengths:
The results were reproduced in three different groups across different studies/sites.
Weaknesses:
- The number of participants is still low.
The reviewer raises a very important point of limited sample size. As discussed in our replies to reviewer number 1:
We acknowledge the small sample size in the “Limitations” section of the discussion. In the resubmission, we acknowledge the degree of flexibility that is afforded by having access to all the data at once. However, we also note that our SLF-FA-based model is a simple cut-off approach that does not include any learning or hidden layers, and that the data obtained from Open Pain were never part of the “training” set at any point at either the New Haven or the Mannheim site. Regarding our SVC approach, we follow standard procedures for machine learning where we never mix the training and testing sets. The models are trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model. We write in the limitations section of the discussion (page 20, lines 20-21, and page 21, lines 1-6):
“In addition, at the time of analysis, we had “access” to all the data, which may lead to bias in model training and development. We believe that the data presented here are nevertheless robust, since they were validated across multiple sites, but they need replication. Additionally, we followed standard procedures for machine learning where we never mix the training and testing sets. The models were trained on the training data with parameters selected based on cross-validation within the training data. Therefore, no models have ever seen the test data set. The model performances we reported reflect the prognostic accuracy of our model”.
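A minimal scikit-learn sketch of this train/test separation, on simulated placeholder data (not the authors' code): hyperparameters are tuned by cross-validation inside the training set only, and the held-out set is scored once.

import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

# cross-validation sees only the training set
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_tr, y_tr)

# the test set is touched exactly once, with the frozen model
print("held-out AUC:", roc_auc_score(y_te, search.decision_function(X_te)))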
Finally, as discussed by Spisak et al.,10 the key determinant of the required sample size in predictive modeling is the “true effect size of the brain-phenotype relationship”, which we think is the determinant of the replication we observe in this study. As such, the effect size in the New Haven and Mannheim data corresponds to Cohen’s d > 1.
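For reference, Cohen's d for two independent groups is the mean difference divided by the pooled standard deviation; a small sketch with hypothetical FA values:

import numpy as np

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

fa_sbpr = np.array([0.48, 0.51, 0.50, 0.47, 0.52])  # hypothetical FA values
fa_sbpp = np.array([0.44, 0.46, 0.43, 0.45, 0.46])
print(f"d = {cohens_d(fa_sbpr, fa_sbpp):.2f}")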
- An explanation of microstructure changes was not given.
The reviewer points to an important gap in our discussion. While we cannot directly study the actual tissue microstructure, we further explored the changes observed in the SLF by calculating diffusivity measures. We have now performed the analysis of mean, axial, and radial diffusivity.
In the results section we added (page 7, lines 12-19): “We also examined mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) extracted from the right SLF shown in Fig. 1 to further understand which diffusion component differs between the groups. The right SLF MD is significantly increased (p < 0.05) in SBPr compared to SBPp patients (Fig. S3), while the right SLF RD is significantly decreased (p < 0.05) in SBPr compared to SBPp patients in the New Haven data (Fig. S4). Axial diffusivity extracted from the RSLF mask did not show a significant difference between SBPr and SBPp (p = 0.28) (Fig. S5).”
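For readers less familiar with these metrics, MD, AD, RD, and FA all derive from the three diffusion-tensor eigenvalues; the sketch below uses the standard definitions (not the authors' code) on a hypothetical white-matter voxel:

import numpy as np

def dti_metrics(l1, l2, l3):
    # standard DTI metrics from sorted eigenvalues l1 >= l2 >= l3
    md = (l1 + l2 + l3) / 3.0   # mean diffusivity
    ad = l1                     # axial diffusivity
    rd = (l2 + l3) / 2.0        # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return md, ad, rd, fa

# eigenvalues in 10^-3 mm^2/s for an illustrative white-matter voxel
print(dti_metrics(1.6, 0.5, 0.3))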
In the discussion, we write (page 15, lines 10-20):
“Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14. Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts,15 our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”
- Some technical drawbacks are presented.
We are uncertain if the reviewer is suggesting that we have acknowledged certain technical drawbacks and expects further elaboration on our part. We kindly request that the reviewer specify what particular issues need to be addressed so that we can respond appropriately.
Recommendations For The Authors:
We thank the reviewers for their constructive feedback, which has significantly improved our manuscript. We have done our best to answer the criticisms that they raised point-by-point.
Reviewer #2 (Recommendations For The Authors):
The discovery-replication approach of the current study justifies the use of the term 'robust.' In contrast, previous studies on predictive biomarkers using functional and structural brain imaging did not pursue similar approaches and have not been replicated. Still, the respective biomarkers are repeatedly referred to as 'robust.' Throughout the manuscript, it would, therefore, be more appropriate to remove the label 'robust' from those studies.
We thank the reviewer for this valuable suggestion. We removed the label 'robust' throughout the manuscript when referring to previous studies that did not follow the same approach and have not yet been replicated.
Reviewer #3 (Recommendations For The Authors):
This is, indeed, quite a well-written manuscript with very interesting findings and an interesting patient group. There are, however, a few comments that weaken the findings.
(1) It is a bit frustrating to read at the beginning how important chronic back pain is, and then see the number of patients in the studies used. At least the number of healthy subjects could be higher.
The reviewer raises an important point regarding the number of pain-free healthy controls (HC) in our samples. We first note that our primary statistical analysis focused on comparing recovered and persistent patients at baseline and validating these findings across sites without directly comparing them to HCs. Nevertheless, the data from New Haven included 28 HCs at baseline, and the data from Mannheim included 24 HCs. Although these sample sizes are not large, they have enabled us to clearly establish that the recovered SBPr patients generally have larger FA values in the right superior longitudinal fasciculus compared to the HCs, a finding consistent across sites (see Figs. 1 and 3). This suggests that the general pain-free population includes individuals with both low and high-risk potential for chronic pain. It also offers one explanation for the reported lack of differences or inconsistent differences between chronic low-back pain patients and HCs in the literature, as these differences likely depend on the (unknown) proportion of high- and low-risk individuals in the control groups. Therefore, if the high-risk group is more represented by chance in the HC group, comparisons between HCs and chronic pain patients are unlikely to yield statistically significant results. Thus, while we agree with the reviewer that the sample sizes of our HCs are limited, this limitation does not undermine the validity of our findings.
(2) Pain reaction in the brain is in general a quite popular topic and could be connected to the findings or mentioned in the introduction.
We thank the reviewer for this suggestion. We have now added a summary of brain responses to pain in general; in the introduction, we now write (page 4, lines 19-22 and page 5, lines 1-5):
“Neuroimaging research on chronic pain has uncovered a shift in brain responses to pain when acute and chronic pain are compared. The thalamus, primary somatosensory and motor areas, insula, and mid-cingulate cortex most often respond to acute pain and can predict the perception of acute pain16-19. Conversely, limbic brain areas are more frequently engaged when patients report the intensity of their clinical pain20, 21. Consistent findings have demonstrated that increased prefrontal-limbic functional connectivity during episodes of heightened subacute ongoing back pain or during a reward learning task is a significant predictor of CBP12, 22. Furthermore, low somatosensory cortex excitability in the acute stage of low back pain was identified as a predictor of CBP chronicity23.”
(3) It is clearly observed structural asymmetry in the brain, why not elaborate this finding further? Would SLF be a hub in connectivity analysis? Would FA changes have along tract features? etc etc etc
The reviewer raises an important point. Our data give grounds to suggest that there is an asymmetry in the role of the SLF in resilience to chronic pain. We discuss this at length in the Discussion section. In addition, we have elaborated our data analysis using our Population-Based Structural Connectome pipeline on the New Haven dataset. Following that approach, we studied the number of fiber tracts making up different parts of the SLF on the right and left sides. In addition, we extracted FA values along fiber tracts and compared the averages across groups. Our new analyses are presented in the modified Figure 7 and Supplementary Figure S11. These results indeed support the asymmetry hypothesis. The SLF could be a hub of structural connectivity. Please note, however, that given the discovery-and-validation nature of our design, the study of the structural connectivity of the SLF is beyond the scope of this paper, because tract-based connectivity is very sensitive to data collection parameters and is less accurate with single-shell DWI acquisitions. We will therefore pursue the study of the connectivity of the SLF in the future with well-powered and more harmonized data.
(4) Only FA is mentioned; did the authors work with MD, RD, and AD metrics?
We thank the reviewer for this suggestion, which helps provide a clearer picture of the differences in the right SLF between SBPr and SBPp. We have now extracted MD, AD, and RD for the predictive mask we discovered in Figure 1 and plotted the values comparing SBPr to SBPp patients in Fig. S3, Fig. S4, and Fig. S5 across all sites using one comprehensive harmonized analysis. We have added in the discussion: “Within the significant cluster in the discovery data set, MD was significantly increased, while RD in the right SLF was significantly decreased in SBPr compared to SBPp patients. Higher RD values, indicative of demyelination, were previously observed in chronic musculoskeletal patients across several bundles, including the superior longitudinal fasciculus14. Similarly, Mansour et al. found higher RD in SBPp compared to SBPr in the predictive FA cluster. While they noted decreased AD and increased MD in SBPp, suggestive of both demyelination and altered axonal tracts15, our results show increased MD and RD in SBPr with no AD differences between SBPp and SBPr, pointing to white matter changes primarily due to myelin disruption rather than axonal loss, or more complex processes. Further studies on tissue microstructure in chronic pain development are needed to elucidate these processes.”
(5) There are many speculations in the Discussion, however, some of them are not supported by the results.
We agree with the reviewer and thank them for pointing this out. We have now made several changes across the discussion related to the wording where speculations were not supported by the data. For example, instead of writing (page 16, lines 7-9): “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain is a top-down phenomenon related to visuospatial and body awareness.”, We write: “Together the literature on the right SLF role in higher cognitive functions suggests, therefore, that resilience to chronic pain might be related to a top-down phenomenon involving visuospatial and body awareness.”
(6) A method section was written quite roughly. In order to obtain all the details for a potential replication one needs to jump over the text.
The reviewer is correct; our methods section lacked detail in places. We have therefore described our methodology more extensively. Under “Estimation of structural connectivity”, we now write (page 28, lines 20,21 and page 29, lines 1-19):
“Structural connectivity was estimated from the diffusion tensor data using a population-based structural connectome (PSC) detailed in a previous publication.24 PSC can utilize the geometric information of streamlines, including shape, size, and location for a better parcellation-based connectome analysis. It, therefore, preserves the geometric information, which is crucial for quantifying brain connectivity and understanding variation across subjects. We have previously shown that the PSC pipeline is robust and reproducible across large data sets.24 PSC output uses the Desikan-Killiany atlas (DKA) 25 of cortical and sub-cortical regions of interest (ROI). The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S6. PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs in addition to measures of fractional anisotropy and mean diffusivity.”
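As a toy illustration of the ROI-pair extraction step described above, each streamline can be reduced to the labels of its two endpoints and tallied into a symmetric fiber-count matrix; the labels below are synthetic, and the dilation, pathway segmentation, and outlier-removal steps of the full PSC pipeline are omitted:

import numpy as np

n_rois = 5                                  # small stand-in for the full parcellation
counts = np.zeros((n_rois, n_rois), dtype=int)

# each "streamline" reduced to its two endpoint ROI labels
endpoint_labels = [(0, 3), (0, 3), (1, 4), (2, 2), (3, 0)]
for a, b in endpoint_labels:
    if a != b:                              # ignore within-ROI loops
        i, j = min(a, b), max(a, b)
        counts[i, j] += 1
counts += counts.T                          # symmetrize the fiber-count matrix
print(counts)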
(7) Why not join all the data with harmonisation in order to reproduce the results (TBSS)
We have followed the reviewer’s suggestion; we used neuroCombat harmonization after pooling all the diffusion-weighted data into one TBSS analysis. Our results remain the same after harmonization.
In the Supplementary Information we added a paragraph explaining the method for harmonization; we write (SI, page 3, lines 25-34):
“Harmonization of DTI data using neuroCombat. Because the 3 data sets originated from different sites using different MR data acquisition parameters and slightly different recruitment criteria, we applied neuroCombat 29 to correct for site effects and then repeated the TBSS analysis shown in Figure 1 and the validation analyses shown in Figures 5 and 6. First, the FA maps derived using the FDT toolbox were pooled into one TBSS analysis, where registration to a standard FA template (FMRIB58_FA_1mm.nii.gz, part of FSL) was performed. Next, neuroCombat was applied to the FA maps as implemented in Python, with the batch (i.e., site) effect modeled with a vector containing 1, 2, and 3 for maps originating from New Haven, Chicago, and Mannheim, respectively. The harmonized maps were then skeletonized to allow for TBSS.”
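A minimal sketch of this step, assuming the interface of the neuroCombat Python package (the FA matrix is a random placeholder with voxels in rows and subjects in columns; sites are coded 1-3 as above):

import numpy as np
import pandas as pd
from neuroCombat import neuroCombat

rng = np.random.default_rng(2)
fa_maps = rng.normal(0.45, 0.05, size=(1000, 30))                 # voxels x subjects
covars = pd.DataFrame({"batch": [1] * 10 + [2] * 10 + [3] * 10})  # site labels

out = neuroCombat(dat=fa_maps, covars=covars, batch_col="batch")
fa_harmonized = out["data"]                                       # same shape as the input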
And in the results section, we write (page 12, lines 2-21):
“Validation after harmonization
Because the DTI data sets originated from 3 sites with different MR acquisition parameters, we repeated our TBSS and validation analyses after correcting for variability arising from site differences using DTI data harmonization as implemented in neuroCombat29. The method of harmonization is described in detail in the Supplementary Methods. The whole-brain unpaired t-test depicted in Figure 1 was repeated after neuroCombat and yielded very similar results (Fig. S9A), showing significantly increased FA in SBPr compared to SBPp patients in the right superior longitudinal fasciculus (MNI coordinates of peak voxel: x = 40; y = -42; z = 18 mm; t(max) = 2.52; p < 0.05, corrected against 10,000 permutations). We again tested the accuracy of local diffusion properties (FA) of the right SLF, extracted from the mask of voxels passing threshold in the New Haven data (Fig. S9A), in classifying the Mannheim and the Chicago patients, respectively, into persistent and recovered. FA values corrected for age, gender, and head displacement accurately classified SBPr and SBPp patients from the Mannheim data set with an AUC = 0.67 (p = 0.023, tested against 10,000 random permutations, Fig. S9B and S7D), and patients from the Chicago data set with an AUC = 0.69 (p = 0.0068) (Fig. S9C and S7E) at baseline and an AUC = 0.67 (p = 0.0098) (Fig. S9D and S7F) at follow-up, confirming the predictive cluster from the right SLF across sites. The application of neuroCombat significantly changes the FA values, as shown in Fig. S10, but does not change the results between groups.”
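A minimal sketch of the permutation test behind these p-values, on toy data (not the authors' code): the observed AUC is compared against the AUCs obtained after shuffling the group labels.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
fa = rng.normal(0.45, 0.05, size=50)       # covariate-adjusted FA per patient
labels = rng.integers(0, 2, size=50)       # 1 = SBPr, 0 = SBPp (placeholders)

auc_obs = roc_auc_score(labels, fa)
null = np.array([roc_auc_score(rng.permutation(labels), fa)
                 for _ in range(10_000)])
p = (np.sum(null >= auc_obs) + 1) / (null.size + 1)
print(f"AUC = {auc_obs:.2f}, permutation p = {p:.4f}")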
Minor comments
(1) In the case of the New Haven data, MB 4 and GRAPPA 2 were used; these two factors accelerate the imaging 8-fold and often lead to quite poor quality. Was any kind of QA performed?
We thank the reviewer for identifying this error. GRAPPA 2 was in fact used for our T1-MPRAGE image acquisition but not during the diffusion data acquisition. The diffusion data were acquired with a multi-band acceleration factor of 4. We have now corrected this mistake.
(2) Why not include MPRAGE data into the analysis, in particular, for predictions?
We thank the reviewer for the suggestion. The collaboration on this paper was set up around the diffusion data. In addition, the MPRAGE data from New Haven related to prediction are already published (10.1073/pnas.1918682117), and the MPRAGE data of the Mannheim data set are part of the larger project and will be published elsewhere.
(3) In preprocessing, the authors wrote: "Eddy current corrects for image distortions due to susceptibility-induced distortions and eddy currents in the gradient coil". However, they did not mention that they acquired phase-opposite b0 data. This means eddy_openmp likely works only as an alignment tool, not as a susceptibility corrector.
We kindly thank the reviewer for bringing this to our attention. We indeed did not collect b0 data in the phase-opposite direction; however, eddy_openmp can still be used to correct for eddy-current distortions and perform motion correction, although the absence of phase-opposite b0 data may limit its ability to fully address susceptibility artifacts. This is now noted in the Supplementary Methods under the Preprocessing section (SI, page 3, lines 16-18): “We do note, however, that as we did not acquire data in the phase-opposite direction, the susceptibility-induced distortions may not be fully corrected.”
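For illustration, an eddy_openmp invocation consistent with this setup might look as follows (file names are placeholders; note the absence of a --topup input, which would require reversed-phase-encode b0s):

import subprocess

subprocess.run([
    "eddy_openmp",
    "--imain=dwi.nii.gz",         # raw diffusion series
    "--mask=brain_mask.nii.gz",   # binary brain mask
    "--acqp=acqparams.txt",       # acquisition parameters
    "--index=index.txt",          # volume-to-acqp row mapping
    "--bvecs=bvecs", "--bvals=bvals",
    "--out=dwi_eddy",             # eddy- and motion-corrected output
], check=True)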
(4) Version of FSL?
We thank the reviewer for addressing this point that we have now added under the Supplementary Methods (SI, page 3, lines 10-11): “Preprocessing of all data sets was performed employing the same procedures and the FMRIB diffusion toolbox (FDT) running on FSL version 6.0.”
(5) Some short sketches about the connectivity analysis could be useful, at least in SI.
We are grateful for this suggestion, which improves our work. We added the sketches of the connectivity analysis; please see Figure 7 and Supplementary Figure 11.
(6) Machine learning: functions, language, version?
We thank the reviewer for raising these points, which we have now addressed in our resubmission in the Methods section by adding a detailed description of the structural connectivity analysis. We added: “The DKA parcellation comprises 68 cortical surface regions (34 nodes per hemisphere) and 19 subcortical regions. The complete list of ROIs is provided in the supplementary materials’ Table S7. PSC leverages a reproducible probabilistic tractography algorithm 26 to create whole-brain tractography data, integrating anatomical details from high-resolution T1 images to minimize bias in the tractography. We utilized DKA 25 to define the ROIs corresponding to the nodes in the structural connectome. For each pair of ROIs, we extracted the streamlines connecting them by following these steps: 1) dilating each gray matter ROI to include a small portion of white matter regions, 2) segmenting streamlines connecting multiple ROIs to extract the correct and complete pathway, and 3) removing apparent outlier streamlines. Due to its widespread use in brain imaging studies27, 28, we examined the mean fractional anisotropy (FA) value along streamlines and the count of streamlines in this work. The output we used includes fiber count, fiber length, and fiber volume shared between the ROIs, in addition to measures of fractional anisotropy and mean diffusivity.”
The script is described and provided at: https://github.com/MISICMINA/DTI-Study-Resilience-to-CBP.git.
(7) Ethical approval?
The New Haven data are part of a study that was approved by the Yale University Institutional Review Board. This is mentioned under the description of the data, “New Haven (Discovery) data set” (page 23, lines 1,2). Likewise, the Mannheim data are part of a study approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and were collected in accordance with the Declaration of Helsinki in its most recent form. This is also mentioned under “Mannheim data set” (page 26, lines 2-5): “The study was approved by the Ethics Committee of the Medical Faculty of Mannheim, Heidelberg University, and was conducted in accordance with the declaration of Helsinki in its most recent form.”
(1) Traeger AC, Henschke N, Hubscher M, et al. Estimating the Risk of Chronic Pain: Development and Validation of a Prognostic Model (PICKUP) for Patients with Acute Low Back Pain. PLoS Med 2016;13:e1002019.
(2) Hill JC, Dunn KM, Lewis M, et al. A primary care back pain screening tool: identifying patient subgroups for initial treatment. Arthritis Rheum 2008;59:632-641.
(3) Hockings RL, McAuley JH, Maher CG. A systematic review of the predictive ability of the Orebro Musculoskeletal Pain Questionnaire. Spine (Phila Pa 1976) 2008;33:E494-500.
(4) Chou R, Shekelle P. Will this patient develop persistent disabling low back pain? JAMA 2010;303:1295-1302.
(5) Silva FG, Costa LO, Hancock MJ, Palomo GA, Costa LC, da Silva T. No prognostic model for people with recent-onset low back pain has yet been demonstrated to be suitable for use in clinical practice: a systematic review. J Physiother 2022;68:99-109.
(6) Kent PM, Keating JL. Can we predict poor recovery from recent-onset nonspecific low back pain? A systematic review. Man Ther 2008;13:12-28.
(7) Hruschak V, Cochran G. Psychosocial predictors in the transition from acute to chronic pain: a systematic review. Psychol Health Med 2018;23:1151-1167.
(8) Hartvigsen J, Hancock MJ, Kongsted A, et al. What low back pain is and why we need to pay attention. Lancet 2018;391:2356-2367.
(9) Tanguay-Sabourin C, Fillingim M, Guglietti GV, et al. A prognostic risk score for development and spread of chronic pain. Nat Med 2023;29:1821-1831.
(10) Spisak T, Bingel U, Wager TD. Multivariate BWAS can be replicable with moderate sample sizes. Nature 2023;615:E4-E7.
(11) Liu Y, Zhang HH, Wu Y. Hard or Soft Classification? Large-margin Unified Machines. J Am Stat Assoc 2011;106:166-177.
(12) Loffler M, Levine SM, Usai K, et al. Corticostriatal circuits in the transition to chronic back pain: The predictive role of reward learning. Cell Rep Med 2022;3:100677.
(13) Smith SM, Dworkin RH, Turk DC, et al. Interpretation of chronic pain clinical trial outcomes: IMMPACT recommended considerations. Pain 2020;161:2446-2461.
(14) Lieberman G, Shpaner M, Watts R, et al. White Matter Involvement in Chronic Musculoskeletal Pain. The Journal of Pain 2014;15:1110-1119.
(15) Mansour AR, Baliki MN, Huang L, et al. Brain white matter structural properties predict transition to chronic pain. Pain 2013;154:2160-2168.
(16) Wager TD, Atlas LY, Lindquist MA, Roy M, Woo CW, Kross E. An fMRI-based neurologic signature of physical pain. N Engl J Med 2013;368:1388-1397.
(17) Lee JJ, Kim HJ, Ceko M, et al. A neuroimaging biomarker for sustained experimental and clinical pain. Nat Med 2021;27:174-182.
(18) Becker S, Navratilova E, Nees F, Van Damme S. Emotional and Motivational Pain Processing: Current State of Knowledge and Perspectives in Translational Research. Pain Res Manag 2018;2018:5457870.
(19) Spisak T, Kincses B, Schlitt F, et al. Pain-free resting-state functional brain connectivity predicts individual pain sensitivity. Nat Commun 2020;11:187.
(20) Baliki MN, Apkarian AV. Nociception, Pain, Negative Moods, and Behavior Selection. Neuron 2015;87:474-491.
(21) Elman I, Borsook D. Common Brain Mechanisms of Chronic Pain and Addiction. Neuron 2016;89:11-36.
(22) Baliki MN, Petre B, Torbey S, et al. Corticostriatal functional connectivity predicts transition to chronic back pain. Nat Neurosci 2012;15:1117-1119.
(23) Jenkins LC, Chang WJ, Buscemi V, et al. Do sensorimotor cortex activity, an individual's capacity for neuroplasticity, and psychological features during an episode of acute low back pain predict outcome at 6 months: a protocol for an Australian, multisite prospective, longitudinal cohort study. BMJ Open 2019;9:e029027.
(24) Zhang Z, Descoteaux M, Zhang J, et al. Mapping population-based structural connectomes. Neuroimage 2018;172:130-145.
(25) Desikan RS, Segonne F, Fischl B, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 2006;31:968-980.
(26) Maier-Hein KH, Neher PF, Houde J-C, et al. The challenge of mapping the human connectome based on diffusion tractography. Nature Communications 2017;8:1349.
(27) Chiang MC, McMahon KL, de Zubicaray GI, et al. Genetics of white matter development: a DTI study of 705 twins and their siblings aged 12 to 29. Neuroimage 2011;54:2308-2317.
(28) Zhao B, Li T, Yang Y, et al. Common genetic variation influencing human white matter microstructure. Science 2021;372.
(29) Fortin JP, Parker D, Tunc B, et al. Harmonization of multi-site diffusion tensor imaging data. Neuroimage 2017;161:149-170.
-
Reviewer #1 (Public review):
Summary:
In this paper, Misic et al. showed that white matter properties can be used to classify subacute back pain patients who will develop persisting pain.
Strengths:
Compared to most previous papers studying associations between white matter properties and chronic pain, the strength of the method is to perform a prediction in unseen data. Another strength of the paper is the use of three different cohorts. This is an interesting paper that provides a valuable contribution to the field.
Weaknesses:
The main weakness of this study is the sample size. It remains small despite having 3 cohorts. This is problematic because results are often overfitted in such small-sample-size brain imaging studies, especially when all the data are available to the authors at the time of training the model (Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research, Nature Reviews Neuroscience 2017). Thus, having access to all the data, the authors have a high degree of flexibility in data analysis, as they can retrain their model any number of times until it generalizes across all three cohorts. In this case, the testing set could easily become part of the training, making it difficult to assess the real performance, especially for small-sample-size studies.
Even if the performance was properly assessed, their models show AUCs between 0.65-0.70, which is usually considered poor, and most likely without potential clinical use. Despite this, their conclusion was: "This biomarker is easy to obtain (~10 min of scanning time) and opens the door for translation into clinical practice." One may ask who is really willing to use an MRI signature with relatively poor performance that can be outperformed by self-report questionnaires?
Overall, these criticisms are more about the wording sometimes used and the inferences made. I still think this is a very relevant contribution to the field. Showing predictive performance through cross-validation and testing in multiple cohorts is not an easy task and this is a strong effort by the team. I strongly believe this approach is the right one and I believe the authors did a good job.
-
Reviewer #2 (Public review):
The present study aims to investigate brain white matter predictors of back pain chronicity. To this end, a discovery cohort of 28 patients with subacute back pain (SBP) was studied using white matter diffusion imaging. The cohort was investigated at baseline and one-year follow-up when 16 patients had recovered (SBPr) and 12 had persistent back pain (SBPp). A comparison of baseline scans revealed that SBPr patients had higher fractional anisotropy values in the right superior longitudinal fasciculus (SLF) than SBPp patients and that FA values predicted changes in pain severity. Moreover, the FA values of SBPr patients were larger than those of healthy participants, suggesting a role of FA of the SLF in resilience to chronic pain. These findings were replicated in two other independent datasets. The authors conclude that the right SLF might be a robust predictive biomarker of CBP development with the potential for clinical translation.

Developing predictive biomarkers for pain chronicity is an interesting, timely, and potentially clinically relevant topic. The paradigm and the analysis are sound, the results are convincing, and the interpretation is adequate. A particular strength of the study is the discovery-replication approach with replications of the findings in two independent datasets.
-
Reviewer #3 (Public review):
Summary:
The authors suggest a new biomarker of chronic back pain with the option to predict treatment outcome.
Strengths:
The results were reproduced in three studies.
Weaknesses:
The number of participants is still low, an explanation of microstructure changes was not given, and some technical drawbacks are presented.
-
-
learn-eu-central-1-prod-fleet01-xythos.content.blackboardcdn.com learn-eu-central-1-prod-fleet01-xythos.content.blackboardcdn.com
-
We know that eyewitness identifications are fallible.
Some people might misremember what they saw, or lie about the truth.
-
The brain abhors a vacuum. Under the best of observation conditions, the absolute best, we only detect, encode and store in our brains bits and pieces of the entire experience in front of us, and they're stored in different parts of the brain.
The brain can only record part of the information, and it fills in the missing parts when recalling it, which explains why people's memories can be unreliable.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
By combining psychophysics and computational modelling based on the Theory of Visual Attention, this study examines the mechanisms underlying self-prioritization by revealing the influence of self-associations on early attentional selection. While the findings are important, the experimental evidence is incomplete. The relationship between consciousness (awareness) and attention, the potential contamination by arousal, the inconsistent and unexpected results, and the distinction between social and perceptual tasks need to be addressed or improved. The work will be of interest to researchers in psychology, cognitive science, and neuroscience.
-
Reviewer #1 (Public review):
Summary:
The authors intended to investigate the earliest mechanisms enabling self-prioritization, especially in attention. Combining a temporal order judgement task with computational modelling based on the Theory of Visual Attention (TVA), the authors suggested that the shapes associated with the self can fundamentally alter the attentional selection of sensory information into awareness. This self-prioritization in attentional selection occurs automatically at early perceptual stages. Furthermore, the processing benefits obtained from attentional selection via self-relatedness and physical salience were separated from each other.
Strengths:
The manuscript is written in a way that is easy to follow. The methods of the paper are very clear and appropriate.
Weaknesses:
There are two main concerns:
(1) The authors had too strong a pre-hypothesis that self-prioritization was associated with attention. They used prior entry to consciousness (awareness) as an index of attention, which is not appropriate. Other processes could make a stimulus enter consciousness earlier (e.g., high arousal, high sensitivity) that are not attention. The self-related/associated stimulus may engage such processes, rather than attention, to make the stimulus more easily caught. Perhaps the authors could include other methods such as EEG or MEG to answer this question.
(2) The authors suggested that there are two independent attention processes. I question whether the brain needs two attention systems. Is it possible that social and perceptual (physical properties of the stimulus) salience engage the same attentional process through different routes?
-
Reviewer #2 (Public review):
Summary:
The main aim of this research was to explore whether and how self-associations (as opposed to other associations) bias early attentional selection, and whether this can explain well-known self-prioritization phenomena, such as the self-advantage in perceptual matching tasks. The authors adopted the Theory of Visual Attention (TVA) by estimating TVA parameters using a hierarchical Bayesian model from the field of attention and applied it to investigate the mechanisms underlying self-prioritization. They also discussed the constraints on the self-prioritization effect in attentional selection. The key conclusions reported were:
(1) Self-association enhances both attentional weights and processing capacity
(2) Self-prioritization in attentional selection occurs automatically but diminishes when active social decoding is required, and
(3) Social and perceptual salience capture attention through distinct mechanisms.
Strengths:
Transferring the Theory of Visual Attention parameters estimated by a hierarchical Bayesian model to investigate self-prioritization in attentional selection was a smart approach. This method provides a valuable tool for accessing the very early stages of self-processing, i.e., attention selection. The authors conclude that self-associations can bias visual attention by enhancing both attentional weights and processing capacity and that this process occurs automatically. These findings offer new insights into self-prioritization from the perspective of the early stage of attentional selection.
Weaknesses:
(1) The results are not convincing enough to definitively support their conclusions. This is due to inconsistent findings (e.g., the model selection suggested condition-specific c parameters, but the increase in processing capacity was only slight; the correlations between attentional selection bias and SPE were inconsistent across experiments), unexpected results (e.g., when examining the impact of social association on processing rates, the other-associated stimuli were processed faster after social association, while the self-associated stimuli were processed more slowly), and weak correlations between attentional bias and behavioral SPE, which were reported without any p-value corrections. Additionally, the reasons why the attentional bias of self-association occurs automatically but disappears during active social decoding remain difficult to explain. It is also possible that the self-association with shapes was not strong enough to demonstrate attention bias, rather than the automatic processes as the authors suggest. Although these inconsistencies and unexpected results were discussed, all were post hoc explanations. To convince readers, empirical evidence is needed to support these unexpected findings.
(2) The generalization of the findings needs further examination. The current results seem to rely heavily on the perceptual matching task. Whether this attentional selection mechanism of self-prioritization can be generalized to other stimuli, such as self-name, self-face, or other domains of self-association advantages, remains to be tested. In other words, more converging evidence is needed.
(3) The comparison between the "social" and "perceptual" tasks remains debatable, as it is challenging to equate the levels of social salience and perceptual salience. In addition, these two tasks differ not only in terms of social decoding processes but also in other aspects such as task difficulty. Whether the observed differences between the tasks can definitively suggest the specificity of social decoding, as the authors claim, needs further confirmation.
-
Author response:
Public Reviews:
Reviewer #1 (Public review):
Summary:
The authors intended to investigate the earliest mechanisms enabling self-prioritization, especially in attention. Combining a temporal order judgement task with computational modelling based on the Theory of Visual Attention (TVA), the authors suggested that the shapes associated with the self can fundamentally alter the attentional selection of sensory information into awareness. This self-prioritization in attentional selection occurs automatically at early perceptual stages. Furthermore, the processing benefits obtained from attentional selection via self-relatedness and physical salience were separated from each other.
Strengths:
The manuscript is written in a way that is easy to follow. The methods of the paper are very clear and appropriate.
Thank you for your valuable feedback and helpful suggestions. Please see specific answers below.
Weaknesses:
There are two main concerns:
(1) The authors had too strong a pre-hypothesis that self-prioritization was associated with attention. They used prior entry to consciousness (awareness) as an index of attention, which is not appropriate. Other processes could make a stimulus enter consciousness earlier (e.g., high arousal, high sensitivity) that are not attention. The self-related/associated stimulus may engage such processes, rather than attention, to make the stimulus more easily caught. Perhaps the authors could include other methods such as EEG or MEG to answer this question.
We found the possibility of other mechanisms to be responsible for “prior entry” interesting too, but believe there are solid grounds for the hypothesis that it is indicative of attention:
First, prior entry has a long-standing history as an index of attention (e.g., Titchener, 1903; Shore et al., 2001; Yates and Nicholls, 2009; Olivers et al., 2011; see Spence & Parise, 2010, for a review). Of course, other factors (like the ones mentioned) can contribute to encoding speed. However, for the perceptual condition, we systematically varied a stimulus feature that is associated with selective attention (salience; see, e.g., Wolfe, 2021) and kept other features that are known to be associated with other factors such as arousal and sensitivity constant across the two variants (e.g., clear above-threshold visibility) or varied them between participants (e.g., the colours/shapes used).
Second, in the social salience condition we used a manipulation that has repeatedly been used to establish social salience effects in other paradigms (e.g., Li et al., 2022; Liu & Sui, 2016; Scheller et al., 2024; Sui et al., 2015; see Humphreys & Sui, 2016, for a review). We assume that the reviewer’s comment suggests that changes in arousal or sensitivity may be responsible for social salience effects, specifically. We have several reasons to interpret the social salience effects as an alteration in attentional selection, rather than a result of arousal or sensitivity:
Arousal and attention are closely linked. However, within the present model, arousal is more likely linked to the availability of processing resources (capacity parameter C). That is, enhanced arousal is typically not stimulus-specific and is therefore unlikely to affect the *relative* advantage in processing weights/rates of the self-associated (vs. other-associated) stimuli. Indeed, a recent study showed that arousal does not modulate the relative division of attentional resources (as modelled by the Theory of Visual Attention; Asgeirsson & Nieuwenhuis, 2017). As such, it is unlikely that arousal can explain the observed results in relative processing changes for the self and other identities.
Further, there is little reason to assume that presenting a different shape enhances perceptual sensitivity. Firstly, all stimuli were presented well above threshold, which would shrink any effects resulting from increases in sensitivity alone. Secondly, shape-associations were counterbalanced across participants, reducing the possibility that specific features present in the stimulus display led to the measurable change in processing rates as a result of enhanced shape-sensitivity.
Taken together, both, the wealth of literature that suggests prior entry to index attention and the specific design choices within our study, strongly support the notion that the observed changes in processing rates are indicative of changes in attentional selection, rather than other mechanisms (e.g. arousal, sensitivity).
(2) The authors suggested that there are two independent attention processes. I question whether the brain needs two attention systems. Is it possible that social and perceptual (physical properties of the stimulus) salience engage the same attentional process through different routes?
We appreciate this thought-provoking comment. We conceptualize attention as a process that can facilitate different levels of representation, rather than as separate systems tuned to specific types of information. Different forms of representation, such as the perceptual shape, or the associated social identity, may be impacted by the same attentional process at different levels of representation. Indeed, our findings suggest that both social and perceptual salience effects may result from the same attentional system, albeit at different levels of representation. This is further supported by the additivity of perceptual and social salience effects and the negative correlation of processing facilitations between perceptually and socially salient cues. These results may reflect a trade-off in how attentional resources are distributed between either perceptually or socially salient stimuli.
Reviewer #2 (Public review):
Summary:
The main aim of this research was to explore whether and how self-associations (as opposed to other associations) bias early attentional selection, and whether this can explain well-known self-prioritization phenomena, such as the self-advantage in perceptual matching tasks. The authors adopted the Theory of Visual Attention (TVA) by estimating TVA parameters using a hierarchical Bayesian model from the field of attention and applied it to investigate the mechanisms underlying self-prioritization. They also discussed the constraints on the self-prioritization effect in attentional selection. The key conclusions reported were:
(1) Self-association enhances both attentional weights and processing capacity
(2) Self-prioritization in attentional selection occurs automatically but diminishes when active social decoding is required, and
(3) Social and perceptual salience capture attention through distinct mechanisms.
Strengths:
Transferring the Theory of Visual Attention parameters estimated by a hierarchical Bayesian model to investigate self-prioritization in attentional selection was a smart approach. This method provides a valuable tool for accessing the very early stages of self-processing, i.e., attention selection. The authors conclude that self-associations can bias visual attention by enhancing both attentional weights and processing capacity and that this process occurs automatically. These findings offer new insights into self-prioritization from the perspective of the early stage of attentional selection.
Thank you for your valuable feedback and helpful suggestions. Please see specific answers below.
Weaknesses:
(1) The results are not convincing enough to definitively support their conclusions. This is due to inconsistent findings (e.g., the model selection suggested condition-specific c parameters, but the increase in processing capacity was only slight; the correlations between attentional selection bias and SPE were inconsistent across experiments), unexpected results (e.g., when examining the impact of social association on processing rates, the other-associated stimuli were processed faster after social association, while the self-associated stimuli were processed more slowly), and weak correlations between attentional bias and behavioral SPE, which were reported without any p-value corrections. Additionally, the reasons why the attentional bias of self-association occurs automatically but disappears during active social decoding remain difficult to explain. It is also possible that the self-association with shapes was not strong enough to demonstrate attention bias, rather than the automatic processes as the authors suggest. Although these inconsistencies and unexpected results were discussed, all were post hoc explanations. To convince readers, empirical evidence is needed to support these unexpected findings.
Thank you for outlining the specific points that raise your concern. We were happy to address these points as follows:
a. Replications and Consistency: In our study, we consistently observed trends (relative reduction in processing speed of the self-associated stimulus) in the social salience conditions across experiments. While Experiment 2 demonstrated a significant reduction in processing rate towards self-stimuli, there was a notable trend in Experiment 1 as well.
b. Condition-specific parameters: The condition-specific C parameters, though presenting a small effect size, significantly improved model fit. Inspecting the HDI ranges of our estimated C parameters indicates a high probability (85-89%) that processing capacity increased due to social associations, suggesting that even small changes (~2 Hz) can hold meaningful implications within the context of attentional selection.
Please also note that the main conclusions about relative salience (self/other, salient/non-salient) are based on the relative processing rates (see the rate equation sketched after this list). Processing rates are the product of the processing capacity (condition- but not stimulus-dependent) and the attentional weight (condition- and stimulus-dependent). The latter is crucial for judging the *relative* advantage of the salient stimulus. Hence, the self-/salient-stimulus advantage that is reflected in the ‘processing rate difference’ is automatically also reflected in the relative attentional weights attributed to the self/other and salient/non-salient stimuli. As such, the overall result of an automatic relative advantage of self-associated stimuli holds independently of the change in overall processing capacity.
c. Correlations: Regarding the correlations the reviewer noted, we wish to clarify that these were exploratory, and not the primary focus of our research. The aim of these exploratory analyses was to gauge the contribution of attentional selection to matching-based SPEs. As SPEs measured via the matching task are typically based on multiple different levels of processing, the contribution of early attentional selection to their overall magnitude was unclear. Without being able to gauge the possible effect sizes, corrected analyses may prevent detecting small but meaningful effects. As such, the effect sizes reported serve future studies to estimate power a priori and conduct well-powered replications of such exploratory effects. Additionally, Bayes factors were provided to give an appreciation of the strength of the evidence, all suggesting at least moderate evidence in favour of a correlation. Lastly, please note that effects that were measured within individuals and task (processing rate increase in social and perceptual decision dimensions in the TOJ task) showed consistent patterns, suggesting that the modulations within tasks were highly predictive of each other, while the modulations between tasks were not as clearly linked. We will add this clarification to the revised manuscript.
d. Unexpected results: The unexpected results concerning the processing rates of other-associated versus self-associated stimuli certainly warrant further discussion. We believe that the additional processing steps required for social judgments, reflected in enhanced reaction times, may explain the slower processing of self-associated stimuli in that dimension. We agree that not all findings will align with initial hypotheses, and this variability presents avenues for further research. We have added this to the discussion of social salience effects.
e. Whether association strength can account for the findings: We appreciate the scepticism regarding the strength of self-association with shapes. However, our within-participant design and control matching task indicate that the relative processing advantage for self-associated stimuli holds across conditions. This makes the scenario that “the self-association with shapes was not strong enough to demonstrate attention bias” very unlikely. Firstly, the relative processing advantage of self-associated stimuli in the perceptual decision condition, and the absence of such advantage in the social decision condition, were evidenced in the same participants. Hence, the strength of association between shapes and social identities was the same for both conditions. However, we only find an advantage for the self-associated shape when participants make perceptual (shape) judgements. It is therefore highly unlikely that the “association strength” can account for the difference in the outcomes between the conditions in experiment 1. Also, note that the order in which these conditions were presented was counter-balanced across participants, reducing the possibility that the automatic self-advantage was merely a result of learning or fatigue. Secondly, all participants completed the standard matching task to ascertain that the association between shapes and identities did indeed lead to processing advantages (across different levels).
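For reference, point (b) relies on the standard rate decomposition of the Theory of Visual Attention; in its simplified fixed-capacity form (Bundesen, 1990), the processing rate of stimulus x can be written in LaTeX as

v_x = C \cdot \frac{w_x}{\sum_{z \in S} w_z}

where C is the overall processing capacity, w_x the attentional weight of stimulus x, and S the set of displayed stimuli. A change in C rescales all rates equally, while the weights determine the relative (self vs. other) advantage.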
In summary, we believe that the evidence we provide supports the final conclusions. We do, of course, welcome any further empirical evidence that could enhance our understanding of the contribution of different processing levels to the SPE and are committed to exploring these areas in future work.
(2) The generalization of the findings needs further examination. The current results seem to rely heavily on the perceptual matching task. Whether this attentional selection mechanism of self-prioritization can be generalized to other stimuli, such as self-name, self-face, or other domains of self-association advantages, remains to be tested. In other words, more converging evidence is needed.
The reviewer indicates that the current findings rely heavily on the perceptual matching task and that it would be more convincing to include other paradigm(s) and different types of stimuli. We are happy to address these points here: First, we specifically used a temporal order paradigm to tap into specific processes, rather than merely relying on the matching task. Attentional selection is, along with other processes, involved in matching, but the TOJ-TVA approach allows tapping into attentional selection specifically. Second, self-prioritization effects have been replicated across a wide range of stimuli (e.g., faces: Wozniak et al., 2018; names or owned objects: Scheller & Sui, 2022a; or even fully unfamiliar stimuli: Wozniak & Knoblich, 2019) and paradigms (e.g., matching task: Sui et al., 2012; cross-modal cue integration: e.g., Scheller & Sui, 2022b; Scheller et al., 2023; continuous flash suppression: Macrae et al., 2017; temporal order judgment: Constable et al., 2019; Truong et al., 2017). Using neutral geometric shapes, rather than faces and names, addresses a key challenge in self research: mitigating the influence of stimulus familiarity on results. In addition, these newly learned, simple stimuli can be combined with other paradigms, such as the TOJ paradigm in the current study, to investigate the broader impact of self-processing on perception and cognition.
To the best of our knowledge, this is the first study showing evidence about the mechanisms that are involved in early attentional selection of socially salient stimuli. Future replications and extensions would certainly be useful, as with any experimental paradigm.
(3) The comparison between the "social" and "perceptual" tasks remains debatable, as it is challenging to equate the levels of social salience and perceptual salience. In addition, these two tasks differ not only in terms of social decoding processes but also in other aspects such as task difficulty. Whether the observed differences between the tasks can definitively suggest the specificity of social decoding, as the authors claim, needs further confirmation.
Equating the levels of social and perceptual salience is indeed challenging, but it was not an aim of the present study. Instead, the present study directly compares the mechanisms and effects of social and perceptual salience, specifically in experiment 2. By manipulating perceptual salience (relative colour) and social salience (relative shape association) independently and jointly, and quantifying the effects on processing rates, our study allows us to directly delineate the contributions of each of these types of salience. The results suggest additive effects (see also Figure 7). Indeed, the possibility remains that these effects are additive because of the use of different perceptual features, so it would be helpful for future studies to explore whether similar perceptual features lead to (supra-/sub-) additive effects. In either case, the study design allows a direct comparison of the effects and mechanisms of social and perceptual salience.
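For the quantitatively inclined reader, additivity of this kind can be checked with a simple 2x2 interaction test; the sketch below uses simulated processing rates under assumed effect sizes and is purely illustrative, not a re-analysis of the study's data:

```python
# Illustrative additivity check: with additive social and perceptual
# salience effects, a 2x2 model shows main effects but no interaction.
# Simulated data; all effect sizes are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for social in (0, 1):
    for perceptual in (0, 1):
        # Additive ground truth: baseline + social + perceptual effects.
        rate = 20 + 5 * social + 4 * perceptual + rng.normal(0, 2, 70)
        rows.append(pd.DataFrame(
            {"rate": rate, "social": social, "perceptual": perceptual}))
df = pd.concat(rows, ignore_index=True)

# The social:perceptual coefficient estimates the interaction;
# under additivity it should be close to zero.
fit = smf.ols("rate ~ social * perceptual", data=df).fit()
print(fit.params)
```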
Regarding the social and perceptual decision dimensions, these were not expected to be equated. Indeed, the social decision dimension requires additional retrieval of the associated identity, making it likely more challenging. This additional retrieval is also likely responsible for the slower responses towards the social association compared to the shape itself. However, the motivation to compare the effects of these two decisional dimensions lies in the assumption that the self needs to be task-relevant. Some evidence suggests that the self needs to be task-relevant to induce self-prioritization effects (e.g., Woźniak & Knoblich, 2022). However, these studies typically used matching tasks and were powered to detect large effects only (e.g. f = 0.4, n = 18). As the absence of contributions from decisional processing levels (which interact with task relevance) is likely to reduce the SPE, smaller self-prioritization effects arising from earlier processing levels may not be detectable in such designs. Targeting specific processing levels, especially those with relatively early contributions or small effect sizes, requires larger samples (here: n = 70) to provide sufficient power. Indeed, by contrasting the relative attentional selection effects in the present study, we find that the self does not need to be task-relevant to produce self-prioritization effects. This is in line with recent findings of prior entry of self-faces (Jubile & Kumar, 2021).
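To make the power argument concrete, here is a minimal sketch of this style of sample-size reasoning using statsmodels; it uses a between-groups ANOVA as a simplified stand-in (the cited designs are within-participant), so the numbers are illustrative, not a re-analysis of the cited studies:

```python
# Illustrative power contrast: a design powered for large effects
# versus the sample needed for smaller, earlier-level effects.
# Between-groups ANOVA stand-in; the cited designs are
# within-participant, so these numbers are only illustrative.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Achieved power of n = 18 for a large effect (Cohen's f = 0.4).
power_large = analysis.solve_power(effect_size=0.4, nobs=18,
                                   alpha=0.05, k_groups=2)

# Total n needed for 80% power with a smaller effect (f = 0.2).
n_small = analysis.solve_power(effect_size=0.2, power=0.8,
                               alpha=0.05, k_groups=2)

print(f"power at f = 0.4, n = 18: {power_large:.2f}")
print(f"n needed at f = 0.2, power = .80: {n_small:.0f}")
```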
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
The authors show that SVZ-derived astrocytes respond to a middle cerebral artery occlusion (MCAO) hypoxia lesion by secreting and modulating hyaluronan at the edge of the lesion (penumbra) and that hyaluronan is a chemoattractant to SVZ astrocytes. They use lineage tracing of SVZ cells to determine their origin. They also find that SVZ-derived astrocytes express Thbs4 but astrocytes at the MCAO-induced scar do not. Also, they demonstrate that decreased HA in the SVZ is correlated with gliogenesis. While much of the paper is descriptive/correlative, they do overexpress Hyaluronan synthase 2 via viral vectors and show this is sufficient to recruit astrocytes to the injury. Interestingly, astrocytes preferred to migrate to the MCAO lesion rather than to the region of overexpressed HAS2.
Strengths:
The field has largely ignored the gliogenic response of the SVZ, especially with regard to astrocytic function. These cells and especially newborn cells may provide support for regeneration. Emigrated cells from the SVZ have been shown to be neuroprotective via creating pro-survival environments, but their expression and deposition of beneficial extracellular matrix molecules are poorly understood. Therefore, this study is timely and important. The paper is very well written and the flow of results is logical.
Weaknesses:
The main problem is that they do not show that Hyaluronan is necessary for SVZ astrogenesis and or migration to MCAO lesions. Such loss of function studies have been carried out by studies they cite (e.g. Girard et al., 2014 and Benner et al., 2013). Similar approaches seem to be necessary in this work.
We appreciate the comments by the reviewer. The article is, indeed, largely descriptive, since we attempt to describe in detail what happens to newborn astrocytes after MCAO. Moreover, we have not attempted any modification of the model, such as amelioration of ischemic damage. This is a limitation of the study that we do not hide. However, we use several experimental approaches, such as lineage tracing and hyaluronan modification, to strengthen our conclusions.
Regarding the weaknesses found by the reviewer, we do not claim that hyaluronan is necessary for SVZ astrogenesis. Indeed, we observe that when the MCAO stimulus (i.e. inflammation) is present, the HMW-HA (AAV-Has2) stimulus is less powerful (we discuss this in lines 330-332). We do claim, and we believe we successfully demonstrate, the reverse situation: that SVZ astrocytes modulate hyaluronan, not at the SVZ but at the site of MCAO, i.e. the scar. However, regarding whether hyaluronan is necessary for SVZ astrogenesis, we only show a correlation between its degradation and the time-course of astrogenesis. We suggest this result as a starting point for a follow-up study. We have included a phrase in the discussion (line 310), stating that further experiments are needed to fully establish a link between hyaluronan and astrogenesis in the SVZ.
Major points:
(1) How good of a marker for newborn astrocytes is Thbs4? Did you co-label with B cell markers like EGFr? Is the Thbs4 gene expressed in B cells? Do scRNAseq papers show it is expressed in B cells? Are they B1 or B2 cells?
We chose Thbs4 as a marker of newborn astrocytes based on published research (Beckervordersandforth et al., 2010; Benner et al., 2013; Llorens-Bobadilla et al., 2015; Codega et al., 2014; Basak et al., 2018; Mizrak et al., 2019; Kjell et al., 2020; Cebrian-Silla et al., 2021). Of those studies, at least three associate Thbs4 with B-type cells based on scRNAseq data (Llorens-Bobadilla et al., 2015; Cebrian-Silla et al., 2021; Basak et al., 2018). We have included a sentence about this, with the associated references, in line 92.
We did co-label Thbs4 with EGFR, in the context of MCAO. We observed an increase of EGFR expression with MCAO, similar to the increase in Thbs4 alongside ischemia (see Author response image 1). We did not include this figure in the manuscript since we did not have tissue available from all the time points we used (7d, 60d post-ischemia).
Author response image 1.
Thbs4 cells, in basal and ischemic conditions, represent only a small fraction of IdU-positive cells (Fig 3F), suggesting that they are mostly quiescent cells, i.e., B1 cells. However, the scRNAseq literature is not consistent on this point.
(2) It is curious that there was no increase in Type C cells after MCAO - do the authors propose a direct NSC-astrocyte differentiation?
Type C cells are fast-proliferating cells, and our BrdU/IdU experiment (Fig. 3) suggests that Thbs4 cells are slow-proliferating cells. Some authors (the Encinas lab, Spain) suggest that when the hippocampus is challenged by a harsh stimulus, such as kainate-induced epilepsy, the NSCs differentiate directly into reactive astrocytes and deplete the DG neurogenic niche (Encinas et al., 2011, Cell Stem Cell; Sierra et al., 2015, Cell Stem Cell). We believe this might be the case in our MCAO model and the SVZ niche, since we observe a decrease in DCX labeling in the olfactory bulb (Fig S5) and an increase in astrocytes in the SVZ, which migrate to the ischemic lesion. We did not want to overcomplicate an already complicated paper by dwelling on direct NSC-astrocyte differentiation or on the reactive status of these newborn astrocytes.
(3) The paper would be strengthened with orthogonal views of z projections to show colocalization.
We thank the reviewer for this observation. We have now included orthogonal projections in the critical colocalization IF of CD44 and hyaluronan (hyaluronan internalization) in Fig S6D, and a zoomed-in inset. Hyaluronan membrane synthesis is already depicted with orthogonal projection in Fig 6F.
(4) It is not clear why the dorsal SVZ is analysed and focused on in Figure 4. This region emanates from the developmental pallium (cerebral cortex anlagen). It generates some excitatory neurons early postnatally and is thought to have differential signalling such as Wnt (Raineteau group).
We decided to analyze the dorsal SVZ in depth after the BrdU experiment (Fig S3), where we observed an increase in BrdU+/Thbs4+ cells mostly in the dorsal area. Hence, the electrodes for electroporation were oriented to label the dorsal area. We appreciate the paper by the Raineteau lab, but we assume that this region may play other roles (apart from generating excitatory neurons early postnatally) depending on the developmental stage (our model uses adults) and/or pathological conditions (MCAO).
(5) Several of the images show the lesion and penumbra as being quite close to the SVZ. Did any of the lesions contact the SVZ? If so, I would strongly recommend excluding them from the analysis as such contact is known to hyperactivate the SVZ.
We thank the referee for the suggestion to exclude the more harshly MCAO-lesioned animals from the analysis. Indeed, MCAO ischemia can, methodologically, generate different degrees of tissue damage that cannot be easily controlled. Thus, based on TTC staining, we had already excluded the animals with more severe tissue damage that contacted the SVZ.
(6) The authors switch to a rat in vitro analysis towards the end of the study. This needs to be better justified. How similar are the molecules involved between mouse and rat?
We chose the rat culture since it is already established in our lab and, in our hands, is much more reproducible than the mouse brain cell culture that we use occasionally (for transgenic animals only) (Benito-Muñoz et al., Glia, 2016; Cavaliere et al., Front Cell Neurosci, 2013). It is true that there could be differences between rat and mouse Thbs4-cell physiology, despite a 96% identity between the rat and mouse Thbs4 protein sequences (BLASTp). In vitro, we only confirm the capacity of astrocytes to internalize hyaluronan, a finding that we did not expect from our in vivo experiments. These observations, notwithstanding the obvious differences between in vivo and in vitro scenarios, suggest that HA internalization by astrocytes is a cross-species event, at least in rodents. Regarding HA itself, hyaluronan is identical across species, since it is a glycan (this is why there are no antibodies against HA, and one has to rely on binding proteins such as HABP to label it).
(7) Similar comment for overexpression of naked mole rat HA.
We chose the naked mole-rat hyaluronan synthase (HAS) because it produces HA of very high molecular weight, similar to that found accumulated in the glial scar at the lesion border. The naked mole-rat HAS expressed in mice (Gorbunova lab) is a well-established tool in the ECM field (Zhang et al., 2023, Nature; Tian et al., 2013, Nature).
Reviewer #1 (Recommendations to authors):
(1) Line 22: most of the cells that migrate out of the SVZ are not stem cells but cells further along in the lineage - neuroblasts and glioblasts.
We thank the reviewer for this clarification. We have modified the abstract accordingly.
(2) In Figure 3d the MCAO group staining with GFAP looks suspiciously like ependymal cells which have been shown to be dramatically activated by stroke models.
The picture does show ependymal cells, which are located next to the ventricle and are indeed highly proliferative in stroke. However, these cells do not express Thbs4 (Shah et al., 2018, Cell). In the quantifications from the SVZ of BrdU- and IdU-injected animals (Fig 3e and f), we only take into account Thbs4+/GFAP+ cells, not cells that are GFAP+ only.
(3) The TTC injury shown in Figure 5c is too low mag.
We apologize for the low magnification. We have increased the magnification two-fold without compromising resolution. The problem might also have arisen from the compression of TIFF into JPEG during the PDF export process. We will address this in the revised version by carefully selecting export settings. The images we used are all of publication quality (300 ppi).
(4) How specific to HA is HABP?
Hyaluronic Acid Binding Protein (HABP) is a canonical marker for hyaluronan that is also used in ELISA to quantify HA specifically, since it does not bind other glycosaminoglycans. The label has been used in the field for years in immunochemistry, and several controls and validations have been published: Deepa et al. (2006, JBC) performed appropriate controls of HABP-biotin labeling using hyaluronidase (which abolishes labeling) and chondroitinase (which preserves labeling). Soria et al. (2020, Nat Commun) checked that (i) streptavidin does not label unspecifically, and (ii) HABP staining is reduced after hyaluronan depletion in vivo with the HAS inhibitor 4MU.
(5) A number of images are out of focus and thus difficult to interpret (e.g. SFig. 4e).
This is true. We realized that the PDF conversion process for the preprint version had severely compressed the larger images, such as the one in Fig. S4e. We have submitted a revised version as a better-quality PDF (the final paper will have the original TIFF files). We apologize for the technical problem.
(6) "restructuration" is not a word.
We apologize for the mistake and thank the reviewer for the correction. We corrected “restructuration” with “reorganization” in line 67.
(7) While much of the manuscript is well-written and logical it could use an in-depth edit to remove awkward words and phrasings.
A native English speaker has revised the manuscript to correct these awkward phrases. All changes are labeled in red in the revised version.
(8) Please describe why and how you used skeleton analysis for HABP in the methods, this will be unfamiliar to most readers. The one-sentence description in the methods is insufficient.
We have modified the text accordingly, explaining in depth the logic behind the skeleton analysis (line 204). We also added several lines of text describing the image analysis in detail (CD44/HABP spots, fractal dimension, masks for membrane-bound HABP, among others; lines 484-494).
Reviewer #2 (Public Review)
Summary:
In their manuscript, Ardaya et al. have addressed the impact of ischemia-induced gliogenesis from the adult SVZ and its effect on the remodeling of the extracellular matrix (ECM) in the glial scar. They use Thbs4, a marker previously identified to be expressed in astrocytes of the SVZ, to understand its role in ischemia-induced gliogenesis. First, the authors show that Thbs4 is expressed in the SVZ and that its expression levels increase upon ischemia. Next, they claim that ischemia induces the generation of newborn astrocytes from SVZ neural stem cells (NSCs), which migrate toward the ischemic regions to accumulate at the glial scar. Thbs4-expressing astrocytes are recruited to the lesion by hyaluronan, where they modulate ECM homeostasis.
Strengths:
The findings of these studies are in principle interesting and the experiments are in principle good.
Weaknesses:
The manuscript suffers from an evident lack of clarity and precision in regard to their findings and their interpretation.
We thank the reviewer for the valuable feedback. We hope the changes proposed improve clarity and precision throughout the manuscript.
(1) The authors talk about Thbs4 expression in NSCs and astrocytes, but neither of both is shown in Figure 1, nor have they used cell type-specific markers.
As we also reported to Referee #1 (major point 1), Thbs4 is widely considered in the literature as a valid marker for newly formed astrocytes (Beckervordersandforth et al., 2010; Benner et al., 2013; Llorens-Bobadilla et al., 2015; Codega et al., 2014; Basak et al., 2018; Mizrak et al., 2019; Kjell et al., 2020; Cebrian-Silla et al., 2021). Some of the studies mentioned here and discussed in the manuscript text also associate Thbs4 with B-type cells based on scRNAseq data (Llorens-Bobadilla et al., 2015; Cebrian-Silla et al., 2021; Basak et al., 2018). Moreover, we also showed colocalization of Thbs4 with the activated stem cell marker Nestin (Fig. 2), the glial marker GFAP (Fig. 3), and the dorsal NSC marker tdTom (from electroporation, Fig. 4).
(2) Very important for all following experiments is to show that Thbs4 is not expressed outside of the SVZ, specifically in the areas where the lesion will take place. If Thbs4 was expressed there, the conclusion that Thbs4+ cells come from the SVZ to migrate to the lesion would be entirely wrong.
In Figure 1a, we show that Thbs4 is expressed in the telencephalon almost exclusively in the neurogenic regions (SVZ, RMS and OB), together with the cerebellum and VTA, which are likely not directly topographically connected to the damaged areas (cortex and striatum). Regarding the origin of Thbs4+ cells, we demonstrated their SVZ origin by lineage-tracing experiments after in vivo cell labeling (Fig. 4).
(3) Next, the authors want to confirm the expression level of Thbs4 by electroporation of pThbs4-eGFP at P1 and write that this results in 20% of total cells expressing GFP, especially in the rostral SVZ. I do not understand the benefit of this sentence. This may be a confirmation of expression, but it also shows that the GFP+ cells derive from early postnatal NSCs.
Furthermore, these cells look all like astrocytes, so the authors could have made a point here that indeed early postnatal NSCs expressing Thbs4 generate astrocytes alongside development. Here, it would have been interesting to see how many of the GFP+ cells are still NSCs.
We thank the reviewer for this useful remark. We have rephrased this paragraph in the results section (Line 99).
(4) In the next chapter, the authors show that Thbs4 increases in expression after brain injury. I do not understand the meaning of the graphs showing expression levels of distinct cell types of the neuronal lineage. Please specify why this is interesting and what to conclude from that.
Also here, the expression of Thbs4 should be shown outside of the SVZ as well.
In Fig 2, we show the temporal expression of two markers (besides Thbs4) in the SVZ. Nestin and DCX are the gold-standard markers for NSCs and neuroblasts, respectively. This is already explained in line 119. What we did not explain, and now state in line 124, is that Nestin and DCX decrease immediately after ischemia (7d time-point). This probably means that the NSCs stop differentiating into neuroblasts in favor of glioblast formation. This is also supported by the experiments in the olfactory bulb depicted in Fig. S5C-H.
(5) Next, the origin of newborn astrocytes from the SVZ upon ischemia is revealed. The graphs indicate that the authors perfused at different time points after tMCAO. Did they also show the data of the early time points? If only of the 30dpi, they should remove the additional time points indicated in the graph. In line 127 they talk about the origin of newborn astrocytes. Until now they have not even mentioned that new astrocytes are generated. Furthermore, the following sentences are imprecise: first they write that the number of slow proliferation NSCs is increased, then they talk about astrocytes. How exactly did they identify astrocytes and separate them from NSCs? Morphologically? Because both cell types express GFAP and Thbs4.
The same problem also occurs throughout the next chapter.
We thank the reviewer for this interesting comment. The experiment in Fig 3 combines BrdU and IdU. This is a tricky experiment: chronic BrdU is normally analyzed after 30d, because the experimenter must wait for BrdU to wash out (it labels slow-proliferating cells). Since we also wanted to label fast-proliferating cells with IdU, we gave IP injections of this nucleotide at the different time points and perfused the day after. It would not make sense to quantify BrdU at the earlier time points; we do show it in Fig 3e, but only to visualize its colocalization with Thbs4 and read the tendency of the experiment. The quantification of BrdU (not of IdU) is done only at 30 DPI, as explained in the methods (line 407).
“In line 127, they talk about the origin of newborn astrocytes…”
Indeed, we wanted the paragraph title to convey that ischemia induces the generation of new astrocytes, which is described more clearly in the text. We changed the paragraph title to “Characterization of ischemia-induced cell populations”.
“How exactly did they identify astrocytes and separate them from NSC?”
With this experiment, using two different protocols to label proliferating cells (BrdU vs IdU), we wanted to track the precursor cells that differentiate into astrocytes and already express the marker Thbs4. Indeed, the different increases and rates of proliferation relate only to the progenitor cells that will later differentiate into astrocytes. In this experiment we only referred to the astrocytes in the last sentence: “These results suggest that, after ischemia, Thbs4-positive astrocytes derive from the slow proliferative type B cells”.
(6) "These results suggest that ischemia-induced astrogliogenesis in the SVZ occurs in type B cells from the dorsal region, and that these newborn Thbs4-positive astrocytes migrate to the ischemic areas." This sentence is a bit dangerous and bares at least one conceptual difficulty: if NSCs generate astrocytes under normal conditions and along the cause of postnatal development (which they do), then local astrocytes (expressing the tdTom because they stem from a postnatal NSC ), may also react to MCAO and proliferate locally. So the astrocytes along the scar do not necessarily come from adult NSCs upon injury but from local astrocytes. If the authors state that NSCs generate astrocytes that migrate to the lesion, I would like to see that no astrocytes inside the striatum carry the tdTom reporter before MCAO is committed.
We understand the referee's concern about the postnatal origin of astrocytes that could also be labeled with tdTom. Our hypothesis, tested at the beginning of the paper, is that SVZ-derived astrocytes derive from slow proliferative NSCs, making it unlikely that tdTom+ cells could reach the cortical region in such a short time frame. This is why we assumed that local astrocytes cannot be positive for tdTom. We characterized the expression of tdTom in sham animals and observed few tdTom+ cells in the cortex and striatum (Author response image 2 and Figure S4). Under physiological conditions, tdTom expression remains mainly in the SVZ and the corpus callosum. However, proliferation of local tdTom-labeled astrocytes (generated early postnatally) could explain the small percentage of tdTom+ cells in the ischemic regions that do not express Thbs4, although this percentage could also represent other cell types such as OPCs or oligodendrocytes.
Author response image 2.
(7) If astrocytes outside the SVZ do not express Thbs4, I would like to see it. Otherwise, the discrimination of SVZ-derive GFAP+/Thbs4+ astrocytes and local astrocytes expressing only GFAP is shaky.
Regarding Thbs4 outside the SVZ, we already answered this in point 2 (please refer to Fig 1A). We also quantified Thbs4+/GFAP+ astrocytes in the corpus callosum, cortex and striatum of sham and MCAO mice (Figure S5a-b) and did not observe local astrocytes expressing Thbs4 under physiological conditions.
(8) Please briefly explain what a Skeleton analysis and a Fractal dimension analysis is, and what it is good for.
We apologize for the brief information on the Skeleton and Fractal dimension analyses. We have included a detailed explanation of these analyses in the methods (lines 484-494).
(9) The chapter on HA is again a bit difficult to follow. Please rewrite to clarify who produces HA and who removes it by again showing all astrocyte subtypes (GFAP+/Thbs4+ and GFAP+/Thbs4-).
We apologize for the lack of clarity. We rewrote some passages of those chapters (changes in red), trying to convey the ideas more clearly. We also changed a panel in Figure S6b-c to clarify all astrocyte subtypes that internalize hyaluronan (Thbs4+/GFAP+ and Thbs4-/GFAP+). See Author response image 3.
Author response image 3.
(10) Why did the authors separate dorsal, medial, and ventral SVZ so carefully? Do they comment on it? As far as I remember, astrogenesis in physiological conditions has some local preferences (dorsal?)
We performed the electroporation protocol in the dorsal SVZ based on previous results (Figure 3 and Figure S3). NSCs produce specific neuron types in the olfactory bulb according to their location in the SVZ. However, postnatal production of astrocytes occurs mainly through local astrocyte proliferation, and the SVZ contribution is very limited at that time point.
Reviewer #3 (Public Review)
Summary:
The authors aimed to study the activation of gliogenesis and the role of newborn astrocytes in a post-ischemic scenario. Combining immunofluorescence, BrdU-tracing, and genetic cellular labelling, they tracked the migration of newborn astrocytes (expressing Thbs4) and found that Thbs4-positive astrocytes modulate the extracellular matrix at the lesion border by synthesis but also degradation of hyaluronan. Their results point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. This work's major strength is the fact that it is tackling the function of SVZ newborn astrocytes, whose role is undisclosed so far.
Strengths:
The article is innovative, of good quality, and clearly written, with properly described Materials and Methods, data analysis, and presentation. In general, the methods are designed properly to answer the main question of the authors, being a major strength. Interpretation of the data is also in general well done, with results supporting the main conclusions of this article.
Weaknesses:
However, there are some points of this article that still need clarification to further improve this work.
(1) As a first general comment, is it possible that the increase in Thbs4-positive astrocytes can also happen locally close to the glia scar, through the proliferation of local astrocytes or even from local astrocytes at the SVZ? As it was shown in published articles most of the newborn astrocytes in the adult brain actually derive from proliferating astrocytes, and a smaller percentage is derived from NSCs. How can the authors rule out a contribution of local astrocytes to the increase of Thbs4-positive astrocytes? The authors also observed that only about one-third of the astrocytes in the glial scar derived from the SVZ.
We thank the reviewer for the interesting comment. We have extended the discussion of this topic in the manuscript (lines 333-342), including the statement that about one-third of glial scar astrocytes come from the SVZ, without downplaying the role of local astrocytes. Whether the glial scar is populated by newborn astrocytes derived from the SVZ or by local astrocytes is under debate: some groups found a contribution from local astrocytes (Frisén group; Magnusson et al., 2014), whereas others observed the opposite (Li et al., 2010; Benner et al., 2013; Faiz et al., 2015; Laug et al., 2019; Pous et al., 2020).
In our study we observed that Thbs4 expression is almost absent in the cortex and striatum of sham mice. To demonstrate that newborn astrocytes derive from the SVZ we used two techniques: chronic BrdU treatment and cell tracing, which mainly labels SVZ neural stem cells. Fast-proliferating cells lose BrdU quickly, so local astrocytes under ischemic conditions do not retain BrdU. In addition, we injected IdU the day before perfusion to see whether local astrocytes express Thbs4 when they respond to brain ischemia. However, we did not observe proliferating local astrocytes expressing Thbs4 after MCAO (see Author response image 4).
Author response image 4.
As mentioned in the response to reviewer 2, the cell tracing technique could label early postnatal astrocytes. We characterized the technique and found only a small percentage of tdTom expression in the cortex and striatum of sham animals. This tdTom population could explain the percentage of tdTom+ cells in the ischemic regions that do not express Thbs4, although it could also represent other cell types such as OPCs or oligodendrocytes. Taken together, the evidence suggests that the Thbs4+ astrocyte population derives from the SVZ.
We did indeed observe a small contribution of Thbs4+ astrocytes to the glial scar. However, Thbs4+ astrocytes arrive at the lesion in a critical temporal window, when local hyper-reactive astrocytes die or lose their function. We hypothesized that Thbs4+ astrocytes could help local astrocytes, or replace them, in reorganizing the extracellular space and the glial scar, an instrumental process for the recovery of the ischemic area.
(2) It is known that the local, GFAP-reactive astrocytes at the scar can form the required ECM. The authors propose a role of Thbs4-positive astrocytes in the modulation, and perhaps maintenance, of the ECM at the scar, thus participating in scar formation likewise. So, this means that the function of newborn astrocytes is only to help the local astrocytes in the scar formation and thus contribute to tissue regeneration. Why do we need specifically the Thbs4-positive astrocytes migrating from the SVZ to help the local astrocytes? Can you discuss this further?
Unfortunately, we could not demonstrate which molecular machinery is involved in these mechanisms, and we can only speculate on the functional meaning of a second wave of glial activation. We have added an extended discussion in lines 333-342.
(3) The authors observed that the number of BrdU- and DCX-positive cells decreased 15 dpi in all OB layers (Fig. S5). They further suggest that ischemia-induced a change in the neuroblasts ectopic migratory pathway, depriving the OB layers of the SVZ newborn neurons. Are the authors suggesting that these BrdU/DCX-positive cells now migrate also to the ischemic scar, or do they die? In fact, they see an increase in caspase-3 positive cells in the SVZ after ischemia, but they do not analyse which type of cells are dying. Alternatively, is there a change in the fate of the cells, and astrogliogenesis is increased at the expense of neurogenesis? The authors should understand which cells are Cleaved-caspase-3 positive at the SVZ and clarify if there is a change in cell fate. Also please clarify what happens to the BrdU/DCX-positive cells that are born at the SVZ but do not migrate properly to the OB layers.
Indeed, we cannot demonstrate the fate of the missing BrdU/DCX cells in the OB. We can reasonably speculate that, following the ischemic insult, the neurogenic machinery invests more energy in generating glial cells to support the lesion. We did not analyze the fate of the DCX+ cells that would normally migrate to and differentiate in the OB (whether they die or whether there is a shift in the SVZ differentiation program), since we consider that question beyond the scope of this study.
(4) The authors showed decreased Nestin protein levels at 15 dpi by western blot and immunostaining shows a decrease already at 7div (Figure 2). These results mean that there is at least a transient depletion of NSCs due to the promotion of astrogliogenesis. However, the authors show that at 30dpi there is an increase of slow proliferating NSCs (Figure 3). Does this mean, that there is a reestablishment of the SVZ cytogenic process? How does it happen, more specifically, how NSCs number is promoted at 30dpi? Please explain how are the NSCs modulated throughout time after ischemia induction and its impact on the cytogenic process.
Based on the chronic BrdU treatment, the results suggest a restoration of the SVZ cytogenic process (also observed in Nestin and DCX protein expression at 30dpi). However, we did not analyze how this happens (through asymmetric or symmetric divisions). Following the Encinas group, we hypothesized that brain ischemia induces exhaustion of the SVZ neurogenic niche through symmetric divisions of NSCs into reactive astrocytes.
(5) The authors performed a classification of Thbs4-positive cells in the SVZ according to their morphology. This should be confirmed with markers expressed by each of the cell subtypes.
We thank the referee for the comment. Classifying NSCs based on different markers can also be tricky, because different NSC cell types share markers. This classification was therefore made considering the specific morphology of each NSC cell type. In addition, Thbs4 expression in B-type cells has also been observed in other studies (Llorens-Bobadilla et al., 2015; Cebrian-Silla et al., 2021; Basak et al., 2018).
(6) In Figure S6, the authors quantified HABP spots inside Thbs4-positive astrocytes. Please show a higher magnification picture to show how this quantification was done.
We quantified HABP area and HABP spots inside Thbs4+ astrocytes with a custom FIJI script.
The Thbs4 cell mask was generated by automatic thresholding within the GFAP cell mask. The HABP channel was then thresholded, and the binary image was processed with a 1-pixel median filter (to eliminate 1-px noise-related spots). The “Analyze particles” tool was used to count HABP spots within the cell ROI. The HABP spot number per compartment and population was exported to Excel, and the data were normalized by dividing the HABP spots per ROI by the total HABP spots. See Author response image 5.
Author response image 5.
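For readers who prefer code to prose, below is a rough Python/scikit-image translation of the FIJI logic described above; the thresholding method, filter size and file names are placeholder assumptions, since the original analysis was a custom FIJI script:

```python
# Approximate Python analogue of the FIJI HABP spot-counting logic.
# Otsu thresholding, the filter size and the file names are
# placeholders; the original analysis was a custom FIJI script.
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_habp_spots(habp_img, roi_mask):
    """Count HABP spots that fall inside a cell ROI mask."""
    # Automatic threshold of the HABP channel (Otsu as a stand-in).
    binary = habp_img > threshold_otsu(habp_img)
    # Median filter to remove 1-px noise-related spots, as in FIJI.
    binary = median_filter(binary, size=3)
    # Connected components inside the ROI ("Analyze particles").
    return len(regionprops(label(binary & roi_mask)))

# Usage sketch: fraction of all spots that fall inside Thbs4+ cells.
# habp = imread("habp_channel.tif")          # hypothetical file
# thbs4_mask = imread("thbs4_mask.tif") > 0  # hypothetical file
# frac = count_habp_spots(habp, thbs4_mask) / \
#        count_habp_spots(habp, np.ones(habp.shape, dtype=bool))
```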
-
eLife Assessment
This work shows that newborn Thbs4-positive astrocytes generated in the adult subventricular zone (SVZ) respond to middle cerebral artery occlusion (MCAO) by secreting hyaluronan at the lesion penumbra, and that hyaluronan is a chemoattractant to SVZ astrocytes. These findings are important, despite being mostly descriptive, as they point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. The methods, data and analyses are convincing and broadly support the claims made by the authors, with only some weaknesses.
-
Reviewer #1 (Public review):
Summary:
The authors show that SVZ-derived astrocytes respond to a middle cerebral artery occlusion (MCAO) hypoxia lesion by secreting and modulating hyaluronan at the edge of the lesion (penumbra) and that hyaluronan is a chemoattractant to SVZ astrocytes. They use lineage tracing of SVZ cells to determine their origin. They also find that SVZ-derived astrocytes express Thbs4 but astrocytes at the MCAO-induced scar do not. Also, they demonstrate that decreased HA in the SVZ is correlated with gliogenesis. While much of the paper is descriptive/correlative, they do overexpress Hyaluronan synthase 2 via viral vectors and show this is sufficient to recruit astrocytes to the injury. Interestingly, astrocytes preferred to migrate to the MCAO lesion rather than to the region of overexpressed HAS2.
Strengths:
The field has largely ignored the gliogenic response of the SVZ, especially with regard to astrocytic function. These cells, and especially newborn cells, may provide support for regeneration. Emigrated cells from the SVZ have been shown to be neuroprotective by creating pro-survival environments, but their expression and deposition of beneficial extracellular matrix molecules are poorly understood. Therefore, this study is timely and important. The paper is very well written and the flow of results is logical.
Comments on revised version:
The authors have addressed my points and the paper is much improved. Here are the salient remaining issues that I suggest be addressed.
The authors have still not shown, using loss-of-function studies, that hyaluronan is necessary for SVZ astrogenesis and/or migration to MCAO lesions.
(1) The co-expression of EGFr with Thbs4 and the literature examination is useful.
(2) Too bad they cannot explain the lack of effect of the MCAO on type C cells. The comparison with kainate-induced epilepsy in the hippocampus may or may not be relevant.
(3) Thanks for including the orthogonal confocal views in Fig S6D.
(4) The statement that "BrdU+/Thbs4+ cells mostly in the dorsal area" and therefore they mostly focused on that region is strange. Figure 8 clearly shows Thbs4 staining all along the striatal SVZ. Do they mean the dorsal segment of the striatal SVZ or the subcallosal SVZ? Fig. 4b and Fig 4f clearly show the "subcallosal" area as the one analysed but other figures show the dorsal striatal region (Fig. 2a). This is important because of the well-known embryological and neurogenic differences between the regions.
(5) It is good to know that the harsh MCAO's had already been excluded.
(6) Sorry for the lack of clarity - in addition to Thbs4, I was referring to mouse versus rat Hyaluronan degradation genes (Hyal1, Hyal2 and Hyal3) and hyaluronan synthase genes (HAS1 and HAS2) in order to address the overall species differences in hyaluronan biology thus justifying the "shift" from mouse to rat. You examine these in the (weirdly positioned) Fig. 8h,i. Please add a few sentences on mouse vs rat Thbs4 and Hyaluronan relevant genes.
(7) Thank you for the better justification of using the naked mole rat HA synthase.
-
Reviewer #3 (Public review):
Summary:
The authors aimed to study the activation of gliogenesis and the role of newborn astrocytes in a post-ischemic scenario. Combining immunofluorescence, BrdU-tracing and genetic cellular labelling, they tracked the migration of newborn astrocytes (expressing Thbs4) and found that Thbs4-positive astrocytes modulate the extracellular matrix at the lesion border by synthesis but also degradation of hyaluronan. Their results point to a relevant function of SVZ newborn astrocytes in the modulation of the glial scar after brain ischemia. This work's major strength is the fact that it is tackling the function of SVZ newborn astrocytes, whose role is undisclosed so far.
Strengths:
The article is innovative, of good quality, and clearly written, with properly described Materials and Methods, data analysis and presentation. In general, the methods are designed properly to answer the main question of the authors, being a major strength. Interpretation of the data is also in general well done, with results supporting the main conclusions of this article.
In this revised version, the points raised/weaknesses were clarified and discussed in the article.
-
-
www.theguardian.com www.theguardian.com
-
But at far lower cost, through a rational transport policy, it could remove millions of real cars from the roads, while improving our mobility, cutting air pollution and releasing land for green spaces and housing.
From link:
- Prioritise investment in public transport, walking and cycling instead of road building
- Reinstate the annual inflation-linked rise and end the 5p cut in fuel duty, and use the £4.2 billion a year proceeds to make rail fares more affordable
- Require all new developments to provide frequent public transport services and safe walking and cycling networks from the start
- Commit to a target for modal shift to public transport and active travel
- Facilitate further expansion of rail freight to reduce congestion on the road network
- Require local authorities to meet specific carbon reduction budgets through the next round of Local Transport Plans
Reinstate the annual inflation-linked rise and end the 5p cut in fuel duty, and use the £4.2 billion a year proceeds to make rail fares more affordable
Cars are going green anyway, and could go greener with hydrogen! The fuel duty is regressive and would hurt poorer families the most.
Facilitate further expansion of rail freight to reduce congestion on the road network
They literally say at the start of the article that HS2 has been a disaster; do they want to repeat that?
-
cheaper and more effective projects had already been committed
More effective? Insulating homes, sure, but there needs to be some carbon capture, and it needs investment where the UK can have the biggest impact.
-
The government’s plan for carbon capture and storage (CCS) – catching carbon dioxide from major industry and pumping it into rocks under the North Sea – is a fossil fuel-driven boondoggle that will accelerate climate breakdown.
So from what I can see, these are two blue hydrogen projects as opposed to green hydrogen? Green hydrogen just uses water, so it is obviously better, but it will take longer and is seen as viable by 2040, while blue hydrogen is compatible with current infrastructure, so it works in the short term and can help speed up green hydrogen too. https://www.abdn.ac.uk/news/opinion/is-there-a-role-for-blue-hydrogen-in-a-green-energy-transition/.
Also, the Guardian linking to itself as a source of fact? Great.
-
-
vanbrrlekom.github.io vanbrrlekom.github.io
-
Figure 8 illustrates the median and interquartile range proportion of faces categorized as women (in this data set, with categorizations beyond the binary removed, any face not categorized as a woman was categorized as a man)
Put this sentence first, before you describe the actual content.
-
The difference is so stark, we do not feel that inferential statistics add any more information, but the curious reader may find these in the supplemental material
Sounds odd. Can you at least report one analysis, and say the rest is in the supplementary materials?
-
e Figure 6 ). Participants who only categorized faces as women or men are not represented in figure Figure 6.
Get rid of the Figure 6 repetitions. If you start "As shown in Figure 6, ...", I would read this to mean that everything is from the figure until you tell me differently.
-
illustrates how many categorizations (y-axis) beyond the binary participants made. Each bar represents how many participants (y-axis) made a certain number of categorizations (x-axis). The different colors denote the different categorizations
Swap the previous and these sentences.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This study describes a useful technique to improve imaging depth using confocal microscopy for imaging large, cleared samples. It is as yet unclear if their proposed technique presents a significant advance to the field since their comparisons to existing techniques remain incomplete. However, the work will be of broad interest to many researchers in different fields.
-
Reviewer #1 (Public review):
Summary:
Liu et al. present an immersion objective adapter design called RIM-Deep, which can be utilized to enhance axial resolution and reduce spherical aberrations during inverted confocal microscopy of thick cleared tissue.
Strengths:
RI mismatches present a significant challenge to deep tissue imaging, and developing a robust immersion method is valuable in preventing losses in resolution. Liu et al. present data showing that RIM-Deep is suitable for tissue cleared with two different clearing techniques, demonstrating the adaptability and versatility of the approach.
Weaknesses:
Liu et al. claim to have developed a useful technique for deep tissue imaging, but in its current form, the paper does not provide sufficient evidence that their technique performs better than existing ones.
-
Reviewer #2 (Public review):
Summary:
Liu et al. investigated the performance of a novel imaging technique called RIM-Deep to enhance the imaging depth for cleared samples. Usually, the imaging depth using the classical confocal microscopy sample chamber is limited due to optical aberrations, resulting in loss of resolution and image quality. To overcome this limitation and increase depth, they generated a special imaging chamber that is affixed to the objective and filled with a solution matching the refractive indices to reduce aberrations. Importantly, the study was conducted using a standard confocal microscope that was not modified apart from exchanging the standard sample chamber with the RIM-Deep sample holder. Upon analysing the imaging depth, the authors claim that the RIM-Deep method increased the depth from 2 mm to 5 mm. In summary, RIM-Deep has the potential to significantly enhance the imaging quality of thick samples on a low budget, making in-depth measurements possible for a wide range of researchers with access to an inverted confocal microscope.
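To illustrate why refractive-index matching matters for imaging depth, here is a minimal back-of-the-envelope sketch assuming the standard paraxial focal-shift approximation; the refractive-index values are illustrative (roughly 1.56 for the dibenzyl ether used in iDISCO), not taken from the study:

```python
# Paraxial estimate of the axial focal shift caused by a
# refractive-index mismatch. Illustrative only: real aberrations
# also grow with numerical aperture and imaging depth.
def focus_in_sample(stage_move_um, n_sample, n_immersion):
    """Approximate focal displacement inside the sample for a given
    nominal stage movement (paraxial approximation)."""
    return stage_move_um * (n_sample / n_immersion)

# Cleared tissue (RI ~1.56) imaged through air (n = 1.0) versus an
# RI-matched immersion solution (n = 1.56), as in RIM-Deep.
for n_imm in (1.00, 1.56):
    z = focus_in_sample(1000.0, 1.56, n_imm)
    print(f"n_immersion = {n_imm:.2f}: 1000 um stage move "
          f"-> ~{z:.0f} um focal shift in sample")
```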
Strengths:
The authors used different clearing methods to demonstrate the suitability of RIM-Deep for various sample preparation protocols with clearing solutions of different refractive indices. They clearly demonstrate that the RIM-Deep chamber is compatible with all 3 methods. Brain samples are characterized by complex networks of cells and are often hard to visualize. Despite the dense, complex structure of brain tissue, the RIM-Deep method generated high-quality images of all 3 samples given. As the authors already stated, increasing imaging depth often goes hand in hand with purchasing expensive new equipment, exchanging several microscope components, or buying a new microscopy set-up. Innovations such as the RIM-Deep chamber might hence pave the way for cost-effective imaging and expand the applicability of an inverted confocal microscope.
Weaknesses:
(1) However, since this study introduces a novel imaging technique and therefore aims to revolutionize the way large samples are imaged, additional control experiments would strengthen the data. Of the 3 clearing protocols used (CUBIC, MACS and iDISCO), only the brain section from Macaca fascicularis cleared with iDISCO was imaged with both the standard chamber and the RIM-Deep method. This comparison indeed shows that the imaging depth thereby increases more than 2-fold, which is a significant enhancement in terms of microscopy. However, it would have been important to evaluate and show the difference in imaging depth for the other two samples as well, since they were cleared with different protocols and thus treated with clearing solutions of refractive indices different from iDISCO.
(2) The description of the figures and figure panels should be improved for a better understanding of the experiments performed and the thus resulting images/data.
(3) While the authors used a Nikon AX inverted laser scanning confocal microscope, the study would highly benefit from evaluating the performance of the RIM-Deep method using other inverted confocal microscopes or even wide-field microscopes.
-