The data is clear: Internships are the single most powerful predictor of early-career success
This also aligns with the Talent Disrupted report.
trap
It just goes to show that from the very beginning of time, people were scheming and throwing others under the bus to further their own plans. The irony, though, is that the trap is happening to someone who schemes as well. Karma at its finest.
One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”.
This example highlights how ethical blind spots can arise when people focus on personal benefit and social validation. It shows how easily children and marginalized communities can be overlooked when posting content.
Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with a claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter, and Twitter is accusing Musk’s complaint of being an invented excuse to back out of the deal, and the case is now in court.
This example shows how numerical data can appear trustworthy, even when the methods behind it are unclear or undisclosed. It also shows how statistics can be used as tools in power struggles, where the same data can be interpreted differently depending on who benefits from the claim.
Data points often give the appearance of being concrete and reliable, especially if they are numerical.
I find this especially interesting because it's true: when we see a number or percentage, it seems like it must be the correct one. But it is so important to think about the data collection behind it. If we use the bot example, when processing all the users in the system, did the data processors remove all incomplete data (if something was missing from the profile) or did they leave it in? I believe that even though data seems unbiased, there are always choices in how it's processed that affect how the outcome looks.
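To make that concrete, here is a minimal sketch (with made-up toy data, not Twitter's actual figures or method) of how one processing choice, dropping incomplete profiles versus keeping them, shifts the reported bot percentage:

    profiles = [
        {"id": 1, "bio": "hi there", "is_bot": False},
        {"id": 2, "bio": None, "is_bot": True},    # incomplete profile
        {"id": 3, "bio": "daily news", "is_bot": True},
        {"id": 4, "bio": None, "is_bot": False},   # incomplete profile
        {"id": 5, "bio": "art account", "is_bot": False},
    ]

    def bot_share(rows):
        # Fraction of accounts flagged as bots in whatever set we decided to keep.
        return sum(r["is_bot"] for r in rows) / len(rows)

    kept_all = profiles
    dropped_incomplete = [r for r in profiles if r["bio"] is not None]

    print(f"all profiles kept:       {bot_share(kept_all):.0%}")            # 40%
    print(f"incomplete rows dropped: {bot_share(dropped_incomplete):.0%}")  # 33%

Same raw accounts, two different "bot percentages", depending on a processing choice the audience never sees.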
As you can see in the apple example, any time we turn something into data, we are making a simplification. If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc.
I think this is important both when considering data and when considering social media as a whole. People tend to only post the positives in their life, and even in those positive posts a lot of information is left out of the context. If someone only posts a single photo from a concert with a lyric as a caption, it does not explain what the set was or what the experience as a whole was like for that person.
$E^{0}_{\mathrm{art}} = E^{0}_{\mathrm{SP}} + E^{0}_{\mathrm{LSSE}} + E^{0}_{\mathrm{N}}$
These are not all described in the supplemental. Only the spin pumping is.
Immoderately she weeps for Tybalt's death, And therefore have I little talk'd of love; For Venus smiles not in a house of tears. Now, sir, her father counts it dangerous That she doth give her sorrow so much sway, And in his wisdom hastes our marriage, To stop the inundation of her tears; Which, too much minded by herself alone, May be put from her by society: Now do you know the reason of this haste.
Paris speaks about the marriage, unaware of Juliet's situation. Shakespeare contrasts his conventional love with Juliet's heated love for Romeo.
self-guided workshop
There's a slight issue of audience here vs. the actual workshop materials: the "you" in the materials is the instructor, while here it's any viewer of the webpage. I'd suggest rephrasing here, something like "you may use this page as a self-guided workshop, or your instructor may assign specific sections for class"... or change entirely to, "this workshop helps learners reflect... for managing their personal materials..."
How much do you play a role in your own developmental path? Are you at the whim of your genetic inheritance or the environment that surrounds you? Some theorists see humans as playing a much more active role in their own development. Piaget, for instance, believed that children actively explore their world and construct new ways of thinking to explain the things they experience. In contrast, many behaviorists view humans as being more passive in the developmental process.
As children grow, are they not in a stage of metamorphosis, changing the way they think and interact in daily life, growing and shedding the adolescent self?
Last, do not post unrelated ideas; for example, if you are asked about the main idea of a text you read, make sure to read the text, and respond by giving what you think is the main idea, not by posting that you liked the text because of a personal experience you had. It isn’t wrong to include personal content, but be sure to answer the instructor’s questions first to earn full credit.
Pull the main ideas, not personal statements.
This textbook will cover ways to communicate effectively as you develop insight into your own style, writing process, grammatical choices, and rhetorical situations. With these skills, you should be able to improve your writing talent regardless of the discipline you enter after completing this course. Knowing your rhetorical situation, or the circumstances under which you communicate, and knowing which tone, style, and genre will most effectively persuade your audience, will help you regardless of whether you are enrolling in history, biology, theater, or music next semester–because when you get to college, you write in every discipline.
This textbook helps students improve communication by understanding their writing style, process, grammar, and rhetorical situations. These skills are useful in any major since writing is required in almost all college courses. Learning how to adapt tone, style, and genre to different audiences will make writing more effective overall.
So, for example, if we made a form that someone needed to enter their address, we could assume everyone is in the United States and not have any country selection.
This one is personal for me: as an exchange student applying for exchange, I encountered this problem many times, since the form could not register my Danish address, because it is written in a different way. This highlights the fact that all data and all artefacts have politics. Even if you try to accommodate everyone, you are always forced to make choices that sometimes exclude people entirely. This could be blind or deaf people, but also gender, as mentioned in the next paragraph.
As you can see, TurboTax has a limit on how long last names are allowed to be, and people with names that are too long have different strategies for dealing with not fitting in the system.
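As a hypothetical illustration of the kind of constraint being described (the 16-character limit, the function name, and the example surname below are invented for this sketch, not TurboTax's actual implementation):

    MAX_LAST_NAME = 16  # hypothetical column width for the last-name field

    def store_last_name(name: str) -> str:
        # Anything longer than the column allows is silently truncated.
        return name[:MAX_LAST_NAME]

    print(store_last_name("Lee"))                       # fits: 'Lee'
    print(store_last_name("Wolfeschlegelsteinhausen"))  # stored as 'Wolfeschlegelste'

The constraint is invisible to people whose names fit, and a recurring obstacle for everyone else.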
It is extremely important to ensure that the user experience is consistent for different users. Otherwise, they will feel discriminated against, which is damaging to software design.
Gender: Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:
Gender is a hard one to create a dropdown for, since gender is a social construct and can mean something different to every person. In the first example they use terms like female and male, but those are generally seen as terms that identify sex, not gender. There are now government forms that don't ask about gender at all and only ask about your biological sex (the FAFSA form does this now! My friend put male as his sex, but since it didn't match his birth certificate (F), his application was flagged and had to undergo review). I've seen gender dropdowns that separate transgender from man/woman; why is there a specification? Are they not "actually" the gender they say they are in the eyes of the programmers/data collectors? Food for thought.
(a) personal experience; (b) common sense; (c) the media (including the Internet); (d) “expert authorities,” such as teachers, parents, and government officials; and (e) tradition.
"How do we know what we think we know" When reading this question, I didn't know quite how to respond. But reading about all the five societal interpretation, I understand more. There are so many different ways to interpret things. We all assimilate different thoughts and perceptions. For me, I would say that personal experiences would be the reason why I know things. But reading on, ones experience might not be the same for another persons experience. In conclusion, every person in the world works differently. One might not agree on how another person on how they interpret things. All in all, we need to be respectful of all differences and different opinions.
In recognizing the importance of social structure, sociology stresses that individual problems are often rooted in problems stemming from the horizontal and vertical social structures of society
When people are very poor and just trying to survive, some end up doing crime, but rich people with good schools and connections don’t face that same survival pressure.
Respondents aged 65 or older were actually slightly more likely than those younger than 65 to say they were very happy! About 33% of older respondents reported feeling this way, compared with only 28% of younger respondents
In my opinion, many older people are actually happy because they already experienced life. They experienced joy, they saw what they wanted to see, and they watched the world and their places change over time. Some of them might feel sad because they think they did not achieve enough, but they still experienced life and they saw what they had to see.
All of these problems indicate that older people should be less happy than younger people. If a sociologist did some research and then reported that older people are indeed less happy than younger people, what have we learned?
I only believe something when there is a hypothesis tested many times and the prediction comes true. Many old people get sick and struggle because they did not take care of themselves when they were young and used different things that made their bodies weak. My point is we can’t just keep predicting things. If we keep predicting, it’s just assuming about things we’re not really sure of yet.
If you relied on your personal experience to understand the typical American marriage, you would conclude that most marriages were as good as your parents’ marriage, which, unfortunately, also is not true. Many other examples could be cited here, but the basic point should be clear: although personal experience is better than nothing, it often offers only a very limited understanding of social reality other than our own.
I’m not American, I’m Filipino. I used to think a lot of Filipino families are good families, because no matter how hard life is and how poor it is, they stick together. Even if there is a lot of fighting, yelling, shouting, they still stay together. I felt so proud that my mom and dad are still married. But these days I found out the true story, that my dad is not perfect, and the story I knew in the past is getting destroyed by learning what the real story or real picture is.
Sociology can help us understand the social forces that affect our behavior, beliefs, and life chances, but it can only go so far. That limitation conceded, sociological understanding can still go fairly far toward such an understanding, and it can help us comprehend who we are and what we are by helping us first understand the profound yet often subtle influence of our social backgrounds on so many things about us.
When I first got here in America, there were certain things I believed because of my social background. My husband had a therapist, so I joined his therapy. At first, I just listened to other people’s problems and issues, until the therapist asked me about my own feelings about my family. I had a lot of beliefs like ‘my family is great,’ but when he asked me, I felt doomed because I didn’t know what to say and I didn’t even know the definition of feelings. As we kept working, I started to see the reality. I saw the true color, the true image of my past. I realized I have a lot of trauma, but at least now I can face it. I can see it is there, instead of lying to myself that it is not there. I know it’s not perfect and I still have a lot of issues and a lot of scramble in my life, but at least I can work on it now.
Many people will not fit the pattern of such a generalization, because people are shaped but not totally determined by their social environment.
Yes, of course people have their own thoughts and their own mind. They get influenced by what they see, what they hear, and the people around them, but they still have their own mindset. They still have their own feelings and judgment about what’s going on around them, so they don’t always follow the pattern.
Given all that can be at stake in making decisions on how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained.
The group I want to choose is LGBTQ. Under a moral framework concerned with group discrimination, it matters a great deal whether the identity information of these groups is stored and whether others' access to it is constrained, because social media is a place that easily encourages bad actors to do harm. Therefore, LGBTQ people should have the right to decide whether to share their identity or not.
Yet others developed the skills to carve intricate cylinder seals used by members of the elite to identify their goods.
Other men and women went off to distant lands and set up smaller versions of Uruk, certainly keeping in touch with home through messengers, and presumably sending goods that could be useful to their mother cities.
Can one discern specific colonialist policies that the Uruk had as their culture spread in the Ancient Near East?
Whereas earlier peoples had manufactured their pottery by hand, adding eye-catching designs and glazes, the Uruk craftsmen mostly produced pottery on the newly invented wheel, rarely adding any adornments. Quantity, now possible with a type of mass production, seems to have taken priority over quality in ceramic manufacture.
The sign for "food" (also for "bread") in proto-cuneiform is the shape of a beveled-rim bowl.
The scribes do not seem to have thought of this script that they had invented as a representation of language.
The people of Uruk started out with at least thirteen different numerical systems; they counted differently depending on what they were counting, and the signs indicated different numbers for different commodities. And about 30 percent of the signs they first created to represent nouns had no later equivalents, so scholars do not know how to read them.
This proto-cuneiform tablet from Uruk includes signs for sheep and the goddess Inanna, but its meaning is unclear.
Potentially an early historic form of a receipt?
The investment of time and manpower devoted to the construction of this complex would have resembled the work on a medieval cathedral. As early as 3600 BCE work had begun on the so-called Limestone Temple in the Eanna precinct. Quarrymen and masons removed limestone from a rocky outcrop around fifty kilometers (31 mi) to the southwest. Other men transported the stone to Uruk. Still others formed hundreds of thousands of mud bricks and clay cones, and set them out to harden in the sun. Others brought timber from far to the north for the roofs. Someone supervised all the workmen who set the bricks and stones and mosaic cones in place. The men would have been fed and provided for during the construction. The builders were all probably residents of Uruk, united in their desire to create a magnificent home for their beloved divine queen.
Is it possible that even with proto-cuneiform (writing) evolving here, such temples served as local memory palaces for the culture of inhabitants who would have been primarily orality-based?
facts that no one could conceivably commit to memory.
This statement belies the power of orality and the size of built communities without literacy. It's more a question of understanding how it was done and how communities either trusted (or didn't) those who memorized the materials.
Another factor is how long one needed to remember various facts, especially for commerce, and over what distances.
Were there stratifications of society based on the power of memory here? Compare the anthropology and archaeology with the studies by Lynne Kelly.
Dates after ca. 1400 BCE are fairly reliable and uncontroversial (the more recent, the less controversial). For dates before 1500 BCE, however, a debate revolves around the Middle Chronology. Some scholars propose lower dates (from eight years to as much as a century later). But until a consensus is reached, it seems best to use the dates that are familiar, if probably wrong.
The widely used Middle Chronology—which gives the dates of Hammurabi's reign as 1792 to 1750 BCE—is followed in this book.
Later letters, inscriptions, and prayers, in which one might expect to see frequent references to flooding if it had been a major concern, mostly describe the rivers as a blessing.
The ancient Near East is defined here as comprising the "cuneiform lands,"
to the ways that documents were organized in archives,
Find archaeological papers that describe how Mesopotamians in the ANE organized their documents.
Mention via @Podany2013
Law, for example, once invented in Mesopotamia around 2100 BCE, was never forgotten, even though the actual laws of the Mesopotamians bear little resemblance to those in use today.
The popular image of history as a story of progress from primitive barbarism to modern sophistication is completely belied by the study of the ancient Near East.
Statement in support of Graeber and Wengrow's thesis in The Dawn of Everything, though predating it.
Podany, Amanda H. 2013. The Ancient Near East: A Very Short Introduction. 1st ed. New York: Oxford University Press. https://www.amazon.com/Ancient-Near-East-Introduction-Introductions/dp/0195377990/ (January 1, 2026).
this is still the original landing site. Eventually it will also migrate to magines.art
On October 17, 1931, gangster Al Capone is convicted of tax evasion, signaling the downfall of one of the most notorious criminals of the 1920s and 1930s. He was later sentenced to 11 years in federal prison and fined $50,000.
This is the same day as my birthday, but 77 years earlier. I find criminals interesting, and how they manage to go so long without getting caught.
drooped
Here is the meaning of “drooped” in clear English + 中文解释:
English: When something (like a flower, branch, or body part) hangs down loosely because it has no strength. 中文: 下垂、垂落;因为疲累、虚弱或重量而往下垂。
English: When a person’s mood or energy becomes low or weak. 中文:(情绪)低落,无精打采。
English: When your eyelids lower because you’re tired or sleepy. 中文:(眼皮)耷拉下来、半闭。
misty
Here is the meaning of “misty” in clear English + 中文解释:
English: When there is a light fog or tiny water droplets in the air. 中文: 有薄雾的;起雾的;朦胧的。
English: When someone’s eyes look blurry because they are about to cry or emotional. 中文:(眼睛)含泪的,湿润的,模糊的。
English: Something not sharp or clear, visually or emotionally. 中文: 模糊的、不清晰的(也可形容记忆、感觉)。
sweetums
“sweetums” is an informal, cute, affectionate English word. Here is the meaning in simple English + 中文解释:
English: A playful, sweet nickname used for a boyfriend/girlfriend, partner, spouse, child, or someone you care about. Similar to honey, sweetie, baby. 中文: 可爱、亲昵的昵称,用来称呼亲密的人,如伴侣、小孩等;相当于“甜心”“宝贝”“亲爱的”。
English: Often used in a joking or exaggerated way to sound extra sweet or dramatic. 中文: 有时带一点夸张、搞笑的甜蜜称呼。
belch
Here is the meaning of “belch” in clear English + 中文解释:
English: When gas comes out from the stomach through the mouth with a loud sound. 中文: 打嗝,尤其是响亮的打嗝。
English: When something releases smoke, fire, gas, or steam forcefully. 中文:(机器、火山等)喷出烟雾、火焰或气体。
English: Used metaphorically to mean releasing something powerfully. 中文: 比喻强力地喷出、发出某物。
snarled
Here is the meaning of “snarled” in clear English + 中文解释:
English: When an animal (like a dog or wolf) makes a low, angry sound while showing its teeth. 中文:(动物)龇牙咧嘴地低吼、咆哮。
English: When a person says something in an angry, sharp, unfriendly way. 中文:(人)用愤怒、尖锐、粗暴的口气说话。
English: Used to describe something twisted, tangled, or blocked (e.g., hair, wires, traffic). 中文: 形容东西缠在一起、打结、堵塞。
hooting
Here are the meanings of "hooting" in simple English + 中文解释:
English: A loud “hoo-hoo” sound made by an owl. 中文: 猫头鹰发出的“咕咕”叫声。
English: When people shout, cheer, or laugh loudly. 中文: (非正式)人大声喊叫、起哄、嘲笑。
English: Sometimes used to describe a car horn sounding. 中文: 有时也用来形容汽车按喇叭声。
As a girl, Ms. Book would save up her allowance then head to Indigo to pick out her next read, usually whatever had the coolest cover and best synopsis on the $6 book shelf.
anecdotal lead
Is he or she academically gifted or smart in some way that would not be readily recognized as a form of intelligence?
To me, one of the biggest things that modern tests don't measure is how well someone can communicate, especially in a work environment. Most of the "smart" people I know are horrible conversationalists and tend to lack empathy compared to others.
encourages you to collect your own thoughts and opinions about the topic or related subjects before you commence reading
I think this is incredibly important because thinking about what you know, or more importantly what you don't know, can help you figure out what to look for when actively reading.
Read the selection. 3. Reread the selection. 4. Annotate the text with marginal notes.
In my previous experiences with annotating, I typically annotate during my second read, not afterwards. This new approach will probably provide me with a chance to gain more from the passage because I will be reading it an additional time.
If you feel skeptical, indicate that response: "Why?" or "Explain."
I think that finding the "Why?" behind an author's message is more beneficial than the words themselves, especially when reading literature through an academic lens.
Who’s the smartest person you know?
Oftentimes, I find that the smartest people I know do not make the best leaders and communicators. There is not a direct correlation between "book smarts" and being able to accurately and meaningfully convey a message.
What’s the essay about? What do you know about the writer’s background and reputation? Where was the essay first published? Who was the intended audience for the essay? How much do you already know about the subject of the reading selection?
All of these questions are crucial because without them we might as well just computer-generate everything we read. The writer is a person with thoughts, feelings, and opinions, and knowing that is crucial to understanding any writing.
Active reading, then, is a skill you need if you are truly to engage and understand the content of a piece of writing as well as the craft that shapes the writer's ideas into a presentable form. Active reading will repay your efforts by helping you read more effectively and grow as a writer
It is super important when reading something to understand who the author is and why they are writing what they are writing. Active reading helps the reader absorb more of the piece by interacting with the writer themselves.
To move from reading to writing, you need to read actively, in a thoughtful spirit, and with an alert, inquiring mind. Reading actively means learning how to analyze what you read.
Active reading is a skill that, if learned properly, really elevates what you get out of whatever you are reading. Understanding the piece and remembering parts of it come so much more easily if you read actively.
He prefers talking about the characteristics of race, the physical conformation of the country, or the genius of civilization, which abridges his own labors, and satisfies his reader far better at less cost.
easier on the historian if he can fall back on certain concepts to explain world events
Historians who live in democratic ages exhibit precisely opposite characteristics. Most of them attribute hardly any influence to the individual over the destiny of the race,
democracies keep power from landing in the hands of a single individual --> democracy gives citizens power
In this course, using AI in ANY capacity is not permitted.
It has been super interesting to see the ways teachers have responded to the exponential increase of AI usage in education.
Intense maternal stress, such as exposure to Hurricane Katrina, is associated with low birth weight (<2.5 kg) (51). However, these effects are independent of maternal mental health, further underscoring the distinct influences of specific forms of maternal adversity. An extensive review (52) reveals an influence of severe stress (e.g., death of a spouse) on offspring birth weight, as well as of factors such as social support that moderate the impact of stress, but no consistent evidence for the influence of maternal anxiety or depression (also see reference 53). The exceptions are studies showing an association between "pregnancy-associated anxiety" and birth outcomes, which again underscores the specificity of different forms of maternal "adversity."
The lack of specificity of 'maternal adversity' makes its effects difficult to measure. Hurricane Katrina exposure is associated with low birth weight, which shows how stress unrelated to mental health can also contribute to 'maternal adversity'. The term covers a broad range of stressors.
Severe stress such as the death of a spouse and other factors like social support (moderates stress) influences birth weight. However, maternal anxiety and depression do not consistently show evidence of influence on birth weight. The exceptions are pregnancy-associated anxiety and birth outcomes; this further supports the importance of specificity of different forms of maternal "adversity".
Studies employing momentary assessments of maternal mood states show covariation between negative mood in pregnancy and salivary cortisol (e.g., 43); however, a number of studies find little or no association between maternal cortisol levels and measures of maternal stress, anxiety, or depression (25, 44–47). Indeed, pregnancy in humans and other mammals is associated with dampened HPA stress reactivity. Detailed studies reveal no association of maternal salivary (19), plasma, or amniotic cortisol levels (46, 47) with either maternal stress or anxiety. Sarkar et al. (47) reported a weak correlation between maternal anxiety and plasma cortisol, and only in early pregnancy. In contrast, O'Connor et al. (48) reported an association between maternal depression and diurnal cortisol levels in a low-socioeconomic-status sample, while one study (49) has reported increased cortisol levels in pregnant women with comorbid anxiety and depression. These findings suggest that the relation between maternal mental health and glucocorticoid exposure may lie within specific subgroups, including women with more severe mental health conditions
Conflicting results. Some studies show covariation between negative mood in pregnancy and salivary cortisol. Others show little to no association. Perhaps, the relationship between maternal mental health and glucocorticoid exposure may lie within specific subgroups (e.g. women with more severe mental health conditions).
The Term "Maternal Adversity"
"Maternal adversity" is not well defined. The perception of stress, depressive symptoms, and levels of state, trait, or pregnancy related anxiety are generalized as 'maternal adversity'. This fails to account for the nuance distinctions that may exist between them.
Studies of 11β-HSD-2 null mice provide evidence for a causal association between 11β-HSD-2, reduced birth weight, and anxiety-like behavior in adulthood
Relationship shown with mice who had null mutations to the 11 beta hydroxysteroid dehydrogenase type 2 enzyme. This null mutation (loss of function) leads to reduced birth weight and anxious adult behaviour.
A deficiency of 11β-HSD-2 leads to overexposure of the fetus to cortisol and lower birth weight
This enzyme degrades fetal levels of glucocorticoids. If there is a deficiency in this enzyme, then these glucocorticoids aren't degraded properly. Thus, the fetal levels of stress hormone increases.
late pregnancy, when fetal growth is accelerated.
When fetal growth is accelerated during the last stage of pregnancy, the impact of high amounts of cortisol/stress is most prevalent.
Antenatal treatment with the highly catabolic glucocorticoids reduces birth weight
What are glucocorticoids? Cortisol and related stress hormones.
In other words, high amounts of stress hormones during pregnancy lead to reduced birth weight of the child.
On April 8, 1974, Hank Aaron of the Atlanta Braves hits his 715th career home run, breaking Babe Ruth’s legendary record of 714 homers.
This is important to me because this date is my birthday.
What bots do you find surprising? What bots do you like?
Many bots on today’s video platforms are designed to recognize background music in videos and help users download audio or video clips. Even though these bots sometimes go against the rules of the platforms, they can be very useful for people who want to find songs or save content for personal use. I find these bots surprising because they can quickly identify music with high accuracy. I also like them because they make it much easier to explore and reuse media that would otherwise be hard to access.
What bots do you like?
For me, I like bots that help me sift through information and tell what is true or false. An example of this would be Grok on X helping you look through the internet to see whether the information being posted is true or not.
In neoclassical and neoliberal economics, state spending on services is framed as being paid for by the "productive" – i.e. profitable – sector of the economy. But, as we learned in the pandemic, the most useful and essential parts of our society are often the least profitable and their workers the least remunerated. Many profitable sectors are not useful, and often quite damaging. Large parts of the hyper-profitable finance sector are parasitic on the public service sector. As I argued in Chapter 2, this "productive" economy relies on a whole set of disavowed systems – education, domestic labour, environment – without which its profits would be impossible.
I feel this argument, against bullshit jobs, is much stronger than "capitalism has failed"... for one, because it doesn't necessitate a defeatist starting point and can be framed as an almost satirical position (CEOs don't do crap, which is laughable), and for another, because just-world-hypothesis conservatism bias tells people this can't be the case. Messages online tell people "capitalism + democracy" is the way to go, or else we get dictatorships by the ultra rich (?). I find it amusing that more of the same sells so nicely, but that's what people see: survivorship bias, big companies employing lots of people, capitalism lifting up pop stars through financial mobility, and the system delivering all the goods we can possibly think of. It's become spectacular consumerism, and culture is everywhere. Culture works, it fucking does, but not your culture, rather Gmail's, and Meta's, and OnlyFans', Roblox's, etc.
LUYỆN TẬP 2 (Practice 2)
This is pretty frustrating instructional design for a learner who does not yet master tones and diphthongs. Being told "you are wrong" does not help learning very much. Was the tone right? The diphthong? Neither? If your system can't provide such guidance, at least reveal the correct solution.
IQ scores
The psychologist in me wants to say that this is the "operationalisation". It's probably a bit picky, but of course neurodevelopment is much more than just IQ, and IQ is merely our "window" into neurodevelopment here.
Beyond updating the evidence base
This sentence would also tie in wonderfully with an "update the evidence base because of new exposure contexts" paragraph!
These important limitations warrant reconsideration of their continued use as evidence base
I actually find the sentence itself good – it is concise in a way – but I think my dad would ask me whether I could do it without "reconsideration", using "reconsider" instead. Just an attempt: "These important limitations motivate [expanding / refining] the existing evidence base."
continued importance of accurately characterizing the exposure-response relationship
I would expect this as the topic sentence of a paragraph. In any case, I read it as the central message of this paragraph. A section could perhaps also begin like this: "It is therefore essential to continue to accurately characterize the exposure-response relationship..."
Or, if you want to combine it and also directly bring in the "not sufficiently good" current state of the evidence, the topic sentence could read:
"It is essential to continue to accurately characterize the [Pb-IQ ERF] not only because of [these detrimental effects] but also because of substantive criticism concerning the existing evidence base.
Among
In terms of content, from here on it clearly moves in the direction of "we should care about the effect of lead on intellectual development in children".
AF overpredicted the dimer conformation substantially
It seems pertinent to establish why the dimer conformation is predicted in XCL1. It would be valuable to run a structural alignment of both XCL1 conformations against the AF2/3 training dataset.
This would reveal several things. First, which XCL1 conformations are in the training dataset, if any? Either being present would be considered data leakage. And second, how many hits correspond to each of the conformations?
My hypothesis is that either (a) the XCL1 dimer is present in the training dataset and the chemokine isn't, or (b) neither/both are present, but the dimer yields significantly more hits, creating a dimer preference for XCL1 and all of its derived "ancestors".
Depending on the dataset size (I forget how much clustering the AF folks did), the alignment could be feasibly conducted using TMAlign. Otherwise, foldseek or other scalable aligners would work.
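A minimal sketch of that check, assuming a local Foldseek install and a directory of training-era PDB structures standing in for the AF training set; the file names, the directory, and the idea of simply counting hit lines are placeholders rather than a worked-out pipeline:

    import subprocess

    # Search each XCL1 conformation against a set of candidate training structures
    # and count how many hits each returns (foldseek writes tabular .m8 output).
    queries = ["xcl1_chemokine_fold.pdb", "xcl1_dimer_fold.pdb"]  # hypothetical files
    target_db = "training_era_pdbs/"                              # hypothetical directory

    for query in queries:
        hits_file = query + ".hits.m8"
        subprocess.run(
            ["foldseek", "easy-search", query, target_db, hits_file, "tmp_foldseek"],
            check=True,
        )
        with open(hits_file) as fh:
            n_hits = sum(1 for _ in fh)
        print(query, "->", n_hits, "structural hits")

The hit counts could then be compared per conformation to weigh hypothesis (a) against (b) above.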
Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:
I had no clue that click farms were a thing, or that "human computers" such as the one shown in the image worked in that capacity. That was quite shocking.
This definition of a bot caught my attention because it emphasizes the actions that the program undertakes, rather than whether these actions are carried out for good or bad reasons. Another point that caught my attention was that this chapter distinguishes bots from other automated systems, such as recommendation systems and data mining software.
In fact, researchers must decide how to exercise their power based on inconsistent and overlapping rules, laws, and norms. This combination of powerful capabilities and vague guidelines can force even well-meaning researchers to grapple with difficult decisions.
Should it be researchers who solely must decide this? I feel as though that not only puts a lot of responsibility onto individuals alone but also leaves potentially detrimental ambiguity across social research. I know many research studies must receive approval via the IRB to ensure responsible conduct, so this phrasing seems a little flawed
For example, Joshua Blumenstock and colleagues were part Duchamp and part Michelangelo; they repurposed the call records (a readymade) and they created their own survey data (a custommade).
The discussion of researcher power stood out to me, especially the idea that researchers can now affect people’s lives without their awareness or consent. This feels like a major shift from traditional social research and makes ethics feel less like an afterthought and more like a core design constraint in digital-age studies.
The first can be illustrated by an analogy that compares two greats: Marcel Duchamp and Michelangelo.
The Duchamp vs. Michelangelo analogy makes the readymade vs. custommade distinction really concrete. I like how this reframes data sources as design choices rather than shortcuts or “less rigorous” options. It also raises a question for me about trade-offs: when repurposing readymade data, how do researchers ensure alignment between the original purpose of the data and their research question?
the direct Monte Carlo method by Bird [6]
References for first Monte Carlo method?
More generally, social researchers will need to combine ideas from social science and data science in order to take advantage of the opportunities of the digital age; neither approach alone will be enough.
The author emphasizes that both social and data science must be considered in tandem with one another to inform comprehensive research and to optimize the opportunities offered by this digital era. It seems that a limitation then could be a lack of resources like time or monetary means to ensure quality considerations in both aspects.
The “Internet of Things” means that behavior in the physical world will be increasingly captured by digital sensors. In other words, when you think about social research in the digital age you should not just think online, you should think everywhere.
In my Intro to Information class, I recall my professor discussing the substantial impact that many researchers expect AI to have — an impact comparable only to that of the introduction of the Internet. It makes me think about how revolutionary AI feels while we are in the midst of its development, and how I often overlook this scale when thinking about the internet simply because it's all I've known throughout my upbringing. The digitization and automation of many once-manual processes are so commonplace now, I can only imagine how much it'll change in the upcoming years.
Give the author of the message,
I did not know you could cite emails
URL Format: When including a URL, copy it in full from your browser, but omit any query strings when possible (for example: http://www.mla.org/search/?query=pmla). http:// or https:// use: You may also omit the protocol (http:// or https://) from URLs unless you are hyperlinking your source in software that requires the full protocol. However, always include https:// when adding DOIs to reference entries. Access Date: This is optional and should only be included for sources without a publication date or those that are likely to change (e.g., a wiki).
When citing websites, should we include the citation as an embedded link or just write it without embedding the link?
Certain older MLA rules are now considered outdated. See the table below for the current guidelines.
Who changes the rules? What happens to published work using the outdated rules, do they need to be updated?
eliminate all https:// when citing URLs.
I didn't know you had to delete the https:// part in a link
Downloading or even printing key documents ensures you have a stable backup.
This makes sense because oftentimes I lose a link that I used for an essay, which ends up taking a lot of time to find again.
But when Blumenstock and colleagues aggregated their estimates to Rwanda’s 30 districts, they found that their estimates were very similar to estimates from the Demographic and Health Survey, which is widely considered to be the gold standard of surveys in developing countries. Although these two approaches produced similar estimates in this case, the approach of Blumenstock and colleagues was about 10 times faster and 50 times cheaper than the traditional Demographic and Health Surveys.
I wonder where the discrepancies lied, as I am sure there were some valuable differences between Blumenstock's aggregate estimates and that of the Demographic and Health Survey's. Additionally, I wonder about the extent to which the results of this machine learning model approach would be successful in other countries or via a different means of database. This first section really highlights how how incredibly nuanced the implications of Machine Learning are
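As a toy sketch of that validation idea (the district labels and numbers below are made up, not Blumenstock's data): aggregate the model's individual-level predictions to districts and compare them with the survey's district estimates.

    from statistics import correlation, mean  # correlation() needs Python 3.10+

    # Hypothetical district-level averages of a wealth index.
    model_estimates  = {"A": 0.42, "B": 0.55, "C": 0.31, "D": 0.66}
    survey_estimates = {"A": 0.45, "B": 0.52, "C": 0.35, "D": 0.61}

    districts = sorted(model_estimates)
    m = [model_estimates[d] for d in districts]
    s = [survey_estimates[d] for d in districts]

    print("Pearson r across districts:", round(correlation(m, s), 3))
    print("mean absolute gap:", round(mean(abs(a - b) for a, b in zip(m, s)), 3))

Looking at per-district gaps rather than only the overall correlation is exactly where the discrepancies the comment asks about would show up.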
In the 1790s, James Baker recorded that the town 'had a very considerable share of resort from the most distinguished persons of fashion in the kingdom; it is found a most convenient trip for the inhabitants of Bristol, Bath and the counties adjoining the Severn Sea'
First primary source - possible analysis and comparison with the other article
health tourism, with wealthy visitors flocking in during the summer months to bathe in the sheltered bay
but was 'health tourism' the same as developed seaside resorts?
guidebooks, newspaper columns and visitor comments of the day and crucial to understanding the urban environment of Swansea
While they are the primary sources available, they should be taken with a pinch of salt. Often, such works were meant to advertise the areas, so, as I think Blarney says, they left out the industrial parts; or maybe, if they were dissing the place, they emphasised the ports?
in the second half of the nineteenth century, with more day-trippers and working-class visitors from the surrounding industrial suburbs and further afield.
This chapter re-evaluates the history of one south Wales coastal town by merging these two previously separate strands of research to form a new analysis of interactivity between industry and tourism
She clearly believes that the histories of industry and tourism are linked. She claims that there is a strong link between them and that they shouldn't be separated. This is a similar view to Blarney, whom she references multiple times and who is also an author in the same collection that this chapter is a part of.
Page 1: She begins with talk of the separation of
P. Borsay,
She references Borsay multiple times - he seems to be a significant writer on the history of Welsh beaches. Does she take his view or challenge it? Reply when you've read it.
On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers, or making fake crowds that appear to support a cause (called Astroturfing).
This section has been an eye-opener for me regarding the potential for bots to deliberately influence public perception rather than just being an annoyance to users with spam messages. The case of astroturfing illustrates the potential for fake crowds to make the opinion of a few appear to be the more popular view than others.
International Women’s Day, the bot automatically finds wh
This idea is really fascinating to me. I love the idea of using technology like this for good, to shine light and hold corporations accountable. It truly never occurred to me that a bot could be used in this way.
Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.
When looking at a bot, we can't assume it reflects the intention of any one single person. The people who wrote the code, the people who use it, and the computer that runs it are all separate entities, and because of this, responsibility for a bot's actions is spread out.
How are people’s expectations different for a bot and a “normal” user?
I think seeing bot activity for me, at least before taking this class, felt completely detached from human activity. When I notice content that seems like it was produced by a bot, my mind doesn't jump at all to the programming of said bot, and merely shoos away any attempt to care about it.
Why do you think social media platforms allow bots to operate? Why would users want to be able to make bots? How does allowing bots influence social media sites’ profitability?
I think social media platforms benefit from bots because they create more engagement; they push out more tweets/comments, which helps a platform that is supported by people interacting with it. They also streamline the tediousness of posting/tweeting: it is automated, so once you set it up, that is all the work you'd have to do.
How are people’s expectations different for a bot and a “normal” user?
I think people's expectations of a bot compared to a normal user are that they might not take it as seriously because it is an automated thing. It is only there to relay information that the programmer wanted to put out on a specific schedule. If I saw an account that was a bot, I'd just think, "Oh, it's just a bot." Whereas with a normal user, I would imagine a real person is behind the screen, consciously putting out the content they want to.
But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting.
I appreciate this analogy; it also resembles the current state of our own country, where people often do not take accountability for their actions because they did not participate "first hand." The internet has taken this donkey-protest dilemma to the extreme; oftentimes people are far more outspoken about beliefs or topics because they're behind a screen, much like people controlling bots who don't believe they should be held accountable for what the bot posted/said online because "it wasn't me!"
● The three lowest scores will be excluded from your classroom preparation quizzes (Bundle 1).
● The two lowest scores will be excluded from your classroom participation activities (Bundle 2).
● You have the opportunity to revise and resubmit one question from Mini-project 1 within one week after your initial submission is returned to you (Bundle 4).
Helpful if something comes up unexpectedly!
As such, you are not to use AI tools for any assignments unless permission is explicitly granted in the instructions.
I appreciate rejecting the use of AI tools because I also believe AI impedes critical thinking and hurts the environment.
In other words, this book is not designed to teach you how to do any specific calculation; rather, it is designed to change the way that you think about social research.
This framing helps clarify what we’ll be doing in this course, not just learning techniques, but learning how to evaluate research choices. It makes me think about how we’ll need to justify data sources, sampling decisions, and ethics in our own projects.
This book is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the hybrid of these two fields.
The contrast between the optimism of data scientists and the skepticism of social scientists stood out to me. I like the idea of balancing both perspectives; being critical without dismissing new methods outright. That balance feels essential for responsible computational social science.
Changes in technology—specifically the transition from the analog age to the digital age—mean that we can now collect and analyze social data in new ways.
This raises ethical questions for me about informed consent. If researchers are using data generated while people are “living their lives,” how aware are participants that they’re being studied? I’m curious how the book later addresses privacy and consent in digital contexts.
One morning, when I came into my basement office, I discovered that overnight about 100 people from Brazil had participated in my experiment.
This example really highlights the methodological shift from traditional lab experiments to digital-age research. It’s striking how scale and speed change what’s possible but it also makes me wonder how issues like sample bias or lack of experimental control compare to in-person lab studies.
In chapter 6 (“Ethics”), I’ll argue that researchers have rapidly increasing power over participants and that these capabilities are changing faster than our norms, rules, and laws.
I appreciate that ethics is treated as both its own chapter and something embedded throughout the book. That structure reinforces the idea that ethical considerations shouldn’t be isolated to a single stage of research, especially when methods like experiments or mass collaboration introduce new responsibilities for researchers.
This book progresses through four broad research designs: observing behavior, asking questions, running experiments, and creating mass collaboration.
I found the framing of the four research designs helpful because it clarifies how different methods enable fundamentally different kinds of knowledge. The idea that collaboration allows learning that isn’t possible through observation, surveys, or experiments alone makes me think about how method choice shapes not just results, but the kinds of questions we even consider asking.
There are several ways computer programs are involved with social media. One of them is a “bot,” a computer program that acts through a social media account. There are other ways of programming with social media that we won’t consider a bot (and we will cover these at various points as well): The social media platform itself is run with computer programs, such as recommendation algorithms (chapter 12). Various groups want to gather data from social media, such as advertisers and scientists. This data is gathered and analyzed with computer programs, which we will not consider bots, but will cover later, such as in Chapter 8: Data Mining. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:
It is shocking to learn that bot-like activity is not only carried out by computer programs but also by so-called "human computers" in click farms, where a worker stares at dozens of phones for hours and hours. I see those workers as having little choice: although it is a job, they are just doing what they've been told to do, with their own decision-making taken away, making them little different from an automated account.
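For readers who have never seen one, here is a bare-bones hypothetical sketch of "a computer program that acts through a social media account"; post_status is a stand-in for whatever API call or HTTP request the platform actually requires, not a real library function:

    import time

    def post_status(text: str) -> None:
        # Placeholder for a real platform API call made with the account's credentials.
        print("posting:", text)

    # The program, not a person, decides when and what the account posts.
    messages = ["Good morning!", "Reminder: drink some water.", "Good night!"]

    for message in messages:
        post_status(message)
        time.sleep(8 * 60 * 60)  # wait roughly eight hours before the next post

From the outside, the resulting posts can look like any other user's; the difference is only in who, or what, pressed "post."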
the amount of oxygen or lack thereof (redox state) and the pH and salt concentration, as well as to scavenge essential minerals and harvest and metabolize the available nutrients, determines the numbers and nature of the species that populate a site of the body. Anaerobic or facultative anaerobic bacteria colonize most of the sites of the body because of the lack of oxygen in sites such as the mouth, intestine, and genitourinary tract.
Microbe species determined by available nutrients/environmental niches
• Built a cross-platform desktop timer app leveraging Electron and Node.js back-end APIs to create looping timers, system tray minimization, and a compact miniplayer for live countdown tracking.
• Structured the codebase with TypeScript modules and packaged the app using Electron Forge, ensuring clean builds and maintainable desktop deployment.
Same thing here: add metrics comparing it to other apps, or something similar.
Personal Website
This one is a little subjective, but I'd put the actual name of your website here rather than "Personal Website", with the hyperlink still attached.
• Spearheading the development of a full-stack web application using DeepSeek to parse syllabi and gather important details into a polished UI.
• Used Flask to create REST endpoints, additionally making requests using the OpenAI API to DeepSeek V3.2 for user-catered JSON responses.
• Collaborated with another developer using GitHub to merge front-end and back-end knowledge, speeding up planning and development.
Same with this
• Created a full-stack JavaScript application using Express and Node.js to serve a REST API to a Discord.js front-end for server account instance automation.
• Used the Mineflayer API for scripting commands, events, and regex matching, eliminating all manual actions.
• Employed SOCKS5 ISP proxies to rotate connections, bypassing server restrictions to support 100+ instances.
Personally, I wouldn't highlight and bold the technologies for this. I'd keep them but add some other metric, like how it reduced manual workload by x% or by x amount of time, if you have that number.
Pseudocode is intended to be easier to read and write.
In other words, it’s like the fundamental structure for more developed code? It could be interpreted as the building blocks for what humans can understand of the vast universe of coding.
Synaptic Transmission [1:51]
This video is one of the most simple and helpful explanations I've seen thus far.
She wants to tell Pocahontas's life story from an Algonquian viewpoint, from the myths "and the worldview that informed her actions and character"
This stands out to me because growing up Pocahontas's life was depicted as more of a fairy tale than it was an accurate moment of history. I am curious to learn more about her from a historical perspective and not just a story telling point of view.
memory was assumed to fade as it gained distance from the focus of its recollection, its authority lessening as time passed.
This is interesting because the more I have explored memory through the lens of this class, the more skeptical I am of this statement. From what we have learned, I am understanding how time strengthens memories from a community standpoint as they build off one another.
More suggestive is the widespread effort on the part of ordinary people to celebrate symbols such as pioneer ancestors or dead soldiers that were more important for autobiographical and local memory than for civic memory.
This stands out to me because it calls out how traditions create patriotism. Appealing to ordinary people with commemorated figures puts these people on a pedestal. In return, the practice of respect and honor drives spirit and patriotism. - Tanisha Thulasidas
[[Peter Rukavina p]] and [[Lisa Chandler p]]'s box printing project in Hilversum #2024/04 and where the boxes were registered.
Anil Dash in #2023/07 on 'VC QAnon', the slide into fascism of the techbros. Cf. [[How Silicon Valley Unleashed Techno Feudalism by Cédric Durand]] and [[I Thought I Knew Silicon Valley. I Was Wrong]]
kit
It can be changed when generating it with PowerShell. When using the VBR GUI or Web UI, only the expiration date can be seen in the dialog window.
contains
From the Using Veeam Deployment Kit section -> The Deployment Kit typically includes:
- Necessary binaries and supporting files
- Authentication certificates
- A sample service configuration script (InstallDeploymentKit.BAT) for automated installation
eLife Assessment
The current work uses DNA-tethered motor trapping to reduce vertical forces and improve datasets for kinesin-1 motility under load. The evidence is compelling and the significance is important to the kinesin field. Kinesin-1 is more robust and less prone to premature detachment than previously indicated. This represents a significant advancement in the field and is generally applicable to work with optical tweezers.
Reviewer #1 (Public review):
Summary:
The manuscript by Hensley and Yildiz studies the mechanical behavior of kinesin under conditions where the z-component of the applied force is minimized. This is accomplished by tethering the kinesin to the trapped bead with a long double-stranded DNA segment as opposed to directly binding the kinesin to the large bead. It complements several recent studies that have used different approaches to looking at the mechanical properties of kinesin under low z-force loads. The study shows that much of the mechanical information gleaned from the traditional "one bead" with attached kinesin approach was probably profoundly influenced by the direction of the applied force. The authors speculate that when moving small vesicle cargos (particularly membrane-bound ones) the direction of resisting force on the motor has much less of a z-component than might be experienced if the motor were moving large organelles like mitochondria.
Strengths:
The approach is sound and provides an alternative method to examine the mechanics of kinesin under conditions where the z-component of the force is lessened. The data show that kinesin has very different mechanical properties compared to those extensively reported with using the "single-bead" assay where the molecule is directly coupled to a large bead which is then trapped.
Weaknesses:
The sub-stoichiometric binding of kinesins to the multivalent DNA complicates the interpretation of the data.
Comments on revisions:
The authors have addressed my concerns.
Reviewer #2 (Public review):
This short report by Hensley and Yildiz explores kinesin-1 motility under more physiological load geometries than previous studies. Large Z-direction (or radial) forces are a consequence of certain optical trap experimental geometries, and likely do not occur in the cell. Use of a long DNA tether between the motor and the bead can alleviate Z-component forces. The authors perform three experiments. In the first, they use two assay geometries - one with kinesin attached directly to a bead and the other with kinesin attached via a 2 kbp DNA tether - with a constant-position trap to determine that reducing the Z component of force leads to a difference in stall time but not stall force. In the second, they use the same two assay geometries with a constant-force trap to replicate the asymmetric slip bond of kinesin-1; reducing the Z component of force leads to a small but uniform change in the run lengths and detachment rates under hindering forces but not assisting forces. In the third, they connect two or three kinesin molecules to each DNA, and measure a stronger scaling in stall force and time when the Z component of force is reduced. They conclude that kinesin-1 is a more robust motor than previously envisaged, where much of its weakness came from the application of axial force. If forces are instead along the direction of transport, kinesin can hold on longer and work well in teams. The experiments are rigorous, and the data quality is very high. There is little to critique or discuss. The improved dataset will be useful for modeling and understanding multi-motor transport. The conclusions complement other recent works that used different approaches to low-Z component kinesin force spectroscopy, and provide strong value to the kinesin field.
Comments on revisions:
The authors have satisfied all of my comments. I commend them on an excellent paper.
Reviewer #3 (Public review):
Hensley et al. present an important study into the force-detachment behaviour of kinesin-1, using a newly adapted methodological approach. This new method of DNA-tethered motor trapping is effective in reducing vertical forces and can be easily optimised for other motors and protein characterisation. The major strength of the paper is characterising kinesin-1 under low z-forces, which is likely to reflect the physiological scenario. They find kinesin-1 is more robust and less prone to premature detachment. The motors exhibit higher stall rates and times. Under hindering and assisting loads, kinesin-1 detachment is more asymmetric and sensitive, and with low z-force shows that slip-behaviour kinetics prevail. Another achievement of this paper is the demonstration of the multi-motor kinesin-1 assay using their low-z force method, showing that multiple kinesin-1 motors are capable of generating higher forces (up to 15 pN, and nearly proportional to motor number), thus opening an avenue to study multiple motor coordination. Overall, the data have been collected in a rigorous manner, the new technique is sound and effective, and results presented are compelling.
Author response:
The following is the authors’ response to the original reviews
Reviewer #1 (Recommendations for the authors):
(1) My primary concern is that in some of the studies, there are not enough data points to be totally convincing. This is particularly apparent in the low z-force condition of Figure 1C.
We agree that adequate sampling is essential for drawing robust conclusions. To address this concern, we performed a post hoc sensitivity analysis to assess the statistical power of our dataset. Given our sample sizes (N = 85 and 45) and observed variability, the experiment had 80% power (α = 0.05) to detect a difference in stall force of approximately 0.36 pN (Cohen’s d ≈ 0.38). The actual difference observed between conditions was 0.25 pN (d ≈ 0.26), which lies below the minimum detectable effect size. Thus, the non-significant result (p = 0.16) likely reflects that any true difference, if present, is smaller than the experimental sensitivity, rather than a lack of sufficient sampling.
Importantly, both measured stall forces fall within the reported range for kinesin-1 in the literature, supporting that the dataset is representative and the measurements are reliable.
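A minimal sketch of this type of post hoc sensitivity calculation is shown below, assuming an unpaired two-sample t-test (the manuscript does not state which test was used, so the output is illustrative and need not match the values quoted above).

```python
# Sketch of a post hoc sensitivity analysis: solve for the minimum detectable
# effect size (Cohen's d) given the sample sizes, alpha, and target power.
# Assumes an unpaired two-sample t-test; the authors' exact procedure may differ.
from statsmodels.stats.power import TTestIndPower

n1, n2 = 85, 45              # stall events in the two trapping geometries
alpha, power = 0.05, 0.80

d_min = TTestIndPower().solve_power(effect_size=None, nobs1=n1, ratio=n2 / n1,
                                    alpha=alpha, power=power)
print(f"minimum detectable effect size: d = {d_min:.2f}")

# A force difference converts to d by dividing by the pooled SD; for example,
# a 0.36 pN difference with a pooled SD of ~0.95 pN corresponds to d ≈ 0.38.
```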
(2) I'm also concerned about Figure 2B. Does each data point in the three graphs represent only a single event? If so, this should probably be repeated several more times to ensure that the data are robust.
Each data point shown corresponds to the average of many processive runs, ranging from 32 to 167. This has been updated in the figure caption accordingly.
(3) Figure 3. I'm surprised that the authors could not obtain a higher occupancy of the multivalent DNA tether with kinesin motors. They were adding up to a 30X higher concentration of kinesin, but still did not achieve stoichiometric labeling. The reasons for this should be discussed. This makes interpretation of the mechanical data much tougher. For instance, only 6-7% of the beads would be driven by three kinesins. Unless the movement of hundreds of beads were studied, I think it would be difficult to draw any meaningful insight, since most of the events would be reflective of beads with only one or sometimes two kinesins bound. I think more discussion is required to describe how these data were treated.
The mass-photometry data in Figure 3B were acquired in the presence of a 3-fold molar excess of kinesin (Supplemental Figure 4) relative to the DNA chassis. In comparison, optical trapping studies were performed at a 10-20-fold molar excess of kinesin, resulting in a substantially higher percentage of chassis with multiple motors. The reason why we had to perform mass photometry measurements at lower molar excess than the optical trap is that at higher kinesin concentrations, the “kinesin-only” peak dominated and obscured 2- or 3-kinesin-bound species, preventing reliable fitting of the mass photometry data.
We have now used the mass photometry measurements to extrapolate occupancies under trapping conditions. We estimate 76-93% of 2-motor chassis are bound to two kinesins and ~70% of 3-motor chassis are bound to three kinesins under our trapping conditions. Moreover, the mean forces in Figures 3C–D exceed those expected for a single kinesin, consistent with occupancy substantially greater than one motor per chassis.
We wrote: “To estimate the percentage of chassis with two and three motors bound, we performed mass photometry measurements at a 3-fold molar excess of kinesin to the chassis, as higher ratios would obscure the distinction of complexes from the kinesin-only population. Assuming there is no cooperativity among the binding sites, we modeled motor occupancy using a Binomial distribution (Figure 3_figure supplement 2). We observed 17-29% of particles corresponded to the two-motor species on the 2-motor chassis in mass photometry, indicating that 45-78% of the 2-motor chassis was bound to two kinesins. Similarly, 15% and 40% of the 3-motor chassis were bound to two and three kinesins, respectively.
In optical trapping assays, we used 10-fold and 20-fold molar excess of kinesin for 2-motor and 3-motor chassis, respectively, to substantially increase the percentage of the chassis carried by multiple kinesins. Under these conditions, we estimate 76-93% of the 2-motor chassis were bound to two kinesins, and 30% and 70% of 3-motor chassis were bound to two and three kinesins, respectively.”
“Multi-motor trapping assays were performed similarly using 10x and 20x kinesin for 2- and 3-motor chassis, respectively. To estimate the percentage of chassis with multiple motors, we used the probability of kinesin binding to a site on a chassis from mass photometry in the 3x excess condition to compute an effective dissociation constant
where r is the molar ratio of kinesin to chassis. Single-site occupancy at higher molar excesses of kinesin was calculated using this parameter. ”
We also added Figure 3_figure supplement 2 to explain our Binomial model.
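The binomial occupancy argument can be illustrated with a short sketch; the per-site binding probability used here is an assumed, illustrative value, not a fitted parameter from the paper.

```python
# Sketch of the binomial occupancy model: assuming independent, identical binding
# sites, the fraction of chassis carrying k of n motors is Binomial(n, p_site).
from scipy.stats import binom

def occupancy(n_sites: int, p_site: float) -> dict:
    """Fraction of chassis with 0..n_sites motors bound."""
    return {k: binom.pmf(k, n_sites, p_site) for k in range(n_sites + 1)}

p_site = 0.88   # illustrative per-site binding probability at high kinesin excess
for n in (2, 3):
    fracs = occupancy(n, p_site)
    print(f"{n}-motor chassis: fully occupied fraction = {fracs[n]:.2f}")
```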
(4) Page 5, 1st paragraph. Here, the authors are comparing time constants from stall experiments to data obtained with dynein from Ezber et al. This study used the traditional "one bead" trapping approach with dynein bound directly to the bead under conditions where it would experience high z-forces. Thus, the comparison between the behavior of kinesin at low z-forces is not necessarily appropriate. Has anyone studied dynein's mechanics under low z-force regimes?
We thank the reviewer for catching a citation error. The text has been corrected to reference Elshenawy et al. 2020, which reported stall time constants for mammalian dynein.
To our knowledge, dynein’s mechanics under explicitly low z-force conditions have not yet been reported; however, given the more robust stalling behavior of dynein and greater collective force generation, the cited paper was chosen to compare low z-force kinesin to a motor that appears comparatively unencumbered by z-forces. Our study adds to growing evidence that high z-forces disproportionately limit kinesin performance.
For clarification, we modified that sentence as follows: “These time constants are comparable to those reported for minus-end-directed dynein under high z-forces”.
Reviewer #2 (Recommendations for the authors):
(1) P3 pp2, a DNA tensiometer cannot control the force, but it can measure it; get the distance between the two ends of the tensiometer, and apply WLC.
The text has been updated to more accurately reflect the differences between optical trapping and kinesin motility against a DNA tensiometer with a fixed lattice position.
(2) Fig. 2b, SEM is a poor estimate of error for exponentially distributed run lengths. Other methods, like bootstrapping an exponential distribution fit, may provide a more realistic estimate.
Run lengths were plotted as an inverse cumulative distribution function and fitted to a single exponential decay (Supplementary Figure S3). The plotted value represents the fitted decay constant (characteristic run length) ± SE (standard error of the fit), not the arithmetic mean ± SEM. Velocity values are reported as mean ± SEM. Detachment rate was computed as velocity divided by run length, except at 6 and 10 pN hindering loads, where minimal forward displacement necessitated fitting run-time decays directly. In those cases, the plotted detachment rate equals the inverse of the fitted time constant. The figure caption has been updated accordingly.
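A minimal sketch of this kind of analysis (a survival-curve fit of run lengths to a single exponential, with a bootstrap error estimate along the lines Reviewer #2 suggests) is given below; the data are synthetic and the code is illustrative rather than the authors' pipeline.

```python
# Sketch: fit run lengths to a single-exponential survival curve (1 - CDF) and
# estimate the uncertainty of the characteristic run length by bootstrapping,
# as an alternative to SEM. The data below are synthetic, not the experimental set.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
run_lengths = rng.exponential(scale=0.8, size=120)   # µm, synthetic example

def characteristic_run_length(x: np.ndarray) -> float:
    """Fit the empirical survival curve 1 - CDF to exp(-x / L) and return L."""
    xs = np.sort(x)
    surv = 1.0 - np.arange(1, xs.size + 1) / xs.size
    popt, _ = curve_fit(lambda t, L: np.exp(-t / L), xs[:-1], surv[:-1], p0=[x.mean()])
    return float(popt[0])

L_hat = characteristic_run_length(run_lengths)
boot = np.array([
    characteristic_run_length(rng.choice(run_lengths, size=run_lengths.size, replace=True))
    for _ in range(500)
])
print(f"characteristic run length = {L_hat:.2f} ± {boot.std(ddof=1):.2f} µm (bootstrap SE)")
```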
(3) Kinesin-1 is covalently bound to a DNA oligo, which then attaches to the DNA chassis by hybridization. This oligo is 21 nt with a relatively low GC%. At what force does this oligo unhybridize? Can the authors verify that their stall force measurements are not cut short by the oligo detaching from the chassis?
The 21-nt attachment oligo (38% GC) is predicted to have ΔG(37 °C) ≈ −25 kcal/mol, or approximately 42 kT. If we assume this is the approximate amount of work required to unhybridize the oligo, we would expect the rupture force to be >15 pN. This significantly exceeds the stall force of a single kinesin. Since the stalling events rarely exceed a few seconds, it is unlikely that our oligos quickly detach from the chassis under such low forces.
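As a rough check of the numbers above (assuming k_BT ≈ 0.6 kcal/mol, i.e., roughly room to body temperature):

\[
|\Delta G| \approx 25\ \mathrm{kcal\,mol^{-1}} \approx \frac{25}{0.6}\,k_B T \approx 42\,k_B T \approx 170\ \mathrm{pN\,nm},
\qquad
F_{\mathrm{rupture}} \sim \frac{|\Delta G|}{\Delta x} \gtrsim 15\ \mathrm{pN}\ \ \text{for}\ \ \Delta x \lesssim 11\ \mathrm{nm}.
\]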
Furthermore, optical trapping experiments are tuned such that no more than 30% of beads display motion within several minutes after they are brought near microtubules. After stalling events, the motor dissociates from the MT, and the bead snaps back to the trap center. Most beads robustly reengage with the microtubule, typically within 10 s, suggesting that the same motor chassis reengages with the microtubule after microtubule detachment. Successive runs of the same bead typically have similar stall forces, suggesting that the motors do not disengage from the chassis under resistive forces exerted by the trap.
(4) Figure 1, a justification or explanation should be provided for why events lower than 1.5 pN were excluded. It appears arbitrary.
Single-motor stall-force measurements used a trap stiffness of 0.08–0.10 pN/nm. At this stiffness, a 1.5 pN force corresponds to 15–19 nm bead displacement, roughly two kinesin steps, and events below this threshold could not be reliably distinguished from Brownian noise. For this reason, forces < 1.5 pN were excluded.
In Methods, we wrote “Only peak forces above 1.5 pN (corresponding to a 15-19 nm bead displacement) were analyzed to clearly distinguish runs from the tracking noise.”
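The displacement range quoted above follows from treating the trap as a Hookean spring:

\[
x = \frac{F}{k_{\mathrm{trap}}} = \frac{1.5\ \mathrm{pN}}{0.08\text{--}0.10\ \mathrm{pN/nm}} \approx 15\text{--}19\ \mathrm{nm}.
\]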
(5) Figure 2b, is the difference in velocity statistically significant?
The difference in velocity is statistically significant for most conditions. We did not compare velocities for -10 and -6 pN as these conditions resulted in little forward displacement. However, the p-values for all of the other conditions are -4 pN: 0.0026, -2 pN: 0.0001, -1 pN: 0.0446, +0.5 pN: 0.3148, +2 pN: 0.0001, +3 pN: 0.1191, +4 pN: 0.0004.
(6) The number of measurements for each experimental data point should be provided in the corresponding figure caption. SEM is used, but N is not reported in the caption.
Figure captions have now been updated to report the number of trajectories (N) for each data point.
Reviewer #3 (Recommendations for the authors):
(1) The method of DNA-tethered motor trapping to enable low z-force is not entirely novel, but adapted from Urbanska (2021) for use in conventional optical trapping laboratories without reliance on microfluidics. However, I appreciate that they have fully established it here to share with the community. The authors could strengthen their methods section by being transparent about protein weight, protein labelling, and DNA ladders shown in the supplementary information. What organism is the protein from? Presumably human, but this should be specified in the methods. While the figures show beautiful data and exemplary traces, the total number of molecules analysed or events is not consistently reported. Overall, certain methodological details should be made sufficient for reproducibility.
We appreciate the reviewer’s attention to methodological clarity. The constructs used are indeed human kinesin-1, KIF5B. The Methods now specify protein origin, molecular weights, and labeling details, and all figure captions report the number of trajectories analyzed to ensure reproducibility.
(2) The major limitation the study presents is overarching generalisability, starting with the title. I recommend that the title be specific to kinesin-1.
The title has been revised to specify kinesin-1.
The study uses two constructs: a truncated K560 for conventional high-force assays, and full-length Kif5b for the low z-force method. However, for the multi-motor assay, the authors use K560 with the rationale of preventing autoinhibition due to binding with DNA, but that would also have limited characterisation in the single-molecule assay. Overall, the data generated are clear, high-quality, and exciting in the low z-force conditions. But why have they not compared or validated their findings with the truncated construct K560? This is especially important in the force-feedback experiments and in comparison with Andreasson et al. and Carter et al., who use Drosophila kinesin-1. Could kinesin-1 across organisms exhibit different force-detachment kinetics? It is quite possible.
Construct choice was guided by physiological relevance and considerations of autoinhibition: K560 was used for high z-force single-motor assays. The results of these assays are consistent with conventional bead assays performed by Andreasson et al. and Carter et al. using kinesin from a different organism. Therefore, we do not believe there are major differences between force properties of Drosophila and human kinesin-1.
For low z-force assays, we used full-length KIF5B, which has nearly identical velocity and stall force to K560 in standard bead assays. We used this construct for low z-force assays because it has a longer and more flexible stalk than K560 and better represents the force behavior of kinesin under physiological conditions. We then used constitutively-active K560 motors for multi-motor experiments to avoid potential complications from autoinhibition of full-length kinesin.
Similarly, the authors test backward slipping of Kif5b and K560 and measure dwell times in multi-motor assays. Why not detail the backward slippage kinetics of Kif5b and any step-size impact under low z-forces? For instance, with the traces they already have, the authors could determine slip times, distances, and frequency in horizontal force experiments. Overall, the manuscript could be strengthened by analysing both constructs more fully.
Slip or backstep analyses were not performed on single-motor data because such events were rare; kinesin typically detached rather than slipped. In contrast, multi-motor assays exhibited frequent slip events corresponding to the detachment of individual motors, which were analyzed in detail.
We wrote “In comparison, slipping events were rarely observed in beads driven by a single motor, suggesting that kinesin typically detaches rather than slipping back on the microtubule under hindering loads.”
Appraisal and impact:
This study contributes to important and debated evidence on kinesin-1 force-detachment kinetics. The authors conclude that kinesin-1 exhibits a slip-bond interaction with the microtubule under increasing forces, while other recent studies (Noell et al. and Kuo et al.), which also use low z-force setups, conclude catch-bond behaviour under hindering loads. I find the results not fully aligned with their interpretation. The first comparison of low z-forces in their setup with Noell et al. (2024), based on stall times, does not hold, because it is an apples-to-oranges comparison. Their data show a stall time constant of 2.52 s, which is comparable to the 3 s reported by Noell et al., but the comparison is made with a weighted average of 1.49 s. The authors do report that detachment rates are lower in low z-force conditions under unloaded scenarios. So, to completely rule out catch-bond-like behaviour is unfair. That said, their data quality is good and does show that higher hindering forces lead to higher detachment rates. However, on closer inspection, the range of 0-5 pN shows either a decrease or no change in detachment rate, which suggests that under a hindering force threshold, catch-bond-like or ideal-bond-like behaviour is possible, followed by slip-bond behaviour, which is amazing resolution. Under assisting loads, the slip-bond character is consistent, as expected. Overall, the study contributes to an important discussion in the biophysical community and is needed, but requires cautious framing, particularly without evidence of motor trapping in a high microtubule-affinity state rather than genuine bond strengthening.
We are not completely ruling out the catch bond behavior in our manuscript. As the reviewer pointed out, our results are consistent with the asymmetric slip bond model, whereas DNA tensiometer assays are more consistent with the catch bond behavior. The advantage of our approach is the capability to directly control the magnitude and direction of load exerted on the motor in the horizontal axis and measure the rate at which the motor detaches from the microtubule as it walks under constant load. In comparison, DNA tensiometer assays cannot control the force, but measure the time it takes the motor to fall off from the microtubule after a brief stall. The extension of the DNA tether is used to estimate the force exerted on the motor during a stall in those assays. The slight disadvantage of our method is the presence of low z-forces, whereas DNA tensiometer assays are expected to have little to no z-force. We wrote that the discrepancy between our results and theirs can be attributed to the presence of low z-forces in our DNA-tethered trapping assembly, which may result in a higher-than-normal detachment rate under high hindering loads, thereby resulting in less asymmetry in the force detachment kinetics. We also added that this discrepancy can be addressed by future studies that directly control and measure horizontal force and measure the motor detachment rate in the absence of z-forces. Optical trapping assays with small nanoparticles (Sudhakar et al. Science 2021) may be well suited to conclusively reveal the bond characteristics of kinesin under hindering loads.
Reviewing Editor Comments:
The reviewers are in agreement with the importance of the findings and the quality of the results. The use of the DNA tether reduces the z-force on the motor and provides biologically relevant insight into the behavior of the motor under load. The reviewers' suggestions are constructive and focus on bolstering some of the data points and clarifying some of the methodological approaches. My major suggestion would be to clarify the rationale for concluding that kinesin-1 exhibits slip-bond behavior with increasing force in light of the work of Noell (10.1101/2024.12.03.626575) and Kuo et al. (2022, 10.1038/s41467-022-31069-x), both of which take advantage of DNA tethers.
Please see our response to the previous comment. In the revised manuscript, we first clarified that our results are in agreement with previous theoretical (Khataee & Howard, 2019) and experimental studies (Kuo et al., 2022; Noell et al., 2024; Pyrpassopoulos et al., 2020) that kinesin exhibits slower detachment under hindering load. This asymmetry became clear when the z-force was reduced or eliminated.
We clarified the differences between our results and DNA tensiometer assays and provided a potential explanation for these discrepancies. We also proposed that future studies might be required to fully distinguish between asymmetric slip, ideal, or catch bonding of kinesin under hindering loads.
We wrote:
“Our results agree with the theoretical prediction that kinesin exhibits higher asymmetry in force-detachment kinetics without z-forces (Khataee & Howard, 2019), and are consistent with optical trapping and DNA tensiometer assays that reported more persistent stalling of kinesin in the absence of z-forces (Kuo et al., 2022; Noell et al., 2024; Pyrpassopoulos et al., 2020).
Force-detachment kinetics of protein-protein interactions have been modeled as either a slip, ideal, or catch bond, which exhibit an increase, no change, or a decrease in detachment rate, respectively, under increasing force (Thomas et al., 2008). Slip bonds are most commonly observed in biomolecules, but studies on cell adhesion proteins reported a catch bond behavior (Marshall et al., 2003). Although previous trapping studies of kinesin reported a slip bond behavior (Andreasson et al., 2015; Carter & Cross, 2005), recent DNA tensiometer studies that eliminated the z-force showed that the detachment rate of the motor under hindering forces is lower than that of an unloaded motor walking on the microtubule (Kuo et al., 2022; Noell et al., 2024), consistent with the catch bond behavior. Unlike these reports, we observed that the stall duration of kinesin is shorter than the motor run time under unloaded conditions, and the detachment rate of kinesin increases with the magnitude of the hindering force. Therefore, our results are more consistent with the asymmetric slip bond behavior. The difference between our results and the DNA tensiometer assays (Kuo et al., 2022; Noell et al., 2024) can be attributed to the presence of low z-forces in our DNA-tethered optical trapping assays, which may increase the detachment rate under high hindering forces. Future studies that could directly control hindering forces and measure the motor detachment rate in the absence of z-forces would be required to conclusively reveal the bond characteristics of kinesin under hindering loads.”
eLife Assessment
This paper undertakes an important investigation to determine whether movement slowing in microgravity is due to a strategic conservative approach or rather due to an underestimation of the mass of the arm. The experimental dataset is unique, the coupled experimental and computational analyses comprehensive, and the effect is strong. However, the authors present incomplete results to support the claim that movement slowing is due to mass underestimation. Further analysis is needed to rule out alternative explanations.
Reviewer #1 (Public review):
The authors have conducted substantial additional analyses to address the reviewers' comments. However, several key points still require attention. I was unable to see the correspondence between the model predictions and the data in the added quantitative analysis. In the rebuttal letter, the delta peak speed time displays values in the range of [20, 30] ms, whereas the data were negative for the 45° direction. Should the reader directly compare panel B of Figure 6 with Figure 1E? The correspondence between the model and the data should be made more apparent in Figure 6. Furthermore, the rebuttal states that a quantitative prediction was not expected, yet it subsequently argues that there was a quantitative match. Overall, this response remains unclear.
A follow-up question concerns the argument about strategic slowing. The authors argue that this explanation can be rejected because the timing of peak speed should be delayed, contrary to the data. However, there appears to be a sign difference between the model and the data for the 45° direction, which means that it was delayed in this case. Did I understand correctly? In that regard, I believe that the hypothesis of strategic slowing cannot yet be firmly rejected and the discussion should more clearly indicate that this argument is based on some, but not all, directions. I agree with the authors on the importance of the mass underestimation hypothesis, and I am not particularly committed to the strategic slowing explanation, but I do not see a strong argument against it. If the conclusion relies on the sign of the delta peak speed, then the authors' claims are not valid across all directions, and greater caution in the interpretation and discussion is warranted. Regarding the peak acceleration time, I would be hesitant to draw firm conclusions based on differences smaller than 10 ms (Figures R3 and 6D).
The authors state in the rebuttal that the two hypotheses are competing. This is not accurate, as they are not mutually exclusive and could even vary as a function of movement direction. The abstract also claims that the data "refutes" strategic slowing, which I believe is too strong. The main issue is that, based on the authors' revised manuscript, the lack of quantitative agreement between the model and the data for the mass underestimation hypothesis is considered acceptable because a precise quantitative match is not expected, and the predictions overall agree for some (though not all) directions and phases (excluding post-in). That is reasonable, but by the same logic, the small differences between the model prediction and the strategic slowing hypothesis should not be taken as firm evidence against it, as the authors seem to suggest. In practice, I recommend a more transparent and cautious interpretation to avoid giving readers the false impression that the evidence is decisive. The mass underestimation hypothesis is clearly supported, but the remaining aspects are less clear, and several features of the data remain unexplained.
Reviewer #2 (Public review):
This study explores the underlying causes of the generalized movement slowness observed in astronauts in weightlessness compared to their performance on Earth. The authors argue that this movement slowness stems from an underestimation of mass rather than a deliberate reduction in speed for enhanced stability and safety.
Overall, this is a fascinating and well-written work. The kinematic analysis is thorough and comprehensive. The design of the study is solid, the collected dataset is rare, and the model adds confidence to the proposed conclusions.
Compared to the previous version, the authors have thoroughly addressed my concerns. The model is now clear and well-articulated, and alternative hypotheses have been ruled out convincingly. The paper is improved and suitable for publication in my opinion, making a significant contribution to the field.
Strengths:
- Comprehensive analysis of a unique data set of reaching movements in microgravity
- Use of a sensible and well-thought-out experimental approach
- State-of-the-art analyses of the main kinematic parameters
- Computational model simulations of arm reaching to test alternative hypotheses and support the mass underestimation one
This work has no major weakness as it stands, and the discussion provides a fair evaluation of the findings and conclusions.
Reviewer #3 (Public review):
Summary:
The authors describe an interesting study of arm movements carried out in weightlessness after a prolonged exposure to the so-called microgravity conditions of orbital spaceflight. Subjects performed radial point-to-point motions of the fingertip on a touch pad. The authors note a reduction in movement speed in weightlessness, which they hypothesize could be due to either an overall strategy of lowering movement speed to better accommodate the instability of the body in weightlessness or an underestimation of body mass. They conclude for the latter, mainly based on two effects. One, slowing in weightlessness is greater for movement directions with higher effective mass at the end effector of the arm. Two, they present evidence for increased number of corrective submovements in weightlessness. They contend that this provides conclusive evidence to accept the hypothesis of an underestimation of body mass.
Strengths:
In my opinion, the study provides a valuable contribution, the theoretical aspects are well presented through simulations, the statistical analyses are meticulous, the applicable literature is comprehensively considered and cited and the manuscript is well written.
Weaknesses:
I nevertheless am of the opinion that the interpretation of the observations leaves room for other possible explanations of the observed phenomenon, thus weakening the strength of the arguments.
To strengthen the conclusions, I feel that the following points would need to be addressed:
(1) The authors model the movement control through equations that derive the input control variable in terms of the force acting on the hand, treating the arm as a second-order low-pass filter (Eq. 13). Underestimation of the mass in the computation of a feedforward command would lead to a lower-than-expected displacement in response to that command. But it is not clear if and how the authors account for a potential modification of the time constants of the 2nd-order system. The CNS does not effectuate movements with pure torque generators. Muscles have elastic properties that depend on their tonic excitation level, reflex feedback and other parameters. Indeed, Fisk et al.* showed variations of movement characteristics consistent with lower muscle tone, lower bandwidth and lower damping ratio in 0g compared to 1g. Could the variations in the response to the initial feedforward command be explained by a misrepresentation of the limb's damping and natural frequency, leading to greater uncertainty about the consequences of the initial command? This would still be an argument for un-adapted feedforward control of the movement, leading to the need for more corrective movements. But it would not necessarily reflect an underestimation of body mass.
*Fisk, J., Lackner, J. R., & DiZio, P. (1993). Gravitoinertial force level influences arm movement control. Journal of Neurophysiology, 69(2), 504-511.
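For concreteness, a second-order limb model of the kind referred to here can be written in the standard mass-spring-damper form (a generic illustration, not Eq. 13 of the manuscript):

\[
m\ddot{x} + b\dot{x} + kx = u(t), \qquad \omega_n = \sqrt{k/m}, \qquad \zeta = \frac{b}{2\sqrt{km}},
\]

so a misestimate of m alters not only the expected displacement but also the assumed natural frequency and damping ratio.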
The authors attempt to differentiate their study from previous studies, in which limb neuromechanical impedance was shown to be modified in weightlessness, by emphasizing that in the current study the movements were rapid and the initial movement is "feedforward". But this incorrectly implies that the limb's mechanical response to the motor command is determined only by active feedback mechanisms. In fact:
(a) All commands to the muscle pass through the motor neurons. These neurons receive descending activations related not only to the volitional movement, but also to the dynamic state of the body and the influence of other sensory inputs, including the vestibular system. A decrease in descending influences from the vestibular organs will lower the background sensitivity to all other neural influences on the motor neuron. Thus, the motor neuron may be less sensitive to the other volitional and reflexive synaptic inputs that it may receive.
(b) Muscle tone plays a significant role in determining the force and the time course of the muscle contraction. In a weightless environment, where tonic muscle activity is likely to be reduced, there is the distinct possibility that muscles will react more slowly and with lower amplitude to an otherwise equivalent descending motor command, particularly in the initial moments before spinal reflexes come into play. These, and other neuronal mechanisms could lead to the "under-actuation" effect observed in the current study, without necessarily being reflective of an underestimation of mass per se.
(2) The subject's body in weightlessness is much more sensitive to reaction forces in interactions with the environment in the absence of the anchoring effect of gravity pushing the body into the floor and in the absence of anticipatory postural adjustments that typically accompany upper-limb motions in Earth gravity in order to maintain an upright posture. The authors dismiss this possibility because the taikonauts were asked to stabilize their bodies with the contralateral hand. But the authors present no evidence that this was sufficient to maintain the shoulder and trunk at a strictly constant position, as is supposed by the simplified biomechanical model used in their optimal control framework. Indeed, a small backward motion of the shoulder would result in a smaller acceleration of the fingertip and a smaller extent of the initial ballistic motion of the hand with respect to the measurement device (the tablet), consistent with the observations reported in the study. Note that stability of the base might explain why 45° movements were apparently less affected in weightlessness, according to many of the reported analyses, including those related to corrective movements (Fig. 5 B, C, F; Fig. 6D), than the other two directions. If the trunk is being stabilized by the left arm, the same reaction forces on the trunk due to the acceleration of the hand will result in less effective torque on the trunk, given that the reaction forces act with a much smaller moment arm with respect to the left shoulder (the hand movement axis passes approximately through the left shoulder for the 45° target) compared to either the forward or rightward motions of the hand.
(3) The above is exacerbated by potential changes in the frictional forces between the fingertip and the tablet. The movements were measured by having the subjects slide their finger on the surface of a touch screen. In weightlessness, the implications of this contact can be expected to be quite different than on the ground. While these forces may be low on Earth, the fact is that we do not know what forces the taikonauts used on orbit. In weightlessness, the taikonauts would need to actively press downward to maintain contact with the screen, while on Earth gravity will do the work. The tangential forces that resist movement due to friction might therefore be different in 0g. Indeed, given the increased instability of the body and the increased uncertainty of movement direction of the hand, taikonauts may have been induced to apply greater forces against the tablet in order to maintain contact in weightlessness, which would in turn slow the motion of the finger on the tablet and increase the reaction forces acting on the trunk. This could be particularly relevant given that the effect of friction would interact with the limb in a direction-dependent fashion, given the anisotropy of the equivalent mass at the fingertip evoked by the authors.
I feel that the authors have done an admirable job of exploring how to explain the modifications to movement kinematics that they observed on orbit within the constraints of the optimal control theory applied to a simplified model of the human motor system. While I fully appreciate the value of such models to provide insights into questions of human sensorimotor behaviour, to draw firm conclusions on what humans are actually experiencing based only on manipulations of the computational model, without testing the model's implicit assumptions and without considering the actual neurophysiological and biomechanical mechanisms, can be misleading. One way to do this could be to examine these questions through extensions to the model used in the simulations (changing activation dynamics of the torque generators, allowing for potential backward motion of the shoulder and trunk, etc.). A better solution would be to emulate the physiological and biomechanical conditions on Earth (supporting the arm against gravity to reduce muscle tone, placing the subject on a moveable base that requires that the body be stabilized with the other hand) in order to distinguish the hypothesis of an underestimation of mass vs. other potential sources of under-actuation and other potential effects of weightlessness on the body.
In sum, my opinion is that the authors are relying too much on a theoretical model as a ground truth and thus overstate their conclusions. But to provide a convincing argument that humans truly underestimate mass in weightlessness, they should consider more judiciously the neurophysiology and biomechanics that fall outside the purview of the simplified model that they have chosen. If a more thorough assessment of this nature is not possible, then I would argue that a more measured conclusion of the paper should be 1) that the authors observed modifications to movement kinematics in weightlessness consistent with an under-actuation for the intended motion, 2) that a simplified model of human physiology and biomechanics that incorporates principles of optimal control suggest that the source of this under-actuation might be an underestimation of mass in the computation of an appropriate feedforward motor command, and 3) that other potential neurophysiological or biomechanical effects cannot be excluded due to limitations of the computational model.
Author response:
The following is the authors’ response to the original reviews
eLife Assessment
This paper undertakes an important investigation to determine whether movement slowing in microgravity is due to a strategic conservative approach or rather due to an underestimation of the mass of the arm. While the experimental dataset is unique and the coupled experimental and computational analyses comprehensive, the authors present incomplete results to support the claim that movement slowing is due to mass underestimation. Further analysis is needed to rule out alternative explanations.
We thank the editor and reviewers for the thoughtful and constructive comments, which helped us substantially improve the manuscript. In this revised version, we have made the following key changes:
- Directly presented the differential effect of microgravity in different movement directions, showing its quantitative match with model predictions.
- Showed that changing the cost function to implement a conservative strategy is not a viable alternative.
- Showed our model predictions remain largely the same after adding Coriolis and centripetal torques.
- Discussed alternative explanations including neuromuscular deconditioning, friction, body stability, etc.
- Detailed the model description and moved it to the main text, as suggested.
Our point-to-point response is numbered to facilitate cross-referencing.
We believe the revisions and the responses adequately address the reviewers' concerns, and the new analysis results strengthen our conclusion that mass underestimation is the major contributor to movement slowing in microgravity.
Reviewer #1 (Public review):
Summary:
This article investigates the origin of movement slowdown in weightlessness by testing two possible hypotheses: the first is based on a strategic and conservative slowdown, presented as a scaling of the motion kinematics without altering its profile, while the second is based on the hypothesis of a misestimation of effective mass by the brain due to an alteration of gravity-dependent sensory inputs, which alters the kinematics following a controller parameterization error.
Strengths:
The article convincingly demonstrates that trajectories are affected in 0g conditions, as in previous work. It is interesting, and the results appear robust. However, I have two major reservations about the current version of the manuscript that prevent me from endorsing the conclusion in its current form.
Weaknesses:
(1) First, the hypothesis of a strategic and conservative slow down implicitly assumes a similar cost function, which cannot be guaranteed, tested, or verified. For example, previous work has suggested that changing the ratio between the state and control weight matrices produced an alteration in movement kinematics similar to that presented here, without changing the estimated mass parameter (Crevecoeur et al., 2010, J Neurophysiol, 104 (3), 1301-1313). Thus, the hypothesis of conservative slowing cannot be rejected. Such a strategy could vary with effective mass (thus showing a statistical effect), but the possibility that the data reflect a combination of both mechanisms (strategic slowing and mass misestimation) remains open.
Response (1): Thank you for raising this point. The basic premise of this concern is that changing the cost function for implementing strategic slowing can reproduce our empirical findings, so the alternative hypothesis that we aimed to refute in the paper remains possible. At least, it could co-exist with our hypothesis of mass underestimation. In the revision, we show that changing the cost function only, as suggested here, cannot produce the behavioral patterns observed in microgravity.
As suggested, we modified the relative weighting of the state and control cost matrices (i.e., Q and R in the cost function Eq 15) without considering mass underestimation. While this cost function scaling can decrease peak velocity – a hallmark of strategic slowing – it also inevitably leads to later peak timings. This is opposite to our robust findings: the taikonauts consistently “advanced” their peak velocity and peak acceleration in time. Note, these model simulation patterns have also been shown in Crevecoeur et al. (2010), the paper mentioned by the reviewer (see their Figure 7B).
We systematically changed the ratio between the state and control weight matrices in the simulation, as suggested. We divided Q and multiplied R by the same cost-function scaling factor α, as defined in Crevecoeur et al. (2010). This adjustment models a shift in movement strategy in microgravity, and we tested a wide range of α to examine a reasonable parameter space. Simulation results for α = 3 and α = 0.3 are shown in Figure 1—figure supplement 2 and Figure 1—figure supplement 3, respectively. As expected, with α = 3 (higher control effort penalty), peak velocities and accelerations are reduced, but their timing is delayed. Conversely, with α = 0.3, both peak amplitude and timing increase. Hence, changing the cost function to implement a conservative strategy cannot produce the kinematic pattern observed in microgravity, which is a combination of movement slowing and peak timing advance.
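A minimal sketch of how such a Q/R scaling can be set up is shown below, assuming a deterministic finite-horizon LQR reach of a single effective mass; the manuscript's model additionally includes a two-link arm, signal-dependent noise, and state estimation, so the numbers here are purely illustrative.

```python
# Sketch of the Q/R cost-scaling manipulation, assuming a deterministic finite-horizon
# LQR reach of a single effective mass. Weights, mass, and horizon are illustrative.
import numpy as np

def simulate_reach(alpha: float, m: float = 2.0, dt: float = 0.01, T: int = 80,
                   target: float = 0.10):
    """Return peak speed (m/s) and its time (s) for a reach to `target` metres."""
    A = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
    B = np.array([[0.0], [dt / m]])            # control: force on the effective mass
    Q = np.diag([1e3, 1.0]) / alpha            # state (error) cost, divided by alpha
    R = np.array([[1e-2]]) * alpha             # control cost, multiplied by alpha
    Qf = np.diag([1e6, 1e4])                   # terminal cost on position/velocity error

    # Backward Riccati recursion for the time-varying feedback gains.
    S, gains = Qf, []
    for _ in range(T):
        G = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A - B @ G)
        gains.append(G)
    gains = gains[::-1]

    # Forward simulation from rest toward the target.
    x = np.array([[0.0], [0.0]])
    speeds = []
    for k in range(T):
        e = x - np.array([[target], [0.0]])    # error relative to the target state
        u = -gains[k] @ e
        x = A @ x + B @ u
        speeds.append(float(x[1, 0]))
    speeds = np.array(speeds)
    return speeds.max(), (speeds.argmax() + 1) * dt

for alpha in (1.0, 3.0):
    v_peak, t_peak = simulate_reach(alpha)
    print(f"alpha = {alpha}: peak speed = {v_peak:.3f} m/s at t = {t_peak * 1e3:.0f} ms")
```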
Therefore, we conclude that a change in optimal control strategy alone is insufficient to explain our empirical findings. Logically speaking, we cannot refute the possibility of strategic slowing, which can still exist on top of the mass underestimation we proposed here. However, our data does not support its role in explaining the slowing of goal-directed hand reaching in microgravity. We have added these analyses to the Supplementary Materials and expanded the Discussion to address this point.
(2) The main strength of the article is the presence of directional effects expected under the hypothesis of mass estimation error. However, the article lacks a clear demonstration of such an effect: indeed, although there appears to be a significant effect of direction, I was not sure that this effect matched the model's predictions. A directional effect is not sufficient because the model makes clear quantitative predictions about how this effect should vary across directions. In the absence of a quantitative match between the model and the data, the authors' claims regarding the role of misestimating the effective mass remain unsupported.
Response (2): First, we have to clarify that our study does not aim to quantitatively fit the observed hand trajectories. The two-link arm model simulates an ideal case of moving a point mass (effective mass) on a horizontal plane without friction (Todorov, 2004; 2005). In contrast, in the experiment, participants moved their hand on a tabletop without vertical arm support, so the movement was not strictly planar and was affected by friction. Thus, this kind of model can only illustrate qualitative differences between conditions, as in the majority of similar modeling studies (e.g., Shadmehr et al., 2016). In our study, qualitative simulation means the model is intended to reproduce the directional differences between conditions—not exact numeric values—in key kinematic measures. Specifically, it should capture how the peak velocity and acceleration amplitudes and their timings differ between normal gravity and microgravity (particularly under the mass-underestimation assumption).
Second, the reviewer rightfully pointed out that the directional effect is essential for our theorization of the importance of mass underestimation. However, the directional effect has two aspects, which were not clearly presented in our original manuscript. We now clarify both here and in the revision. The first aspect is that key kinematic variables (peak velocity/acceleration and their timing) are affected by movement direction, even before any potential microgravity effect. This is shown by the ranking order of directions for these variables (Figure 1C-H). The direction-dependent ranking, confirmed by pre-flight data, indicates that effective mass is a determining factor for reaching kinematics, which motivated us to study its role in eliciting movement slowing in space. This was what our original manuscript emphasized and clearly presented.
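For reference, the direction dependence enters through the usual definition of effective mass at the endpoint (a standard formulation; the manuscript may parametrize it differently):

\[
m_{\mathrm{eff}}(\mathbf{u}) = \left[\mathbf{u}^{\mathsf{T}} J(\theta)\, M(\theta)^{-1} J(\theta)^{\mathsf{T}} \mathbf{u}\right]^{-1},
\]

where M(θ) is the joint-space inertia matrix, J(θ) the hand Jacobian, and u a unit vector along the reach direction; its anisotropy is what distinguishes the 45°, 90°, and 135° targets.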
The second aspect is that the hypothetical mass underestimation might also differentially affect movements in different directions. This was not clearly presented in the original manuscript. However, we would not expect a quantitative match between model predictions and empirical data, for the reasons mentioned above. We now show this directional ranking in microgravity-elicited kinematic changes in both model simulations and empirical data. The overall trend is that the microgravity effect indeed differs between directions, and the model predictions and the data showed a reasonable qualitative match (Author response image 1 below).
Shown in Author response image 1, we found that for amplitude changes (Δ peak speed, Δ peak acceleration) both the model and the mean of empirical data show the same directional ordering (45° > 90° > 135°) in pre-in and post-in comparisons. For timing (Δ peak-speed time, Δ peak-acceleration time), which we consider the most diagnostic, the same directional ranking was observed. We only found one deviation, i.e., the predicted sign (earlier peaks) was confirmed at 90° and 135°, but not at 45°. As discussed in Response (6), the absence of timing advance at 45° may reflect limitations of our simplified model, which did not consider that the 45° direction is essentially a single-joint reach. Taken together, the directional pattern is largely consistent with the model predictions based on mass underestimation. The model successfully reproduces the directional ordering of amplitude measures -- peak velocity and peak acceleration. It also captures the sign of the timing changes in two out of the three directions. We added these new analysis results in the revision and expanded Discussion accordingly.
The details of our analysis on directional effects: We compared the model predictions (Author response image 1, left) with the experimental data (Author response image 1, right) across the three tested directions (45°, 90°, 135°). In the experimental data panels, both Δ(pre-in) (solid bars) and Δ(post-in) (semi-transparent bars) with standard error are shown. The directional trends are remarkably similar between the model prediction and the actual data. The post-in comparison is less aligned with the model prediction; we postulate that the incomplete after-flight recovery (i.e., post data had not returned to pre-flight baselines) might obscure the microgravity effect. Incomplete recovery has also been shown in our original manuscript: peak speed and peak acceleration did not fully recover in post-flight sessions when compared to pre-flight sessions. To further quantify the correspondence between model and data, we performed repeated-measures correlation (rm-corr) analyses. We found significant within-subject correlations for three of the four metrics. For pre-in, Δ peak speed time (r_rm = 0.627, t(23) = 3.858, p < 0.001), Δ peak acceleration time (r_rm = 0.591, t(23) = 3.513, p = 0.002), and Δ peak acceleration (r_rm = 0.573, t(23) = 3.351, p = 0.003) were significant, whereas Δ peak speed was not (r_rm = 0.334, t(23) = 1.696, p = 0.103). These results thus show that the directional effect, as predicted by our model, is observed both before spaceflight and in spaceflight (the pre-in comparison).
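For readers unfamiliar with the repeated-measures correlation used above, a minimal sketch using the pingouin implementation is shown below; the data frame, column names, and values are entirely illustrative.

```python
# Sketch of a repeated-measures correlation (rm-corr) between model-predicted and
# observed per-direction changes; assumes the pingouin package. Values are made up.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "participant": ["s1"] * 3 + ["s2"] * 3 + ["s3"] * 3,
    "direction":   [45, 90, 135] * 3,
    "model_delta": [8.0, 15.0, 25.0] * 3,                                 # model prediction
    "data_delta":  [-5.0, 12.0, 20.0, 2.0, 10.0, 18.0, 0.0, 14.0, 22.0],  # observed change
})

res = pg.rm_corr(data=df, x="model_delta", y="data_delta", subject="participant")
print(res[["r", "dof", "pval"]])
```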
Author response image 1.
Directional comparison between model predictions and experimental data across the three reach directions (45°, 90°, 135°). Left: model outputs. Right: experimental data shown as Δ relative to the in-flight session; solid bars = Δ(in − pre) and semi-transparent bars = Δ(in − post). Colors encode direction consistently across panels (e.g., 45° = darker hue, 90° = medium, 135° = lighter/orange). Panels (clockwise from top-left): Δ peak speed (cm/s), Δ peak speed time (ms), Δ peak acceleration time (ms), and Δ peak acceleration (cm/s²). Bars are group means; error bars denote standard error across participants.
Citations:
Todorov, E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7(9), 907.
Todorov, E. (2005). Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system. Neural Computation, 17(5), 1084–1108.
Shadmehr, R., Huang, H. J., & Ahmed, A. A. (2016). A Representation of Effort in Decision-Making and Motor Control. Current Biology: CB, 26(14), 1929–1934.
In general, both the hypotheses of slowing motion (out of caution) and misestimating mass have been put forward in the past, and the added value of this article lies in demonstrating that the effect depended on direction. However, (1) a conservative strategy with a different cost function can also explain the data, and (2) the quantitative match between the directional effect and the model's predictions has not been established.
We agree that both hypotheses have been put forward before; however, they are competing hypotheses that have not been resolved. Furthermore, the mass underestimation hypothesis is a conjecture without any solid evidence; previous reports on mass underestimation of objects cannot directly translate to underestimation of the body. As detailed in our responses above, we have shown that a conservative strategy implemented via a different cost function cannot reproduce the key findings in our dataset, thereby supporting the alternative hypothesis of mass underestimation. Moreover, we found qualitative agreement between the model predictions and the experimental data in terms of directional effects, which further strengthens our interpretation.
Specific points:
(1) I noted a lack of presentation of raw kinematic traces, which would be necessary to convince me that the directional effect was related to effective mass as stated.
Response (3): We are happy to include exemplary speed and acceleration trajectories. Kinematic profiles from one example participant are shown in Figure 2—figure supplement 6.
(2) The presentation and justification of the model require substantial improvement; the reason for their presence in the supplementary material is unclear, as there is space to present the modelling work in detail in the main text. Regarding the model, some choices require justification: for example, why did the authors ignore the nonlinear Coriolis and centripetal terms?
Response (4): Great suggestion. In the revision, we have moved the model into the main text and added further justification for using this simple model.
We initially omitted the nonlinear Coriolis and centripetal terms in order to start with a minimal model. Importantly, excluding these terms does not affect the model's main conclusions. In the revision we added simulations that explicitly include these terms. The full explanation and simulations are provided in Supplementary Note 2 (this time we had to put it in the Supplementary Material to reduce the text devoted to the model). More explanations can also be found in our response to Reviewer 2 (response (6)). The results indicate that, although these velocity-dependent forces show some directional anisotropy, their contribution is substantially smaller relative to that of the included inertial component; specifically, they have only a negligible impact on the predicted peak amplitudes and peak times.
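For reference, the velocity-dependent terms in question enter the standard planar two-link (shoulder-elbow) dynamics as follows (a textbook form, not necessarily the exact parametrization used in the manuscript):

\[
\tau = M(\theta)\,\ddot{\theta} + C(\theta,\dot{\theta}), \qquad
M(\theta) =
\begin{pmatrix}
a_1 + 2a_2\cos\theta_2 & a_3 + a_2\cos\theta_2\\
a_3 + a_2\cos\theta_2 & a_3
\end{pmatrix}, \qquad
C(\theta,\dot{\theta}) = a_2\sin\theta_2
\begin{pmatrix}
-\dot{\theta}_2\,(2\dot{\theta}_1 + \dot{\theta}_2)\\
\dot{\theta}_1^{\,2}
\end{pmatrix},
\]

with \(a_1 = I_1 + I_2 + m_2 l_1^2\), \(a_2 = m_2 l_1 s_2\), \(a_3 = I_2\), where \(l_1\) is the upper-arm length, \(s_2\) the distance from the elbow to the forearm centre of mass, and \(m_i, I_i\) the segment masses and inertias. The C term collects the Coriolis and centripetal torques that the minimal model omitted and the supplementary simulations add back.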
(3) The increase in the proportion of trials with subcomponents is interesting, but the explanatory power of this observation is limited, as the initial percentage was already quite high (from 60-70% during the initial study to 70-85% in flight). This suggests that the potential effect of effective mass only explains a small increase in a trend already present in the initial study. A more critical assessment of this result is warranted.
Response (5): Thank you for your thoughtful comment. You are correct that the increase in the percentage of trials with submovements is modest, but a more critical change was observed in the timing between submovement peaks—specifically, the inter-peak interval (IPI). These intervals became longer during flight. Taken together with the percentage increase, the submovement changes significantly predicted the increase in movement duration, as shown by our linear mixed-effects model.
Reviewer #2 (Public review):
This study explores the underlying causes of the generalized movement slowness observed in astronauts in weightlessness compared to their performance on Earth. The authors argue that this movement slowness stems from an underestimation of mass rather than a deliberate reduction in speed for enhanced stability and safety.
Overall, this is a fascinating and well-written work. The kinematic analysis is thorough and comprehensive. The design of the study is solid, the collected dataset is rare, and the model tends to add confidence to the proposed conclusions. That being said, I have several comments that could be addressed to consolidate interpretations and improve clarity.
Main comments:
(1) Mass underestimation
a) While this interpretation is supported by data and analyses, it is not clear whether this gives a complete picture of the underlying phenomena. The two hypotheses (i.e., mass underestimation vs deliberate speed reduction) can only be distinguished in terms of velocity/acceleration patterns, which should display specific changes during the flight with a mass underestimation. The experimental data generally shows the expected changes but for the 45° condition, no changes are observed during flight compared to the pre- and post-phases (Figure 4). In Figure 5E, only a change in the primary submovement peak velocity is observed for 45°, but this finding relies on a more involved decomposition procedure. It suggests that there is something specific about 45° (beyond its low effective mass). In such planar movements, 45° often corresponds to a movement which is close to single-joint, whereas 90° and 135° involve multi-joint movements. If so, the increased proportion of submovements in 90° and 135° could indicate that participants had more difficulties in coordinating multi-joint movements during flight. Besides inertia, Coriolis and centripetal effects may be non-negligible in such fast planar reaching (Hollerbach & Flash, Biol Cyber, 1982) and, interestingly, they would also be affected by a mass underestimation (thus, this is not necessarily incompatible with the author's view; yet predicting the effects of a mass underestimation on Coriolis/centripetal torques would require a two-link arm model). Overall, I found the discrepancy between the 45° direction and the other directions under-exploited in the current version of the article. In sum, could the corrective submovements be due to a misestimation of Coriolis/centripetal torques in the multi-joint dynamics (caused specifically -or not- by a mass underestimation)?
Response (6): Thank you for raising these important questions. We unpacked the paragraph into two concerns: 1) the possibility that misestimation of Coriolis and centripetal torques might lead to corrective submovements, and 2) the under-exploited weak effect in the 45° direction. These concerns are valid but addressable, and they do not change our general conclusions based on our empirical findings (see Supplementary Note 2: Coriolis and centripetal torques have minimal impact).
Possible explanation for the 45° discrepancy
We agree with the reviewer that the 45° direction likely involves more single-joint (elbow-dominant) movement, whereas the 90° and 135° directions require greater multi-joint (elbow + shoulder) coordination. This is particularly relevant when the workspace is near body midline (e.g., Haggard & Richardson, 1995), as is the case in our experimental setup. To demonstrate this, we examined the curvature of the hand trajectories across directions. Using cumulative curvature (positive = counterclockwise), we obtained average values of 6.484° ± 0.841°, 1.539° ± 0.462°, and 2.819° ± 0.538° for the 45°, 90°, and 135° directions, respectively. The significantly larger curvature in the 45° condition suggests that these movements deviate more from a straight-line path, a hallmark of more elbow-dominant movements.
Importantly, this curvature pattern was present in both the pre-flight and in-flight phases, indicating that it is a general movement characteristic rather than a microgravity-induced effect. Thus, the 45° reaches are less suitable for modeling with a simplified two-link arm model compared to the other two directions. We believe this is the main reason why the model predictions based on effective mass become less consistent with the empirical data for the 45° direction.
We have now incorporated this new analysis in the Results and discussed it in the revised Discussion.
Citation: Haggard, P., Hutchinson, K., & Stein, J. (1995). Patterns of coordinated multi-joint movement. Experimental Brain Research, 107(2), 254-266.
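As a rough illustration of what a cumulative-curvature measure involves (one plausible implementation that sums signed turning angles, with positive = counterclockwise as in the text; the authors' exact definition may differ):

```python
import numpy as np

def cumulative_curvature_deg(x, y):
    """Sum of signed turning angles along a 2D path (positive = counterclockwise).

    x, y: 1-D arrays of hand positions sampled along one reach.
    Returns the cumulative turning angle in degrees; larger magnitudes indicate
    a path that deviates more from a straight line.
    """
    dx, dy = np.diff(x), np.diff(y)
    headings = np.arctan2(dy, dx)                  # direction of each segment
    turns = np.diff(headings)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return np.degrees(np.sum(turns))

# Example: a gently curving counterclockwise path
t = np.linspace(0, 1, 50)
x = t
y = 0.05 * t**2
print(cumulative_curvature_deg(x, y))  # small positive value
```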
b) Additionally, since the taikonauts are tested after 2 or 3 weeks in flight, one could also assume that neuromuscular deconditioning explains (at least in part) the general decrease in movement speed. Can the authors explain how to rule out this alternative interpretation? For instance, weaker muscles could account for slower movements within a classical time-effort trade-off (as more neural effort would be needed to generate a similar amount of muscle force, thereby suggesting a purposive slowing down of movement). Therefore, could the observed results (slowing down + more submovements) be explained by some neuromuscular deconditioning combined with a difficulty in coordinating multi-joint movements in weightlessness (due to a misestimation or Coriolis/centripetal torques) provide an alternative explanation for the results?
Response (7): Neuromuscular deconditioning is indeed a spaceflight effect; thank you for bringing this up, as we omitted a discussion of this confound in our original manuscript. A prolonged stay in microgravity can lead to a reduction of muscle strength, but this is mostly limited to the lower limbs. For example, a recent well-designed large-sample study has shown that while lower-leg muscles showed significant strength reductions, no change in mean upper-body strength was found (Scott et al., 2023), consistent with previous propositions that muscle weakness is less pronounced for upper-limb muscles than for postural and lower-limb muscles (Tesch et al., 2005). Furthermore, muscle weakness is unlikely to play a major role here since our reaching task involves small movements (~12 cm) with joint torques on the order of ~2 N·m. Of course, we cannot completely rule out a contribution of muscle weakness; we can only postulate, based on the task itself (12 cm reaches) and the systematic microgravity effects (the increase in submovements, the increase in inter-submovement intervals, and their significant prediction of movement slowing), that muscle weakness is unlikely to be a major contributor to the movement slowing.
The reviewer suggests that poor coordination in microgravity might contribute to the slowing and the additional submovements. This is also a possibility, but we did not find evidence to support it. First, there are no clear reports of poor coordination for simple upper-limb movements like the reaching task investigated here; note that reaching or aiming is one of the most studied tasks in astronauts. Second, we further analyzed our reaching trajectories and found no sign of increased curvature, a hallmark of poorly compensated Coriolis/centripetal torques, in our large collection of reaching movements. We probably have the largest dataset of reaching movements collected in microgravity thus far, given that we had 12 taikonauts and each of them performed about 480 to 840 reaching trials during their spaceflight. We believe the probability of a Type II error is quite low here.
Citation: Tesch, P. A., Berg, H. E., Bring, D., Evans, H. J., & LeBlanc, A. D. (2005). Effects of 17-day spaceflight on knee extensor muscle function and size. European journal of applied physiology, 93(4), 463-468.
Scott J, Feiveson A, English K, et al. Effects of exercise countermeasures on multisystem function in long duration spaceflight astronauts. npj Microgravity. 2023;9(11).
(2) Modelling
a) The model description should be improved as it is currently a mix of discrete time and continuous time formulations. Moreover, an infinite-horizon cost function is used, but I thought the authors used a finite-horizon formulation with the prefixed duration provided by the movement utility maximization framework of Shadmehr et al. (Curr Biol, 2016). Furthermore, was the mass underestimation reflected both in the utility model and the optimal control model? If so, did the authors really compute the feedback control gain with the underestimated mass but simulate the system with the real mass? This is important because the mass appears both in the utility framework and in the LQ framework. Given the current interpretations, the feedforward command is assumed to be erroneous, and the feedback command would allow for motor corrections. Therefore, it could be clarified whether the feedback command also misestimates the mass or not, which may affect its efficiency. For instance, if both feedforward and feedback motor commands are based on wrong internal models (e.g., due to the mass underestimation), one may wonder how the astronauts would execute accurate goal-directed movements.
b) The model seems to be deterministic in its current form (no motor and sensory noise). Since the framework developed by Todorov (2005) is used, sensorimotor noise could have been readily considered. One could also assume that motor and sensory noise increase in microgravity, and the model could inform on how microgravity affects the number of submovements or endpoint variance due to sensorimotor noise changes, for instance.
c) Finally, how does the model distinguish the feedforward and feedback components of the motor command that are discussed in the paper, given that the model only yields a feedback control law? Does 'feedforward' refer to the motor plan here (i.e., the prefixed duration and arguably the precomputed feedback gain)?
Response (8): We thank the reviewer for raising these important and technically insightful points regarding our modeling framework. We first clarify the structure of the model and key assumptions, and then address the specific questions in points (a)–(c) below.
We used Todorov’s (2005) stochastic optimal control method to compute a finite-horizon LQG policy under sensory noise and signal-dependent motor noise (state noise set to zero). The cost function is the standard finite-horizon quadratic cost, consisting of terminal penalties on position and velocity error plus a running effort cost (see details in the updated Methods). The resulting time-varying gains {L<sub>k</sub>, K<sub>k</sub>} correspond to the feedforward mapping and the feedback correction gain, respectively. The control law can be expressed as u<sub>k</sub> = L<sub>k</sub>x*<sub>k</sub> + K<sub>k</sub>(x̂<sub>k</sub> - x*<sub>k</sub>), where u<sub>k</sub> is the control input, x*<sub>k</sub> is the nominal planned state, x̂<sub>k</sub> is the estimated state, L<sub>k</sub> is the feedforward (nominal) control gain associated with the planned trajectory, and K<sub>k</sub> is the time-varying feedback gain that corrects deviations from the plan.
To define the motor plan for comparison with behavior, we simulate the deterministic open-loop trajectory by turning off noise and disabling feedback corrections, i.e., u<sub>k</sub> = L<sub>k</sub>x*<sub>k</sub>. In this framework, “feedforward” refers to this nominal motor plan. Thus, sensory and signal-dependent noise influence the computed policy (via the gains), but are not injected when generating the nominal trajectory. This mirrors the minimum-jerk practice used to obtain nominal kinematics in prior utility-based work (Shadmehr, 2016), while optimal control provides a more physiologically grounded nominal plan. In the revision, we have updated the equations, provided more modeling details, and moved the model description to the main text to reduce possible confusion.
In the implementation of the “mass underestimation” condition, the mass used to compute the policy is the underestimated mass (the internal estimate m̂, with m̂ < m, where m is the true mass), whereas the actual mass is used when simulating the feedforward trajectories. Corrective submovements are analyzed separately and are not required for the planning-deficit findings reported here.
Answers of the three specific questions:
a) We mistakenly wrote a continuous-time infinite-horizon cost function in our original manuscript, whereas our controller is actually implemented as a discrete-time finite-horizon LQG with a terminal cost, over a horizon set by the utility-based optimal movement duration T<sub>opt</sub>. The underestimated mass is used in both the utility model (to determine T<sub>opt</sub>) and in the control computation (i.e., internal model), while the true mass is used when simulating the movement. This mismatch captures the central idea of feedforward planning based on an incorrect internal model.
b) As described, our model includes signal-dependent motor noise and sensory noise, following Todorov (2005). We also evaluated whether increased noise levels in microgravity could account for the observed behavioral changes. Simulation results showed that increasing either source of noise did not alter the main conclusions or reverse the trends in our key metrics. Moreover, our experimental data showed no significant increase in endpoint variability in microgravity (see analyses and results in Figure 2—figure supplement 3 & 4), making it unlikely that increased sensorimotor noise alone accounts for the observed slowing and submovement changes.
c) In our framework, the time-varying gains {L<sub>k</sub>, K<sub>k</sub>} define the feedforward and feedback components of the control policy. While both gains are computed under a stochastic optimal control formulation (including noise), for comparison with behavior we simulate only the nominal feedforward plan, by turning off both noise and feedback, i.e., u<sub>k</sub> = L<sub>k</sub>x*<sub>k</sub>. This defines a deterministic open-loop trajectory, which we use to capture planning-level effects such as peak timing shifts under mass underestimation. Feedback corrections via the gains K<sub>k</sub> exist in the full model but are not involved in these specific analyses. We clarified this modeling choice and its behavioral relevance in the revised text.
We have updated the equations and moved the model description into the main text in the revised manuscript to avoid confusion.
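A minimal sketch of this scheme is shown below, under simplifying assumptions: a single point mass stands in for the directional effective mass, a deterministic finite-horizon LQR with a soft terminal penalty replaces the full LQG machinery (no sensory or signal-dependent noise, no Kalman filter), and the masses, horizon, and cost weights are illustrative values rather than the paper's. It is not the authors' code; it only demonstrates the core manipulation of computing the policy with an underestimated mass and then executing the nominal commands open-loop on the true-mass plant.

```python
import numpy as np

def lqr_gains(A, B, Q, R, QN, N):
    """Finite-horizon discrete LQR gains via backward Riccati recursion."""
    S = QN
    Ks = [None] * N
    for k in range(N - 1, -1, -1):
        K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        Ks[k] = K
        S = Q + A.T @ S @ (A - B @ K)
    return Ks

def point_mass_AB(m, dt):
    """Discretized point-mass dynamics: state [position, velocity], input = force."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt / m]])
    return A, B

dt, T = 0.01, 0.35                   # 10 ms steps, ~350 ms horizon (illustrative)
N = round(T / dt)
m_true = 2.0                         # assumed effective mass (kg), illustrative only
m_hat = 0.7 * m_true                 # underestimated internal-model mass

Q = np.zeros((2, 2))                 # no running state cost
R = np.array([[1e-5]])               # effort cost
QN = np.diag([1e4, 1e2])             # soft terminal penalties on position and velocity

# Policy computed with the (wrong) internal model...
A_hat, B_hat = point_mass_AB(m_hat, dt)
Ks = lqr_gains(A_hat, B_hat, Q, R, QN, N)

# ...nominal commands from rolling out the internal model, then applied
# open-loop to the true-mass plant (no feedback, no noise).
A_true, B_true = point_mass_AB(m_true, dt)
x_hat = np.array([[-0.12], [0.0]])   # start 12 cm from the target (target at 0)
x_true = x_hat.copy()
speeds = []
for k in range(N):
    u = -Ks[k] @ x_hat               # nominal (feedforward) command
    x_hat = A_hat @ x_hat + B_hat @ u
    x_true = A_true @ x_true + B_true @ u
    speeds.append(abs(float(x_true[1, 0])))

peak_ms = (np.argmax(speeds) + 1) * dt * 1000
print("peak speed %.3f m/s at %.0f ms" % (max(speeds), peak_ms))
print("end position error: %.1f mm" % (abs(float(x_true[0, 0])) * 1000))
```

With the internal model 30% lighter than the plant, the open-loop rollout undershoots the target and reaches a lower peak speed, which is the qualitative planning deficit the response describes; the soft terminal penalty also shows why the nominal solution can end with a small residual velocity.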
(3) Brevity of movements and speed-accuracy trade-off
The tested movements are much faster (average duration approx. 350 ms) than similar self-paced movements that have been studied in other works (e.g., Wang et al., J Neurophysiology, 2016; Berret et al., PLOS Comp Biol, 2021, where movements can last about 900-1000 ms). This is consistent with the instructions to reach quickly and accurately, in line with a speed-accuracy trade-off. Was this instruction given to highlight the inertial effects related to the arm's anisotropy? One may however, wonder if the same results would hold for slower self-paced movements (are they also with reduced speed compared to Earth performance?). Moreover, a few other important questions might need to be addressed for completeness: how to ensure that astronauts did remember this instruction during the flight? (could the control group move faster because they better remembered the instruction?). Did the taikonauts perform the experiment on their own during the flight, or did one taikonaut assume the role of the experimenter?
Response (9): Thanks for highlighting the brevity of movements in our experiment. Our intention in emphasizing fast movements was to rigorously test whether movement is indeed slowed down in microgravity. The observed prolongation of movement duration clearly shows that microgravity affects movement duration even when participants are pushed to move fast. The second reason for using fast movements is to highlight that feedforward control is affected in microgravity. Mass underestimation affects feedforward control in the first place, as shown by the microgravity-related changes in peak velocity/acceleration; slow movements would inevitably involve online corrections that might obscure the effect of mass underestimation. Note that movement slowing is observed not only in our speed-emphasized reaching task but also in whole-arm pointing in other astronaut studies (Berger, 1997; Sangals, 1999), which are cited in our paper. We thus believe these findings are generalizable.
Regarding the consistency of instructions: all experiments conducted in the Tiangong space station were monitored in real time by experimenters in the control center in Beijing. The task instructions were presented on the initial display of the data acquisition application, and ample reading time was allowed. All pre-, in-, and post-flight test sessions were administered by the same group of personnel with the same instructions. It is common for astronauts to serve as both participants and experimenters, and they were well trained for this role on the ground. Note that we ran multiple pre-flight test sessions to familiarize them with the task. All these rigorous measures were in place to obtain high-quality data. In the revision, we included these experimental details for readers who are not familiar with space studies and provided the rationale for emphasizing fast movements.
Citations:
Berger, M., Mescheriakov, S., Molokanova, E., Lechner-Steinleitner, S., Seguer, N., & Kozlovskaya, I. (1997). Pointing arm movements in short- and long-term spaceflights. Aviation, Space, and Environmental Medicine, 68(9), 781–787.
Sangals, J., Heuer, H., Manzey, D., & Lorenz, B. (1999). Changed visuomotor transformations during and after prolonged microgravity. Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale, 129(3), 378–390.
(4) No learning effect
This is a surprising effect, as mentioned by the authors. Other studies conducted in microgravity have indeed revealed an optimal adaptation of motor patterns in a few dozen trials (e.g., Gaveau et al., eLife, 2016). Perhaps the difference is again related to single-joint versus multi-joint movements. This should be better discussed given the impact of this claim. Typically, why would a "sensory bias of bodily property" persist in microgravity and be a "fundamental constraint of the sensorimotor system"?
Response (10): We believe that the presence or absence of adaptation between our study and Gaveau et al.’s study cannot be simply attributed to single-joint versus multi-joint movements. Their adaptation concerned incorporating microgravity into movement control to minimize effort, whereas ours concerned accurately perceiving body mass. Gaveau et al.’s task involved large-amplitude vertical reaching, a scenario in which gravity strongly affects joint torques and movement execution. Thus, adaptation to microgravity can lead to better execution, providing a strong incentive for learning. By contrast, our task consisted of small-amplitude horizontal movements, where the gravitational influence on biomechanics is minimal.
More importantly, we believe the lack of adaptation for mass underestimation is not totally surprising. When an inertial change is perceived (such as an extra weight attached to the forearm, as in previous motor adaptation studies), people can adapt their reaching within tens of trials. In that case, sensory cues are veridical, as they correctly signal the inertial perturbation. However, in microgravity, reduced gravitational pull and proprioceptive inputs constantly inform the controller that the body mass is less than its actual magnitude. In other words, sensory cues in space are misleading for estimating body mass. The resulting sensory bias prevents the sensorimotor system from adapting. Our initial explanation on this matter was too brief; we expanded it in the revised Discussion.
Reviewer #3 (Public review):
Summary:
The authors describe an interesting study of arm movements carried out in weightlessness after a prolonged exposure to the so-called microgravity conditions of orbital spaceflight. Subjects performed radial point-to-point motions of the fingertip on a touch pad. The authors note a reduction in movement speed in weightlessness, which they hypothesize could be due to either an overall strategy of lowering movement speed to better accommodate the instability of the body in weightlessness or an underestimation of body mass. They conclude for the latter, mainly based on two effects. One, slowing in weightlessness is greater for movement directions with higher effective mass at the end effector of the arm. Two, they present evidence for an increased number of corrective submovements in weightlessness. They contend that this provides conclusive evidence to accept the hypothesis of an underestimation of body mass.
Strengths:
In my opinion, the study provides a valuable contribution, the theoretical aspects are well presented through simulations, the statistical analyses are meticulous, the applicable literature is comprehensively considered and cited, and the manuscript is well written.
Weaknesses:
Nevertheless, I am of the opinion that the interpretation of the observations leaves room for other possible explanations of the observed phenomenon, thus weakening the strength of the arguments.
First, I would like to point out an apparent (at least to me) divergence between the predictions and the observed data. Figures 1 and S1 show that the difference between predicted values for the 3 movement directions is almost linear, with predictions for 90º midway between predictions for 45º and 135º. The effective mass at 90º appears to be much closer to that of 45º than to that of 135º (Figure S1A). But the data shown in Figure 2 and Figure 3 indicate that movements at 90º and 135º are grouped together in terms of reaction time, movement duration, and peak acceleration, while both differ significantly from those values for movements at 45º.
Furthermore, in Figure 4, the change in peak acceleration time and relative time to peak acceleration between 1g and 0g appears to be greater for 90º than for 135º, which appears to me to be at least superficially in contradiction with the predictions from Figure S1. If the effective mass is the key parameter, wouldn't one expect as much difference between 90º and 135º as between 90º and 45º? It is true that peak speed (Figure 3B) and peak speed time (Figure 4B) appear to follow the ordering according to effective mass, but is there a mathematical explanation as to why the ordering is respected for velocity but not acceleration? These inconsistencies weaken the author's conclusions and should be addressed.
Response (11): Indeed, the model predicts an almost equal separation between 45° and 90° and between 90° and 135°, while the data indicate that the spacing between 45° and 90° is much smaller than that between 90° and 135°. We do not regard this divergence as evidence undermining our main conclusion because 1) the model is a simplification of the actual situation; for example, it simulates an ideal case of moving a point mass (the effective mass) without friction and without considering Coriolis and centripetal torques; and 2) our study does not make quantitative predictions of all the key kinematic measures, which would require model fitting, parameter estimation, and posture-constrained reaching experiments; instead, it uses well-established (though simplified) models to qualitatively predict the overall behavioral pattern we would expect to observe. For this purpose, our results are well in line with our expectations: although we did not find equal spacing between direction conditions, we do confirm that the key kinematic measures (Figure 2 and Figure 3, as questioned) show consistent directional trends between model predictions and empirical data. We added new analysis results on this matter: the directional effect we observed (how the key measures changed in microgravity across direction conditions) is significantly correlated with our model predictions in most cases. Please see our detailed Response (2) above. These results are also added in the revision.
We also highlight in the revision that our modeling is not intended to quantitatively predict reaching behaviors in space, but to qualitatively show how mass underestimation, rather than a conservative control strategy, leads to divergent predictions about key kinematic measures of fast reaching.
Then, to strengthen the conclusions, I feel that the following points would need to be addressed:
(1) The authors model the movement control through equations that derive the input control variable in terms of the force acting on the hand and treat the arm as a second-order low-pass filter (Equation 13). Underestimation of the mass in the computation of a feedforward command would lead to a lower-than-expected displacement to that command. But it is not clear if and how the authors account for a potential modification of the time constants of the 2nd order system. The CNS does not effectuate movements with pure torque generators. Muscles have elastic properties that depend on their tonic excitation level, reflex feedback, and other parameters. Indeed, Fisk et al. showed variations of movement characteristics consistent with lower muscle tone, lower bandwidth, and lower damping ratio in 0g compared to 1g. Could the variations in the response to the initial feedforward command be explained by a misrepresentation of the limbs' damping and natural frequency, leading to greater uncertainty about the consequences of the initial command? This would still be an argument for unadapted feedforward control of the movement, leading to the need for more corrective movements. But it would not necessarily reflect an underestimation of body mass.
Fisk, J. O. H. N., Lackner, J. R., & DiZio, P. A. U. L. (1993). Gravitoinertial force level influences arm movement control. Journal of neurophysiology, 69(2), 504-511.
Response (12): We agree that muscle properties, tonic excitation level, and proprioception-mediated reflexes all contribute to reaching control. The Fisk et al. (1993) study indeed showed that arm movement kinematics change, possibly owing to lower muscle tone and/or damping. However, reduced muscle damping and reduced spindle activity are more likely to affect feedback-based movements: in Fisk et al.'s study, people performed continuous arm movements with eyes closed, so their movements largely relied on proprioceptive control. Our major findings concern feedforward control, i.e., the reduced and "advanced" peak velocity/acceleration in discrete, ballistic reaching movements. Note that the peak acceleration occurs as early as approximately 90-100 ms into the movement, clearly showing that feedforward control is affected, a different effect from Fisk et al.'s findings. It is unlikely that people "advanced" their peak velocity/acceleration because they anticipated the need for more corrective movements later. Thus, underestimation of body mass remains the most plausible explanation.
(2) The movements were measured by having the subjects slide their finger on the surface of a touch screen. In weightlessness, the implications of this contact are expected to be quite different than those on the ground. In weightlessness, the taikonauts would need to actively press downward to maintain contact with the screen, while on Earth, gravity will do the work. The tangential forces that resist movement due to friction might therefore be different in 0g. This could be particularly relevant given that the effect of friction would interact with the limb in a direction-dependent fashion, given the anisotropy of the equivalent mass at the fingertip evoked by the authors. Is there some way to discount or control for these potential effects?
Response (13): We agree that friction might play a role here, but normal interaction with a touch screen typically involves friction forces between 0.1 N and 0.5 N (e.g., Ayyildiz et al., 2018), and we believe the directional variation of this friction is even smaller than 0.1 N. This is very small compared to the force used to accelerate the arm for the reaching movement (10-15 N), so friction anisotropy is unlikely to explain our data. Since other readers might share this concern, we added some discussion of the possible effect of friction.
Citation: Ayyildiz M, Scaraggi M, Sirin O, Basdogan C, Persson BNJ. Contact mechanics between the human finger and a touchscreen under electroadhesion. Proc Natl Acad Sci U S A. 2018 Dec 11;115(50):12668-12673.
(3) The carefully crafted modelling of the limb neglects, nevertheless, the potential instability of the base of the arm. While the taikonauts were able to use their left arm to stabilize their bodies, it is not clear to what extent active stabilization with the contralateral limb can reproduce the stability of the human body seated in a chair in Earth gravity. Unintended motion of the shoulder could account for a smaller-than-expected displacement of the hand in response to the initial feedforward command and/or greater propensity for errors (with a greater need for corrective submovements) in 0g. The direction of movement with respect to the anchoring point could lead to the dependence of the observed effects on movement direction. Could this be tested in some way, e.g., by testing subjects on the ground while standing on an unstable base of support or sitting on a swing, with the same requirement to stabilize the torso using the contralateral arm?
Response (14): Body stabilization is always a challenge for human movement studies in space. We minimized its potential confounding effects by using left-hand grasping and foot straps for postural support throughout the experiment. We think shoulder stability is an unlikely explanation because unexpected shoulder instability should not affect the feedforward (early) part of the ballistic reaching movement: the reduced peak acceleration and its early peak were observed at about 90-100 ms after movement initiation, which is too early to be explained by such a stability issue. This argument is now mentioned in the revised Discussion.
The arguments for an underestimation of body mass would be strengthened if the authors could address these points in some way.
Recommendations for the authors:
Reviewing Editor Comments:
General recommendation
Overall, the reviewers agreed this is an interesting study with an original and strong approach. Nonetheless, there were significant weaknesses identified. The main criticism is that there is insufficient evidence for the claim that the movement slowing is due to mass underestimation, rather than other explanations for the increased feedback corrections. To bolster this claim, the reviewers have requested a deeper quantitative analysis of the directional effect and comparison to model predictions. They have also suggested that a 2-dof arm model could be used to predict how mass underestimation would influence multi-joint kinematics, and this should be compared to the data. Alternatively, or additionally, a control experiment could be performed (described in the reviews). We do realize that some of these options may not be feasible or practical. Ultimately, we leave it to you to determine how best to strengthen and solidify the argument for mass underestimation, rather than other causes.
As an alternative approach, you could consider tempering the claim regarding mass underestimation and focus more on the result that slower movements in microgravity are not simply a feedforward rescaling of the movement trajectories but, rather, involve greater feedback corrections. In this case, the reviewers feel it would still be critical to explain and discuss potential reasons for the corrections beyond mass underestimation.
We hope that these points are addressable, either with new analyses, experiments, or with a tempering of the claims. Addressing these points would help improve the eLife assessment.
Reviewer #1 (Recommendations for the authors):
(1) Move model descriptions to the main text to present modelling choices in more detail
Response (15): Thank you for the suggestion. We have moved the model descriptions to the main text to present the modeling choices in more detail and to allow readers to better cross-reference the analyses.
(2) Perform quantitative comparisons of the directional effect with the model's predictions, and add raw kinematic traces to illustrate the effect in more detail.
Response (16): Thanks for the suggestion. We have added a raw kinematics figure from a representative participant; please refer to Response (2) above for the comparisons of the directional effect.
(3) Explore the effect of varying cost parameters in addition to mass estimation error to estimate the proportion of data explained by the underestimation hypothesis.
Response (17): Thank you for the suggestion. This has already been done—please see Response (1) above.
Reviewer #2 (Recommendations for the authors):
Minor comments:
(1) It must be justified early on why reaction times are being analyzed in this work. I understood later that it is to rule out any global slowing down of behavioral responses in microgravity.
Response (18): Exactly; the RT results are informative about the absence of a global slowing down. Contrary to the conservative-strategy hypothesis, taikonauts did not show generalized slowing; they actually had faster reaction times during spaceflight, incompatible with a generalized slowing strategy. Thanks for pointing this out; we now justify this early in the text.
(2) Since the results are presented before the methods, I suggest stressing from the beginning that the reaching task is performed on a tablet and mentioning the instructions given to the participants, to improve the reading experience. The "beep" and "no beep" conditions also arise without obvious justification while reading the paper.
Response (19): Great suggestions. We now provide some experimental details and rationale at the beginning of the Results.
(3) Figure 1C: The vel profiles are not returning to 0 at the end, why? Is it because the feedback gain is computed based on the underestimated mass or because a feedforward controller is applied here? Is it compatible with the experimental velocity traces?
Response (20): Figure 1C shows the forward simulation under the optimal control policy. In our LQG formulation the terminal velocity is softly penalized (finite weight) rather than hard-constrained to zero; with a fixed horizon, the optimal solution can therefore end with a small residual velocity.
In the behavioral data, the hand does come to rest: this is achieved by corrective submovements during the homing phase.
(4) Left-skewed -> I believe this is right-skewed since the peak velocity is earlier.
Response (21): Yes, it should be right-skewed; thanks for pointing that out.
(5) What was the acquisition frequency of the positional data points? (on the tablet).
Response (22): The sampling frequency is 100 Hz. Thanks for pointing that out; we’ve added this information to the Methods.
(6) Figure S1. The planned duration seems to be longer than in the experiment (it is more around 500 ms for the 135-degree direction in simulation versus less than 400 ms in the experiment). Why?
Response (23): We apologize for a coding error that inadvertently multiplied the body-mass parameter by an extra factor, making the simulated mass too high. We have corrected the code, rerun the simulations, and updated Figures 1 and S1; all qualitative trends remain unchanged, and the revised movement durations (≈300–400 ms) are closer to the experimental values.
(7) After Equation 13: "The control law is given by". This is not the control law, which should have a feedback form u=K*x in the LQ framework. This is just the dynamic equations for the auxiliary state and the force. Please double-check the model description.
Response (24): Thank you for pointing this out. We have updated and refined all model equations and descriptions, and moved the model description from the Supplementary Materials to the main text; please see the revised manuscript.
Reviewer #3 (Recommendations for the authors):
(1) I have a concern about the interpretation of the anisotropic "equivalent mass". From my understanding, the equivalent mass would be what an external actor would feel as an equivalent inertia if pushing on the end effector from the outside. But the CNS does not push on the arm with a pure force generator acting at the hand to effectuate movement. It applies torque around the joints by applying forces across joints with muscles, causing the links of the arm to rotate around the joints. If the analysis is carried out in joint space, is the effective rotational inertia of the arm also anisotropic with respect to the direction of the movement of the hand? In other words, can the authors reassure me that the simulations are equivalent to an underestimation of the rotational inertia of the links when applied to the joints of the limb? It could be that these are mathematically the same; I have not delved into the mathematics to convince myself either way. But I would appreciate it if the authors could reassure me on this point.
Response (25): Thank you for raising this point. In our work, “equivalent mass” denotes the operational-space inertia projected along the hand-movement direction u, computed as m<sub>u</sub> = [û<sup>T</sup> J M<sup>-1</sup> J<sup>T</sup> û]<sup>-1</sup>, where M is the joint-space inertia matrix, J is the hand Jacobian, and û is the unit vector along the movement direction. This formulation describes the effective mass perceived at the end effector along a given direction and is standard in operational-space control.
Although the motor command can be coded as either torque or force in the CNS, the actual execution is equivalent regardless of whether the command is specified as endpoint forces or joint torques, since force and torque are related by τ = J<sup>T</sup>F. For the small excursions investigated here, this makes the directional anisotropy in endpoint inertia consistent with the anisotropy of the effective joint-space inertia required to produce the same endpoint motion. Conceptually, therefore, our “mass underestimation” manipulation in operational space corresponds to underestimating the required joint-space inertia mapped through the Jacobian. Since our behavioral data are hand positions, using the operational-space representation is the most direct and appropriate way to model them.
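For concreteness, a small sketch of the operational-space effective-mass calculation for a planar two-link arm is given below; the segment parameters and posture are illustrative textbook values, not the values used in the manuscript:

```python
import numpy as np

def effective_mass(q1, q2, direction_deg,
                   m1=2.0, m2=1.5, l1=0.30, l2=0.33, r1=0.15, r2=0.16,
                   I1=0.025, I2=0.045):
    """Effective (operational-space) mass of a planar two-link arm along a
    hand-movement direction: m_u = 1 / (u^T J M^{-1} J^T u).

    Segment parameters are illustrative textbook values, not the paper's.
    q1, q2: shoulder and elbow angles (rad); direction_deg: hand direction (deg).
    """
    # Joint-space inertia matrix of a planar two-link arm
    a = I1 + I2 + m1 * r1**2 + m2 * (l1**2 + r2**2)
    b = m2 * l1 * r2
    c = I2 + m2 * r2**2
    M = np.array([[a + 2 * b * np.cos(q2), c + b * np.cos(q2)],
                  [c + b * np.cos(q2),     c]])
    # Hand Jacobian
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    u = np.array([np.cos(np.radians(direction_deg)),
                  np.sin(np.radians(direction_deg))])
    return 1.0 / (u @ J @ np.linalg.inv(M) @ J.T @ u)

# Example posture (shoulder 45 deg, elbow 90 deg); compare three hand directions
for d in (45, 90, 135):
    print(d, "deg:", round(effective_mass(np.radians(45), np.radians(90), d), 2), "kg")
```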
(2) I would also like to suggest one more level of analysis to test their hypothesis. The authors decomposed the movements into submovements and measured the prevalence of corrective submovements in weightlessness vs. normal gravity. The increase in corrective submovements is consistent with the hypothesis of a misestimation of limb mass, leading to an unexpectedly smaller displacement due to the initial feedforward command, leading to the need for corrections, leading to an increased overall movement duration. According to this hypothesis, however, the initial submovement, while resulting in a smaller than expected displacement, should have the same duration as the analogous movements performed on Earth. The authors could check this by analyzing the duration of the extracted initial submovements.
Response (26): We appreciate the reviewer’s suggestion regarding the analysis of the initial submovement duration. In our decomposition framework, each submovement is modeled as a symmetric log-normal (bell-shaped) component, such that the time to peak speed is always half of the component duration. Thus, the initial submovement duration is directly reflected in the initial submovement peak-speed time already reported in our original manuscript (Figure. 5F).
However, we respectfully disagree with the assumption that mass underestimation would necessarily yield the same submovement duration as on Earth. Under mass underestimation, the movement is effectively under-actuated, and the initial submovement can terminate prematurely, leading to a shorter duration. This is indeed what we observed in the data. Therefore, our reported metrics already address the reviewer's proposal and support the conclusion that mass underestimation reduces the initial submovement duration in microgravity. Per your suggestion, we have added one more sentence explaining to the reader that the initial submovement peak-speed time reflects the duration of the initial submovement.
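To illustrate the kind of decomposition being described (this sketch uses minimum-jerk-shaped components as a stand-in for the paper's bell-shaped submovement kernels, so time-to-peak is half the component duration by construction; it is not the authors' algorithm, and the synthetic profile and parameters are invented for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def mj_speed(t, t0, D, A):
    """Speed profile of a minimum-jerk submovement of amplitude A and duration D,
    starting at t0. Peak speed occurs at t0 + D/2, i.e., time-to-peak = D/2."""
    tau = np.clip((t - t0) / D, 0.0, 1.0)
    return (A / D) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

def two_submovements(t, t01, D1, A1, t02, D2, A2):
    return mj_speed(t, t01, D1, A1) + mj_speed(t, t02, D2, A2)

# Synthetic speed profile: a primary submovement plus a later corrective one
t = np.linspace(0, 0.6, 121)
v = two_submovements(t, 0.0, 0.30, 0.10, 0.25, 0.25, 0.02)
v += np.random.default_rng(0).normal(0, 0.002, t.size)   # measurement noise

p0 = [0.0, 0.25, 0.08, 0.2, 0.2, 0.03]                   # rough initial guess
lo = [0.0, 0.05, 0.0, 0.05, 0.05, 0.0]
hi = [0.3, 0.50, 0.3, 0.60, 0.50, 0.3]
popt, _ = curve_fit(two_submovements, t, v, p0=p0, bounds=(lo, hi))
t01, D1, A1, t02, D2, A2 = popt
print("primary submovement: duration %.0f ms, time-to-peak %.0f ms" % (D1 * 1e3, D1 * 500))
print("inter-peak interval: %.0f ms" % (((t02 + D2 / 2) - (t01 + D1 / 2)) * 1e3))
```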
Some additional minor suggestions:
(1) I believe that it is important to include the data from the control subjects, in some form, in the main article. Perhaps shading behind the main data from the taikonauts to show similarities or differences between groups. It is inconvenient to have to go to the supplementary material to compare the two groups, which is the main test of the experiment.
Response (27): Thank you for the suggestion. For all the core performance variables, the control group showed flat patterns, with no changes across test sessions at all. Thus, including these figures (together with null statistical results) in the main text would obscure our central message, especially given the expanded length of the revised manuscript (we added model details and new analysis results). Instead, following eLife’s format, we have reorganized the Supplementary Material so that each experimental figure has a corresponding supplementary figure showing the control data. This way, readers can quickly locate the control results and directly compare them with the experimental data, while keeping the main text focused.
(2) "Importantly, sensory estimate of bodily property in microgravity is biased but evaded from sensorimotor adaptation, calling for an extension of existing theories of motor learning." Perhaps "immune from" would be a better choice of words.
Response (28): Thanks for the suggestion, we edited our text accordingly.
(3) "First, typical reaching movement exhibits a symmetrical bell-shaped speed profile, which minimizes energy expenditure while maximizing accuracy according to optimal control principles (Todorov, 2004)." While Todorov's analysis is interesting and well accepted, it might be worthwhile citing the original source on the phenomenon of bell-shaped velocity profiles that minimize jerk (derivative of acceleration) and therefore, in some sense, maximize smoothness. Flash and Hogan, 1985.
Response (29): Thanks for the suggestion, we added the citation of minimum jerk.
(4) "Post-hoc analyses revealed slower reaction times for the 45° direction compared to both 90° (p < 0.001, d = 0.293) and 135° (p = 0.003, d = 0.284). Notably, reactions were faster during the in-flight phase compared to pre-flight (p = 0.037, d = 0.333), with no significant difference between in-flight and post-flight phases (p = 0.127)." What can one conclude from this?
Response (30): Although these decreases reached statistical significance, their magnitudes were small. The parallel pattern across groups suggests the effect is not driven by microgravity but is more plausibly a mild learning/practice effect. We now mention this in the Discussion.
(5) "In line with predictions, peak acceleration appeared significantly earlier in the 45° direction than other directions (45° vs. 90°, p < 0.001, d = 0.304; 45° vs. 135°, p < 0.001, d = 0.271)." Which predictions? Because the effective mass is greater at 45º? Could you clarify the prediction?
Response (31): We should be more specific here; thank you for raising this. The predictions are the ones about peak acceleration timing (shown in Fig. 1H). We now modified this sentence as:
“In line with model predictions (Figure 1H), ….”.
(6) Figure 2: Why do 45º movements have longer reaction times but shorter movement durations?
Response (32): We appreciate your careful reading of the results. We believe this is possibly due to flexible motor control across conditions and trials, i.e., people tend to move faster on trials where they react more slowly (longer reaction times). This is reflected in the across-direction comparisons (as spotted by the reviewer here) and also within and across participants: for both groups, we found a significant negative correlation between movement duration (MD) and reaction time (RT), both across and within individuals (Figure 2—figure supplement 5). This finding indicates that participants moved faster when their RT was slower, and vice versa. This flexible motor adjustment, likely due to the task requirement for rapid movements, remained consistent during spaceflight.
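A hedged sketch of how such within- and across-participant RT-MD correlations could be computed is below; the file and column names are hypothetical placeholders:

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data with columns: participant, rt, md
df = pd.read_csv("reaching_trials.csv")

def fisher_mean_r(rs):
    """Average Pearson correlations via Fisher z so no single participant dominates."""
    z = np.arctanh(np.clip(rs, -0.999, 0.999))
    return np.tanh(np.mean(z))

# Within-participant association: Pearson r per participant, then aggregated
per_subj = (df.groupby("participant")[["rt", "md"]]
              .apply(lambda g: g["rt"].corr(g["md"])))
print("within-participant mean r:", round(fisher_mean_r(per_subj.values), 3))

# Across participants: correlate each participant's mean RT with mean MD
means = df.groupby("participant")[["rt", "md"]].mean()
print("across-participant r:", round(means["rt"].corr(means["md"]), 3))
```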
Additionally, certain assignments teach you how to meet the expectations for professional writing in a given field. Depending on the class, you might be asked to write a lab report, a case study, a literary analysis, a business plan, or an account of a personal interview. You will need to learn and follow the standard conventions for those types of written products.
It's important to know the different ways of writing; otherwise we can make mistakes in writing assignments, not only in how we write but also in how we read and speak.
for the backup infrastructure configuration
Maybe we should mention something like "and on protected VMs"? But this section is about Infrastructure, so I'm not sure. Without it, though, it seems that DK is used only for the infrastructure.
Commands:
* /compact to summarise the chat (degrades quality significantly)
* /clear (clears the current conversation, so ensure CLAUDE.md and plan.md are complete)
* (find the rest in the folder in the Claude Code course)
run /compact to get a summary, then /clear the context entirely, and paste back in only what matters
A trick for when you need to carry over context fast.
“Build me an auth system”, as opposed to “Build email/password authentication using the existing User model, store sessions in Redis with 24-hour expiry, and add middleware that protects all routes under /api/protected.”
Example of a good auth prompt
Most of your writing assignments—from brief response papers to in-depth research projects—will depend on your understanding of course reading assignments or related readings you do on your own. And it is difficult, if not impossible, to write effectively about a text that you do not understand. Even when you do understand the reading, it can be hard to write about it if you do not feel personally engaged with the ideas discussed.
Sometimes it is difficult to understand texts or readings, but I help myself by doing my own research whenever I don't understand a sentence or a word.
When I was an undergraduate at the University of Florida, I didn’t understand that each academic discipline I took courses in to complete the requirements of my degree (history, philosophy, biology, math, political science, sociology, English) was a different discourse community. Each of these academic fields had their own goals, their own genres, their own writing conventions, their own formats for citing sources, and their own expectations for writing style. I thought each of the teachers I encountered in my undergraduate career just had their own personal preferences that all felt pretty random to me. I didn’t understand that each teacher was trying to act as a representative of the discourse community of their field.
Discipline is, and always will be, part of any professional or academic career. Personally, I think that if you want to be successful you have to be organized and disciplined. If I feel it's more challenging, it's because it will help me push myself and see what I'm capable of and how I handle situations under pressure.
As the commercial business of the harbour declined, so it would seem that its resources were turned to the leisure sector
GOOD QUOTE SLAYYY
There were two solutions to managing the clash between port and resort: segregation or incorporation
potentially good quote!
Guide literature is particularly prone to adopting this perspective since its market is the visitor population, for whom it supplies not only empirical data but also appropriate cultural images
SLAYYY look at the primary sources they use? The guide literature can be misleading, as it is trying to market a place to the public - it is more likely to highlight the booming tourist industry and beautiful landscape than the manky iron works and coal mines nearby, or the booming, noisy ports!
At the heart of this economy, and its physical interface with the town, was the harbour. During the nineteenth century, this remained an important and vital area of activity and the subject of regular maintenance and improvement
Links to the Mitskell article! It links in how often these resorts saw harbours and beaches utilised to aid each other: for some, the ports provided the funds to build big resorts and their amenities; for others, the popularity of the place boosted industry in the area!
from the dynamic South Wales industrial areas were beginning to make a mark, particularly during periods such as bank holidays
The development of leisure was closely linked to industrialisation! Industrialisation had led to a higher degree of disposable income, and organised agitation had led to laws being passed legally reducing work hours and providing 'bank holidays', meaning that the multitudes of industrial workers could now partake in the leisure activities previously reserved for the wealthy. Without industrialisation, we may argue, such seaside resorts would likely not have had the numbers to fuel their boom. Furthermore, while not all were ports, the development of such resorts, Blaney argues, was industrial in nature too (expand from page 1).
This left a good portion, indeed the majority, of the promontory on which the town sat undeveloped. There is little to suggest that the failure to establish an early railway link to the town curtailed expansion
This, I think, would go against other articles which stress the importance of the railway in the resorts' expansion
During the first half of the nineteenth century, the town acquired the essential package of ingredients for a fashionable resort: good indoor and outdoor bathing facilities, assembly rooms, a theatre, markets and shops stocked with fresh food and luxury products, a circulating library, formal promenades, a network of informal walks and excursions in the neighbouring environs and region, comfortable lodgings and residences, and a dedicated guidebook to inform visitors and structure their expectations
Would this class as urbanisation? It seems like it's been given the amenities of an industrial place?
Page 1: he seems to place the rise of seaside resorts as having many characteristics of the industrial revolution: they were new and transformative, with no real precedent; they had the capacity to convert a site into something entirely different; and they epitomised the essence of specialisation and differentiation of production and location.
So, all round, they were part of an 'integrated urban network'.
This article focuses on Tenby, which is in south-west Wales, on the north-western edge of the Bristol Channel.
Kevin Kelley on acts of kindness and the ability to receive them. via [[Peter Rukavina p]]
eLife Assessment
In this useful study, the authors conducted an impressive amount of atomistic simulations with a realistic asymmetric lipid bilayer to probe how the HIV-1 envelope glycoprotein (Env) transmembrane domain, cytoplasmic tail, and membrane environment influence ectodomain orientation and antibody epitope exposure. The simulations convincingly show that ectodomain motion is dominated by tilting relative to the membrane and explicitly demonstrate the role of membrane asymmetry in modulating the protein conformation and orientation. However, due to the qualitative nature of the conducted analyses, the evidence for the coupling between membrane-proximal regions and the antigenic surface is considered incomplete. With stronger integration of prior experimental and computational literature, this work has the potential to serve as a reference for how Env behaves in a realistic, glycosylated, membrane-embedded context.
Reviewer #1 (Public review):
Summary:
In the manuscript "Conformational Variability of HIV-1 Env Trimer and Viral Vulnerability", the authors study the fully glycosylated HIV-1 Env protein using an all-atom forcefield. The study combines long all-atom simulations of Env in a realistic asymmetric bilayer with careful data analysis, clarifying how the CT domain modulates the overall conformation of the Env ectodomain and characterizing different MPER-TMD conformations. The authors also carefully analyze the accessibility of different antibodies to the Env protein.
Strengths:
This paper is state-of-the-art, given the scale of the system and the sophistication of the methods. The biological question is important, the methodology is rigorous, and the results will interest a broad audience.
Weaknesses:
The manuscript lacks a discussion of previous studies. The authors should consider addressing or comparing their work with the following points:
(1) Tilting of the Env ectodomain has also been reported in previous experimental and theoretical work:
https://doi.org/10.1101/2025.03.26.645577
(2) A previous all-atom simulation study has characterized the conformational heterogeneity of the MPER-TMD domain:
https://doi.org/10.1021/jacs.5c15421
(3) Experimental studies have shown that MPER-directed antibodies recognize the prehairpin intermediate rather than the prefusion state:
https://doi.org/10.1073/pnas.1807259115
(4) How does the CT domain modulate the accessibility of these antibodies studied? The authors are in a strong position to compare their results with the following experimental study:
Reviewer #2 (Public review):
(1) Summary
In this work, the authors aim to elucidate how a viral surface protein behaves in a membrane environment and how its large-scale motions influence the exposure of antibody-binding sites. Using long-timescale, all-atom molecular dynamics simulations of a fully glycosylated, full-length protein embedded in a virus-like membrane, the study systematically examines the coupling between ectodomain motion, transmembrane orientation, membrane interactions, and epitope accessibility. By comparing multiple model variants that differ in cleavage state, initial transmembrane configuration, and presence of the cytoplasmic tail, the authors aim to identify general features of protein-membrane dynamics relevant to antibody recognition.
(2) Strengths
A major strength of this study is the scope and ambition of the simulations. The authors perform multiple microsecond-scale simulations of a highly complex, biologically realistic system that includes the full ectodomain, transmembrane region, cytoplasmic tail, glycans, and a heterogeneous membrane. Such simulations remain technically challenging, and the work represents a substantial computational and methodological effort.
The analysis provides a clear and intuitive description of large-scale protein motions relative to the membrane, including ectodomain tilting and transmembrane orientation. The finding that the ectodomain explores a wide range of tilt angles while the transmembrane region remains more constrained, with limited correlation between the two, offers useful conceptual insight into how global motions may be accommodated without large rearrangements at the membrane anchor.
Another strength is the explicit consideration of membrane and glycan steric effects on antibody accessibility. By evaluating multiple classes of antibodies targeting distinct regions of the protein, the study highlights how membrane proximity and glycan dynamics can differentially influence access to different epitopes. This comparative approach helps place the results in a broader immunological context and may be useful for readers interested in antibody recognition or vaccine design.
Overall, the results are internally consistent across multiple simulations and model variants, and the conclusions are generally well aligned with the data presented.
(3) Weaknesses
The main limitations of the study relate to sampling and model dependence, which are inherent challenges for simulations of this size and complexity. Although the simulations are long by current standards, individual trajectories explore only portions of the available conformational space, and several conclusions rely on pooling data across a limited number of replicas. This makes it difficult to fully assess the robustness of some quantitative trends, particularly for rare events such as specific epitope accessibility states.
In addition, several aspects of the model construction, including the treatment of missing regions, loop rebuilding, and initial configuration choices, are necessarily approximate. While these approaches are reasonable and well motivated, the extent to which some conclusions depend on these modeling choices is not always fully clear from the current presentation.
Finally, the analysis of antibody accessibility is based on geometric and steric criteria, which provide a useful first-order approximation but do not capture potential conformational adaptations of antibodies or membrane remodeling during binding. As a result, the accessibility results should be interpreted primarily as model-based predictions rather than definitive statements about binding competence.
Despite these limitations, the study provides a valuable and carefully executed contribution, and its datasets and analytical framework are likely to be useful to others interested in protein-membrane interactions and antibody recognition.
Reviewer #3 (Public review):
Summary:
This study uses large-scale all-atom molecular dynamics simulations to examine the conformational plasticity of the HIV-1 envelope glycoprotein (Env) in a membrane context, with particular emphasis on how the transmembrane domain (TMD), cytoplasmic tail (CT), and membrane environment influence ectodomain orientation and antibody epitope exposure. By comparing Env constructs with and without the CT, explicitly modeling glycosylation, and embedding Env in an asymmetric lipid bilayer, the authors aim to provide an integrated view of how membrane-proximal regions and lipid interactions shape Env antigenicity, including epitopes targeted by MPER-directed antibodies.
Strengths:
A key strength of this work is the scope and realism of the simulation systems. The authors construct a very large, nearly complete Env-scale model that includes a glycosylated Env trimer embedded in an asymmetric bilayer, enabling analysis of membrane-protein interactions that are difficult to capture experimentally. The inclusion of specific glycans at reported sites, and the focus on constructs with and without the CT, are well motivated by existing biological and structural data.
The simulations reveal substantial tilting motions of the ectodomain relative to the membrane, with angles spanning roughly 0-30° (and up to ~50° in some analyses), while the ectodomain itself remains relatively rigid. This framing, that much of Env's conformational variability arises from rigid-body tilting rather than large internal rearrangements, is an important conceptual contribution. The authors also provide interesting observations regarding asymmetric bilayer deformations, including localized thinning and altered lipid headgroup interactions near the TMD and CT, which suggest a reciprocal coupling between Env and the surrounding membrane.
The analysis of antibody-relevant epitopes across the prefusion state, including the V1/V2 and V3 loops, the CD4 binding site, and the MPER, is another strength. The study makes effective use of existing experimental knowledge in this context, for example, by focusing on specific glycans known to occlude antibody binding, to motivate and interpret the simulations.
Weaknesses:
While the simulations are technically impressive, the manuscript would benefit from more explicit cross-validation against prior experimental and computational work throughout the Results and Discussion, and better framing in the introduction. Many of the reported behaviors, such as ectodomain tilting, TMD kinking, lipid interactions at helix boundaries, and aspects of membrane deformation, have been described previously in a range of MD studies of HIV Env and related constructs (e.g., PMC2730987, PMC2980712, PMC4254001, PMC4040535, PMC6035291, PMC12665260, PMID: 33882664, PMC11975376). Clearly situating the present results relative to these studies would strengthen the paper by clarifying where the simulations reproduce established behavior and where they extend it to more complete or realistic systems.
A related limitation is that the work remains largely descriptive with respect to conformational coupling. Numerous experimental studies have demonstrated functional and conformational coupling between the TMD, CT, and the antigenic surface, with effects on Env stability, infectivity, and antibody binding (e.g., PMC4701381, PMC4304640, PMC5085267). In this context, the statement that ectodomain and TMD tilting motions are independent is a strong conclusion that is not fully supported by the analyses presented, particularly given the authors' acknowledgment that multiple independent simulations are required to adequately sample conformational space. More direct analyses of coupling, rather than correlations inferred from individual trajectories, would help align the simulations with the existing experimental literature. Given the scale of these simulations, a more thorough analysis of coupling could be this paper's most seminal contribution to the field.
The choice of membrane composition also warrants deeper discussion. The manuscript states that it relies on a plasma membrane model derived from a prior simulation-based study, which itself is based on host plasma membrane (PMID: 35167752), but experimental analyses have shown that HIV virions differ substantially from host plasma membranes (e.g., PMC46679, PMC1413831, PMC10663554, PMC5039752, PMC6881329). In particular, virions are depleted in PC, PE, and PI, and enriched in phosphatidylserine, sphingomyelins, and cholesterol. These differences are likely to influence bilayer thickness, rigidity, and lipid-protein interactions and, therefore, may affect the generality of the conclusions regarding Env dynamics and antigenicity. Notably, the citation provided for membrane composition is a laboratory self-citation, a secondary source, rather than a primary experimental study on plasma membrane composition.
Finally, there are pervasive issues with citation and methodological clarity. Several structural models are referred to only by PDB ID without citation, and in at least one case, a structure described as cryo-EM is in fact an NMR-derived model. Statements regarding residue flexibility, missing regions in structures, and comparisons to prior dynamics studies are often presented without appropriate references. The Methods section also lacks sufficient detail for a system of this size and complexity, limiting readers' ability to assess robustness or reproducibility.
With stronger integration of prior experimental and computational literature, this work has the potential to serve as a valuable reference for how Env behaves in a realistic, glycosylated, membrane-embedded context. The simulation framework itself is well-suited for future studies incorporating mutations, strain variation, antibodies, inhibitors, or receptor and co-receptor engagement. In its current form, the primary contribution of the study is to consolidate and extend existing observations within a single, large-scale model, providing a useful platform for future mechanistic investigations.
Author response:
In response to the comments raised, we outline below the revisions we plan to strengthen the manuscript.
First, we will expand the Introduction and Discussion sections to provide clearer comparison with prior experimental and computational studies of ectodomain tilting, MPER–TMD conformational heterogeneity, and membrane deformation, and to discuss how our simulations reproduce and extend these earlier observations.
Second, we plan to add analyses that more directly assess the coupling between ectodomain and TMD motions. We will also revise the text to emphasize the limits imposed by sampling and model dependence and to discuss the potential benefits of enhanced sampling methods.
Third, we will clarify the rationale for the chosen membrane composition and discuss how differences in lipid content between host plasma membranes and HIV virions may influence bilayer properties and Env dynamics.
Fourth, we will supplement the Methods section to improve clarity and address issues of citation throughout the manuscript.
Finally, we intend to deposit MD trajectories to a public research data repository to the extent permitted by available storage capacity.
Romeo. Ay, nurse; what of that? both with an R. Nurse. Ah, mocker! that's the dog's name; R is for the—No; I know it begins with some other letter:—and she hath the prettiest sententious of it, of you and rosemary, that it would do you good to hear it.
Symbolism: Rosemary was for remembrance and weddings. Nurse's confused rambling shows her affection.
And bring thee cords made like a tackled stair; Which to the high top-gallant of my joy
Plot: Romeo plans for a rope ladder so he can climb to Juliet's room for their wedding night.
Here is for thy pains.
Action: Romeo tries to pay the Nurse, treating her like a servant. She refuses.
but first let me tell ye, if ye should lead her into a fool's paradise, as they say, it were a very gross kind of behavior, as they say: for the gentlewoman is young; and, therefore, if you should deal double with her, truly it were an ill thing to be offered to any gentlewoman, and very weak dealing.
Nurse protects Juliet, warning Romeo not to trick her into a "fool's paradise."
Scurvy knave! I am none of his flirt-gills; I am none of his skains-mates. And thou must stand by too, and suffer every knave to use me at his pleasure?
The Nurse is offended by Mercutio's teasing and asserts her respectability.
Mercutio. No hare, sir; unless a hare, sir, in a lenten pie, that is something stale and hoar ere it be spent. [Sings] An old hare hoar, And an old hare hoar, Is very good meat in lent But a hare that is hoar Is too much for a score, When it hoars ere it be spent. Romeo, will you come to your father's? we'll to dinner, thither.
Bawdy Humor: "Hare" slang for prostitute; "hoar" means moldy/old. More crude jokes about the Nurse.
Mercutio. 'Tis no less, I tell you, for the bawdy hand of the dial is now upon the prick of noon.
Sexual Pun: Mercutio makes a crude joke about the clock hand being on the "prick".
Mercutio. A sail, a sail!
Mockery: Mercutio jokes that the Nurse's entrance is like a ship coming in.
that Petrarch flowed in: Laura to his lady was but a kitchen-wench; marry, she had a better love to be-rhyme her; Dido a dowdy; Cleopatra a gipsy; Helen and Hero hildings and harlots; Thisbe a grey eye or so, but not to the purpose. Signior
Allusion: Says Romeo's love outdoes Petrarch's famous love for Laura, and other famous beauties.
Mercutio. Alas poor Romeo! he is already dead; stabbed with a white wench's black eye; shot through the ear with a love-song; the very pin of his heart cleft with the blind bow-boy's butt-shaft: and is he a man to encounter Tybalt?
Humorous Imagery: Mockingly says Romeo is "dead" from love, shot by a song, Cupid's arrow, etc.
In one respect I'll thy assistant be; For this alliance may so happy prove, To turn your households' rancour to pure love.
Motivation: Friar agrees to marry them hoping it will end the family feud.
I have been feasting with mine enemy, Where on a sudden one hath wounded me,
Metaphor: The Capulet party was "feasting with mine enemy"; Juliet "wounded" him with love at first sight.
Within the infant rind of this small flower Poison hath residence and medicine power: For this, being smelt, with that part cheers each part;
Symbolism: Represents duality—things can be both healing and deadly (like love/the plan).
Virtue itself turns vice, being misapplied; And vice sometimes by action dignified.
Theme: Good can become bad if misused; bad can become good. Foreshadows the plan.
The earth that's nature's mother is her tomb;
Paradox: Earth gives life (womb) and receives death (tomb). Central theme.
Friar Laurence. The grey-eyed morn smiles on the frowning night, Chequering the eastern clouds with streaks of light, And flecked darkness like a drunkard reels From forth day's path and Titan's fiery wheels:
Imagery: Dawn smiling, night reeling like a drunkard. Peaceful, poetic start.
Hence will I to my ghostly father's cell,
Plot: Romeo goes to Friar Laurence to arrange the marriage.
O, she is lame! love's heralds should be thoughts, Which ten times faster glide than the sun's beams, Driving back shadows over louring hills: Therefore do nimble-pinion'd doves draw love, And therefore hath the wind-swift Cupid wings.
Metaphor: Love's messengers should be as fast as thoughts, doves, or Cupid.
But old folks, many feign as they were dead; Unwieldy, slow, heavy and pale as lead.
Simile/Character: Juliet impatiently stereotypes the old as slow and dull, unlike her youthful passion.
Nurse. Well, you have made a simple choice; you know not how to choose a man: Romeo! no, not he; though his face be better than any man's, yet his leg excels all men's; and for a hand, and a foot, and a body, though they be not to be talked on, yet they are past compare: he is not the flower of courtesy, but, I'll warrant him, as gentle as a lamb. Go thy ways, wench; serve God. What, have you dined at home?
Comic Delay: The Nurse praises Romeo's body parts (face, leg) but won't give the news, frustrating Juliet.
Nurse. Then hie you hence to Friar Laurence' cell; There stays a husband to make you a wife: Now comes the wanton blood up in your cheeks, They'll be in scarlet straight at any news. Hie you to church; I must another way, To fetch a ladder, by the which your love Must climb a bird's nest soon when it is dark: I am the drudge and toil in your delight, But you shall bear the burden soon at night.
Plot/Imagery: She'll get a ladder for Romeo to climb to Juliet's "bird's nest" (room) after the wedding.
Thy purpose marriage, send me word to-morrow,
Juliet sets the condition: his love must aim at marriage. She's practical.
eLife Assessment
This valuable study uses NAD(P)H fluorescence lifetime imaging (FLIM) to map metabolic states in the Drosophila brain. The authors reveal subtype-specific metabolic profiles in Kenyon cells and report learning-related changes, supported by solid evidence and careful methodology. However, the FLIM shifts observed after memory formation in α/β neurons are small and only weakly significant, so the ability of FLIM to detect subtle physiological changes still requires further validation. Nevertheless, this work provides a strong starting point and demonstrates the promising potential of FLIM for probing neural metabolism in vivo.
Reviewer #1 (Public review):
Summary:
The authors present a novel usage of fluorescence life-time imaging microscopy (FLIM) to measure NAD(P)H autofluorescence in the Drosophila brain, as a proxy for cellular metabolic/redox states. This new method relies on the fact that both NADH and NADPH are autofluorescent, with a different excitation lifetime depending on whether they are free (indicating glycolysis) or protein-bound (indicating oxidative phosphorylation). The authors successfully use this method in Drosophila to measure changes in metabolic activity across different areas of the fly brain, with a particular focus on the main center for associative memory: the mushroom body.
Strengths:
The authors have made a commendable effort to explain the technical aspects of the method in accessible language. This clarity will benefit both non-experts seeking to understand the methodology and researchers interested in applying FLIM to Drosophila in other contexts.
Weaknesses:
Despite being statistically significant, the learning-induced change in f-free in α/β Kenyon cells is minimal (a decrease from 0.76 to 0.73, with a high variability). It is unclear whether this small effect represents a meaningful shift in neuronal metabolic state.
Whether this method can be valuable to examine the effects of long-term memory (after spaced or massed conditioning) remains to be established.
Reviewer #2 (Public review):
This revised manuscript presents a valuable application of NAD(P)H fluorescence lifetime imaging (FLIM) to study metabolic activity in the Drosophila brain. The authors reveal regional differences in oxidative and glycolytic metabolism, with particular emphasis on the mushroom body, a key center for associative learning and memory. They also report metabolic shifts in α/β Kenyon cells following classical conditioning, in line with their known role in energy-demanding memory processes.
The study is well-executed and the authors have added more detailed methodological descriptions in this version, which strengthen the technical contribution. The analysis pipeline is rigorous, with careful curve fitting and appropriate controls. However, the metabolic shifts observed after conditioning are small and only weakly significant, raising questions about the sensitivity of FLIM for detecting subtle physiological changes. The authors acknowledge these limitations in the revised discussion, which helps place the findings in proper context.
Despite this, the work provides a solid foundation for future applications of label-free FLIM in vivo and serves as a valuable technical resource for researchers interested in neural metabolism. Overall, this study represents a meaningful step toward integrating metabolic imaging with the study of neural activity and cognitive function.
Reviewer #3 (Public review):
This study investigates the characteristics of the autofluorescence signal excited by 740 nm 2-photon excitation, in the range of 420-500 nm, across the Drosophila brain. The fluorescence lifetime (FL) appears bi-exponential, with a short 0.4 ns time constant followed by a longer decay. The lifetime decay and the resulting parameter fits vary across the brain. The resulting maps reveal anatomical landmarks, which simultaneous imaging of genetically encoded fluorescent proteins helps identify. Past work has shown that the autofluorescence decay time course reflects the balance of the redox enzyme NAD(P)H vs. its protein-bound form. The ratio of free to bound NADPH is thought to indicate relative glycolysis vs. oxidative phosphorylation, and thus shifts in the free-to-bound ratio may indicate shifts in metabolic pathways. The basics of this measure have been demonstrated in other organisms, and this study is the first to use the FLIM module of the STELLARIS 8 FALCON microscope from Leica to measure autofluorescence lifetime in the brain of the fly. Methods include registering brains of different flies to a common template and masking out anatomical regions of interest using fluorescence proteins.
The analysis relies on fitting an FL decay model with two free parameters, f_free and T_bound. F_free is the fraction of the normalized curve contributed by a decaying exponential with a time constant of 0.4 ns, thought to represent the FL of free NADPH or NADH, which apparently cannot be distinguished. T_bound is the time constant of the second exponential, with scalar amplitude = (1-f_free). The T_bound fit is thought to represent the decay time constant of protein-bound NADPH, but can differ depending on the protein. The study shows that across the brain, T_bound can range from 0 to >5 ns, whereas f_free can range from 0.5 to 0.9 (Figure 1a). The paper beautifully lays out the analysis pipeline, providing a valuable resource. The full range of fits is reported, including maximum-likelihood quality parameters, and can serve as benchmarks for future studies.
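For readers unfamiliar with this kind of fit, a minimal least-squares version of the two-component decay described here might look like the sketch below. It is a simplified stand-in: no instrument-response convolution and ordinary least squares rather than the photon-count likelihood fitting used in the paper, with all parameter values chosen for illustration.

```python
# Simplified sketch of the bi-exponential NAD(P)H decay model:
# I(t) = A * [ f_free * exp(-t/0.4) + (1 - f_free) * exp(-t/tau_bound) ]
import numpy as np
from scipy.optimize import curve_fit

TAU_FREE = 0.4  # ns, fixed lifetime assumed for free NAD(P)H

def decay(t, amp, f_free, tau_bound):
    return amp * (f_free * np.exp(-t / TAU_FREE) + (1 - f_free) * np.exp(-t / tau_bound))

t = np.linspace(0, 12, 240)                 # ns, illustrative time base
true_curve = decay(t, 1000, 0.75, 3.0)      # synthetic "ground truth"
counts = np.random.poisson(true_curve)      # photon shot noise
popt, _ = curve_fit(decay, t, counts, p0=[counts.max(), 0.7, 2.0],
                    bounds=([0, 0, 0.1], [np.inf, 1, 10]))
print(dict(zip(["amp", "f_free", "tau_bound"], popt)))
```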
The authors measure properties of NADPH related autofluorescence of Kenyon Cells (KCs) of the fly mushroom body. The somata and calyx of mushroom bodies have a longer average tau_bound than other regions (Figure 1e); the f_free fit is higher for the calyx (input synapses) region than for KC somata; and the average across flies of average f_free fits in alpha/beta KC somata decreases slightly following paired presentation of odor and shock, compared to unpaired presentation of the same stimuli. Though the change is slight, no comparable change is detected in gamma KCs, suggesting that distributions of f_free derived from FL may be sensitive enough to measure changes in metabolic pathways following conditioning.
FLIM as a method is not yet widely prevalent in fly neuroscience, but recent demonstrations of its potential are likely to increase its use. Future efforts will benefit from the description of the properties of the autofluorescence signal to evaluate how autofluorescence may impact measures of FL of genetically engineered indicators.
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public review):
Summary:
The authors present a novel usage of fluorescence lifetime imaging microscopy (FLIM) to measure NAD(P)H autofluorescence in the Drosophila brain, as a proxy for cellular metabolic/redox states. This new method relies on the fact that both NADH and NADPH are autofluorescent, with a different excitation lifetime depending on whether they are free (indicating glycolysis) or protein-bound (indicating oxidative phosphorylation). The authors successfully use this method in Drosophila to measure changes in metabolic activity across different areas of the fly brain, with a particular focus on the main center for associative memory: the mushroom body.
Strengths:
The authors have made a commendable effort to explain the technical aspects of the method in accessible language. This clarity will benefit both non-experts seeking to understand the methodology and researchers interested in applying FLIM to Drosophila in other contexts.
Weaknesses:
(1) Despite being statistically significant, the learning-induced change in f-free in α/β Kenyon cells is minimal (a decrease from 0.76 to 0.73, with a high variability). The authors should provide justification for why they believe this small effect represents a meaningful shift in neuronal metabolic state.
We agree with the reviewer that the observed f_free shift averaged per individual, while statistically significant, is small. However, to our knowledge, this is the first study to investigate a physiological (i.e., not pharmacologically induced) variation in neuronal metabolism using FLIM. As such, there are no established expectations regarding the amplitude of the effect. In the revised manuscript, we have included an additional experiment involving the knockdown of ALAT in α/β Kenyon cells, which further supports our findings. We have also expanded the discussion to outline two potential reasons why this effect may appear modest.
(2) The lack of experiments examining the effects of long-term memory (after spaced or massed conditioning) seems like a missed opportunity. Such experiments could likely reveal more drastic changes in the metabolic profiles of KCs, as a consequence of memory consolidation processes.
We agree with the reviewer that investigating the effects of long-term memory on metabolism represents a valuable direction for future investigation. An intrinsic caveat of autofluorescence measurement, however, is the difficulty of identifying the cellular origin of the observed changes. In this respect, long-term memory formation is not an ideal case study, as its essential feature is expected to be a metabolic activation localized to Kenyon cells’ axons in the mushroom body vertical lobes (as shown in Comyn et al., 2024), where many different neuron subtypes send intricate processes. This is why we chose to first focus on middle-term memory, where changes at the level of the cell bodies could be expected from our previous work (Rabah et al., 2022). But our pioneering exploration of the applicability of NAD(P)H FLIM to brain metabolism monitoring in vivo now paves the way to extending it to the effect of other forms of memory.
(3) The discussion is mostly just a summary of the findings. It would be useful if the authors could discuss potential future applications of their method and new research questions that it could help address.
The discussion has been expanded by adding interpretations of the findings and remaining challenges.
Reviewer #2 (Public review):
This manuscript presents a compelling application of NAD(P)H fluorescence lifetime imaging (FLIM) to study metabolic activity in the Drosophila brain. The authors reveal regional differences in oxidative and glycolytic metabolism, with a particular focus on the mushroom body, a key structure involved in associative learning and memory. In particular, they identify metabolic shifts in α/β Kenyon cells following classical conditioning, consistent with their established role in energy-demanding middle- and long-term memories.
These results highlight the potential of label-free FLIM for in-vivo neural circuit studies, providing a powerful complement to genetically encoded sensors. This study is well-conducted and employs rigorous analysis, including careful curve fitting and well-designed controls, to ensure the robustness of its findings. It should serve as a valuable technical reference for researchers interested in using FLIM to study neural metabolism in vivo. Overall, this work represents an important step in the application of FLIM to study the interactions between metabolic processes, neural activity, and cognitive function.
Reviewer #3 (Public review):
This study investigates the characteristics of the autofluorescence signal excited by 740 nm 2-photon excitation, in the range of 420-500 nm, across the Drosophila brain. The fluorescence lifetime (FL) appears bi-exponential, with a short 0.4 ns time constant followed by a longer decay. The lifetime decay and the resulting parameter fits vary across the brain. The resulting maps reveal anatomical landmarks, which simultaneous imaging of genetically encoded fluorescent proteins helps to identify. Past work has shown that the autofluorescence decay time course reflects the balance of the redox enzyme NAD(P)H vs. its protein-bound form. The ratio of free-to-bound NADPH is thought to indicate relative glycolysis vs. oxidative phosphorylation, and thus shifts in the free-to-bound ratio may indicate shifts in metabolic pathways. The basics of this measure have been demonstrated in other organisms, and this study is the first to use the FLIM module of the STELLARIS 8 FALCON microscope from Leica to measure autofluorescence lifetime in the brain of the fly. Methods include registering the brains of different flies to a common template and masking out anatomical regions of interest using fluorescence proteins.
The analysis relies on fitting an FL decay model with two free parameters, f_free and t_bound. F_free is the fraction of the normalized curve contributed by a decaying exponential with a time constant of 0.4 ns, thought to represent the FL of free NADPH or NADH, which apparently cannot be distinguished. T_bound is the time constant of the second exponential, with scalar amplitude = (1-f_free). The T_bound fit is thought to represent the decay time constant of protein-bound NADPH but can differ depending on the protein. The study shows that across the brain, T_bound can range from 0 to >5 ns, whereas f_free can range from 0.5 to 0.9 (Figure 1a). These methods appear to be solid; the full range of fits is reported, including maximum-likelihood quality parameters, and can serve as benchmarks for future studies.
The authors measure the properties of NADPH-related autofluorescence of Kenyon Cells (KCs) of the fly mushroom body. The results from the three main figures are:
(1) Somata and calyx of mushroom bodies have a longer average tau_bound than other regions (Figure 1e);
(2) The f_free fit is higher for the calyx (input synapses) region than for KC somata (Figure 2b);
(3) The average across flies of average f_free fits in alpha/beta KC somata decreases from 0.734 to 0.718. Based on the first two findings, an accurate title would be "Autofluorescence lifetime imaging reveals regional differences in NADPH state in Drosophila mushroom bodies."
The third finding is the basis for the title of the paper and the support for this claim is unconvincing. First, the difference in alpha/beta f_free (p-value of 4.98E-2) is small compared to the measured difference in f_free between somas and calyces. It's smaller even than the difference in average soma f_free across datasets (Figure 2b vs c). The metric is also quite derived; first, the model is fit to each (binned) voxel, then the distribution across voxels is averaged and then averaged across flies. If the voxel distributions of f_free are similar to those shown in Supplementary Figure 2, then the actual f_free fits could range between 0.6-0.8. A more convincing statistical test might be to compare the distributions across voxels between alpha/beta vs alpha'/beta' vs. gamma KCs, perhaps with bootstrapping and including appropriate controls for multiple comparisons.
The difference observed is indeed modest relative to the variability of f_free measurements in other contexts. The fact that the difference observed between the somata region and the calyx is larger is not necessarily surprising: these areas have different anatomical compositions that may result in different basal metabolic profiles, as suggested by Figure 1b, which shows that the cortex and neuropile have different metabolic signatures. Differences in average f_free values in the somata region can indeed be observed between naive and conditioned flies across datasets. However, all comparisons in the article were performed between groups of flies imaged within the same experimental batches, ensuring that external factors were largely controlled for; cross-dataset comparisons lack this control, which makes it difficult to extract meaningful information from them.
We agree with the reviewer that the choice of the metric was indeed not well justified in the first manuscript. In the new manuscript, we have tried to illustrate the reasons for this choice with the example of the comparison of f_free in alpha/beta neurons between unpaired and paired conditioning (Dataset 8). First, the idea of averaging across voxels is supported by the fact that the distributions of decay parameters within a single image are predominantly unimodal. Examples for Dataset 8 are now provided in the new Sup. Figure 14. Second, an interpretable comparison between multiple groups of distributions is, to our knowledge, not straightforward to implement. It is now discussed in the Supplementary information. To measure interpretable differences in the shapes of the distributions, we computed the first three moments of the distributions of f_free for Dataset 8 and compared the values obtained between conditions (see Supplementary information and new Sup. Figure 15). Third, averaging across individuals gives each experimental subject the same weight in the comparisons.
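A generic version of the per-fly summary described in this response could look like the sketch below: compute the first three moments of each fly's voxel-wise f_free distribution, then compare groups moment by moment. The data, group sizes, and the choice of test are placeholders, not the authors' pipeline.

```python
# Sketch: per-fly moments of the voxel-wise f_free distribution, compared between
# two conditioning groups. All values below are synthetic placeholders.
import numpy as np
from scipy import stats

def distribution_moments(f_free_voxels):
    x = np.asarray(f_free_voxels)
    return np.array([x.mean(), x.var(ddof=1), stats.skew(x)])

rng = np.random.default_rng(0)
paired = [rng.beta(8, 3, 2000) for _ in range(10)]    # hypothetical voxel values per fly
unpaired = [rng.beta(7, 3, 2000) for _ in range(10)]
m_paired = np.array([distribution_moments(f) for f in paired])
m_unpaired = np.array([distribution_moments(f) for f in unpaired])

for i, name in enumerate(["mean", "variance", "skewness"]):
    _, p = stats.mannwhitneyu(m_paired[:, i], m_unpaired[:, i])
    print(name, round(p, 3))
```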
I recommend the authors address two concerns. First, what degree of fluctuation in autofluorescence decay can we expect over time, e.g., over circadian cycles? That would be helpful in evaluating the magnitude of changes following conditioning. And second, if the authors think that metabolism shifts to OXPHOS over glycolysis, are there further genetic manipulations they could make? They test LDH knockdown in gamma KCs; why not knock it down in alpha/beta neurons? The prediction might be that if it prevents the shift to OXPHOS, the shift in f_free distribution in alpha/beta KCs would be attenuated. The extensive library of genetic reagents is an advantage of working with flies, but it comes with a higher standard for corroborating claims.
In the present study, we used control groups to account for broad fluctuations induced by external factors such as the circadian cycle. We agree with the reviewer that a detailed characterization of circadian variations in the decay parameters would be valuable for assessing the magnitude of conditioning-induced shifts. We have integrated this relevant suggestion in the Discussion. Conducting such an investigation lies unfortunately beyond the scope and means of the current project.
In line with the suggestion of the reviewer, we have included a new experiment to test the influence of the knockdown of ALAT on the conditioning-induced shift measured in alpha/beta neurons. This choice is motivated in the new manuscript. The obtained result shows that no shift is detected in the mutant flies, in accordance with our hypothesis.
FLIM as a method is not yet widely prevalent in fly neuroscience, but recent demonstrations of its potential are likely to increase its use. Future efforts will benefit from the description of the properties of the autofluorescence signal to evaluate how autofluorescence may impact measures of FL of genetically engineered indicators.
Recommendations for the authors
Reviewer #1 (Recommendations for the authors):
(1) Y axes in Figures 1e, 2c, 3b,c are misleading. They must start at 0.
Although we agree that making the Y axes start at 0 is preferable, in our case it would make it difficult to also show the dispersion of the data (see your next point). To make it clearer to the reader that the axes do not start at 0, a broken Y-axis is now displayed in every concerned figure.
(2) These same plots should have individual data points represented, for increased clarity and transparency.
Individual data points were added to all boxplots.
Reviewer #2 (Recommendations for the authors):
I am evaluating this paper as a fly neuroscientist with experience in neurophysiology, including calcium imaging. I have little experience with FLIM but anticipate its use growing as more microscopes and killer apps are developed. From this perspective, I value the opportunity to dig into FLIM and try to understand this autofluorescence signal. I think the effort to show each piece of the analysis pipeline is valuable. The figures are quite beautiful and easy to follow. My main suggestion is to consider moving some of the supplemental data to the main figures. eLife allows unlimited figures, moving key pieces of the pipeline to the main figures would make for smoother reading and emphasize the technical care taken in this study.
We thank the reviewer for their feedback. Following their advice we have moved panels from the supplementary figures to the main text (see new Figure 2).
Unfortunately, the scientific questions and biological data do not rise to the typical standard in the field to support the claims in the title, "In vivo autofluorescence lifetime imaging of the Drosophila brain captures metabolic shifts associated with memory formation". The authors also clearly state what the next steps are: "hypothesis-driven approaches that rely on metabolite-specific sensors" (Intro). The advantage of fly neuroscience is the extensive library of genetic reagents that enable perturbations. The key manipulation in this study is the electric shock conditioning paradigm that subtly shifts the distribution of a parameter fit to an exponential decay in the somas of alpha/beta KCs vs others. This feels like an initial finding that deserves follow-up; but is it a large enough result to motivate a future student to pick this project up? The larger effect appears to be the gradients in f_free across KCs overall (Figure 2b). How does this change with conditioning?
We acknowledge that the observed metabolic shift is modest relative to the variability of f_free and agree that additional corroborating experiments would further strengthen this result. Nevertheless, we believe it remains a valid and valuable finding that will be of interest to researchers in the field. The reviewer is right in pointing out that the gradient across KCs is higher in magnitude; however, the fact that this technique can also report experience-dependent changes, in addition to innate heterogeneities across different cell types, is a major incentive for people who could be interested in applying NAD(P)H FLIM in the future. For this reason, we consider it appropriate to retain mention of the memory-induced shift in the title, while making it less assertive and adding a reference to the structural heterogeneities of f_free revealed in the study. We have also rephrased the abstract to adopt a more cautious tone and expanded the discussion to clarify why a low-magnitude shift in f_free can still carry biological significance in this context. Finally, we have added the results of a new set of data involving the knockdown of ALAT in Kenyon cells, to further support the relevance of our observation relative to memory formation, despite its small magnitude. We believe that these elements together form a good basis for future investigations and that the manuscript merits publication in its present form.
Together, I would recommend reshaping the paper as a methods paper that asks the question, what are the spatial properties of NADPH FL across the brain? The importance of this question is clear in the context of other work on energy metabolism in the MBs. 2P FLIM will likely always have to account for autofluorescence, so this will be of interest. The careful technical work that is the strength of the manuscript could be featured, and whether conditioning shifts f_free could be a curio that might entice future work.
By transferring panels of the supplementary figures to the main text (see new Figure 2) as suggested by Reviewer 2, we have reinforced the methodological part of the manuscript. For the reasons explained above, we however still mention the ‘biological’ findings in the title and abstract.
Minor recommendations on science:
Figure 2C. Plotting either individual data points or distributions would be more convincing.
Individual data points were added to all boxplots.
There are a few mentions of glia. What are the authors' expectations for metabolic pathways in glia vs. neurons? Are glia expected to use one more than the other? The work by Rabah suggests it should be different and perhaps complementary to neurons. Can a glial marker be used in addition to KC markers? This seems crucial to being able to distinguish metabolic changes in KC somata from those in glia.
Drosophila cortex glia are thought to play a role similar to that of astrocytes in vertebrates (see Introduction). In that perspective, we expect cortex glia to display a higher level of glycolysis than neurons. The work by Rabah et al. is consistent with this hypothesis. Reviewer 2 is right in pointing out that using a glial marker would be interesting. However, current technical limitations make such experiments challenging. These limitations are now outlined in the Discussion.
The question of whether KC somata positions are stereotyped can probably be answered in other ways as well. For example, the KCs are in the FAFB connectomic data set and the hemibrain. How do the somata positions compare?
The reviewer’s suggestion is indeed interesting. However, the FAFB and hemibrain connectomic datasets are based on only two individual flies, which probably limits their suitability for assessing the stereotypy of KC subtype distributions. In addition, aligning our data with the FAFB dataset would represent substantial additional work.
The free parameter tau_bound is mysterious if it can be influenced by the identity of the protein. Are there candidate NADPH binding partners that have a spatial distribution in confocal images that could explain the difference between somas and calyx?
There are indeed dozens of NADH- or NADPH-binding proteins. For this reason, in all studies implementing exponential fitting of metabolic FLIM data, tau_bound is considered a complex combination of the contributions from many different proteins. In addition, one should keep in mind that the number of cell types contributing to the autofluorescence signal in the mushroom body calyx (Kenyon cells, astrocyte-like and ensheathing glia, APL neurons, olfactory projection neurons, dopamine neurons) is much higher than in the somas (only Kenyon cells and cortex glia). This could also contribute to the observed difference. Hence, focusing on intracellular heterogeneities of potential NAD(P)H binding partners seems premature at this stage.
The phrase "noticeable but not statistically significant" is misleading.
We agree with the reviewer and have removed “noticeable but” from the sentence in the new version of the manuscript.
Minor recommendations on presentation:
The Introduction can be streamlined.
We agree that some parts of the Introduction can seem a bit long for experts of a particular field. However, we think that this level of detail makes the article easily accessible for neuroscientists working on Drosophila and other animal models but not necessarily with FLIM, as well as for experts in energy metabolism that may be familiar with FLIM but not with Drosophila neuroscience.
Command–Grave accent (`): Switch between the windows of the app you're using.
Configuration: System Settings / Keyboard / Keyboard Shortcuts / Keyboard / Move focus to next window
The Polish president vetoed the DSA implementation law. This does not weaken the DSA itself, since it is an EU regulation and applies directly, but it does hinder the Polish national elements: a national coordinator and contact points, the designation of Polish entities as trusted flaggers, etc.
The independent web is already here, quietly thriving while Big Tech implodes under its own extractive weight. All you have to do is join it.
Extraction is, by definition, only viable for a limited time.
We must find moments of joy in the horror of our present. Joy is not a prize for getting through it all—joy is a tool that enables us to get through the horror. Our joy is an act of resistance.
joy as resistance
This is less about nostalgia than it is survival. Search engines fail and AI floods the void with slop. Human curation becomes essential infrastructure. We need directories. We need blogrolls. We need people pointing to other people.
'survival' above nostalgia. It needs separate (federated?) infrastructure, especially for discovery.
I love using Shift + Option + F on my Mac to clean up my HTML code. It's like hitting a magic button.
magic
How do I modify the VS Code HTML formatter?
The Nobel Laureate Who (Also) Says Quantum Theory Is "Totally Wrong"
eLife Assessment
This study provides a useful application of computational modelling to examine how people with chronic pain learn under uncertainty, contributing to efforts to link pain with motivational processes. However, the evidence supporting the main claims is incomplete, as the modelling differences are not reflected in observable behaviour or pain measures, and the interpretation extends beyond what the data can substantiate. The conclusions would benefit from a clearer explanation of the behavioural differences that underlie the computational findings.
Reviewer #1 (Public review):
Summary:
This study investigates how individuals with chronic temporomandibular disorder (TMD) learn from uncertain rewards, using a probabilistic three-armed bandit task and computational modelling. The authors aim to identify whether people living with chronic pain show altered learning under uncertainty and how such differences might relate to psychological symptoms.
Strengths:
The work addresses an important question about how chronic pain may influence cognition and motivation. The task design is appropriate for probing adaptive learning, and the modelling approach is novel. The findings of altered uncertainty updating in the TMD group are interesting.
Weaknesses:
Several aspects of the paper limit the strength of the conclusions. The group differences appear only in model-derived parameters, with no corresponding behavioural differences in task performance. Model parameters do not correlate with pain severity, making the proposed mechanistic link between pain and learning speculative. Some of the interpretations extend beyond what the data can directly support.
Reviewer #2 (Public review):
Summary:
In this paper, the authors report on a case-control study in which participants with chronic pain (TMD) were compared to controls on performance of a three-option learning task. The authors find no difference in task behavior, but fit a model to this behavior and suggest that differences in the model-derived metrics (specifically, change in learning rate/estimated volatility/model estimated uncertainty) reveal a relevant between-group effect. They report a mediation effect suggesting that group differences on self-report apathy may be partially mediated by this uncertainty adaptation result.
Strengths:
The role of sensitivity to uncertainty in pathological states is an interesting question and is the focus of a reasonable amount of research at present. This paper provides a useful assessment of these processes in people with chronic pain.
Weaknesses:
(1) The interpretation of the model in the absence of any apparent behavioral effect is not convincing. The model is quite complex with a number of free parameters (what these parameters are is not well explained in the methods, although they seem to be presented in the supplement). These parameters are fitted to participant choice behavior - that is, they explain some sort of group difference in this choice behavior. The authors haven't been able to demonstrate what this difference is. The graphs of learning rate per group (Figure 2) suggest that the control group has a higher initial learning rate and a lower later learning rate. If this were actually the case, you would expect to see it reflected in the choice data (the control group should show higher lose-shift behavior earlier on, with this then declining over time, and the TMD group should show no change). This behavior is not apparent. The absence of a clear effect on behavior suggests that the model results are more likely to be spurious.
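The behavioural check described here is straightforward to compute. A minimal sketch follows, using synthetic placeholder choice and reward arrays; if the fitted learning-rate trajectories genuinely diverge, early- versus late-session lose-shift rates should differ between groups.

```python
# Sketch: lose-shift rate (fraction of unrewarded trials followed by a switch),
# computed separately for early and late halves of a session. Data are synthetic.
import numpy as np

def lose_shift_rate(choices, rewards):
    """Fraction of unrewarded trials that are followed by a switch to another option."""
    lose = rewards[:-1] == 0
    shift = choices[1:] != choices[:-1]
    return shift[lose].mean()

rng = np.random.default_rng(1)
choices = rng.integers(0, 3, 120)   # hypothetical 3-armed bandit choices, 120 trials
rewards = rng.integers(0, 2, 120)   # hypothetical binary outcomes
half = len(choices) // 2
print("early:", lose_shift_rate(choices[:half], rewards[:half]))
print("late:", lose_shift_rate(choices[half:], rewards[half:]))
```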
(2) As far as I could see, the actual parameters of the model are not reported. The results (Figure 2) illustrate the trial-level model estimated uncertainty/learning rate, etc, but these differ because the fitted model parameters differ. The graphs look like there are substantial differences in v0 (which was not well recovered), but presumably lambda, at least, also differs. The mean(SD) group values for these parameters should be reported, as should the correlations between them (it looks very much like they will be correlated).
(3) The task used seems ill-suited to measuring the reported process. The authors report the performance of a restless bandit task and find an effect on uncertainty adaptation. The task does not manipulate uncertainty (there are no periods of high/low uncertainty) and so the only adaptation that occurs in the task is the change from what appears to be the participants' prior beliefs about uncertainty (which appear to be very different between groups - i.e. the lines in Figure 2a,b,c are very different at trial 0). If the authors are interested in measuring adaptation to uncertainty, it would clearly be more useful to present participants with periods of higher or lower uncertainty.
(4) The main factor driving the better fit of the authors' preferred model over the listed alternatives seems to be the inclusion of an additive uncertainty term in the softmax; this differentiates the chosen model from the other two Kalman filter-based models that perform less well. But a similar term is not included in the RW models. Given that the uncertainty of a binary outcome can be estimated as p(1-p), and the RW models are estimating p, this would seem relatively straightforward to do. It would be useful to know if the factor that actually drives better model fit is indeed in the decision stage (rather than the learning stage).
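To make the suggestion concrete, a softmax with an additive uncertainty bonus, fed either Kalman-filter posterior uncertainties or the p(1-p) proxy for a Rescorla-Wagner learner, could be sketched as follows. Parameter names and values are illustrative assumptions, not the authors' model.

```python
# Sketch: softmax choice rule with an additive uncertainty bonus.
import numpy as np

def choice_probs(values, uncertainties, beta=3.0, phi=1.0):
    """Softmax over (value + phi * uncertainty) for each option; beta is inverse temperature."""
    z = beta * (values + phi * uncertainties)
    z -= z.max()                      # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

q = np.array([0.6, 0.4, 0.5])         # RW-estimated reward probabilities (placeholder)
u_rw = q * (1 - q)                     # p(1-p) uncertainty proxy for binary outcomes
print(choice_probs(q, u_rw))
```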
Reviewer #3 (Public review):
This paper applies a computational model to behavior in a probabilistic operant reward learning task (a 3-armed bandit) to uncover differences between individuals with temporomandibular disorder (TMD) compared with healthy controls. Integrating computational principles and models into pain research is an important direction, and the findings here suggest that TMD is associated with subtle changes in how uncertainty is represented over time as individuals learn to make choices that maximize reward. There are a number of strengths, including the comparison of a volatile Kalman filter (vKF) model to some standard base models (Rescorla Wagner with 1 or 2 learning rates) and parameter recovery analyses suggesting that the combination of task and vKF model may be able to capture some properties of learning and decision-making under uncertainty that may be altered in those suffering from chronic pain-related conditions.
I've focused my comments in four areas: (1) Questions about the patient population, (2) Questions about what the findings here mean in terms of underlying cognitive/motivational processes, (3) Questions about the broader implications for understanding individuals with TMD and other chronic pain-related disorders, and (4) Technical questions about the models and results.
(1) Patient population
This is a computational modelling study, so it is light on characterization of the population, but the patient characteristics could matter. The paper suggests they were hospitalized, but this is not a condition that requires hospitalization per se. It would be helpful to connect and compare the patient characteristics with large-scale studies of TMD, such as the OPPERA study led by Maixner, Fillingim, and Slade.
(2) What cognitive/motivational processes are altered in TMD
The study finds a pattern of alterations in TMD patients that seems clear in Figure 2. Healthy controls (HC) start the task with high estimates of volatility, uncertainty, and learning rate, which drop over the course of the task session. This is consistent with a learner that is initially uncertain about the structure of the environment (i.e., which options are rewarded and how the contingencies change over time) but learns that there is a fixed or slowly changing mean and stationary variance. The TMD patients start off with much lower volatility, uncertainty, and learning rate - which are actually all near 0 - and they remain stable over the course of learning. This is consistent with a learner who believes they know the structure of the environment and ignores new information.
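The control-group pattern described here is qualitatively what a simple Kalman-filter learner produces: posterior uncertainty, and hence the learning rate, starts high and decays as evidence accumulates. The sketch below is a minimal standard Kalman filter for illustration only, not the volatile Kalman filter used in the paper, and its parameter values are arbitrary.

```python
# Minimal Kalman-filter illustration: the effective learning rate (Kalman gain)
# starts high under an uncertain prior and declines as observations accumulate.
import numpy as np

def kalman_learning_rates(n_trials, prior_var=1.0, process_var=0.001, obs_var=0.2):
    var, rates = prior_var, []
    for _ in range(n_trials):
        var += process_var              # diffusion of the latent value between trials
        k = var / (var + obs_var)       # Kalman gain = effective learning rate
        rates.append(k)
        var *= (1 - k)                  # posterior variance after the update
    return np.array(rates)

print(kalman_learning_rates(5).round(3))  # learning rate declines over trials
```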
What is surprising is that this pattern of changes over time was found in spite of null group differences in a number of aspects of performance: (1) stay rate, (2) switch rate, (3) win-stay/lose-switch behaviors, (4) overall performance (corrected for chance level), (5) response times, (6) autocorrelation, (7) correlations between participants' choice probability and each option's average reward rate, (8) choice consistency (though how this was operationalized is not described), and (9) win-stay-lose-shift patterns over time. I'm curious about how the patterns in Figure 2 would emerge if standard aspects of performance are essentially similar across groups (though the study cannot provide evidence in favor of the null). It will be important to replicate these patterns in larger, independent samples with preregistered analyses.
The authors believe that this pattern of findings reveals that TMD patients "maintain a chronically heightened sensitivity to environmental changes" and relate the findings to predictive processing, a hallmark of which (in its simplest form) is precision-weighted updating of priors. They also state that the findings are not related to reduced overall attentiveness or failure to understand the task, but describe them as deficits or impairments in calibrating uncertainty.
The pattern of differences could, in fact, result from differences in prior beliefs, conceptualization of the task, or learning. Unpacking these will be important steps for future work, along with direct measures of priors, cognitive processes during learning, and precision-weighted updating.
(3) Implications for understanding chronic pain
If the findings and conclusions of the paper are correct, individuals with TMD and perhaps other pain-related disorders may have fundamental alterations in the ways in which they make decisions about even simple monetary rewards. The broader questions for the field concern (1) how generalizable such alterations are across tasks, (2) how generalizable they are across patient groups and, conversely, how specific they are to TMD or chronic pain, (3) whether they are the result of neurological dysfunction, as opposed to (e.g.) adaptive strategies or assumptions about the environment/task structure.
It will be important to understand which features of patients' and/or controls' cognition are driving the changes. For example, could the performance differences observed here be attributable to a reduced or altered understanding of the task instructions, more uncertainty about the rules of the game, different assumptions about environments (i.e., that they are more volatile/uncertain or less so), or reduced attention or interest in optimizing performance? Are the controls OVERconfident in their understanding of the environment?
This set of questions will not be easy to answer and will be the work of many groups for many years to come. It is a judgment call how far any one paper must go to address them, but my view is that it is a collaborative effort. Start with a finding, replicate it across labs, take the replicable phenomena and work to unpack the underlying questions. The field must determine whether it is this particular task with this model that produces case-control differences (and why), or whether the findings generalize broadly. Would we see the same findings for monetary losses, sounds, and social rewards? Tasks with painful stimuli instead of rewards?
Another set of questions concerns the space of computational models tested, and whether their parameters are identifiable. An alteration in estimated volatility or learning rate, for example, can come from multiple sources. In one model, it might appear as a learning rate change and in another as a confirmation bias. It would be interesting in this regard to compare the "mechanisms" (parameters) of other models used in pain neuroscience, e.g., models by Seymour, Mancini, Jepma, Petzschner, Smith, Chen, and others (just to name a few).
One immediate next step here could be to formally compare the performance of both patients and controls to normatively optimal models of performance (e.g., Bayes optimal models under different assumptions). This could also help us understand whether the differences in patients reflect deficits and what further experiments we would need to pin that down.
In addition, the volatility parameter in the computational model correlated with apathy. This is interesting. Is there a way to distinguish apathy as a particular clinical characteristic and feature of TMD from apathy in the sense of general disinterest in optimal performance that may characterize many groups?
If we know this, what actionable steps does it lead us to take? Could we take steps to reduce apathy and thus help TMD patients better calibrate to environmental uncertainty in their lives? Or take steps to recalibrate uncertainty (i.e., increase uncertainty adaptation), with benefits on apathy? A hallmark of a finding that the field can build off of is the questions it raises.
(4) Technical questions about the models and results
Clarification of some technical points would help interpret the paper and findings further:
(a) Was the reward probability truly random? Was the random walk different for each person, or constrained?
(b) When were self-report measures administered, and how?
(c) Pain assessments: What types of pain? Was a body map assessed? Widespreadness? Pain at the time of the test, or pain in general?
(d) Parameter recovery: As you point out, r = 0.47 seems very low for recovery of the true quantity, but this depends on noise levels and on how the parameter space is sampled. Is this noise-free recovery, and is it robust to noise? Are the examples of true parameters drawn from the space of participants, or do they otherwise systematically sample the space of true parameters?
(e) What are the covariances across parameter estimates and resultant confusability of parameter estimates (e.g., confusion matrix)?
(f) It would be helpful to have a direct statistical comparison of controls and TMD on model parameter estimates.
(g) Null statistical findings on differences in correlations should not be interpreted as a lack of a true effect. Bayes Factors could help, but an analysis of them will show that hundreds of people are needed before it is possible to say there are no differences with reasonable certainty. Some journals enforce rules around the kinds of language used to describe null statistical findings, and I think it would be helpful to adopt them more broadly.
(h) What is normatively optimal in this task? Are TMD patients less so, or not? The paper states "aberrant precision (uncertainty) weighting and misestimation of environmental volatility". But: are they misestimates?
(i) It's not clear how well the choice of prior variance for all parameters (6.25) is informed by previous research, as sensible values may be task- and context-dependent. Are the main findings robust to how priors are specified in the HBI model?