10,000 Matching Annotations
  1. Sep 2025
    1. Too much time has passed since eukaryotes first appeared on the evolutionary scene, and too much DNA has been scrambled between too many groups, for scientists to piece everything together. But that hasn’t stopped them from trying.

      "Why does research seek to understand the evolutionary history of the eukaryotic cell?" With new breakthroughs with the discovery of the Asgard Archaea, there are still so many studies that need to be performed to gain more knowledge about the origin of the eukaryotic cell. Scientists continue to develop more research and continue to search for answers to questions that seem impossible to answer, because the idea of understanding where life came from has endless possibilities for future advancement. It might hold the key to understanding diseases and how to cure them. Or it might offer answers about climate change and how our ecosystems function. It could possibly help farmers and ranchers, by eliminating animal and plant diseases. The potential is endless.

    2. Now, Takemura, Bell, and others say that a giant virus could have been the original nucleus

      "Why do you think it is so hard for scientists to agree on a theory that supports the rise of the eukaryotic cell?" There is still so much debate on the origin of the nucleus, whether it has an archaeal origin or possibly came from a giant virus. I think once more research has been done, if possible, on the nucleus origin, we will be able to determine whether or not an update needs to happen to the current tree of life.

    1. Gentrification also changes the racial and ethnic makeup of neighborhoods, as most people moving into these changing urban areas are typically white.

      Gentrification is what needs to happen if poor neighborhoods want to see improvement.

    2. Canada, the richest 10 percent own 57.4 percent of the country’s wealth. In the United States, the richest 10 percent own over 75 percent of the wealth in the country, the highest of the twenty most developed countries in the world.

      This can't be a good thing for either country.

    3. Canada has stronger social welfare programs than the US. All provinces of Canada provide universal, publicly funded healthcare, for example, and a monthly income is provided to those in extreme poverty.

      I've heard of this. Canada is definitely a friendlier country in this respect. This should be global. Like, how are we charging millions of dollars for childbirth and expensive medical treatment?

    4. US and Canada are also members of the World Trade Organization (WTO), an intergovernmental organization that collectively regulates international trade

      key detail

    5. Both Canada and the United States continue to attract immigrants, drawn to these countries by the hope of good jobs and political freedoms. Each country has dealt with the influx of immigration in very different ways.

      Bad. Americans should be employed before refugees.

    6. Syria has been contentious politically with some fearing the potential for terrorist attacks by migrants. Several state governors outright refused to accept Syrian refugees. The Canadian government, in contrast, agreed to resettle 25,000 Syrians in 2016. Canadian Prime Minister Justin Trudeau greeted the first plane of refugees, offering winter clothing and stuffed animals and saying, “Welcome home.” Throughout history, Canada has welcomed the world’s displaced peoples, accepting 1.2 million refugees since World War II.

      I understand the US, because what they're doing is putting the rightful people first. WHY DOESN'T THE SYRIAN GOVERNMENT TAKE CARE OF THEIR OWN? WHY DOES THE US ALWAYS GO THROUGH THIS?

    7. Undocumented and unaccompanied child migrants in particular have increased dramatically in recent years. As countries experience economic decline, political turmoil, and often dangerous living conditions, migrants will likely continue to flock to Canada and the US in search of a better life.

      In search of a better life... so let me go to the United States, where I don't belong, and create an even worse life for those Americans who are already struggling.

    8. Undocumented, or illegal, immigration to the United States continues to be another significant political issue. Around 11 million undocumented migrants currently live in the US. Just over 50 percent are from Mexico.

      Absolutely awful for the true American citizens.

    9. The Mississippi River is largely considered to be the most important waterway in terms of commercial transportation. The Port of South Louisiana, located along the Mississippi, is the largest port in the United States in terms of tonnage.

      Largest river in the United States of America.

    10. Most of North America, to include Mexico, Greenland, and some of the Caribbean, is situated on the North American plate and is thus relatively geologically stable

      the Midwest as well

    11. These diverse physical conditions have enabled North America to have a wide variety of natural resources, but have also contributed to significant regional differences.

      We in the US should all be able to help and benefit from our neighbors. It is stated in the Bible that everyone should love their neighbor.

    12. there are distinct similarities between Canada and the United States in terms of language and a shared history that are quite different from their Spanish-speaking neighbors to the south.

      Why is Canada not a part of the US?

    13. The giant trees that stretch over California’s Redwood and Sequoia National Parks are the tallest trees on Earth, towering to over 100 meters (328 feet). These trees are also exceptionally old

      Makes me question how something that doesn't move or eat can live for over 100 years and grow to its maximum size. How come humans are not able to live such long, healthy lives?

    1. if we actually understood that all of this that I'm seeing right now I'm making it up on the fly. This cup that I'm seeing, it only exists when I create it.

      for - adjacency - constructed reality - umwelt - species perspectival knowing - misunderstanding - sensory signals - map and territory - Donald Hoffman - We have to be careful how we interpret his claim here, as it is often easily misunderstood. - He means that evolution itself, reality itself has constructed this unique set of sense organs, that creates a unique human umwelt in which - the sensory signals give us a very specific map of reality, NOT reality itself - In this way, our sensory signals construct a very unique map of reality, which is different from the way all other species construct their maps

    2. what's interesting about this now is if I think I'm just this little body and I'm nothing but this body and and my conscious experiences are nothing but what my brain does. So, so that's my theory and that's that's all I am. I don't feel very big. I don't feel very important. Um, and so I'm going to probably need to do something to make myself feel a little bit better and I'm going to need to compete with you.

      for - example - poverty mentality - adjacency - poverty mentality - ego reification - othering - competition - If I believe my own spacetime story that - I am this body - thoughts are simply epiphenomena of the brain - then I don't feel very empowered or spacious - instead, I feel small and insignificant - and it motivates me to compete with others to make myself feel better - In this way, my own poverty mentality, based on the wrong-headed belief that I am the map (not the territory) - leads to identity and ego reification and othering

    3. if you want to understand the truth of who you are beyond just this headset description of you then you have to lay aside all concepts period and just know yourself by being yourself not by putting a concept between you and yourself.

      for - quote - who you are beyond your headset - Donald Hoffman - If you want to understand the truth of who you are beyond just this headset description of you - then you have to - lay aside all concepts period and - just know yourself by being yourself, - not by putting a concept between you and yourself. - adjacency - headset - perspectival knowing - Donald Hoffman - unquestioned assumption of other perspectives - imputation - external observable proxy - to private, inner world - As I read Hoffman's use of the word "headset", it brought up some associations with the idea of "perspectival knowing" - There is the perspectival knowing of a species, - but also of the individual of a species - For humans, perspectival knowing must be contextualized within an imputation: - that other perspectives exist - in other words, that other private worlds exist - and ultimately, this is a widely accepted imputation of an inner private world - based upon public, external observable behavioral proxies - This imputation of the other is a fundamental imputation and assumption of the human condition which we all take for granted, - but because it is so foundational, never question

    4. There is another way that you can appreciate that

      for - adjacency - spirituality - science - silence of thoughts in meditation - descriptions of reality - map and territory - Donald Hoffman - nice adjacency - if our thoughts are dependent on and built upon inputs from our senses - and our senses only provide us with a map, and not the territory, - then thinking will only ever keep us in the map world

    5. Almost all of us think of ourselves as an object in spacetime only here for a short amount of time and will soon die

      for - quote - Almost all of us think of ourselves as an object in spacetime only here for a short amount of time and will soon die - Donald Hoffman

      • Almost all of us think of ourselves as
        • an object in spacetime only here for a short amount of time and will soon die.
      • When I say you transcend any scientific theory,
        • that means the theory that I am just a 160lb object in spacetime is just a theory and it's not the truth.
      • That's not the truth about who I am.
      • That's just a theory that I have because spacetime itself is just a theory.
      • Nothing inside spacetime is anything but my headset interpretation of a reality that infinitely transcends anything I can experience.
    6. learning by ostensive definition.

      for - definition - learning by ostensive definition - adjacency - ostensive definition - parents - external proxy - children's private experiences - This is a very deep insight and important point - Parents are stewards of culture and they lead their children into a world of shared names - It is important to note that - the parent who teaches the child the name for some aspect of reality - only ever has a proxy to the child's private experience of reality - That proxy is the externally observed behaviour of the child - In fact, we fundamentally only ever have public external proxies to the private, "inner" lives of others

    7. if a bat is sat there thinking that they understand the nature of reality when it's actually just a map

      for - comparison - bat umwelt vs human umwelt - good comparison - all sensory signals of living beings only ever generate maps of reality, - never 'reality' itself, whatever that may be - We humans can study other species and observe how their senses create their respective maps of reality - but our senses fall on the same continuum

    8. From an evolutionary point of view, perception is expensive

      for - quote/key insight - perception serves reproduction, not seeing reality as it is

      quote / key insight - perception serves reproduction, not seeing reality as it is - Donald Hoffman - From an evolutionary point of view, perception is expensive. - It takes a lot of calories. - You have to eat a lot of food - to run your brain and - to power your eyes and your ears. - And so you need to do shortcuts. - You need to make your sensory systems not chew up so much of your energy. - The more expensive your perceptual systems are, - the more you've got to eat to power those. - So that means you have to go out there and forage and put yourself at harm. - So there's a trade-off. - We try to do things cheaply in evolution. And you don't need to actually go for the truth because that's very very expensive.

    9. Darwin's theory says the probability is zero that any sensory system like eyes, ears, smell, touch, taste has ever been shaped to see any aspect of objective reality truly. So the probability is zero that you see any aspect of the truth. Period.

      for - quote - probability of zero that sensory organs are designed to help us see objective reality - Donald Hoffman


    1. The pace and style of the news-cast take some priority over the items in it.

      Not so much anymore; now we have news channels that talk about the same news all day long, circling back to the same topics they touched on earlier in the day.

    2. For the fact is that many of us do sit there

      I feel like with the change in streaming availability this is not so much the case anymore for younger watchers. We pick and choose when we want to watch our new or favorite shows. We can binge in little bits when we have time. We are not bound by the timing of television. What's more, we have many more options besides TV: podcasts, audiobooks. Television is no longer the go-to for entertainment.

    3. People can consciously select another channel or another programme, or switch off altogether.

      When I still had cable television years ago, I would switch between shows during commercial breaks so that I could catch multiple shows or pieces of shows at the same time. I have a serious hate for advertisements so I try to not watch them at all.

    4. In British commercial television there was a specific and formal undertaking that ‘programmes’ should not be interrupted by advertising; this could take place only in ‘natural breaks’:

      This seems like a more natural and sensible way to deal with advertisements. Manufacturing breaks just so that they can add in more ads is a terrible setup, and the audience recognizes that.

    5. or the main play was preceded by a curtain-raiser.

      I have discovered some of my favorite musicians and styles of music just by getting to shows early and actually listening to the opening acts or the intermission act of a show.

    1. Language change is natural, inevitable, and unstoppable. The only languages that do not change, that show no variation, are dead languages.

      This quote stuck out to me as well because it shows how language change is totally natural and unstoppable. Prof. Garley mentions the comparison between modern-day English and Shakespearean English, and that really made me think deeply about how vastly different the two are. The way we speak now is so far removed from how people communicated back then, and it proves the point that living languages are always shifting with how people actually use them. It also pushes back on prescriptive rules that try to hold onto outdated forms (like whom) even though most speakers don’t use them anymore. To me, it shows that change in language isn’t corruption, it’s proof that the language is very much alive and always adapting.

    2. Like table manners, prescriptive rules are imposed by an outside authority. Traditional grammar puts great stock in authorities. Something is right or wrong because a book or a teacher tells us so. But who gets to decide?

      This quote resonated with me because it highlights the distinction between prescriptive and descriptive grammar. Prescriptive rules aren’t really about how language works, but more about social habits that people turned into “rules” over time, enforced by teachers, editors, and grammar books. The table manners comparison makes sense; just like elbows on the table doesn’t stop you from eating, breaking these grammar “rules” doesn’t necessarily stop language from working. It also made me think about how these rules are made up by people, which raises the bigger question of who really gets to decide what’s considered “proper” English. Is it people in higher social classes? And historically, has this “standard English” been tied to white, upper-class norms through what linguists call "standard language ideology"?

    1. aerosolization, where an infected individual generates virus-carrying particles that can remain suspended in the air for long periods.

      Examples of how aerosolization can occur, which then leads to pathogen particles becoming airborne.

    2. determined that increasing the relative humidity beyond 40% significantly reduces dispersal.

      So by increasing the humidity in a classroom/lab there could be more control over the dispersal of airborne pathogens?

    1. I was told by my former boss that writing was my worst skill and I should hone my talents toward account management.

      This underlines how other people's judgements try to limit our potential. Even though her boss told her that her writing was her weakness and that she should move on and try something else, that criticism did not define her abilities or her future success. The author's story is the perfect example that even when outside opinions are discouraging and limiting, our personal conviction can carry us through and allow us to grow beyond the limits that others try to place on us.

    2. And when the doctor finally called her daughter, me, who spoke in perfect English -- lo and behold -- we had assurances the CAT scan would be found, promises that a conference call on Monday would be held, and apologies for any suffering my mother had gone through for a most regrettable mistake

      This shows the unfair reality that people are judged by the way that they speak rather than by who they are. When the doctor only stepped in when the daughter who spoke "perfect English" appeared, the bias against those who communicate differently was made readily apparent. Even though she voiced her concerns over her family history with brain tumors, she wasn't taken seriously because her language was imperfect. This reveals how language can create barriers to respect and care, even when the person's needs are serious and just as real and valid as the needs of those who speak "perfect English."

    3. That is, because she expressed them imperfectly her thoughts were imperfect

      This segment suggests that because the author's mother expressed her thoughts imperfectly, others assumed her thoughts were imperfect as well. From my own experiences in foreign language classes, I know that even when my grammar and/or vocabulary was limited, my ideas were still there, and they were complex and meaningful. Language fluency does not determine the depth of thought, it changes the way that those thoughts are expressed.

    4. Some say they understand none of it, as if she were speaking pure Chinese. But to me, my mother's English is perfectly clear, perfectly natural. It's my mother tongue. Her language, as I hear it, is vivid, direct, full of observation and imagery. That was the language that helped shape the way I saw things, expressed things, made sense of the world.

      This shows how language is deeply linked to perspective. Even though many people cannot understand the author's mother, the author understands her on a basis that goes farther than grammar, she understands the love, culture, and perspective that her language carries. It shows that even though strangers cannot understand, her words are reflective of her own lived experience and that language is not solely defined by "perfection" in sound, but rather by the truth and worldview that form the basis for the words.

    1. research is what makes the difference between facts and opinions.

      A fact comes from observing reality and having evidence, so you can't deny it; an opinion is saying what you believe while having no evidence.

    2. We should be informed consumers of the information made available to us because decisions based on this information have significant consequences.

      I truly agree that we should be informed consumers of the information available to us; the way you take in and process information is critical to how you may act, especially when you are acting on little information.

    3. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores (Shaw & Tan, 2015).

      The use of research, like this study, helps to illustrate the point.

    4. How are children influenced by the media they are exposed to?

      How are children being influenced by the media they are exposed to? I feel it's not just children who are easily influenced; a lot of adults are as well. Children are just an easier target since they're still growing and learning the sense of reality, what's real or not and how things function, which makes them more vulnerable and gullible.

    5. A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

      I've always agreed that watching graphic or triggering videos, such as violent or sad ones, will impact you in some way. I feel watching violent videos affects certain people more than others because of sensitive emotions; sometimes it won't make you behave badly, but it will make you more fearful or prone to worry. So I do believe that what you feed your mindset through watching will have an effect in some way, but we don't know how until it happens, because we don't know how our brains process certain images, how they can make us feel, and how the way we feel affects the way we act.

    1. eLife Assessment

      This manuscript introduces a potentially valuable large-scale fMRI dataset pairing vision and language, and employs rigorous decoding analyses to investigate how the brain represents visual, linguistic, and imagined content. The current manuscript blurs the line between a resource paper and a theoretical contribution, and the evidence for truly modality-agnostic representations remains incomplete at this stage. Clarifying the conceptual aims and strengthening both the dataset technicality and the quantitative analyses would improve the manuscript's significance for the fields of cognitive neuroscience and multimodal AI.

    2. Reviewer #1 (Public review):

      Summary:

      The authors introduce a densely-sampled dataset where 6 participants viewed images and sentence descriptions derived from the MS Coco database over the course of 10 scanning sessions. The authors further showcase how image and sentence decoders can be used to predict which images or descriptions were seen, using pairwise decoding across a set of 120 test images. The authors find decodable information widely distributed across the brain, with a left-lateralized focus. The results further showed that modality-agnostic models generally outperformed modality-specific models, and that data based on captions was not explained better by caption-based models but by modality-agnostic models. Finally, the authors decoded imagined scenes.

      Strengths:

      (1) The dataset presents a potentially very valuable resource for investigating visual and semantic representations and their interplay.

      (2) The introduction and discussion are very well written in the context of trying to understand the nature of multimodal representations and present a comprehensive and very useful review of the current literature on the topic.

      Weaknesses:

      (1) The paper is framed as presenting a dataset, yet most of it revolves around the presentation of findings in relation to what the authors call modality-agnostic representations, and in part around mental imagery. This makes it very difficult to assess the manuscript, whether the authors have achieved their aims, and whether the results support the conclusions.

      (2) While the authors have presented a potential use case for such a dataset, there is currently far too little detail regarding data quality metrics expected from the introduction of similar datasets, including the absence of head-motion estimates, quality of intersession alignment, or noise ceilings of all individuals.

      (3) The exact methods and statistical analyses used are still opaque, making it hard for a reader to understand how the authors achieved their results. More detail in the manuscript would be helpful, specifically regarding the exact statistical procedures, what tests were performed across, or how data were pooled across participants.

      (4) Many findings (e.g., Figure 6) are still qualitative but could be supported by quantitative measures.

      (5) Results are significant in regions that typically lack responses to visual stimuli, indicating potential bias in the classifier. This is relevant for the interpretation of the findings. A classification approach less sensitive to outliers (e.g., 70-way classification) could avoid this issue. Given the extreme collinearity of the experimental design, regressors in close temporal proximity will be highly similar, which could lead to leakage effects.

      (6) The manuscript currently lacks a limitations section, specifically regarding the design of the experiment. This involves the use of the overly homogenous dataset Coco, which invites overfitting, the mixing of sentence descriptions and visual images, which invites imagery of previously seen content, and the use of a 1-back task, which can lead to carry-over effects to the subsequent trial.

      (7) I would urge the authors to clarify whether the primary aim is the introduction of a dataset and showing the use of it, or whether it is the set of results presented. This includes the title of this manuscript. While the decoding approach is very interesting and potentially very valuable, I believe that the results in the current form are rather descriptive, and I'm wondering what specifically they add beyond what is known from other related work. This includes imagery-related results. This is completely fine! It just highlights that a stronger framing as a dataset is probably advantageous for improving the significance of this work.

    3. Reviewer #2 (Public review):

      Summary:

      This study introduces SemReps-8K, a large multimodal fMRI dataset collected while subjects viewed natural images and matched captions, and performed mental imagery based on textual cues. The authors aim to train modality-agnostic decoders--models that can predict neural representations independently of the input modality - and use these models to identify brain regions containing modality-agnostic information. They find that such decoders perform comparably or better than modality-specific decoders and generalize to imagery trials.

      Strengths:

      (1) The dataset is a substantial and well-controlled contribution, with >8,000 image-caption trials per subject and careful matching of stimuli across modalities - an essential resource for testing theories of abstract and amodal representation.

      (2) The authors systematically compare unimodal, multimodal, and cross-modal decoders using a wide range of deep learning models, demonstrating thoughtful experimental design and thorough benchmarking.

      (3) Their decoding pipeline is rigorous, with informative performance metrics and whole-brain searchlight analyses, offering valuable insights into the cortical distribution of shared representations.

      (4) Extension to mental imagery decoding is a strong addition, aligning with theoretical predictions about the overlap between perception and imagery.

      Weaknesses:

      While the decoding results are robust, several critical limitations prevent the current findings from conclusively demonstrating truly modality-agnostic representations:

      (1) Shared decoding ≠ abstraction: Successful decoding across modalities does not necessarily imply abstraction or modality-agnostic coding. Participants may engage in modality-specific processes (e.g., visual imagery when reading, inner speech when viewing images) that produce overlapping neural patterns. The analyses do not clearly disambiguate shared representational structure from genuinely modality-independent representations. Furthermore, in Figure 5, the modality-agnostic encoder did not perform better than the modality-specific decoder trained on images (in decoding images), but outperformed the modality-specific decoder trained on captions (in decoding captions). This asymmetry contradicts the premise of a truly "modality-agnostic" encoder. Additionally, given the similar performance between modality-agnostic decoders based on multimodal versus unimodal features, it remains unclear why neural representations did not preferentially align with multimodal features if they were truly modality-independent.

      (2) The current analysis cannot definitively conclude that the decoder itself is modality-agnostic, making "Qualitative Decoding Results" difficult to interpret in this context. This section currently provides illustrative examples, but lacks systematic quantitative analyses.

      (3) The use of mental imagery as evidence for modality-agnostic decoding is problematic. Imagery involves subjective, variable experiences and likely draws on semantic and perceptual networks in flexible ways. Strong decoding in imagery trials could reflect semantic overlap or task strategies rather than evidence of abstraction.

      The manuscript presents a methodologically sophisticated and timely investigation into shared neural representations across modalities. However, the current evidence does not clearly distinguish between shared semantics, overlapping unimodal processes, and true modality-independent representations. A more cautious interpretation is warranted. Nonetheless, the dataset and methodological framework represent a valuable resource for the field.

    4. Reviewer #3 (Public review):

      Summary:

      The authors recorded brain responses while participants viewed images and captions. The images and captions were taken from the COCO dataset, so each image has a corresponding caption, and each caption has a corresponding image. This enabled the authors to extract features from either the presented stimulus or the corresponding stimulus in the other modality. The authors trained linear decoders to take brain responses and predict stimulus features. "Modality-specific" decoders were trained on brain responses to either images or captions, while "modality-agnostic" decoders were trained on brain responses to both stimulus modalities. The decoders were evaluated on brain responses while the participants viewed and imagined new stimuli, and prediction performance was quantified using pairwise accuracy. The authors reported the following results:

      (1) Decoders trained on brain responses to both images and captions can predict new brain responses to either modality.

      (2) Decoders trained on brain responses to both images and captions outperform decoders trained on brain responses to a single modality.

      (3) Many cortical regions represent the same concepts in vision and language.

      (4) Decoders trained on brain responses to both images and captions can decode brain responses to imagined scenes.

      Strengths:

      This is an interesting study that addresses important questions about modality-agnostic representations. Previous work has shown that decoders trained on brain responses to one modality can be used to decode brain responses to another modality. The authors build on these findings by collecting a new multimodal dataset and training decoders on brain responses to both modalities.

      To my knowledge, SemReps-8K is the first dataset of brain responses to vision and language where each stimulus item has a corresponding stimulus item in the other modality. This means that brain responses to a stimulus item can be modeled using visual features of the image, linguistic features of the caption, or multimodal features derived from both the image and the caption. The authors also employed a multimodal one-back matching task, which forces the participants to activate modality-agnostic representations. Overall, SemReps-8K is a valuable resource that will help researchers answer more questions about modality-agnostic representations.

      The analyses are also very comprehensive. The authors trained decoders on brain responses to images, captions, and both modalities, and they tested the decoders on brain responses to images, captions, and imagined scenes. They extracted stimulus features using a range of visual, linguistic, and multimodal models. The modeling framework appears rigorous, and the results offer new insights into the relationship between vision, language, and imagery. In particular, the authors found that decoders trained on brain responses to both images and captions were more effective at decoding brain responses to imagined scenes than decoders trained on brain responses to either modality in isolation. The authors also found that imagined scenes can be decoded from a broad network of cortical regions.

      Weaknesses:

      The characterization of "modality-agnostic" and "modality-specific" decoders seems a bit contradictory. There are three major choices when fitting a decoder: the modality of the training stimuli, the modality of the testing stimuli, and the model used to extract stimulus features. However, the authors characterize their decoders based on only the first choice-"modality-specific" decoders were trained on brain responses to either images or captions, while "modality-agnostic" decoders were trained on brain responses to both stimulus modalities. I think that this leads to some instances where the conclusions are inconsistent with the methods and results.

      First, the authors suggest that "modality-specific decoders are not explicitly encouraged to pick up on modality-agnostic features during training" (line 137) while "modality-agnostic decoders may be more likely to leverage representations that are modality-agnostic" (line 140). However, whether a decoder is required to learn modality-agnostic representations depends on both the training responses and the stimulus features. Consider the case where the stimuli are represented using linguistic features of the captions. When you train a "modality-specific" decoder on image responses, the decoder is forced to rely on modality-agnostic information that is shared between the image responses and the caption features. On the other hand, when you train a "modality-agnostic" decoder on both image responses and caption responses, the decoder has access to the modality-specific information that is shared by the caption responses and the caption features, so it is not explicitly required to learn modality-agnostic features. As a result, while the authors show that "modality-agnostic" decoders outperform "modality-specific" decoders in most conditions, I am not convinced that this is because they are forced to learn more modality-agnostic features.

      Second, the authors claim that "modality-specific decoders can be applied only in the modality that they were trained on, while "modality-agnostic decoders can be applied to decode stimuli from multiple modalities, even without knowing a priori the modality the stimulus was presented in" (line 47). While "modality-agnostic" decoders do outperform "modality-specific" decoders in the cross-modality conditions, it is important to note that "modality-specific" decoders still perform better than expected by chance (figure 5). It is also important to note that knowing about the input modality still improves decoding performance even for "modality-agnostic" decoders, since it determines the optimal feature space-it is better to decode brain responses to images using decoders trained on image features, and it is better to decode brain responses to captions using decoders trained on caption features.
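      To make the decoder terminology in the reviews above concrete, here is a minimal, purely illustrative sketch; it is not the authors' code, and the array names, shapes, feature dimensions, and the choice of ridge regression are all assumptions. It shows a "modality-specific" decoder (trained on image responses only) versus a "modality-agnostic" decoder (trained on responses to both modalities), scored with pairwise accuracy.

      ```python
      # Illustrative sketch only: not the authors' pipeline. Shapes and names are assumed.
      import numpy as np
      from sklearn.linear_model import Ridge

      def pairwise_accuracy(pred, true):
          """Fraction of test-item pairs (i, j) where pred_i correlates more with
          true_i than with true_j (and pred_j more with true_j than with true_i)."""
          n, d = true.shape
          p = (pred - pred.mean(1, keepdims=True)) / pred.std(1, keepdims=True)
          t = (true - true.mean(1, keepdims=True)) / true.std(1, keepdims=True)
          corr = p @ t.T / d                      # corr[i, j] = corr(pred_i, true_j)
          wins, total = 0, 0
          for i in range(n):
              for j in range(i + 1, n):
                  wins += (corr[i, i] > corr[i, j]) + (corr[j, j] > corr[j, i])
                  total += 2
          return wins / total

      # Hypothetical data: brain responses (trials x voxels) and stimulus features
      # (trials x feature dims) for image trials and caption trials.
      rng = np.random.default_rng(0)
      X_img, X_cap = rng.normal(size=(400, 5000)), rng.normal(size=(400, 5000))
      F_img, F_cap = rng.normal(size=(400, 512)), rng.normal(size=(400, 512))
      X_test, F_test = rng.normal(size=(120, 5000)), rng.normal(size=(120, 512))

      # "Modality-specific" decoder: trained on image responses only.
      dec_img = Ridge(alpha=1e3).fit(X_img, F_img)
      # "Modality-agnostic" decoder: trained on responses to both modalities.
      dec_both = Ridge(alpha=1e3).fit(np.vstack([X_img, X_cap]),
                                      np.vstack([F_img, F_cap]))

      print(pairwise_accuracy(dec_both.predict(X_test), F_test))
      ```

      On real data, the feature matrices would come from a vision, language, or multimodal model rather than random draws; the point of the sketch is only how the two training regimes and the pairwise metric fit together.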

    1. "In fact, discontinuing invasive ventilation in favor of noninvasive respiratory support has been considered the single best approach that neonatologists can implement to reduce BPD."

      Does reduction of invasive respiratory support decrease incidence of BPD? As discussed in respiratory class readings volutrauma, barotrauma, and breath stacking can still occur when utilizing non-invasive support.

    1. The plague began in central Kyrgystan and killed up to 25 million people in China in the 1330s and 1340s, about 15 years before it first arrived in Constantinople.

      I always assumed the 'Black Plague' originated in Europe so I was surprised to see evidence that the plague began in Kyrgyzstan.

      I have always liked science, so I researched it a bit further and found that traces of the plague bacteria were discovered in the tooth enamel of the deceased buried in Kyrgyzstan, people who were said to have died of pestilence.

    2. Young men who wanted to become civil administrators in China entered training schools that concentrated on calligraphy and the teachings of Confucius.

      Confucius was a Chinese philosopher.

      This compares to our society in how our laws and secular teachings were influenced by Socrates, Plato, and Aristotle.

    3. The early establishment of a professional administrative class of “scholar-officials” was a remarkable element of imperial Chinese rule that made it more stable, longer-lasting, and at least potentially less oppressive than empires in other parts of the world. The imperial courts sent thousands of highly-educated administrators throughout the empire and China was ruled not by hereditary nobles or even elected representatives, but by a class of men who had received rigorous training and had passed very stringent examinations to prove themselves qualified to lead.

      I found this interesting, as positions of leadership here are based on scholarly and intellectual merit, not democracy or heredity. This section made me consider my town's local city council. I live in a small rural town of fewer than 3,000 people. When there are elections for school board representatives, city council, or other positions, most people in town (locals born and raised here) vote for their relatives or family friends over the most qualified, competent candidate. Right now the town itself is not faring well because of irresponsible spending, and nobody on the outside wants to move here or start businesses here.

    1. particularly the rate constant k,

      Before, you state that k has the least direct relation to fertilisation. So why is it particularly interesting here? A clarifying sentence might help.

    2. * Desorbable P (Pdesorb):

      It is not ideal to use a colon immediately after another colon.

      I suggest rephrasing the sentence, e.g. to "The desorbable P parameter (Pdesorb) behaved very ..."
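      For context on how the two parameters discussed in these comments usually relate, a common first-order desorption form is sketched below; this is offered only as a generic illustration and may differ from the model actually used in the manuscript:

      ```latex
      % Generic first-order desorption kinetics (illustrative only, not the manuscript's model):
      P(t) = P_{\mathrm{desorb}}\left(1 - e^{-kt}\right)
      ```

      In such a form, Pdesorb sets the plateau of desorbable P while k only sets how quickly that plateau is approached, which would be consistent with k having the least direct link to fertilisation.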

    1. eLife Assessment

      This study provides new important insights concerning pathogen variant-specific reproduction parameters from molecular sequencing and case finding. The methods for inferring which variants will likely emerge in subsequent epidemic cycles are solid. This article is of broad interest to infectious disease epidemiology researchers and mathematical modellers of the COVID-19 pandemic.

    2. Reviewer #1 (Public review):

      In this manuscript, the authors describe a new method to more accurately estimate the fitness advantage of new SARS-CoV-2 variants when they emerge. This was a key public health question during the pandemic and drove a number of important policy choices during the latter half of the acute phase of the pandemic. They attempt to link fitness to expected wave size. The analyses are tested on data from 33 different US states for which the data were considered sufficient. The main novelty of the method is that it links the frequency of variants to the number of cases and thus estimates fitness in terms of the reproduction number.

      The results with the new method appear to be more consistent estimates of fitness advantage over time, suggesting that the methods suggested are more accurate than the comparator methods.

      Given that the paper presents a methodological advancement, the absence of a simulation study is a weakness. I am satisfied that the trends estimated via the different approaches suggest a useful advancement for a difficult problem. However, the work would have been considerably stronger if synthetic data had been used to illustrate without doubt how the revised method better captures underlying, pre-specified differences in fitness.

    3. Reviewer #2 (Public review):

      Summary:

      This study develops a joint epidemiological and population genetic model to infer variant-specific effective reproduction numbers Rt and growth advantages of SARS-CoV-2 variants using US case counts and sequence data (Jan 2021-Mar 2022). For this, they use the commonly used renewal equation framework, observation models (negative binomial with zero inflation and Dirichlet-multinomial likelihoods, both to account for overdispersion). For the parameterization of Rt, again, they used a classic cubic spline basis expansion. Additionally, they use Bayesian Inference, specifically SVI. I was reassured to see the sensitivity analysis on the generation time to check effects on Rt.
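      For readers less familiar with the framework mentioned above, the backbone of such models is the renewal equation; a generic variant-specific form is written below using standard notation, which is not necessarily the manuscript's exact parameterization:

      ```latex
      % Generic variant-specific renewal equation (illustrative; standard symbols, not the paper's):
      \mathbb{E}\!\left[I_t^{(v)}\right] = R_t^{(v)} \sum_{s=1}^{S} g_s\, I_{t-s}^{(v)},
      \qquad
      \log R_t^{(v)} = \sum_{k} \beta_k^{(v)} B_k(t)
      ```

      Here I_t^(v) is the incidence of variant v on day t, g_s is the generation-interval distribution, and B_k are cubic spline basis functions; observed case counts and variant sequence counts then enter through the negative-binomial and Dirichlet-multinomial likelihoods the reviewer mentions.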

      This is an incredibly robust study design. Integrating case and sequence data enables estimation of both absolute and relative variant fitness, overcoming limitations of frequency-only or case-only models. This reminds me of https://www.medrxiv.org/content/10.1101/2023.01.02.23284123v4.full

      I also really appreciated the flexible and interpretable parameterization of the renewal equations with splines. But I may be biased since I really like splines!

      The approach is justified; however, it has some notable limitations, which I detail below.

      (1) The model does not account for demographic stochasticity or transmission overdispersion (superspreading), which are known to affect SARS-CoV-2 dynamics and can bias Rt, especially in low incidence or early introduction phases.

      (2) While the authors explore sensitivity to the generation time, the reliance on fixed generation time parameters (with some adjustments for Delta/Omicron) may still bias results.

      (3) There is no explicit adjustment for population immunity, which limits the ability to disentangle intrinsic variant fitness (even though the model allows for inclusion of covariates). This, to me, is one of two major flaws in the study.

      (4) The second major flaw in my opinion is that there is no hierarchical pooling across states - each state is modeled independently. A hierarchical Bayesian model could borrow strength across states, improving estimates for states with sparse data and enabling more robust inference of shared variant effects.

      I would strongly recommend the following things in order of priority; I consider the first two points critical.

      (1) Implement a hierarchical model for variant growth advantages and Rt across states (a generic sketch of such partial pooling appears after this list).

      (2) Include time-varying covariates for vaccination rates, prior infection, and non-pharmaceutical interventions directly. This would help disentangle intrinsic variant transmissibility from changes in population susceptibility and behavior.

      (3) Extend the renewal model to a stochastic or branching process framework that explicitly models overdispersed transmission.

      (4) It would be good to allow for multiple seeding events per variant and per state. This can be informed by phylogeography in a minimum effort way and would improve the accuracy of Rt.

      (5) By now, I don't think it will be a surprise that addressing sampling bias is standard: reweight sequence data or compare results with independent surveillance data to assess the impact of non-representative sequencing.
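      As a purely illustrative note on recommendation (1) above, partial pooling of variant growth advantages across states is commonly specified along the following lines; the symbols and hyperparameters are generic and not taken from the manuscript:

      ```latex
      % Illustrative hierarchical (partial-pooling) prior for the log growth advantage
      % delta_{v,s} of variant v in state s; hyperparameter values are assumptions.
      \delta_{v,s} \sim \mathcal{N}(\mu_v, \sigma_v^2), \qquad
      \mu_v \sim \mathcal{N}(0, 1), \qquad
      \sigma_v \sim \mathrm{HalfNormal}(0.5)
      ```

      States with sparse sequence data are then shrunk toward the shared mean μ_v, which is the "borrowing strength" across states that the reviewer describes.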

    1. 2. Summarize the information contained in the chemical equation below. How would this reaction be classified? CaCl2(aq) + Na2CO3(aq)→CaCO3(s) + 2NaCl(aq)

      The equation CaCl₂(aq) + Na₂CO₃(aq) → CaCO₃(s) + 2NaCl(aq) shows that an aqueous solution of calcium chloride reacts with an aqueous solution of sodium carbonate to produce a solid precipitate of calcium carbonate and an aqueous solution of sodium chloride. This is a double displacement reaction that can be further classified as a precipitation reaction because it forms an insoluble solid.
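      Written as a net ionic equation, with Na⁺ and Cl⁻ left out as spectator ions, the precipitation step is: Ca²⁺(aq) + CO₃²⁻(aq) → CaCO₃(s).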

    1. [quoted passage illegible: the source text is garbled in this export]

      Perhaps this is why being a part of a team is such a unique experience. You may not otherwise have anything in common with a teammate, and outside of the sport you may never have crossed paths or become friends, but because you were on the same team, there is a unique bond formed that never really disappears.

    2. [quoted passage illegible: the source text is garbled in this export]

      The spoil sport rejects the rules and thus rejects the game itself. The cheater rejects the rules but still continues to take part in the game-world.

    3. [quoted passage illegible: the source text is garbled in this export]

      The idea that the tension element confers a kind of ethical aspect to play is one that I agree with; as the author says, when a game is full of tension, the player's character is revealed.

    4. [quoted passage illegible: the source text is garbled in this export]

      I didn't understand what this meant so I looked it up; apparently "warp and woof" refers to the fundamental components of weaving, and as an idiom it refers to the "essential foundation or base" of a structure. So, basically what he's saying is that repetition and alternation are essential components on which play is built.

    5. [quoted passage illegible: the source text is garbled in this export]

      I am curious to learn more about this link between play and the "sacred sphere." I do understand how the author tied the origins of ritual to play, but how exactly do they connect now? Would praying or going to one's place of worship for a service be considered a form of play?

    6. [quoted passage illegible: the source text is garbled in this export]

      I like that the author is taking time to emphasize that play and non-seriousness cannot be conflated.

    7. [quoted passage illegible: the source text is garbled in this export]

      Interesting. Sometimes I am tired and don't feel like going to basketball practice or training. But I do go, because I feel like I have to, and in general I do like the sport, just not all the time. According to this argument, on the days when I don't feel like going, am I not actually playing?

    8. [quoted passage illegible: the source text is garbled in this export]

      This thought makes me think of the way watching a dog or another animal run around highlights its muscles and athleticism. I've never thought about it this way, but physical play does, in a way, display the body in its most active and beautiful form.

    9. [quoted passage illegible: the source text is garbled in this export]

      I would argue that it can have a moral function; think back to the gambler. I would consider gambling to be both a form of play and, when out of control, a kind of vice.

    10. [quoted passage illegible: the source text is garbled in this export]

      This makes me think about our discussion in our first class; when considering what my favorite game is, I thought about the fact that I play a sport (basketball) but then hesitated to name that as a game. It is a game, of course, but after the years long experience I've had in training for the sport and competing very seriously, calling it a "game" feels in some ways very wrong. To me, it is more than a game; there are instances where some days it feels like work! With this in mind, can it still be considered a form of play?

    11. [quoted passage illegible: the source text is garbled in this export]

      I would say religion would also fit in this category (as it has roots in what the author is calling myth and ritual)

    12. [quoted passage illegible: the source text is garbled in this export]

      Interesting take- I never considered how language might have its basis in play. Now that I think about it though, the earliest forms of language were pictures-- in a sense, people were using their imaginations to create their own visual representations of what they saw in the world around them. Don't we consider a child scribbling or doodling in a coloring book to be a form of play? I suppose, then, that the beginning of language formation could be viewed this way as well.

    13. [quoted passage illegible: the source text is garbled in this export]

      I think what the author is trying to say is that in his examination of play, play itself will be the main subject and will not be viewed as some offshoot or secondary byproduct of another phenomenon

    14. [quoted passage illegible: the source text is garbled in this export]

      I love this thought. Play is one of the very few unequivocally universal concepts, just like love. I would go so far as to say every single person on this planet, regardless of where they are or what their life is like, has experienced play at some point in their life. When considering it from this perspective, play really is a unifying human experience.


    1. Rhetoric: Art of persuasion

      Look at the context and summarize before analyzing.

      Is the author consistent and supportive of the thesis or claim? Rhetorical moves? Tone? Objective? Does the author know the audience? Do you believe the author? Does the flow of the text make sense? Logical reasoning? Emotional appeal?

    1. while we wait in silence for that final luxury of fearlessness, the weight of that silence will choke us.

      This passage truly reveals the danger of waiting until we don't have fear to speak because silence will only grow heavier the longer it lingers. Even though fear will often never completely disappear, if we wait for it to go away completely, it is more likely that our silence will completely overtake us, and our thoughts and beliefs will never be shared.

    2. But primarily for us all, it is necessary to teach by living and speaking those truths which we believe and know beyond understanding.

      This also serves as a powerful reminder because if we truly embody the truths that we hold deeply in our lives, we will share them every opportunity we get. By living and speaking these truths, we teach others through our actions and words, which gives them an extra layer of power and credibility.

    3. self-determination - the decision to define ourselves, name ourselves, and speak for ourselves, instead of being defined and spoken for by others.

      This is such a powerful segment of this piece. In this definition, the author combines all of her personal experiences, and you can feel her conviction and determination radiating from the words. In so doing, she once again underscores the importance of speaking up because without standing up for what you believe in, the voices of others will overtake you, and you will lose a piece of yourself in the process.

    4. In the cause of silence, each of us draws the face of her own fear - fear of contempt, of censure, or some judgment, or recognition, of challenge, of annihilation

      This particular passage highlights how silence often stems from fear or hesitation, leading us to betray ourselves in a way by holding back our true voices. Although silence may feel safer, it limits our growth and impact. Because of this, choosing to speak is powerful, in that it confronts our fear, sparks change, and affirms our identity and the influence our voices can have both on those around us and on our situation.

    1. Rhetorical situation: context or set of circumstances out of which a text arises

      How the rhetorical situation (context) shapes the rhetorical act (text)

      Concepts: author, audience, setting, purpose, text

      Author authority and identity, values/perspective

      Audience engagement, demographic, assumptions of author, where article is published, context audience receives

      Setting, did something specific happen that provided motivation to speak out?

      Purpose, what’s being achieved

      Text format, image, written essay, protest?

    1. A megjelenő modális ablakban a Felhasználó egy legördülő listából választhatja ki a limitcsoportot. A felvett limitcsoportot a Hozzáadás gombbal lehet megerősíteni, vagy a Mégse gombbal törölni.

      In the displayed modal, the User can select the limit group from a dropdown list. The added limit group can be confirmed with the Add button or discarded with the Cancel button.

    2. A kijelölt időszakhoz tartozó index az Összetétel menüben jelenik meg a képernyő alján.

      The index linked to the selected period appears at the bottom of the screen in the Composition menu.

    3. A referenciaindexek rögzítésének feltétele, hogy a Index Instrumentumként rögzítésre kerüljön.

      A precondition for recording reference indexes is that the index needs to be registered as an Index Instrument.

    4. - előző neé+forgalmazás ha volt az utolsó hivatalos neé óta adja az arányt
      • previous NAV + distributions, if any occurred since the last official NAV, gives the ratio
    5. azt szabályozza, hogy a díjfizetést mikorra keresse, mikori díjfizetesekkel csökkentse a felhalmozott díjat.

      It determines the date as of which fee payments are looked up, i.e., fee payments of which date reduce the accrued fee.

    6. díj összeg főszabályként az alapdeviza (az alap/sorozat devizájának) kerekítése szerint van meghatározva.

      As a general rule, the fee amount is determined according to the rounding of the base currency (the currency of the fund/series).

    7. szerződés-/szabályzatszintű adat (rendszerben nem épül rá funkció)
      Contractual/regulation-level data (it does not affect any calculation in the system)
      
    8. - adott napi neé/beé: eszközérték számítás napra számolt díj. (pl. heti értékelés esetén hét napi díj számolt, aminek a bázisa az utolsó nap)
      • NAV/GAV of the given day: the fee is calculated for the asset-valuation day (e.g., in the case of weekly valuation, seven days of fee are calculated, with the last day as the basis). A purely illustrative calculation sketch follows the last item below.
    9. A díjtételek nyilvántartása érvényességi időszakok felvételével történik, a díjtétel paramétereinek rögzítésével. Így historikusan nyomonkövethetőek a korábban alkalmazott díjak paraméterei is. (Melyek esetleges újraszámolás esetén alkalmazhatóak)

      Fee items are registered by recording validity periods and the fee item parameters. This allows historical tracking of previously applied fee parameters as well (which can be applied during recalculations).

    10. itt a Felhasználó szabad szövegként adhatja meg a költség nevét.

      the User can specify the cost name as free text in this field.
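      As a purely illustrative aside on the fee logic translated above (a daily fee on the NAV basis, day counts spanning the valuation period, rounding to the base currency's precision), a generic sketch might look like the following. None of the names, the 365-day convention, or the rounding rule are taken from the documented system; they are assumptions for the example only.

      ```python
      # Illustrative sketch only: NOT the documented system's logic.
      # Generic daily fee accrual on a NAV basis, rounded to the base currency's precision.
      from decimal import Decimal, ROUND_HALF_UP

      def accrued_fee(nav_basis: Decimal, annual_rate: Decimal, days: int,
                      currency_decimals: int = 2) -> Decimal:
          """Fee for `days` calendar days on `nav_basis` (the last valuation day's NAV),
          rounded to the base currency's precision."""
          daily_rate = annual_rate / Decimal(365)          # assumed day-count convention
          fee = nav_basis * daily_rate * days
          quantum = Decimal(1).scaleb(-currency_decimals)  # e.g. 0.01 for 2 decimals
          return fee.quantize(quantum, rounding=ROUND_HALF_UP)

      # Weekly valuation example: seven days of fee, basis = NAV of the last day.
      print(accrued_fee(Decimal("125000000"), Decimal("0.015"), days=7))
      ```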

    1. Scared Straight. The program targets teenagers who are at risk for becoming involved in the criminal justice system. They visit prisons, where selected inmates describe the stark, violent realities of prison life (Figure 1.2). The idea is that when teens hear about how tough it is in prison, they will be scared into the “straight,” law-abiding life. The program makes intuitive sense, and your employer is considering a partnership between the residents of your detention center and the state prison system.

      Ruby Whitehorn

      The concept of the Scared Straight program is to scare young men away from life in the prison system and keep them from making decisions that will put them there. However, the program assumes that the kids will be scared, and it doesn't take individuality into account. We assume that kids exposed to this lifestyle will be influenced to run away from it. However, there have been studies on controlled groups, and the research shows that kids who go through these types of programs have an increased likelihood of committing crimes. The research practically shows that kids who are exposed to this type of life, violence, and crime are more likely to fall into that livelihood.

      What other controlled or uncontrolled studies can be done to test whether being exposed to the experience is helpful or harmful?

    1. “That you, a Northerner and a soldier, should presume to ask for the hand of a Southern lady, shows, sir, that you have not the least comprehension of us or of our country.”

      A suggestion that the South is a different country, despite Reconstruction.

    2. “I can wear my old muslin cape, but my arms will have to show, and my feet too,”

      There is an intimation here that the poverty brought on by the Civil War is actually an offense to Southern women's modesty.

    3. I shall be proud to constitute myself the one to rescue it for the benefit of posterity

      Cousin Copeland is still holding on to vestiges of the past. While he is a historian, his work is decidedly self-possessed.

    4. The Gardiston spirit was hospitable to the core; but these—these were the Vandals, the despots, under whose presence the whole fair land was groaning. No; she would not ask them in.

      A reminder that the Civil War created circumstances straining and obliterating tradition.

    5. She would have preferred to hold parley from the window over the doorway, like the ladies of olden time, but she feared it would not be dignified, seeing that the times were no longer olden, and therefore she went down to the entrance where the two were awaiting her. “Shall I ask them in?” she thought. “What would Aunt Margaretta have done?

      Her worry about propriety and call back to "olden times" highlights the "ancient" nature of the family and house. It reminds the reader of her lineage and the family's longevity in the South.

    1. Therefore, it is essential that laboratories host open conversations about diversity, equity and inclusion, as well as how racism manifests in macro- and microaggressions, unspoken expectations, routines and wealth disparities (72).

      I had some really valuable conversations with some other ecology friends who have experienced difficulties in the field being POCs

    2. Building on this, instructors can include in lessons on biodiversity and conservation the acknowledgement of multiple ways of understanding and valuing nature, including cultural, aesthetic and spiritual values, as well as non-Western valuation of ecosystems and biodiversity (67)

      this is what I found so valuable about many of my classes at W&M; this was an intentional focus

    1. __________________________________________________________________

      I am a traditional student, the advantage is that I don't have a job so after school I go to the town library and study and finish work.

    2. __________________________________________________________________

      I am a traditional student, the advantage is I don't have a job so after practice I'm able to direct my attention straight to my work.

    1. President McKinley became increasingly concerned about the safety of American lives and property in Cuba.

      Continuity: Imperialist actions are almost always preceded by capitalist motives.

    1. What do you value that will be richer in your future life because you will have a college education?

      I will hopefully open my own business with the education I've received from high school and college, and be successful.

    2. What do you anticipate will be the most difficult part of completing college? ________________________________________________________

      Having time for a job and school work at the same time

    3. ________________________________________________________

      I feel like the most difficult part will be keeping up with everything, but I am very confident I will overcome any difficulties with God by my side and by putting in the time.

    1. and, especially, government oppression and censorship, particularly during and after World War I, ultimately sank the party.

      Continuity: Legacy media and/or government propaganda swayed public opinion against movements that would seek to eradicate limits to socioeconomic mobility.

    2. Companies rose and fell—and investors suffered losses—as manufacturing firms struggled to maintain supremacy in their particular industries

      Larger firms could afford the "race to the bottom," so to speak. Given this, smaller companies were eliminated from the market because they did not have the hefty cash reserves that would allow them to invest in extra machinery and expand their workforce, which limited their ability to operate on thin margins during the "race."

    1.

      even the word of the great power means nothing

    2.

      The history -> when the leader is absent leads to turmoil -> Great damage for the globalized community

    1. ACH transfers involve a payer, a recipient, originating and receiving depository financial institutions (ODFI and RDFI), and an ACH operator that routes and processes the payment.

      That feels like more moving parts than is absolutely necessary!
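
      A rough sketch of the chain the quoted sentence describes (payer, ODFI, ACH operator, RDFI, recipient); the class and field names are illustrative and do not correspond to any real ACH message format:

      ```python
      # Illustrative only: the parties involved in a single ACH transfer, as described above.
      from dataclasses import dataclass

      @dataclass
      class AchTransfer:
          payer: str
          odfi: str        # originating depository financial institution (the payer's bank)
          operator: str    # ACH operator that batches and routes entries between banks
          rdfi: str        # receiving depository financial institution (the recipient's bank)
          recipient: str
          amount_cents: int

          def route(self) -> list[str]:
              """The order in which the payment instruction travels."""
              return [self.payer, self.odfi, self.operator, self.rdfi, self.recipient]

      print(AchTransfer("payer", "Payer's Bank", "ACH Operator", "Recipient's Bank", "recipient", 12_500).route())
      ```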

  2. 2025rebrand-9rooftopscom.pantheonsite.io
    1. eLife Assessment

      This study provides an important extension of credibility-based learning research with a well-controlled paradigm by showing how feedback reliability can distort reward-learning biases in a disinformation-like bandit task. The strength of evidence is convincing for the core effects reported (greater learning from credible feedback; robust computational accounts, parameter recovery) but incomplete for the specific claims about heightened positivity bias at low credibility, which depend on a single dataset, metric choices (absolute vs relative), and potential perseveration or cueing confounds. Limitations concerning external validity and task-induced cognitive load, and the use of relatively simple Bayesian comparators, suggest that incorporating richer active-inference/HGF benchmarks and designs that dissociate positivity bias from choice history would further strengthen this paper.

    2. Reviewer #1 (Public review):

      This is a well-designed and very interesting study examining the impact of imprecise feedback on outcomes on decision-making. I think this is an important addition to the literature and the results here, which provide a computational account of several decision-making biases, are insightful and interesting.

      I do not believe I have substantive concerns related to the actual results presented; my concerns are more related to the framing of some of the work. My main concern is regarding the assertion that the results prove that non-normative and non-Bayesian learning is taking place. I agree with the authors that their results demonstrate that people will make decisions in ways that demonstrate deviations from what would be optimal for maximizing reward in their task under a strict application of Bayes' rule. I also agree that they have built reinforcement learning models which do a good job of accounting for the observed behavior. However, the Bayesian models included are rather simple: per the authors' descriptions, applications of Bayes' rule with either fixed or learned credibility for the feedback agents. In contrast, several versions of the RL models are used, each modified to account for different possible biases. However, more complex Bayes-based models exist, notably active inference and even the hierarchical Gaussian filter. These formalisms are able to accommodate more complex behavior, such as affect and habits, which might make them more competitive with RL models. I think it is entirely fair to say that these results demonstrate deviations from an idealized and strict Bayesian context; however, the equivalence here of Bayesian and normative is, I think, misleading or at least requires better justification/explanation. This is because a great deal of work has been done to show that Bayes-optimal models can generate behavior or other outcomes that are clearly not optimal to an observer within a given context (consider hallucinations, for example) but which make sense in the context of how the model is constructed as well as the priors and desired states the model is given.

      As such, I would recommend that the language be adjusted to carefully define what is meant by normative and Bayesian and to recognize that work that is clearly Bayesian could potentially still be competitive with RL models if implemented to model this task. An even better approach would be to directly use one of these more complex modelling approaches, such as active inference, as the comparator to the RL models, though I would understand if the authors would want this to be a subject for future work.

      Abstract:

      The abstract is lacking in some detail about the experiments done, but this may be a limitation of the required word count. If word count is not an issue, I would recommend adding details of the experiments done and the results. One comment is that there is an appeal to normative learning patterns, but this suggests that learning patterns have a fixed optimal nature, which may not be true in cases where the purpose of the learning (e.g. to confirm the feeling of safety of being in an in-group) may not be about learning accurately to maximize reward. This can be accommodated in a Bayesian framework by modelling priors and desired outcomes. As such, the central premise that biased learning is inherently non-normative or non-Bayesian would, I think, require more justification. This is true in the introduction as well.

      Introduction:

      As noted above, the conceptualization of Bayesian learning as equivalent to normative learning, I think, requires further justification. Bayesian belief updating can be biased and non-optimal from an observer's perspective, while being optimal within the agent doing the updating if the priors/desired outcomes are set up to advantage these "non-optimal" modes of decision making.

      Results:

      I wonder why the agent was presented before the choice - since the agent is only relevant to the feedback after the choice is made. I wonder if that might have induced any false association between the agent identity and the choice itself. This is by no means a critical point but would be interesting to get the authors' thoughts.

      The finding that positive feedback increases learning is one that has been shown before and depends on valence, as the authors note. They expanded their reinforcement learning model to include valence, but they did not modify the Bayesian model in a similar manner. This lack of a valence or recency effect might also explain the failure of the Bayesian models in the preceding section, where the contrast effect is discussed. It is not unreasonable to imagine that if humans do employ Bayesian reasoning, this reasoning system has had its parameters tuned based on the real world, where recency of information does matter; affect has also been shown to be incorporable into Bayesian information processing (see the work by Hesp on affective charge and the large body of work by Ryan Smith). It may be that the Bayesian models chosen here require further complexity to capture the situation, just like some of the biases required updates to the RL models. This complexity, rather than being arbitrary, may be well justified by decision making in the real world.

      The methods mention several symptom scales- it would be interesting to have the results of these and any interesting correlations noted. It is possible that some of individual variability here could be related to these symptoms, which could introduce precision parameter changes in a Bayesian context and things like reward sensitivity changes in an RL context.

      Discussion:

      (For discussion, not a specific comment on this paper): One wonders also about participant beliefs about the experiment or the intent of the experimenters. I have often had participants tell me they were trying to "figure out" a task or find patterns even when this was not part of the experiment. This is not specific to this paper, but it may be relevant in the future to try and model participant beliefs about the experiment especially in the context of disinformation, when they might be primed to try and "figure things out".

      As a general comment, in the active inference literature, there has been discussion of state-dependent actions, or "habits", which are learned in order to help agents more rapidly make decisions, based on previous learning. It is also possible that what is being observed is that these habits are at play, and that they represent the cognitive biases. This is likely especially true given, as the authors note, the high cognitive load of the task. It is true that this would mean that full-force Bayesian inference is not being used in each trial, or in each experience an agent might have in the world, but this is likely adaptive on the longer timescale of things, considering resource requirements. I think in this case you could argue that we have a departure from "normative" learning, but that is not necessarily a departure from any possible Bayesian framework, since these biases could potentially be modified by the agent or eschewed in favor of more expensive full-on Bayesian learning when warranted. Indeed in their discussion on the strategy of amplifying credible news sources to drown out low-credibility sources, the authors hint to the possibility of longer term strategies that may produce optimal outcomes in some contexts, but which were not necessarily appropriate to this task. As such, the performance on this task- and the consideration of true departure from Bayesian processing- should be considered in this wider context. Another thing to consider is that Bayesian inference is occurring, but that priors present going in produce the biases, or these biases arise from another source, for example factoring in epistemic value over rewards when the actual reward is not large. This again would be covered under an active inference approach, depending on how the priors are tuned. Indeed, given the benefit of social cohesion in an evolutionary perspective, some of these "biases" may be the result of adaptation. For example, it might be better to amplify people's good qualities and minimize their bad qualities in order to make it easier to interact with them; this entails a cost (in this case, not adequately learning from feedback and potentially losing out sometimes), but may fulfill a greater imperative (improved cooperation on things that matter). Given the right priors/desired states, this could still be a Bayes-optimal inference at a social level and as such may be ingrained as a habit which requires effort to break at the individual level during a task such as this.

      The authors note that this task does not relate to "emotional engagement" or "deep, identity-related, issues". While I agree that this is likely mostly true, it is also possible that just being told one is being lied to might elicit an emotional response that could bias responses, even if this is a weak response.

      Comments on revisions:

      In their updated version, the authors have made some edits to address my concerns regarding the framing of the 'normative' Bayesian model, clarifying that they utilized a simple Bayesian model which is intended to adhere in an idealized manner to the intended task structure, though further simulations would have been ideal.

      The authors, however, did not take my recommendation to explore the symptoms in the symptom scales they collected as being a potential source of variability. They note that these were for hypothesis generation and were exploratory, fair enough, but this study is not small and there should have been sufficient sample size for a very reasonable analysis looking at symptom scores.

      However, overall the toned down claims and clarifications of intent are adequate responses to my previous review.

    3. Reviewer #2 (Public review):

      This important paper studies the problem of learning from feedback given by sources of varying credibility. The convincing combination of experiment and computational modeling helps to pin down properties of learning, while opening unresolved questions for future research.

      Summary:

      This paper studies the problem of learning from feedback given by sources of varying credibility. Two bandit-style experiments are conducted in which feedback is provided with uncertainty, but from known sources. Bayesian benchmarks are provided to assess normative facets of learning, and alternative credit assignment models are fit for comparison. Some aspects of normativity appear, in addition to possible deviations such as asymmetric updating from positive and negative outcomes.

      Strengths:

      The paper tackles an important topic, with a relatively clean cognitive perspective. The construction of the experiment enables the use of computational modeling. This helps to pinpoint quantitatively the properties of learning and formally evaluate their impact and importance. The analyses are generally sensible, and advanced parameter recovery analyses (including cross-fitting procedure) provide confidence in the model estimation and comparison. The authors have very thoroughly revised the paper in response to previous comments.

      Weaknesses:

      The authors acknowledge the potential for cognitive load and the interleaved task structure to play a meaningful role in the results, though leave this for future work. This is entirely reasonable, but remains a limitation in our ability to generalize the results. Broadly, some of the results obtain in cases where the extent of generalization is not always addressed and remains uncertain.

    4. Reviewer #3 (Public review):

      Summary

      This paper investigates how disinformation affects reward learning processes in the context of a two-armed bandit task, where feedback is provided by agents with varying reliability (with lying probability explicitly instructed). They find that people learn more from credible sources, but also deviate systematically from optimal Bayesian learning: They learned from uninformative random feedback, learned more from positive feedback, and updated too quickly from fully credible feedback (especially following low-credibility feedback). Overall, this study highlights how misinformation could distort basic reward learning processes, without appeal to higher order social constructs like identity.

      Strengths

      • The experimental design is simple and well-controlled; in particular, it isolates basic learning processes by abstracting away from social context
      • Modeling and statistics meet or exceed standards of rigor
      • Limitations are acknowledged where appropriate, especially those regarding external validity
      • The comparison model, Bayes with biased credibility estimates, is strong; deviations are much more compelling than e.g. a purely optimal model
      • The conclusions are of substantial interest from both a theoretical and applied perspective

      Weaknesses

      The authors have addressed most of my concerns with the initial submission. However, in my view, evidence for the conclusion that less credible feedback yields a stronger positivity bias remains weak. This is due to two issues.

      Absolute or relative positivity bias?

      The conclusion of greater positivity bias for less credible feedback (Fig 5) hinges on the specific way in which positivity bias is defined. Specifically, we only see the effect when normalizing the difference in sensitivity to positive vs. negative feedback by the sum. I appreciate that the authors present both and add the caveat whenever they mention the conclusion. However, without an argument that the relative definition is more appropriate, the fact of the matter is that the evidence is equivocal.

      There is also a good reason to think that the absolute definition is more appropriate. As expected, participants learn more from credible feedback. Thus, normalizing by average learning (as in the relative definition) amounts to dividing the absolute difference by increasingly large numbers for more credible feedback. If there is a fixed absolute positivity bias (or something that looks like it), the relative bias will necessarily be lower for more credible feedback. In fact, the authors' own results demonstrate this phenomenon (see below). A reduction in relative bias thus provides weak evidence for the claim.
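
      For concreteness, a hypothetical numerical illustration of the normalization effect described above (the CA values are invented for illustration, not taken from the paper):

      ```python
      # Hypothetical numbers only: a fixed absolute positivity bias looks smaller in
      # *relative* terms whenever overall learning (the denominator) is larger.
      for label, ca_pos, ca_neg in [("high credibility", 1.1, 0.9),
                                    ("low credibility", 0.4, 0.2)]:
          absolute = ca_pos - ca_neg                # 0.2 in both cases
          relative = absolute / (ca_pos + ca_neg)   # 0.10 vs. 0.33
          print(f"{label}: absolute bias = {absolute:.2f}, relative bias = {relative:.2f}")
      ```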

      It is interesting that the discovery study shows evidence of a drop in absolute bias. However, for me, this just raises questions. Why is there a difference? Was one just a fluke? If so, which one?

      Positivity bias or perseveration?

      Positivity bias and perseveration will both predict a stronger relationship between positive (vs. negative) feedback and future choice. They can thus be confused for each other when inferred from choice data. This potentially calls into question all the results on positivity bias.

      The authors clearly identify this concern in the text and go to considerable lengths to rule it out. However, the new results (in revision 1) show that a perseveration-only model can in fact account for the qualitative pattern in the human data (the CA parameters). This contradicts the current conclusion:

      Critically, however, these analyses also confirmed that perseveration cannot account for our main finding of increased positivity bias, relative to the overall extent of CA, for low-credibility feedback.

      Figure 24c shows that the credibility-CA model does in fact show stronger positivity bias for less credible feedback. The model distribution for credibility 1 is visibly lower than for credibilities 0.5 and 0.75.

      The authors need to be clear that it is the magnitude of the effect that the perseveration-only model cannot account for. Furthermore, they should additionally clarify that this is true only for models fit to data; it is possible that the credibility-CA model could capture the full size of the effect with different parameters (which could fit best if the model was implemented slightly differently).

      The authors could make the new analyses somewhat stronger by using parameters optimized to capture just the pattern in CA parameters (for example by MSE). This would show that the models are in principle incapable of capturing the effect. However, this would be a marginal improvement because the conclusion would still rest on a quantitative difference that depends on specific modeling assumptions.

      New simulations clearly demonstrate the confound in relative bias

      Figure 24 also speaks to the relative vs. absolute question. The model without positivity bias shows a slightly stronger absolute "positivity bias" for the most credible feedback, but a weaker relative bias. This is exactly in line with the logic laid out above. In standard bandit tasks, perseveration can be quite well-captured by a fixed absolute positivity bias, which is roughly what we see in the simulations (I'm not sure what to make of the slight increase; perhaps a useful lead for the authors). However, when we divide by average credit assignment, we now see a reduction. This clearly demonstrates that a reduction in relative bias can emerge without any true differences in positivity bias.

      Given everything above, I think it is unlikely that the present data can provide even "solid" evidence for the claim that positivity bias is greater with less credible feedback. This confound could be quickly ruled out, however, by a study in which feedback is sometimes provided in the absence of a choice. This would empirically isolate positivity bias from choice-related effects, including perseveration.

    5. Author response:

      The following is the authors’ response to the original reviews

      Reviewer #1 (Public review):

      This is a well-designed and very interesting study examining the impact of imprecise feedback on outcomes in decision-making. I think this is an important addition to the literature, and the results here, which provide a computational account of several decision-making biases, are insightful and interesting.

      We thank the reviewer for highlighting the strengths of this work.

      I do not believe I have substantive concerns related to the actual results presented; my concerns are more related to the framing of some of the work. My main concern is regarding the assertion that the results prove that non-normative and non-Bayesian learning is taking place. I agree with the authors that their results demonstrate that people will make decisions in ways that demonstrate deviations from what would be optimal for maximizing reward in their task under a strict application of Bayes' rule. I also agree that they have built reinforcement learning models that do a good job of accounting for the observed behavior. However, the Bayesian models included are rather simple, per the author's descriptions, applications of Bayes' rule with either fixed or learned credibility for the feedback agents. In contrast, several versions of the RL models are used, each modified to account for different possible biases. However, more complex Bayes-based models exist, notably active inference, but even the hierarchical Gaussian filter. These formalisms are able to accommodate more complex behavior, such as affect and habits, which might make them more competitive with RL models. I think it is entirely fair to say that these results demonstrate deviations from an idealized and strict Bayesian context; however, the equivalence here of Bayesian and normative is, I think, misleading or at least requires better justification/explanation. This is because a great deal of work has been done to show that Bayes optimal models can generate behavior or other outcomes that are clearly not optimal to an observer within a given context (consider hallucinations for example), but which make sense in the context of how the model is constructed as well as the priors and desired states the model is given.

      As such, I would recommend that the language be adjusted to carefully define what is meant by normative and Bayesian and to recognize that work that is clearly Bayesian could potentially still be competitive with RL models if implemented to model this task. An even better approach would be to directly use one of these more complex modelling approaches, such as active inference, as the comparator to the RL models, though I would understand if the authors would want this to be a subject for future work.

      We thank the reviewer for raising this crucial and insightful point regarding the framing of our results and the definitions of 'normative' and 'Bayesian' learning. Our primary aim in this work was to characterize specific behavioral signatures that demonstrate deviations from predictions generated by a strict, idealized Bayesian framework when learning from disinformation (which we term “biases”). We deliberately employed relatively simple Bayesian models as benchmarks to highlight these specific biases. We fully agree that more sophisticated Bayes-based models (as mentioned by the reviewer, or others) could potentially offer alternative mechanistic explanations for participant behavior. However, we currently do not have a strong notion about which Bayesian models can encompass our findings, and hence, we leave this important question for future work.

      To enhance clarity within the current manuscript, we now avoid the use of the term “normative” to refer to our Bayesian models, using the term “ideal” instead. We also define more clearly what exactly we mean by that notion when the ideal model is described:

      “This model is based on the idealized assumption that during the feedback stage of each trial, the value of the chosen bandit is updated (based on feedback valence and credibility) according to Bayes' rule, reflecting perfect adherence to the instructed task structure (i.e., how true outcomes and feedback are generated).”

      Moreover, we have added a few sentences in the discussion commenting on how more complex Bayesian models might account for our empirical findings:

      “However, as hypothesized, when facing potential disinformation, we also find that individuals exhibit several important biases, i.e., deviations from strictly idealized Bayesian strategies. Future studies should explore if, and under what assumptions about the task’s generative structure and/or the learner’s priors and objectives, more complex Bayesian models (e.g., active inference (58)) might account for our empirical findings.”

      Abstract:

      The abstract is lacking in some detail about the experiments done, but this may be a limitation of the required word count. If word count is not an issue, I would recommend adding details of the experiments done and the results.

      We thank the reviewer for their valuable suggestion. We have now included more details about the experiment in the abstract:

      “In two experiments, participants completed a two-armed bandit task, where they repeatedly chose between two lotteries and received outcome feedback from sources of varying credibility, who occasionally disseminated disinformation by lying about the true choice outcome (e.g., reporting non-reward when a reward was truly earned, or vice versa).”

      One comment is that there is an appeal to normative learning patterns, but this suggests that learning patterns have a fixed optimal nature, which may not be true in cases where the purpose of the learning (e.g. to confirm the feeling of safety of being in an in-group) may not be about learning accurately to maximize reward. This can be accommodated in a Bayesian framework by modelling priors and desired outcomes. As such, the central premise that biased learning is inherently non-normative or non-Bayesian, I think, would require more justification. This is true in the introduction as well.

      Introduction:

      As noted above, the conceptualization of Bayesian learning being equivalent to normative learning, I think requires further justification. Bayesian belief updating can be biased and non-optimal from an observer perspective, while being optimal within the agent doing the updating if the priors/desired outcomes are set up to advantage these "non-optimal" modes of decision making.

      We appreciate the reviewer's thoughtful comment regarding the conceptualization of "normative" and "Bayesian" learning. We fully agree that the definition of "normative" is nuanced and can indeed depend on whether one considers reward-maximization or the underlying principles of belief updating. As explained above we now restrict our presentation to deviations from “ideal Bayes” learning patterns and we acknowledge the reviewer’s concern in a caveat in our discussion.

      Results:

      I wonder why the agent was presented before the choice, since the agent is only relevant to the feedback after the choice is made. I wonder if that might have induced any false association between the agent identity and the choice itself. This is by no means a critical point, but it would be interesting to get the authors' thoughts.

      We thank the reviewer for raising this interesting point regarding the presentation of the agent before the choice. Our decision to present the agent at this stage was intentional, as our original experimental design aimed to explore the possible effects of "expected source credibility" on participants' choices (e.g., whether knowledge of feedback credibility will affect choice speed and accuracy). However, we found nothing that would be interesting to report.

      The finding that positive feedback increases learning is one that has been shown before and depends on valence, as the authors note. They expanded their reinforcement learning model to include valence, but they did not modify the Bayesian model in a similar manner. This lack of a valence or recency effect might also explain the failure of the Bayesian models in the preceding section, where the contrast effect is discussed. It is not unreasonable to imagine that if humans do employ Bayesian reasoning, this reasoning system has had its parameters tuned based on the real world, where recency of information does matter; affect has also been shown to be incorporable into Bayesian information processing (see the work by Hesp on affective charge and the large body of work by Ryan Smith). It may be that the Bayesian models chosen here require further complexity to capture the situation, just like some of the biases required updates to the RL models. This complexity, rather than being arbitrary, may be well justified by decision-making in the real world.

      Thanks for these additional important ideas which speak more to the notion that more complex Bayesian frameworks may account for biases we report.

      The methods mention several symptom scales- it would be interesting to have the results of these and any interesting correlations noted. It is possible that some of the individual variability here could be related to these symptoms, which could introduce precision parameter changes in a Bayesian context and things like reward sensitivity changes in an RL context.

      We included these questionnaires for exploratory purposes, with the aim of generating informed hypotheses for future research into individual differences in learning. Given the preliminary nature of these analyses, we believe further research is required about this important topic.

      Discussion:

      (For discussion, not a specific comment on this paper): One wonders also about participants' beliefs about the experiment or the intent of the experimenters. I have often had participants tell me they were trying to "figure out" a task or find patterns even when this was not part of the experiment. This is not specific to this paper, but it may be relevant in the future to try and model participant beliefs about the experiment especially in the context of disinformation, when they might be primed to try and "figure things out".

      We thank the reviewer for this important recommendation. We agree and this point is included in our caveat (cited above) that future research should address what assumptions about the generative task structure can allow Bayesian models to account for our empirical patterns.

      As a general comment, in the active inference literature, there has been discussion of state-dependent actions, or "habits", which are learned in order to help agents more rapidly make decisions, based on previous learning. It is also possible that what is being observed is that these habits are at play, and that they represent the cognitive biases. This is likely especially true given, as the authors note, the high cognitive load of the task. It is true that this would mean that full-force Bayesian inference is not being used in each trial, or in each experience an agent might have in the world, but this is likely adaptive on the longer timescale of things, considering resource requirements. I think in this case you could argue that we have a departure from "normative" learning, but that is not necessarily a departure from any possible Bayesian framework, since these biases could potentially be modified by the agent or eschewed in favor of more expensive full-on Bayesian learning when warranted. Indeed, in their discussion on the strategy of amplifying credible news sources to drown out low-credibility sources, the authors hint at the possibility of longer-term strategies that may produce optimal outcomes in some contexts, but which were not necessarily appropriate to this task. As such, the performance on this task, and the consideration of true departure from Bayesian processing, should be considered in this wider context.

      Another thing to consider is that Bayesian inference is occurring, but that priors present going in produce the biases, or these biases arise from another source, for example, factoring in epistemic value over rewards when the actual reward is not large. This again would be covered under an active inference approach, depending on how the priors are tuned. Indeed, given the benefit of social cohesion in an evolutionary perspective, some of these "biases" may be the result of adaptation. For example, it might be better to amplify people's good qualities and minimize their bad qualities in order to make it easier to interact with them; this entails a cost (in this case, not adequately learning from feedback and potentially losing out sometimes), but may fulfill a greater imperative (improved cooperation on things that matter). Given the right priors/desired states, this could still be a Bayes-optimal inference at a social level and, as such, may be ingrained as a habit that requires effort to break at the individual level during a task such as this.

      We thank the reviewer for these insightful suggestions speaking further to the point about more complex Bayesian models.

      The authors note that this task does not relate to "emotional engagement" or "deep, identity-related issues". While I agree that this is likely mostly true, it is also possible that just being told one is being lied to might elicit an emotional response that could bias responses, even if this is a weak response.

      We agree with the reviewer that a task involving performance-based bonuses, and particularly one where participants are explicitly told they are being lied to, might elicit a weak emotional response. However, our primary point is that the degree of these responses is expected to be substantially weaker than those typically observed in the broader disinformation literature, which frequently deals with highly salient political, social, or identity-related topics that inherently carry strong emotional and personal ties for participants, leading to much more pronounced affective engagement and potential biases. Our task deliberately avoids such issues, thus minimizing the potential for significant emotion-driven biases. We have toned down the discussion accordingly:

      “This occurs even when the decision at hand entails minimal emotional engagement or pertinence to deep, identity-related, issues.”

      Reviewer #2 (Public review):

      This valuable paper studies the problem of learning from feedback given by sources of varying credibility. The solid combination of experiment and computational modeling helps to pin down properties of learning, although some ambiguity remains in the interpretation of results.

      Summary:

      This paper studies the problem of learning from feedback given by sources of varying credibility. Two bandit-style experiments are conducted in which feedback is provided with uncertainty, but from known sources. Bayesian benchmarks are provided to assess normative facets of learning, and alternative credit assignment models are fit for comparison. Some aspects of normativity appear, in addition to deviations such as asymmetric updating from positive and negative outcomes.

      Strengths:

      The paper tackles an important topic, with a relatively clean cognitive perspective. The construction of the experiment enables the use of computational modeling. This helps to pinpoint quantitatively the properties of learning and formally evaluate their impact and importance. The analyses are generally sensible, and parameter recovery analyses help to provide some confidence in the model estimation and comparison.

      We thank the reviewer for highlighting the strengths of this work.

      Weaknesses:

      (1) The approach in the paper overlaps somewhat with various papers, such as Diaconescu et al. (2014) and Schulz et al. (forthcoming), which also consider the Bayesian problem of learning and applying source credibility, in terms of theory and experiment. The authors should discuss how these papers are complementary, to better provide an integrative picture for readers.

      Diaconescu, A. O., Mathys, C., Weber, L. A., Daunizeau, J., Kasper, L., Lomakina, E. I., ... & Stephan, K. E. (2014). Inferring the intentions of others by hierarchical Bayesian learning. PLoS computational biology, 10(9), e1003810.

      Schulz, L., Schulz, E., Bhui, R., & Dayan, P. Mechanisms of Mistrust: A Bayesian Account of Misinformation Learning. https://doi.org/10.31234/osf.io/8egxh

      We thank the reviewers for pointing us to this relevant work. We have updated the introduction, mentioning these precedents in the literature and highlighting our specific contributions:

      “To address these questions, we adopt a novel approach within the disinformation literature by exploiting a Reinforcement Learning (RL) experimental framework (36). While RL has guided disinformation research in recent years (37–41), our approach is novel in using one of its most popular tasks: the “bandit task”.”

      We also explain in the discussion how these papers relate to the current study:

      “Unlike previous studies wherein participants had to infer source credibility from experience (30,37,72), we took an explicit-instruction approach, allowing us to precisely assess source-credibility impact on learning, without confounding it with errors in learning about the sources themselves. More broadly, our work connects with prior research on observational learning, which examined how individuals learn from the actions or advice of social partners (72–75). This body of work has demonstrated that individuals integrate learning from their private experiences with learning based on others’ actions or advice—whether by inferring the value others attribute to different options or by mimicking their behavior (57,76). However, our task differs significantly from traditional observational learning. Firstly, our feedback agents interpret outcomes rather than demonstrating or recommending actions (30,37,72).”

      (2) It isn't completely clear what the "cross-fitting" procedure accomplishes. Can this be discussed further?

      We thank the reviewer for requesting further clarification on the cross-fitting procedure. Our study utilizes two distinct model families: Bayesian models and CA models. The credit assignment parameters from the CA models can be treated as “data/behavioural features” corresponding to how choice feedback affects choice propensities. The cross-fitting approach allows us, in effect, to examine whether these propensity features are predicted by our Bayesian models. To the extent that they are not, we can conclude that empirical behavior is “biased”.

      Thus, in our cross-fitting procedure, we compare the CA model parameters extracted from participant data (empirical features) with those that would be expected if our Bayesian agents performed the task. Specifically, we first fit participant behavior with our Bayesian models, then simulate these models using the best-fitting parameters and fit those simulations with our CA models. This generates the set of CA parameters that would be predicted if participants' behavior were reduced to a Bayesian account. By comparing these predicted Bayesian CA parameters with the actual CA parameters obtained from human participants, the cross-fitting procedure allows us to quantitatively demonstrate that the observed participant parameters are indeed statistically significant deviations from normative Bayesian processing. This provides a robust validation that the biases we identify are not artifacts of the CA model's structure but true departures from normative learning.

      We also note that Reviewer 3 suggested an intuitive way to think about the CA parameters: as analogous to logistic regression coefficients in a “sophisticated regression” of choice on (recency-weighted) choice feedback. We find this suggestion potentially helpful for readers. Under this interpretation, the purpose of the cross-fitting method can be seen simply as estimating the regression coefficients that would be predicted by our Bayesian agents, and comparing those to the empirical coefficients.

      In our manuscript we now address these issues more clearly by explaining how our model is analogous to a logistic regression:

      “The probability of choosing a bandit (say A over B) in this family of models is a logistic function of the contrast in choice propensities between these two bandits. One interpretation of this model is as a “sophisticated” logistic regression, where the CA parameters take the role of “regression coefficients” corresponding to the change in log odds of repeating the just-taken action in future trials based on the feedback (+/- CA for positive or negative feedback, respectively; the model also includes gradual perseveration, which allows for constant log-odds changes that are not affected by choice feedback). The forgetting rate captures the extent to which the effect of each trial on future choices diminishes with time. The Q-values are thus exponentially decaying sums of logistic choice propensities based on the types of feedback a bandit received.”

      We also explain our cross-fitting procedure in more detail:

      “To further characterise deviations between behaviour and our Bayesian learning models, we used a “cross-fitting” method. Treating CA parameters as data features of interest (i.e., feedback-dependent changes in choice propensity), our goal was to examine if and how empirical features differ from features extracted from simulations of our Bayesian learning models. Towards that goal, we simulated synthetic data based on Bayesian agents (using participants’ best-fitting parameters), but fitted these data using the CA models, obtaining what we term “Bayesian-CA parameters” (Fig. 2d; Methods). A comparison of these Bayesian-CA parameters with empirical CA parameters obtained by fitting CA models to empirical data allowed us to uncover patterns consistent with, or deviating from, ideal-Bayesian value-based inference. Under the sophisticated logistic-regression interpretation of the CA-model family, the cross-fitting method comprises a comparison between empirical regression coefficients (i.e., empirical CA parameters) and regression coefficients based on simulations of Bayesian models (Bayesian CA parameters).”
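
      A schematic sketch of that cross-fitting logic, with hypothetical helper names (fit_bayesian, simulate, fit_ca) standing in for the authors' actual fitting and simulation code:

      ```python
      # Schematic sketch only: CA parameters fitted to Bayesian-agent simulations serve as the
      # benchmark against which the empirical CA parameters are compared.
      def cross_fit(participant_data, fit_bayesian, simulate, fit_ca):
          empirical_ca = [fit_ca(d) for d in participant_data]        # CA params from real behaviour
          bayes_params = [fit_bayesian(d) for d in participant_data]  # best-fitting Bayesian params
          synthetic = [simulate(p) for p in bayes_params]             # Bayesian agents redo the task
          bayesian_ca = [fit_ca(d) for d in synthetic]                # CA params an ideal learner would show
          # Differences between the two sets of CA parameters index deviations from ideal-Bayesian learning.
          return empirical_ca, bayesian_ca
      ```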

      (3) The Credibility-CA model seems to fit the same as the free-credibility Bayesian model in the first experiment and barely better in the second experiment. Why not use a more standard model comparison metric like the Bayesian Information Criterion (BIC)? Even if there are advantages to the bootstrap method (which should be described if so), the BIC would help for comparability between papers.

      We thank the reviewer for this important comment regarding our model comparison approach. We acknowledge that classical information criteria like AIC and BIC are widely used in RL studies. However, we argue our method for model-comparison is superior.

      We conducted a model recovery analysis demonstrating a significant limitation of using AIC or BIC for model-comparison in our data. Both these methods are strongly biased in favor of the Bayesian models. Our PBCM method, on the other hand, is both unbiased and more accurate. We believe this is because “off-the-shelf” methods like AIC and BIC rely on strong assumptions (such as asymptotic sample size and trial-independence) that are not necessarily met in our tasks (data is finite; trials in RL tasks depend on previous trials). PBCM avoids such assumptions to obtain comparison criteria specifically tailored to the structure and size of our empirical data. We have now mentioned this fact in the results section of the main text:

      “We considered using AIC and BIC, which apply “off-the-shelf” penalties for model-complexity. However, these methods do not adapt to features like finite sample size (relying instead on asymptotic assumptions) or temporal dependence (as is common in reinforcement learning experiments). In contrast, the parametric bootstrap cross-fitting method replaces these fixed penalties with empirical, data-driven criteria for model-selection. Indeed, model-recovery simulations confirmed that whereas AIC and BIC were heavily biased in favour of the Bayesian models, the bootstrap method provided excellent model-recovery (see Fig. S20).”

      We have also included such model recovery in the SI document:

      (4) As suggested in the discussion, the updating based on random feedback could be due to the interleaving of trials. If one is used to learning from the source on most trials, the occasional random trial may be hard to resist updating from. The exact interleaving structure should also be clarified (I assume different sources were shown for each bandit pair). This would also relate to work on RL and working memory: Collins, A. G., & Frank, M. J. (2012). How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis. European Journal of Neuroscience, 35(7), 1024-1035.

      We thank the reviewer for this point. The specific interleaved structure of the agents is described in the main text:

      “Each agent provided feedback for 5 trials for each bandit pair (with the agent order interleaved within the bandit pair).”

      As well as in the methods section:

      “Feedback agents were randomly interleaved across trials subject to the constraint that each agent appeared on 5-trials for each bandit pair.”

      We also thank the reviewer for mentioning the relevant work on working memory. We have now added it to our discussion point:

      “In our main study, we show that participants revised their beliefs based on entirely non-credible feedback, whereas an ideal Bayesian strategy dictates such feedback should be ignored. This finding resonates with the “continued-influence effect” whereby misleading information continues to influence an individual's beliefs even after it has been retracted (59,60). One possible explanation is that some participants failed to infer that feedback from the 1-star agent was statistically void of information content, essentially random (e.g., the group-level credibility of this agent was estimated by our free-credibility Bayesian model as higher than 50%). Participants were instructed that this feedback would be “a lie” 50% of the time but were not explicitly told that this meant it was random and should therefore be disregarded. Notably, however, there was no corresponding evidence random feedback affected behaviour in our discovery study. It is possible that an individual’s ability to filter out random information might have been limited due to a high cognitive load induced by our main study task, which required participants to track the values of three bandit pairs and juggle between three interleaved feedback agents (whereas in our discovery study each experimental block featured a single bandit pair). Future studies should explore more systematically how the ability to filter random feedback depends on cognitive load (61).”

      (5) Why does the choice-repetition regression include "only trials for which the last same-pair trial featured the 3-star agent and in which the context trial featured a different bandit pair"? This could be stated more plainly.

      We thank the reviewer for this question. When we previously submitted our manuscript, we thought that finding enhanced credit-assignment for fully credible feedback following potential disinformation from a different context would constitute a striking demonstration of our “contrast effect”. However, upon re-examining this finding, we found that we had a coding error (affecting how trials were filtered). We have now rerun and corrected this analysis. We have assessed the contrast effect for both "same-context" trials (where the contextual trial featured the same bandit pair as the learning trial) and "different-context" trials (where the contextual trial featured a different bandit pair). Our re-analysis reveals a selective significant contrast effect in the same-context condition, but no significant effect in the different-context condition. We have updated the main text to reflect these corrected findings and provide a clearer explanation of the analysis:

      “A comparison of empirical and Bayesian credit-assignment parameters revealed a further deviation from ideal Bayesian learning: participants showed an exaggerated credit-assignment for the 3-star agent compared with Bayesian models [Wilcoxon signed-rank test, instructed-credibility Bayesian model (median difference=0.74, z=11.14); free-credibility Bayesian model (median difference=0.62, z=10.71), all p’s<0.001] (Fig. 3a). One explanation for enhanced learning for the 3-star agents is a contrast effect, whereby credible information looms larger against a backdrop of non-credible information. To test this hypothesis, we examined whether the impact of feedback from the 3-star agent is modulated by the credibility of the agent in the trial immediately preceding it. More specifically, we reasoned that the impact of a 3-star agent would be amplified by a “low credibility context” (i.e., when it is preceded by a low credibility trial). In a binomial mixed effects model, we regressed choice-repetition on feedback valence from the last trial featuring the same bandit pair (i.e., the learning trial) and the feedback agent on the trial immediately preceding that last trial (i.e., the contextual credibility; see Methods for model-specification). This analysis included only learning trials featuring the 3-star agent, and context trials featuring the same bandit pair as the learning trial (Fig. 4a). We found that feedback valence interacted with contextual credibility (F(2,2086)=11.47, p<0.001) such that the feedback-effect (from the 3-star agent) decreased as a function of the preceding context-credibility (3-star context vs. 2-star context: b=-0.29, F(1,2086)=4.06, p=0.044; 2-star context vs. 1-star context: b=-0.41, t(2086)=-2.94, p=0.003; and 3-star context vs. 1-star context: b=-0.69, t(2086)=-4.74, p<0.001) (Fig. 4b). This contrast effect was not predicted by simulations of our main models of interest (Fig. 4c). No effect was found when focussing on contextual trials featuring a bandit pair different than the one in the learning trial (see SI 3.5). Thus, these results support an interpretation that credible feedback exerts a greater impact on participants’ learning when it follows non-credible feedback, in the same learning context.”

      We have modified the discussion accordingly as well:

      “A striking finding in our study was that for a fully credible feedback agent, credit assignment was exaggerated (i.e., higher than predicted by our Bayesian models). Furthermore, the effect of fully credible feedback on choice was further boosted when it was preceded by a low-credibility context related to current learning. We interpret this in terms of a “contrast effect”, whereby veridical information looms larger against a backdrop of disinformation (21). One upshot is that exaggerated learning might entail a risk of jumping to premature conclusions based on limited credible evidence (e.g., a strong conclusion that a vaccine produces significant side-effect risks based on weak credible information, following non-credible information about the same vaccine). An intriguing possibility, that could be tested in future studies, is that participants strategically amplify the extent of learning from credible feedback to dilute the impact of learning from non-credible feedback. For example, a person scrolling through a social media feed, encountering copious amounts of disinformation, might amplify the weight they assign to credible feedback in order to dilute effects of ‘fake news’. Ironically, these results also suggest that public campaigns might be more effective when embedding their messages in low-credibility contexts, which may boost their impact.”

      And we have included some additional analyses in the SI document:

      “3.5 Contrast effects for contexts featuring a different bandit

      Given that we observed a contrast effect when both the learning trial and the immediately preceding “context trial” involved the same pair of bandits, we next investigated whether this effect persisted when the context trial featured a different bandit pair, a situation where the context would be irrelevant to the current learning. Again, we used a binomial mixed-effects model, regressing choice-repetition on feedback valence in the learning trial and the feedback agent in the context trial. This analysis included only learning trials featuring the 3-star agent, and context trials featuring a different bandit pair than the learning trial (Fig. S22a). We found no significant evidence of an interaction between feedback valence and contextual credibility (F(2,2364)=0.21, p=0.81) (Fig. S22b). This null result was consistent with the range of outcomes predicted by our main computational models (Fig. S22c).

We aimed to formally compare the influence of two types of contextual trials: those featuring the same bandit pair as the learning trial versus those featuring a different pair. To achieve this, we extended our mixed-effects model by incorporating a new predictor variable, "CONTEXT_TYPE", which coded whether the contextual trial involved the same bandit pair (coded as -0.5) or a different bandit pair (+0.5) compared to the learning trial. The Wilkinson notation for this expanded mixed-effects model is:

      𝑅𝐸𝑃𝐸𝐴𝑇 ~ 𝐶𝑂𝑁𝑇𝐸𝑋𝑇_𝑇𝑌𝑃𝐸 ∗ 𝐹𝐸𝐸𝐷𝐵𝐴𝐶𝐾 ∗ (𝐶𝑂𝑁𝑇𝐸𝑋𝑇<sub>2-star</sub> + 𝐶𝑂𝑁𝑇𝐸𝑋𝑇<sub>3-star</sub>) + 𝐵𝐸𝑇𝑇𝐸𝑅 + (1|𝑝𝑎𝑟𝑡𝑖𝑐𝑖𝑝𝑎𝑛𝑡)

This expanded model revealed a significant three-way interaction between feedback valence, contextual credibility, and context type (F(2,4451) = 7.71, p<0.001). Interpreting this interaction, we found a two-way interaction between context-source and feedback valence when the context was the same (F(2,4451) = 12.03, p<0.001), but not when the context was different (F(2,4451) = 0.23, p = 0.79). Further interpreting this two-way feedback-valence × context-source interaction (for the same context), we obtained the same conclusions as reported in the main text.”
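For readers who wish to see the shape of this analysis in code, below is a minimal, self-contained sketch (our own illustration, not the analysis pipeline used in the paper). It fits the expanded model to simulated placeholder data, with statsmodels' Bayesian mixed GLM standing in for a frequentist binomial mixed-effects model; all column names and values are hypothetical.

```python
# Sketch only: hypothetical data and column names; a stand-in for the actual mixed-effects fit.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 600
ctx = rng.choice(["1-star", "2-star", "3-star"], n)        # credibility of the context trial
df = pd.DataFrame({
    "REPEAT": rng.integers(0, 2, n).astype(float),          # choice repetition (0/1)
    "CONTEXT_TYPE": rng.choice([-0.5, 0.5], n),              # -0.5 = same bandit pair, +0.5 = different pair
    "FEEDBACK": rng.choice([-1.0, 1.0], n),                  # feedback valence on the learning trial
    "CONTEXT_2star": (ctx == "2-star").astype(float),        # dummy codes for context credibility
    "CONTEXT_3star": (ctx == "3-star").astype(float),
    "BETTER": rng.integers(0, 2, n).astype(float),           # nuisance regressor
    "participant": rng.integers(0, 30, n),
})

formula = "REPEAT ~ CONTEXT_TYPE * FEEDBACK * (CONTEXT_2star + CONTEXT_3star) + BETTER"
# The (1|participant) term enters as a variance component (random intercept per participant).
model = BinomialBayesMixedGLM.from_formula(
    formula, {"participant": "0 + C(participant)"}, df
)
print(model.fit_vb().summary())
```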

      (6) Why apply the "Truth-CA" model and not the Bayesian variant that it was motivated by?

      Thanks for this very useful suggestion. We are unsure if we fully understand the question. The Truth-CA model was not motivated by a new Bayesian model. Our Bayesian models were simply used to make the point that participants may partially discriminate between truthful and untruthful feedback (for a given source). This led to the idea that perhaps more credit is assigned for truth (than lie) trials, which is what we found using our Truth-CA model. Note we show that our Bayesian models cannot account for this modulation.

We have now improved our "Truth-CA" model. Previously, our Truth-CA model considered whether feedback on each trial was true or not based on realized latent true outcomes. However, it is possible that the very same feedback would have had an opposite truth-status if the latent true outcome had been different (recall true outcomes are stochastic). This injects noise into the trial classification in our previous model. To avoid this, in our new model feedback is modulated by the probability that the reported feedback is true (marginalized over the stochasticity of the true outcome).

      We have described this new model in the methods section:

“Additionally, we formulated a “Truth-CA” model, which works like our Credibility-CA model but incorporates a free truth-bonus parameter (TB). This parameter modulates the extent of credit assignment for each agent based on the posterior probability of feedback being true (given the credibility of the feedback agent, and the true reward probability of the chosen bandit). The chosen bandit is updated as follows:

      𝑄 ← (1 – 𝑓<sub>Q</sub>) ∗ 𝑄 + [𝐶𝐴(𝑎𝑔𝑒𝑛𝑡) + 𝑇𝐵 ∗ (𝑃(𝑡𝑟𝑢𝑡ℎ) − 0.5)] ∗ 𝐹

      where P(truth) is the posterior probability of the feedback being true in the current trial (for exact calculation of P(truth) see “Methods: Bayesian estimation of posterior belief that feedback is true”).”
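For concreteness, the update rule and one plausible way of computing P(truth) can be sketched as follows (a minimal illustration under our own assumptions; helper names and parameter values are hypothetical, and the exact calculation used in the paper is the one given in the Methods section cited above).

```python
def p_truth(credibility, p_reward, feedback):
    """Posterior probability that reported feedback is true, from the experimenter's
    perspective, marginalising over the stochastic latent outcome. `credibility` is
    the agent's probability of reporting truthfully, `p_reward` the true reward
    probability of the chosen bandit, and `feedback` is +1 (reward) or -1 (non-reward).
    One plausible formalisation, offered here only as an illustration."""
    agree = p_reward if feedback == 1 else 1.0 - p_reward   # P(latent outcome matches the report)
    return credibility * agree / (credibility * agree + (1.0 - credibility) * (1.0 - agree))

def truth_ca_update(q, feedback, f_q, ca_agent, tb, credibility, p_reward):
    """One Truth-CA update of the chosen bandit's Q value:
    Q <- (1 - f_Q)*Q + [CA(agent) + TB*(P(truth) - 0.5)] * F"""
    pt = p_truth(credibility, p_reward, feedback)
    return (1.0 - f_q) * q + (ca_agent + tb * (pt - 0.5)) * feedback

# Example: positive feedback from a partially credible agent (e.g., 75% credibility)
# on a bandit whose true reward probability is 0.8; all values are illustrative.
print(truth_ca_update(q=0.2, feedback=+1, f_q=0.1, ca_agent=0.5, tb=0.2,
                      credibility=0.75, p_reward=0.8))
```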

      All relevant results have been updated accordingly in the main text:

“To formally address whether feedback truthfulness modulates credit assignment, we fitted a new variant of the CA model (the “Truth-CA” model) to the data. This variant works like our Credibility-CA model but incorporates a truth-bonus parameter (TB), which increases the degree of credit assignment for feedback as a function of the experimenter-determined likelihood that the feedback is true (which is read from the curves in Fig. 6a when x is taken to be the true probability the bandit is rewarding). Specifically, after receiving feedback, the Q-value of the chosen option is updated according to the following rule: 𝑄 ← (1 – 𝑓<sub>Q</sub>) ∗ 𝑄 + [𝐶𝐴(𝑎𝑔𝑒𝑛𝑡) + 𝑇𝐵 ∗ (𝑃(𝑡𝑟𝑢𝑡ℎ) − 0.5)] ∗ 𝐹 where 𝑇𝐵 is the free parameter representing the truth bonus, and 𝑃(𝑡𝑟𝑢𝑡ℎ) is the probability of the received feedback being true (from the experimenter’s perspective). We acknowledge that this model falls short of providing a mechanistically plausible description of the credit assignment process, because participants have no access to the experimenter’s truthfulness likelihoods (as the true bandit reward probabilities are unknown to them). Nonetheless, we use this ‘oracle model’ as a measurement tool to glean rough estimates for the extent to which credit assignment is boosted as a function of its truthfulness likelihood. Fitting this Truth-CA model to participants' behaviour revealed a significant positive truth-bonus (mean=0.21, t(203)=3.12, p=0.002), suggesting that participants indeed assign greater weight to feedback that is likely to be true (Fig. 6c; see SI 3.3.1 for detailed ML parameter results). Notably, simulations using our other models (Methods) consistently predicted smaller truth biases (compared to the empirical bias) (Fig. 6d). Moreover, truth bias was still detected even in a more flexible model that allowed for both a positivity bias and truth-bias (see SI 3.7). The upshot is that participants are biased to assign higher credit based on feedback that is more likely to be true in a manner that is inconsistent with our Bayesian models and above and beyond the previously identified positivity biases.”

      Finally, the Supplementary Information for the discovery study has also been revised to feature this analysis:

“We next assessed whether participants infer whether the feedback they received on each trial was true or false and adjust their credit assignment based on this inference. We again used the “Truth-CA” model to obtain estimates for the truth bonus (TB), the increase in credit assignment as a function of the posterior probability of feedback being true. As in our main study, the fitted truth-bonus parameter was significantly positive, indicating that participants assign greater weight to feedback they believe is likely to be true (Fig. S4a; see SI 3.3.1 for detailed ML parameter results). Strikingly, model-simulations (Methods) predicted a lower truth bonus than the one observed in participants (Fig. S4b).”

      (7) "Overall, the results from this study support the exact same conclusions (See SI section 1.2) but with one difference. In the discovery study, we found no evidence for learning based on 50%-credibility feedback when examining either the feedback effect on choice repetition or CA in the credibility-CA model (SI 1.2.3)" - this seems like a very salient difference, when the paper reports the feedback effect as a primary finding of interest, though I understand there remains a valence-based difference.

We agree with the reviewer and thank them for this suggestion. We now state explicitly throughout the manuscript that this finding was obtained in only one of our two studies. In the “Discovery study” section of the Results, we state explicitly that this finding was not observed in the discovery study:

      “However, we found no evidence for learning based on 50%-credibility feedback when examining either the feedback effect on choice repetition or CA in the credibility-CA model (SI 1.2.3).”

We also note that, related to another concern from R3 (that perseveration may masquerade as positivity bias), we conducted additional analyses (detailed in SI 3.6.2). These analyses revealed that the observed positivity bias for the 1-star agent in the discovery study falls within the range predicted by simple choice-perseveration. Consequently, we have removed the suggestion that participants still learn from the random agent in the discovery study. Furthermore, we have modified the discussion section to include a possible explanation for this discrepancy between the two studies:

“Notably, however, there was no corresponding evidence that random feedback affected behaviour in our discovery study. It is possible that an individual’s ability to filter out random information might have been limited by the high cognitive load induced by our main study task, which required participants to track the values of three bandit pairs and juggle three interleaved feedback agents (whereas in our discovery study each experimental block featured a single bandit pair). Future studies should explore more systematically how the ability to filter random feedback depends on cognitive load (61).”

      (8) "Participants were instructed that this feedback would be "a lie 50% of the time but were not explicitly told that this meant it was random and should therefore be disregarded." - I agree that this is a possible explanation for updating from the random source. It is a meaningful caveat.

      Thank you for this thought. While this can be seen as a caveat—since we don’t know what would have happened with explicit instructions—we also believe it is interesting from another perspective. In many real-life situations, individuals may have all the necessary information to infer that the feedback they receive is uninformative, yet still fail to do so, especially when they are not explicitly told to ignore it.

      In future work, we plan to examine how behaviour changes when participants are given more explicit instructions—for example, that the 50%-credibility agent provides purely random feedback.

      (9) "Future studies should investigate conditions that enhance an ability to discard disinformation, such as providing explicit instructions to ignore misleading feedback, manipulations that increase the time available for evaluating information, or interventions that strengthen source memory." - there is work on some of this in the misinformation literature that should be cited, such as the "continued influence effect". For example: Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of experimental psychology: Learning, memory, and cognition, 20(6), 1420.

      We thank the reviewer for pointing us towards the relevant literature. We have now included citations about the “continued influence effect” of misinformation in the discussion:

      “In our main study, we show that participants revised their beliefs based on entirely non-credible feedback, whereas an ideal Bayesian strategy dictates such feedback should be ignored. This finding resonates with the “continued-influence effect” whereby misleading information continues to influence an individual's beliefs even after it has been retracted (59,60).”

      (10) Are the authors arguing that choice-confirmation bias may be at play? Work on choice-confirmation bias generally includes counterfactual feedback, which is not present here.

      We agree with the reviewer that a definitive test for choice-confirmation bias typically requires counterfactual feedback, which is not present in our current task. In our discussion, we indeed suggest that the positivity bias we observe may stem from a form of choice-confirmation, drawing on the extensive literature on this bias in reinforcement learning (Lefebvre et al., 2017; Palminteri et al., 2017; Palminteri & Lebreton, 2022). However, we fully acknowledge that this link is a hypothesis and that explicitly testing for choice-confirmation bias would necessitate a future study specifically incorporating counterfactual feedback. We have included a clarification of this point in the discussion:

“Previous reinforcement learning studies report greater credit-assignment based on positive compared to negative feedback, albeit only in the context of veridical feedback (43,44,62). Here, supporting our a priori hypothesis, we show that this positivity bias is amplified for information of low and intermediate credibility (in absolute terms in the discovery study, and relative to the overall extent of CA in both studies). Of note, previous literature has interpreted enhanced learning for positive outcomes in reinforcement learning as indicative of a confirmation bias (42,44). For example, positive feedback may confirm, to a greater extent than negative feedback, one’s choice as superior (e.g., “I chose the better of the two options”). Leveraging the framework of motivated cognition (35), we posited that feedback of uncertain veracity (e.g., low credibility) amplifies this bias by incentivising individuals to self-servingly accept positive feedback as true (because it confers positive, desirable outcomes), and explain away undesirable, choice-disconfirming, negative feedback as false. This could imply an amplified confirmation bias on social media, where content from sources of uncertain credibility, such as unknown or unverified users, is more easily interpreted in a self-serving manner, disproportionately reinforcing existing beliefs (63). In turn, this could contribute to an exacerbation of the negative social outcomes previously linked to confirmation bias such as polarization (64,65), the formation of ‘echo chambers’ (19), and the persistence of misbelief regarding contemporary issues of importance such as vaccination (66,67) and climate change (68–71). We note, however, that further studies are required to determine whether positivity bias in our task is indeed a form of confirmation bias.”

      Reviewer #3 (Public review):

      Summary

      This paper investigates how disinformation affects reward learning processes in the context of a two-armed bandit task, where feedback is provided by agents with varying reliability (with lying probability explicitly instructed). They find that people learn more from credible sources, but also deviate systematically from optimal Bayesian learning: They learned from uninformative random feedback, learned more from positive feedback, and updated too quickly from fully credible feedback (especially following low-credibility feedback). Overall, this study highlights how misinformation could distort basic reward learning processes, without appeal to higher-order social constructs like identity.

      Strengths

      (1) The experimental design is simple and well-controlled; in particular, it isolates basic learning processes by abstracting away from social context.

      (2) Modeling and statistics meet or exceed the standards of rigor.

      (3) Limitations are acknowledged where appropriate, especially those regarding external validity.

      (4) The comparison model, Bayes with biased credibility estimates, is strong; deviations are much more compelling than e.g., a purely optimal model.

      (5) The conclusions are interesting, in particular the finding that positivity bias is stronger when learning from less reliable feedback (although I am somewhat uncertain about the validity of this conclusion)

      We deeply thank the reviewer for highlighting the strengths of this work.

      Weaknesses

      (1) Absolute or relative positivity bias?

      In my view, the biggest weakness in the paper is that the conclusion of greater positivity bias for lower credible feedback (Figure 5) hinges on the specific way in which positivity bias is defined. Specifically, we only see the effect when normalizing the difference in sensitivity to positive vs. negative feedback by the sum. I appreciate that the authors present both and add the caveat whenever they mention the conclusion (with the crucial exception of the abstract). However, what we really need here is an argument that the relative definition is the right way to define asymmetry....

      Unfortunately, my intuition is that the absolute difference is a better measure. I understand that the relative version is common in the RL literature; however previous studies have used standard TD models, whereas the current model updates based on the raw reward. The role of the CA parameter is thus importantly different from a traditional learning rate - in particular, it's more like a logistic regression coefficient (as described below) because it scales the feedback but not the decay. Under this interpretation, a difference in positivity bias across credibility conditions corresponds to a three-way interaction between the exponentially weighted sum of previous feedback of a given type (e.g., positive from the 75% credible agent), feedback positivity, and condition (dummy coded). This interaction corresponds to the nonnormalized, absolute difference.

      Importantly, I'm not terribly confident in this argument, but it does suggest that we need a compelling argument for the relative definition.

We thank the reviewer for raising this important point about the definition of positivity bias, and for their thoughtful discussion of the absolute versus relative measures. We believe that the relative valence bias offers a distinct and valuable perspective on positivity bias. Conceptually, this measure describes positivity bias in a manner akin to a “percentage difference” relative to the overall level of learning, which allows us to control for the overall decrease in the amount of credit assignment as feedback becomes less credible. We are unsure whether one measure is better or more correct than the other, and we believe that reporting both measures enriches the understanding of positivity bias and allows for a more comprehensive characterization of this phenomenon (as long as these measures are interpreted carefully). We have stated the significance of the relative measure in the results section:

“Following previous research, we quantified positivity bias in two ways: 1) as the absolute difference between credit-assignment based on positive versus negative feedback, and 2) as the same difference but relative to the overall extent of learning. We note that the second, relative definition is more akin to a “percentage change” measurement, providing a control for the overall lower levels of credit-assignment for less credible agents.”
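For concreteness, the two indices can be sketched as follows (our own illustration; the relative index normalises the difference by the sum of the two credit-assignment parameters, in line with the reviewer's description and the aVBI/rVBI terminology used in the SI).

```python
def valence_bias_indices(ca_pos, ca_neg):
    """Absolute (aVBI) and relative (rVBI) valence-bias indices for one agent,
    assuming the relative index normalises the difference by the sum of the two
    credit-assignment parameters (the overall extent of learning)."""
    a_vbi = ca_pos - ca_neg
    r_vbi = (ca_pos - ca_neg) / (ca_pos + ca_neg)
    return a_vbi, r_vbi

# Illustrative values: the same absolute bias looks much larger in relative terms
# when the overall level of credit assignment is low (as for low-credibility agents).
print(valence_bias_indices(0.9, 0.7))   # high-credibility agent: aVBI=0.2, rVBI=0.125
print(valence_bias_indices(0.3, 0.1))   # low-credibility agent:  aVBI=0.2, rVBI=0.5
```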

We also wish to point out that in our discovery study we had some evidence for amplification of positivity bias in an absolute sense.

      (2) Positivity bias or perseveration?

      A key challenge in interpreting many of the results is dissociating perseveration from other learning biases. In particular, a positivity bias (Figure 5) and perseveration will both predict a stronger correlation between positive feedback and future choice. Crucially, the authors do include a perseveration term, so one would hope that perseveration effects have been controlled for and that the CA parameters reflect true positivity biases. However, with finite data, we cannot be sure that the variance will be correctly allocated to each parameter (c.f. collinearity in regressions). The fact that CA- is fit to be negative for many participants (a pattern shown more strongly in the discovery study) is suggestive that this might be happening. A priori, the idea that you would ever increase your value estimate after negative feedback is highly implausible, which suggests that the parameter might be capturing variance besides that it is intended to capture.

      The best way to resolve this uncertainty would involve running a new study in which feedback was sometimes provided in the absence of a choice - this would isolate positivity bias. Short of that, perhaps one could fit a version of the Bayesian model that also includes perseveration. If the authors can show that this model cannot capture the pattern in Figure 5, that would be fairly convincing.

      We thank the reviewer for this very insightful and crucial point regarding the potential confound between positivity bias and perseveration. We entirely agree that distinguishing these effects can be challenging. To rigorously address this concern and ascertain that our observed positivity bias, particularly its inflation for low-credibility feedback, is not merely an artifact of perseveration, we conducted additional analyses as suggested.

First, following the reviewer’s suggestion, we simulated our Bayesian models, including a perseveration term, for both our main and discovery studies. Crucially, none of these simulations predicted the specific pattern of inflated positivity bias for low-credibility feedback that we identified in participants.

Additionally, taking a “devil’s advocate” approach, we tested whether our Credibility-CA model (which includes perseveration but not a feedback-valence bias) can predict our positivity bias findings. Thus, we simulated 100 datasets using our Credibility-CA model (based on empirical best-fitting parameters). We then fitted each of these simulated datasets using our Credibility-Valence CA model. By examining the distribution of results across fits to these synthetic datasets and comparing them to the actual results from participants, we found that while perseveration could indeed lead (as the reviewer suspected) to an artifactual positivity bias, it could not predict the magnitude of the observed inflation of positivity bias for low-credibility feedback (whether measured in absolute or relative terms).

Based on these comprehensive analyses, we are confident that our main results concerning the modulation of a valence bias as a function of source-credibility cannot be accounted for by simple choice-perseveration. We have briefly explained these analyses in the main results section:

“Previous research has suggested that positivity bias may spuriously arise from pure choice-perseveration (i.e., a tendency to repeat previous choices regardless of outcome) (49,50). While our models included a perseveration-component, this control may not be perfect. Therefore, in additional control analyses, we generated synthetic datasets using models including choice-perseveration but devoid of feedback-valence bias, and fitted them with our credibility-valence model (see SI 3.6.1). These analyses confirmed that perseveration can masquerade as an apparent positivity bias. Critically, however, these analyses also confirmed that perseveration cannot account for our main finding of increased positivity bias, relative to the overall extent of CA, for low-credibility feedback.”

      Additionally, we have added a detailed description of these additional analyses and their findings to the Supplementary Information document:

      “3.6 Positivity bias results cannot be explained by a pure perseveration

      3.6.1 Main study

      Previous research has suggested it may be challenging to dissociate between a feedback-valence positivity bias and perseveration (i.e., a tendency to repeat previous choices regardless of outcome). While our Credit Assignment (CA) models already include a perseveration mechanism to account for this, this control may not be perfect. We thus conducted several tests to examine if our positivity-bias related results could be accounted for by perseveration.

First, we examined whether our Bayesian models, augmented with a perseveration mechanism (as in our CA models), can generate predictions similar to our empirical results. We applied our cross-fitting procedure to these extended Bayesian models. To briefly recap, this involved fitting participant behavior with them, generating synthetic datasets based on the resulting maximum likelihood (ML) parameters, and then fitting these simulated datasets with our Credibility-Valence CA model (which is designed to detect positivity bias). This test revealed that adding perseveration to our Bayesian models did not predict a positivity bias in learning. In absolute terms there was a small negativity bias (instructed-credibility Bayesian: b=−0.19, F(1,1218)=17.78, p<0.001, Fig. S23a-b; free-credibility Bayesian: b=−0.17, F(1,1218)=13.74, p<0.001, Fig. S23d-e). In relative terms we detected no valence-related bias (instructed-credibility Bayesian: b=−0.034, F(1,609)=0.45, p=0.50, Fig. S23c; free-credibility Bayesian: b=−0.04, F(1,609)=0.51, p=0.47, Fig. S23f). More critically, these simulations also did not predict a change in the level of positivity bias as a function of feedback credibility, neither at an absolute level (instructed-credibility Bayesian: F(2,1218)=0.024, p=0.98, Fig. S23b; free-credibility Bayesian: F(2,1218)=0.008, p=0.99, Fig. S23e), nor at a relative level (instructed-credibility Bayesian: F(2,609)=1.57, p=0.21, Fig. S23c; free-credibility Bayesian: F(2,609)=0.13, p=0.88, Fig. S23f). The upshot is that our positivity-bias findings cannot be accounted for by our Bayesian models even when these are augmented with perseveration.

However, it is still possible that empirical CA parameters from our credibility-valence model (reported in main text Fig. 5) were distorted, absorbing variance from perseveration. To address this, we took a “devil's advocate” approach, testing the assumption that CA parameters are not truly affected by feedback valence and that there is only perseveration in our data. Towards that goal, we simulated data using our Credibility-CA model (which includes perseveration but does not contain a valence bias in its learning mechanism) and then fitted these synthetic datasets using our Credibility-Valence CA model to see if the observed positivity bias could be explained by perseveration alone. Specifically, we generated 101 “group-level” synthetic datasets (each including one simulation for each participant, based on their empirical ML parameters), and fitted each dataset with our Credibility-Valence CA model. We then analysed the resulting ML parameters in each dataset using the same mixed-effects models as described in the main text, examining the distribution of effects of interest across these simulated datasets. Comparing these simulation results to the data from participants revealed a nuanced picture. While the positivity bias observed in participants is within the range predicted by a pure perseveration account when measured in absolute terms (Fig. S24a), it is much higher than predicted by pure perseveration when measured relative to the overall level of learning (Fig. S24c). More importantly, the inflation in positivity bias for lower credibility feedback is substantially higher in participants than what would be predicted by a pure perseveration account, a finding that holds true for both absolute (Fig. S24b) and relative (Fig. S24d) measures.”

      “3.6.2 Discovery study

      We then replicated these analyses in our discovery study to confirm our findings. We again checked whether extended versions of the Bayesian models (including perseveration) predicted the positivity bias results observed. Our cross-fitting procedure showed that the instructed-credibility Bayesian model with perseveration did predict a positivity bias for all credibility levels in this discovery study, both when measured in absolute terms [50% credibility (b=1.74,t(824)=6.15), 70% credibility (b=2.00,F(1,824)=49.98), 85% credibility (b=1.81,F(1,824)=40.78), 100% credibility (b=2.42,F(1,824)=72.50), all p's<0.001], and in relative terms [50% credibility (b=0.25,t(412)=3.44), 70% credibility (b=0.31,F(1,412)=17.72), 85% credibility (b=0.34,F(1,412)=21.06), 100% credibility (b=0.42,F(1,412)=31.24), all p's<0.001]. However, importantly, these simulations did not predict a change in the level of positivity bias as a function of feedback credibility, neither at an absolute level (F(3,412)=1.43,p=0.24), nor at a relative level (F(3,412)=2.06,p=0.13) (Fig. S25a-c). In contrast, simulations of the free-credibility Bayesian model (with perseveration) predicted a slight negativity bias when measured in absolute terms (b=−0.35,F(1,824)=5.14,p=0.024), and no valence bias when measured relative to the overall degree of learning (b=0.05,F(1,412)=0.55,p=0.46). Crucially, this model also did not predict a change in the level of positivity bias as a function of feedback credibility, neither at an absolute level (F(3,824)=0.27,p=0.77), nor at a relative level (F(3,412)=0.76,p=0.47) (Fig. S25d-f).

      As in our main study, we next assessed whether our Credibility-CA model (which includes perseveration but no valence bias) predicted the positivity bias results observed in participants in the discovery study. This analysis revealed that the average positivity bias in participants is higher than predicted by a pure perseveration account, both when measured in absolute terms (Fig. S26a) and in relative terms (Fig. S26c). Specifically, only the aVBI for the 70% credibility agent was above what a perseveration account would predict, while the rVBI for all agents except the completely credible one exceeded that threshold. Furthermore, the inflation in positivity bias for lower credibility feedback (compared to the 100% credibility agent) is significantly higher in participants than would be predicted by a pure perseveration account, in both absolute (Fig. S26b) and relative (Fig. S26d) terms.

Together, these results show that the general positivity bias observed in participants could be predicted by an instructed-credibility Bayesian model with perseveration, or by a CA model with perseveration. Moreover, we find that these two models can predict a positivity bias for the 50% credibility agent, raising a concern that our positivity bias findings for this source may be an artefact of perseveration that is not fully controlled for. However, the credibility modulation of this positivity bias, where the bias is amplified for lower credibility feedback, is consistently not predicted by perseveration alone, regardless of whether perseveration is incorporated into a Bayesian or a CA model. This finding suggests that participants are genuinely modulating their learning based on feedback credibility, and that this modulation is not merely an artifact of choice perseveration.”

      (3) Veracity detection or positivity bias?

      The "True feedback elicits greater learning" effect (Figure 6) may be simply a re-description of the positivity bias shown in Figure 5. This figure shows that people have higher CA for trials where the feedback was in fact accurate. But assuming that people tend to choose more rewarding options, true-feedback cases will tend to also be positive-feedback cases. Accordingly, a positivity bias would yield this effect, even if people are not at all sensitive to trial-level feedback veracity. Of course, the reverse logic also applies, such that the "positivity bias" could actually reflect discounting of feedback that is less likely to be true. This idea has been proposed before as an explanation for confirmation bias (see Pilgrim et al, 2024 https://doi.org/10.1016/j.cognition.2023.105693and much previous work cited therein). The authors should discuss the ambiguity between the "positivity bias" and "true feedback" effects within the context of this literature....

Before addressing these excellent comments, we first note that we have now improved our "Truth-CA" model. Previously, our Truth-CA model considered whether feedback on each trial was true or not based on realized latent true outcomes. However, it is possible that the very same feedback would have had an opposite truth-status if the latent true outcome had been different (recall true outcomes are stochastic). This injects noise into the trial classification in our former model. To avoid this, in our new model feedback is modulated by the probability that the reported feedback is true (marginalized over the stochasticity of the true outcome). Please note in our responses below that we conducted extensive analyses to confirm that positivity bias does not, in fact, account for the truth bias we detect using our Truth-CA model.

      We have described this new model in the methods section:

“Additionally, we formulated a “Truth-CA” model, which works like our Credibility-CA model but incorporates a free truth-bonus parameter (TB). This parameter modulates the extent of credit assignment for each agent based on the posterior probability of feedback being true (given the credibility of the feedback agent, and the true reward probability of the chosen bandit). The chosen bandit is updated as follows:

      𝑄 ← (1 – 𝑓<sub>Q</sub>) ∗ 𝑄 + [𝐶𝐴(𝑎𝑔𝑒𝑛𝑡) + 𝑇𝐵 ∗ (𝑃(𝑡𝑟𝑢𝑡ℎ) − 0.5)] ∗ 𝐹

      where P(truth) is the posterior probability of the feedback being true in the current trial (for exact calculation of P(truth) see “Methods: Bayesian estimation of posterior belief that feedback is true”).”

      All relevant results have been updated accordingly in the main text:

To formally address whether feedback truthfulness modulates credit assignment, we fitted a new variant of the CA model (the “Truth-CA” model) to the data. This variant works like our Credibility-CA model but incorporates a truth-bonus parameter (TB), which increases the degree of credit assignment for feedback as a function of the experimenter-determined likelihood that the feedback is true (which is read from the curves in Fig. 6a when x is taken to be the true probability the bandit is rewarding). Specifically, after receiving feedback, the Q-value of the chosen option is updated according to the following rule:

      𝑄 ← (1 – 𝑓<sub>Q</sub>) ∗ 𝑄 + [𝐶𝐴(𝑎𝑔𝑒𝑛𝑡) + 𝑇𝐵 ∗ (𝑃(𝑡𝑟𝑢𝑡ℎ) − 0.5)] ∗ 𝐹

where 𝑇𝐵 is the free parameter representing the truth bonus, and 𝑃(𝑡𝑟𝑢𝑡ℎ) is the probability of the received feedback being true (from the experimenter’s perspective). We acknowledge that this model falls short of providing a mechanistically plausible description of the credit assignment process, because participants have no access to the experimenter’s truthfulness likelihoods (as the true bandit reward probabilities are unknown to them). Nonetheless, we use this ‘oracle model’ as a measurement tool to glean rough estimates for the extent to which credit assignment is boosted as a function of its truthfulness likelihood.

Fitting this Truth-CA model to participants' behaviour revealed a significant positive truth-bonus (mean=0.21, t(203)=3.12, p=0.002), suggesting that participants indeed assign greater weight to feedback that is likely to be true (Fig. 6c; see SI 3.3.1 for detailed ML parameter results). Notably, simulations using our other models (Methods) consistently predicted smaller truth biases (compared to the empirical bias) (Fig. 6d). Moreover, truth bias was still detected even in a more flexible model that allowed for both a positivity bias and truth-bias (see SI 3.7). The upshot is that participants are biased to assign higher credit based on feedback that is more likely to be true in a manner that is inconsistent with our Bayesian models and above and beyond the previously identified positivity biases.”

      Finally, the Supplementary Information for the discovery study has also been revised to feature this analysis:

“We next assessed whether participants infer whether the feedback they received on each trial was true or false and adjust their credit assignment based on this inference. We again used the “Truth-CA” model to obtain estimates for the truth bonus (TB), the increase in credit assignment as a function of the posterior probability of feedback being true. As in our main study, the fitted truth-bonus parameter was significantly positive, indicating that participants assign greater weight to feedback they believe is likely to be true (Fig. S4a; see SI 3.3.1 for detailed ML parameter results). Strikingly, model-simulations (Methods) predicted a lower truth bonus than the one observed in participants (Fig. S4b).”

Additionally, we thank the reviewer for pointing us to the relevant work by Pilgrim et al. (2024). We agree that the relationship between "true feedback" and "positivity bias" effects is nuanced, and their potential overlap warrants careful consideration. However, our analyses suggest that the truth-bonus effect is not merely a re-description of the positivity bias. Firstly, simulations of our Credibility-Valence CA model predict only a small "truth bonus" effect, which is notably smaller than what we observed in participants. Secondly, we formulated an extension of our "Truth-CA" model that includes a valence bias in credit assignment. If our truth bonus results were merely an artifact of positivity bias, this extended model should absorb that variance, producing a null truth bonus parameter. However, fitting this model to participant data still revealed a significant positive truth bonus, which again exceeds the range predicted by simulations of our Credibility-CA model:

      “3.7 Truth inference is still detected when controlling for valence bias

Given that participants frequently select bandits that are, on average, mostly rewarding, it is reasonable to assume that positive feedback is more likely to be objectively true than negative feedback. This raises the question of whether the "truth inference" effect we observed in participants might simply be an alternative description of a positivity bias in learning. To directly test this idea, we extended our Truth-CA model to explicitly account for a valence bias in credit assignment. This extended model features separate CA parameters for positive and negative feedback for each agent. When we fitted this new model to participant behavior, it still revealed a significant truth bonus in both the main study (Wilcoxon signed-rank test: median = 0.09, z(202)=2.12, p=0.034; Fig. S27a) and the discovery study (median = 3.52, z(102)=7.86, p<0.001; Fig. S27c). Moreover, in the main study, this truth bonus remained significantly higher than what was predicted by all the alternative models, with the exception of the instructed-credibility Bayesian model (Fig. S27b). In the discovery study, the truth bonus was significantly higher than what was predicted by all the alternative models (Fig. S27d).”

      Together, these findings suggest that our truth inference results are not simply a re-description of a positivity bias.

Conversely, we acknowledge the reviewer's point that our positivity bias results could potentially stem from a more general truth inference mechanism. We believe that this possibility should be addressed in a future study where participants rate their belief that the received feedback is true (rather than a lie). We have extended our discussion to clarify this possibility and to include the suggested citation:

“Our findings show that individuals increase their credit assignment for feedback in proportion to the perceived probability that the feedback is true, even after controlling for source credibility and feedback valence. Strikingly, this learning bias was not predicted by any of our Bayesian or credit-assignment (CA) models. Notably, our evidence for this bias is based on an “oracle model” that incorporates the probability of feedback truthfulness from the experimenter's perspective, rather than the participant’s. This raises an important open question: how do individuals form beliefs about feedback truthfulness, and how do these beliefs influence credit assignment? Future research should address this by eliciting trial-by-trial beliefs about feedback truthfulness. Doing so would also allow for testing the intriguing possibility that an exaggerated positivity bias for non-credible sources reflects, to some extent, a truth-based discounting of negative feedback—i.e., participants may judge such feedback as less likely to be true. However, it is important to note that the positivity bias observed for fully credible sources (here and in other literature) cannot be attributed to a truth bias—unless participants were, against instructions, distrustful of that source.”

      The authors get close to this in the discussion, but they characterize their results as differing from the predictions of rational models, the opposite of my intuition. They write:

      “Alternative "informational" (motivation-independent) accounts of positivity and confirmation bias predict a contrasting trend (i.e., reduced bias in low- and medium credibility conditions) because in these contexts it is more ambiguous whether feedback confirms one's choice or outcome expectations, as compared to a full-credibility condition.”

      I don't follow the reasoning here at all. It seems to me that the possibility for bias will increase with ambiguity (or perhaps will be maximal at intermediate levels). In the extreme case, when feedback is fully reliable, it is impossible to rationally discount it (illustrated in Figure 6A). The authors should clarify their argument or revise their conclusion here.

We apologize for the lack of clarity in our previous explanation. We removed the sentence you cited (it was intended to make a different point, which we now consider non-essential). Our current narrative is consistent with the point you are making.

      (4) Disinformation or less information?

      Zooming out, from a computational/functional perspective, the reliability of feedback is very similar to reward stochasticity (the difference is that reward stochasticity decreases the importance/value of learning in addition to its difficulty). I imagine that many of the effects reported here would be reproduced in that setting. To my surprise, I couldn't quickly find a study asking that precise question, but if the authors know of such work, it would be very useful to draw comparisons. To put a finer point on it, this study does not isolate which (if any) of these effects are specific to disinformation, rather than simply less information. I don't think the authors need to rigorously address this in the current study, but it would be a helpful discussion point.

We thank the reviewer for highlighting the parallel (and difference) between feedback reliability and reward stochasticity. However, we have not found any comparable results in the literature. We also note that our discussion includes a paragraph addressing the locus of our effects, making the point that more studies are necessary to determine whether our findings are due to disinformation per se or to sources simply being less informative. While this paragraph was included in the previous version, we felt the Discussion had become too long and have therefore shortened it considerably:

“An important question arises as to the psychological locus of the biases we uncovered. Because we were interested in how individuals process disinformation—deliberately false or misleading information intended to deceive or manipulate—we framed the feedback agents in our study as deceptive agents who would occasionally “lie” about the true choice outcome. However, statistically (though not necessarily psychologically), these agents are equivalent to agents who mix truth-telling with random “guessing” or “noise”, where inaccuracies may arise from factors such as occasionally lacking access to true outcomes, simple laziness, or mistakes, rather than an intent to deceive. This raises the question of whether the biases we observed are driven by the perception of potential disinformation as deceitful per se or simply as deviating from the truth. Future studies could address this question by directly comparing learning from statistically equivalent sources framed as either lying or noisy. Unlike previous studies wherein participants had to infer source credibility from experience (30,37,72), we took an explicit-instruction approach, allowing us to precisely assess source-credibility impact on learning, without confounding it with errors in learning about the sources themselves. More broadly, our work connects with prior research on observational learning, which examined how individuals learn from the actions or advice of social partners (72–75). This body of work has demonstrated that individuals integrate learning from their private experiences with learning based on others’ actions or advice—whether by inferring the value others attribute to different options or by mimicking their behavior (57,76). However, our task differs significantly from traditional observational learning. Firstly, our feedback agents interpret outcomes rather than demonstrating or recommending actions (30,37,72). Secondly, participants in our study lack private experiences unmediated by feedback sources. Finally, unlike most observational learning paradigms, we systematically address scenarios with deliberately misleading social partners. Future studies could bridge this by incorporating deceptive social partners into observational learning, offering a chance to develop unified models of how individuals integrate social information when credibility is paramount for decision-making.”

      (5) Over-reliance on analyzing model parameters

      Most of the results rely on interpreting model parameters, specifically, the "credit assignment" (CA) parameter. Exacerbating this, many key conclusions rest on a comparison of the CA parameters fit to human data vs. those fit to simulations from a Bayesian model. I've never seen anything like this, and the authors don't justify or even motivate this analysis choice. As a general rule, analyses of model parameters are less convincing than behavioral results because they inevitably depend on arbitrary modeling assumptions that cannot be fully supported. I imagine that most or even all of the results presented here would have behavioral analogues. The paper would benefit greatly from the inclusion of such results. It would also be helpful to provide a description of the model in the main text that makes it very clear what exactly the CA parameter is capturing (see next point).

      We thank the reviewer for this important suggestion which we address together with the following point.

      (6) RL or regression?

      I was initially very confused by the "RL" model because it doesn't update based on the TD error. Consequently, the "Q values" can go beyond the range of possible reward (SI Figure 5). These values are therefore not Q values, which are defined as expectations of future reward ("action values"). Instead, they reflect choice propensities, which are sometimes notated $h$ in the RL literature. This misuse of notation is unfortunately quite common in psychology, so I won't ask the authors to change the variable. However, they should clarify when introducing the model that the Q values are not action values in the technical sense. If there is precedent for this update rule, it should be cited.

      Although the change is subtle, it suggests a very different interpretation of the model.

      Specifically, I think the "RL model" is better understood as a sophisticated logistic regression, rather than a model of value learning. Ignoring the decay term, the CA term is simply the change in log odds of repeating the just-taken action in future trials (the change is negated for negative feedback). The PERS term is the same, but ignoring feedback. The decay captures that the effect of each trial on future choices diminishes with time. Importantly, however, we can re-parameterize the model such that the choice at each trial is a logistic regression where the independent variables are an exponentially decaying sum of feedback of each type (e.g., positive-cred50, positive-cred75, ... negative-cred100). The CA parameters are simply coefficients in this logistic regression.

      Critically, this is not meant to "deflate" the model. Instead, it clarifies that the CA parameter is actually not such an assumption-laden model estimate. It is really quite similar to a regression coefficient, something that is usually considered "model agnostic". It also recasts the non-standard "cross-fitting" approach as a very standard comparison of regression coefficients for model simulations vs. human data. Finally, using different CA parameters for true vs false feedback is no longer a strange and implausible model assumption; it's just another (perfectly valid) regression. This may be a personal thing, but after adopting this view, I found all the results much easier to understand.

We thank the reviewer for their insightful and illuminating comments, particularly concerning the interpretation of our model parameters and the nature of our credit-assignment model. We believe your interpretation of the model is accurate, and we now present it to readers in the hope that our modelling will become clearer and more intuitive. We also explain to readers how this recasts our “cross-fitting” approach in the way you suggested (we return to this point below).

      Broadly, while we agree that modelling results depend on underlying assumptions, we believe that “model-agnostic” approaches also have important limitations—especially in reinforcement learning (RL), where choices are shaped by histories of past events, which such approaches often fail to fully account for. As students of RL, we are frequently struck by how careful modelling demonstrates that seemingly meaningful “model-agnostic” patterns can emerge as artefacts of unaccounted-for variables. We also note that the term “model-agnostic” is difficult to define—after all, even regression models rely on assumptions, and some computational models make richer or more transparent assumptions than others. Ideally, we aim to support our findings using converging methods wherever possible.

      We want to clarify that many of our reported findings indeed stem from straightforward behavioral analyses (e.g., simple regressions of choice-repetition), which do not rely on complex modeling assumptions. The two key results that primarily depend on the analysis of model parameters are our findings related to positivity bias and truth inference.

Regarding the positivity bias, identifying truly model-agnostic behavioral signatures, distinct from effects like choice-perseveration, has historically been a significant challenge in the literature. Classical research on this bias rests on the interpretation of model parameters (Lefebvre et al., 2017; Palminteri et al., 2017), or at least on the use of models to assess what an “unbiased learner” baseline should look like (Palminteri & Lebreton, 2022). Some researchers have suggested possible regressions incorporating history effects to detect positivity bias from choice-repetition behavior, but these regressions (like our model) rely on subtle assumptions about forgetting and history effects (Toyama et al., 2019). Specifically, in our case, this issue is also demonstrated by the analyses we conducted in relation to the reviewer's previous point (about perseveration masquerading as positivity bias). We believe that clearly dissociating positivity bias from perseveration is an important challenge for the field going forward.

      For our truth inference results, obtaining purely behavioral signatures is similarly challenging due to the intricate interdependencies (the reviewer has identified in previous points) between agent credibility, feedback valence, feedback truthfulness, and choice accuracy within our task design.

Finally, we agree with the reviewer that regression coefficients are often interpreted as a “model-agnostic” pattern. From this perspective, even our findings regarding positivity and truth bias are not a case of over-reliance on complex model assumptions but are rather a way to expose deviations between empirical “sophisticated” regression coefficients and coefficients predicted from Bayesian models.

      We have now described the main learning rule of our model in the main text to ensure that the meaning of the CA parameters is clearer for readers:

“Next, we formulated a family of non-Bayesian computational RL models. Importantly, these models can flexibly express non-Bayesian learning patterns and, as we show in the following sections, can serve to identify learning biases deviating from an idealized Bayesian strategy. Here, an assumption is that during feedback, the choice propensity for the chosen bandit (which here is represented by a point estimate, “Q value”, rather than a distribution) either increases or decreases (for positive or negative feedback, respectively) according to a magnitude quantified by the free “Credit-Assignment (CA)” model parameters (47):

      𝑄(𝑐ℎ𝑜𝑠𝑒𝑛) ← (1 – 𝑓<sub>Q</sub>) ∗ 𝑄(𝑐ℎ𝑜𝑠𝑒𝑛) + 𝐶𝐴(𝑎𝑔𝑒𝑛𝑡, 𝑣𝑎𝑙𝑒𝑛𝑐𝑒) ∗ 𝐹

where F is the feedback received from the agents (coded as 1 for reward feedback and -1 for non-reward feedback), while fQ (∈[0,1]) is the free parameter representing the forgetting rate of the Q-value (Fig. 2a, bottom panel; Fig. S5b; Methods). The probability of choosing a bandit (say A over B) in this family of models is a logistic function of the contrast between the choice propensities of these two bandits. One interpretation of this model is as a “sophisticated” logistic regression, where the CA parameters take the role of “regression coefficients” corresponding to the change in log odds of repeating the just-taken action in future trials based on the feedback (+/- CA for positive or negative feedback, respectively; the model also includes gradual perseveration, which allows for constant log-odds changes that are not affected by choice feedback; see “Methods: RL models”). The forgetting rate captures the extent to which the effect of each trial on future choices diminishes with time. The Q-values thus act as choice propensities formed by exponentially decaying sums of the different types of feedback a bandit has received.”
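As a minimal illustration of this logistic-regression reading (our own sketch; parameter values are arbitrary, and the decay of the unchosen bandit is an assumption not specified in the excerpt above):

```python
import math

def ca_update(q, feedback, ca, f_q):
    """Credit-assignment update of the chosen bandit's choice propensity:
    Q <- (1 - f_Q)*Q + CA(agent, valence)*F, with F in {+1, -1}."""
    return (1.0 - f_q) * q + ca * feedback

def p_choose_a(q_a, q_b):
    """Choice probability as a logistic function of the propensity contrast."""
    return 1.0 / (1.0 + math.exp(-(q_a - q_b)))

# One positive-feedback trial from a given agent shifts the log odds of repeating
# the just-taken choice by (approximately) +CA, which is why the CA parameters
# behave like logistic-regression coefficients.
f_q, ca_pos = 0.1, 0.8
q_a = q_b = 0.0
print(p_choose_a(q_a, q_b))             # 0.5 before any feedback
q_a = ca_update(q_a, +1, ca_pos, f_q)   # choose A, receive positive feedback
q_b = (1.0 - f_q) * q_b                 # assumption: the unchosen bandit only decays
print(p_choose_a(q_a, q_b))             # log odds of choosing A are now +0.8
```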

      We also explain the implications of this perspective for our cross-fitting procedure:

“To further characterise deviations between behaviour and our Bayesian learning models, we used a “cross-fitting” method. Treating CA parameters as data-features of interest (i.e., feedback-dependent changes in choice propensity), our goal was to examine if and how empirical features differ from features extracted from simulations of our Bayesian learning models. Towards that goal, we simulated synthetic data based on Bayesian agents (using participants’ best fitting parameters), but fitted these data using the CA-models, obtaining what we term “Bayesian-CA parameters” (Fig. 2d; Methods). A comparison of these Bayesian-CA parameters with empirical CA parameters, obtained by fitting CA models to empirical data, allowed us to uncover patterns consistent with, or deviating from, ideal-Bayesian value-based inference. Under the sophisticated logistic-regression interpretation of the CA-model family, the cross-fitting method amounts to a comparison between empirical regression coefficients (i.e., empirical CA parameters) and regression coefficients based on simulations of Bayesian models (Bayesian-CA parameters). Using this approach, we found that both the instructed-credibility and free-credibility Bayesian models predicted increased Bayesian-CA parameters as a function of agent credibility (Fig. 3c; see SI 3.1.1.2 Tables S8 and S9). However, an in-depth comparison between Bayesian and empirical CA parameters revealed discrepancies from ideal Bayesian learning, which we describe in the following sections.”
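To make the cross-fitting logic concrete, the following toy sketch (our own illustration, not the authors' code) simulates choices from a known generative CA agent and then recovers the CA parameter by maximum likelihood; in the paper, the generative agents are the fitted Bayesian models and the recovered “Bayesian-CA parameters” are compared with the empirical ones. All settings below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_TRIALS, F_Q = 400, 0.1        # forgetting rate fixed for simplicity (an assumption)
TRUE_CA = 0.7                   # generative credit-assignment weight (illustrative)

def simulate(ca):
    """Simulate a two-bandit session from a simple CA agent (bandit A rewarding 70% of the time)."""
    q = np.zeros(2)
    choices, feedback = [], []
    for _ in range(N_TRIALS):
        p_a = 1.0 / (1.0 + np.exp(-(q[0] - q[1])))
        c = 0 if rng.random() < p_a else 1
        f = 1 if rng.random() < (0.7 if c == 0 else 0.3) else -1
        q[c] = (1 - F_Q) * q[c] + ca * f
        q[1 - c] *= (1 - F_Q)   # assumption: unchosen propensity simply decays
        choices.append(c)
        feedback.append(f)
    return choices, feedback

def neg_loglik(params, choices, feedback):
    """Negative log-likelihood of the same CA model, used to refit the simulated data."""
    ca = params[0]
    q, ll = np.zeros(2), 0.0
    for c, f in zip(choices, feedback):
        p_a = 1.0 / (1.0 + np.exp(-(q[0] - q[1])))
        p_choice = p_a if c == 0 else 1.0 - p_a
        ll += np.log(np.clip(p_choice, 1e-12, 1.0))
        q[c] = (1 - F_Q) * q[c] + ca * f
        q[1 - c] *= (1 - F_Q)
    return -ll

choices, feedback = simulate(TRUE_CA)
fit = minimize(neg_loglik, x0=[0.1], args=(choices, feedback))
print(TRUE_CA, float(fit.x[0]))   # the refitted CA should approximate the generative value
```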

      Recommendations for the authors:

      Reviewer #3 (Recommendations for the authors):

      (1) Keep terms consistent, e.g., follow-up vs. main; hallmark vs. traditional.

      We have now changed the text to keep terms consistent.

      (2) CA model is like a learning rate; but it's based on the raw reward, not the TD error - this seems strange.

      We thank the reviewer for this comment. We understand that the use of a CA model instead of a TD error model may seem unusual at first glance. However, the CA model offers an important advantage: it more easily accommodates what we term "negative learning rates". This means that some participants may treat certain agents (especially the random one) as consistently deceitful, leading them to effectively increase/reduce choice tendencies following negative/positive feedback. A CA model handles this naturally by allowing negative CA parameters as a simple extension of positive ones. In contrast, adapting a TD error model to account for this is more complex. For instance, attempting to introduce a "negative learning rate" makes the RW model behave in a non-stable manner (e.g., Q values become <0 or >1). At the initial stages of our project, we explored different approaches to dealing with this issue and we found the CA model provides the best approach. For these reasons, we decided to proceed with our CA model.
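To illustrate this instability concretely, here is a small self-contained toy simulation (our own illustration; parameter values are arbitrary):

```python
import numpy as np

# A "negative learning rate" destabilises a Rescorla-Wagner-style update, whereas a
# negative CA parameter simply reverses the direction of a bounded choice-propensity update.
def rw_update(q, reward, alpha):
    return q + alpha * (reward - q)            # reward coded 0/1

def ca_update(q, feedback, ca, f_q=0.1):
    return (1.0 - f_q) * q + ca * feedback     # feedback coded +1/-1

rng = np.random.default_rng(0)
q_rw, q_ca = 0.5, 0.0
for _ in range(200):
    rewarded = rng.random() < 0.7
    q_rw = rw_update(q_rw, 1.0 if rewarded else 0.0, alpha=-0.3)   # negative learning rate
    q_ca = ca_update(q_ca, +1 if rewarded else -1, ca=-0.3)        # negative CA parameter
print(f"RW Q-value: {q_rw:.3g} (no longer in [0, 1]); CA propensity: {q_ca:.3g} (bounded)")
```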

Additionally, we used the CA model in previous studies (e.g., Moran, Dayan & Dolan, 2021), where we included (in the SI) a detailed discussion of the similarities and differences between credit-assignment and Rescorla-Wagner models.

      (3) Why was the follow-up study not pre-registered?

      We appreciate the reviewer's comment regarding preregistration, which we should have done. Unfortunately, this is now “water under the bridge” but going forward we hope to pre-register increasing parts of our work.

      (4) Other work looking at reward stochasticity?

      As noted in point 4 of the main weaknesses, previous work on reward stochasticity primarily focused on explaining the increase/decrease in learning and its mechanistic bases under varying stochasticity levels. In our study, we uniquely characterize several specific learning biases that are modulated by source credibility, a topic not extensively explored within the existing reward stochasticity framework, as far as we know.

      (5) Equation 1 is different from the one in the figure?

      The reviewer is completely correct. The figure provides a simplified visual representation, primarily focusing on the feedback-based update of the Q-value, and for simplicity, it omits the forgetting term present in the full Equation 1. To ensure complete clarity and prevent any misunderstanding, we have now incorporated a more detailed explanation of the model, including the complete Equation 1 and its components, directly within the main text. This comprehensive description will ensure that readers are fully aware of how the model operates.

      “Next, we formulated a family of non-Bayesian computational RL models. Importantly, these models can flexibly express non-Bayesian learning patterns and, as we show in following sections, can serve to identify learning biases deviating from an idealized Bayesian strategy. Here, an assumption is that during feedback, the choice propensity for the chosen bandit (which here is represented by a point estimate, “Q value“, rather than a distribution) either increases or decreases (for positive or negative feedback, respectively) according to a magnitude quantified by the free “Credit-Assignment (CA)” model parameters (47):

      Q(chosen) ← (1 – f_Q) ∗ Q(chosen) + CA(agent, valence) ∗ F

      where F is the feedback received from the agents (coded as 1 for reward feedback and -1 for non-reward feedback), while f_Q (∈[0,1]) is the free parameter representing the forgetting rate of the Q-value (Fig. 2a, bottom panel; Fig. S5b; Methods).”

      (6) Please describe/plot the distribution of all fitted parameters in the supplement. I would include the mean and SD in the main text (methods) as well.

      Following the reviewer’s suggestions, we have included in the Supplementary Document tables displaying the mean and SD of fitted parameters from participants for our main models of interest. We have also plotted the distributions of such parameters. Both for the main study:

      (7) "A novel approach within the disinformation literature by exploiting a Reinforcement Learning (RL) experimental framework".

      The idea of applying RL to disinformation is not new. Please tone down novelty claims. It would be nice to cite/discuss some of this work as well.

      https://arxiv.org/abs/2106.05402?utm_source=chatgpt.com
      https://www.scirp.org/pdf/jbbs_2022110415273931.pdf
      https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4173312

      We thank the reviewer for pointing us towards relevant literature. We have now toned down the sentence in the introduction and cited the references provided:

      “To address these questions, we adopt a novel approach within the disinformation literature by exploiting a Reinforcement Learning (RL) experimental framework (36). While RL has guided disinformation research in recent years (37–40), our approach is novel in using one of its most popular tasks: the “bandit task”.”

      (8) Figure 3a - The figures should be in the order that they're referenced (3 is referenced before 2).

      We generally try to stick to this important rule but, in this case, we believe that our ordering better serves the narrative, and we hope the reviewer will excuse this small violation.

      (9) "Additionally, we found a positive feedback-effect for the 3-star agent"

      What is the analysis here? To avoid confusion with the "positive feedback" effect, consider using "positive effect of feedback". The dash wasn't sufficient to avoid confusion in my case.

      We have now updated the terms in the text to avoid confusion.

      (10) The discovery study revealed even stronger results supporting a conclusion that the credibility-CA model was superior to both Bayesian models for most subjects

      This is very subjective, but I'll just mention that my "cherry-picking" flag was raised by this sentence. Are you only mentioning cases where the discovery study was consistent with the main study? Upon a closer read, I think the answer is most likely "no", but you might consider adopting a more systematic (perhaps even explicit) policy on when and how you reference the discovery study to avoid creating this impression in a more casual reader.

      We thank the reviewer for this valuable suggestion. To prevent any impression of "cherry-picking", we have removed specific references to the discovery study from the main body of the text. Instead, all discussions regarding the convergence and divergence of results between the two studies are now in the dedicated section focusing on the discovery study:

      “The discovery study (n=104) used a disinformation task structurally similar to that used in our main study, but with three notable differences: 1) it included 4 feedback agents, with credibilities of 50%, 70%, 85% and 100%, represented by 1, 2, 3, and 4 stars, respectively; 2) each experimental block consisted of a single bandit pair, presented over 16 trials (with 4 trials for each feedback agent); and 3) in certain blocks, unbeknownst to participants, the two bandits within a pair were equally rewarding (see SI section 1.1). Overall, this study’s results supported similar conclusions to those of our main study (see SI section 1.2), with a few differences. We found convergent support for increased learning from more credible sources (SI 1.2.1), superior fit for the CA model over Bayesian models (SI 1.2.2) and increased learning from feedback inferred to be true (SI 1.2.6). Additionally, we found an inflation of positivity bias for low-credibility sources both when measured relative to the overall level of credit assignment (as in our main study) and in absolute terms (unlike in our main study) (Fig. S3; SI 1.2.5). Moreover, choice-perseveration could not predict an amplification of positivity bias for low-credibility sources (see SI 3.6.2). However, we found no evidence for learning based on 50%-credibility feedback when examining either the feedback effect on choice repetition or CA in the credibility-CA model (SI 1.2.3).”

      (11) An in-depth comparison between Bayesian and empirical CA parameters revealed discrepancies from normative Bayesian learning.

      Consider saying where this in-depth comparison can be found (based on my reading, I think you're referring to the next section?).

      We have now modified the sentence for better clarity:

      “However, an in-depth comparison between Bayesian and empirical CA parameters revealed discrepancies from ideal Bayesian learning, which we describe in the following sections.”

      (12) "which essentially provides feedback" Perhaps you meant "random feedback"?

      We have modified the text as suggested by the reviewer.

      (13) Essentially random

      Why "essentially"? Isn't it just literally random?

      We have modified the text as suggested by the reviewer.

      (14) Both Bayesian models predicted an attenuated credit-assignment for the 3-star agent

      Attenuated relative to what? I wouldn't use this word if you mean weaker than what we see in the human data. Instead, I would say people show an exaggerated credit-assignment, since Bayes is the normative baseline.

      We changed the text according to the reviewer’s suggestion:

      “A comparison of empirical and Bayesian credit-assignment parameters revealed a further deviation from ideal Bayesian learning: participants showed an exaggerated credit-assignment for the 3-star agent compared with Bayesian models.”

      (15) "there was no difference between 2-star and 3-star agent contexts (b=0.051, F(1,2419)=0.39, p=0.53)"

      You cannot confirm the null hypothesis! Instead, you can write "The difference between 2-star and 3-star agent contexts was not significant". Although even with this language, you should be careful that your conclusions don't rest on the lack of a difference (the next sentence is somewhat ambiguous on this point).

      Additionally, the reported b coefs do not match the figure, which if anything, suggests a larger drop from 0.75 (2-star) to 1 (3-star). Is this a mixed vs fixed effects thing? It would be helpful to provide an explanation here.

      We thank the reviewer for this question. When we previously submitted our manuscript, we thought that finding enhanced credit-assignment for fully credible feedback following potential disinformation from a DIFFERENT context would constitute a striking demonstration of our “contrast effect”. However, upon reexamining this finding we found out we had a coding error (affecting how trials were filtered). We have now rerun and corrected this analysis. We have assessed the contrast effect for both "same-context" trials (where the contextual trial featured the same bandit pair as the learning trial) and "different-context" trials (where the contextual trial featured a different bandit pair). Our re-analysis reveals a selective significant contrast effect in the same-context condition, but no significant effect in the different-context condition. We have updated the main text to reflect these corrected findings and provide a clearer explanation of the analysis:

      “A comparison of empirical and Bayesian credit-assignment parameters revealed a further deviation from ideal Bayesian learning: participants showed an exaggerated credit-assignment for the 3-star agent compared with Bayesian models [Wilcoxon signed-rank test, instructed-credibility Bayesian model (median difference=0.74, z=11.14); free-credibility Bayesian model (median difference=0.62, z=10.71), all p’s<0.001] (Fig. 3a). One explanation for enhanced learning for the 3-star agents is a contrast effect, whereby credible information looms larger against a backdrop of non-credible information. To test this hypothesis, we examined whether the impact of feedback from the 3-star agent is modulated by the credibility of the agent in the trial immediately preceding it. More specifically, we reasoned that the impact of a 3-star agent would be amplified by a “low credibility context” (i.e., when it is preceded by a low credibility trial). In a binomial mixed effects model, we regressed choice-repetition on feedback valence from the last trial featuring the same bandit pair (i.e., the learning trial) and the feedback agent on the trial immediately preceding that last trial (i.e., the contextual credibility; see Methods for model-specification). This analysis included only learning trials featuring the 3-star agent, and context trials featuring the same bandit pair as the learning trial (Fig. 4a). We found that feedback valence interacted with contextual credibility (F(2,2086)=11.47, p<0.001) such that the feedback-effect (from the 3-star agent) decreased as a function of the preceding context-credibility (3-star context vs. 2-star context: b=-0.29, F(1,2086)=4.06, p=0.044; 2-star context vs. 1-star context: b=-0.41, t(2086)=-2.94, p=0.003; and 3-star context vs. 1-star context: b=0.69, t(2086)=-4.74, p<0.001) (Fig. 4b). This contrast effect was not predicted by simulations of our main models of interest (Fig. 4c). No effect was found when focussing on contextual trials featuring a bandit pair different than the one in the learning trial (see SI 3.5). Thus, these results support an interpretation that credible feedback exerts a greater impact on participants’ learning when it follows non-credible feedback, in the same learning context.”
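      For readers who want to see the shape of this analysis, here is a schematic Python sketch. It is not the authors’ code: the column names and input file are hypothetical, and for brevity it fits a plain fixed-effects logistic regression rather than the binomial mixed-effects model (with a per-participant random intercept) used in the paper.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: one row per 3-star learning trial, restricted to cases where
      # the context trial featured the same bandit pair, with columns:
      #   repeat   (1 = previous choice repeated, 0 = not)
      #   feedback (+0.5 positive / -0.5 negative feedback on the learning trial)
      #   context  (credibility of the context-trial agent: "1-star", "2-star", "3-star")
      #   better   (1 = chosen bandit was the mostly rewarding one, 0 = otherwise)
      df = pd.read_csv("contrast_effect_trials.csv")

      model = smf.logit("repeat ~ feedback * C(context) + better", data=df).fit()
      print(model.summary())
      # The feedback x context interaction terms capture the contrast effect: the effect of
      # 3-star feedback on repetition should shrink as context credibility increases.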

      We have modified the discussion accordingly as well:

      “A striking finding in our study was that for a fully credible feedback agent, credit assignment was exaggerated (i.e., higher than predicted by our Bayesian models). Furthermore, the effect of fully credible feedback on choice was further boosted when it was preceded by a low-credibility context related to current learning. We interpret this in terms of a “contrast effect”, whereby veridical information looms larger against a backdrop of disinformation (21). One upshot is that exaggerated learning might entail a risk of jumping to premature conclusions based on limited credible evidence (e.g., a strong conclusion that a vaccine produces significant side-effect risks based on weak credible information, following non-credible information about the same vaccine). An intriguing possibility, that could be tested in future studies, is that participants strategically amplify the extent of learning from credible feedback to dilute the impact of learning from noncredible feedback. For example, a person scrolling through a social media feed, encountering copious amounts of disinformation, might amplify the weight they assign to credible feedback in order to dilute effects of ‘fake news’. Ironically, these results also suggest that public campaigns might be more effective when embedding their messages in low-credibility contexts, which may boost their impact.”

      And we have included some additional analyses in the SI document:

      “3.5 Contrast effects for contexts featuring a different bandit

      Given that we observed a contrast effect when both the learning and the immediately preceding “context trial” involved the same pair of bandits, we next investigated whether this effect persisted when the context trial featured a different bandit pair – a situation where the context would be irrelevant to the current learning. Again, we used a binomial mixed-effects model, regressing choice-repetition on feedback valence in the learning trial and the feedback agent in the context trial. This analysis included only learning trials featuring the 3-star agent, and context trials featuring a different bandit pair than the learning trial (Fig. S22a). We found no significant evidence of an interaction between feedback valence and contextual credibility (F(2,2364)=0.21, p=0.81) (Fig. S22b). This null result was consistent with the range of outcomes predicted by our main computational models (Fig. S22c).

      We aimed to formally compare the influence of two types of contextual trials: those featuring the same bandit pair as the learning trial versus those featuring a different pair. To achieve this, we extended our mixed-effects model by incorporating a new predictor variable, "CONTEXT_TYPE", which coded whether the contextual trial involved the same bandit pair (coded as -0.5) or a different bandit pair (+0.5) compared to the learning trial. The Wilkinson notation for this expanded mixed-effects model is:

      REPEAT ~ CONTEXT_TYPE * FEEDBACK * (CONTEXT2-star + CONTEXT3-star) + BETTER + (1|participant)

      This expanded model revealed a significant three-way interaction between feedback valence, contextual credibility, and context type (F(2,4451) = 7.71, p<0.001). Interpreting this interaction, we found a two-way interaction between context-source and feedback valence when the context was the same (F(2,4451) = 12.03, p<0.001), but not when the context was different (F(2,4451) = 0.23, p = 0.79). Further interpreting this two-way feedback-valence * context-source interaction (for the same context), we obtained the same conclusions as reported in the main text.”

      (16) "Strikingly, model-simulations (Methods) showed this pattern is not predicted by any of our other models"

      Why doesn't the Bayesian model predict this?

      Thanks for the comment. Overall, Bayesian models do predict a slight truth inference effect (see Figure 6d). However, these effects are not as strong as the ones observed in participants, suggesting that our results go beyond what would be predicted by a Bayesian model.

      Conceptually, it's important to note that the Bayesian model can infer (after controlling for source credibility and feedback valence) whether feedback is truthful based solely on prior beliefs about the chosen bandit. Using this inferred truth to amplify the weight of truthful feedback would effectively amount to “bootstrapping on one’s own beliefs.” This is most clearly illustrated with the 50% agent: if one believes that a chosen bandit yields rewards 70% of the time, then positive feedback is more likely to be truthful than negative feedback. However, a Bayesian observer would also recognize that, given the agent’s overall unreliability, such feedback should be ignored regardless.
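      The arithmetic behind this example can be made explicit. The short snippet below uses illustrative numbers only, and assumes for simplicity that a lying source reports the opposite of the true outcome; it computes the posterior probability that feedback is truthful, given a prior belief p that the chosen bandit is rewarding and a source credibility c.

      def p_truthful(feedback_positive, p, c):
          # P(feedback is truthful | feedback), with prior belief p that the chosen bandit
          # rewards, and a source that tells the truth with probability c (lies otherwise).
          if feedback_positive:
              return c * p / (c * p + (1 - c) * (1 - p))
          return c * (1 - p) / (c * (1 - p) + (1 - c) * p)

      p, c = 0.7, 0.5   # believe the bandit rewards 70% of the time; 50%-credibility agent
      print(p_truthful(True, p, c))    # 0.7: positive feedback is more likely to be truthful
      print(p_truthful(False, p, c))   # 0.3: negative feedback is less likely to be truthful

      Note that at c = 0.5 the feedback is equally likely whether or not the bandit is rewarding, so a Bayesian observer would nonetheless leave the belief about the bandit unchanged, which is the point made above about ignoring such feedback regardless.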

      (17) "A striking finding in our study was that for a fully credible feedback agent, credit assignment was exaggerated (i.e., higher than predicted by a Bayesian strategy)".

      "Since we did not find any significant interactions between BETTER and the other regressors, we decided to omit it from the model formulation".

      Was this decision made after seeing the data? If so, please report the original analysis as well.

      We have included the BETTER regressor again, and we have re-run the analyses. We now report the results of this regression, and we have changed the methods section accordingly:

      “We used a different mixed-effects binomial regression model to test whether value learning from the 3-star agent was modulated by contextual credibility. We focused this analysis on instances where the previous trial with the same bandit pair featured the 3-star agent. We regressed the variable REPEAT, which indicated whether the current trial repeated the choice from the previous trial featuring the same bandit-pair (repeated choice=1, non-repeated choice=0). We included the following regressors: FEEDBACK coding the valence of feedback in the previous trial with the same bandit pair (positive=0.5, negative=-0.5), CONTEXT2-star indicating whether the trial immediately preceding the previous trial with the same bandit pair (context trial) featured the 2-star agent (feedback from 2-star agent=1, otherwise=0), and CONTEXT3-star indicating whether the trial immediately preceding the previous trial with the same bandit pair featured the 3-star agent. We also included a regressor (BETTER) coding whether the bandit chosen in the learning trial was the better (mostly rewarding) or the worse (mostly unrewarding) bandit within the pair. We included in this analysis only current trials where the context trial featured a different bandit pair. The model in Wilkinson’s notation was:

      REPEAT ~ FEEDBACK * (CONTEXT2-star + CONTEXT3-star) + BETTER + (1|participant)     (13)

      In Figure 4c, we independently calculated the repeat-probability difference for the better (mostly rewarding) and worse (mostly non-rewarding) bandits and averaged across them. This calculation was done at the participant level, and finally averaged across participants.”
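      A model-agnostic sketch of this Figure 4c computation, with hypothetical column names (participant, context, better, feedback, repeat) and an invented input file, might look as follows; it is intended only to make the averaging steps explicit, not to reproduce the authors’ code.

      import pandas as pd

      df = pd.read_csv("contrast_effect_trials.csv")   # hypothetical file, as above

      def feedback_effect(trials):
          # Repeat probability after positive minus after negative feedback.
          by_valence = trials.groupby("feedback")["repeat"].mean()
          return by_valence.get(0.5) - by_valence.get(-0.5)

      per_cell = (df.groupby(["participant", "context", "better"])
                    .apply(feedback_effect))                              # effect per bandit type
      per_participant = per_cell.groupby(level=["participant", "context"]).mean()  # average better/worse
      group_means = per_participant.groupby(level="context").mean()       # average across participants
      print(group_means)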

    1. Where higher education for women was advocated for material reasons, stress was laid on the need to equip middle-class women to earn a living as teachers and protect them from the risk of downward mobility

      For Jourdain, it allowed her to: 1) pursue a career in higher education; 2) enrol for a doctorate in Paris; 3) write a dissertation; and 4) be hired as Vice-head at St. Hugh's

    1. Ethiopia and Somalia. In the 1980s and 1990s, the media's portrayal of the death and starvation in Ethiopia and Somalia further demonstrates their power in the foreign policy arena. In the former case, journalist Peter Boyer summarized the power of a 1984 NBC report on the widening Ethiopian famine

      Media acts as government’s tool due to dependency on official information and shared elite values.

    2. The Vietnam War. The emergence of the "media as actor" role usually dates from the Vietnam War

      Media as Actors: Independent influencers shaping the foreign policy agenda through coverage. Criticism: Some argue media follow political consensus rather than create it.

    3. The development of the World Wide Web, the use of cellular phones, the growth of smartphones and applications ("apps"), the emergence of text messaging, the explosion of social media (whether Facebook, Instagram, Twitter, etc.)

      Growth in radio stations, cable systems, and digital platforms (Facebook, Twitter, TikTok).

      Decline of print newspapers; increased digital access.

    Annotators

  3. physerver.hamilton.edu
    1. droplet had received acharge of the proper sign and strength as itwas blown out through the atomizer,

      How is it receiving a charge? Isn't the oil initially neutral? What about the blowing out through the atomizer allows it to receive a charge? Also, can we control the charge that the droplet receives?

    2. kinetic energy of agitation

      what is the "kinetic energy of agitation?" I've never heard of that before - I assume it has something to do with mixing or moving particles (particularly in a gas) around, but I'm just curious what it actually is and why it's important

    1. Why? Why was the fixation on the individual? It seems bizarre to isolate a study on the effects of mass media on individual cases rather than the equivalent audience size (mass)

    Annotators

    1. In this Figure, we would say that BBB has a higher electric potential than AAA

      Does it have a higher potential because it received an electron from A, or did it receive the electron BECAUSE it has a higher potential?

    1. The platform is not abolitionist but firmly anti-slavery expansion.

      Balances states’ rights with strong national projects (railroad, tariffs, land distribution).

      Appeals to a wide coalition: free soil settlers, industrial workers, farmers, and immigrants.

      The West is central: as the future of freedom, industry, and settlement.

    1. Solar PV Share in Electricity Mix, by Country, 2024

      map not final, hover over legend not interactive. Too many digits after the decimal in most countries. Population data seems wrong (use format XXX.X million)

    1. Heat accounted for 74% of energy consumption in the buildings sector. Space cooling is the fastest-growing energy end use in the buildings sector, increasing 4% per year on average since 2000. Policies to accelerate decarbonisation in the buildings sector are advancing globally, however, the sector is not on track to meet net-zero emissions targets. Investment in energy efficiency is insufficient to meet net-zero emissions targets.

      can these be in bullet points?

    1. eLife Assessment

      This valuable study combined careful computational modeling, a large patient sample, and replication in an independent general population sample to provide a computational account of a difference in risk-taking between people who have attempted suicide and those who have not. It is proposed that this difference reflects a general change in the approach to risky (high-reward) options and a lower emotional response to certain rewards. Evidence for the specificity of the effect to suicide, however, is incomplete and would require additional analyses.

    2. Reviewer #1 (Public review):

      Summary:

      The authors use a gambling task with momentary mood ratings from Rutledge et al. and compare computational models of choice and mood to identify markers of decisional and affective impairments underlying risk-prone behavior in adolescents with suicidal thoughts and behaviors (STB). The results show that adolescents with STB show enhanced gambling behavior (choosing the gamble rather than the sure amount), and this is driven by a bias towards the largest possible win rather than insensitivity to possible losses. Moreover, this group shows a diminished effect of receiving a certain reward (in the non-gambling trials) on mood. The results were replicated in an undifferentiated online sample where participants were divided into groups with or without STB based on their self-report of suicidal ideation on one question in the Beck Depression Inventory self-report instrument. The authors suggest, therefore, that adolescents with decreased sensitivity to certain rewards may need to be monitored more closely for STB due to their increased propensity to take risky decisions aimed at (expected) gains (such as relief from an unbearable situation through suicide), regardless of the potential losses.

      Strengths:

      (1) The study uses a previously validated task design and replicates previously found results through well-explained model-free and model-based analyses.

      (2) Sampling choice is optimal, with adolescents at high risk; an ideal cohort to target early preventative diagnoses and treatments for suicide.

      (3) Replication of the results in an online cohort increases confidence in the findings.

      (4) The models considered for comparison are thorough and well-motivated. The chosen models allow for teasing apart which decision and mood sensitivity parameters relate to risky decision-making across groups based on their hypotheses.

      (5) Novel finding of mood (in)sensitivity to non-risky rewards and its relationship with risk behavior in STB.

      Weaknesses:

      (1) The sample size of 25 for the S- group was justified based on previous studies (lines 181-183); however, all three papers cited mention that their sample was low powered as a study limitation.

      (2) Modeling in the mediation analysis focused on predicting risk behavior in this task from the model-derived bias for gains and suicidal symptom scores. However, the prediction of clinical interest is of suicidal behaviors from task parameters/behavior - as a psychiatrist or psychologist, I would want to use this task to potentially determine who is at higher risk of attempting suicide and therefore needs to be more closely watched rather than the other way around (predicting behavior in the task from their symptom profile). Unfortunately, the analyses presented do not show that this prediction can be made using the current task. I was left wondering: is there a correlation between beta_gain and STB? It is also important to test for the same relationships between task parameters and behavior in the healthy control group, or to clarify that the recommendations for potential clinical relevance of these findings apply exclusively to people with a diagnosis of depression or anxiety disorder. Indeed, in line 672, the authors claim their results provide "computational markers for general suicidal tendency among adolescents", but this was not shown here, as there were no models predicting STB within patient groups or across patients and healthy controls.

      (3) The FDR correction for multiple comparisons mentioned briefly in lines 536-538 was not clear. Which analyses were included in the FDR correction? In particular, did the correlations between gambling rate and BSI-C/BSI-W survive such correction? Were there other correlations tested here (e.g., with the TAI score or ERQ-R and ERQ-S) that should be corrected for? Did the mediation model survive FDR correction? Was there a correction for other mediation models (e.g., with BSI-W as a predictor), or was this specific model hypothesized and pre-registered, and therefore no other models were considered? Did the differences in beta_gain across groups survive FDR when including comparisons of all other parameters across groups? Because the results were replicated in the online dataset, it is ok if they did not survive FDR in the patient dataset, but it is important to be clear about this in presenting the findings in the patient dataset.

      (4) There is a lack of explicit mention when replication analyses differ from the analyses in the patient sample. For instance, the mediation model is different in the two samples: in the patient sample, it is only tested in S+ and S- groups, but not in healthy controls, and the model relates a dimensional measure of suicidal symptoms to gambling in the task, whereas in the online sample, the model includes all participants (including those who are presumably equivalent to healthy controls) and the predictor is a binary measure of S+ versus S- rather than the response to item 9 in the BDI. Indeed, some results did not replicate at all and this needs to be emphasized more as the lack of replication can be interpreted not only as "the link between mood sensitivity to CR and gambling behavior may be specifically observable in suicidal patients" (lines 582-585) - it may also be that this link is not truly there, and without a replication it needs to be interpreted with caution.

      (5) In interpreting their results, the authors use terms such as "motivation" (line 594) or "risk attitude" (line 606) that are not clear. In particular, how was risk attitude operationalized in this task? Is a bias for risky rewards not indicative of risk attitude? I ask because the claim is that "we did not observe a difference in risk attitude per se between STB and controls". However, it seems that participants with STB chose the risky option more often, so why is there no difference in risk attitude between the groups?

    3. Reviewer #2 (Public review):

      Summary:

      This article addresses a very pertinent question: what are the computational mechanisms underlying risky behaviour in patients who have attempted suicide? In particular, it is impressive how the authors find a broad behavioural effect whose mechanisms they can then explain and refine through computational modeling. This work is important because, currently, beyond previous suicide attempts, there has been a lack of predictive measures. This study is the first step towards that: understanding the cognition on a group level. This is before being able to include it in future predictive studies (based on the cross-sectional data, this study by itself cannot assess the predictive validity of the measure).

      Strengths:

      (1) Large sample size.

      (2) Replication of their own findings.

      (3) Well-controlled task with measures of behaviour and mood + precise and well-validated computational modeling.

      Weaknesses:

      I can't really see any major weakness, but I have a few questions:

      (1) I can see from the parameter recovery that the parameters are very well identified. Is it surprising that this is the case, given how many parameters there are for 90 trials? Could the authors show cross-correlations? I.e., make a correlation matrix with all real parameters and all fitted parameters to show that not only the diagonal entries (i.e., the same data as the scatter plots in S3) are high, but that the off-diagonals are low. (A minimal sketch of such a matrix appears after this list of questions.)

      (2) Could the authors clarify the result in Figure 2B of a correlation between gambling rate and suicidal ideation score, is that a different result than they had before with the group main effect? I.e., is your analysis like this: gambling rate ~ suicide ideation + group assignment? (or a partial correlation)? I'm asking because BSI-C is also different between the groups. [same comment for later analyses, e.g. on approach parameter].

      (3) The authors correlate the impact of certain rewards on mood with the % gambling variable. Could there not be a more direct analysis by including mood directly in the choice model?

      (4) In the large online sample, you split all participants into S+ and S-. I would have imagined that instead, you would do analyses that control for other clinical traits. Or, for example, include in the S- group only participants who also have high depression scores but low suicide items.
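      Regarding the cross-correlation matrix requested in point (1), a minimal Python sketch of how it could be computed is given below; the file names and column layout (one row per simulated subject, one column per parameter, identical column order in both files) are assumptions for illustration only.

      import numpy as np
      import pandas as pd

      true_params = pd.read_csv("true_params.csv")      # generative ("real") parameters
      fitted_params = pd.read_csv("fitted_params.csv")  # parameters recovered by model fitting

      k = true_params.shape[1]
      cross_corr = pd.DataFrame(
          np.corrcoef(true_params.T, fitted_params.T)[:k, k:],   # true x fitted block
          index=[f"true_{c}" for c in true_params.columns],
          columns=[f"fitted_{c}" for c in fitted_params.columns],
      )
      print(cross_corr.round(2))   # good identifiability: high diagonal, low off-diagonal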

    4. Reviewer #3 (Public review):

      This manuscript investigates computational mechanisms underlying increased risk-taking behavior in adolescent patients with suicidal thoughts and behaviors. Using a well-established gambling task that incorporates momentary mood ratings and previously established computational modeling approaches, the authors identify particular aspects of choice behavior (which they term approach bias) and mood responsivity (to certain rewards) that differ as a function of suicidality. The authors replicate their findings on both clinical and large-scale non-clinical samples.

      The main problem, however, is that the results do not seem to support a specific conclusion with regard to suicidality. The S+ and S- groups differ substantially in the severity of symptoms, as can be seen by all symptom questionnaires and the baseline and mean mood, where S- is closer to HC than it is to S+. The main analyses control for illness duration and medication but not for symptom severity. The supplementary analysis in Figure S11 is insufficient as it mistakes the absence of evidence (i.e., p > 0.05) for evidence of absence. Therefore, the results do not adequately deconfound suicidality from general symptom severity.

      The second main issue is that the relationship between an increased approach bias and decreased mood response to CR is conceptually unclear. In this respect, it would be natural to test whether mood responses influence subsequent gambling choices. This could be done either within the model by having mood moderate the approach bias or outside the model using model-agnostic analyses.

      Additionally, there is a conceptual inconsistency between the choice and mood findings that partly results from the analytic strategy. The approach bias is implemented in choice as a categorical value-independent effect, whereas the mood responses always scale linearly with the magnitude of outcomes. One way to make the models more conceptually related would be to include a categorical value-independent mood response to choosing to gamble/not to gamble.

      The manuscript requires editing to improve clarity and precision. The use of terms such as "mood" and "approach motivation" is often inaccurate or not sufficiently specific. There are also many grammatical errors throughout the text.

      Claims of clinical relevance should be toned down, given that the findings are based on noisy parameter estimates whose clinical utility for the treatment of an individual patient is doubtful at best.

    5. Author response:

      We thank the reviewers for recognizing the strengths of our work, as well as for their thoughtful and constructive feedback. In this provisional response, we focus on the main concern raised—namely, the need for stronger evidence that the effect is specific to suicide. A full revision of the manuscript will follow, in which we will address this point in greater depth and respond carefully to all additional comments in a point-by-point manner.

      More specifically, reviewer 3 points out that “The main analyses control for illness duration and medication but not for symptom severity. The supplementary analysis in Figure S11 is insufficient as it mistakes the absence of evidence (i.e., p > 0.05) for evidence of absence.”. This is indeed an important point that we address below.

      (1) Correction for symptom severity.

      To address the request for evidence on specificity to suicidality beyond general symptom severity, we performed separate linear regressions explaining gambling behaviour, the value-insensitive approach parameter (β_gain), and mood sensitivity to certain rewards (β_CR), with group as a predictor (1 for the S+ group and 0 for the S- group) and scores for anxiety and depression as covariates. Results remained significant after controlling for anxiety and depression (ps < 0.027).

      Author response table 1.

      Given the high correlations among the anxiety and depression questionnaires (rs > 0.753, ps < 0.001), we performed a Principal Components Analysis (PCA) on the clinical questionnaires to extract orthogonal components, which explained 86.95%, 7.09%, 3.27%, and 2.68% of the variance, respectively. We then performed linear regressions using these components as covariates to control for anxiety and depression. Our main results remained significant (ps < 0.027).

      Author response table 2.
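      A schematic of these severity-control regressions, with hypothetical variable and file names, is sketched below; it only shows the structure of the analysis (a group effect with symptom covariates, first as raw scores and then as PCA components), not the authors’ actual code or questionnaire set.

      import pandas as pd
      import statsmodels.formula.api as smf
      from sklearn.decomposition import PCA

      df = pd.read_csv("patients.csv")   # hypothetical: one row per patient in the S+/S- groups

      # (a) raw questionnaire scores as covariates
      m1 = smf.ols("gambling_rate ~ group + depression + anxiety", data=df).fit()
      print("group effect:", m1.params["group"], "p =", m1.pvalues["group"])

      # (b) orthogonal PCA components of the (highly correlated) questionnaires as covariates
      questionnaires = df[["depression", "anxiety", "trait_anxiety", "state_anxiety"]]
      pcs = PCA(n_components=4).fit_transform(questionnaires)
      for i in range(4):
          df[f"pc{i + 1}"] = pcs[:, i]
      m2 = smf.ols("gambling_rate ~ group + pc1 + pc2 + pc3 + pc4", data=df).fit()
      print("group effect:", m2.params["group"], "p =", m2.pvalues["group"])
      # The same structure applies to the other outcome measures (e.g. beta_gain, beta_CR).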

      We believe that these analyses provide evidence that the main effects on gambling and on mood were specific to suicide.

      (2) Evidence of absence of effect of symptom severity

      Based on clinical interviews, we included patients with and without suicidality (S+ and S- groups). However, in line with the suicide-related literature (e.g., Tsypes et al., 2024), the S+ and S- groups differed substantially in the severity of symptoms (see Table 1). Although we median-split patients by the scores of general symptoms (e.g., depression and anxiety) and verified that there were no significant differences in these severities (Figure S11), "absence of evidence" does not by itself constitute "evidence of absence". We therefore additionally computed Bayesian statistics for gambling behavior, the value-insensitive approach parameter, and mood sensitivity to certain rewards. BF01 is a Bayes factor comparing the null model (M0) to the alternative model (M1), where M0 assumes no group difference; BF01 > 1 indicates that the evidence favors M0. As can be seen below, most results supported the null hypothesis, suggesting that general symptoms of anxiety and depression overall did not influence our main results.

      Author response table 3.
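      For completeness, one way to obtain such Bayes factors in Python is via pingouin's default JZS t-test, sketched below with placeholder data; the actual test family, prior width, and grouping used for Author response table 3 are the authors' choices and are not specified here.

      import numpy as np
      import pingouin as pg

      rng = np.random.default_rng(0)
      # placeholder data, e.g. gambling rates in the high- vs low-severity subgroups
      high_severity = rng.normal(0.5, 0.1, size=40)
      low_severity = rng.normal(0.5, 0.1, size=40)

      ttest = pg.ttest(high_severity, low_severity)
      bf10 = float(ttest["BF10"].iloc[0])
      bf01 = 1.0 / bf10          # BF01 > 1: evidence favours the no-difference model
      print(f"BF01 = {bf01:.2f}")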

      Overall, we believe that these analyses provide compelling evidence for the specificity of the effect to suicide, above and beyond depression and anxiety.

    1. This is an exploratory experiment in using nascent IndyWeb constellations of capabilities to facilitate

      • Autonomous,
      • unstoppable, unenclosable, attributed, with full
      • audit trail and provenance, permanence of
      • participants owned and moderated
      • sharing and collaboration
      • interplay
      • and conversations
      • both on page and on the margins
    1. eLife Assessment

      This study presents a valuable finding on the molecular mechanisms that govern GABAergic inhibitory synapse function. The authors propose that Endophilin A1 serves as a novel regulator of GABAergic synapses by acting as a component of the inhibitory postsynaptic density. The findings are convincing and likely to interest a broad audience of scientists focusing on inhibitory synaptic transmission, the excitation-inhibition balance, and its disruption in disorders such as epilepsy.

    2. Reviewer #1 (Public review):

      Summary:

      In the present study, Chen et al. investigate the role of Endophilin A1 in regulating GABAergic synapse formation and function. To this end, the authors use constitutive or conditional knockout of Endophilin A1 (EEN1) to assess the consequences on GABAergic synapse composition and function, as well as the outcome for PTZ-induced seizure susceptibility. The authors show that EEN1 KO mice show a higher susceptibility to PTZ-induced seizures, accompanied by a reduction in the GABAergic synaptic scaffolding protein gephyrin as well as specific GABAAR subunits and eIPSCs. The authors then investigate the underlying mechanisms, demonstrating that Endophilin A1 binds directly to gephyrin and GABAAR subunits, and identifying the subdomains of Endophilin A1 that contribute to this effect. Overall, the authors state that their study places Endophilin A1 as a new regulator of GABAergic synapse function.

      Strengths:

      Overall, the topic of this manuscript is very timely, since there has been substantial recent interest in describing the mechanisms governing inhibitory synaptic transmission at GABAergic synapses. The study will therefore be of interest to a wide audience of neuroscientists studying synaptic transmission and its role in disease. The manuscript is well written and contains a substantial quantity of data. In the revised version of the manuscript, the authors have increased the number of samples analyzed and have significantly improved the statistical analysis, thereby substantially strengthening the conclusions of their study.

    3. Reviewer #2 (Public review):

      Summary:

      The function of neural circuits relies heavily on the balance of excitatory and inhibitory inputs. Particularly, inhibitory inputs are understudied when compared to their excitatory counterparts due to the diversity of inhibitory neurons, their synaptic molecular heterogeneity, and their elusive signature. Thus, insights into these aspects of inhibitory inputs can inform us largely on the functions of neural circuits and the brain.

      Endophilin A1, an endocytic protein heavily expressed in neurons, has been implicated in numerous pre- and postsynaptic functions, however largely at excitatory synapses. Thus, whether this crucial protein plays any role at inhibitory synapses, and whether it regulates functions at the synaptic, circuit, or brain level, remains to be determined.

      The three remaining concerns are:

      (1) The use of one-way ANOVA is not well justified.

      (2) The use of superplots to show culture to culture variability would make it more transparent.

      (3) Change EEN1 in Figure 8B to EndoA1.

      Comments on revised version:

      The authors addressed the concerns adequately.

    4. Reviewer #3 (Public review):

      Chen et al. identify endophilin A1 as a novel component of the inhibitory postsynaptic scaffold. Their data show impaired evoked inhibitory synaptic transmission in CA1 neurons of mice lacking endophilin A1, and an increased susceptibility to seizures. Endophilin can interact with the postsynaptic scaffold protein gephyrin and promotes assembly of the inhibitory postsynaptic element. Endophilin A1 is known to play a role in presynaptic terminals and in dendritic spines, but a role for endophilin A1 at inhibitory postsynaptic densities has not yet been described, providing a valuable addition to the field.

      To investigate the role of endophilin A1 at inhibitory postsynapses, the authors used a broad array of experimental approaches, including tests of seizure susceptibility, electrophysiology, biochemistry, neuronal culture and image analysis. The authors have addressed the remaining concerns in their revision. Taken together, their results expand the synaptic role of endophilin-A1 to include the inhibitory post synaptic element.

    5. Author response:

      The following is the authors’ response to the previous reviews

      Reviewer #2 (Recommendations for the authors):

      Comments on revised version:

      The authors addressed the concerns adequately. The three remaining concerns are:

      (1) The use of one-way ANOVA is not well justified.

      The statement about statistical tests in the “Statistical analysis” section of the revised manuscript is as follows: “Data sets were tested for normality and direct comparisons between two groups were made using two-tailed Student’s t test (t test, for normally distributed data) as indicated. To evaluate the statistical significance of three or more groups of samples, one-way ANOVA with a Tukey test was used, or, for behavior assays, repeated measures ANOVA with a Tukey test was used. Statistical parameters are reported in the figures and the corresponding legends”.

      We used a one-way ANOVA for data with one categorical independent variable and one quantitative dependent variable, where the independent variable has at least three groups or categories. Following the suggestion of Reviewer #1 (Point 18), we used repeated measures ANOVA for the behavioral-test data in the revised manuscript.

      (2) The use of superplots to show culture to culture variability would make it more transparent.

      Thanks for the nice suggestion. While superplots could show culture-to-culture variability more transparently, it is difficult to add more colors or shades to the scatterplots in their current form, which are already color-coded for multiple groups of samples. The scatterplots we used effectively illustrate the variability across all collected data, and this choice does not affect the conclusions of our study. Therefore, we prefer not to change the data presentation in the revised manuscript.

      (3) Change EEN1 in Figure 8B to EndoA1.

      Thanks a lot for the sharp eye. Corrected.

      Reviewer #3 (Recommendations for the authors):

      Specific comments:

      The authors have made a substantial effort to improve their manuscript. A number of issues, related to numbers of observations mentioned by the reviewers, are clarified in the revised manuscript. The authors have also clarified some of the other questions from the reviewers. The long list of issues brought up by the reviewers and the many corrections needed still raise questions about data quality in this manuscript.

      In response to my comments (Point 2), the added experiment with PSD95.FingR and GPN.FingR in cultured neurons (Fig. S5A-D) is a good addition; the in vivo data using FingRs in Figure S3 look less convincing however. In response to my Point 5, the authors have added a cell-free binding assay (Figure 5I). This is a useful addition, but to convincingly make the point of interaction between Gephyrin and EndoA1, more rigorous biophysical quantitation of binding is needed. The legend in Figure 5I states that 4 independent experiments were performed, but the graph only shows 3 dots. This needs to be corrected.

      We sincerely appreciate your comments and apologize for any concerns raised. As suggested (Point 2), we made many efforts to visualize endogenous postsynaptic proteins using recombinant probes. However, due to much lower expression of GPN.FingR compared with PSD95.FingR in P21 brain slices following viral infection (Figure S3), we were unable to obtain better imaging results. To strengthen our data and conclusions, we additionally performed experiments with PSD95.FingR and GPN.FingR in cultured neurons (Fig. S5A-D) in the revised manuscript.

      Regarding the biophysical quantification of gephyrin–endophilin A1 binding, we do not have the equipment for this type of experiment (surface plasmon resonance or isothermal titration calorimetry). Instead, we performed a pull-down assay as an alternative to confirm their interaction (Figure 5I). We also apologize for the error in the number of independent experiments stated in the figure legend and have corrected it in the revised manuscript.

    1. Verifiable Credentials system allows people to prove their belonging to a particular group, and it allows groups to freely associate with one another.

      First one to star the project

      I remember Grace making the point that it is our duty to try out people's work in adjacent furrows

    1. Editors Assessment:

      This paper presents the genome sequencing of the house sparrow (Passer domesticus), carrying out genome assembly and annotation using in silico approaches with tools that could be a valuable resource for understanding passerine evolution, biology, ethology, geography, and demography. The final genome assembly was generated using short-read sequencing and a computational workflow that included Shovill, SPAdes, MaSuRCA, and BUSCO benchmarking, producing a 922 MB reference genome with 24,152 genes. The first draft was significantly smaller than this, but peer review provided suggestions on how to improve the assembly quality, and after a few attempts an assembly with a reasonable size and BUSCO score was achieved. These openly available data can potentially serve as a valuable resource for studying the adaptation, divergence, and speciation of birds.

      This evaluation refers to version 2 of the preprint

    2. AbstractThe common house sparrow, Passer domesticus is a small bird belonging to the family Passeridae. Here, we provide high-quality whole genome sequence data along with assembly for the house sparrow. The final genome assembly was assembled using a Shovill/SPAdes/MASURCA/BUSCO workflow, consisting of contigs spanning 268193 bases and coalescing around a 922 MB sized reference genome. We employed rigorous statistical thresholds to check the coverage, as the Passer genome showed considerable similarity to Gallus gallus (chicken) and Taeniopygia guttata (Zebra finch) genomes, also providing a functional annotation. This new annotated genome assembly will be a valuable resource as a reference for comparative and population genomic analyses of passerine, avian, and vertebrate evolution.Significance Avian evolution has been of great interest in the context of extinction. Annotating the genomes such as passerines would be of significant interest as we could understand the behavior/foraging traits and further explore their evolutionary landscape. In this work, we provide a full genome sequence of Indian house sparrow, viz. Passer domesticus which will serve as a useful resource in understanding the adaptability, evolution, geography, allee effects and circadian rhythms.

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.161), and has published the reviews under the same license.

      Reviewer 1. Gang Wang

      Is the language of sufficient quality? Yes, although there are many small issues in the article, such as citation format and spelling, e.g. [Supplementary Table 3a, 3b, 3c) → (Supplementary Table 3a, 3b, 3c). The citation format of the article also needs to be adjusted according to the journal requirements.

      Is there sufficient detail in the methods and data-processing steps to allow reproduction? No. A previous reviewer mentioned that RagTag could be used to improve the quality of genome assembly. I suggest you seriously consider this.

      Is there sufficient information for others to reuse this dataset or integrate it with other data? No

      Overall Comments: The article is logically clear and the analysis is complete. The description of both sample collection and sequencing is relatively clear. At the same time, the analysis process shown in Figure 1 is also very reasonable. However, as described by the previous reviewer, I suggest that you remove the "high-quality" label. There are many small issues in the article, such as citation format and spelling, e.g. [Supplementary Table 3a, 3b, 3c) → (Supplementary Table 3a, 3b, 3c); the citation format also needs to be adjusted according to the journal requirements. In Figure 2, the letters a and b are very different; please unify them. Figure 4 is completely unclear; please increase the font size. A previous reviewer mentioned that RagTag could be used to improve the quality of the genome assembly; I suggest you seriously consider this. Re-review: The authors used FCS-GX to exclude contaminating sequences in the genome, so I agree that this paper should be published.

      Reviewer 2. Agustin Ariel Baricalla

      Are all data available and do they match the descriptions in the paper? No. Matching data: NCBI project with access to the NCBI-SRA deposited raw data. Non-matching data: Oxford Nanopore data – in their reply to the previously submitted manuscript, the authors argue that these data were not used, but Fig. 1 refers to Nanopore MinION data. The manuscript body and the additional data section do not include the QUAST and BUSCO reports or their corresponding plots.

      Are the data and metadata consistent with relevant minimum information or reporting standards? See GigaDB checklists for examples http://gigadb.org/site/guide No. GigaByte suggests a checklist including the genome, CDS, and proteins in FASTA format, as well as the annotations in GFF format; however, these items are not available for evaluation.

      Is there sufficient detail in the methods and data-processing steps to allow reproduction? Yes. The FastP step for raw data processing is mentioned in the results section but is not detailed in the methods section.

      Is there sufficient data validation and statistical analyses of data quality? No. The authors have not included the BUSCO results. The OrthoDB database for 'passeriformes_odb12' contains over 10,000 curated genes, representing approximately 50-60% of the total genes in a typical passeriform genome. Therefore, the BUSCO report for the new assembly should be provided. The authors mention that "The gene completeness for Passer was assessed through Benchmarking Universal Single-Copy Orthologs (BUSCO version 5.5.0) [26] by using the orthologous genes in the Gallus gallus [chicken] genome", but BUSCO runs against OrthoDB datasets, so I do not understand what this phrase refers to.

      Is there sufficient information for others to reuse this dataset or integrate it with other data? Yes. All the procedures are consistent and the programs or pipelines are well-known and well documented in the bioinformatic and genomic fields.

      Additional Comments: The inclusion of the mitochondrial genome represents a significant improvement in this manuscript. I recommend presenting all nuclear results together first, followed by a separate and clear description of the mitochondrial analysis and findings, to enhance clarity. The data are interesting for analyzing the genetic dynamics behind Passer domesticus adaptation and evolution and could show differences from the previously available European reference genomes, but this is not presented in this work. As of this revision, NCBI lists two European Passer domesticus reference genomes, both classified with 'chromosome-like' status (NCBI: GCF_036417665.1 and GCA_001700915.1). These genomes can be utilized in two distinct ways: (1) performing a 'genome-guided assembly' with MASURCA, using one of these genomes alongside the Illumina data, or (2) conducting genome scaffolding by employing one of these genomes as a reference and the assembled genome from raw reads as a query, using tools like RagTag or the chromosome scaffolder available in MASURCA. Both approaches could potentially lead to improvements in scaffold number and contiguity metrics, such as N50, N90, and the largest scaffold.

      Re-review: The authors have made only minor improvements to the previously submitted version and have not met the minimum standards established by the publisher for publication in the journal. Easily achievable changes that were requested to complement the earlier analyses have been ignored, requests have not been answered, figures that are inconsistent with the accompanying text have not been fixed, and no relevant improvement between the previous and current versions has been shown.

    1. it was expected thatthey would be useful in their land, owing to the good companythey had enjoyed and the gifts they had received

      It seems like they expected the natives to simply acclimate to the new customs and traditions just because they had been given gifts.

    Annotators

    1. Cost Estimation of Developing a Custom NFT Marketplace

      Unlock your unique position in the NFT realm with Custom NFT Marketplace Development. Discover how personalized experiences, branding, and reduced risks can lead to increased ROI. Dive into our blog: "Custom NFT Marketplace Development & its Cost" to understand the benefits and costs of building a custom solution tailored to your business mission.

  4. learn-ap-southeast-2-prod-fleet01-xythos.content.blackboardcdn.com
    1. Buddhist ritual involving foreign substances

      Did Buddhist ritual uses increase saffron's importance in China, making it more than just an imported good? Also, what was the extent of saffron use among commoners? Since it was an important imported item, it must've been expensive, as it is today. Does that also tell us something about the relationship between Chinese people and their religion?

    Annotators