We adhere to the highest standards of accuracy and truth in advancing the interests of those we represent and in communicating with the public
This passage relates to truth because they are saying they value accuracy and truth.
Publishing
Almost like a gatekeeper in a way
bias in the peer‐review process
An interesting thing to research, I'm curious about the method sections
cost-shifting
The big downsides of this are higher insurance premiums for people with private insurance, a contribution to the overall rise in healthcare costs, and inequalities in who pays for care.
Arguably, video games even hold a place in the art world, with the increasing complexity of animation and story lines. Jona Tres Kap, “The Video Game Revolution: But is it Art?” PBS,
It shows games aren't just toys anymore. They are now seen as a real form of art, just like movies or books, and deserve to be studied and respected.
Video games are no longer a convenient scapegoat for America’s obesity problems; Wii Fit offers everything from yoga to boxing, and Dance Dance Revolution estimates calories burned while players dance.
This part shows how the image of gaming changed. By adding exercise and movement, games stopped being seen as unhealthy and started being seen as a fun way to stay active and fit.
Such a scene is unlikely to have taken place in the early days of video games when Atari reigned supreme and the action of playing solely centered on hand-eye coordination. The learning curve was relatively nonexistent;
Games moved from simple reflex tests to deep stories. While old games were just about fast reactions, new games focus on physical movement and storytelling, making them easier for everyone to enjoy.
Steven Tweedie. This disturbing image of a Chinese worker with close to 100 iPhones reveals how App Store rankings can be manipulated. February 2015. URL: https://www.businessinsider.com/photo-shows-how-fake-app-store-rankings-are-made-2015-2
In this article, Steven Tweedie describes a photo showing a worker surrounded by nearly 100 iPhones, all being used to download and interact with apps. The purpose is to artificially boost app rankings in the App Store. This demonstrates how companies can manipulate popularity metrics by using large numbers of devices to create fake downloads and activity. This makes apps appear more popular than they actually are.
Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).
The Huck Magazine article goes over click farms and adds an observation that I did not expect: it isn't just bots running automated scripts on servers; rather, it is actual humans manually liking and following posts at a large scale. This blurs the distinction between "bot" and "human", which I believe this chapter doesn't fully address, since what matters is the artificial manipulation of metrics, regardless of who does it.
Buy TikTok Followers. 2023. URL: https://www.socialwick.com (visited on 2023-12-02).
I find this website so interesting. While I was exploring, I compared which followers the platform finds most valuable. I found that 1,000 followers on TikTok cost 127, while the same amount on Instagram cost 115; on X, 208; on YouTube, 288; on Facebook, 104; and on Spotify, 66. It is interesting that the most expensive followers to purchase are YouTube subscribers. I wonder what the reason for this is: supply and demand, or possibly how strict certain sites are about bots? I also wonder if certain platforms are quietly happy about bots, since they add a substantial number of "users" and "interactions" to their platform.
Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).
One detail that I found interesting was how easy it is for social media algorithms to be manipulated by bots. A relatively small number of bots can have a huge impact on a post's virality. This makes me question what information we see as legitimate online due to its virality is actually misinformation promoted by a bot farm.
TweetDelete - Easily delete your old tweets. URL: https://tweetdelete.net/ (visited on 2023-12-02).
I found this source interesting because it reminded me of the example we discussed in class of the woman whose racist tweet got her fired from her job and created uproar on Twitter. This website helps with deleting posts from Twitter, so I wonder how things would have gone for the woman if she were able to erase her post before it gained traction all over the world.
Buy TikTok Followers. 2023. URL: https://www.socialwick.com (visited on 2023-12-02).
This is a website where people can buy followers, likes, and views at many different prices. The goal is to display your influence on social networks and increase interaction. I think it's useful for content creators, because they want to cooperate with business partners, those partners look for people with high engagement, and it's all based on likes, views, and followers. With high view counts, posts get pushed and appear in trends faster, and they create trust, because we often stop at videos or posts with high interaction instead of low interaction.
TweetDelete - Easily delete your old tweets. URL: https://tweetdelete.net/ (visited on 2023-12-02).
A program that helps with wiping out posts from social media like Twitter. I don't quite have a position on software like this, but the ability, with a few clicks, to wipe out past posts, followers, and anything linked to a person's digital past, although efficient, leaves room for someone to simply delete a troubling past (given the circumstances) without reflection. I feel like going back and deleting something digital often comes from a place of regret or shame about that specific post. If individuals with a tumultuous past do not have to face their past acts, what's to say they care about the principles at stake, rather than just the image of themselves?
Programming paradigm. July 2023. Page Version ID: 1167849453. URL: https://en.wikipedia.org/w/index.php?title=Programming_paradigm&oldid=1167849453 (visited on
I think programming paradigms are less about strict categories and more about different ways of thinking through problems. Wikipedia frames them as high-level approaches to structuring programs, like imperative or declarative styles, but in the context of bots on social media, this feels especially relevant. Bots aren't just code; they reflect choices about control, automation, and interaction. For example, a reactive or rule-based paradigm directly shapes how bots respond to users. Ethically, that means the paradigm itself can embed biases or power dynamics, which makes how we code inseparable from how bots behave online.
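To make the point concrete, here is a minimal sketch of what a rule-based, reactive bot might look like. The rules and replies are hypothetical placeholders, not from any real platform or library; the idea is just that the paradigm (match a rule, emit a response) bakes in choices, such as which rule wins when several could apply.

```python
# A hypothetical rule-based (reactive) bot: it scans an incoming message
# for keywords and replies with the first matching rule. Rule ORDER is a
# design choice the paradigm forces on us -- earlier rules take priority.

RULES = [
    ("hello", "Hi there!"),
    ("help", "Here is a link to the FAQ."),
]

def respond(message: str) -> str:
    """Return the reply for the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    # Fallback when no rule matches -- another embedded design decision.
    return "Sorry, I don't understand."
```

Even in a toy like this, the rule list and its ordering encode priorities and blind spots, which is one way a paradigm can embed bias into a bot's behavior.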
One source from the bibliography that stood out to me is Sean Cole’s article about click farms. The article explains how companies hire large groups of real people to manually like, follow, and interact with content in order to artificially boost popularity. What surprised me is that this is not even fully automated—many “fake” engagements actually come from humans working in organized systems, which makes it harder to detect than bots. This connects to the chapter’s discussion of bots and influence, because it shows that manipulation online is not only done by algorithms but also by coordinated human labor. In my opinion, this makes the problem even more serious, since it blurs the line between real and fake activity and makes platforms harder to regulate.
Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).
I read C2, which explains how click farms work and how they create fake popularity online. What stood out to me most was that this system relies on real people, not just bots or software, to keep liking and following content across many devices. That connected to the chapter’s idea of “human computers” and made me see false engagement differently. It is not just about technology. It's also about the people doing this repetitive work behind the scenes. I found it a little disturbing to realize how easily likes, follows, and views can be manipulated, since many users, including me, often treat those numbers as signs of trust or popularity.
On the other hand, some bots are made with the intention of harming, countering, or deceiving others. For example, people use bots to spam advertisements at people. You can use bots as a way of buying fake followers [c8], or making fake crowds that appear to support a cause (called Astroturfing [c9]).
I think this part of the chapter shows a really concerning side of bots. Using bots for fake followers or astroturfing makes it hard to know what’s real online. It made me realize we need to be more critical of what we see, since everything can be artificially created online.
[Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place.
This stat, showing that over half of the negative tweets about The Last Jedi were from bots or politically motivated accounts, makes me rethink how I interpret online backlash. Whenever I see backlash like this online, I assume it is group psychology, but this statistic showed me that a lot of it really just comes down to bots. This makes me wonder how often public opinion is shaped without anyone noticing.
As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day.
This example about Microsoft Tay is kind of scary because it shows how easily AI can be influenced by people online. Tay was supposed to learn from users, but instead it quickly picked up harmful and racist language. This makes me think that AI is not neutral—it reflects the behavior of the people interacting with it. It also raises a question about responsibility: should developers be more careful about how they design these systems to prevent this kind of outcome?
As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots).
I found this example really interesting because it shows how bots can influence people’s opinions without them realizing it. The fact that a large number of tweets came from accounts in another country makes me think about how easy it is to manipulate public discussions online. It also raises a question for me: how can regular users tell the difference between real opinions and coordinated bot activity? I feel like this is especially important today since social media plays such a big role in shaping what people believe.
Some bots are intended to be helpful, using automation to make tasks easier for others or to provide information, such as:
Auto caption: https://twitter.com/headlinerclip [c3]
Vaccine progress: https://twitter.com/vax_progress [c4]
Blocking groups of people: https://twitter.com/blockpartyapp_ [c5]
Social media managing programs that help people schedule and coordinate posts
Delete old tweets: https://tweetdelete.net/ [c6]
See a new photo of a red panda every hour: https://twitter.com/RedPandaEveryHr [c7]
Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites.
Based on my experience, this is an example of the usefulness of bots. I have been using Instagram for a long time, and sometimes fake followers tag my personal account in promotional posts; the bots clean up and block them. I also often use auto captions on TikTok to translate many languages and understand the message being conveyed.
Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable
I feel like a key component missing in this analysis of bots is their ability to keep individuals engaged with platforms or to simply exhaust political opposition. I have heard from multiple content creators, via posts and podcasts, that many times you are simply arguing with bots on platforms like Reddit and Twitter. This new channel, which soaks up the energy available for discourse, erodes the framework of democracy: distinct political camps are not actively engaging with one another and deliberating; rather, people are exhausted with debating by the time they encounter a real account.
The arts also draw on a range of learning modalities (visual, aural, kinesthetic) and intelligences (e.g., bodily/kinesthetic, spatial, visual, musical). For example, drama communicates visually, aurally, and kinesthetically and draws on interpersonal, intrapersonal, and linguistic intelligences. Dance communicates visually, kinesthetically, and aurally (if music is used) and draws on bodily/kinesthetic, spatial, and musical intelligences.
These learning modalities are so important to know about, so we can reach all students. In my Theatre Methods class, we talked about how, as teachers, we can't just teach in the learning style that makes sense to us. We must erase that bias to be able to teach everyone!
Classrooms are full of individuals that learn in different ways. For example, some students learn aurally, visually, or kinesthetically. Some learn quickly, others struggle, and still others fall somewhere between. Acknowledging this diversity, many educators are recognizing that it is no longer appropriate to approach teaching as a singular, one-size-fits-all endeavor.
I am glad that this has been changing recently. I have been able to learn how to help students on the spectrum, since this is now common for teachers to learn. I just remember feeling bad in school because I didn't fit the exact mold. I am so glad we now consider all students so they don't feel this way!
At last, you are ready to begin writing the rough draft of your research paper. The textbook Successful Writing points out that although transferring your ideas and research into words is exciting, it can also be challenging. In this section, you will learn strategies for handling the more challenging aspects of writing a research paper, such as integrating material from your sources, citing information correctly, and avoiding any misuse of your sources.
I think that starting a rough draft that lays out the main ideas of your paper can be challenging, so if you give yourself plenty of time, you are more likely to be successful with the outcome of your paper.
Research papers generally follow the same basic structure: an introduction that presents the writer’s thesis, a body section that develops the thesis with supporting points and evidence, and a conclusion that revisits the thesis and provides additional insights or suggestions for further research.
This is the basic layout when writing an essay.
Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally,
This thought instantly came to my mind. Although I see utilitarianism as a helpful framework in some cases, I see it as a harmful one in this case. Why? I think it could be extremely subjective, with major consequences. With things like generative AI, a utilitarian perspective may see the benefit (utility) of AI outweighing the negatives, such as the loss of critical thinking skills in students. Just because generative AI might help with efficiency doesn't mean its consequences should be ignored.
There’s an uncomfortable parallel between using AI coding tools and playing slot machines28. You send a prompt, wait, and either get something great or something useless. I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even in tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.” The tiredness feedback loop made it worse29. When I had energy, I could write precise, well-scoped prompts and be genuinely productive. But when I was tired, my prompts became vague, the output got worse, and I’d try again, getting more tired in the process. In these cases, AI was probably slower than just implementing something myself, but it was too hard to break out of the loop30.
Interesting, because this kind of self-regulation matters not just for avoiding addiction but also for general problem solving. Engineers, go to therapy.
It was also invaluable for reacquainting myself with parts of the project I hadn’t looked at for a few days25. I could control how deep to go: “tell me about this component” for a surface-level refresher, “give me a detailed linear walkthrough” for a deeper dive, “audit unsafe usages in this repo” to go hunting for problems. When you’re context switching a lot, you lose context fast. AI let me reacquire it on demand.
might be good to figure out ways to tailor this kind of thing to personal taste
In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute21. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly22.
This is tiring, and I am only good at it after having read a lot of code.
AI basically let me put aside all my doubts on technical calls, my uncertainty of building the right thing and my reluctance to get started by giving me very concrete problems to work on. Instead of “I need to understand how SQLite’s parsing works”, it was “I need to get AI to suggest an approach for me so I can tear it up and build something better"18. I work so much better with concrete prototypes to play with and code to look at than endlessly thinking about designs in my head, and AI lets me get to that point at a pace I could not have dreamed about before. Once I took the first step, every step after that was so much easier.
blank page
Being a maintainer is much more than just “throwing the code out there” and seeing what happens. It’s triaging bugs, investigating crashes, writing documentation, building a community, and, most importantly, having a direction for the project.
Huh, this is the first time open source has sounded appealing.
measurement
I don't understand the concept
Set a business goal. This technique can also be used to align our team and stakeholders on the attributes and adjectives our designs should convey.
This sentence means a business goal should help the team and other people agree on the style and feeling the design should have. It is about making sure everyone wants the same look and message before the design work starts. That way, the final result matches the team’s vision.
To avoid perfect competition, companies must strive to build an economic moat that gives them a sustainable competitive advantage over time.
This sentence explains that companies need a strong economic moat to stay ahead of competitors. A moat is a lasting advantage, like a strong brand, lower costs, or loyal customers, that helps protect a business over time. Without it, companies can get copied and lose their edge.
But on the one side sat a man like death, And on the other a woman sat like sin
Personification is being portrayed here. He calls the woman "sin", and I think he could be implying that she is tempting and dangerous. On the other hand, he refers to the man as "death", which could imply that it is unstoppable. If I put them together, one leads to the other, so the hermaphroditus is somehow sitting between the two.
Love being blind
The phrase "love being blind" is what I find most interesting, because it helped me understand the poem a little better. Thinking about this familiar expression made me wonder what the author could be saying here. I think he is asking a rather important question: if love is truly blind, does gender actually matter, or is love simply love?
Beneath the woman’s and the water’s kiss
"Water Kiss" feels symbolic and sounds beautiful. Maybe the idea behind a water kiss is that it carries all the qualities of water for instance water is soft and gentle.
barren hours?
I've never heard the phrase "barren hours" so I'm interested in what he means here. I know barren means unable to produce, but why does he describe the hours as barren?
Nay, sweet, it is not fear but love, I know
Here he goes again. Could he be using love and fear to mimic the fact that Hermaphroditus is both genders and cannot be easily defined?
Yea, love, I see; it is not love but fear.
This interpretation is tricky. At first, he seems to recognize love, but then he contradicts himself by redefining it as fear. Maybe it's both? Love often comes with fear: the fear of losing someone, the fear of many things, really. So is he suggesting that love can exist without fear, or that it isn't truly love because it's driven by fear?
two fruitless flowers
"Two fruitless flowers" suggests that the flowers are both sexes, but the word "fruitless" in this poem seems to imply that they are unproductive.
Child Trends (2014), for instance, conducted a systematic literature review of different social-emotional skills and highlighted the need for further research on the importance of the following five skills: self-control, persistence, mastery orientation, academic self-efficacy, and social competence.
I feel like these are all forms of resistance capital. As much as data needs to be collected on these skills, there also needs to be data collection on historical systemic inequities.
Furthermore, the availability of data alone does not ensure successful data-driven decision-making (Provost and Fawcett 2013). Consequently, there is an urgent need for further research on the use of an appropriate data-analytic thinking framework for education
I feel like there is so much work to be done in this space. For starters, data literacy needs to be foundational in education curriculum. Also, data governance is a challenge. There is so much gatekeeping involved in data offices. Even if more educators were data literate, they may face limitations based on short sighted data policy at their campuses.
At the same time, emerging AI technologies not only pose threats but also create opportunities for producing a wide variety of data types from human interactions with these platforms.
Social sciences are going to play a huge role in shaping the development and evolution of AI.
Reintroduce the argument introduced in your thesis statement. Reiterate the key points of your research. Offer some forecasts for the future (example: “Hopefully now with a clearer understanding about free soloing and the rock-climbing community, others might understand the draw to such a seemingly risky sport…”).
These are examples and the general idea of what should go in the conclusion.
Finally, work to avoid adding any new information and questions in this final section of your writing.
Don't add new information in the conclusion, because it is meant to close your argument and wrap up the thoughts and information you just discussed.
books and scholarly articles. Academic books generally fall into three categories: (1) textbooks written with students in mind, (2) monographs which give an extended report on a large research project, and (3) edited-volumes in which each chapter is authored by different people. Scholarly articles appear in academic journals, which are published multiple times a year in order to share the latest research findings with scholars in the field
These are good examples of the kinds of sources one can use when doing research.
Instead, the main objective is to highlight specific information about your topic.
Informative essays aren't meant to convince others but rather to highlight the main points of your topic.
For this essay, you will focus on one or two driving questions about your topic, which will drive your research and help you reach a conclusion.
Focus on one or two driving questions to narrow down the research.
Typically, these essays aim to answer the five Ws and H questions: who, what, where, when, why, and how.
This is helpful when writing a report because it gives you a foundation for what you will be talking about.
The Informative Research Report is a report that relays the results of a central research question in an organized manner through more formal sources.
Research gathered through formal sources that can be used for a report.
Strive for a spirit of openness in all aspects of the marketing profession.
This passage fits into the liberty category because liberty values open communication and it says they strive for openness.
Consider environmental and societal stewardship in our decision-making.
This passage fits into care because they are taking the environment into consideration which is minimizing harm.
This means striving for transparency and fairness in all aspects of the marketing ecosystem.
This passage fits the truth category because it values transparency and fairness in all aspects.
All the problems of A.I. writing--inaccuracy, misinformation, plagiarism, misrepresentation, and, above all, hack work--were widespread in the early days of blogs2 and throughout the digital media boom years, accelerated by the Facebook News Feed and other platform distribution schemes. I recognize here that I’m recapitulating a boosterish argument that suggests that A.I. isn’t really a “big deal.” To be clear, I think it is! But it’s a big deal within a bigger deal, and that matters.
Structural concern!
A strong language education policy in the United States that would support bilingualism as a resource must start by acknowledging the language practices of U.S. bilingual communities, and not simply rely on the constructed understandings of national languages that have informed much language education policy in the past
This shows that SAE was never a naturally occurring standard: it was made and then imposed. Writing classrooms that enforce SAE without questioning where it came from are repeating a historical pattern of treating one group's language as the default and everyone else's as a problem.
language policy in formal education has traditionally served the interest of nation-states, including the United States. Language education programs have been made to fit established patterns and pedagogical traditions, sometimes to curb bilingualism, at other times to promote it
SAE is not a neutral academic standard. SAE in writing classrooms has never really been about helping students learn
a translanguaging policy would encourage students' use of all their language resources in learning new ones, rather than banishing their home language practices from the classroom.
SAE-only instruction not only "corrects" their language, it erases a part of who they are and how they think.
Do we need to use our pharmacology book, or anything other than Today's Medical Assistant?
Can you clarify the exact meeting days for this class?
Can you please clarify the due date for the Getting Started Review?
Regarding important dates, for those of us planning to walk at graduation in June, should we begin the application process to graduate?
Will the instructor's check-off be similar to HLTH 231? Will it be one-on-one with the instructor? With 10 people in class, is there a risk of time constraints?
Will these be the only two chapters in the assigned text that we will use for the entirety of the quarter?
Germany experienced a 27.4% BEV sales drop after ending its subsidy in December 2023, the steepest decline in Europe.
Germany experienced a 27.4% drop in BEV sales after ending its subsidy in December 2023, the steepest decline in Europe.
Germany's abrupt subsidy termination in December 2023 caused the deepest BEV sales drop in Europe because the purchase bonus was the only meaningful instrument for private buyers
Germany's abrupt subsidy termination in December 2023 led to the deepest BEV sales drop in Europe, as the purchase bonus was the only meaningful incentive for private buyers.
fall out of manufacturer warranty
fall out of the manufacturer's warranty
and the vehicles flow onto the used market
and the vehicles flow into the used market
(up to €6,000, budget: €3 billion for an estimated 800,000 vehicles through 2029). It may boost short-term demand, but analysts note that without accompanying structural measures, it carries the risk of another cliff-edge effect when the budget is exhausted.
(up to €6,000; budget: €3 billion, covering an estimated 800,000 vehicles through 2029). It may boost short-term demand, but analysts note that, without accompanying structural measures, it risks another cliff-edge effect when the budget is exhausted.
In Germany, the purchase subsidy was virtually the only instrument for private buyers
In Germany, the purchase subsidy was the only instrument available to private buyers.
Germany is not the only country to have abolished a subsidy, but only Germany experienced a -27% collapse in BEV sales the year after.
Germany is not the only country to have abolished a subsidy, but it was the only one to experience a -27% collapse in BEV sales the year after.
The real explanatory variable, as the previous chapter showed, is policy architecture: the combination of tax incentives, registration structures, and running-cost advantages that shape the total cost of ownership.
This paragraph should be integrated into the box titled "GDP doesn't explain it either." The idea that the real explanation for which countries have better results lies in the policy architecture should be the last idea in the box discussing GDP. Just make sure it's not mentioned twice, since it's already written there as well.
That said, charging infrastructure remains a critical bottleneck for the next phase of growth. In our infrastructure study Europe's Charging Infrastructure Gap 2030 (October 2025), we showed that the EU is on track to reach only 1.7 million of the targeted 3.5 million charging points by 2030, a shortfall of 74%. Particularly critical are "charging deserts": regions where the nearest charging station is more than 40 kilometers away, including parts of central Germany, rural France, and the Balkans.
This paragraph should be placed just after the following sentence, as a new paragraph: "The Netherlands has over 1,100 charging points per 100,000 inhabitants, the densest network in Europe, yet its BEV market share is lower than Denmark's or Norway's, which have far fewer chargers. Infrastructure is a necessary condition, but not a sufficient driver of adoption."
In the Netherlands, the market remained stable.
"In the Netherlands, the market remained stable." --> Change to "In the Netherlands, the market share of BEVs also kept rising"
In Sweden and Finland, the dips were moderate.
"In Sweden and Finland, the dips were moderate." --> please change this to the following: "In Sweden and Finland, the absolute sales of new BEVs did go down a little, but the market share of BEVs mostly continued growing: Sweden's market share went up (2022: 14.4% → 2023: 19.5% → 2024: 21.2%). Finland's market share also went up (2022: 17.8% → 2023: 33.9%), but then dropped slightly in 2024 (29.5%)."
country, i.e. the share of fully electric cars among all new registrations in 2025. Use the toggle in the top right to switch between map and table view, and the tabs to see registrations per 100,000 inhabitants and absolute figures
country, i.e., the share of fully electric cars among all new registrations in 2025. Use the toggle in the top-right corner to switch between map and table view, and the tabs to view registrations per 100,000 inhabitants and absolute figures.
Finland spent €25 million on EV purchase subsidies and reached 37% market share. Germany spent €10 billion and reached 19%. The difference isn't the size of the subsidy. It's the policy architecture.
Before this paragraph we need an introductory sentence. Please insert: While European nations share the goal of decarbonizing transport, their strategies for incentivizing battery electric vehicles (BEVs) vary wildly and with vastly different levels of efficiency.
without a market collapse while others saw the deepest BEV sales drop in their history. It traces how the used car market is emerging as a powerful, subsidy-free engine of private EV adoption. And it asks what Chinese manufacturers' rapid expansion means for the European market.
without a market collapse, while others saw the deepest BEV sales drop in their history. It traces how the used-car market is emerging as a powerful, subsidy-free engine of private EV adoption. It asks what Chinese manufacturers' rapid expansion means for the European market.
Including EFTA and the UK, the European total reached approximately 2.5 million BEV registrations. EU BEV sales grew approximately 30% year-on-year (ACEA). The recovery was driven largely by regulatory pressure: EU fleet CO2 targets forced manufacturers to push EV sales to avoid penalties, while the UK's ZEV mandate required 28% of new sales to be zero-emission. The UK fell just short at 23.4%, with manufacturers subsidizing BEV sales by over GBP 5 billion to close the gap (SMMT).
whole paragraph reads a bit fragmented
The EU-wide BEV market share stood at 17.4% in 2025, up sharply from 13.6% in 2024, a year depressed by the loss of German subsidies.
Weird sentence; too focused on Germany.
With 545,000 new BEV registrations in 2025, Germany is Europe's largest electric car market in absolute terms. But at a market share of 19.1%, it sits squarely in the middle of the pack.
For readability, we need an intro sentence here, something along the lines of: "BEV registration rates differ widely across Europe."
I also would not mind having a few more interesting outliers mentioned here, e.g., Croatia (1.9%), maybe Malta (which I think is somewhat surprising), and perhaps a more general insight such as "Northern countries lead the list of highest BEV adoption in 2025" (if true!).
now account for 69% of all new registrations on the continent
~61%, not 69% according to https://www.acea.auto/pc-registrations/new-car-registrations-1-8-in-2025-battery-electric-17-4-market-share/
It traces how the used car market is emerging as a powerful, subsidy-free engine of private EV adoption.
Does it trace that for the EU as well? I thought we only have this for Germany.
I assume this only relates to what is happening in Germany, which is fine; I would just phrase it accordingly, e.g., "Based on the German example, the study traces how the used e-car market is emerging as a...."
Europe's top 5 BEV markets by volume in 2025: Germany (545,000), UK (473,000), France (327,000), Norway (172,000), Netherlands (156,000).
I would move this bullet point up to the beginning of the list.
examines why some countries managed the end of purchase subsidies without a market collapse while others saw the deepest BEV sales drop in their history.
before "examines why some countries managed the end of purchase subsidies without a market collapse while others saw the deepest BEV sales drop in their history." insert something along the lines of "compares European countries/BEV numbers in Europe" .... and then move to the next sentence which can say something like "This study then examines the impact of subsidies and why some countries managed the end of purchase subsidies without a market collapse while others saw deep BEV sales drops."
The difference isn't the size of the subsidy, it's the policy architecture
The difference isn't the size of the subsidy; it's the policy architecture
Yet the gap between Europe's leaders and laggards has never been wider.
If we now continue talking only about BEVs, I think we need a bit more here, something along the lines of "This becomes particularly striking when we look at BEVs."
3.5. Activity: Find Lists of Bots
In order to get more of a sense of what bots are out there, try searching for social media bots and see what you can find. Try strategies like:
- Google: “Most useful Instagram bots”
- Google: “Funniest Twitter bots”
- Read through the Reddit “botwatch” subreddit [c35]
- Read through a list of registered bots on Wikipedia [c36]
3.5.1. Reflection Questions:
- What bots do you find surprising?
- What bots do you like?
- What bots do you dislike?
Several surprising bots focus on transparency, such as NYPDedits, which monitors Wikipedia for anonymous edits coming from police department IP addresses to ensure institutional accountability. Among the bots users often like are helpful utility tools, such as Musico Bot on Discord for shared listening experiences, or customer service bots that provide 24-hour support. Conversely, many dislike antagonistic bots that are designed for deception, such as those used for spamming or artificially manipulating public opinion. Ultimately, because these bots lack human intent, their actions are viewed as technical functions of their programming rather than personal choices, which shifts the moral weight of their behavior onto the developers who run them.
Google: “Most useful Instagram bots”
While researching different types of bots, I was reminded of a time that I made use of a bot on Instagram. In 2017 I built an Instagram page with a couple of friends, which grew to a following of a few thousand. We would regularly use a bot that we could send posts to that would respond via a DM with the video or photo file itself. This allowed us to easily download posts from any public account.
How are people’s expectations different for a bot and a “normal” user?
Choose an example social media bot (find one on your own or look at Examples of Bots (or apps)).
- What does this bot do that a normal person wouldn’t be able to, or wouldn’t be able to as easily?
- Who is in charge of creating and running this bot?
- Does the fact that it is a bot change how you feel about its actions?
Expectations for bots focus on efficiency, speed, and rigid adherence to code, whereas normal users are expected to possess empathy, social nuance, and accountability for their "intent." For example, a unit-conversion bot can scan thousands of posts to provide instant metric conversions, a task a human could not perform at that scale without extreme fatigue. These bots are typically managed by independent developers who use APIs to automate actions. Because a bot lacks personal will, we often view its errors as technical bugs rather than moral failings, shifting the ethical responsibility back to the person who created or ran the program.
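To make the scale point concrete, here is a minimal sketch of what the core of such a unit-conversion bot could look like. The units, factors, and function names are illustrative choices of mine, not any real bot's code or API.

```python
import re

# Illustrative conversion table: imperial unit -> (metric unit, factor).
CONVERSIONS = {
    "miles": ("km", 1.609344),
    "lb": ("kg", 0.45359237),
}

def metric_replies(post):
    """Find imperial measurements in a post and build metric equivalents."""
    replies = []
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)\s*(miles|lb)\b", post):
        target, factor = CONVERSIONS[unit]
        replies.append(f"{value} {unit} is about {float(value) * factor:.1f} {target}")
    return replies

print(metric_replies("Ran 5 miles carrying a 20 lb pack"))
# -> ['5 miles is about 8.0 km', '20 lb is about 9.1 kg']
```

A deployed bot would wrap this in the platform's API loop (fetch posts, post replies), which is exactly where the scale advantage over a human comes from.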
How are people’s expectations different for a bot and a “normal” user?
We expect normal users to be prone to error, grammar mistakes, bias, sympathies, and conversational tendencies. We expect bots to attempt to follow instructions to the best of their abilities. A bot's behavior is more observably coherent; it holds to pattern and expectation. When we are surfing social media, bots inherently don't incite the dramas which human nature can. They can be programmed to, but that's because they are still following instructions to mimic humans and incite human emotions.
But, since the donkey does not understand the act of protest it is performing, it can’t rightly be punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) from the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.
This is so interesting: to have an object that is very obviously not aware that its purpose is to protest, let alone what it is specifically protesting. Yet it is making an impact on the creators' behalf, using their beliefs.
Why do you think social media platforms allow bots to operate?
I think that social media platforms allow bots to operate because certain bots are used to boost engagement on posts. With boosted engagement, things on the app become more popular, and the app gains more traction. So I think social media platforms allow the operation of bots to promote engagement with their platform.
Reading this chapter made me realize how powerful and subtle social media bots can be. I used to think bots were just obvious spam accounts, but the examples show that many bots can look very human and influence people without being noticed. This connects to what I learned before about media shaping behavior: bots can amplify certain ideas and make them seem more popular than they actually are. I think this is a little concerning, especially during elections or social movements, because people might unknowingly be influenced by automated systems. At the same time, I don’t think bots are always bad, since they can also provide useful services. A question I still have is: how can platforms realistically detect advanced bots without also wrongly flagging real users?
Thomas T. Hills. The calculus of ignorance. Behavioural Public Policy, 7(3):846–850, July 2023. URL: https://www.cambridge.org/core/journals/behavioural-public-policy/article/calculus-of-ignorance/14E02A10E307E3FDEFE0E7C86D9E4126 (visited on 2024-04-01), doi:10.1017/bpp.2022.6.
Reading this text, I found that it deliberates on what motivates people's willingness to ignore certain things and how we might go about fixing it. The author summarizes his findings by saying that people's choices are shaped by what they know and how they learned it, as well as by the costs and benefits of finding out more.
The author also comments that ignorance has costs and benefits. In previous classes, specifically ones dealing with the social sciences and literature, I have read much on the topic of ignorance and of choosing not to pick sides, and how these are among the greatest perils of our population.
20 YEARS OF TRIAL AND ERROR, OR 60 SECONDS
The Garden Plan. 20 Years of Trial and Error Behind It and You Get Every Detail in 60 Seconds.
Reflecting on my personal experience, I'd put 15+ years here. I moved to the farm 18 years ago and grew with the family business for many years, then later on my own.
“It really helps to cut down, if not eliminate, all the guess work on when to plant what. Using this program has helped me to have success in my garden.”
Let me work on getting real reviews of the garden plan itself. Should be able to get this quick.
The vegetable selection and layout work across most US growing zones. You'll want to adjust your planting dates based on your local last frost date, but the plan itself is zone-flexible. The Seedtime app can help you nail down exact timing for your area.
See note at the top about zones
The data in question here is what percentage of Twitter users are spam bots, which Twitter claimed was less than 5% and Elon Musk claimed was higher than 5%.
I made a comment in the last chapter that is slightly similar to what I will say now, but the give and take between large platforms and bots is interesting to me. My take is that platforms make money in general by having many users and interactions, but users don't want the impression that they are just interacting with and seeing bots all over their social media. So it is interesting that about 5% of users on X may be bots.
of your audience view your plan as "impractical"?
Audience needs to know what the problem is and what are the solution(s)
Do they understand the nature of the "problem"?
Significance of the problem question
I'm arguing that the way we use them matters more than whether we use them, and that the distinction between tool use and cognitive outsourcing is the single most important line in this entire conversation, and that almost nobody is drawing it clearly. Schwartz can use Claude to write a paper because Schwartz already knows the physics. His decades of experience are the immune system that catches Claude's hallucinations. A first-year student using the same tool, on the same problem, with the same supervisor giving the same feedback, produces the same output with none of the understanding. The paper looks identical. The scientist doesn't.
How do you know whether you're doing the latter?
This is the distinction that I think the current debate keeps missing. Using an LLM as a sounding board: fine. Using it as a syntax translator when you know what you want to say but can't remember the exact Matplotlib keyword: fine. Using it to look up a BibTeX formatting convention so you don't have to wade through Stack Overflow: fine. In all of these cases, the human is the architect. The machine holds the dictionary. The thinking has already been done, and the tool is just smoothing the last mile of execution. But the moment you use the machine to bypass the thinking itself, to let it make the methodological choices, to let it decide what the data means, to let it write the argument while you nod along, you have crossed a line that is very difficult to see and very difficult to uncross. You haven't saved time. You've forfeited the experience that the time was supposed to give you.
Hard to know whether you're doing this when you're in more unfamiliar territory
Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:
It is interesting to see how similar bots are to users. It is especially scary to think about this when you consider that most platforms don't have bot tags, or, even if they do, that people intentionally don't label accounts as bots with malicious intent. I have never really considered accounts run by humans that post junk and spam to be bots, but I can see how people would consider any account that spams them a bot.
Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:
I have actually seen a lot of bots running some celebrities' accounts. They post things that are relevant to the celebrities' lives, but the tone and content don't really convince me that the posts come from the celebrities themselves. Most of them are just for maintaining fans, and in my opinion it's somewhat meaningless.
But my other response to you is to ask why the investment in and privileging of certain epistemic categories of domination as opposed to others? The question of migrant labor illustrates how race and class and geography and history are intertwined in very specific ways—the Middle Eastern cases (whether the Gulf or in Lebanon) are indeed different from that of the history of migrant labor in the United States, which has always been implicated in settler colonialism.
This quote made me pause and evaluate the framings and vocabulary that we use in understanding the Middle East. Is it proper to use vocabulary that is very inherently rooted in particular Western realizations of race and racism in a completely different context? We discussed in class, for example, how the slave trade in the Ottoman Empire and slavery as a whole was different from the Atlantic slave trade. Yet, simply even the word 'slavery' evokes certain images and meanings that are not cognizant with the Ottoman definition. The same can be said about the milieus of English, Western European-colonial-influenced vocabulary.
So much of the nationalist struggle in Syria, Palestine and Iraq, as well as Egypt, revolved around challenging and dismantling European colonial pluralism that held that only European power could keep allegedly antagonistic native communities in harmony.
I found this quote to be demonstrative of essentially every Western action in the Middle East: carve up some of Afghanistan here, make Israel a client state there, go to war with Iraq for oil, etc. Furthermore, it follows that, like South Africa under apartheid, the contemporary history of the Middle East will be, and is, one of resistance against the Western metropole as a whole.
“You know, like all these authors and illustrators, you could use translanguaging in your memoirs if it will help you tell your story truthfully, more like the way it really happened.” Their faces seemed to open up like a thank you.
It is so important to find picture books that use translanguaging. These books can serve as models. Students can see how authors use multiple languages and apply that to their own writing. But these books also serve a deeper purpose. They show students that their languages belong in school. Many students are used to thinking they should only write in English. Seeing these books challenges that idea. It almost gives them permission to use all their language resources and write in a way that feels natural to them. It tells them, “these authors are doing it and you can do it too.”
This is why text choice is so important. The books we use can send powerful messages to students. Books that include translanguaging show students that their language practices are important and something we want to see in their own writing. This is why the students’ “faces seemed to open up like a thank you.”
The elevation of the 410-km discontinuity may reflect the wet condition of the slab, as a small amount of water (0.12–0.5 wt%) speeds up the olivine-to-wadsleyite phase transformation and is equivalent to a temperature increase of 150 °C.
This suggests a strong reduction in temperature; yet while the slab would be expected to be dry, the sped-up phase transition indicates water. This is again representative of how phase diagrams shift when water is involved, which can even impact chemical processes.
The upper and lower mantles have relatively low water storage capacities, whereas the transition zone has a higher water storage capacity
As seen in Figure 3 above, this layered capacity to store water acts as a trap, sometimes holding water in the transition zone, where hydrous-phase minerals get stuck.
The solubility of water in nominally anhydrous minerals has been measured for phases that occur under mantle conditions
Nominally anhydrous minerals are usually silicates such as olivine, and they store most of the water that is in the mantle, due to the instability of hydrous phases along a normal geotherm.
including DHMS, are stable only at the low temperatures representative of slab conditions; they are not stable along the normal mantle geotherm.
This shows that many of these minerals have a hard time forming under normal conditions and only appear when temperatures are lower than the expected geotherm.
Water affects the position of phase boundaries such as the α−β transformation in the Mg2SiO4–Fe2SiO4 system
We have observed this in class; water not only changes the position of the boundaries but also changes their shape, favoring specific minerals over others.
[u38] Ruha Benjamin. Viral Justice: How We Grow the World We Want. Princeton University Press, October 2022. ISBN 978-0-691-22288-2. URL: https://press.princeton.edu/books/hardcover/9780691222882/viral-justice (visited on 2023-12-10).
Source: Ruha Benjamin
The source explains and builds on the idea that social change doesn't just come from large-scale policies and incentives but also from smaller, everyday actions that influence us to spread information and feelings in a positive way. One key detail mentioned in the source, and the reason Benjamin highlights the importance of individual choices and community care, is events such as the pandemic and the BLM movement, which proved the importance of caring for others.
In the 1976 book The Selfish Gene [l3], evolutionary biologist Richard Dawkins said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene” [l4]). A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of:
- technology (languages, weapons, medicine, writing, math, computers, etc.)
- religions
- philosophies
- political ideas (democracy, authoritarianism, etc.)
- art
- organizations
- etc.
I'm really surprised and fascinated that the vastly popular social media memes of today originated with such a deep connection to the biological process of evolution. I totally agree with the perspective that information evolves and changes over time in the same way organisms do. An example of this is how the "6 7" meme recently emerged across social media.
Pia Thompson
CEO, Checkbook
This is strong but we should add the Checkbook Logo
LendAPI Partner of the Year
2025 • Excellence in Compliance
Just below this add
"How it works" section 1. Connect your policies + controls 2. Guardian scans for gaps and risks 3. Get real-time alerts + fixes
A Better Way to Build Compliance
As a part of the solution add a use case:
"Preparing for Bank Partnership" * Before: weeks of manual work * After: automated readiness dashboard
probate
The 2nd time it is mentioned
probate
First mention: move the definition here.
Table 7.1
The "Fee Simple Subject to Condition Subsequent" row describes violation requiring "owner action to reclaim" but the definition column says "violation does not automatically end the estate." This is correct but the distinction between automatic forfeiture (determinable) and right of entry (subsequent) could be clearer. Consider adding a comparison row or side note.
On 2026-03-31 13:36:35, user AmirV wrote:
any possible timelines on when the appendix section will have more detail? Table A3, model implementation, etc.
On 2026-03-30 07:16:33, user Philip Lewis wrote:
A link to the published version in the European Journal of Epidemiology: https://doi.org/10.1007/s10654-026-01372-8
On 2026-03-28 19:28:54, user Aiste Steponenaite wrote:
The study has now been published: https://doi.org/10.1007/s10654-026-01372-8
On 2026-03-29 15:01:54, user Ian Buller wrote:
Quick note that your citation of the abstract by Brown & Vo et al. (2022; DOI: 10.1158/1538-7755.DISP21-PO-192) is now published as a manuscript in JNCI by Vo & Brown et al. (2025; DOI: 10.1093/jnci/djaf066). I am a co-author on both.
On 2026-03-27 14:32:15, user Peter Ellis wrote:
If you can validate this finding by some other method then this would be a truly remarkable finding. The Y chromosome contains numerous genes that are essential for spermatogenesis - it should not be possible for any cell lineage lacking the Y to give rise to mature sperm. The only possible point at which the Y could be lost (or Y-bearing cells could be lost) would be post-meiotic.
Is it possible instead that there is some alteration in chromatin packaging which somehow selectively affects the extraction efficiency of Y chromosomal DNA sequences?
Alternatively, how exactly is the calculation of Y content being done? If this is an aggregate measurement from bulk DNA, is it possible that rather than there being cells that have fully lost the Y, there is a mix of cell lineages present, each of which has a range of different Y microdeletions present?
Given the known essentiality of the Y for sperm production, I think you will find it challenging to get this past peer review without some kind of per-cell analysis, which could be FISH-based or single-cell genotyping. In either case you'd need very high throughput to have statistical power to detect 1% of cells with LOY.
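To gauge that throughput requirement, here is a back-of-envelope power calculation under a simple one-sided binomial model. The background miscall rate (0.1%), alpha (0.05), and power (80%) are my own illustrative assumptions, not numbers from the preprint.

```python
from math import comb

def binom_sf(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), computed exactly.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def cells_needed(p0=0.001, p1=0.01, alpha=0.05, power=0.8):
    # Smallest n (in steps of 10) where a one-sided binomial test of
    # H0: rate = p0 detects a true Y-negative rate p1 with the requested power.
    n = 10
    while True:
        # Critical count: smallest k with P(X >= k | p0) <= alpha.
        k = next(k for k in range(n + 1) if binom_sf(k, n, p0) <= alpha)
        if binom_sf(k, n, p1) >= power:
            return n, k
        n += 10

n, k = cells_needed()
print(f"score ~{n} cells per sample; call LOY if >= {k} are Y-negative")
```

With these assumed rates the answer comes out in the hundreds of cells per sample, which supports the point that FISH-based or single-cell approaches would need high throughput to resolve a 1% LOY fraction.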
On 2026-03-26 18:17:48, user Danny Chu wrote:
The article is out: https://link.springer.com/article/10.1007/s00262-026-04359-2
On 2026-03-26 15:44:14, user Peter J. Wolf wrote:
As both a researcher and community cat caregiver, I’m very pleased to see this work being conducted!
I was rather surprised to see the relatively low instances of secondary traumatic stress (i.e., 47% moderate, 10% high) reported in this study. I imagine this is the result of using the thresholds proposed by Stamm (2010). You might consider repeating your analysis using the revised thresholds proposed by De La Rosa et al. (2018).
Literature cited
De La Rosa, G. M., Webb-Murphy, J. A., Fesperman, S. F., & Johnston, S. L. (2018). Professional quality of life normative benchmarks. Psychological Trauma: Theory, Research, Practice, and Policy, 10, 225–228. https://doi.org/10.1037/tra0000263
Stamm, B. H. (2010). The Concise ProQOL Manual (2nd ed.). https://proqol.org/proqol-manual
On 2026-03-25 06:58:19, user Eugenio Forbes wrote:
As part of the methodology, did anyone diagnose equipment and cables, plot the recordings to verify that it's not mostly noise?
On 2026-03-25 02:47:47, user Tin Pham wrote:
This paper could have been improved by specifying the target trial specifications (e.g., eligibility criteria, treatment and outcome, follow-up, causal contrasts) and the emulated analogues, according to the TARGET guideline (Cashin et al., 2025). Also, I suggest some sensitivity analyses be done (e.g., varying the lag time, different model specifications for calculating the propensity score, ITT vs. PP treatment estimates) to verify the robustness of the findings.
_____
References:
Cashin AG, Hansford HJ, Hernán MA, et al. Transparent Reporting of Observational Studies Emulating a Target Trial—The TARGET Statement. JAMA. 2025;334(12):1084–1093. doi:10.1001/jama.2025.13350
On 2026-03-23 19:50:14, user Neville Calleja wrote:
There are clearly a number of confounders here, and the way the study is written and summarised in the abstract does not give enough credit to this. The link with pre-existing EBV infection, potential infection before the vaccine took full effect, etc., has not been well described. A number of subset analyses have been carried out, which may border on data dredging, rather than formal multivariate analyses. Also, the involvement of a major antivax advocate has clearly meant that the study has been hijacked.
On 2026-03-23 18:21:15, user Evolutionary Health Group wrote:
We at the Evolutionary Health Group ( https://evoheal.github.io/ ) enjoyed this paper. Here are our highlights:
The saliva-based work presented here shows convincing equivalence to blood tests across multiple pathogens, cohorts, and age groups. The attention to real-world validation shows that the method is platform-level, opening the possibility of applying this type of assay in other contexts.
Because saliva samples are self-collected, non-invasive, and stable, the authors demonstrate that it's possible to capture the daily resolution of important pathogens, addressing a major limitation in epidemiology where true dynamics can't be captured due to infrequent sampling.
Blood-based studies frequently under-sample children, older adults, rural populations, and low-resource populations. Saliva testing removes many of the cost, commute, and invasiveness barriers and helps create more representative datasets for use in epidemiological inference and public health policy.
Professionally collected blood samples benefit from consistent quality. Self-collected samples are more likely to suffer a loss of quality due to improper collection techniques. We would be interested to know how sample quality compares when collections are truly independent of any professional guidance.
On 2026-03-22 22:22:03, user Elizabeth Hughes wrote:
A revised version of this article has been published in the International Journal of Epidemiology https://doi.org/10.1093/ije/dyag028
On 2026-03-17 13:20:45, user Eric Kernfeld wrote:
This looks super useful and I am excited to check it out, but the blog post link on the gpmap landing page is down -- https://www.bristol.ac.uk/ieugwasr/blog/2026/03/11/gpmap-blog-post/
On 2026-03-14 00:41:31, user Lisa DeTora wrote:
What an interesting study! I'd be curious about your views on more open-ended questions users might ask of LLMs about specific vaccines. I also wonder if you have advice for public health agencies or healthcare providers on using LLMs in this setting
One small point: the vaccine hesitancy reference is pre-COVID. My understanding is that this problem has been somewhat worse since the pandemic, making the problems you seek to address even more critical.
On 2026-03-13 17:56:20, user NomosLogic wrote:
Important preprint out of Johns Hopkins: LLMs evaluated as a diagnostic safety net for correcting physician errors.
The right question to ask alongside it: for which clinical decisions is "better probabilistic reasoning" the correct architectural answer, and for which decisions is determinism required?
Drug-gene interactions have correct answers. They are computable. An LLM that reasons well about a CYP2C19 finding is still approximating what a deterministic rules engine computes exactly: every time, auditably, without session-level variance.
The safety net shouldn't be a better guesser. It should be a system that cannot get the answer wrong.
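For illustration only, the deterministic alternative described here can be as simple as a lookup table; the gene/phenotype/drug entries and recommendation strings below are placeholders of mine, not clinical guidance or any vendor's actual rules.

```python
# Hypothetical rules table: (gene, phenotype, drug) -> recommendation.
RULES = {
    ("CYP2C19", "poor metabolizer", "clopidogrel"): "consider alternative antiplatelet",
    ("CYP2C19", "normal metabolizer", "clopidogrel"): "standard dosing",
}

def check(gene, phenotype, drug):
    # Same input always yields the same output; the table itself is the audit trail.
    return RULES.get((gene, phenotype, drug), "no rule: escalate to human review")

print(check("CYP2C19", "poor metabolizer", "clopidogrel"))
```

A production rules engine adds versioning and provenance, but the determinism argument rests on exactly this property: no sampling, no session-level variance.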
On 2026-03-13 17:40:32, user Sasan Jaili wrote:
The peer-reviewed version of the paper is now available online at Nature Biomedical Engineering:
On 2026-03-10 21:27:15, user Abdul Harif wrote:
Training a 160-input multilayer perceptron on a cohort of only 35 unique participants is highly prone to overfitting, even with the inclusion of dropout layers and early stopping. Also, the control group only provided a single breath specimen at one time point, whereas the patient group provided breath specimens before treatment and again 6 to 8 weeks later. This introduces unmitigated temporal confounding variables, such as seasonal changes or device drift over the 8-week period, which the control group does not account for.
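To put the parameters-to-participants ratio in rough numbers: the preprint's hidden-layer sizes aren't quoted here, so the 160 → 64 → 2 architecture below is a hypothetical guess purely for illustration.

```python
def mlp_param_count(layer_sizes):
    # Fully connected layers: weights (a*b) plus biases (b) per layer pair.
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

params = mlp_param_count([160, 64, 2])  # hypothetical hidden size
print(params, "trainable parameters vs. 35 unique participants")
```

Even this modest guess yields roughly 300 parameters per participant, a regime where dropout and early stopping offer limited protection.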
On 2025-12-15 13:19:11, user Chris Smith wrote:
Was the data stratified by participant or by breath?
On 2026-03-09 13:24:18, user David Glasser, MD wrote:
All authors are equity owners of the system studied. Were these the same experts that reviewed discordant cases and sided with the system they owned almost 4 times as often as the board certified clinicians who made the initial assessments?
Ethical review was by the company that markets the system studied.
MAJOR sources of bias and conflicts of interest here. I give them credit for being forthright about disclosing them.
On 2026-02-20 13:58:16, user Peer Reviewer wrote:
We requested the materials needed to reproduce the main results in this preprint. Although the manuscript states that “all data produced in the present study are available upon reasonable request,” a request from our group did not receive a response, and the requested materials have not been made available for independent replication.
On 2026-03-09 10:10:35, user Danni Collingridge Moore wrote:
This preprint has now been published in the journal Palliative Care and Social Practice - https://doi.org/10.1177/2632352425141362
On 2026-02-27 17:03:20, user Deepak Modi wrote:
The NGS dataset in this study is available with IBDC Study Accession: INRP000591 and INSDC (SRA) Project Accession: PRJEB108860
On 2026-02-23 21:17:30, user Tingjie Guo wrote:
The article has been published in Clinical Pharmacokinetics https://doi.org/10.1007/s40262-025-01545-w
On 2026-02-22 17:29:44, user Sue Hewitt wrote:
How is it possible to review the supplementary tables, which do not seem to be included in the preprint? Will this research be submitted for peer review?
On 2026-02-21 03:18:58, user Naoki Watanabe wrote:
We are pleased to announce that this preprint has undergone peer review and has been published in a formal journal. Please refer to the final version of the article.
Watanabe N, Watari T, Otsuka Y. Desulfovibrio Bacteremia in Patients with Abdominal Infections, Japan, 2020–2025. Emerg Infect Dis. 2026 Feb [cited 2026 Feb 21];32(2). Available from: http://dx.doi.org/10.3201/eid3202.251581
On 2026-02-09 01:32:51, user Emre wrote:
This article was published as a research paper in the following journal after peer review. https://journals.tubitak.gov.tr/biology/vol47/iss6/7/
On 2026-02-08 00:04:37, user Mingxin LIU wrote:
This preprint has been published in BMC Medical Informatics and Decision Making and can be accessed at: https://doi.org/10.1186/s12911-026-03370-y.
Readers are encouraged to refer to the published version for the final peer-reviewed content.
On 2026-02-04 13:31:54, user Madhav Chaturvedi wrote:
This article has now been published in BMC Infectious Diseases, with the following DOI: https://doi.org/10.1186/s12879-025-12211-8
On 2026-01-31 09:06:49, user Chris Morgan wrote:
I understand from the Methods that the background population were required to have at least 12 months of registration prior to each observation year and that PD events prior to 2007 were excluded. I am therefore assuming PD patients had to have their first diagnosis after this 12-month wash-in period to ascertain true incident cases. As this is not explicit in the text, and noting the higher incidence in the early years of the study, could the author please confirm this?
On 2026-01-29 02:57:44, user Vanessa Haase wrote:
Correction to my previous comment: The HQ calculations for heart rate increase are scientifically invalid. Heart rate increase from nicotinic agonists is the intended pharmacological effect, not an adverse outcome. The authors use an ARfD based on heart rate increases, but HQ methodology is designed to assess adverse health effects. Pharmacological receptor activation that produces the desired stimulant effect cannot be characterized as a toxicological hazard. By this logic, caffeine would have unacceptable HQs for increased alertness.<br /> Valid cardiovascular risk assessment requires identifying doses causing actual adverse outcomes like sustained tachycardia leading to arrhythmias, hypertensive crises, or cardiovascular events in vulnerable populations. The physiological response that constitutes the purpose of product use is not a safety threshold exceedance.<br /> The conclusions about exceeding safety thresholds rest on this fundamental mischaracterization of pharmacology as toxicity. This undermines legitimate regulatory concerns about unregulated nicotine analogues. Meaningful risk assessment requires identification of true adverse cardiovascular outcomes, not normal receptor-mediated responses.
On 2026-01-26 18:38:43, user Johanna Karla Lehmann wrote:
No single factor that could be associated with a deterioration in post-COVID-19 symptoms after a SARS-CoV2 vaccine dose was investigated (objectives, title, primary outcome). Factors that could have been considered include quantitative changes in spike production, ACE2 expression, Ang II and Ang 1-7 levels, immunological/ inflammatory markers or changes in the severity/extent of comorbidity (blood pressure behaviour, changes in blood circulation, glucose/lipid metabolism, blood clotting, etc.) and smoking habits.<br /> A deterioration and/or increase in symptoms comes as no surprise. Both the infection and the desired acquired immunity through a COVID-19 vaccination (vaccine indication) require SARS-CoV2 spike antigens (RBD of the spike S1 subunit). These are known to react with ACE2 and trigger pathophysiologically undesirable specific organ dysfunctions and symptoms via an Ang II increase and other reactions (bradykinin increase, etc.). For long Covid, an influence of spikes on nicotinic acetylcholine receptor reactions in the periphery and in the CNS is also being discussed. This was evident in the symptoms of coughing (probably dry cough!) and concentration problems, both of which increased significantly after vaccination.<br /> Vaccination (inclusion criterion) and coughing (a symptom!) were named as the ‘identified only factors’ for a worsening of post-COVID-19 symptoms. Neither of these can be described as factors underlying the worsening. The different risks associated with specific vaccines (mRNA, adenovirus, protein-based, attenuated) should be carefully examined in a representative, controlled, comprehensive clinical study.
On 2026-01-26 15:25:00, user Veronica Ruiz wrote:
In response to the question "Should antigen-antibody rapid diagnostic tests be used to detect acute HIV infection?", I believe that use of fourth-generation rapid tests should continue and be encouraged, simply because they reduce the window period, and they should always be incorporated into a complete diagnostic and confirmatory algorithm. Like other HIV diagnostic tools, their true usefulness lies in interpretation framed within an algorithm; I believe that the vast majority of people who work in the field of diagnosis understand this, and it is clearly outlined in all the guides and recommendations on the subject. In other words, fourth-generation rapid tests alone do not guarantee the detection of acute infection, but in combination with counseling applied to high-risk populations, including continuous monitoring, application of other tests (including NAT and the Ab/Ag p24 combo) for detection of the acute viremia phase, and the information provided to users, they can considerably improve the chance of early detection.<br /> As has already been expressed in other comments, there is a huge bias in comparing studies that use different tests, including some that were not approved for use and whose sensitivity increased in updated versions, as well as in including mixed populations of very different types in which the incidence of HIV infection is not comparable.<br /> However, I believe that the greatest bias in the sensitivity calculation is the failure to take into account the appearance of different markers throughout the evolution of the infection, a topic already described in Fiebig EW, Wright DJ, Rawal BD, et al. Dynamics of HIV viremia and antibody seroconversion in plasma donors: implications for diagnosis and staging of primary HIV infection. AIDS 2003; 17(13):1871–1879.<br /> This review conflates detection methods using tests that detect different markers (Table I) related to the time of infection.
For example, when compared to a NAT test, which has a shorter window period, a fourth-generation rapid test will not have the same diagnostic scope, just like tests that exclusively detect the p24 antigen, whose detection threshold is well proven to be much lower than that of any rapid test.<br /> It is also a well-known and reported fact that fourth-generation rapid tests do not perform as well as instrumental methods, but this is an inherent limitation of the method. Therefore, comparisons should be made between methods with at least a similar detection threshold and window period. The discussion is always interesting and enriching, and I will seek to contact the authors to continue it.
On 2026-01-23 13:21:44, user Dr Ali Johnson Onoja wrote:
The analysis aggregates performance data from a heterogeneous mix of fourth-generation HIV rapid tests, including research-use-only products (e.g., SD Bioline HIV Ag/Ab Combo), discontinued devices (Combo RT, D4G, E4G, Geenius HIV-1/2, Bio-Rad GS HIV Combo), the FDA-approved U.S. version of Determine HIV-1/2 Ag/Ab Combo, and the WHO-prequalified Alere HIV Combo/Determine HIV Early Detect. In the Nigerian context, where national HIV testing algorithms approved by the Federal Ministry of Health (FMoH) and NACA restrict use to WHO-prequalified assays, pooling data from obsolete or non-programmatic tests without stratification by brand, version, or regulatory status may misrepresent the true performance of diagnostics currently available or deployable in Nigeria.<br /> Assumption of Class-Dependent Performance and Its Programmatic Implications in Nigeria<br /> The review assumes that diagnostic performance is determined primarily by test class (i.e., fourth-generation Ag/Ab RDTs), without sufficient consideration of infection kinetics, targeted biomarkers, assay technology, or specimen type. In Nigeria—where HIV testing is predominantly conducted using finger-prick whole blood in community, primary healthcare, and outreach settings—test performance during acute HIV infection (AHI) is heavily influenced by the timing of presentation and the biological stage of infection. Failure to account for Fiebig stage–specific detectability risks overgeneralizing performance expectations and may undermine rational decisions about where and how fourth-generation RDTs could add value within Nigerian testing strategies.<br /> Non-Standard Definitions of Acute HIV Infection and Relevance to Nigerian Epidemiology<br /> Definitions of AHI vary widely across included studies, spanning multiple Fiebig stages (I–III or I–IV), each characterised by distinct biomarker kinetics (HIV RNA -> p24 antigen -> antibody). 
In the Nigerian epidemic—where individuals often present late for testing but key populations and high-incidence sub-groups may test during early infection—averaging sensitivity across biologically heterogeneous stages obscures the specific window (notably p24-positive Fiebig II–III) in which Ag/Ab RDTs are theoretically expected to improve case detection. This limits the applicability of pooled sensitivity estimates for informing targeted AHI screening strategies in Nigeria, including among key populations, STI clinics, and PrEP entry points.<br /> Influence of Older Devices and Study Design on Applicability to Nigeria<br /> Lower pooled sensitivity estimates are largely driven by evaluations of older diagnostic devices and laboratory-based case–control studies. Approximately two-thirds of included studies rely on non-consecutive sampling, small AHI sample sizes (<100), and archived specimens—designs known to introduce spectrum and selection bias. For Nigeria, where HIV testing occurs primarily in real-world service delivery settings with operational constraints, such estimates may understate the potential performance of newer WHO-prequalified fourth-generation RDTs when integrated appropriately into national algorithms. Consequently, these findings should be interpreted cautiously when informing policy decisions, guideline updates, or pilot implementation of AHI screening in Nigeria.
On 2025-12-24 04:25:36, user Dr Micah Matiang'i wrote:
If the role of Ag/Ab RDTs is not well understood in resource-limited settings, then there is a need for more population-based studies before WHO reaches a conclusion.
On 2025-12-19 17:14:16, user Cesar Ugarte wrote:
The preprint by Fajardo et al. addresses an important evidence gap regarding the utility of combined antigen–antibody tests for detecting acute HIV infection. Although the authors adopt a valuable global perspective, the interpretation and synthesis of the data would benefit from greater nuance to enhance clinical relevance. The authors' QUADAS-2 assessment shows a high risk of bias regarding patient selection and an unclear risk regarding the conduct of the index test. In diagnostic epidemiology, such findings are not merely descriptive but also signal a large spectrum effect and possible threshold bias. Therefore, the summary estimates presented in Figures 3 and 4 may reflect a statistical average of disparate clinical realities rather than a reliable indicator of test performance (for example, Figure 3 includes 10 studies with a sensitivity below 10%, some with 0%, so these studies should be evaluated in detail to determine whether they can be combined with the others). Another issue is the inclusion of "obsolete" diagnostic platforms that have been withdrawn due to suboptimal performance. A sensitivity analysis or subgroup stratification should be restricted to tests currently on the market. This would enable the reader to distinguish between the historical evolution of the technology and the expected performance in contemporary clinical practice.
The interpretation of diagnostic performance should also be addressed in detail. Whereas sensitivity and specificity have usually been considered "intrinsic" to a test (and thus independent of disease prevalence), evidence suggests significant variation across clinical settings. The underlying epidemiological status and operator expertise can affect a test's accuracy. Finally, I agree with the authors that real-world evidence on cost-effectiveness and implementation barriers is lacking. However, we should be very careful to avoid a biased meta-analytic estimate that leads to the premature abandonment of "imperfect" but viable diagnostic solutions. In the case of acute HIV infection, for which early detection is critical to ART initiation and reduction of secondary transmission, interpretation of this evidence needs to balance statistical rigor against the urgent public health need for early diagnosis.
On 2025-12-17 21:30:43, user Norman Moore wrote:
We have contacted the authors of the article "Should antigen-antibody rapid diagnostic tests be used to detect acute HIV infection? A systematic review and meta-analysis of diagnostic performance" by Fajardo et al. (https://doi.org/10.1101/2025.10.14.25338004). The primary limitation of this article is that it conflates, in a single analysis, the performance of fourth-generation HIV tests that (1) were never launched, (2) were earlier versions of tests that are no longer available in most parts of the world, and (3) have received WHO prequalification (PQ), despite the known and significant differences in performance among them. This has resulted in lower performance representation of certain products over others. It would be more beneficial to the medical community to have a meta-analysis that includes HIV diagnostic tests that are both CE marked and WHO prequalified, to maximize the real-world applicability of this systematic review.
On 2025-12-13 03:02:23, user Missiani wrote:
Title: Should antigen-antibody rapid diagnostic tests be used to detect acute HIV infection? A systematic review and meta-analysis of diagnostic performance<br /> Authors: Emmanuel Fajardo, Céline Lastrucci, Pascal Jolivet, Magdalena DiChiara, Carlota Baptista da Silva, Busi Msimanga, Anita Sands, Cheryl Johnson
The authors systematically searched six databases for studies evaluating Ag/Ab RDTs against laboratory reference standards in individuals aged >=18 months. Across 53 studies from 24 countries, they documented a pooled sensitivity of Ag/Ab RDTs for AHI of 48% (95% CI: 34–62) and a specificity of 97% (95% CI: 84–100). They concluded that Ag/Ab RDTs appear to have limited ability to detect AHI, missing more than half of AHI cases.<br /> They also documented analytical sensitivity (detection of p24 antigen) at 31% and antibody detection at 15%, which was too low.
I have three main comments that could improve the programmatic application of this manuscript.<br /> 1. The study is presented negatively and concludes “Detection of AHI using Ag/Ab RDTs remains a challenge” despite the effort made and resources used. The study oversimplifies highly variable diagnostic data and assumes similarity between the studies, ignoring that a sensitivity of 48% and a specificity of 97% mean that half of the kits performed better and almost all were specific. In Table 2, the authors examined region, study settings and design, population, specimen, etc., but did not examine the group of kits whose sensitivity and specificity exceeded the pooled values. Examining this group of kits would directly address the title of the article (Should antigen-antibody rapid diagnostic tests be used to detect acute HIV infection?). Omitting this subgroup analysis presents only one dimension of the data. We recommend they include this analysis as a way to address the gaps.<br /> 2. The authors present p24 and Ag/Ab as standalone approaches rather than as a combined or multiplex kit to address early diagnosis during AHI, which would provide an opportunity in low-income countries to reduce transmission and improve linkage to care and clinical outcomes. Multiplexing the components could improve diagnosis during AHI, which would be a substantial gain. We recommend adding a paragraph on the impact of incorporating p24 into a multiplex platform. Because many diagnostic tests are now packaged as multiplex platforms, incorporating this perspective would give the title greater depth and better reflect current testing practices.<br /> 3. Some test kits reviewed in the study are either obsolete, recalled, or never progressed beyond early pre-evaluation stages. This raises significant concerns about the validity and current applicability of the findings.
Manufacturers may have already recognized the kits’ poor sensitivity and, as a result, chose not to move forward with full production. Without acknowledging the discontinued or preliminary status of these kits, the study’s conclusions risk being misleading since the kits are not on the market. Recognizing the actual status of these products is essential, as it directly affects how their findings should be interpreted and whether they can responsibly inform policy or implementation decisions.
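On the multiplexing suggestion in point 2, the expected gain can be bounded with a one-line calculation. Under an (optimistic) independence assumption, a multiplex test that flags a case when either component is positive has sensitivity 1 − (1 − s1)(1 − s2); since p24 and antibody detectability are both driven by infection stage and thus correlated, treat this as a ceiling rather than a prediction.

```python
# Upper bound on multiplex sensitivity, assuming (optimistically) that the
# p24 and antibody components detect cases independently. Real correlation
# between the markers makes the true combined sensitivity lower, so this is
# a ceiling, not a prediction.
def combined_sensitivity(s1: float, s2: float) -> float:
    """Test positive if either component is positive: 1 - (1-s1)(1-s2)."""
    return 1 - (1 - s1) * (1 - s2)

# Pooled component figures reported in the preprint: 31% (p24), 15% (antibody).
print(combined_sensitivity(0.31, 0.15))  # roughly 0.41 under independence
```

Even this optimistic ceiling (~41%) sits below the 48% pooled sensitivity of the combined Ag/Ab format, which is worth keeping in mind when weighing the multiplexing argument.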
On 2026-01-26 09:10:28, user Gail Davey wrote:
The Neglected Tropical Diseases considered by the 2021-25 Ethiopian National Strategic Plan ( https://espen.afro.who.int/sites/default/files/content/document/Third%20NTD%20national%20Strategic%20Plan%202021-2025_0.pdf ) include podoconiosis. This is also considered among the skin-NTDs by WHO. Extensive information is available on the distribution and impact of podoconiosis, which has a greater burden than LF in Ethiopia. It would be helpful to mention this condition within the manuscript.
On 2026-01-23 19:44:39, user David Laursen wrote:
Thanks for an interesting preprint, which I hope to read more carefully soon. I am not particularly well versed within causal inference so apologies if the question is unclear.
I noticed your warning against conditioning on post-treatment belief (since it is a collider). Just wondering, does this reservation extend more generally to cautioning against testing for success of blinding at all, regardless of doing a stratified analysis of treatment effects by belief (in an estimation setting, this would probably be estimating differences in beliefs between arms, either with conventional 2x2 measures, or blinding indices). This appears to be a central discussion in many fields, so would appreciate your reflections.
On 2026-01-22 00:38:39, user Sydney Garretson wrote:
This paper has been peer reviewed and published in Journal of Global Health https://jogh.org/2025/jogh-15-04258 . Please add the link to the published article if possible.
On 2026-01-19 13:00:58, user Gene C Koh wrote:
Gene Ching Chiek Koh, Serena Nik-Zainal
Department of Genomic Medicine, University of Cambridge, CB2 0QQ, UK.
We commend Kanwal et al. for their timely evaluation of the in vivo mutagenic potential of CX-5461. This follows our report that CX-5461 induces substantial mutagenesis in cultured mammalian cells1. The authors analysed samples from four patients treated with CX-5461, including marrow aspirates, trephine biopsies, PBMCs, and skin lesions collected at early treatment timepoints (baseline; days 1, 2, or 9; and end-of-treatment of a 21-/28-day cycle), and used error-corrected duplex sequencing to detect low-frequency mutations. They concluded that CX-5461 exposure did not increase single-/double-base substitution or indel burdens, nor reproduce the mutational signatures reported in our in vitro study. While we welcome their contribution, several methodological and interpretive shortcomings limit the conclusions that can be drawn.
1. Data presentation<br /> Figures 1–3 present absolute mutation counts instead of frequencies normalized to total informative duplex bases per sample. In duplex sequencing, normalization is a basic requirement to account for variability in sequencing depth and library complexity; without it, true mutation accrual or fold-change differences versus controls (if any) cannot be assessed reliably.
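The normalization asked for in point 1 is, at its core, a per-sample division; a minimal sketch (with made-up numbers, purely for illustration) shows why raw counts alone mislead:

```python
# Minimal sketch of duplex-seq normalization: raw mutation counts are only
# comparable after dividing by the informative duplex bases sequenced in each
# sample. All numbers here are made-up placeholders.
def mutation_frequency(mutations: int, duplex_bases: int) -> float:
    """Mutations per informative duplex base."""
    return mutations / duplex_bases

# Identical raw counts at very different depths give very different frequencies.
baseline = mutation_frequency(12, 2_400_000_000)  # 5e-9 per base
treated = mutation_frequency(12, 800_000_000)     # 1.5e-8 per base

fold_change = treated / baseline
print(fold_change)  # about 3.0, invisible if one compares raw counts (12 vs 12)
```

A three-fold frequency difference would be entirely hidden by a plot of absolute counts, which is exactly the concern with Figures 1–3 as presented.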
2. Experimental controls, assay sensitivity, and performance<br /> The study lacks essential positive and negative controls making it impossible to evaluate whether the sequencing and analytical processes used by the authors have worked. Clinical samples with known mutational signatures detectable through this approach should have been included to confirm assay sensitivity and substantiate a true negative finding. This is fundamental. Samples from patients unexposed to CX-5461 were also required as negative controls to establish background variability, affording confidence intervals and statistical robustness.<br /> Moreover, the authors have not shown awareness of the assay’s limit of detection (LOD). What is the smallest measurable fold-change at the reported sequencing depth? Without this, one cannot determine the smallest mutational differences that could have been missed. The authors have not disclosed quality-control metrics required to understand whether sufficient data quality was achieved for detecting differential mutagenesis. P/S: TwinStrand kit has an error rate ~0.5e-7 to 1e-7 depending on the protocols, and this can be considerably higher if DNA quality is low or from fixed biopsies.
3. Lack of curation, comparisons to literature<br /> The reported mutation counts did not make sense (baseline values exceeding treated samples, patient samples sometimes lower than kit control). The authors should perform some ‘sanity check’ comparisons with published mutation frequencies of respective normal adult tissues from other duplex-sequencing studies2,3. Analytical rigour would include, for example, examining whether detected variants represent driver mutations from clonal haematopoiesis or occurred in genes under post-treatment selection. Such analyses would have demonstrated critical evaluation of data quality and biological relevance.
4. Cell-type considerations, sampling window<br /> Most analysed compartments—PBMCs, MACS-sorted marrow fractions—are dominated by mature, non-dividing cells that rarely fix new mutations. A more relevant population for assessing mutagenicity is the haematopoietic stem and progenitor cells (HSPCs), typically <0.5% of marrow cells. A null result in the analysed compartments could just mean no widespread mutation fixation in mature immune cells; it does not exclude the possibility of mutagenesis in progenitors below the detection threshold of the current assay.<br /> In addition, samples were taken at very early timepoints (days 1, 2, 9, or EOT) of the first treatment cycle. At such intervals, mutagenic events are unlikely to have become fixed, as mutagen-induced DNA damage will need time to become embedded through DNA repair and replication. Exposure in terminally-differentiated cells might yield no detectable mutations. If exposure occurs on dividing cells, mutational footprints may only become detectable months or years after exposure. The current dataset lacks the temporal window necessary to assess cumulative in vivo mutagenicity.
5. Expected evidence of prior treatments <br /> All four patients reportedly had “measurable, relapsed, or refractory advanced haematologic malignancies without any standard therapeutic options available”4. Although treatment histories were not provided, these patients likely received multiple prior therapies (e.g., doxorubicin, cyclophosphamide, etc) that could induce characteristic mutational signatures in normal haematopoietic cells5. Were signatures of prior therapy detected by the authors? Their absence raises concerns regarding the overall assay sensitivity and/or suggests that sampling strategy was suboptimal for detecting mutagenic exposures.
6. Interpretation of model data<br /> While critical of our findings in cultured human cells as “not adequately representative of physiological human tissue” – a limitation we explicitly acknowledged in our manuscript’s title and discussion – the authors cited a C. elegans study6 in support of their argument of “low non-selective mutagenic potential of CX-5461”. This interpretation is incorrect: the worm study reported high copy-number aberrations, high SNV burdens, and a distinct A>T/T>A-rich signature after CX-5461 exposure, with survival requiring multiple repair pathways (homology-directed repair, microhomology-mediated end joining, nucleotide excision repair, and translesion synthesis). If anything, these cross-species findings reinforce rather than contradict our observations that CX-5461 is highly mutagenic. The concentrations used in that study were chosen to promote viability in the worms, not to minimise mutagenicity. Selective viability does not equate to selective mutagenicity.
7. Clinical mutagenicity testing<br /> We agree that clinical safety assessments must be rigorous and physiologically relevant. The authors dismissed our experiments as not rivalling the “GLP-compliant, non-mutagenic” results of the CX-5461 drug development pathway. However, those mutagenicity data are not available in the public domain and have neither been shared by the authors nor the company that distributes CX-5461.
We urge the authors to reconsider and not simply dismiss our findings. First, the primary clinical-quality mutagenicity assay (required by agencies such as the US Food and Drug Administration (FDA), the European Medicines Agency, and the UK Medicines and Healthcare products Regulatory Agency (MHRA)) referred to by the authors comprises the Ames test – a reverse gene mutation test performed in prokaryotes (e.g., E. coli, Salmonella).
Second, according to the FDA’s ICH S2(R1) guidance for a standard battery of mutagenicity assays (Safety Implementation Working Group of the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use), additional genotoxic assays should be performed in mammalian cells in vitro (where some of the more common assays include metaphase chromosome aberration assays, the micronucleus assay, and the mouse lymphoma L5178Y cell Tk (thymidine kinase) gene mutation assay (MLA)) or in in vivo studies as necessary.
Third, the FDA guidance acknowledges that “no single test is capable of detecting all genotoxic mechanisms relevant in tumorigenesis” and that the standard battery serves primarily for hazard identification rather than comprehensive assessment of mutagenic potential. For negative in vivo results, the ICH S2(R1) guidance requires evidence of adequate target-tissue exposure (e.g., toxicity in the tissue, TK/PK data, or direct tissue concentrations) to validate interpretability. Without such data, negative findings have limited meaning, especially where in vitro systems demonstrate strong mutagenicity.
Fourth, while the Ames test served its purpose for decades, there are well-described problems including false positives, false negatives and critically, a lack of human metabolism that even supplementation with rodent S9 mix cannot always overcome.
Finally, a point also raised by the accompanying commentary to our publication is that perhaps the time has come to re-evaluate how mutagenicity assays are performed. Current assays cannot capture the genome-wide mutation patterns revealed by whole-genome sequencing in human cells, and as a community we should consider using unbiased, agnostic, modern genomic approaches capable of detecting all classes of mutational changes in human cells. This is not an attack on CX-5461; rather, it is a call to the community to consider re-evaluation of mutagenicity assays in drug development.
8. Unsubstantiated claims<br /> The claim of potential contaminants accounting for the mutagenic outcomes we and others have observed is speculative and unsupported. The fact that multiple studies1,6 observed the same mutagenic outcomes using CX-5461 from independent sources suggests that this is unlikely. The authors showed no analytic chemistry (LC-MS/MS) and/or spiking experiments to substantiate this claim.
9. Inadequate supporting material throughout <br /> There were many gaps in the methods/supporting information, including adequate clinical annotation, precise sampling times/total treatment cycle, and basic quality-control metrics. Experimental details (e.g., antibodies used for MACS sorting, essential for interpreting analysed subpopulations) were not provided. These omissions limit transparency, reproducibility, and the interpretability of the findings.
10. Beneficence, non-maleficence, autonomy, justice<br /> First, in academia and medicine, we are guided by the principle of doing no harm. In identifying mutagenesis in experimental systems (an incidental finding), we acted in the best interest of the community – reporting an observation that could have an impact on patients and acknowledging the limitations of our system. We have no role in the (dis)continuation of clinical trials; we simply presented our data transparently and highlighted potential risk. <br /> Second, while the authors chose to discontinue their trial, several others remained active (e.g., NCT04890613, NCT06606990, NCT07069699, NCT07147231, NCT07137416). Their decision was conservative, and in our view, scientifically prudent. We commend their caution. However, it does not justify criticism of those of us reporting safety concerns in good faith.<br /> Third, as a community, we serve society better by being aware of issues, addressing the problems with robust experiments rather than polarising into groups “for” or “against” a compound, so that truly beneficial compounds can get to patients as quickly as possible. <br /> Finally, safety concerns may extend beyond mutagenesis and include tumour promotion effects. CX-5461’s interaction with TOP2B, for example, has been linked to serious, late-emerging toxicities, including therapy-induced leukaemia and cardiotoxicity7-10.
Concluding remarks<br /> Given the experimental and analytical shortcomings outlined above, definitive conclusions regarding CX-5461’s in vivo mutagenicity cannot yet be drawn. The absence of evidence should not be taken as evidence of absence. Rigorous, longitudinal studies with appropriate controls and independent oversight are required to assess true medium- to long-term risks.
We share the authors’ view that thorough, transparent evaluation of anticancer agents is essential. Given the authors’ vested interest in finding a negative result, we suggest independent individuals be involved in performing the analysis/interpretation of their studies to negate potential conflicts of interest. We remain open to collaboration in this effort, in the shared interest of patient safety and scientific integrity.
On 2026-01-14 16:06:13, user Charles Tritt wrote:
This is interesting and important work. However, the flaw I see in this study is that it included only 40 episodes of hypoxia (defined as an SpO2 of < 90%) out of 1760 measurements. Arterial saturation measurements are only clinically significant when they are significantly low, so the approach used doesn’t seem to answer the important question – are there systematic pulse ox errors that make a clinical difference?
The linked protocols show induced hypoxia (I assume by having subjects breathe air diluted with nitrogen). Of course, this couldn’t be done with critically ill patients. But I don’t see the state of the patients as particularly important to the question of systematic pulse ox errors. Would it not be a better approach to test healthy individuals and induce hypoxia, so the data set contains the clinically important information?
On 2026-01-13 16:13:38, user Christine Stabell Benn wrote:
Comment on “Non-specific effects of vaccines on all-cause mortality: a meta-analysis of randomized controlled trials (RCTs) 2012–2025”<br /> Christine Stabell Benn, Frederik Schaltz-Buchholzer, Sebastian Nielsen, Peter Aaby<br /> We commend the authors for addressing the important and contentious question of non-specific effects (NSEs) of vaccines on all-cause mortality. However, we have several major concerns regarding the framing, completeness, methodology, and interpretation of the preprint. Collectively, these issues undermine the conclusions drawn.
1. Restricted research question and dismissal of large parts of the evidence base<br /> The authors explicitly restrict their review to randomized controlled trials (RCTs) published after the WHO review of non-specific effects(1). If the stated objective is to assess the evidence for NSEs on all-cause mortality in randomized trials, an updated meta-analysis incorporating all relevant RCTs, rather than an arbitrarily time-limited subset, would be more informative. The decision to exclude pre-2012 RCTs from the main analysis appears methodological rather than substantive and risks answering a narrow procedural question rather than addressing the broader scientific question.
More importantly, NSEs represent a research area in which randomized trials are inherently difficult or impossible to conduct at scale, because the vaccines in question are already part of routine immunization schedules. As in other areas of public health - such as smoking, breastfeeding, or nutrition - causal inference therefore relies on triangulation across multiple study designs, including observational studies and natural experiments, supported by biological and immunological evidence.<br /> If the intention is to provide a meaningful update on the state of the evidence for NSEs, a comprehensive synthesis that acknowledges the strengths and limitations of all relevant study designs - or at minimum a clear and balanced justification for excluding them - is required.
2. Incomplete identification of relevant randomized trials<br /> Despite claiming a comprehensive search, the review misses several important randomized controlled trials that are directly relevant to NSEs, including recent RCTs published well within the stated search window (e.g. PubMed IDs: 39357573, 38350670, 33893799, 30256314). The omission of these trials raises concerns about the sensitivity of the search strategy and undermines confidence in the completeness of the evidence base.
3. Extreme clinical and methodological heterogeneity invalidates the pooled meta-analysis<br /> The meta-analysis combines trials of three different vaccines (BCG, measles vaccine, and OPV) administered at vastly different ages (birth to 59 months), with follow-up periods ranging from days to five years, and using different randomization schemes and outcomes structures. This is not merely “heterogeneity,” but fundamentally different interventions addressing different biological hypotheses.
Pooling these studies is not equivalent to combining “apples and bananas,” but rather apples and cars. The resulting pooled estimate does not correspond to a coherent causal treatment effect and is therefore not interpretable.
4. Non-adherence to the WHO meta-analysis methodology<br /> By pooling all vaccines together, and furthermore by not focusing on the time window in which a given vaccine is the most recent, the authors of the new meta-analysis violate the principles set out in the WHO meta-analysis, which emphasized vaccine-specific analyses and the importance of the most recent vaccine exposure.
5. Overreliance on conservative confidence interval methods without adequate justification<br /> The authors emphasize the use of the Hartung-Knapp-Sidik-Jonkman (HKSJ) method as providing “more reliable and conservative control of type I error.” While HKSJ can be appropriate when few studies estimate the same underlying effect, its application here - given the very marked heterogeneity and conceptual incoherence of the pooled treatment effect - adds statistical conservatism without resolving the more fundamental problem of model misspecification. The resulting wide confidence intervals should not be interpreted as robust evidence against NSEs.
6. Misinterpretation of heterogeneity statistics (I²)<br /> The statement that an I² of ~44% indicates that “approximately half the differences in the results are due to actual variations between studies” is misleading in this context. I² is meaningful only when studies estimate the same underlying causal association. When fundamentally different interventions are pooled, I² no longer has the interpretation implied by the authors.
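To make the point concrete: I² is a simple transformation of Cochran's Q, computed from the spread of study estimates around a common pooled value. A minimal numpy sketch with hypothetical inputs (illustrative numbers, not taken from the preprint):

```python
import numpy as np

# Hypothetical log-hazard-ratio estimates and standard errors from k trials
# (illustrative numbers only, not from the preprint under discussion).
theta = np.array([-0.30, 0.10, -0.05, 0.25, -0.15])
se = np.array([0.15, 0.20, 0.10, 0.25, 0.18])

w = 1.0 / se**2                           # inverse-variance weights
theta_fe = np.sum(w * theta) / np.sum(w)  # fixed-effect pooled estimate
Q = np.sum(w * (theta - theta_fe) ** 2)   # Cochran's Q statistic
k = len(theta)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0  # I^2 as a percentage
```

Note that the formula is computed identically whether or not the pooled trials estimate a common effect; I² only carries its usual meaning (share of dispersion beyond chance) when they do, which is exactly the interpretive issue raised above.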
7. Speculation that early BCG effects are due to bias is unsubstantiated<br /> The manuscript repeatedly suggests that observed mortality reductions within the first 1–3 days after BCG vaccination may reflect bias due to lack of blinding. This speculation appears inconsistent with the design and reporting of the original trials. In Guinea-Bissau, randomization occurred at discharge, and post-randomization care was not provided by study staff(2). In the Indian trial, the authors explicitly state that it is unlikely that the lack of blinding influenced the result. In previous open-label randomized trials of BCG Russian strain in the same sites, no difference in neonatal mortality was found, which suggests that the lack of blinding did not bias the findings(3).
Given these safeguards, attributing early effects to bias is unsupported by trial evidence and suggests that the original studies were not carefully read or adequately considered.
8. Ignoring extensive mechanistic evidence for rapid BCG effects<br /> The authors further imply that effects within days are biologically implausible. This overlooks a substantial body of experimental and clinical evidence demonstrating that BCG induces trained innate immunity, including rapid functional reprogramming of myeloid cells and emergency granulopoiesis, which can occur within days and protect against severe infections such as sepsis(4, 5). These mechanisms provide a biologically coherent explanation for early effects and should have been discussed as plausible alternatives to bias.
9. Failure to engage with established explanations for heterogeneous measles vaccine effects<br /> The manuscript notes heterogeneity across measles vaccine trials but does not engage with recent work offering compelling explanations for these differences, including interactions with OPV campaigns and vaccination sequence effects(6). Ignoring this literature leads to an oversimplified interpretation in which heterogeneity is treated primarily as noise rather than as potentially informative signal.
10. Introduction of an a posteriori unifying hypothesis<br /> Late in the discussion, the authors invoke a new hypothesis that all live-attenuated vaccines should yield similar NSEs on all-cause mortality. This hypothesis appears post hoc and is not clearly justified biologically. It has never been a hypothesis within the NSE field and is biologically implausible, not least because baseline mortality differs substantially by age. Introducing this assumption only after the pooled analysis further weakens the inferential logic of the paper.
Overall assessment<br /> The manuscript raises an important question, but its conclusions are undermined by:<br /> • an artificially restricted scope,<br /> • incomplete inclusion of relevant RCTs,<br /> • inappropriate pooling across fundamentally different interventions,<br /> • speculative dismissal of biologically plausible findings,<br /> • and inconsistent use of hypotheses introduced after the analysis.<br /> As currently written, the preprint does not provide a reliable basis for concluding that NSEs of vaccines on all-cause mortality are absent or unimportant. A substantially revised analysis - grounded in a comprehensive evidence base, clearer causal questions, and vaccine-specific syntheses - would be required to support such claims.
References<br /> 1. Higgins JP, Soares-Weiser K, Lopez-Lopez JA, Kakourou A, Chaplin K, Christensen H, et al. Association of BCG, DTP, and measles containing vaccines with childhood mortality: systematic review. BMJ. 2016;355:i5170.<br /> 2. Biering-Sorensen S, Aaby P, Lund N, Monteiro I, Jensen KJ, Eriksen HB, et al. Early BCG-Denmark and Neonatal Mortality Among Infants Weighing <2500 g: A Randomized Controlled Trial. Clin Infect Dis. 2017;65(7):1183-90.<br /> 3. Adhisivam B, Kamalarathnam C, Bhat BV, Jayaraman K, Namachivayam SP, Shann F, et al. Effect of BCG Danish and oral polio vaccine on neonatal mortality in newborn babies weighing less than 2000 g in India: multicentre open label randomised controlled trial (BLOW2). BMJ. 2025;390:e084745.<br /> 4. Kleinnijenhuis J, Quintin J, Preijers F, Joosten LA, Ifrim DC, Saeed S, et al. Bacille Calmette-Guerin induces NOD2-dependent nonspecific protection from reinfection via epigenetic reprogramming of monocytes. Proc Natl Acad Sci U S A. 2012;109(43):17537-42.<br /> 5. Brook B, Harbeson DJ, Shannon CP, Cai B, He D, Ben-Othman R, et al. BCG vaccination-induced emergency granulopoiesis provides rapid protection from neonatal sepsis. Sci Transl Med. 2020;12(542):eaax4517.<br /> 6. Nielsen S, Fisker AB, da Silva I, Byberg S, Biering-Sørensen S, Balé C, et al. Effect of early two-dose measles vaccination on childhood mortality and modification by maternal measles antibody in Guinea-Bissau, West Africa: A single-centre open-label randomised controlled trial. EClinicalMedicine. 2022;49:101467.
On 2026-01-12 13:11:20, user Ryan wrote:
I recommend reporting an HR for test positivity, or adding the positivity rate as an adjustment variable; this would allow testing level, and not just testing timing, to be compared in the results. From the look of the log ratios over time, it does not seem that this would completely wash out the signal; however, it is hard to tell without the actual values.
On 2026-01-10 20:05:34, user Sequoia wrote:
I'm interested in seeing whether replacing the UPFs near the checkout with non-UPFs would result in an increase in non-UPF consumption, especially if they were easy to eat with no preparation required (e.g. an apple or energy balls), and if so, by how much.<br /> I am delighted to know that research is being done on this critical subject.
On 2026-01-09 08:47:36, user Janne Ruotsalainen wrote:
The MVA vector is highly immunogenic, as it has been used as a smallpox vaccine. The ex vivo ELISPOT responses against MVA are quite high, raising the question of whether the anti-vector T cell responses became immunodominant and thus suppressed some neoantigen-specific T cell responses.
On 2026-01-09 07:27:44, user briand06 wrote:
It seems I cannot find the Supplementary Methods here. Is it already available?
On 2026-01-07 12:56:34, user Andreas Eilersen wrote:
The transcribed hospitalisation records mentioned in the paper can be found on Github (collected and published by Søren K. Poder): https://github.com/basspoder/smallpox_post_honeymoon_phase_1824_1872/releases/tag/Dataset
On 2025-12-30 06:02:44, user James P wrote:
Really nice, timely methods paper. It puts clear names and concrete examples on two ways biomarker studies can look better than they truly are in practice: (1) enrichment/range restriction, where you study a highly selected group and results don’t transport to typical patients, and (2) “double dipping,” where the same biomarker data influence who gets included and how performance is judged, which can inflate accuracy.
I also appreciated how the audit plus the simple simulation experiments make the problems intuitive rather than abstract. The recommendations are practical (be explicit about the intended target population/estimand, and separate discovery from confirmation or prespecify analyses) and feel immediately useful for trial-ready cohorts and clinical workflows.
On 2025-12-28 19:52:27, user Basam Alasaly wrote:
This manuscript was published online in the Journal of the American Medical Informatics Association on October 27, 2025 ( https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocaf182/8304365) .
On 2025-12-27 04:25:05, user Anjum wrote:
Hi <br /> This manuscript has been published at <br /> John A, V R Reshma, El-Hazimy K. Bridging the Nutrition Education Gap: From Theory to Practice- A Scalable Model for Nutrition Practicums in Medical Training. Journal of Teaching Innovation and Reform. 2025;1:11-25.
On 2025-12-25 14:22:54, user Frederik Schaltz-Buchholzer wrote:
Did you, in your analysis of necessary BCG safety stock, take into account that it is physically impossible to vaccinate 20 newborns with a 20-dose BCG vial?<br /> https://pubmed.ncbi.nlm.nih.gov/28169606/
All the best,<br /> Frederik Schaltz-Buchholzer
On 2025-12-12 01:14:46, user Holler, Joseph wrote:
Please find the published article at https://doi.org/10.1186/s12913-025-13653-1
On 2025-12-09 16:24:30, user Alessandro Crimi wrote:
The paper has actually been published after peer review; here is the link: https://ieeexplore.ieee.org/document/11253716
On 2025-12-05 05:35:16, user Evolutionary Health Group wrote:
We at Evolutionary Health Group ( https://evoheal.github.io/ ) really enjoyed this paper.
Here are our highlights:
One of the strongest contributions is the introduction of a nonlinear null model of covariates that outputs a single scalar, which can be inserted into existing linear frameworks while adding the power of nonlinear modeling.
The authors demonstrate that nonlinear covariate modeling consistently helps more than it harms: adding the null prediction rarely interferes with genetic inference and the gains are substantial for many traits, giving the method an encouraging risk-benefit profile.
Instead of attempting to model exposures explicitly, the authors show that spatiotemporal information can capture complex environmental influences. Even though these features are non-causal, researchers can use such data to hypothesize environmental drivers without having to specify them individually in models.
Using TreeSHAP-IQ, the authors show that nonlinear models find age-sex, seasonality-sex, and birth-home location interactions. These patterns are biologically credible and validated by external literature but cannot be captured by standard linear covariate adjustments. This shows that nonlinear covariate modeling doesn't just improve predictions, it produces interpretable biological insights.
drift
Why is it called 'drift'? It seems to be simply an offset in the random walk, not making the random walk 'drift' in any particular direction... E.g. if I set it to 500 instead of 0.5, the resulting random walks still have similar shapes, just with a different mean.
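One way to check this numerically (a sketch assuming the textbook definition y_t = y_{t-1} + c + e_t, not code from the text): the drift constant c is added at every step, so the expected path is the line c·t rather than a constant offset of the level.

```python
import numpy as np

def random_walk(drift, n, seed=0):
    # y_t = y_{t-1} + drift + e_t  =>  y_t = drift * t + cumulative noise
    rng = np.random.default_rng(seed)
    return np.cumsum(drift + rng.normal(size=n))

small = random_walk(0.5, n=200)
large = random_walk(500.0, n=200)
# E[y_t] = drift * t, so after 200 steps the expected levels are 100 and
# 100000: the drift tilts the whole path rather than just shifting its mean.
```

Under this definition a drift of 500 produces a near-straight line of slope 500, so if the simulated paths really only differ in mean, the parameter in question may not be the drift of the increment equation above.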
<1
Am I right in thinking that this and the following formulas assume that the mean of the series has already been subtracted at some point? Otherwise, -1 < beta_1 < 1 still implies that the sequence tends to revert towards zero... Suppose the series is outdoor temperatures in Kelvin: then e.g. beta_1 = 0.5 will make the temperatures shrink towards absolute zero, won't it?
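To illustrate the worry (a sketch under the standard AR(1) recursion, not code from the text): without an intercept, |β₁| < 1 pulls the level toward zero, while adding the intercept c = μ(1 − β₁) makes the series revert to μ instead, which is equivalent to mean-subtracting first.

```python
import numpy as np

def ar1(beta1, y0, n, c=0.0):
    # Noise-free AR(1) recursion: y_t = c + beta1 * y_{t-1}
    y = np.empty(n)
    y[0] = y0
    for t in range(1, n):
        y[t] = c + beta1 * y[t - 1]
    return y

kelvin = ar1(0.5, y0=288.0, n=30)             # no intercept: decays toward 0 K
centered = ar1(0.5, y0=288.0, n=30, c=144.0)  # c = mu * (1 - beta1), mu = 288
# kelvin[-1] is essentially zero, while centered stays at 288 K: fitting an
# intercept (or demeaning) is what makes "reversion" target the series mean.
```

So yes, with temperatures in Kelvin and β₁ = 0.5 but no intercept, the modeled level does shrink toward absolute zero.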
Regarding important dates, for those of us planning to walk at graduation in June, should we begin the application process to graduate?
Will the instructor's check-off be similar to HLTH 231? Will it be one-on-one with the instructor? With 10 people in class, is there a risk of time constraints?
Will these be the only two chapters in the assigned text that we will use for the entirety of the quarter?
Can you please clarify the due date for the Getting Started Review?
Internalized oppression is when a person internalizes negative messages, stereotypes, etc. that are associated with some aspect of themselves. For example, if a person of color hates their skin color or hair texture, this could represent internalized racism learned at some point in their lifetime. Colonial mentality is when one believes in the inferiority of colonized peoples, or the inferiority of some aspect of being a colonized or formerly colonized people. For example, if an English-speaking country colonizes another and the colonizers teach that English is superior, then believing that English is better than one's own language(s) reflects a colonial mentality that accepts the superiority of the colonizer. Even though institutional and interpersonal oppression seem to be the most damaging and harmful of the oppressions, decolonial scholars importantly point out the extreme dangers of internalized oppression. When people believe themselves to be inferior, this contributes to their continual subjugation and oppression, since they do not believe they have power and agency. When people absorb the ideas and beliefs of a group, they will perpetuate or challenge these ideas in regard to themselves, their family, and others. This helps to complete the cycle between the I’s.
This is far more apparent in the attention economy we live in today.
closely audit total instructional hours
With less instructional time, it is important to deliver as much essential instruction as possible during that time.
For a classic discussion
Agreement:
I do agree with the idea that facts based on observations are far more complex than just looking at something to see if it is true or not. If observations can vary based on prior knowledge, then they are variable across different people, meaning what one person sees doesn't always exactly match what another might see. Observations can and should be challenged in order to expand what we truly know and accept as facts.
But it does show that any view to the effect that scientific knowledge is based on the facts acquired by observation must allow that the facts as well as the knowledge are fallible and subject to correction and that scientific knowledge and the facts on which it might be said to be based are interdependent.
FLAG: scientific knowledge is based on fact and observable statement, but these observations are subject to correction as science advances
The point is that if the knowledge that provides the categories we use to describe our observations is defective, the observation statements that presuppose those categories are similarly defective.
FLAG
Knowledge about the moon’s surface is not based on and derived from mountains and craters but from factual statements about mountains and craters.
FLAG: it isn't just about knowing the facts, it is about how you present them
He still sees only a fraction of what the experts can see, but the pictures are definitely making sense now and so do most of the comments made on them.
the more you observe, the more you learn about something
It would seem that there is a sense in which what an observer sees is affected by his or her past experience.
FLAG: elaborates on the focal point: observation can be influenced by prior knowledge
One concerns the nature of these ‘facts’ and how scientists are meant to have access to them. The second concerns how the laws and theories that constitute our knowledge are derived from the facts once they have been obtained.
main focal points
a common view of science, that scientific knowledge is derived from the facts,
FLAG: common view of science
Nevertheless, we will find that the slogan is not entirely misguided and I will attempt to formulate a defensible version of it.
leading to main argument
Students
The audience is students
There’s a joke
Using humor language to engage the audience
Organized by the art collective Project 51, Play the LA River offered fifty-one weeks of river-based play between September 2014 and September 2015 to parallel the fifty-one miles of the Los Angeles River, from its inland headwaters in the San Fernando Valley to its ocean mouth in downtown Los Angeles. Described as a game of “urban exploration and imagination,” the game invited Angelenos to discover their local river, one that many did not even know was there or had thought lost to pavement and pollution. Play the LA River invited participation on three fronts: first, through a playable card deck divided into four geographical suits, with each card giving directions to a particular site on the river and suggesting activities tailored to that location (Figure 10)
More public and art/researcher collective collabs are needed! Long term too
In part inspired by World without Oil and the Continuous City work of the Builders Association, Black Cloud began as a game proposal for the Digital Media and Learning Competition sponsored by the MacArthur Foundation. Designed for high-school students in South Central Los Angeles and downtown Cairo, Egypt, Black Cloud was described as “a game, where students study local air quality by searching for secret neighborhood air quality sensor stations based only [sic] the air quality data the sensors transmit.”
Citizen science games are like Mozilla Open Source projects. In a way they are like Maps too, OpenFoodFacts, or Wikipedia...