But even the less advanced and more primitive tribes may be equally benefited,
Hints of biological determinism
A leading architect of colonial administrative policy, Lugard developed the system of Indirect Rule, whereby British colonial officials ruled through (and so hid behind) indigenous political leaders who were responsible for collecting taxes from their people, supplying laborers when instructed, and maintaining law and order in their territories.
So basically the architect of wickedness and weakness.
__________________________________________________________________
Becky needs to make a decision on what's best for her and her future. She could also go talk to her professor to see if he could extend the due date.
As a result of this battle Menelik gained enormous local and international prestige. On October 26, the Italians agreed to the Peace Treaty of Addis Ababa, whereby they accepted the annulment of the Treaty of Ucciali and recognized the absolute and complete independence of Ethiopia. Menelik, on the other hand, did not consider himself in position to insist on an Italian withdrawal from Eritrea though he had often expressed a desire of obtaining access to the sea. In the months which followed, the French and British governments sent diplomatic missions to sign treaties of friendship with Menelik; other missions came from the Sudanese Mahdists, the Sultan of the Ottoman Empire, and the Tsar of Russia. Addis Ababa thus emerged as a regular diplomatic center where several important foreign powers had legations [ministers and their staff].
Menelik finally got the aura he deserved. Seriously though, it is a pretty straightforward story which really only tells us about the great leadership of Menelik. I wonder if any other African state was in a position as advantageous as Ethiopia at this point. What really made them succeed was Menelik's decision to acquire as many rifles as he could - did no one else have this idea? What I imagine is more likely is that most African states were highly fractionalized and thus in no position to engage in such strategic moves.
or nearly 43 percent of the original fighting force of 10,596 Italian and about 7,100 [African] troops. The Italians also abandoned all their cannon, as well as about 11,000 out of the 14,519 rifles with which they started the battle
Extreme L
The Ethiopian army moreover was much larger than that of the Italians. Not counting soldiers with spears, he had well over 100,000 men with modern rifles. The Italians for their part had somewhat more cannon—56 as against Menelik’s 40—but only about 17,000 men, of whom 10,596 were Italian and the rest Eritrean levies [draftees or conscripts]
Damn they were severely outnumbered. Is this just European hubris? How could they have expected to possibly defeat him with such a weak army?
After a delay of over two years, which he turned to good advantage by importing very large quantities of fire-arms especially from France and Russia, Menelik at length denounced the Treaty of Ucciali on February 12, 1893; a week or so later, on February 27, he informed the European Powers of his decision, declaring: “Ethiopia has need of no one; she stretches out her hands unto God.”
Menelik was pretty goated. Pretty fire way to declare war.
I said that because of our friendship, our affairs in Europe might be carried on with the aid of the sovereign of Italy, but I have not made any treaty which obliges me to do so. I am not the man to employ the aid of another to carry on my affairs; your Majesty understands very well.
He flamed him basically
The Italian text, however, made it obligatory for the Emperor to conduct all his transactions with the other powers through the Italian government. Though the Italian formula was soon used by the Italian government to claim that it had established a protectorate over Ethiopia, the time was not yet ripe for an open conflict.
Very interesting. I wonder if this apparent miscommunication was intentional.
To avoid disputes among themselves the European Powers had devised the General Act of Berlin, which was signed on February 26 that year
This is so goofy. The Europeans finally sort of realized they should stop fighting constantly, but then in like 20 years they immediately start fighting one of the most horrific wars ever.
Further sales and gifts followed with the result that Menelik soon had by far the best equipped army of any independent African state
This is important context. Menelik understood that he needed to match the power of other states.
Menelik to bring this work to fruition, as well as to withstand the pressure of the European powers in the scramble for Africa, and to lay the foundations of a modern state.
That is a tough list of tasks.
The powers of the monarchs had been usurped by the feudal lords and centralized government had been replaced by the autonomy of the various provinces whose rulers warred among themselves
A warring states period you might say.
looted in 1868 from Ethiopia by British troops and housed today in the British Museum and the British Library.
Classic
post hoc comparisons showed only that the E-A bilinguals’ F1-Bark values are significantly higher compared with monolinguals.
Significant effects:
* Speaker group: E–A bilinguals had higher F1-Bark than monolinguals.
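The notes don't record the exact model specification, so here is a minimal hypothetical sketch (Python/statsmodels, fully synthetic values) of the kind of mixed model a speaker-group comparison like this could rest on; the column names, group labels, and all numbers are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in data: 3 speaker groups, several speakers each,
# repeated F1-Bark measurements per speaker (all values invented).
rows = []
for group, base in [("mono", 6.0), ("AE_bilingual", 6.1), ("EA_bilingual", 6.6)]:
    for spk in range(6):
        spk_offset = rng.normal(scale=0.2)  # per-speaker random intercept
        for token in range(6):
            rows.append({
                "group": group,
                "speaker": f"{group}_{spk}",
                "f1_bark": base + spk_offset + rng.normal(scale=0.15),
            })
df = pd.DataFrame(rows)

# Linear mixed model: fixed effect of speaker group (monolinguals as the
# reference level), random intercept per speaker for the repeated measures.
model = smf.mixedlm("f1_bark ~ C(group, Treatment('mono'))", df,
                    groups=df["speaker"])
result = model.fit()
print(result.summary())
```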
L2 acquisition, L1 attrition relationship
Correlations between native-likeness in L1 and L2 were positive, but only significant for F1-Bark in the sentence condition, meaning that bilinguals with more native-like L2 /æ/ also had more native-like L1 /a/ in tongue height.
compare the F1/F2 vowel space for /a-æ/, /ɪ/ and /u-ʊ/ of our bilingual and monolingual speaker groups with published reference values
The authors compared their vowel measurements with earlier studies to check consistency.
Arabic reference: Alghamdi (1998) — male Saudi speakers.
English reference: Deterding (1997) — Standard Southern British English (SSBE) speakers.
sound discrimination
Sound discrimination (RQ3): For bilinguals only, models tested whether higher sound discrimination aptitude correlated with more nativelike vowel production (smaller distance from monolingual norms).
“Distance-from-norm” calculated as absolute difference between each bilingual’s formant value and the monolingual mean.
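A minimal sketch of how this distance-from-norm measure, and the L1/L2 native-likeness correlation noted above, could be computed; all names and numbers here are hypothetical, not from the paper:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical monolingual norms (F1-Bark means); values are illustrative only.
mono_l1_mean = 5.8   # monolingual Arabic norm for /a/
mono_l2_mean = 6.4   # monolingual English norm for /ae/

bilingual_l1_f1 = np.array([5.9, 6.2, 5.7, 6.5, 6.0])  # bilinguals' L1 /a/
bilingual_l2_f1 = np.array([6.3, 6.9, 6.5, 7.1, 6.6])  # same speakers' L2 /ae/

# Distance-from-norm: absolute difference from the monolingual mean.
dist_l1 = np.abs(bilingual_l1_f1 - mono_l1_mean)
dist_l2 = np.abs(bilingual_l2_f1 - mono_l2_mean)

# RQ4-style check: do speakers with more native-like L2 (small dist_l2)
# also keep more native-like L1 (small dist_l1)?
r, p = pearsonr(dist_l1, dist_l2)
print(f"r = {r:.2f}, p = {p:.3f}")
```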
formant analyses of F1 and F2,
Analyzed formant frequencies F1 (vowel height) and F2 (vowel frontness) using Praat.
Formant values measured at the vowel midpoint and normalized using the Bark Difference Method (Syrdal & Gopal, 1986; McCloy, 2016) to eliminate physiological variation.
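A rough sketch of the hertz-to-Bark step, assuming the Traunmüller (1990) conversion formula and Syrdal & Gopal-style difference terms; the paper's exact implementation may differ:

```python
def hz_to_bark(f_hz: float) -> float:
    """Hertz-to-Bark conversion (Traunmueller 1990)."""
    return 26.81 / (1.0 + 1960.0 / f_hz) - 0.53

# Hypothetical midpoint measurements for one token (values are illustrative).
f0, f1, f2, f3 = 120.0, 650.0, 1250.0, 2500.0

z0, z1, z2, z3 = (hz_to_bark(f) for f in (f0, f1, f2, f3))

# Bark Difference Method: speaker-intrinsic differences rather than raw
# formant values, which factors out physiological differences.
f1_bark = z1 - z0   # vowel height dimension
f2_bark = z3 - z2   # vowel frontness dimension

print(f"F1-Bark = {f1_bark:.2f}, F2-Bark = {f2_bark:.2f}")
```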
Monolinguals
Monolinguals: Completed only the production task in their native language.
collected data in Saudi Arabia and in the United Kingdom
Data collected in Saudi Arabia and the UK through three sessions for bilinguals:
1. L2 proficiency and sound discrimination test
2. Arabic vowel recording session
3. English vowel recording session (within one week)
Recording setup: Participants read words in isolation and in sentences, 3 repetitions per word per condition, recorded with a digital voice recorder.
tested their ability to recognize the newly learned words in a total of 30 spoken utterances,
Task: Participants learned 3 Cantonese words and identified them in 30 test utterances.
adapted version of Part V—Sound Discrimination of the Pimsleur Language Aptitude Battery
Used an adapted Pimsleur Language Aptitude Battery (PLAB) Part V to measure sound discrimination aptitude, a component of language learning aptitude.
Participants learned and recognized new sounds in Cantonese, a language unfamiliar to them, to test pure sound discrimination ability without interference from known languages.
Vowel productions
Target vowels: short vowels /a-æ/, /ɪ/, and /u-ʊ/ in both Arabic and English.
Rationale: Used a controlled reading task (instead of spontaneous speech) to ensure equal numbers of repetitions, consistent phonetic contexts (/hVd/), and reduced coarticulation effects.
Conditions: Each word produced 3 times in isolation and 3 times in a carrier phrase; speaking condition later analyzed as a variable.
Priming: Real words containing the same vowel were used to activate intended categories (e.g., would, could, hood).
Participants
Groups: 4 groups of 15 participants each — Arabic-English (A-E) bilinguals, English-Arabic (E-A) bilinguals, Arabic monolinguals, and English monolinguals.
Monolinguals: Served as control groups; matched by education, region, and age. None spoke additional languages.
All groups were similar in age and gender distribution, though bilingual groups differed slightly but significantly in age of arrival (AoA) and length of residence (LoR).
Research questions
The study explores how L2 acquisition and L1 attrition interact, and whether sound discrimination aptitude affects this relationship.
* RQ1: How nativelike are the bilinguals’ L2 vowel productions?
* RQ2: Do the bilinguals show L1 vowel attrition?
* RQ3: Are differences in vowel production related to sound discrimination aptitude?
* RQ4: Do native-like L2 vowels correspond to nonnative L1 vowels, and vice versa?
we explored if increased sound discrimination aptitude may be related to more nativelike L1/L2 vowel productions in our bilingual speaker groups.
This study does not test perception directly, but examines whether general sound discrimination aptitude predicts how accurately bilinguals produce vowels. (The aptitude test used an unfamiliar language (Cantonese) to avoid bias from prior language knowledge.)
Sound discrimination aptitude
Defined as the ability to identify and store new sounds in long-term memory; part of language learning aptitude (Carroll, 1971). Strong phonemic coding ability supports accurate pronunciation, while low ability can hinder it.
vowel length is phonemic in Arabic
Arabic vowel length is phonemic, and long vowels are more peripheral than short ones.
The analysis centers on short vowels that are phonetically similar between Arabic and English: /a–æ/, /ɪ/, and /u–ʊ/.
Target varieties in our study
The study focuses on Modern Standard Arabic (MSA) and Standard Southern British English (SSBE).
investigated L1 and L2 vowel productions in two groups of advanced late bilinguals, namely, Arabic-English (A-E) and English-Arabic (E-A) bilinguals
This study examines both processes in Arabic-English (A-E) and English-Arabic (E-A) late bilinguals, both long-term immersed in the L2 country.
It analyzes short vowel production to explore how acquisition and attrition interact and whether achieving native-like pronunciation in one language reduces it in the other.
still a small number of studies that explore how acquisition and attrition are related by comparing L1 and L2 speech production
Gap of current study: few studies directly compare L1 attrition and L2 acquisition in the same bilinguals.
presence of an L2 system might lead to changes in the realization of L1 vowels
Research shows bilinguals’ L1 vowels can shift closer to or further from their L2 vowels.
Speech Learning Model
The Speech Learning Model (SLM) explains this through two processes:
1. Category assimilation – when similar L1 and L2 sounds merge, preventing accurate L2 production.
2. Category dissimilation – when similar sounds are pushed further apart, leading to distinct but nonnative categories.
These processes can affect both L2 and L1 speech. Changes in the first language (L1) due to L2 influence are known as L1 phonetic attrition.
Keywords
It’s interesting how, without fail, you can always find ideas of poverty alongside urbanism. But as I was growing up I assumed being in an urban neighborhood was always richer, because you’re able to see everyone as yourself. Almost as though, as you have less wealth, you have more cultural and social wealth.
UI and UX 101 for Web Developers and Designers
I agree that colors are a big factor for websites, just like they are for other things. For example, the color yellow is used for most road signs and school buses because it's one of the first colors that catches our attention. Warmer colors seem to have more energy, while cooler colors tend to relax. Lighter colors feel "lighter", while darker colors feel heavier. The way you color your website does just as much to set the theme as your content does.
My Notes: This video discusses very important points about how almost every element of a site can have a huge effect on your website. One overall topic is the need to give your site the correct visual design for its purpose. This topic discusses how colors and font can improve or ruin your site in significant ways. The other topic that is majorly discussed relates to topics we have seen in class, such as the importance of the alignment, placement, and amount of elements on a site. These elements are reiterated to be very important not only to accessibility but also to the overall visual appeal of a site. This is important to keeping users on your site and making their experience positive.
UI and UX 101 for Web Developers and Designers
Don't cram elements together. Provide "breathing room" around content. This makes the page less overwhelming and easier to digest, creating a positive user experience.
UI and UX 101 for Web Developers and Designers
Basic Design Principles that can help would be:
1. Alignment - Ensure that the design is congruent with other elements. This can include titles, images, and headers. Borders can additionally help with this.
2. Negative Space - Ensure that there is space for all the elements. This can be text, images, etc.
3. Font - There shouldn't be more than two font sizes/faces on a page.
4. Colors - Display different meanings and have psychological effects. For example, pastel colors would be more ideal for a flower business.
5. Templates - Templates are fine to utilize as a guideline for feasibility.
6. UX vs UI - Better to aim for practicality versus aesthetics; they should be working in conjunction. Search bar, logo at top left to take you back to the home page, etc.
7. Understand humans are visual creatures.
Great to internalize the basic principles; however, it's also important to deviate from some of them. It's important to make your website design distinctive and stand out from other websites. Still, some of the information was captivating, and the video made good points comparing brand-name websites.
UI and UX 101 for Web Developers and Designers
The video really focuses on how empathy is key in UX design, showing that understanding your users is just as important as the visuals. It explains how observing real user behavior — like where people get stuck or confused — helps designers improve layouts, navigation, and overall flow. The speaker also mentions using usability testing and accessibility checks to make sure everyone can interact with the product comfortably.
UI and UX 101 for Web Developers and Designers
This video explains how good design isn’t just about visuals but it’s about solving problems and making things easy to use. The video also showed how testing and feedback help improve the overall experience, so users enjoy interacting with the website or app.
UI and UX 101 for Web Developers and Designers
It's all about the eye and the visual aspect of the website building. You may be a great coder and developer and can create many crazy and fascinating functions, but if your website is too cluttered with buttons, it's not going to be usable.
UI and UX 101 for Web Developers and Designers
The colors of the website influence the mood of the user. Consequently, it also affects their impression of the website.
UI and UX 101 for Web Developers and Designers
I think the best part of this video is at 9 minutes when he starts talking about established standards when it comes to design. Things like making sure in the top left of any page on your site, you have the site logo that you can click, which will take you back to the home page. This could be an easy thing to forget as a beginner designer but becomes highly noticeable to users when they get 8 pages deep and want to go back to the home page but get stuck. That little inconvenience can cause frustration in users.
UI and UX 101 for Web Developers and Designers
I like that this video is very easy to understand, even for a beginner. Some of the basic design principles he talked about were: alignment, negative space, font use, and colors. It was interesting how he explained that different colors can make or break a website. For example, a wedding or flower shop website would use white and other light colors instead of darker colors. It was also helpful that he stated that the maximum number of fonts a designer should use is two. This way, the website looks clean and organized.
UI and UX 101 for Web Developers and Designers
Spacing is very important, as it helps users connect paragraphs with the images or visuals that go with them. This is a proper way to plan out the webpage for anything.
UI and UX 101 for Web Developers and Designers
Basic Design Principles Include:
- Alignment - everything lines up
- Negative Space - create space for elements on your page
- Font use - no more than two font faces on a particular page; don't use serif fonts in body text; fonts play a big role in setting the mood for a page
- Colors convey meaning (red = anger)
- Don't be afraid to use templates
- Top left should be a logo that is clickable to get you back to the previous or home page
- Have a navigation menu at the top of the page
- Search should be at the top of the page
UI and UX 101 for Web Developers and Designers
Having a clear navigation menu at the top of a website is important for usability. Users expect the logo on the top left to link back to the homepage and the navigation bar to be easy to find and use. Also, using enough negative space, also called white space, helps a website look clean and easy to read. When elements are too close together, users feel overwhelmed, but spacing things out gives the page breathing room.
If you are design challenged (unable to tell good design from bad), then accept that it's not natural for you. You have to spend extra time watching for alignment; things need to be set up evenly and straight. Images need room to breathe, and the human eye should be able to naturally follow the website. Colors convey emotion without explicitly stating it. If you have a website that is dark and gloomy, you might not want to use it as a wedding site or similar. Templates are valid to use as well if design challenges you.
UI and UX 101 for Web Developers and Designers
When presenting clients with your work, showcase to them about 3 different choices. All 3 choices should be fairly similar because you don't want to deviate too far from the template. Presenting the client with multiple choices shows them your abilities and breadth.
UI and UX 101 for Web Developers and Designers
Good web design means keeping everything neat and easy to read by lining things up, leaving space between sections, using one or two fonts, and choosing colors that match the mood. Even if you are not great at design, using templates and clear layouts like having the logo in the top left and simple menus makes your site look clean and professional.
UI and UX 101 for Web Developers and Designers
The video talks about how good design is more than just how something looks. It shows how user experience and user interface work together to make a website or app simple and enjoyable to use. The speaker explains that things like clear navigation, readable text, and quick feedback when you click something make users feel comfortable. It also focuses on keeping the layout clean and consistent so people do not get confused. Overall, the video is a clear and useful explanation of how thoughtful design helps people have a better online experience.
UI and UX 101 for Web Developers and Designers
Choosing the right colors for your website will have a huge impact on the user. Always have the colors match the company and what they do.
I believe that Stefan provided a lot of the basic foundations for being good at UX. While I find his opinion that you either have it or you don't discouraging, the information he taught in the video was very informative. A lot of the information seemed like it should be basic knowledge; at the same time, he also dove into topics I would not have thought about without this video.
UI and UX 101 for Web Developers and Designers
Some Basic Design Principles are:
1. Alignment - making things appear clean and correct.
2. Negative Space - create space around elements so that the information isn't overwhelming to the user.
3. Font Use - have consistent font families.
4. Don't Use 'Serif' Fonts in Body Text - the style is too "flairy" (this is a strange rule).
5. Logical Color Use - make colors themed for your site.
6. Templates Are OK - they save design resources.
After these rules, he gets into some strict UX guidelines that I personally disagree with. They're good for learning, but carrying this methodology into fully-fledged programs just makes everything look the same!!
UI and UX 101 for Web Developers and Designers
When creating a website, your image must be positioned in a way that allows the eye to follow it.
UI and UX 101 for Web Developers and Designers
One of the basic design principles is font use. Use at most two different fonts and use sans-serif fonts in body text. Fonts set the mood and character of the page. I also believe that font weight can/needs to be used in a way to convey information, show what information is helpful, and guide the reader/viewer.
UI and UX 101 for Web Developers and Designers
There should not be more than 2 font faces on a website, and it would be smart to avoid serif fonts in body paragraphs, though they are alright to use in headers and titles. Fonts themselves impact the feel of the page, creating the vibe or mood of the webpage itself.
UI and UX 101 for Web Developers and Designers
It's important to figure out whether you have the ability to see good design or not:
* If your pages are crappy/don't look good then you aren't able to see good design.
* Sometimes going back to basic design principles can be helpful.
* Negative space usage is important so users don't get overwhelmed.
* Sometimes templates can be helpful to save yourself extra time.
UI and UX 101 for Web Developers and Designers
2:40, I love this discussion on how to space your pages. Often as developers we think of the bare logic, look at our UI and go "eh, this is good enough". The analogy of having a cramped living room really brings it all together. You wouldn't want to move around a room that was too cramped, and users don't want to navigate a cramped website. The way a website is spaced and organized can affect a user's psychology.
UI and UX 101 for Web Developers and Designers
Is having different font sizes bad on a website? I think having different font sizes can be good, because they can grab the viewer's attention.
UI and UX 101 for Web Developers and Designers
You want to make sure to space everything out on your website page, so that it doesn't look really cluttered. Using the space makes everything easier to see and it looks better for the user.
UI and UX 101 for Web Developers and Designers
Will websites/designs be rushed and not cared for if the user was blind? What do y'all think? Would it really matter if the person could not actually see the website?
UI and UX 101 for Web Developers and Designers
You want to make sure that your elements are lined up. If they aren't lined up, then it will look sloppy.
field on the way home and busted it. School Photograph
It’s so interesting how, as you continue in life, this mocking of class and income almost increases at an exponential rate: as you realize you’re poor, you start comparing yourself with others.
And that's exactly what it felt like being told you're poor without being ready for it.
I think this idea of the author realizing he was poor because of others mocking him is incredibly powerful and shows how school can reinforce the labels of class
The point is, it can be a mistake for a teacher to make assumptions about a student's circumstances or support system without knowing the situation.
I enjoy how a lot of the readings this week revolve around reflecting on and understanding an educator's bias when it comes to students
gift. Every student has a story to tell, and often those stories are difficult to hear. Teaching
It’s interesting reading how some teachers have to understand their bias. Especially as I aim to work in the orientation program, I also have to acknowledge where the students are coming from
find myself asking, What did I learn after almost a dozen years in a low-income urban classroom, surrounded by students defined as "at risk" because of their poverty and race? What did I learn about my students? What did I learn about myself?
I admire the author's honesty about her own assumptions, as it's rare to see people admit their own biases
Students should connect what they read to their lives through writing;
This is what I love about reading and writing and why I chose to be a comparative literature major. When people's writing is informed by their lived experience, their creativity is shaped by who they are and the concrete things that they have faced, instead of just from their idea of what is "good" writing. When writing is disconnected from people's concrete day to day lives, and when it isn't accessible to the people, it loses its purpose and meaning.
My students taught me during my career. They were the student teachers, and they gave me an education I could not have gotten anywhere else.
This is exactly how it should be and I felt very moved hearing this perspective! A teacher doesn't just teach and talk at their students, but learns from their lived experiences and knowledge that they may not have based on their class and cultural background. A classroom is a collective space where everyone can learn from each other and support each other.
on July 18, 2023
Currency: This information is not out of date, as it was reviewed by a medical professional on July 18th, 2023.
I would consider this to be out of date if the article was more than 10 years old.
Their six-bedroom house is worth about $150,000.[17] Alexander is an only child. Both parents grew up in small towns in the
I have family members who have large houses but few children, and I've come to realize their children have almost a sense of entitlement when it comes to adults, feeling entitled to almost always get a response and be acknowledged
research.
I didn’t realize how much language use in a home can shape the confidence of children when speaking to teachers
The McAllister's apartment is in a public housing project near a busy street. The complex consists of rows of two- and three-story brick units. The buildings, blocky and brown, have small yards enclosed by concrete and wood fences. Large floodlights are mounted on the corners of the buildings, and wide concrete sidewalks cut through the spaces between units. The ground is bare in many places; paper wrappers and glass litter the area.
Housing plays a big role in a student's ability to thrive. These details might seem unimportant to some, but the environment a student lives in shapes how they interact with school and their peers and teachers. In this home, there isn't much space for a kid to play without the dangers of glass or the confinement of a small yard. This aspect of Harold's life that is determined by his class background shapes his ability to move freely, and therefore may shape his participation in school.
Whining, he wonders what he will do.
It is interesting how having his schedule full and so many planned creative activities has made Alex less creative in some ways. He can't imagine what he could do in his free time because everything is always planned out. When I was younger, I was lucky to have access to an after school program, but there was not too much structure for it, we just got to be outside or inside and play with safety rules and guidelines. This allowed my friends and I to get creative without structure. We would run behind the bungalows, make potions out of the weeds that were growing, and play games we made up. With all of Alex's set activities it is hard for him to develop that type of creativity that was so intrinsic to my childhood.
These are differences with potential long-term consequences. In an historical moment when the dominant society privileges active, informed, assertive clients of health and educational services, the strategies employed by children and parents are not equally effective across classes.
This is why this research is important. There needs to be an understanding of the true conditions of people based on their class and the long-term effects of that to cultivate an academic culture that does not solely adhere to middle and upper class culture. Students from all socio-economic backgrounds should feel that their education is not only accessible in that they can go to school, but that it is relevant and important to them.
Dr. Julie Collier
Conventions: The name of the practitioner is included for identification of the document (in case multiple practitioners assess one patient), for record-keeping and legality in case of disputes, and for document or patient transfer to another field or facility.
25 Jan 2019 19:08 PSTPrivate & Confidential 3/4
Conventions: The date is repeated, presumably for proper filing or record-keeping, accountability, patient transfer, or research purposes. Additionally, the time the document was initiated and the time it was finalized and signed by the physician are included for similar reasons, particularly patient transfer.
Add supporting photos (optional)
Conventions: The picture of the patient's condition is repeated because the initial photo was reduced in size to fit on the primary document page. The second photo provides a larger picture of the condition at the end of the document.
Private & Confidential 1/4
Conventions: The phrase "Private & Confidential" is repeated at the bottom of every page, in keeping with HIPAA and stressing the importance of keeping the patient's medical and personal information classified.
This is a 23-year-old female
Tone and Style: The tone and style of this document is mostly technical, but also partially informal. The physician uses short sentences with clear facts regarding the patient's condition. However, the technical writing style typically requires formal writing. While the document includes relatively complete sentences with proper punctuation, there are a few errors, including the highlighted portion for this annotation. Therefore, the document may not qualify as entirely technical.
S - Subjective
Structure: The structure of the document includes headings and summarized relevant information about the subjective and objective data, the assessment of the physician, and the plan for treatment. This allows for efficient reading of the document and serves as a record of the physician's treatment. Additionally, the headings provide a template for the steps the physician or student should take, and give organization to the document.
Dr. Julie Collier
Audience: The audience for SOAP notes ranges widely. They can be utilized by the original physician themself, other follow-up healthcare professionals, hospital administrators, educators in hospitals or medical schools and their students, and potentially the patient themself. The direct audience of SOAP notes is physicians, as the healthcare providers are intentionally writing the notes for other clinicians.
What you will do about it
Purpose: The Plan, or P, outlines the next steps for the patient or healthcare facility to take. This includes the treatment, follow-ups, and patient education assigned based on the patient's diagnosed condition.
What you think is going on
Purpose: Assessment, or A, summarizes the diagnostic impression of the provider, outlining the provider's analysis of the subjective and objective data.
What you see
Purpose: O, or objective, explains what the provider is observing, recording measurable, factual data and, sometimes, a supporting picture of the condition.
What the patient tells you
Purpose: SOAP notes allow healthcare providers to track, record, and communicate the condition and care of the patient over time. S, or subjective, records the patient's perception of their condition, including their symptoms, own assessment, and medical history.
aduate an
The background of this section has Far Beyond banners in the campus shot. I think this is the same for many of these. Can we swap out the image and it updates everywhere?
As a statistician, I am in strong agreement on the widespread inappropriate use of statistical inference (page 2) and the importance of software. I also strongly agree that “independent critical inspection [is] particularly challenging” (page 3). I also strongly agree that “The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission”, as this is everyone’s problem and nobody’s problem.
I also agree that automation has encouraged standardisation and I have personally supported standardisation because some practices are so bad that many authors need to be “standardised”. However, I’ve also felt frustration at the sometimes fussy requirements when uploading R packages to CRAN (https://cran.r-project.org/). Similarly, some blanket changes from CRAN seem pedantic. There’s likely a balance between reducing poor practice and becoming too prescriptive.
In terms of transparency (section 2.4) I did think about the “Verbose=TRUE” option that I sometimes see in R. I tend to turn this on, as it’s good to see more of the workings, but perhaps the default is off? I did look at some packages using the google search: “verbose site:cran.r-project.org/web/packages”. I was also reminded of the difference between Bayesian and frequentist statistical modelling. Frequentist modelling often uses maximum likelihood to create parameter estimates, which usually runs quickly to create the estimates. In contrast, Bayesian methods often use MCMC, which is often slow and creates long chains of estimates; however, the chains will show if the likelihood does not have a clear maximum, which is usually from a badly specified model, whereas the maximum likelihood simply finds any peak. Frustratingly, I often get more push back from reviewers when using Bayesian methods, whereas in my opinion it should be the other way around as the Bayesian estimates have shown far more of the inner workings.
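As a toy illustration of that last point (my own sketch, not from the paper): in a deliberately non-identifiable model where the data only constrain the sum of two parameters, an optimizer will happily return one arbitrary point on the flat likelihood ridge, whereas even a crude Metropolis chain exposes the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately non-identifiable toy model: observations depend only on a + b,
# so the log-likelihood has a flat ridge instead of a single clear maximum.
data = rng.normal(loc=3.0, scale=1.0, size=50)

def log_lik(a, b):
    return -0.5 * np.sum((data - (a + b)) ** 2)

# Crude Metropolis sampler (illustration only; no tuning or burn-in handling).
n_steps = 20_000
chain = np.empty((n_steps, 2))
theta = np.array([0.0, 0.0])
ll = log_lik(*theta)
for i in range(n_steps):
    proposal = theta + rng.normal(scale=0.3, size=2)
    ll_prop = log_lik(*proposal)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
        theta, ll = proposal, ll_prop
    chain[i] = theta

# The identified combination a + b is pinned down; a alone drifts along the ridge.
print("sd of a + b:", (chain[:, 0] + chain[:, 1]).std())  # small
print("sd of a    :", chain[:, 0].std())                  # large: no clear maximum
```

This is of course a caricature, but it captures why I find the long chains more informative than a single optimizer output.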
Some reflection on the growing use of AI to write software may be worthwhile. Presumably this could be more standardised, but there are other concerns. Using automation to check code could also be worthwhile.
For section 3, I thought that more sharing of code would mean “more eyeballs”, but the sharing needs to be done in a FAIR way.
I wondered if highly-used software should get more scrutiny. Peer review is a scarce resource, so is likely better directed towards high use software. Andrew Gelman recently put forward a similar argument for checking published papers when they reach 250 citations: https://statmodeling.stat.columbia.edu/2025/02/26/pp/.
I agreed with the need for effort (page 19) and wondered if this paper could call for more effort.
Minor comments:
typo “asses” on page 7.
“supercomputers are rare”, should this be “relatively rare”, or am I speaking from a privileged university where I’ve always had access to supercomputers?
I did think about “testthat” at multiple points whilst reading the paper (https://testthat.r-lib.org/).
Can badges on GitHub about downloads and maturity help (page 7)? Although far from all software is on GitHub.
This summary article does not present new data or experiments but instead takes a broad look at automated reasoning and software. Reviewer #1 thought the article needed much more detail, including citations, examples, screenshots and figures. They were concerned about strong generalisations that were lacking evidence and have provided places where they wanted these details. Reviewer #2 considers the differences between reviewability and the practicalities of reviewing everything, and how being easily able to build on other software acts as a kind of reproducibility. In my own editorial review, I generally enjoyed reading the paper and it prompted some interesting thoughts on trade-offs with standardisation and the level of detail shown to users for statistical code.
Thank you for submitting this paper. I think the paper requires substantial, major revisions to be published. Throughout the paper I noted many instances where references or examples would help make the intent clear. I also think the message of the paper would benefit from several figures to demonstrate workflows or ideas. The figures presented are essentially tables, and I think the message could be made clearer for the reader if they were presented as flow charts or at least with clear numbering to hook the ideas to the reader - e.g., Figures 1 & 2 would benefit from having numbers on the key ideas.
The paper is lacking many instances of citation, and at times reads as though it is an essay delivering an opinion. I'm not sure if this is the type of article that the journal would like, but two examples of sentences missing citations are:
"Over the last two decades, an unexpectedly large number of peer-reviewed findings across many scientific disciplines have been found to be irreproducible upon closer inspection." (Introduction, page 2)
"A large number of examples cited in this context involves faulty software or inappropriate use of software" (Introduction, page 3)
Two examples of sentences missing examples are:
Experimental software evolves at a much faster pace than mature software, and documentation is rarely up to date or complete (in Mature vs. experimental software, page 7). Could the author provide more examples of what "experimental software" is? There is also consistent use of universal terms like "...is rarely up to date or complete", which would be better phrased as "is often not up to date or complete"
There are various techniques for ensuring or verifying that a piece of software conforms to a formal specification.
Overall the paper introduces many new concepts, and I think it would greatly benefit from being made shorter and more concise, with some key figures added for the reader to refer back to in order to understand these new ideas. The paper is well written; it is clear the author is a great writer and has put a lot of thought into the ideas. However, it is my opinion that because these ideas are so big and require so much unpacking, they are also harder to understand. The reader would benefit from having more guidance to come back to when trying to understand these ideas.
I hope this review is helpful to the author.
Highlight [page 2]: Ever since the beginnings of organized science in the 17th century, researchers are expected to put all facts supporting their conclusions on the table, and allow their peers to inspect them for accuracy, pertinence, completeness, and bias. Since the 1950s, critical inspection has become an integral part of the publication process in the form of peer review, which is still widely regarded as a key criterion for trustworthy results.
Highlight [page 2]: Over the last two decades, an unexpectedly large number of peer-reviewed findings across many scientific disciplines have been found to be irreproducible upon closer inspection.
Highlight [page 2]: In the quantitative sciences, almost all of today’s research critically relies on computational techniques, even when they are not the primary tool for investigation - and Note [page 2]: Again, it does feel like it would be great to acknowledge research in this space.
Highlight [page 2]: But then, scientists mostly abandoned doubting.
Highlight [page 2]: Automation bias
Highlight [page 3]: A large number of examples cited in this context involves faulty software or inappropriate use of software
Highlight [page 3]: A particularly frequent issue is the inappropriate use of statistical inference techniques.
Highlight [page 3]: The Open Science movement has made a first step towards dealing with automated reasoning in insisting on the necessity to publish scientific software, and ideally making the full development process transparent by the adoption of Open Source practices - and Note [page 3]: Could you provide an example of one of these Open Science movements?
Highlight [page 3]: Almost no scientific software is subjected to independent review today.
Highlight [page 3]: In fact, we do not even have established processes for performing such reviews
Highlight [page 3]: as I will show
Highlight [page 3]: is as much a source of mistakes as defects in the software itself
Highlight [page 3]: This means that reviewing the use of scientific software requires particular attention to potential mismatches between the software’s behavior and its users’ expectations, in particular concerning edge cases and tacit assumptions made by the software developers. They are necessarily expressed somewhere in the software’s source code, but users are often not aware of them.
Highlight [page 4]: the preservation of epistemic diversity
Highlight [page 5]: The five dimensions of scientific software that influence its reviewability.
Highlight [page 6]: In between these extremes, we have in particular domain libraries and tools, which play a very important role in computational science, i.e. in studies where computational techniques are the principal means of investigation
Highlight [page 6]: Situated software is smaller and simpler, which makes it easier to understand and thus to review.
Highlight [page 6]: Domain tools and libraries
Highlight [page 7]: Experimental software evolves at a much faster pace than mature software, and documentation is rarely up to date or complete
Highlight [page 7]: An extreme case of experimental software is machine learning models that are constantly updated with new training data.
Highlight [page 7]: interlocutor
Highlight [page 7]: A grey zone
Highlight [page 7]: The libraries of the scientific Python ecosystem
Highlight [page 7]: too late that some of their critical dependencies are not as mature as they seemed to be
Highlight [page 7]: The main difference in practice is the widespread use of experimental software by unsuspecting scientists who believe it to be mature, whereas users of instrument prototypes are usually well aware of the experimental status of their equipment.
Highlight [page 8]: Convivial software [Kell 2020], named in reference to Ivan Illich’s book “Tools for conviviality” [Illich 1973], is software that aims at augmenting its users’ agency over their computation
Highlight [page 8]: Shaw recently proposed the less pejorative term vernacular developers [Shaw 2022]
Highlight [page 8]: which Illich has described in detail
Highlight [page 8]: what has happened with computing technology for the general public
Highlight [page 8]: tech corporations
Highlight [page 8]: Some research communities have fallen into this trap as well, by adopting proprietary tools such as MATLAB as a foundation for their computational tools and models.
Highlight [page 8]: Historically, the Free Software movement was born in a universe of convivial technology.
Highlight [page 8]: most of the software they produced and used was placed in the public domain
Highlight [page 8]: as they saw legal constraints as the main obstacle to preserving conviviality
Highlight [page 9]: Software complexity has led to a creeping loss of user agency, to the point that even building and installing Open Source software from its source code is often no longer accessible to non-experts, making them dependent not only on the development communities, but also on packaging experts. An experience report on building the popular machine learning library PyTorch from source code nicely illustrates this point [Courtès 2021].
Highlight [page 9]: It survives mainly in communities whose technology has its roots in the 1980s, such as programming systems inheriting from Smalltalk (e.g. Squeak, Pharo, and Cuis), or the programmable text editor GNU Emacs.
Highlight [page 9]: FLOSS has been rapidly gaining in popularity, and receives strong support from the Open Science movement
Highlight [page 9]: the traditional values of scientific research.
Highlight [page 9]: always been convivial
Highlight [page 9]: Transparent software
Highlight [page 9]: Large language models are an extreme example.
Highlight [page 10]: Even highly interactive software, for example in data analysis, performs nonobvious computations, yielding output that an experienced user can perhaps judge for plausibility, but not for correctness.
Highlight [page 10]: It is much easier to develop trust in transparent than in opaque software.
Highlight [page 10]: but also less important
Highlight [page 10]: even a very weak trustworthiness indicator such as popularity becomes sufficient
Highlight [page 10]: This is currently a much discussed issue with machine learning models,
Highlight [page 10]: treated extensively in the philosophy of science.
Highlight [page 11]: The importance of this execution environment is not sufficiently appreciated by most researchers today, who tend to consider it a technical detail
Highlight [page 11]: Software environments have only recently been recognized as highly relevant for automated reasoning in science and beyond
Highlight [page 11]: However, they have not yet found their way into mainstream computational science.
Highlight [page 12]: Non-industrial components are occasionally made for special needs, but this is discouraged by their high manufacturing cost
Highlight [page 12]: cables
Highlight [page 13]: which an experienced microscopist will recognize. Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily.
Highlight [page 13]: where “traditional” means not relying on any form of automated reasoning.
Highlight [page 14]: Figure 2: Four measures that can be taken to make scientific software more trustworthy.
Highlight [page 14]: mature wide-spectrum software
Highlight [page 15]: The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission.
Highlight [page 15]: Many computers, operating systems, and compilers were designed specifically for the needs of scientists.
Highlight [page 15]: Today, scientists use mostly commodity hardware
Highlight [page 15]: even considered advantageous if it also creates a barrier to reverse- engineering of the software by competitors
Highlight [page 15]: few customers (e.g. banks, or medical equipment manufacturers) are willing to pay for
Highlight [page 16]: a convivial collection of more situated modules, possibly supported by a shared wide-spectrum layer.
Highlight [page 16]: In terms of FLOSS jargon, users make a partial fork of the project. Version control systems ensure provenance tracking and support the discovery of other forks. Keeping up to date with relevant forks of one’s software, and with the motivations for them, is part of everyday research work at the same level as keeping up to date with publications in one’s wider community. In fact, another way to describe this approach is full integration of scientific software development into established research practices, rather than keeping it a distinct activity governed by different rules.
Highlight [page 17]: a universe is very
Highlight [page 17]: Improvement thus happens by small-step evolution rather than by large-scale design. While this may look strange to anyone used to today’s software development practices, it is very similar to how scientific models and theories have evolved in the pre-digital era.
Highlight [page 17]: Existing code refactoring tools can probably be adapted to support application-specific forks, for example via code specialization. But tools for working with the forks, i.e. discovering, exploring, and comparing code from multiple forks, are so far lacking. The ideal toolbox should support both forking and merging, where merging refers to creating consensual code versions from multiple forks. Such maintenance by consensus would probably be much slower than maintenance performed by a coordinated team.
Highlight [page 18]: An interesting line of research in software engineering is exploring possibilities to make complete software systems explainable [Nierstrasz and Girba 2022]. Although motivated by situated business applications, the basic ideas should be transferable to scientific computing
Highlight [page 18]: Unlike traditional notebooks, Glamorous Toolkit [feenk.com 2023],
Highlight [page 18]: In Glamorous Toolkit, whenever you look at some code, you can access corresponding examples (and also other references to the code) with a few mouse clicks
Highlight [page 18]: There are various techniques for ensuring or verifying that a piece of software conforms to a formal specification
Highlight [page 18]: The use of these tools is, for now, reserved to software that is critical for safety or security,
Highlight [page 19]: formal specifications
Highlight [page 19]: All of them are much more elaborate than the specification of the result they produce. They are also rather opaque.
Highlight [page 19]: Moreover, specifications are usually more modular than algorithms, which also helps human readers to better understand what the software does [Hinsen 2023]
Highlight [page 19]: In software engineering, specifications are written to formalize the expected behavior of the software before it is written. The software is considered correct if it conforms to the specification.
Highlight [page 19]: A formal specification has to evolve in the same way, and is best seen as the formalization of the scientific knowledge. Change can flow from specification to software, but also in the opposite direction.
Highlight [page 19]: My own experimental Digital Scientific Notation, Leibniz [Hinsen 2024], is intended to resemble traditional mathematical notation as used e.g. in physics. Its statements are embeddable into a narrative, such as a journal article, and it intentionally lacks typical programming language features such as scopes that do not exist in natural language, nor in mathematical notation.
Highlight [page 20]: Situated software is easy to recognize.
Highlight [page 20]: Examples from the reproducibility crisis support this view
Highlight [page 21]: The ideal structure for a reliable scientific software stack would thus consist of a foundation of mature software, on top of which a transparent layer of situated software, such as a script, a notebook, or a workflow, orchestrates the computations that together answer a specific scientific question. Both layers of such a stack are reviewable, as I have explained in section 3.1, but adequate reviewing processes remain to be enacted.
Highlight [page 21]: has been neglected by research institutions all around the world
In his article "Establishing trust in automated reasoning" (Hinsen, 2023), Hinsen argues that much of current scientific software lacks reviewability. Because scientific software has become such a central part of many scientific endeavors, he worries that unreviewed software might contain mistakes that will never be spotted and consequently taint the scientific record. To illustrate this worry he cites issues with reproductions in different fields of science, which are often subsumed under the umbrella term of reproducibility crises. These crises, though not uncontested, have varied sources. In social psychology, reproducibility issues can often be traced to errors in statistical analyses, while shifting baselines and data leakage lead to problems in ML. Hinsen is concerned only with errors in scientific software. He suggests that potential errors could be spotted more easily if scientific software were more reviewable. He therefore proposes five criteria against which reviewability can be judged. I will not discuss them in detail in this commentary and refer the interested reader to Hinsen (2023, section 2) for an extensive discussion. I note, though, that the five criteria are meant to ensure an ideal type of reproducibility which Hinsen defines as follows: "Ideally, each piece of software should perform a well-defined computation that is documented in sufficient detail for its users and verifiable by independent reviewers." (Hinsen, 2023, p.2). I take the upshot of these criteria to be that one could assess the reviewability of a piece of software before actually doing the review. They could thus function, perhaps contrary to Hinsen's open science convictions, as a gatekeeping device in a peer review process for software. An editor could "desk reject" software for not fulfilling the criteria before even sending it out to potential reviewers. If I am correct in this interpretation, then we should treat these criteria with the same caution as we do preregistration.
To be fair, Hinsen envisions a software review process that differs in several ways from current peer review with its acknowledged defects. He writes: "Developing suitable intermediate processes and institutions for reviewing such software is perhaps possible, but I consider it scientifically more appropriate to restructure such software into a convivial collection of more situated modules, possibly supported by a shared wide-spectrum layer." (Hinsen, 2023, p.16).
Convivial software, in turn, is supposed to augment "its users' agency over their computation" (Hinsen, 2023, p.16). This gives us a hint about the kind of user Hinsen has in mind: it is the software developer as a user. His concept of reviewability aims to make software transparent only to this kind of user (see Hinsen, 2023, p.20). In one of his many comparisons of scientific software to science, he notes that "[. . . ] the main intellectual artifacts of science, i.e. theories and models, have always been convivial." (Hinsen, 2023, p.9), and we can guess that he wants this to be the case for software too. But scientific theories and models have only ever been convivial, if at all, for scientists. The comparison also works the other way around: science, as much as software, is heavily fragmented into modules (disciplines). Scientists have always relied on the results of other scientists, and they often have done and still do so without reviewing them. Has this hindered progress? I think one would be hard pressed to answer such a question in general for science, and perhaps it is the same for scientific software.
As Hinsen admits, formal peer review is a relatively novel addition to scientific methodology, enforced on a larger scale only for the past fifty years or so. Science progressed for many years without it, so we could ask why scientific software should not do likewise. Hinsen's answer, of course, has to do with how he grades such software against his reviewability criteria: obviously, most of it scores badly. Most scientific software is neither reviewed nor reviewable, Hinsen claims. This he considers a defect, because only reviewable software has the potential of being reviewed. Many practical considerations he discusses in fact speak against the hope that most reviewable software will actually be reviewed. Still, without reviewability, it is hard, if not impossible, to spot mistakes. A case that was recently brought to my attention emphasizes this point. Beheim et al. (2021) point out that a statistical analysis imputed missing values in an archaeo-historical database with the number 0. But for the statistical model (and software!) in use, 0 had a different meaning than "not available". This casts doubt on the conclusion that was drawn from the model. Beheim et al. were only able to spot this assumption because the code and data were available for review1. Cases like this abound and are examples of the invisible programming values that the philosopher James Moor discussed in the context of computer ethics (see Moor, 1985, "The invisibility factor"). Hinsen calls such values "tacit assumptions made by software developers" (Hinsen, 2023, p.3). We might speculate, though, about what would have happened if this questionable result had been incorporated into the scientific canon. Would later scientists really have continued building on it without ever realizing how shaky their foundations were? Or would the whole edifice have had to face the tribunal of experience at some point and crumbled? Perhaps the originating problem would never have been found and a whole research program would have been abandoned; perhaps a completely different part would have been blamed and excised – hard to say!
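To make the imputation pitfall concrete, here is a toy sketch in Python; the data and the variable name are invented for illustration and are not taken from the actual database:

```python
import statistics

# Toy data (invented): 1 = moralizing gods recorded, None = not recorded.
moralizing_gods = [1, 1, None, None, 1]

# Imputing with 0 asserts "recorded as absent" wherever data are missing...
imputed = [0 if x is None else x for x in moralizing_gods]

# ...whereas dropping missing cases makes no such claim.
observed_only = [x for x in moralizing_gods if x is not None]

print(statistics.mean(imputed))        # 0.6: missingness read as absence
print(statistics.mean(observed_only))  # 1.0: missing cases simply excluded
```

The tacit assumption (0 means "absent") is invisible in the output and only becomes apparent when code and data can be inspected together.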
But maybe reviewability can also serve a different aim than establishing trust in the results of certain pieces of scientific software. Perhaps it facilitates building on and incorporating pieces of such software in other projects. Its purpose could be more instrumental than epistemic. Although Hinsen seems to worry more about the epistemic problems that come with a lack of reviewability, many points he makes implicitly deal with practical problems of software engineering. Whoever has fought against Jupyter notebooks with legacy Python requirements can immediately relate to his wish for keeping the execution environment as small as possible. For Hinsen, software is actually defined by its execution environment (Hinsen, 2023, p.11), thus the complete environment must be available for its reviewability2. Software cannot really be seen as a separate entity, and a review always reviews the whole environment. Analogously to the Quine-Duhem thesis, we could call this situation review holism. But review holism might be less problematic than its scientific cousin suggests. We might not actually need to explicitly review the whole system. Perhaps it is sufficient if we achieve frictionless reproducibility (see Donoho, 2024), that is, other people can more or less easily incorporate and build on the software in question. Firstly, if other software which incorporates the software in question works, that already is a type of successful reproduction. Secondly, the process by which software evolves might weed out any major errors; whatever errors remain are perhaps just irrelevant. In all fairness it has to be said that Hinsen does not think this is the case with current software. He argues that "Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily." (Hinsen, 2023, p.13). But if that is the case, then Hinsen's later recourse to reliabilist-style justifications for software correctness is blocked too. We are in a situation for which the late Humphreys coined the term strange error (Rathkopf & Heinrichs, 2023, p.5). Strange errors are a challenge for any reliabilist account of justification because their magnitude can easily overwhelm arduously collected reliability assurances. If computational reliabilism were just reliabilism, and Hinsen seems to take it as such3, it would suffer from this problem too. But computational reliabilism has an additional internalist component, which explicitly allows for the whole toolbox of "rationalist" software verification methods. If possible, we should learn something about our tools other than their mere reliability. As Hacking said, "[To understand] whether one sees through a microscope, one needs to know quite a lot about the tools." (Hacking, 1981, p.135).
I would go so far as to say that, if available, internalist justifications are preferable to reliabilistic guarantees. It is just that often they are not available, and then we might content ourselves with the guarantees reliabilism provides. I say might content here because such guarantees are unlikely to satisfy the skeptic. Obviously, strange errors are always a possibility, and no finite observation of correct software behaviour can completely rule them out. But in practice such concerns tend to fade over time, although they provide opportunity for unchecked philosophical skepticism. Many discussions about software opacity feed on such skepticism, and this is what I tried to balance with computational reliabilism. In this spirit, computational reliabilism was an attempt to temper theoretical skeptics in philosophy, not to give normative guidance to software engineering practice. My view has always been that practice has the last say over philosophical concerns. If the emerging view in software engineering practice now is that more skepticism is appropriate, I will happily concur. But I should like to remind the practitioner that evidence for such skepticism has to be given in practice too; mere theoretical possibilities are not sufficient to establish it.
Reviewability does not mean reviewed. And only reviews can give us trust – or so we might think. As Hinsen acknowledges, we should not expect that a majority of scientific software will ever be reviewed. Does this mean we cannot trust the results from such software? Above I tried to sketch a way out of this conundrum: we can view reviewability as advocated by Hinsen as a way to enable frictionless reproducibility, which in turn lets us build upon software, incorporate it in our own projects, and use its results. As long as it works in a practically fulfilling way, this might be all the reviewing we need.
1A statistician once told me that one glance at the raw data of this example immediately made clear to him that, whatever problem there was with imputation, the data would never have supported the desired conclusions in any way. One man's glance is another's review.
2Hinsen’s definition of software closely parallels that of Moor, who argued that computer programs are a relation between a computer, a set of instructions and an activity (Moor, 1978, p.214).
3Hinsen characterizes computational reliabilism as follows: "As an alternative source of trust, they propose computational reliabilism, which is trust derived from the experience that a computational procedure has produced mostly good results in a large number of applications." (Hinsen, 2023, p.10)
Beheim, B., Atkinson, Q. D., Bulbulia, J., Gervais, W., Gray, R. D., Henrich, J., Lang, M., Monroe, M. W., Muthukrishna, M., Norenzayan, A., Purzycki, B. G., Shariff, A., Slingerland, E., Spicer, R., & Willard, A. K. (2021). Treatment of missing data determined conclusions regarding moralizing gods. Nature, 595(7866), E29–E34. https://doi.org/10.1038/s41586-021-03655-4
Donoho, D. (2024). Data Science at the Singularity. Harvard Data Science Review, 6(1). https://doi.org/10.1162/99608f92.b91339ef
Hacking, I. (1981). Do We See Through a Microscope? Pacific Philosophical Quarterly, 62(4), 305–322. https://doi.org/10.1111/j.1468-0114.1981.tb00070.x
Hinsen, K. (2023, July). Establishing trust in automated reasoning. https://doi.org/10.31222/osf.io/nt96q
Moor, J. H. (1978). Three Myths of Computer Science. The British Journal for the Philosophy of Science, 29(3), 213–222. https://doi.org/10.1093/bjps/29.3.213
Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275. https://doi.org/10.1111/j.1467-9973.1985.tb00173.x
Rathkopf, C., & Heinrichs, B. (2023). Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Cambridge Quarterly of Healthcare Ethics, 1–13. https://doi.org/10.1017/S0963180122000688
Dear editors and reviewers, Thank you for your careful reading of my manuscript and the detailed and insightful feedback. It has contributed significantly to the improvements in the revised version. Please find my detailed responses below.
Thank you for this helpful review, and in particular for pointing out the need for more references, illustrations, and examples in various places of my manuscript. In the case of the section on experimental software, the search for examples made clear to me that the label was in fact badly chosen. I have relabeled the dimension as “stable vs. evolving software”, and rewritten the section almost entirely. Another major change motivated by your feedback is the addition of a figure showing the structure of a typical scientific software stack (Fig. 2), and of three case studies (section 2.7) in which I evaluate scientific software packages according to my five dimensions of reviewability. The discussion of conviviality (section 2.4), a concept that is indeed not widely known yet, has been much expanded. I have followed the advice to add references in many places. I have been more hesitant to follow the requests for additional examples and illustrations, because of the inevitable conflict with the equally understandable request to make the paper more compact. In many cases, I have preferred to refer to examples discussed in the literature. A few comments deserve a more detailed reply:
Highlight [page 3]: In fact, we do not even have established processes for performing such reviews
and Note [page 3]: I disagree, there is the Journal of Open Source Software: https://joss.theoj.org/, rOpenSci has a guide for development of peer review of statistical software: https://github.com/ropensci/statistical-software-review-book, and also maintain a very clear process of software review: https://ropensci.org/software-review/
As I say in the section "Review the reviewable", these reviews are not an independent critical examination of the software, in the sense in which I define reviewing. Reviewers are not asked to evaluate the software's correctness or appropriateness for any specific purpose. They are expected to comment only on formal characteristics of the software publication process (e.g. "is there a license?"), and on a few software engineering quality indicators ("is there a test suite?").
Highlight [page 3]: This means that reviewing the use of scientific software requires particular attention to potential mismatches between the software’s behavior and its users’ expectations, in particular concerning edge cases and tacit assumptions made by the software developers. They are necessarily expressed somewhere in the software’s source code, but users are often not aware of them.
and Note [page 3]: The same can be said of assumptions for equations and mathematics- the problem here is dealing with abstraction of complexity and the potential unintended consequences.
Indeed. That's why we need someone other than the authors to go through the mathematical reasoning and verify it. Which we do.
Wide-spectrum vs. situated software
Highlight [page 6]: Situated software is smaller and simpler, which makes it easier to understand and thus to review.
and Note [page 6]: I’m not sure I agree it is always smaller and simpler- the custom code for a new method could be incredibly complicated.
The comparison is between situated software and more generic software performing the same operation. For example, a script reading one specific CSV file compared to a subroutine reading arbitrary CSV files. I have yet to see a case in which abstraction from a concrete to a generic function makes code smaller or simpler.
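A minimal sketch of this comparison, with hypothetical file and column names:

```python
import csv

# Situated: one known file, one known column. All assumptions are on the
# surface, so a reviewer sees exactly what is read and how it is interpreted.
def read_temperatures_2023():
    with open("temperatures_2023.csv") as f:
        return [float(row["temperature_C"]) for row in csv.DictReader(f)]

# Generic: the same operation for arbitrary files needs extra parameters and
# error handling; the assumptions move into an interface the reviewer must study.
def read_column(path, column, convert=float, delimiter=","):
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter=delimiter):
            if column not in row:
                raise KeyError(f"column {column!r} missing in {path}")
            values.append(convert(row[column]))
    return values
```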
Convivial vs. proprietary software
Highlight [page 8]: most of the software they produced and used was placed in the public domain
and Note [page 8]: Can you provide an example of this? I’m also curious how the software was placed in the public domain if there was no way to distribute it via the internet.
Software distribution in science was well organized long before the Internet; it was just slower and more expensive. Both decks of punched cards and magnetic tapes were routinely sent by mail. The earliest organized software distribution for science I am aware of was the DECUS Software Library in the early 1960s.
Size of the minimal execution environment
Note [page 11]: Could you provide an example of what it might look like if they were in mainstream computational science? For example, https://github.com/ropensci/rix implements using reproducible environments for R with NIX. What makes this not mainstream? Are you talking about mainstream in the sense of MS Excel? SPSS/SAS/STATA?
I have looked for quantitative studies on software use in science that would allow a precise meaning to be given to "mainstream", but I have not been able to find any. Based on my personal experience, mostly with teaching MOOCs on computational science in which students are asked about the software they use, the most widely used platform is Microsoft Windows. Linux is already a minority platform (though overrepresented in computer science), and Nix users are again a small minority among Linux users.
Analogies in experimental and theoretical science
Highlight [page 13]: which an experienced microscopist will recognize. Software with a small defect, on the other hand, can introduce unpredictable errors in both kind and magnitude, which neither a domain expert nor a professional programmer or computer scientist can diagnose easily.
and Note [page 13]: I don't think this is a fair comparison. Surely there must be instances of experienced microscopists not identifying defects? Similarly, why can't there be examples of domain experts or professional programmers/computer scientists identifying errors? Don't unit tests help protect us against some of our errors? Granted, they aren't bulletproof, and perhaps act more like guard rails.
There are probably cases of microscopists not noticing defects, but my point is that if you ask them to look for defects, they know what to do (and I have made this clearer in my text). For contrast, take GROMACS (one of my case studies in the revised manuscript) and ask either an expert programmer or an experienced computational biophysicist if it correctly implements, say, the AMBER force field. They wouldn’t know what to do to answer that question, both because it is ill-defined (there is no precise definition of the AMBER force field) and because the number of possible mistakes and symptoms of mistakes is enormous. I have seen a protein simulation program fail for proteins whose number of atoms was in a narrow interval, defined by the size that a compiler attributed to a specific data structure. I was able to catch and track down this failure only because a result was obviously wrong for my use case. I have never heard of similar issues with microscopes.
Review the reviewable
Highlight [page 15]: The main difficulty in achieving such audits is that none of today’s scientific institutions consider them part of their mission.
and Note [page 15]: I disagree. Monash provides an example here where they view software as a first class research output: https://robjhyndman.com/files/EBS_research_software.pdf
This example is about superficial reviews in the context of career evaluation. Other institutions have similar processes. As far as I know, none of them ask reviewers to look at the actual code and comment on its correctness or its suitability for some specific purpose.
Science vs. the software industry
Highlight [page 15]: few customers (e.g. banks, or medical equipment manufacturers) are willing to pay for
and Note [page 15]: What about software like SPSS/STATA/SAS- surely many many industries, and also researchers will pay for software like this that is considered mature?
I could indeed extend the list of examples to include various industries. Compared to the huge number of individuals using PCs and smartphones, that’s still few customers.
Emphasize situated and convivial software
Note [page 16]: Could the author provide a diagram or schematic to more clearly show how such a system would work with forks etc?
I have decided on the contrary: I have significantly shortened this section, removing all speculation about how the ideas could be turned into concrete technology. The reason is that I have been working on this topic since I wrote the reviewed version of this manuscript, and I have a lot more to say about it than would be reasonable to include in this work. This will become a separate article.
Make scientific software explainable
Note [page 18]: I think it would be very beneficial to show screenshots of what the author means- while I can follow the link to Glamorous Toolkit, bitrot is a thing, and that might go away, so it would good to see exactly what the author means when they discuss these examples.
Unfortunately, static screenshots can only convey a limited impression of Glamorous Toolkit, but I agree that they are a more stable support than the software itself. Rather than adding my own screenshots, I refer to a recent paper by the authors of Glamorous Toolkit that includes many screenshots for illustration.
Use Digital Scientific Notations
Highlight [page 19]: formal specifications
and Note [page 19]: It would be really helpful if you could demonstrate an example of a formal specification so we can understand how they could be considered constraints.
Highlight [page 19]: Moreover, specifications are usually more modular than algorithms, which also helps human readers to better understand what the software does [Hinsen 2023]
and Note [page 19]: A tight example of this would be really useful to make this point clear. Perhaps with a figure of a specification alongside an algorithm.
I do give an example: sorting a list. To write down an actual formalized version, I’d have to introduce a formal specification language and explain it, which I think goes well beyond the scope of this article. Illustrating modularity requires an even larger example. This is, however, an interesting challenge which I’d be happy to take up in a future article.
Highlight [page 19]: In software engineering, specifications are written to formalize the expected behavior of the software before it is written. The software is considered correct if it conforms to the specification.
and Note [page 19]: Is an example of this test drive development?
Not exactly, though the underlying idea is similar: provide a condition that a result must satisfy as evidence for being correct. With testing, the condition is spelt out for one specific input. In a formal specification, the condition is written down for all possible inputs.
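A minimal sketch of this contrast, using Python's third-party Hypothesis library for property-based testing; note that property-based testing still only samples the input space, whereas a genuine formal specification quantifies over all inputs:

```python
from hypothesis import given, strategies as st

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Testing: the condition is spelt out for one specific input.
def test_one_input():
    assert sorted([3, 1, 2]) == [1, 2, 3]

# Specification style: conditions stated for all inputs; Hypothesis checks
# them on many randomly generated lists.
@given(st.lists(st.integers()))
def test_sorting_specification(xs):
    result = sorted(xs)
    assert is_sorted(result)                   # output is ordered
    assert len(result) == len(xs)              # nothing added or lost
    for x in set(xs):
        assert result.count(x) == xs.count(x)  # output is a permutation

test_one_input()
test_sorting_specification()
```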
First of all, I would like to thank the reviewer for this thoughtful review. It addresses many points that required clarification in my article, which I hope to have provided adequately in the revised version.
One such point is the role and form of reviewing processes for software. I have made it clearer that I take "review" to mean "critical independent inspection". It could be performed by the user of a piece of software, but the standard case should be a review performed by experts at the request of some institution that then publishes the reviewer's findings. There is no notion of gatekeeping attached to such reviews. Users are free to ignore them. Given that today we publish and use scientific software without any review at all, the risk of shifting to the opposite extreme, with reviewers becoming gatekeepers, seems small to me.
Your comment on users being software developers addresses another important point that I had failed to make clear: conviviality is all about diminishing the distinction between developers and users. Users gain agency over their computations at the price of taking on more of a developer role. This is now stated explicitly in the revised article. Your hypothesis that I want scientific software to be convivial is only partially true. I want convivially structured software to be an option for scientists, with adequate infrastructure and tooling support, but I do not consider it to be the best approach for all scientific software.
The paragraph on the relevance and importance of reviewing in your comment is a valid point of view but, unsurprisingly, not mine. In the grand scheme of science, no specific quality assurance measure is strictly necessary. There is always another layer above that will catch mistakes that weren't detected in the layer below. It is thus unlikely that unreliable software will cause all of science to crumble. But from many perspectives, including overall efficiency, personal satisfaction of practitioners, and insight derived from the process, it is preferable to catch mistakes as closely as possible to their source. Pre-digital theoreticians have always double-checked their manual calculations before submitting their papers, rather than sending off unchecked results and counting on confrontation with experiment for finding mistakes. I believe that we should follow this same approach with software. The cost of mistakes can be quite high. Consider the story of the five retracted protein structures that I cite in my article (Miller, 2006). The five publications that were retracted involved years of work by researchers, reviewers, and editors. Between their publication and their retraction, other protein crystallographers saw their work rejected because it was in contradiction with the high-profile articles that later turned out to be wrong. The whole story has probably involved a few ruined careers in addition to its monetary cost. In contrast, independent critical examination of the software and the research processes in which it was used would likely have spotted the problem rather quickly (Matthews, 2007).
You point out that reviewability is also a criterion in choosing software to build on, and I agree. Building on other people’s software requires trusting it. Incorporating it into one’s own work (the core principle of convivial software) requires understanding it. This is in fact what motivated my reflections on this topic. I am not much interested in neatly separating epistemic and practical issues. I am a practitioner, my interest in epistemology comes from a desire for improving practices.
Review holism is something I have not thought about before. I consider it both impossible to apply in practice and of little practical value. What I am suggesting, and I hope to have made this clearer in my revision, is that reviewing must take into account the dependency graph. Reviewing software X requires a prior review of its dependencies (possibly already done by someone else), and a consideration of how each dependency influences the software under consideration. However, I do not consider Donoho’s “frictionless reproducibility” a sufficient basis for trust. It has the same problem as the widespread practice of tacitly assuming a piece of software to be correct because it is widely used. This reasoning is valid only if mistakes have a high chance of being noticed, and that’s in my experience not true for many kinds of research software. “It works”, when pronounced by a computational scientist, really means “There is no evidence that it doesn’t work”.
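A small sketch of what taking the dependency graph into account could mean in practice, assuming a hypothetical package graph; reviews proceed in topological order so that every dependency is reviewed before its dependents:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: package -> set of its dependencies.
dependencies = {
    "analysis-script": {"simulation-lib", "numpy"},
    "simulation-lib": {"numpy"},
    "numpy": set(),
}

# static_order() yields each package only after all of its dependencies,
# i.e. the order in which reviews (or reuse of prior reviews) must happen.
for package in TopologicalSorter(dependencies).static_order():
    print("review:", package)
```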
The weakness of "it works" as evidence is also why I point out the chaotic nature of computation. It is not about Humphreys' "strange errors", for which I have no solution to offer. It is about the fact that looking for mistakes requires some prior idea of what the symptoms of a mistake might be. Experienced researchers do have such prior ideas for scientific instruments, and also e.g. for numerical algorithms. They come from an understanding of the instruments and their use, including in particular a knowledge of how they can go wrong. But once your substrate is a Turing-complete language, no such understanding is possible any more. Every programmer has had the experience of chasing down some bug that at first sight seems impossible. My long-term hope is that scientific computing will move towards domain-specific languages that are explicitly not Turing-complete, and offer useful guarantees in exchange. Unfortunately, I am not aware of any research in this space.
I fully agree with you that internalist justifications are preferable to reliabilistic ones. But being fundamentally a pragmatist, I don’t care much about that distinction. Indisputable justification doesn’t really exist anywhere in science. I am fine with trust that has a solid basis, even if there remains a chance of failure. I’d already be happy if every researcher could answer the question “why do you trust your computational results?” in a way that shows signs of critical reflection.
What I care about ultimately is improving practices in computational science. Over the last 30 years, I have seen numerous mistakes being discovered by chance, often leading to abandoned research projects. Some of these mistakes were due to software bugs, but the most common cause was an incorrect mental model of what the software does. I believe that the best technique we have found so far to spot mistakes in science is critical independent inspection. That’s why I am hoping to see it applied more widely to computation.
Miller, G. (2006) A Scientist’s Nightmare: Software Problem Leads to Five Retractions. Science 314, 1856. https://doi.org/10.1126/science.314.5807.1856
Matthews, B.W. (2007) Five retracted structure reports: Inverted or incorrect? Protein Science 16, 1013. https://doi.org/10.1110/ps.072888607
Bayesian methods often use MCMC, which is often slow and creates long chains of estimates; however, the chains will show if the likelihood does not have a clear maximum, which usually results from a badly specified model...
That is an interesting observation I haven't seen mentioned before. I agree that Bayesian inference is particularly amenable to inspection. One more reason to normalize inspection and inspectability in computational science.
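For readers unfamiliar with this kind of chain inspection, a minimal sketch of the Gelman-Rubin diagnostic, which flags chains that have not settled on a common posterior mode (the arrays here are simulated for illustration):

```python
import numpy as np

def gelman_rubin(chains):
    """R-hat for one parameter; `chains` is an (n_chains, n_samples) array."""
    _, n = chains.shape
    between = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    within = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    pooled = (n - 1) / n * within + between / n    # pooled variance estimate
    return np.sqrt(pooled / within)

# Values well above 1 (e.g. > 1.01) mean the chains disagree, often a
# symptom of a flat likelihood or a badly specified model.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(4, 1000))))                       # ~1.0
print(gelman_rubin(rng.normal(size=(4, 1000)) + np.arange(4)[:, None]))  # >> 1
```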
Some reflection on the growing use of AI to write software may be worthwhile.
The use of AI in writing and reviewing software is a topic I have considered for this revision, because the technology has evolved enormously since I wrote the current version of the manuscript. However, in view of reviewer 1's constant admonition to back up statements with citations, I refrained from delving into this topic. We all know it's happening, but it's too early to observe a clear impact on research software. I have therefore limited myself to a short comment in the Conclusion section.
I wondered if highly-used software should get more scrutiny.
This is an interesting suggestion. If and when we get serious about reviewing code, resource allocation will become an important topic. For getting started, it’s probably more productive to review newly published code than heavily used code, because there is a better chance that authors actually act on the feedback and improve their code before it has many users. That in turn will help improve the reviewing process, which is what matters most right now, in my opinion.
“supercomputers are rare”, should this be “relatively rare” or am I speaking from a privileged university where I’ve always had access to supercomputers.
If you have easy access to a supercomputer, you should indeed consider yourself privileged. But have you ever used supercomputer time for reviewing someone else's work? I have relatively easy access to supercomputers as well, but I do have to make a request and promise to do innovative research with the allocated resources.
I did think about “testthat” at multiple points whilst reading the paper (https://testthat.r-lib.org/)
I hadn’t seen “testthat” before, not being much of a user of R. It looks interesting, and reminds me of similar test support features in Smalltalk which I found very helpful. Improving testing culture is definitely a valuable contribution to improving computational practices.
Can badges on github about downloads and maturity help (page 7)?
Badges can help, on GitHub or elsewhere, e.g. in scientific software catalogs. I see them as a coarse-grained output of reviewing. The right balance to find is between the visibility of a badge and the precision of a carefully written review report. One risk with badges is the temptation to automate the evaluation that leads to it. This is fine for quantitative measures such as test coverage, but what we mostly lack today is human expert judgement on software.
Should we put a divider between this and the header, since the bottom of the header is white and there's nothing else breaking up the header and the main content?
I strategically learn the expectations of my U.S. academic audience, what I really want to say comes across smoothly, without little annoying blips in my readers’ experience
When writers write, they need to focus on small things, such as citations, for their audience. I can see the amount of intention and focus that goes into writing. I have more respect for writers and their ability to educate and express the things they are trying to say.
time
This is true while communicating: both people become senders and receivers. The roles switch time and again as the conversation moves back and forth between them.
Unlike the linear and interactive models, it doesn’t view communication as a sequential process with distinct senders and receivers. Instead, it emphasizes that communication is a simultaneous and ongoing process
This model is very interesting to me. When I think of communication, I typically think of it through the interactive model context, where the sender sends their message to the receiver and the receiver sends back their feedback, in a sequential process. But the transactional model is very intriguing because it presents a more complex way of communication that does not follow a sequential process.
In this model, both the sender and receiver actively participate, with the receiver providing feedback to the sender, indicating understanding or misunderstanding.
I think this model of communication is better than the linear model, because the feedback loop makes miscommunication less likely.
the linear model is often criticized for its lack of feedback from the receiver, failing to capture the dynamic and interactive nature of most human communication.
While the linear model probably works well in many circumstances, I understand the criticisms it receives for its lack of feedback and high possibility of misinterpretation.
The receiver is the target of the message and is responsible for decoding its meaning based on their own experiences and background.
The receiver is also very important to note in communication. Not only is it important for the sender to communicate a clear and direct message, it is also important for the receiver to understand the message, and if they don't, they need to communicate that to the sender so that they can receive clarification.
The message travels through a channel, which is the medium that acts as the bridge between the sender and receiver
This shows how important it is to send your message in the correct way. Sometimes the channel can be an email or a simple meeting; other times it needs to be a more detailed meeting or conversation. It is important to communicate in the correct way, so that the message is as clear as possible.
What went wrong?
Clearly, a lot went wrong in this scenario. Not only was the email's content confusing, leading John to think that the deadline was pushed back to the following Friday, but it also never even made it to Maria because her inbox was flooded. I'm not sure if Sarah is to blame for Maria not seeing her email, but I think that on top of the email, she should have communicated the change in deadline in person, just to be sure that everyone understood.
Communication is the foundation of human interaction, shaping how we connect, collaborate, and coexist in personal, professional, and academic settings
Communication really is the foundation of human interaction, which is why it is so important to know how to communicate properly.
digital misinterpretation
It is hard to interpret tone over text or email, and sarcasm can often be misunderstood as aggression or confrontation. Playfulness can also come off the wrong way, as can many other sentiments.
anticipate noise
It is often easier to address a problem before it arises instead of trying to deal with it in the aftermath
both senders and receivers at the same time.
An example could be Socratic-seminar-style discussions in class, where everybody is simultaneously engaged in observing others in order to refine their own response. This is definitely the most complex model to understand.
processes
An example could be a mass email or newsletter, where the sender is not expecting a response and the receiver does not see any need to reply
process
What makes sense to one person may not translate the same way to another person.
Portland Trail Blazers coach Chauncey Billups and Miami Heat guard Terry Rozier have been arrested as part of a pair of wide-ranging investigations related to illegal sports betting and rigged poker games backed by the Mafia, authorities announced Thursday.
An extensive survey by Neumark and Wascher (2007) concluded that nearly two-thirds of the more than 100 newer minimum wage studies, and 85% of the most convincing ones, found consistent evidence of job loss effects on low-skilled workers.
cetacean
Whales, dolphins, porpoises, and other fully aquatic mammals are classified as cetaceans. Cetaceans are marine mammals that breathe air through blowholes and live entirely in the water.
Genetic sex determined by our new method was consistent with the field data. Further demonstrating the reliability of this method, a male used as a control was sexed 15 times, and 17 other individuals were sexed twice, providing consistent results each time. [...] its degree of degradation (Figure 1) and, thus, in its ratio of amplifiable DNA to total DNA. This, in turn, impacts the success of amplification. We have found that this sexing method can reliably be used to determine the relative amplifiability of degraded DNA, which then allows the success of [...] For low-quality sperm whale samples
This validates the accuracy and repeatability of the new 94-bp PCR method. The authors tested multiple samples repeatedly and obtained identical results, confirming that the technique produces consistent outcomes even with degraded DNA. This builds confidence in using the assay for real-world cetacean studies.
All samples were used as templates in reactions with Rosel's primers and with CetSex94-F and CetSex94-R [...] 1x PCR buffer, with 0.4 mg/mL bovine serum albumin (BSA), 1.5 mM MgCl2, 0.2 mM of each dNTP
Only the primers part (I'm having a hard time selecting a specific annotation). This demonstrates the researchers' primary experimental strategy: they developed novel primers that amplify a 94 bp DNA fragment, enabling the analysis of even damaged cetacean DNA. This is important since fragments larger than 300 bp caused the majority of previous PCR-based sexing techniques to fail.
Determining the sex of individuals in wild populations is important to many areas of study, from social structure and behavioral studies to population genetics. [...] Currently, only one molecular sexing method for cetaceans addresses the issues of fragment length and multiplexed primer sets. Morin et al. (2005)
Only the first sentence... When physical traits cannot accurately determine a cetacean's sex, the authors describe molecular sexing as a genetic alternative. This identifies the aim of the work, which is to increase the precision of sex determination from damaged DNA, particularly in species that do not exhibit obvious sexual dimorphism.
Down All the Days (2013 Mix) by [[ThePoguesOfficial]] on YouTube, accessed 2025-10-23T08:36:17
eLife Assessment
This paper addresses the significant question of quantifying epistasis patterns, which affect the predictability of evolution, by reanalyzing a recently published combinatorial deep mutational scan experiment. The findings are that epistasis is fluid, i.e. strongly background dependent, but that fitness effects of mutations are predictable based on the wild-type phenotype. However, these potentially interesting claims are inadequately supported by the analysis, because measurement noise is not accounted for, arbitrary cutoffs are used, and global nonlinearities are not sufficiently considered. If the results continue to hold after these major improvements in the analysis, they should be of interest to all biologists working in the field of fitness landscapes.
Reviewer #1 (Public review):
This paper describes a number of patterns of epistasis in a large fitness landscape dataset recently published by Papkou et al. The paper is motivated by an important goal in the field of evolutionary biology to understand the statistical structure of epistasis in protein fitness landscapes, and it capitalizes on the unique opportunities presented by this new dataset to address this problem.
The paper reports some interesting previously unobserved patterns that may have implications for our understanding of fitness landscapes and protein evolution. In particular, Figure 5 is very intriguing. However, I have two major concerns detailed below. First, I found the paper rather descriptive (it makes little attempt to gain deeper insights into the origins of the observed patterns) and unfocused (it reports what appears to be a disjointed collection of various statistics without a clear narrative). Second, I have concerns with the statistical rigor of the work.
(1) I think Figures 5 and 7 are the main, most interesting, and novel results of the paper. However, I don't think that the statement "Only a small fraction of mutations exhibit global epistasis" accurately describes what we see in Figure 5. To me, the most striking feature of this figure is that the effects of most mutations at all sites appear to be a mixture of three patterns. The most interesting pattern noted by the authors is of course the "strong" global epistasis, i.e., when the effect of a mutation is highly negatively correlated with the fitness of the background genotype. The second pattern is a "weak" global epistasis, where the correlation with background fitness is much weaker or non-existent. The third pattern is the vertically spread-out cluster at low-fitness backgrounds, i.e., a mutation has a wide range of mostly positive effects that are clearly not correlated with fitness. What is very interesting to me is that all background genotypes fall into these three groups with respect to almost every mutation, but the proportions of the three groups are different for different mutations. In contrast to the authors' statement, it seems to me that almost all mutations display strong global epistasis in at least a subset of backgrounds. A clear example is the C>A mutation at site 3.
1a. I think the authors ought to try to dissect these patterns and investigate them separately rather than lumping them all together and declaring that global epistasis is rare. For example, I would like to know whether those backgrounds in which mutations exhibit strong global epistasis are the same for all mutations or whether they are mutation- or perhaps position-specific. Both answers could be potentially very interesting, either pointing to some specific site-site interactions or, alternatively, suggesting that the statistical patterns are conserved despite variation in the underlying interactions.
1b. Another rather remarkable feature of this plot is that the slopes of the strong global epistasis patterns seem to be very similar across mutations. Is this the case? Is there anything special about this slope? For example, does this slope simply reflect the fact that a given mutation becomes essentially lethal (i.e., produces the same minimal fitness) in a certain set of background genotypes?
1c. Finally, how consistent are these patterns with some null expectations? Specifically, would one expect the same distribution of global epistasis slopes on an uncorrelated landscape? Are the pivot points unusually clustered relative to an expectation on an uncorrelated landscape?
1d. The shapes of the DFE shown in Figure 7 are also quite interesting, particularly the bimodal nature of the DFE in high-fitness (HF) backgrounds. I think this bimodality must be a reflection of the clustering of mutation-background combinations mentioned above. I think the authors ought to draw this connection explicitly. Do all HF backgrounds have a bimodal DFE? What mutations occupy the "moving" peak?
1e. In several figures, the authors compare the patterns for HF and low-fitness (LF) genotypes. In some cases, there are some stark differences between these two groups, most notably in the shape of the DFE (Figure 7B, C). But there is no discussion about what could underlie these differences. Why are the statistics of epistasis different for HF and LF genotypes? Can the authors at least speculate about possible reasons? Why do HF and LF genotypes have qualitatively different DFEs? I actually don't quite understand why the transition between bimodal DFE in Figure 7B and unimodal DFE in Figure 7C is so abrupt. Is there something biologically special about the threshold that separates LF and HF genotypes? My understanding was that this was just a statistical cutoff. Perhaps the authors can plot the DFEs for all backgrounds on the same plot and just draw a line that separates HF and LF backgrounds so that the reader can better see whether the DFE shape changes gradually or abruptly.
1f. The analysis of the synonymous mutations is also interesting. However, I think a few additional analyses are necessary to clarify what is happening here. I would like to know the extent to which synonymous mutations are more often neutral compared to non-synonymous ones. Then, do synonymous pairs interact in the same way as non-synonymous pairs (i.e., plot Figure 1 for synonymous pairs)? Do synonymous or non-synonymous mutations that are neutral exhibit less epistasis than non-neutral ones? Finally, do non-synonymous mutations alter epistasis among other mutations more often than synonymous mutations do? What about synonymous-neutral versus synonymous-non-neutral? Basically, I'd like to understand the extent to which a mutation that is neutral in a given background is more or less likely to alter epistasis between other mutations than a non-neutral mutation in the same background.
(2) I have two related methodological concerns. First, in several analyses, the authors employ thresholds that appear to be arbitrary. And second, I did not see any account of measurement errors. For example, the authors chose the 0.05 threshold to distinguish between epistasis and no epistasis, but this particular choice is not justified. Another example: whether the product s12 × (s1 + s2) is greater or smaller than zero for any given mutation is uncertain due to measurement errors. Presumably, how to classify each pair of mutations should depend on the precision with which the fitness of mutants is measured. These thresholds could well be different across mutants. We know, for example, that low-fitness mutants typically have noisier fitness estimates than high-fitness mutants. I think the authors should use a statistically rigorous procedure to categorize mutations and their epistatic interactions. I think it is very important to address this issue. I got very concerned about it when I saw on LL 383-388 that synonymous stop codon mutations appear to modulate epistasis among other mutations. This seems very strange to me and makes me quite worried that this is a result of noise in LF genotypes.
Reviewer #2 (Public review):
Significance:
This paper reanalyzes an experimental fitness landscape generated by Papkou et al., who assayed the fitness of all possible combinations of 4 nucleotide states at 9 sites in the E. coli DHFR gene, which confers antibiotic resistance. The 9 nucleotide sites make up 3 amino acid sites in the protein, of which one was shown to be the primary determinant of fitness by Papkou et al. This paper sought to assess whether pairwise epistatic interactions differ among genetic backgrounds at other sites and whether there are major patterns in any such differences. They use a "double mutant cycle" approach to quantify pairwise epistasis, where the epistatic interaction between two mutations is the difference between the measured fitness of the double-mutant and its predicted fitness in the absence of epistasis (which equals the sum of individual effects of each mutation observed in the single mutants relative to the reference genotype). The paper claims that epistasis is "fluid," because pairwise epistatic effects often differ depending on the genetic state at the other site. It also claims that this fluidity is "binary," because pairwise effects depend strongly on the state at nucleotide positions 5 and 6 but weakly on those at other sites. Finally, they compare the distribution of fitness effects (DFE) of single mutations for starting genotypes with similar fitness and find that despite the apparent "fluidity" of interactions this distribution is well-predicted by the fitness of the starting genotype.
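In symbols, a sketch of the double-mutant-cycle arithmetic, following the s1, s2, s12 notation used elsewhere in these reviews, with f0 the measured fitness of the reference genotype and f1, f2, f12 those of the single and double mutants:

```latex
s_1 = f_1 - f_0, \qquad s_2 = f_2 - f_0, \qquad s_{12} = f_{12} - f_0,
\qquad \epsilon = s_{12} - (s_1 + s_2) = f_{12} - f_1 - f_2 + f_0
```

"Fluidity" then amounts to the claim that epsilon, recomputed with a different genotype playing the role of the reference, changes in magnitude or sign.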
The paper addresses an important question for genetics and evolution: how complex and unpredictable are the effects and interactions among mutations in a protein? Epistasis can make the phenotype hard to predict from the genotype and also affect the evolutionary navigability of a genotype landscape. Whether pairwise epistatic interactions depend on genetic background - that is, whether there are important high-order interactions -- is important because interactions of order greater than pairwise would make phenotypes especially idiosyncratic and difficult to predict from the genotype (or by extrapolating from experimentally measured phenotypes of genotypes randomly sampled from the huge space of possible genotypes). Another interesting question is the sparsity of such high-order interactions: if they exist but mostly depend on a small number of identifiable sequence sites in the background, then this would drastically reduce the complexity and idiosyncrasy relative to a landscape on which "fluidity" involves interactions among groups of all sites in the protein. A number of papers in the recent literature have addressed the topics of high-order epistasis and sparsity and have come to conflicting conclusions. This paper contributes to that body of literature with a case study of one published experimental dataset of high quality. The findings are therefore potentially significant if convincingly supported.
Validity:
In my judgment, the major conclusions of this paper are not well supported by the data. There are three major problems with the analysis.
(1) Lack of statistical tests. The authors conclude that pairwise interactions differ among backgrounds, but no statistical analysis is provided to establish that the observed differences are statistically significant, rather than being attributable to error and noise in the assay measurements. It has been established previously that the methods the authors use to estimate high-order interactions can result in inflated inferences of epistasis because of the propagation of measurement noise (see PMID 31527666 and 39261454). Error propagation can be extreme because first-order mutation effects are calculated as the difference between the measured phenotype of a single-mutant variant and the reference genotype; pairwise effects are then calculated as the difference between the measured phenotype of a double mutant and the sum of the differences described above for the single mutants. This paper claims fluidity when this latter difference itself differs when assessed in two different backgrounds. At each step of these calculations, measurement noise propagates. Because no statistical analysis is provided to evaluate whether these observed differences are greater than expected because of propagated error, the paper has not convincingly established or quantified "fluidity" in epistatic effects.
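Spelled out, and assuming independent measurement errors on each fitness value, the variances add under the sums and differences of the double mutant cycle:

```latex
\epsilon = f_{12} - f_1 - f_2 + f_0
\quad\Longrightarrow\quad
\sigma_{\epsilon}^{2} = \sigma_{f_{12}}^{2} + \sigma_{f_1}^{2} + \sigma_{f_2}^{2} + \sigma_{f_0}^{2}
```

Comparing epsilon between two backgrounds adds two such variances again, so the quantity on which "fluidity" is diagnosed can carry roughly eight measurements' worth of noise.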
(2) Arbitrary cutoffs. Many of the analyses involve assigning pairwise interactions into discrete categories, based on the magnitude and direction of the difference between the predicted and observed phenotypes for a pairwise mutant. For example, the authors categorize a pairwise interaction as positive if the apparent deviation of the phenotype from the prediction is >0.05, as negative if the deviation is <-0.05, and as no interaction if the deviation is between these cutoffs. Fluidity is diagnosed when the category for a pairwise interaction differs among backgrounds. These cutoffs are essentially arbitrary, and the effects are assigned to categories without assessing statistical significance. For example, an interaction of 0.06 in one background and 0.04 in another would be classified as fluid, but it is very plausible that such a difference would arise due to error alone. The frequency of epistatic interactions in each category as claimed in the paper, as well as the extent of fluidity across backgrounds, could therefore be systematically overestimated or underestimated, affecting the major conclusions of the study.
(3) Global nonlinearities. The analyses do not consider that apparent fluidity could be attributable to the fact that fitness measurements are bounded by a minimum (the fitness of cells carrying proteins in which DHFR is essentially nonfunctional) and a maximum (the fitness of cells in which some biological factor other than DHFR function is limiting for fitness). The data are clearly bounded; the original Papkou et al. paper states that 93% of genotypes are at the low-fitness limit at which deleterious effects no longer influence fitness. Because of this bounding, mutations that are strongly deleterious to DHFR function will have an apparently smaller effect when introduced in combination with other deleterious mutations, leading to apparent epistatic interactions; moreover, these apparent interactions will have different magnitudes if they are introduced into backgrounds that themselves differ in DHFR function/fitness, leading to apparent "fluidity" of these interactions. This is a well-established issue in the literature (see PMIDs 30037990, 28100592, 39261454). It is therefore important to adjust for these global nonlinearities before assessing interactions, but the authors have not done this.
This global nonlinearity could explain much of the fluidity claimed in this paper. It could explain the observation that epistasis does not seem to depend as much on genetic background for low-fitness backgrounds, and the latter is constant (Figure 2B and 2C): these patterns would arise simply because the effects of deleterious mutations are all epistatically masked in backgrounds that are already near the fitness minimum. It would also explain the observations in Figure 7. For background genotypes with relatively high fitness, there are two distinct peaks of fitness effects, which likely correspond to neutral mutations and deleterious mutations that bring fitness to the lower bound of measurement; as the fitness of the background declines, the deleterious mutations have a smaller effect, so the two peaks draw closer to each other, and in the lowest-fitness backgrounds, they collapse into a single unimodal distribution in which all mutations are approximately neutral (with the distribution reflecting only noise).

Global nonlinearity could also explain the apparent "binary" nature of epistasis. Sites 4 and 5 change the second amino acid, and the Papkou paper shows that only 3 amino acid states (C, D, and E) are compatible with function; all others abolish function and yield lower-bound fitness, while mutations at other sites have much weaker effects. The apparent binary nature of epistasis in Figure 5 corresponds to these effects given the nonlinearity of the fitness assay. Most mutations are close to neutral irrespective of the fitness of the background into which they are introduced: these are the "non-epistatic" mutations in the binary scheme. The mutations at sites 4 and 5 that abolish one of the beneficial mutations, however, have a strong background-dependence: they are very deleterious when introduced into a high-fitness background, but their impact shrinks as they are introduced into backgrounds with progressively lower fitness. The apparent "binary" nature of global epistasis is likely to be a simple artifact of bounding and the bimodal distribution of functional effects: neutral mutations are insensitive to background, while the magnitude of the fitness effect of deleterious mutations declines with background fitness because they are masked by the lower bound. The authors' statement is that "global epistasis often does not hold." This is not established. A more plausible conclusion is that global epistasis imposed by the phenotype limits affects all mutations, but it does so in a nonlinear fashion.
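A toy numerical sketch of how a measurement floor alone manufactures apparent epistasis; the floor and effect sizes here are invented for illustration:

```python
FLOOR = -1.0                         # lower bound of the fitness assay

def observed(true_fitness):
    return max(true_fitness, FLOOR)  # measurements cannot fall below the floor

f0, s1, s2 = 0.0, -0.8, -0.8         # two additive deleterious mutations
f1, f2 = observed(f0 + s1), observed(f0 + s2)
f12 = observed(f0 + s1 + s2)         # true additive sum -1.6 is masked at -1.0

epsilon = (f12 - f0) - ((f1 - f0) + (f2 - f0))
print(epsilon)  # 0.6: positive "epistasis" produced by the bound alone
```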
In conclusion, most of the major claims in the paper could be artifactual. Much of the claimed pairwise epistasis could be caused by measurement noise, the use of arbitrary cutoffs, and the lack of adjustment for global nonlinearity. Much of the fluidity or higher-order epistasis could be attributable to the same issues. And the apparently binary nature of global epistasis is also the expected result of this nonlinearity.
Reviewer #3 (Public review):
Summary:
The authors have studied a previously published large dataset on the fitness landscape of a 9 base-pair region of the folA gene. The objective of the paper is to understand various aspects of epistasis in this system, which the authors have achieved through detailed and computationally expensive exploration of the landscape. The authors describe epistasis in this system as "fluid", meaning that it depends sensitively on the genetic background, thereby reducing the predictability of evolution at the genetic level. However, the study also finds two robust patterns. The first is the existence of a "pivot point" for a majority of mutations, which is a fixed growth rate at which the effect of mutations switches from beneficial to deleterious (consistent with a previous study on the topic). The second is the observation that the distribution of fitness effects (DFE) of mutations is predicted quite well by the fitness of the genotype, especially for high-fitness genotypes. While the work does not offer a synthesis of the multitude of reported results, the information provided here raises interesting questions for future studies in this field.
Strengths:
A major strength of the study is its detailed and multifaceted approach, which has helped the authors tease out a number of interesting epistatic properties. The study makes a timely contribution by focusing on topical issues like the prevalence of global epistasis, the existence of pivot points, and the dependence of DFE on the background genotype and its fitness. The methodology is presented in a largely transparent manner, which makes it easy to interpret and evaluate the results.
The authors have classified pairwise epistasis into six types and found that the type of epistasis changes depending on background mutations. Switches happen more frequently for mutations at functionally important sites. Interestingly, the authors find that even synonymous mutations in stop codons can alter the epistatic interaction between mutations in other codons. Consistent with these observations of "fluidity", the study reports limited instances of global epistasis (which predicts a simple linear relationship between the size of a mutational effect and the fitness of the genetic background in which it occurs). Overall, the work presents some evidence for the genetic context-dependent nature of epistasis in this system.
Weaknesses:
Despite the wealth of information provided by the study, there are some shortcomings of the paper which must be mentioned.
(1) In the Significance Statement, the authors say that the "fluid" nature of epistasis is a previously unknown property. This is not accurate. What the authors describe as "fluidity" is essentially the prevalence of certain forms of higher-order epistasis (i.e., epistasis beyond pairwise mutational interactions). The existence of higher-order epistasis is a well-known feature of many landscapes. For example, in an early work (Szendro et al., J. Stat. Mech., 2013), the presence of a significant degree of higher-order epistasis was reported for a number of empirical fitness landscapes. Likewise, Weinreich et al. (Curr. Opin. Genet. Dev., 2013) analysed several fitness landscapes and found that higher-order epistatic terms were on average larger than the pairwise terms in nearly all cases. They further showed that ignoring higher-order epistasis leads to a significant overestimate of accessible evolutionary paths. The literature on higher-order epistasis has grown substantially since these early works. Any future versions of the present preprint will benefit from a more thorough contextual discussion of the literature on higher-order epistasis.
(2) In the paper, the term 'sign epistasis' is used in a way that differs from its well-established meaning. (Pairwise) sign epistasis, in its standard usage, is said to occur when the effect of a mutation switches from beneficial to deleterious (or vice versa) when a mutation occurs at a different locus. The authors require a stronger condition, namely that the sum of the individual effects of two mutations should have the opposite sign from their joint effect. This is a sufficient condition for sign epistasis, but not a necessary one (see the first sketch after this list). The property studied by the authors is important in its own right, but it is not equivalent to sign epistasis.
(3) The authors have looked for global epistasis in all 108 (9x12) mutations, out of which only 16 showed a correlation of R^2 > 0.4. Of these 16 mutations, 14 were at the functionally important nucleotide positions. Based on this, the authors conclude that global epistasis is rare in this landscape, and further, that mutations in this landscape can be classified into one of two binary states: those that exhibit global epistasis (a small minority) and those that do not (the majority). I suspect, however, that a biologically significant binary classification based on these data may be premature. Unsurprisingly, mutational effects are stronger at the functional sites, as seen in Figure 5 and Figure 2, which means that even if global epistasis is present for all mutations, a statistical signal will be more easily detected for the functionally important sites (see the regression sketch after this list). Indeed, the authors show that the means of DFEs decrease linearly with background fitness, which hints at the possibility that a weak global epistatic effect may be present (though hard to detect) in the individual mutations. Given the high importance of the phenomenon of global epistasis, it pays to be cautious in interpreting these results.
(4) The study reports that synonymous mutations frequently change the nature of epistasis between mutations in other codons. However, it is unclear whether this should be surprising, because, as the authors have already noted, synonymous mutations can have an impact on cellular functions. The reader may wonder if the synonymous mutations that cause changes in epistatic interactions in a certain background also tend to be non-neutral in that background. Unfortunately, the fitness effect of synonymous mutations has not been reported in the paper.
(5) The authors find that DFEs of high-fitness genotypes tend to depend only on fitness and not on genetic composition. This is an intriguing observation, but unfortunately, the authors do not provide any possible explanation or connect it to theoretical literature. I am reminded of work by (Agarwala and Fisher, Theor. Popul. Biol., 2019) as well as (Reddy and Desai, eLife, 2023) where conditions under which the DFE depends only on the fitness have been derived. Any discussion of possible connections to these works could be a useful addition.
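Two of the points above lend themselves to small illustrations. For point (2), a minimal sketch with hypothetical fitness values, chosen only to show the logical gap between the two conditions (not drawn from the data):

```python
# Hypothetical fitness values for genotypes 00 (wild type), 10, 01, 11.

def standard_sign_epistasis(f00, f10, f01, f11):
    """Standard usage: some mutation's effect changes sign across backgrounds."""
    a_flips = (f10 - f00) * (f11 - f01) < 0  # effect of mutation A flips sign
    b_flips = (f01 - f00) * (f11 - f10) < 0  # effect of mutation B flips sign
    return a_flips or b_flips

def stronger_condition(f00, f10, f01, f11):
    """Condition used in the paper: the sum of the single effects and the
    joint effect have opposite signs."""
    return ((f10 - f00) + (f01 - f00)) * (f11 - f00) < 0

f00, f10, f01, f11 = 1.0, 1.2, 1.1, 1.05
print(standard_sign_epistasis(f00, f10, f01, f11))  # True: A is beneficial alone
                                                    # but deleterious alongside B
print(stronger_condition(f00, f10, f01, f11))       # False: 0.3 and 0.05 have
                                                    # the same sign
```

For point (3), the screen described can be sketched as a per-mutation linear regression of the fitness effect on background fitness (inputs are hypothetical arrays; the authors' exact pipeline may differ). A weak but real global trend, meaning a small slope with large measurement noise, would fall below the R^2 > 0.4 cut-off even though global epistasis is present:

```python
import numpy as np

def global_epistasis_r2(background_fitness, mutant_fitness):
    """R^2 of the linear fit effect ~ background fitness, for one mutation.

    Arrays run over all backgrounds in which the mutation can be introduced.
    """
    x = np.asarray(background_fitness, dtype=float)
    effect = np.asarray(mutant_fitness, dtype=float) - x
    slope, intercept = np.polyfit(x, effect, 1)
    residuals = effect - (intercept + slope * x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((effect - effect.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# The screen keeps mutations with global_epistasis_r2(...) > 0.4.
```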
Author response:
Thank you for sharing a detailed review of our manuscript titled "Variations and predictability of epistasis on an intragenic fitness landscape". We have now carefully gone through the reviewers' and the editor's comments and have the following preliminary responses.
(1) Measurement noise in the folA fitness landscape. All three reviewers and the editors raise the important matter of incorporating measurement noise into the fitness landscape. The paper by Papkou and coworkers measured the fitness landscape in six independent repeats. They show that the fitness data are highly correlated across repeats, and use the weighted mean of the repeats to report their results. They do not study how measurement noise influences their findings. The results by Papkou and coworkers were our starting point, and hence we built on the landscape properties reported in their study. As a result, we also analyse our results working with the same mean of the six independent measurements.
The main result of the work by Papkou and coworkers is that the largest subgraph in the landscape has 514 fitness peaks.
We revisit this result by quantifying how measurement noise changes this number. In doing so, we find that the subgraph contains only 127 statistically significant peaks. We define a sequence as a peak when its fitness is greater than that of all its one-distance neighbours with a p-value < 0.05. This shows that, as pointed out in the reviews, incorporating noise into the landscape significantly changes how we view the landscape, a facet not included in Papkou et al. or in the current version of our manuscript.
Not incorporating measurement noise means that the entire landscape has 4055 peaks. When measurement noise is included in the analysis, this number reduces to 137, of which 136 are high-fitness (functional) backgrounds.
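As a sketch, this peak criterion could be implemented along the following lines, assuming per-genotype replicate fitness values and, for illustration, a one-sided Welch t-test (the exact test in our revised analysis may differ):

```python
from scipy import stats

ALPHABET = "ACGT"

def neighbours(seq):
    """All one-mutation neighbours of a nucleotide sequence."""
    for i, base in enumerate(seq):
        for alt in ALPHABET:
            if alt != base:
                yield seq[:i] + alt + seq[i + 1:]

def is_significant_peak(seq, replicates, alpha=0.05):
    """True if seq's fitness exceeds every 1-neighbour's with p < alpha.

    replicates: dict mapping sequence -> list of replicate fitness values
    (six repeats in the Papkou et al. data).
    """
    focal = replicates[seq]
    for nb in neighbours(seq):
        if nb not in replicates:
            continue
        t, p_two = stats.ttest_ind(focal, replicates[nb], equal_var=False)
        p_one = p_two / 2 if t > 0 else 1 - p_two / 2  # one-sided: focal > nb
        if p_one >= alpha:
            return False
    return True
```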
In the revised version of our manuscript, we will incorporate measurement noise in our analysis. Through this, we will also address the concern regarding the use of an arbitrary cut-off to study "fluid" epistasis. However, we note that arbitrary cut-offs to define DFEs have recently been used (Sane et al., PNAS, 2023).
We also note that previous work with large-scale landscapes (Wu et al., eLife, 2016) reported a fitness landscape based on a single experiment, with no repeats.
(2) Global nonlinearities and higher-order interactions leading to fluid epistasis. Attempts at building models for higher-order epistasis from empirical data have largely been confined to landscapes of limited size. For example, Sailer & Harms (Genetics, 2017) propose models for higher-order epistasis from seven empirical data sets, each with fewer than 100 data points. Another recent attempt (Park et al., Nat Comm, 2024) proposes rules for protein structure-function relationships using 20 fitness landscapes. In that study, only one landscape that used fitness as a phenotype had ~160,000 data points (of which only 42% were included for analysis). All other data sets that used fitness as a phenotype contained fewer than 10,000 data points. While these statistical proposals of how higher-order epistasis operates exist, none of them rely on a large-scale, exhaustive network like the one produced by Papkou and coworkers.
In the revised manuscript, we will replace our arbitrary cut-off with statistical tests based on measurement noise.
Global non-linearities shape evolutionary responses. We would like to emphasize that the goal of this work is to study and understand how these global non-linearities, as the sum total of these fundamental factors, shape statistical patterns on a large fitness landscape.
While we understand that we may not have sufficiently explained the effects of global non-linearities on our results, we do not agree with the reviewer's conclusion that our results are artifacts of these non-linearities. We will expand on the role of these nonlinearities in the patterns that we observe (such as fitness being bounded, as pointed out by reviewer 2, or the differential impact of a mutation in functional vs. non-functional variants).
We also speculate that replacing our arbitrary cut-off (selection coefficient of 0.05) with noise-based statistical tests will not alter our results qualitatively.
The question we address in our work is, therefore, how the nature of epistasis changes with genetic background over a large, exhaustive landscape. The nature of epistasis between two mutations is analysed in all 4^7 backgrounds. The causative agents for a change in epistasis will be context-dependent, depending on the precise nature of the two mutations and the background. For instance, a certain background might simply introduce a stop codon into the sequence. Notwithstanding these precise, local mechanistic explanations, we seek to answer how epistasis changes statistically across sequences. Investigating statistical patterns that explain switches in the nature of epistasis in deep, exhaustive landscapes is a long-term goal of this research.
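To illustrate the scale of this background scan, a minimal sketch (assuming, for illustration only, that the landscape is stored as a mapping from 9-mers to fitness values rather than our actual data structures):

```python
def pairwise_epistasis(fitness, bg, i, mut_i, j, mut_j):
    """Epistasis between mutations (i -> mut_i) and (j -> mut_j) on background bg.

    fitness: dict mapping each 9-mer to its measured fitness; bg carries the
    wild-type states at positions i and j.
    """
    def swap(seq, pos, base):
        return seq[:pos] + base + seq[pos + 1:]

    f00 = fitness[bg]
    f10 = fitness[swap(bg, i, mut_i)]
    f01 = fitness[swap(bg, j, mut_j)]
    f11 = fitness[swap(swap(bg, i, mut_i), j, mut_j)]
    return f11 - f10 - f01 + f00

# Repeating this for one mutation pair over all 4^7 = 16,384 backgrounds at
# the remaining seven positions, and recording how the sign and magnitude
# change, is the background scan described above.
```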
(3) Last, in our revised manuscript, we will address the reviewers’ other minor comments on the various aspects of the manuscript.
This take on the Tea Party as a kabuki dance entirely manipulated from above simply cannot do justice to the volunteer engagement of many thousands of men and women who travel to rallies with their homemade signs and, even more remarkably, have formed ongoing, regularly meeting local Tea Party groups.
Great EV from the author, kabuki dance means political theatre
confirming the original belief.
you can use your thinking patterns to break this cycle
Encourage teachers to feel empathy rather than pity; kids will appreciate your ability to know what it’s like to be in their shoes. Establish a school culture of caring, not of giving up. You can help foster such a culture by speaking respectfully, not condescendingly, of and to your student population, and by using positive affirmations, both vocally and through displays and posters
The way that this part moves the emphasis from sympathy to true empathy is excellent. It is empowering because it serves as a reminder to educators that showing sympathy should never equate to reducing standards. Reading this gives me hope because schools can actually change things by prioritizing empathy over guilt.
Instead, poor children often feel isolated and unloved, feelings that kick off a downward spiral of unhappy life events, including poor academic performance, behavioral problems, dropping out of school, and drug abuse. These events tend to rule out college as an option and perpetuate the cycle of poverty
This section demonstrates the profound emotional and social difficulties that impoverished children face. It's heartbreaking to see how a child's confidence and hope may be destroyed by a lack of support and ongoing stress. It reminds me how unfair it is that circumstances beyond their control mold their future.
In other words, one problem created by poverty begets another, which in turn contributes to another, leading to a seemingly endless cascade of deleterious consequences.
The progressive nature of poverty and the way that interrelated risk factors can keep people in a difficult situation are both well illustrated in this section. The problem is made to feel real and relatable by using the example of a young person who sustains long-term consequences from a single accident. You can think about highlighting how this cycle is maintained by structural obstacles rather than personal shortcomings.
focus on the issue, not the person.
attacking the person will only create more conflict
Conflict styles.
each has its own uses, and can be the best approach in various circumstances
Styles of Conflict Management
styles probably depend on the individual's goals and values, which could result in a disconnect in resolution patterns
Denies feeling hopeless, helpless, worthless.
Such assessment can help explain the release of the patient.
Head CT scan without contrast: no intracranial hemorrhage or any fracture. CT cervical without contrast: no fracture
Tone and Style: The tone and style are technical, as the document is composed of incomplete sentences, with non-emotional facts, and no obvious errors. The sentences are short, allowing for efficient reading of the document, including only the necessary information.
declined medications
Conventions: The document repeats the information that the patient declined medication. This information is important for follow-up care, especially at a different facility, or for lawyers in the case of a dispute.
admitted involuntary on 72-hour hold for danger to self.
Legal importance; displays the use of the legal holding of the patient.
Case was discussed with the patient, patient’s nurse, social worker, the treatment team did not find patient is eligible for involuntary
Audience: The audience is directed towards healthcare workers, particularly those who are providing the follow-up care. Additionally, in the rare case of disputes with the medical care provided, the audience could be for lawyers, either those of the hospital or of the patient.
DISCHARGE INSTRUCTION
Conventions: The capitalized headings are repeated for an organized structure within the document, separating and identifying the relevant information presented. This allows for the efficient reading of the important information from the patient's visit at the healthcare facility.
no substance abuse problem and this is his first inpatient psychiatric hospitalization.
Medical history that is relevant to the current condition of the patient.
1. Client name or identifier is present on the progress note.
2. The diagnosis is indicated.
3. The progress note supports the code billed. Time is indicated on the progress note.
4. Provider identifier is present on the progress note.
Purpose: The purpose of the discharge summary is to summarize the patient's condition, assessments, stay, reasons for discharge, and discharge process. This summary of the visit to the medical facility serves as a precedent for additional care, documents the medication the patient was prescribed and receiving, ensures the hospital/facility is not liable if the patient attempts to dispute the care, and serves as documentation for insurance purposes.
REASON FOR HOSPITALIZATION
Structure: The discharge summary is organized by headings that separate the relevant information required for additional care or liability disputes. Underneath are short sentences that explain every important step in the patient's process throughout their stay at the hospital or healthcare facility, and the plan for their medication afterwards.
Gossip/Backbiting
can tear down a group from the inside out and create cliques that divide the group
Address Problems Early
can help to prevent bad habits from forming in the first place
may result in confusion or inefficiency
members must be experienced and well-versed in their work already. Also, the assignment should probably have some room for error in case of miscommunication/inconsistency
radiation
Not electromagnetic radiation, per se, but charged particles, the protons, electrons and other ions that get lofted from the surface of the Sun, sometimes explosively so.
Activity 1
are we supposed to comment here or in some "evolve guide"? What is the Evolve Guide? Where does it exist? I am a luddite in a hellscape of technology.
I don't know how to use this thing...what is this? Is this a comment? When it says "public" is it really public or is it only the people in this group that can see it? Is this annotation associated with some part of the page (is pressbooks a webpage? What is it? an application? an add on? an extension? a platform?)...I feel like this comment is just going to disappear into the ether and have no response or point, like an email over break, lost to the void. Speak void!
Our Heritage in B2B Delivery GFS was founded to serve the complex needs of B2B businesses at a time when eCommerce was almost non-existent — giving us a unique heritage in complex, high-volume parcel delivery that’s built for the demands of modern business. Now with decades of experience behind us, we understand the operational pressures, service expectations and commercial realities that B2B businesses face. It’s this deep-rooted expertise that enables us to deliver smart, scalable delivery solutions built specifically for the pace and complexity of B2B commerce today.
Can we tweak this section to include shipping here as part of the GFS heritage
Suggestion: "now with decades of shipping and eCommerce experience behind us"
Why Choose GFS for Your B2B Deliveries?
we want to add the term Shipping to the content in a powerful place - can we use this for b2b deliveries and shipments?
I think that religious superiority, religious supremacy is in some ways just because of the numbers a bigger problem even than white supremacy
for - quote - religious supremacy - religious is, in some ways, just because of the numbers a bigger problem than white supremacy - Jenny Gage
On the other hand, social media use can impact negatively on wellbeing and mental health, damaging self-esteem through experience of judgement, attention to markers of popularity, and appearance comparison. Posting without considering privacy or appropriateness, and ‘stress’ posting could have negative longer term consequences. Teenagers highlighted concerns about social media impacting negatively on their real life relationships, and causing anxiety and sleep disruption. Teenagers also reported the well-documented negative consequences of social media use of cyberbullying, online exclusion, and the impact of viewing distressing content. Despite the strong association between cyberbullying and face-to-face bullying, research has argued that the anonymous nature of cyberbullying enables more extreme levels of victimisation, and its repetitive nature has a more intense impact on the individual
this is some of the mixed evidence
open communication and deeper connection.
it is very important to have good communication in relationships; that is the foundation of a healthy, lasting relationship
appreciative listening
reserved for things you enjoy/support/believe in. the most "fun" type
mirror the speaker’s emotions
should be comforting to the speaker
Punctuality is highly valued, and efficiency in completing tasks takes precedence
This is generally the American approach to time
A firm handshake at the beginning of a business meeting can nonverbally convey professionalism and confidence
strength of a handshake is very important, and can tell a lot about someone's perceived character
eLife Assessment
This valuable study introduces the peptidisc-TPP approach as a promising solution to challenges in membrane proteomics, enabling thermal proteome profiling in a detergent-free system. The concept is innovative and holds significant potential, and the demonstration of its utility and validation is solid. The method presents a strong foundation for broader applications in identifying physiologically and pharmacologically relevant membrane protein-ligand interactions.
Reviewer #1 (Public review):
Summary:
The idea is appealing, but the authors have not sufficiently demonstrated the utility of this approach.
Strengths:
Novelty of the approach, potential implications for discovering novel interactions
Comments on revisions:
The authors have adequately addressed most of my concerns in this improved version of the manuscript
Reviewer #2 (Public review):
Summary:
The membrane mimetic thermal proteome profiling (MM-TPP) presented by Jandu et al. promises a useful way to minimize the interference of detergents in efficient mass spectrometry analysis of membrane proteins. Thermal proteome profiling is a mass spectrometric method that measures binding of a drug to different proteins in a cell lysate by monitoring thermal stabilization of the proteins because of the interaction with the ligands that are being studied. This method has been underexplored for membrane proteome because of the inefficient mass spectrometric detection of membrane proteins and because of the interference from detergents that are used often for membrane protein solubilization.
Strengths:
In this report, the binding of ligands to membrane protein targets has been monitored in crude membrane lysates or tissue homogenates, showcasing the efficacy of the method to detect both intended and off-target binding events in a complex, physiologically relevant sample setting. The manuscript is lucidly written and the data presented seem clear. Kudos to the authors. This methodology shows immense potential for identifying membrane protein binders (small-molecule or protein) in a near-native environment, and as a result promises to be a great tool for drug discovery campaigns.
Weaknesses:
While this is a solid report and a promising tool for analyzing membrane protein drug interactions in a detergent-free environment, it is crucial to bear in mind that the process of reconstitution begins with detergent solubilization of the proteome and does not completely circumvent structural perturbations invoked by detergents.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
Summary:
The idea is appealing, but the authors have not sufficiently demonstrated the utility of this approach.
Strengths:
Novelty of the approach, potential implications for discovering novel interactions
Weaknesses:
The Duong lab introduced its highly elegant peptidisc approach several years ago. In this present work, they combine it with thermal proteome profiling (TPP) and attempt to demonstrate the utility of this combination for identifying novel membrane protein-ligand interactions.
While I find this idea intriguing, and the approach potentially useful, I do not feel that the authors had sufficiently demonstrated the utility of this approach. My main concern is that no novel interactions are identified and validated. For the presentation of any new methodology, I think this is quite necessary. In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.
We thank the reviewer for their thoughtful comments. In this revision, we have experimentally addressed the reviewer’s concerns in three ways:
(1) To demonstrate the utility of our MM-TPP method over the detergent-based TPP workflow (termed DB-TPP), we performed a side-by-side comparison using ATP–VO₄ at 51 °C (Figure 3B and Figure 4A). From the DB-TPP dataset, 7.4% of all identified proteins were annotated as ATP-binding, while 6.4% of proteins differentially stabilized were annotated as ATP-binding. In contrast, in the MM-TPP dataset, 9.3% of all identified proteins were annotated as ATP-binding proteins, while 17% of proteins differentially stabilized were annotated as ATP-binding. The lack of enrichment in the detergent-based approach indicates that the observed differences are likely stochastic, rather than a result of specific ATP–VO₄-mediated stabilization as found with MM-TPP. For instance, several key proteins—BCS1, P2RY6, SLC27A2, ABCB1, ABCC2, and ABCC9— found differentially stabilized using the MM-TPP method showed no such pattern in the DB-TPP dataset. This divergence strongly supports the specificity and utility of our Peptidisc approach.
(2) To demonstrate that MM-TPP can resolve not only the broader effects of ATP–VO₄ but also specific ligand–protein interactions, we employed 2-methylthio-ADP (2-MeS-ADP), a selective agonist of the P2RY12 receptor [PMID: 24784220]. In that case, we observed clear thermal stabilization of P2RY12, with a more than 6-fold increase in stability at both 51 °C and 57 °C (–log₁₀ p > 5.97; Figure 4B and Figure S4). Notably, no other proteins—including the structurally related but non-responsive P2RY6 receptor—showed comparable stabilization fold change at these temperatures.
(3) To further probe the reproducibility of the method, we performed an independent MM-TPP evaluation with ATP–VO₄ at 51 °C using data-independent acquisition (DIA), in contrast to the data-dependent acquisition (DDA) approach used in the initial study (Figure S5). Overall, 7.8% of all identified proteins were annotated as ATP-binding, and as before, this proportion increased to 17% among proteins with log₂ fold changes greater than 0.5. Specifically, BCS1 and SLC27A2 exhibited strong stabilization (log₂ fold change > 1), while P2RY6, ABCB11, ABCC2, and ABCG2 showed moderate stabilization (log₂ fold changes between 0.5 and 1), and consistent with previous results, P2RX4 was destabilized, with a log₂ fold change below –1. These findings support the consistency and reproducibility of the method across distinct data acquisition methods.
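As referenced under point (1), the enrichment comparisons reported above can be formalized with a one-sided Fisher exact test on the 2x2 annotation table. The sketch below uses placeholder counts, since the text reports proportions (e.g., 17% of differentially stabilized proteins versus 9.3% of all identified proteins annotated as ATP-binding):

```python
from scipy.stats import fisher_exact

def atp_binding_enrichment(n_stab, n_stab_atp, n_all, n_all_atp):
    """One-sided Fisher test: is the ATP-binding annotation over-represented
    among differentially stabilized proteins relative to the rest?

    n_stab: number of differentially stabilized proteins
    n_stab_atp: how many of those carry the ATP-binding annotation
    n_all, n_all_atp: the same counts over all identified proteins
    """
    rest = n_all - n_stab
    rest_atp = n_all_atp - n_stab_atp
    table = [[n_stab_atp, n_stab - n_stab_atp],
             [rest_atp, rest - rest_atp]]
    odds, p = fisher_exact(table, alternative="greater")
    return odds, p
```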
My main concern is that no novel interactions are identified and validated. For the presentation of any new methodology, I think this is quite necessary.
The primary objective of our study is to establish and benchmark the MM-TPP workflow using known targets, rather than to discover novel ligand–protein interactions. Identifying new binders requires extensive screening and downstream validations, which we believe is beyond the scope of this methodological report. Instead, our study highlights the sensitivity and reliability of the MM-TPP approach by demonstrating consistent and reproducible results with well-characterized interactions.
We respectfully disagree with the notion that introducing a new methodology must necessarily include the discovery of novel interactions. For instance, Martinez Molina et al. [PMID: 23828940] introduced the cellular thermal shift assay (CETSA) by validating established targets such as MetAP2 with TNP-470 and CDK2 with AZD-5438, without identifying novel protein–ligand pairs. Similarly, Kalxdorf et al. [PMID: 33398190] published their cell-surface thermal proteome profiling (CS-TPP) using Ouabain to stabilize the Na⁺/K⁺-ATPase pump in K562 cells, and SB431542 to stabilize its canonical target JAG1. In fact, when these methods revealed additional stabilizations, these were not validated but instead interpreted through reasoning grounded in the literature. For instance, they attributed the SB431542-induced stabilization of MCT1 to its reported role in cell migration and tumor invasiveness, and explained that SLC1A2 stabilization is related to the disruption of Na⁺/K⁺-ATPase activity by Ouabain. In the same way, our interpretation of ATP-VO₄–mediated stabilization of Mao-B is supported by AlphaFold-3 predictions rather than by direct orthogonal assays, which are beyond the scope of our methodological presentation.
Collectively, the influential studies cited above have set methodological precedents by prioritizing validation and proof-of-concept over merely finding uncharacterized binders. In the same spirit, our work is centred on establishing MM-TPP as a robust platform for probing membrane protein–ligand interactions in a water-soluble format. The discovery of novel binders remains an exciting future direction—one that will build upon the methodological foundation laid by the present study.
In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.
We deliberately began this study with our model protein, MsbA, examined under both native and overexpressed conditions, to establish agreement between MM-TPP (Figure 2D) and biochemical stability assays (Figure 2A). This validation has provided us with the foundation to confidently extend MM-TPP to the mouse organ proteome. To demonstrate the validity of our workflow, we used ATP-VO₄ because its expected targets are well characterized.
We note that orthogonal validation often requires overproduction and purification of the candidate proteins, as well as suitable antibodies, which is a true challenge for membrane proteins. Here, we demonstrate that MM-TPP can detect ligand-induced thermal shifts directly in native membrane preparations, without requiring protein overproduction or purification. We also emphasize several influential studies in TPP, including Martinez Molina et al. (PMID: 23828940) and Fang et al. (PMID: 34188175), which focused primarily on establishing and benchmarking the methodology, rather than on extensive orthogonal validation. In the same spirit, our study prioritizes methodological development, and accordingly, several orthogonal validations are now included in this revision.
[...] and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.
To clarify, all analyses of ligand-induced stabilization or destabilization were carried out using LFQ values. The sole exception is Figure 2B, where we used iBAQ values to depict the relative abundance of proteins within a single sample; this was to show MsbA's relative level within the E. coli peptidisc library.
Respectfully, we disagree with the assertion that we are “quantifying rather small differences in abundances using either iBAQ or LFQ.” We were able to clearly distinguish between stabilizations driven by specific ligands binding to their targets versus those caused by non-specific ligands with broader activity. This is further confirmed by comparing 2-MeS-ADP, a selective ligand for P2RY12, with ATP-VO₄, a highly promiscuous ligand, and AMP-PNP, which exhibits intermediate breadth. When tested in triplicate at 51 °C, 2-MeS-ADP significantly altered the thermal stability of 27 proteins, AMP-PNP 44 proteins, and ATP-VO₄ 230 proteins, consistent with the expectation that broader ligands stabilize more proteins nonspecifically. Importantly, 2-MeS-ADP produced markedly stronger stabilization of its intended target, P2RY12 (–log₁₀p = 9.32), than the top stabilized proteins for ATP–VO₄ (DNAJB3, –log₁₀p = 5.87) or AMP-PNP (FTH1, –log₁₀p = 5.34). Moreover, 2-MeS-ADP did not significantly stabilize proteins that were consistently stabilized by the broad ligands, such as SLC27A2, which was strongly stabilized by both ATP-VO₄ and AMP-PNP (–log₁₀p > 2.5). Together, these findings demonstrate that MM-TPP can robustly distinguish between broad-spectrum and target-specific ligands, with selective ligands inducing stronger and more physiologically meaningful stabilization at their intended targets compared to promiscuous ligands.
Finally, we emphasize that our findings are not marginal, but meet quantitative and statistical rigor consistent with best practices in proteomics. We apply dual thresholds combining effect size (|log₂FC| ≥ 1, i.e., at least a two-fold change) with statistical significance (FDR-adjusted p ≤ 0.05)—criteria commonly used in proteomics methodology studies (e.g., PMID: 24942700, 38724498). Moreover, the stabilization and destabilization events we report are reproducible across biological replicates (n = 3), consistent across adjacent temperatures for most targets, and technically robust across acquisition modes (DDA vs. DIA). Taken together, these results reflect statistically valid and biologically meaningful effects, fully aligned with standards set by prior published proteomics studies.
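For illustration, the dual-threshold criterion above can be sketched as follows (a generic Benjamini-Hochberg implementation shown for clarity; actual analyses would typically rely on established library routines):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """BH-adjusted p-values (q-values) with the usual monotonicity correction."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / (np.arange(n) + 1)           # p_(i) * n / rank
    q = np.minimum.accumulate(scaled[::-1])[::-1]        # enforce monotonicity
    out = np.empty(n)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

def call_hits(log2_fc, pvals, fc_cut=1.0, q_cut=0.05):
    """Dual thresholds described above: |log2 FC| >= 1 and FDR-adjusted p <= 0.05."""
    q = benjamini_hochberg(pvals)
    return (np.abs(np.asarray(log2_fc)) >= fc_cut) & (q <= q_cut)
```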
Furthermore, the reported changes in abundances are solely based on iBAQ or LFQ analysis. This must be supported by a more quantitative approach such as SILAC or labeled peptides. In summary, I think this story requires a stronger and broader demonstration of the ability of peptidisc-TPP to identify novel physiologically/pharmacologically relevant interactions.
With respect to labeling strategies, we deliberately avoided using TMT due to concerns about both cost and potential data quality issues. Some recent studies have documented the drawbacks of TMT in contexts directly relevant to our work. For example, a benchmarking study of LiP-MS workflows showed that although TMT increased proteome depth and reduced technical variance, it was less accurate in identifying true drug–protein interactions and produced weaker dose–response correlations compared with label-free DIA approaches [PMID: 40089063]. More broadly, technical reviews have highlighted that isobaric tagging is intrinsically prone to ratio compression and reporter-ion interference due to co-isolation and co-fragmentation of peptides, which flatten measured fold-changes and obscure biologically meaningful differences [PMID: 22580419, 22036744]. In terms of SILAC, the technique requires metabolic incorporation of heavy amino acids, which is feasible in cultured cells but not in physiologically relevant tissues such as the liver organ used here. SILAC mouse models exist, but they are expensive and time-consuming [PMID: 18662549, 21909926]. We are not a mouse lab, and introducing liver organ SILAC labeling in our workflow is beyond the scope of these revisions. We also note that several hallmark TPP studies have been successfully carried out using label-free quantification [PMID: 25278616, 26379230, 33398190, 23828940], establishing this as an accepted and widely applied approach in the field.
To further support our conclusions, we added controls showing that detergent solubilization of mouse liver membranes followed by SP4 cleanup fails to detect ATP-VO₄– mediated stabilization of ATP-binding proteins, underscoring the necessity of Peptidisc reconstitution for capturing ligand-induced thermal stabilization. We also present new data demonstrating selective stabilization of the P2Y12 receptor by its agonist 2-MeS-ADP, providing orthogonal, receptor-specific validation within the MM-TPP framework. Finally, an orthogonal DIA acquisition on separate replicates confirmed robust ATP-vanadate stabilization of ATP-binding proteins, including BCS1l and SLC27A2. Together, these additions reinforce that the observed stabilizations are genuine, physiologically relevant ligand–protein interactions and highlight the unique advantage of the Peptidisc-based workflow in capturing such events.
Cited References:
24784220: Zhang J, Zhang K, Gao ZG, et al. Agonist-bound structure of the human P2Y₁₂ receptor. Nature. 2014;509(7498):119-122. doi:10.1038/nature13288.
23828940: Martinez Molina D, Jafari R, Ignatushchenko M, et al. Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay. Science. 2013;341(6141):84-87. doi:10.1126/science.1233606.
33398190: Kalxdorf M, Günthner I, Becher I, et al. Cell surface thermal proteome profiling tracks perturbations and drug targets on the plasma membrane. Nat Methods. 2021;18(1):84-91. doi:10.1038/s41592-020-01022-1.
34188175: Fang S, Kirk PDW, Bantscheff M, Lilley KS, Crook OM. A Bayesian semi-parametric model for thermal proteome profiling. Commun Biol. 2021;4(1):810. doi:10.1038/s42003-021-02306-8.
24942700: Cox J, Hein MY, Luber CA, Paron I, Nagaraj N, Mann M. Accurate proteome-wide label-free quantification by delayed normalization and maximal peptide ratio extraction, termed MaxLFQ. Mol Cell Proteomics. 2014;13(9):2513-2526. doi:10.1074/mcp.M113.031591.
38724498: Peng H, Wang H, Kong W, Li J, Goh WWB. Optimizing differential expression analysis for proteomics data via high-performing rules and ensemble inference. Nat Commun. 2024;15(1):3922. doi:10.1038/s41467-024-47899-w.
40089063: Koudelka T, Bassot C, Piazza I. Benchmarking of quantitative proteomics workflows for limited proteolysis mass spectrometry. Mol Cell Proteomics. 2025;24(4):100945. doi:10.1016/j.mcpro.2025.100945.
22580419: Christoforou AL, Lilley KS. Isobaric tagging approaches in quantitative proteomics: the ups and downs. Anal Bioanal Chem. 2012;404(4):1029-1037. doi:10.1007/s00216-012-6012-9.
22036744: Christoforou AL, Lilley KS. Isobaric tagging approaches in quantitative proteomics: the ups and downs. Anal Bioanal Chem. 2012;404(4):1029-1037. doi:10.1007/s00216-012-6012-9.
18662549: Krüger M, Moser M, Ussar S, et al. SILAC mouse for quantitative proteomics uncovers kindlin-3 as an essential factor for red blood cell function. Cell. 2008;134(2):353-364. doi:10.1016/j.cell.2008.05.033.
21909926: Zanivan S, Krueger M, Mann M. In vivo quantitative proteomics: the SILAC mouse. Methods Mol Biol. 2012;757:435-450. doi:10.1007/978-1-61779-166-6_25.
25278616: Kalxdorf M, Becher I, Savitski MM, et al. Temperature-dependent cellular protein stability enables high-precision proteomics profiling. Nat Methods. 2015;12(12):1147-1150. doi:10.1038/nmeth.3651.
26379230: Savitski MM, Reinhard FBM, Franken H, et al. Tracking cancer drugs in living cells by thermal profiling of the proteome. Science. 2014;346(6205):1255784. doi:10.1126/science.1255784.
33452728: Leuenberger P, Ganscha S, Kahraman A, et al. Cell-wide analysis of protein thermal unfolding reveals determinants of thermostability. Science. 2017;355(6327):eaai7825. doi:10.1126/science.aai7825.
23066101: Savitski MM, Zinn N, Faelth-Savitski M, et al. Quantitative thermal proteome profiling reveals ligand interactions and thermal stability changes in cells. Nat Methods. 2013;10(12):1094-1096. doi:10.1038/nmeth.2766.
30858367: Piazza I, Kochanowski K, Cappelletti V, et al. A machine learning-based chemoproteomic approach to identify drug targets and binding sites in complex proteomes. Nat Commun. 2019;10(1):1216. doi:10.1038/s41467-019-09199-0.
Reviewer #2 (Public Review):
Summary:
The membrane mimetic thermal proteome profiling (MM-TPP) presented by Jandu et al. seems to be a useful way to minimize the interference of detergents in efficient mass spectrometry analysis of membrane proteins. Thermal proteome profiling is a mass spectrometric method that measures binding of a drug to different proteins in a cell lysate by monitoring thermal stabilization of the proteins because of the interaction with the ligands that are being studied. This method has been underexplored for membrane proteome because of the inefficient mass spectrometric detection of membrane proteins and because of the interference from detergents that are used often for membrane protein solubilization.
Strengths:
In this report, the binding of ligands to membrane protein targets has been monitored in crude membrane lysates or tissue homogenates, showcasing the efficacy of the method to detect both intended and off-target binding events in a complex, physiologically relevant sample setting.
The manuscript is lucidly written and the data presented seems clear. The only insignificant grammatical error I found was that the 'P' in the word peptidisc is not capitalized in the beginning of the methods section "MM-TPP profiling on membrane proteomes". The clear writing made it easy to understand and evaluate what has been presented. Kudos to the authors.
Weaknesses:
While this is a solid report and a promising tool for analyzing membrane protein drug interactions, addressing some of the minor caveats listed below could make it much more impactful.
The authors claim that MM-TPP is done by "completely circumventing structural perturbations invoked by detergents". This may not be entirely accurate, because before reconstitution of the membrane proteins in peptidisc, the membrane fractions are solubilized by 1% DDM. The solubilization and following centrifugation step lasts at least 45 min. It is less likely that all the structural perturbations caused by DDM to various membrane proteins and their transient interactions become completely reversed or rescued by peptidisc reconstitution.
We thank the reviewer for this insightful comment. In response, we have revised the sentence and expanded the discussion to clarify that the Peptidisc provides a complementary approach to detergent-based preparations for studying membrane proteins, preserving native lipid–protein interactions and stabilization effects that may be diminished in detergent.
To further address the structural perturbations invoked by detergents, and as already detailed to our response to Reviewer 1, we have compared the thermal profile of the Peptidisc library to the mouse liver membranes solubilized with 1% DDM, after incubation with ATP–VO₄ at 51 °C (Figure 4A). The results with the detergent extract revealed random patterns of stabilization and destabilization, with only 6.4% of differentially stabilized proteins being ATP-binding—comparable to the 7.4% observed in the background. In contrast, in the Peptidisc library, 17% of differentially stabilized proteins were ATP-binding, compared to 9.3% in the background. Thus, while Peptidisc reconstitution does not fully avoid initial detergent exposure, these findings underscore the importance of implementing Peptidisc in the TPP workflow when dealing with membrane proteins.
In the introduction, the authors make statements such as "..it is widely acknowledged that even mild detergents can disrupt protein structures and activities, leading to challenges in accurately identifying drug targets.." and "[peptidisc] libraries are instrumental in capturing and stabilizing IMPs in their functional states while preserving their interactomes and lipid allosteric modulators...'. These need to be rephrased, as it has been shown by countless studies that even with membrane protein suspended in micelles robust ligand binding assays and binding kinetics have been performed leading to physiologically relevant conclusions and identification of protein-protein and protein-ligand interactions.
We thank the reviewer for this valuable feedback and fully agree with the point raised. In response, we have revised the Introduction and conclusion to moderate the language concerning the limitations of detergent use. We now explicitly acknowledge that numerous studies have successfully used detergent micelles for ligand-binding assays and kinetic analyses, yielding physiologically relevant insights into both protein–protein and protein–ligand interactions [e.g., PMID: 22004748, 26440106, 31776188].
At the same time, we clarify that the Peptidisc method offers a complementary advantage, particularly in the context of thermal proteome profiling (TPP), which involves mass spectrometry workflows that are incompatible with detergents. In this setting, Peptidiscs facilitate the detection of ligand-binding events that may be more difficult to observe in detergent micelles.
We have reframed our discussion accordingly to present Peptidiscs not as a replacement for detergent-based methods, but rather as a complementary tool that broadens the available methodological landscape for studying membrane protein interactions.
If the method involves detergent solubilization, for example using 1% DDM, it is a bit disingenuous to argue that 'interactomes and lipid allosteric modulators' characterized by low-affinity interactions will remain intact or can be rescued upon detergent removal. Authors should discuss this or at least highlight the primary caveat of the peptidisc method of membrane protein reconstitution - which is that it begins with detergent solubilization of the proteome and does not completely circumvent structural perturbations invoked by detergents.
We would like to clarify that, in our current workflow, ligand incubation occurs after reconstitution into Peptidiscs. As such, the method is designed to circumvent the negative effects of detergent during the critical steps involving low-affinity interactions.
That said, we fully acknowledge that Peptidisc reconstitution begins with detergent solubilization (e.g., 1% DDM), and we have revised the conclusion to explicitly state this important caveat. As the reviewer correctly points out, this initial step may introduce some structural perturbations or result in the loss of weakly associated lipid modulators.
However, reconstitution into Peptidiscs rapidly restores a detergent-free environment for membrane proteins, which has been shown in our previous studies [PMID: 38577106, 38232390, 31736482, 31364989] to mitigate these effects. Specifically, we have demonstrated that time-limited DDM exposure, followed by Peptidisc reconstitution, minimizes membrane protein delipidation, enhances thermal stability, retains functionality, and preserves multi-protein assemblies.
It would also be important to test detergents that are even milder than 1% DDM and ones which are harsher than 1% DDM to show that this method of reconstitution can indeed rescue the perturbations to the structure and interactions of the membrane protein done by detergents during solubilization step.
We selected 1% DDM based on our previous work [PMID: 37295717, 39313981, 38232390], where it consistently enabled robust and reproducible solubilization for Peptidisc reconstitution. We agree that comparing milder detergents (e.g., LMNG) and harsher ones (e.g., SDC) would provide valuable insights into how detergent strength influences structural perturbations, and how effectively these can be mitigated by Peptidisc reconstitution. Preliminary data (not shown) from mouse liver membranes indicate broadly similar proteomic profiles following solubilization with DDM, LMNG, and SDC, although potential differences in functional activity or ligand binding remain to be investigated.
Based on the methods provided, it appears that the final amount of detergent in the peptidisc membrane protein library was 0.008%, which is ~150 µM. The CMC of DDM, depending on the amount of NaCl, could be between 120 and 170 µM.
While we cannot entirely rule out the presence of residual DDM (0.008%) in the raw library, its free concentration may be lower than initially estimated. This is related to the formation of mixed micelles with the amphipathic peptide scaffold, which is supplied in excess during reconstitution. These mixed micelles are subsequently removed during the ultrafiltration step. Furthermore, in related work using His-tagged Peptidiscs [PMID: 32364744], we purified the library by nickel-affinity chromatography following a 5× dilution into a detergent-free buffer. Although this purification step reduced the number of soluble proteins, the same membrane proteins were retained, suggesting that any residual detergent does not significantly interfere with Peptidisc reconstitution. Supporting this, our MM-TPP assays on purified libraries (data not shown) consistently demonstrated stabilization of ATP-binding proteins (e.g., SLC27A2, DNAJB3), indicating that the observed ligand–protein interactions result from successful incorporation into Peptidiscs.
Perhaps, to completely circumvent the perturbations from detergents, other methods of detergent-free solubilization such as using SMA polymers and SMALP reconstitution could be explored for a comparison. Moreover, a comparison of the peptidisc reconstitution with detergent-free extraction strategies, such as SMA copolymers, could lend more strength to the presented method.
We agree that detergent-free methods such as SMA polymers hold promise for membrane protein solubilization. However, in preliminary single-replicate experiments using SMA2000 at 51 °C in the presence of ATP–VO₄ (data not shown), we observed broad, non-specific stabilization effects. Of the 2,287 quantified proteins, 9.3% were annotated as ATP-binding, yet 9.9% of the 101 proteins showing a log₂ fold change >1 or <–1 were ATP-binding, indicating no meaningful enrichment. Given this lack of specificity and the limited dataset, we chose not to pursue further SMA experiments and have not included them here. However, in a recent study (https://doi.org/10.1101/2025.08.25.672181), we directly compared Peptidisc, SMA, and nanodiscs for liver membrane proteome profiling. In that work, Peptidisc outperformed both SMA and nanodiscs in detecting membrane protein dysregulation between healthy and diseased liver. By extension, we expect Peptidisc to offer superior sensitivity and specificity for detecting ligand-induced stabilization events, such as those observed here with ATP–vanadate.
Cross-verification of the identified interactions, and subsequent stabilization or destabilizations, should be demonstrated by other in vitro methods of thermal stability and ligand binding analysis using purified protein to support the efficacy of the MM-TPP method. An example cross-verification using SDS-PAGE, of the well-studied MsbA, is shown in Figure 2. In a similar fashion, other discussed targets such as, BCS1L, P2RX4, DgkA, Mao-B, and some un-annotated IMPs shown in supplementary figure 3 that display substantial stabilization or destabilization should be cross-verified.
We appreciate this suggestion and note that a similar point was raised in R1’s comment “In addition, except for MsbA, no orthogonal methods are used to support the conclusions, and the authors rely entirely on quantifying rather small differences in abundances using either iBAQ or LFQ.” We have developed a detailed response to R1 on this matter, which equally applies here.
Cited References:
35616533: Young JW, Wason IS, Zhao Z, et al. Development of a Method Combining Peptidiscs and Proteomics to Identify, Stabilize, and Purify a Detergent-Sensitive Membrane Protein Assembly. J Proteome Res. 2022;21(7):1748-1758. doi:10.1021/acs.jproteome.2c00129. PMID: 35616533.
31364989: Carlson ML, Stacey RG, Young JW, et al. Profiling the Escherichia coli membrane protein interactome captured in Peptidisc libraries. Elife. 2019;8:e46615. doi:10.7554/eLife.46615.
22004748: O'Malley MA, Helgeson ME, Wagner NJ, Robinson AS. Toward rational design of protein detergent complexes: determinants of mixed micelles that are critical for the in vitro stabilization of a G-protein coupled receptor. Biophys J. 2011;101(8):1938-1948. doi:10.1016/j.bpj.2011.09.018.
26440106: Allison TM, Reading E, Liko I, Baldwin AJ, Laganowsky A, Robinson CV. Quantifying the stabilizing effects of protein-ligand interactions in the gas phase. Nat Commun. 2015;6:8551. doi:10.1038/ncomms9551.
31776188: Beckner RL, Zoubak L, Hines KG, Gawrisch K, Yeliseev AA. Probing thermostability of detergent-solubilized CB2 receptor by parallel G protein-activation and ligand-binding assays. J Biol Chem. 2020;295(1):181-190. doi:10.1074/jbc.RA119.010696.
38577106: Jandu RS, Yu H, Zhao Z, Le HT, Kim S, Huan T, Duong van Hoa F. Capture of endogenous lipids in peptidiscs and effect on protein stability and activity. iScience. 2024;27(4):109382. doi:10.1016/j.isci.2024.109382.
38232390: Antony F, Brough Z, Zhao Z, Duong van Hoa F. Capture of the Mouse Organ Membrane Proteome Specificity in Peptidisc Libraries. J Proteome Res. 2024;23(2):857-867. doi:10.1021/acs.jproteome.3c00825.
31736482: Saville JW, Troman LA, Duong Van Hoa F. PeptiQuick, a one-step incorporation of membrane proteins into biotinylated peptidiscs for streamlined protein binding assays. J Vis Exp. 2019;(153). doi:10.3791/60661.
37295717: Zhao Z, Khurana A, Antony F, et al. A Peptidisc-Based Survey of the Plasma Membrane Proteome of a Mammalian Cell. Mol Cell Proteomics. 2023;22(8):100588. doi:10.1016/j.mcpro.2023.100588.
39313981: Antony F, Brough Z, Orangi M, Al-Seragi M, Aoki H, Babu M, Duong van Hoa F. Sensitive Profiling of Mouse Liver Membrane Proteome Dysregulation Following a High-Fat and Alcohol Diet Treatment. Proteomics. 2024;24(23-24):e202300599. doi:10.1002/pmic.202300599.
32364744: Young JW, Wason IS, Zhao Z, Rattray DG, Foster LJ, Duong Van Hoa F. His-Tagged Peptidiscs Enable Affinity Purification of the Membrane Proteome for Downstream Mass Spectrometry Analysis. J Proteome Res. 2020;19(7):2553-2562. doi:10.1021/acs.jproteome.0c00022.
32591519: The M, Käll L. Focus on the spectra that matter by clustering of quantification data in shotgun proteomics. Nat Commun. 2020;11(1):3234. doi:10.1038/s41467-020-17037-3.
33188197: Kurzawa N, Becher I, Sridharan S, et al. A computational method for detection of ligand-binding proteins from dose range thermal proteome profiles. Nat Commun. 2020;11(1):5783. doi:10.1038/s41467-020-19529-8.
26524241: Reinhard FBM, Eberhard D, Werner T, et al. Thermal proteome profiling monitors ligand interactions with cellular membrane proteins. Nat Methods. 2015;12(12):1129-1131. doi:10.1038/nmeth.3652.
23828940: Martinez Molina D, Jafari R, Ignatushchenko M, et al. Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay. Science. 2013;341(6141):84-87. doi:10.1126/science.1233606.
32133759: Mateus A, Kurzawa N, Becher I, et al. Thermal proteome profiling for interrogating protein interactions. Mol Syst Biol. 2020;16(3):e9232. doi:10.15252/msb.20199232.
14755328: Dorsam RT, Kunapuli SP. Central role of the P2Y12 receptor in platelet activation. J Clin Invest. 2004;113(3):340-345. doi:10.1172/JCI20986.
Reviewer #1 (Recommendations for the authors):
“The authors use iBAC or LFQ to compare across samples. This inconsistency is puzzling. As far as I know, LFQ should always be used when comparing across samples”
As mentioned above, we use iBAQ only in Fig. 2B to illustrate within-sample relative abundance; all comparative analyses elsewhere use LFQ. We have updated the Fig. 2B legend to state this explicitly.
We used iBAQ in Fig. 2B as it provides a notion of protein abundance within a sample, normalizing the summed peptide intensities by the number of theoretically observable peptides. This normalization facilitates comparisons between proteins within the same sample, offering a clearer understanding of their relative molar proportions [PMID: 33452728]. LFQ, by contrast, is optimized for comparing the same protein across different samples. It achieves this by performing delayed normalization to reduce run-to-run variability and by applying maximal peptide ratio extraction, which integrates pairwise peptide intensity ratios across all samples to build a consistent protein-level quantification matrix [PMID: 24942700]. These features make LFQ more robust to missing values and technical variation, thereby enabling accurate detection of relative abundance changes in the same protein under different experimental conditions. This distinction is well supported by the proteomics literature: Smits et al. [PMID: 23066101] used iBAQ specifically to determine the relative abundance of proteins within one sample, whereas LFQ was applied for comparative analyses between conditions.
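Schematically, the distinction reduces to the following sketch (illustrative only; in practice both quantities are computed by the search software):

```python
import numpy as np

def ibaq(summed_peptide_intensity, n_theoretical_peptides):
    """iBAQ: summed peptide intensity normalized by the number of theoretically
    observable peptides, a within-sample proxy for relative molar abundance."""
    return np.asarray(summed_peptide_intensity) / np.asarray(n_theoretical_peptides)

# iBAQ: compare different proteins within one sample (e.g., Fig. 2B).
# LFQ: compare the same protein across samples/conditions, after delayed
# normalization and maximal peptide ratio extraction (MaxLFQ).
```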
“[Regarding Figure 2A] Why does the control also contain ATP-vanadate? Also, I am not aware of a commercially available chemical "ATP-VO4". I assume this is a mistake”
The control condition in Figure 2A was mislabeled, and the figure has been corrected to remove this discrepancy. In our experiments, ATP and orthovanadate (VO₄) were added together, and for simplicity this was annotated as “ATP-VO₄.”
“[Regarding Figure 2B] What is the fold change in MsbA iBAQ values? It seems that the differences are quite small, and as such require a more quantitative approach than iBAQ (e.g SILAC or some other internal standard). In addition, what information does this panel add relative to 2C”
The figure has been updated to clarify that the values shown are log₂-transformed iBAQ intensities. Figures 2B and 2C are complementary: Figure 2B shows that in the control sample, MsbA’s peptide abundance decreases with temperature (51, 56, and 61 °C) relative to the remaining bulk proteins. Figure 2C shows the specific thermal profiles of MsbA in control and ATP–vanadate conditions. To make this clearer, we have added a sentence to the Results section explaining the specific role of Figure 2B.
Together, these panels indicate that the method can identify ligand-induced stabilization even for proteins whose abundance decreases faster than the bulk during the TPP assay. We have provided the rationale for not using SILAC or TMT labeling in our public response.
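As an illustration of the comparison underlying Figures 2B and 2C, the sketch below computes per-temperature log₂ ratios of LFQ intensities between ligand and control conditions for one protein; the values are invented for demonstration, and positive ratios at elevated temperatures would indicate ligand-induced thermal stabilization:

```python
import math

# Hypothetical LFQ intensities for one protein across the TPP temperature
# gradient; a real analysis would use replicate-averaged LFQ values.
temperatures = [38, 46, 51, 56, 61]                 # deg C
lfq_control  = [1.0e8, 9.2e7, 4.1e7, 1.5e7, 6.0e6]
lfq_ligand   = [1.0e8, 9.5e7, 6.3e7, 3.4e7, 1.6e7]

for t, c, l in zip(temperatures, lfq_control, lfq_ligand):
    print(f"{t} degC  log2(ligand/control) = {math.log2(l / c):+.2f}")
```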
“[Regarding Figure 2C] Although not mentioned in the legend, I assume this is iBAQ quantification, which as mentioned above isn't accurate enough for such small differences. In addition, I find this data confusing: why is MsbA more stable at the lower temperatures in the absence of ATP-vanadate? The smoothed-line representation is misleading, certainly given the low number of data points”
The data presented represent LFQ values for MsbA, and we have updated the figure legend to clearly indicate this. Additionally, as suggested, we have removed the smoothing line to more accurately reflect the data. Regarding the reviewer’s concern about stability at lower temperatures, we note that MsbA exhibits comparable abundance at 38 °C and 46 °C under both conditions, with overlapping error bars. We therefore interpret these data as indicating no significant difference in stability at the lower temperatures, with ligand-dependent stabilization becoming apparent only at elevated temperatures. We do not exclude the possibility that MsbA stability at these temperatures is affected by the conformational dynamics of this ABC transporter upon ATP binding and hydrolysis.
“[Regarding Figure 3A] is this raw LFQ data? Why did the authors suddenly change from iBAQ to LFQ? I find this inconsistency puzzling”
To clarify, all analyses of protein stabilization or destabilization presented in the manuscript are based on LFQ values. The only instance where iBAQ was used is Figure 2B, where it served to illustrate the relative peptide abundance of MsbA within the same sample. We have revised the figure legends and text to make this distinction explicit and ensure consistency in presentation.
“[Regarding Figure 3B] The non-specific ATP-dependent stabilization increases the likelihood of false positive hits. This limitation is not mentioned by the authors. I think it is important to show other small molecules, in addition to ATP. The authors suggest that their approach is highly relevant for drug screening. Therefore, a good choice is to test an effect of a known stabilizing drug (eg VX-809 and CFTR)”
We thank the reviewer for this suggestion. As noted in the manuscript (results and discussion sections), ATP is a natural hydrotrope and is therefore expected to induce broad, non-specific stabilization effects, a phenomenon also observed in previous proteome-wide studies, which demonstrated ATP’s widespread influence on cytosolic protein solubility and thermal stability (PMID: 30858367). To demonstrate that MM-TPP can resolve specific ligand–protein interactions beyond these global ATP effects, we tested 2-methylthio-ADP (2-MeS-ADP), a selective agonist of P2RY12 (PMID: 14755328). In these experiments, we observed robust and reproducible stabilization of P2RY12 at both 51 °C and 57 °C, with no consistent stabilization of unrelated proteins across temperatures. This provides direct evidence that our workflow can distinguish specific from non-specific ligand-induced effects. We selected 2-MeS-ADP due to its structural stability and higher receptor affinity than ADP, allowing us to extend our existing workflow while testing a receptor-specific interaction. We agree that extending this approach to clinically relevant small-molecule drugs, such as VX-809 with CFTR, would further underscore the pharmacological potential of MM-TPP, and we have now noted this as an important avenue for future studies.
“X axis of Figure 3B: Log 2 fold difference of what? iBAQ? LFQ? Similar ambiguity regarding the Y axis of 3E. What peptide? And why the constant changes in estimating abundances?”
We thank the reviewer for pointing out these inaccuracies in the figure annotations. As mentioned above, all analyses (except Figure 2B) are based on LFQ values. We have revised the figure legends and text to make this clear.
In Figure 3E, “peptide intensity” refers to log2 LFQ peptide intensities derived from the BCS1L protein, as indicated in the figure caption.
“The authors suggest that P2RY6 and P2RY12 are stabilized by ADP, the hydrolysis product of ATP. Currently, the support for this suggestion is highly indirect. To support this claim, the authors need to directly show the effect of ADP. In reference to the alpha fold results shown in Figure 4D, the authors state that "Collectively, these data highlight the ability of MM-TPP to detect the side effects of parent compounds, an important consideration for drug development". To support this claim, it is necessary to show that Mao-B is indeed best stabilized with ADP or AMP, rather than ATP.”
In this revision, we chose not to test ADP directly, as it is a broadly binding, relatively weak ligand that would likely stabilize many proteins without revealing clear target-specific effects. Since we had already evaluated ATP-VO₄, a similarly broad, non-specific ligand, additional testing with ADP would provide limited additional insight. Instead, we prioritized 2-methylthio-ADP, a selective agonist of P2RY12, to more effectively demonstrate the specificity of MM-TPP. With this ligand, we observed clear and reproducible stabilization of P2RY12, underscoring the ability of MM-TPP to resolve receptor–ligand interactions beyond ATP’s broad hydrotropic effects. Importantly, and as expected, we did not observe stabilization of the related purinergic receptor P2RY6, further supporting the specificity of the observed effect.
We have also revised the AlphaFold-related statement in Figure 4D to adopt a more cautious tone: “Collectively, these data suggest that MM-TPP may detect potential side effects of parent compounds, an important consideration for drug development.” In this context, we use AlphaFold not as a validation tool, but rather as a structural aid to help rationalize why certain off-target proteins (e.g., ATP with Mao-B) exhibit stabilization.
Reviewer #2 (Recommendations for the authors):
“In the main text, it will be useful to include the unique peptides table of at least the targets discussed in the manuscript. For example, in presence of AMP-PNP at 51 °C P2RY6 shows 4-6 peptides in all n=3 positive & negative ionization modes. But, for P2RY12 only 1-3 peptides were observed. Depending on the sequence length and the relative abundance in the cell of a protein of interest, the number of peptides observed could vary a lot per protein. Given the unique peptide abundance reported in the supplementary file, for various proteins in different conditions, it appears the threshold of observation of two unique peptides for a protein to be analyzed seems less stringent.”
By applying a filter requiring at least two unique peptides in at least one replicate, we exclude, on average, 15–20% of the total identified proteins. We consider this a reasonable level of stringency that balances confidence in protein identification with the retention of relevant data. This threshold was selected because it aligns with established LC-MS/MS data analysis practices (PMID: 32591519, 33188197, 26524241), and we have included these references in the Methods section to justify our approach. We have also included Supplemental Table 2 in this revision, showing the unique peptide counts for proteins highlighted in this study.
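For clarity, the filtering rule can be expressed in a few lines; the table below is hypothetical and only illustrates the criterion of at least two unique peptides in at least one replicate:

```python
import pandas as pd

# Hypothetical unique-peptide counts per protein (rows) per replicate (columns).
counts = pd.DataFrame(
    {"rep1": [5, 1, 0], "rep2": [4, 1, 2], "rep3": [6, 0, 1]},
    index=["P2RY12", "ProtA", "ProtB"],
)

# Keep a protein if it has >= 2 unique peptides in at least one replicate.
keep = (counts >= 2).any(axis=1)
print(counts[keep])  # P2RY12 and ProtB pass; ProtA is excluded
```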
“It appears that the time of heat treatment for peptidisc library subjected to MM-TPP profiling was chosen as 3 min based on the results presented in Supplementary Figure 1A, especially the loss of MsbA observed in 1% DDM after 3 min heat perturbation. However, when reconstituted in peptidisc there seems to be no loss in MsbA even after 12 mins at 45 °C. So, perhaps a longer heat treatment would be a more efficient perturbation.”
Previous studies indicate that heat exposure of 3–5 minutes is optimal for visualizing protein denaturation (PMID: 23828940, 32133759). We have added a statement to the Results section to justify our choice of heat exposure. Although MsbA remains stable at 45 °C for extended periods, higher temperatures allow for more effective perturbation to reveal destabilization. Supplementary Figure 1A specifically illustrates MsbA instability in detergent environments.
“Some of the stabilized temperatures listed in Table 1 are a bit confusing. For example, ABCC3 and ABCG2. In the case of ABCC3 stabilization was observed at 51 °C and 60 °C, but 56 °C is not mentioned. In the same way, 51 °C is not mentioned for ABCG2. You would expect protein to be stabilized at 56 °C if it is stabilized at both 51 °C and 60 °C. So, it is unclear if the stabilizations were not monitored for these proteins at the missing temperatures in the table or if no peptides could be recorded at these temperatures as in the case of P2RX4 at 60 °C in Figure 4C.”
Both scenarios are represented in our data. For some proteins, like ABCG2, sufficient peptide coverage was achieved, but no stabilization was observed at intermediate temperatures (e.g., 56 °C), likely because the perturbation was not strong enough to reveal an effect. In other cases, such as ABCC3 at 56 °C or P2RX4 at 60 °C, the proteins were not detected due to insufficient peptide identifications at those temperatures, which explains their omission from the table.
“In Figure 4C, it is perplexing to note that despite n = 3 there were no peptide fragments detected for P2RX4 at 60 °C in presence of ATP-VO4, but they were detected in presence of AMP-PNP. It will be useful to learn authors explanation for this, especially because both of these ligands destabilize P2RX4. In Figure 4B, it would have been great to see the effect of ADP too, to corroborate the theory that ATP metabolites could impact the thermal stability.”
In Figure 4C, the absence of P2RX4 peptide detection at 60 °C with ATP–VO₄ mirrors variability observed in the corresponding control (n = 6). Specifically, neither the control nor ATP–VO₄ produced unique peptides for P2RX4 at 60 °C in that replicate, whereas peptides were detected at 60 °C in other replicates for both the control and AMP-PNP, and at 64 °C for ATP–VO₄, the controls, and AMP-PNP. Such missing values are a natural feature of MS-based proteomics and can arise from multiple technical factors, including inconsistent heating, incomplete digestion, stochastic MS injection, or interference from Peptidisc peptides. We therefore interpret the absence of peptides in this replicate as a technical artifact rather than evidence against protein destabilization. Importantly, the overall dataset consistently shows that both ATP–VO₄ and AMP-PNP destabilize P2RX4, supporting their characterization as broad, non-specific ligands with off-target effects.
Because ATP and ADP belong to the same class of broadly binding, non-specific ligands, additional testing with ADP would not provide meaningful mechanistic insight. Instead, we chose to test 2-methylthio-ADP, a selective P2RY12 agonist. This experiment revealed robust, reproducible stabilization of P2RY12, without consistent effects on unrelated proteins at 51 °C and 57 °C, thereby demonstrating the ability of MM-TPP to detect specific receptor–ligand interactions.
Finally, we note that P2RX4 is not a primary target of ATP–VO₄ or AMP-PNP. Consequently, the observed destabilization of P2RX4 is expected to be less pronounced than the strong, physiologically consistent stabilization of ABC transporters by ATP–VO₄, as shown in Figure 3D, where the majority of ABC transporters are thermally stabilized across all tested temperatures.
“As per Figure 4, P2Y receptors P2RY6 and P2RY12 both showed great thermal stability in presence of ATP-VO4 despite their preference for ADP. The authors argue this could be because of ATP metabolism, and binding of the resultant ADP to the P2RY6. If P2RX4 prefers ATP and not the metabolized product ADP that apparently is available, ideally you should not see a change in stability. A stark destabilization would indicate interaction of some sorts. P2X receptors are activated by ATP and are not naturally activated by AMP-PNP. So, destabilization of P2RX4 upon binding to ATP that can activate P2X receptors is conceivable. However, destabilization both in presence of ATP-VO4 and AMP-PNP is unclear. It is perhaps useful to test effect of ADP using this method, and maybe even compare some antagonists such as TNP-ATP.”
In this study, we did not directly test ADP, as we had already demonstrated that MM-TPP detects stabilization by broad-binding ligands such as ATP–VO₄. Instead, we focused on a more selective ligand, 2-MeS-ADP, a specific agonist of P2RY12 [PMID: 14755328]. Here, we observed robust and reproducible stabilization of P2RY12 at 51 °C and 57 °C, while P2RY6 showed no significant changes, and no other proteins were consistently stabilized (Figure 4B, S4). This confirms that MM-TPP can distinguish specific ligand–receptor interactions from broader ATP-induced effects. To further explore the assay’s nuance and sensitivity, testing additional nucleotide ligands—including antagonists like TNP-ATP or ATPγS—would provide valuable insights, and we have identified this as an important future direction.
Shipping to Canada
Contact our Canada Shipping Team
Look to add a couple of sentences as a summary of the page
Leave as normal text (not a header)
Contact GFS today to get started and unlock the potential of the Canadian market.
remove h tag
Fast Delivery Times: Deliver to key Canadian cities like Toronto and Montréal in as little as 3-6 days.
Comprehensive Coverage: We serve all Canadian provinces, including harder-to-reach areas like Yukon and Newfoundland, with delivery times ranging from 3 to 7 business days.
End-to-End Solutions: From UK pick-up to final mile delivery, GFS handles the entire logistics process, including import clearance, customs handling and delivery to your customer’s door.
Can we add an additional box that would cover eCommerce key benefits
Can we look at how we can add a section or add eCommerce into this work as these are the main keywords for the page that we are not competitively ranking for at the moment
Can we add a section that is similar to the section on the US page. It would sit between the key benefits section and the tax section
"What sets apart GFS International’s UK to USA Parcel Shipping Services?"
Simplify Your Cross-Border Logistics
With Canada’s booming eCommerce market, seamless and efficient delivery is key to staying ahead of the competition. If you’re looking to expand or simplify your eCommerce business into Canada, GFS has you covered.
Why Ship to Canada with GFS?
Canada is a strategic eCommerce lane, offering vast opportunities for businesses looking to expand their global footprint. GFS provides reliable, consistent delivery solutions to all Canadian provinces, ensuring your products reach your customers quickly and efficiently.
We would advise amending this content into one title (h2) and a section of content
Example: Simplify Your Cross-Border Logistics and Ship to Canada with GFS
Expanding your eCommerce reach to Canada is easy with GFS. Our multi-carrier delivery solution takes the complexity out of cross-border logistics, giving you one seamless platform to manage every shipment. From customs documentation to carrier selection, GFS connects you with trusted international partners to ensure fast, reliable delivery across Canada. With transparent tracking, competitive rates, and expert support, you can focus on growing your business while we handle the logistics. Simplify your international shipping, reduce operational hassle, and keep your Canadian customers happy with an all-in-one delivery solution from GFS.
eLife Assessment
This valuable study reports the physiological function of a putative transmembrane UDP-N-acetylglucosamine transporter called SLC35G3 in spermatogenesis. The conclusion that SLC35G3 is a new and essential factor for male fertility in mice and probably in humans is supported by convincing data. This study will be of interest to reproductive biologists and physicians working on male infertility.
Reviewer #2 (Public review):
Summary:
This study characterized the function of SLC35G3, a putative transmembrane UDP-N-acetylglucosamine transporter, in spermatogenesis. They showed that SLC35G3 is testis-specific and expressed in round spermatids. Slc35g3-null males were sterile but females were fertile. Slc35g3-null males produced a normal sperm count, but sperm showed subtle head morphology defects. Sperm from Slc35g3-null males have defects in uterotubal junction passage, ZP binding, and oocyte fusion. Loss of SLC35G3 causes abnormal processing and glycosylation of a number of sperm proteins in testis and sperm. They demonstrated that SLC35G3 functions as a UDP-GlcNAc transporter in cell lines. Two human SLC35G3 variants impaired its transporter activity, implicating these variants in human infertility.
Strengths:
This study is thorough. The mutant phenotype is strong and interesting. The major conclusions are supported by the data. This study demonstrated SLC35G3 as a new and essential factor for male fertility in mice, which is likely conserved in humans.
Weaknesses:
Some data interpretations needed to be revised. These have been adequately addressed in the revised manuscript.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review):
Summary:
In the present manuscript, Mashiko and colleagues describe a novel phenotype associated with deficient SLC35G3, a testis-specific sugar transporter that is important in glycosylation of key proteins in sperm function. The study characterizes a knockout mouse for this gene and the multifaceted male infertility that ensues. The manuscript is well-written and describes novel physiology through a broad set of appropriate assays.
Strengths:
Robust analysis with detailed functional and molecular assays
Weaknesses:
(1) The abstract references reported mutations in human SLC35G3, but this is not discussed or correlated to the murine findings to a sufficient degree in the manuscript. The HEK293T experiments are reasonable and add value, but a more detailed discussion of the clinical phenotype of the known mutations in this gene and whether they are recapitulated in this study (or not) would be beneficial.
Since no patients have been identified, our HEK293T experiments were conducted to investigate the activity of the variants found in humans.
(2) Can the authors expand on how this mutation causes such a wide array of phenotypic defects? I am surprised there is a morphological defect, a fertilization defect, and a transit defect. Do the authors believe all of these are present in humans as well?
Thank you for your comment. Many glycoprotein-coding genes that influence sperm head morphology, fertilization, and sperm transit have been identified in knockout mouse studies, and most of these are conserved in humans. Therefore, we believe that glycan modification by SLC35G3 is also involved in the regulation of human sperm.
Reviewer #2 (Public review):
Summary:
This study characterized the function of SLC35G3, a putative transmembrane UDP-N-acetylglucosamine transporter, in spermatogenesis. They showed that SLC35G3 is testis-specific and expressed in round spermatids. Slc35g3-null males were sterile, but females were fertile. Slc35g3-null males produced a normal sperm count, but sperm showed subtle head morphology. Sperm from Slc35g3-null males have defects in uterotubal junction passage, ZP binding, and oocyte fusion. Loss of SLC35G3 causes abnormal processing and glycosylation of a number of sperm proteins in the testis and sperm. They demonstrated that SLC35G3 functions as a UDP-GlcNAc transporter in cell lines. Two human SLC35G3 variants impaired their transporter activity, implicating these variants in human infertility.
Strengths:
This study is thorough. The mutant phenotype is strong and interesting. The major conclusions are supported by the data. This study demonstrated SLC35G3 as a new and essential factor for male fertility in mice, which is likely conserved in humans.
Weaknesses:
Some data interpretations need to be revised.
Thank you for the comments. We have revised the interpretations.
Reviewer #1 (Recommendations for the authors):
(1) The introduction could be structured more efficiently. Much of what is discussed in the first paragraph appears to be redundant to the second paragraph (or perhaps unrelated to the present manuscript).
In the Introduction, we described the process of glycoprotein formation: 1) quality control of nascent glycoproteins in the ER and its importance in sperm fertilizing ability, 2) glycan maturation in the Golgi apparatus and its importance in sperm fertilizing ability, and 3) the supply of nucleotide sugars as the basis of these processes.
We would like to retain this structure in the revised manuscript and appreciate your understanding.
(2) Given the significant difference in morphology between murine and human sperm, can the authors comment on whether these findings are directly translatable to humans?
Thank you for your comment. There are significant differences in sperm morphology between mice and humans, but many glycoprotein-coding genes that influence sperm head morphology have been identified in knockout mouse studies, and most of these are conserved in humans. Therefore, we believe that glycan modification by SLC35G3 is also involved in the regulation of human sperm head morphology. Observing sperm samples from individuals with SLC35G3 mutations is the most direct approach to verify this point and is considered an important goal for future research. The following text has been added to clarify the point:
New Line 338; While these proteins are also found in humans, it is still too early to infer the importance of SLC35G3 in the morphogenesis of human sperm heads. Observing sperm samples from individuals with SLC35G3 mutations would be the most direct approach to address this, and we consider it an important objective for future studies.
(3) Line 194 - while the inability to pass the UTJ may indeed be a component of this infertility phenotype, I would argue that a complete lack of ability to fertilize (even with IVF but not ICSI) suggests that the primary defect is elsewhere. This statement should be removed, and the topic of these two separate mechanisms should be compared/contrasted in the discussion.
We agree that this is an overstatement, so we changed it;
New line 187; Thus, the defective UTJ migration is one of the primary causes of Slc35g3-/- male infertility.
We believe the current statement in the discussion can stay as it is.
Line 379; We reaffirmed that glycosylation-related genes specific to the testis play a crucial role in the synthesis, quality control, and function of glycoproteins on sperm, which are essential for male fertility through their interactions with eggs and the female reproductive system.
(4) Did the authors consider performing TEM to assess the sperm ultrastructure and the acrosome?
Since morphological abnormalities were evident even at the macro level, TEM was not performed in this study. In the future, we plan to use immune-TEM against affected/non-affected glycoproteins when the antibodies become available.
(5) I would argue that Figure 3 should not be labeled as "essential", given the abnormal sperm head morphology compared to humans, the relatively modest difference between the groups on PCA, and more broadly speaking, the relatively poor correlation with morphology and human male infertility. While globozoospermia is clearly an exception, the data in this figure may not translate to human sperm and/or may not be clinically relevant even if it does.
Indeed, other KO spermatozoa with similar morphological features are known to cause a reduction in litter size but do not result in complete infertility. As discussed in line 1, this head shape is not essential for fertilization. Reviewer 2 also pointed out that the phrase "Slc35g3 is essential for sperm head formation" is too strong; therefore, we have revised the Figure 3 title to "Slc35g3 is involved in the regulation of sperm head morphology."
(6) Have the authors generated slc35b4 KO mice?
No, we did not. Since Slc35b4 is expressed throughout the body, a straight knockout may affect other organs or developmental processes. To investigate its role specifically in the testis, it will be necessary to generate a conditional knockout (cKO) model. As this requires considerable cost, time, and labor, we would like to leave it for future investigation.
Reviewer #2 (Recommendations for the authors):
(1) Lines 122-123: "it is prominently expressed in the testis, beginning 21 days postpartum (Figure 1B), suggesting expression from the secondary spermatocyte stage to the round spermatid stage in mice." Day 21 indicates the first appearance of round spermatids, but not secondary spermatocytes. Please change to the following: ...suggesting that its expression begins in round spermatids in mice.
I agree with your comment and have revised the text accordingly (New line 114).
(2) Figure 1E: What germ cells are they? The type of germ cells needs to be labelled on the image. Double staining with a germ cell marker would be helpful to distinguish germ cells from testicular somatic cells.
Thank you for your comment. We have replaced Figure 1E as follows.
To distinguish germ cells from testicular somatic cells, we used the germ cell marker TRA98 antibody. Furthermore, based on the nuclear and GM130 staining pattern, we consider that the Golgi apparatus of round spermatids is labeled.
(3) Figure 2C: The most abundant WB band is between 20 and 25 kD and is non-specific. Does the arrow point to the expected SLC35G3 band? There are two minor bands above the main non-specific band. Are both bands specific to SLC35G3? Given the strong non-specific band on WB, how specific is the immunofluorescence signal produced by this antibody? These need to be explained and discussed.
The arrow points to the expected SLC35G3 band (35 kDa).
We thought that these non-specific bands could be due to blood contamination, so we retried with testicular germ cells. We confirmed that non-specific bands disappeared in the subsequent Western blot analysis. The specificity of the immunofluorescence signal is supported by its complete absence in the KO, as shown in the Supplementary Figures. We have decided to include this improved dataset. Thank you for your comment, which helped us improve the data.
Author response image 1.
(4) Line 184: "Slc35g3-/--derived sperm have defects in ZP binding and oolemma fusion ability, but genomic integrity is intact." Producing viable offspring does not necessarily mean that genomic integrity is intact. Suggestion: Slc35g3-/--derived sperm have defects in ZP binding and oolemma fusion ability but produce viable offspring. Likewise, the Figure S9 caption also needs to be changed.
Thank you for your constructive comment. We have revised the text as you suggested.
(5) Figure 3. "Slc35g3 is essential for sperm head formation". This statement is too strong. It is not essential for sperm head formation. The sperm head is still formed, but shows subtle deformation.
Thank you for your suggestion. We changed as follows:
Fig. 3: “Slc35g3 is involved in the regulation of sperm head morphology.”
(6) Lines 204-205: Figure 6B: "Interestingly, some bands of sperm acrosome-associated 1 (SPACA1; 26) disappeared in Slc35g3-/- testis lysates." I don't see the absence of SPACA1 bands in -/- testis. This needs to be clearly labeled with arrows. On the contrary, the bands are stronger in Slc35g3-/- testis lysates.
Thank you for your comment. After carefully considering your comments, we concluded that using "disappeared" is indeed inappropriate. We would like to revise the sentence as follows: New line 197; "Interestingly, SPACA1 (Sperm Acrosome Associated 1; 26) exhibited a subtle difference in banding pattern in the Slc35g3-/- testis lysate."
Play around with the different settings (pack, dense, squish, all)
These settings might not visibly change the track view unless the person is already zoomed out - which is in 28-1
In the graphical representation of a gene, how are exons and introns depicted at ENSEMBL and NCBI?
Graphical annotation is no longer readily available on the new version of NCBI genes
We would add a small closing section with some of GFS’s brand values and services - relate back to the tech and your customer service
eCommerce Returns
Can we add a blurb of text content here and amend the H2 to an h3
Domestic eCommerce Shipping
We would advise a small blurb of content to explain that these are the options you offer for domestic eCommerce shipping
Domestic eCommerce Shipping
Timed: Give customers peace of mind with a guaranteed timeslot before 9am, 10:30am or 12pm the next day
Next Day: One of the most popular delivery options for fast delivery
Standard (2 Day): For fast, yet cost-effective delivery
Economy (3+ Days): A low-cost option for customers who are happy to wait
2-Man: Suited to larger items such as furniture, white goods and electronics so your customer doesn’t have to lift a finger
Weekend: Great for customers who are too busy in the week, they can make sure they are home at the weekend to accept their delivery
Swap It: Deliver and collect at the same time to enable quick swap of old, faulty or unwanted goods for new
Enquire Now
Click and Collect Delivery Services
We have two of those empty h2 tags again in this section
Multi-Carrier Services with the GFS Advantage. We See Delivery Differently.
Either remove the H tag or update the content to something that is more optimised
We want this title to relate back to domestic eCommerce shipping
Managed Multi-Carrier Services > International Delivery Services > eCommerce Technology >
Can we add a short sentence under each title to advise what that link goes to - a bit more context for the user
Sambi - do we have content like the below as it would be a great blog idea
https://www.fedex.com/en-gb/shipping-channel/ecommerce/beginners-guide.html
Can we look at the weightings for the page for desktop - the content needs to be more of a hero
eLife Assessment
This study reports important negative results, showing that genetically removing the RNA-binding protein PTBP1 in astrocytes is insufficient to convert them into neurons, thereby challenging previous claims in the field. It also offers a compelling analysis of PTBP1's role in regulating astrocyte-specific splicing. The evidence is strong, as the experiments are technically sound, carefully controlled, and supported by both imaging and transcriptomic analyses.
Reviewer #1 (Public review):
Summary:
Zhang et al. used a conditional knockout mouse model to re-examine the role of the RNA-binding protein PTBP1 in the transdifferentiation of astroglial cells into neurons. Several earlier studies reported that PTBP1 knockdown can efficiently induce the transdifferentiation of rodent glial cells into neurons, suggesting potential therapeutic applications for neurodegenerative diseases. However, these findings have been contested by subsequent studies, which in turn have been challenged by more recent publications. In their current work, Zhang et al. deleted exon 2 of the Ptbp1 gene using an astrocyte-specific, tamoxifen-inducible Cre line and investigated - using fluorescence imaging and bulk and single-cell RNA-sequencing - whether this manipulation promotes the transdifferentiation of astrocytes into neurons across various brain regions. The data strongly indicate that genetic ablation of PTBP1 is not sufficient to drive efficient conversion of astrocytes into neurons. Interestingly, while PTBP1 loss alters splicing patterns in numerous genes, these changes do not shift the astroglial transcriptome toward a neuronal profile.
Strengths:
Although this is not the first report of PTBP1 ablation in mouse astrocytes in vivo, this study utilizes a distinct knockout strategy and provides novel insights into PTBP1-regulated splicing events in astrocytes. The manuscript is well written, and the experiments are technically sound and properly controlled. I believe this study will be of considerable interest to the broad readership of eLife.
Original weaknesses:
(1) The primary point that needs to be addressed is a better understanding of the effect of exon 2 deletion on PTBP1 expression. Figure 4D shows successful deletion of exon 2 in knockout astrocytes. However - assuming that the coverage plots are CPM-normalized - the overall PTBP1 mRNA expression level appears unchanged. Figure 6A further supports this observation. This is surprising, as one would expect that the loss of exon 2 would shift the open reading frame and trigger nonsense-mediated decay of the PTBP1 transcript. Given this uncertainty, the authors should confirm the successful elimination of PTBP1 protein in cKO astrocytes using an orthogonal approach, such as Western blotting, in addition to immunofluorescence. They should also discuss possible reasons why PTBP1 mRNA abundance is not detectably affected by the frameshift.
(2) The authors should analyze PTBP1 expression in WT and cKO substantia nigra samples shown in Figure 3 or justify why this analysis is not necessary.
(3) Lines 236-238 and Figure 4E: The authors report an enrichment of CU-rich sequences near PTBP1-regulated exons. To better compare this with previous studies on position-specific splicing regulation by PTBP1, it would be helpful to assess whether the position of such motifs differs between PTBP1-activated and PTBP1-repressed exons.
(4) The analyses in Figure 5 and its supplement strongly suggest that the splicing changes in PTBP1-depleted astrocytes are distinct from those occurring during neuronal differentiation. However, the authors should ensure that these comparisons are not confounded by transcriptome-wide differences in gene expression levels between astrocytes and developing neurons. One way to address this concern would be to compare the new PTBP1 cKO data with publicly available RNA-seq datasets of astrocytes induced to transdifferentiate into neurons using proneural transcription factors (e.g., PMID: 38956165).
Point 1 has been successfully addressed in the revision by providing relevant references/discussion. Points 2-4 were addressed by including additional data/analyses.
Reviewer #2 (Public review):
Summary:
The manuscript by Zhang and colleagues describes a study that investigated if deletion of PTBP1 in adult astrocytes in mice led to an astrocyte-to-neuron conversion. The study revisited the hypothesis that reduced PTBP1 expression reprogrammed astrocytes to neurons. More than 10 studies have been published on this subject, with contradicting results. Half of the studies supported the hypothesis while the other half did not. The question being addressed is an important one because if the hypothesis is correct, it can lead to exciting therapeutic applications for treating neurodegenerative diseases such as Parkinson's disease.
In this study, Zhang and colleagues conducted a conditional mouse knockout study to address the question. They used the Cre-LoxP system to specifically delete PTBP1 in adult astrocytes. Through a series of carefully controlled experiments including cell lineage tracing, the authors found no evidence for the astrocyte-to-neuron conversion.
The authors then carried out a key experiment that none of the previous studies on the subject did: investigating alternative splicing pattern changes in PTBP1-depleted cells using RNA-seq analysis. The idea is to compare the splicing pattern change caused by PTBP1 deletion in astrocytes to what occurs during neurodevelopment. This is an important experiment that will help illuminate if the astrocyte-to-neuron transition occurred in the system. The result was consistent with that of the cell staining experiments: no significant transition was detected.
These experiments demonstrate that, in this experimental setting, PTBP1 deletion in adult astrocytes did not convert the cells to neurons.
Strengths:
This is a well-designed, elegantly conducted, and clearly described study that addresses an important question. The conclusions provide important information to the field.
To this reviewer, this study provided convincing and solid experimental evidence to support the authors' conclusions.
My concerns in the previous review have been addressed satisfactorily.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review):
Summary:
Zhang et al. used a conditional knockout mouse model to re-examine the role of the RNA-binding protein PTBP1 in the transdifferentiation of astroglial cells into neurons. Several earlier studies reported that PTBP1 knockdown can efficiently induce the transdifferentiation of rodent glial cells into neurons, suggesting potential therapeutic applications for neurodegenerative diseases. However, these findings have been contested by subsequent studies, which in turn have been challenged by more recent publications. In their current work, Zhang et al. deleted exon 2 of the Ptbp1 gene using an astrocyte-specific, tamoxifen-inducible Cre line and investigated, using fluorescence imaging and bulk and single-cell RNA-sequencing, whether this manipulation promotes the transdifferentiation of astrocytes into neurons across various brain regions. The data strongly indicate that genetic ablation of PTBP1 is not sufficient to drive efficient conversion of astrocytes into neurons. Interestingly, while PTBP1 loss alters splicing patterns in numerous genes, these changes do not shift the astroglial transcriptome toward a neuronal profile.
Strengths:
Although this is not the first report of PTBP1 ablation in mouse astrocytes in vivo, this study utilizes a distinct knockout strategy and provides novel insights into PTBP1-regulated splicing events in astrocytes. The manuscript is well written, and the experiments are technically sound and properly controlled. I believe this study will be of considerable interest to a broad readership.
Weaknesses:
(1) The primary point that needs to be addressed is a better understanding of the effect of exon 2 deletion on PTBP1 expression. Figure 4D shows successful deletion of exon 2 in knockout astrocytes. However, assuming that the coverage plots are CPM-normalized, the overall PTBP1 mRNA expression level appears unchanged. Figure 6A further supports this observation. This is surprising, as one would expect that the loss of exon 2 would shift the open reading frame and trigger nonsense-mediated decay of the PTBP1 transcript. Given this uncertainty, the authors should confirm the successful elimination of PTBP1 protein in cKO astrocytes using an orthogonal approach, such as Western blotting, in addition to immunofluorescence. They should also discuss possible reasons why PTBP1 mRNA abundance is not detectably affected by the frameshift.
We thank the reviewer for raising this important point. Indeed, the deletion of exon 2 introduces a frameshift that is predicted to disrupt the PTBP1 open reading frame and trigger nonsense-mediated decay (NMD). While our CPM-normalized coverage plots (Figure 4D) and gene-level expression analysis (Figure 6A) suggest that PTBP1 mRNA levels remain largely unchanged in cKO astrocytes, we acknowledge that this observation is counterintuitive and merits further clarification.
We suspect that the process of brain tissue dissociation and FACS sorting for bulk or single-cell RNA-seq may enrich for nuclear material and thus dilute the NMD signal, since NMD occurs in the cytoplasm. Alternatively, the transcripts (like those of other genes) may escape NMD through unknown mechanisms. Although a frameshift is a strong trigger for NMD, it does not guarantee that NMD will occur in every case. (lines 346-353)
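For reference, CPM normalization of the kind assumed for the coverage plots can be sketched as follows; the count matrix is a toy example, not our data:

```python
import numpy as np

# Toy counts: rows = genes, columns = samples (e.g., WT, cKO).
counts = np.array([[120, 300],
                   [880, 1700]], dtype=float)

# Counts per million: scale each library so per-gene values are
# comparable across samples regardless of sequencing depth.
cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6
print(cpm)
```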
Regarding the validation of PTBP1 protein depletion in cKO astrocytes by Western blotting, we acknowledge that orthogonal approaches to confirm PTBP1 elimination would address uncertainty around the effect of exon 2 deletion on PTBP1 expression. The low cell yield of cKO astrocytes via FACS poses a significant burden on obtaining sufficient samples for immunoblotting detection of PTBP1 depletion. On average, 3-5 adult animals per genotype (with three different alleles) are needed for each biological replicate. The manuscript contains PTBP1 immunofluorescence staining of brain sections to demonstrate PTBP1 deletion (Figures 1-2, Figure 3 supplement 1). Our characterization of this Ptbp1 deletion allele in other contexts shows the loss of full-length PTBP1 protein in ESCs by Western blotting (PMID: 30496473). Furthermore, germline homozygous mutant mice do not survive beyond embryonic day 6, supporting that it is a loss-of-function allele.
(2) The authors should analyze PTBP1 expression in WT and cKO substantia nigra samples shown in Figure 3 or justify why this analysis is not necessary.
We thank the reviewer for pointing out this important question. Although we are using an astrocyte-specific PTBP1 knockout (KO) mouse model, which is designed to delete PTBP1 in all astrocytes throughout the mouse brain, and although we have systematically verified PTBP1 elimination in different mouse brain regions (cortex and striatum) at multiple time points (from 4w to 12w after tamoxifen administration), we agree that it remains necessary and important to demonstrate whether the observed lack of astrocyte-to-neuron conversion is indeed associated with sufficient PTBP1 depletion.
We have analyzed PTBP1 expression in the substantia nigra, as we did in the cortex and striatum. We added a new figure (Figure 3-figure supplement 1) to show the results. We found that in cKO samples, tdT+ cells lack PTBP1 immunostaining, and there is no overlap of NeuN+ and tdT+ signals. These results show effective PTBP1 depletion in the substantia nigra, similar to that observed in the cortex and striatum. (lines 221-224)
(3) Lines 236-238 and Figure 4E: The authors report an enrichment of CU-rich sequences near PTBP1-regulated exons. To better compare this with previous studies on position-specific splicing regulation by PTBP1, it would be helpful to assess whether the position of such motifs differs between PTBP1-activated and PTBP1-repressed exons.
We thank the reviewer for this insightful comment. We agree that assessing the positional distribution of CU-rich motifs between PTBP1-activated and PTBP1-repressed exons would provide valuable insight into the position-specific regulatory mechanisms of PTBP1. In response, we have performed separate motif enrichment analyses for PTBP1-activated and PTBP1-repressed exons and examined whether their positional patterns differ (Figure 4–figure supplement 2).
Our analysis revealed that CU-rich motifs were significantly enriched in the upstream introns of exons both activated and repressed by PTBP1 loss, with higher enrichment observed in repressed exons (enrichment ratio = 2.14, q = 9.00×10⁻⁵) compared to activated exons (enrichment ratio = 1.72, q = 7.75×10⁻⁵) (Figure 4–figure supplement 2B–C). In contrast, no CU-rich motifs were found downstream of activated exons (Figure 4–figure supplement 2D), while a weak, non-significant enrichment was observed downstream of repressed exons (enrichment ratio = 1.21, q = 0.225; Figure 4–figure supplement 2E). These results do not fully align with a couple of earlier PTBP1 CLIP studies showing differential PTBP1 binding for repressed vs. activated exons but are more in line with the study from the Black lab (PMID: 24499931) showing that PTBP1 binds upstream introns of both repressed and activated exons. In either case, PTBP1 affects a diverse set of alternative exons and likely involves diverse context-dependent binding patterns (lines 244-257).
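As a methodological illustration, an enrichment ratio and p-value of the kind reported above could be computed with a 2×2 Fisher’s exact test; all counts below are hypothetical:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: exons carrying a CU-rich motif in the regulated set
# versus a background exon set.
motif_in_regulated, regulated_total = 60, 100
motif_in_background, background_total = 1400, 5000

table = [
    [motif_in_regulated, regulated_total - motif_in_regulated],
    [motif_in_background, background_total - motif_in_background],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")

enrichment = (motif_in_regulated / regulated_total) / (
    motif_in_background / background_total)
print(f"enrichment ratio = {enrichment:.2f}, p = {p_value:.2e}")
# q-values would follow from multiple-testing correction (e.g., Benjamini-Hochberg).
```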
(4) The analyses in Figure 5 and its supplement strongly suggest that the splicing changes in PTBP1-depleted astrocytes are distinct from those occurring during neuronal differentiation. However, the authors should ensure that these comparisons are not confounded by transcriptome-wide differences in gene expression levels between astrocytes and developing neurons. One way to address this concern would be to compare the new PTBP1 cKO data with publicly available RNA-seq datasets of astrocytes induced to transdifferentiate into neurons using proneural transcription factors (e.g., PMID: 38956165).
We would like to express our gratitude for the thoughtful feedback. We agree that transcriptome-wide differences in gene expression between astrocytes and developing neurons could confound the interpretation of splicing differences. To address this concern, we have incorporated publicly available RNA-seq datasets from studies in which astrocytes are reprogrammed into neurons using proneural transcription factors, Ngn2 or PmutNgn2 (PMID: 38956165).
The results of principal component analysis (PCA) for splicing profiles revealed that the in vivo splicing profiles from this study and the in vitro splicing profiles from PMID 38956165 are well separated on PC1 and PC2. While Ngn2/PmutNgn2-induced neurons and control astrocytes started to show distinction on PC3 (and to some degree on PC4), Ptbp1 cKO samples remained tightly grouped with control astrocytes and showed no directional shift toward the neuronal cluster (Figure 5–figure supplement 2B). These findings further support the conclusion that PTBP1 depletion in mature astrocytes does not induce a neuronal-like splicing program, even when compared against neurons derived from the astrocyte lineage (lines 306-318).
The pairwise correlation analysis of percent spliced in between Ptbp1 cKO, control astrocytes, and induced neurons confirmed that Ptbp1 cKO astrocytes are highly similar to control astrocytes (ρ = 0.81) and clearly distinct from induced neurons (ρ = 0.62) (Figure 5–figure supplement 2C), reinforcing the notion that PTBP1 loss alone is insufficient to drive a neuronal-like splicing transition (lines 319-336).
Consistent with the analysis of splicing profiles, PCA of gene expression profiles showed that control and Ptbp1 cKO astrocytes clustered tightly together with no directional shift toward the neuronal cluster, while Ngn2/PmutNgn2-induced neurons and control astrocytes were distributed across a broader range (Figure 6–figure supplement 1A–B). Correlation analysis further supported this result, with a strong similarity between Ptbp1 cKO and control astrocytes (ρ = 0.97), and low similarity between Ptbp1 cKO astrocytes and induced neurons (ρ = 0.27) (Figure 6–figure supplement 1C). These findings indicate that, even with PTBP1 loss, cKO astrocytes retain a transcriptional profile very distinct from that of neurons, underscoring that Ptbp1 deficiency alone does not induce astrocyte-to-neuron reprogramming at the transcriptomic level (lines 366-373).
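To illustrate the style of analysis described above, the following sketch runs a PCA on a toy percent-spliced-in (PSI) matrix and computes Spearman correlations between sample pairs; the data are simulated and do not reproduce our results:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy PSI matrix: rows = samples, columns = alternative exons.
astro_ctrl = rng.uniform(0, 1, size=(3, 50))
astro_cko  = astro_ctrl + rng.normal(0, 0.05, size=(3, 50))  # close to control
neurons    = rng.uniform(0, 1, size=(3, 50))                 # distinct program

psi = np.clip(np.vstack([astro_ctrl, astro_cko, neurons]), 0, 1)
pcs = PCA(n_components=2).fit_transform(psi)
print("PC coordinates:\n", pcs.round(2))

rho_ctrl_cko, _ = spearmanr(psi[0], psi[3])  # control vs. cKO
rho_ctrl_neu, _ = spearmanr(psi[0], psi[6])  # control vs. induced neuron
print(f"rho(ctrl, cKO) = {rho_ctrl_cko:.2f}; rho(ctrl, neuron) = {rho_ctrl_neu:.2f}")
```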
Reviewer #2 (Public review):
Summary:
The manuscript by Zhang and colleagues describes a study that investigated whether the deletion of PTBP1 in adult astrocytes in mice led to an astrocyte-to-neuron conversion. The study revisited the hypothesis that reduced PTBP1 expression reprogrammed astrocytes to neurons. More than 10 studies have been published on this subject, with contradicting results. Half of the studies supported the hypothesis while the other half did not. The question being addressed is an important one because if the hypothesis is correct, it can lead to exciting therapeutic applications for treating neurodegenerative diseases such as Parkinson's disease.
In this study, Zhang and colleagues conducted a conditional mouse knockout study to address the question. They used the Cre-LoxP system to specifically delete PTBP1 in adult astrocytes. Through a series of carefully controlled experiments, including cell lineage tracing, the authors found no evidence for the astrocyte-to-neuron conversion.
The authors then carried out a key experiment that none of the previous studies on the subject did: investigating alternative splicing pattern changes in PTBP1-depleted cells using RNA-seq analysis. The idea is to compare the splicing pattern change caused by PTBP1 deletion in astrocytes to what occurs during neurodevelopment. This is an important experiment that will help illuminate whether the astrocyte-to-neuron transition occurred in the system. The result was consistent with that of the cell staining experiments: no significant transition was detected.
These experiments demonstrate that, in this experimental setting, PTBP1 deletion in adult astrocytes did not convert the cells to neurons.
Strengths:
This is a well-designed, elegantly conducted, and clearly described study that addresses an important question. The conclusions provide important information to the field.
To this reviewer, this study provided convincing and solid experimental evidence to support the authors' conclusions.
Weaknesses:
The Discussion in this manuscript is short and can be expanded. Can the authors speculate what led to the contradictory results in the published studies? The current study, in combination with the study published in Cell in 2021 by Wang and colleagues, suggests that the observed difference is not caused by the difference between knockdown and knockout. Is it possible that other glial cell types are responsible for the transition? If so, what cells? Oligodendrocytes?
We are grateful for the reviewer’s careful reading and valuable suggestions. We have expanded the Discussion to include a discussion of the possible origins of glial cells responsible for the neuronal transition. (lines 441-461)
Reviewer #1 (Recommendations for the authors):
(1) Throughout the text and figures, it is customary to write loxP with a capital "P".
We have capitalized “P” in loxP throughout the text and figures.
(2) It would be helpful to indicate the brain regions analyzed above the images in Figure 1B-C, Figure 2A-B, Figure 1 - Supplement 3, and Figure 2 - Supplement 2, as was done in Figure 1 - Supplement 1.
The labels indicating brain regions of corresponding images have been added to the figures.
(3) The arrowheads in Figure 1C, Figure 2B, Figure 3, and several supplemental panels are nearly equilateral triangles, making their direction difficult to discern. Consider using a more slender or indented design (e.g., ➤).
We have replaced triangular arrowheads with indented arrowheads in the figures.
(4) Lines 181-209: This section should be revised, given that the striatum is not a midbrain structure.
We have revised this section to reflect our analysis of the striatum as a brain region of the nigrostriatal pathway rather than a midbrain structure.
Reviewer #2 (Recommendations for the authors):
In Supplemental Figure 1, the two open triangles are almost indistinguishable. It would be better if the colors of these open triangles were changed so that it is easier to tell what's what. There is not enough contrast between white and yellow.
We have changed the open triangle arrowheads to solid yellow and violet arrowheads to improve contrast between labels.
Please can we add an internal link from the below pages and link to their relevant country page
URL to Link: https://gfsdeliver.com/parcels/gfs-international/
URL to Link: https://gfsdeliver.com/news-and-blogs/gfs-international-christmas/ Can we link all the countries to their main landing page
URL to Link: https://gfsdeliver.com/news-and-blogs/gfs-international-final-posting-dates-for-christmas-delivery/ Can we link all the countries to their main landing page
Ireland is among the Top 50 global eCommerce markets, boasting a substantial share in the worldwide growth rate that continues to escalate year after year. Ireland is the UK’s sixth largest trading partner, accounting for 5% of total UK trade*. With GFS International, UK eCommerce businesses gain a strategic advantage to captivate Irish customers right next door!
Change to
Ireland is among the Top 50 global eCommerce markets, boasting a substantial share in the worldwide growth rate that continues to escalate year after year. It is the UK's sixth largest trading partner, accounting for 5% of total UK trade*. With GFS International, UK businesses gain a strategic advantage to captivate customers in the region, who are right next door!
Unlocking the Emerald Isle for UK Businesses Looking to Ship Parcels to Ireland
Change to
End-to-end International Delivery Solutions for Ireland Shipping
GFS International is your gateway to hassle-free eCommerce shipping from the UK to Ireland
Change as confirmed by Sambi
Tap into Ireland's Growing eCommerce Market
We wanted to reduce a little bit of the marketing content and make it more direct to customers and search
Sambi - you may wish to run your eyes over the content and tweak bits
Meta title: Shipping Parcels to Ireland from the UK | Multi Carrier Delivery
Change to Sambi suggestion
Meta Title: Shipping to Ireland from the UK | Multi-Carrier International Delivery
eLife Assessment
This computational study examines how neurons in the songbird premotor nucleus HVC might generate the precise, sparse burst sequences that drive adult song. The findings would be useful for understanding how intrinsic conductances and HVC microcircuitry may produce neural sequences, but the work is incomplete because of arbitrary network assumptions, insufficient consideration of biological details such as how silent gaps in song sequences are represented, and failure to incorporate interactions with auditory and brainstem inputs. As a result, the study offers only a modest conceptual advance over prior models.
Reviewer #2 (Public review):
Summary:
In this paper, the authors use numerical simulations to try to understand better a major experimental discovery in songbird neuroscience from 2002 by Richard Hahnloser and collaborators. The 2002 paper found that a certain class of projection neurons in the premotor nucleus HVC of adult male zebra finch songbirds, the neurons that project to another premotor nucleus RA, fired sparsely (once per song motif) and precisely (to about 1 ms accuracy) during singing.
The experimental discovery is important to understand since it initially suggested that the sparsely firing RA-projecting neurons acted as a simple clock that was localized to HVC and that controlled all details of the temporal hierarchy of singing: notes, syllables, gaps, and motifs. Later experiments suggested that the initial interpretation might be incomplete: that the temporal structure of adult male zebra finch songs instead emerged in a more complicated and distributed way, still not well understood, from the interaction of HVC with multiple other nuclei, including auditory and brainstem areas. So at least two major questions remain unanswered more than two decades after the 2002 experiment: What is the neurobiological mechanism that produces the sparse precise bursting: is it a local circuit in HVC or is it some combination of external input to HVC and local circuitry? And how is the sparse precise bursting in HVC related to a songbird's vocalizations?
The authors only investigate part of the first question, whether the mechanism for sparse precise bursts is local to HVC. They do so indirectly, by using conductance-based Hodgkin-Huxley-like equations to simulate the spiking dynamics of a simplified network that includes three known major classes of HVC neurons and such that all neurons within a class are assumed to be identical. A strength of the calculations is that the authors include known biophysically deduced details of the different conductances of the three major classes of HVC neurons, and they take into account what is known, based on sparse paired recordings in slices, about how the three classes connect to one another. One weakness of the paper is that the authors make arbitrary and not-well-motivated assumptions about the network geometry, and they do not use the flexibility of their simulations to study how their results depend on their network assumptions. A second weakness is that they ignore many known experimental details such as projections into HVC from other nuclei, dendritic computations (the somas and dendrites are treated by the authors as point-like isopotential objects), the role of neuromodulators, and known heterogeneity of the interneurons. These weaknesses make it difficult for readers to know the relevance of the simulations for experiments and for advancing theoretical understanding.
Strengths:
The authors use conductance-based Hodgkin-Huxley-like equations to simulate spiking activity in a network of neurons intended to model more accurately songbird nucleus HVC of adult male zebra finches. Spiking models are much closer to experiments than models based on firing rates or on 2-state neurons.
The authors include information deduced from modeling experimental current-clamp data such as the types and properties of conductances. They also take into account how neurons in one class connect to neurons in other classes via excitatory or inhibitory synapses, based on sparse paired recordings in slices by other researchers.
The authors obtain some new results of modest interest such as how changes in the maximum conductances of four key channels (e.g., A-type K+ currents or Ca-dependent K+ currents) influence the structure and propagation of bursts, while simultaneously being able to mimic accurately current-clamp voltage measurements.
Weaknesses:
One weakness of this paper is the lack of a clearly stated, interesting, and relevant scientific question to try to answer. The authors do not discuss adequately in their introduction what questions have recent experimental and theoretical work failed to explain adequately concerning HVC neural dynamics and its role in producing vocalizations. The authors do not discuss adequately why they chose the approach of their paper and how their results address some of these questions.
For example, the authors need to explain in more detail how their calculations relate to the works of Daou et al, J. Neurophys. 2013 (which already fitted spiking models to neuronal data and identified certain conductances), to Jin et al J. Comput. Neurosci. 2007 (which already discussed how to get bursts using some experimental details), and to the rather similar paper by E. Armstrong and H. Abarbanel, J. Neurophys 2016, which already postulated and studied sequences of microcircuits in HVC. This last paper is not even cited by the authors.
The authors' main achievement is to show that simulations of a certain simplified and idealized network of spiking neurons, that includes some experimental details but ignores many others, can match some experimental results like current-clamp-derived voltage time series for the three classes of HVC neurons (although this was already reported in earlier work by Daou and collaborators in 2013), and simultaneously the robust propagation of bursts with properties similar to those observed in experiments. The authors also present results about how certain neuronal details and burst propagation change when certain key maximum conductances are varied.
But these are weak conclusions for two reasons. First, the authors did not do enough calculations to allow the reader to understand how many parameters were needed to obtain these fits and whether simpler circuits, say with fewer parameters and simpler network topology, could do just as well. Second, many previous researchers have demonstrated robust burst propagation in a variety of feed-forward models. So what is new and important about the authors' results compared to the previous computational papers?
Also missing is a discussion, or at least an acknowledgement, of the fact that not all of the fine experimental details of undershoots, latencies, spike structure, spike accommodation, etc. may be relevant for understanding vocalization. While it is nice to know that some model can match these experimental details and produce realistic bursts, that does not mean that all of these details are relevant for the function of producing precise vocalizations. Scientific insights in biology often require exploring which of the many observed details can be ignored, and especially identifying the few that are essential for answering some questions. As one example, if HVC-X neurons are completely removed from the authors' model, does one still get robust and reasonable burst propagation of HVC-RA neurons? While part of nucleus HVC acts as a premotor circuit that drives nucleus RA, part of HVC is also related to learning. It is not clear that HVC-X neurons, which carry out some unknown calculation and transmit information to area X in a learning pathway, are relevant for burst production and propagation of HVC-RA neurons, and so relevant for vocalization. Simulations provide a convenient and direct way to explore questions of this kind.
One key question to answer is whether the bursting of HVC-RA projection neurons is based on a mechanism local to HVC or is some combination of external driving (say from auditory nuclei) and local circuitry. The authors do not contribute to answering this question because they ignore external driving and assume that the mechanism is some kind of intrinsic feed-forward circuit, which they put in by hand in a rather arbitrary and poorly justified way, by assuming the existence of small microcircuits consisting of a few HVC-RA, HVC-X, and HVC-I neurons that somehow correspond to "sub-syllabic segments". To my knowledge, experiments do not suggest the existence of such microcircuits nor does theory suggest the need for such microcircuits.
Another weakness of this paper is an unsatisfactory discussion of how the model was obtained, validated, and simulated. The authors should state as clearly as possible, in one location such as an appendix, what is the total number of independent parameters for the entire network and how parameter values were deduced from data or assigned by hand. With enough parameters and variables, many details can be fit arbitrarily accurately so researchers have to be careful to avoid overfitting. If parameter values were obtained by fitting to data, the authors should state clearly what was the fitting algorithm (some iterative nonlinear method, whose results can depend on the initial choice of parameters), what was the error function used for fitting (sum of least squares?), and what data were used for the fitting.
The authors should also state clearly what is the dynamical state of the network, the vector of quantities that evolve over time. (What is the dimension of that vector, which is also the number of ordinary differential equations that have to be integrated?) The authors do not mention what initial state was used to start the numerical integrations, whether transient dynamics were observed and what were their properties, or how the results depend on the choice of initial state. The authors do not discuss how they determined that their model was programmed correctly (it is difficult to avoid typing errors when writing several pages or more of a code in any language) or how they determined the accuracy of the numerical integration method beyond fitting to experimental data, say by varying the time step size over some range or by comparing two different integration algorithms.
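The step-size and cross-integrator checks suggested here are simple to set up; a minimal sketch (our illustration, with a stand-in right-hand side rather than the network's actual equations) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                     # stand-in for the network's ODE right-hand side
    return -y + np.sin(t)

sol_a = solve_ivp(rhs, (0.0, 100.0), [0.0], method="RK45", rtol=1e-6, atol=1e-9)
sol_b = solve_ivp(rhs, (0.0, 100.0), [0.0], method="LSODA", rtol=1e-6, atol=1e-9)

# Agreement between two independent integrators (and across tolerances)
# suggests the reported dynamics are not artifacts of numerical error.
print(abs(sol_a.y[0, -1] - sol_b.y[0, -1]))
```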
Also disappointing is that the authors do not make any predictions to test, except rather weak ones such as that varying a maximum conductance sufficiently (which might be possible by using dynamic clamps) might cause burst propagation to stop or change its properties. Based on their results, the authors do not make suggestions for further experiments or calculations, but they should.
Comments on revised version:
The second version, unfortunately, did not address most of the substantive comments so that, while some parts of the discussion were expanded, most of the serious scientific weaknesses mentioned in the first round of review remain. The revised preprint is not a substantive improvement over the first.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review):
Summary:
The paper presents a model for sequence generation in the zebra finch HVC, which adheres to cellular properties measured experimentally. However, the model is fine-tuned and exhibits limited robustness to noise inherent in the inhibitory interneurons within the HVC, as well as to fluctuations in connectivity between neurons. Although the proposed microcircuits are introduced as units for sub-syllabic segments (SSS), the backbone of the network remains a feedforward chain of HVC_RA neurons, similar to previous models.
Strengths:
The model incorporates all three of the major types of HVC neurons. The ion channels used and their kinetics are based on experimental measurements. The connection patterns of the neurons are also constrained by the experiments.
Weaknesses:
The model is described as consisting of micro-circuits corresponding to SSS. This presentation gives the impression that the model's structure is distinct from previous models, which connected HVC_RA neurons in feedforward chain networks (Jin et al 2007, Li & Greenside, 2006; Long et al 2010; Egger et al 2020). However, the authors implement single HVC_RA neurons into chain networks within each micro-circuit and then connect the end of the chain to the start of the chain in the subsequent micro-circuit. Thus, the HVC_RA neuron in their model forms a single-neuron chain. This structure is essentially a simplified version of earlier models.
In the model of the paper, the chain network drives the HVC_I and HVC_X neurons. The role of the micro-circuits is more significant in organizing the connections: specifically, from HVC_RA neurons to HVC_I neurons, and from HVC_I neurons to both HVC_X and HVC_RA neurons.
We thank Reviewer 1 for their thoughtful comments.
While the reviewer is correct that the propagation of sequential activity in this model is primarily carried by HVC<sub>RA</sub> neurons in a feed-forward manner, we must emphasize that this is true only if there is no intrinsic or synaptic perturbation to the HVC network. For example, we showed in Figures 10 and 12 how altering the intrinsic properties of HVC<sub>X</sub> neurons or interneurons disrupts sequence propagation. In other words, while HVC<sub>RA</sub> neurons are the key force carrying the chain forward, the interplay between excitation and inhibition in our network, as well as the intrinsic parameters of all classes of HVC neurons, are equally important in carrying the chain of activity forward. Thus, the stability of activity propagation necessary for song production depends on a finely balanced network of HVC neurons, with all classes contributing to the overall dynamics. Moreover, all existing models that describe premotor sequence generation in HVC either assume a distributed architecture (Elmaleh et al., 2021), in which local HVC circuitry is not sufficient to advance the sequence but instead depends on moment-to-moment feedback through Uva (Hamaguchi et al., 2016), or rely on intrinsic connections within HVC to propagate sequential activity. In the latter case, some models assume that HVC is composed of multiple discrete subnetworks that encode individual song elements (Glaze & Troyer, 2013; Long & Fee, 2008; Wang et al., 2008) but lack the local connectivity to link the subnetworks, while other models assume that HVC has sufficient information in its intrinsic connections to form a single continuous network sequence (Long et al., 2010). The HVC model we present extends the concept of a feedforward network by incorporating additional neuronal classes that influence the propagation of activity (interneurons and HVC<sub>X</sub> neurons). We have shown that any disturbance of the intrinsic or synaptic conductances of these latter neurons will disrupt activity in the circuit even when the properties of HVC<sub>RA</sub> neurons are maintained.
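For intuition, the feed-forward backbone under discussion can be caricatured in a few lines. The sketch below is our illustration, not the authors' code: a chain of leaky integrate-and-fire units in which each spike delivers a delayed, suprathreshold kick to the next unit, so a single trigger sweeps activity down the chain. All weights, delays, and time constants are arbitrary assumptions.

```python
import numpy as np

# Toy feed-forward chain: each spike excites the next unit after a delay.
N, dt, T = 20, 0.1, 100.0                 # units, time step (ms), duration (ms)
steps = int(T / dt)
tau, V_th, V_reset = 10.0, 1.0, 0.0       # membrane tau (ms), threshold, reset
w, delay = 1.2, int(round(2.0 / dt))      # suprathreshold kick, 2 ms delay

V = np.zeros(N)
kick = np.zeros((steps + delay + 1, N))   # scheduled future inputs
kick[0, 0] = w                            # external trigger to the first unit
first_spike = {}
for t in range(steps):
    V += dt * (-V / tau) + kick[t]
    for i in np.flatnonzero(V >= V_th):
        first_spike.setdefault(i, t * dt)
        if i + 1 < N:
            kick[t + delay, i + 1] += w   # excite the next unit in the chain
        V[i] = V_reset
print(first_spike)  # spike times increase roughly linearly along the chain
```

The point of the caricature is the fragility the reviewers raise: if w falls below V_th, or inhibition subtracts from the kick, propagation stops, which is why the balance of excitation and inhibition matters in the full model.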
In regard to the similarities between our model and earlier models, several aspects of our model distinguish it from prior work. In short, while several models of how sequence is generated within HVC have been proposed (Cannon et al., 2015; Drew & Abbott, 2003; Egger et al., 2020; Elmaleh et al., 2021; Galvis et al., 2018; Gibb et al., 2009a, 2009b; Hamaguchi et al., 2016; Jin, 2009; Long & Fee, 2008; Markowitz et al., 2015), all of them rely on intrinsic HVC circuitry to propagate sequential activity, on extrinsic feedback to advance the sequence, or on both. These models do not capture the complex details of spike morphology, do not include the right ionic currents, do not incorporate all classes of HVC neurons, or do not generate realistic firing patterns as seen in vivo. Our model is the first biophysically realistic model that incorporates all classes of HVC neurons and their intrinsic properties. We tuned the intrinsic and synaptic properties based on the traces collected by Daou et al. (2013) and Mooney and Prather (2005), as shown in Figure 3. The three classes of model neurons incorporated into our network, as well as the synaptic currents that connect them, are based on Hodgkin-Huxley formalisms containing pharmacologically identified ion channels and synaptic currents. This is an advancement over prior models that primarily focused on the role of synaptic interactions or external inputs. The model is based on a feedforward chain of microcircuits that encode the different sub-syllabic segments and that interact with each other through structured feedback inhibition, defining an ordered sequence of cell firing. Moreover, while several models highlight the critical role of inhibitory interneurons in shaping the timing and propagation of bursts of activity in HVC<sub>RA</sub> neurons, our work offers an intricate and comprehensive model that helps explain this critical role played by inhibition in shaping song dynamics and ensuring sequence propagation.
How useful is this concept of micro-circuits? HVC neurons fire continuously even during the silent gaps. There are no SSS during these silent gaps.
Regarding the concern about the usefulness of the 'microcircuit' concept in our study, we appreciate the comment and are glad to clarify its relevance in our network. While we acknowledge that HVC<sub>RA</sub> neurons interconnect microcircuits, our model's dynamics are still best described within the framework of microcircuitry, particularly because of the firing behavior of HVC<sub>X</sub> neurons and interneurons. Here we refer to microcircuits in a functional sense, rather than as rigid, isolated spatial divisions (Cannon et al. 2015), and we now make this clear on page 21. A microcircuit in our model reflects the local rules that govern the interaction between all HVC neuron classes within the broader network and that are essential for proper activity propagation. For example, HVC<sub>INT</sub> neurons belonging to any microcircuit burst densely and at times other than the moments when the corresponding encoded SSS is being “sung”. What makes a particular interneuron belong to one microcircuit rather than another is merely the fact that it cannot inhibit the HVC<sub>RA</sub> neurons housed in its own microcircuit. In particular, if an HVC<sub>INT</sub> neuron inhibits an HVC<sub>RA</sub> neuron in the same microcircuit, some of the HVC<sub>RA</sub> bursts in that microcircuit might be silenced by the dense and strong HVC<sub>INT</sub> inhibition, breaking the chain of activity. Similarly, HVC<sub>X</sub> neurons were assigned to microcircuits for the following reason: if an HVC<sub>X</sub> neuron belonging to microcircuit i sends excitatory input to an HVC<sub>INT</sub> neuron in microcircuit j, and that interneuron happens to target an HVC<sub>RA</sub> neuron in microcircuit i, then the propagation of sequential activity will halt, a scenario similar to the one just described for HVC<sub>INT</sub> neurons inhibiting HVC<sub>RA</sub> neurons in the same microcircuit.
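To make the wiring rule just described concrete, the following sketch (our illustration, not the authors' code; population sizes and the connection probability are arbitrary assumptions) shows one way to impose the constraint that an interneuron never inhibits HVC<sub>RA</sub> neurons of its own microcircuit:

```python
import numpy as np

# Draw random inhibitory connectivity, then mask out same-microcircuit pairs.
rng = np.random.default_rng(0)
n_circ, n_int, n_ra = 5, 3, 4      # microcircuits, HVC_INT and HVC_RA per circuit

circ_of_int = np.repeat(np.arange(n_circ), n_int)
circ_of_ra = np.repeat(np.arange(n_circ), n_ra)

W_inh = rng.random((n_circ * n_int, n_circ * n_ra)) < 0.3   # candidate synapses
same_circuit = circ_of_int[:, None] == circ_of_ra[None, :]
W_inh &= ~same_circuit   # forbid within-microcircuit inhibition of HVC_RA
```

Under this rule, dense interneuron inhibition can never silence the HVC<sub>RA</sub> bursts of its own microcircuit, which is exactly the failure mode the authors say would break the chain.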
We agree that there are no sub-syllabic segments described during the silent gaps, and we thank the reviewer for pointing this out. Although silent gaps are integral to the overall process of song production, we have not elaborated on them in this model because there is no clear, biophysically grounded representation of the gaps themselves at the level of HVC. Our primary focus has been on modeling the active, syllable-producing phases of the song, where the HVC network’s sequential dynamics are critical. However, one can think of silent gaps as being encoded by mechanisms similar to those that encode SSSs, where each gap is represented by a comparable microcircuit comprised of the three classes of HVC neurons (call them GAP rather than SSS microcircuits) that is active only during the silent gaps. In this case, sequential activity propagates through the GAPs from the last SSS of the preceding syllable to the first SSS of the subsequent syllable. This is now described more clearly on page 22 of the manuscript.
A significant issue of the current model is that the HVC_RA to HVC_RA connections require fine-tuning, with the network functioning only within a narrow range of g_AMPA (Figure 2B). Similarly, the connections from HVC_I neurons to HVC_RA neurons also require fine-tuning. This sensitivity arises because the somatic properties of HVC_RA neurons are insufficient to produce the stereotypical bursts of spikes observed in recordings from singing birds, as demonstrated in previous studies (Jin et al 2007; Long et al 2010). In these previous works, to address this limitation, a dendritic spike mechanism was introduced to generate an intrinsic bursting capability, which is absent in the somatic compartment of HVC_RA neurons. This dendritic mechanism significantly enhances the robustness of the chain network, eliminating the need to fine-tune any synaptic conductances, including those from HVC_I neurons (Long et al 2010). Why is it important that the model should NOT be sensitive to the connection strengths?
We thank the reviewer for the comment. While mathematical models of highly complex nonlinear biological processes can only approximate biological realism, the current network is, as it stands, the first sufficiently biologically realistic network model of HVC that explains sequence propagation. We did not include dendritic processes in our network, although doing so would add realistic dynamics, for several reasons. 1) The ion channels we integrated into the somatic compartment are pharmacologically characterized (Daou et al. 2013), but the intrinsic properties of the dendritic compartments of HVC neurons, and the cocktail of ion channels expressed there, are unknown. 2) We are able to generate realistic bursting in HVC<sub>RA</sub> neurons despite using a single compartment, and the main emphasis of this network is on the interactions between excitation and inhibition, the effects of ion channels in modulating sequence propagation, and so on. 3) The network model already incorporates thousands of ODEs governing the dynamics of the HVC neurons, so we did not want to add more complexity, especially since we do not know the biophysical properties of the dendritic compartments.
Therefore, our present focus is on somatic dynamics and the interaction between HVC<sub>RA</sub> and HVC<sub>INT</sub> neurons, but we acknowledge the importance of dendritic processes in enhancing network resiliency. Although we agree that adding dendritic processes improves robustness, we still think that somatic processes alone can offer insight into the sequential dynamics of the HVC network. While the network should be robust across a wide range of parameters, it is also essential that certain parameters act to filter out weaker signals, ensuring that only reliable, precise patterns of activity propagate. Hence, we deliberately made the HVC<sub>RA</sub>-to-HVC<sub>RA</sub> excitatory connections more sensitive (a narrow range of values) so that only strong, precise, and meaningful stimuli can propagate through the network, reflecting the high stereotypy and precision seen in song production.
First, the firing of HVC_I neurons is highly noisy and unreliable. HVC_I neurons fire spontaneous, random spikes under baseline conditions. During singing, their spike timing is imprecise and can vary significantly from trial to trial, with spikes appearing or disappearing across different trials. As a result, their inputs to HVC_RA neurons are inherently noisy. If the model relies on precisely tuned inputs from HVC_I neurons, the natural fluctuations in HVC_I firing would render the model non-functional. The authors should incorporate noisy HVC_I neurons into their model to evaluate whether this noise would render the model non-functional.
We acknowledge that under baseline and singing conditions, interneurons fire in a noisy and imprecise manner, although they exhibit time-locked episodes in their activity (Hahnloser et al. 2002; Kozhevnikov and Fee 2007). To mimic the biological variability of these neurons, our model does, in fact, include a stochastic current that reflects the intrinsic noise and random variation in interneuron firing seen in vivo (we highlight this in the Methods). To ensure the network is resilient to this randomness in interneuron firing, we introduced a stochastic input current of the form I<sub>noise</sub>(t) = σ·ξ(t), where ξ(t) is Gaussian white noise with zero mean and unit variance, and σ is the noise amplitude. This stochastic drive was introduced to every model neuron, and it mimics the fluctuations in synaptic input arising from random presynaptic activity and background noise. For values of σ within 1-5% of the mean synaptic conductance, the stochastic current has no effect on network propagation. For larger values of σ, the desired network activity was disrupted or halted. We now discuss this on page 22 of the manuscript.
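Read concretely, the noise term amounts to the following discretized update (a minimal sketch of our own; the leaky point neuron and all parameter values are placeholders, not the authors' model):

```python
import numpy as np

# Euler-Maruyama update with I_noise(t) = sigma * xi(t); the 1/sqrt(dt)
# factor gives a discretized Gaussian white-noise sample of unit variance.
dt, T = 0.01, 500.0                # ms
n = int(T / dt)
C, g_L, E_L = 1.0, 0.1, -65.0      # capacitance, leak, reversal (placeholders)
sigma = 0.05                       # noise amplitude (the paper scales it to
                                   # 1-5% of the mean synaptic conductance)
rng = np.random.default_rng(0)
V = np.full(n, E_L)
for t in range(n - 1):
    I_noise = sigma * rng.standard_normal() / np.sqrt(dt)
    V[t + 1] = V[t] + dt * (-g_L * (V[t] - E_L) + I_noise) / C
```

Sweeping sigma upward in such a scheme is the in-silico analogue of the robustness test described above: small sigma leaves propagation intact, larger sigma disrupts it.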
Second, Kosche et al. (2015) demonstrated that reducing inhibition by suppressing HVC_I neuron activity makes HVC_RA firing less sparse but does not compromise the temporal precision of the bursts. In this experiment, the local application of gabazine should have severely disrupted HVC_I activity. However, it did not affect the timing precision of HVC_RA neuron firing, emphasizing the robustness of the HVC timing circuit. This robustness is inconsistent with the predictions of the current model, which depends on finely tuned inputs and should, therefore, be vulnerable to such disruptions.
We thank the reviewer for the comment. The differences between the Kosche et al. (2015) findings and the predictions of our model arise from differences in the aspect of HVC function being modeled. Our model is more sensitive to inhibition, a mechanism we designed for achieving precise song patterning; this is a modeling simplification we adopted to capture specific characteristics of HVC function. Hence, the Kosche et al. (2015) findings do not invalidate the approach of our model, but rather highlight that HVC likely operates with several redundant mechanisms that together ensure temporal precision.
Third, the reliance on fine-tuning of HVC_RA connections becomes problematic if the model is scaled up to include groups of HVC_RA neurons forming a chain network, rather than the single HVC_RA neurons used in the current work. With groups of HVC_RA neurons, the summation of presynaptic inputs to each HVC_RA neuron would need to be precisely maintained for the model to function. However, experimental evidence shows that the HVC circuit remains functional despite perturbations, such as a few degrees of cooling, micro-lesions, or turnover of HVC_RA neurons. Such robustness cannot be accounted for by a model that depends on finely tuned connections, as seen in the current implementation.
As stated previously, our model of individual HVC<sub>RA</sub> neurons is a reductive model that focuses on understanding the mechanisms governing sequential neural activity. We agree that scaling the model to include many HVC<sub>RA</sub> neurons poses challenges, specifically concerning the summation of presynaptic inputs. However, our model can still be adapted to a larger network without requiring the level of fine-tuning currently needed. In fact, the current fine-tuning of synaptic connections reflects fundamental network mechanisms rather than a limitation in scaling to a larger network. Besides, one important feature of such a neural network is redundancy: even if some neurons or synaptic connections are impaired, other neurons or pathways can compensate, allowing activity propagation to remain intact.
The authors examined how altering the channel properties of neurons affects the activity in their model. While this approach is valid, many of the observed effects may stem from the delicate balancing required in their model for proper function. In the current model, HVC_X neurons burst as a result of rebound activity driven by the I_H current. Rebound bursts mediated by the I_H current typically require a highly hyperpolarized membrane potential. However, this mechanism would fail if the reversal potential of inhibition is higher than the required level of hyperpolarization. Furthermore, Mooney (2000) demonstrated that depolarizing the membrane potential of HVC_X neurons did not prevent bursts of these neurons during forward playback of the bird's own song, suggesting that these bursts (at least under anesthesia, which may be a different state altogether) are not necessarily caused by rebound activity. This discrepancy should be addressed or considered in the model.
In our HVC network model, one goal is to generate bursts in the HVC<sub>X</sub> neuron population. Since HVC<sub>X</sub> neurons in our model receive only inhibitory inputs from interneurons, we rely on inhibition followed by rebound bursts orchestrated by the I<sub>H</sub> and I<sub>CaT</sub> currents to achieve this goal. The interplay between the T-type Ca<sup>2+</sup> current and the H current in our model is fundamental to generating these bursts, as the two currents are sufficient to produce the desired behavior in the network. Because of this interplay, we do not need strong inhibition to generate rebound bursts: the T-type Ca<sup>2+</sup> conductance can be large enough to yield robust rebound bursting even when the degree of inhibition is not very strong. This is now highlighted on page 42 of the revised version.
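A stripped-down illustration of this interplay, using textbook-style Boltzmann gating and arbitrary values (our sketch, not the authors' equations): a hyperpolarizing step de-inactivates the T-type current and activates I<sub>h</sub>, and release produces a rebound depolarization.

```python
import numpy as np

# Post-inhibitory rebound from I_CaT + I_h in a single-compartment toy model.
def boltz(V, Vh, k):               # steady-state gating curve
    return 1.0 / (1.0 + np.exp((V - Vh) / k))

dt, T = 0.05, 1500.0               # ms
n = int(T / dt)
C, g_L, E_L = 1.0, 0.05, -65.0
g_T, E_Ca = 0.2, 120.0             # T-type Ca2+ (arbitrary values)
g_H, E_H = 0.02, -43.0             # H current (arbitrary values)

V = -65.0
hT = boltz(V, -81.0, 4.0)          # T-current inactivation; opens when hyperpolarized
rH = boltz(V, -75.0, 5.5)          # H-current activation; also favored by hyperpolarization
trace = np.empty(n)
for i in range(n):
    I_inj = -1.0 if 200.0 <= i * dt < 700.0 else 0.0   # hyperpolarizing (inhibitory) step
    mT = boltz(V, -57.0, -6.2)                         # fast T activation, instantaneous
    I_T = g_T * mT**2 * hT * (V - E_Ca)
    I_H = g_H * rH * (V - E_H)
    V += dt * (-g_L * (V - E_L) - I_T - I_H + I_inj) / C
    hT += dt * (boltz(V, -81.0, 4.0) - hT) / 30.0      # slow de-inactivation
    rH += dt * (boltz(V, -75.0, 5.5) - rH) / 190.0     # slow I_h activation (the sag)
    trace[i] = V
# After release at t = 700 ms, `trace` shows a rebound depolarization even
# though the cell received only hyperpolarizing input.
```

As in the authors' argument, a larger g_T makes the rebound robust even when the preceding inhibition is modest.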
Some figures contain direct copies of figures from published papers. It is perhaps a better practice to replace them with schematics if possible.
We deliberately kept the results of Mooney and Prather (2005) shown as is, in order to compare them with our model simulations and highlight the degree of resemblance. We believe that redrawing the Mooney and Prather (2005) results as schematics would not have the same impact, and similarly a schematic of the Hahnloser et al. (2002) results would not help much. However, if the reviewer still believes we should do so, we are happy to.
Reviewer #2 (Public review):
Summary:
In this paper, the authors use numerical simulations to try to understand better a major experimental discovery in songbird neuroscience from 2002 by Richard Hahnloser and collaborators. The 2002 paper found that a certain class of projection neurons in the premotor nucleus HVC of adult male zebra finch songbirds, the neurons that project to another premotor nucleus RA, fired sparsely (once per song motif) and precisely (to about 1 ms accuracy) during singing.
The experimental discovery is important to understand since it initially suggested that the sparsely firing RA-projecting neurons acted as a simple clock that was localized to HVC and that controlled all details of the temporal hierarchy of singing: notes, syllables, gaps, and motifs. Later experiments suggested that the initial interpretation might be incomplete: that the temporal structure of adult male zebra finch songs instead emerged in a more complicated and distributed way, still not well understood, from the interaction of HVC with multiple other nuclei, including auditory and brainstem areas. So at least two major questions remain unanswered more than two decades after the 2002 experiment: What is the neurobiological mechanism that produces the sparse precise bursting: is it a local circuit in HVC or is it some combination of external input to HVC and local circuitry? And how is the sparse precise bursting in HVC related to a songbird's vocalizations? The authors only investigate part of the first question, whether the mechanism for sparse precise bursts is local to HVC. They do so indirectly, by using conductance-based Hodgkin-Huxley-like equations to simulate the spiking dynamics of a simplified network that includes three known major classes of HVC neurons and such that all neurons within a class are assumed to be identical. A strength of the calculations is that the authors include known biophysically deduced details of the different conductances of the three major classes of HVC neurons, and they take into account what is known, based on sparse paired recordings in slices, about how the three classes connect to one another. One weakness of the paper is that the authors make arbitrary and not well-motivated assumptions about the network geometry, and they do not use the flexibility of their simulations to study how their results depend on their network assumptions. A second weakness is that they ignore many known experimental details such as projections into HVC from other nuclei, dendritic computations (the somas and dendrites are treated by the authors as point-like isopotential objects), the role of neuromodulators, and known heterogeneity of the interneurons. These weaknesses make it difficult for readers to know the relevance of the simulations for experiments and for advancing theoretical understanding.
Strengths:
The authors use conductance-based Hodgkin-Huxley-like equations to simulate spiking activity in a network of neurons intended to model more accurately songbird nucleus HVC of adult male zebra finches. Spiking models are much closer to experiments than models based on firing rates or on 2-state neurons.
The authors include information deduced from modeling experimental current-clamp data such as the types and properties of conductances. They also take into account how neurons in one class connect to neurons in other classes via excitatory or inhibitory synapses, based on sparse paired recordings in slices by other researchers. The authors obtain some new results of modest interest such as how changes in the maximum conductances of four key channels (e.g., A-type K+ currents or Ca-dependent K+ currents) influence the structure and propagation of bursts, while simultaneously being able to mimic accurately current-clamp voltage measurements.
Weaknesses:
One weakness of this paper is the lack of a clearly stated, interesting, and relevant scientific question to try to answer. In the introduction, the authors do not discuss adequately which questions recent experimental and theoretical work have failed to explain adequately, concerning HVC neural dynamics and its role in producing vocalizations. The authors do not discuss adequately why they chose the approach of their paper and how their results address some of these questions.
For example, the authors need to explain in more detail how their calculations relate to the works of Daou et al, J. Neurophys. 2013 (which already fitted spiking models to neuronal data and identified certain conductances), to Jin et al J. Comput. Neurosci. 2007 (which already discussed how to get bursts using some experimental details), and to the rather similar paper by E. Armstrong and H. Abarbanel, J. Neurophys 2016, which already postulated and studied sequences of microcircuits in HVC. This last paper is not even cited by the authors.
We thank the reviewer for this valuable comment, and we agree that we did not sufficiently clarify throughout the paper the utility of our model or how it advances our understanding of HVC dynamics and circuitry. To that end, we revised several parts of the manuscript and made sure to cite the papers mentioned and highlight their relevance and relatedness.
In short, and as mentioned to Reviewer 1, while several models of how sequence is generated within HVC have been proposed (Cannon et al., 2015; Drew & Abbott, 2003; Egger et al., 2020; Elmaleh et al., 2021; Galvis et al., 2018; Gibb et al., 2009a, 2009b; Hamaguchi et al., 2016; Jin, 2009; Long & Fee, 2008; Markowitz et al., 2015; Jin et al., 2007), all of them rely on intrinsic HVC circuitry to propagate sequential activity, on extrinsic feedback to advance the sequence, or on both. These models do not capture the complex details of spike morphology, do not include the right ionic currents, do not incorporate all classes of HVC neurons, or do not generate realistic firing patterns as seen in vivo. Our model is the first biophysically realistic model that incorporates all classes of HVC neurons and their intrinsic properties.
Our model does not challenge an existing hypothesis; rather, it is a distillation of the various models that have been proposed for the HVC network. We go over this in detail in the Discussion. We believe that the network model we developed provides a step forward in describing the biophysics of HVC circuitry, and it may shed new light on certain dynamics in the mammalian brain, particularly in the motor cortex and hippocampus, where precisely timed sequential activity is crucial. We suggest that temporally precise sequential activity may be a manifestation of neural networks comprised of chains of microcircuits, each containing pools of excitatory and inhibitory neurons, with local interplay among neurons of the same microcircuit and global interplay across microcircuits, and with structured inhibition as well as intrinsic properties synchronizing the neuronal pools and stabilizing timing within a firing sequence.
The authors' main achievement is to show that simulations of a certain simplified and idealized network of spiking neurons, which includes some experimental details but ignores many others, match some experimental results like current-clamp-derived voltage time series for the three classes of HVC neurons (although this was already reported in earlier work by Daou and collaborators in 2013), and simultaneously the robust propagation of bursts with properties similar to those observed in experiments. The authors also present results about how certain neuronal details and burst propagation change when certain key maximum conductances are varied. However, these are weak conclusions for two reasons. First, the authors did not do enough calculations to allow the reader to understand how many parameters were needed to obtain these fits and whether simpler circuits, say with fewer parameters and simpler network topology, could do just as well. Second, many previous researchers have demonstrated robust burst propagation in a variety of feed-forward models. So what is new and important about the authors' results compared to the previous computational papers?
A major novelty of our work is the integration of experimental data with detailed network models. While earlier works established robust burst propagation, our model uses realistic ion channel kinetics and feedback inhibition not only to reproduce experimental neural activity patterns but also to suggest prospective mechanisms for song sequence production in as biophysically grounded a way as possible. This aspect distinguishes our work from other feed-forward models, and we go over it in detail in the Discussion. However, the reviewer is right regarding the details of the calculations conducted for the fits; we will describe these in the Methods and throughout the manuscript in more detail.
As noted above, we believe that the network model we developed provides a step forward in describing the biophysics of HVC circuitry, and it may shed new light on precisely timed sequential dynamics in the mammalian brain, particularly in the motor cortex and hippocampus, where chains of microcircuits containing pools of excitatory and inhibitory neurons, with structured inhibition and intrinsic properties, could synchronize neuronal pools and stabilize timing within a firing sequence.
Also missing is a discussion, or at least an acknowledgment, of the fact that not all of the fine experimental details of undershoots, latencies, spike structure, spike accommodation, etc. may be relevant for understanding vocalization. While it is nice to know that some models can match these experimental details and produce realistic bursts, that does not mean that all of these details are relevant for the function of producing precise vocalizations. Scientific insights in biology often require exploring which of the many observed details can be ignored and especially identifying the few that are essential for answering some questions. As one example, if HVC-X neurons are completely removed from the authors' model, does one still get robust and reasonable burst propagation of HVC-RA neurons? While part of the nucleus HVC acts as a premotor circuit that drives the nucleus RA, part of HVC is also related to learning. It is not clear that HVC-X neurons, which carry out some unknown calculation and transmit information to area X in a learning pathway, are relevant for burst production and propagation of HVC-RA neurons, and so relevant for vocalization. Simulations provide a convenient and direct way to explore questions of this kind.
One key question to answer is whether the bursting of HVC-RA projection neurons is based on a mechanism local to HVC or is some combination of external driving (say from auditory nuclei) and local circuitry. The authors do not contribute to answering this question because they ignore external driving and assume that the mechanism is some kind of intrinsic feed-forward circuit, which they put in by hand in a rather arbitrary and poorly justified way, by assuming the existence of small microcircuits consisting of a few HVC-RA, HVC-X, and HVC-I neurons that somehow correspond to "sub-syllabic segments". To my knowledge, experiments do not suggest the existence of such microcircuits nor does theory suggest the need for such microcircuits.
Recent results showed a tight correlation between the intrinsic properties of neurons and features of song (Daou and Margoliash 2020; Medina and Margoliash 2024), where adult birds that exhibit similar songs tend to have similar intrinsic properties. While this is relevant, we acknowledge that not all details may be necessary for every aspect of vocalization, and future models could simplify the circuit, concentrating on core dynamics and excluding certain features while still providing insight into the primary mechanisms.
Regarding the question of whether HVC<sub>X</sub> neurons are relevant for burst propagation: our model includes these neurons as part of the network for completeness, and the reviewer is correct that the propagation of sequential activity in this model is primarily carried by HVC<sub>RA</sub> neurons in a feed-forward manner, but only if there is no perturbation to the HVC network. For example, we have shown how altering the intrinsic properties of HVC<sub>X</sub> neurons or interneurons disrupts sequence propagation. In other words, while HVC<sub>RA</sub> neurons are the key force carrying the chain forward, the interplay between excitation and inhibition in our network, as well as the intrinsic parameters of all classes of HVC neurons, are equally important in carrying the chain of activity forward. Thus, the stability of activity propagation necessary for song production depends on a finely balanced network of HVC neurons, with all classes contributing to the overall dynamics.
We agree with the reviewer, however, that a potential drawback of our model is its sole focus on local excitatory connectivity within HVC (Kornfeld et al., 2017; Long et al., 2010), while HVC neurons receive afferent excitatory connections (Akutagawa & Konishi, 2010; Nottebohm et al., 1982) that play significant roles in their local dynamics. For example, the excitatory inputs that HVC neurons receive from Uvaeformis may be crucial in initiating (Andalman et al., 2011; Danish et al., 2017; Galvis et al., 2018) or sustaining (Hamaguchi et al., 2016) the sequential activity. While we acknowledge this limitation, our main contribution in this work is biophysical insight into how the patterning activity in HVC is largely shaped by the intrinsic properties of individual neurons as well as by synaptic properties, where excitation and inhibition play a major role in enabling neurons to generate their characteristic bursts during singing. This holds irrespective of whether an external drive is injected into the microcircuits. We elaborate on this further in the Discussion of the revised version.
Another weakness of this paper is an unsatisfactory discussion of how the model was obtained, validated, and simulated. The authors should state as clearly as possible, in one location such as an appendix, what is the total number of independent parameters for the entire network and how parameter values were deduced from data or assigned by hand. With enough parameters and variables, many details can be fit arbitrarily accurately so researchers have to be careful to avoid overfitting. If parameter values were obtained by fitting to data, the authors should state clearly what the fitting algorithm was (some iterative nonlinear method, whose results can depend on the initial choice of parameters), what the error function used for fitting (sum of least squares?) was, and what data were used for the fitting.
The authors should also state clearly the dynamical state of the network, the vector of quantities that evolve over time. (What is the dimension of that vector, which is also the number of ordinary differential equations that have to be integrated?) The authors do not mention what initial state was used to start the numerical integrations, whether transient dynamics were observed and what were their properties, or how the results depended on the choice of the initial state. The authors do not discuss how they determined that their model was programmed correctly (it is difficult to avoid typing errors when writing several pages or more of a code in any language) or how they determined the accuracy of the numerical integration method beyond fitting to experimental data, say by varying the time step size over some range or by comparing two different integration algorithms.
We thank the reviewer again. The fitting process in our model occurred only at the first stage, where the synaptic parameters were fit to the Mooney and Prather as well as the Kosche results. No data were shared; we merely examined the figures in those papers, noted the amplitudes of the elicited currents, the magnitudes of DC-evoked excitations, and so on, and replicated these in our model. While this is suboptimal, it was better to start from these measurements than to simply adopt synaptic-current equations from the literature for other types of neurons (not from HVC, or even from songbirds) and integrate them into our network model. The number of ODEs that govern the dynamics of every model neuron is listed on page 10 of the manuscript as well as in the Appendix. Moreover, we describe the details of this fitting process in the revised version.
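For contrast, the iterative least-squares procedure the reviewer asks about would look something like the sketch below (hypothetical: the authors state they matched published traces by eye, so this illustrates the suggested method rather than what was done; the double-exponential synaptic current and all values are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a double-exponential synaptic current to a target trace by nonlinear
# least squares (iterative, sum-of-squares error, initial-guess dependent).
t = np.linspace(0.0, 50.0, 501)    # ms

def i_syn(p, t):
    g, tau_r, tau_d = p            # amplitude, rise and decay time constants
    return g * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

rng = np.random.default_rng(1)
target = i_syn([1.0, 1.0, 8.0], t) + 0.01 * rng.standard_normal(t.size)

fit = least_squares(lambda p: i_syn(p, t) - target, x0=[0.5, 0.5, 5.0])
print(fit.x)   # recovered (g, tau_r, tau_d); rerun from other x0 to test sensitivity
```

Reporting the error function, the optimizer, and the sensitivity to the initial guess, as the reviewer requests, would make the fitting stage reproducible.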
Also disappointing is that the authors do not make any predictions to test, except rather weak ones such as that varying a maximum conductance sufficiently (which might be possible by using dynamic clamps) might cause burst propagation to stop or change its properties. Based on their results, the authors do not make suggestions for further experiments or calculations, but they should.
We agree that making experimentally testable predictions is crucial for the advancement of the model. Our predictions include testing whether elimination of a class of neurons, such as HVC<sub>X</sub> neurons, disrupts activity propagation, which can be done through targeted neuron ablation. This can also be done by preventing rebound bursting in HVC<sub>X</sub> neurons through pharmacological block of the I<sub>H</sub> channels. Other predictions involve down-regulating particular ion channels (pharmacologically, through ion channel blockers) and testing which current is fundamental for song production (and there are plenty of tests based on our results, involving, for example, the SK current, the T-type Ca<sup>2+</sup> current, and the A-type K<sup>+</sup> current). We incorporated these into the Discussion of the revised manuscript to better demonstrate the model's applicability and to guide future research directions.
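In silico, several of these predictions reduce to zeroing one maximal conductance and re-running the simulation, as in this sketch (hypothetical: `run_network` stands in for the authors' full HVC simulation, and the parameter names and values are placeholders):

```python
# Illustrative in-silico knockouts of the currents named above;
# run_network() is a hypothetical stand-in for the full HVC simulation.
baseline = {"g_H": 0.02, "g_CaT": 0.2, "g_SK": 1.0, "g_A": 0.8}   # placeholder values

for channel in ("g_H", "g_CaT", "g_SK", "g_A"):
    knockout = {**baseline, channel: 0.0}       # block one current entirely
    # propagated = run_network(knockout)        # does the sequence survive?
```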
Main issues:
(1) Parameters are overly fine-tuned and often do not match known biology to generate chains. This fine-tuning does not reveal fundamental insights.
(1a) Specific conductances (e.g. AMPA) are finely tweaked to generate bursts, in part due to a lack of a dendritic mechanism for burst generation. A dendritic mechanism likely reflects the true biology of HVC neurons.
We acknowledge that the model does not include active dendritic processes, and we do not regard this as a limitation. In fact, our present approach, although simplified, is intended to focus on somatic mechanisms to identify the minimal conditions required for stable sequential propagation. We know HVC<sub>RA</sub> neurons possess thin, spiny dendrites that can contribute to burst initiation and shaping. Future models that include such nonlinear dendritic mechanisms would likely reduce the need for fine-tuning of specific conductances at the soma and consequently better match the known biology of HVC<sub>RA</sub> neurons.
In text: “While our simplified, somatically driven architecture enables better exploration of mechanisms for sequence propagation, future extensions of the model will incorporate dendritic compartments to more accurately reflect the intrinsic bursting mechanisms observed in HVC<sub>RA</sub> neurons.”
(1b) In this paper, microcircuits are simulated and then concatenated to make the HVC chain, resulting in no representations during silent gaps. This is out of touch with the known HVC function. There is no anatomical or functional evidence for microcircuits of the kind discussed in this paper or in the earlier and rather similar paper by Eve Armstrong and Henry Abarbanel (J. Neurophys 2016). One can write a large number of papers in which one makes arbitrary unconstrained guesses of network structure in HVC and, unless they reveal some novel principle or surprising detail, they are all going to be weak.
Although the model is composed of sequentially activated microcircuits, the gaps between each microcircuit’s output do not represent complete silence in the network. During these periods, other neurons such as those in other microcircuits may still exhibit bursting activity. Thus, what may appear as a 'silent gap' from the perspective of a given output microcircuit is, in fact, part of the ongoing background dynamics of the larger HVC neuron network. We fully acknowledge the reviewer's point that there is no direct anatomical or physiological evidence supporting the presence of microcircuits with this structure in HVC. Our intention was not to propose the existence of such a physical model but to use it as a computational simplification to make precise sequential bursting activity feasible given the biologically realistic neuronal dynamics used. Hence, our use of 'microcircuits' refers to a modeling construct rather than a structural hypothesis. Even if the network topology is hypothetical, we still believe that the temporal structuring suggested allows us to generate specific predictions for future work about burst timing and neuronal connections.
(1c) HVC interneuron discharge in the author's model is overly precise; addressing the observation that these neurons can exhibit noisy discharge. Real HVC interneurons are noisy. This issue is critical: All reviewers strongly recommend that the authors should, at the minimum in a revision, focus on incorporating HVC-I noise in their model.
We agree that capturing the variability in interneuron bursting is critical for biological realism. In our model, HVC interneurons receive stochastic background current that introduces variability in their firing patterns as observed in vivo. This variability is seen in our simulations and produces more biologically realistic dynamics while maintaining sequence propagation. We clarify this implementation in the Methods section.
(1d) Address the finding that Kosche et al show that even with reduced inhibition, HVCra neuronal timing is preserved; it is the burst pattern that is affected.
The differences between the Kosche et al. (2015) findings and the predictions of our model arise from differences in the aspect of HVC function we are modeling. Our model is more sensitive to inhibition, which is a designed mechanism for achieving precise song patterning. This is a modeling simplification we adopted to capture specific characteristics of HVC function.
We acknowledged this point in the discussion: “While the findings of Kosche et al. (2015) emphasize the robustness of the HVC timing circuit to inhibition, our model is more sensitive to inhibition, highlighting that HVC likely operates with several redundant mechanisms that together ensure temporal precision.”
(1e) The real HVC is robust to microlesions, cooling, and HVCra neuron turnover. The model in this paper relies on precise HVCra connectivity and is not robust.
Although our model is grounded in the biologically observed behavior of HVC neurons in vivo, we don’t claim that it fully captures the resilience seen in the HVC network. Instead, we see this as a simplified framework that helps us explore the basic principles of sequential activity. In the future, adding features like recurrent excitation, synaptic plasticity, or homeostatic mechanisms could make the model more robust.
(1f) There is unclear motivation for Ih-driven HVCx bursting, given past findings from the Mooney group.
Daou et al. (2013) observed that the sag seen in HVC<sub>X</sub> and HVC<sub>INT</sub> neurons in response to hyperpolarizing current pulses (Dutar et al. 1998; Kubota and Saito 1991; Kubota and Taniguchi 1998) was completely abolished after application of the drug ZD 7288 in all of the neurons tested, indicating that the sag in these HVC neurons is due to the hyperpolarization-activated inward current (I<sub>h</sub>). In addition, the sag and the rebound seen in these two neuron groups were larger for larger hyperpolarizing current pulses.
(1g) The initial conditions of the network and its activity under those conditions, as well as the possible reliance on external inputs, are not defined.
In our model, network activity is initiated through a brief, stochastic excitatory input to the HVC<sub>RA</sub> neurons of one microcircuit. This drive represents a simplified version of external input from upstream brain regions known to project to HVC, such as Nif and Uva. Modeling the activity of these upstream regions and their influence on HVC dynamics is ongoing work to be published in the future.
(1h) It has been known from the time of Hodgkin and Huxley how to include temperature dependences for neuronal dynamics so another suggestion is for the authors to add such dependences for the three classes of neurons and see if their simulation causes burst frequencies to speed up or slow down as T is varied.
We added this as a limitation to the Discussion section: “Our model was run at a fixed physiological temperature, but it has been well known since Hodgkin and Huxley that both ion channel activity and synaptic dynamics change with temperature. In future work, adding temperature scaling (e.g., Q10 factors) could help us explore how burst timing and sequence speed vary with temperature, and whether neural activity in HVC preserves its precision under different physiological conditions.”
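A minimal version of such scaling (our illustration, not from the manuscript) multiplies gating rates, or equivalently divides time constants, by a Q10 factor:

```python
# Q10 temperature scaling of gating kinetics (illustrative values).
def q10_factor(temp_c: float, t_ref: float = 40.0, q10: float = 3.0) -> float:
    return q10 ** ((temp_c - t_ref) / 10.0)

tau_ref = 0.5                             # ms, a gating time constant at t_ref
tau_cooled = tau_ref / q10_factor(37.0)   # cooling by 3 C slows the kinetics
print(tau_cooled)                         # ~0.70 ms
```

Applied to every gating variable, this predicts slower burst propagation under cooling, in the spirit of the classic HVC cooling experiments mentioned by the reviewers.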
(2) The scope of the paper and its objectives must be clearly defined. Defining the scope and providing caveats for what is not considered will help the reader contextualize this study with other work.
(2a) The paper does not consider the role of external inputs to HVC, which are very likely important for the capacity of the HVC chain to tile the entire song, including silent gaps.
The role of afferent input to HVC, particularly from nuclei such as Uva and Nif, is critical in shaping the timing and initiation of HVC sequences throughout the song, including silent intervals. In fact, external inputs are likely involved in more than just triggering sequences; they may also influence the continuity of activity across motifs. However, in this study we chose to focus on the intrinsic dynamics of HVC as a step toward understanding the internal mechanisms required for generating temporally precise sequences, and for this reason we used a simplified external input only to initiate activity in the chain.
(2b) The paper does not consider important dendritic mechanisms that almost certainly facilitate the all-or-none bursting behavior of HVC projection neurons. The authors need to mention and discuss that the current-clamped neuronal response - in which an electrode is inserted into the soma and then a constant current step is applied - bypasses dendritic structure and dendritic processing and so is an incomplete way to characterize a neuron's properties. In particular, claiming to fit current-clamp data accurately and then claiming that one now has a biophysically accurate network model, as the authors do, is greatly misleading.
While we addressed this in 1a, we do not suggest that our model is a fully accurate biophysical representation of the HVC network. Instead, we see it as a simplified framework that helps reveal how much of HVC’s sequential activity can be explained by somatic properties and synaptic interactions alone. Additional biological mechanisms, like dendritic processing, are likely to play an important role and should be explored in future work.
(2c) The introduction does not provide a clear motivation for the paper - what hypotheses are being tested? What is at stake in the model outcomes? It is not inherently informative to take a known biological representation and fine-tune a limited model to replicate that representation.
We explicitly added the hypotheses to the revised introduction.
(2d) There have been several published modeling efforts applied to the HVC chain (Seung, Fee, Long, Greenside, Jin, Margoliash, Abarbanel). These and others need to be introduced adequately, and it needs to be crystal clear what, if anything, the present study is adding to the canon.
While several influential models have explored how HVC might generate sequences, ranging from synfire chains to recurrent dynamics or externally driven sequences (e.g., Seung, Fee, Long, Greenside, Jin, Abarbanel, and others), these models could not capture the detailed dynamics observed in vivo. Our aim was to bridge a gap in the modeling literature by exploring how far biophysically grounded intrinsic properties and experimentally supported synaptic connections local to HVC can, on their own, produce temporally precise sequences. We have shown that these mechanisms are sufficient to generate such sequences, although some missing components (such as dendritic mechanisms or external inputs) might be needed to fully capture the complexity and robustness of HVC function.
(2e) The authors mention learning prominently in the abstract, summary, and introduction but this paper has nothing to do with learning. Most or all mentions of learning should be deleted since they are misleading.
We appreciate the reviewer’s observation; however, our intent in referencing learning was not to suggest that our model directly simulates learning processes, but rather to place HVC function within the broader context of song learning and production, where temporal sequencing plays a fundamental role. We agree that repeated references to learning may be misleading given that our current model does not incorporate plasticity, synaptic modification, or developmental changes. Hence, we have carefully revised the manuscript to remove or rephrase mentions of learning unless directly relevant to the context.
(3) Using the model for hypothesis generation and prediction of experimental results.
(3a) The utility of a model is to provide conceptual insight into how or why the real HVC functions as it does, or to predict outcomes in yet-to-be conducted experiments to help motivate future studies. This paper does not adequately achieve these goals.
We revised the Discussion to better emphasize the model's potential contributions and to point out experiments that could validate or challenge its predictions. These include dynamic clamp or ion-channel blockers targeting the A-type K<sup>+</sup> current in HVC<sub>RA</sub> neurons to assess its impact on burst precision, optogenetic disruption of inhibitory interneurons to observe changes in burst timing and sequence propagation, and pharmacological modulation of I<sub>h</sub> or I<sub>CaT</sub> in HVC<sub>X</sub> neurons and interneurons.
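As a rough illustration of the kind of in-silico manipulation such experiments correspond to, the sketch below "blocks" an adaptation conductance in a toy adaptive integrate-and-fire neuron by setting it to zero and compares spike counts. The adaptation term is a crude stand-in for a slow K<sup>+</sup> current, not the paper's A-type kinetics; all parameter values here are assumptions chosen for illustration:

```python
# Toy illustration of an in-silico "channel block": an adaptive
# integrate-and-fire neuron whose adaptation conductance stands in for a
# slow K+ current (NOT the paper's A-type kinetics). Setting g_adapt = 0
# mimics a pharmacological block.

def simulate(g_adapt, I=1.5, dt=0.1, T=200.0):
    v, a, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        dv = (-v - g_adapt * a + I) * dt   # leaky integration with adaptation
        da = (-a / 20.0) * dt              # adaptation decays with tau = 20
        v, a = v + dv, a + da
        if v >= 1.0:                       # threshold crossing: emit a spike
            spikes += 1
            v = 0.0                        # reset membrane potential
            a += 1.0                       # each spike increments adaptation
    return spikes

print("control:", simulate(g_adapt=0.5))   # adaptation slows firing
print("blocked:", simulate(g_adapt=0.0))   # block -> much higher firing rate
```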
(3b) Additionally, it can be interesting to conduct an experiment on an existing model; for example, what happens to the HVC<sub>RA</sub> chain in your model if you delete the HVC<sub>X</sub> neurons? What happens if you block NMDA receptors? Such an approach in a modeling paper can help motivate hypotheses and endow the paper with a sense of purpose.
We agree that running targeted experiments on our computational model, such as removing an HVC neuron population or blocking a synaptic receptor, can be a powerful way to generate new ideas and guide future experiments. While we did not include these specific tests in the current study, the model is well suited for this kind of exploration. For instance, removing interneurons could help us better understand their role in shaping the timing of HVC<sub>RA</sub> bursts. These are promising directions, and we now highlight in the Discussion how the model could be used to guide such experiments.
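A hypothetical sketch of what such an in-silico "lesion" might look like on a toy feedforward chain (the network size, weights, and lesioned subset are illustrative assumptions, not the paper's actual model or code): zero the outgoing weights of a few neurons and compare propagation with the intact chain.

```python
import numpy as np

# Hypothetical in-silico "lesion" on a toy feedforward chain: zero the
# outgoing weights of a subset of neurons and compare propagation.
N, STEPS, THRESHOLD = 30, 40, 1.0
W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] = 1.5                      # assumed feedforward chain weights

def onset_times(W):
    x = np.zeros(N); x[0] = 1.0            # external kick to the first neuron
    onsets = np.full(N, -1)
    onsets[0] = 0
    for t in range(1, STEPS):
        x = (W @ x > THRESHOLD).astype(float)
        onsets[(onsets == -1) & (x > 0)] = t
    return onsets

W_lesion = W.copy()
W_lesion[:, 10:13] = 0.0                   # "delete" the output of neurons 10-12

print(onset_times(W))         # intact chain: every neuron fires in order
print(onset_times(W_lesion))  # lesioned chain: propagation halts past neuron 10
```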
(4) Changes to the paper's organization may improve clarity.
(4a) Nearly all equations should be moved to an Appendix so that the main part of the paper can focus on the science: assumptions made, details of simulations, conclusions obtained, and their significance. The authors present many equations without discussion, which weakens the paper.
Equations moved to appendix.
(4b) There are many grammatical errors; e.g., verbs do not match their subjects in terms of being singular or plural. The authors need to run their manuscript through a grammar checker.
Done.
(4c) Many of the figures are poorly designed and should be substantially modified. E.g., in Figure 1B, too many colors are used, making it hard to grasp what is being plotted, and the colors are not needed. Figures 1C and 1D are entire figures taken from other papers, and there is no way a reader will be able to see or appreciate all the details when this figure is published on a single page. Figure 2 uses colors for dots that are almost identical; the colors could be avoided by using different symbols. Figure 5 fills an entire page but most of the figure conveys no information; there is no need to show the same details for all 120 neurons, so just show the top third of this figure. The same applies to Figure 7, where a lot of unnecessary information is included. In Figure 10, the bottom time series of spikes should be replaced with a time series of rates, since no useful information can be extracted from it.
Adjusted as requested.
(4d) Table 1 is long and largely uninteresting, and should be moved to an appendix.
Table 1 moved to appendix.
(4e) Many sentences are not carefully written, which greatly weakens the paper. As one typical example, the first sentence in the Discussion section reads: "In this study, we have designed a neural network model that describes [sic] zebra finch song production in the HVC." This is inaccurate: the model does not describe song production; it just explores some properties of one nucleus involved in song production. Just one or a few sentences like this would be acceptable, but there are so many sentences of this kind that the reader loses faith in the authors.
Thank you for raising this point; we revised the manuscript to improve the precision of the writing. We replaced the first sentence of the Discussion with: "In this study, we developed a biophysically realistic neural network model to explore how intrinsic neuronal properties and local connectivity within the songbird nucleus HVC may support the generation of temporally precise activity sequences associated with zebra finch song."
<H2>Make International Delivery Easy with GFS</H2>
With GFS, you can offer your customers realistic delivery promises backed by our reliable carrier network. Whether you’re targeting Europe, North America, Asia, or beyond, we make international delivery seamless so you can focus on growing your ecommerce business.
<CTA BUTTON>[Contact our team] today to discuss your international delivery strategy.</CTA BUTTON>
Content to round off the end of the page and provide a clear CTA for users.
PLEASE NOTE: Transit times are being impacted by delays caused by the Brexit changes. Brexit has impacted distribution services from the UK to Europe, as all shipments go through a formal customs clearance process. GFS are constantly reviewing the best options to limit any impact on service. Countries that are part of the EU are shown in bold.
We would add this to the very bottom of the page - we understand the importance, but it takes up valuable SEO space and interaction on the page.
TOP 50 global destinations listed. GFS do ship to all countries worldwide. To discuss further destinations, please get in touch. Transit times are delivery aims - this means the majority of historical shipments on this service have been delivered within this time window. Shipments may not be delivered within this time frame due to unforeseen delays, including customs. Transit times are from receipt of shipment at the UK hub of GFS International. Unless stated otherwise, all services are Delivered Duty Unpaid (DDU). For full terms and conditions, please contact the GFS International team at international@gfsdeliver.com
We would reframe this with more engaging, easy-to-read content; for example:
Reliable international delivery is essential for ecommerce businesses that want to grow beyond the UK. At GFS, we make global shipping simple, with trusted carrier partners, fast transit times, and one point of contact for your entire international logistics.
Our international transit times give you complete visibility on when your customers can expect their orders to arrive, helping you set accurate delivery promises and build customer confidence. With direct integration into multiple carriers and routes, GFS ensures your parcels are always shipped on the most efficient service available.
Transit times are delivery aims – this means the majority of historical shipments on this service have been delivered within this time window.
Shipments may not be delivered within this time frame due to unforeseen delays, including customs.
Transit times are from receipt of shipment at the UK hub of GFS International. Unless stated otherwise, all services are Delivered Duty Unpaid (DDU). (Put the full T&Cs at the bottom of the page.)
Click on the links below to find out more about our delivery in your country of choice:
Meta Description: Transit times are delivery aims - this means the majority of historical shipments on this service have been delivered within this time window.
Change to Meta Description: Explore GFS International’s reliable transit time table for 220+ global destinations. Get delivery aims from just 1-2 days (Express) to 12+ days (Economy) and ship with confidence.
Internal Links
URL to Link: https://gfsdeliver.com/international-trade-news/international-trade-news-for-april-2025/
2 links from the country names
URL to Link: https://gfsdeliver.com/parcels/gfs-international/ Can we link all the countries to their main landing page
URL to Link: https://gfsdeliver.com/news-and-blogs/gfs-international-christmas/ Can we link all the countries to their main landing page
URL to Link: https://gfsdeliver.com/news-and-blogs/gfs-international-final-posting-dates-for-christmas-delivery/ Can we link all the countries to their main landing page
Leverage our expertise and network for a seamless shipping experience.
Can we increase this content to a similar size as the Ireland page
In Sydney's scenario, it was the traditional and the familiar that were stressed. The authenticity of Hong Kong culture as lived through the consumption of dim sum, whether frozen or steaming hot, seemed to provide the emotional stability and the networks that were craved in immigrant life - as a balance against uncertainty, alienation and displacement - in short, a reassurance that one was still rooted
This leads to a very interesting segue into the tendency of immigrant groups to "turn inwards" and become extremely conservative in response to feelings of uprootedness.