Scope for improvement: They could benefit from adding an AI chatbot to the website to help people navigate past roadblocks in real time. The help button sits all the way at the bottom of the page and could be placed more prominently, since people who find a page unfriendly often exit before scrolling that far. Even using the Contact Us page, visitors have to wait on email replies or call queues, so an AI chatbot would provide a faster, smoother solution.
- Jan 2026
-
www.woodgreen.org
-
-
The Raymond Chang Foundation gives Homeward Bound a $2 million gift
Suggested Improvement: The pictures don't seem to have alt text. There is a feature to make text bigger or smaller and to support screen readers, and a feature to hide images from the screen, but we feel they could add an image-to-text feature too. If somebody uses the feature to hide images, they should still be able to read a description of what each image showed and how it relates to the topic.
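As a rough illustration of the kind of audit this suggestion implies, here is a short Python sketch (standard library only; the sample markup is made up, not taken from the site) that flags images shipped without alt text:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # An absent or empty alt attribute leaves screen-reader
            # users with no description of the image.
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

# Hypothetical page fragment for demonstration.
page = """
<img src="team.jpg" alt="Volunteers sorting food donations">
<img src="banner.jpg">
"""
audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # → ['banner.jpg']
```

A real audit would also distinguish decorative images (where `alt=""` is intentional) from informative ones, but this shows the basic check.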
-
Child Care
Suggested Improvement: The hyperlinks are added at the bottom in a smaller font size, when they could simply be embedded in the headings themselves, considering they are the same words. That would make the webpage more robust, with cleaner code and smoother transitions between pages.
-
They have great accessibility features, like an ADHD-friendly switch for the webpage and keyboard accessibility, with shortcuts that reach even the dropdown menus and buttons. The accessibility menu sits at the bottom right behind a blue accessibility icon and offers further features for individuals affected by visual or hearing impairments.
-
-
hypothes.is
-
Web Accessibility Annotations
Website Chosen - Shopify Accessibility
Shopify is a digital brand that prioritizes inclusive and accessible web design. Analyzing this site aligns with the goals of EID, as it demonstrates how accessibility principles are applied in real-world digital spaces.
Accessibility Annotations
Annotation 1 - Clear Heading Hierarchy
Feature: Proper use of headings
This page demonstrates good accessibility practice through a clear and logical heading structure. Properly organized headings allow users to navigate pages efficiently and understand how the content is structured.
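A heading-hierarchy check like the one this annotation describes can be sketched in a few lines of Python (standard library only; the sample page is hypothetical):

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Records heading levels and flags skipped levels (e.g. h2 -> h4)."""
    def __init__(self):
        super().__init__()
        self.levels = []
        self.skips = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            # Jumping more than one level down breaks the outline
            # that screen-reader users navigate by.
            if self.levels and level > self.levels[-1] + 1:
                self.skips.append((self.levels[-1], level))
            self.levels.append(level)

# Hypothetical markup for demonstration.
page = "<h1>Store</h1><h2>Products</h2><h4>Shipping</h4>"
outline = HeadingOutline()
outline.feed(page)
print(outline.skips)  # → [(2, 4)], a jump from h2 straight to h4
```

A page with the clean hierarchy praised here would produce an empty `skips` list.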
Annotation 2 - Plain and Inclusive Language
Feature: Simple, readable language
The language used throughout the page is clear, direct, and easy to understand. This helps overall accessibility by ensuring the information is understandable to a wide audience.
Annotation 3 - High Colour Contrast
Feature: Text contrast against background
The strong contrast between the text and background improves readability for users with low vision or colour blindness.
Annotation 4 - Descriptive Links
Feature: Meaningful link text
The links on the page use descriptive language rather than generic phrases like "click here". This is good accessibility practice because screen-reader users can understand the purpose of each link without relying on the surrounding context.
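A simple screen of link text for generic phrases could look like the following Python sketch (the phrase list and sample links are assumptions for illustration, not Shopify data):

```python
# Phrases that are meaningless when a screen reader lists links out of context.
GENERIC = {"click here", "here", "read more", "learn more", "more"}

def flag_generic_links(links):
    """Return link texts that fail to describe their destination."""
    return [text for text, _href in links
            if text.strip().lower() in GENERIC]

# Hypothetical (text, href) pairs for demonstration.
links = [("Click here", "/pricing"),
         ("View Shopify pricing plans", "/pricing")]
print(flag_generic_links(links))  # → ['Click here']
```

Both links point at the same page, but only the second one tells a screen-reader user where it goes.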
-
-
socialsci.libretexts.org
-
public interest
The general welfare or best interest of the public.
-
-
www.cbc.ca
-
Yet here we are — sleeping, eating and panicking if we can't find our phones. It's not just our own connection to these devices, but a whole infrastructure and system that needs to change
As of 2022, there were over 5 billion internet users worldwide, with Asia alone accounting for over half of them. In the same year, North America had the highest internet penetration rate, at up to 93%. The data show how much we rely on our devices.
-
-
www.nvidia.com
-
Automotive: From developing autonomous vehicles on NVIDIA DRIVE® to creating factory digital twins and retail experiences with NVIDIA Omniverse, our automotive solutions offer the performance and scalability to design, visualize, simulate, and create all types of future transportation.
Operable
The site can be navigated by keyboard and can be used through different input methods, for example scrolling the screen by hand when using a phone. Users also have sufficient time to read the content before it changes to different content.
-
Robust
This site uses clean HTML and works well with different browsers (Chrome, Safari, Firefox, etc.). The content is processed correctly by assistive tools, as I confirmed by testing with the Safari screen reader.
-
-
pressbooks.online.ucf.edu
-
The man who stole me as soon as I was born, recorded the births of all the infants which he claimed to be born his property, in a book which he kept for that purpose.
I want to highlight the word "stole". That is an extremely powerful use of language. The word "stole" invokes great emotion within me because it's the truth: not only was he stolen from his mother, but his childhood was also stolen.
-
Are you a Christian? This is the carrying out of practical Christianity; and there is no other. Christianity is practical in its very nature and essence. It is a life, springing out of a soul imbued with its spirit. Are you a friend of the missionary cause?
I love how the author pointed out a Christian argument, since there were Christian slave owners. It's interesting to see how the same religion could be used to justify slavery while simultaneously being used in the abolitionist cause.
-
-
ibighit.com
-
A
Hovering over pictures with the mouse shows text that tells users what the topic is. Those who can only access the headings may not know which picture corresponds to which topic.
-
Dark
Contrast can be conveniently adjusted for just the site without the user needing to go into settings.
-
ABOUT | BTS (PROFILE, DISCOGRAPHY, TOUR 2026) | TOMORROW X TOGETHER (PROFILE, DISCOGRAPHY, TOUR) | CORTIS (PROFILE, DISCOGRAPHY) | LEE HYUN
The headers lead to the same topics as the floating pictures. The headers can be accessed by either keyboard or mouse, while the pictures are only accessible via mouse.
-
-
www.biorxiv.org
-
eLife Assessment
Koch et al. describe a valuable novel methodology, SynSAC, to synchronise cells to analyse meiosis I or meiosis II or mitotic metaphase in budding yeast. The authors present convincing data to validate abscisic acid-induced dimerisation to induce a synthetic spindle assembly checkpoint (SAC) arrest that will be of particular importance to analyse meiosis II. The authors use their approach to determine the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II that will be of interest to the broader meiosis research community.
[Editors' note: this paper was reviewed by Review Commons.]
-
Reviewer #1 (Public review):
Summary:
These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve an MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data indicating the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.
The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system, but more work is needed to validate these results, particularly in normal cells.
Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.
-
Reviewer #2 (Public review):
Summary:
The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle assembly checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II and comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they also compared their datasets to kinetochores purified from cells arrested in prophase I and, with SynSAC, in mitosis).
Significance:
The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.
-
Reviewer #3 (Public review):
Summary:
In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner. Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed:
(1) In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?
(2) Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.
(3) The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.
(4) To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged-samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association to kinetochores.
(5) Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.
Significance:
Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.
-
Author response:
General Statements
We are delighted that all reviewers found our manuscript to be a technical advance by providing a much sought after method to arrest budding yeast cells in metaphase of mitosis or both meiotic metaphases. The reviewers also valued our use of this system to make new discoveries in two areas. First, we provided evidence that the spindle checkpoint is intrinsically weaker in meiosis I and showed that this is due to PP1 phosphatase. Second, we determined how the composition and phosphorylation of the kinetochore changes during meiosis, providing key insights into kinetochore function and providing a rich dataset for future studies.
The reviewers also made some extremely helpful suggestions to improve our manuscript, which we will now implement:
(1) Improvements to the discussion throughout the manuscript. The reviewers recommended that we focus our discussion on the novel findings of the manuscript and drew out some key points of interest that deserve more attention. We fully agree with this and we will address this in a revised version.
(2) We will add a new supplemental figure to help interpret the mass spectrometry data, to address Reviewer #3, point 4.
(3) We are currently performing an additional control experiment to address the minor point 1 from reviewer #3. Our experiment to confirm that SynSAC relies on endogenous checkpoint proteins was missing the cell cycle profile of cells where SynSAC was not induced for comparison. We will add this control to our full revision.
(4) In our full revision we will also include representative images of spindle morphology, as requested by Reviewer #1, point 2.
Description of the planned revisions
Reviewer #1 (Evidence, reproducibility and clarity):
These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve an MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data indicating the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.
The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.
Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. Overall, I have only a few minor suggestions.
We appreciate the reviewers’ support of our study.
(1) In wild-type cells, Pds1 levels are high during MI and AI, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.
The higher levels of Pds1 in meiosis I compared to meiosis II have been observed previously using immunofluorescence and live imaging[1–3]. Although the reasons are not completely clear, we speculate that there is insufficient time between the two divisions to re-accumulate Pds1 prior to separase re-activation.
We agree "slightly attenuated" was confusing and we have re-worded this sentence to read "Addition of ABA at the time of prophase release resulted in Pds1 (securin) stabilisation throughout the time course, consistent with delays in both metaphase I and II".
We do not believe that either anaphase I or II occurs in the presence of high Pds1. Western blotting represents the amount of Pds1 in the population of cells at a given time point. The time between meiosis I and II is very short even when treated with ABA. For example, in Figure 2B, spindle morphology counts show that the anaphase I peak is around 40% at its maximum (105 min), and around 40% of cells are in either metaphase I or metaphase II and will be Pds1-positive. In contrast, due to the better efficiency of meiosis II, anaphase II hardly occurs at all in these conditions, since anaphase II spindles (and the second nuclear division) are observed at very low frequency (maximum 10%) from 165 minutes onwards. Instead, metaphase II spindles partially or fully break down without undergoing anaphase extension. Taking the Pds1 levels from the western blot and the spindle data together leads to the conclusion that at the end of the time course these cells are biochemically in metaphase II, but unable to maintain a robust spindle. Spindle collapse is also observed in other situations where meiotic exit fails, and potentially reflects an uncoupling of the cell cycle from the programme governing gamete differentiation[3–5]. We will explain this point in a revised version while referring to representative images that form the evidence for this, as also requested by the reviewer below.
(2) The figures with data characterizing the system are mostly graphs showing time course of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (ie. In the Figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?
This is an excellent suggestion and will also help clarify the point above. We will provide images of cells at the different stages. For each timepoint, 100 cells were scored; we have already included this information in the figure legends.
(3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ, but this is an example of something that is SAC-dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC affect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.
In our view, the fact that SynSAC does not come from kinetochores is a major advantage as this allows the study of the kinetochore in an unperturbed state. It is also important to note that the canonical checkpoint components are all still present in the SynSAC strains, and perturbations in kinetochore-microtubule interactions would be expected to mount a kinetochore-driven checkpoint response as normal. Indeed, it would be interesting in future work to understand how disrupting kinetochore-microtubule attachments alters kinetochore composition (presumably checkpoint proteins will be recruited) and phosphorylation but this is beyond the scope of this work. In terms of the state at which we are arresting cells – this is a true metaphase because cohesion has not been lost but kinetochore-microtubule attachments have been established. This is evident from the enrichment of microtubule regulators but not checkpoint proteins in the kinetochore purifications from metaphase I and II. While this state is expected to occur only transiently in yeast, since the establishment of proper kinetochore-microtubule attachments triggers anaphase onset, the ability to capture this properly bioriented state will be extremely informative for future studies. We appreciate the reviewers’ insight in highlighting these interesting discussion points which we will include in a revised version.
Reviewer #1 (Significance):
These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data, indicating the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.
The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.
Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.
We appreciate the reviewer’s enthusiasm for our work.
Reviewer #2 (Evidence, reproducibility and clarity):
The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle assembly checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II and for comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they also compared their datasets to kinetochores purified from cells arrested in prophase I and, with SynSAC, in mitosis).
I have only a couple of minor comments:
(1) I would add the Suppl Figure 1A to main Figure 1A. What is really exciting here is the arrest in metaphase II, so I don't understand why the authors characterize metaphase I in the main figure, but not metaphase II. But this is only a suggestion.
This is a good suggestion, we will do this in our full revision.
(2) Line 197, the authors state: “...SynSAC induced a more pronounced delay in metaphase II than in metaphase I”. However, in lines 229 and 240 the authors talk about a "longer delay in metaphase I compared to metaphase II"... this seems to be a mix-up.
Thank you for pointing this out, this is indeed a typo and we have corrected it.
(3) The authors describe striking differences for both protein abundance and phosphorylation for key kinetochore associated proteins. I found one very interesting protein that seems to be very abundant and phosphorylated in metaphase I but not metaphase II, namely Sgo1. Do the authors think that Sgo1 is not required in metaphase II anymore? (Top hit in suppl Fig 8D).
This is indeed an interesting observation, which we plan to investigate as part of another study in the future. Indeed, data from mouse indicates that shugoshin-dependent cohesin deprotection is already absent in meiosis II in mouse oocytes[6], though whether this is also true in yeast is not known. Furthermore, this does not rule out other functions of Sgo1 in meiosis II (for example promoting biorientation). We will include this point in the discussion.
Reviewer #2 (Significance):
The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.
Reviewer #3 (Evidence, reproducibility and clarity):
In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner.
We are grateful to the reviewer for their support.
Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed:
(1) In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?
For many purposes the enrichment and extended time for sample collection is sufficient, as we demonstrate here. However, as pointed out by the reviewer below, the system can be improved by use of the 4A-RASA mutations to provide a stronger arrest (see our response below). We did not experiment with higher ABA concentrations or repeated addition since the very robust arrest achieved with the 4A-RASA mutant made this unnecessary.
(2) Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.
We agree that the 4A-RASA mutant is the best tool to use for the arrest and going forward this will be our approach. We collected the proteomics data and the data on the SynSAC mutant variants concurrently, so we did not know about the improved arrest at the time the proteomics experiment was done. Because very good arrest was already achieved with the unmutated SynSAC construct, we could not justify repeating the proteomics experiment which is a large amount of work using significant resources. However, we will highlight the potential of the 4A-RASA mutant more prominently in our full revision.
(3) The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.
We agree these are intriguing findings that highlight key differences in the wiring of the spindle checkpoint in meiosis and mitosis, with potential for future studies. However, currently we can only speculate as to the underlying cause. The effect of the RASA mutation in mitosis is unexpected and unexplained. However, the fact that the 4A-RASA mutation causes a stronger delay in meiosis I compared to mitosis can be explained by a greater prominence of PP1 phosphatase in meiosis. Indeed, our data (Figure 4A) show that the PP1 phosphatase Glc7 and its regulatory subunit Fin1 are highly enriched on kinetochores at all meiotic stages compared to mitosis.
We agree that the improved growth of the RVAF mutant is intriguing and points to a role of Aurora B-mediated phosphorylation, though previous work has not supported such a role [7].
We will include a discussion of these important points in a revised version.
(4) To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association to kinetochores.
While we agree with the reviewer that at first glance, normalising to no tag appears to be the most appropriate normalisation, in practice there is very low background signal in the no tag sample which means that any random fluctuations have a big impact on the final fold change used for normalisation. This approach therefore introduces artefacts into the data rather than improving normalisation.
To provide reassurance that our kinetochore immunoprecipitations are specific, and that the background (no tag) signal is indeed very low, we will provide a new supplemental figure showing the volcanos comparing kinetochore purifications at each stage with their corresponding no tag control.
It is also important to note that our experiment looks at relative changes of the same protein over time, which we expect to be relatively small in the whole cell lysate. We previously documented proteins that change in abundance in whole cell lysates throughout meiosis[8]. In this study, we found that relatively few proteins significantly change in abundance.
Our aim in the current study was to understand how the relative composition of the kinetochore changes and for this, we believe that a direct comparison to Dsn1, a central kinetochore protein which we immunoprecipitated is the most appropriate normalisation.
(5) Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.
We strongly agree with this point and we will re-frame the discussion to focus on the novel findings, as also raised by the other reviewers.
Finally, minor concerns are:
(1) Meiotic progression in SynSAC strains lacking Mad1, Mad2 or Mad3 is severely affected (Fig. 1D and Supp. Fig. 1), making it difficult to assess whether, as the authors state, the metaphase delays depend on the canonical SAC cascade. In addition, as a general note, graphs displaying meiotic time courses could be improved for clarity (e.g., thinner data lines, addition of axis gridlines and external tick marks, etc.).
We will generate the data to include a checkpoint mutant +/- ABA for direct comparison. We will take steps to improve the clarity of presentation of the meiotic timecourse graphs, though our experience is that uncluttered graphs make it easier to compare trends.
(2) Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.
Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed. On the other hand, if any of the 16 chromosomes mis-segregates in a given meiosis, this would result in one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.
(3) It is surprising that, although SAC activity is proposed to be weaker in metaphase I, the levels of CPC/SAC proteins seem to be higher at this stage of meiosis than in metaphase II or mitotic metaphase (Fig. 4A-B).
We agree, this is surprising and we will point this out in the revised discussion. We speculate that the challenge in biorienting homologs which are held together by chiasmata, rather than back-to-back kinetochores results in a greater requirement for error correction in meiosis I. Interestingly, the data with the RASA mutant also point to increased PP1 activity in meiosis I, and we additionally observed increased levels of PP1 (Glc7 and Fin1) on meiotic kinetochores, consistent with the idea that cycles of error correction and silencing are elevated in meiosis I.
(4) Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).
We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.
(5) Several typographical errors should be corrected (e.g., "Knetochores" in Fig. 4 legend, "250uM ABA" in Supp. Fig. 1 legend, etc.)
Thank you for pointing these out, they have been corrected.
Reviewer #3 (Significance):
Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.
Description of the revisions that have already been incorporated in the transferred manuscript
We have only corrected minor typos as detailed above.
Description of analyses that authors prefer not to carry out
The revisions we plan are detailed above. There are just two revisions we believe are either unnecessary or beyond the scope, both minor concerns of Reviewer #3. For clarity we have reproduced them, along with our justification below. In the latter case, the reviewer also acknowledged that further work in this direction is beyond the scope of the current study.
(2) Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.
Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed. On the other hand, if any of the 16 chromosomes mis-segregates in a given meiosis, this would result in one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.
(4) Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).
We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.
(1) Salah, S.M., and Nasmyth, K. (2000). Destruction of the securin Pds1p occurs at the onset of anaphase during both meiotic divisions in yeast. Chromosoma 109, 27–34.
(2) Matos, J., Lipp, J.J., Bogdanova, A., Guillot, S., Okaz, E., Junqueira, M., Shevchenko, A., and Zachariae, W. (2008). Dbf4-dependent CDC7 kinase links DNA replication to the segregation of homologous chromosomes in meiosis I. Cell 135, 662–678.
(3) Marston, A.L., Lee, B.H., and Amon, A. (2003). The Cdc14 phosphatase and the FEAR network control meiotic spindle disassembly and chromosome segregation. Developmental Cell 4, 711–726. https://doi.org/10.1016/S1534-5807(03)00130-8.
(4) Attner, M.A., and Amon, A. (2012). Control of the mitotic exit network during meiosis. Molecular Biology of the Cell 23, 3122–3132. https://doi.org/10.1091/mbc.E12-03-0235.
(5) Pablo-Hernando, M.E., Arnaiz-Pita, Y., Nakanishi, H., Dawson, D., del Rey, F., Neiman, A.M., and de Aldana, C.R.V. (2007). Cdc15 Is Required for Spore Morphogenesis Independently of Cdc14 in Saccharomyces cerevisiae. Genetics 177, 281–293. https://doi.org/10.1534/genetics.107.076133.
(6) El Jailani, S., Cladière, D., Nikalayevich, E., Touati, S.A., Chesnokova, V., Melmed, S., Buffin, E., and Wassmann, K. (2025). Eliminating separase inhibition reveals absence of robust cohesin protection in oocyte metaphase II. EMBO J 44, 5187–5214. https://doi.org/10.1038/s44318-025-00522-0.
(7) Rosenberg, J.S., Cross, F.R., and Funabiki, H. (2011). KNL1/Spc105 Recruits PP1 to Silence the Spindle Assembly Checkpoint. Current Biology 21, 942–947. https://doi.org/10.1016/j.cub.2011.04.011.
(8) Koch, L.B., Spanos, C., Kelly, V., Ly, T., and Marston, A.L. (2024). Rewiring of the phosphoproteome executes two meiotic divisions in budding yeast. EMBO J 43, 1351–1383. https://doi.org/10.1038/s44318-024-00059-8.
-
-
www.ttc.ca (TTC.ca)
-
1 Normal service 2 Normal service 4 Normal service 6 Normal service
There is another public annotation on this webpage on Hypothesis that says the reliance on colour makes it not accessible to colour blindness. Although I agree with this take, I also think the TTC homepage does not rely entirely on colour and symbols to convey its messages. They do use colour and images, but those symbols and colours are always accompanied by text.
-
See all accessibility alerts
Robust - Although there is a lot of information about accessibility on the homepage and the information is clear, I think there should be more additional features on the site. Especially with such a big emphasis on accessibility, they could have included features like screen magnifiers, screen readers, and alt text on the images.
-
TTC winter w
Visibility - The TTC web page uses high-contrast colours that help people with visual impairments identify new pages. The headings for each page are clearly outlined and set in a large font that is easy to understand.
-
-
www.scope.org.uk
-
It is tested every 6 months.
The website is tested every 6 months to make sure that everything is working properly and that it is as accessible as it can be. This is part of their "An Equal Future Strategy"
-
To help us understand the problem as quickly as we can, please tell us: the web address or title of the page where you found a problem, what the problem is, and what computer and software you use.
After giving you their contact information, they also tell you the best way to contact them so the process is efficient, and they welcome constructive feedback.
-
We are currently working to fix:
- our donate journey, as it has several accessibility issues
- header menus not closing when you tab off them
- heading structures
- some websites we link to may not be accessible
- some of our older PDF documents may not work correctly with screen readers
- some text does not meet our content accessibility requirements
The website is very clear on what they are trying to fix and they have their contact information attached in case any of these issues are an inconvenience.
-
For help with customising your experience using accessibility features already on your computer, or by installing extra assistive technologies, try these sites:
- AbilityNet's My Computer, My way
- The Web Accessibility Initiative's Better Web Browsing: Tips for Customising your Computer
The website has an accessibility tab where they clearly show you how to customize your experience to make it accessible for you.
-
-
a11y.canada.ca
-
Community Directory
In terms of the robust qualities of this website, we are able to interact using keyboard-only functions (e.g., scrolling down the page with the arrow keys) along with the trackpad and mouse.
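Keyboard-only operability of this kind boils down to mapping key presses to actions. The helper below is a hypothetical sketch (not code from a11y.canada.ca) showing one way arrow-key presses could be translated into scroll offsets before being wired to `window.scrollBy` in a browser.

```typescript
// Hypothetical sketch: map arrow-key presses to vertical scroll deltas.
// In a browser this would be attached to a keydown listener that calls
// window.scrollBy(0, scrollDeltaForKey(event.key)).
function scrollDeltaForKey(key: string, step: number = 40): number {
  if (key === "ArrowDown") return step;  // scroll down one step
  if (key === "ArrowUp") return -step;   // scroll up one step
  return 0;                              // other keys: no scrolling
}

console.log(scrollDeltaForKey("ArrowDown"));   // 40
console.log(scrollDeltaForKey("ArrowUp", 20)); // -20
```

Keeping the mapping a pure function like this makes the keyboard behaviour easy to test without a browser.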
-
Learn more about the Information and Communication Technology (ICT) accessibility standards, including the EN 301 549, which includes WCAG 2.1 level A and AA, when purchasing goods or services or designing a project, roles, and teams related to the Government of Canada.
With this text, there is no option for text to speech, making the perceivable aspect of this website poor for people with visual impairments. Additionally, the website does not allow manipulation of text sizes.
-
Accessibility standards
A good practice within the webpage is the hyperlinks displayed throughout; they are operable elements that provide usable interaction.
-
Digital accessibility in the Government of Canada
The bold, large text format allows for understandable content, which is clear and draws the eye towards the main topics. The different font sizes allow the body text and titles to be distinguished from one another.
-
-
blog.usablenet.com
-
Courts and regulators will focus less on how content was generated and more on what users experienced. AI-generated content will not receive special treatment or exemptions, particularly as enforcement timelines like those outlined in the European Accessibility Act (EAA) approach.
It is good that AI-generated content will not be treated differently, since many people fear being unable to match the capabilities of AI.
-
By 2026, AI will be the primary engine behind website creation. It will generate layouts, assemble components, produce content, and accelerate deployment at a pace no human team can match.
It is mostly good because AI acts like an assistant: developers can focus on other things instead of just solving repetitive tasks.
-
-
ibighit.com
-
Dark
Users can conveniently change the contrast of only the website without having to go into their own settings.
-
ABOUT | BTS (PROFILE, DISCOGRAPHY, TOUR 2026 BTS) | TOMORROW X TOGETHER (PROFILE, DISCOGRAPHY, TOUR) | CORTIS (PROFILE, DISCOGRAPHY) | LEE HYUN
Top headers are buttons that lead to more information. Anyone accessing the site using only a keyboard can still reach the topics. Floating pictures provide a point of visual interest as well as a visual reference, and can be used with only a mouse to access the same topics as the headers.
-
-
www.cbc.ca
-
European leaders warn of a ‘dangerous downward spiral’ as they stand united against U.S. President Donald Trump's threat to impose tariffs on countries that resist his bid to take over Greenland.
A closed caption option represents a good practice as it allows people who have trouble hearing a way to watch the video by reading subtitles.
-
Listen to this article
An audio version of this article represents a good practice as it allows people who have trouble seeing a good alternative to consume the information by listening.
-
-
www.youtube.com
-
Scooby-Doo 2: Monsters Unleashed (2004) Trailer #1 | Movieclips Classic Trailers. Rotten Tomatoes Classic Trailers, 1.91M subscribers. 267,411 views, Nov 1, 2019. Description: Check out the official Scooby-Doo 2: Monsters Unleashed (2004) Trailer starring Sarah Michelle Gellar! Starring: Freddie Prinze Jr., Sarah Michelle Gellar, Linda Cardellini. Directed By: Raja Gosnell. Synopsis: The Mystery Inc. gang must save Coolsville from an attack of past unmasked monsters brought to life by an evil masked figure trying to "unmask" the gang. [The description continues with playlist, subscription, and social media links.]
Positive: Text in different sections is broken up with proper spacing and font sizes to ensure users can distinguish separate parts of text from each other. Some text such as the description is also sectioned off into a box so it visually stands out from the title.
-
1.4KShareSave
Positive: Icons have designated alternative text that describes what they are used for. The like button has text saying "I like this", for example, letting people know both what the button is and what its function is. This text appears both when a cursor hovers over the button and when it is selected with the TAB key, which means people using a mouse or a keyboard can see the alt text.
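The pattern this annotation describes can be sketched as markup generation. The helper below is illustrative only (not YouTube's actual code): the `aria-label` attribute gives an icon-only button an accessible name for screen readers, `title` produces the hover tooltip, and `aria-hidden` keeps the decorative icon out of the accessibility tree.

```typescript
// Illustrative sketch (hypothetical helper, not YouTube's implementation):
// an icon-only button exposes its purpose via aria-label (read by screen
// readers) and title (shown as a tooltip on hover); the icon itself is
// hidden from assistive technology with aria-hidden.
function iconButton(action: string, label: string): string {
  return `<button type="button" aria-label="${label}" title="${label}" data-action="${action}">` +
    `<svg aria-hidden="true"></svg></button>`;
}

console.log(iconButton("like", "I like this"));
```

Because both `aria-label` and `title` carry the same text, mouse users and keyboard or screen-reader users all get the same description of the control.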
-
1.4K
Positive: Pressing the TAB key cycles through all of the buttons available on the webpage, with the arrow keys moving the page up and down, and the space bar selecting options highlighted when pressing TAB. Certain buttons also have keyboard shortcuts such as T for theatre mode. This makes options more accessible for people exclusively using a keyboard.
-
Trailers
Positive: YouTube offers automatic or custom closed captioning, allowing creators to upload their own subtitles for their videos or viewers to read automatically generated subtitles. This allows those who may not be able to hear the audio of the video to still experience it without compromise.
-
-
www.oiselle.com
-
Classic style, epic support
This image and the two on either side of it have alt text, but the images on the product pages do not!
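A gap like this is easy to check for mechanically. The sketch below is a hypothetical audit helper (not Oiselle's code): it scans an HTML snippet and counts `<img>` tags that lack a non-empty `alt` attribute, the exact problem noted on the product pages.

```typescript
// Hypothetical audit sketch: count <img> tags in an HTML snippet that
// have no non-empty alt attribute (the accessibility gap described above).
function imagesMissingAlt(html: string): number {
  const imgs = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgs.filter((tag) => !/\balt\s*=\s*"[^"]+"/i.test(tag)).length;
}

const page = '<img src="hero.jpg" alt="Runner in a sports bra"><img src="product.jpg">';
console.log(imagesMissingAlt(page)); // 1: the product image has no alt text
```

Real accessibility checkers (e.g. axe or Lighthouse) do this over the rendered DOM, but even a simple scan like this catches images that screen-reader users would hit as silent gaps.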
-
At
The accessibility icon in the bottom left corner provides assistance for people with visual (bigger text, bigger cursor, brightness, contrast, etc), hearing (read page), and cognitive impairments (dyslexic font)
-
At Oiselle, our vision is to help all women feel strong and free through the joy of running. Our mission is to design great product, build the sisterhood, and improve the sport. May we all experience the transformative power of sports and community.
There is clear contrast between foreground and background, and the banner above this has great colour contrast too.
-
-
www.scope.org.uk
-
It's like a charity shop, but online. Literally. Our online charity shop is an ever-changing collection of amazing homewares, electronics, fitness gear and so much more. Explore new products every day!
They clearly explain what the website is about, with pictures and high-contrast images, which makes it easier for people to access their content and understand what they are saying.
-
-
www.youtube.com
-
Synthesis of the International Surveys: Challenges and Perspectives for the French Education System
Executive Summary
Analysis of the international surveys (PISA, TIMSS, PIRLS) reveals a mixed picture for education in France.
While the country holds a position close to the OECD average in some domains, major warning signs are appearing, notably a 30-year downward trend in mathematics and an exceptionally strong correlation between social background and academic achievement.
The critical issues identified include:
• A marked decline in mathematics: barely 20% of students in 6ème (the first year of secondary school) master the concept of fractions on a number line.
• Persistent social inequalities: France is one of the countries where socio-economic background best predicts results.
• A deficit in psychosocial skills: French students show high anxiety, low perseverance, and a reduced sense of belonging at school.
• A deteriorated school climate: classroom disruptions are well above the international average.
Nevertheless, there are grounds for optimism, notably the resilience of primary-level reading scores despite the COVID-19 pandemic, and the success of targeted experiments (needs-based groups, structural reforms in Morocco and Estonia).
Scientific research recommends moving from mere diagnosis to action through rigorous experimentation and strengthened teacher training.
--------------------------------------------------------------------------------
I. Panorama des Évaluations Internationales
Le Conseil Scientifique de l'Éducation Nationale (CSEN) souligne l'importance d'utiliser ces enquêtes non comme des classements médiatiques, mais comme des outils de diagnostic et des leviers de transformation pédagogique.
1. Les trois piliers de l'évaluation
| Enquête | Organisme | Population cible | Domaines évalués | | --- | --- | --- | --- | | PISA | OCDE | Élèves de 15 ans | Culture mathématique, scientifique et compréhension de l'écrit (littératie). | | TIMSS | IEA | CM1 et 4ème | Mathématiques et Sciences. | | PIRLS | IEA | CM1 | Compréhension de l'écrit (processus de lecture). |
2. Distinction entre PISA et TIMSS/PIRLS
• PISA adopte un point de vue "extérieur" aux programmes scolaires, évaluant la capacité des jeunes à mobiliser leurs connaissances dans des situations de la vie réelle à la fin de la scolarité obligatoire.
• TIMSS et PIRLS sont plus étroitement liés aux programmes d'enseignement (curriculum) et se basent sur des niveaux scolaires spécifiques (Grade 4 et Grade 8).
--------------------------------------------------------------------------------
II. Analyse du Système Français : Constats et Diagnostics
1. Performances Académiques : Un déclin hétérogène
• Mathématiques : C'est le point noir du système français.
Les résultats en CM1 et 4ème montrent un décrochage net par rapport à la moyenne de l'Union européenne.
L'écart se creuse particulièrement en 4ème, avec seulement 3 % d'élèves très performants contre 11 % au niveau européen et 50 % à Singapour.
• Lecture : La situation est plus encourageante au primaire.
La France est l'un des rares pays à avoir progressé ou stabilisé ses scores en lecture (PIRLS 2021) malgré la crise sanitaire.
Cette résilience est attribuée à une fermeture limitée des écoles (comparée à d'autres pays) et potentiellement aux politiques de dédoublement des classes en éducation prioritaire.
• Compétences Numériques et Civiques : Dans les enquêtes ICILS (numérique) et ICCS (citoyenneté), la France obtient des résultats honorables, se situant dans la moyenne ou légèrement au-dessus, notamment en pensée informatique et en adhésion aux valeurs d'égalité.
2. Le Poids des Inégalités Sociales et de Genre
La France se distingue par une "surdétermination" des performances par l'origine sociale.
La variance expliquée par le milieu socio-économique est de 17-19 % en France, contre 13-14 % dans les autres pays de l'OCDE.
De plus, un "effet de genre" émerge dès le CP : les garçons prennent rapidement l'avantage sur les filles en mathématiques, un écart qui s'accentue jusqu'au CM1 (23 points d'écart en 2023).
3. Climat Scolaire et Facteurs Psychologiques
Les enquêtes mettent en lumière des fragilités comportementales spécifiques aux élèves français :
• Anxiété mathématique : Bien qu'en baisse, elle reste notable.
• Climat de classe : 29 % des élèves déclarent ne pas pouvoir travailler correctement en mathématiques à cause du bruit et du désordre (moyenne OCDE : 23 %).
• Esprit de croissance : Moins d'un élève sur deux en France pense que son intelligence peut se développer par l'effort.
• Coopération : La France obtient l'un des indices de coopération entre élèves les plus faibles de l'OCDE.
--------------------------------------------------------------------------------
III. Enseignements Internationaux : Modèles de Réussite
L'analyse de pays aux trajectoires variées permet d'identifier des facteurs clés de succès.
1. L'Estonie : Le modèle d'efficacité nordique
Le succès estonien repose sur :
• L'autonomie des établissements : Les écoles gèrent leur propre programme tout en respectant un socle national.
• La haute qualification des enseignants : Le Master est obligatoire pour un contrat permanent.
• L'éducation précoce : Un programme scolaire dès la maternelle (4-6 ans) incluant lecture et jeux.
• La transparence des données : Une évaluation externe régulière dont les résultats guident les améliorations locales.
2. Le Maroc : La réforme des "Écoles Pionnières"
Face à des résultats historiquement faibles, le Maroc a lancé un programme massif incluant :
• L'approche TARL (Teaching at the Right Level) : Remédiation intensive basée sur le niveau réel de l'élève plutôt que sur son âge.
• L'enseignement explicite : Des leçons structurées et scriptées pour soutenir les enseignants.
• Un encadrement de proximité : Les inspecteurs passent d'un rôle de contrôle à un rôle de coaching hebdomadaire.
• Résultats : Un gain d'impact de 0,9 écart-type en une seule année dans les écoles pilotes.
3. Le Portugal : La leçon de la continuité
L'expérience portugaise montre qu'une politique de "hautes attentes" (examens nationaux exigeants, programmes basés sur les contenus) a permis une remontée spectaculaire entre 2000 et 2015.
Inversement, l'assouplissement de ces exigences et le passage à une "flexibilité curriculaire" après 2016 ont coïncidé avec une baisse des résultats.
--------------------------------------------------------------------------------
IV. Leviers de Transformation pour la France
Le CSEN et les experts réunis suggèrent plusieurs pistes pour inverser la courbe du déclin.
1. Améliorer la maîtrise des fondamentaux
• Enseignement des fractions : Des interventions ciblées de 4 à 5 semaines, utilisant des logiciels de pointage numérique avec feedback immédiat, ont montré une progression spectaculaire des élèves de CM2 et 6ème.
• Enseignement de la compréhension : Contrairement aux pays anglophones, la France enseigne peu les stratégies explicites de compréhension (inférences, analyse de structure de texte).
Il est recommandé d'intégrer ces pratiques dès le primaire.
2. Renforcer la formation et l'attractivité
• Investissement : La part du PIB consacrée à l'éducation en France a baissé de près d'un point depuis les années 90 (représentant un manque à gagner de 25 milliards d'euros).
• Formation continue : Nécessité de former les enseignants aux apports des sciences cognitives pour identifier les "obstacles cognitifs" (erreurs de logique, recours excessif aux connaissances personnelles au détriment du texte).
3. Agir sur le climat et les compétences sociales
• Développer l'esprit de croissance : Encourager les élèves à voir l'erreur comme une étape d'apprentissage.
• Favoriser la coopération : Réduire la compétition pour améliorer le bien-être et la motivation, particulièrement chez les élèves les plus fragiles.
4. Utiliser l'évaluation comme diagnostic
L'évaluation ne doit pas être vécue comme une sanction.
Elle doit permettre de créer des "groupes de besoins" temporaires et ciblés, permettant de traiter les lacunes spécifiques (comme les automatismes de calcul) avant qu'elles ne deviennent insurmontables.
--------------------------------------------------------------------------------
Conclusion
Les enquêtes internationales confirment que le déclin n'est pas une fatalité.
Des pays aux contextes variés (Estonie, Maroc, Portugal) ont réussi à transformer leur système en s'appuyant sur la cohérence des programmes, la formation des acteurs et une culture de l'évaluation diagnostique.
Pour la France, l'enjeu réside dans sa capacité à traduire ces données scientifiques en pratiques de classe quotidiennes et en politiques publiques stables.
-
-
www.apple.com
-
List the potential benefits of becoming a client
-
-
www.americanyawp.com
-
I cannot understand how the Government sends a man out to fight us, as it did General Miles, and then breaks his word. Such a government has something wrong about it.
The government of the United States continued to break treaties it had signed many years earlier, and it continues to do the same today.
-
Hear me, my chiefs. I am tired; my heart is sick and sad. From where the sun now stands I will fight no more forever.
It seems that the government of the United States gave the Natives no choice but to give in to its demands.
-
-
Local file
-
Used in procedures where detailed patient cooperation is not required. • Beneficial in cases where muscle relaxation is desired. • Reduces pain response during the procedure.
① Used in procedures where detailed patient cooperation is not required.
② Beneficial in cases where muscle relaxation is desired.
③ Reduces pain response during the procedure. Moderate Sedation
-
Minimal sedation is characterized by an anxiolytic effect where the patient responds normally to verbal stimuli. This level of sedation is preferred for procedures that require patient cooperation.
① Minimal sedation is characterized by an anxiolytic effect where the patient responds normally to verbal stimuli.
② This level of sedation is preferred for procedures that require patient cooperation. This type of anesthesia provides no analgesia.
-
-
www.biorxiv.org
-
eLife Assessment
This work offers important insights into the protein CHD4's function in chromatin remodeling and gene regulation in embryonic stem cells, supported by extensive biochemical, genomic, and imaging data. The use of an inducible degron system allows precise functional analysis, and the datasets generated represent a key resource for the field. The revised study offers compelling evidence and makes a significant contribution to understanding CHD4's role in epigenetic regulation. This work will be of interest to the epigenetics and stem cell biology fields.
-
Reviewer #1 (Public review):
Summary:
The authors performed an elegant investigation to clarify the roles of CHD4 in chromatin accessibility and transcription regulation. In addition to the common mechanisms of action through nucleosome repositioning and opening of transcriptionally active regions, the authors considered here a new angle of CHD4 action through modulating the off rate of transcription factor binding. Their suggested scenario is that the action of CHD4 is context-dependent and is different for highly-active regions vs low-accessibility regions.
Strengths:
This is a very well-written paper that will be of interest to researchers working in this field. The authors performed large work with different types of NGS experiments and the corresponding computational analyses. The combination of biophysical measurements of the off-rate of protein-DNA binding with NGS experiments is particularly commendable.
Comments on revised version:
The authors have addressed all my points.
-
Reviewer #2 (Public review):
This study leverages acute protein degradation of CHD4 to define its role in chromatin and gene regulation. Previous studies have relied on KO and/or RNA interference of this essential protein and as such are hampered by adaptation, cell population heterogeneity, cell proliferation and indirect effects. The authors have established an AID2-based method to rapidly deplete the dMi-2 remodeller to circumvent these problems. CHD4 is gone within an hour, well before any effects on cell cycle or cell viability can manifest. This represents an important technical advance that, for the first time, allows a comprehensive analysis of the immediate and direct effect of CHD4 loss of function on chromatin structure and gene regulation.
Rapid CHD4 degradation is combined with ATAC-seq, CUT&RUN, (nascent) RNA-seq and single molecule microscopy to comprehensively characterise the impact on chromatin accessibility, histone modification, transcription and transcription factor (NANOG, SOX2, KLF4) binding in mouse ES cells.
The data support the previously developed model that high levels of CHD4/NuRD maintain a degree of nucleosome density to limit TF binding at open regulatory regions (e.g. enhancers). The authors propose that CHD4 activity at these sites is an important prerequisite for enhancers to respond to novel signals that require an expanded or new set of TFs to bind.
What I find even more exciting and entirely novel is the finding that CHD4 removes TFs from regions of limited accessibility to repress cryptic enhancers and to suppress spurious transcription. These regions are characterised by low CHD4 binding and have so far never been thoroughly analysed. The authors correctly point out that the general assumption that chromatin regulators act on regions where they seem to be concentrated (i.e. have high ChIP-seq signals) runs the risk of overlooking important functions elsewhere. This insight is highly relevant beyond the CHD4 field and will prompt other chromatin researchers to look into low level binding sites of chromatin regulators.
The biochemical and genomic data presented in this study is of high quality (I cannot judge single microscopy experiments due to my lack of expertise). This is an important and timely study that is of great interest to the chromatin field.
Comments on revised version:
All my comments below have been addressed in the revised version of the manuscript.
The revised manuscript provides a significant advance of our understanding of how the nucleosome remodeler CHD4 exerts its function. In particular, the findings suggest an intriguing role of CHD4 in TF removal at genomic regions where only low levels of CHD4 can be detected. In the future, it will be interesting to see if this activity is shared by other ATP-dependent nucleosome remodelers.
-
Reviewer #3 (Public review):
Summary:
In this manuscript, an inducible degron approach is taken to investigate the function of the CHD4 chromatin remodelling complex. The cell lines and approaches used are well thought out and the data appear to be of high quality. They show that loss of CHD4 results in rapid changes to chromatin accessibility at thousands of sites. At the majority of locations where changes are detected, chromatin accessibility is decreased; these sites are strongly bound by CHD4 prior to activation of the degron and so likely represent primary sites of action. Somewhat surprisingly, while chromatin accessibility is reduced at these sites, transcription factor occupancy is little changed. Following CHD4 degradation, occupancy of the key pluripotency transcription factors NANOG and SOX2 increases at many locations genome-wide, and at many of these sites chromatin accessibility increases. These represent important new insights into the function of CHD4 complexes.
Strengths:
The experimental approach is well suited to providing insight into a complex regulator such as CHD4. The data generated to characterise how cells respond to loss of CHD4 is of high quality. The study reveals major changes in transcription factor occupancy following CHD4 depletion.
Weaknesses:
The main weakness can be summarised as relating to the fact that the authors favour the interpretation that all rapid changes following CHD4 degradation occur as a direct effect of the loss of CHD4 activity. The possibility that rapid indirect effects arise does not appear to have been given sufficient consideration. This is especially pertinent where effects are reported at sites where CHD4 occupancy is initially very low (e.g. sites where accessibility is gained, in comparison to sites where chromatin accessibility is lost). The revised discussion acknowledges that rapid indirect effects cannot be excluded.
-
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public review)
(1) It might be good to further discuss potential molecular mechanisms for increasing the TF off rate (what happens at the mechanistic level).
This is now expanded in the Discussion
(2) To improve readability, it would be good to make consistent font sizes on all figures to make sure that the smallest font sizes are readable.
We have normalised figure text as much as is feasible.
(3) upDARs and downDARs - these abbreviations are defined in the figure legend but not in the main text.
We have removed references to these terms from the text and included a definition in the figure legend.
(4) Figure 3B - the on-figure legend is a bit unclear; the text legend does not mention the meaning of "DEG".
We have removed this panel as it was confusing and did not demonstrate any robust conclusion.
(5) The values of apparent dissociation rates shown in Figure 5 are a bit different from values previously reported in the literature (e.g., see Okamoto et al., 2023, PMC10505915). Perhaps the authors could comment on this. Also, it would be helpful to add the actual equation that was used for the curve fitting to determine these values to the Methods section.
We have included an explanation of the curve fitting equation in the Methods as suggested.
The apparent dissociation rate observed is a sum of multiple rates of decay: the true dissociation rate (k<sub>off</sub>), signal loss caused by photobleaching (k<sub>pb</sub>), and signal loss caused by defocusing/tracking error (k<sub>tl</sub>).
k<sub>off</sub><sup>app</sup> = k<sub>off</sub>+ k<sub>pb</sub> + k<sub>tl</sub>
We are drawing conclusions about relative changes in k<sub>off</sub><sup>app</sup> upon CHD4 depletion, not about the absolute magnitude of the true k<sub>off</sub> or TF residence times. Our conclusions extend to the true k<sub>off</sub> on the assumption that k<sub>pb</sub> and k<sub>tl</sub> are equal across all samples imaged, due to identical experimental conditions and analysis. k<sub>pb</sub> and k<sub>tl</sub> vary hugely across experimental set-ups, especially with different laser powers, so other k<sub>off</sub> or k<sub>off</sub><sup>app</sup> values reported in the literature would be expected to differ from ours. Time-lapse experiments or independent determination of k<sub>pb</sub> (and k<sub>tl</sub>) would be required to make any statements about absolute values of k<sub>off</sub>.
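The rate decomposition described here can be sketched numerically. In the minimal example below (all rate values are made up for illustration, not taken from the study), a single-exponential fit to a noise-free survival curve recovers the apparent rate, and the true dissociation rate follows only after subtracting independently measured photobleaching and tracking-loss rates:

```python
import numpy as np

# Illustrative rates (s^-1); not values from the study.
k_off, k_pb, k_tl = 0.02, 0.05, 0.01
k_app = k_off + k_pb + k_tl          # apparent rate: sum of all loss processes

t = np.linspace(0, 60, 121)          # imaging time points (s)
survival = np.exp(-k_app * t)        # fraction of bound molecules surviving to t

# Single-exponential fit via linear regression on log(survival):
# log S(t) = -k_app * t, so the fitted slope estimates -k_app.
slope, _ = np.polyfit(t, np.log(survival), 1)
k_fit = -slope

# k_fit recovers k_app, not the true k_off; k_pb and k_tl must be
# determined independently (e.g. by time-lapse imaging) before subtracting.
k_off_est = k_fit - (k_pb + k_tl)
```

This makes the reviewers' point concrete: without independent estimates of k_pb and k_tl, only relative changes in k_app between conditions are interpretable.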
(6) Regarding the discussion about the functionality of low-affinity sites/low accessibility regions, the authors may wish to mention the recent debates on this (https://www.nature.com/articles/s41586-025-08916-0; https://www.biorxiv.org/content/10.1101/2025.10.12.681120v1).
We have now included a discussion of this point and referenced both papers.
(7) It may be worth expanding figure legends a bit, because the definitions of some of the terms mentioned on the figures are not very easy to find in the text.
We have endeavoured to define all relevant terms in the figure legends.
Reviewer #2 (Public review):
(1) Figure 2 shows heat maps of RNA-seq results following a time course of CHD4 depletion (0, 1, 2 hours...). Usually, the red/blue colour scale is used to visualise differential expression (fold-difference). Here, genes are coloured red or blue even at the 0-hour time point. This confused me initially until I discovered that instead of fold-difference, a z-score is plotted. I do not quite understand what it means when a gene that is coloured blue at the 0-hour time point changes to red at a later time point. Does this always represent an upregulation? I think this figure requires a better explanation.
The heatmap displays z-scores, meaning expression for each gene has been centred and scaled across the entire time course. As a result, time zero is not a true baseline; it simply shows whether the gene's expression at that moment is above or below its own mean. A transition from blue to red therefore indicates that the gene's expression increases relative to its overall average, which typically corresponds to upregulation, but it doesn't directly represent fold-change from the 0-hour time point. We have now included a brief explanation of this in the figure legend to make this point clear.
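The per-gene scaling being described can be sketched as follows (a minimal illustration with made-up expression values; gene behaviours and time points are hypothetical, not data from the study):

```python
import numpy as np

# Rows = genes, columns = time points (0h, 1h, 2h, 4h); made-up values.
expr = np.array([
    [100.0, 120.0, 150.0, 180.0],   # gene steadily upregulated
    [ 80.0,  75.0,  60.0,  40.0],   # gene steadily downregulated
])

# Z-score each gene across the whole time course: centre on the gene's own
# mean and divide by its standard deviation. The 0h column is therefore not
# a baseline of zeros; it shows where 0h sits relative to the gene's mean.
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# For the upregulated gene, 0h scores below its mean (blue) and 4h above it
# (red), even though no fold-change relative to 0h has been computed.
```

This is why a gene can appear blue at 0 hours: its starting expression is simply below its own time-course average.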
(2) Figure 5D: NANOG, SOX2 binding at the KLF4 locus. The authors state that the enhancers 68, 57, and 55 show a gain in NANOG and SOX2 enrichment "from 30 minutes of CHD4 depletion". This is not obvious to me from looking at the figure. I can see an increase in signal from "WT" (I am assuming this corresponds to the 0 hours time point) to "30m", but then the signals seem to go down again towards the 4h time point. Can this be quantified? Can the authors discuss why TF binding seems to increase only temporarily (if this is the case)?
We have edited the text to more accurately reflect what is going on in the screen shot. We have also replaced “WT” with “0” as this more accurately reflects the status of these cells.
(3) There is no real discussion of HOW CHD4/NuRD counteracts TF binding (i.e. by what molecular mechanism). I understand that the data does not really inform us on this. Still, I believe it would be worthwhile for the authors to discuss some ideas, e.g., local nucleosome sliding vs. a direct (ATP-dependent?) action on the TF itself.
We now include more speculation on this point in the Discussion.
Reviewer #3 (Public review):
The main weakness can be summarised as relating to the fact that authors interpret all rapid changes following CHD4 degradation as being a direct effect of the loss of CHD4 activity. The possibility that rapid indirect effects arise does not appear to have been given sufficient consideration. This is especially pertinent where effects are reported at sites where CHD4 occupancy is initially low.
We acknowledge that we cannot definitively say any effect is a direct consequence of CHD4 depletion and have mitigated statements in the Results and Discussion.
Reviewing Editor Comments:
I am pleased to say all three experts had very complementary and complimentary comments on your paper - congratulations. Reviewer 3 does suggest toning down a few interpretations, which I suggest would help focus the manuscript on its greater strengths. I encourage a quick revision to this point, which will not go back to reviewers, before you request a version of record. I would also like to take this opportunity to thank all three reviewers for excellent feedback on this paper.
As advised we have mitigated the points raised by the reviewers.
Reviewer #2 (Recommendations for the authors):
p9, top: The sentence starting with "Genes increasing in expression after four hours...." is very difficult to understand and should be rephrased or broken up.
We agree. This has been completely re-written.
Reviewer #3 (Recommendations for the authors):
Sites of increased chromatin accessibility emerge more slowly than sites of lost chromatin accessibility. In Figure 1D there is a little increase in accessibility at 30 min, but a more noticeable decrease at 30 min. The sites of increased accessibility also have lower absolute accessibility than observed at locations where accessibility is lost. This raises the possibility that the sites of increased accessibility represent rapid but indirect changes occurring following loss of CHD4. Consistent with this, enrichment for CHD4 and MBD3 by CUT&Tag is far higher at sites of decreased accessibility. The low level of CHD4 occupancy observed at sites where accessibility increases may not be relevant to the reason these sites are affected. Such small enrichments can be observed when aligning to other genomic features. The authors interpret their findings as indicating that low occupancy of CHD4 exerts a long-lasting repressive effect at these locations. This is one possible explanation; however, an alternative is that these effects are indirect, perhaps driven by the very large increase in TF binding that is observed following CHD4 degradation and which appears to occur at many locations regardless of whether CHD4 is present.
The reviewer is right to point out that we don’t know what is direct and what is indirect. All we know is that changes happen very rapidly upon CHD4 depletion. The changes in standard ATAC-seq signal appear greater at the sites showing decreased accessibility than those increasing, however the starting points are very different: a small increase from very low accessibility will likely be a higher fold change than a more visible decrease from very high accessibility (Fig. 1D). In contrast, Figure 6 shows a more visible increase in Tn5 integrations at sites increasing in accessibility at 30 minutes than the change in sites decreasing in accessibility at 30 minutes. We therefore disagree that the sites increasing in accessibility are more likely to be indirect targets. In further support of this, there is a rapid increase in MNase resistance at these sites upon MBD3 reintroduction (Fig. 6I), possibly indicating a direct impact of NuRD on these sites.
Substantial changes in Nanog and SOX2 binding are observed across the time course. These changes are very large, with 43k or 78k additional sites detected. How is this possible? Does the amount of these TFs present in cells change? The argument that transient occupancy of CHD4 acts to prevent TFs binding to what is likely to be many hundreds of thousands of sites (if the data for Nanog and SOX2 are representative of other transcription factors such as KLF4) seems unlikely.
The large number of different sites identified gaining TF binding is likely to be a reflection of the number of cells being analysed: within the 10<sup>5</sup>-10<sup>6</sup> cells used for a Cut&Run experiment we detect many sites gaining TF binding. In individual cells we agree it would be unlikely for that many sites to become bound at the same time. We detect no changes in the amounts of Nanog or Sox2 in our cells across 4 hour CHD4 depletion time course. However, we maintain that low frequency interactions of CHD4 with a site can counteract low frequency TF binding and prevent it from stimulating opening of a cryptic enhancer.
While increased TF binding is observed at sites of gained accessibility, the changes in TF occupancy at the lost sites do not progress continuously across the time course. In addition, the changes in occupancy are small in comparison to those observed at the gained sites. The text comments on an increase in SOX2 and Nanog occupancy at 30 min, but there is either no change or a loss by 4 hours. It's difficult to know what to conclude from this.
At sites losing accessibility the enrichment of both Nanog and Sox2 increases at 30 minutes. We suspect this is due to the loss of CHD4’s TF-removal activity. Thereafter the two TFs show different trends: Nanog enrichment then decreases again, probably due to the decrease in accessibility at these sites. Sox2, by contrast, does not change very much, possibly due to its higher pioneering ability. It is true that the amounts of change are very small here, however Cut&Run was performed in triplicate and the summary graphs are plotted with standard error of the mean (which is often too small to see), demonstrating that the detected changes are highly significant. (We neglected to refer to the SEM in our figure legends: this has now been corrected.) At sites where CHD4 maintains chromatin compaction, the amount of transcription factor binding goes from zero or nearly zero to some finite number, hence the fold change is very large. In contrast the changes at sites losing accessibility starts from high enrichment so fold changes are much smaller.
Changes in the diffusive motion of tagged TFs are measured. The data are presented as an average of measurements of individual TFs. What might be anticipated is that subpopulations of TFs would exhibit distinct behaviours. At many locations, occupancy of these TFs is presumably unchanged. At 1 hour, many new sites are occupied, and these would represent a subpopulation with high residence. A small population of TFs would be subject to distinct effects at the sites where accessibility reduces at the one-hour time point. The analysis presented fails to distinguish populations of TFs exhibiting altered mobility consistent with the proportion of TFs showing altered binding.
We agree that there are likely subpopulations of TFs exhibiting distinct binding behaviours, and our modality of imaging captures this, but to distinguish subpopulations within this would require a lot more data.
However, there is no reason to believe that the TF binding at the new sites being occupied at 1 hr would have a difference in residence time to those sites already stably bound by TFs in the wildtype, i.e. that they would exhibit a different limitation to their residence time once bound compared to those sites. We do capture more stably bound trajectories per cell, but that’s not what we’re reporting on - it’s the dissociation rate of those that have already bound in a stable manner at sites where TF occupancy is detected also by ChIP.
The analysis of transcription shown in Figure 2 indicates that high-quality data has been obtained, showing progressive changes to transcription. The linkage of the differentially expressed genes to chromatin changes shown in Figure 3 is difficult to interpret. The curves showing the distance distribution for increased or decreased DARs are quite similar for up- and down-regulated genes. The frequency density for gained sites is slightly higher, but not as much higher as would be expected, given these sites are c. 6-fold more abundant than the sites with lost accessibility. The data presented do not provide a compelling link between the CHD4-induced chromatin changes and changes to transcription; the authors should consider revising to accommodate this. It is possible that much of the transcriptional response even at early time points is indirect. This is not unprecedented. For example, degradation of SOX2, a transcriptional activator, results in both repression and activation of similar numbers of genes (https://pmc.ncbi.nlm.nih.gov/articles/PMC10577566/).
We agree that these figures do not provide a compelling link between the observed chromatin changes and gene expression changes. That 50K increased sites are, on average, located farther away from misregulated genes than are the 8K decreasing sites highlights that this is rarely going to be a case of direct derepression of a silenced gene, but rather distal sites could act as enhancers to spuriously activate transcription. This would certainly be a rare event, but could explain the low-level transcriptional noise seen in NuRD mutants. We have edited the wording to make this clearer.
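The distance distributions at issue rest on a simple computation: for each differentially accessible region (DAR), find the distance to the nearest misregulated gene. A minimal sketch of that step, using made-up single-chromosome coordinates rather than data from the study:

```python
import numpy as np

# Hypothetical 1-D coordinates on one chromosome (illustrative only):
# midpoints of DARs and TSS positions of misregulated genes.
dar_mid = np.array([1_500, 12_000, 55_000, 90_000])
tss = np.array([2_000, 40_000, 100_000])   # must be sorted ascending

# For each DAR midpoint, locate the flanking TSSs by binary search
# and take the smaller of the two distances.
idx = np.searchsorted(tss, dar_mid)
left = tss[np.clip(idx - 1, 0, len(tss) - 1)]
right = tss[np.clip(idx, 0, len(tss) - 1)]
dist = np.minimum(np.abs(dar_mid - left), np.abs(dar_mid - right))
# dist -> [500, 10000, 15000, 10000] (bp)
```

A histogram (or kernel density) of `dist` for gained versus lost DARs gives the kind of frequency-density curves the reviewer is comparing.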
The model presented in Figure 7 includes distinct roles at sites that become more or less accessible following inactivation of CHD4. This is perplexing as it implies that the same enzymes perform opposing functions at some of the different sites where they are bound.
Our point is that it does the same thing at both kinds of sites, but the nature of the sites means that the consequences of CHD4 activity will be different. We have tried to make this clear in the text.
At active sites, it is clear that CHD4 is bound prior to activation of the degron and that chromatin accessibility is reduced following depletion. Changes in TF occupancy are complex, perhaps reflecting slow diffusion from less accessible chromatin and a global increase in the abundance of some pluripotency transcription factors such as SOX2 and Nanog that are competent for DNA binding. The link between sites of reduced accessibility and transcription is less clear.
At the inactive sites, the increase in accessibility could be driven by transcription factor binding. There is very little CHD4 present at these sites prior to activation of the degron, and TF binding may induce chromatin opening, which could be considered a rapid but indirect effect of the CHD4 degron. The link to transcription is not clear from the data presented, but it would be anticipated that in some cases it would drive activation.
We acknowledge these points and have indicated this possibility in the Results and the Discussion.
No analysis is performed to identify binding sequences enriched at the locations of decreased accessibility. This could potentially define transcription factors involved in CHD4 recruitment or that cause CHD4 to function differently in different contexts.
HOMER analyses failed to provide any unique insights. The sites going down are highly accessible in ES cells: they have TF binding sites that one would expect in ES cells. The increasing sites show an enrichment for G-rich sequences, which reflects the binding preference of CHD4.
-
-
3cslab.notion.site
-
Engelbart's Core Question
This was Engelbart's driving question. Not just "How do we work faster?" but "How do we get better at the process of getting better?" It's a meta-question that shifts our focus from outputs to learning systems.
-
Douglas Engelbart
Learn more about Douglas Engelbart from the Engelbart Framework Annotation Project and his paper: Augmented Human Intellect: A Conceptual Framework
-
-
human.libretexts.org
-
Research suggests that _____________.
The data indicate that _____________.
_____________ is increasing or decreasing.
There is a trend toward _____________.
_____________ causes _____________.
_____________ leads to _____________.
= claim of fact
-
-
sites.google.com
-
"fall line;"
The Atlantic Seaboard Fall Line is a 900-mile escarpment where the Piedmont and Atlantic coastal plain meet in the eastern United States. Before navigation improvements, such as locks, the fall line was generally the head of navigation on rivers due to their rapids or waterfalls, and the necessary portage around them.
-
Burke
Edmund Burke was an Anglo-Irish writer, philosopher, and politician who is widely credited as the founder of the cultural and political philosophy of conservatism.
-
composite nationality
The frontier as a crucible or melting pot, forging a new American identity.
-
the census of 1890
The 1890 United States census was taken beginning June 2, 1890. The census determined the resident population of the United States to be 62,979,766, an increase of 25.5 percent over the 50,189,209 persons enumerated during the 1880 census. The data reported that the distribution of the population had resulted in the disappearance of the American frontier.
-
Peck's New Guide to the West
John Mason Peck (1789–1858) was an American Baptist missionary to the western frontier of the United States, especially in Missouri and Illinois. A prominent anti-slavery advocate of his day, Peck also founded many educational institutions and wrote prolifically.
-
"Kit" Carson
Christopher Houston Carson (December 24, 1809 – May 23, 1868) was an American frontiersman, fur trapper, wilderness guide, Indian agent, and U.S. Army officer. He became an American frontier legend in his own lifetime through biographies and news articles; exaggerated versions of his exploits were the subject of dime novels. His understated nature belied confirmed reports of his fearlessness, combat skills, and tenacity, as well as his profound effect on the westward expansion of the United States.
-
Daniel Boone
Daniel Boone (November 2 [O.S. October 22] 1734 – September 26, 1820) was an American pioneer and frontiersman whose exploits made him one of the first folk heroes of the United States. He became famous for his exploration and settlement of Kentucky, which was then beyond the western borders of the Thirteen Colonies. In 1775, Boone founded the Wilderness Road through the Cumberland Gap and into Kentucky, in the face of resistance from Native Americans. He founded Boonesborough, one of the first English-speaking settlements west of the Appalachian Mountains.
-
Webster
Daniel Webster (January 18, 1782 – October 24, 1852) was an American lawyer and statesman who represented New Hampshire and Massachusetts in the U.S. Congress and served as the 14th and 19th U.S. secretary of state under presidents William Henry Harrison, John Tyler, and Millard Fillmore. Webster was one of the most prominent American lawyers of the 19th century, arguing over 200 cases before the United States Supreme Court in his career. During his life, Webster had been a member of the Federalist Party, the National Republican Party, and the Whig Party. He was among the three members of the Great Triumvirate along with Henry Clay and John C. Calhoun.
-
salt springs of the Kanawha
The Kanawha saltworks were an important 19th-century mineral resource of the United States. Originally known as Buffalo Salt Lick, in the early 19th century salt producers began creating brine wells, in the vicinity of the Great Kanawha River in what is now West Virginia.
-
Bishop Spangenburg
August Gottlieb Spangenberg (15 July 1704 – 18 September 1792) was a German theologian, minister, and bishop of the Moravian Church. As successor to Nicolaus Zinzendorf as bishop of the Moravian Church, he helped develop and lead international Moravian missions in colonial-era Province of Pennsylvania and stabilized Moravian theology and organization.
-
Victor Hehn
Victor Hehn was a German-Baltic cultural historian and librarian.
-
Lewis and Clark
The Lewis and Clark Expedition, also known as the Corps of Discovery Expedition, was the United States expedition to cross the newly acquired western portion of the country after the Louisiana Purchase. The Corps of Discovery was a select group of U.S. Army and civilian volunteers under the command of Captain Meriwether Lewis and his close friend Second Lieutenant William Clark. Clark, along with 30 others, set out from Camp Dubois (Camp Wood), Illinois, on May 14, 1804, met Lewis and ten other members of the group in St. Charles, Missouri, then went up the Missouri River. The expedition crossed the Continental Divide of the Americas near the Lemhi Pass, eventually coming to the Columbia River, and the Pacific Ocean in 1805. The return voyage began on March 23, 1806, at Fort Clatsop, Oregon, ending six months later on September 23.
-
The ranges of the Great Plains, with ranch and cowboy and nomadic life, are things of yesterday and of to-day.
Cattle drives were a major economic activity in the 19th- and early 20th-century American West, particularly between the 1850s and the 1910s. In this period, 27 million cattle were driven from Texas to railheads in Kansas for shipment to stockyards in St. Louis and points east, and directly to Chicago. The long distances covered, the need for periodic rests by riders and animals, and the establishment of railheads led to the development of "cow towns" across the frontier.
-
War of 1812
The War of 1812 was fought by the United States and its allies against the United Kingdom and its allies in North America. It began when the United States declared war on Britain on 18 June 1812. Although peace terms were agreed upon in the December 1814 Treaty of Ghent, the war did not officially end until the peace treaty was ratified by the United States Congress on 17 February 1815.
-
Albany congress of 1754
The Albany Congress (June 19 – July 11, 1754), also known as the Albany Convention of 1754, was a meeting of representatives sent by the legislatures of seven of the British colonies in British America: Connecticut, Maryland, Massachusetts, New Hampshire, New York, Pennsylvania, and Rhode Island. Those not in attendance included Newfoundland, Nova Scotia, New Jersey, Virginia, Georgia, North Carolina, and South Carolina. Representatives met daily at the City Hall in Albany, New York, from June 19 to July 11, 1754, to discuss better relations with the Native American tribes and common defensive measures against the French threat from Canada in the opening stage of the French and Indian War, the North American front of the Seven Years' War between Great Britain and France.
-
Duquesne
Michel-Ange Duquesne de Menneville, Marquis Duquesne (4 April 1700 – 17 September 1778) was a French Navy officer and colonial administrator who served as Governor General of New France from 1752 to 1755.
-
La Salle
René-Robert Cavelier, Sieur de La Salle (November 22, 1643 – March 19, 1687), was a French explorer and fur trader in North America. He explored the Great Lakes region of the United States and Canada, and the Mississippi River. He is best known for an early 1682 expedition in which he canoed the lower Mississippi River from the mouth of the Illinois River to the Gulf of Mexico.
-
South Pass in the Rockies
South Pass is a route across the Continental Divide, in the Rocky Mountains in southwestern Wyoming. Though it approaches a mile and a half high, South Pass is the lowest point on the Continental Divide between the Central and Southern Rocky Mountains. The passes furnish a natural crossing point of the Rockies. The historic pass became the route for emigrants on the Oregon, California, and Mormon trails to the West during the 19th century. It was designated as a U.S. National Historic Landmark on January 20, 1961.
-
Cumberland Gap
The Cumberland Gap is a pass in the eastern United States through the long ridge of the Cumberland Mountains, within the Appalachian Mountains and near the tripoint of Kentucky, Virginia, and Tennessee. At an elevation of 1,631 feet (497 m) above sea level, it is famous in American colonial history for its role as a key passageway through the lower central Appalachians.
-
Loria
Achille Loria (2 March 1857 – 6 November 1943) was an Italian political economist. He was educated at the lyceum of his native city and the universities of Bologna, Pavia, Rome, Berlin, and London and graduated at the University of Bologna (1877).
-
Pacific Railroad
America's first transcontinental railroad (known originally as the "Pacific Railroad" and later as the "Overland Route") was a 1,911-mile (3,075 km) continuous railroad line built between 1863 and 1869 that connected the existing eastern U.S. rail network at Council Bluffs, Iowa, with the Pacific coast at the Oakland Long Wharf on San Francisco Bay.
-
The United States Army fought a series of Indian wars in Minnesota, Dakota, and the Indian Territory.
These wars occurred from the time of the earliest colonial settlements in the 17th century until the end of the 19th century, fought between European settlers and Native Americans.
-
Alleghenies
The Allegheny Mountain Range is part of the vast Appalachian Mountain Range of the Eastern United States and Canada. Historically, it represented a significant barrier to westward land travel and development.
-
Grund
Francis Joseph Grund (September 19, 1805 – September 29, 1863) was a Bohemian-born American journalist and author who wrote such works as The Americans in Their Moral, Social, and Political Relations (1837).
-
Astor's American Fur Company
American Fur Company, enterprise incorporated in New York state (April 6, 1808) by John Jacob Astor, which dominated the fur trade of the central and western United States during the first third of the 19th century. The company absorbed or crushed its rivals during its search for furs in the Great Lakes region, Missouri River valley, Rocky Mountains, and Oregon. Explorations by the firm’s traders and trappers, directed chiefly from its office in St. Louis, did much to prepare the frontier for settlement.
-
census of 1820
The 1820 United States census was the fourth census conducted in the United States. It was conducted on August 7, 1820. The 1820 census included six new states: Louisiana, Indiana, Mississippi, Illinois, Alabama and Maine. The total population was determined to be 9,638,453, of which 1,538,022 were slaves.
-
first census
The 1790 United States census was the first United States census. It recorded the population of the whole United States as of Census Day, August 2, 1790, as mandated by Article 1, Section 2, of the Constitution and applicable laws. In the first census, the population of the United States was enumerated to be 3,929,214 inhabitants.
-
proclamation of 1763
The Royal Proclamation of 1763 was issued by King George III of Great Britain on 7 October 1763. It followed the Treaty of Paris (1763), which formally ended the Seven Years' War and transferred French territory in North America to Great Britain. The proclamation at least temporarily forbade all new settlements west of a line drawn along the Appalachian Mountains, which was delineated as an Indian Reserve. Exclusion from the vast region of Trans-Appalachia created discontent between Britain and colonial land speculators and potential settlers.
-
King
George III (George William Frederick; 4 June 1738 – 29 January 1820) was King of Great Britain and Ireland from 25 October 1760 until his death in 1820.
-
expedition in 1714
The Knights of the Golden Horseshoe Expedition, also known as the Transmontane Expedition, took place in 1716 in the British Colony of Virginia. The Royal Governor and a number of prominent citizens traveled westward, across the Blue Ridge Mountains, on an exploratory expedition.
-
Governor Spotswood
Major-General Alexander Spotswood (12 December 1676 – 7 June 1740) was a British army officer, explorer and colonial administrator who served as the governor of Virginia from 1710 to 1722. After an unsatisfactory military career, in 1710 he was appointed as Virginia's governor, a post he held for twelve years. During that period, Spotswood engaged in the exploration of the territories beyond the western border, of which he was the first to see the economic potentials.
-
Professor von Holst
Hermann Eduard von Holst (June 19, 1841 – January 20, 1904) was a German-American historian and author. Von Holst emigrated to the United States and wrote extensively on the Constitution of the United States, largely from an anti-slavery perspective.
-
slavery struggle
The Missouri Compromise of 1820 was federal legislation of the United States that balanced the desires of northern states to prevent the expansion of slavery in the country with those of southern states to expand it. It admitted Missouri as a slave state and Maine as a free state and declared a policy of prohibiting slavery in the remaining Louisiana Purchase lands north of the 36°30′ parallel.
-
Calhoun
John C. Calhoun (born March 18, 1782, Abbeville district, South Carolina, U.S.—died March 31, 1850, Washington, D.C.) was an American political leader who was a congressman, the secretary of war, the seventh vice president (1825–32), a senator, and the secretary of state of the United States.
-
-
sites.google.com
-
Filson's
John Filson (c. 1747 – October 1788) was an American author, historian of Kentucky, pioneer, surveyor, and one of the founders of Cincinnati, Ohio.
-
John Underhill
John Underhill (c. 1608/09 – 21 July 1672) was an early English settler and soldier in the Massachusetts Bay Colony, the Province of New Hampshire, where he also served as governor; the New Haven Colony, New Netherland, and later the Province of New York, settling on Long Island. Hired to train militia in New England, he is most noted for leading colonial militia in the Pequot War (1636–1637) and Kieft's War which the colonists mounted against two different groups of Native Americans. He also published an account of the Pequot War.
-
Roger Williams
Roger Williams (c. 1603 – March 1683) was an English-born New England minister, theologian, author, and founder of the Providence Plantations, which became the Colony of Rhode Island and Providence Plantations and later the State of Rhode Island. He was a staunch advocate for religious liberty, separation of church and state, and fair dealings with the Native Americans.
-
Cotton Mather
Cotton Mather FRS (February 12, 1663 – February 13, 1728) was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, then part of the Massachusetts Bay Colony, where he preached for the rest of his life.
-
Robert Frost
Robert Lee Frost (March 26, 1874 – January 29, 1963) was an American poet. Known for his realistic depictions of rural life and his command of American colloquial speech, Frost frequently wrote about settings from rural life in New England in the early 20th century, using them to examine complex social and philosophical themes.
-
Kenneth Rexroth
Kenneth Charles Marion Rexroth (December 22, 1905 – June 6, 1982) was an American poet, translator, and critical essayist. He is regarded as a central figure in the San Francisco Renaissance, and paved the groundwork for the movement.
-
Howard Mumford Jones
Howard Mumford Jones (April 16, 1892 – May 11, 1980) was an American intellectual historian, literary critic, journalist, poet, and professor of English at the University of Michigan and later at Harvard University.
-
Joseph Campbell
Joseph John Campbell (March 26, 1904 – October 30, 1987) was an American writer. He was a professor of literature at Sarah Lawrence College who worked in comparative mythology and comparative religion. His work covers many aspects of the human condition. Campbell's best-known work is his book The Hero with a Thousand Faces (1949), in which he discusses his theory of the journey of the archetypal hero shared by world mythologies, termed the monomyth.
-
-
www.biorxiv.org
-
eLife Assessment
This valuable study presents Altair-LSFM, a well-documented implementation of a light-sheet fluorescence microscope (LSFM) designed for accessibility and reduced cost. The approach provides compelling evidence of its strengths, including the use of custom-machined baseplates, detailed assembly instructions, and demonstrated live-cell imaging capabilities. This manuscript will be of interest to microscopists and potentially biologists seeking accessible LSFM tools.
-
Reviewer #1 (Public review):
Summary:
The article presents the details of the high-resolution light-sheet microscopy system developed by the group. In addition to presenting the technical details of the system, its resolution has been characterized and its functionality demonstrated by visualizing subcellular structures in a biological sample.
Strengths:
The article includes extensive supplementary material that complements the information in the main article.
Live imaging has been incorporated, as requested, increasing the value of the paper.
Weaknesses:
None
-
Reviewer #2 (Public review):
Summary:
The authors present Altair-LSFM (Light Sheet Fluorescence Microscope), a high-resolution, open-source light-sheet microscope that may be relatively easy to align and construct due to a custom-designed mounting plate. The authors developed this microscope to fill a perceived need: current open-source systems are primarily designed for large specimens and lack sub-cellular resolution, or achieve high resolution but are difficult to construct and are unstable. While commercial alternatives exist that offer sub-cellular resolution, they are expensive. The authors' manuscript centers on comparisons to the highly successful lattice light-sheet microscope, including the choice of detection and excitation objectives. The authors thus claim that there remains a critical need for a high-resolution, economical, and easy-to-implement LSFM system, and they address this need with Altair.
Strengths:
The authors succeed in their goals of implementing a relatively low-cost (~USD 150K), open-source microscope that is easy to align. The ease of alignment rests on using custom-designed baseplates with dowel pins for precise positioning of optics, based on computer analysis of opto-mechanical tolerances as well as the optical path design. They simplify the excitation optics relative to lattice light-sheet microscopes by using a Gaussian beam for illumination, while maintaining lateral and axial resolutions of 235 and 350 nm across a 260-um field of view after deconvolution. In doing so, they rest on foundational principles of optical microscopy: lateral resolution is determined by the numerical aperture of the detection objective and proper sampling of the image field onto the detector, while axial resolution depends on the thickness of the light sheet when it is thinner than the depth of field of the detection objective. This concept has unfortunately not been completely clear to users of high-resolution light-sheet microscopes, and this is thus a valuable demonstration. The microscope is controlled by open-source software, Navigate, developed by the authors, and it is thus foreseeable that different versions of this system could be implemented depending on experimental needs while maintaining easy alignment and low cost. They demonstrate system performance successfully by characterizing their sheet and point-spread function, and by visualizing sub-cellular structures in mammalian cells, including microtubules, actin filaments, nuclei, and the Golgi apparatus.
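The resolution figures quoted here are consistent with a quick back-of-envelope diffraction estimate. The sketch below assumes ~520 nm emission and a detection NA of 1.1 (illustrative values chosen by this editor, not taken from the review):

```python
# Back-of-envelope diffraction estimates for a light-sheet microscope.
# Assumed values (not from the review): 520 nm emission, NA 1.1 detection.
wavelength_nm = 520.0
na_detection = 1.1

# Lateral resolution is set by the detection objective: ~ lambda / (2 * NA).
lateral_nm = wavelength_nm / (2 * na_detection)

# Axial resolution is sheet-limited when the light sheet is thinner than the
# detection depth of field; the ~400 nm sheet then dominates.
sheet_thickness_nm = 400.0
axial_nm = sheet_thickness_nm

print(f"lateral ~ {lateral_nm:.0f} nm, axial ~ {axial_nm:.0f} nm")
```

Under these assumed numbers the lateral estimate comes out near 236 nm, close to the 235 nm reported after deconvolution, which supports the reviewer's point that the detection NA, not the illumination scheme, sets lateral resolution.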
Weaknesses:
There is still a fixation on comparison to the first-generation lattice light-sheet microscope, which has evolved significantly since then:
(1) One of the major limitations of the first-generation LLSM was the use of a 5 mm coverslip, which was a hindrance for many users. However, the Zeiss system elegantly solves this problem and so does Oblique Plane Microscopy (OPM), while the Altair-LSFM retains this feature, which may dissuade widespread adoption. This limitation and how it may be overcome in future iterations is now discussed in the manuscript but remains a limitation in the currently implemented design.
(2) Further, on the point of sample flexibility, all generations of the LLSM, and by the nature of its design the OPM, can accommodate live-cell imaging with temperature, gas, and humidity control. In the revised manuscript the authors now implement temperature control, but ideal live cell imaging conditions that would include gas and humidity control are not implemented. While, as the authors note, other microscopes that lack full environmental control have achieved widespread adoption, in my view this still limits the use cases of this microscope. There is no discussion on how this limitation of environmental control may be overcome in future iterations.
(3) While the microscope is well designed and completely open source, it will require experience with optics, electronics, and microscopy to implement and align properly. Experience with custom machining, or soliciting a machine shop, is also necessary. Thus, in my opinion it is unlikely to be implemented by a lab that has zero prior experience with custom optics, unless it can hire someone who does. Altair-LSFM may not be as easily adaptable or implementable as the authors describe or perceive in any lab that is interested, even if they can afford it. Claims on how easy it may be to align the system for a "Novice" in Supplementary Table 5 appear to be unsubstantiated and should be removed unless a novice was indeed able to assemble and validate the system in 2 weeks. It seems that these numbers were just arbitrarily proposed in the current version without any testing. In our experience, it is hard to predict how long an alignment will take for a novice.
(4) There is no quantification of field uniformity or of the tunability of the light-sheet parameters (FOV, thickness, PSF, uniformity). There is no quantification of how much improvement is offered by the resonant, or of how its operation may alter the light-sheet power, uniformity, and the measured PSF.
-
Reviewer #3 (Public review):
Summary:
This manuscript introduces a high-resolution, open-source light-sheet fluorescence microscope optimized for sub-cellular imaging.
The system is designed for ease of assembly and use, incorporating a custom-machined baseplate and in silico optimized optical paths to ensure robust alignment and performance.
The important feature of the microscope is the clever and elegant adaptation of simple Gaussian beams, smart beam shaping, galvo pivoting, and high-NA objectives to ensure a uniformly thin light sheet of around 400 nm in thickness over a 266-micron-wide field of view, pushing the axial resolution of the system beyond the usual diffraction-limited tradeoffs of light-sheet fluorescence microscopy.
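The tradeoff mentioned here can be made concrete with a short numerical sketch: a plain Gaussian beam focused to a ~400 nm-thick sheet stays thin only over roughly its confocal parameter (twice the Rayleigh range), which is far smaller than the 266-micron field of view. The 488 nm excitation wavelength below is an assumption for illustration, not a value stated in the review:

```python
import math

# Confocal parameter of a Gaussian light sheet: 2 * z_R = 2 * pi * w0^2 / lambda.
# Assumed values (not from the review): 488 nm excitation; a 400 nm sheet
# thickness read as a beam-waist diameter of 2 * w0.
wavelength_um = 0.488
w0_um = 0.2  # waist radius for a ~400 nm-thick sheet

rayleigh_um = math.pi * w0_um**2 / wavelength_um
confocal_um = 2 * rayleigh_um

print(f"confocal parameter ~ {confocal_um:.2f} um")
```

Under these assumptions the sheet remains thin over only about half a micron, several hundred times less than the quoted field of view, which is precisely why the beam shaping and galvo pivoting that the reviewer highlights are needed.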
Compelling validation using fluorescent beads, multicolor cellular imaging, and dual-color live-cell imaging highlights the system's performance. Moreover, a very extensive and comprehensive manual of operation is provided in the form of supplementary materials. This offers a DIY blueprint for researchers who want to implement such a system, also providing estimated costs and a detailed description of the expertise needed.
Strengths:
- Strong and accessible technical innovation.
With an elegant combination of beam shaping and optical modelling, the authors provide a high resolution light-sheet system that overcomes the classical light-sheet tradeoff limit of thin light-sheet and small field of view. In addition, the integration of in silico modelling with a custom-machined baseplate is very practical and allows for ease of alignment procedures. Combining these features with the solid and super-extensive guide provided in the supplementary information, this provides a protocol for replicating the microscope in any other lab.
- Impeccable optical performance and ease of sample mounting.
The system takes advantage of the same sample-holding method seen in other implementations, but reduces the optical complexity. At the same time, the authors claim to achieve lateral and axial resolution similar to lattice light-sheet microscopy (although without a direct comparison; see the "weaknesses" section below). The optical characterization of the system is comprehensive and well detailed. Additionally, the authors validate the system by imaging sub-cellular structures in mammalian cells.
- Transparency and comprehensiveness of documentation and resources.
A very detailed protocol provides detailed documentation about the setup, the optical modeling and the total cost.
Conclusion:
Altair-LSFM represents a well-engineered and accessible light-sheet system that addresses a longstanding need for high-resolution, reproducible, and affordable sub-cellular light-sheet imaging. At this stage, I believe the manuscript makes a compelling case for Altair-LSFM as a valuable contribution to the open microscopy scientific community.
Comments on revisions:
I appreciate the details and the care expressed by the authors in answering all my concerns, both the bigger ones (lack of live cell imaging demonstration) and to the smaller ones (about data storage, costs, expertise needed, and so on). The manuscript has been greatly improved, and I have no other comments to make.
-
Author response:
The following is the authors’ response to the original reviews.
eLife Assessment
This useful study presents Altair-LSFM, a solid and well-documented implementation of a light-sheet fluorescence microscope (LSFM) designed for accessibility and cost reduction. While the approach offers strengths such as the use of custom-machined baseplates and detailed assembly instructions, its overall impact is limited by the lack of live-cell imaging capabilities and the absence of a clear, quantitative comparison to existing LSFM platforms. As such, although technically competent, the broader utility and uptake of this system by the community may be limited.
We thank the editors and reviewers for their thoughtful evaluation of our work and for recognizing the technical strengths of the Altair-LSFM platform, including the custom-machined baseplates and detailed documentation provided to promote accessibility and reproducibility. Below, we provide point-by-point responses to each referee comment. In the process, we have significantly revised the manuscript to include live-cell imaging data and a quantitative evaluation of imaging speed. We now more explicitly describe the different variants of lattice light-sheet microscopy—highlighting differences in their illumination flexibility and image acquisition modes—and clarify how Altair-LSFM compares to each. We further discuss challenges associated with the 5 mm coverslip and propose practical strategies to overcome them. Additionally, we outline cost-reduction opportunities, explain the rationale behind key equipment selections, and provide guidance for implementing environmental control. Altogether, we believe these additions have strengthened the manuscript and clarified both the capabilities and limitations of Altair-LSFM.
Public Reviews:
Reviewer #1 (Public review):
Summary:
The article presents the details of the high-resolution light-sheet microscopy system developed by the group. In addition to presenting the technical details of the system, its resolution has been characterized and its functionality demonstrated by visualizing subcellular structures in a biological sample.
Strengths:
(1) The article includes extensive supplementary material that complements the information in the main article.
(2) However, in some sections, the information provided is somewhat superficial.
We thank the reviewer for their thoughtful assessment and for recognizing the strengths of our manuscript, including the extensive supplementary material. Our goal was to make the supplemental content as comprehensive and useful as possible. In addition to the materials provided with the manuscript, our intention is for the online documentation (available at thedeanlab.github.io/altair) to serve as a living resource that evolves in response to user feedback. We would therefore greatly appreciate the reviewer’s guidance on which sections were perceived as superficial so that we can expand them to better support readers and builders of the system.
Weaknesses:
(1) Although a comparison is made with other light-sheet microscopy systems, the presented system does not represent a significant advance over existing systems. It uses high numerical aperture objectives and Gaussian beams, achieving resolution close to theoretical after deconvolution. The main advantage of the presented system is its ease of construction, thanks to the design of a perforated base plate.
We appreciate the reviewer’s assessment and the opportunity to clarify our intent. Our primary goal was not to introduce new optical functionality beyond that of existing high-performance light-sheet systems, but rather to substantially reduce the barrier to entry for non-specialist laboratories. Many open-source implementations, such as OpenSPIM, OpenSPIN, and Benchtop mesoSPIM, similarly focused on accessibility and reproducibility rather than introducing new optical modalities, yet have had a measurable impact on the field by enabling broader community participation. Altair-LSFM follows this tradition, providing sub-cellular resolution performance comparable to advanced systems like LLSM, while emphasizing reproducibility, ease of construction through a precision-machined baseplate, and comprehensive documentation to facilitate dissemination and adoption.
(2) Using similar objectives (Nikon 25x and Thorlabs 20x), the results obtained are similar to those of the LLSM system (using a Gaussian beam without laser modulation). However, the article does not mention the difficulties of mounting the sample in the implemented configuration.
We appreciate the reviewer’s comment and agree that there are practical challenges associated with handling 5 mm diameter coverslips in this configuration. In the revised manuscript, we now explicitly describe these challenges and provide practical solutions. Specifically, we highlight the use of a custom-machined coverslip holder designed to simplify mounting and handling, and we direct readers to an alternative configuration using the Zeiss W Plan-Apochromat 20×/1.0 objective, which eliminates the need for small coverslips altogether.
(3) The authors present a low-cost, open-source system. Although they provide open source code for the software (navigate), the use of proprietary electronics (ASI, NI, etc.) makes the system relatively expensive. Its low cost is not justified.
We appreciate the reviewer’s perspective and understand the concern regarding the use of proprietary control hardware such as the ASI Tiger Controller and NI data acquisition cards. Our decision to use these components was intentional: relying on a unified, professionally supported and maintained platform minimizes complexity associated with sourcing, configuring, and integrating hardware from multiple vendors, thereby reducing non-financial barriers to entry for non-specialist users.
Importantly, these components are not the primary cost driver of Altair-LSFM (they represent roughly 18% of the total system cost). Nonetheless, for laboratories where the cost is prohibitive, we also outline several viable cost-reduction options in the revised manuscript (e.g., substituting manual stages, omitting the filter wheel, or using industrial CMOS cameras), while discussing the trade-offs these substitutions introduce in performance and usability. These considerations are now summarized in Supplementary Note 1, which provides a transparent rationale for our design and cost decisions.
Finally, we note that even with these professional-grade components, Altair-LSFM remains substantially less expensive than commercial systems offering comparable optical performance, such as LLSM implementations from Zeiss or 3i.
(4) The fibroblast images provided are of exceptional quality. However, these are fixed samples. The system lacks the necessary elements for monitoring cells in vivo, such as temperature or pH control.
We thank the reviewer for their positive comment regarding the quality of our data. As noted, the current manuscript focuses on validating the optical performance and resolution of the system using fixed specimens to ensure reproducibility and stability.
We fully agree on the importance of environmental control for live-cell imaging. In the revised manuscript, we now describe in detail how temperature regulation can be achieved using a custom-designed heated sample chamber, accompanied by detailed assembly instructions on our GitHub repository and summarized in Supplementary Note 2. For pH stabilization in systems lacking a 5% CO₂ atmosphere, we recommend supplementing the imaging medium with 10–25 mM HEPES buffer. Additionally, we include new live-cell imaging data demonstrating that Altair-LSFM supports in vitro time-lapse imaging of dynamic cellular processes under controlled temperature conditions.
Reviewer #2 (Public review):
Summary:
The authors present Altair-LSFM (Light Sheet Fluorescence Microscope), a high-resolution, open-source microscope that is relatively easy to align and construct and achieves sub-cellular resolution. The authors developed this microscope to fill a perceived need: current open-source systems are primarily designed for large specimens and either lack sub-cellular resolution or are difficult to construct and align and lack stability. While commercial alternatives exist that offer sub-cellular resolution, they are expensive. The authors' manuscript centers around comparisons to the highly successful lattice light-sheet microscope, including the choice of detection and excitation objectives. The authors thus claim that there remains a critical need for high-resolution, economical, and easy-to-implement LSFM systems.
We thank the reviewer for their thoughtful summary. We agree that existing open-source systems primarily emphasize imaging of large specimens, whereas commercial systems that achieve sub-cellular resolution remain costly and complex. Our aim with Altair-LSFM was to bridge this gap—providing LLSM-level performance in a substantially more accessible and reproducible format. By combining high-NA optics with a precision-machined baseplate and open-source documentation, Altair offers a practical, high-resolution solution that can be readily adopted by non-specialist laboratories.
Strengths:
The authors succeed in their goals of implementing a relatively low-cost (~USD 150K) open-source microscope that is easy to align. The ease of alignment rests on using custom-designed baseplates with dowel pins for precise positioning of optics based on computer analysis of opto-mechanical tolerances, as well as the optical path design. They simplify the excitation optics over lattice light-sheet microscopes by using a Gaussian beam for illumination while maintaining lateral and axial resolutions of 235 and 350 nm across a 260-µm field of view after deconvolution. In doing so, they rest on foundational principles of optical microscopy: lateral resolution is governed by the numerical aperture of the detection objective and proper sampling of the image field onto the detector, while axial resolution depends on the thickness of the light sheet when it is thinner than the depth of field of the detection objective. This concept has unfortunately not been completely clear to users of high-resolution light-sheet microscopes and is thus a valuable demonstration. The microscope is controlled by an open-source software, Navigate, developed by the authors, and it is thus foreseeable that different versions of this system could be implemented depending on experimental needs while maintaining easy alignment and low cost. They demonstrate system performance successfully by characterizing their sheet, point-spread function, and visualization of sub-cellular structures in mammalian cells, including microtubules, actin filaments, nuclei, and the Golgi apparatus.
We thank the reviewer for their thoughtful and generous assessment of our work. We are pleased that the manuscript’s emphasis on fundamental optical principles, design rationale, and practical implementation was clearly conveyed. We agree that Altair’s modular and accessible architecture provides a strong foundation for future variants tailored to specific experimental needs. To facilitate this, we have made all Zemax simulations, CAD files, and build documentation openly available on our GitHub repository, enabling users to adapt and extend the system for diverse imaging applications.
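The foundational resolution principles the reviewer highlights can be made concrete with a back-of-the-envelope estimate. The sketch below uses the standard Abbe limit for the detection path and a simple depth-of-field approximation; the emission wavelength, detection NA, and sheet thickness are illustrative assumptions chosen only to be in the neighborhood of the values discussed above, not measured Altair-LSFM parameters:

```python
# Illustrative resolution estimates for a dual-objective light-sheet system.
# All numeric inputs are assumptions for the sake of the example.

def lateral_resolution_nm(wavelength_nm: float, na: float) -> float:
    """Abbe diffraction limit for the detection path: lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * na)

def depth_of_field_nm(wavelength_nm: float, na: float, n_medium: float = 1.33) -> float:
    """Approximate diffraction-limited depth of field: n * lambda / NA^2."""
    return n_medium * wavelength_nm / na**2

wavelength = 515.0    # emission wavelength in nm (assumed)
na_detection = 1.1    # detection objective NA (assumed)
sheet_fwhm = 400.0    # light-sheet thickness in nm (figure quoted in the text)

lateral = lateral_resolution_nm(wavelength, na_detection)
dof = depth_of_field_nm(wavelength, na_detection)
# When the sheet is thinner than the detection depth of field, the sheet
# thickness dominates the axial response; otherwise the depth of field does.
axial = min(sheet_fwhm, dof)

print(f"lateral ~= {lateral:.0f} nm, axial ~= {axial:.0f} nm")
# prints "lateral ~= 234 nm, axial ~= 400 nm"
```

Under these assumed values the lateral estimate lands near the reported 235 nm, and the axial response is set by the ~400 nm sheet rather than the detection optics; deconvolution then sharpens both figures somewhat, consistent with the numbers quoted in the review.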
Weaknesses:
There is a fixation on comparison to the first-generation lattice light-sheet microscope, which has evolved significantly since then:
(1) The authors claim that commercial lattice light-sheet microscopes (LLSM) are "complex, expensive, and alignment intensive"; I believe this sentence applies to the open-source version of LLSM, which was made available for wide dissemination. Since then, a commercial solution has been provided by 3i, which is now being used in multiple cores and labs but does require routine alignments. However, Zeiss has also released a commercial turn-key system, which, while expensive, is stable, and the complexity does not interfere with the experience of the user. Though in general, statements on ease of use and stability might be considered anecdotal and may not belong in a scientific article if unreferenced or without supporting data.
We thank the reviewer for this thoughtful and constructive comment. We have revised the manuscript to more clearly distinguish between the original open-source implementation of LLSM and subsequent commercial versions by 3i and ZEISS. The revised Introduction and Discussion now explicitly note that while open-source and early implementations of LLSM can require expert alignment and maintenance, commercial systems—particularly the ZEISS Lattice Lightsheet 7—are designed for automated operation and stable, turn-key use, albeit at higher cost and with limited modifiability. We have also moderated earlier language regarding usability and stability to avoid anecdotal phrasing.
We also now provide a more objective proxy for system complexity: the number of optical elements that require precise alignment during assembly and maintenance thereafter. The original open-source LLSM setup includes approximately 29 optical components that must each be carefully positioned laterally, angularly, and coaxially along the optical path. In contrast, the first-generation Altair-LSFM system contains only nine such elements. By this metric, Altair-LSFM is considerably simpler to assemble and align, supporting our overarching goal of making high-resolution light-sheet imaging more accessible to non-specialist laboratories.
(2) One of the major limitations of the first-generation LLSM was the use of a 5 mm coverslip, which was a hindrance for many users. However, the Zeiss system elegantly solves this problem, and so does Oblique Plane Microscopy (OPM), while the Altair-LSFM retains this feature, which may dissuade widespread adoption. This limitation and how it may be overcome in future iterations is not discussed.
We thank the reviewer for this helpful comment. We agree that the use of 5 mm diameter coverslips, while enabling high-NA imaging in the current Altair-LSFM configuration, may pose a practical limitation for some users. We now discuss this more explicitly in the revised manuscript. Specifically, we note that replacing the detection objective provides a straightforward solution to this constraint. For example, as demonstrated by Moore et al. (Lab Chip, 2021), pairing the Zeiss W Plan-Apochromat 20×/1.0 detection objective with the Thorlabs TL20X-MPL illumination objective allows imaging beyond the physical surfaces of both objectives, eliminating the need for small-format coverslips. In the revised text, we propose this modification as an accessible path toward greater compatibility with conventional sample mounting formats. We also note in the Discussion that Oblique Plane Microscopy (OPM) inherently avoids such nonstandard mounting requirements and, owing to its single-objective architecture, is fully compatible with standard environmental chambers.
(3) Further, on the point of sample flexibility, all generations of the LLSM, and by the nature of its design, the OPM, can accommodate live-cell imaging with temperature, gas, and humidity control. It is unclear how this would be implemented with the current sample chamber. This limitation would severely limit use cases for cell biologists, for which this microscope is designed. There is no discussion on this limitation or how it may be overcome in future iterations.
We thank the reviewer for this important observation and agree that environmental control is critical for live-cell imaging applications. It is worth noting that the original open-source LLSM design, as well as the commercial version developed by 3i, provided temperature regulation but did not include integrated control of CO₂ or humidity. Despite this limitation, these systems have been widely adopted and have generated significant biological insights. We also acknowledge that both OPM and the ZEISS implementation of LLSM offer clear advantages in this respect, providing compatibility with standard commercial environmental chambers that support full regulation of temperature, CO₂, and humidity.
In the revised manuscript, we expand our discussion of environmental control in Supplementary Note 2, where we describe the Altair-LSFM chamber design in more detail and discuss its current implementation of temperature regulation and HEPES-based pH stabilization. Additionally, the Discussion now explicitly notes that OPM avoids the challenges associated with non-standard sample mounting and is inherently compatible with conventional environmental enclosures.
(4) The authors' comparison to LLSM is constrained to the "square" lattice, which, as they point out, is the most used optical lattice (though this also might be considered anecdotal). The LLSM original design, however, goes far beyond the square lattice, including hexagonal lattices, the ability to do structured illumination, and greater flexibility in general in terms of light-sheet tuning for different experimental needs, as well as not being limited to just sample scanning. Thus, the Altair-LSFM cannot compare to the original LLSM in terms of versatility, even if comparisons to the resolution provided by the square lattice are fair.
We agree that the original LLSM design offers substantially greater flexibility than what is reflected in our initial comparison, including the ability to generate multiple lattice geometries (e.g., square and hexagonal), operate in structured illumination mode, and acquire volumes using both sample-scanning and light-sheet-scanning strategies. To address this, we now include Supplementary Note 3 that provides a detailed overview of the illumination modes and imaging flexibility afforded by the original LLSM implementation, and how these capabilities compare to both the commercial ZEISS Lattice Lightsheet 7 and our Altair-LSFM system. In addition, we have revised the discussion to explicitly acknowledge that the original LLSM could operate in alternative scan strategies beyond sample scanning, providing greater context for readers and ensuring a more balanced comparison.
(5) There is no demonstration of the system's live-imaging capabilities or temporal resolution, which is the main advantage of existing light-sheet systems.
In the revised manuscript, we now include a demonstration of live-cell imaging to directly validate Altair-LSFM’s suitability for dynamic biological applications. We also explicitly discuss the temporal resolution of the system in the main text (see Optoelectronic Design of Altair-LSFM), where we detail both software- and hardware-related limitations. Specifically, we evaluate the maximum imaging speed achievable with Altair-LSFM in conjunction with our open-source control software, navigate.
For simplicity and reduced optoelectronic complexity, the current implementation powers the piezo through the ASI Tiger Controller, which modestly reduces its bandwidth. Nonetheless, for a 100 µm stroke typical of light-sheet imaging, we achieved sufficient performance to support volumetric imaging at most biologically relevant timescales. These results, along with additional discussion of the design trade-offs and performance considerations, are now included in the revised manuscript and expanded upon in the supplementary material.
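As a rough illustration of how camera frame rate, piezo stroke, and plane spacing combine into an achievable volumetric rate, consider the sketch below. Every numeric value (frame rate, plane spacing, overhead fraction) is an assumption chosen for illustration, not a measured Altair-LSFM specification:

```python
# Rough volumetric-rate estimate for a piezo- or sample-scanned acquisition.
# All inputs are illustrative assumptions.

def volumes_per_second(frame_rate_hz: float, stroke_um: float, step_um: float,
                       settle_overhead: float = 0.1) -> float:
    """Volumes per second, with timing losses (piezo settling, readout gaps)
    folded into a single fractional overhead term."""
    planes = stroke_um / step_um          # camera frames per volume
    ideal = frame_rate_hz / planes        # rate with zero overhead
    return ideal * (1.0 - settle_overhead)

# Example: 100 Hz camera, 100 um stroke, 0.5 um plane spacing (all assumed)
rate = volumes_per_second(frame_rate_hz=100.0, stroke_um=100.0, step_um=0.5)
print(f"~{rate:.2f} volumes per second")
# prints "~0.45 volumes per second"
```

The main design levers are visible directly in the formula: halving the stroke or doubling the plane spacing doubles the volumetric rate, while reduced piezo bandwidth enters as a larger overhead fraction.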
While the microscope is well designed and completely open source, it will require experience with optics, electronics, and microscopy to implement and align properly. Experience with custom machining or soliciting a machine shop is also necessary. Thus, in my opinion, it is unlikely to be implemented by a lab that has zero prior experience with custom optics or can hire someone who does. Altair-LSFM may not be as easily adaptable or implementable as the authors describe or perceive in any lab that is interested, even if they can afford it. The authors indicate they will offer "workshops," but this does not necessarily remove the barrier to entry or lower it, perhaps as significantly as the authors describe.
We appreciate the reviewer’s perspective and agree that building any high-performance custom microscope—Altair-LSFM included—requires a basic understanding of (or willingness to learn) optics, electronics, and instrumentation. Such a barrier exists for all open-source microscopes, and our goal is not to eliminate this requirement entirely but to substantially reduce the technical and logistical challenges that typically accompany the construction of custom light-sheet systems.
Importantly, no machining experience or in-house fabrication capabilities are required. Users can simply submit the provided CAD design files and specifications directly to commercial vendors for fabrication. We have made this process as straightforward as possible by supplying detailed build instructions, recommended materials, and vendor-ready files through our GitHub repository. Our dissemination strategy draws inspiration from other successful open-source projects such as mesoSPIM, which has seen widespread adoption—over 30 implementations worldwide—through a similar model of exhaustive documentation, open-source software, and community support via user meetings and workshops.
We also recognize that documentation alone cannot fully replace hands-on experience. To further lower barriers to adoption, we are actively working with commercial vendors to streamline procurement and assembly, and Altair-LSFM is supported by a Biomedical Technology Development and Dissemination (BTDD) grant that provides resources for hosting workshops, offering real-time community support, and developing supplementary training materials.
In the revised manuscript, we now expand the Discussion to explicitly acknowledge these implementation considerations and to outline our ongoing efforts to support a broad and diverse user base, ensuring that laboratories with varying levels of technical expertise can successfully adopt and maintain the Altair-LSFM platform.
There is a claim that this design is easily adaptable. However, the requirement of custom-machined baseplates and in silico optimization of the optical path basically means that each new instrument is a new design, even if the Navigate software can be used. It is unclear how Altair-LSFM demonstrates a modular design that reduces times from conception to optimization compared to previous implementations.
We thank the reviewer for this insightful comment and agree that our original language regarding adaptability may have overstated the degree to which Altair-LSFM can be modified without prior experience. It was not our intention to imply that the system can be easily redesigned by users with limited technical background. Meaningful adaptations of the optical or mechanical design do require expertise in optical layout, optomechanical design, and alignment.
That said, for laboratories with such expertise, we aim to facilitate modifications by providing comprehensive resources—including detailed Zemax simulations, complete CAD models, and alignment documentation. These materials are intended to reduce the development burden for expert users seeking to tailor the system to specific experimental requirements, without necessitating a complete re-optimization of the optical path from first principles.
In the revised manuscript, we clarify this point and temper our language regarding adaptability to better reflect the realistic scope of customization. Specifically, we now state in the Discussion: “For expert users who wish to tailor the instrument, we also provide all Zemax illumination-path simulations and CAD files, along with step-by-step optimization protocols, enabling modification and re-optimization of the optical system as needed.” This revision ensures that readers clearly understand that Altair-LSFM is designed for reproducibility and straightforward assembly in its default configuration, while still offering the flexibility for modification by experienced users.
Reviewer #3 (Public review):
Summary:
This manuscript introduces a high-resolution, open-source light-sheet fluorescence microscope optimized for sub-cellular imaging. The system is designed for ease of assembly and use, incorporating a custom-machined baseplate and in silico optimized optical paths to ensure robust alignment and performance. The authors demonstrate lateral and axial resolutions of ~235 nm and ~350 nm after deconvolution, enabling imaging of sub-diffraction structures in mammalian cells. The important feature of the microscope is the clever and elegant adaptation of simple Gaussian beams, smart beam shaping, galvo pivoting, and high-NA objectives to ensure a uniform, thin light sheet of around 400 nm in thickness over a 266-micron-wide field of view, pushing the axial resolution of the system beyond the usual diffraction-limited trade-offs of light-sheet fluorescence microscopy. Compelling validation using fluorescent beads and multicolor cellular imaging highlights the system's performance and accessibility. Moreover, a very extensive and comprehensive manual of operation is provided in the form of supplementary materials. This provides a DIY blueprint for researchers who want to implement such a system.
We thank the reviewer for their thoughtful and positive assessment of our work. We appreciate their recognition of Altair-LSFM’s design and performance, including its ability to achieve high-resolution imaging throughout a 266-micron field of view. While Altair-LSFM approaches the practical limits of diffraction-limited performance, it does not exceed the fundamental diffraction limit; rather, it achieves near-theoretical resolution through careful optical optimization, beam shaping, and alignment. We are grateful for the reviewer’s acknowledgment of the accessibility and comprehensive documentation that make this system broadly implementable.
Strengths:
(1) Strong and accessible technical innovation: With an elegant combination of beam shaping and optical modelling, the authors provide a high-resolution light-sheet system that overcomes the classical trade-off between light-sheet thinness and field-of-view size. In addition, the integration of in silico modelling with a custom-machined baseplate is very practical and allows for ease of alignment procedures. Combining these features with the solid and super-extensive guide provided in the supplementary information, this provides a protocol for replicating the microscope in any other lab.
(2) Impeccable optical performance and ease of mounting of samples: The system takes advantage of the same sample-holding method seen already in other implementations, but reduces the optical complexity.
At the same time, the authors claim to achieve similar lateral and axial resolution to lattice light-sheet microscopy (although without a direct comparison; see the "Weaknesses" section below). The optical characterization of the system is comprehensive and well-detailed. Additionally, the authors validate the system by imaging sub-cellular structures in mammalian cells.
(3) Transparency and comprehensiveness of documentation and resources: A very detailed protocol provides detailed documentation about the setup, the optical modeling, and the total cost.
We thank the reviewer for their thoughtful and encouraging comments. We are pleased that the technical innovation, optical performance, and accessibility of Altair-LSFM were recognized. Our goal from the outset was to develop a diffraction-limited, high-resolution light-sheet system that balances optical performance with reproducibility and ease of implementation. We are also pleased that the use of precision-machined baseplates was recognized as a practical and effective strategy for achieving performance while maintaining ease of assembly.
Weaknesses:
(1) Limited quantitative comparisons: Although some qualitative comparison with previously published systems (diSPIM, lattice light-sheet) is provided throughout the manuscript, some side-by-side comparison would be of great benefit for the manuscript, even in the form of a theoretical simulation. While having a direct imaging comparison would be ideal, it's understandable that this goes beyond the interest of the paper; however, a table referencing image quality parameters (taken from the literature), such as signal-to-noise ratio, light-sheet thickness, and resolutions, would really enhance the features of the setup presented. Moreover, based also on the necessity for optical simplification, an additional comment on the importance/difference of dual-objective/single-objective light-sheet systems could really benefit the discussion.
In the revised manuscript, we have significantly expanded our discussion of different light-sheet systems to provide clearer quantitative and conceptual context for Altair-LSFM. These comparisons are based on values reported in the literature, as we do not have access to many of these instruments (e.g., DaXi, diSPIM, or commercial and open-source variants of LLSM), and a direct experimental comparison is beyond the scope of this work.
We note that while quantitative parameters such as signal-to-noise ratio are important, they are highly sample-dependent and strongly influenced by imaging conditions, including fluorophore brightness, camera characteristics, and filter bandpass selection. For this reason, we limited our comparison to more general image-quality metrics—such as light-sheet thickness, resolution, and field of view—that can be reliably compared across systems.
Finally, per the reviewer’s recommendation, we have added additional discussion clarifying the differences between dual-objective and single-objective light-sheet architectures, outlining their respective strengths, limitations, and suitability for different experimental contexts.
(2) Limitation to a fixed sample: In the manuscript, there is no mention of incubation temperature, CO₂ regulation, Humidity control, or possible integration of commercial environmental control systems. This is a major limitation for an imaging technique that owes its popularity to fast, volumetric, live-cell imaging of biological samples.
We fully agree that environmental control is critical for live-cell imaging applications. In the revised manuscript, we now describe the design and implementation of a temperature-regulated sample chamber in Supplementary Note 2, which maintains stable imaging conditions through the use of integrated heating elements and thermocouples. This approach enables precise temperature control while minimizing thermal gradients and optical drift. For pH stabilization, we recommend the use of 10–25 mM HEPES in place of CO₂ regulation, consistent with established practice for most light-sheet systems, including the initial variant of LLSM. Although full humidity and CO₂ control are not readily implemented in dual-objective configurations, we note that single-objective designs such as OPM are inherently compatible with commercial environmental chambers and avoid these constraints. Together, these additions clarify how environmental control can be achieved within Altair-LSFM and situate its capabilities within the broader LSFM design space.
(3) System cost and data storage cost: While the system presented has the advantage of being open-source, it remains relatively expensive (considering the 150k without laser source and optical table, for example). The manuscript could benefit from a more direct comparison of the performance/cost ratio of existing systems, considering academic settings whose budgets often would not allow for expensive architectures. Moreover, it would also be beneficial to discuss the adaptability of the system in case a 30k objective is not feasible. Will this system work with different optics (with the obvious limitations that come with a lower-NA objective)? This could be an interesting point of discussion, particularly regarding adaptability to lower budgets or more cost-effective choices, depending on the needs.
We agree that cost considerations are critical for adoption in academic environments. We would also like to clarify that the quoted $150k includes the optical table and laser source. In the revised manuscript, Supplementary Note 1 now includes an expanded discussion of cost–performance trade-offs and potential paths for cost reduction.
Last, not much is said about the need for data storage. Light-sheet microscopy's bottleneck is the creation of increasingly large datasets, and it could be beneficial to discuss more about the storage needs and the quantity of data generated.
In the revised manuscript, we now include Supplementary Note 4, which provides a high-level discussion of data storage needs, approximate costs, and practical strategies for managing large datasets generated by light-sheet microscopy. This section offers general guidance—including file-format recommendations and cost considerations—but we note that actual costs will vary by institution and contractual agreements.
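To illustrate the scale of the storage problem raised by the reviewer, the raw data rate of an uncompressed acquisition can be estimated directly from sensor size, bit depth, and frame rate. The sCMOS-like figures below are assumptions typical of light-sheet cameras, not Altair-LSFM specifications:

```python
# Back-of-the-envelope estimate of the raw (uncompressed) data rate from a
# light-sheet camera. All camera parameters are illustrative assumptions.

def raw_data_rate_gb_per_s(width_px: int, height_px: int,
                           bytes_per_px: int, frames_per_s: float) -> float:
    """Uncompressed data rate in GB/s (1 GB = 1e9 bytes)."""
    return width_px * height_px * bytes_per_px * frames_per_s / 1e9

# Example: 2048 x 2048 sensor, 16-bit pixels, 100 fps (all assumed)
rate = raw_data_rate_gb_per_s(2048, 2048, 2, 100.0)
per_hour_tb = rate * 3600 / 1000      # TB accumulated per hour of acquisition
print(f"{rate:.2f} GB/s, {per_hour_tb:.1f} TB/hour")
# prints "0.84 GB/s, 3.0 TB/hour"
```

Even at modest frame rates the system approaches a gigabyte per second, which is why sustained-write storage, chunked file formats, and on-the-fly compression are routine considerations for light-sheet data pipelines.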
Conclusion:
Altair-LSFM represents a well-engineered and accessible light-sheet system that addresses a longstanding need for high-resolution, reproducible, and affordable sub-cellular light-sheet imaging. While some aspects (comparative benchmarking and validation, the limitation to fixed samples) would benefit from further development, the manuscript makes a compelling case for Altair-LSFM as a valuable contribution to the open microscopy scientific community.
Recommendations for the authors:
Reviewer #2 (Recommendations for the authors):
(1) A picture, or full CAD design of the complete instrument, should be included as a main figure.
A complete CAD rendering of the microscope is now provided in Supplementary Figure 4.
(2) There is no quantitative comparison of the effects of the tilting resonant galvo; only a cartoon, a figure should be included.
The cartoon was intended purely as an educational illustration to conceptually explain the role of the tilting resonant galvo in shaping and homogenizing the light sheet. To clarify this intent, we have revised both the figure legend and corresponding text in the main manuscript. For readers seeking quantitative comparisons, we now reference the original study that provides a detailed analysis of this optical approach, as well as a review on the subject.
(3) Description of L4 is missing in the Figure 1 caption.
Thank you for catching this omission. We have corrected it.
(4) The beam profiles in Figures 1c and 3a, please crop and make the image bigger so the profile can be appreciated. The PSFs in Figure 3c-e should similarly be enlarged and presented using a dynamic range/LUT such that any aberrations can be appreciated.
In Figure 1c, our goal was to qualitatively illustrate the uniformity of the light-sheet across the full field of view, while Figure 1d provided the corresponding quantitative cross-section. To improve clarity, we have added an additional figure panel offering a higher-magnification, localized view of the light-sheet profile. For Figure 3c–e, we have enlarged the PSF images and adjusted the display range to better convey the underlying signal and allow subtle aberrations to be appreciated.
(5) It is unclear why LLSM is being used as the gold standard, since in its current commercial form, available from Zeiss, it is a turn-key system designed for core facilities. The original LLSM is also a versatile instrument that provides much more than the square lattice for illumination, including structured illumination, hexagonal lattices, live-cell imaging, wide-field illumination, different scan modes, etc. These additional features are not even mentioned when compared to the Altair-LSFM. If a comparison is to be provided, it should be fair and balanced. Furthermore, as outlined in the public review, anecdotal statements on "most used", "difficult to align", or "unstable" should not be provided without data.
In the revised manuscript, we have carefully removed anecdotal statements and, where appropriate, replaced them with quantitative or verifiable information. For instance, we now explicitly report that the square lattice was used in 16 of the 20 figure subpanels in the original LLSM publication, and we include a proxy for optical complexity based on the number of optical elements requiring alignment in each system.
We also now clearly distinguish between the original LLSM design—which supports multiple illumination and scanning modes—and its subsequent commercial variants, including the ZEISS Lattice Lightsheet 7, which prioritizes stability and ease of use over configurational flexibility (see Supplementary Note 3).
(6) The authors should recognize that implementing custom optics, no matter how well designed, is a big barrier to cross for most cell biology labs.
We fully understand and now acknowledge in the main text that implementing custom optics can present a significant barrier, particularly for laboratories without prior experience in optical system assembly. However, similar challenges were encountered during the adoption of other open-source microscopy platforms, such as mesoSPIM and OpenSPIM, both of which have nonetheless achieved widespread implementation. Their success has largely been driven by exhaustive documentation, strong community support, and standardized design principles—approaches we have also prioritized in Altair-LSFM. We have therefore made all CAD files, alignment guides, and detailed build documentation publicly available and continue to develop instructional materials and community resources to further reduce the barrier to adoption.
(7) Statements on "hands on workshops" though laudable, may not be appropriate to include in a scientific publication without some documentation on the influence they have had on implanting the microscope.
We understand the concern. Our intention in mentioning hands-on workshops was to convey that the dissemination effort is supported by an NIH Biomedical Technology Development and Dissemination grant, which includes dedicated channels for outreach and community engagement. Nonetheless, we agree that such statements are not appropriate without formal documentation of their impact, and we have therefore removed this text from the revised manuscript.
(8) It is claimed that the microscope is "reliable" in the discussion, but with no proof, long-term stability should be assessed and included.
While our experience with Altair-LSFM has been that it remains well aligned over time, especially in comparison to other light-sheet systems we have worked on over the last 11 years, we acknowledge that this assessment is anecdotal. As such, we have omitted this claim from the revised manuscript.
(9) Due to the reliance on anecdotal statements and comparisons without proof to other systems, this paper at times reads like a brochure rather than a scientific publication. The authors should consider editing their manuscript accordingly to focus on the technical and quantifiable aspects of their work.
We agree with the reviewer’s assessment and have revised the manuscript to remove anecdotal comparisons and subjective language. Where possible, we now provide quantitative metrics or verifiable data to support our statements.
Reviewer #3 (Recommendations for the authors):
Other minor points that could improve the manuscript (although some of these points are explained in the huge supplementary manual):
(1) The authors explain thoroughly their design, and they chose a sample-scanning method. I think that a brief discussion of the advantages and disadvantages of such a method over, for example, a laser-scanning system (with fixed sample) in the main text will be highly beneficial for the users.
In the revised manuscript, we now include a brief discussion in the main text outlining the advantages and limitations of a sample-scanning approach relative to a light-sheet–scanning system. Specifically, we note that for thin, adherent specimens, sample scanning minimizes the optical path length through the sample, allowing the use of more tightly focused illumination beams that improve axial resolution. We also include a new supplementary figure illustrating how this configuration reduces the propagation length of the illumination light sheet, thereby enhancing axial resolution.
(2) The authors justify selecting a 0.6 NA illumination objective over alternatives (e.g., Special Optics), but the manuscript would benefit from a more quantitative trade-off analysis (beam waist, working distance, sample compatibility) with other possibilities. Within the objective context, a comparison of the performances of this system with the new and upcoming single-objective light-sheet methods (and the ones based also on optical refocusing, e.g., DAXI) would be very interesting for the goodness of the manuscript.
In the revised manuscript, we now provide a quantitative trade-off analysis of the illumination objectives in Supplementary Note 1, including comparisons of beam waist, working distance, and sample compatibility. This section also presents calculated point spread functions for both the 0.6 NA and 0.67 NA objectives, outlining the performance trade-offs that informed our design choice. In addition, Supplementary Note 3 now includes a broader comparison of Altair-LSFM with other light-sheet modalities, including diSPIM, ASLM, and OPM, to further contextualize the system’s capabilities within the evolving light-sheet microscopy landscape.
(3) The modularity of the system is implied in the context of the manuscript, but not fully explained. The authors should specify more clearly, for example, if cameras could be easily changed, objectives could be easily swapped, light-sheet thickness could be tuned by changing cylindrical lens, how users might adapt the system for different samples (e.g., embryos, cleared tissue, live imaging), .etc, and discuss eventual constraints or compatibility issues to these implementations.
Altair-LSFM was explicitly designed and optimized for imaging live adherent cells, where sample scanning and short light-sheet propagation lengths provide optimal axial resolution (Supplementary Note 3). While the same platform could be used for superficial imaging in embryos, systems implementing multiview illumination and detection schemes are better suited for such specimens. Similarly, cleared tissue imaging typically requires specialized solvent-compatible objectives and approaches such as ASLM that maximize the field of view. We have now added text to the Design Principles section that explicitly states this.
Altair-LSFM offers varying levels of modularity depending on the user’s level of expertise. For entry-level users, the illumination numerical aperture—and therefore the light-sheet thickness and propagation length—can be readily adjusted by tuning the rectangular aperture conjugate to the back pupil of the illumination objective, as described in the Design Principles section. For mid-level users, alternative configurations of Altair-LSFM, including different detection objectives, stages, filter wheels, or cameras, can be readily implemented (Supplementary Note 1). Importantly, navigate natively supports a broad range of hardware devices, and new components can be easily integrated through its modular interface. For expert users, all Zemax simulations, CAD models, and step-by-step optimization protocols are openly provided, enabling complete re-optimization of the optical design to meet specific experimental requirements.
(4) Resolution measurements before and after deconvolution are central to the performance claim, but the deconvolution method (PetaKit5D) is only briefly mentioned in the main text; it is not referenced and needs to be clarified in more detail, consistent with the precision of the supplementary information. More specifically, PetaKit5D should be referenced in the main text, the details of the deconvolution parameters discussed in the Methods section, and the computational requirements should also be mentioned.
In the revised manuscript, we now provide a dedicated description of the deconvolution process in the Methods section, including the specific parameters and algorithms used. We have also explicitly referenced PetaKit5D in the main text to ensure proper attribution and clarity. Additionally, we note the computational requirements associated with this analysis in the same section for completeness.
(5) Image post-processing is not fully explained in the main text. Since the system is sample-scanning based, no word in the main text is spent on deskewing, which is an integral part of the post-processing to obtain a "straight" 3D stack. Since other systems implement such a post-processing algorithm (for example, single-objective architectures), it would be beneficial to have some discussion about this, and also a brief comparison to other systems in the main text in the methods section.
In the revised manuscript, we now explicitly describe both deskewing (shearing) and deconvolution procedures in the Alignment and Characterization section of the main text and direct readers to the Methods section. We also briefly explain why the data must be sheared to correct for the angled sample-scanning geometry for LLSM and Altair-LSFM, as well as for both sample-scanning and laser-scanning variants of OPMs.
(6) A brief discussion on comparative costs with other systems (LLSM, dispim, etc.) could be helpful for non-imaging expert researchers who could try to implement such an optical architecture in their lab.
Unfortunately, the exact costs of commercial systems such as LLSM or diSPIM are typically not publicly available, as they depend on institutional agreements and vendor-specific quotations. Nonetheless, we now provide approximate cost estimates in Supplementary Note 1 to help readers and prospective users gauge the expected scale of investment relative to other advanced light-sheet microscopy systems.
(7) The "navigate" control software is provided, but a brief discussion on its advantages compared to an already open-access system, such as Micromanager, could be useful for the users.
In the revised manuscript, we now include Supplementary Note 5, which discusses the advantages and disadvantages of different open-source microscope control platforms, including navigate and Micro-Manager. In brief, navigate was designed to provide turnkey support for multiple light-sheet architectures, with pre-configured acquisition routines optimized for Altair-LSFM, integrated data management with support for multiple file formats (TIFF, HDF5, N5, and Zarr), and full interoperability with OME-compliant workflows. By contrast, while Micro-Manager offers a broader library of hardware drivers, it typically requires manual configuration and custom scripting for advanced light-sheet imaging workflows.
(8) The cost and parts are well documented, but the time and expertise required are not crystal clear. Adding a simple time estimate (perhaps in the Supplement Section) of assembly/alignment/installation/validation and first imaging will be very beneficial for users. Also, what level of expertise is assumed (prior optics experience, for example) to be needed to install a system like this? This can help non-optics-expert users to better understand what kind of adventure they are putting themselves through.
We thank the reviewer for this helpful suggestion. To address this, we have added Supplementary Table S5, which provides approximate time estimates for assembly, alignment, validation, and first imaging based on the user's prior experience with optical systems. The table distinguishes between novice (no prior experience), moderate (some experience using but not assembling optical systems), and expert (experienced in building and aligning optical systems) users. This addition is intended to give prospective builders a realistic sense of the time commitment and level of expertise required to assemble and validate Altair-LSFM.
Minor things in the main text:
(1) Line 109: The cost is considered "excluding the laser source". But then in the table of costs, you mention L4cc as a "multicolor laser source", for 25 K. Can you explain this better? Are the costs correct with or without the laser source?
We acknowledge that the statement in line 109 was incorrect—the quoted ~$150k system cost does include the laser source (L4cc, listed at $25k in the cost table). We have corrected this in the revised manuscript.
(2) Line 113: You say "lateral resolution", but then you state a 3D resolution (230 nm x 230 nm x 370 nm). This needs to be fixed.
Thank you, we have corrected this.
(3) Line 138: Is the light-sheet uniformity proven also with a fluorescent dye? This could be beneficial for the main text, showing the performance of the instrument in a fluorescent environment.
The light-sheet profiles shown in the manuscript were acquired using fluorescein to visualize the beam. We have revised the main text and figure legends to clearly state this.
(4) Line 149: This is one of the most important features of the system, defying the usual tradeoff between light-sheet thickness and field of view, with a regular Gaussian beam. I would clarify more specifically how you achieve this because this really is the most powerful takeaway of the paper.
We thank the reviewer for this key observation. The ability of Altair-LSFM to maintain a thin light sheet across a large field of view arises from diffraction effects inherent to high NA illumination. Specifically, diffraction elongates the PSF along the beam’s propagation direction, effectively extending the region over which the light sheet remains sufficiently thin for high-resolution imaging. This phenomenon, which has been the subject of active discussion within the light-sheet microscopy community, allows Altair-LSFM to partially overcome the conventional trade-off between light-sheet thickness and propagation length. We now clarify this point in the main text and provide a more detailed discussion in Supplementary Note 3, which is explicitly referenced in the discussion of the revised manuscript.
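The conventional trade-off the reviewer refers to follows from Gaussian-beam optics: the usable propagation length (confocal parameter, twice the Rayleigh range z_R = pi * w0^2 * n / lambda) shrinks quadratically as the waist w0 is made thinner. The sketch below is only an illustrative back-of-the-envelope calculation of that scaling, not the vectorial diffraction simulations used in the manuscript; all numerical values are hypothetical examples.

```python
import math

def gaussian_sheet(waist_um: float, wavelength_um: float = 0.488, n: float = 1.33) -> dict:
    """Classic Gaussian-beam estimate of the thickness/propagation trade-off.

    waist_um: 1/e^2 beam-waist radius (half the sheet thickness) in microns.
    Returns the Rayleigh range and the usable propagation length
    (2 * z_R, over which the sheet stays within sqrt(2) of its waist).
    """
    z_r = math.pi * waist_um**2 * n / wavelength_um  # Rayleigh range
    return {
        "thickness_um": 2 * waist_um,     # full 1/e^2 sheet thickness
        "rayleigh_range_um": z_r,
        "usable_length_um": 2 * z_r,      # confocal parameter
    }

# Halving the waist quarters the usable propagation length:
thin = gaussian_sheet(0.5)   # ~0.5 um waist
thick = gaussian_sheet(1.0)  # ~1.0 um waist
```

Under this simple model, a sheet twice as thick propagates four times as far, which is exactly the trade-off that the diffraction effects at high illumination NA partially relax.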
(5) Line 171: You talk about repeatable assembly...have you tried many different baseplates? Otherwise, this is a complicated statement, since this is a proof-of-concept paper.
We thank the reviewer for this comment. We have not yet validated the design across multiple independently assembled baseplates and therefore agree that our previous statement regarding repeatable assembly was premature. To avoid overstating the current level of validation, we have removed this statement from the revised manuscript.
(6) Line 187: same as above. You mention "long-term stability". For how long did you try this? This should be specified in numbers (days, weeks, months, years?) Otherwise, it is a complicated statement to make, since this is a proof-of-concept paper.
We also agree that referencing long-term stability without quantitative backing is inappropriate, and have removed this statement from the revised manuscript.
(7) Line 198: "rapid z-stack acquisition. How rapid? Also, what is the limitation of the galvo-scanning in terms of the imaging speed of the system? This should be noted in the methods section.
In the revised manuscript, we now clarify these points in the Optoelectronic Design section. Specifically, we explicitly note that the resonant galvo used for shadow reduction operates at 4 kHz, ensuring that it is not rate-limiting for any imaging mode. In the same section, we also evaluate the maximum acquisition speeds achievable using navigate and report the theoretical bandwidth of the sample-scanning piezo, which together define the practical limits of volumetric acquisition speed for Altair-LSFM.
(8) Line 234: Peta5Kit is discussed in the additional documentation, but should be referenced here, as well.
We now reference and cite PetaKit5D.
(9) Line 256: "values are on par with LLSM", but no values are provided. Some details should also be provided in the main text.
In the revised manuscript, we now provide the lateral and axial resolution values originally reported for LLSM in the main text to facilitate direct comparison with Altair-LSFM. Additionally, Supplementary Note 3 now includes an expanded discussion on the nuances of resolution measurement and reporting in light-sheet microscopy.
Figures:
(1) Figure 1 could be implemented with Figure 3. They're both discussing the validation of the system (theoretically and with simulations), and they could be together in different panels of the same figure. The experimental light-sheet seems to be shown in a transmission mode. Showing a pattern in a fluorescent dye could also be beneficial for the paper.
In Figure 1, our goal was to guide readers through the design process—illustrating how the detection objective’s NA sets the system’s resolution, which defines the required pixel size for Nyquist sampling and, in turn, the field of view. We then use Figure 1b–c to show how the illumination beam was designed and simulated to achieve that field of view. In contrast, Figure 3 presents the experimental validation of the illumination system. To avoid confusion, we now clarify in the text that the light sheet shown in Figure 3 was visualized in a fluorescein solution and imaged in transmission mode. While we agree that Figures 1 and 3 both serve to validate the system, we prefer to keep them as separate figures to maintain focus within each panel. We believe this organization better supports the narrative structure and allows readers to digest the theoretical and experimental validations independently.
(2) Figure 3: Panels d and e show the same thing. Why would you expect that xz and yz profiles should be different? Is this due to the orientation of the objectives towards the sample?
In Figure 3, we present the PSF from all three orthogonal views, as this provides the most transparent assessment of PSF quality—certain aberration modes can be obscured when only select perspectives are shown. In principle, the XZ and YZ projections should be equivalent in a well-aligned system. However, as seen in the XZ projection, a small degree of coma is present that is not evident in the YZ view. We now explicitly note this observation in the revised figure caption to clarify the difference between these panels.
(3) Figure 4's single boxes lack a scale bar, and some of the Supplementary Figures (e.g., Figure 5) lack detailed axis labels or scale bars. Also, in the detailed documentation, some figures are referred to inconsistently (e.g., Figure 5 vs. Figure 7, or Figure 6 vs. Figure 8), and this makes the cross-references very complicated to follow.
In the revised manuscript, we have corrected these issues. All figures and supplementary figures now include appropriate scale bars, axis labels, and consistent formatting. We have also carefully reviewed and standardized all cross-references throughout the main text and supplementary documentation to ensure that figure numbering is accurate and easy to follow.
Local file
but even so there is something that needs to be improved, namely the runtime of the TF-IDF and cosine similarity programs, which take quite a long time to process the calculations. A suggestion for further research is to look for specialized programs that can shorten the program runtime, or to compare the runtime of this method with that of other calculation methods.
Perhaps we can increase the effectiveness of the process.
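For context on where that runtime goes, a minimal pure-Python sketch of TF-IDF weighting followed by cosine similarity is shown below (a toy illustration, not the program the annotated paper used). The expensive part in practice is the all-pairs comparison, which is O(n^2) in the number of documents; vectorized sparse-matrix libraries are the usual way to shorten it.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight vectors (sparse dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency * inverse document frequency
        vec = {t: (count / len(doc)) * math.log(n / df[t])
               for t, count in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [["cat", "sat", "mat"], ["cat", "sat", "hat"], ["dog", "ran", "far"]]
vecs = tfidf_vectors(docs)
```

Documents with overlapping vocabulary score above zero; documents sharing no terms score exactly zero.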
moodle.u-bordeaux.fr
The evolution of agriculture in Palanpur involved and was shaped by the interactions amongst several factors, including demographic change, expansion of irrigation, intensification of cultivation, changing cropping patterns, farm mechanization, growing non-farm employment, 'marketization' of factors of production, and improvements in formal credit supply. We identify five fundamental elements of the change in agriculture in Palanpur. First, growing population pressure and sale of land have meant that per capita land owned and land operated have declined significantly between 1983 and 2008–10. While the decline in land availability per capita has influenced changing patterns of tenancy, it has also affected choice of crops and cropping intensity. Second, while agricultural output, and to some extent outside jobs, played a central role in income growth in earlier surveys, growing non-farm and outside jobs have become more important in recent periods. The decline in the role of agriculture has implications not just for income distribution and inequality in the village but also for the labour and land markets. Third, we look at the role of agriculture in relation to social and cultural aspects of the village, particularly the affinity of some of the caste groups in relation to agriculture
This passage highlights that long-run development operates through cumulative and interdependent processes rather than through a simple sectoral reallocation of labour. The persistence of agricultural employment despite declining output shares suggests that agriculture continues to play a stabilizing and coordinating role in rural labour markets, influencing income diversification, tenancy relations, and investment decisions. Long-term data make visible these slow institutional and technological adjustments that are often missed in short-run analyses. Structural transformation in Palanpur therefore appears less as an exit from agriculture than as a gradual reorganization of its functions within a diversified rural economy. Does this imply that agriculture should be understood as a complement rather than a residual sector in long-run development?
www.independent.co.uk
“Dear Jonas: Considering your Country decided not to give me the Nobel Peace Prize for having stopped 8 Wars PLUS, I no longer feel an obligation to think purely of Peace, although it will always be predominant, but can now think about what is good and proper for the United States of America. “Denmark cannot protect that land from Russia or China, and why do they have a ‘right of ownership’ anyway? There are no written documents, it’s only that a boat landed there hundreds of years ago, but we had boats landing there, also. “I have done more for NATO than any other person since its founding, and now, NATO should do something for the United States. The World is not secure unless we have Complete and Total Control of Greenland. Thank you! President DJT”
This letter: one can't even begin to unpack what it says about the mind of Trump, his current mental health, or what it means to have a toddler setting US foreign policy based on personal resentments.
www.biorxiv.org
eLife Assessment
In this study, the authors investigate the role of ZMAT3, a p53 target gene, in tumor suppression and RNA splicing regulation. Using quantitative proteomics, the authors uncover that ZMAT3 knockout leads to upregulation of HKDC1, a gene linked to mitochondrial respiration, and that ZMAT3 suppresses HKDC1 expression by inhibiting c-JUN-mediated transcription. This set of convincing evidence reveals a fundamental mechanism by which ZMAT3 contributes to p53-driven tumor suppression by regulating mitochondrial respiration.
Reviewer #1 (Public review):
Summary:
ZMAT3 is a p53 target gene that the Lal group and others have shown is important for p53-mediated tumor suppression, and which plays a role in the control of RNA splicing. In this manuscript Lal and colleagues perform quantitative proteomics of cells with ZMAT3 knockout and show that the enzyme hexokinase HKDC1 is the most upregulated protein. Mechanistically, the authors show that ZMAT3 does not appear to directly regulate the expression of HKDC1; rather, they show that the transcription factor c-JUN was strongly enriched in ZMAT3 pull-downs in IP-mass spec experiments, and they perform IP-western to demonstrate an interaction between c-JUN and ZMAT3. Importantly, the authors demonstrate, using ChIP-qPCR, that JUN is present at the HKDC1 gene (intron 1) in ZMAT3 WT cells, and showed markedly enhanced binding in ZMAT3 KO cells. The data best fit a model whereby p53 transactivates ZMAT3, leading to decreased JUN binding to the HKDC1 promoter, and altered mitochondrial respiration. The data are novel, compelling and very interesting.
Comments on revisions:
The authors have done a thorough job addressing my comments. This manuscript is quite strong and will be highly cited for its novelty and rigor.
Reviewer #2 (Public review):
Summary:
The study elucidates the role of the recently discovered mediator of p53 tumor suppressive activity, ZMAT3. Specifically, the authors find that ZMAT3 negatively regulates HKDC1, a gene involved in the control of mitochondrial respiration and cell proliferation.
Comments on revisions:
The authors have mostly addressed the concerns raised previously by this reviewer. The lack of functional assays leaves the reported findings mostly mechanistic, with no clear biological context.
The present manuscript is certainly improved compared to the previous version.
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public review):
Summary:
ZMAT3 is a p53 target gene that the Lal group and others have shown is important for p53-mediated tumor suppression, and which plays a role in the control of RNA splicing. In this manuscript, Lal and colleagues perform quantitative proteomics of cells with ZMAT3 knockout and show that the enzyme hexokinase HKDC1 is the most upregulated protein. Mechanistically, the authors show that ZMAT3 does not appear to directly regulate the expression of HKDC1; rather, they show that the transcription factor c-JUN was strongly enriched in ZMAT3 pull-downs in IP-mass spec experiments, and they perform IP-western to demonstrate an interaction between c-JUN and ZMAT3. Importantly, the authors demonstrate, using ChIP-qPCR, that JUN is present at the HKDC1 gene (intron 1) in ZMAT3 WT cells and shows markedly enhanced binding in ZMAT3 KO cells. The data best fit a model whereby p53 transactivates ZMAT3, leading to decreased JUN binding to the HKDC1 promoter, and altered mitochondrial respiration.
Strengths:
The authors use multiple orthogonal approaches to test the majority of their findings. The authors offer a potentially new activity of ZMAT3 in tumor suppression by p53: the control of mitochondrial respiration.
Weaknesses:
Some indication as to whether other c-JUN target genes are also regulated by ZMAT3 would improve the broad relevance of the authors' findings.
We thank the reviewer for the kind words and the thoughtful suggestion. As recommended, to identify additional c-JUN targets potentially regulated by ZMAT3, we intersected the genes upregulated upon ZMAT3 knockout (from our RNA-seq data) with the ChIP-Atlas dataset for human c-JUN and cross-referenced these with c-JUN peaks from three ENCODE cell lines. From this analysis, we selected for further analysis the top 4 candidate genes - LAMA2, VSNL1, SAMD3, and IL6R (Figure 5-figure supplement 2A-D). Like HKDC1, these genes were upregulated in ZMAT3-KO cells, and this upregulation was abolished upon siRNA-mediated JUN knockdown in ZMAT3-KO cells (Figure 5-figure supplement 2E). Moreover, by ChIP-qPCR we observed increased JUN binding to the JUN peak for these genes in ZMAT3-KO cells as compared to ZMAT3-WT cells (Figure 5-figure supplement 2F). As described on page 11 of the revised manuscript, these results suggest that the ZMAT3/JUN axis negatively regulates HKDC1 expression and additional c-JUN target genes.
Reviewer #2 (Public review):
Summary:
The study elucidates the role of the recently discovered mediator of p53 tumor suppressive activity, ZMAT3. Specifically, the authors find that ZMAT3 negatively regulates HKDC1, a gene involved in the control of mitochondrial respiration and cell proliferation.
Strengths:
Mechanistically, ZMAT3 suppresses HKDC1 transcription by sequestering JUN and preventing its binding to the HKDC1 promoter, resulting in reduced HKDC1 expression. Conversely, p53 mutation leads to ZMAT3 downregulation and HKDC1 overexpression, thereby promoting increased mitochondrial respiration and proliferation. This mechanism is novel; however, the authors should address several points.
Weaknesses:
The authors conduct mechanistic experiments (e.g., transcript and protein quantification, luciferase assays) to demonstrate regulatory interactions between p53, ZMAT3, JUN, and HKDC1. These findings should be supported with functional assays, such as proliferation, apoptosis, or mitochondrial respiration analyses.
We thank the reviewer for appreciating our work and for this valuable suggestion. The reviewer rightly pointed out that supporting the regulatory interactions between p53, ZMAT3, JUN, and HKDC1 with functional assays such as proliferation, apoptosis, and mitochondrial respiration analyses would strengthen our mechanistic data. During the revision of our manuscript, we attempted to address this point by performing simultaneous knockdown of these proteins; however, we observed substantial toxicity under these conditions, making the functional assays technically unfeasible. This outcome was not unexpected, as knockdown of JUN or HKDC1 individually results in growth defects. We therefore focused our efforts on addressing the recommendations for the authors.
Reviewer #3 (Public review):
Summary:
In their manuscript, Kumar et al. investigate the mechanisms underlying the tumor suppressive function of the RNA binding protein ZMAT3, a previously described tumor suppressor in the p53 pathway. To this end, they use RNA-sequencing and proteomics to characterize changes in ZMAT3-deficient cells, leading them to identify the hexokinase HKDC1 as upregulated with ZMAT3 deficiency first in colorectal cancer cells, then in other cell types of both mouse and human origin. This increase in HKDC1 is associated with increased mitochondrial respiration. As ZMAT3 has been reported as an RNA-binding and DNA-binding protein, the authors investigated this via PAR-CLIP and ChIP-seq but did not observe ZMAT3 binding to HKDC1 pre-mRNA or DNA. Thus, to better understand how ZMAT3 regulates HKDC1, the authors used quantitative proteomics to identify ZMAT3-interacting proteins. They identified the transcription factor JUN as a ZMAT3-interacting protein and showed that JUN promotes the increased HKDC1 RNA expression seen with ZMAT3 inactivation. They propose that ZMAT3 inhibits JUN-mediated transcriptional induction of HKDC1 as a mechanism of tumor suppression. This work uncovers novel aspects of the p53 tumor suppressor pathway.
Strengths:
This novel work sheds light on one of the most well-established yet understudied p53 target genes, ZMAT3, and how it contributes to p53's tumor suppressive functions. Overall, this story establishes a p53-ZMAT3-HKDC1 tumor suppressive axis, which has been strongly substantiated using a variety of orthogonal approaches, in different cell lines and with different data sets.
Weaknesses:
While the role of p53 and ZMAT3 in repressing HKDC1 is well substantiated, there is a gap in understanding how ZMAT3 acts to repress JUN-driven activation of the HKDC1 locus. How does ZMAT3 inhibit JUN binding to HKDC1? Can targeted ChIP experiments or RIP experiments be used to make a more definitive model? Can ZMAT3 mutants help to understand the mechanisms? Future work can further establish the mechanisms underlying how ZMAT3 represses JUN activity.
We thank the reviewer for the kind words and the invaluable suggestion. The reviewer has an excellent point regarding how ZMAT3 inhibits JUN binding to the HKDC1 locus. Our new data included in the revised manuscript show that the ZMAT3-JUN interaction is lost in the presence of DNase or RNase, indicating that the interaction requires both DNA and RNA. This result suggests that ZMAT3 and JUN form an RNA-dependent, chromatin-associated complex. Although not directly investigated in our study, this finding is consistent with emerging evidence that RBPs can function as chromatin-associated cofactors in transcription. For example, functional interplay between the transcription factor YY1 and the RNA binding protein RBM25 co-regulates a broad set of genes, where RBM25 appears to engage promoters first and then recruit YY1, with RNA proposed to guide target recognition. We have discussed this possibility in the discussion section of the revised manuscript (page 13). We agree that future work using ZMAT3 mutants and targeted ChIP or RIP assays will be valuable to delineate the precise mechanism by which ZMAT3 inhibits JUN binding to its target genes.
Recommendations for the authors:
Reviewer #1 (Recommendations for the authors):
ZMAT3 is a p53 target gene that the Lal group and others have shown is important for p53-mediated tumor suppression, and which plays a role in the control of RNA splicing. In this manuscript, Lal and colleagues perform quantitative proteomics of cells with ZMAT3 knockout and show that the enzyme hexokinase HKDC1 is the most upregulated protein. HKDC1 is emerging as an important player in human cancer. Importantly, the authors use both acute (gene silencing) and chronic (CRISPR KO) approaches to silence ZMAT3, and they do this in several cell lines. Notably, they show that ZMAT3 silencing leads to impaired mitochondrial respiration, in a manner that is rescued by silencing of HKDC1. Mechanistically, the authors show that ZMAT3 does not appear to directly regulate the expression of HKDC1; rather, they show that the transcription factor c-JUN was strongly enriched in ZMAT3 pull-downs in IP-mass spec experiments, and they perform IP-western to demonstrate an interaction between c-JUN and ZMAT3. Importantly, the authors demonstrate, using ChIP-qPCR, that JUN is present at the HKDC1 gene (intron 1) in ZMAT3 WT cells, and shows markedly enhanced binding in ZMAT3 KO cells. The data best fit a model whereby p53 transactivates ZMAT3, leading to decreased JUN binding to the HKDC1 promoter (intron 1), and altered mitochondrial respiration. The findings are compelling, and the authors use multiple orthogonal approaches to test most findings. The authors also offer a potentially new activity of ZMAT3 in tumor suppression by p53: the control of mitochondrial respiration. As such, enthusiasm is high for this manuscript.
Addressing the following question would improve the manuscript.
It is not clear how many (other) c-JUN target genes might be impacted by ZMAT3; other important c-JUN targets in cancer include GLS1, WEE1, SREBP1, GLUT1, and CD36, so there could be a global impact on metabolism in ZMAT3 KO cells. Can the authors perform qPCR on these targets in ZMAT3 WT and KO cells and see if these target genes are differentially expressed?
We thank the reviewer for this thoughtful suggestion. As recommended, we examined the expression of the key c-JUN target genes GLS1 (also known as GLS), WEE1, SREBP1, GLUT1, and CD36 in ZMAT3-WT and ZMAT3-KO cells. We first analyzed publicly available JUN ChIP-seq data from three ENCODE cell lines, which revealed JUN binding peaks near or upstream of exon 1 for GLS1/GLS, SREBP1, and SLC2A1/GLUT1, but not for WEE1 or CD36 (Appendix 1, panels A-E). Based on these results, we performed RT-qPCR for GLS1/GLS, SREBF1, and SLC2A1 in ZMAT3-WT and ZMAT3-KO cells, with or without JUN knockdown. GLS mRNA was significantly reduced upon JUN knockdown in both ZMAT3-WT and ZMAT3-KO cells, but it was not upregulated upon loss of ZMAT3, indicating that GLS is a JUN target gene but is not regulated by ZMAT3. In contrast, SREBF1 and SLC2A1 expression remained unchanged upon ZMAT3 loss or JUN knockdown (Appendix 1, panels F-H). These data suggest that the ZMAT3/JUN axis does not regulate the expression of these genes.
To identify additional c-JUN targets potentially regulated by ZMAT3, we intersected the genes upregulated upon ZMAT3 knockout (from our RNA-seq data) with the ChIP-Atlas dataset for human c-JUN and cross-referenced these with c-JUN peaks from three ENCODE cell lines. From this analysis, we selected the top four candidate genes for further analysis: LAMA2, VSNL1, SAMD3, and IL6R (Figure 5-figure supplement 2A-D). Like HKDC1, these genes were upregulated in ZMAT3-KO cells, and this upregulation was abolished upon siRNA-mediated JUN knockdown in ZMAT3-KO cells (Figure 5-figure supplement 2E). Moreover, by ChIP-qPCR we observed increased JUN binding at the JUN peak for these genes in ZMAT3-KO cells as compared to ZMAT3-WT cells (Figure 5-figure supplement 2F). As described on page 11 of the revised manuscript, these results suggest that the ZMAT3/JUN axis negatively regulates the expression of HKDC1 and additional c-JUN target genes.
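The three-way intersection described above can be sketched as a simple set operation. The gene sets below are illustrative stand-ins, not the actual RNA-seq, ChIP-Atlas, or ENCODE data:

```python
# Hypothetical sketch: candidate c-JUN/ZMAT3 targets are genes found in all
# three evidence sets. Gene lists here are invented for illustration only.
upregulated_in_ko = {"HKDC1", "LAMA2", "VSNL1", "SAMD3", "IL6R", "GLS"}
chip_atlas_jun_targets = {"HKDC1", "LAMA2", "VSNL1", "SAMD3", "IL6R", "SREBF1"}
encode_jun_peak_genes = {"HKDC1", "LAMA2", "VSNL1", "SAMD3", "IL6R", "SLC2A1"}

# Keep only genes supported by all three datasets.
candidates = upregulated_in_ko & chip_atlas_jun_targets & encode_jun_peak_genes
print(sorted(candidates))
```

GLS, SREBF1, and SLC2A1 each appear in only one set and so drop out, mirroring how genes supported by a single line of evidence were excluded from follow-up.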
Minor concerns:
(1) Line 150: observed a modest.
(2) Line 159: Figure 2G appears to be inaccurately cited.
(3) Line 191: assays to measure.
We thank the reviewer for pointing these out. These minor concerns have been addressed in the text.
Reviewer #2 (Recommendations for the authors):
(1) Figure 1E: Can the authors clarify what the numbers on the left side of the chart represent? Do they refer to the scale?
The numbers on the Y-axis represent the -log10(p-value), where higher values correspond to more significant changes. For visualization purposes, the significant changes are shown in red.
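For readers unfamiliar with the convention, a minimal sketch of the -log10 transform used for such axes (the p-values below are arbitrary examples, not values from the figure):

```python
import math

# -log10 transform: smaller p-values map to larger (more significant) Y values.
neg_log10 = {p: -math.log10(p) for p in (0.05, 0.01, 0.001)}
for p, y in neg_log10.items():
    print(p, round(y, 2))
```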
(2) Page 5, line 123: The sentence "As expected, ZMAT3 mRNA levels were decreased in the ZMAT3-KO cells" is redundant, as this information was already mentioned on page 4, line 103.
We thank the reviewer for noticing this redundancy. The repeated sentence has been removed in the revised manuscript.
(3) Page 5: The authors state: "Transcriptome-wide, upon loss of ZMAT3, 606 genes were significantly up-regulated (adj. p < 0.05 and 1.5-fold change) and 552 were down-regulated, with a median fold change of 1.76 and 0.55 for the up- and down-regulated genes, respectively." Later, on page 6, they write: "Comparison of the RNA-seq data from ZMAT3WT vs. ZMAT3-KO and CTRL siRNA vs. ZMAT3 siRNA-transfected HCT116 cells indicated that 1023 genes were commonly up-regulated, and 1042 were commonly down-regulated upon ZMAT3 loss (Figure S2C and D)." Why is the number of deregulated transcripts higher in the ZMAT3-WT vs. ZMAT3-KO comparison than in the CTRL siRNA vs. ZMAT3 siRNA comparison? Are the authors using less stringent criteria in the second analysis? This point should be clarified.
We thank the reviewer for highlighting this point. The reviewer is correct that less stringent criteria were used in the second analysis. On page 5, we applied stringent thresholds (adjusted p-value < 0.05 and 1.5-fold change) to identify high-confidence transcriptome-wide changes upon ZMAT3 loss. In contrast, for the comparison of both RNA-seq datasets (ZMAT3-WT vs. KO and siCTRL vs. siZMAT3), we included genes that were consistently up- or downregulated, without applying a fold change threshold, focusing instead on significantly altered genes (adjusted p < 0.05) in both datasets. This allowed us to capture broader and more reproducible transcriptomic changes that occur upon ZMAT3 depletion, including modest but significant changes upon transient ZMAT3 knockdown with siRNAs. We have now clarified this distinction on page 6 of the revised manuscript.
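As a concrete illustration, the two thresholding strategies described above can be sketched in a few lines. The gene names, fold changes (expressed here as log2 fold changes), and adjusted p-values are invented placeholders, not data from the study:

```python
import math

# Hypothetical records: (gene, log2 fold change, adjusted p-value).
results = [
    ("HKDC1", 1.2, 0.001),   # up, passes both criteria
    ("GENE2", 0.3, 0.010),   # significant but below the 1.5-fold cutoff
    ("GENE3", -1.0, 0.040),  # down, passes both criteria
    ("GENE4", 2.0, 0.200),   # large change but not significant
]

# Stringent filter: adj. p < 0.05 AND at least a 1.5-fold change,
# i.e. |log2FC| >= log2(1.5).
cutoff = math.log2(1.5)
stringent = [g for g, lfc, p in results if p < 0.05 and abs(lfc) >= cutoff]

# Relaxed filter (used for the cross-dataset overlap): adj. p < 0.05 only.
relaxed = [g for g, lfc, p in results if p < 0.05]

print(stringent)
print(relaxed)
```

The relaxed filter admits GENE2, a significant but modest change, which is exactly the kind of event a fold-change cutoff would discard.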
(4) Figures 2B and 2E: The authors should provide quantification of HKDC1 protein levels normalized to a loading control. In addition, they should assess HKDC1 protein abundance upon ZMAT3 interference in SW1222 and HCEC-1CT cells, not just in HepG2 and HCT116 cells.
We thank the reviewer for this suggestion. We have now quantified all immunoblots presented throughout the manuscript, including those shown in Figures 2B and 2E, and all other figures containing protein analyses. Band intensities were quantified using ImageJ densitometry and normalized to GAPDH as the loading control. In addition, as suggested, we examined HKDC1 protein levels following ZMAT3 knockdown in two additional cell lines, SW1222 and HCEC-1CT. Consistent with our observations in HepG2 and HCT116 cells, ZMAT3 depletion led to increased HKDC1 protein levels in both SW1222 and HCEC-1CT cells. These new data are now included in Figure 2-figure supplement 1F and G. We have updated the Results section, figure legends, and figures to reflect these additions.
(5) Figure 3A: It is unclear which gene was knocked out in the "KO cells." The authors should clearly specify this.
We thank the reviewer for pointing this out. We have now updated Figure 3A.
(6) Figure 3D: The result appears counterintuitive in comparison to Figure 3E. Why does HKDC1 knockdown reduce cell confluency more in ZMAT3 KO cells than in control (ZMAT3 wild-type) cells? The authors should explain this discrepancy more clearly.
We thank the reviewer for this insightful comment. As shown in Figures 3D and 3E, knockdown of HKDC1 resulted in a greater decrease in proliferation in ZMAT3-KO cells than in ZMAT3-WT cells. This observation was indeed unexpected, given that HKDC1 acts downstream of ZMAT3. One possible explanation is that elevated HKDC1 expression in ZMAT3-KO cells increases their reliance on HKDC1 for sustaining proliferation, and that HKDC1 may also participate in additional pathways in ZMAT3-KO cells. Consequently, transient knockdown of HKDC1 in ZMAT3-KO cells would have a more pronounced effect on proliferation due to their increased dependency on HKDC1 activity. In contrast, ZMAT3-WT cells, which express lower levels of HKDC1, are less dependent on its function and therefore less sensitive to its depletion. We have now clarified this point on page 8 of the revised manuscript.
Reviewer #3 (Recommendations for the authors):
(1) Why do the authors start their analysis by knocking out the p53 response element in Zmat3? That should be clarified. In addition, since clones were picked after CRISPR KO of Zmat3, were experiments done to confirm that p53 signaling was not disrupted?
We thank the reviewer for this thoughtful question. We began our study by targeting the p53 response element (p53RE) in the ZMAT3 locus because the basal expression of ZMAT3 is regulated by p53 (Muys, Bruna R. et al., Genes & Development, 2021). Deleting the p53RE therefore allowed us to markedly reduce ZMAT3 expression without disrupting the entire ZMAT3 locus. We have clarified this rationale on page 4 of the revised manuscript. To ensure that p53 signaling was not affected by this modification, we verified that canonical p53 targets such as p21 were equivalently induced in both ZMAT3-WT and ZMAT3-KO cells following Nutlin treatment and that p53 induction was unchanged (Figure 4F and Figure 1-figure supplement 1A).
(2) Throughout the text, many immunoblots are used to validate the knockouts and knockdowns used, but some clarification is needed. In Figure S1A, the Zmat3-WT sample seems to have significantly more p53 than the Zmat3 KO sample. Does Zmat3 KO compromise p53 levels in other experiments? It would be good to understand if Zmat3 affects p53 function by affecting its levels. Also, the p21 blot is overloaded.
We thank the reviewer for this helpful observation. To determine whether ZMAT3 knockout affects p53 function by affecting its levels, we repeated the experiment three independent times. Western blots from these biological replicates, together with protein quantification, are now included in Appendix-2 and Figure 1-figure supplement 1A. These data show no significant differences in p53 or p21 induction between ZMAT3-WT and ZMAT3-KO cells following Nutlin treatment. In the revised manuscript, we have replaced the blot in Figure 1-figure supplement 1A with a more representative image from one of these replicate experiments.
In Figure 2E, HKDC1 protein levels are not shown for the SW1222 and HCEC-1CT cell lines,
We thank the reviewer for this suggestion. HKDC1 protein levels in SW1222 and HCEC-1CT cells following ZMAT3 knockdown are now included as Figure 2-figure supplement 1F and 1G, together with the corresponding quantification.
and Zmat3 does not appear as its characteristic two bands on the blot. What does this signify?
We thank the reviewer for this observation. Endogenous ZMAT3 typically appears as two closely migrating bands on immunoblots. As shown in Figure 4D and Appendix 2A and 2B, these two bands are observed at the expected molecular weight following Nutlin treatment and are specific to ZMAT3, as they are markedly reduced in ZMAT3-KO cells. In contrast, only a single ZMAT3 band is visible in Figure 2E. This likely reflects limited resolution of the two bands in some blots rather than a biological difference.
(3) Why does HKDC1 knockdown only have an effect on metabolic phenotypes when ZMAT3 is gone? In Figure 3A, there does not seem to be a decrease in hexokinase activity in the siCTRL + siHKDC1 condition compared to siCTRL alone. Also, in Figure 3A, does phosphorylation activity of HKDC1 necessarily reflect glucose uptake, as stated? Additionally, in Figure 3C, there is no effect on mitochondrial respiration with siHKDC1, even though recent studies have shown a significant effect of HKDC1 on this.
We thank the reviewer for raising these important questions. As noted, HKDC1 knockdown alone in wild-type cells (siCTRL + siHKDC1) does not significantly reduce hexokinase activity (Figure 3A). This likely reflects the low basal expression of HKDC1 in these cells. Thus, the metabolic phenotype may only become apparent when HKDC1 expression exceeds a functional threshold, as observed in ZMAT3-KO cells where HKDC1 is upregulated.
Regarding the glucose uptake assay, HKDC1 itself is not phosphorylated; rather, it phosphorylates a non-catabolizable glucose analog, 2-deoxyglucose (2-DG), upon cellular uptake. According to the manufacturer's protocol, intracellular 2-DG is phosphorylated by hexokinases to 2-deoxyglucose-6-phosphate (2-DG6P), which cannot be further metabolized and therefore accumulates. The accumulated 2-DG6P is quantified using a luminescence-based readout. This assay is widely used as a surrogate for glucose uptake because it reflects both glucose import and phosphorylation, the first step of glycolytic flux. As for the lack of change in mitochondrial respiration (Figure 3C), we acknowledge that some studies have reported mitochondrial roles for HKDC1 under basal conditions; however, such effects may be cell type-specific.
(4) The emphasis on glycolysis signatures is confusing, as in the end, glycolysis does not seem to be affected by ZMAT3 status, but mitochondrial respiration is affected. Can the text be clarified to address this? It is also difficult to understand the role of oxygen consumption rate (OCR) in ZMAT3 phenotypes, as it does not fully track with proliferation. For example, ZMAT3 KD has the highest OCR, and the other conditions have similar OCRs but different proliferative rates in Figure 3D. Also, the colors used in Figure 3 to denote different genotypes change between B/C and D, which is confusing.
We thank the reviewer for pointing out the inconsistency in the colors used in Figure 3, which we have now corrected. Our data indicate that ZMAT3 regulates mitochondrial respiration without significantly affecting glycolysis. It is possible that mitochondria in ZMAT3-KO cells are oxidizing more substrates that are not produced by glycolysis. Additional work will be required to fully determine these mechanisms. We have clarified this on page 8 of the revised manuscript.
(5) The lack of ZMAT3 binding to RNAs in PAR-CLIP is not proof that it does not do so. A more targeted approach should be used, using individual RIP assays. The authors should also analyze the splicing of HKDC1, which could be affected by ZMAT3.
As suggested, we performed ZMAT3 RNA immunoprecipitation (RIP) experiments using doxycycline-inducible HCT116-ZMAT3-FLAG cells. However, we did not observe significant enrichment of HKDC1 mRNA in the ZMAT3 IPs (Figure 5-figure supplement 1A), consistent with previously published ZMAT3 RIP-seq data (Bersani et al., Oncotarget, 2016). These findings further support the notion that ZMAT3 does not directly bind HKDC1 mRNA in these cells. Accordingly, we have modified the text on page 10 of the revised manuscript.
In addition, as suggested by the reviewer, we analyzed changes in splicing of HKDC1 pre-mRNA using rMATS by comparing our previously published RNA-seq data from siCTRL- and siZMAT3-transfected HCT116 cells (Muys et al., Genes Dev, 2021). We focused on splicing events with an FDR < 0.05 and a delta PSI > |0.1| (representing at least a 10% change in splicing). This analysis (data not shown) did not reveal any significant alterations in HKDC1 pre-mRNA splicing upon ZMAT3 knockdown. The corresponding text has been updated on page 10 of the revised manuscript.
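The splicing-event filter described above amounts to a two-condition cutoff over the rMATS output. A minimal sketch with invented event values (not data from the analysis):

```python
# Hypothetical rMATS-style event records; FDR and delta PSI values are invented.
events = [
    {"gene": "HKDC1", "fdr": 0.60, "delta_psi": 0.02},  # not significant
    {"gene": "GENEA", "fdr": 0.01, "delta_psi": 0.25},  # passes both cutoffs
    {"gene": "GENEB", "fdr": 0.03, "delta_psi": 0.05},  # significant, small shift
]

# Keep events with FDR < 0.05 AND |delta PSI| > 0.1 (>= 10% splicing change).
significant = [
    e["gene"] for e in events
    if e["fdr"] < 0.05 and abs(e["delta_psi"]) > 0.1
]
print(significant)
```

Under this filter, an event like the HKDC1 placeholder above (high FDR, tiny PSI shift) is excluded, matching the reported absence of significant HKDC1 splicing changes.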
(6) The authors say that they examine JUN binding at the HKDC1 promoter several times, but they focus on intron 1 in Figure 5. They should revise the text accordingly, and they should also show JUN ChIP data traces for the whole HKDC1 locus in Figure 5C.
We thank the reviewer for this helpful suggestion. As recommended, we have revised the text throughout the manuscript and replaced "HKDC1 promoter" with "HKDC1 intron 1 DNA" to accurately reflect our analysis, and Figure 5 now shows the JUN ChIP-seq signal across the entire HKDC1 locus.
(7) In the ZMAT3 and JUN interaction assays, were these tested in the presence of DNAse or RNAse to determine if nucleic acids mediate the interaction?
We thank the reviewer for this valuable suggestion. To test whether nucleic acids mediate the ZMAT3-JUN interaction, we performed ZMAT3 immunoprecipitations (IPs) in the presence or absence of DNase and RNase from doxycycline-inducible ZMAT3-FLAG-expressing HCT116 cells. The ZMAT3-JUN interaction was lost upon treatment with either DNase or RNase, indicating that the interaction is mediated by nucleic acids. These data have been added to the revised manuscript (Figure 5-figure supplement 1D and page 11).
www.biorxiv.org
eLife Assessment
This important study provides the first putative evidence that alteration of the Hox code in neck lateral plate mesoderm is sufficient to induce ectopic development of forelimb buds at neck level. The authors use both gain-of-function (GOF) and loss-of-function (LOF) approaches in chick embryos to test the roles of Hox paralogy group (PG) 4-7 genes in limb development. The GOF data provide strong evidence that overexpression of Hox PG6/7 genes is sufficient to induce forelimb buds at neck level. However, the experiments using dominant negative constructs lack some key controls needed to demonstrate the specificity of the LOF effect, rendering the work as a whole incomplete.
Reviewer #2 (Public review):
In the original review of this manuscript, I noted that this study provides the first evidence that alteration of the Hox code in neck lateral plate mesoderm is sufficient for ectopic forelimb budding. Their finding that ectopic expression of Hoxa6 or Hoxa7 induces wing budding at neck level, a demonstration of sufficiency, is of major significance. The experiments used to test the necessity of specific Hox genes for limb budding involved overexpression of dominant negative constructs, and there were questions about whether the controls were well designed. The reviewers made several suggestions for additional experiments that would address their concerns. In their responses to those comments, the authors indicated that they would conduct those experiments, and they acknowledged the requests for further discussion of a few points.
In the revised version of the manuscript, the authors have provided additional RNA-seq data in Table 3, which lists 221 genes that are shared between the Hoxa6-induced limb bud and normal wing bud but not the neck. This shows that the ectopic limb bud has a limb-like character. The authors also expanded the discussion of their results in the context of previous work on the mouse. These changes have improved the paper.
The authors elected not to conduct the co-transfection experiments that were suggested to test the ability of Hoxa4/a5 to block the limb-inducing ability of Hoxa6/a7. They also chose not to conduct the additional control experiments that were suggested for the dominant negative studies. The authors' justification for not conducting these experiments is provided in the responses to reviewers.
The paper is improved over the previous version, but the conclusions, particularly regarding the dominant negative experiments, would have been strengthened by the additional experiments that were recommended by the reviewers. Under the current publishing model for eLife, it is the authors' prerogative to decide whether to revise in accordance with the reviewers' suggestions. Therefore, it seems to me that this version of the manuscript is the definitive version that the authors want to publish, and that eLife should publish it together with the reviewers' comments and the authors' responses.
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public review)
Weaknesses:
(1) The activity of the dominant negatives lacks appropriate controls. This is crucial given that mouse mutants for PG5, PG6, PG7, and three of the four PG4 genes show no major effects on limb induction or growth. Understanding these discrepancies is essential.
We thank the reviewer for emphasizing the importance of appropriate controls for the dominant-negative experiments. Dominant-negative Hox constructs have been successfully and widely used in previous studies, supporting the reliability of this approach. In our experiments, electroporation of the dominant-negative constructs into the limb field produced clear and reproducible effects when compared with both unoperated embryos and embryos electroporated with a GFP control construct. The GFP construct serves as an appropriate control, as it accounts for any effects of electroporation or exogenous protein expression without altering Hox gene function. We therefore conclude that the observed phenotypes specifically reflect dominant-negative Hox activity rather than procedural artifacts.
The absence of overt limb phenotypes in PG4-PG7 mouse mutants likely reflects both functional redundancy among Hox paralogs and the difficulty of detecting subtle limb-specific effects in bilateral, systemically affected embryos. In contrast, the chick embryo system allows unilateral gene manipulation, providing an internal control and greater sensitivity for detecting weak or localized effects that may be masked in whole-animal mouse mutants.
(2) The authors mention redundancies in Hox activity, consistent with numerous previous reports. However, they only use single dominant-negative versions of each Hox paralog gene individually. If Hox4 and Hox5 functions are redundant, experiments should include simultaneous dominant negatives for both groups.
We thank the reviewer for this thoughtful suggestion. We fully agree that functional redundancy among Hox paralogs is an important consideration. However, Hox gene interactions are highly context-dependent and not strictly additive. Simultaneous interference with multiple Hox groups often leads to complex or compensatory effects that are difficult to interpret mechanistically, particularly when using dominant-negative constructs that may affect overlapping transcriptional networks.
Our current experimental design, which targets individual paralog groups, allows us to attribute observed phenotypes to specific Hox activities and to interpret the results more precisely. Moreover, as shown in previous studies, simultaneous knockdown of multiple Hox genes does not necessarily produce stronger phenotypes. For these reasons, we believe that the present single dominant-negative experiments are the most informative and sufficient for addressing the specific questions in this study.
(3) The main conclusion that Hox4 and Hox5 provide permissive cues on which Hox6/7 induce the forelimb is not sufficiently supported by the data. An experiment expressing simultaneous dnHox4/5 and Hox6/7 is needed. If the hypothesis is correct, this should block Hox6/7's capacity to expand the limb bud or generate an extra bulge.
We thank the reviewer for this insightful suggestion. However, because of the extensive functional redundancy and regulatory interdependence within the Hox network, simultaneous inhibition of Hox4 and Hox5 is unlikely to produce a simple or interpretable outcome. Previous studies have shown that combinatorial Hox manipulations can trigger compensatory changes in other Hox genes, often obscuring rather than clarifying specific relationships.
In our study, the proposed permissive role of Hox4/5 is supported by the spatial and temporal patterns of Hox expression and by the phenotypic effects observed upon individual dominant-negative perturbations. These data together suggest that Hox4/5 establish a forelimb-competent domain, on which Hox6/7 subsequently act to promote limb outgrowth. We therefore believe that the current evidence sufficiently supports this model without necessitating the additional combined experiment, which may not provide clear mechanistic insight due to redundancy effects.
(4) The identity of the extra bulge or extended limb bud is unclear. The only marker supporting its identity as a forelimb is Tbx5, while other typical limb development markers are absent. Tbx5 is also expressed in other regions besides the forelimb, and its presence does not guarantee forelimb identity. For instance, snakes express Tbx5 in the lateral mesoderm along much of their body axis.
We thank the reviewer for this important comment. We agree that Tbx5 expression alone may not be sufficient to define forelimb identity. However, in our experiments, the induced bulge displays several additional characteristics consistent with early limb identity (at the pre-AER stage). First, the Tbx5 expression we observe corresponds to the stage when the limb field is already specified, not the earlier broad mesodermal phase described in other systems. Second, the induced domain also expresses Lmx1, a marker of dorsal limb mesenchyme, further supporting its limb-specific nature. Third, our RNA sequencing analysis reveals upregulation of multiple genes associated with early limb development pathways, providing molecular evidence for limb-type identity rather than non-specific mesodermal expansion. Taken together, these results strongly indicate that the induced bulge represents a forelimb-like structure rather than a generic mesodermal thickening.
(5) It is important to analyze the skeletons of all embryos to assess the effect of reduced limb buds upon dnHox expression and determine whether extra skeletal elements develop from the extended bud or ectopic bulge.
We thank the reviewer for this helpful suggestion. We have analyzed the cartilage structures of the operated embryos. No skeletal elements were detected within the ectopic wing bud in the neck region. Furthermore, we did not observe any significant structural changes in the wing skeleton following loss-of-function (dnHox) experiments. These observations indicate that the ectopic bulges do not progress to form skeletal elements, consistent with their identity as early limb-like outgrowths rather than fully developed limbs.
Reviewer #2 (Public review):
Weaknesses
(1) By contrast to the GOF experiments that induce ectopic limb budding, the LOF experiments, which use dominant negative forms of Hoxa4, Hoxa5, Hoxa6, and Hoxa7, are more challenging to interpret due to the absence of data on the specificity of the dominant negative constructs. Absent such controls, one cannot be certain that effects on limb development are due to disruption of the specific Hox proteins that are being targeted.
We thank the reviewer for raising this important point regarding the specificity of the dominant-negative constructs. The dnHox constructs used in this study were generated by truncating the C-terminal region of each Hox protein, a strategy that removes the homeodomain and has been demonstrated to act as a specific dominant-negative by interfering with the corresponding Hox function without broadly affecting unrelated Hox genes. This approach has been successfully validated and used in previous work (Moreau et al., Curr. Biol. 2019), where similar constructs effectively and specifically inhibited Hox activity in the chick embryo.
(2) A test of their central hypothesis regarding the necessity and sufficiency of the Hox genes under investigation would be to co-transfect the neck with full-length Hoxa6/a7 AND the dnHoxA4/a5. If their hypothesis is correct, then the dn constructs should block the limb-inducing ability of Hoxa6/a7 overexpression (again, validation of specificity of the DN constructs is important here)
We thank the reviewer for this insightful suggestion. We agree that, in principle, co-electroporation of dnHox4/5 with Hox6/7 could test the hierarchical relationship between these genes. However, due to the extensive redundancy and regulatory interdependence among Hox genes, simultaneous manipulation of multiple genes often leads to compensatory effects or complex outcomes that are difficult to interpret mechanistically. As discussed in our response to Point 3 of Reviewer 1, inhibition of only one or two Hox4/5 paralogs is unlikely to completely abolish the permissive function of this group.
Our current data — showing that Hox6/7 gain-of-function can induce ectopic limb-like outgrowths, while dnHox4/5 and dnHox6/7 lead to reduced limb formation — already provide strong evidence for both the necessity and sufficiency of these Hox activities in forelimb positioning. We therefore believe that the existing experiments adequately support our proposed model without the need for additional combinatorial manipulations.
(3) The paper could be strengthened by providing some additional data, which should already exist in their RNA-Seq dataset, such as supplementary material that shows the actual gene expression data that are represented in the Venn diagram, heatmap, and GO analysis in Figure 3.
We thank the reviewer for this constructive suggestion. In response, we have added a table (Table 3) listing the genes expressed in both the native limb/wing bud and the Hoxa6-induced wing bud, as identified from our RNA-Seq dataset. This table provides the underlying data for the Venn diagram, heatmap, and GO analysis presented in Figure 3. We agree that including these data improves transparency and helps readers better appreciate the molecular similarity between the induced and native limb buds.
(4) The results of these experiments in chick embryos are rather unexpected based on previous knockout experiments in mice, and this needs to be discussed.
We thank the reviewer for this important point. We have addressed this issue in our response to Reviewer 1, Point 1, and have expanded the relevant discussion in the manuscript. Briefly, we believe that the apparent discrepancy between chick and mouse results arises from both the high degree of functional redundancy among Hox paralogs and the limitations of detecting subtle limb-specific effects in systemic mouse mutants, where both sides of the embryo are equally affected. In contrast, the chick system allows unilateral gene manipulation, providing an internal control and greatly enhancing sensitivity to detect weak or localized effects. Thus, the chick embryo model can reveal subtle Hox-dependent limb-induction activities that are masked in conventional mouse knockout approaches.
test2025.mitkoforevents.cz
Materials ("Materiály")
Fabrics ("Látky") + correct order in the shared table
test2025.mitkoforevents.cz
Free graphic design ("Grafický návrh zdarma")
use the banner from https://test2025.mitkoforevents.cz/nuzkove-stany/3x3/
test2025.mitkoforevents.cz
Personal pickup ("Osobní odběr"): Do you have a delivery van? Stop by Mitko for pickup! Remember that only personal pickup guarantees timely delivery. You can pick up your ordered goods at our company headquarters Monday to Friday, 8:00-15:00. MITKO s.r.o., Bohunická 133/50, 612 00 Brno, CZ. NIP: CZ02777631
erase this whole section - we don't have the "personal pickup" option in our CZ/SK offer
www.biorxiv.org
eLife Assessment
This study reports useful information on the mechanisms by which a high-fat diet induces arrhythmias in the model organism Drosophila. Specifically, the authors propose that adipokinetic hormone (Akh) secretion is increased on this diet and that, through binding of Akh to its receptor on cardiac neurons, arrhythmia is induced. The authors have revised their manuscript, but in some areas the evidence remains incomplete; the authors say future studies will be directed toward closing the present gaps. Nonetheless, the data presented will be helpful to those who wish to extend the research to a more complex model system, such as the mouse.
Reviewer #1 (Public review):
Summary:
In the manuscript submission by Zhao et al. entitled "Cardiac neurons expressing a glucagon-like receptor mediate cardiac arrhythmia induced by high-fat diet in Drosophila," the authors assert that cardiac arrhythmias in Drosophila on a high-fat diet are due in part to activation of adipokinetic hormone (Akh) signaling. A high-fat diet induces Akh secretion from activated endocrine neurons, and the secreted Akh activates AkhR in posterior cardiac neurons. Silencing or deletion of Akh or AkhR blocks arrhythmia in Drosophila on a high-fat diet. Elimination of one of two AkhR-expressing cardiac neurons results in arrhythmia similar to that induced by a high-fat diet.
Strengths:
The authors propose a novel mechanism for high-fat diet-induced arrhythmia in which the Akh signaling pathway acts on cardiac neurons.
Reviewer #3 (Public review):
Zhao et al. provide new insights into the mechanism by which a high-fat diet (HFD) induces cardiac arrhythmia, employing Drosophila as a model. HFD induces cardiac arrhythmia in both mammals and Drosophila, and both glucagon and its functional equivalent in Drosophila, Akh, are known to induce arrhythmia. The study demonstrates that Akh mRNA levels are increased by HFD and that both Akh and its receptor are necessary for HFD-induced cardiac arrhythmia, elucidating a novel link. Notably, Zhao et al. identify a pair of Akh receptor-expressing neurons located at the posterior of the heart tube. Interestingly, these neurons innervate the heart muscle and form synaptic connections, implying a role in controlling the heart muscle. The study presented by Zhao et al. is intriguing, and rigorous characterization of the Akh receptor-expressing neurons would significantly enhance our understanding of the molecular mechanism underlying HFD-induced cardiac arrhythmia.
Many experiments presented in the manuscript are appropriate for supporting the conclusions while additional controls and precise quantifications should help strengthen the authors' arguments. The key results obtained by loss of Akh (or AkhR) and genetic elimination of the identified AkhR-expressing cardiac neurons do not reconcile, complicating the overall interpretation.
The most exciting result is the identification of AkhR-expressing neurons located at the posterior part of the heart tube (ACNs). The authors attempted to determine the function of ACNs by expressing rpr with AkhR-GAL4, which would induce cell death in all AkhR-expressing cells, including ACNs. The experiments presented in Figure 6 are not straightforward to interpret. Moreover, the conclusion contradicts the main hypothesis that elevated Akh is the basis of HFD-induced arrhythmia. The results suggest the importance of AkhR-expressing cells for normal heartbeat. However, elimination of Akh or AkhR restores normal rhythm in HFD-fed animals, suggesting that Akh and AkhR are not important for maintaining normal rhythms. If Akh signaling in ACNs is key for HFD-induced arrhythmia, genetic elimination of ACNs should leave the rhythm unaltered and rescue the HFD-induced arrhythmia. An important caveat is that the experiments do not test the specific role of ACNs. ACNs should be just a small part of the cells expressing AkhR. Specific manipulation of ACNs will significantly improve the study. Moreover, the main hypothesis suggests that HFD may alter the activity of ACNs in a manner dependent on Akh and AkhR. Testing how HFD changes calcium, possibly by CaLexA (Figure 2) and/or GCaMP, in wild-type and AkhR mutant animals could be a way to connect ACNs to HFD-induced arrhythmia. Moreover, optogenetic manipulation of ACNs may allow for specific manipulation of ACNs.
Interestingly, expressing rpr with AkhR-GAL4 was insufficient to eliminate both ACNs. It is not clear why it didn't eliminate both ACNs. Given the incomplete penetrance, appropriate quantifications should be helpful. Additionally, the impact on other AkhR-expressing cells should be assessed. Adding more copies of UAS-rpr, AkhR-GAL4, or both may eliminate all ACNs and other AkhR-expressing cells. The authors could also try UAS-hid instead of UAS-rpr.
-
Author response:
The following is the authors’ response to the previous reviews.
Public Reviews:
Reviewer #1 (Public review):
Summary:
In the manuscript submission by Zhao et al. entitled "Cardiac neurons expressing a glucagon-like receptor mediate cardiac arrhythmia induced by high-fat diet in Drosophila", the authors assert that cardiac arrhythmias in Drosophila on a high-fat diet are due in part to activation of adipokinetic hormone (Akh) signaling. A high-fat diet induces Akh secretion from activated endocrine neurons, which in turn activates AkhR in posterior cardiac neurons. Silencing or deletion of Akh or AkhR blocks arrhythmia in Drosophila on a high-fat diet. Elimination of one of the two AkhR-expressing cardiac neurons results in arrhythmia similar to that seen on a high-fat diet.
Strengths:
The authors propose a novel mechanism for high-fat-diet-induced arrhythmia in which the Akh pathway signals to cardiac neurons.
Comments on revisions:
The authors have addressed my other concerns. The only outstanding issue is in regard to the following comment:
The authors state that "HFD led to increased heartbeat and an irregular rhythm." In representative examples shown, HFD resulted in pauses, slower heart rate, and increased irregularity in rhythm but not consistently increased heart rate (Figures 1B, 3A, and 4C). Based on the cited work by Ocorr et al (https://doi.org/10.1073/pnas.0609278104), Drosophila heart rate is highly variable with periods of fast and slow rates, which the authors attributed to neuronal and hormonal inputs. Ocorr et al then describe the use of "semi-intact" flies to remove autonomic input to normalize heart rate. Were semi-intact flies used? If not, how was heart rate variability controlled? And how was heart rate "increase" quantified in high fat diet compared to normal fat diet? Lastly, how does one measure "arrhythmia" when there is so much heart rate variability in normal intact flies?
The authors state that 8 sec time windows were selected at the discretion of the imager for analysis. I don't know how to avoid bias unless the person acquiring the imaging is blinded to the condition and the analysis is also done blind. Can you comment whether data acquisition and analysis was done in a blinded fashion? If not, this should be stated as a limitation of the study.
Drosophila heart rate is highly variable. During the recording, we were biased toward choosing a time window in which the heartbeat was fairly stable. This is a limitation of the study, which we mention in the revised version. We chose to use intact rather than "semi-intact" flies with the intention of avoiding damage to the cardiac neurons.
Reviewer #3 (Public review):
Zhao et al. provide new insights into the mechanism by which a high-fat diet (HFD) induces cardiac arrhythmia employing Drosophila as a model. HFD induces cardiac arrhythmia in both mammals and Drosophila. Both glucagon and its functional equivalent in Drosophila Akh are known to induce arrhythmia. The study demonstrates that Akh mRNA levels are increased by HFD and both Akh and its receptor are necessary for high-fat diet-induced cardiac arrhythmia, elucidating a novel link. Notably, Zhao et al. identify a pair of AKH receptor-expressing neurons located at the posterior of the heart tube. Interestingly, these neurons innervate the heart muscle and form synaptic connections, implying their roles in controlling the heart muscle. The study presented by Zhao et al. is intriguing, and the rigorous characterization of the AKH receptor-expressing neurons would significantly enhance our understanding of the molecular mechanism underlying HFD-induced cardiac arrhythmia.
Many experiments presented in the manuscript are appropriate for supporting the conclusions while additional controls and precise quantifications should help strengthen the authors' arguments. The key results obtained by loss of Akh (or AkhR) and genetic elimination of the identified AkhR-expressing cardiac neurons do not reconcile, complicating the overall interpretation.
We thank the reviewer for the positive comments. We believe that additional signaling pathways are active in the AkhR neurons and regulate rhythmic heartbeat; we are currently searching for the molecules and pathways that act on the AkhR cardiac neurons to regulate the heartbeat. Thus, AkhR neuron ablation should have a more profound effect: loss of AkhR is not equivalent to AkhR neuron ablation.
The most exciting result is the identification of AkhR-expressing neurons located at the posterior part of the heart tube (ACNs). The authors attempted to determine the function of ACNs by expressing rpr with AkhR-GAL4, which would induce cell death in all AkhRexpressing cells, including ACNs. The experiments presented in Figure 6 are not straightforward to interpret. Moreover, the conclusion contradicts the main hypothesis that elevated Akh is the basis of HFD-induced arrhythmia. The results suggest the importance of AkhR-expressing cells for normal heartbeat. However, elimination of Akh or AkhR restores normal rhythm in HFD-fed animals, suggesting that Akh and AkhR are not important for maintaining normal rhythms. If Akh signaling in ACNs is key for HFD-induced arrhythmia, genetic elimination of ACNs should unalter rhythm and rescue the HFD-induced arrhythmia. An important caveat is that the experiments do not test the specific role of ACNs. ACNs should be just a small part of the cells expressing AkhR. Specific manipulation of ACNs will significantly improve the study. Moreover, the main hypothesis suggests that HFD may alter the activity of ACNs in a manner dependent on Akh and AkhR. Testing how HFD changes calcium, possibly by CaLexA (Figure 2) and/or GCaMP, in wild-type and AkhR mutant could be a way to connect ACNs to HFD-induced arrhythmia. Moreover, optogenetic manipulation of ACNs may allow for specific manipulation of ACNs.
We thank the reviewer for suggesting the detailed experiments, and we believe that addressing these points will consolidate the results. As AkhR-Gal4 is also expressed in the fat body, we set out to build a more specific driver. We planned to use the split-Gal4 system (Luan et al. 2006. PMID: 17088209): the combination of the pan-neuronal Elav-Gal4.DBD and AkhR-p65.AD should yield an AkhR-neuron-specific driver. We selected 2580 bp of AkhR upstream DNA and cloned it into the pBPp65ADZpUw plasmid (Addgene plasmid #26234). After two rounds of injection, however, we were not able to recover a transgenic line.
We used GCaMP to record the calcium signal in the AkhR neurons. AkhR-Gal4>GCaMP shows extremely high levels of fluorescence in the cardiac neurons under normal conditions.
We are screening Gal4 drivers, trying to find one line that is specific to the cardiac neurons and has a lower level of driver activity.
Interestingly, expressing rpr with AkhR-GAL4 was insufficient to eliminate both ACNs. It is not clear why it didn't eliminate both ACNs. Given the incomplete penetrance, appropriate quantifications should be helpful. Additionally, the impact on other AkhR-expressing cells should be assessed. Adding more copies of UAS-rpr, AkhR-GAL4, or both may eliminate all ACNs and other AkhR-expressing cells. The authors could also try UAS-hid instead of UAS-rpr.
We quantified the AkhR neuron ablation and found that about 69% (n=28) of AkhR-Gal4>rpr flies showed a single ACN. It is more challenging to quantify other AkhR-expressing cells, as they are widely distributed. We tried adding more copies of UAS-rpr or AkhR-Gal4, but this caused developmental defects (pupal lethality). Thus, as mentioned above, we are trying to find a more specific driver for targeting the cardiac neurons.
Recommendations for the authors:
Reviewer #3 (Recommendations for the authors):
The authors refer to the 'crop' as the functional equivalent of the human stomach. Considering the difference in their primary functions, this cannot be justified.
In Drosophila, the crop functions analogously to the vertebrate stomach. It is a foregut organ for storage and preliminary processing that regulates food passage into the midgut, and it is more than a simple reservoir: the crop carries out enzymatic mixing, is under neural control, and is actively motile.
Lines 163 and 166: APCs are not neurons.
Akh-producing cells (APCs) in Drosophila are neuroendocrine cells, residing in the corpora cardiaca (CC). While they produce and secrete the hormone AKH (akin to glucagon), they are not brain interneurons per se. APCs share many neuronal features (vesicular release, axon-like projections) and receive neural inputs, effectively functioning as a peripheral endocrine center.
-
www.biorxiv.org
-
eLife Assessment
This fundamental study is part of an impressive, large-scale effort to assess the reproducibility of published findings in the field of Drosophila immunity. In a companion article, the authors analyze 400 papers published between 1959 and 2011, and assess how many of the claims in these papers have been tested in subsequent publications. In this article, the authors report the results of validation experiments to assess a subset of the claims that, according to the literature, have not been corroborated. While the evidence reported for some of these validation studies is convincing, it remains incomplete for others.
-
Reviewer #1 (Public review):
Summary:
This work revisits a substantial part of the published literature in the field of Drosophila innate immunity from 1959 to 2011. The strategy has been to restrict the analysis to some 400 articles and then to extract from each a main claim, two to four major claims, and up to four minor claims, totaling some 2000 claims overall. The consistency of these claims with the current state of the art has been evaluated and reported on a dedicated website known as ReproSci, as well as in the text and in the 28 Supplements that report direct or indirect experimental verification (e.g., using novel null mutants unavailable at the time) of a selected set of claims made in several articles. Of note, this review is mostly limited to the manuscript and its associated supplements and does not integrally cover the ReproSci website.
Strengths:
One major strength of this article is that it tackles the issue of reproducibility/consistency on a large scale. Indeed, while many investigators have some serious doubts about some results found in the literature, few have the courage, or the means and time, to seriously challenge studies, especially if published by leaders in the field. The Discussion adequately states the major limitations of the ReproSci approach, which should be kept in mind by the reader to form their own opinion.
This study also allows investigators not familiar with the field to have a clearer understanding of the questions at stake and to derive a more coherent global picture that allows them to better frame their own scientific questions. Besides a thorough and up-to-date knowledge of the literature used to assess the consistency of the claims with our current knowledge, a merit of this study is the undertaking of independent experiments to address some puzzling findings and the evidence presented is often convincing, albeit one should keep in mind the inherent limitations as several parameters are difficult to control, especially in the field of infections, as underlined by the authors themselves. Importantly, some work of the lead author has also been re-evaluated (Supplements S2-S4). Thus, while utmost caution should be exerted, and often is, in challenging claims, even if the challenge eventually proves to be not grounded, it is valuable to point out potential controversial issues to the scientific community.
While this is not a point of this review, it should be acknowledged that the possibility of posting comments on the ReproSci website will allow the community to further readjust its appreciation of the literature, of the ReproSci assessments themselves, and of their complementary additional experiments.
Weaknesses:
Challenging the results from articles is, by its very nature, a highly sensitive issue, and utmost care should be taken when challenging claims. While the authors generally acknowledge the limitations of their approach in the main text and Supplements, there are a few instances where their challenges remain questionable and should be reassessed. This is certainly the case for Supplement S18, for which the ReproSci authors make a claim for a point that was not made in the publication under scrutiny. The authors of that study (Ramet et al., Immunity, 2001) never claimed that scavenger receptor SR-CI is a phagocytosis receptor, but that it is required for optimal binding of S2 cells to bacteria. Westlake et al. here have tested for a role of this scavenger receptor in phagocytosis, which had not been tested by Ramet et al. Thus, even though the ReproSci study brings additional knowledge to our understanding of the function of SR-CI by directly testing its involvement in phagocytosis by larval hemocytes, it did not address the major point of the Ramet et al. study, SR-CI binding to bacteria, and thus inappropriately concludes in Supplement S18 that "Contrary to (Ramet et al., 2001, Saleh et al., 2006), we find that SR-CI is unlikely to be a major Drosophila phagocytic receptor for bacteria in vivo." It follows that the results of Ramet et al. cannot be challenged by ReproSci, as it did not address this point. Of note, Saleh et al. (2006) also mistakenly stated that SR-CI impaired phagocytosis in S2 cells and could be used as a positive control to monitor phagocytosis in S2 cells. Their assay appears not to have monitored phagocytosis but rather the association of FITC-labeled bacteria with S2 cells by FACS, as they did not mention quenching the fluorescence of surface-associated bacteria with Trypan blue.
The inference method used to assess the consistency of results with current knowledge also has limitations that should be better acknowledged. At times, the argument is made that the gene under scrutiny may not be expressed at the right time according to large-scale data, or that the gene product was not detected in the hemolymph by a mass-spectrometry approach. While these are in theory strong arguments, some genes, for instance those encoding proteases at the apex of proteolytic activation cascades, need not necessarily be strongly expressed and might be released by only a few cells. In addition, we are often lacking relevant information on the expression of genes of interest upon specific immune challenges, such as infections with particular pathogens.
As regards mass spectrometry, there is always the issue of sensitivity that limits the force of the argument. Our understanding of melanization remains currently limited, and methods are lacking to accurately measure the killing activity associated with the triggering of the proPO activation cascade. In this study, the authors monitor only the blackening reaction of the wound site based on a semi-quantitative measurement. They are not attempting to use other assays, such as monitoring the cleavage of proPOs into active POs or measuring PO enzymatic activity. These techniques are sometimes difficult to implement, and they suffer at times from variability. Thus, caution should be exerted when drawing conclusions from just monitoring the melanization of wounds.
Likewise, the study of phagocytosis is limited by several factors. As most studies in the field focus on adults, the potential role of phagocytosis in controlling Gram-negative bacterial infections is often masked by the efficiency of the strong IMD-mediated systemic immune response mediated by AMPs (Hanson et al., eLife, 2019). This problem can be bypassed in rare instances of intestinal infections by Gram-negative bacteria such as Serratia marcescens (Nehme et al., PLoS Pathogens, 2007) or Pseudomonas aeruginosa (Limmer et al., PNAS, 2011), which escape from the digestive tract into the hemocoel without triggering, at least initially, the systemic immune response. It is technically feasible to monitor bacterial uptake in adults by injecting fluorescently labeled bacteria and subsequently quenching the signal from non-ingested bacteria. Nonetheless, many investigators prefer to resort to ex vivo assays starting from hemocytes collected from third-instar wandering larvae, as these are easier to collect and then to analyze, e.g., by FACS. However, it should be pointed out that these hemocytes have been strongly exposed to a peak of ecdysone, which may alter their properties. As for S2 cells, it is thus not clear whether third-instar larval hemocytes faithfully reproduce the situation in adults. The phagocytic assays are often performed with killed bacteria. Evidence with live microorganisms is better, especially with pathogens. Assays with live bacteria require, however, an antibody used in a differential permeabilization protocol. Furthermore, the killing method alters the surface of the microorganisms, a key property for phagocytic uptake. Bacterial surface changes are minimal when microorganisms are killed by X-ray or UV light. These limitations should be kept in mind when proceeding to inference analysis of the consistency of claims. Eater illustrates this point well. Westlake et al. state that "[...] subsequent studies showed that a null mutation of eater does not impact phagocytosis". The authors refer here to Bretscher et al., Biology Open, 2015, in which binding to heat-killed E. coli was assessed in an ex vivo assay in third-instar larvae. In contrast, Chung and Kocks (JBC, 2011) tested whether the recombinant extracellular N-terminal ligand-binding domain was able to bind to bacteria. They found that this domain binds to live Gram-positive bacteria but not to live Gram-negative bacteria. For the latter, killing the bacteria with ethanol or heat, but not by formaldehyde treatment, allowed binding. More importantly, Chung and Kocks documented a complex picture in which AMPs may be needed to permeabilize the Gram-negative bacterial cell wall, which would then allow access of at least the recombinant secreted Eater extracellular domain to peptidoglycan or peptidoglycan-associated molecules. Thus, the systemic Imd-dependent immune response would be required in vivo to allow Eater-dependent uptake of Gram-negative bacteria by adult hemocytes. In ex vivo assays, any AMPs may be diluted too much to effectively attack the bacterial membrane. A prediction is then that there should be altered phagocytosis of Gram-negative bacteria in IMD-pathway mutants, e.g., an imd null mutant but not the hypomorphic imd[1] allele. This could easily be tested by ReproSci using the adult phagocytosis assay used by Kocks et al., Cell, 2005. At the very least, the part on the role of Eater in phagocytosis should take the Chung & Kocks study into account, and the conclusions modulated accordingly.
Another point is that some mutant phenotypes may be highly sensitive to the genetic background, for instance, even after isogenization in two different backgrounds. In the framework of a Reproducibility project, there might be no other option for such cases than direct reproduction of the experiment as relying solely on inference may not be reliable enough.
With respect to the experimental part, some minor weaknesses have been noted. The authors rely on survival to infection experiments, but often do not show any control experiments with mock-challenged or noninfected mutant fly lines. In some cases, monitoring the microbial burden would have strengthened the evidence. For long survival experiments, a check on the health status of the lines (viral microbiota, Wolbachia) would have been welcome. Also, the experimental validation of reagents, RNAi lines, or KO lines is not documented in all cases.
-
Reviewer #2 (Public review):
Summary:
The authors present an ambitious and large-scale reproducibility analysis of 400 articles on Drosophila immunity published before 2011. They extract major and minor claims from each article, assess their verifiability through literature comparison and, when possible, through targeted experimental re-testing, and synthesize their findings in an openly accessible online database. The goal is to provide clarity to the community regarding claims that have been contradicted, incompletely supported, or insufficiently followed up in the literature, and to foster broader community participation in evaluating historical findings. The manuscript summarizes the major insights emerging from this systematic effort.
Strengths:
(1) Novelty and community value: This work represents a rare example of a systematic, transparent, and community-facing reproducibility project in a specific research domain. The creation of a dedicated public platform for disseminating and discussing these assessments is particularly innovative.
(2) Breadth and depth: The authors analyze an impressive number of publications spanning multiple decades, and they couple literature-based assessments with new experimental data where follow-up is missing.
(3) Clarity of purpose: The manuscript carefully distinguishes between assessing evidential support for claims and judging the scientific merit of historical work. This helps frame the project as constructive rather than punitive.
(4) Metascientific relevance: The analysis identifies methodological and contextual factors that commonly underlie irreproducible claims, providing a useful guide for future study design and interpretation.
(5) Transparency: Supplementary datasets and the public website provide an exceptional degree of openness, which should facilitate community engagement and further refinement.
Weaknesses:
(1) Subjectivity in selection: Despite the authors' efforts, the choice of which papers and claims to highlight cannot be entirely objective. This is an inherent limitation of any retrospective curation effort, but it remains important to acknowledge explicitly.
(2) Emphasis on irreproducible claims: The manuscript focuses primarily on claims that are challenged or found to be weakly supported. While understandable from the perspective of novelty, this emphasis may risk overshadowing the value of claims that are well supported and reproducible.
(3) Framing and language: Certain passages could benefit from more neutral phrasing and avoidance of binary terms such as "correct" or "incorrect," in keeping with the open-ended and iterative nature of scientific progress.
(4) Community interaction with the dataset: While the website is an excellent resource, the manuscript could further clarify how the community is expected to contribute, challenge, or refine the annotations, especially given the large volume of supplementary data.
(5) Minor inconsistency: The manuscript states that papers from 1959-2011 were included, but the Methods section mentions a range beginning in 1940. This should be aligned for clarity.
Impact and significance:
This contribution is likely to have a meaningful impact on both the Drosophila immunity community and the broader scientific ecosystem. It highlights methodological pitfalls, encourages transparent post-publication evaluation, and offers a reusable framework that other fields could adopt. The work also has pedagogical value for early-career researchers entering the field, who often struggle to navigate contradictory or outdated claims. By centralizing and contextualizing these discussions, the manuscript should help accelerate more robust and reproducible research.
-
Reviewer #3 (Public review):
Summary:
In this ambitious study, the authors set out to analyse the validity of a number of claims, both minor and major, from 400 published articles within the field of Drosophila immunity that were published before 2011. The authors were able to determine initially if claims were supported by comparing them to other published literature in the field and, if required, by experimentally testing 'unchallenged' claims that had not been followed up in subsequent published literature. Using this approach, the authors identified a number of claims that had contradictory evidence using new methods or taking into account developments within the field post-initial publication. They put their findings on a publicly available website designed to enable the research community to assess published work within the field with greater clarity.
Strengths:
The work presented is rigorous and methodical, the data presentation is high quality, and importantly, the data presented support the conclusions. The discussion is balanced, and the study is written considerately and respectfully, highlighting that the aim of the study is not to assign merit to individual scientists or publications but rather to improve clarity for scientists across the field. The approach carried out by the researchers focuses on testing the validity of the claims made in the original papers rather than testing whether the original experimental methods produced reproducible results. This is an important point since there are many reasons why the original interpretation of data may have understandably led to the claims made. These potential explanations for irreproducible data or conclusions are discussed in detail by the authors for each claim investigated.
The authors have generated an accompanying website, which provides a valuable tool for the Drosophila Immunity research community that can be used to fact-check key claims and encourages community engagement. This will achieve one important goal of this study - to prevent time loss for scientists who base their research on claims that are irreproducible. The authors rightly point out that it is impossible (and indeed undesirable) to avoid publication of irreproducible results within a field since science is 'an exploratory process where progress is made by constant course correction'. This study is, however, an important piece of work that will make that course correction more efficient.
Weaknesses:
I have little to recommend for the improvement of this manuscript. As outlined in my comments above, I am very supportive of this manuscript and think it is a bold and ambitious body of work that is important for the Drosophila immunity field and beyond.
-
Reviewer #4 (Public review):
This is an important paper that can do much to set an example for thoughtful and rigorous evaluation of a discipline-wide body of literature. The compiled website of publications in Drosophila immunity is by itself a valuable contribution to the field. There is much to praise in this work, especially including the extensive and careful evaluation of the published literature. However, there are also cautions.
One notable concern is that the validation experiments are generally done at low sample sizes and low replication rates, and often lack statistical analysis. This is slippery ground for declaring a published study to be untrue. Since the conclusions reported here are nearly all negative, it is essential that the experiments be performed with adequate power to detect the originally described effects. At a minimum, they should be performed with the same sample size and replication structure as the originally reported studies.
The first section of Results should be an overview of the general accuracy of the literature. Of all claims made in the 400 evaluated papers, what proportion fell into each category of "verified", "unchallenged", "challenged", "mixed", or "partially verified"? This summary overview would provide a valuable assessment of the field as a whole. A detailed dispute of individual highlighted claims could follow the summary overview.
Section headings are phrased as declarative statements, "Gene X is not involved in process Y", which is more definitive phrasing than we typically use in scientific research. It implies proving a negative, which is difficult and rare, and the evidence provided in the present manuscript generally does not reach that threshold. A more common phrasing would be "We find no evidence that gene X contributes to process Y". A good model for this more qualified phrasing is the "We conclude that while Caspar might affect the Imd pathway in certain tissue-specific contexts, it is unlikely to act as a generic negative regulator of the Imd pathway," concluding the section on the role of Caspar. I am sure the authors feel that the softer, more qualified phrasing would undermine their article's goal of cleansing the literature of inaccuracies, but the hard declarative 'never' statements are difficult to justify unless every validation experiment is done with a high degree of rigor under a variety of experimental conditions. This caveat is acknowledged in the 3rd paragraph of the Discussion, but it is not reflected in the writing of the Results. The caveat should also appear in the Introduction.
The article is clear that "Claims were assessed as verified, unchallenged, challenged, mixed, or partially verified," but the project is called "reproducibility project" in the 7th line of the abstract, and the website is "ReproSci". The fourth line of the abstract and the introduction call some published research "irreproducible". Most of the present manuscript does not describe reproduction or replication. It describes validation, or independent experimental tests for consistency. Published work is considered validated if subsequent studies using distinct approaches yielded consistent results. For work that the authors consider suspicious, or that has not been subsequently tested, the new experiments provided here do not necessarily recreate the published experiment. Instead, the published result is evaluated with experiments that use different tools or methods, again testing for consistency of results. This is an important form of validation, but it is not reproduction, and it should not be referred to as such. I strongly suggest that variations of the words "reproducible" or "replication" be removed from the manuscript and replaced with "validation". This will be more scientifically accurate and will have the additional benefit of reducing the emotional charge that can be associated with declaring published research to be irreproducible.
The manuscript includes an explanatory passage in the Results section, "Our project focuses on assessing the strength of the claims themselves (inferential/indirect reproducibility) rather than testing whether the original methods produce repeatable results (results/direct reproducibility). Thus, our conclusions do not directly challenge the initial results leading to a claim, but rather the general applicability of the claim itself." Rather than first appearing in Results, this statement should appear prominently in the abstract and introduction because it is a core element of the premise of the study. This can be combined with the content of the present Disclaimer section into a single paragraph in the Introduction instead of appearing in two redundant passages. I would again encourage the authors to substitute the word validation for reproduction, which would eliminate the need for the invented distinction between indirect versus direct reproduction. It is notable that the authors have chosen to title the relevant Methods section "Experimental Validation" and not "Replication".
Experimental data "from various laboratories" in the last paragraph of the Introduction and the first paragraph of the Results are ambiguous. Since these new experiments are part of the central core of the manuscript, the specific laboratories contributing them should be named in the two paragraphs. If experiments are being contributed by all authors on the manuscript, it would suffice to say "the authors' laboratories". The attribution to "various labs" appears to be contradicted by the Discussion paragraph 2, which states "the host laboratory has expertise in" antibacterial and antifungal defense, implying a single lab. The claim of expertise by the lead author's laboratory is unnecessary and can be deleted if the Lemaitre lab is the ultimate source of all validation experiments.
The passage on the controversial role of Duox in the gut is balanced and scholarly, and stands out for its discussion of multiple alternative lines of evidence in the published literature and supplement. This passage may benefit from research by multiple groups following up on the original claims that are not available for other claims, but the tone of the Duox section can be a model for the other sections.
Comments on other sections and supplements:
I understand the desire to explain how original results may have been obtained when they are not substantiated by subsequent experiments. However, statements such as "The initial results may have been obtained due to residual impurities in preparations of recombinant GNBP1" and "Non-replicable results on the roles of Spirit, Sphinx and Spheroide in Toll pathway activation may be due to off-target effects common to first-generation RNAi tools" are speculation. No experimental data are presented to support these assertions, so these statements and others like them (currently at the end of most "insights" sections) should not appear in Results. I recognize that the authors are trying to soften their criticism of prior studies by providing explanations for how errors may have occurred innocently. If they wish to do so, the speculative hypotheses should appear in the Discussion.
The statement in Results that "The initial claim concerning wntD may be explained by a genetic background effect independent of wntD" similarly appears to be a speculation based on the reading of the main text Results. However, the Discussion clarifies that "Here, we obtained the same results as the authors of the claim when using the same mutant lines, but the result does not stand when using an independent mutant of the same gene, indicating the result was likely due to genetic background." That additional explanation in the Discussion greatly increases reader confidence in the Result and should be explained with reference to S5 in the Results. Such complete explanations should be provided everywhere possible without requiring the reader to check the Supplement in each instance.
In some cases, such as "The results of the initial papers are likely due to the use of ubiquitous overexpression of PGRP-LE, resulting in melanization due to overactivation of the Imd pathway and resulting tissue damage", the claim to explain the original finding would be easy to test. The authors should perform those tests where they can, if they wish to retain the statements in the manuscript. Similarly, the claim "The published data are most consistent with a scenario in which RNAi generated off-target knockdown of a protein related to retinophilin/undertaker, while Undertaker itself is unlikely to have a role in phagocytosis" would be stronger if the authors searched the Drosophila genome for a plausible homolog that might have been impacted by the RNAi construct, and then put forth an argument as to why the off-target gene is more likely to have generated the original phenotype than the nominally targeted gene. There is a brief mention in S19 that junctophilin is the authors' preferred off-target candidate, but no evidence or rationale is presented to support that assertion. If the original RNAi line is still available, it would be easy enough to test whether junctophilin is knocked down as an off-target, and ideally then to use an independent knockdown of junctophilin to recapitulate the original phenotype. Otherwise, the off-target knockdown hypothesis is idle speculation.
A good model is the passage on extracellular DNA, which states, "experiments performed for ReproSci using the original DNAse IIlo hypomorph show that elevated Diptericin expression in the hypomorph is eliminated by outcrossing of chromosome II, and does not occur in an independent DNAse II null mutant, indicating that this effect is due to genetic background (Supplementary S11)." In this case, the authors have performed a clear experiment that explains the original finding, and inclusion of that explanation is warranted. Similar background replacement experiments in other validations are equally compelling.
The statement "Analysis of several fly stocks expected to carry the PGRP-SDdS3 mutation used in the initial study revealed the presence of a wild-type copy PGRP-SD, suggesting that either the stock used in this study did not carry the expected mutation, or that the mutation was lost by contamination prior to sharing the stock with other labs" provides a documentable explanation of a potential error in the original two manuscripts, but the subsequent "analysis of several fly stocks" needs citations to published literature or explanation in the supplement. It is unclear from this passage how the wildtype allele in the purportedly mutant stocks could have led to the misattribution of function to PGRP-SD, so that should be explained more clearly in the manuscript.
The originally claimed anorexia of the Gr28b mutation is explained as having been "likely obtained due to comparison to a wild-type line with unusually high feeding rates". This claim would be stronger if the wildtype line in question were named and data showing a high rate of feeding were presented in the supplement or cited from published literature. Otherwise, this appears to be speculation.
In the section "The Toll immune pathway is not negatively regulated by wntD", FlyAtlas is cited as evidence that wntD is not expressed in adult flies. However, the FlyAtlas data is not adequately sensitive to make this claim conclusively. If the present authors wish to state that wntD is not expressed in adults, they should do a thorough test themselves and report it in the Supplement.
Alternatively, the statement "data from FlyAtlas show that wntD is only expressed at the embryonic stage and not at the adult stage at which the experiments were performed by (Gordon et al., 2005a)" could be rephrased to something like "data from FlyAtlas show strong expression of wntD in the embryo but not the adult" and it should be followed by a direct statement that adult expression was also found to be near-undetectable by qPCR in supplement S5. That data is currently "not shown" in the supplement, but it should be shown because this is a central result that is being used to refute the original claim. This manuscript passage should also describe the expression data described in Gordon et al. (2005), for contrast, which was an experimental demonstration of expression in the embryo and a claim "RT-PCR was used to confirm expression of endogenous wntD RNA in adults (data not shown)."
Inclusion of the section on croquemort is curious because it seems to be focused exclusively on clearance of apoptotic cells in the embryo, not on anything related to immunity. The subsection is titled "Croquemort is not a phagocytic engulfment receptor for apoptotic cells or bacteria", but the text passage contains no mention of phagocytosis of bacteria, and phagocytosis of bacteria is not tested in the S17 supplement. I would suggest deleting this passage entirely if there is not going to be any discussion of the immune-related phenotypes.
The claim "Toll is not activated by overexpression of GNBP3 or Grass: Experiments performed for ReproSci find that contrary to previous reports, overexpression of GNBP3 (Gottar et al., 2006) or Grass (El Chamy et al., 2008) in the absence of immune challenge does not effectively activate Toll signaling (Supplementaries S6, S7)" is overly strongly stated unless the authors can directly repeat the original published studies with identical experimental conditions. In the absence of that, the claim in the present manuscript needs to be softened to "we find no evidence that..." or something similar. The definitive claim "does not" presumes that the current experiments are more accurate or correct than the published ones, but no explanation is provided as to why that should be the case. In the absence of a clear and compelling argument as to why the current experiment is more accurate, it appears that there is one study (the original) that obtained a certain result and a second study (the present one) that did not. This can be reported as an inconsistency, but the second experiment does not prove that the first was an error. The same comment applies to the refutation of the roles for Edin and IRC. Even though the current experiments are done in the context of a broader validation study, this does not automatically make them more correct. The present work should adhere to the same standards of reporting that we expect in any other piece of science.
The statement "Furthermore, evidence from multiple papers suggests that this result, and other instances where mutations have been found to specifically eliminate Defensin expression, is likely due to segregating polymorphisms within Defensin that disrupt primer binding in some genetic backgrounds and lead to a false negative result (Supplementary S20)" should include citations to the multiple papers being referenced. This passage would benefit from a brief summary of the logic presented in S20 regarding the various means of quantifying Defensin expression.
In S22 Results, the statement "For general characterization of the IrcMB11278 mutant, including developmental and motor defects and survival to septic injury, see additional information on the ReproSci website" is not acceptable. All necessary information associated with the paper needs to be included in the Supplement. There cannot be supporting data relegated to an independent website with no guaranteed stability or version control. The same comment applies to "Our results show that eiger flies do not have reduced feeding compared to appropriate controls (See ReproSci website)" in S25.
Supplement S21 appears to show a difference between the wildtype and hemese mutants in parasitoid encapsulation, which would support the original finding. However, the validation experiment is performed at a small sample size and is not replicated, so there can be no statistical analysis. There is no reported quantification of lamellocytes or total hemocytes. The validation experiment does not support the conclusion that the original study should be refuted. The S21 evaluation of hemese must either be performed rigorously or removed from the Supplement and the main text.
In S22, the second sentence of the passage "Due to the fact that IrcMB11278 flies always survived at least 24h prior to death after becoming stuck to the substrate by their wings, we do not attribute the increased mortality in Ecc15-fed IrcMB11278 flies primarily to pathogen ingestion, but rather to locomotor defects. The difference in survival between sucrose-fed and Ecc15-fed IrcMB11278 flies may be explained by the increased viscosity of the Ecc15-containing substrate compared to the sucrose-containing substrate" is quite strange. The first sentence is plausible and a reasonable interpretation of the observations. But to then conclude that the difference between the bacterial treatment versus the control is more plausibly due to substrate viscosity than direct action of the bacteria on the fly is surprising. If the authors wish to put forward that interpretation, they need to test substrate viscosity and demonstrate that fly mortality correlates with viscosity. Otherwise, they must conclude that the validation experiment is consistent with the original study.
In S27, the visualization of eiger expression using a GFP reporter is very non-standard as a quantitative assay. The correct assay is qPCR, as is performed in other validation experiments, and which can easily be done on dissected fat body for a tissue-specific analysis. S27 Figure 1 should be replaced with a proper experiment and quantitative analysis. In S27 Figure 2, the authors should add a panel showing that eiger is successfully knocked down with each driver>construct combination. This is important because the data being reported show no effect of knockdown; it is therefore imperative to show that the knockdown is actually occurring. The same comment applies everywhere there is an RNAi to demonstrate a lack of effect.
The Drosomycin expression data in S3 Figure 2A look extremely noisy and are presented without error bars or statistical analysis. The S4 claim that sphinx and spheroid are not regulators of the Toll pathway because quantitative expression levels of these genes do not correlate with Toll target expression levels is an extremely weak inference. The RNAi did not work in S4, so no conclusion should be inferred from those experiments. Although the original claims in dispute may be errors in both cases, the validation data used to refute the original claims must be rigorous and of an acceptable scientific standard.
In S6 Figure 1, it is inappropriate to plot n=2 data points as a histogram with mean and standard errors. If there are fewer than four independent points, all points should be plotted as a dot plot. This comment applies to many qPCR figures throughout the supplement. In S7 Figure 1, "one representative experiment" out of two performed is shown. This strongly suggests that the two replicates are noisy, and a cynical reader might suspect that the authors are trying to hide the variance. This also applies to S5 Fig 3. Particularly in the context of a validation study, it is imperative to present all data clearly and objectively, especially when these are the specific data that are being used to refute the claim.
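The recommendation above (plot all individual points when there are fewer than four replicates, rather than bars with error bars) can be illustrated with a minimal matplotlib sketch; the genotype labels and fold-change values here are invented for illustration, not taken from the supplement:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical qPCR fold-change values with n=2 per genotype:
# too few replicates for a bar with mean +/- SEM to be meaningful.
data = {"wild type": [1.0, 1.3], "mutant": [0.4, 0.6]}

fig, ax = plt.subplots()
for i, (label, values) in enumerate(data.items()):
    # Show every replicate as its own dot instead of a summary bar.
    ax.scatter([i] * len(values), values, color="black", zorder=3)
ax.set_xticks(range(len(data)))
ax.set_xticklabels(data.keys())
ax.set_ylabel("Relative expression (fold change)")
fig.savefig("qpcr_dotplot.png")
```

With every point visible, a reader can judge the spread directly, which is exactly what a mean-and-SEM bar conceals at n=2.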
Other comments:
In S26, the authors suggest that much of the observed melanization arises from excessive tissue damage associated with abdominal injection contrasted to the lesser damage associated with thoracic injection. I believe there may be a methodological difference here. The Methods of S27 are not entirely clear, but it appears that the validation experiment was done with a pinprick, whereas the original Mabary and Schneider study was done with injection via a pulled capillary. My lab group (and I personally) have extensive experience with both techniques. In our hands, pinpricks to the abdomen do indeed cause substantial injury, and the physically less pliable thorax is more robust to pinpricks. However, capillary injections to the abdomen do virtually no tissue damage - very probably less than thoracic injections - and result in substantially higher survivals of infection even than thoracic injections. Thus, the present manuscript may infer substantial tissue damage in the original study because they are employing a different technique.
-
-
www.callingbullshit.org
-
If this strikes you as frighteningly close to Philip K. Dick’s notion of pre-crime, the film Minority Report, and other dystopian science fiction, you’re not alone.
YES!
-
-
www.biorxiv.org
-
eLife Assessment
This important study builds on previous work from the same authors to present a conceptually distinct workflow for cryo-EM reconstruction that uses 2D template matching to enable high-resolution structure determination of small (sub-50 kDa) protein targets. The paper describes how density for small-molecule ligands bound to such targets can be reconstructed without these ligands being present in the template. However, the evidence described for the claim that this technique "significantly" improves the alignment of the reconstruction of small complexes is incomplete. The authors could better evaluate the effects of model bias on the reconstructed densities.
-
Reviewer #1 (Public review):
Summary:
This paper describes an application of the high-resolution cryo-EM 2D template matching technique to sub-50kDa complexes. The paper describes how density for ligands can be reconstructed without having to process cryo-EM data through the conventional single particle analysis pipelines.
Strengths:
This paper contributes additional data (alongside other papers by the same authors) to convey the message that high-resolution 2D template matching is a powerful alternative for cryo-EM structure determination. The described application to ligand density reconstruction, without the need for extensive refinements, will be of interest to the pharmaceutical industry, where often multiple structures of the same protein in complex with different ligands are solved as part of their drug development pipelines. Improved insights into which particles contribute to the best ligand density are also highly valuable and transferable to other applications of the same technique.
Weaknesses:
Although the convenient visualisation of small molecules bound to protein targets of a known structure would be relevant for the pharmaceutical industry, the evidence described for the claim that this technique "significantly" improves alignment of reconstruction of small complexes is incomplete. The authors are encouraged to better evaluate the effects of model bias on the reconstructed densities in a revised paper.
-
Reviewer #2 (Public review):
In this manuscript, Zhang et al describe a method for cryo-EM reconstruction of small (sub-50kDa) complexes using 2D template matching. This presents an alternative, complementary path for high-resolution structure determination when there is a prior atomic model for alignment. Importantly, regions of the atomic model can be deleted to avoid bias in reconstructing the structure of these regions, serving as an important mechanism of validation.
The manuscript focuses its analysis on a recently published dataset of the 40kDa kinase complex deposited to EMPIAR. The original processing workflow produced a medium resolution structure of the kinase (GSFSC ~4.3A, though features of the map indicate ~6-7A resolution); at this resolution, the binding pocket and ligand were not resolved in the original published map. With 2DTM, the authors produce a much higher resolution structure, showing clear density for the ATP binding pocket and the bound ATP molecule. With careful curation of the particle images using statistically derived 2DTM p-values, a high-resolution 2DTM structure was reconstructed from just 8k particles (2.6A non-gold standard FSC; ligand Q-score of 0.6), in contrast to the 74k particles from the original publication. This aligns with recent trends that fewer, higher-quality particles can produce a higher-quality structure. The authors perform a detailed analysis of some of the design choices of the method (e.g., p-value cutoff for particle filtering; how large a region of the template to delete).
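The particle-curation step described above — keeping only detections whose 2DTM p-values pass a significance cutoff — can be sketched as follows. This is a hypothetical illustration with randomly generated values, not the authors' implementation:

```python
import numpy as np

# Hypothetical 2DTM detections: one p-value per candidate particle,
# derived from the template-matching cross-correlation peak height
# (values here are randomly generated for illustration).
rng = np.random.default_rng(0)
p_values = rng.uniform(0.0, 1.0, size=1000)

# Curate particles: keep only detections below a significance cutoff.
# A stricter cutoff trades particle count for detection confidence.
cutoff = 0.01
kept = p_values[p_values < cutoff]
print(f"kept {kept.size} of {p_values.size} candidate particles")
```

Sweeping the cutoff and reconstructing at each value is one way to locate the count-versus-purity trade-off the review describes.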
Overall, the workflow is a conceptually elegant alternative to the traditional bottom-up reconstruction pipeline. The authors demonstrate that the p-values from 2DTM correlations provide a principled way to filter/curate which particle images to extract, and the results are impressive. There are only a few minor recommendations that I could make for improvement.
-
Reviewer #3 (Public review):
Summary:
Due to the low SNR of cryo-EM micrographs necessitated by radiation damage, determining the structure of proteins smaller than 50 kDa is exceedingly challenging, such that only a handful have been solved to date. This work aims to improve the reconstruction of small proteins in single-particle cryo-EM by using high-resolution 2D template matching, an algorithm previously used to locate and align macromolecules in situ, to align and reconstruct small proteins. This approach uses an existing macromolecular structure, either experimentally determined or predicted by AlphaFold, to simulate a noise-free 3D reference and generates whitened projections, crucially including high-spatial-frequency information, to align particles by the orientation with maximal cross-correlation. They demonstrate the success of this approach by generating a 3D reconstruction from an existing dataset of a 41.3 kDa protein kinase that had previously evaded attempts at high-resolution structure determination. To alleviate concerns that this is purely from template bias, they demonstrate clear density at two regions that were not present in the template: 6 residues in an alpha helix and an ATP in the ligand binding pocket. The latter is particularly important for its implications in determining structures of ligand-bound proteins for drug discovery. Additionally, the authors provide an update to the classic calculation in Henderson 1995 to predict the minimum molecular mass of a protein that can be solved by single-particle cryo-EM.
Strengths:
I am in no doubt that this technique can be used to gain valuable insights into the structures of small proteins, and this is an important advancement for the field. The ability to determine the structure of ligands in a binding site is particularly important, and this paper provides a method of doing that which outperforms traditional single-particle cryo-EM processing workflows.
The claim that using high-spatial frequency information is essential for aligning small proteins is a valuable insight. A recent pre-print published at a similar time to this manuscript used high-resolution information in standard ab-initio reconstruction to generate a high-resolution reconstruction from the same dataset, supporting the claims made in the manuscript.
The theoretical section outlined in the appendix is also theoretically sound. It uses the same logic as Henderson, but applies more up-to-date knowledge, such as incorporating dose-weighting and altering the cross-correlation-based noise estimation. This update is valuable for understanding factors preventing us from reaching the theoretical limit.
Weaknesses:
Given that this technique creates template bias, only parts of the reconstruction not in the template can be trusted, unlike standard single-particle processing, where the independent half-maps from separate, ab initio templates are used to generate a 3D reconstruction. Although, in principle, one could perform the search many times such that every residue has been omitted in at least one search, this will be extremely computationally intensive and was not demonstrated in this manuscript. It is therefore currently only realistically applicable when only a small portion of the sub-50 kDa protein is of interest.
The applicability of this technique to more than a single target was also not demonstrated, and there are concerns that it may not work effectively in many cases. The authors note in the results that "the ATP density was consistently recovered more robustly than nearby residues" and speculate that this may be because misalignments disproportionately blur peripheral residues. Since the region of interest in a structure is not necessarily in the center, this may need further investigation. The implications of this statement may also be unclear to the reader. For example, can this issue be minimized by having the region of interest centered in the simulated volume?
In Figure 3, the authors demonstrate that it is not solely improved particle filtering and a noise-free reference that improves alignment, but that the high spatial frequency information is important. This information is very valuable since it can be applied to other, more standard methods. However, this key figure is not as clear or convincing as it could be. The FSC curves are possibly misleading, since the reduced resolution could be explained by reduced template bias when auto-refining with a map initially low-pass filtered to 10 Å. Moreover, although the helix reconstruction does look slightly better using the 2DTM angles, the improvement in density for ATP in the binding pocket is not clear. A qualitative argument only clear in one out of two cases is not as convincing as a quantitative metric across more examples.
-
-
nmoer.pressbooks.pub
-
It is important to ask for help no matter what level of learning you are starting from, whether as a freshman or a returning student. The world changes so much, and you never know where the next big idea may come from.
-
Do not ask your Instructor if you missed anything important when you were absent. Instructors work diligently to design their coursework, so asking if any of that content was important can be considered rude or dismissive of their hard work. Instead, ask if you missed anything that was not included on the course schedule.
When absent, students should avoid asking instructors if anything important was missed and should instead politely ask whether there was anything not listed on the course schedule that they should review.
-
It is not enough to understand course material and summarize it on an exam. You will also be expected to seriously engage with new ideas by reflecting on them, analyzing them, critiquing them, making connections, drawing conclusions, or finding new ways of thinking about a given subject.
Sets the tone of this course. Lets me know what is expected of me.
-
-
www.whatsbetter.today
-
p.s. Want the visual map? You'll find a full FieldNote Sketch Summary of this inside the 'hidden' layer. Click this highlight to see the synthesis, share it with someone you know needs it and save a copy for yourself.
SktchNote of THE TRANSFORMER PROTOCOL
Download your copy for your notes or send this to a friend you know needs it. -
Pause: Ground the Circuit. Stop pacing. Stand still. Press both feet firmly into the floor.
🧠 NEURO-HACK: Somatic Grounding. You cannot pray effectively if your Amygdala is hijacking your prefrontal cortex. Physical grounding (feeling your feet) tells your nervous system: "We are safe. We are here." This re-engages the logic centres of the brain, allowing the spiritual signal to come through clearly.
-
Prove to yourself that the power is in the house.
🎯 MISSION OBJECTIVE: Confidence Protocol. The only way to break the addiction to external validation is to survive a crisis without it. The next time the storm hits, do not reach for the phone. Reach for the Source. Once you survive one night on your own power supply, you will never go back to being a Parasite.
-
Stop trying to plug into the Man. Plug into the Method.
🛠 ARMORY UPGRADE: The A.R.G. (Apprentice’s Reliable Guide). When we say "Method," we don't mean a ritual. We mean a Protocol. The A.R.G. is designed to force a "Pattern Interrupt." It stops the panic loop and forces your brain to engage with the Truth before you engage with the problem.
-
Jesus said, "It is better for you that I go away" (John 16:7). He initiated the ultimate decentralization of power.
⚡ HISTORICAL DATA: Decentralized Command. The Disciples wanted Jesus beside them (External). Jesus wanted the Spirit in them (Internal). If the Commander is only in one physical body, He is limited by geography. If the Spirit is in every Agent, the Resistance is everywhere at once. You are a mobile Forward Operating Base.
-
The "Guru Economy" is built on a structural lie: I have the power; you pay the subscription.
🚫 PSYOP DETECTED: The Subscription Model of Faith. If a leader makes you dependent on their voice to hear God, they aren't a Shepherd; they are a Middleman. Kingdom Leadership: The goal is obsolescence. A true spiritual father trains you to read the map yourself so you can eventually lead the patrol.
-
The Heir houses the power via a Local Transformer.
⚙️ TECHNICAL SPECS: Why a Transformer? High Voltage (God's infinite power/holiness) fed directly into a low-voltage appliance (Human nervous system) causes a burnout. The Holy Spirit acts as the Transformer. He "steps down" the infinite power of God into a usable current called "Wisdom" or "Peace" that you can handle without blowing a fuse.
-
You are trapped in an Oscillation Triad,
🗺️ TACTICAL MAP: THE IDENTITY TRIAD
The Parasite: "I need you to survive." (Codependency).
The Orphan: "I don't need anyone." (Isolation/Pride).
The Heir: "I have access to everything that is my Father's." (Interdependence). Mission: Most Agents bounce between 1 and 2. The Empire pushes you to be a Parasite (Consumer) or an Orphan (Hustler). Only the Kingdom invites you to be an Heir.
-
He is a lamp sitting in the dark, clutching an extension cord, waiting for someone else to find the outlet.
⚠️ DIAGNOSTIC: EXTERNAL LOCUS OF CONTROL This is the definition of "Powerlessness." Alex has the hardware (he knows the verses/theology), but he lacks the connection. He believes the "Outlet" is his mentor. The Truth: The Outlet is inside the believer (The Indwelling Spirit). Alex is sitting in a room with the lights off, holding a plug, forgetting he is hardwired to the generator.
-
-
www.americanyawp.com
-
My mother, she made me set my hand to a book
She is a child being asked by the judge about "how she became a witch".
-
-
www.americanyawp.com
-
, where our men are driven to fling their Bibles and prayer Books into the sea,
This shows me how religion was used as a justification for colonization. The excuse was "to spread the gospel"; however, it was really to take the land away from its people.
-
-
www.americanyawp.com
-
The natives are capable of Morality or Goodness and very apt to receive the principles of Catholic Religion;
This shows me that they knew the natives were peaceful people who were very welcoming to the Europeans; however, the Europeans could only view the native people as "alike" if they were of the same belief.
-
-
www.jstor.org
-
eding decades. Americans married more frequently, formed families at a younger age, and had more children than they had in the 1920s and 1930s (Hartman
baby boom
-
-
www.americanyawp.com
-
I believe that they would easily be made Christians, as it appeared to me that they had no religion
He viewed the Native people dismissively: although he said that religion should not be imposed by force, he dismissed their culture and beliefs and wanted to make them Christians "easily".
-
-
www.americanyawp.com
-
I thought I was alone!
I wonder what the human thought about being alone, aside from the Bald Eagle's view. The Eagle says "He had no life": what meaning of life was the human supposed to have, according to the Bald Eagle? Did it think human life should look different than animal life?
-
-
statprogworkshop.onrender.com statprogworkshop.onrender.com
-
lack of ability to use R
“However, the challenge of promote R into our daily programming work lies in the fact that programmer team lack of ability to use R.” Rather than saying our team lacks the ability to do this, we should frame the goal of moving to R Shiny as "producing output more efficiently and conveniently, meeting the Biostatistics department's urgent needs."
-
-
confluence-mx.sec.samsung.net confluence-mx.sec.samsung.net
-
journals.plos.org journals.plos.org
-
Significant differences between treatments were tested by the Tukey-Kramer (HSD) test.
in case you're interested in the statistics used for Figure 7 above.
-
Regimen of honey bee feeding with Varroa dsRNA The experiment with Varroa dsRNA was conducted in mini-hives, 12 mini-hives per replicate, and was repeated three times. In each replicate, a cup of bees and a laying queen were placed in each mini-hive. Three mini-hives were randomly assigned to one of four netted enclosures, each representing a different feeding treatment. Bees were fed 5 ml of 50% sucrose solution in troughs placed in each mini-hive. The four treatments were: 1) sucrose solution only (untreated control), 2) Mixture I (200 µg each of five dsRNAs added to the sugar solution), 3) Mixture II (200 µg each of 14 dsRNAs added to the sugar solution), and 4) dsRNA-GFP (200 µg dsRNA) serving as an inert dsRNA control. Mini-hives that fully consumed the treatment solutions were supplemented with candy (67% sugar powder and 33% honey). In addition, the bees were routinely fed pollen patties (70% pollen and 30% sugar powder). Each replicate of the experiment lasted for 60 days (Figure 4). Bees in each treatment were fed the respective solution daily for the first 10 days and for the last 14 days, and twice a week in the interim. Varroa mites were introduced into each mini-hive from day 7 till day 14. In the first replicate, 30 mites were introduced into each mini-hive; in the latter two replicates, 100 mites were introduced into each mini-hive. On day 60, all mature bees were collected, counted and shaken with 70% ethanol overnight in order to collect and count Varroa mites that fell off the bees. All capped brood cells were opened to collect and count Varroa mites. We calculated mites per bee (mature and developing). Varroa mites, adult bees, emerging bees and pupae were stored for molecular analyses.
This section is required reading for the above interpretation of the graph. Additionally, use this section to unpack some methodological limitations with this study. We will discuss further in class.
-
Varroa infestation was reduced in mini-hives treated with Varroa dsRNA compared to the controls (F3,29 = 5.65, P = 0.0035; Figure 7). The effect was greater with Mixture II, which targeted more genes than Mixture I, reducing Varroa populations by an average 53% compared to the dsRNA-GFP control, and by 61% compared to the untreated control.
This is the figure we will discuss properly in class. In preparation, answer these questions: * Validate the use of dsGFP as an additional treatment group. * Distinguish Mixtures I and II (see further below in Methods section). * Complete the sentence: The number of varroa mites per bee provided with Mixture II was ______ fewer than the number of mites per bee in the untreated control.
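The reported F3,29 = 5.65 comes from a one-way ANOVA across the four feeding treatments. As a minimal sketch of how such an F statistic is computed (the mites-per-bee numbers below are invented for illustration, not the study's data):

```python
# Hypothetical mites-per-bee values for the four treatment groups
# (illustrative only; not the paper's actual measurements).
groups = {
    "untreated":  [0.80, 0.90, 1.00, 0.85],
    "mixture_I":  [0.60, 0.55, 0.70, 0.65],
    "mixture_II": [0.30, 0.35, 0.40, 0.32],
    "dsRNA_GFP":  [0.75, 0.80, 0.90, 0.78],
}

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic across the given groups."""
    all_vals = [v for g in groups.values() for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
    )
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values()
    )
    df_between = len(groups) - 1            # here: 3, matching the F3,29 numerator df
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(groups)
```

A large F means the between-treatment differences dwarf the within-treatment noise; the Tukey-Kramer test mentioned in the Methods then identifies which specific pairs of treatments differ.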
-
Bidirectional horizontal transfer
After reading this section (two paragraphs), answer these questions: * What is the significance of successful bidirectional horizontal transfer of dsRNA between bees? * What is the limitation with the application of dsRNA in commercial hives based on the information provided here?
-
To test for direct horizontal transfer
We already described horizontal transfer of genetic information in class. Answer these questions: * What is GFP and how is it used? (Google search) * What is the purpose of including a dsRNA-GFP treatment?
-
However
From this paragraph, you should be able to: * briefly summarise the impact of varroa mite on honeybee health * identify the current practice of varroa mite control (at the time of the study) * identify the proposed strategy to combat varroa mite used in this study
-
Varroa destructor
Meet the varroa mite!

-
-
islamic-study.org islamic-study.org
-
higher learning institutions teach that nothing happened over a thousand years is not just beyond comprehension, but against academic rules of rigorous questioning. Students, who are trained to think critically, suddenly face a sudden darkness of ten centuries, and then are told things appeared, as if by a miracle, all at once in the Renaissance
I think this is mainly due to the fact that a lot of history that is taught is Eurocentric in nature, thus ignoring achievements from other parts of the world. It is during the Renaissance that Europe is once again in the spotlight
-
It is often repeated that Islamic science declined precipitously after 1200 CE (usually attributed to religious reactionaries such as al-Ghazālī [d. 1111 CE]), that the Ancient Sciences were not taught in the schools (madrasas), and that the influence of Islamic science on Europe ceased after the magical date of 1200.
I'm wondering who exactly is repeating this. Is it Europeans refusing to believe that other cultures were capable of intellectual thought?
-
-
celticonline.brightspace.com celticonline.brightspace.com
-
educators should present information about ways nursing students and newly licensed nurses can positively cope with stress, psychological trauma, and exhaustion to avoid maladaptive coping through substances. While there are signs that a nurse may be impaired, those signs and symptoms may not be apparent. Our data indicate that when discovery of SU results in a nurse's unexpected removal from patient care, the entire unit's staff should be debriefed in a session that follows institutional policies and protects the suspected-impaired nurse's confidential information. The interplay and tensions between human dynamics of addiction, legal implications of licensure, safe patient care, and peer relationships need to be recognized.
This section of the text answers the question, "How does the study outcome affect future research or clinical practice?" The information provided here points to meaningful workplace interventions that offer appropriate resources and coping mechanisms; early recognition and help improve outcomes.
-
Specifically, nurses may fear SU disclosure could jeopardize their license and employment status, and therefore, their ability to earn a living.
For the question "Did the authors provide justification for the research study?", the authors justify the need for this study by explaining that substance use among nurses is difficult to measure due to stigma and fear of professional consequences. They emphasize that nurses may be hesitant to disclose substance use because it could jeopardize their license or employment. This justification supports the need for anonymous data collection and a qualitative approach to better understand nurses' experiences.
-
Trustworthiness was supported in each step of the data analysis process. First, in the preparation phase, we crafted our open-ended question in such a manner to allow the respondent to offer a wide variety of descriptions. We purposefully did not limit it to their personal experiences with SU and opened the experiences to more broadly encompass a lived experience surrounding substances. Stratified random sampling in the parent study allowed us to represent the larger population of RNs in the state. Second, in the organization phase, the three coders used printed copies of the documents to allow for conceptual tools, such as color coding of the responses. Multiple discussions of the temporal patterns, number of categories and concepts, and the order of the themes transpired. In the reporting phase, several iterations of final themes, supported by narratives, were reviewed.
"Is the study qualitative or quantitative?" (cont.) This section highlights how the qualitative data was analyzed. As described, this data was collected more openly, allowing individuals to broadly describe their overall experience rather than a more limited quantitative one.
-
Quantitative Analysis: As the objective of the current study was to explore nurses' responses related to SU, we present descriptive statistics in Table 1 and comparison statistics (α = .05) of those nurses who offered comments and those who declined. The p value was obtained using chi-square test or Fisher's exact test for categorical variables and two-sample t test for continuous variables. Of note is that the ASSIST (WHO ASSIST Working Group, 2002) survey responses indicate a very low rate of SU (self-report) and no statistical differences were found between contributors/noncontributors in SU (tobacco, alcohol, and other substances). However, as outlined in the results section, the qualitative analysis reveals themes that describe this phenomenon in nursing.
"Is the study qualitative or quantitative?" This section describes how the data was analyzed both quantitatively and qualitatively. The quantitative analysis uses statistical analysis of the data to inspect the self-report rate and look for statistical significance.
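The chi-square test mentioned here compares a categorical characteristic (e.g., reporting SU or not) between contributors and noncontributors. A minimal sketch of the Pearson chi-square statistic for a hypothetical 2×2 table (the counts are invented, loosely patterned on the study's sample sizes, not taken from Table 1):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, observed in enumerate(obs_row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = commented / declined, cols = reported SU / no SU
table = [[24, 349], [70, 1035]]
stat = chi_square_2x2(table)
```

The statistic is then compared against the chi-square distribution with 1 degree of freedom at α = .05; Fisher's exact test, which the authors also used, is preferred when expected cell counts are small.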
-
Although the results are inconclusive, select data appear to indicate that the use of alcohol to cope with professional stress may be an increasing problem in nursing. When nurses use substances, there are professional practice and life outcomes that result. We divided these into negative and positive outcomes. Significant negative outcomes from the use of substances included the nurse electing to leave the organization prior to termination with no referral for assistance. This often occurred when the nurse was confronted with suspected use and requested to submit to a urine drug screen. Other nurses were terminated after the SU was discovered; this could include impairment while on duty or diversion activities. There were select nurses who had enrolled in ISNAP. However, some nurses were unsuccessful with monitoring agreements and began to use substances again. Other nurses were arrested, lost custody of children, divorced, and were incarcerated.
This section of the text responds to the question, "What are the results of the research study?" It explains that the overall results of the study are inconclusive based on the vast differences in the responses. Nurses gave testimonies of their own substance abuse history, histories of SU in their families, the struggles of the nursing profession, stories of sobriety and denying use, and accounts of witnessing others use. Nurses faced many different outcomes from their choices, and it was also telling that some nurses used and others never found out. Earlier, the article discussed how some of the results could have been skewed: some people may have lied, or may have tried to respond multiple times to receive the gift card. The results could be inconclusive due to the varying responses and the chance of inaccuracy in the responses given.
-
Author Roles: KJF and LZ conceived the study; KJF determined the methodology. KJF, LZ, and BR collected the data, and KJF, BR, and KK analyzed the data. KJF took the lead in writing and organizing the manuscript. LZ took the lead in preparing the comparator table with assistance from BR. All four authors reviewed the final manuscript before submitting for publication. Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was funded by the National Council of State Boards of Nursing, Center for Regulatory Excellence (Project: R101011).
This section answers the question "Are the authors qualified to be conducting this research?" They appear well qualified to conduct this research, as evidenced by their academic and clinical expertise as well as a lack of conflicting interests. Their expertise is especially relevant given that this is a qualitative study requiring careful interpretation of nurses' experiences.
-
In states across the country, the board of nursing or other administrative agency is charged with ensuring the nursing practice act is upheld by those possessing a registered nurse (RN) license (Russell, 2017). The board of nursing acts to enforce standards to ensure nurses are competent, provide care within the scope of practice, and are free from impairment due to substance use (SU), and thereby protect patients who receive nursing care. The rate of nurses' SU is normative with the general population estimates of SU; that is, between 6% and 8% (Kunyk, 2015; Trinkoff, Eaton, & Anthony, 1991; Trinkoff & Storr, 1998; Trinkoff, Zhou, & Storr, 1999). When tracking such estimates, there are additional complexities to nurses who report SU, even when anonymity is assured, that may influence full disclosure. Specifically, nurses may fear SU disclosure could jeopardize their license and employment status, and therefore, their ability to earn a living. These hesitations are well-founded historically due to the "throw away" culture that permeated nurses when confronted with SU (Curtin, 1987). Today, due to the National Council of State Boards of Nursing (NCSBN) and industry partners, the alternative-to-discipline programs, which include nurse-accountability, the nurse may be given the option of a monitoring program versus employment termination (NCSBN, 2011).

Others argue, however, that SU in nurses continues to be surrounded by a discourse of "personal choice, as a failure of moral character" rather than a disease (Kunyk, Milner, & Overend, 2016, p. 315). When confronted with suspected SU and/or diversion, nurses will often be asked to submit a urine sample for SU screening. Often, policies in health care organizations require termination of employees who refuse to submit to such screenings. When a nurse is punitively discharged for refusing to

Substance Use in Registered Nurses: "I Heard About a Nurse Who . . ." Karen J. Foli, PhD, RN, FAAN; Blake Reddick, RN, BSN; Lingsong Zhang, PhD; and Kathryn Krcelich, RN, BSN (all Purdue University, West Lafayette, IN, USA). Journal of the American Psychiatric Nurses Association, 2019 (DOI: 10.1177/1078390319886369). Corresponding Author: Karen J. Foli, Purdue University School of Nursing, Johnson Hall of Nursing, Room 234, West Lafayette, IN 47907, USA. Email: kfoli@purdue.edu

Abstract. BACKGROUND: Estimates of substance use (SU) in nurses is on par with that of the general population: between 6% and 8%. However, collecting sensitive information such as SU is difficult based on social desirability and fears of disclosure. AIMS: Part of a larger study surrounding nurses' self-reports of SU (n = 1,478), the purpose was to explore open-ended responses of nurses (n = 373) who were invited to "Please add any additional comments related to substance or alcohol use that you have experienced or witnessed in registered nurses." METHOD: This qualitative study employed a content analysis of 373 nurses' open-ended responses collected via an online survey. RESULTS: The majority of nurses (n = 250) forwarded comments that described SU in other nurses, while 24 comments reflected the nurse's past or current SU. Content analysis revealed the following four themes: (1) differing social network proximity to SU; (2) individual process: vulnerability to adaptive/maladaptive coping resulting in positive and negative outcomes; (3) bedside, system, and organizational spaces and effects; and (4) there are no SU issues in nursing. CONCLUSIONS: Although direct reports of SU constitute approximately one quarter of the comments forwarded, nurses reported peers' struggles with SU, including observing nurses working in patient care while impaired and the use of substances to cope with work and personal stressors. Individual factors and system-related failures appear to be contributors to SU in nurses. Keywords: nurses, substance use, qualitative
For my question "What was the author's research question?", the author's research question focused on understanding how registered nurses experience and perceive substance use within the nursing profession, particularly through their descriptions of substance use they have personally experienced or witnessed in other nurses, and what individual, organizational, and system-level factors shape these experiences.
-
-
bio.libretexts.org bio.libretexts.org
-
___________________________________________________________
Towards the head
-
-
bookshelf.vitalsource.com bookshelf.vitalsource.com
-
Sociologists study health, healing, and illness because they are a central part of the human experience
People's health is a "central part of human experience"; it is a great topic for sociologists to study because it tells a lot about humans.
-
early mitigation strategies
Strategies included limiting social interaction, inequality in the health system such as social locations, and different beliefs that people had about COVID-19.
-
incomes under US$25,000, to live in rural areas, and to have just a high school education or less (Kricorian, Civen, and Equils 2022).
This part of the paragraph recognizes that people with vaccine hesitancy tend to be part of a lower income class or have less education than those without vaccine hesitancy. I think this shows how people from different walks of life have different opinions.
-
xcess mortality
I have never heard of this term before, and I never understood how death counts were collected before. It really puts the numbers into perspective.
-
death toll of over 6 million, while the US Centers for Disease Control and Prevention reported US deaths exceeding 1 million.
I found it really important to see in numbers how COVID affected our society. I was curious about how two different organizations got such different numbers; it seems the 6 million figure is the global death toll, while the CDC's figure counts US deaths only. But seeing six million versus one million really helps a reader understand how much it affected us.
-
-
dbmi7byi6naiyr.archive.is dbmi7byi6naiyr.archive.is
-
At one point, Pérez told me the name Jupyter honored Galileo, perhaps the first modern scientist. The Jupyter logo is an abstracted version of Galileo’s original drawings of the moons of Jupiter. “Galileo couldn’t go anywhere to buy a telescope,” Pérez said. “He had to build his own.”
-
When you improve the praxis of science, the dream is that you’ll improve its products, too. Leibniz’s notation, by making it easier to do calculus, expanded the space of what it was possible to think. The grand scientific challenges of our day are as often as not computational puzzles
-
Pérez told me stories of scientists who sacrificed their academic careers to build software, because building software counted for so little in their field: The creator of matplotlib, probably the most widely used tool for generating plots in scientific papers, was a postdoc in neuroscience but had to leave academia for industry. The same thing happened to the creator of NumPy, a now-ubiquitous tool for numerical computing. Pérez himself said, “I did get straight-out blunt comments from many, many colleagues, and from senior people and mentors who said: Stop doing this, you’re wasting your career, you’re wasting your talent.” Unabashedly, he said, they’d tell him to “go back to physics and mathematics and writing papers.”
I have also experienced the undervaluation associated with publishing and maintaining free software in and from community contexts, in contrast with publishing in classical academic circuits. And although local universities are thinking about this in order to make innovation visible, they do so very slowly, as usual, while incentives remain aligned with conventional metrics.
-
As science becomes more about computation, the skills required to be a good scientist become increasingly attractive in industry. Universities lose their best people to start-ups, to Google and Microsoft. “I have seen many talented colleagues leave academia in frustration over the last decade,” he wrote, “and I can’t think of a single one who wasn’t happier years later.”
I have heard that siren call in the past, and I was even offered work with one of the big pharma companies and with an insurer, basically because of my programming knowledge (Pharo Smalltalk, specifically). However, I "relapsed into academia, after being an academic in rehabilitation," as I like to say, and now I have returned full time. I think one alternative between the two paths is to be an academic/consultant, producing commons that one brings into academia from the sector of micro, small, and medium enterprises doing local innovation, in what one would hope becomes a virtuous cycle.
The above is what I have been attempting with mutabiT in a sustained way for a couple of decades, thanks to economies of affection (including my mom, my sister, Adriana, and other friends). And while this has allowed long-term research from the latitudes of the Global Majority without major monetary losses, it has not been a lucrative effort either. I think that, if I could not continue producing commons that link the academic and productive worlds at that sustainable and autonomous scale in local contexts, I would have to choose between leaving one of the two, as these academics from other latitudes are said to have done.
-
maybe computational notebooks will only take root if they’re backed by a single super-language, or by a company with deep pockets and a vested interest in making them work. But it seems just as likely that the opposite is true. A federated effort, while more chaotic, might also be more robust—and the only way to win the trust of the scientific community.
-
Stephen Wolfram who titled a book about his own work on cellular automata A New Kind of Science. In his blog post about computational essays, he writes, “At the core of computational essays is the idea of expressing computational thoughts using the Wolfram Language.”
This idea of tying his products to his discourse makes all of Stephen Wolfram's talks look like marketing talks for his products rather than for his ideas, with the consequent need to present his products as the only worthwhile alternatives for exploring ideas that occur in many places and in many forms.
-
The Mathematica notebook is the more coherently designed, more polished product—in large part because every decision that went into building it emanated from the mind of a single, opinionated genius. “I see these Jupyter guys,” Wolfram said to me, “they are about on a par with what we had in the early 1990s.” They’ve taken shortcuts, he said. “We actually want to try and do it right.”
I have not used Mathematica since the mid/late nineties, and even then it was a great system, highly integrated and coherent. However, as I moved toward free software, I soon began looking for alternatives and started with TeXmacs, for which I translated most of the documentation into Spanish as one of my first contributions to a free software project (I believe that translation is still the one in use; back then we used SVN to coordinate changes and even sent compressed files, since version control was not very popular). Another example was the beautiful, minimalist Yacas, with which I did many of my undergraduate assignments, and later set some exercises and graded exams once I became a professor in the Mathematics department.
TeXmacs, unlike monolithic systems such as Mathematica, already connected back then with a wide variety of Computer Algebra Systems (CAS), exposing us to a diversity of CAS approaches and paradigms, with their particular syntaxes and idiosyncrasies, in a richness that Mathematica will never have.
TeXmacs also exposed me to powerful ideas, such as being able to easily modify the software through small scripts (in Scheme), which made it the first free software I modified, and the powerful S-expressions that could define a document and its interaction with external CAS, although TeXmacs offered its own more readable language and allowed moving between it and Scheme in both directions.
In general, that is the difference between proprietary and free systems: a monoculture versus a polyculture, with the conveniences of the former's unifying approach versus the diversity of the latter. If we look at what has happened with Python and open computational notebooks like Marimo and Jupyter, they have won in the popular consciousness relative to Mathematica and have progressively incorporated functionality Mathematica already had, while other functionality is still present only in proprietary systems and not in free ones, and vice versa. I would not say free computational notebooks are where Mathematica was in the '90s; rather, they have followed different historical routes, each with its own values and riches.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
3.2.5. Fake Bots# We also would like to point out that there are fake bots as well, that is real people pretending their work is the result of a Bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character:
I hadn't fully realized there were so many “unofficial” bots out there—like those that bypass API restrictions by simulating human clicks. This feels riskier than simply registering bots. Add to that fake bots (real people pretending to be bots), and it becomes harder to verify information sources, further eroding trust on the platform.
-
3.2.3. Corrupted bots# As a final example, we wanted to tell you about Microsoft Tay a bot that got corrupted. In 2016, Microsft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong from Vice How to Make a Bot That Isn’t Racist
I think the Tay example perfectly illustrates that “learning bots” are not neutral—what they learn depends entirely on their environment. If the platform itself is saturated with malicious content and the bot lacks sufficient filtering or constraints, it can quickly become corrupted and even amplify existing problems.
-
-
tw-preview.dev.amust.local tw-preview.dev.amust.local
-
You can add new storage vaults later, for example, if you need to increase storage size or want to store data in another region.
I don't think increasing storage size is a valid reason; there is no limit on the size of a single vault. Maybe change to:
You can add additional storage vaults at any time, for example if you need to store data in another region.
-
-
tw-preview.dev.amust.local tw-preview.dev.amust.local
-
NoteIf you are a customer of a Veeam Cloud & Service Provider partner, you cannot launch the Add New Tenant wizard. Ask your service provider to add your Veeam Data Cloud Vault tenant or send you an email with an invitation link that allows you to launch the wizard.
same comment as Azure
-
-
tw-preview.dev.amust.local tw-preview.dev.amust.local
-
If you are a customer of a Veeam Cloud & Service Provider partner, you cannot launch the Add New Tenant wizard. Ask your service provider to add your Veeam Data Cloud Vault tenant or send you an email with an invitation link that allows you to launch the wizard.
Can we soften this a bit? Maybe:
If you are a customer of a Veeam Cloud & Service Provider (VCSP) partner, the Add New Tenant wizard won't be available for direct use. To add a new Veeam Data Cloud Vault tenant, your service provider can either create the tenant for you or share an invitation link that allows you to launch the tenant creation wizard.
-
-
online.valenciacollege.edu online.valenciacollege.edu
-
Diction word choice that both conveys and emphasizes the meaning or theme of a poem through distinctions in sound, look, rhythm, syllable, letters, and definition
Diction means choosing words that emphasize meaning and give rhythm to the piece through sound, syllables, and so on.
-
Look for words that show an implied meaning. Words display a conscious choice of the author that indicates tone. BEWARE! Connotations can change over time. Example: confidence/ arrogance or mouse/ rat
Connotation is when a word has an implicit meaning, like "that guy's a rat" or "she's mousey."
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
2.2.2. The “Golden Rule”# One widespread ethical principle is what English speakers sometimes call the “Golden Rule”: “Tsze-kung asked, saying, ‘Is there one word which may serve as a rule of practice for all one’s life?’ The Master said, ‘Is not reciprocity such a word? What you do not want done to yourself, do not do to others.’” Confucius, Analects 15.23 (~500 BCE China) “There is nothing dearer to man than himself; therefore, as it is the same thing that is dear to you and to others, hurt not others with what pains yourself.” Gautama Buddha, Udānavarga 5:18 (~500 BCE Nepal/India) “That which is hateful to you do not do to another; that is the entire Torah, and the rest is its interpretation.” Hillel the Elder, Talmud Shabbat, folio 33a (~0 CE Palestine) “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.” Jesus of Nazareth, Matthew 7:12 (~30 CE Palestine) And many more…
I find the “Golden Rule” sounds simple enough, but it doesn't always work well in practice because everyone's feelings and boundaries are different. Especially on social media, judging behavior by thinking “I don't mind, so others shouldn't either” can actually overlook the feelings of those who are genuinely affected.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Taoism# Act with unforced actions in harmony with the natural cycles of the universe. Trying to force something to happen will likely backfire. Rejects Confucian focus on ceremonies/rituals. Prefers spontaneity and play. Like how water (soft and yielding), can, over time, cut through rock. Key figures: Lao Tzu ~500 BCE China Lao Tzu Zhuangzi ~300 BCE China
As a Chinese student, I actually resonate quite deeply with the Taoism mentioned here. Taoism emphasizes “governing through non-action” and following nature's course. This inclines me to question “forceful intervention” and “over-optimization” when considering social media and tech ethics. Sometimes, the more platforms try to control user behavior, the more likely they are to backfire—much like Taoism's idea that “the harder you try, the more unbalanced things become.”
-
-
www.swinburne.edu.au www.swinburne.edu.au
-
Why some clothes shrink in the wash — and how to 'unshrink' them
- Clothes shrink in the wash due to natural fibres like cotton and linen relaxing to their original crinkled state when exposed to heat, moisture, and agitation.
- During manufacturing, fibres are stretched straight, but hydrogen bonds break in hot water, allowing cellulose chains to recoil.
- Loosely knitted fabrics shrink more than tightly woven ones; even cold water can cause some shrinkage due to swelling and mechanical action.
- Wool shrinks via felting, where cuticle scales on fibres interlock during washing.
- Synthetics like polyester resist shrinking due to crystalline structures that maintain stability.
- To unshrink clothes, soak in lukewarm water with conditioner or baby shampoo, then stretch gently and dry flat.
Hacker News Discussion
- Users share tips on durable clothing brands like American Giant hoodies, Carhartt pants, Duluth Trading shirts, and Uniqlo's better options, noting quality declines in some like Levi's and H&M.
- Discussions on avoiding shrinkage: wash cold, air dry or use low-heat dryers, hang dry with fans/dehumidifiers; modern heat pump dryers praised for gentleness.
- Health concerns about dryer lint and microplastics from synthetics, with anecdotes of respiratory issues from poor ventilation.
- Debates on fabric quality: longer staple cotton resists shrinking better; pre-shrunk fabrics and blends help; natural vs. synthetic preferences vary.
-
-
www.sciencedirect.com www.sciencedirect.com
-
search accuracy was over 95 %, which did not differ across the trial types. For this reason, we focused on reaction time (RT) data.
they measured reaction time
-
in a half of all the trials, one of the non-targets had a distinct color.
why?
-
-
drive.google.com drive.google.com
-
outside sources
I figure this will be talked about later but would you want us to include the class reading as a part of the sources or have it strictly be outside sources?
-
Time Log: Maintain and submit time log by 11:00 pm on 3/15
The college I went to previously had physical time logs we had to fill out. Is that an option for this class, or is it just online? Online is completely fine with me, I'm just curious!
-
-
data-feminism.mitpress.mit.edu data-feminism.mitpress.mit.edu
-
The process of converting life experience into data always necessarily entails a reduction of that experience
Yes - I imagine most definitely with quant data. I wonder if certain data is less reductive - for example, qual data, which allows for themes/coding from narratives, first-hand storytelling, etc.
-
The work of data feminism is first to tune into how standard practices in data science serve to reinforce these existing inequalities and second to use data science to challenge and change the distribution of power.
Glad to learn this new term, Data-Feminism, and seeking the equal distribution of power and commitment to co-liberation
-
[It is] when prejudice and discrimination is supported and encouraged by the world around you. It is when you are harmed or not helped by government, community or society at large because of your identity.”
Powerful.
-
-
www.zara.com www.zara.com
-
bag [0]
Bad practice, not understandable: the page lacks clear labels and contextual text, which makes it difficult to navigate for users with ADHD or autism. While the layout is simple, there are no product names, ID codes, or prices, and the absence of this information reduces clarity! (Note: the page did not allow me to annotate images, so I had to make use of the few words on this page to drive my points!)
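The missing-labels problem above can be sketched as a tiny automated check. This is a minimal illustration using Python's standard-library `html.parser`, not Zara's actual markup or a full WCAG audit; the sample HTML string and the `AccessibilityChecker` class name are hypothetical.

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flag <img> tags without alt text and <a> tags without an aria-label.

    A rough heuristic for the kind of missing-label issues noted above,
    not a complete accessibility audit.
    """
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img missing alt text")
        if tag == "a" and not attrs.get("aria-label"):
            # A link with visible text may still be fine; a real checker
            # would also inspect the link's text content.
            self.issues.append("a missing aria-label")

# Hypothetical markup modeled on a sparse product page
sample = '<img src="bag.jpg"><a href="/cart" aria-label="Shopping bag">bag [0]</a>'
checker = AccessibilityChecker()
checker.feed(sample)
print(checker.issues)  # → ['img missing alt text']
```

A checker like this only catches structural gaps; judging whether a label is actually descriptive still needs a human reviewer.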
-
Sh
Good practice, robust: consistent page layout that is visually organized and stays the same across different devices (tested on a desktop as well as a laptop).
-