1,441 Matching Annotations
  1. Feb 2018
    1. This standard food was used as a base for all nutrient-altered foods that were prepared in this study (Table S3). The sugar-enriched foods were prepared by first making a stock sugar solution of 20 g sugar in 10 ml ddH2O, solubilized with rounds of 15 s in the microwave and then stirring, repeated until the sugar dissolved. 1.5 ml amounts of these sugar solutions were immediately mixed with 3.5 ml of melted standard food. As aspartame, erythritol, saccharin, and xylitol were not uniformly soluble, the sweetener-enriched foods were generated through direct addition of powder equivalents directly into 5 ml of melted standard food to a final concentration of 1 M (Table S3). Yeast-enriched food was prepared by mixing 1.5 ml of heat-killed yeast paste into 3.5 ml melted standard food. Dually enriched food was prepared through addition of 1.5 ml sugar solution and 1.5 ml heat-killed yeast to 2 ml standard food. Desiccated food was prepared by addition of 2.5 g silica gel (roughly 2.5 ml volume) to vials containing 5 ml standard food (Table S3).

      The recipes for the substance-enriched foods given to the flies.
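      As a quick check of the arithmetic behind these recipes, here is a short Python sketch. The quantities come from the passage above; the molar mass is for sucrose and is our illustrative assumption, since it differs for other sugars:

      ```python
      # Rough check of the final sugar concentration in the enriched food,
      # using the quantities stated in the methods excerpt above.

      stock_sugar_g = 20.0   # g sugar dissolved in the stock
      stock_water_ml = 10.0  # ml ddH2O used for the stock
      stock_used_ml = 1.5    # ml of stock mixed into the food
      food_ml = 3.5          # ml of melted standard food

      # Grams of sugar delivered by 1.5 ml of stock. Approximation: we treat
      # the stock volume as the water volume alone; dissolved sugar adds
      # volume, so this slightly overestimates the true concentration.
      sugar_g = stock_sugar_g * stock_used_ml / stock_water_ml

      total_ml = stock_used_ml + food_ml
      conc_g_per_ml = sugar_g / total_ml

      MOLAR_MASS = 342.3  # g/mol, sucrose; our assumption for illustration
      molarity = conc_g_per_ml / MOLAR_MASS * 1000  # mol/L

      print(f"~{sugar_g:.1f} g sugar in {total_ml:.1f} ml food = "
            f"{conc_g_per_ml:.2f} g/ml (~{molarity:.2f} M if sucrose)")
      ```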

    2. The impact of diverse dietary sugars on insulin signaling has not been fully defined in D. melanogaster. From the perspective of Wolbachia endosymbiosis, this study suggests that dietary sugars induce different classes of mechanistic responses.

      The researchers' reasoning for executing this experiment was to explore the mechanisms that control Wolbachia concentration, about which very little is known.

    3. To further investigate how oocyte Wolbachia titer is controlled, this study analyzed the response of wMel Wolbachia to diets enriched in an array of natural sugars and other sweet tastants. Confocal imaging of D. melanogaster oocytes showed that food enriched in dietary galactose, lactose, maltose and trehalose elevated Wolbachia titer.

      The paper attempts to find the mechanism responsible for the increase in Wolbachia concentration in germ line cells by testing natural and artificial sugars and yeast.

    1. we illustrate through a series of analyses that the stationary gene partition is superior to the nonstationary partition

      Gene partitioning divides a multigene data set into subsets of genes that are analyzed separately. Here, the genes are partitioned by whether their base composition is stationary, that is, roughly constant across the sampled species. Partitioning matters because nonstationary base composition can mislead tree-building methods, grouping species by shared composition rather than shared ancestry. The stationary partition is superior to the nonstationary one because it recovers the accepted species tree more reliably.

    2. Herein, we demonstrate that these conclusions require substantial revision

      The main goal of this paper is to revise previous work on the construction of phylogenetic trees: according to earlier studies by Rokas and by Gee, the trees that have been created are fairly inaccurate. This research demonstrates that these phylogenetic trees are in fact accurate and corrects the conclusions made by previous scientists.

    3. The authors then carried out a series of analyses

      In this experiment, the authors set out to find the smallest amount of data needed to arrive at a correct species tree. In other words, they want to discover the most efficient way to create an accurate phylogenetic tree.

    4. this approach is necessary because there are no identifiable parameters that predict the phylogenetic performance of genes (Gee, 2003; Rokas et al. 2003)

      It is hypothesized that the accuracy of a tree is directly related to the number of genes used to create it. This means that the greater the number of genes studied, the more accurate the tree, with accuracy continuing to improve as more genes are added.

      The second part of the hypothesis states that the above should hold because there is no known evidence that can "rank" a gene's ability to contribute to a phylogenetic tree. In effect, this hypothesizes that all genes contribute equally to a phylogenetic tree, with no gene able to contribute any more or less than another.

    5. We investigated incongruence between stationary and nonstationary partitions further by examining partitioned Bremer support.

      Partitioned Bremer support is a method for assessing how much each data partition contributes to the support for each branch of a tree built from the combined data, although a few caveats require some insight. When a combined-data analysis yields numerous most parsimonious trees, partitioned Bremer support is generally computed on the consensus tree. However, extra information can be gained if it is computed on each of the most parsimonious trees individually, or even on slightly less optimal trees. When several similar parsimonious trees are found among more varied ones, averaging the partitioned Bremer support for each data set over these trees can avoid misleading results.

    6. We performed parsimony bootstrap analysis of individual genes across all positions to compare the phylogenetic performance of these partitions.

      "Bootstrapping" is a technology based practice for determining the accuracy or precision of many statistical results. For phylogenetic  trees, the bootstrap helps by exampling "confidence" in related organisms (common ancestor) in proportion with organisms from the same "family".

      During this procedure, the authors essentially performed an analysis to determine how divergent, or dissimilar, the genes had come to be.
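      To make the resampling idea concrete, here is a minimal, generic bootstrap sketch in Python. It resamples a toy data set rather than alignment columns, so it illustrates the logic of bootstrapping only, not the authors' parsimony pipeline:

      ```python
      import random

      # Minimal bootstrap sketch: gauge the precision of a sample statistic
      # by resampling the data with replacement many times.
      # (Phylogenetic bootstrapping instead resamples alignment columns and
      # rebuilds the tree for each pseudo-replicate; this toy resamples a mean.)

      data = [2.1, 2.5, 1.9, 2.8, 2.3, 2.6, 2.0, 2.4]
      n_replicates = 1000

      def mean(xs):
          return sum(xs) / len(xs)

      boot_means = []
      for _ in range(n_replicates):
          resample = [random.choice(data) for _ in data]  # sample with replacement
          boot_means.append(mean(resample))

      boot_means.sort()
      lo = boot_means[int(0.025 * n_replicates)]
      hi = boot_means[int(0.975 * n_replicates)]
      print(f"observed mean = {mean(data):.2f}, "
            f"95% bootstrap interval = [{lo:.2f}, {hi:.2f}]")
      ```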

    1. Our data sets consist of detailed second-by-second longitudinal records of online support activity for ISIS from its 2014 development onward and, for comparison, online civil protestors across multiple countries within the past 3 years

      On a daily basis, experts looked for specific hashtags and keywords that indicated activities related to ISIS or to civil unrest. Then, at the same time each day, these experts logged into VK.com--an equivalent of Facebook that is popular in Europe--and searched for newly created aggregates, which were then inserted into a database.

  2. Jan 2018
    1. If Ir40a is required for the behavioral response to DEET, one must contemplate that both Ir40a and Orco are necessary for DEET sensation, but that neither pathway is sufficient for repellency on its own.

      The author of this paper is currently doing research on the olfactory receptors that mediate the mosquito’s human and plant host-seeking behavior and the genes that regulate their appetitive drives. This includes research on Ir40a.

    1. Figure 2 indicates reconstructed and estimated shifts in the distribution of major Mediterranean biomes

      These are some of the main general biomes found in the Mediterranean region. The BIOME4 model uses a longer list of specific biome types, and more than 20 were found in the Mediterranean region. For the analysis, this larger list was grouped into ten “aggregated biome types” which are presented in Figure 3. To see how the aggregated biomes were grouped, check out Table S1 in the Supplementary Materials.

    2. The colored areas illustrate the interquartile interval provided by the intermodel variability

      Each of the colored lines is generated by using multiple climate models as input to the BIOME4 model. The solid line indicates the average change ratio predicted from multiple runs of the model. The variation within each model is indicated by the shaded region, which encompasses the 25% of results above and below the average. As you can see, some models had higher variability than others, and the predicted changes within the first 50 years overlap between models.

    3. The limitations of a relatively simple ecosystem model are largely offset by two factors. First, this method directly relates the physical environment, including its seasonal variability, and atmospheric CO2 to plant processes and thereby avoids the strong assumptions made by niche models (18). Second, past observations are analyzed with the same process-based model that is used for the future projections, thus providing a more coherent framework for the assessment.

      Though the BIOME4 methods have limitations, there are two major advantages.

      First, the model is based on the underlying processes that connect climate and ecosystems, so it avoids the assumptions made by models that are based on correlating current ecosystem distributions and climate values. For example, these models often assume that the current ecosystem distributions are stable and at equilibrium, rather than in flux.

      Second, the researchers are able to use the same model for both the past reconstructions and for the future projections. This allows for more direct comparison between the two groups.

    4. For the Holocene, BIOME4 was inverted to generate gridded climate patterns by time steps of 100 years and associated ecosystems (“biomes”) from

      To reconstruct Holocene climate variables and biome types, the researchers "inverted" the BIOME4 model, flipping the input and output data. Rather than using climate data as the input, it uses plant data.

      The researchers started by converting the pollen core data for a given time and location into "plant functional type" scores, which BIOME4 uses to rank and select biome types based on the "best match." "Plant functional type" groups species together based on similar characteristics such as respiration rate, response to climate variation, and genetic makeup.

      The researchers then ran the BIOME4 model in "inverse" mode, testing a large number of randomized climate inputs for each time and location. The set of climate inputs that resulted in the best match to the pollen core data is used for the reconstruction.
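      The actual BIOME4 inversion is far more elaborate, but the search logic it describes can be sketched in Python: propose many climate inputs, run the forward model on each, and keep the candidate that best matches the pollen-derived scores. Everything here (`toy_biome_model`, the scores, the mismatch function) is a hypothetical stand-in, not BIOME4:

      ```python
      import random

      # Toy model inversion by random search: propose candidate climates,
      # run the forward model on each, and keep the candidate whose
      # predicted vegetation best matches the observed pollen-derived scores.

      def toy_biome_model(temperature_c, precip_mm):
          """Hypothetical stand-in mapping climate to plant-type scores."""
          warmth = max(0.0, min(1.0, (temperature_c + 5) / 25))
          wetness = max(0.0, min(1.0, (precip_mm - 300) / 700))
          trees = warmth * wetness
          return {"trees": trees, "grasses": 1.0 - trees}

      def mismatch(predicted, observed):
          return sum((predicted[k] - observed[k]) ** 2 for k in observed)

      observed_pft = {"trees": 0.7, "grasses": 0.3}  # e.g., from pollen data

      best_score, best_climate = float("inf"), None
      for _ in range(10_000):
          candidate = (random.uniform(-5, 30), random.uniform(0, 1500))
          score = mismatch(toy_biome_model(*candidate), observed_pft)
          if score < best_score:
              best_score, best_climate = score, candidate

      print("best-matching climate (T in °C, P in mm):", best_climate)
      ```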

    5. see table S2 and (6) for details

      In many scientific journals, the Materials and Methods are part of the main text, but in Science they are part of the references and Supplementary Materials. For details on how the researchers developed their pathways and models, and information on the different ecosystem groupings, check out the supplementary information.

    6. 25th, 50th, and 75th percentiles

      These boxes show a range of results from many model runs for each scenario. The dot indicates the 50th percentile (median) result for all runs. The box shows the range containing the middle 50% of the results.
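      For readers who want to see how such summaries are computed, here is a minimal Python sketch using made-up model-run values:

      ```python
      import numpy as np

      # Made-up results from repeated model runs for one scenario.
      runs = np.array([1.2, 1.5, 1.7, 1.9, 2.0, 2.2, 2.4, 2.8, 3.1, 3.5])

      q25, q50, q75 = np.percentile(runs, [25, 50, 75])
      print(f"25th = {q25:.2f}, median = {q50:.2f}, 75th = {q75:.2f}")
      # The box in the figure spans q25..q75; the dot marks the median (q50).
      ```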

    1. whole-mount in situ hybridization (WMISH) with species-specific probes, we show that crocodile, lizard, and snake placodes all exhibit spatial expression of Shh

      WMISH is a common technique for visualizing the location of expressed RNAs in embryos.

      In this study, the authors used WMISH to show that Shh was expressed in the placode.

    2. breeding experiments

      Individual lizards with different physical traits were bred together in order to observe how those traits were passed on to the offspring.

    3. each of these dermoepidermal elevations that generate scales in crocodiles, lizards, and snakes occurs at the location of a transient developmental unit that exhibits the characteristics (Fig. 1B) of the mammalian and avian anatomical placode

      The authors of this study expand on the findings of previous studies by showing that the elevations that result in scales correspond to the anatomical placode in mammals and birds, which gives rise to hair and feathers.

    4. skin developmental series (Fig. 1A) in crocodiles (Crocodylus niloticus), bearded dragon lizards (P. vitticeps), and corn snakes (Pantherophis guttatus)

      The authors took successive microscopic images of different body parts in crocodiles, lizards, and snakes, specifically focusing on the places where scales form.

    1. Here we present a framework to calculate the amount of mismanaged plastic waste generated annually by populations living within 50 km of a coast worldwide that can potentially enter the ocean as marine debris.

      The authors use population density data and waste accumulation rates to predict how much plastic might enter the ocean as marine debris.
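      The core of such a framework is a chain of multiplications for each coastal country. The Python sketch below shows the general shape of the estimate; every number in it is a placeholder, not a value from the paper:

      ```python
      # Sketch of a mismanaged-waste estimate for one coastal country.
      # All inputs below are placeholders, not the study's actual data.

      coastal_population = 5_000_000     # people within 50 km of the coast
      waste_kg_per_person_per_day = 1.2  # municipal solid waste generation
      plastic_fraction = 0.11            # share of waste that is plastic
      mismanaged_fraction = 0.25         # share littered or uncontained

      mismanaged_plastic_kg_per_year = (
          coastal_population
          * waste_kg_per_person_per_day
          * plastic_fraction
          * mismanaged_fraction
          * 365
      )
      print(f"~{mismanaged_plastic_kg_per_year / 1e6:.1f} thousand tonnes "
            f"of mismanaged plastic per year")
      ```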

  3. Dec 2017
    1. The most marked and derived macropatterning of skin in reptiles is observed in snakes

      Compared to other reptiles, snakes' scale development is unique.

    2. proliferating cell nuclear antigen (PCNA) analyses indicate a reduced proliferation rate of the placode epidermal cells

      PCNA is a protein involved in cell division, so it is used as a marker to locate cells that are actively dividing.

      This analysis showed that the epidermal cells of the placode were dividing more slowly than the surrounding cells.

    3. associated with the presence of an anatomical placode presenting all the characteristics observed in avian and mammalian placodes

      The authors of this study provide evidence for the theory that hair, feathers, and scales are homologous structures.

    1. To this end, luciferase RNA flanked by the 5′ and 3′UTRs of PE243, was transfected with increasing amounts of MSI1 into human embryonic kidney (HEK) 293T cells, which do not normally express MSI1 (fig. S4).

      Luciferase RNA flanked by the ZIKV UTRs was transfected into human embryonic kidney cells with increasing amounts of MSI1 to test the translation-regulating ability of MSI1. The result was an increase in luciferase translation with MSI1, meaning MSI1 may enhance the translation of RNAs that carry the ZIKV UTRs.

    2. Because there was no discernible difference between ZIKV binding and entry into control and KO cells (Fig. 2, G and H), we asked if MSI1 could regulate translation through ZIKV UTRs

      The viral binding affinity was tested using a viral binding assay, along with a pseudotyped particle infectivity assay to test the ability of the virus to infect a cell. Neither assay showed a significant change in binding or entry between control and KO cells.

    3. Consistently, levels of the viral dsRNA and flavivirus E protein, as well as the infectious titer, were reduced in the KOs (Fig. 2E and fig. S3). Because MSI2 levels were similar between control and KO cells (Fig. 2C), MSI1 and MSI2 are unlikely to have complete functional redundancy in ZIKV replication. Replication of the MR766 strain was also impaired in the KO cells (Fig. 2F)

      Levels of viral dsRNA, flavivirus E protein, and infectious particles were determined using confocal microscopy. Copies of viral RNA were measured after introducing the MR766 strain. The authors use this method to understand the replication processes that can increase or, as in this article, decrease due to the inactivation of MSI1. The results showed a decrease in the amounts of viral particles and in MR766 replication.

    4. We then generated MSI1 knockouts (KOs) in U-251 cells by clustered regularly interspaced short palindromic repeats (CRISPR)–Cas9–mediated targeting of exons 8 or 6 of MSI1 (KO1 and KO2, respectively; Fig. 2C and fig. S2). Control cells were obtained through clonal expansion of cells transfected with Cas9 alone. By measuring viral RNA at different times after PE243 infection, a marked reduction of viral load was seen in KO1 and KO2 cells at 24 and 48 hours (Fig. 2D).

      MSI1 knockout (KO) cells were created using CRISPR-Cas9. KO1 and KO2 cells were then probed with antibodies against the N and C termini of MSI1 and against MSI2. The results showed a decrease in viral load at 24 and 48 hours.

    5. In all three cell types, MSI1 depletion led to a marked reduction in viral RNA levels (Fig. 2, A and B)

      Cells were treated with either a control siRNA (small interfering RNA used to silence protein-coding genes) or MSI1 siRNA and then infected with PE243. Western blots were run on the proteins from the cells. The results showed a decrease in viral RNA levels when MSI1 was depleted.

    6. To investigate whether MSI1 also binds ZIKV 3′UTR in vivo, ultraviolet (UV) cross-linking immunoprecipitation (CLIP) of RNA was performed from lysates of PE243-infected U-251 glioblastoma cells, revealing a robust direct interaction between MSI1 and PE243 ZIKV RNA (Fig. 1E)

      UV light was used to cross-link proteins to the RNAs they bind, followed by immunoprecipitation (CLIP) from infected cells. A western blot was run on the results, which allowed the scientists to detect the strong direct interaction between MSI1 and PE243 ZIKV RNA.

    1. Additional BRUV survey sites

      BRUVs were placed at sites different from the predetermined video sites in order to serve as a control for the experiment. The author expected to see distinct measures of site fidelity and examined these results to prevent false positives, ensuring that the results gathered from the no-take marine reserves were genuinely higher than measures of site fidelity in open waters. -Sindy

    2. Baited remote underwater video (BRUV)

      BRUV is a system used in marine biology research. By attracting sharks into the field of view of a remotely operated camera, the technique records the diversity, abundance, and behavior of shark species. -Sindy

    3. If sharks exhibit fine-scale site-fidelity to certain parts of GRMR, then the number of detections on a monitor should decrease with distance from the shark's tagging location.

      As a shark gets farther from its initial tagging location, the number of detections indicating site fidelity should decrease, because the shark is moving farther and farther from its original site. -Sindy

  4. Nov 2017
    1. More information on shark movements and relative abundance in different management zones is needed to understand the extent to which marine reserves benefit Caribbean reef sharks and reef sharks in general.

      This research centers on uncovering information about the abundance of sharks in monitored zones reserved for marine animals. It tests the effectiveness of no-take marine reserves and their relationship to the site fidelity of reef sharks. -Sindy

    2. However, can marine reserves also benefit large, roving reef predators that are potentially mobile throughout their life?

      This question sheds light on the suitability of marine reserves not only as a permanent safe harbor for the recovery and expansion of a species, but also as a space temporarily inhabited by mobile species, whether they use it to breed under protection or as a stable source of food and shelter. -Sindy

    1. The interactions between the insulin signaling pathway (ISP) and juvenile hormone (JH) controlling reproductive trade-offs are well documented in insects.

      A signaling pathway is an activation process that gives a cell a specific function. So communication takes place between these two hormones, juvenile hormone and insulin, and together they link the insect's energy status to its development. -Nicole Jones

    1. Multiple alignments of protein and nucleotide sequences were implemented using the BIOEDIT program (Hall 1999) and visually inspected for errors.

      BIOEDIT is a computer program capable of editing and aligning sequences rapidly and effectively. It is meant to be easy to use and is intended to allow researchers to easily create and analyze basic sequences. It is incredibly useful in this case because it allows the researchers to align sequences from multiple insect species quickly. Learn more at http://www.mbio.ncsu.edu/BioEdit/page2.html -Eri-Ray

    2. It has been found a conserved relationship among dsx/tra/tra-2 across dipterans so that this axis represents the ancestral state of the sex determination cascade,

      The conserved relationship among all dipterans suggests that this part of the pathway evolved long ago. Part A of Figure 1 shows the cascade by which sex is determined in Drosophila, an example of a mechanism that determines sex efficiently. Through evolution and natural selection, the mechanisms that performed most efficiently were passed on (as shown in Part A of Figure 1), while less efficient mechanisms were removed from the population. -Elder

    1. Contraction times at the 45% Lf position

      Contraction times differed depending on the position along the fish's body. So, the authors used the positions that minimized contraction time to estimate the maximum speed of a fish.

    2. also limited slippage of the hypodermic needles during contraction (no slippage was observed during contraction as well).

      Parafilm was used to make sure the needles didn't slip while measuring muscle contraction time, because that would lead to a source of error.

    1. fluorescein isothiocyanate (FITC)–dextran (3 kD) in aCSF, was infused at midday (12 to 2 p.m.) via the cannula implanted in the cisterna magna. In sleeping mice, a robust influx of the fluorescent CSF tracer was noted along periarterial spaces

      The scientists injected a fluorescent tracer, FITC, into a large reservoir of CSF called the cisterna magna.

      This enabled the scientists to detect CSF as it circulated through the brain and diffused into the interstitial space between neurons.

      The scientists observe that in sleeping mice the CSF flows robustly along arteries and diffuses into the brain tissue (the parenchyma).

    2. Periarterial and parenchymal tracer influx was reduced by ~95% in awake as compared with sleeping mice during the 30-min imaging session

      The scientists awaken sleeping mice and inject a second tracer (also small) into their CSF.

      The scientists observe far less CSF flow along arteries and into the brain. This suggests that wakefulness exerts a powerful and rapid effect on CSF circulation.

      This experiment compares two arousal states within the same mouse, rather than comparing the two states in two different mice. This kind of experimental design minimizes the impact of genetic and environmental variability between mice and makes this study extremely powerful.

    3. we repeated the experiments in a new cohort of mice in which all experiments were performed when the animals were awake (8 to 10 p.m.).

      The scientists have already demonstrated that CSF circulation is reduced when they wake a sleeping mouse.

      However, it is possible that this is only true when a mouse is awakened in the middle of its sleep cycle, or that it is a short-lived effect that only matters during the first several minutes of wakefulness.

      The scientists expanded their finding by injecting tracer into awake mice and then anesthetizing them.

    4. P < 0.05, two-way analysis of variance (ANOVA) with Bonferroni test],

      An Analysis of Variance (ANOVA) is a statistical method for determining whether multiple groups being studied are truly different from each other.

      "n = 6 mice" means that six mice were compared in total; this is a small sample, but because of the powerful design of their study (comparing arousal states within the same mouse), the scientists are able to detect significant differences between groups.

    5. Radiolabeled125I-Aβ1-40 was injected intracortically in three groups of animals: freely behaving awake mice, naturally sleeping mice, and animals anesthetized with ketamine/xylazine (fig. S4).

      In order to test whether their findings may be relevant for understanding Alzheimer's disease (AD), the scientists wanted to determine if the clearance (removal) of beta amyloid, Aβ, a protein that accumulates during AD, was more or less efficient during sleep.

      The scientists injected small amounts of a slightly radioactive form of Aβ, then waited for between 10 minutes and 4 hours, giving the brain an opportunity to remove the Aβ.

      The radioactive label is believed to not impact the clearance of Aβ, and allowed the authors to determine how much of the Aβ protein remained in the brain using a gamma counter.

  5. Oct 2017
    1. we used several methods to evaluate the degree of consistency among the sgRNAs or shRNAs targeting the top candidate genes

      The authors compared GeCKO screening to a similar screen performed with shRNA.

    2. RNAi Gene Enrichment Ranking (RIGER) algorithm

      The authors used an algorithm to rank the genes enriched in their screen by how likely it is these genes contribute to PLX resistance.
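      RIGER's actual statistic is more sophisticated, but the flavor of the approach can be sketched in Python: aggregate the enrichment of each gene's sgRNAs and rank genes by the result. The gene names echo hits reported for this screen; the numbers are invented:

      ```python
      from statistics import median

      # Toy gene ranking: each gene has several sgRNAs, each with a log2
      # fold-change after drug selection. Ranking genes by the median
      # enrichment of their sgRNAs rewards consistency, so a single outlier
      # sgRNA cannot carry a gene to the top of the list.

      sgrna_log2fc = {
          "NF1":    [3.1, 2.8, 3.4, 2.9],
          "MED12":  [2.2, 2.6, 2.0],
          "GENE_X": [0.1, 4.0, -0.2],  # one outlier sgRNA, low consistency
      }

      ranked = sorted(sgrna_log2fc.items(),
                      key=lambda item: median(item[1]),
                      reverse=True)

      for gene, lfcs in ranked:
          print(f"{gene}: median log2 fold-change = {median(lfcs):.2f}")
      ```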

    3. we found enrichment of multiple sgRNAs that target each gene after 14 days of PLX treatment (Fig. 3E), suggesting that loss of these particular genes contributes to PLX resistance.

      After 14 days, the authors saw a change in the distribution of sgRNAs in drug-resistant cells. From the new distribution of sgRNAs, they were able to identify genes that may contribute to PLX resistance.

    4. enrichment of a small group of cells that were rendered drug-resistant by Cas9:sgRNA-mediated modification.

      PLX exposure stops growth in cells with a specific BRAF mutation. Treating a group of cells with PLX will halt the growth of cells without resistance, but cells that are resistant to PLX will continue to grow. This allowed the researchers to isolate drug-resistant cells.

    5. we sought to identify gene knockouts that result in resistance to the BRAF protein kinase inhibitor vemurafenib (PLX) in melanoma

      To test the GeCKO library's effectiveness for positive selection (genetic variation that is beneficial to the cell), the authors tried to use it to identify which genes result in resistance to PLX.

      PLX is a BRAF enzyme inhibitor. BRAF is involved in the uncontrolled (cancerous) cell growth in melanoma.

    6. we conducted a negative selection screen by profiling the depletion of sgRNAs targeting essential survival genes

      To determine how effective the GeCKO library is at knocking out targeted genes, the authors first used a negative selection screen (which tests for deleterious effects).

      They infected cells with the library of sgRNAs. At the end of 14 days, they observed a reduction in the diversity of sgRNAs, as those sgRNAs that targeted genes necessary for survival were lost when cells died.

    7. We designed a library of sgRNAs targeting 5′ constitutive exons (Fig. 2A) of 18,080 genes in the human genome with an average coverage of 3 to 4 sgRNAs per gene (table S1), and each target site was selected to minimize off-target modification

      The authors produced a wide variety of sgRNAs targeting the 5′ ends of constitutive exons, exons that are included in every transcript of a gene and are therefore reliably expressed.

    8. potential of Cas9 for pooled genome-scale functional screening

      As noted above, genome-wide screening has been successfully performed with RNAi. Here, the authors wanted to know if CRISPR could offer a more accurate and precise way to screen genomes.

    1. CSF influx into the cortex of awake, anesthetized, and sleeping mice.

      The authors are looking to see if CSF influx (part of the convective exchange) differs among awake mice, sleeping mice, and anesthetized mice.

    2. We tested the alternative hypothesis that Aβ clearance is increased during sleep and that the sleep-wake cycle regulates

      This illustrates how scientists ask questions.

      The authors take an observed phenomenon (sleep) and carefully form testable hypotheses about its function.

      They combine their observation that sleep is restorative and dysregulated sleep is associated with mental illness, with their knowledge of the glymphatic system.

      They hypothesize a connection between the two, which they carefully test in well-designed experiments.

      They choose powerful methods for data collection and analysis, making comparisons within individuals to minimize variability, and systematically recording data at a variety of time points and conditions to increase their probability of answering their questions.

      Finally, they connect their findings into a general model for how sleep, arousal, and glymphatic function could interplay in normally functioning as well as diseased brains.

    3. why lack of sleep impairs brain function

      The scientists are asking two questions in this study:

      "why do we need sleep?"

      "why do we feel bad if we do not get enough sleep?"

    1. But little is known about the mechanisms underlying the evolution of habitat specialization and the extent to which herbivores contribute to phenotypic divergence during the speciation process

      The authors have a clear goal in their experiment. Because this observation lacks an explanation, they build an experiment to identify the natural processes behind it.

      -Luisa Bermeo

    1. Receivers were attached with shackles and heavy duty plastic cable-ties to a length of polyurethane braided rope

      Polyurethane braided rope is used because it has extreme resistance to abrasion and sunlight. -Sindy

    2. Externally mounted transmitters were fitted to these individuals instead of performing intracoelomic insertion because of inclement weather and rough sea conditions.

      The presence of reef sharks with external transmitters as well as internal transmitters further validates the results obtained. Despite the inclement weather, all sharks were accounted for, providing an accurately measured population pool. -Sindy

    3. Caribbean reef shark populations can benefit from no-take marine reserves and increase in abundance in these areas

      The author expects reef sharks living in no-take marine reserves to be better off and more abundant than those living elsewhere. No-take marine reserves are protected areas of marine habitat that prohibit human activity. This makes sense because the disastrous effects that human actions have on marine environments are common knowledge. Reef sharks would thrive in a location where adequate supplies are met by natural resources, and a no-take reserve is essentially a no-fishing area, preventing sharks from being hunted for their fins, meat, etc. In a Fiji no-take marine reserve, there were observed to be four times as many sharks as in areas where fishing was allowed.

      https://www.youtube.com/watch?v=mYZ6AIgFMQg

      -Sindy

    1. Additional tests for the presence of local molecular clocks were carried out in the case of SXL (insects) by using the program HyPhy

      HyPhy stands for hypothesis testing using phylogenies. HyPhy is a software package that serves many purposes and audiences. It is used primarily for the analysis of genetic sequences using techniques from phylogenetics (the study of evolutionary history), and it is also commonly used in molecular evolution to study rates and patterns of sequence evolution. It can also be compiled as a shared library and called from other programming environments, which makes HyPhy great for group work. To learn more, click here https://academic.oup.com/bioinformatics/article/21/5/676/220389/HyPhy-hypothesis-testing-using-phylogenies -Jake Barbee

    2. To fill this gap, the present work investigates the levels of variation displayed by five sex-determining proteins across 59 insect species, finding high rates of evolution at basal components of the cascade.

      The authors' goal in this article is to examine the variation displayed by five sex-determining proteins in a sample of fifty-nine insect species, finding high rates of evolution at the basal components of the cascade. To see a simple example of what the authors are trying to determine, click here. https://www.youtube.com/watch?v=7usaaiggDgw -Jake Barbee

    3. The study of the epistatic relationships between Sxl and the other genes involved in sex determination [i.e., transformer (tra), transformer-2 (tra-2), fruitless (fru), and doublesex (dsx)] has revealed a hierarchical interaction among them during development (Baker and Ridge 1980), with the product of one gene controlling the sex-specific splicing of the primary transcript of the gene immediately downstream [reviewed in (Sánchez 2008)] (Fig. 1a).

      Splicing is the process by which segments of a primary RNA transcript are removed and the remaining pieces are joined together to form the mature mRNA. This excerpt refers to the phenotypic and genotypic relationship between Sxl and the other sex-determining genes: they form a hierarchy in which the product of each gene controls the sex-specific splicing of the primary transcript of the gene immediately downstream, where "downstream" refers to later levels of the cascade. -Elder

    4. the program committing the embryo to either the male or the female pathway is under the control of the gene Sex lethal (Sxl)

      The "Sex lethal" gene is the sex determining gene of the sexually reproducing Drosophilia Melanogaster, During embryonic development, an XX embryo would have the Sxl gene expressed or active, while the XY embryo would have the Sxl gene suppressed or dormant to ensure the development of a male Drosophilia. -Elder

    5. Fig 1. Schematic representation of the hierarchical epistatic interactions constituting the sex determination cascade in Drosophila [adapted from (Sánchez 2008)] evolving from bottom to top (DSX doublesex, FRU fruitless, TRA-2 transformer-2, TRA transformer, SXL Sex-lethal). a In the absence of X/A signal in males, truncated SXL and TRA proteins will be produced leading to the synthesis of male-specific FRU and DSX that will eventually result in maleness. The major components of the cascade analyzed in the present work are indicated in gray background. b Under the bottom-up hypothesis, genes more recently recruited into sex determining pathways are expected to cause divergence toward the top of the cascade. c According to the developmental constraint hypothesis, genes involved in early aspects of development would be more constrained due to the large deleterious pleiotropic effects of mutations

      The author uses the chart to display the sex-determining proteins of the Drosophila species and to better describe how these proteins play an important role in sex determination. The chart also shows how the cascade evolved from the bottom up, creating higher divergence toward the top of the cascade. This would explain why genes that evolved earlier are more constrained than those recruited later in the cascade. To better understand sex determination, please watch this video. https://www.youtube.com/watch?v=NQ4Mh_CU15E -Jake Barbee

    1. JH and insulin regulate reproductive output in mosquitoes; both hormones are involved in a complex regulatory network, in which they influence each other and in which the mosquito's nutritional status is a crucial determinant of the network's output.

      The hypothesis is that the mosquito's nutrition affects both "insulin sensitivity" and "juvenile hormone synthesis." Insulin is a hormone that regulates the amount of glucose (sugar) in the blood. Juvenile hormone is an insect hormone involved in maturation and reproduction. So the researchers are saying that the amount of nutrition (food) a mosquito takes in determines the activity of these two hormones, insulin and juvenile hormone, in the insect's body.

      Since the researchers hypothesize that how much a mosquito eats determines how strongly insulin and juvenile hormone act, how much the insects reproduce is also affected. In other words, if the insects eat enough, they will reproduce under better conditions than if they do not, because the hormones that control reproduction are themselves controlled by the insect's nutrition.

      -Nicole Jones

    1. Finally, knowing object distance is a prerequisite (or corequisite) in the model for deconfounding size, impedance and shape, so these features would first appear in the torus and higher areas. Although this proposal is not yet based on quantitative simulation or modeling, we believe it may be a useful working hypothesis for interpreting and further exploring parts of the electrosensory nervous system.

      In this sentence, the authors hypothesize that the electrosensory nervous system of the electric fish feeds the size, shape, and distance of surrounding objects, as encoded in the EOD pattern, into an algorithm that processes and relays this information to the fish's brain.

      -Kierra Hobdy

    2. Ultimately, weakly electric fish must extract and interpret any useful signals contained in small-field perturbations superimposed upon the intrinsic EOD pattern. Therefore, a considerable volume of the electric fish brain is devoted to electrosensory processing. For the computational algorithms proposed above to be involved in electrolocation, they must have a plausible neural implementation in the fish’s nervous system. We propose one such projection onto the neural networks in the electric fish brain.

      Electric fish receive signals and information from their environment through electroreceptors in their skin. After an external stimulus reaches the electroreceptors, the information is relayed in a pattern to the electroreceptive organs of the fish. Since this is an essential part of their way of life, the authors know that a large number of neurons and a large volume of the brain are involved in this process. They therefore hypothesized that, for this sensory information to be relayed efficiently and quickly to the brain, the neural networks of the electric fish must implement such an algorithm.

      -Kierra Hobdy

    1. Also, the replication “succeeds” when the result is near zero but not estimated with sufficiently high precision to be distinguished from the original effect size.

      Here, the authors describe a problem of judging the replication success by evaluating the replication effect against the original effect size. When the replication effects size is near zero, it could be possible that the data shows no effect, and therefore we would find an unsuccessful replication attempt.

      However, the estimation of the effect size could be imprecise. This means that there could be a lot of “noise” in the data, from random or systematic errors in the measurement. If there was a lot of noise in the data, it could distort our impression of whether the effect is really zero or not.

      We might conclude that a replication with an effect size close to zero was sufficiently different from zero and thus successful, although the effect was really just produced by noise in the data, and the true effect is zero, meaning that the replication could be falsely taken as a success.
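      Concretely, one common criterion asks whether the original effect size falls inside the replication's 95% confidence interval. The Python sketch below (with illustrative numbers, not the project's data) shows how a wide interval around a near-zero estimate can still count as a "success":

      ```python
      import math

      # Judge replication "success" by whether the original effect size lies
      # inside the replication's 95% CI (illustrative criterion and numbers).

      def ci_contains(original_r, replication_r, replication_n):
          # Fisher z-transform the replication correlation; SE = 1/sqrt(n-3).
          z = math.atanh(replication_r)
          se = 1 / math.sqrt(replication_n - 3)
          lo = math.tanh(z - 1.96 * se)
          hi = math.tanh(z + 1.96 * se)
          return lo <= original_r <= hi

      # A precise near-zero replication excludes the original effect ...
      print(ci_contains(original_r=0.40, replication_r=0.02, replication_n=500))  # False
      # ... but an imprecise one (small n, wide CI) "succeeds" despite r ~ 0.
      print(ci_contains(original_r=0.40, replication_r=0.02, replication_n=20))   # True
      ```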

    2. We correlated the five indicators evaluating reproducibility with six indicators of the original study (original P value, original effect size, original sample size, importance of the effect, surprising effect, and experience and expertise of original team) and seven indicators of the replication study (replication P value, replication effect size, replication power based on original effect size, replication sample size, challenge of conducting replication, experience and expertise of replication team, and self-assessed quality of replication)

      Last, the authors wanted to know if successfully reproducible studies differed from studies that could not be replicated in a systematic way.

      For this, they checked if a number of differences in the original studies, such as the size of the effect originally reported, was systematically related to successfully replicated studies.

      They also checked if a number of differences in the replication studies themselves, such as the size of the effect of the replication study, related systematically to successful replications.

    3. We conducted fixed-effect meta-analyses using the R package metafor (27) on Fisher-transformed correlations for all study-pairs in subset MA and on study-pairs with the odds ratio as the dependent variable.

      The authors combined the results of each original and replication study to determine if the cumulative joint effect size was significantly different from zero. If the overall effect was significantly different from zero, this could be treated as an indication that the effect exists in reality, and that the original or replication did not erroneously pick up on an effect that did not actually exist.
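      A fixed-effect meta-analysis on Fisher-transformed correlations amounts to an inverse-variance weighted average. The authors used the R package metafor; the Python sketch below shows the same standard computation on made-up study pairs:

      ```python
      import math

      # Fixed-effect meta-analysis of correlations via Fisher's z-transform.
      # Each study contributes z = atanh(r) with variance 1/(n - 3); the
      # pooled estimate is the inverse-variance weighted mean of the z's.

      studies = [(0.35, 80), (0.10, 120), (0.22, 60)]  # made-up (r, n) pairs

      weights = [n - 3 for _, n in studies]
      zs = [math.atanh(r) for r, _ in studies]

      z_pooled = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
      se_pooled = 1 / math.sqrt(sum(weights))
      lo = math.tanh(z_pooled - 1.96 * se_pooled)
      hi = math.tanh(z_pooled + 1.96 * se_pooled)

      print(f"pooled r = {math.tanh(z_pooled):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
      # If the CI excludes zero, the pooled effect differs significantly from zero.
      ```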

    4. tested the hypothesis that this proportion is 0.5

      The authors hypothesized that in half the replication studies, the effect would be stronger in the original than the replication.

      The reason for choosing this null hypothesis here is that this is the expectation we would have if chance alone determined the effect sizes. There are two likely outcomes for each replication study: (1) that its effect size is bigger than that of the original study, and (2) that its effect size is smaller than that of the original study.

      If chance alone determined the effect size of the replication, we would see each possible outcome realized in 50% of the cases. If the proportion of replication effects that are bigger than the effect of the original study was significantly different from 50%, it could be concluded that this difference was not random.
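      Testing whether an observed proportion differs from 0.5 is a standard binomial test. Here is a minimal Python sketch with illustrative counts, not the project's actual numbers:

      ```python
      from scipy import stats

      # Did significantly more than half of the pairs show a stronger effect
      # in the original study than in the replication? Test against p = 0.5.

      n_pairs = 100           # original/replication study pairs (made up)
      n_original_bigger = 82  # pairs where the original effect was larger

      result = stats.binomtest(n_original_bigger, n_pairs, p=0.5)
      print(f"p-value = {result.pvalue:.3g}")  # small p: not a 50/50 split
      ```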

    5. Each replication team conducted the study, analyzed their data, wrote their summary report, and completed a checklist of requirements for sharing the materials and data.

      After completing their replication attempt, independent reviewers checked that each team's procedure was well documented, that it followed the initial replication protocol, and that the statistical analysis on the effects selected for replication were correct.

      Then, all the data were compiled to conduct analyses not only on the individual studies, but about all replication attempts made. The authors wanted to know if studies that replicated and those that did not replicate would be different.

      For instance, they investigated whether studies that replicated were more likely to come from one journal than another, or whether studies that did not replicate differed in their level of statistical significance from studies that could be replicated.

  6. Sep 2017
    1. In parallel, we engineered a new KDM5B construct with a 10X HA-tag SM (HA-KDM5B) (5) to complement FLAG-KDM5B (formerly referred to as SM-KDM5B), as shown in Fig. 4A. As a first application of this technology, we wanted to test if polysomes interact with each other to form higher-order structures that can translate two distinct mRNAs at the same time.

      To determine whether multiple polysomes can interact with one another to translate multiple mRNAs, the authors transfected two KDM5B genes into the cell, one containing the HA SM tag and one containing the FLAG SM tag. These two tags can be detected by different antibodies bound to different color fluorochromes.

    2. To measure the lifetime of Fab binding, we performed fluorescence recovery

      Photobleaching, or FRAP (fluorescence recovery after photobleaching), uses laser light to quench the fluorescence emitted by the fluorescent tag. The fluorescence recovers as bleached Fab antibodies leave the nascent proteins and are replaced by new, unbleached ones. Thus, this experiment can determine the length of time that Fab antibodies stay bound to newly translated proteins. These experiments gathered baseline data for determining the rate of translational elongation.

    3. To measure how quickly Fab bind polysomes, we microinjected them into cells transfected 6 hours earlier with our KDM5B construct and pre-loaded with MCP.

      This experiment measured how quickly the Fab antibodies bound to the polysomes by injecting the antibodies into cells containing the KDM5B gene and the MCP fluorescent marker.

    4. Besides their brightness, NCT also revealed differences in the mobility of polysomes. We quantified this by measuring the mean squared displacement of tracked polysomes as a function of time.

      Using the FLAG tag described above, the authors tracked individual polysomes and quantified their mobility by measuring the mean squared displacement, that is, how far a tracked polysome moves on average as a function of time.

    5. To determine precisely how many nascent chains exist per site, we calibrated fluorescence by imaging a new beta-actin plasmid containing a single 1X FLAG tag rather than the 10X SM FLAG tag

      To determine how many proteins are being translated at any one time from a single molecule of mRNA, beta actin was labelled with only one FLAG tag (rather than 10). Multiple proteins can be translated from one mRNA using polysomes.

    6. As depicted in the inset of Fig. 2D, this allowed us to accurately compare (1) the appearance frequency and brightness, (2) the mobility, and (3) the size of translation sites.

      Using a system to track mRNA during translation, scientists were able to visualize the size of translation sites, the mobility of the molecules, and how bright each molecule appeared after being tagged with fluorescence.

    7. Western blots

      A procedure used to identify specific proteins by separating them according to size and then detecting them with antibodies.

    8. To see if we could also detect translation of smaller proteins, we constructed two plasmids

      To further test the abilities of identifying translation dynamics using the previously identified system, experiments were conducted using smaller proteins, beta-actin and H2B, rather than KDM5B.

    9. To further confirm protein/mRNA spots were translation sites, we treated cells with 4 μg/ml cycloheximide to slow elongation and load more ribosomes per transcript. Consistent with this, spots got brighter

      A fourth experiment was conducted to confirm that the co-located protein-mRNA spots were sites of translation: cycloheximide (4 μg/ml) was added, which slowed elongation while allowing more ribosomes to load onto a single transcript. Consistent with this, the spots became brighter.

    10. To test this, we treated cells with 50 μg/ml of puromycin

      To test whether the co-moving protein-mRNA spots (spots in the images, Fig. 1, C and D, where red and green dots were co-located) were sites of translation, puromycin (50 μg/ml) was added to U2OS cells containing the transiently transfected plasmid. Puromycin inhibits translation while the mRNAs remain present.

  7. Aug 2017
    1. to areas in which changes from scenario RCP2.6 already appear (red areas)

      The authors do not include changes that occur in the RCP2.6L scenario in the Figure 3H map. Why might they have left that out?

    2. for scenarios RCP2.6L, RCP2.6, RCP4.5, and RCP8.5, respectively, at the end of the 21st century.

      The four maps for the future pathways, 3D to 3G, are based on putting the climate data for year 2100 for each pathway into the BIOME4 model, running in "forward" mode.

    3. reconstructed (rec) from pollen for the present

      The first map, 3A, comes from putting pollen core data from the past century into the BIOME4 inversion model, the same method as is used for the past biome map (3B).

    4. o assess the 1.5°C target, we created a fourth class (denoted RCP2.6L) from selected CMIP5 scenarios

      None of the existing RCPs resulted in a projection of 1.5°C, so the researchers created a new model so that they could evaluate the scenario.

    1. To determine whether the benefits afforded by tau reduction were sustained, we examined older mice.

      By examining older hAPP mice with and without tau, the authors could test whether the benefits of tau reduction persist as an animal ages.

    2. Probe trials, in which the platform was removed and mice were given 1 min to explore the pool, confirmed the beneficial effect of tau reduction

      After the researchers trained all the mice to swim to a platform hidden under the water surface, they removed the platform and measured how much time the mice spent in the area where the platform used to be. In this way the researchers were able to test the memory of the mice.

    3. hAPP/Tau+/+ mice took longer to master this task (Fig. 1A; P < 0.001). In contrast, hAPP/Tau+/– and hAPP/Tau–/– mice performed at control levels.

      Alzheimer’s mice (with hAPP) that had normal amounts of tau (Tau+/+) took longer to learn to swim to a visible platform than the other five types of mice.

    4. We crossed hAPP mice (11) with Tau–/– mice (12) and examined hAPP mice with two (hAPP/Tau+/+), one (hAPP/Tau+/–), or no (hAPP/Tau–/–) endogenous tau alleles, compared with Tau+/+, Tau+/–, and Tau–/– mice without hAPP (13).

      The authors used mice that were genetically engineered to express a human copy of the amyloid precursor protein (called hAPP mice). hAPP mice are a common animal model of Alzheimer’s disease. They develop amyloid plaques and severe memory and cognitive problems later in life, just like humans with the disease.

      The authors bred these hAPP mice with other mice that were missing both their genes for the tau protein (called Tau-/- mice). From this breeding plan, the authors produced hAPP mice with normal amounts of tau (hAPP/Tau+/+), with half the normal amount of tau (hAPP/Tau+/-), and with no tau (hAPP/Tau-/-).

      They also produced mice without the human APP gene that had normal, half, and no tau.

    1. Blast-related tau phosphorylation was also detected when quantitated as a ratio of phosphorylated tau protein to total tau protein (Fig. 5, E, F, H, and J).

      The authors saw an increase in total tau protein in blast-exposed mice. To confirm that this increase was also true for phosphorylated tau, they computed the ratio of phosphorylated tau to total tau.

    2. we performed immunoblot analysis of tissue homogenates prepared from brains harvested from mice 2 weeks after single-blast or sham-blast exposure

      The authors used western blotting to detect abnormal phosphorylated tau and confirm their electron microscopy observations. They confirmed that levels of phosphorylated tau were elevated in the brains of blast-exposed mice as compared to the control group.

      For more on western blotting, see the Journal of Visualized Experiments:

      http://www.jove.com/science-education/5065/the-western-blot

    3. Gross examination of postmortem brains from both groups of mice was unremarkable and did not reveal macroscopic evidence of contusion, necrosis, hematoma, hemorrhage, or focal tissue damage (Fig. 3, A to F, and fig. S8)

      This is an important "negative" finding that is reported in almost all cases of blast exposure. There is a lack of gross (visible to the naked eye) brain injury in the brains of people that have been exposed to a nonpenetrating blast.

      The fact that this finding is replicated in this study encouraged the authors to explore whether there were "invisible injuries" in the brains of blast-exposed mice.

    4. evaluated pressure tracings in the hippocampus of intact living mice (Fig. 2B) and compared results to the same measurements obtained in isolated mouse heads severed at the cervical spine

      To test the water hammer effect, the authors compared the pressure resulting from a blast wave both inside and outside the head of a living mouse. They then compared these measurements to similar measurements performed on a decapitated mouse head.

      Because the disembodied mouse head had no vascular system or thorax, the water hammer could not contribute to the shock wave pressure inside the head. However, the authors observed similar shock wave pressures in both the live mouse and the mouse head, showing that the water hammer effect cannot be the primary source of brain damage.

    5. ConWep (Conventional Weapons Effects Program)

      Explosion simulation software based on a large data base of experimentally obtained blast data. ConWep enables comparative analysis of blasts. The authors used the software to verify that their simulated blasts closely matched explosions that would happen in the field.

    6. Wild-type C57BL/6 male mice

      In animal studies (especially those using mice) it is important for the researchers to identify the genotype of the species they're studying. This helps other researchers either replicate the findings or identify errors in the authors' reasoning due to the genetic makeup of the model organism.

      C57BL/6 mice are the most commonly used nontransgenic mouse in biomedical research. They are called "wild-type" because their genetics have not been changed by humans.

      Mouse colonies like this are inbred to make sure the mice are as genetically identical as possible. This helps eliminate extra, unexpected variables in an experiment that could affect the results in unknown ways.

    7. compressed gas blast tube

      The authors designed a "shock tube" to create controlled blasts. A shock tube is basically a wind tunnel: Pressure is built up on one side and suddenly released, creating a shock wave that travels down the tube.

    8. blast neurotrauma model to investigate mechanistic linkage between blast exposure, CTE neuropathology, and neurobehavioral sequelae.

      After finding identical CTE-linked problems in the brains of military veterans with blast exposure and athletes who suffered head injuries, the authors investigated how blast exposure might cause brain injury using a mouse model.

      Mice are commonly used to model human diseases.

    9. CTE neuropathology in postmortem brains from military veterans with blast exposure and/or concussive injury and young athletes with repetitive concussive injury.

      In this figure, the authors look at the CTE neuropathology (neurological abnormalities) in blast-exposed military veterans and young American football athletes.

      Through postmortem analysis (still the only way to identify CTE), they found the same disease in both groups.

      From these findings, the authors were motivated to investigate whether blast injury would cause the same disease as sport-related head injuries.

    10. Control sections omitting primary antibody demonstrated no immunoreactivity.

      Immunohistochemistry (IHC) is a technique used to identify the biochemical and cellular cause of disease in a tissue.

      First, a tissue is treated with a primary antibody. This antibody binds to the target protein in the tissue.

      Next, the sample is treated with a second antibody that binds to and detects the primary antibody. This second antibody has a colored or fluorescent tag so that it can be detected with a microscope or other tool.

      Question: Why do scientists use a second, additional antibody instead of just tagging the primary antibody and using that for analysis?

    11. monoclonal antibody Tau-46 (Fig. 1T) directed against phosphorylation-independent tau protein

      Tau-46 is a "pan-tau" stain, meaning it detects all (or many) forms of tau protein, both normal and abnormal.

      In normal control cases, nonphosphorylated tau immunostaining would be diffuse and light. In pathologic cases, phosphorylated tau immunostaining would be present in neurons and glia.

    1. Spatially detailed, annual pesticide measurements, including neonicotinoid insecticides, were available for the United States after 1991.

      The authors used these to measure the mean annual exposure of species to pesticides. See the Supplementary Materials for more information.

    2. If species expanded their northern range limits to track recent warming, their ranges should show positive (northward) latitudinal shifts, but cool thermal limits should be stable through time.

      Here the authors outline one of their main hypotheses (species will expand their northern range limits to track climate change), and the subsequent prediction.

    3. All data and supporting scripts are available from Dryad Digital Repository: doi:10.5061/dryad.gf774.

      If you're interested in downloading the data and reproducing the experiments the authors did, visit here! The authors provide analysis scripts, so by downloading the free statistical software R you can easily repeat their analyses and reproduce their figures yourself. Try it out!

    1. induces a pro-social motivational state in rats, leading them to open the restrainer door and liberate the cagemate

      The main hypothesis tested was whether rats would voluntarily act to free a trapped cagemate out of sympathy for it.

      The authors ask whether the free rat, upon seeing and hearing its trapped cagemate, will act to release the cagemate.

  8. Jul 2017
    1. We performed comprehensive neuropathological analyses (table S1)

      Neuropathological analyses consisted of staining thin sections of postmortem brain tissue from autopsy donors and examining these specimens under a microscope for evidence of disease.

      The presence, distribution, and phosphorylation state of tau protein (whether or not the protein has a phosphoryl group attached) are used to confirm a CTE diagnosis after death.

    1. possibility that peer reviewers may be rewarding an applicant’s grant proposal writing skills rather than the underlying quality of her work

      In this model, the authors try to control for the fact that an application could be selected because the applicant writes well, rather than based on the quality of the application.

    2. we employ a probabilistic algorithm developed by Kerr to determine applicant gender and ethnicity (Hispanic or Asian)

      The algorithm in question was developed by William Kerr to estimate the contribution of Chinese and Indian scientists to the U.S. Patent and Trademark Office.
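
      Name-matching algorithms of this kind compare surnames against reference lists of ethnically distinctive names and assign a group only when the estimated probability is high enough. Below is a purely illustrative sketch; the names, probabilities, and threshold are hypothetical, not Kerr's actual data or code:

      ```python
      # Toy name-based classifier in the spirit of a probabilistic
      # name-matching algorithm. The lookup table is hypothetical.
      NAME_PROBS = {
          "garcia": {"Hispanic": 0.92, "Asian": 0.01},
          "nguyen": {"Hispanic": 0.01, "Asian": 0.96},
          "smith":  {"Hispanic": 0.02, "Asian": 0.01},
      }

      def classify(surname, threshold=0.85):
          """Return an ethnicity label if its estimated probability
          exceeds the threshold; otherwise return 'unmatched'."""
          probs = NAME_PROBS.get(surname.lower(), {})
          for group, p in probs.items():
              if p >= threshold:
                  return group
          return "unmatched"

      print(classify("Nguyen"))  # -> Asian
      print(classify("Smith"))   # -> unmatched
      ```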

    3. Our regression results include separate controls for each type of publication: any authorship position, and first or last author publications.

      What the authors mean here is that they used statistical controls to remove the effect that an applicant's position in a publication's author list (any position, first author, or last author) can have.

    4. our paper asks whether NIH selects the most promising projects to support.

      Previous work has shown that receiving a grant increases scientific productivity. However, the paper authors want to know if NIH is awarding grants to projects that will make the best use of the money.

  9. Jun 2017
    1. To test whether firing of somatosensory neurons causes USVs, we microstimulated the trunk cortex (Fig. 4G).

      The authors used a small electrical current to trigger neurons in the somatosensory cortex to fire in the absence of tickling. They then determined whether this neural stimulation caused the production of USVs.

    2. We simultaneously performed single-unit recordings in the trunk region of the somatosensory cortex (Fig. 2A). We obtained high-quality recordings of neuronal responses elicited by tickling and gentle touch (Fig. 2B and fig. S2A).

      The authors implanted a microelectrode into the somatosensory cortex of the rat's brain and recorded neural activity (rate of neuron firing) in this region during tickling, touch, breaks, and play.

  10. May 2017
    1. In addition to the quantitative assessments of replication and effect estimation, we collected subjective assessments of whether the replication provided evidence of replicating the original result.

      Finally, the authors included a last measure for replication success: a subjective rating. All researchers who conducted a replication were asked if they thought their results replicated the original effect successfully. Based on their yes or no answers, the authors calculated subjective replication success.

    2. “coverage,” or the proportion of study-pairs in which the effect of the original study was in the CI

      The authors compared the size of the original study effects with their replication study's effects to determine if the original study results fell within the confidence interval (CI) of the replication.

      If the original study's effect size fell within the CI of the replication study, the two effect sizes can be assumed to be similar.
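
      As a rough sketch of this coverage check, assuming effect sizes expressed as correlation coefficients and a 95% CI built with the standard Fisher z-transformation (the function name and numbers are ours, for illustration only):

      ```python
      import math

      def original_in_replication_ci(r_orig, r_rep, n_rep, z_crit=1.96):
          """Check whether the original correlation lies inside the
          95% CI of the replication correlation (Fisher z method)."""
          z_rep = math.atanh(r_rep)        # Fisher z-transform
          se = 1.0 / math.sqrt(n_rep - 3)  # standard error of z
          lo, hi = z_rep - z_crit * se, z_rep + z_crit * se
          # Transform the CI bounds back to the correlation scale.
          return math.tanh(lo) <= r_orig <= math.tanh(hi)

      print(original_in_replication_ci(r_orig=0.40, r_rep=0.20, n_rep=120))
      ```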

    3. We transformed effect sizes into correlation coefficients whenever possible

      The third indicator of replication success was the effect sizes of original and replication studies. The authors calculated correlation coefficients to indicate effect sizes.

      In a single study, when the means of two groups are very different (a large effect), the correlation coefficient will be close to 1; when the means of the two groups are similar (a small effect), it will be close to 0.

      The effect size of original studies was always coded as positive (values between 0 and 1). When the effect in the relevant replication study went in the same direction, the effect size was also coded as positive (values between 0 and 1), but when the effect in the replication went in the other direction, the effect size was coded as negative (values between -1 and 0).
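
      For instance, a t-test result can be converted to a correlation coefficient with the standard formula r = sqrt(t^2 / (t^2 + df)), signed by the direction of the effect. A minimal sketch (our own illustration in Python, not the project's actual code):

      ```python
      import math

      def t_to_r(t, df, same_direction=True):
          """Convert a t statistic into a correlation-coefficient
          effect size, signed by the direction of the effect."""
          r = math.sqrt(t**2 / (t**2 + df))
          return r if same_direction else -r

      print(t_to_r(2.5, df=48))                        # ~ 0.34
      print(t_to_r(2.5, df=48, same_direction=False))  # ~ -0.34
      ```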

    4. distribution of P values of original and replication studies using the Wilcoxon signed-rank test

      In addition to comparing the proportion of studies yielding significant results, the authors compared the p-values of these studies to find out how similar they were to each other.
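
      A paired comparison like this can be run with scipy, for example (the p-values below are illustrative, not the project's data):

      ```python
      from scipy.stats import wilcoxon

      # Paired p-values from original and replication studies.
      p_orig = [0.001, 0.020, 0.030, 0.004, 0.015]
      p_rep  = [0.040, 0.300, 0.080, 0.020, 0.450]

      stat, p = wilcoxon(p_orig, p_rep)
      print(f"W = {stat}, p = {p:.3f}")
      ```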

    5. We tested the hypothesis that the proportions of statistically significant results in the original and replication studies are equal using the McNemar test for paired nominal data and calculated a CI of the reproducibility parameter.

      Next, the authors conducted another test to find out if the proportion of original studies that produced significant results was equal to or different from the proportion of replication studies that produced significant results.
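
      The McNemar test operates on a 2x2 table of paired outcomes: for each original-replication pair, whether each study was significant. A sketch with made-up counts:

      ```python
      from statsmodels.stats.contingency_tables import mcnemar

      # Rows: original significant? (yes / no)
      # Columns: replication significant? (yes / no)
      # Counts are illustrative, not the project's data.
      table = [[35, 62],
               [ 2,  1]]

      result = mcnemar(table, exact=True)
      print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
      ```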

    6. we tested the hypothesis that these studies had “no evidential value” (the null hypothesis of zero-effect holds for all these studies)

      The first analysis that the authors ran on the data assessed all replication studies that yielded non-significant results.

      The authors used Fisher's method to determine whether these studies, taken together, still contained evidence of real effects, or whether the null hypothesis of zero effect could hold for all of them.
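
      Fisher's method combines a set of independent p-values into a single test of the joint null hypothesis that all the effects are zero; scipy implements it directly (the values below are illustrative):

      ```python
      from scipy.stats import combine_pvalues

      # p-values from replications that were individually nonsignificant.
      pvals = [0.06, 0.12, 0.30, 0.08, 0.51, 0.09]

      stat, p = combine_pvalues(pvals, method="fisher")
      print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # small p => some evidential value
      ```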

    7. However, original studies that interpreted nonsignificant P values as significant were coded as significant (four cases, all with P values < 0.06).

      Here, the authors explain how they deal with the problem that some of the original studies reported results as significant when they were, in fact, not significant.

      In each case, the threshold that is customarily set to determine statistical significance (p<0.05) was not met, but all reported p-values fell very close to this threshold (0.06>p>0.05). Since the original authors treated these effects as significant, the current analysis did so as well.

    8. There is no single standard for evaluating replication success

      Because the large-scale comparison of original and replication studies is a new development in the field of psychology, the authors had to formulate a plan for their analysis that did not rely much on previous research.

      They decided to use 5 key indicators for evaluating the success of the replications. They compared the original and the replicated studies in terms of the number of significant outcomes, p-values, and effect sizes. They also assessed how many studies were subjectively judged to replicate the original effect. Finally, they ran a meta-analysis of the effect sizes.

    9. subjective assessments of replication outcomes

      One of the indicators for whether a study was replicated successfully or not was a subjective rating: each team of replicators was asked if their study replicated the original effect (yes or no).

    10. sampling frame and selection process

      The authors wanted to make sure that the studies that were selected for replication would be representative of psychological research. Representativeness was important because it would mean that the conclusions that would be drawn from the replication outcomes could be cautiously extended to assumptions about the state of the field overall.

      At the same time, they had to make sure that the studies selected could also be conducted (that is, that one of the coauthors had the necessary skill or equipment to collect the data).

      To achieve this goal, a step-wise procedure was used: starting from the first issue of 2008 from three important psychology journals, 20 studies were selected and matched with a team of replicators who would conduct the replication attempt. If articles were left over because no-one could conduct the replication, but more replication teams were willing to conduct a study, another 10 articles were made available. In the end, out of 488 studies drawn from the population of studies, the authors attempted to replicate 100.

    11. constructed a protocol for selecting and conducting high-quality replications

      Before collecting data for the replication studies, the authors produced a detailed protocol that described how they were going to select the studies that were available for replication, how they would decide which effect they would attempt to replicate, and which principles would guide all replication attempts.

      Importantly, this protocol was made public, and all individual replication attempts had to adhere to it.

    1. Among sessile organisms, there were marked differences in survivorship and repair after initial injury.

      The authors observed injured corals and sponges for several months after Hurricane Allen to see if they survived, noting the initial degree of injury.

    2. Differences in damage to different growth forms (7) were particularly striking for corals,

      The authors measured the degree of damage to certain species of corals, taking care to note the growth form and size. This allowed them to look at how shape and size influenced patterns of damage.

    3. Not all patchiness can be easily explained, but a number of patterns emerge.

      The authors observed a variety of sites with different profiles both before and after the hurricane. They describe the variation within and between sites here.

    4. We consider first the effects of spatial factors and then describe the immediate impact on common organisms and their subsequent responses over the following 7 months.

      The authors noted damage to corals after the hurricane and tracked their subsequent recovery (or death).

    5. they collected data comparable to those taken previously on routine patterns and processes

      Because these reefs had been well-surveyed before Hurricane Allen, the authors conducted surveys using the same methods after the hurricane to examine the impacts of the storm.

    1. brain organoids recapitulate the orchestrated cellular and molecular early events comparable to the first trimester fetal neocortex

      Neurospheres are useful for modeling very early (embryonic) development, while organoids are used to study later stages of development.

    2. In addition to MOCK infection, we used dengue virus 2 (DENV2), a flavivirus with genetic similarities to ZIKV (11, 19), as an additional control group

      The authors also compared ZIKV infection to dengue virus 2 (DENV2) infection. DENV2 is similar to ZIKV.

    3. The growth rate of 12 individual organoids (6 per condition) was measured during this period

      Both infected and uninfected organoids were immersed in a fixative solution to "freeze" them and allow them to be visualized.

      It was then possible to use an electron microscope to compare the ultrastructure of infected cells and uninfected cells.

    4. morphological abnormalities and cell detachment

      Neurospheres that contained cells infected with Zika virus were oddly shaped, and some cells broke away.

    5. mock-

      Mock NSCs were not infected with Zika.

    6. to explore the consequences of ZIKV infection during neurogenesis and growth

      In order to obtain neural stem cells from human iPS cells (induced pluripotent stem cells), researchers cultured the iPS cells in a special medium.

      To create neurospheres and organoids, the neural stem cells were split and cultured again in a special medium.

      Finally, ZIKV was diluted and added to the different types of culture for 2 hours.

    1. we cannot directly assess whether the NIH systematically rejects high-potential applications

      Because the authors only looked at projects that received grant funding, their analysis does not take into account how many high-potential projects were rejected by peer review.

    2. our estimates are likely downward biased

      The authors acknowledge that there is sometimes a long delay between a grant award and patenting, so their analysis may not be a good indicator of how relevant research is to commercial applications.

    3. We control for the same variables as described in Model 6

      The patent outcomes, like the grant outcomes, are adjusted for the same indicators: institutional quality and the gender and ethnicity of applicants.

    4. Our final analysis explores whether peer reviewers’ value-added comes from being able to identify transformative science, science with considerable applied potential, or from being able to screen out very low-quality research.

      Finally, the authors wanted to figure out if peer reviewers are good at choosing applicants for their innovation, their practicality, or if they are simply good at weeding out low-quality research.

    5. These residuals represent the portions of grants’ citations or publications that cannot be explained by applicants’ previous qualifications or by application year or subject area

      The authors removed the influence of the grant applicant's background, demographics, and writing skill in order to look at what effect a reviewer's expertise has.
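
      Residualizing works in two steps: regress the outcome on the applicant's observable qualifications, then study what the leftover variation correlates with. A minimal sketch with statsmodels; the variable names and data are hypothetical, not the authors' actual specification:

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical grant-level data.
      df = pd.DataFrame({
          "citations":  [120, 45, 300, 80, 150, 60],
          "prior_pubs": [30, 10, 55, 18, 40, 12],
          "score":      [10, 35, 5, 28, 15, 40],  # lower = better
      })

      # Step 1: remove what past qualifications explain.
      df["resid"] = smf.ols("citations ~ prior_pubs", data=df).fit().resid

      # Step 2: ask whether review scores still predict the leftover variation.
      print(smf.ols("resid ~ score", data=df).fit().params)
      ```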

    6. adds controls describing a PI’s publication history

      The authors control for yet another potential variable, an applicant's research background (they use the PI's publication history to do this).

    7. We also include NIH institute-level fixed effects to control for differences in citation and publication rates by fields, as defined by a grant’s area of medical application.

      The authors try to remove the effect of an article's field on its impact. For example, a biochemistry article may appear to have a smaller impact because of the high rate of publication and citation in that field, whereas a physics article's impact may be inflated due to a lower publication and citation rate.
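
      Fixed effects of this kind are typically added as one dummy variable per category. A sketch of the idea (hypothetical variable names and data, not the paper's model):

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical grant-level data; 'institute' is the funding NIH institute.
      df = pd.DataFrame({
          "citations": [120, 45, 300, 80, 150, 60, 90, 200],
          "score":     [10, 35, 5, 28, 15, 40, 22, 8],
          "institute": ["NCI", "NCI", "NIMH", "NIMH",
                        "NIA", "NIA", "NCI", "NIMH"],
      })

      # C(institute) adds one dummy per institute, absorbing field-level
      # differences in baseline citation and publication rates.
      fit = smf.ols("citations ~ score + C(institute)", data=df).fit()
      print(fit.params)
      ```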

    8. potential concerns

      Several factors can lead to the trends in Figure 1 being misinterpreted, like the age of the grant and the field of study. The authors address these concerns by adjusting their model to account for these effects.

    9. a 1-SD worse score is associated with a 14.6% decrease in grant-supported research publications and a 18.6% decrease in citations to those publications

      Here, the authors estimated how much a decrease of one standard deviation on the percentile score affected the number of publications and citations of a grant recipient.
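
      If the underlying regression models log outcomes (a common choice for publication and citation counts), percentage effects map to coefficients via 100 * (exp(beta) - 1). A back-of-envelope check of the quoted numbers under that assumption; the beta values here are inferred from the percentages, not taken from the paper:

      ```python
      import math

      # Under a log-linear model, a coefficient beta on the standardized score
      # implies a 100 * (exp(beta) - 1) percent change per 1-SD worse score.
      for beta in (-0.158, -0.206):
          print(f"beta = {beta}: {100 * (math.exp(beta) - 1):.1f}% change")
      # -> roughly -14.6% and -18.6%, matching the figures quoted above
      ```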

    10. measure applicant-level characteristics

      The authors measured characteristics such as an applicant's grant history and institutional affiliation to see whether the applicant's previous work has an impact on the outcome of the grant application.

    11. patents that either directly cite NIH grant support or cite publications acknowledging grant support

      The last measure is the number of patents that cite those publications from (i), or acknowledge support from the grant.

    12. the total number of citations that those publications receive through 2013

      The second measure is the total number of citations the publications from (i) received through 2013.

    13. the total number of publications that acknowledge grant support within 5 years of grant approval

      The first measure of success is the number of papers a team published during the 5 years after they received the grant.

    14. funding is likely to have direct effect on research productivity

      The authors considered grants which were already funded and competing for renewal. This makes it easier to attribute differences in research productivity to the peer review process, rather than the amount of funding the project has.

    15. percentile score

      The percentile score is assigned by the peer review committee. It ranks all applications to determine which were most favored by the committee. A lower score means the committee liked the application more.

    16. peer review has high value-added if differences in grants’ scores are predictive of differences in their subsequent research output

      If the evaluation by the peer review committee is correlated with the quality of work put out by the research group, then peer review has high value-added (meaning, it is useful for choosing research groups with the highest potential).

    17. Because NIH cannot possibly fund every application it receives, the ability to distinguish potential among applications is important for its success.

      The outcome of this study could have important implications for how the NIH evaluates and chooses who it gives money to.

  11. Apr 2017
    1. These findings suggest that genotypic determinants may be critical factors that modulate temporal and phenotypic expression of TBI and late-emerging sequelae, including CTE.

      Although the wild-type mouse model has limitations (as do all experimental models), using laboratory animals allows researchers to test things that can't be directly tested in humans.

      For example, mouse models are useful tools to study the effects of different genes. Biological research has generated thousands of genetically modified mouse strains that allow researchers to test hypotheses regarding specific genes and how these genes influence human diseases.

    2. blast exposure did not impair gross neurological functioning with respect to locomotion, exploratory activity, and thigmotaxis (an indicator of murine anxiety assessed by movement close to the wall of the experimental apparatus)

      It was important to assess locomotion (movement) because the behavioral cognition test used relies on locomotion for its results.

      If locomotion were impaired, then the results of the behavioral test could not be linked to cognition alone.

      The authors did not detect any abnormalities in locomotion in either the control or experimental group, meaning that any cognitive abnormalities they observed were not due to locomotor defects. This strengthens the argument that any of the abnormalities they saw were due to blast-related interference with brain functions affecting learning and memory.

    3. We found marked impairments of stimulus-evoked LTP in mouse slices prepared 2 weeks and 1 month after blast exposure

      As above, the authors found impairments in a neuronal process thought to be involved in memory storage.

    4. significantly slowed 2 weeks after blast exposure, an effect that persisted for at least 1 month

      Following blast exposure, the authors observed a slowing of axonal conduction in structures of the hippocampus that are important for long-term potentiation (LTP).

    5. We hypothesized that blast forces exerted on the skull would result in head acceleration-deceleration oscillation of sufficient intensity to induce persistent brain injury (“bobblehead effect”)

      In this section, the authors sought to find out if their mouse model of blast neurotrauma caused similar brain abnormalities as those observed in human CTE cases.

      If so, it would mean there is a causal linkage between blast exposure and development of CTE later in life.

    6. Kinematic analysis of high-speed videographic records of head movement during blast exposure confirmed rapid oscillating acceleration-deceleration of the head in the horizontal and sagittal planes of motion (Fig. 2, D to G, and video S1).

      The authors used high-speed video cameras (which capture 100,000 frames per second) to record mouse head movements during blast exposure.

      The videos showed violent winds pushing and pulling on the head during the blast, which the authors measured as the kinematics (motions over time) of the head.

      These pushing and pulling forces during blast exposure were similar to sport-related impacts leading to injury.

    7. blast wind velocity was 150 m/s (336 miles/hour)

      After investigating the effects of blast waves on the skull and brain, the authors turned their attention to the effects of blast wind.

      See the following video for a look at the destructive nature of these blast winds during a 1955 atomic weapons test:

      https://www.youtube.com/watch?v=ztJXZjIp8OA

    8. To investigate intracranial pressure (ICP) dynamics during blast exposure, we inserted a needle hydrophone into the hippocampus of living mice and monitored pressure dynamics during blast exposure.

      The first potential injury source the authors investigated was the interaction of the blast wave with the brain. The authors tested whether this interaction was destructive to the brain. To do this, they measured the pressure of the brain in living mice using a small needle hydrophone (a microphone used to detect sound waves underwater).

    9. Evidence of axon degeneration, axon retraction bulbs, and axonal dystrophy were observed in the subcortical white matter subjacent to cortical tau pathology

      In areas of the cortex where the authors saw irregular tau proteins, they saw damage to the axons in the white matter beneath.

    10. four young-adult normal control subjects

      Although not shown in the figure below, the authors also examined the brains of healthy controls that matched the age and sex of the experimental subjects. They used the same IHC technique.

      No abnormalities were observed in the control cases.

    11. Neuropathological comparison to brains from young-adult amateur American football players (Fig. 1, C, D, G, and H) with histories of repetitive concussive and subconcussive injury exhibited similar CTE neuropathology marked by perivascular NFTs and glial tangles with sulcal depth prominence in the dorsolateral and inferior frontal cortices.

      The abnormalities seen in the brains of head-injured athletes and blast-exposed military veterans were similar, so the authors hypothesized that the cause of CTE in these two groups must also be similar. However, it is currently not possible to diagnose CTE until after a patient has died and their brain can be autopsied.

      The authors identified a correlation between neurotrauma and CTE, but in order to test their hypothesis and prove a causal connection, they had to perform controlled experiments. They used a mouse model for this.

    12. NFTs and dystrophic axons immunoreactive for monoclonal antibody CP-13 (Fig. 1, A to I, L, Q, R, and U, and fig. S1) directed against phosphorylated tau protein at Ser202 (pS202) and Thr205 (pT205), monoclonal antibody AT8 (Fig. 1S) directed against phosphorylated tau protein at Ser202 (pS202) and Thr205 (pT205)

      Monoclonal antibodies are used in immunohistochemical staining to detect abnormal phosphorylated tau proteins. In this study, several different antibodies were used to detect various forms of tau proteins in different places in the brain.

      If the specific set of abnormalities is detected in the postmortem brain of someone who had a history of neurotrauma, the diagnosis is CTE.

    13. compared these neuropathological analyses

      The authors used neuropathological analysis to compare the brains of military veterans who were exposed to blasts with young American football players, a professional wrestler, and normal controls.

    14. Head immobilization during blast exposure prevented blast-induced learning and memory deficits.

      If a mouse's head was prevented from moving when it was exposed to a blast, the learning and memory deficits described above did not occur.

    15. Given the overlap of clinical signs and symptoms in military personnel with blast-related TBI and athletes with concussion-related CTE, we hypothesized that common biomechanical and pathophysiological determinants may trigger development of CTE neuropathology and sequelae in both trauma settings.

      A common format of experimental design in medical research is to formulate a hypothesis based on findings in human patients and test the hypothesis in an animal model.

  12. Jan 2017
    1. Differences in damage to different growth forms (7) were particularly striking for corals

      The authors measured the degree of damage to certain species of corals, taking care to note the growth form and size. This allowed them to look at how shape and size influenced patterns of damage.

    2. Recovery of Surviving Sessile Organisms

      The authors observed injured corals and sponges for several months after Hurricane Allen to see if they survived, noting the initial degree of injury.

    3. significant at P < .001, Mann-Whitney U test

      The Mann-Whitney U test is a statistical test used when values are not normally distributed. Here it is used to compare live tissue coverage before and after the hurricane. The P-value of less than 0.001 indicates that there was a statistically significant difference between the two groups.
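
      In practice the test can be run like this (the cover values below are illustrative, not the study's data):

      ```python
      from scipy.stats import mannwhitneyu

      # Percent live tissue cover at surveyed points before and after
      # the hurricane.
      before = [62, 55, 70, 48, 66, 59, 73, 51]
      after  = [21, 15, 34, 10, 28, 19, 25, 12]

      stat, p = mannwhitneyu(before, after, alternative="two-sided")
      print(f"U = {stat}, p = {p:.4f}")
      ```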

    4. Not all patchiness can be easily explained, but a number of patterns emerge.

      The authors observed a variety of sites with different profiles both before and after the hurricane. They describe the variation within and between sites here.

    5. they collected data comparable to those taken previously on routine patterns and processes

      Because these reefs had been well-surveyed before Hurricane Allen, the authors conducted surveys using the same methods after the hurricane to examine the impacts of the storm.

    6. We consider first the effects of spatial factors and then describe the immediate impact on common organisms and their subsequent responses over the following 7 months.

      The authors noted damage to corals after the hurricane and tracked their subsequent recovery (or death).

    1. Meta-analysis combining original and replication effects

      Moreover, the authors planned to combine the results of each original and replication study to test whether the cumulative effect size differed significantly from zero. If it did, this could be taken as an indication that the effect exists in reality, and that neither the original study nor the replication had erroneously picked up on an effect that does not actually exist.
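
      One standard way to combine an original and a replication effect is inverse-variance (fixed-effect) meta-analysis on the Fisher z scale. A minimal sketch, assuming correlation effect sizes (our illustration, not the project's code):

      ```python
      import math
      from scipy.stats import norm

      def combine(r1, n1, r2, n2):
          """Fixed-effect meta-analysis of two correlations
          using inverse-variance weights on the Fisher z scale."""
          z1, z2 = math.atanh(r1), math.atanh(r2)
          w1, w2 = n1 - 3, n2 - 3          # weight = 1 / variance of z
          z = (w1 * z1 + w2 * z2) / (w1 + w2)
          se = 1.0 / math.sqrt(w1 + w2)
          p = 2 * norm.sf(abs(z / se))     # two-sided test of zero effect
          return math.tanh(z), p

      r, p = combine(r1=0.40, n1=50, r2=0.15, n2=120)
      print(f"combined r = {r:.2f}, p = {p:.4f}")
      ```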

    2. hypothesis that this proportion is 0.5

      In this case, testing the null hypothesis that half of the replication effects are stronger than the original study effects amounts to assuming that any difference between the effect sizes is due to chance. The alternative hypothesis is that the replication effects are, on average, stronger or weaker than the original study effects.
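
      Counting how many replication effects were stronger than the originals and testing that count against a proportion of 0.5 is a sign test, i.e. a binomial test (the counts below are illustrative, not the project's data):

      ```python
      from scipy.stats import binomtest

      # Suppose 24 of 95 comparable replication effects were stronger
      # than the corresponding original effect.
      result = binomtest(k=24, n=95, p=0.5)
      print(f"p = {result.pvalue:.2e}")  # small p => not a chance difference
      ```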

    3. subjective assessments of replication outcomes

      One of the indicators for whether a study was replicated successfully or not was a subjective rating: each team of replicators was asked if their study replicated the original effect (yes or no).

    4. sampling frame and selection process

      The authors wanted to make sure that the studies that were selected for replication would be representative of psychological research; that is, that they would give a good picture of the kinds of studies psychologists typically run. Representativeness was important because it would mean that the conclusions drawn from the replication outcomes could be cautiously extended to assumptions about the state of the field overall.

      At the same time, they had to make sure that the studies selected could also be conducted (that is, that one of the coauthors had the necessary skill or equipment to collect the data).

      To achieve this goal, a step-wise procedure was used: starting from the first issue of 2008 from three important psychology journals, 20 studies were selected and matched with a team of replicators who would conduct the replication attempt. If articles were left over because no-one could conduct the replication, but more replication teams were willing to conduct a study, another 10 articles were made available. In the end, out of 488 studies drawn from the population of studies, the authors attempted to replicate 100.

    5. Subjective assessment of “Did it replicate?”

      Finally, the authors used the subjective rating of whether the effect replicated as an indicator of replication success. Out of 100 replication teams, only 39 reported that they thought they had replicated the original effect.

    6. Comparing original and replication effect sizes

      With this third measure for replication success, the authors further compared the sizes of the original and replicated effects. They found that the original effect sizes were larger than the replication effect sizes in more than 80% of the cases.

    7. Evaluating replication effect against original effect size

      In a second way of looking at replication success, the authors checked whether the effect sizes of the original studies fell within the confidence intervals of the replication studies. Using this measure, they found that fewer than half of the replications showed an effect size close enough to the original to count as a successful replication.

    8. Evaluating replication effect against null hypothesis of no effect

      First, the authors used the 5 measures for replication success to check to what extent the 100 original studies could be successfully replicated.

      In a first glance at the results, the authors checked how many replications "worked" by analyzing how many replication studies showed a significant effect in the same direction (positive or negative) as the original studies. Of the 100 original studies, 97 showed a significant effect. Because no replication study has 100% statistical power to detect a true effect, even if all the original effects were true, only around 89 successful replications could have been expected. However, results showed that only 35 studies were successfully replicated.

    9. Analysis of moderators

      Last, the authors wanted to know if successfully replicable studies differed in a systematic way from studies that could not be replicated. For this, they checked whether a number of characteristics of the original studies were systematically related to replication success.

    10. Subjective assessment of “Did it replicate?”

      Finally, the authors included a last measure for replication success: a subjective rating. All researchers who conducted a replication were asked if they thought their results replicated the original effect successfully. Based on their yes or no answers, the authors calculated subjective replication success.

    11. “coverage,” or the proportion of study-pairs in which the effect of the original study was in the CI of the effect of the replication study

      In this test for replication success, the authors compared the effect size of the original study with that of the replication study to see whether the two were similar enough that both samples plausibly reflect the same effect size in the population.

    12. Correlates of reproducibility

      Finally, the authors wanted to know if successfully replicable studies differed from studies that could not be replicated in a systematic way. As the criterion for replication success, they used their first analysis (significance testing).

      They found that studies from the social psychology journal were less likely to replicate than those from the two journals publishing research in cognitive psychology. Moreover, studies were more likely to replicate if the original study reported a lower p-value and a larger effect size, and if the original finding was subjectively judged to be less surprising. However, successfully replicated studies were not judged to be more important for the field, or to have been conducted by original researchers or replicators with higher expertise than failed replications.

    13. The last measure for the success of the replications was a subjective rating from the replication teams. Each team was asked if they thought they had replicated the original effect. Out of 100 studies, 39 were judged to be successful replications.

    14. Combining original and replication effect sizes for cumulative evidence

      Fourth, the authors combined the original and replication effect sizes and calculated a cumulative estimation of the effects. They wanted to see how many of the studies that could be analyzed this way would show an effect that was significantly different from zero if the evidence from the original study and that of the replication study was combined.

      Results showed that 68% of the studies analyzed this way indicated that an effect existed. In the remaining 32% of the studies, the effect found in the original study, when combined with the data from the replication study, could no longer be detected.

    15. Statistical analyses

      Because the large-scale comparison of original and replication studies is a new development in the field of psychology, the authors had to formulate a plan for their analysis that did not rely much on previous research. They decided to use 5 key indicators for evaluating the success of the replications: they compared the original and replicated studies in terms of the number of significant outcomes, p-values, and effect sizes; assessed how many studies were subjectively judged to replicate the original effect; and, finally, ran a meta-analysis of the effect sizes.

    16. Aggregate data preparation

      After each team had completed the replication attempt, independent reviewers checked that their procedure was well documented and adhered to the initial replication protocol, and that the statistical analyses of the effects selected for replication were correct.

      Then, all the data were compiled to conduct analyses not only on the individual studies, but also across all replication attempts. The authors wanted to know if studies that replicated and those that did not would differ. For instance, they investigated whether studies that replicated were more likely to come from one journal than another, or whether studies that did not replicate tended to have higher p-values than studies that could be replicated.

    17. constructed a protocol for selecting and conducting high-quality replications

      Before collecting data for the replication studies, the authors produced a detailed protocol that described how they were going to select the studies that were available for replication, how they would decide which effect in each study to attempt to replicate, and which principles would guide all replication attempts. Importantly, this protocol was made public, and all individual replication attempts had to adhere to it.

    1. (i) photomineralization of DOC (N = 97), (ii) partial photo-oxidation of DOC (N = 97)

      Measurement methods are summarized here and provided in detail in the supplemental information.

      Photomineralization and partial photo-oxidation were measured for each water sample by first filtering the bacteria out of the samples and then putting them in ultraclean, air-tight, transparent vials.

      Vials were exposed to sunlight for 12 hours. Control vials were wrapped in foil and placed alongside the sunlight-exposed vials.

      After the sunlight exposure, carbon dioxide (CO<sub>2</sub>) production and oxygen (O<sub>2</sub>) consumption were measured for each sample, as in the dark bacterial respiration experiment.

      The amounts of complete and partial oxidation that occurred were calculated based on the ratio of O<sub>2</sub> use to CO<sub>2</sub> production. This is possible because complete oxidation uses one molecule of O<sub>2</sub> for every CO<sub>2</sub> produced, whereas partial oxidation uses O<sub>2</sub> but does not produce any CO<sub>2</sub>.
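
      The partitioning follows directly from this stoichiometry: every molecule of CO<sub>2</sub> produced accounts for one molecule of O<sub>2</sub> used in complete oxidation, and any additional O<sub>2</sub> consumed is attributed to partial oxidation. A minimal sketch (the function name, units, and numbers are ours, not the study's):

      ```python
      def partition_oxidation(o2_consumed, co2_produced):
          """Split measured O2 consumption into complete oxidation
          (one O2 per CO2 produced) and partial oxidation (the rest).
          Hypothetical units: micromoles per liter."""
          complete = co2_produced               # O2 used producing CO2
          partial = o2_consumed - co2_produced  # O2 used without CO2
          return complete, partial

      complete, partial = partition_oxidation(o2_consumed=12.0, co2_produced=7.5)
      print(f"complete: {complete} umol/L, partial: {partial} umol/L")
      ```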

    2. We quantified dark bacterial respiration

      The details of how this was done are in the supplemental information. The method is summarized below:

      The authors measured dark bacterial respiration by putting unfiltered water samples in extremely clean, air-tight vials and allowing them to sit for 5–7 days at 6°C to 7°C (the average temperature of nonfrozen water at the sampling sites).

      As a control, the authors used water samples where all of the bacteria had been killed using mercury.

      At the end of the 5–7 days, the authors measured how much carbon dioxide (CO<sub>2</sub>) had been produced in each sample. In some samples, the amount of oxygen (O<sub>2</sub>) decrease was also measured as a way of validating the results. Bacterial respiration uses one molecule of O<sub>2</sub> for each molecule of CO<sub>2</sub> produced. The two methods returned the same results, indicating high-quality data.

  13. Nov 2016
    1. further characterize the consequences of ZIKV infection during different stages of fetal development

      The authors used models that allowed them to study early stages of brain development. They suggest there is more work to be done to determine the effects of ZIKV infection on later stages of fetal development.

    2. brain organoids recapitulate the orchestrated cellular and molecular early events comparable to the first trimester fetal neocortex

      Neurospheres are useful for modeling very early (embryonic) development, while organoids are used to study later stages of development.

    3. In addition to MOCK infection, we used dengue virus 2 (DENV2), a flavivirus with genetic similarities to ZIKV (11, 19), as an additional control group.

      The authors also compared ZIKV infection to dengue virus 2 (DENV2) infection. DENV2 is similar to ZIKV.

    4. reduced by 40% when compared to brain organoids under mock conditions

      Brain organoids infected with Zika virus were, on average, 40% smaller than mock-infected organoids.

    5. The growth rate of 12 individual organoids (6 per condition) was measured during this period

      Both infected and uninfected organoids were immersed in a fixative solution to "freeze" them and allow them to be visualized.

      It was then possible to use an electron microscope to compare the ultrastructure of infected cells and uninfected cells.