1,157 Matching Annotations
  1. Sep 2017
    1. To further confirm protein/mRNA spots were translation sites, we treated cells with 4 μg/ml cycloheximide to slow elongation and load more ribosomes per transcript. Consistent with this, spots got brighter

      A fourth experiment was conducted to confirm that the co-located protein-mRNA spots were sites of translation: 4 μg/ml of cycloheximide was added. Cycloheximide slows elongation while allowing more ribosomes to attach to a single transcript, so translation sites should appear brighter.

    2. To test this, we treated cells with 50 μg/ml of puromycin

      To test whether the co-moving protein-mRNA spots (spots in the images, Fig. 1, C and D, where a red dot and a green dot were co-located) were sites of translation, 50 μg/ml of puromycin was added to U2OS cells containing the transiently transfected plasmid. Puromycin inhibits translation while the mRNAs remain present.

  2. Aug 2017
    1. to areas in which changes from scenario RCP2.6 already appear (red areas)

      The authors do not include changes that occur in the RCP2.6L scenario in the Figure 3H map. Why might they have left that out?

    2. for scenarios RCP2.6L, RCP2.6, RCP4.5, and RCP8.5, respectively, at the end of the 21st century.

      The four maps for the future pathways, 3D to 3G, are based on putting the climate data for year 2100 for each pathway into the BIOME4 model, running in "forward" mode.

    3. reconstructed (rec) from pollen for the present

      The first map, 3A, comes from putting pollen core data from the past century into the BIOME4 inversion model, the same method as is used for the past biome map (3B).

    4. o assess the 1.5°C target, we created a fourth class (denoted RCP2.6L) from selected CMIP5 scenarios

      None of the existing RCPs projected warming of 1.5°C, so the researchers created a new class of scenarios (RCP2.6L) from selected CMIP5 runs so that they could evaluate this target.

    1. To determine whether the benefits afforded by tau reduction were sustained, we examined older mice.

      By examining older hAPP mice with and without tau, the authors could test how tau causes Alzheimer’s disease to progress as an animal ages.

    2. Probe trials, in which the platform was removed and mice were given 1 min to explore the pool, confirmed the beneficial effect of tau reduction

      After the researchers trained all mice to swim to a platform hidden under the water surface, they removed the platform and measured how much time the mice spent in the area where the platform used to be. In this way, the researchers tested the memory of the mice.

    3. hAPP/Tau+/+ mice took longer to master this task (Fig. 1A; P < 0.001). In contrast, hAPP/Tau+/– and hAPP/Tau–/– mice performed at control levels.

      Alzheimer’s mice (with hAPP) that had normal amounts of tau (Tau+/+) took longer to learn to swim to a visible platform than the other five types of mice.

    4. We crossed hAPP mice (11) with Tau–/– mice (12) and examined hAPP mice with two (hAPP/Tau+/+), one (hAPP/Tau+/–), or no (hAPP/Tau–/–) endogenous tau alleles, compared with Tau+/+, Tau+/–, and Tau–/– mice without hAPP (13).

      The authors used mice that were genetically engineered to express a human copy of the amyloid precursor protein (called hAPP mice). hAPP mice are a common animal model of Alzheimer’s disease. They develop amyloid plaques and severe memory and cognitive problems later in life, just like humans with the disease.

      The authors bred these hAPP mice with other mice that were missing both copies of the gene for the tau protein (called Tau-/- mice). From this breeding plan, the authors produced hAPP mice with normal amounts of tau (hAPP/Tau+/+), with half the normal amount of tau (hAPP/Tau+/-), and with no tau (hAPP/Tau-/-).

      They also produced mice without the human APP gene that had normal, half, and no tau.

    1. Blast-related tau phosphorylation was also detected when quantitated as a ratio of phosphorylated tau protein to total tau protein (Fig. 5, E, F, H, and J).

      The authors saw an increase in total tau protein in blast-exposed mice. To confirm that this increase was also true for phosphorylated tau, they computed the ratio of phosphorylated tau to total tau.

    2. we performed immunoblot analysis of tissue homogenates prepared from brains harvested from mice 2 weeks after single-blast or sham-blast exposure

      The authors used western blotting to detect abnormal phosphorylated tau and confirm their electron microscopy observations. They confirmed that levels of phosphorylated tau were elevated in the brains of blast-exposed mice as compared to the control group.

      For more on western blotting, see the Journal of Visualized Experiments:

      http://www.jove.com/science-education/5065/the-western-blot

    3. Gross examination of postmortem brains from both groups of mice was unremarkable and did not reveal macroscopic evidence of contusion, necrosis, hematoma, hemorrhage, or focal tissue damage (Fig. 3, A to F, and fig. S8)

      This is an important "negative" finding that is reported in almost all cases of blast exposure: there is a lack of gross (visible to the naked eye) brain injury in the brains of people who have been exposed to a nonpenetrating blast.

      The fact that this finding is replicated in this study encouraged the authors to explore whether there were "invisible injuries" in the brains of blast-exposed mice.

    4. evaluated pressure tracings in the hippocampus of intact living mice (Fig. 2B) and compared results to the same measurements obtained in isolated mouse heads severed at the cervical spine

      To test the water hammer effect, the authors compared the pressure resulting from a blast wave both inside and outside the head of a living mouse. They then compared these measurements to similar measurements performed on a decapitated mouse head.

      Because the disembodied mouse head had no vascular system or thorax, the water hammer could not contribute to the shock wave pressure inside the head. However, the authors observed similar shock wave pressures in both the live mouse and the mouse head, showing that the water hammer effect cannot be the primary source of brain damage.

    5. ConWep (Conventional Weapons Effects Program)

      Explosion simulation software based on a large database of experimentally obtained blast data. ConWep enables comparative analysis of blasts. The authors used the software to verify that their simulated blasts closely matched explosions that would happen in the field.

    6. Wild-type C57BL/6 male mice

      In animal studies (especially those using mice) it is important for the researchers to identify the genotype of the species they're studying. This helps other researchers either replicate the findings or identify errors in the authors' reasoning due to the genetic makeup of the model organism.

      C57BL/6 mice are the most commonly used nontransgenic mouse in biomedical research. They are called "wild-type" because their genetics have not been changed by humans.

      Mouse colonies like this are inbred to make sure the mice are as genetically identical as possible. This helps eliminate extra, unexpected variables in an experiment that could affect the results in unknown ways.

    7. compressed gas blast tube

      The authors designed a "shock tube" to create controlled blasts. A shock tube is basically a wind tunnel: Pressure is built up on one side and suddenly released, creating a shock wave that travels down the tube.

    8. blast neurotrauma model to investigate mechanistic linkage between blast exposure, CTE neuropathology, and neurobehavioral sequelae.

      After finding identical CTE-linked problems in the brains of military veterans with blast exposure and athletes who suffered head injuries, the authors investigated how blast exposure might cause brain injury using a mouse model.

      Mice are commonly used to model human diseases.

    9. CTE neuropathology in postmortem brains from military veterans with blast exposure and/or concussive injury and young athletes with repetitive concussive injury.

      In this figure, the authors look at the CTE neuropathology (neurological abnormalities) in blast-exposed military veterans and young American football athletes.

      Through postmortem analysis (still the only way to identify CTE), they found the same disease in both groups.

      From these findings, the authors were motivated to investigate whether blast injury would cause the same disease as sport-related head injuries.

    10. Control sections omitting primary antibody demonstrated no immunoreactivity.

      Immunohistochemistry (IHC) is a technique used to identify the biochemical and cellular cause of disease in a tissue.

      First, a tissue is treated with a primary antibody. This antibody binds to the target protein in the tissue.

      Next, the sample is treated with a second antibody that will bind to and detect the primary antibody. This second antibody carries a colored or fluorescent tag so that it can be detected with a microscope or other tool.

      Question: Why do scientists use a second, additional antibody instead of just tagging the primary antibody and using that for analysis?

    11. monoclonal antibody Tau-46 (Fig. 1T) directed against phosphorylation-independent tau protein

      Tau-46 is a "pan-tau" stain, meaning it detects all (or many) forms of tau protein, both normal and abnormal.

      In normal control cases, nonphosphorylated tau immunostaining would be diffuse and light. In pathologic cases, phosphorylated tau immunostaining would be present in neurons and glia.

    1. Spatially detailed, annual pesticide measurements, including neonicotinoid insecticides, were available for the United States after 1991.

      The authors used these to measure the mean annual exposure of species to pesticides. See the Supplementary Materials for more information.

    2. If species expanded their northern range limits to track recent warming, their ranges should show positive (northward) latitudinal shifts, but cool thermal limits should be stable through time.

      Here the authors outline one of their main hypotheses (species will expand their northern range limits to track climate change), and the subsequent prediction.

    3. All data and supporting scripts are available from Dryad Digital Repository: doi:10.5061/dryad.gf774.

      If you're interested in downloading the data and reproducing the authors' analyses, visit this repository! The authors provide scripts, so after downloading the statistical tool R you can repeat their analyses and reproduce their figures yourself. Try it out!

    1. induces a pro-social motivational state in rats, leading them to open the restrainer door and liberate the cagemate

      The main hypothesis tested was whether rats would voluntarily act to free a trapped cagemate out of sympathy for the trapped rat.

      The authors ask whether the free rat, upon seeing and hearing its trapped cagemate, will act to release the cagemate.

  3. Jul 2017
    1. We performed comprehensive neuropathological analyses (table S1)

      Neuropathological analyses consisted of staining thin sections of postmortem brain tissue from autopsy donors and examining these specimens under a microscope for evidence of disease.

      The presence, distribution, and phosphorylation state of tau (whether or not the protein has a phosphoryl group attached to it) are used to confirm a CTE diagnosis after death.

    1. possibility that peer reviewers may be rewarding an applicant’s grant proposal writing skills rather than the underlying quality of her work

      In this model, the authors try to control for the fact that an application could be selected because the applicant writes well, rather than based on the quality of the application.

    2. we employ a probabilistic algorithm developed by Kerr to determine applicant gender and ethnicity (Hispanic or Asian)

      The algorithm in question was developed by William Kerr to estimate the contribution of Chinese and Indian scientists to the U.S. Patent and Trademark Office.

    3. Our regression results include separate controls for each type of publication: any authorship position, and first or last author publications.

      The authors mean that they applied statistical controls to remove the effect that the position of a name in a publication's author list can have.

    4. our paper asks whether NIH selects the most promising projects to support.

      Previous work has shown that receiving a grant increases scientific productivity. However, the paper authors want to know if NIH is awarding grants to projects that will make the best use of the money.

  4. Jun 2017
    1. To test whether firing of somatosensory neurons causes USVs, we microstimulated the trunk cortex (Fig. 4G).

      The authors used a small electrical current to trigger neurons in the somatosensory cortex to fire in the absence of tickling. They then determined whether this neural stimulation caused the production of USVs.

    2. We simultaneously performed single-unit recordings in the trunk region of the somatosensory cortex (Fig. 2A). We obtained high-quality recordings of neuronal responses elicited by tickling and gentle touch (Fig. 2B and fig. S2A).

      The authors implanted a microelectrode into the somatosensory cortex of the rat's brain and recorded neural activity (rate of neuron firing) in this region during tickling, touch, breaks, and play.

  5. May 2017
    1. In addition to the quantitative assessments of replication and effect estimation, we collected subjective assessments of whether the replication provided evidence of replicating the original result.

      Finally, the authors included a last measure for replication success: a subjective rating. All researchers who conducted a replication were asked if they thought their results replicated the original effect successfully. Based on their yes or no answers, the authors calculated subjective replication success.

    2. “coverage,” or the proportion of study-pairs in which the effect of the original study was in the CI

      The authors compared the size of the original study effects with their replication study's effects to determine if the original study results fell within the confidence interval (CI) of the replication.

      If the original study's effect size was within the CI of the replication study, the effect size can be assumed to be similar.
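
      For readers who want to see the arithmetic, here is a minimal sketch of such a coverage check in Python, assuming both effects are expressed as correlation coefficients; the effect sizes and sample size below are invented for illustration.

      ```python
      import math

      def fisher_z_ci(r, n, z_crit=1.96):
          """95% CI for a correlation coefficient via the Fisher z-transform."""
          z = math.atanh(r)              # transform r to z
          se = 1.0 / math.sqrt(n - 3)    # standard error of z
          return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

      r_original, r_replication, n_replication = 0.40, 0.22, 120  # hypothetical
      lo, hi = fisher_z_ci(r_replication, n_replication)
      print(f"replication 95% CI: ({lo:.3f}, {hi:.3f})")
      print("original effect covered:", lo <= r_original <= hi)
      ```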

    3. We transformed effect sizes into correlation coefficients whenever possible

      The third indicator of replication success was the effect sizes of original and replication studies. The authors calculated correlation coefficients to indicate effect sizes.

      In a single study, when the means of two groups are very different, the correlation coefficient will be close to 1, and when the means of the two groups are similar, the correlation coefficient will be close to 0.

      The effect size of original studies was always coded as positive (values between 0 and 1). When the effect in the relevant replication study went in the same direction, the effect size was also coded as positive (values between 0 and 1), but when the effect in the replication went in the other direction, the effect size was coded as negative (values between -1 and 0).
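
      As an illustration of this transformation, here is a minimal sketch assuming the original test statistic is a t value; it uses the standard conversion r = sqrt(t² / (t² + df)), and the numbers are hypothetical.

      ```python
      import math

      def t_to_r(t, df):
          """Convert a t statistic to a correlation-type effect size.
          The sign of t (the direction of the effect) sets the sign of r."""
          r = math.sqrt(t**2 / (t**2 + df))
          return math.copysign(r, t)

      print(t_to_r(2.5, 48))   # same direction as the original -> positive r
      print(t_to_r(-1.1, 48))  # opposite direction -> negative r
      ```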

    4. distribution of P values of original and replication studies using the Wilcoxon signed-rank test

      In addition to comparing the proportion of studies yielding significant results, the authors compared the p-values of these studies to find out how similar they were to each other.
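
      A minimal sketch of such a comparison, using SciPy's paired Wilcoxon signed-rank test on made-up p-values:

      ```python
      from scipy import stats

      # Hypothetical paired p-values (same study: original vs. replication).
      p_original    = [0.001, 0.020, 0.004, 0.030, 0.015, 0.008]
      p_replication = [0.040, 0.300, 0.010, 0.250, 0.090, 0.055]

      stat, p = stats.wilcoxon(p_original, p_replication)
      print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
      ```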

    5. We tested the hypothesis that the proportions of statistically significant results in the original and replication studies are equal using the McNemar test for paired nominal data and calculated a CI of the reproducibility parameter.

      Next, the authors conducted another test to find out if the proportion of original studies that produced significant results was equal to or different from the proportion of replication studies that produced significant results.
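
      A minimal sketch of a McNemar test on paired significance outcomes, using statsmodels; the 2x2 counts below are hypothetical, not the paper's exact table.

      ```python
      from statsmodels.stats.contingency_tables import mcnemar

      # Rows: original significant yes/no; columns: replication significant yes/no.
      table = [[35, 62],   # original significant:     replication yes / no
               [1,   2]]   # original not significant: replication yes / no
      result = mcnemar(table, exact=True)
      print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
      ```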

    6. we tested the hypothesis that these studies had “no evidential value” (the null hypothesis of zero-effect holds for all these studies)

      The first analysis that the authors ran on the data assessed all replication studies that yielded non-significant results.

      The authors used Fisher's method to determine whether these studies, taken together, nonetheless contained evidence of real effects, or whether they were consistent with the null hypothesis of no effect.
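
      A minimal sketch of Fisher's method, using SciPy to pool a set of invented nonsignificant p-values; a small pooled p-value would suggest the studies jointly carry evidential value.

      ```python
      from scipy import stats

      nonsig_p = [0.21, 0.08, 0.47, 0.12, 0.33, 0.06]  # hypothetical values
      chi2, pooled_p = stats.combine_pvalues(nonsig_p, method="fisher")
      print(f"chi2 = {chi2:.2f}, pooled p = {pooled_p:.3f}")
      ```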

    7. However, original studies that interpreted nonsignificant P values as significant were coded as significant (four cases, all with P values < 0.06).

      Here, the authors explain how they deal with the problem that some of the original studies reported results as significant when they were, in fact, not significant.

      In each case, the threshold that is customarily set to determine statistical significance (p<0.05) was not met, but all reported p-values fell very close to this threshold (0.06>p>0.05). Since the original authors treated these effects as significant, the current analysis did so as well.

    8. There is no single standard for evaluating replication success

      Because the large-scale comparison of original and replication studies is a new development in the field of psychology, the authors had to formulate a plan for their analysis that did not rely much on previous research.

      They decided to use 5 key indicators for evaluating the success of the replications. They compared the original and the replicated studies in terms of the number of significant outcomes, p-values, and effect sizes. They also assessed how many studies were subjectively judged to replicate the original effect. Finally, they ran a meta-analysis of the effect sizes.

    9. subjective assessments of replication outcomes

      One of the indicators for whether a study was replicated successfully or not was a subjective rating: each team of replicators was asked if their study replicated the original effect (yes or no).

    10. sampling frame and selection process

      The authors wanted to make sure that the studies that were selected for replication would be representative of psychological research. Representativeness was important because it would mean that the conclusions that would be drawn from the replication outcomes could be cautiously extended to assumptions about the state of the field overall.

      At the same time, they had to make sure that the studies selected could also be conducted (that is, that one of the coauthors had the necessary skill or equipment to collect the data).

      To achieve this goal, a step-wise procedure was used: starting from the first issue of 2008 from three important psychology journals, 20 studies were selected and matched with a team of replicators who would conduct the replication attempt. If articles were left over because no-one could conduct the replication, but more replication teams were willing to conduct a study, another 10 articles were made available. In the end, out of 488 studies drawn from the population of studies, the authors attempted to replicate 100.

    11. constructed a protocol for selecting and conducting high-quality replications

      Before collecting data for the replication studies, the authors produced a detailed protocol that described how they were going to select the studies that were available for replication, how they would decide which effect they would attempt to replicate, and which principles would guide all replication attempts.

      Importantly, this protocol was made public, and all individual replication attempts had to adhere to it.

    1. Among sessile organisms, there were marked differences in survivorship and repair after initial injury.

      The authors observed injured corals and sponges for several months after Hurricane Allen to see if they survived, noting the initial degree of injury.

    2. Differences in damage to different growth forms (7) were particularly striking for corals,

      The authors measured the degree of damage to certain species of corals, taking care to note the growth form and size. This allowed them to look at how shape and size influenced patterns of damage.

    3. Not all patchiness can be easily explained, but a number of patterns emerge.

      The authors observed a variety of sites with different profiles both before and after the hurricane. They describe the variation within and between sites here.

    4. We consider first the effects of spatial factors and then describe the immediate impact on common organisms and their subsequent responses over the following 7 months.

      The authors noted damage to corals after the hurricane and tracked their subsequent recovery (or death).

    5. they collected data comparable to those taken previously on routine patterns and processes

      Because these reefs had been well-surveyed before Hurricane Allen, the authors conducted surveys using the same methods after the hurricane to examine the impacts of the storm.

    1. brain organoids recapitulate the orchestrated cellular and molecular early events comparable to the first trimester fetal neocortex

      Neurospheres are useful for modeling very early (embryonic) development, while organoids are used to study later stages of development.

    2. In addition to MOCK infection, we used dengue virus 2 (DENV2), a flavivirus with genetic similarities to ZIKV (11, 19), as an additional control group

      The authors also compared ZIKV infection to dengue virus 2 (DENV2) infection. DENV2 is similar to ZIKV.

    3. The growth rate of 12 individual organoids (6 per condition) was measured during this period

      Both infected and uninfected organoids were immersed in a fixative solution to "freeze" them and allow them to be visualized.

      It was then possible to use an electron microscope to compare the ultrastructure of infected cells and uninfected cells.

    4. morphological abnormalities and cell detachment

      Neurospheres that contained cells infected with Zika virus were oddly shaped, and some cells broke away.

    5. mock-

      Mock NSCs were not infected with Zika.

    6. to explore the consequences of ZIKV infection during neurogenesis and growth

      In order to obtain neural stem cells from human induced pluripotent stem (iPS) cells, researchers cultured the iPS cells in a special medium.

      To create neurospheres and organoids, neural stem cells were divided and again cultured in a special medium.

      Finally, ZIKV was diluted and added to the different types of culture for 2 hours.

    1. we cannot directly assess whether the NIH systematically rejects high-potential applications

      Because the authors only looked at projects that received grant funding, their analysis does not take into account how many high-potential projects were rejected by peer review.

    2. our estimates are likely downward biased

      The authors acknowledge that there is sometimes a long delay between a grant award and patenting, so their analysis may not be a good indicator of how relevant research is to commercial applications.

    3. We control for the same variables as described in Model 6

      The patents, like the grants, are analyzed with controls for the same indicators: institutional quality and the gender and ethnicity of applicants.

    4. Our final analysis explores whether peer reviewers’ value-added comes from being able to identify transformative science, science with considerable applied potential, or from being able to screen out very low-quality research.

      Finally, the authors wanted to figure out if peer reviewers are good at choosing applicants for their innovation, their practicality, or if they are simply good at weeding out low-quality research.

    5. These residuals represent the portions of grants’ citations or publications that cannot be explained by applicants’ previous qualifications or by application year or subject area

      The authors removed the influence of the grant applicant's background, demographics, and writing skill in order to look at what effect a reviewer's expertise has.

    6. adds controls describing a PI’s publication history

      The authors control for yet another potential variable, an applicant's research background (they use the PI's publication history to do this).

    7. We also include NIH institute-level fixed effects to control for differences in citation and publication rates by fields, as defined by a grant’s area of medical application.

      The authors try to remove the effect of an article's field on its impact. For example, a biochemistry article may appear to have a smaller impact because of the high rate of publication and citation in that field, whereas a physics article's impact may be inflated due to a lower publication and citation rate.
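
      A minimal sketch of what institute-level fixed effects look like in practice, using a statsmodels formula; the data frame, column names, and values are hypothetical.

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "citations": [120, 15, 40, 300, 8, 95],
          "score":     [10, 45, 30, 5, 60, 20],  # percentile score (lower = better)
          "institute": ["NCI", "NIMH", "NCI", "NHLBI", "NIMH", "NHLBI"],
      })
      # C(institute) adds one dummy variable per institute, absorbing baseline
      # differences in citation and publication rates across fields.
      fit = smf.ols("citations ~ score + C(institute)", data=df).fit()
      print(fit.params)
      ```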

    8. potential concerns

      Several factors can lead to the trends in Figure 1 being misinterpreted, like the age of the grant and the field of study. The authors address these concerns by adjusting their model to account for these effects.

    9. a 1-SD worse score is associated with a 14.6% decrease in grant-supported research publications and a 18.6% decrease in citations to those publications

      Here, the authors estimated how much a decrease of one standard deviation on the percentile score affected the number of publications and citations of a grant recipient.
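
      In a log-linear model, such percentage effects are read off the regression coefficient. A minimal sketch of that arithmetic, with a made-up coefficient chosen to reproduce the 14.6% figure:

      ```python
      import math

      beta_per_sd = -0.158  # hypothetical coefficient on a standardized score
      pct_change = (math.exp(beta_per_sd) - 1) * 100
      print(f"a 1-SD worse score -> {pct_change:.1f}% change in publications")
      ```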

    10. measure applicant-level characteristics

      The authors examined characteristics such as an applicant's grant history and institutional affiliation to see whether the applicant's previous work had an impact on the outcome of the grant application.

    11. patents that either directly cite NIH grant support or cite publications acknowledging grant support

      The last measure is the number of patents that cite those publications from (i), or acknowledge support from the grant.

    12. the total number of citations that those publications receive through 2013

      The second measure is the total number of citations the publications from (i) received through 2013.

    13. the total number of publications that acknowledge grant support within 5 years of grant approval

      The first measure of success is the number of papers a team published during the 5 years after they received the grant.

    14. funding is likely to have direct effect on research productivity

      The authors considered grants which were already funded and competing for renewal. This makes it easier to attribute differences in research productivity to the peer review process, rather than the amount of funding the project has.

    15. percentile score

      The percentile score is assigned by the peer review committee. It ranks all applications to determine which were most favored by the committee. A lower score means the committee liked the application more.

    16. peer review has high value-added if differences in grants’ scores are predictive of differences in their subsequent research output

      If the evaluation by the peer review committee is correlated with the quality of work put out by the research group, then peer review has high value-added (meaning, it is useful for choosing research groups with the highest potential).

    17. Because NIH cannot possibly fund every application it receives, the ability to distinguish potential among applications is important for its success.

      The outcome of this study could have important implications for how the NIH evaluates and chooses who it gives money to.

  6. Apr 2017
    1. These findings suggest that genotypic determinants may be critical factors that modulate temporal and phenotypic expression of TBI and late-emerging sequelae, including CTE.

      Although the wild-type mouse model has limitations (as do all experimental models), using laboratory animals allows researchers to test things that can't be directly tested in humans.

      For example, mouse models are useful tools to study the effects of different genes. Biological research has generated thousands of genetically modified mouse strains that allow researchers to test hypotheses regarding specific genes and how these genes influence human diseases.

    2. blast exposure did not impair gross neurological functioning with respect to locomotion, exploratory activity, and thigmotaxis (an indicator of murine anxiety assessed by movement close to the wall of the experimental apparatus)

      It was important to assess locomotion (movement) because the behavioral cognition test used relies on locomotion for its results.

      If locomotion were impaired, then the results of the behavioral test could not be linked to cognition alone.

      The authors did not detect any abnormalities in locomotion in either the control or experimental group, meaning that any cognitive abnormalities they observed were not due to locomotor defects. This strengthens the argument that any of the abnormalities they saw were due to blast-related interference with brain functions affecting learning and memory.

    3. We found marked impairments of stimulus-evoked LTP in mouse slices prepared 2 weeks and 1 month after blast exposure

      As above, the authors found impairments in a neuronal process thought to be involved in memory storage.

    4. significantly slowed 2 weeks after blast exposure, an effect that persisted for at least 1 month

      Following blast exposure, the authors observed a slowing of axonal conduction in structures of the hippocampus that are important for long-term potentiation (LTP).

    5. We hypothesized that blast forces exerted on the skull would result in head acceleration-deceleration oscillation of sufficient intensity to induce persistent brain injury (“bobblehead effect”)

      In this section, the authors sought to find out if their mouse model of blast neurotrauma caused similar brain abnormalities as those observed in human CTE cases.

      If so, it would mean there is a causal linkage between blast exposure and development of CTE later in life.

    6. Kinematic analysis of high-speed videographic records of head movement during blast exposure confirmed rapid oscillating acceleration-deceleration of the head in the horizontal and sagittal planes of motion (Fig. 2, D to G, and video S1).

      The authors used high-speed video cameras (which capture 100,000 frames per second) to record mouse head movements during blast exposure.

      The videos showed violent winds pushing and pulling on the head during the blast, which the authors measured as the kinematics (motions over time) of the head.

      These pushing and pulling forces during blast exposure were similar to sport-related impacts leading to injury.

    7. blast wind velocity was 150 m/s (336 miles/hour)

      After investigating the effects of blast waves on the skull and brain, the authors turned their attention to the effects of blast wind.

      See the following video for a look at the destructive nature of these blast winds during a 1955 atomic weapons test:

      https://www.youtube.com/watch?v=ztJXZjIp8OA

    8. To investigate intracranial pressure (ICP) dynamics during blast exposure, we inserted a needle hydrophone into the hippocampus of living mice and monitored pressure dynamics during blast exposure.

      The first potential injury source the authors investigated was the interaction of the blast wave with the brain. The authors tested whether this interaction was destructive to the brain. To do this, they measured the pressure of the brain in living mice using a small needle hydrophone (a microphone used to detect sound waves underwater).

    9. Evidence of axon degeneration, axon retraction bulbs, and axonal dystrophy were observed in the subcortical white matter subjacent to cortical tau pathology

      In areas of the cortex where the authors saw irregular tau proteins, they saw damage to the axons in the white matter beneath.

    10. four young-adult normal control subjects

      Although not shown in the figure below, the authors also examined the brains of healthy controls that matched the age and sex of the experimental subjects. They used the same IHC technique.

      No abnormalities were observed in the control cases.

    11. Neuropathological comparison to brains from young-adult amateur American football players (Fig. 1, C, D, G, and H) with histories of repetitive concussive and subconcussive injury exhibited similar CTE neuropathology marked by perivascular NFTs and glial tangles with sulcal depth prominence in the dorsolateral and inferior frontal cortices.

      The abnormalities seen in the brains of head-injured athletes and blast-exposed military veterans were similar, so the authors hypothesized that the cause of CTE in these two groups must also be similar. However, it is currently not possible to diagnose CTE until after a patient has died and their brain can be autopsied.

      The authors identified a correlation between neurotrauma and CTE, but in order to test their hypothesis and prove a causal connection, they had to perform controlled experiments. They used a mouse model for this.

    12. NFTs and dystrophic axons immunoreactive for monoclonal antibody CP-13 (Fig. 1, A to I, L, Q, R, and U, and fig. S1) directed against phosphorylated tau protein at Ser202 (pS202) and Thr205 (pT205), monoclonal antibody AT8 (Fig. 1S) directed against phosphorylated tau protein at Ser202 (pS202) and Thr205 (pT205)

      Monoclonal antibodies are used in immunohistochemical staining to detect abnormal phosphorylated tau proteins. In this study, several different antibodies were used to detect various forms of tau proteins in different places in the brain.

      If the specific set of abnormalities is detected in the postmortem brain of someone who had a history of neurotrauma, the diagnosis is CTE.

    13. compared these neuropathological analyses

      The authors used neuropathological analysis to compare the brains of military veterans who were exposed to blasts with young American football players, a professional wrestler, and normal controls.

    14. Head immobilization during blast exposure prevented blast-induced learning and memory deficits.

      If a mouse's head was prevented from moving when it was exposed to a blast, the learning and memory deficits described above did not occur.

    15. Given the overlap of clinical signs and symptoms in military personnel with blast-related TBI and athletes with concussion-related CTE, we hypothesized that common biomechanical and pathophysiological determinants may trigger development of CTE neuropathology and sequelae in both trauma settings.

      A common format of experimental design in medical research is to formulate a hypothesis based on findings in human patients and test the hypothesis in an animal model.

  7. Jan 2017
    1. Differences in damage to different growth forms (7) were particularly striking for corals

      The authors measured the degree of damage to certain species of corals, taking care to note the growth form and size. This allowed them to look at how shape and size influenced patterns of damage.

    2. Recovery of Surviving Sessile Organisms

      The authors observed injured corals and sponges for several months after Hurricane Allen to see if they survived, noting the initial degree of injury.

    3. significant at P < .001, Mann-Whitney U test

      The Mann-Whitney U test is a statistical test used when values are not normally distributed. Here it is used to compare live tissue coverage before and after the hurricane. The P-value of less than 0.001 indicates that there was a statistically significant difference between the two groups.
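
      A minimal sketch of how such a test is run, using SciPy on invented live-coverage percentages:

      ```python
      from scipy import stats

      cover_before = [42, 55, 48, 61, 50, 58]  # % live tissue before the storm
      cover_after  = [12, 20, 9, 15, 18, 11]   # % live tissue after the storm
      u, p = stats.mannwhitneyu(cover_before, cover_after, alternative="two-sided")
      print(f"U = {u}, p = {p:.4f}")
      ```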

    4. Not all patchiness can be easily explained, but a number of patterns emerge.

      The authors observed a variety of sites with different profiles both before and after the hurricane. They describe the variation within and between sites here.

    5. they collected data comparable to those taken previously on routine patterns and processes

      Because these reefs had been well-surveyed before Hurricane Allen, the authors conducted surveys using the same methods after the hurricane to examine the impacts of the storm.

    6. We consider first the effects of spatial factors and then describe the immediate impact on common organisms and their subsequent responses over the following 7 months.

      The authors noted damage to corals after the hurricane and tracked their subsequent recovery (or death).

    1. Meta-analysis combining original and replication effects

      Moreover, the authors planned to combine the results of each original and replication study, to show if the cumulative effect size was significantly different from zero. If the overall effect was significantly different from zero, this could be treated as an indication that the effect exists in reality, and that the original or replication did not erroneously pick up on an effect that did not actually exist.
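
      A minimal sketch of such a fixed-effect combination for one original/replication pair, weighting Fisher-transformed correlations by inverse variance; all numbers are hypothetical.

      ```python
      import math

      def combine(r1, n1, r2, n2):
          """Pool two correlations with inverse-variance weights on Fisher's z."""
          z1, z2 = math.atanh(r1), math.atanh(r2)
          w1, w2 = n1 - 3, n2 - 3
          z = (w1 * z1 + w2 * z2) / (w1 + w2)
          se = 1 / math.sqrt(w1 + w2)
          ci = (math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se))
          return math.tanh(z), ci

      r, (lo, hi) = combine(0.40, 50, 0.15, 120)
      print(f"pooled r = {r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
      # A CI that excludes zero suggests the combined evidence supports an effect.
      ```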

    2. hypothesis that this proportion is 0.5

      In this case, testing against the null hypothesis that half of the replication effects are stronger than the original study effects means assuming that there is only a chance difference between the effect sizes. The alternative hypothesis is that the replication effects are on average stronger, or weaker, than the original study effects.
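
      This amounts to a sign (binomial) test. A minimal sketch using SciPy, with hypothetical counts:

      ```python
      from scipy import stats

      stronger, total = 14, 97  # replications stronger than the original (made up)
      result = stats.binomtest(stronger, total, p=0.5)
      print(f"p = {result.pvalue:.2e}")  # a small p argues against a 50/50 split
      ```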

    3. subjective assessments of replication outcomes

      One of the indicators for whether a study was replicated successfully or not was a subjective rating: each team of replicators was asked if their study replicated the original effect (yes or no).

    4. sampling frame and selection process

      The authors wanted to make sure that the studies that were selected for replication would be representative of psychological research; that is, they would give a good picture of the kinds of studies psychologists typically run. Representativeness was important because it would mean that the conclusions drawn from the replication outcomes could be cautiously extended to assumptions about the state of the field overall.

      At the same time, they had to make sure that the studies selected could also be conducted (that is, that one of the coauthors had the necessary skill or equipment to collect the data).

      To achieve this goal, a step-wise procedure was used: starting from the first issue of 2008 from three important psychology journals, 20 studies were selected and matched with a team of replicators who would conduct the replication attempt. If articles were left over because no-one could conduct the replication, but more replication teams were willing to conduct a study, another 10 articles were made available. In the end, out of 488 studies drawn from the population of studies, the authors attempted to replicate 100.

    5. Subjective assessment of “Did it replicate?”

      Finally, the authors used the subjective rating of whether the effect replicated as an indicator of replication success. Out of 100 replication teams, only 39 reported that they thought they had replicated the original effect.

    6. Comparing original and replication effect sizes

      With this third measure for replication success, the authors further compared the sizes of the original and replicated effects. They found that the original effect sizes were larger than the replication effect sizes in more than 80% of the cases.

    7. Evaluating replication effect against original effect size

      As a second way to look at replication success, the authors checked whether the effect sizes of the original studies were similar to those of the replication studies (that is, whether the original effect fell within the confidence interval of the replication). Using this measure, they found that fewer than half of the replications showed an effect size close enough to the original to count as a successful replication.

    8. Evaluating replication effect against null hypothesis of no effect

      First, the authors used the 5 measures for replication success to check to what extent the 100 original studies could be successfully replicated.

      In a first glance at the results, the authors checked how many replications "worked" by analyzing how many replication studies showed a significant effect with the same direction (positive or negative) as the original studies. Of the 100 original studies, 97 showed a significant effect. Because no replication study has 100% power to detect a true effect, even if all of the original effects were true we could have expected at most around 89 successful replications. However, results showed that only 35 studies were successfully replicated.
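
      The expected count comes from simple arithmetic: multiply the number of significant original effects by the average statistical power of the replication studies (roughly 92% in this project). A minimal sketch:

      ```python
      significant_originals = 97
      average_power = 0.92   # approximate mean power of the replication studies
      expected = significant_originals * average_power
      print(round(expected))  # ~89 expected vs. 35 observed successes
      ```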

    9. Analysis of moderators

      Last, the authors wanted to know if successfully replicable studies differed in a systematic way from studies that could not be replicated. To do this, they checked whether a number of characteristics of the original studies were systematically related to replication success.

    10. Subjective assessment of “Did it replicate?”

      Finally, the authors included a last measure for replication success: a subjective rating. All researchers who conducted a replication were asked if they thought their results replicated the original effect successfully. Based on their yes or no answers, subjective replication success was calculated.

    11. “coverage,” or the proportion of study-pairs in which the effect of the original study was in the CI of the effect of the replication study

      In this test for replication success, the authors compared the size of the original study effect with that of the replication study to see whether the two were similar enough that both samples could plausibly reflect the same effect size in the population.

    12. Correlates of reproducibility

      Finally, the authors wanted to know if successfully replicable studies differed from studies that could not be replicated in a systematic way. As the criterion for replication success, they used their first analysis (significance testing).

      They found that studies from the social psychology journal were less likely to replicate than those from the two journals publishing research in cognitive psychology. Moreover, studies were more likely to replicate if the original study reported a lower p-value and a larger effect size, and if the original finding was subjectively judged to be less surprising. However, successfully replicated studies were not judged to be more important for the field, or to have been conducted by original researchers or replicators with higher expertise than failed replications.

    13. The last measure for the success of the replications was a subjective rating from the replication teams. Each team was asked if they thought they had replicated the original effect. Out of 100 studies, 39 were judged to be successful replications.

    14. Combining original and replication effect sizes for cumulative evidence

      Fourth, the authors combined the original and replication effect sizes and calculated a cumulative estimation of the effects. They wanted to see how many of the studies that could be analyzed this way would show an effect that was significantly different from zero if the evidence from the original study and that of the replication study was combined.

      Results showed that 68% of the studies analyzed this way indicated that an effect existed. In the remaining 32% of the studies, the effect found in the original study, when combined with the data from the replication study, could no longer be detected.

    15. Statistical analyses

      Because the large-scale comparison of original and replication studies is a new development in the field of psychology, the authors had to formulate a plan for their analysis that could not rely much on previous research. They decided to use 5 key indicators for evaluating the success of the replications. They compared the original and the replicated studies in terms of the number of significant outcomes, p-values, and effect sizes, and they assessed how many studies were subjectively judged to replicate the original effect. Finally, they also ran a meta-analysis of the effect sizes.

    16. Aggregate data preparation

      After each team had completed the replication attempt, independent reviewers checked that the procedure was well documented and followed the initial replication protocol, and that the statistical analyses of the effects selected for replication were correct.

      Then, all the data were compiled to conduct analyses not only of the individual studies, but across all replication attempts. The authors wanted to know if studies that replicated and those that did not would differ. For instance, they investigated whether studies that replicated were more likely to come from one journal than another, or whether studies that did not replicate tended to have higher p-values than studies that could be replicated.

    17. constructed a protocol for selecting and conducting high-quality replications

      Before collecting data for the replication studies, the authors produced a detailed protocol that described how they were going to select the studies available for replication, how they would decide which effect in each study to attempt to replicate, and which principles would guide all replication attempts. Importantly, this protocol was made public, and all individual replication attempts had to adhere to it.

    1. (i) photomineralization of DOC (N = 97), (ii) partial photo-oxidation of DOC (N = 97)

      Measurement methods are summarized here and provided in detail in the supplemental information.

      Photomineralization and partial photo-oxidation were measured for each water sample by first filtering the bacteria out of the samples and then putting them in ultraclean, air-tight, transparent vials.

      Vials were exposed to sunlight for 12 hours. Control vials were wrapped in foil and placed alongside the sunlight-exposed vials.

      After the sunlight exposure, carbon dioxide (CO<sub>2</sub>) production and oxygen (O<sub>2</sub>) consumption were measured for each sample, as in the dark bacterial respiration experiment.

      The amounts of complete and partial oxidation that occurred were calculated based on the ratio of O<sub>2</sub> use to CO<sub>2</sub> production. This is possible because complete oxidation uses one molecule of O<sub>2</sub> for every CO<sub>2</sub> produced, whereas partial oxidation uses O<sub>2</sub> but does not produce any CO<sub>2</sub>.
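
      A minimal sketch of that bookkeeping, with invented measurements; complete oxidation is pinned down by the CO<sub>2</sub> produced, and whatever O<sub>2</sub> remains is attributed to partial oxidation.

      ```python
      o2_consumed  = 10.0  # O2 used during the 12-hour exposure (arbitrary units)
      co2_produced = 6.0   # CO2 produced over the same period (same units)

      complete_ox = co2_produced                # one O2 per CO2 -> photomineralization
      partial_ox  = o2_consumed - co2_produced  # O2 used with no CO2 released
      print(f"complete oxidation: {complete_ox}, partial oxidation: {partial_ox}")
      ```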

    2. We quantified dark bacterial respiration

      The details of how this was done are in the supplemental information. The method is summarized below:

      The authors measured dark bacterial respiration by putting unfiltered water samples in extremely clean, air-tight vials and allowing them to sit for 5–7 days at 6°C to 7°C (the average temperature of nonfrozen water at the sampling sites).

      As a control, the authors used water samples where all of the bacteria had been killed using mercury.

      At the end of the 5–7 days, the authors measured how much carbon dioxide (CO<sub>2</sub>) had been produced in each sample. In some samples, the amount of oxygen (O<sub>2</sub>) decrease was also measured as a way of validating the results. Bacterial respiration uses one molecule of O<sub>2</sub> for each molecule of CO<sub>2</sub> produced. The two methods returned the same results, indicating high-quality data.

  8. Nov 2016
    1. further characterize the consequences of ZIKV infection during different stages of fetal development

      The authors used models that allowed them to study early stages of brain development. They suggest there is more work to be done to determine the effects of ZIKV infection on later stages of fetal development.

    2. brain organoids recapitulate the orchestrated cellular and molecular early events comparable to the first trimester fetal neocortex

      Neurospheres are useful for modeling very early (embryonic) development, while organoids are used to study later stages of development.

    3. In addition to MOCK infection, we used dengue virus 2 (DENV2), a flavivirus with genetic similarities to ZIKV (11, 19), as an additional control group.

      The authors also compared ZIKV infection to dengue virus 2 (DENV2) infection. DENV2 is similar to ZIKV.

    4. reduced by 40% when compared to brain organoids under mock conditions

      Brain organoids infected with Zika virus were, on average, 40% smaller than mock-infected organoids.

    5. The growth rate of 12 individual organoids (6 per condition) was measured during this period

      Both infected and uninfected organoids were immersed in a fixative solution to "freeze" them and allow them to be visualized.

      It was then possible to use an electron microscope to compare the ultrastructure of infected cells and uninfected cells.

    6. morphological abnormalities and cell detachment

      Neurospheres that contained cells infected with Zika virus were oddly shaped, and some cells broke away.

    7. to explore the consequences of ZIKV infection during neurogenesis and growth

      In order to obtain neural stem cells from human induced pluripotent stem (iPS) cells, researchers cultured the iPS cells in a special medium.

      To create neurospheres and organoids, neural stem cells were divided and again cultured in a special medium.

      Finally, ZIKV was diluted and added to the different types of culture for 2 hours.

    1. we cannot directly assess whether the NIH systematically rejects high-potential applications

      Because the authors only looked at projects that received grant funding, their analysis does not take into account how many high-potential projects were rejected by peer review.

    2. our estimates are likely downward biased

      The authors acknowledge that there is sometimes a long delay between a grant award and patenting, so their analysis may not be a good indicator of how relevant research is to commercial applications.

    3. Our final analysis

      Finally, the authors wanted to figure out if peer reviewers are good at choosing applicants for their innovation, their practicality, or if they are simply good at weeding out low-quality research.

    4. These residuals represent the portions of grants’ citations or publications that cannot be explained by applicants’ previous qualifications or by application year or subject area

      The authors removed the influence of the grant applicant's background, demographics, and writing skill in order to look at what effect a reviewer's expertise has.

    5. rewarding an applicant’s grant proposal writing skills

      In this model, the authors try to control for the fact that an application could be selected because the applicant writes well, rather than based on the quality of the application.

    6. Controlling for publication history attenuates but does not eliminate the relationship

      Again, controlling for the variable of a PI's research background does not eliminate the relationship the authors originally found.

    7. adds controls describing a PI’s publication history

      The authors control for yet another potential variable, an applicant's research background (they use the PI's publication history to do this).

    8. Controlling for cohort and field effects does not attenuate our main finding

      The authors' adjustments to control for various external effects did not change their original findings.

    9. We also include NIH institute-level fixed effects to control for differences in citation and publication rates by fields

      The authors try to remove the effect of an article's field on its impact. For example, a biochemistry article may appear to have a smaller impact because of the high rate of publication and citation in that field, whereas a physics article's impact may be inflated due to a lower publication and citation rate.

    10. potential concerns

      Several factors can lead to the trends in Figure 1 being misinterpreted, like the age of the grant and the field of study. The authors address these concerns by adjusting their model to account for these effects.

    11. a 1-SD worse score is associated with a 14.6% decrease in grant-supported research publications and a 18.6% decrease in citations to those publications

      Here the authors estimated how much a decrease of one standard deviation on the percentile score affected the number of publications and citations of a grant recipient.

    12. patents that either directly cite NIH grant support or cite publications acknowledging grant support

      The last measure is the number of patents that cite those publications from (i), or acknowledge support from the grant.

    13. the total number of citations that those publications receive through 2013

      The second measure is the total number of citations the publications from (i) received through 2013.

    14. the total number of publications that acknowledge grant support within 5 years of grant approval

      The first measure of success is the number of papers a team published during the 5 years after they received the grant.

    15. funding is likely to have direct effect on research productivity

      The authors considered grants which were already funded and competing for renewal. This makes it easier to attribute differences in research productivity to the peer review process, rather than the amount of funding the project has.

    16. percentile score

      The percentile score is assigned by the peer review committee. It ranks all applications to determine which were most favored by the committee. A lower score means the committee liked the application more.

    17. peer review has high value-added if differences in grants’ scores are predictive of differences in their subsequent research output

      If the evaluation by the peer review committee is correlated with the quality of work put out by the research group, then peer review has high value-added (meaning, it is useful for choosing research groups with the highest potential).

    18. Because NIH cannot possibly fund every application it receives, the ability to distinguish potential among applications is important for its success.

      The outcome of this study could have important implications for how the NIH evaluates and chooses who it gives money to.

    19. our paper asks whether NIH selects the most promising projects to support

      Previous work has shown that receiving a grant increases scientific productivity. However, the paper authors want to know if the NIH is awarding grants to projects that will make the best use of the money.

  9. Oct 2016
    1. We transformed effect sizes into correlation coefficients whenever possible.

      For the third indicator of replication success, the effect sizes of original and replication studies, the authors calculated correlation coefficients to indicate effect sizes. In a single study, when the means of two groups are very different, the correlation coefficient will be close to 1, and when the means of the two groups are similar, the correlation coefficient will be close to 0.

      The effect size of original studies was always coded as positive (values between 0 and 1). When the effect in the relevant replication study went in the same direction, the effect size was also coded as positive (values between 0 and 1), but when the effect in the replication went in the other direction, the effect size was coded as negative (values between -1 and 0).

    2. Using only the nonsignificant Pvalues of the replication studies and applying Fisher’s method (26), we tested the hypothesis that these studies had “no evidential value” (the null hypothesis of zero-effect holds for all these studies).

      The first analysis run on the data assessed all replication studies that yielded non-significant results. By applying Fisher's method, the authors tested whether these failures to replicate, taken together, still contained evidence that real effects were present, or whether the data were consistent with the null hypothesis that no effects exist in any of these studies.

    3. Second, we compared the central tendency of the distribution of P values of original and replication studies using the Wilcoxon signed-rank test and the t test for dependent samples.

      Next, the original and replication studies are compared in terms of the p-values they yield: is the central tendency of the replications' p-values similar to that of the originals, or are the replications' p-values systematically larger or smaller?
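
      A minimal Python sketch of both comparisons, using hypothetical paired p-values (each original study matched to its replication):

        from scipy.stats import ttest_rel, wilcoxon

        # Hypothetical paired p-values from original and replication studies.
        p_original = [0.010, 0.030, 0.040, 0.020, 0.005, 0.045]
        p_replication = [0.200, 0.040, 0.600, 0.030, 0.150, 0.300]

        # Nonparametric test of the paired distributions' central tendency.
        w_statistic, w_pvalue = wilcoxon(p_original, p_replication)

        # Parametric counterpart: t test for dependent (paired) samples.
        t_statistic, t_pvalue = ttest_rel(p_original, p_replication)

        print(w_pvalue, t_pvalue)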

    4. We tested the hypothesis that the proportions of statistically significant results in the original and replication studies are equal using the McNemar test for paired nominal data and calculated a CI of the reproducibility parameter.

      Next, the authors conducted another test to find out if the number of results in the original studies that produced significant results was equal to or different from the number of replication studies that produced significant results.
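
      The McNemar test operates on a 2x2 table of paired outcomes, counting how often original/replication pairs agree or disagree on significance. A minimal sketch using statsmodels (assumed available); the counts are made up:

        import numpy as np
        from statsmodels.stats.contingency_tables import mcnemar

        # Hypothetical paired counts:
        # rows = original study (significant, not significant),
        # columns = replication study (significant, not significant).
        table = np.array([
            [35, 60],  # original significant
            [2, 3],    # original not significant
        ])

        # The exact test uses only the two discordant cells (60 and 2).
        result = mcnemar(table, exact=True)
        print(result.statistic, result.pvalue)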

    5. However, original studies that interpreted nonsignificant P values as significant were coded as significant (four cases, all with P values < 0.06).

      Here, the authors explain how they deal with the problem that some of the original studies reported results as significant, although in fact, they were non-significant. In each case, the threshold that is customarily set to determine statistical significance (p<0.05) was not met, but all reported p-values fell very close to this threshold (0.06>p>0.05). Since the original authors treated these effects as significant, the current analysis did so as well.

  10. Sep 2016
    1. To determine whether the benefits afforded by tau reduction were sustained, we examined older mice

      By examining older hAPP mice with and without tau, the authors could test how tau causes Alzheimer’s disease to progress as an animal ages.

    2. Probe trials, in which the platform was removed and mice were given 1 min to explore the pool, confirmed the beneficial effect of tau reduction

      After the researchers trained all mice to swim to a platform hidden under the water surface, the authors removed the platform to see how much time they spent in the area where the platform used to be. This way the researchers were able to test the memory of the mice.

    3. hAPP/Tau+/+ mice took longer to master this task (Fig. 1A; P < 0.001). In contrast, hAPP/Tau+/– and hAPP/Tau–/– mice performed at control levels

      Alzheimer’s mice (with hAPP) that had normal amounts of tau (Tau+/+) took longer to learn to swim to a visible platform than the other five types of mice.

    4. We crossed hAPP mice (11) with Tau–/– mice (12) and examined hAPP mice with two (hAPP/Tau+/+), one (hAPP/Tau+/–), or no (hAPP/Tau–/–) endogenous tau alleles, compared with Tau+/+, Tau+/–, and Tau–/– mice without hAPP (13)

      The authors used mice that were genetically engineered to express a human copy of the amyloid precursor protein (called hAPP mice). hAPP mice are a common animal model of Alzheimer’s disease. They develop amyloid plaques and severe memory and cognitive problems later in life, just like humans with the disease.

      The authors bred these hAPP mice with other mice that were missing both their genes for the tau protein (called Tau-/- mice). From this breeding plan, the authors produced hAPP mice with normal amounts of tau (hAPP/Tau+/+), with half the normal amount of tau (hAPP/Tau+/-), and with no tau (hAPP/Tau-/-).

      They also produced mice without the human APP gene that had normal, half, and no tau.

    1. To determine whether this activation of prey motor neurons was the result of central nervous system (spinal) activity or activity in efferent branches of motor neurons, the dual tension experiment was repeated twice with extensively double-pithed fish (in which both the brain and spinal cord were destroyed, but the branches of motor efferents were left intact within the fish body) and compared with a brain-pithed fish

      As established above, the fish's paralysis is caused by activation of its motor neurons. The setup with two differently prepared fish was repeated. First fish: brain and spinal cord destroyed (double-pithed), leaving only the peripheral branches of the motor neurons intact. Second fish: only the brain destroyed, leaving the spinal cord intact. This distinguishes which part of the nervous system mediates the activation: a response in the double-pithed fish implicates the peripheral motor-neuron branches, whereas a response that requires an intact spinal cord implicates the central nervous system.

    2. In this study, I designed a set of experiments to explore the impacts of the electric eel discharges on potential prey and the mechanism that operates during such attacks.

      The purpose of this study is to better understand the effects of the electric eel's attacks on its prey and to explore the strategies the eel uses during prey detection, hunting, and capture.

    3. To test this hypothesis, a pithed fish was placed in a thin plastic bag to isolate it from the eel’s discharge. The electrically isolated fish was positioned below an agar barrier, with electrical leads embedded in the head and tail region (10) that allowed production of artificial fish twitch by the experimenter. Artificial fish twitch was triggered remotely through a stimulator (Fig. 4A), allowing control over its timing and occurrence. When the stimulating electrodes were inactive, eel doublets caused no response in the pithed fish and eels did not attack the preparation (Fig. 4B and movie S6). However, when the stimulator was configured to trigger fish twitch when the eel produced a doublet, the eel’s full “doublet attack” behavior was replicated (Fig. 4C and movie S6).

      To test the hypothesis that eels detect hidden fish by sending out doublets, a fish was placed in a plastic bag to electrically isolate it from the eel's signals, so it gave no response (movement) to them. Fish movement could instead be produced on demand by a stimulator controlled by the experimenter; the eel launched its full attack only when this artificial twitch was triggered together with its doublet.

    4. To identify the function of this additional behavior, eels were presented with prey hidden below a thin agar barrier (Fig. 3C). In some cases, eels detected prey through the barrier and attacked directly, but in other cases, the eel investigated the agar surface with a low-amplitude electric organ discharge and then produced a high-voltage doublet. The doublet invariably caused prey movement. Stimulated prey movement was closely followed (in 20 to 40 ms) by a full predatory strike consisting of a strong electric discharge volley and directed attack (Fig. 3 and movie S5), as characterized in the first experiments.

      Eels can also detect hidden fish: after probing with a weak discharge, the eel emits a high-voltage doublet that makes the hidden fish twitch, then senses this movement and follows with a full predatory strike.

    5. In each of four cases, tension responses in the curarized fish dropped to near zero, whereas the sham-injected fish continued to respond (fig. S3).

      The curare-injected fish no longer moved when shocked by the eel, whereas the sham-injected fish continued to respond.

    6. To determine whether the discharge induced muscle contractions by initiating action potentials directly in prey muscles or through activation of some portion of fish motor neurons, one of two similarly sized fish was injected with curare (an acetylcholine antagonist) so as to block the acetylcholine gated ion channels at the neuromuscular junction, whereas the other fish was sham-injected

      To find out how the muscle contraction is caused, two fish were prepared differently: (1) the first fish was injected with curare, a nerve poison that blocks transmission at the neuromuscular junction, and (2) the second fish was sham-injected as a control. If contractions persist under curare, the discharge acts directly on the muscles; if they stop, it acts through the motor neurons.

    7. An eel in the aquarium was separated from the fish by an electrically permeable agar barrier (Fig. 2A) (11) and fed earthworms, which it attacked with volleys of its high-voltage discharge. The discharge directed at the earthworms induced strong muscular contractions in the fish preparation, precisely correlated in time with the volley (no tension developed during the weak discharge). A steep rise in fish tension occurred with a mean latency of 3.4 ms (n = 20 trials) after the first strong pulse (Fig. 2B), which is similar to the 2.9-ms mean immobilization latency (n = 20 trials) observed in free-swimming fish.

      The fish connected to the force transducer was behind an agar barrier. When the eel emitted a strong electric discharge, the fish's muscles contracted; the fish did not react to the eel's weak discharges.

    8. To characterize the mechanism by which high-voltage volleys cause this remote immobilization of prey (10), anesthetized fish were pithed (to destroy the brain), the hole was sealed with cyanoacrylate, and the fish was attached to a force transducer.

      To understand how the eel's electric shocks affect the fish, anesthetized fish were pithed (their brains destroyed, ruling out brain-mediated responses) and connected to force transducers to record their muscle contractions.

    9. Electric eels emit three distinct types of electric organ discharges: (i) low-voltage pulses for sensing their environment, (ii) pairs and triplets of high-voltage pulses given off periodically while hunting in complex environments, and (iii) high-frequency volleys of high-voltage pulses during prey capture or defense

      Eels use three kinds of discharge: (i) mild electric pulses to gather information about their environment (like radar); (ii) strong double or triple pulses, given off periodically while hunting in complex environments; and (iii) volleys of many strong back-to-back pulses to catch fish or to defend themselves.

    10. To further investigate the fidelity of prey muscle contractions relative to the electric organ discharge, and the mechanism of the contractions’ induction

      To check the validity of the previous conclusion that the eel's electric shock causes muscle contraction, two fish with destroyed brains were tested.

    1. generates a new splice donor site (gt) 42 bases upstream of the wild-type donor site, thus generating a 14–amino acid deletion in the corresponding transcript (Fig. 3D).

      The insertion of the jumping DNA created a new splice donor site. This disrupted splicing, the process in which noncoding segments are cut out of the RNA transcript (not the DNA itself) before the transcript is translated into protein.

      The incorrect splicing removed 14 amino acids from the protein, producing a shortened, nonfunctional product.

    2. Using breeding experiments,

      The authors took individual lizards with different physical traits and mated them to see how these traits were transmitted to the offspring of these crosses.

    1. NPAS4, a transcription factor activated in response to depolarization

      Npas4 was found to activate distinct programs of late-response genes in inhibitory and excitatory neurons. It is an activity-dependent transcription factor that regulates inhibitory synapse number and function in cell culture. It is also expressed in pyramidal neurons of the hippocampus, where it promotes an increase in the number of inhibitory synapses on the cell soma and a decrease in the number of inhibitory synapses on the apical dendrites.

    2. binomial test

      It is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations from two categories.

      It is used in many surveys and genetic studies; a small worked example follows.
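
      As a small illustration (not the paper's actual data), a binomial test can ask whether an observed split between two categories deviates from a theoretically expected proportion, e.g., affected males versus affected females against an expected 50/50 split. This sketch uses scipy.stats.binomtest (SciPy >= 1.7; older versions provided binom_test):

        from scipy.stats import binomtest

        # Hypothetical counts: 80 affected males out of 100 affected children,
        # tested against an expected proportion of 0.5.
        result = binomtest(k=80, n=100, p=0.5)
        print(result.pvalue)  # a small p-value -> unlikely to be chance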

    3. Affymetrix Gene-Chip Human Mapping 500K single-nucleotide polymorphism (SNP) array, as well as bacterial artificial chromosome (BAC) comparative genomic hybridization (CGH) microarrays

      Both the single nucleotide polymorphism array and BAC Comparative Genomic Hybridization arrays are used to detect the number of copies of a specific locus in a subject's DNA. This allows us to know whether the locus is present on one or both chromosomes of a subject.

      To do this, control DNA and test DNA are labeled with fluorescent molecules of different colours (for example, red for the control and green for the test). After denaturation, the DNA hybridises to the array. Where test and control DNA are present in equal amounts, the spot fluoresces orange. Where the test DNA has lost a region, the spot appears red (the control signal dominates), and where the test DNA has gained a region, it appears green.

    4. knock-down

      A strategy for down-regulating the expression of a gene, for example by introducing an antisense oligodeoxynucleotide or ribozyme sequence directed against the targeted gene.

    5. RNA interference (RNAi)

      A biological pathway, found in many eukaryotes, in which RNA molecules inhibit gene expression, usually by the destruction of certain mRNA molecules. This process is controlled by the RNA-induced silencing complex (RISC).

      This procedure is also called co-suppression, post-transcriptional gene silencing (PTGS), or quelling.

      https://www.youtube.com/watch?v=cK-OGB1_ELE&noredirect=1

    6. neuronal membrane depolarization by elevated KCl

      This is a common experimental technique used to study the effect of enhanced neuronal activity on gene expression.

      Elevated KCl (i.e., increased extracellular potassium) produces a sustained depolarized state through three linked effects:

      First, raising extracellular K+ shifts the potassium equilibrium potential toward less negative values, so the normally hyperpolarizing outflow of K+ is reduced. With less hyperpolarizing outflow to oppose the depolarizing inflow of sodium (Na+), the resting membrane potential rises.

      Second, this slow depolarization opens sodium channels, and the resulting entry of Na+ ions depolarizes the cell further. Third, because the depolarization is slow, a large fraction of Na+ channels inactivate, which prevents the neuron from firing a full action potential.

      As a result, the cell remains in a slightly depolarized state.
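
      The first effect can be made concrete with the Nernst equation, E_K = (RT/zF) * ln([K+]_out / [K+]_in). A minimal Python sketch with typical textbook concentrations (the exact values in any given experiment may differ):

        import numpy as np

        def nernst_potential_mV(conc_out_mM, conc_in_mM, temp_C=37.0, z=1):
            """Equilibrium potential (mV) of an ion, via the Nernst equation."""
            R = 8.314    # gas constant, J / (mol K)
            F = 96485.0  # Faraday constant, C / mol
            T = temp_C + 273.15
            return 1000.0 * (R * T) / (z * F) * np.log(conc_out_mM / conc_in_mM)

        # Typical neuron: ~140 mM K+ inside, ~5 mM outside.
        print(nernst_potential_mV(5, 140))   # about -89 mV
        # Elevating extracellular K+ to ~50 mM shifts E_K far less negative,
        # depolarizing the resting membrane potential.
        print(nernst_potential_mV(50, 140))  # about -27 mV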

    7. hippocampal neurons

      Neurons in the hippocampus, which is a major component of the human brain and those of other vertebrates. It is part of the limbic system and has a crucial role in the consolidation of information, from short-term memory to long-term memory and spatial navigation.

      Primary cultures of rat and murine hippocampal neurons are widely used to reveal cellular mechanisms in neurobiology. By isolating and growing individual neurons, researchers are able to analyze properties related to cellular trafficking, cellular structure and individual protein localization using a variety of biochemical techniques.

    8. screens

      A screen, also known as a genetic screen, is a laboratory procedure used to create and detect mutant organisms and thereby provide information on gene function. To identify the function of an unknown gene, one strategy is to introduce random mutations into a population of organisms and then compare the mutants with control organisms, looking for differences in their physical properties, or phenotypes. A screen is used to identify and select individuals that show a phenotype of interest in a mutagenised population; in this case, the phenotypes of interest point to autism-associated genes.

    9. we reasoned that a prominent involvement of autosomal recessive genes in autism would be signaled by differences in the male-to-female (M/F) ratio of affected children in consanguineous (related) versus nonconsanguineous marriages

      The authors calculated the male-to-female ratio = (number of affected males)/(number of affected females), presented as a ratio of the form X:1, i.e., how many males are affected for every one affected female.

      The logic: X-linked recessive mutations skew the ratio toward males, whereas autosomal recessive mutations affect both sexes equally, and consanguinity raises the chance of inheriting two copies of an autosomal recessive mutation. So if autosomal recessive genes contribute strongly to autism, the M/F ratio should be closer to 1:1 in consanguineous families than in nonconsanguineous ones.

    10. Fisher's exact test

      Fisher's exact test is a statistical significance test for data in which objects are classified in two different ways; it is the standard exact method for analysing contingency tables (a worked sketch follows below the link).

      https://www.youtube.com/watch?v=jwkP_ERw9Ak
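
      A minimal Python sketch with a hypothetical 2x2 contingency table (say, affected versus unaffected children in consanguineous versus nonconsanguineous families):

        from scipy.stats import fisher_exact

        # Hypothetical 2x2 table:
        #                    affected  unaffected
        # consanguineous         12        88
        # nonconsanguineous       5        95
        odds_ratio, p_value = fisher_exact([[12, 88], [5, 95]])
        print(odds_ratio, p_value)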

    11. microarray screens

      An array is an orderly arrangement of samples where known and unknown DNA samples are matched according to base pairing rules.

      In this experimental setup, cDNA probes for known genes are immobilised on the array. Labelled cDNA derived from a sample's mRNA is hybridised to the array, and the signal at each spot reports the expression (or copy number) of the corresponding gene, allowing patterns from patients to be compared with controls.

      https://www.youtube.com/watch?v=pWk_zBpKt_w

    1. Our regression results include separate controls for each type of publication: any authorship position, and first or last author publications

      What the authors mean here is that the regression includes a separate control variable for each type of publication count, so that the estimated effects are not confounded by where a researcher's name appears in the author list (any position versus first- or last-author papers). A sketch of this kind of regression follows.
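
      A minimal sketch of what "separate controls" means in a regression, using statsmodels; all variable names and data are illustrative stand-ins, not the authors' dataset:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical grant-level data.
        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "percentile_score": rng.uniform(1, 50, n),
            "pubs_any_position": rng.poisson(10, n),
            "pubs_first_author": rng.poisson(3, n),
            "pubs_last_author": rng.poisson(3, n),
        })
        df["citations"] = 100 - 1.5 * df["percentile_score"] + rng.normal(0, 10, n)

        # Each publication count enters as its own term, so the score
        # coefficient is estimated net of prior output of every kind.
        model = smf.ols(
            "citations ~ percentile_score + pubs_any_position"
            " + pubs_first_author + pubs_last_author",
            data=df,
        ).fit()
        print(model.params)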

    2. measure applicant-level characteristics

      The authors studied some characteristics such as the grant history or the institutional affiliation to see if the previous work of the applicant has an impact on the result of the grant application.

    1. in randomly sampled files from all days of testing.

      From materials and methods: In addition, audio samples (n=272, 5-minute duration) were randomly chosen from all 12 days of the trapped, object, and empty conditions and were similarly analyzed.

      http://www.sciencemag.org/content/suppl/2011/12/07/334.6061.1427.DC1/Bartal.SOM.pdf

    2. Ultrasonic (~23 kHz) vocalizations were collected from multiple testing arenas with a bat-detector and were analyzed to determine whether rats emitted alarm calls

      From materials and methods:

      Audio files were collected from all sessions where only one condition was tested (26 rats in the trapped condition, 16 rats in the empty condition, and eight rats in the object condition).

      All audio files collected on days 1–3 of male rats in the trapped condition were analyzed for the presence of alarm calls.

      Judges blind to the experimental condition used the freeware Audacity 1.3 to locate potential alarm calls (about 23 kHz), and listened to each candidate segment to verify which of these segments indeed contained alarm calls.

      Each file was then categorized as either containing alarm calls or not containing alarm calls.

      The proportion of samples from each experimental condition that contained alarm calls was then calculated.

    3. and were unsurprised by door-opening

      See Figure 2E.

    4. using a consistent style

      See Figure 2D.

    5. they did so at short latency

      See Figure 2B.

    6. Yet alarm calls occurred too infrequently to support this explanation.

      Go back to Figure 2 panel F. Alarm calls accounted for only 14% of the ultrasonic noises rats made in the trapped condition.

      QUESTIONS: Do you agree with the statement that infrequency of alarm calls rules them out as the source of helping behavior?

      What experiment would you design to make the authors' interpretation stronger?

    7. To determine whether anticipation of social interaction is necessary to motivate door-opening, we tested rats in a modified setup in which the trapped animal could only exit into a separate arena (separated condition, Fig. 4, A and B)

      Here the authors investigate the motivation behind the helper rat opening the restrainer for their cagemate.

      QUESTION: Is the opening action driven by empathy for the distressed cagemate, or by the expectation to interact socially with the freed rat?

      By allowing the freeing of the trapped animal but preventing social interaction after freeing, the authors were able to determine the motivating factor behind the freeing action.

    8. Free rats circled the restrainer, digging at it and biting it, and contacted the trapped rat through holes in the restrainer (Fig. 1B

      Only in the scenario where the helper rat is in an arena with a trapped rat is the free rat's movement concentrated around the restrainer: the pattern of blue dots (the rat's head position) is densest near the red rectangle (the restrainer).

      Rats do not like to be in the middle of an open space and usually concentrate their movement close to the sides, as seen in the empty and object containing conditions. This tendency to hug the sides of a space is called thigmotaxis.

      QUESTION: Why does the rat in the 2+ empty condition only move along the perforated divide?

    9. Free rats’ heads were marked and their movements were recorded with a top-mounted camera for offline analysis (11)

      Measuring location and movement in the arena: the free tracking software ImageJ (v1.44e, NIH, USA) was used to convert the black marker on the rats' heads into x,y coordinates denoting the rat's location in each frame, at a rate of 0.5 frames per second (FPS).

      These data were then used to calculate movement velocity (termed activity). Other indices (time away from wall and persistence ratio) were calculated using the free SEE software (28), developed specifically for the analysis of rodent movement in an arena. [http://www.sciencedirect.com/science/article/pii/S0149763401000227]
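
      A minimal sketch of how velocity can be derived from such x,y coordinates; the frame rate follows the methods quoted above, and the coordinates are hypothetical:

        import numpy as np

        FPS = 0.5        # frames per second, from the methods
        DT = 1.0 / FPS   # seconds elapsed between frames

        # Hypothetical head coordinates (cm), one row per frame.
        xy = np.array([[10.0, 12.0], [11.5, 12.5], [13.0, 14.0], [13.2, 14.1]])

        # Distance moved between consecutive frames, divided by elapsed time.
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        velocity_cm_per_s = steps / DT
        print(velocity_cm_per_s)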

    10. Rats were housed in pairs for 2 weeks before the start of testing

      These rat pairs are referred to as "cagemates" throughout the study. This 2-week period of housing in the same cage established familiarity between the rats.

    1. to test the hypothesis that Hipp-applied BDNF works via the IL mPFC

      Now that the authors believe the hippocampus is the source of BDNF, they want to test their hypothesis by injecting BDNF into the hippocampus and seeing whether they can block the effects of increased BDNF levels with an antibody in the IL mPFC. If they can, it means that hippocampal BDNF is acting at the IL mPFC.

    2. We took advantage of the fact that BDNF infusions increase BDNF levels in efferent targets

      The authors infuse BDNF into the hippocampus to demonstrate that this is the source of BDNF in the IL mPFC.

      Previous studies have shown that BDNF infusion in regions of the brain such as the prefrontal cortex results in increased BDNF levels in prefrontal cortical targets.

    3. BDNF protein levels in the Success group were elevated relative to the Failure group in the hippocampus [t(9) = 4.370, P = 0.002], but not the mPFC or amygdala

      The authors want to know what brain region is supplying BDNF to the medial prefrontal cortex, so they separate animals based on their natural variation in extinction ability.

      Animals that extinguish well have elevated levels of BDNF in the hippocampus relative to animals that extinguish poorly. These results suggest that the hippocampus may be the source of BDNF.

    4. We then selected two subgroups on the basis of their ability to successfully recall extinction on day 3

      Some rats are naturally bad at extinguishing fearful associations. The authors take advantage of this fact to see whether there is a correlation between BDNF levels in various brain regions and extinction ability.

      They used an ELISA assay (Sample methods: http://www.thermofisher.com/us/en/home/references/protocols/cell-and-tissue-analysis/elisa-protocol/general-elisa-protocol.html) to quantify protein levels in the hippocampus, mPFC, and amygdala.

      The naturally poor extinguishers had lower BDNF levels in the hippocampal region.

    5. It is possible, therefore, that IL BDNF mediates its extinction-like effects through NMDA receptors. To test this, we conditioned rats as previously on day 1. On day 2, in the absence of training, rats received one of the following treatment combinations: (i) saline injection (intraperitoneally) + saline infusion into IL (SAL + SAL), (ii) saline injection + BDNF infusion (SAL + BDNF), or (iii) CPP injection + BDNF infusion (CPP + BDNF). On day 3, all rats were returned to the chambers for a single-tone test

      The authors think that BDNF may be working through NMDA receptors. They test this by treating rats with an NMDA antagonist (injected peripherally), to see whether it blocks the effects of BDNF infusion (directly into brain).

      Rats that received the NMDA antagonist have freezing behavior similar to that of controls, suggesting that BDNF is indeed acting through NMDA receptors in this situation.

    6. freezing could be reinstated after unsignaled footshocks

      The authors want to see whether the original memory is still intact. After extinction, rats are given footshocks without a preceding tone.

      The re-emergence of a fearful association between the tone and shock indicates that the memory has not been degraded.

    7. BDNF could inhibit fear expression (similar to extinction), or it could have degraded the original fear memory

      There are two ways in which BDNF could reduce freezing behavior on day 3: It could degrade the original memory, or it could just reduce the expression of fear, without having any effect on the initial fear memory. This is an important distinction to make!

    8. Although the effect of BDNF on fear did not require extinction training, it did require conditioning, because BDNF infused 1 day before conditioning did not significantly reduce freezing (Fig. 1C)

      This experiment shows that an infusion of BDNF reduces freezing only if given after fear conditioning. By showing that the effects of BDNF require, but do not alter, fear conditioning, the authors show that BDNF is acting specifically on the extinction process.

    9. We therefore repeated the previous experiment but omitted extinction training from day 2

      Extinction training was removed in order to see whether the effects of BDNF were extinction-dependent.

    10. rats were subjected to auditory fear conditioning

      This is a 3-day process that allows the researcher to study the formation and extinction of associative fear memories in an animal model.

      Sample methods: http://www.jove.com/video/50871/contextual-cued-fear-conditioning-test-using-video-analyzing-system

      Fear conditioning (day1) - The rats are exposed to tones followed by footshocks.

      Fear extinction (day 2) - The rats are exposed to the same tones multiple times, without a footshock.

      Testing (day 3) - The animals are tested to see whether they are still fearful of the tone.

  11. Aug 2016
    1. cagemate versus chocolate paradigm

      The authors pitted freeing a trapped cagemate against opening a restrainer containing chocolate chips. This tests how the helper rat values freeing their friend versus accessing a tasty treat.

    1. All of the above precision estimates are conservative because the leave-half-out validations used only half of the reference samples from each area in order to assign the other half as unknowns

      The authors suggest that the true accuracy of their assignment method is even higher than these estimates, because actual seized ivory is assigned using all of the reference samples, whereas the validation could use only half of them at a time.

    2. We test the accuracy of these results by assigning each of the reference samples, treating them as samples of unknown origin, under several cross-validation scenarios (table S3).

      To test how accurately the reference map predicts geographic origins, the authors treated some of the reference map samples as unknown samples. They then determined how well the reference map predicts the origin of that sample.

    3. A total of 1350 reference samples, 1001 savanna and 349 forest, were collected at 71 locations across 29 African countries, with 1 to 95 samples per location (table S2)

      Table S2 has a complete list of the countries, location names, regions, sample sizes, and longitudes and latitudes for all 1350 reference samples.

      The method relies on noninvasive techniques that acquire DNA from elephant scat.

    4. spatial smoothing (13, 14) to compute separate continuous allele frequency maps across Africa from savanna or forest elephant reference samples

      The "spatial smoothing" method assumes that populations close to one another are genetically more similar than populations that are more distant.

      The scientists used this method separately for reference samples from savanna and forest elephants to create maps that spanned all of Africa. This way, they could assign tusks without known origin to any location in Africa, not just to those few areas where reference samples are available.
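
      The paper's method (refs. 13, 14) is a specific Bayesian spatial model; as a simplified illustration of the underlying idea only, a Gaussian-kernel smoother estimates the allele frequency at any map point as a distance-weighted average over the sampled locations:

        import numpy as np

        def smoothed_allele_frequency(query_xy, sample_xy, sample_freqs,
                                      bandwidth_km=500.0):
            """Distance-weighted allele-frequency estimate at a point.

            Simplified stand-in for the paper's Bayesian smoothing:
            nearby reference locations get more weight, encoding the
            assumption that close populations are genetically similar.
            """
            d = np.linalg.norm(sample_xy - query_xy, axis=1)   # distances (km)
            w = np.exp(-0.5 * (d / bandwidth_km) ** 2)         # Gaussian weights
            return np.sum(w * sample_freqs) / np.sum(w)

        # Hypothetical reference locations (km grid) and allele frequencies.
        locs = np.array([[0.0, 0.0], [300.0, 100.0], [900.0, 800.0]])
        freqs = np.array([0.10, 0.15, 0.60])
        print(smoothed_allele_frequency(np.array([200.0, 50.0]), locs, freqs))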

    5. Here we use DNA-based methods to assign population of origin to African elephant ivory from 28 large seizures (≥0.5 metric tons) made across Africa and Asia from 1996 to 2014 (table S1)

      Forensics experts have traditionally been unable to determine the geographic origin of poached ivory. That's because the illegal ivory is usually caught while in transit, and ivory isn't necessarily exported from the same country in which it was poached.

    1. nonmeasles mortality should be correlated with measles incidence data

      Author's Hypothesis: 1

      To assess whether the hypothesis is correct, the authors state that, first and foremost, the number of non-measles deaths each year should correlate with (rise and fall with) the level of measles infection in that same year. In other words, years with high levels of measles disease should also show high levels of non-measles infection and mortality, and vice versa.

  12. Jul 2016
    1. Third, the strength of this association should be greatest

      Author's Hypothesis: 3

      For any hypothesized duration of immune-amnesia, the prevalence of immune-amnesia in the population at a given point in time (i.e., the number of people still suffering the immune-suppressive aftereffects of measles) can be calculated mathematically. The prevalence series implied by each candidate duration can then be correlated with the number of non-measles infectious-disease deaths from year to year, so different candidate durations will correlate better or worse with those deaths.

      This third part of the hypothesis simply states that the best estimate of the true duration of immune-amnesia is the candidate duration whose calculated prevalence correlates most strongly with non-measles infectious-disease deaths, as sketched below.
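
      A minimal Python sketch of that logic, with hypothetical yearly series: for each candidate duration, the prevalence of immune-amnesia is approximated as the rolling sum of measles incidence over a window of that length, and the duration whose prevalence best correlates with non-measles mortality wins.

        import numpy as np

        def best_amnesia_duration(measles_incidence, nonmeasles_mortality,
                                  max_years=5):
            """Pick the immune-amnesia duration whose implied prevalence
            correlates best with non-measles mortality."""
            best_d, best_r = None, -np.inf
            for d in range(1, max_years + 1):
                # Rolling d-year sum of incidence approximates prevalence.
                prevalence = np.convolve(measles_incidence, np.ones(d))
                prevalence = prevalence[: len(measles_incidence)]
                # Correlate, skipping the first d ramp-up years.
                r = np.corrcoef(prevalence[d:], nonmeasles_mortality[d:])[0, 1]
                if r > best_r:
                    best_d, best_r = d, r
            return best_d, best_r

        # Hypothetical yearly measles incidence and non-measles mortality.
        incidence = np.array([5.0, 9, 2, 8, 1, 7, 3, 6, 2, 9, 4, 5])
        mortality = np.array([3.0, 6, 4, 6, 3, 5, 4, 5, 3, 6, 4, 5])
        print(best_amnesia_duration(incidence, mortality))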

    1. we have developed a strategy to convert a Drosophila heterozygous recessive mutation into a homozygous condition manifesting a mutant phenotype

      The authors' goal was to develop a novel way to create homozygous mutations without the need to interbreed heterozygotes. They turned to CRISPR-Cas9 technology to accomplish this. The idea was to insert genes encoding Cas9 and the gRNA into the genome precisely at the gRNA cut site.

      First, they created a genetic construct containing the Cas9 protein coding sequence, a Cas9 guide RNA targeting their locus of interest, and flanking homology arms with sequence homology to their sequence of interest. They used this construct to create transgenic flies to test out their method.