  1. Jan 2017
    1. The last measure for the success of the replications was a subjective rating from the replication teams. Each team was asked if they thought they had replicated the original effect. Out of 100 studies, 39 were judged to be successful replications.

    2. Combining original and replication effect sizes for cumulative evidence

      Fourth, the authors combined the original and replication effect sizes and calculated a cumulative estimation of the effects. They wanted to see how many of the studies that could be analyzed this way would show an effect that was significantly different from zero if the evidence from the original study and that of the replication study was combined.

      Results showed that 68% of the studies analyzed this way indicated that an effect existed. In the remaining 32% of the studies, the effect found in the original study, when combined with the data from the replication study, could no longer be detected.

    3. Statistical analyses

      Because the large-scale comparison of original and replication studies is a new development in the field of psychology, the authors had to formulate an analysis plan that could not rely much on previous research. They decided to use five key indicators for evaluating the success of the replications: they compared the original and replication studies in terms of the number of significant outcomes, their p-values, and their effect sizes; they assessed how many studies were subjectively judged to replicate the original effect; and they ran a meta-analysis over the combined effect sizes.

    4. Aggregate data preparation

      After each team had completed the replication attempt, independent reviewers checked that the procedure was well documented and followed the initial replication protocol, and that the statistical analyses of the effects selected for replication were correct.

      Then, all the data were compiled to conduct analyses not only on the individual studies, but also across all replication attempts. The authors wanted to know whether studies that replicated differed from those that did not. For instance, they investigated whether studies that replicated were more likely to come from one journal than another, or whether studies that did not replicate tended to have higher p-values than studies that could be replicated.

    5. constructed a protocol for selecting and conducting high-quality replications

      Before collecting data for the replication studies, the authors produced a detailed protocol that described how they would select the studies available for replication, how it would be decided which effect in each study would be the target of the replication attempt, and which principles would guide all replication attempts. Importantly, this protocol was made public, and all individual replication attempts had to adhere to it.

  2. Nov 2016
    1. further characterize the consequences of ZIKV infection during different stages of fetal development

      The authors used models that allowed them to study early stages of brain development. They suggest there is more work to be done to determine the effects of ZIKV infection on later stages of fetal development.

    2. brain organoids recapitulate the orchestrated cellular and molecular early events comparable to the first trimester fetal neocortex

      Neurospheres are useful for modeling very early (embryonic) development, while organoids are used to study later stages of development.

    3. In addition to MOCK infection, we used dengue virus 2 (DENV2), a flavivirus with genetic similarities to ZIKV (11, 19), as an additional control group.

      The authors also compared ZIKV infection to dengue virus 2 (DENV2) infection. DENV2 is similar to ZIKV.

    4. reduced by 40% when compared to brain organoids under mock conditions

      Brain organoids infected with Zika virus were, on average, 40% smaller than organoids grown under mock conditions.

    5. The growth rate of 12 individual organoids (6 per condition) was measured during this period

      Both infected and uninfected organoids were immersed in a fixative solution to "freeze" them and allow them to be visualized.

      It was then possible to use an electron microscope to compare the ultrastructure of infected and uninfected cells.

    6. morphological abnormalities and cell detachment

      Neurospheres that contained cells infected with Zika virus were oddly shaped, and some cells broke away.

    7. to explore the consequences of ZIKV infection during neurogenesis and growth

      In order to obtain neural stem cells from human iPS, researchers cultured iPS in a special medium.

      To create neurospheres and organoids, neural stem cells were divided and again cultured in a special medium.

      Finally, ZIKV was diluted and added to the different types of culture for 2 hours.

    1. we cannot directly assess whether the NIH systematically rejects high-potential applications

      Because the authors only looked at projects that received grant funding, their analysis does not take into account how many high-potential projects were rejected by peer review.

    2. our estimates are likely downward biased

      The authors acknowledge that there is sometimes a long delay between a grant award and patenting, so their analysis may not be a good indicator of how relevant research is to commercial applications.

    3. Our final analysis

      Finally, the authors wanted to figure out whether peer reviewers are good at identifying the most innovative or practical applications, or whether they are simply good at weeding out low-quality research.

    4. These residuals represent the portions of grants’ citations or publications that cannot be explained by applicants’ previous qualifications or by application year or subject are

      The authors removed the influence of the grant applicant's background, demographics, and writing skill in order to look at what effect a reviewer's expertise has.
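
      As a rough illustration of this residualization idea, here is a minimal Python sketch (hypothetical column names and a hypothetical grants.csv file, not the authors' exact specification):

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: one row per funded grant.
      df = pd.read_csv("grants.csv")

      # Step 1: explain citation outcomes using only the applicant's prior record
      # and the application's year and subject area.
      baseline = smf.ols(
          "citations ~ prior_publications + prior_citations + C(year) + C(subject_area)",
          data=df,
      ).fit()

      # Step 2: the residuals are the part of the outcome that these
      # characteristics cannot explain.
      df["citation_residual"] = baseline.resid

      # Step 3: ask whether peer-review scores still predict this leftover variation.
      score_model = smf.ols("citation_residual ~ percentile_score", data=df).fit()
      print(score_model.params["percentile_score"])
      ```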

    5. rewarding an applicant’s grant proposal writing skills

      In this model, the authors try to control for the fact that an application could be selected because the applicant writes well, rather than on the quality of the application itself.

    6. Controlling for publication history attenuates but does not eliminate the relationship

      Again, controlling for the variable of a PI's research background does not eliminate the relationship the authors originally found.

    7. adds controls describing a PI’s publication history

      The authors control for yet another potential variable, an applicant's research background (they use the PI's publication history to do this).

    8. Controlling for cohort and field effects does not attenuate our main finding

      The authors' adjustments to control for various external effects did not change their original findings.

    9. We also include NIH institute-level fixed effects to control for differences in citation and publication rates by fields

      The authors try to remove the effect of an article's field on its impact. For example, a biochemistry article may appear to have a smaller impact because of the high rate of publication and citation in that field, whereas a physics article's impact may be inflated due to a lower publication and citation rate.
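
      For example, field-level fixed effects can be added as one dummy variable per NIH institute; a minimal sketch (hypothetical column names, not the authors' exact model):

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: one row per grant, with the NIH institute recorded.
      df = pd.read_csv("grants.csv")

      # C(institute) adds a dummy variable for each institute, so the score
      # coefficient is estimated from comparisons within each institute,
      # netting out field-wide differences in publication and citation rates.
      model = smf.ols("citations ~ percentile_score + C(institute)", data=df).fit()
      print(model.params["percentile_score"])
      ```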

    10. potential concerns

      Several factors can lead to the trends in Figure 1 being misinterpreted, like the age of the grant and the field of study. The authors address these concerns by adjusting their model to account for these effects.

    11. a 1-SD worse score is associated with a 14.6% decrease in grant-supported research publications and a 18.6% decrease in citations to those publications

      Here the authors estimated how much a decrease of one standard deviation on the percentile score affected the number of publications and citations of a grant recipient.

    12. patents that either directly cite NIH grant support or cite publications acknowledging grant support

      The last measure is the number of patents that cite those publications from (i), or acknowledge support from the grant.

    13. the total number of citations that those publications receive through 2013

      The second measure is the total number of citations the publications from (i) received through 2013.

    14. the total number of publications that acknowledge grant support within 5 years of grant approval

      The first measure of success is the number of papers a team published during the 5 years after they received the grant.

    15. funding is likely to have direct effect on research productivity

      The authors considered grants which were already funded and competing for renewal. This makes it easier to attribute differences in research productivity to the peer review process, rather than the amount of funding the project has.

    16. percentile score

      The percentile score is assigned by the peer review committee. It ranks all applications to indicate which were most favored by the committee. A lower score means the committee liked the application more.

    17. peer review has high value-added if differences in grants’ scores are predictive of differences in their subsequent research output

      If the evaluation by the peer review committee is correlated with the quality of work put out by the research group, then peer review has high value-added (meaning, it is useful for choosing research groups with the highest potential).

    18. Because NIH cannot possibly fund every application it receives, the ability to distinguish potential among applications is important for its success.

      The outcome of this study could have important implications for how the NIH evaluates and chooses who it gives money to.

    19. our paper asks whether NIH selects the most promising projects to support

      Previous work has shown that receiving a grant increases scientific productivity. However, the paper authors want to know if the NIH is awarding grants to projects that will make the best use of the money.

  3. Oct 2016
    1. We transformed effect sizes into correlation coefficients whenever possible.

      For the third indicator of replication success, the effect sizes of original and replication studies, the authors calculated correlation coefficients to express effect sizes on a common scale. In a single study, when the means of two groups are very different (a large effect), the correlation coefficient will be close to 1, and when the means of two groups are similar (little or no effect), the correlation coefficient will be close to 0.

      The effect size of original studies was always coded as positive (values between 0 and 1). When the effect in the relevant replication study went in the same direction, the effect size was also coded as positive (values between 0 and 1), but when the effect in the replication went in the other direction, the effect size was coded as negative (values between -1 and 0).
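
      One standard way to express an effect size as a correlation coefficient is to convert the reported test statistic; a minimal sketch using the common t-to-r conversion (an illustration with hypothetical numbers, not necessarily the exact formula applied to every study in the project):

      ```python
      import math

      def t_to_r(t: float, df: int, same_direction: bool = True) -> float:
          """Convert a t statistic to an effect-size correlation r.

          Standard conversion: r = sqrt(t^2 / (t^2 + df)).
          The sign encodes direction: positive if the replication effect
          goes the same way as the original, negative otherwise.
          """
          r = math.sqrt(t**2 / (t**2 + df))
          return r if same_direction else -r

      # Hypothetical example: original study t(38) = 2.50, and a replication
      # with t(60) = 1.10 in the opposite direction.
      print(t_to_r(2.50, 38, same_direction=True))    # ~0.38
      print(t_to_r(1.10, 60, same_direction=False))   # ~-0.14
      ```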

    2. Using only the nonsignificant Pvalues of the replication studies and applying Fisher’s method (26), we tested the hypothesis that these studies had “no evidential value” (the null hypothesis of zero-effect holds for all these studies).

      The first analysis assesses all replication studies that yielded nonsignificant results. By applying Fisher's method, the authors tested whether, taken together, these failed replications still contain evidence that the effects exist ("evidential value"), or whether the combined data give no reason to reject the null hypothesis that all of these effects are zero.
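
      A minimal sketch of Fisher's method on a set of hypothetical nonsignificant p-values; under the null hypothesis that all effects are zero, the combined statistic follows a chi-square distribution with 2k degrees of freedom:

      ```python
      import numpy as np
      from scipy import stats

      # Hypothetical nonsignificant p-values from several replication studies.
      p_values = np.array([0.23, 0.45, 0.08, 0.31, 0.12, 0.60])

      # Fisher's method: X^2 = -2 * sum(ln(p_i)), df = 2 * number of studies.
      statistic = -2.0 * np.sum(np.log(p_values))
      df = 2 * len(p_values)
      combined_p = stats.chi2.sf(statistic, df)

      print(statistic, combined_p)
      # scipy also provides this directly:
      # stats.combine_pvalues(p_values, method="fisher")
      ```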

    3. Second, we compared the central tendency of the distribution of P values of original and replication studies using the Wilcoxon signed-rank test and the t test for dependent samples.

      Moreover, the original and the replication studies are compared in terms of the p-values they yield: are the p-values very similar, or extremely different from each other?
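
      Both tests are available in scipy; a minimal sketch with hypothetical paired p-values:

      ```python
      from scipy import stats

      # Hypothetical p-values from matched original/replication study pairs.
      p_original    = [0.001, 0.020, 0.004, 0.030, 0.015, 0.008]
      p_replication = [0.040, 0.300, 0.010, 0.450, 0.220, 0.060]

      # Wilcoxon signed-rank test compares the central tendency of the paired values.
      w_stat, w_p = stats.wilcoxon(p_original, p_replication)

      # Paired (dependent-samples) t test on the same pairs.
      t_stat, t_p = stats.ttest_rel(p_original, p_replication)

      print(w_stat, w_p)
      print(t_stat, t_p)
      ```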

    4. We tested the hypothesis that the proportions of statistically significant results in the original and replication studies are equal using the McNemar test for paired nominal data and calculated a CI of the reproducibility parameter.

      Next, the authors conducted another test to find out if the number of results in the original studies that produced significant results was equal to or different from the number of replication studies that produced significant results.
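
      A minimal sketch of the McNemar test on a hypothetical paired table of significant/nonsignificant outcomes, using the statsmodels implementation:

      ```python
      from statsmodels.stats.contingency_tables import mcnemar

      # Hypothetical paired counts:
      # rows    = original study (significant, not significant)
      # columns = replication study (significant, not significant)
      table = [[33, 59],
               [ 2,  6]]

      # exact=True bases the test on the discordant pairs (here 59 and 2).
      result = mcnemar(table, exact=True)
      print(result.statistic, result.pvalue)
      ```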

    5. However, original studies that interpreted nonsignificant P values as significant were coded as significant (four cases, all with P values < 0.06).

      Here, the authors explain how they deal with the problem that some of the original studies reported results as significant, although in fact, they were non-significant. In each case, the threshold that is customarily set to determine statistical significance (p<0.05) was not met, but all reported p-values fell very close to this threshold (0.06>p>0.05). Since the original authors treated these effects as significant, the current analysis did so as well.

  4. Sep 2016
    1. To determine whether the benefits afforded by tau reduction were sustained, we examined older mice

      By examining older hAPP mice with and without tau, the authors could test whether the protective effects of tau reduction are sustained as the animals age.

    2. Probe trials, in which the platform was removed and mice were given 1 min to explore the pool, confirmed the beneficial effect of tau reduction

      After the researchers trained all mice to swim to a platform hidden under the water surface, they removed the platform and measured how much time the mice spent in the area where the platform used to be. This allowed the researchers to test the memory of the mice.

    3. hAPP/Tau+/+mice took longer to master this task (Fig. 1A; P < 0.001). In contrast, hAPP/Tau+/– and hAPP/Tau–/– mice performed at control levels

      Alzheimer’s mice (with hAPP) that had normal amounts of tau (Tau+/+) took longer to learn to swim to a visible platform than the other five types of mice.

    4. We crossed hAPP mice (11) with Tau–/– mice (12) and examined hAPP mice with two (hAPP/Tau+/+), one (hAPP/Tau+/–), or no (hAPP/Tau–/–) endogenous tau alleles, compared with Tau+/+, Tau+/–, and Tau–/– mice without hAPP (13)

      The authors used mice that were genetically engineered to express a human copy of the amyloid precursor protein (called hAPP mice). hAPP mice are a common animal model of Alzheimer’s disease. They develop amyloid plaques and severe memory and cognitive problems later in life, just like humans with the disease.

      The authors bred these hAPP mice with other mice that were missing both their genes for the tau protein (called Tau-/- mice). From this breeding plan, the authors produced hAPP mice with normal amounts of tau (hAPP/Tau+/+), with half the normal amount of tau (hAPP/Tau+/-), and with no tau (hAPP/Tau-/-).

      They also produced mice without the human APP gene that had normal, half, and no tau.

    1. To determine whether this activation of prey motor neurons was the result of central nervous system (spinal) activity or activity in efferent branches of motor neurons, the dual tension experiment was repeated twice with extensively double-pithed fish (in which both the brain and spinal cord were destroyed, but the branches of motor efferents were left intact within the fish body) and compared with a brain-pithed fish

      As established before, the paralysis of the fish is mediated by its motor neurons. The setup was now repeated with two differently prepared fish. First fish: brain and spinal cord destroyed (only the peripheral branches of the motor neurons left intact). Second fish: only the brain destroyed (spinal cord intact). This checks which part of the nervous system produces the activation of the fish's motor neurons (its movement): if the first fish still responds, the activation arises in the peripheral branches of the motor neurons; if only the second fish responds, the spinal cord (central nervous system) is required.

    2. n this study, I designed a set of experiments to explore the impacts of the electric eel discharges on potential prey and the mechanism that operates during such attacks.

      The purpose of this study is to further understand the effects of the electric eel's attacks on its prey and to explore the multiple strategies used by the eel during prey detection, hunting, and capture.

    3. To test this hypothesis, a pithed fish was placed in a thin plastic bag to isolate it from the eel’s discharge. The electrically isolated fish was positioned below an agar barrier, with electrical leads embedded in the head and tail region (10) that allowed production of artificial fish twitch by the experimenter. Artificial fish twitch was triggered remotely through a stimulator (Fig. 4A), allowing control over its timing and occurrence. When the stimulating electrodes were inactive, eel doublets caused no response in the pithed fish and eels did not attack the preparation (Fig. 4B and movie S6). However, when the stimulator was configured to trigger fish twitch when the eel produced a doublet, the eel’s full “doublet attack” behavior was replicated (Fig. 4C and movie S6).

      To test the hypothesis that eels detect hidden fish by sending out doublets, a pithed fish was placed in a thin plastic bag to electrically isolate it from the eel's discharges. Isolated this way, the fish gave no response (movement) to these signals, but a fish twitch could be produced artificially with a stimulator controlled by the experimenter.

    4. To identify the function of this additional behavior, eels were presented with prey hidden below a thin agar barrier (Fig. 3C). In some cases, eels detected prey through the barrier and attacked directly, but in other cases, the eel investigated the agar surface with a low-amplitude electric organ discharge and then produced a high-voltage doublet. The doublet invariably caused prey movement. Stimulated prey movement was closely followed (in 20 to 40 ms) by a full predatory strike consisting of a strong electric discharge volley and directed attack (Fig. 3 and movie S5), as characterized in the first experiments.

      Eels can also detect hidden fish: the eel probes with weak electric pulses, then emits a strong double pulse (doublet) that makes the hidden fish twitch; the eel detects this movement and strikes with a full volley.

    5. In each of four cases, tension responses in the curarized fish dropped to near zero, whereas the sham-injected fish continued to respond (fig. S3).

      The curare-poisoned fish no longer moved when shocked by the eel, whereas the sham-injected fish continued to respond.

    6. To determine whether the discharge induced muscle contractions by initiating action potentials directly in prey muscles or through activation of some portion of fish motor neurons, one of two similarly sized fish was injected with curare (an acetylcholine antagonist) so as to block the acetylcholine gated ion channels at the neuromuscular junction, whereas the other fish was sham-injected

      To find out how the muscle contractions are caused, two fish were prepared differently: (1) the first fish was injected with curare, a nerve poison, to test whether the contraction is transmitted chemically through the neuromuscular junction; (2) the second fish was sham-injected as a control, to test whether the discharge acts directly (electrically) on the muscle.

    7. An eel in the aquarium was separated from the fish by an electrically permeable agar barrier (Fig. 2A) (11) and fed earthworms, which it attacked with volleys of its high-voltage discharge. The discharge directed at the earthworms induced strong muscular contractions in the fish preparation, precisely correlated in time with the volley (no tension developed during the weak discharge). A steep rise in fish tension occurred with a mean latency of 3.4 ms (n = 20 trials) after the first strong pulse (Fig. 2B), which is similar to the 2.9-ms mean immobilization latency (n = 20 trials) observed in free-swimming fish.

      The fish connected to the force transducer was behind a jelly barrier. When the eel sent out a strong electric shock the fish moved. The fish did not react to weak electric signals.

    8. To characterize the mechanism by which high-voltage volleys cause this remote immobilization of prey (10), anesthetized fish were pithed (to destroy the brain), the hole was sealed with cyanoacrylate, and the fish was attached to a force transducer.

      To understand how the eel's electric shocks affect the fish, anesthetized fish were connected to force transducers to record their movement.

    9. Electric eels emit three distinct types of electric organ discharges: (i) low-voltage pulses for sensing their environment, (ii) pairs and triplets of high-voltage pulses given off periodically while hunting in complex environments, and (iii) high-frequency volleys of high-voltage pulses during prey capture or defense

      Eels have three strategies: (i) mild electric shocks to get information about their environment (like radar); (ii) strong double or triple shocks given off while hunting; (iii) many strong back-to-back shocks to catch fish or to defend themselves.

    10. To further investigate the fidelity of prey muscle contractions relative to the electric organ discharge, and the mechanism of the contractions’ induction

      To check the validity of the previous conclusion that the eel's electric shock causes muscle contraction, two fish with destroyed brains were tested.

    1. generates a new splice donor site (gt) 42 bases upstream of the wild-type donor site, thus generating a 14–amino acid deletion in the corresponding transcript (Fig. 3D).

      The insertion of the jumping DNA caused a mutation. This mutation affected splicing, the process by which the RNA transcript is edited before it is used to produce the correct protein.

      This incorrect splicing resulted in a shortened non-functional protein.

    2. Using breeding experiments,

      The authors took different individual lizards with different physical traits, and then mated them to see how these traits were transmitted to the offspring of these mixes.

    1. NPAS4, a transcription factor activated in response to depolarization

      Npas4 was found to cause activation of distinct programs of late-response genes in inhibitory and excitatory neurons. It's an activity-dependent transcription factor that regulates inhibitory synapse number and function in cell culture. It is also expressed in pyramidal neurons of the hippocampus where it promotes an increase in the number of inhibitory synapses on the cell soma and a decrease in the number of inhibitory synapses on the apical dendrites.

    2. binomial test

      It is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations from two categories.

      It is used in many settings, such as surveys, in which each observation falls into one of two categories.
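
      For example, to test whether an observed split between two categories deviates from a theoretically expected proportion (a hypothetical illustration using scipy):

      ```python
      from scipy import stats

      # Hypothetical data: 37 observations in the first category out of 100,
      # tested against an expected probability of 0.5 for that category.
      result = stats.binomtest(k=37, n=100, p=0.5)   # requires scipy >= 1.7
      print(result.pvalue)
      ```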

    3. Affymetrix Gene-Chip Human Mapping 500K single-nucleotide polymorphism (SNP) array, as well as bacterial artificial chromosome (BAC) comparative genomic hybridization (CGH) microarrays

      Both the single nucleotide polymorphism array and BAC Comparative Genomic Hybridization arrays are used to detect the number of copies of a specific locus in a subject's DNA. This allows us to know whether the locus is present on one or both chromosomes of a subject.

      To do this, control DNA and test DNA are labeled with fluorescent molecules of different colours (for example, red for the control and green for the test). After denaturation, the DNA hybridises to the array. If the sample DNA and the control DNA carry the same number of copies, we observe orange fluorescence. If the sample DNA has lost (deleted) a region, the spot appears red, and if it has gained a region, it appears green.
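
      The comparison behind the colours can be summarised as a log ratio of the two fluorescence signals; a minimal, purely illustrative sketch (hypothetical intensities and thresholds):

      ```python
      import math

      def classify_locus(test_signal: float, control_signal: float) -> str:
          """Classify a locus from the log2 ratio of test vs. control signal.

          ratio near 0 -> same copy number (spot looks orange)
          ratio << 0   -> region lost in the test sample (red dominates)
          ratio >> 0   -> region gained in the test sample (green dominates)
          """
          log2_ratio = math.log2(test_signal / control_signal)
          if log2_ratio < -0.5:
              return "loss"
          if log2_ratio > 0.5:
              return "gain"
          return "no change"

      # Hypothetical probes
      print(classify_locus(480, 1000))   # loss
      print(classify_locus(1020, 1000))  # no change
      print(classify_locus(2100, 1000))  # gain
      ```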

    4. knock-down

      A strategy for down-regulation of expression of a gene by incorporating into the genome an antisense oligodeoxynucleotide or ribozyme sequence directed against the targeted gene.

    5. RNA interference (RNAi)

      A biological pathway, found in many eukaryotes, in which RNA molecules inhibit gene expression, usually by the destruction of certain mRNA molecules. This process is controlled by the RNA-induced silencing complex (RISC).

      This procedure is also called co-suppression, post-transcriptional gene silencing (PTGS), or quelling.

      https://www.youtube.com/watch?v=cK-OGB1_ELE&noredirect=1

    6. neuronal membrane depolarization by elevated KCl

      This is an important and common experimental technique that is used to study the result of enhanced neuronal activity on alterations in gene expression.

      The elevated KCl (an increase in extracellular potassium) has three mechanistic effects that result in a sustained depolarized state:

      First, the normally present hyperpolarizing outflow of potassium (K+) is slowed down, while the depolarizing inflow of sodium (Na+) is not slowed to the same extent; less hyperpolarizing outflow shifts the equilibrium potential of the cell toward depolarization.

      Second, this slow rise in membrane potential lets more Na+ ions enter the cell through open sodium channels, resulting in further depolarization. Third, because the depolarization is slow, it also causes partial Na+ channel inactivation, which prevents the neuron from triggering a full action potential.

      As a result, the cell remains in a slightly depolarized state.

    7. hippocampal neurons

      Neurons in the hippocampus, which is a major component of the human brain and those of other vertebrates. It is part of the limbic system and has a crucial role in the consolidation of information, from short-term memory to long-term memory and spatial navigation.

      Primary cultures of rat and murine hippocampal neurons are widely used to reveal cellular mechanisms in neurobiology. By isolating and growing individual neurons, researchers are able to analyze properties related to cellular trafficking, cellular structure and individual protein localization using a variety of biochemical techniques.

    8. screens

      A screen, also known as a genetic screen, is a laboratory procedure used to create and detect mutant organisms and so provide information on gene function. To identify the function of an unknown gene, one strategy is to introduce mutations throughout the genome and then compare the mutant and control organisms in the hope of detecting a difference in their physical properties, or phenotypes. A screen is used to identify and select individuals that possess a phenotype of interest in a mutagenised population; in this case, the phenotypes of interest point to autism-associated genes.

    9. we reasoned that a prominent involvement of autosomal recessive genes in autism would be signaled by differences in the male-to-female (M/F) ratio of affected children in consanguineous (related) versus nonconsanguineous marriages

      The authors calculated the male-to-female ratio = (number of affected males)/(number of affected females), which is reported as a ratio of the form X:1. This tells you how many males are affected for every affected female.

      The logic was that if faulty genes were transmitted on autosomes (in a recessive fashion), the ratio should differ between consanguineous and nonconsanguineous families.

    10. Fisher's exact test

      Fisher's exact test is a statistical significance test, which is very useful in categorising data that result from classifying objects in two different ways; it is used for analysing contingency tables.

      https://www.youtube.com/watch?v=jwkP_ERw9Ak
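
      A minimal sketch on a hypothetical 2x2 table (affected boys and girls in consanguineous versus nonconsanguineous families), using scipy:

      ```python
      from scipy import stats

      # Hypothetical counts of affected children:
      #                      boys  girls
      # consanguineous       [ 24,   12]
      # nonconsanguineous    [ 80,   20]
      table = [[24, 12],
               [80, 20]]

      odds_ratio, p_value = stats.fisher_exact(table)
      print(odds_ratio, p_value)
      ```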

    11. microarray screens

      An array is an orderly arrangement of samples where known and unknown DNA samples are matched according to base pairing rules.

      In this experimental setup, the cDNA derived from the mRNA of known genes is immobilised. The expression pattern is then compared to the expression pattern of a gene responsible for a disease.

      https://www.youtube.com/watch?v=pWk_zBpKt_w

    1. Our regression results include separate controls for each type of publication: any authorship position, and first or last author publications

      What the authors mean here is that they made statistical adjustments that remove the effect that the position of a name in the author list of a publication can have.

    2. measure applicant-level characteristics

      The authors studied some characteristics such as the grant history or the institutional affiliation to see if the previous work of the applicant has an impact on the result of the grant application.

    1. in randomly sampled files from all days of testing.

      From materials and methods: In addition, audio samples (n=272, 5-minute duration) were randomly chosen from all 12 days of the trapped, object, and empty conditions and were similarly analyzed.

      http://www.sciencemag.org/content/suppl/2011/12/07/334.6061.1427.DC1/Bartal.SOM.pdf

    2. Ultrasonic (~23 kHz) vocalizations were collected from multiple testing arenas with a bat-detector and were analyzed to determine whether rats emitted alarm calls

      From materials and methods:

      Audio files were collected from all sessions where only one condition was tested (26 rats in the trapped condition, 16 rats in the empty condition, and eight rats in the object condition).

      All audio files collected on days 1–3 of male rats in the trapped condition were analyzed for the presence of alarm calls.

      Judges blind to the experimental condition used the freeware Audacity 1.3 to locate potential alarm calls (about 23 kHz), and listened to each candidate segment to verify which of these segments indeed contained alarm calls.

      Each file was then categorized as either containing alarm calls or not containing alarm calls.

      The proportion of samples from each experimental condition that contained alarm calls was then calculated.

    3. and were unsurprised by door-opening

      See Figure 2E.

    4. using a consistent style

      See Figure 2D.

    5. they did so at short latency

      See Figure 2B.

    6. Yet alarm calls occurred too infrequently to support this explanation.

      Go back to Figure 2 panel F. Alarm calls accounted for only 14% of the ultrasonic noises rats made in the trapped condition.

      QUESTIONS: Do you agree with the statement that infrequency of alarm calls rules them out as the source of helping behavior?

      What experiment would you design to make the authors' interpretation stronger?

    7. To determine whether anticipation of social interaction is necessary to motivate door-opening, we tested rats in a modified setup in which the trapped animal could only exit into a separate arena (separated condition, Fig. 4, A and B)

      Here the authors investigate the motivation behind the helper rat opening the restrainer for their cagemate.

      QUESTION: Is the opening action driven by empathy for the distressed cagemate, or by the expectation to interact socially with the freed rat?

      By allowing the freeing of the trapped animal but preventing social interaction after freeing, the authors were able to determine the motivating factor behind the freeing action.

    8. Free rats circled the restrainer, digging at it and biting it, and contacted the trapped rat through holes in the restrainer (Fig. 1B

      Only in the scenario where the helper rat was in an arena with a trapped rat was the movement of the free rat concentrated around the restrainer. The pattern of blue dots (the rat's head position) is denser close to the red rectangle (the restrainer).

      Rats do not like to be in the middle of an open space and usually concentrate their movement close to the sides, as seen in the empty and object containing conditions. This tendency to hug the sides of a space is called thigmotaxis.

      QUESTION: Why does the rat in the 2+ empty condition only move along the perforated divide?

    9. Free rats’ heads were marked and their movements were recorded with a top-mounted camera for offline analysis (11)

      Measuring location and movement in the arena: The free tracking software ImageJ (v1.44e, NIH, USA) was used to convert the black marker on the rats' heads into x,y coordinates denoting the rats' location in each frame, at a rate of 0.5 frames per second (FPS).

      These data were then used to calculate movement velocity (termed activity). Other indices (time away from wall and persistence ratio) were calculated using the free SEE software (28), developed specifically for the analysis of rodent movement in an arena. [http://www.sciencedirect.com/science/article/pii/S0149763401000227]
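
      A minimal sketch of the velocity calculation from tracked coordinates (hypothetical positions; at 0.5 frames per second, consecutive frames are 2 seconds apart):

      ```python
      import numpy as np

      # Hypothetical head coordinates (in cm), one pair per analyzed frame.
      x = np.array([10.0, 12.0, 15.0, 15.5, 18.0])
      y = np.array([ 5.0,  6.5,  6.0,  9.0, 10.0])
      dt = 1.0 / 0.5   # seconds between frames at 0.5 FPS

      # Distance moved between consecutive frames, divided by the time step.
      step_distance = np.hypot(np.diff(x), np.diff(y))
      velocity = step_distance / dt     # cm per second

      print(velocity)
      print(velocity.mean())            # overall activity level
      ```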

    10. Rats were housed in pairs for 2 weeks before the start of testing

      These rat pairs are referred to as "cagemates" throughout the study. This 2-week period of housing in the same cage established familiarity between the rats.

    1. to test the hypothesis that Hipp-applied BDNF works via the IL mPFC

      Now that the authors believe the hippocampus is the source of BDNF, they want to test their hypothesis by injecting BDNF into the hippocampus and seeing whether they can block the effects of increased BDNF levels with an antibody in the IL mPFC. If they can, it means that hippocampal BDNF is acting at the IL mPFC.

    2. We took advantage of the fact that BDNF infusions increase BDNF levels in efferent targets

      The authors infuse BDNF into the hippocampus to demonstrate that this is the source of BDNF in the IL mPFC.

      Previous studies have shown that BDNF infusion in regions of the brain such as the prefrontal cortex results in increased BDNF levels in prefrontal cortical targets.

    3. BDNF protein levels in the Success group were elevated relative to the Failure group in the hippocampus [t(9) = 4.370, P = 0.002], but not the mPFC or amygdala

      The authors want to know what brain region is supplying BDNF to the medial prefrontal cortex, so they separate animals based on their natural variation in extinction ability.

      Animals that extinguish well have elevated levels of BDNF in the hippocampus relative to animals that extinguish poorly. These results suggest that the hippocampus may be the source of BDNF.

    4. We then selected two subgroups on the basis of their ability to successfully recall extinction on day 3

      Some rats are naturally bad at extinguishing fearful associations. The authors take advantage of this fact to see whether there is a correlation between BDNF levels in various brain regions and extinction ability.

      They used an ELISA assay (Sample methods: http://www.thermofisher.com/us/en/home/references/protocols/cell-and-tissue-analysis/elisa-protocol/general-elisa-protocol.html) to quantify protein levels in the hippocampus, mPFC, and amygdala.

      The naturally poor extinguishers had lower BDNF levels in the hippocampal region.

    5. It is possible, therefore, that IL BDNF mediates its extinction-like effects through NMDA receptors. To test this, we conditioned rats as previously on day 1. On day 2, in the absence of training, rats received one of the following treatment combinations: (i) saline injection (intraperitoneally) + saline infusion into IL (SAL + SAL), (ii) saline injection + BDNF infusion (SAL + BDNF), or (iii) CPP injection + BDNF infusion (CPP + BDNF). On day 3, all rats were returned to the chambers for a single-tone test

      The authors think that BDNF may be working through NMDA receptors. They test this by treating rats with an NMDA antagonist (injected peripherally), to see whether it blocks the effects of BDNF infusion (directly into brain).

      Rats that received the NMDA antagonist have freezing behavior similar to that of controls, suggesting that BDNF is indeed acting through NMDA receptors in this situation.

    6. freezing could be reinstated after unsignaled footshocks

      The authors want to see whether the original memory is still intact. After extinction, rats are given footshocks without a preceding tone.

      The re-emergence of a fearful association between the tone and shock indicates that the memory has not been degraded.

    7. BDNF could inhibit fear expression (similar to extinction), or it could have degraded the original fear memory

      There are two ways in which BDNF could reduce freezing behavior on day 3: It could degrade the original memory, or it could just reduce the expression of fear, without having any effect on the initial fear memory. This is an important distinction to make!

    8. Although the effect of BDNF on fear did not require extinction training, it did require conditioning, because BDNF infused 1 day before conditioning did not significantly reduce freezing (Fig. 1C)

      This experiment shows that an infusion of BDNF reduces freezing only if given after fear conditioning. By showing that the effects of BDNF require, but do not alter, fear conditioning, the authors show that BDNF is acting specifically on the extinction process.

    9. We therefore repeated the previous experiment but omitted extinction training from day 2

      Extinction training was removed in order to see whether the effects of BDNF were extinction-dependent.

    10. rats were subjected to auditory fear conditioning

      This is a 3-day process that allows the researcher to study the formation and extinction of associative fear memories in an animal model.

      Sample methods: http://www.jove.com/video/50871/contextual-cued-fear-conditioning-test-using-video-analyzing-system

      Fear conditioning (day1) - The rats are exposed to tones followed by footshocks.

      Fear extinction (day 2) - The rats are exposed to the same tones multiple times, without a footshock.

      Testing (day 3) - The animals are tested to see whether they are still fearful of the tone.

  5. Aug 2016
    1. cagemate versus chocolate paradigm

      The authors pitted freeing a trapped cagemate against opening a restrainer containing chocolate chips. This tests how the helper rat values freeing their friend versus accessing a tasty treat.

    1. All of the above precision estimates are conservative because the leave-half-out validations used only half of the reference samples from each area in order to assign the other half as unknowns

      The authors suggest that the true accuracy of their assignment method is even higher than these precision estimates, because the actual assignments of seized ivory used all of the reference samples rather than only half.

    2. We test the accuracy of these results by assigning each of the reference samples, treating them as samples of unknown origin, under several cross-validation scenarios (table S3).

      To test how accurately the reference map predicts geographic origins, the authors treated some of the reference map samples as unknown samples. They then determined how well the reference map predicts the origin of that sample.

    3. A total of 1350 reference samples, 1001 savanna and 349 forest, were collected at 71 locations across 29 African countries, with 1 to 95 samples per location (table S2)

      Table S2 has a complete list of the countries, location names, regions, sample sizes, and longitudes and latitudes for the all 1350 reference samples.

      The method relies on noninvasive techniques that acquire DNA from elephant scat.

    4. spatial smoothing (13, 14) to compute separate continuous allele frequency maps across Africa from savanna or forest elephant reference samples

      The "spatial smoothing" method assumes that populations close to one another are genetically more similar than populations that are more distant.

      The scientists used this method separately for reference samples from savanna and forest elephants to create maps that spanned all of Africa. This way, they could assign tusks without known origin to any location in Africa, not just to those few areas where reference samples are available.

    5. Here we use DNA-based methods to assign population of origin to African elephant ivory from 28 large seizures (≥0.5 metric tons) made across Africa and Asia from 1996 to 2014 (table S1)

      Forensics experts have traditionally been unable to determine the geographic origin of poached ivory. That's because the illegal ivory is usually caught while in transit, and ivory isn't necessarily exported from the same country in which it was poached.

    1. nonmeasles mortality should be correlated with measles incidence data

      Author's Hypothesis: 1

      To assess whether the hypothesis is correct, the authors state that, first and foremost, the incidence (amount) of non-measles deaths each year should correlate with (increase or decrease alongside) the level of measles infections that occurred in that same year. In other words, when there are high levels of measles there should also be high levels of non-measles infections and mortality, and vice versa.

  6. Jul 2016
    1. Third, the strength of this association should be greatest

      Author's Hypothesis: 3

      The mathematically calculated prevalence of immune-amnesia in the population (i.e., the number of people carrying the adverse immune-suppressive after-effects of measles) at any given point in time depends on how long immune-amnesia is assumed to last. For each hypothesized duration of immune-amnesia, the calculated prevalence can be correlated with the number of non-measles infectious disease deaths from year to year, so each tested duration will correlate differently (better or worse) with the number of non-measles deaths.

      This third part of the hypothesis simply states that, as the hypothesized duration of immune-amnesia is changed, the 'best guess' for the true duration of immune-amnesia will be that duration which leads to a calculated prevalence of immune-amnesia that best correlates with the amount of non-measles infectious disease deaths.

    2. Second, an immune memory loss mechanism should present as a strengthening of this association when measles incidence data are transformed to reflect an accumulation of previous measles cases (a measles “shadow”).

      Author's Hypothesis: 2

      If the long-term immune-amnesia predicted by the authors is real, there should be a measurable signal in the population-level data. The authors show this by accounting for the length of time that the immune system remains impaired due to immune-amnesia (or immunomodulation, as they also refer to it). They suggest that this immune-amnesia is like a measles "shadow" that remains even after a measles epidemic has ended. At the population level, it can be accounted for as an accumulation of past measles cases or epidemics.

      For example, if there is a very large amount of measles (a high incidence of measles) in year 1, and the immune-amnesia following measles lasts for 3 years, then for up to three years after this large measles epidemic there might be a noticeable increase in other infectious diseases due to the impaired immune resistance from the immune-amnesia.

      So, if the amnesia lasts for 3 years, then during year 2, for example, the amount of other infectious diseases during that year will be proportional not only to the amount of measles that happened during year 2, but rather to the amount of measles that happened during both years 1 and 2. Similarly, during year three, the amount of non-measles infections will be a reflection of the amount of measles that happened in that year, year 3, as well as years 2 and 1. However, because we said that the immune-amnesia in this example lasts for 3 years only, then if we look at the amount of non-measles infections in year 4, it will be a reflection of the amount of measles that happened in that year, year 4, plus year 3 and year 2. It will not however reflect the large measles epidemic that happened in year 1 anymore, because those children would no longer have immune amnesia by year 4.

    3. We propose that, if loss of acquired immunological memory after measles exists, the resulting impaired host resistance should be detectable in the epidemiological data collected during periods when measles was common and [in contrast to previous investigations that focus on low-resource settings (5–12)] should be apparent in high-resource settings where mortality from opportunistic infections during acute measles immune suppression was low.

      This is the authors' main hypothesis for the paper.

      The authors hypothesize that if impaired host resistance (immune memory loss, also known as immune-amnesia, or, as they describe it in the text, immunomodulation) exists after measles infection, then they will be able to observe this phenomenon in population-level or epidemiological data. As discussed above, this sort of data is collected for each country by government health agencies (like the U.S. Centers for Disease Control and Prevention) in order to monitor disease incidence and prevalence over time. It includes the age of patients, why they are sick, when they are sick, and whether they die, along with lots of other data not used in this study.

      As previously discussed, they will be examining epidemiological data from high-income countries where the general incidence of childhood disease is lower.

      This main hypothesis is divided into four sub-hypotheses, which are examined in this paper and annotated in the next paragraph.

    1. we have developed a strategy to convert a Drosophila heterozygous recessive mutation into a homozygous condition manifesting a mutant phenotype

      The authors' goal was to develop a novel way to create homozygous mutations without the need to interbreed heterozygotes. They turned to CRISPR-Cas9 technology to accomplish this. The idea was to insert genes encoding Cas9 and the gRNA into the genome precisely at the gRNA cut site.

      First, they created a genetic construct containing the Cas9 protein coding sequence, a Cas9 guide RNA targeting their locus of interest, and flanking homology arms with sequence homology to their sequence of interest. They used this construct to create transgenic flies to test out their method.

    2. Polymerase chain reaction (PCR) analysis of the y locus in individual y– F1 progeny confirmed the precise gRNA- and HDR-directed genomic insertion of the y-MCR construct in all flies giving rise to y– female F2 progeny

      The authors used a technique called PCR to determine the genotype of F1 flies. They used primers (small DNA fragments with the same sequence as the DNA region of interest) to amplify either the wildtype y+ locus or the y– locus (with the MCR insertion).

      The presence of a DNA band after the wildtype reaction indicates that the fly has wildtype DNA. The presence of a band in either HA (homology arm) reaction indicates that the fly has DNA containing the MCR insertion.

    3. We tested this prediction in D. melanogaster with the use of a characterized efficient target sequence (y1) (5) in the X-linked yellow (y) locus as the gRNA target and a vasa-Cas9 transgene as a source of Cas9

      After making their mutagenic chain reaction construct, the authors needed a way to test its efficiency at introducing homozygous mutations. So, they decided to design guide RNAs that would cause mutations in a gene called yellow because they can easily see if this gene was mutated by the color of the flies.

      A fly must have 2 copies of the recessive yellow allele to have a yellow color. All other flies are brown. The wildtype allele is X-linked, so males must inherit the allele from their mother, while females inherit the allele from both parents.

    1. Because SEL was randomly assigned, the study can separately identify any additional role of SEL.

      Because of the significant finding that arrest rates drop for those involved in the summer jobs program compared to those who are not involved, this study is also prepared to ask why this might be the case.

      Recall that those youth randomly assigned to the treatment group were also randomly assigned either to take part in social-emotional learning (SEL) or not.

      This random assignment makes it possible to also assess whether social-emotional learning plays any role in reducing violent crime activity.

    2. Because OSP could affect how youth perceive and respond to social interactions differently from how it affects their economic situations, opportunities for crime, or drug use, the study estimates program effects separately by crime type.

      The researcher is using previous studies on crime and social interventions to make the case for how she is proceeding with analysis.

      Previous studies find that crimes originate in different contexts. Violent crimes tend to be conflict oriented. Nonviolent crimes tend to originate from economic circumstances.

      She is therefore going to look at how the summer jobs program (One Summer Plus, or OSP) influences rates of crimes according to their violent or non-violent (property, drug, or other crimes) nature.

    3. The study randomly assigns 1634 8th- to 12th-grade applicants

      Assignment refers to the process by which the study population is put into either the treatment or the control group.

      Random assignment in this study refers to a technique in which the researcher placed participants into groups (a control group, a jobs-program-only group, and a jobs program plus social-emotional learning group) in a manner that ensures there are no pre-existing differences between the groups.

      Any difference that can be found in the groups at the end of the study can therefore be attributed to the study, and not to pre-existing differences.

  7. Jun 2016
    1. Fig. 3.  Predicted impact of the use of sulfate-based coagulants in drinking water treatment on sewer corrosion mitigation for average sewer and sewage conditions

      Figure 3 is composed of two y-axes:

      1) The left one shows the cost as $ per m3 of water treated and is to be used for the blue-colored data.

      2) The right one shows the % increase in cost and is to be used for the red-colored data.

      Because the point of reference for increase in cost is "no alum dosing", % increase in cost starts from zero even though it in fact costs a certain amount of money.

      In other words, the authors assume the increase in cost for "no alum dosing" as zero and calculate the rest according to:

      $$\frac{X - Y}{Y} \times 100$$

      where X = cost with alum dosing and Y = cost with no alum dosing.
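
      For example, with hypothetical costs of Y = $0.20 per m3 of water treated without alum dosing and X = $0.27 per m3 with alum dosing:

      $$\frac{0.27 - 0.20}{0.20} \times 100 = 35\%$$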

    2. Through the addition of ferric chloride, a commonly used chemical for corrosion mitigation (7), the dissolved sulfide concentration was maintained at low levels (mostly below 0.5 mg S/liter) in all cases.

      The purpose of this test is to find out how much ferric chloride is going to be necessary to maintain the dissolved sulfide concentration below 0.5 mg S/L.

      The test conditions were provided (shown in the Figure 3) as 0, 5, 10, and 15 mg S/L.

      The results of this test showed that as sulfate addition levels increase, the amount of ferric chloride necessary to drop the sulfide concentration below 0.5 mg/L will increase. As a result, more ferric chloride use leads to more money spent on corrosion prevention.

    3. We performed 72 additional simulation runs

      Researchers investigated the effect of sulfate-based coagulants in different scenarios:

      1-changing source water sulfate concentration,

      2-changing hydraulic retention time,

      3-changing the fraction of rising mains (pressurized sewer pipes through which sewage is pumped),

      4-changing sewage temperature.

      They showed that important savings can be achieved simply by decreasing the use of sulfate-based coagulants (aluminum sulfate in this case) in all different scenarios except source water sulfate concentration.

      The amount of sulfate-based coagulant did not really affect the cost of corrosion prevention when the source water already has high levels of sulfate (above 10 mg S/L).

    4. Fig. 2.  Sulfate concentrations in source water and drinking water.

      Figure 2 is a bar graph. It shows sulfate concentration on the y-axis and two different scenarios on the x-axis:

      1) Without aluminum sulfate dosing during drinking water treatment,

      2) With aluminum sulfate dosing during drinking water treatment.

      The figure emphasizes that there is a clear difference between the source water and drinking water when the plants use aluminum sulfate during drinking water treatment.

      The inset figure is drawn by taking the averages of source and drinking water values in two different scenarios.

    5. Sulfate concentrations in source water, drinking water, and sewage in the sampled suburban area in 2009–2010

      Figure 1 is composed of a bar chart and an inset pie chart. It shows two different representations of the same data, collected when the authors monitored the sulfate concentrations in source water, drinking water, and sewage.

      On the y-axis, sulfate concentration is shown. The x-axis contains the three data sources: source water, drinking water, and sewage.

      Figure 1 draws the ideal water cycle by assuming a fixed sulfate concentration (no loss in the process) during the water cycle. It shows the increase in the sulfate concentration as water is collected from the source until its disposal as sewage.

      If the total sulfate concentration in sewage (approximately 17.5 mgS/L as seen from the Figure) is 100%, the contribution of coagulant was calculated as 52% as in the inset pie chart.

    1. with medium from Cos 7 cells transfected with a nAG plasmid or with a control plasmid

      Cos 7 cells transfected with nAG plasmid are expected to release nAG protein into the medium. This is an easy way for the authors to make nAG so that they can add it to limb blastema cells.

      To do this, the authors allow the Cos 7 cells some time to secrete the nAG protein into the medium they are contained in and then simply transfer those media to their limb blastemal cells.

      The control plasmid will not produce nAG protein and instead acts as control to show what happens when Cos 7 media is transferred to limb blastemal cells.

    2. It is difficult to understand the events underlying cell division in limb mesenchyme because of the complexity of epithelial-mesenchymal interactions in development and regeneration

      There is always a lot of "noise" in biology research. The number of outside influences that could be affecting your results is boundless. Consequently, it can be hard to draw conclusions from this type of data.

      To simplify things and reduce the noise, the authors have opted to use cultured blastema cells to further study nAG.

      When deciding on an experimental strategy such as this, it is important to consider what is lost and gained by studying these cells outside of the organism.

    3. The newt on the left has regenerated its control left limb, whereas the right denervated limb has failed to regenerate.

      This control animal has had both limbs amputated. However, the right limb was electroporated with an empty vector.

      This group of animals functions as a control in this experiment.

    4. In the normal limb, there was weak staining of a subset of glands in the dermis

      In the normal (i.e., not amputated) limb the authors found that there was low staining for the presence of nAG. This was done using immunohistochemistry as described in the second figure's annotation panel.

      What does this suggest about its possible functions/role? Why would we expect to see none/little of it considering the background information?

    5. detected by phosphatase-labeled secondary antibodies

      Antibodies can have enzymes linked to them. In this instance, an antibody had a phosphatase added to it. Because of that, wherever this antibody binds you will see phosphatase activity.

      To exploit this, the authors added a yellow solution made of two compounds, which in the presence of a phosphatase has a phosphate group removed. This changes the yellow dye into a blue precipitate.

    6. using two different antibodies directed at non-overlapping sequences

      Although it is definitely more complex than this, antibodies generally have an amino acid sequence that they bind heavily to.

      This is likely not surprising because, as you can imagine, an antibody against the flu virus is not going to target herpes, and vice versa: a herpes antibody will not target the flu virus.

      Why might it be a good idea to test an immunoblot with two different antibodies?

    7. myc-tagged nAG was expressed after transfection

      The authors created transgenic cells using a product called lipofectamine. Transgenic cells contain DNA that is not normally found in that organism.

      In this case the authors introduced plasmid DNA encoding myc-nAG. This expresses a nAG protein with an added "myc" epitope. See the earlier explanation of epitope-labeled proteins for further information.

      Lipofectamine-based transfection (the insertion of foreign DNA) is a fairly well understood phenomenon. For a clear explanation of the theory behind it, please see the manufacturer's website.

    8. bait and with prey libraries derived from both normal newt limb and limb blastema

      Yeast two-hybrid prey libraries are derived by reverse transcription (mRNA to cDNA) of total RNA extracted from the unamputated (normal) limb or the limb blastema.

      Which RNAs are present is determined by what genes are expressed. The likely reason the authors chose to use RNA from normal limbs and the limb blastema is because they thought that the expressed genes would be different in each.

      As a result, the cDNA libraries for the normal limb and the limb blastema are different (with some overlap). By using both, the authors increase their chance of finding prey for their nAG bait.

      For more information on yeast two-hybrid screens, please see the annotation for figure 1.

      For an overview of how cDNA libraries are made, please see the following diagram

    9. without N or C terminal signal/anchoring sequences

      The N (amino) and C (carboxyl) termini begin and end a protein, respectively. That is, there is a free NH2 group at the start of a protein and a free COOH group at its end.

      Often there can be special sequences of amino acids shortly before or after these terminals. These often act as "zip codes" in that they tell the protein what to do and where to go.

      For the case of Prod1, the N terminus signals for secretion and the C terminus signals for anchoring and modification with GPI.

      The authors wisely chose not to include these sequences in their yeast two-hybrid construct. Yeast are eukaryotic cells and can perform many of the same functions as vertebrate cells.

      Consequently, leaving these sequences in place may well have caused the Prod1 protein to be secreted. This is not something you want for an experiment that relies on your protein being in the nucleus!

    10. It remains unclear which molecules are responsible for the activity of the nerve and wound epidermis

      This is really the main question the authors are trying to answer. Which molecule, if any, is producing the effect seen in the previous experiments?

    1. Southern blotting (DNA) analysis re- vealed that several resultant clones had undergone the desired recombination event

      Southern blotting is a technique that uses radioactive probes to detect DNA sequences. The probes will only stick to complementary DNA, enabling researchers to look for very specific sequences.

      Here, the authors are using it to confirm that cells have incorporated a foreign DNA sequence.

    2. Transfection was done with LipofectAmine (Life Technologies) and subconfluent monolayers of cells

      Lipofectamine is a lipid-based compound that helps cells take up foreign DNA, a process called transfection. Researchers can generate a DNA sequence, combine the DNA with lipofectamine, and add this mix to the cell culture. The foreign DNA will be able to enter the cell and code for a protein of interest.

    3. Time lapse experiments

      These types of experiments can monitor the level or location of a protein over the course of minutes, hours, or even days.

      Here, the authors tracked a fluorescently labeled histone protein for 36 hours, taking pictures at specific intervals (see Figure 5). This allowed them to detect defects in mitosis when p53 or p21 was removed.

    4. histone H2B–green fluorescent protein fusion vector

      Here, the authors used a combination of two proteins fused together: (1) a histone protein (H2B), which is found only in the nucleus of a cell and helps organize the DNA, and (2) a green fluorescent protein (GFP), originally found in jellyfish, which glows green in response to a particular wavelength of light.

      This combination enables scientists to watch the nucleus of a cell, because the fusion protein localizes to the nucleus and glows green there.

    5. we disrupted the p53 gene by homologous recombination

      The authors introduced a piece of DNA that is similar in sequence to the p53 gene and the surrounding area. But instead of p53, this DNA encodes a resistance gene that allows a cell to tolerate a toxic drug. The cell recognizes the "p53-like" DNA, triggering a repair response.

      Homologous recombination replaces the normal p53 gene with the drug resistance gene, but only a small number of cells undergo this replacement. After treatment with the drug, only cells that have incorporated the resistance gene (and therefore lack p53) survive, which lets the scientists isolate them. The result is a line of cells without p53 that can be used to study the gene's role.

    6. This may be because some p53-independent p21 synthesis occurred after irradiation of p53−/−cells

      The authors ran a western blot (a technique to separate and identify specific proteins) to analyze the levels of p53 and p21 protein (see Figure 4A).

      The authors found that in cells lacking p53 (the last two lanes on the right), a small amount of p21 was still present after irradiation. This tells them that something other than p53 must be turning on expression of p21 in response to DNA damage.

    7. Smad4 gene

      As a control, the authors disrupted a gene that is involved in a different pathway than p53 (the Smad4 gene) to make sure that gene disruption itself did not affect their results.

    8. The HCT116 cell line was chosen because it has apparently intact DNA damage–dependent and spindle-dependent checkpoints, and it is suitable for targeted homologous recombination (4, 9, 15)

      Recall that cancer cell lines can carry lots of mutations and other changes in important regulatory genes (see above).

      The researchers had to be careful when choosing a cell line to study because there could be additional factors other than p53 status (so-called "confounding variables") that would affect their results. So they started with a cell line that had p53 genes and various other checkpoint genes intact, and then disrupted only the p53 gene by homologous recombination.

    9. three with intact p53 genes and three with mutant genes

      Here, the authors wanted to compare cells with normal p53 with cells with mutant p53. They treated each of the six lines with radiation to induce DNA damage, and then looked at what happened to the different cells. In this way, they could investigate the role of p53 in the DNA damage response.

    1. To address whether the development of functional blood vessels was required for the recruitment of myeloid precursors into the brain rudiment

      Using a mouse model lacking the sodium-calcium exchanger 1 gene (Ncx-1), the authors were able to show that an established, functional vasculature is necessary for the migration of yolk sac macrophages into the brain.

    2. To determine at which stage Runx1+ precursors or their progeny seed the brain during development, we injected Runx1Cre/wt:Rosa26R26R-LacZ pregnant females with a single dose of 4′OHT at E7.25 to E7.5 and traced the apparition of labeled cells into the brain rudiment at different time points after injection

      In this experiment the authors again crossed two knock-in mice strains.

      The first one had CRE fused to an estrogen receptor and expressed under the Runx1 promoter as described previously.

      The second knock-in strain was similar to the strain expressing eYFP downstream of the ROSA26 transcription start site described above, but instead of using eYFP, the authors used another reporter gene called LacZ. LacZ encodes the protein β-galactosidase (β-gal), which can be easily detected by immunohistochemistry.

      In the ROSA26 lacZ reporter (ROSA26R) mouse, a transcriptional stop cassette flanked by loxP sites was inserted just downstream of a transcription start site at ROSA26 locus. When intact, this cassette prevents expression of β-gal from the downstream lacZ coding sequence. The stop cassette is excised upon exposure to CRE recombinase, which mediates recombination between the loxP sites, thus permitting expression from the lacZ reporter.

      The activity of β-galactosidase can easily be detected with single-cell resolution by staining tissues with X-Gal. Thus, CRE activity ultimately results in a blue precipitate in labeled cells.

    3. We crossed the Runx1-MER-Cre-MER mice (Runx1Cre/wt) with the Cre-reporter mouse strain Rosa26R26R-eYFP/R26R-eYFP and induced recombination by a single injection of 4-hydroxytamoxifen (4′OHT) into pregnant females at different days of gestation

      The authors used two transgenic mouse strains.

      The first strain expressed a CRE recombinase protein fused to two murine estrogen-receptor ligand-binding domains (MERs), placed downstream of the promoter of the Runx1 gene. This ensured that CRE would be expressed only in Runx1-positive cells and would become functional only after activation by the estrogen receptor antagonist tamoxifen.

      The second strain of mice had enhanced yellow fluorescent protein (eYFP) cDNA knocked-in (inserted) downstream of a transcription start site at the ubiquitously expressed ROSA26 locus. A transcriptional stop cassette flanked by loxP sites was inserted between the transcription start site and the eYFP cDNA. When intact, this cassette prevents expression of eYFP. The stop cassette is excised upon exposure to CRE recombinase activated by tamoxifen, permitting expression of the eYFP reporter gene.

      After crossing these two strains of mice, the authors injected pregnant dams with 4-hydroxytamoxifen, an active metabolite of tamoxifen and a selective estrogen receptor antagonist. It bound to the estrogen-receptor domains of the MER-CRE-MER fusion protein, which then entered the nucleus, allowing CRE to act on the floxed genes.

      This recombination resulted in progeny that expressed eYFP in Runx1-expressing cells.

    4. we used Cx3cr1gfp/+ knockin mice (17) because the fractalkine receptor (CX3CR1) is a marker of early myeloid progenitors (18) and microglia

      To be able to detect myeloid cells expressing the CX3CR1 receptor (also known as the fractalkine receptor) in the brain, the authors used a reporter mouse in which a gene encoding green fluorescent protein was inserted, or knocked in, at the CX3CR1 locus. This way, every cell that expresses the CX3CR1 gene fluoresces green.

      Read more here about how specific genes are targeted:

      http://studentreader.com/gene-targeting/

    5. we reconstituted sublethally irradiated C57BL/6 CD45.2+ newborns with hematopoietic cells isolated from CD45.1+ congenic mice

      The authors exposed recipient C57BL/6 newborn mice to a slightly less than lethal dose of irradiation to obliterate their developing bone marrow and any blood cell progenitors present outside the bone marrow such as in the fetal liver or spleen. They then restored their bone marrow and hematopoietic system by injecting bone marrow cells or fetal liver cells that contained hematopoietic progenitors from the donor mice.

      The donor mice were genetically identical to the recipient mice with the exception of the CD45 allele (CD45.1), which codes for a transmembrane protein expressed on all white blood cells.

      Using flow cytometry the authors were able to distinguish resident CD45.2+ (positive, or expressing) cells from donor CD45.1+ cells stained with an allele-specific antibody.

      http://www.jove.com/video/4208/differentiating-functional-roles-gene-expression-from-immune-non

    1. Second, a 6-day treatment of primary brain capillary endothelial cells with rGDF11 (40 ng/ml) increased their proliferation by 22.9% as compared with that of controls (fig. S11), but not in the presence of a TGF-β inhibitor (fig. S12), confirming that GDF11 has a direct biological effect on these cells through the p-SMAD pathway

      The authors conducted an in vitro experiment to test the specific downstream effects of GDF11. They treated brain capillary endothelial cells with GDF11, as well as a TGF-beta inhibitor, for 6 days.

      In the presence of the inhibitor, GDF11 no longer increased proliferation.

      This indicates that GDF11 acts directly on these cells through the TGF-β/SMAD pathway to exert its effect on blood vessels.

    2. Given the interconnection between the vasculature and neural stem cells, we asked whether young blood factors can also rejuvenate blood vessel architecture and function. To test this, we created “angiograms,” 3D reconstructions of the blood vessels

      To measure the volume of blood vessels, and to create an "angiogram," the authors imaged mouse blood vessels on a confocal microscope and captured z-plane stacks over the total thickness of the section. The z-stacks were added to create 3D angiograms for calculation of total blood vessel volume.
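
      The volume calculation can be pictured with a small, purely illustrative sketch: binarize the z-stack and sum the voxels classified as vessel. The intensity threshold and voxel dimensions below are assumptions for illustration, not the authors' actual imaging parameters.

      import numpy as np

      # Stand-in for an imaged z-stack of fluorescence intensities (z, y, x).
      rng = np.random.default_rng(0)
      stack = rng.random((20, 256, 256))

      voxel_um3 = 0.5 * 0.5 * 2.0          # assumed x * y * z voxel size, in µm^3
      vessel_mask = stack > 0.95           # voxels bright enough to count as vessel

      vessel_volume_um3 = vessel_mask.sum() * voxel_um3
      print(f"Estimated vessel volume: {vessel_volume_um3:.0f} µm^3")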

    3. To test the functional implication of these findings, we performed an olfaction assay

      The goal of this experiment is to test whether there is any difference in smell sensitivity between heterochronic old, isochronic old, and young mice. A change in smell sensitivity would indicate that the increased neurogenesis has functional consequences.

    4. We pulsed parabiotic pairs with BrdU to label newborn neurons, and after 3 weeks, the mice were analyzed for BrdU+/NeuN+ cells to quantify newborn neurons (Fig. 2A). As expected from our in vitro studies, we observed increased olfactory neurogenesis in vivo.

      Bromodeoxyuridine (BrdU) is a synthetic nucleoside that is incorporated into DNA of replicating cells.

      NeuN+ denotes neuronal nuclei.

      The authors use BrdU and NeuN double-positive staining to identify and track newborn neurons in heterochronic old mice.

      This is an in vivo study to complement the results of increased neurogenesis observed in their in vitro study (Fig. 1). The in vitro study is in the supplemental data.

    5. we generated heterochronic parabiotic pairs between 15-month-old (Het-O) and 2-month-old (Het-Y) male mice, as well as control groups of age-matched pairs, namely isochronic young (Iso-Y) and isochronic old (Iso-O) pairs

      The authors surgically joined a 15-month-old mouse to a 2-month-old mouse so that the pair shares a circulatory system (a heterochronic pair). Age-matched young pairs and age-matched old pairs (isochronic pairs) served as controls.

    6. Thus, we have observed that age-dependent remodeling of this niche is reversible by means of systemic intervention.

      The authors report a novel finding of the possibility of reversing and remodeling the neural stem niche of old mice, by exposing them to systemic factors present in young mice.

  8. May 2016
    1. Solid Al2O3ceramic lattices were prepared in the microlithography system by using photosensitive PEGDA liquid prepolymer loaded with ~150-nm alumina nanoparticles (Baikowski Inc., ~12.5% alumina by volume)

      Instead of coating the base polymer lattice after microstereolithography, nanoparticles can be added to the resin bath, creating ceramic lattices directly.

      The authors created solid alumina ceramic lattices by using a resin bath containing alumina nanoparticles.

    2. Uniaxial compression of these structures is shown in movies S1 to S3

      These movies can be accessed on this page:

      http://www.sciencemag.org/content/344/6190/1373/suppl/DC1

    3. The microstructured mechanical metamaterials were tested to determine

      The samples were mechanically tested to obtain their properties.

    4. The densities of all samples were calculated by measuring the weight and fabricated dimensions of the completed microlattices

      The authors estimated the densities of the created samples by measuring their weights and volumes (dimensions).

    5. These hybrid lattices are converted to pure Al2O3 octet-truss microlattices

      The stereolithography process, when applied with a nanoparticle-loaded resin bath, results in a hybrid microlattice (made of a mixture of solid polymer and nanoparticles).

      An additional step is needed to obtain a microlattice made of pure alumina.

    6. A similar templating approach is used to generate hollow-tube aluminum oxide (amorphous Al2O3, alumina) microlattices; however, the coating is produced by

      A different coating process was applied to generate hollow-tube lattices with a different material (alumina). This process is called atomic layer deposition.

    7. The polymer template is subsequently removed by thermal decomposition, leaving behind the hollow-tube nickel-phosphorus (Ni-P) stretch-dominated microlattice

      After covering the initial polymer microlattice with a nickel alloy, the polymer base was destroyed by heating the lattice. A hollow-tube nickel microlattice was thus obtained.

    8. to convert the structures to metallic and ceramic microlattices. Metallic lattices were generated

      The base polymer lattice obtained through microstereolithography was transformed into a metallic lattice by first covering the formed polymer with a nickel alloy.

    9. Although projection microstereolithography requires a photopolymer, other constituent materials such as metals and ceramics can be incorporated with additional processing

      The projection microstereolithography technique was used to generate a base polymer lattice.

      This template is then converted into the desired metallic or ceramic microlattices using nanoscale coating techniques.

    10. By combining projection microstereolithography with nanoscale coating methods, 3D lattices with ultralow relative densities below 0.1% can be created

      The different samples were created using two combined techniques.

      First, a base polymer lattice was generated with projection microstereolithography.

      This template was then converted into metallic and ceramic microlattices using nanoscale coating methods.

    11. as a point of comparison, a bend-dominated tetrakaidecahedron unit cell (40, 41) of the same size scale was generated and the corresponding cubic-symmetric foams (known as Kelvin foams) were fabricated with a variety of densities

      The structure proposed by the authors is based on a stretch-dominated lattice. A series of foams based on the other type of architecture, a bend-dominated lattice, was fabricated for comparison with the newly proposed structure.

    12. we analyzed, fabricated, and tested them in a variety of orientations

      The behavior of a material can depend both on the direction of the applied load and on the orientation of its architecture.

      To investigate how the elastic response depends on this relationship, the authors fabricated samples in several orientations and tested them under different loading directions.

    13. The densities of samples produced in this work ranged from 0.87 kg/m3 to 468 kg/m3, corresponding to 0.025% to 20% relative density

      The authors created samples spanning a wide range of densities in order to investigate how the elastic properties evolve as the density changes.
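
      For reference, relative density is simply the measured lattice density divided by the density of the solid constituent material. A small sketch, using textbook bulk densities as stand-in values (they are not taken from the paper):

      # Relative density = lattice density / density of the solid constituent.
      bulk_density = {                 # kg/m^3, textbook values for illustration
          "alumina (Al2O3)":   3950.0,
          "nickel-phosphorus": 8000.0,
      }

      def relative_density(sample_density_kg_m3, constituent):
          return sample_density_kg_m3 / bulk_density[constituent]

      # e.g. the lightest reported sample (0.87 kg/m^3), assuming an alumina lattice:
      print(f"{relative_density(0.87, 'alumina (Al2O3)'):.4%}")   # ~0.02%, near the reported 0.025%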

    14. We report a group of ultralight mechanical metamaterials that maintain a nearly linear scaling between stiffness and density spanning three orders of magnitude in density, over a variety of constituent materials

      The authors designed and fabricated samples of ultralight mechanical metamaterials with a specific set of properties; namely, stiffness scales nearly linearly with density, so very little stiffness is sacrificed as the material becomes lighter.

    1. use of computed tomography to track pulmonary edema over time revealed partial protection from ALI in mice deficient in Mac-1 and almost complete protection when PSGL-1 interactions were blocked

      Using CT, the authors observed that the accumulation of fluid in the lungs (pulmonary edema) during ALI was partially reduced in mice lacking Mac-1 and almost completely prevented when PSGL-1 interactions were blocked.

    2. deficiency in PSGL-1 or Mac-1, or inhibition of PSGL-1, resulted in moderate protection from ALI-induced death

      The authors extended the observation that neutrophils and platelets interact via PSGL-1 during ALI. When they inhibited PSGL-1, and thereby the interaction between neutrophils and platelets, mice were protected from ALI-induced death.

    3. hematopoietic-specific deletion of the gene encoding Cdc42

      Mx-1Cre-Transgenic mice are used to create inducible Cdc42 knockout mice. These mice have hematopoietic cells that are deficient in Cdc42.

      Read more on the transgenic mice used:

      http://www.informatics.jax.org/allele/MGI:2176073

      Read more on inducible gene targeting:

      http://www.sciencemag.org/content/269/5229/1427.long

      Reference: Ronald B. Corley, A Guide to Methods in the Biomedical Sciences, p. 113 (description of bone marrow chimeras).

    4. To test this hypothesis, we first induced transient depletion of platelets

      Besides using a gene knockout, the authors also used a model in which they could control platelet depletion themselves. This was done by treating the mice with antiplatelet serum.

      It is important to support data from gene knockout experiments with blocking experiments to be able to translate one's findings to humans (we will not find gene knockouts in humans).

    5. blocking antibody injected after neutrophils had adhered

      Just prior to imaging the mice, the authors intravenously injected an antibody that blocks PSGL-1.

    6. In vivo labeling for P-selectin

      In order to visualize some of the molecules, the authors inject fluorescently tagged dyes or antibodies that specifically target the molecule of interest. (In vivo means inside the living animal.)

    7. IVM, we could obtain three-dimensional (3D) reconstructions of polarized neutrophils within inflamed venules of Dock2-GFP mice

      Spinning disk microscopy rapidly captures multiple images in real time, at up to 10,000 frames per second. The authors use this technique during IVM (intravital microscopy) to capture the many images needed to reconstruct a 3D view of the cells.

      http://zeiss-campus.magnet.fsu.edu/articles/spinningdisk/introduction.html

    8. revealed colocalization of Dock2 with PSGL-1 clusters on crawling neutrophils

      In this case, the authors tagged the Dock2 protein with GFP, which allows them to follow the molecule by the fluorescence it emits.

      They observed Dock-2 fluorescence on sites where PSGL-1 formed clusters.

    9. In vivo labeling of Mac-1 and PSGL-1

      Fluorescently labeled antibodies to Mac-1 and PSGL-1 were injected into the mice so as to trace these molecules by IVM based on the fluorescence they emit.

    10. deficiency in the β2 integrin Mac-1 (Itgam–/–) resulted in reductions at both the uropod and leading edge

      As a positive control, the authors use mice that lack another cell adhesion molecule, Mac-1. This molecule is required for neutrophils to interact with other cell types at both the leading edge and the uropod.

    11. revealed marked reductions in platelet interactions with the uropod, whereas those at the leading edge remained unaffected

      To confirm whether PSGL-1 on neutrophils is what is mediating these interactions, the authors see what happens in the absence of PSGL-1.

    12. mice deficient in PSGL-1

      To test the requirement of a molecule, researchers generally question what happens in the absence of the molecule. Thus, they generate knockouts.

      http://www.powershow.com/view/24ca0d-ZjkwN/What_is_a_Knockout_Mouse_powerpoint_ppt_presentation

      This is done by completely removing the gene that encodes the molecule.

    13. blockade of this domain protected mice against thromboinflammatory injury

      Blocking the interaction of neutrophils and platelets that lead to neutrophil migration and inflammation protected mice from excessive injury in the blood vessels (thrombo-inflammatory injury).

    1. For our linguistic analysis, we recorded word-use frequencies for members of the 113th U.S. Congress using relevant terms from one of the most frequently used emotion scales

      This means that the researchers looked at how often, within the 2013 U.S. Congressional Record, individuals used words taken from the Positive and Negative Affect Schedule, a common scale for measuring mood.

      In this study, the researchers were particularly interested in how often the congressmen and congresswomen used "positive affect" words: alert, determined, enthusiastic, excited, proud, etc.

      In addition to positive affect, the researchers also looked at individuals' usage of "negative affect" words (e.g., nervous, irritable, upset, scared), "joviality" words (e.g., joyful, delighted, energetic, lively), and "sadness" words (e.g., sad, alone, lonely, downhearted).
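
      Conceptually, the linguistic analysis boils down to counting how often scale words appear in each member's speech. The sketch below uses abbreviated word lists and a made-up snippet of text rather than the actual Congressional Record.

      import re
      from collections import Counter

      positive_affect = {"alert", "determined", "enthusiastic", "excited", "proud"}
      negative_affect = {"nervous", "irritable", "upset", "scared"}

      def affect_frequencies(text):
          words = re.findall(r"[a-z']+", text.lower())
          counts = Counter(words)
          pos = sum(counts[w] for w in positive_affect)
          neg = sum(counts[w] for w in negative_affect)
          # report per 1,000 words so speakers of different verbosity are comparable
          return {"positive_per_1000": 1000 * pos / len(words),
                  "negative_per_1000": 1000 * neg / len(words)}

      print(affect_frequencies("I am proud and determined; we will not be scared."))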

    2. A FACS-certified coder assessed the intensity of two action units (AUs) associated with genuine smiling behavior

      For this part of the study, the researchers used official photographs of the members of Congress and had someone trained in the Facial Action Coding System code how intensely both of these muscles were used in the photos.

      The key point is that both "sincere" (Duchenne) and "insincere" smiles use the zygomatic major muscle, but only "sincere" smiles tend to also involve the orbicularis oculi muscle.

    1. DOC processing was measured for streams and lakes

      The authors provide a map of their sampling sites in figure S1 (in the supplemental information). It is included here in the annotation tabs for Figure 1. Lakes are shown as open circles (135 different sites) and streams or rivers are shown as closed triangles (73 different sites).

      According to the supplemental information, water samples were collected biweekly to monthly during 2011-13.

      This video featuring primary author Rose Cory and her coworkers shows some of the sampling sites and overviews the background of this research.

      This video explains what the researchers do with the water samples they collect.

    2. were calculated by integrating apparent quantum yields with incoming UV radiation and light absorption by DOC over mean water column depth

      In order to make their results from the water samples relevant to actual streams, lakes, and rivers, the authors had to account for how much light is available over the range of depths in each water body.

      They measured the amount of incoming sunlight at the field sites and also made field measurements of how much light is available as you go from the water surface to the bottom.

      By combining the bacterial respiration and photodegradation rates obtained from the water samples with the information about incoming light, the authors can approximate the total amount of degradation that takes place within a measured area of surface water each day.

      Some of the results of these calculations are shown in Table 1. The results are discussed further in the next paragraph.
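
      The depth integration can be pictured with a simplified, single-wavelength sketch: attenuate the surface light with depth, estimate the share absorbed by colored dissolved organic matter (CDOM), and convert absorbed photons to degraded carbon with a quantum yield. Every number below is an illustrative assumption; the authors' actual calculation integrates over many UV wavelengths using measured quantum yields.

      import numpy as np

      E0     = 20.0    # surface UV irradiance, mol photons m^-2 d^-1 (assumed)
      Kd     = 3.0     # diffuse attenuation coefficient, m^-1 (assumed)
      a_cdom = 2.4     # light absorption by CDOM, m^-1 (assumed)
      phi    = 0.001   # apparent quantum yield, mol C per mol photons absorbed (assumed)
      z_max  = 1.0     # mean water column depth, m (assumed)

      z = np.linspace(0.0, z_max, 1000)
      light = E0 * np.exp(-Kd * z)          # light remaining at depth z (Beer-Lambert)
      absorbed_by_cdom = a_cdom * light     # photons absorbed by CDOM per m^3 at each depth

      # Integrate over depth to get an areal rate per m^2 of water surface per day.
      areal_rate = phi * np.sum(absorbed_by_cdom) * (z[1] - z[0])
      print(f"Photodegradation ≈ {areal_rate:.3f} mol C m^-2 d^-1")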

    3. Areal rates were then scaled to the entire Kuparuk basin

      The authors used their calculated daily areal degradation rates (Table 1) to sum up the total amount of degradation that would happen in each water body over the course of a year. These calculations were based on daily measurements of sunlight during the ice-free season.

      The authors made additional measurements of light availability in the water column at sampling sites beyond those in Figure S1 in order to increase the accuracy of their calculations for the entire river basin.

      The results for degradation per year are shown in Table 2.

    4. As expected, measured light attenuation coefficients Kd,λ were strongly related to underwater light absorption by CDOM (aCDOM,λ) (R2 = 0.94, N = 100), and we used our measured aCDOM,λ (N = 2153) to predict Kd,λ when it was not measured directly (fig. S2)

      Kd,λ values in this paper are based on measurements made in the field, whereas aCDOM,λ values are based on measurements of water samples taken back to the lab. The authors found a mathematical correlation between the two parameters and used aCDOM,λ to calculate Kd,λ, which in turn feeds into the calculations reported in Table 2.

      The details are provided in supplemental information.
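
      The prediction step amounts to an ordinary linear regression: fit Kd,λ against aCDOM,λ at sites where both were measured, then apply the fitted line where only aCDOM,λ is available. The data points in this sketch are made up; only the approach mirrors what is described.

      import numpy as np

      a_cdom_measured = np.array([1.0, 2.0, 3.5, 5.0, 7.5])   # m^-1 (made-up)
      kd_measured     = np.array([1.3, 2.4, 4.1, 5.9, 8.6])   # m^-1 (made-up)

      slope, intercept = np.polyfit(a_cdom_measured, kd_measured, 1)
      r = np.corrcoef(a_cdom_measured, kd_measured)[0, 1]
      print(f"Fit: Kd ≈ {slope:.2f} * aCDOM + {intercept:.2f} (R^2 = {r**2:.2f})")

      a_cdom_new = np.array([2.8, 6.1])        # sites lacking direct Kd measurements
      print("Predicted Kd:", slope * a_cdom_new + intercept)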

    1. movie S1

      [https://www.youtube.com/watch?v=9QdAH6qz240]

      In the condition to the far right with a real rat trapped in the restrainer, the free rat is seen to poke its head close to the restrainer and contact the trapped rat several times.

      The black dot on the rat's head was used to track the movement depicted in Figure 1B.

    2. To examine whether individual differences in boldness influenced door-opening, we tested the latency for approach to the ledge of a half-opened cage before the experiment

      The authors wanted to determine whether the rats opened the door because they felt empathy for their cagemate or simply because of some inherent boldness.

      Taking this measurement allowed the authors to determine whether helping is preferentially done by bold rats.

    1. The nAG protein was detected in the medium after immunoblotting under both reducing and nonreducing conditions as a band at 18 kD

      The authors demonstrate that they can indeed detect nAG secreted into the medium by Cos 7 cells.

      They ran the blot under both reducing and nonreducing conditions. Under reducing conditions, the disulfide bridges in the protein are broken; under nonreducing conditions, they remain intact.

      For a picture showing disulfide bridges and other bonds found in proteins see here

    2. the axons may subsequently regenerate from the level of the star to the amputation plane, but denervated adult newt blastemas undergo fibrosis and other tissue changes that stop them from making a delayed regenerative response

      In other words, the axons in some animals are able to regrow even after being cut. However, by the time they do, the denervated blastema has undergone fibrosis and other tissue changes, so these late-arriving nerves are thought to have no effect on regeneration.

    3. The expression of nAG protein was analyzed by reacting sections of the newt limb with the two antibodies, and these gave comparable results

      For these experiments the authors performed immunohistochemistry on sections of newt limbs to determine the location of nAG. This method is further described in the annotation for figure 2.

      Why would the authors use two different antibodies for nAG, and what is the significance of getting the same results from both?

      Consider the "scientific process" when answering this question.

    4. The right animal has also regenerated on the control left side, but the expression of nAG has rescued the denervated blastema

      This animal had both limbs amputated, but only the right limb was denervated. In addition, the right limb was electroporated with nAG. This causes some number of those cells to express nAG when they normally would not.

      This group of animals acts as the experimental group.

    5. To deliver the protein to the adult newt limb, we electroporated plasmid DNA into the distal stump at day 5 pa

      Using focal point electroporation (previously described), the authors were able to introduce exogenous DNA into the amputated limb.

      The authors chose to use a plasmid (circular DNA) to introduce their gene into the limb.

    6. and then electroporated the nAG plasmid or empty vector on the denervated side

      The authors use an empty vector as the negative control for this experiment. This vector is essentially identical to the nAG plasmid but lacks the DNA sequence encoding nAG.

    1. In contrast, we examined learning-induced changes in long-standing social biases

      In contrast to previous work, the authors expand on the existing research by attempting to reduce people's biases. Unlike memories for facts, biases are acquired over a long period of time and are constantly reinforced by people's friends, family, and interpretations of their daily experiences.

    2. electroencephalographic signals

      Data obtained from EEG (electroencephalogram).

      EEG is a non-invasive method that detects electrical activity (from communicating brain cells) in the brain. Participants wear sensors on their head that detect this activity through the scalp.

      Click here for a picture of an EEG cap

      The authors used this device to detect when participants were asleep and to measure different sleep characteristics.

    3. Results were quantified by using a conventional scoring procedure

      This procedure involved calculating participants' reaction times (how fast they pressed the button) on trials where the response button paired stereotype-consistent categories (e.g., the same button for art words and female faces) and subtracting them from the reaction times on trials where the button pairing was inconsistent with the stereotype.

      Participants who showed slower reactions when the button was inconsistent with the stereotype demonstrated more implicit bias.

      See the supplemental materials for more information.
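
      As a rough sketch of this scoring logic, the bias measure is the difference between mean reaction times on stereotype-inconsistent and stereotype-consistent pairings. (The full scoring algorithm also includes steps such as error penalties and scaling by each participant's variability, which are omitted here.) The reaction times below are made up.

      import statistics

      consistent_rts_ms   = [612, 598, 640, 587, 605]   # made-up reaction times
      inconsistent_rts_ms = [702, 688, 731, 676, 695]

      bias = statistics.mean(inconsistent_rts_ms) - statistics.mean(consistent_rts_ms)
      print(f"Implicit bias score: {bias:.0f} ms (larger = slower on inconsistent pairings)")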

    4. recruited as two subsamples that allowed for a direct replication

      The authors divided the 40 participants into two groups to test whether their results were found in both groups. Finding the effect in both groups makes their results more believable.

    1. We hypothesized that caffeine could affect the learning and memory of foraging pollinators.

      Caffeine blocks adenosine receptors, thus enhancing the activity of the neurons involved in learning smells and remembering them.

      Caffeine is also found in the nectar of some plants.

      Therefore, the authors of the study predict that caffeine in the nectar will in some way affect the learning of pollinating insects. They are going to test this hypothesis using behavior tests on bees in the presence of different amounts of caffeine.

    2. nectar concentrations did not exceed 0.3 mM

      See figure 1 for these data.

    3. If bees can detect caffeine, they might learn to avoid flowers offering nectar containing it

      The idea here is actually the opposite of what the authors hypothesized, but is a reasonable concern and thus important to consider.

      The authors hypothesized that caffeine could enhance a honeybee's memory of a plant, causing them to return to that plant. This is because of the memory-enhancing effect of the chemical's interaction with the bee's mushroom body neurons.

      However, the chemical is also very bitter. A bee is as unlikely to want to drink something that is very bitter as a human is. Therefore, if the honeybee can taste the bitter caffeine, it is unlikely to want to return to that plant.

      So how do we figure out which of these two opposite effects caffeine actually has on the bees?

      The authors of the study offered honeybees sugar solutions containing increasing amounts of caffeine, and observed at what concentration the bees found the solution to be repellent.

      If the concentration of caffeine in the nectar of these plants is at that repellent level or higher, it is unlikely that caffeine is attracting bees to those plants.

      However, if the amount of caffeine in the nectar is below that concentration, it is unlikely that the bees would detect its bitterness, and so would not be repelled by it.

    4. fig. S3

      Supplementary figure 3 showed proof that the honeybees have sensilla (sensory organs) that are able to detect caffeine.

      This is what is called a "proof-of-concept" experiment; it shows that the hypothesis is at least possible. Whether or not it is actually the way things are remains to be seen.

      In this case, it needed to be shown that honeybees are capable of detecting caffeine. If not, it doesn't matter that there is caffeine in the nectar of the plants - if the honeybees can't sense it and don't react to it, then caffeine in the nectar won't affect their behavior regardless of its concentration.

    5. adenosine receptor antagonist DPCPX

      The authors used DPCPX to identify whether or not the increase in nicotinic acetylcholine receptor activation they saw in response to caffeine was the result of caffeine interacting with adenosine receptors.

      DPCPX binds to adenosine receptors, preventing any other molecule, including caffeine, from binding to and interacting with them. This is similar to the antagonistic action of caffeine itself, which blocks activation of adenosine receptors by adenosine.

      As DPCPX was applied before caffeine, caffeine was unable to bind to adenosine receptors during this experiment. As a result, the authors could be sure that any effects seen after adding caffeine happened independently of caffeine's ability to bind adenosine receptors.

      This also tells the researchers something about how caffeine behaves in the absence of the antagonist: if caffeine still acts when the adenosine receptors are blocked, those receptors are not necessary for the effect. Conversely, if caffeine-mediated activation of nicotinic acetylcholine receptors disappeared when caffeine was prevented from binding adenosine receptors, the researchers would know that adenosine receptors play an important role in inducing this effect.

    6. whole-KC recordings

      The authors took these recordings using a technique called "whole-cell patch-clamp electrophysiology."

      Patch-clamp electrophysiology allows researchers to measure the current flowing through an individual channel in a cell membrane. These currents result from the passage of ions through these channels, and act to send signals from neuron to neuron.

      Researchers isolate a tiny patch of the cell membrane using a glass micropipette. The surface of the cell membrane is suctioned into the tiny opening at the end of the pipette and forms a tight seal. This tight seal allows the current across that portion of the membrane to be measured very precisely, without any distortion of the data by surrounding factors.

      Whole-cell patch-clamp electrophysiology is a variation in which the currents flowing through all of the cell's channels are recorded together rather than through a single channel. The same seal is made over a patch of the membrane with a glass pipette, but the suction is strong enough to rupture that patch, giving the electrode access to the interior of the cell.

      To see patch-clamp electrophysiology in action, see here, here, here and here.

    7. we trained bees for six trials with 30 s between each pairing of odor with reward

      In order to train the bees to associate the floral odor with receiving sucrose, the authors exposed a honeybee to the odor they wanted it to learn, then immediately fed it sucrose solution. This process was repeated 6 times per bee, with a 30 second interval between each trial.

      In order to test the effect of caffeine on the formation of olfactory memory, the bees were split into eight groups. One group was fed sucrose alone, while the other seven were fed sucrose solutions containing seven different concentrations of caffeine.

      The authors tested the bees' memory of the smell twice: once 10 minutes after the last conditioning trial and once 24 hours later. The bees were tested with both the floral odor they had been taught to associate with sucrose and with another, unrelated odor, to see whether their memory of the floral scent was specific.

    8. This intertrial interval approximated the rate of floral visitation exhibited by honeybees foraging from multiple flowers on a single Citrus tree

      In the field, the authors had observed that bees will spend an average of 3.77 seconds at each flower they visit, and then will spend an average of 20.3 seconds traveling from one flower to the next, about 24 seconds total.

      As a result, the authors decided to use a 30 second interval between trials in this study in order to use a time frame between "flowers" (exposure to floral scent) that is similar to that seen in bees feeding in nature.

    1. In fig. S1, countries that have lost forests without gain are high on the y axis (Paraguay, Mongolia, and Zambia). Countries with a large fraction of forest area disturbed and/or reforested/afforested are high on the x axis (Swaziland, South Africa, and Uruguay).

      On page 7 of the supplemental materials is Figure S2. Countries on the x-axis have forestry programs, whereas countries high on the y-axis have deforestation caused by factors other than forestry.

    2. tree canopy densities

      Tree canopy density is an important concept in this study, because it determines whether an area of land is a "forest" or not a "forest."

      To explain, Hansen and colleagues could analyze the satellite images of Earth at a resolution of 30 m by 30 m. For each of these 30 m by 30 m sections, they then calculated the proportion of that section that was covered in a tree canopy.

      Using this map, forests can be defined as a user prefers by applying a threshold of tree cover density; for example, tree cover greater than 50% can be labeled as forest.
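
      A minimal sketch of this thresholding step, using a made-up grid of canopy-cover percentages rather than the actual Landsat-derived data:

      import numpy as np

      # Each cell stands for one 30 m x 30 m pixel's percent tree canopy cover.
      canopy_pct = np.array([[12,  0, 63, 81],
                             [45, 55, 72,  9],
                             [90, 38, 51, 20]])

      FOREST_THRESHOLD = 50                 # e.g., >50% tree cover counts as forest
      forest_mask = canopy_pct > FOREST_THRESHOLD

      pixel_area_ha = 30 * 30 / 10_000      # one Landsat pixel is 0.09 hectare
      print(f"Forest area: {forest_mask.sum() * pixel_area_ha:.2f} ha")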

    3. Landsat data

      Landsat is a joint program between the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA).

      In this program, satellites orbit around Earth and take photos of Earth's surface.

      The photos taken from the satellites are not the type of photos you would take with a typical smartphone. Instead, these images can show the energy that is both reflected and emitted from Earth in many different wavelengths, including blue, green, red, near-infrared, midinfrared, and thermal-infrared light.

      For a look at energy wavelengths click here.

      To distinguish forests from other land types (e.g., grasslands, water, etc.), one can look at a combination of different wavelengths in the Landsat data. For instance, looking at a combination of the red, near-infrared, and shortwave infrared wavelengths helps identify forests. Our eyes are similar to a satellite sensor, except we only see in the visible wavelengths, namely red, green, and blue. Forests, to our eyes, appear dark green, meaning low blue and red reflectance and brighter green reflectance. By using remote sensing technology, we can improve our identification of forest extent and change by adding other wavelengths of reflected and emitted energy not visible to the naked eye.

      Find out more about how the different wavelengths can help identify different features, here.

      You can view some Landsat photos here. Be sure to check out the photos of deforestation in Bolivia, which is particularly relevant to this study!

      Also check out this Landsat image video.

      Landsat images are online and available to the public. You can find data here.

    4. preprocessing of geometric and radiometric corrections of satellite imagery

      Before Hansen and colleagues could analyze the Landsat images, they had to preprocess them. This preprocessing included making corrections for clouds and shadows, assessing the quality of the images, and normalizing the images so that they can all be directly compared.

    1. sample sizes

      The number of observations (samples) drawn from the population being studied.

    2. SEM

      Standard error of the mean = standard deviation divided by the square root of the number of samples (Reference)
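
      A quick worked example with arbitrary values:

      import math
      import statistics

      values = [4.2, 5.1, 3.8, 4.9, 4.4]        # arbitrary measurements
      sem = statistics.stdev(values) / math.sqrt(len(values))
      print(f"mean = {statistics.mean(values):.2f}, SEM = {sem:.3f}")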

    3. fig. S5

      Figure S5 in the supplementary materials shows the difference in the cost of treatment between two cases: average HRT at T = 20°C and long HRT at T = 25°C. (As mentioned before, HRT describes the time water needs to travel through the system from the inlet to the outlet.)

      Figure S5 plots the cost of treatment per m3 of treated water on the y-axis. The x-axis presents the two scenarios: the baseline (average HRT and T = 20°C) and long HRT at T = 25°C.

      The authors looked at the effect of these two scenarios on two samples: source water containing 15 mgS/L of sulfate and source water with no sulfate. In both scenarios, the source water with sulfate was more expensive to treat than the source water without sulfate.

      The long HRT condition is investigated to account for the conditions that could favor sulfide formation and corrosion in the sewer, in other words it is a representation of the worst-case scenario.

  9. Apr 2016
    1. Fourth, the estimated duration should be consistent both with the available evidence of increased risk of mortality after MV, compared with uninfected children, and with the time required to build a protective immune repertoire in early life (Fig. 1D, fig. S2, and SM 5 and 6).

      Author's Hypothesis: 4

      The estimated duration of immune suppression in hypothesis 3 must be biologically plausible. We have years of data from other scientists who have measured mortality following measles infections. Thus, to strengthen the reliability of this paper's findings, the duration of immune-amnesia determined here should agree with the time frame during which previous studies have found children to be at increased risk of mortality following measles.

      Additionally, a basic hypothesis underlying this paper is that children recover from immune-amnesia by rebuilding their immune repertoire through re-exposure to pathogens. This process is similar to how children first build up their immune response after birth, through exposure to pathogens. Thus, if the hypothesis put forth in this paper is correct, the time it takes to recover from measles-induced immune-amnesia (i.e., the duration of immune-amnesia) should be similar to the amount of time it takes children to build an immune response in the first place.

    2. As a further test of the immunosuppressive impact of measles, we carried out a similar analysis on pertussis.

      This is a control analysis performed by the authors.

      To check that their approach does not produce false-positive results, they also analyzed another childhood disease, pertussis, which is preventable by vaccination but does not cause immunomodulation. As expected, the pertussis data showed no evidence of immunomodulation like that seen for measles.

    1. These y– F1♂ were considered candidates for carrying the y-MCR construct and were crossed to y+ females

      The authors also crossed the yellow F1 males to wildtype females to generate F2 offspring. A diagram of this cross is shown in Figure 2A.

    2. Six such yMCR F1♀ were crossed individually to y+♂

      After they had determined which of the F1 progeny expressed the mutagenic chain reaction allele (yMCR), they crossed the yMCR F1 females back to wildtype males to obtain the F2 progeny.