1,136 Matching Annotations
  1. Mar 2018
    1. We considered N = 23 taxonomic group × geographic region combinations for latitude, incorporating 764 individual species responses, and N = 31 taxonomic group × region combinations for elevation, representing 1367 species responses. For the purpose of analysis, the mean shift across all species of a given taxonomic group, in a given region, was taken to represent a single value (for example, plants in Switzerland or birds in New York State; table S1)

      The authors conducted the current study to determine if there is a positive trend between observed range shifts and climate warming. They performed a meta-analysis of a very large set of data for diverse species and geographic regions.

      The data used in the meta-analysis are summarized in Tables S1a and S1b. Rates of range shift and statistical analyses were calculated as described in the notes for Table S1.

    2. evidence has previously fallen short of demonstrating a direct link between temperature change and range shifts

      One goal of this meta-analysis was to demonstrate this direct link between temperature change and range shifts.

    1. difference plots clearly show the distinct profiles obtained for the mixtures

      Fluorescence difference plots are used to distinguish the melting profiles of DNA mixtures containing different Symbiodinium types.

    2. an alternative technique to rapidly and accurately genotype monotypic Symbiodinium populations.

      In this experiment, a faster alternative method was used to accurately genotype monotypic (one-type) Symbiodinium populations. The results were compared with DGGE profiles, for which the gel bands were excised and sequenced based on the internal transcribed spacer region 2 (ITS2) of the ribosomal DNA.

    3. The touchdown protocol consisted of an initial denaturing step at 92 °C for 3 min, 21 cycles at 92 °C for 30 s, 62 °C for 40 s, and 72 °C for 30 s, decreasing each cycle 0.5 °C, followed by 15 cycles with a 52 °C annealing step and a final extension at 72 °C for 10 min.

      One PCR cycle consists of denaturing, annealing, and extension steps. Denaturing means that the double-stranded DNA separates into single-stranded DNA. Annealing refers to primers binding to the single-stranded DNA. The extension step is when Taq polymerase adds dNTPs to the annealed primer.

    4. PCR amplification had the following conditions

      PCR amplifications were run under these conditions so that the cultures genotyped by HRM analysis could be compared directly with those genotyped by DGGE analysis.

    5. For each genotype, one of the samples was used as the reference genotype, while the other was treated as an ‘unknown.’

      In order to determine whether HRM is accurate, one of the samples of each genotype was used as the reference and the other was treated as an unknown.

    6. To understand potential limitations of the HRM technique, three tests were conducted.

      Three different tests were conducted for HRM:

      1. Tested whether template concentration affected the accuracy of genotyping.
      2. Combined DNA from Symbiodinium types of the same clade to see whether mixtures changed the melting profiles.
      3. Combined DNA from Symbiodinium types across all clades to see whether mixtures changed the melting profiles.
    7. The rest of the samples were assigned to a genotype based on the percentage of confidence

      The amplifications were assigned to the reference genotype with the best percentage confidence for the specific culture (HRM vs. DGGE), using analysis software. To calculate the error, the squared difference was taken between the fluorescence of each reading of the sample and that of the reference genotype.
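
      The squared-difference error described above can be sketched in Python. This is a minimal illustration, not the actual HRM analysis software; the function names and the nearest-reference assignment rule are assumptions:

```python
import numpy as np

def hrm_error(sample_curve, reference_curve):
    """Sum of squared differences between two fluorescence melt
    curves sampled at the same temperature points."""
    sample = np.asarray(sample_curve, dtype=float)
    reference = np.asarray(reference_curve, dtype=float)
    return float(np.sum((sample - reference) ** 2))

def assign_genotype(sample_curve, references):
    """Assign the sample to the reference genotype whose melt curve
    gives the smallest squared-difference error (a stand-in for the
    software's percentage-confidence call)."""
    return min(references, key=lambda name: hrm_error(sample_curve, references[name]))
```

      For example, a sample whose fluorescence curve sits closest to reference genotype A's curve would be called as genotype A.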

    8. DNA extractions were carried out using DNeasy® Plant Mini kit (Qiagen)

      A standardized kit was used to extract each culture's DNA, where two replications of the DNA were obtained.

      The ITS2 region (internal transcribed spacer 2 of the ribosomal DNA) was then amplified.

    9. Symbiodinium cultures of clades A–E were obtained from Dr Scott Santos

      The experiment used cultures of Symbiodinium clades A–E grown in marine algal growth medium; half of the medium was replaced with fresh medium every month. The cultures were kept at a constant temperature of 25 °C under constant light intensity, on a 12-hour light and 12-hour dark cycle.

    1. Therefore, a considerable volume of the electric fish brain is devoted to electrosensory processing. For the computational algorithms proposed above to be involved in electrolocation, they must have a plausible neural implementation in the fish’s nervous system. We propose one such projection onto the neural networks in the electric fish brain.

      Electric fish emit an electric field and sense, through electroreceptors in their skin, how objects in the environment distort it. After a stimulus contacts the electroreceptors, the information is relayed in a pattern to the electrosensory regions of the fish's brain. Because this sense is essential to the fish's way of life, the authors know that a large number of neurons and a large volume of brain tissue are involved in processing it. They therefore propose that, for this sensory information to be relayed efficiently and quickly, the computational algorithms involved in electrolocation must have a plausible implementation in the fish's neural networks.

    2. The scanning or probing movements have also been hypothesized to help recognize object features.

      The movements that these electric fish make allow for the recognition of an object's features. Evidence from Heiligenberg's simulations shows that when electric fish bend their tails, spatial contrast increases, making it easier to distinguish an object's features. Bacher's 3-D model showed that tail bending helps the fish separate an object's location from its shape. The BEM simulation showed that electric fish control their movement to regulate their electrosensory input, maintaining a stable image of the rostral body, which is thought to help the fish distinguish features.

    3. To address these limitations and to simulate exploratory behaviors further, we built a complementary three-dimensional BEM electric fish model.

      The boundary element numerical method (BEM) simulator was built by the authors to remedy an issue presented by their standard three-dimensional simulator (color-coded mapping of a stationary fish): only the proximal (close to the center/point of origin) side of the fish's body could be digitized since secondary effects of the body could not be accounted for on the EOD. The BEM simulator addressed this problem by allowing objects under analysis to be bent or randomly shaped via nodes placed on their surfaces, thus giving the BEM simulator the ability to analyze secondary effects and other regions of the fish's body.

    4. This allows the fish to swim equally well forwards or backwards and to hold the body in an arc around objects (Bastian, 1986; Toerring and Belbenoit, 1979) while maintaining rigid control over the electroreceptive surfaces. Presumably, by keeping the detector array in a fixed orientation with respect to field generation, this controlled body motion reduces the number of variables that must be taken into account to interpret electrosensory information.

      The author is explaining here that the fish have a mechanism that allows them to move fluidly, whether forwards or backwards, and to wrap around objects without compromising the information they receive from their electroreceptors. They do this by keeping their bodies rigid and moving only their dorsal or ventral fin.

    5. To date we have mapped the EODs of three gymnotiform wave species and seven gymnotiform pulse species. These maps, for the first time, clearly illustrate the full spatiotemporal structure of the EOD (Fig. 1). In the majority of species, the EOD waveforms vary greatly with location, revealing considerably more complex patterns than were previously appreciated.
      • "Wave" fish produce long, continuous EOD discharges, whereas "pulse" fish have silent intervals between discharges.
      • As the author illustrates above, this mapping allows statistical analysis of the EOD discharges rather than the collective qualitative data that previously stood in its place.
      • The waveforms created by the EODs vary with location; although the charts show many similarities between the fish, the EOD waveforms become easier to tell apart with greater distance.
      • Thanks to this study, results of this kind are no longer difficult to interpret: where previously there were no numbers to support claims, the authors have now provided numerical values for this type of research.
    6. We have developed a powerful system for mapping EOD potentials and electric field vectors in three dimensions.

      Electric organ discharge (EOD) potentials, which are generated by electrodes located near the head and tail of an electric fish, are typically difficult to analyze due to variations among different species and the inconsistent geometry of the EOD spatial patterns. Thus, to overcome this obstacle, the authors devised a system for the visual representation of EOD potentials to facilitate analysis. This was achieved by the creation of a robotic arm that recorded the EOD of an immobile electric fish at multiple positions around its body. The motionless state of the fish also enabled the authors to capture the exact times of numerous EODs. Each EOD measurement collected from the arm was then digitally processed and converted into a color-coded map overlaid on a diagram of the fish's body for further research/analysis in the phenomenon known as electrolocation.

    1. To determine whether observed trends in the timing of phenological events were associated with summer temperature trends, we tested the relationship between (βDOY_x_YEAR) with the summer temperature trend (βDOY_x_TAIRSUMMER) for each species at each subsite using linear mixed models, with site, subsite (nested within site) and species as random factors in JMP.

      Linear mixed models were used to test whether trends in the timing of phenological events were associated with summer temperature trends.
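
      The analysis was run in JMP; as a rough sketch of the same idea, a linear mixed model with a random site effect can be fit in Python with statsmodels. All numbers below are synthetic stand-ins invented for illustration, and only a single grouping factor is used rather than the paper's nested site/subsite/species structure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: per species-subsite phenology trends
# (beta_doy_year) regressed on the local summer temperature trend
# (beta_temp), with a random intercept per site.
n = 200
beta_temp = rng.normal(0.0, 1.0, n)
site = rng.integers(0, 5, n)
site_effect = rng.normal(0.0, 0.5, 5)[site]
beta_doy_year = -2.0 * beta_temp + site_effect + rng.normal(0.0, 0.3, n)

df = pd.DataFrame({"beta_doy_year": beta_doy_year,
                   "beta_temp": beta_temp,
                   "site": site.astype(str)})

# Random intercept for site; fixed effect of the temperature trend.
model = smf.mixedlm("beta_doy_year ~ beta_temp", df, groups=df["site"])
fit = model.fit()
slope = fit.params["beta_temp"]   # estimated fixed effect; near the true -2.0 here
```

      A negative fitted slope here would mean that sites warming faster show stronger advances in event timing.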

    2. Because site effects such as latitude, elevation and species traits have strong influences on the calendar date that a phenological event occurs, we did not use the DOY associated with a phenological event as a direct measure of phenological response. Instead, we used two types of measures that are largely independent of the site-specific properties (table 2). First, we calculated the TDD from snowmelt until the occurrence of the phenological event for each species-plot combination. This measure reflects the amount of heat accumulated from snowmelt until the phenological event was observed. Second, for each species-subsite combination, we calculated the slopes (β) of the relationships between the timing of the phenological event (represented as DOY or TDD) and the calendar year or site temperature (measured as air temperature of the spring or summer).

      Two types of measures were used to quantify the timing of phenological events. One was thaw degree days (TDD), which reflects how much heat accumulated from snowmelt until the phenological response occurred. The other was the slope of the relationship, for each species-subsite combination, between the timing of the event and the calendar year or site temperature. An advantage of these measures is that they allow comparison across species and sites.
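
      The thaw degree day accumulation can be sketched as follows; the exact convention (0 °C threshold, inclusive endpoints) is an assumption for illustration:

```python
def thaw_degree_days(daily_mean_temps, snowmelt_day, event_day):
    """Thaw degree days (TDD): the sum of daily mean air temperatures
    above 0 degrees C accumulated from the day of snowmelt through the
    day the phenological event was observed (0-based day indices)."""
    window = daily_mean_temps[snowmelt_day:event_day + 1]
    return sum(t for t in window if t > 0)
```

      For example, a species flowering three days after snowmelt during a warm spell accumulates more TDD than one flowering after the same interval in a cold spell.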

    3. To evaluate potential differences in species responses among locations, sites were categorized into climatic zones as in previous syntheses [11,22,23]: high Arctic, low Arctic or alpine. Subsites were categorized as dry, moist or wet, where dry refers to plant communities on well-drained, mineral soils typically located on ridges, moist refers to sites with some soil drainage, and wet refers to plant communities with water tables frequently near or above the surface.

      The different locations were categorized by climatic zone: high Arctic, low Arctic, or alpine. Subsites were also categorized as dry, moist, or wet.

    4. Consequently, we used mean temperatures of month combinations (spring = April–May and summer = June–August temperatures) as the basis of the temperature analysis.

      The average temperature for spring and summer times within a year were used to analyze temperature change throughout the experiment.

    5. Mean monthly air temperatures for the months preceding the growing season and months of the growing season (April–August) were calculated for each study site each year for comparison with plant phenology.

      The plant phenology was compared with the mean monthly air temperatures.

    6. A cubic spline interpolation

      A smooth curve fitted through the temperature measurements, used to estimate temperatures between the observed points as accurately as possible.
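
      As a sketch of what a cubic spline interpolation does, scipy can fit one through hypothetical monthly mean temperatures and evaluate it at intermediate days. All values below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical monthly mean temperatures (deg C) at mid-month days of year.
doy = np.array([105, 135, 166, 196, 227])    # mid-April through mid-August
temp = np.array([-4.0, 2.0, 7.5, 10.0, 8.0])

spline = CubicSpline(doy, temp)

# The spline passes exactly through the observations and gives smooth
# estimates for the days in between.
daily = spline(np.arange(105, 228))
```

      Unlike straight-line interpolation, the spline has no sharp corners at the observed points, which better matches gradual seasonal warming.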

    7. The weather dataset was based on data collected at the sites

      The weather data were gathered at each site. However, in some places, such as Finse, Norway, where weather data were unavailable for a period of time, averages from previous years were used.

    8. A priority-ranking lumping scheme that accounted for differences in plant morphology was used to consolidate phenological variables, although for a given species at a subsite, the phenological definitions were consistent over time.

      Phenological variables were consolidated using a priority ranking that accounted for differences in plant morphology (the plants' form and structure).

    9. Flowering and leafing stages were compiled from most sites at an observation resolution of one to two times per week.

      The plants' flowering and leafing stages were observed once or twice a week.

    10. Our approach was to use long-term trends and interannual variability across the ITEX control plots to evaluate change in plant phenology in relation to temperature.

      Plant phenology changes were measured using trends and interannual variations.

    1. Pollination success was quantified for these same flowers by recording the fruit production of visited flowers on the potted plants, maintained in the greenhouse. We placed at least 15 plants per day, each with one to three open flowers, over 20 days of observations, using a new set of plants each day. We recorded a total of 69 visits to over 400 flowers observed in this potted plant placement experiment. Pollination efficiency of each visitor group was assessed by comparing the percentage of visited flowers that produced fruit after a single visit.

      The success rate of pollination was determined by recording the fruit produced by visited flowers. At least 15 potted plants, each with one to three open flowers, were placed each day over 20 days of observation, using a new set of plants each day. A total of 69 visits to over 400 flowers were observed throughout the course of the experiment.

    2. To determine the most effective pollinator, we placed 15 greenhouse-grown potted plants in the field to quantify pollination success at Site 3, the site with the highest visitation frequency (B. B. Roque, personal observations). On 20 different days during the flowering period, we compared the qualitative effectiveness of the different pollinator groups by allowing a single visit to individual flowers on the potted plants. Flowers that were ready to open prior to observation periods were bagged, while in bud, to exclude visitors. At the time of observation (from 9:00 am to 12:00 pm), bags were removed and flowers exposed to foraging insects. For a specific flower, pollinator visits were restricted to a single visit by one individual from one of the four groups of pollinators. After visitation, a flower was labelled (by pollinator group) and bagged to exclude subsequent visitors.

      For this experiment, the authors wanted to determine which was the best pollinator in relation to effectiveness. They grew 15 plants in the greenhouse and then placed them at the site with the highest number of visitors. Over 20 days, a single visit by pollinators was allowed for each individual flower. Bags were placed on the flowers whenever they were not being studied so as to avoid pollinator visits.

    3. To examine a possible relationship between line thickness and pollen deposition, we hand pollinated fresh flowers using fishing line of four different diameters. Each thickness was inserted into a fresh flower to the bottom of the corolla tube to collect pollen (as above); we then stained entire length of the fishing line with methylene blue to stain the adhering pollen grains and introduced the stained portion into another new fresh flower.

      In this experiment, fresh flowers were hand pollinated using fishing line of four different diameters to simulate different proboscis diameters. Each line was inserted to the bottom of the floral tube to collect pollen, then stained with methylene blue to make the adhering pollen grains visible, and the stained portion was introduced into another fresh flower.

    4. To quantify the efficiency of each visitor group, we estimated pollen on the visitor mouthparts as the average number of pollen grains per individual visitor of each group.

      The authors counted the average number of pollen grains per individual visitor of each group, as well as the time spent at each flower and the behavior of the pollinator after leaving the flower.

      All of these metrics are important to pollination efficiency.

    5. We haphazardly selected plants with open flowers for each of the intervals.

      The authors use the word 'haphazardly' to indicate randomness in the sampling of the flowers. It is important to have randomness to avoid bias.

    6. The flowers have no notable fragrance, and offer viscous nectar as a pollinator reward, with the sugar concentration of the nectar ranging from 30 to 67 %

      This species has no notable fragrance and rewards pollinators with viscous nectar whose sugar concentration ranges from 30 to 67%.

    1. individuals at the leading edge and back of the invasions were genotyped, and traits of all 14 genotypes were measured.

      The authors aimed to determine both the genotype and the phenotype of the plants to determine which traits (phenotype) contributed to successful invasion, and how the diversity of the plants at the leading edge changes with gap size (determined by number of genotypes present). Comparing the leading edge and the back allows for determining the effect of invasion on population evolution.

    2. separating individual pots of suitable habitat by gaps that were 0 (continuous landscapes), 4, 8, or 12 times the mean dispersal distance.

      The question of how evolution occurs in patchy landscapes was addressed by testing both evolving and non-evolving populations in environments where gaps separated the patches of habitable space. The gaps simulated landscape barriers that may require certain traits to overcome.

    3. In evolving populations, the resulting plants produced seeds, which dispersed across the array (assisted via a simulated rain event), constituting the next generation of the population (Fig.

      One of the questions addressed in the study is how evolution affects the spreading of plants. To test this, plants in the evolving condition were allowed to spread seeds, and then the seeds grew into the next generation. In the non-evolving condition, the new seedlings were replaced with seeds randomly drawn from the source population. This difference allows for determining if the traits the seeds carry with them are important for how fast they spread.

    4. spreading through continuous and fragmented landscapes, each consisting of a linear array of rectangular pots

      The experiments were designed to mimic real-life invasion using a one dimensional (linear) model. Pots filled with soil are places where seeds can grow, while empty pots act as barriers to invasion, resembling a fragmented landscape. The setup allows for the researchers to have precise control over as many variables as possible, and the linear array makes tracking of invasion progress simpler.

    1. To test whether neuronal responses to tickling are also modulated by such conditions, we tickled rats in both control (Fig. 3A, left) and anxiogenic settings, such as under bright illumination and on an elevated platform (Fig. 3A, right).

      The authors wanted to test whether ticklishness in rats is mood dependent, as it is in humans.

      To do this, they used bright lights and height to trigger anxiety in the rats. They then recorded USVs and the neuron firing rate while tickling the rats under these conditions.

    2. (ventral tickling, 4.45 ± 0.28 Hz; ventral gentle touch, 2.58 ± 0.21 Hz; n = 16, P < 0.001; mean ± SEM, paired t test).

      SEM is the standard error of the mean. It is related to standard deviation and provides information about the likelihood that the studied sample represents the whole population.

      A paired t-test is used to compare two measurements (in this case ventral gentle touch vs. ventral tickling) to see if they are significantly different. A P-value of 0.05 or less is an indicator of statistical significance.
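
      A paired t-test and SEM of this kind can be reproduced in Python with scipy; the firing rates below are synthetic stand-ins invented for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic paired firing rates (Hz) for 16 recording sites, mimicking
# the ventral tickling vs. ventral gentle touch comparison.
gentle = rng.normal(2.58, 0.8, 16)
tickle = gentle + rng.normal(1.9, 0.5, 16)   # tickling raises the rate

sem = stats.sem(tickle)                       # standard error of the mean
t_stat, p_value = stats.ttest_rel(tickle, gentle)  # paired t-test
```

      Because each site is measured under both conditions, the paired test compares the within-site differences, which is more sensitive than comparing the two group means independently.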

    3. rats rapidly approached the tickling hand, and tickling induced unsolicited jumps accompanied by 50-kHz USVs (Freudensprünge, “joy jumps”; movie S2), which can be seen in joyful subjects in various mammalian species (14–16). We visually categorized spectrograms of an extensive set of USVs (34,140 calls) into modulated, trill, combined, and miscellaneous call types (Fig. 1Band fig. S1).

      The authors tickled rats while recording their behaviors and vocalizations.

      A spectrogram is a visual representation of sound waves. It provides quantitative information about the pitch and volume of the rat calls.

      By looking at the spectrograms, the researchers sorted the vocalizations into four call types: modulated, trill, combined, and miscellaneous.

    1. Hominin and non-hominin tracks were recognised in four test-pits at Site S, namely L8, M9, TP2 and M10.

      The authors made a series of test-pits in order to follow the predicted direction of the footprints.

    2. Consequently, we focused our interpretations on the more appropriate predictions inferred from the relationship between foot size and body dimensions in Australopithecus

      The authors used the size and shape of the footprints to infer body mass, stature, and walking speed for four of the individuals (S1, G1, G2, and G3)

    1. Fig. 4. Relative advantage of generalizing as a function of flower density and the proportion of deep flowers in the community. Outcomes with flight speed of 0.5 m s−1 are shown (15). The generalist is favored when its relative advantage is >1 (pink shading).

      The influences of floral density and the abundance of flowers featuring deep corollas are used to identify the advantage that a generalist (short-tongued) bumble bee would have over a specialist (long-tongued) bumble bee where bees have the same flight speed. Pink areas denote conditions that favor generalists.

      This model was adapted from an earlier model of the effects of floral density on the composition of pollinators. The model it was adapted from can be found at:

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3825104/

      A list of materials and methods used to produce this figure is found at:

      http://science.sciencemag.org/content/sci/suppl/2015/09/23/349.6255.1541.DC1/Miller-Struttmann-SM.pdf

    2. Fig. 3. Change in flower abundance at landscape and local scales along a 400-m altitudinal gradient on Pennsylvania Mountain. (A) Map showing areas where PFD decreased (1.95 km2), is stable (1.29 km2), and increased (0.10 km2). Unshaded (excluded) areas contain cliff, talus, mining disturbance, and subalpine forest. (B) PFD (mean >± SE) for plots in krummholz (KRUM); tundra slopes (SLOPE); wet meadow (SWALE), false summit (FSUMMIT); and summit (SUMMIT) habitats (N = 6 species; F4,385 = 5.55, P = 0.0002). Asterisks indicate significant differences at P < 0.05. (C) Total flower production (in millions) is the product of total surface area for (A) each habitat (table S5) (15) and (B) mean PFD.

      These results show that peak flower density in the bumble bees' habitat is mostly decreasing, increasing competition among the bees for the available nectar.

  2. Feb 2018
    1. In 2012–2014, we resurveyed bumble bee visitation on Mount Evans and Niwot Ridge in accordance with historical observations (18). Despite a 10-fold difference between past (n = 4099 visits observed) and present (n = 519 visits observed) collection effort, surveys indicate that resident bumble bees have broadened their diet. Resampling historical visitation data to match present collection effort reveals that foraging breadth (Levin’s niche breadth) (15) increased from 2.61 to 7.01 for B. balteatus [z score (Z) = 28.48, P < 0.0001] and 2.09 to 5.07 for B. sylvicola (Z = 19.78, P < 0.0001). Bumble bees have added flowers with shorter and more variable tube depth to their diet (B. balteatus: F1,1997 = 7554, P < 0.0001; B. sylvicola: F1,1997 = 64,851, P < 0.0001) (Fig. 2, E and F, and table S3).

      The researchers compared bumble bee diets on Mount Evans and Niwot Ridge between historical surveys and resurveys in 2012–2014. Far fewer bee visits were recorded during the recent surveys, but the data indicate that the bees have expanded the range of plants in their diet, adding flowers with shorter and more variable tube depths.
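
      Levin's niche breadth cited here has a simple form, B = 1 / Σ p_i², where p_i is the proportion of visits to flower species i. A minimal sketch with made-up visit counts:

```python
def levins_niche_breadth(visit_counts):
    """Levin's niche breadth B = 1 / sum(p_i^2), where p_i is the
    proportion of visits to species i. B is 1 when all visits go to
    a single species and equals the number of species when visits
    are spread evenly across them."""
    total = sum(visit_counts)
    proportions = [c / total for c in visit_counts]
    return 1.0 / sum(p * p for p in proportions)
```

      A bee splitting its visits evenly between two flower species gets B = 2, while one concentrating on a single species gets B near 1, so the reported rise from about 2.6 to 7.0 reflects a substantially broader diet.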

    2. We measured the change in tongue length of B. balteatus and B. sylvicola using specimens collected from 1966–1980 and 2012–2014 in the central Rocky Mountains (15). These two species historically comprised 95 to 99% of bumble bees at our high-altitude field sites (16–18). B. balteatus workers were collected from three geographically isolated locations: Mount Evans (39°35.033′N, 105°38.307′W), Niwot Ridge (40°3.567′N, 105°37.000′W), and Pennsylvania Mountain (39°15.803′N, 106°8.564′W).

      B. balteatus and B. sylvicola are the most common bumble bee species at the three research sites in the central Rocky Mountains: Mount Evans, Niwot Ridge, and Pennsylvania Mountain. To analyze the change in tongue length of both species over time, recent specimens from 2012–2014 were compared with older specimens from 1966–1980.

    3. Although the climate change impacts on phenological and spatial overlap of mutualists are well known, the role of climate change in generating functional discrepancies between them is less understood. Using historical data, we show that reduced flower abundance in bumble bee host-plants at the landscape scale has accompanied recent warming, leading to evolutionary shifts in foraging traits of two alpine bumble bee species

      It is known that climate change is impacting compatibility of mutualistic species. However, it is unknown which specific mechanisms of the mutualistic relationship are being affected. Using historical data the authors measured flower abundance shifts due to climate change and their effects on foraging behaviors by the bees studied.

    1. we restricted this analysis to break periods.

      The authors wanted to look at the relationship between neural activity and USV production. However, they needed to remove the possibility that both of these things are independent responses to tickling.

      For this reason, they carried out the experiments during breaks, when USVs and increased neuronal activity were occurring, but tickling was not a factor.

    2. Our recordings revealed that USVs and neuronal activity in the trunk cortex are modulated in a similar way by tickling and anxiogenic conditions. We wondered whether tickling-evoked USVs and neuronal responses to tickling are causally linked. We therefore aligned neuronal firing to the onsets of USVs (Fig. 4, A and B).

      The authors observed that USV production and neuron firing change in the same ways in response to tickling.

      In this set of experiments they wanted to see if there was also a causal relationship. In other words, is the neuronal activity triggering the production of sounds or do they just happen to occur at the same time?

    1. In this paper, we report a novel set of hominin tracks discovered at Laetoli in the new Site S, comparing it to a reappraisal of the original evidence. The new tracks can be referred to two different individuals moving in the same direction and on the same palaeosurface as those documented at Site G.

      The authors of this study analyzed the new set of footprints found at the Laetoli site. Their goal was to compare their results with earlier work in determining stature, morphology, and degree of sexual dimorphism of the individuals leaving the tracks.

    1. climate protection

      This paper investigates how to optimize carbon storage by understanding how plant diversity affects the capacity of plant communities to act as carbon sinks, lowering atmospheric CO2.

    2. Thus, for each of the two grassland experiments, we used data collected from each plot to estimate parameters for a model that projects soil C accumulation over a 50-year time frame

      The author used data collected from each plot to estimate the parameters of a model of soil carbon accumulation, both near the surface (0 to 20 cm) and at a deeper level (20 to 100 cm). Using those parameters, the author projected the amount of carbon accumulating in the soil over a 50-year period.
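
      The projection model is not written out in this annotation. A common one-pool form consistent with the I (input) and k (decay) parameters discussed later is dC/dt = I − kC, whose closed-form solution is sketched below; treat this as an assumed illustration with invented numbers, not the paper's exact equation:

```python
import math

def soil_carbon(t, c0, influx, k):
    """One-pool soil carbon model dC/dt = I - k*C: the stock starts at
    c0 and relaxes toward the equilibrium I/k at rate k."""
    equilibrium = influx / k
    return equilibrium + (c0 - equilibrium) * math.exp(-k * t)

# Hypothetical 50-year projection from an initial stock of 60 Mg/ha.
projected_50yr = soil_carbon(50, c0=60.0, influx=5.0, k=0.05)
```

      Under this form, plots with higher inputs I (for example, from greater plant productivity) approach a higher equilibrium carbon stock I/k over the 50-year horizon.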

    3. E141, called BioCON, started in 1997 to explore how plant communities respond concurrently to three forms of environmental change: increasing nitrogen deposition, increasing atmospheric CO2, and decreasing biodiversity (45).

      In an experiment begun by P. B. Reich and others in 1997, the effects of increased nitrogen deposition, increased atmospheric CO2 levels, and decreased species richness on plant communities were observed. The author used only the data from the CO2 and nitrogen treatments for this research study. With these data the author was able to calculate carbon storage in the BioCON plots, because plants store carbon through the accumulation of biomass, and CO2 and nitrogen are vital components of the chemical pathways plants use to build biomass.

    4. represented C4 grasses, C3 grasses, legumes, and other forbs. The species composition of plots was chosen by separate random draws of 1, 2, 4, 8, or 16 plant species from a pool of 18 species, with each level replicated in 30 or more plots.

      The author used C4 grasses, C3 grasses, legumes, and other forbs as a common variable across all experimental plots. Species native to American grassland were drawn from a pool of 18 plant species, and each plot was planted at a chosen level of species richness by randomly selecting 1, 2, 4, 8, or 16 species. Each richness level was replicated in 30 or more plots to provide multiple trials for averaging.

    5. Marginal carbon was computed for each of the 15 incremental steps in species richness described in the modeled data, from S = 1 to S = 2, S = 2 to S = 3, …, to S = 15 to S= 16, bounded by the actual data from the experiments where S varied from 1 to 16 in experimental plots.

      The marginal increase in carbon was estimated for each incremental step in species richness, from 1 to 16 species. The modeled values calculated from equation (2) were bounded by the actual data collected from the experimental plots.
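
      The marginal calculation described above amounts to taking successive differences of the modeled carbon totals. A minimal sketch (the carbon values below are hypothetical, not taken from the paper):

```python
def marginal_carbon(c_by_richness):
    """Marginal carbon gain for each one-species step in richness:
    delta_C(S) = C(S+1) - C(S)."""
    return [c_by_richness[s + 1] - c_by_richness[s]
            for s in range(len(c_by_richness) - 1)]

# Hypothetical modeled carbon totals (g C per m^2) for S = 1 to 5 species:
steps = marginal_carbon([900, 1030, 1110, 1165, 1205])
# -> [130, 80, 55, 40]: each added species contributes less than the last
```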

    6. For each bootstrapped iteration for each year, the parameter values a and b were used to estimate total soil and ecosystem C content for plots ranging from 1 to 16 species

      The author used the equation above to calculate the total amount of carbon in the soil and ecosystem across the varying levels of species richness in each plot. The values a and b were parameters, calculated with equation 1, used to measure the amount of carbon in the plants and the soil over a 50-year simulation. Using these parameters and the level of species richness in each plot (ranging from 1 to 16 species), the total amount of carbon in the ecosystem was determined.

    7. In the end, the Metropolis-Hastings algorithm generates a distribution of accepted I and kvalues wherein their means coincide with the minimal value of J (that is, the best fit to the observed data). Soil C content for each plot was projected to 50 years using the mean parameter estimates I and k and measured C0 for both depths (that is, soil C at start of the experiment).

      The author used the Metropolis-Hastings algorithm to minimize the difference (J) between the observed and modeled soil carbon for each year in which soil carbon data were available. The algorithm generated a distribution of accepted I and k values whose means correspond to the smallest value of J (the best fit to the observed data). Using the mean I and k estimates, the author projected the amount of carbon accumulated, at both the surface and the deeper soil layer, over a 50-year period. The author assumed these calculations to be within roughly 30% of the actual amount of carbon in the soil.
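
      The annotation describes fitting an annual carbon input (I) and a first-order decomposition rate (k). A common model consistent with that description is dC/dt = I − k·C; the sketch below projects soil carbon 50 years forward under that assumed form, with made-up parameter values (the paper's fitted values are not reproduced here):

```python
import math

def soil_carbon(t, c0, i, k):
    """Soil carbon after t years under the assumed first-order model
    dC/dt = I - k*C, whose solution is
    C(t) = I/k + (C0 - I/k) * exp(-k*t)."""
    return i / k + (c0 - i / k) * math.exp(-k * t)

# Hypothetical parameter values, for illustration only:
c0 = 800.0   # initial soil carbon (g C per m^2)
i = 60.0     # annual carbon input I (g C per m^2 per year)
k = 0.05     # decomposition rate k (per year)

projection_50yr = soil_carbon(50, c0, i, k)   # approaches I/k = 1200 over time
```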

    8. For E141, soil C data were available for each plot at the beginning of the experiment (year 0) and after 5 and 10 years of experimental treatment (0- to 60-cm depth); we scaled soil C data to 0- to 100-cm depth for E141 using soil C depth distributions for E120 (20-cm increments to 1-m depth) to determine the proportion of C in the top meter of soil contained in the 60- to 100-cm depth, and then we used this proportion with observed soil C data for each plot to estimate total C to 1 m.

      For the BioCON experiment, the author used soil carbon data collected from each plot at the beginning of the experiment and after 5 and 10 years of treatment (0 to 60 cm depth). To estimate the amount of carbon from 0 to 100 cm, the author used the soil carbon depth distributions from the BigBio experiment (measured in 20 cm increments down to 1 m) to calculate the proportion of the top meter's carbon contained in the 60 to 100 cm layer. Applying this proportion to the observed data gave an estimate of total soil carbon to a 1 m depth for the BioCON plots.
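
      The depth-scaling step can be sketched as follows; the depth distribution and observed value below are hypothetical, chosen only to illustrate the proportion-based calculation:

```python
# Hypothetical E120-style depth distribution of soil carbon (g C per m^2),
# used to find the share of the top meter's carbon in the 60-100 cm layer:
depth_c = {"0-20": 450.0, "20-40": 250.0, "40-60": 150.0,
           "60-80": 90.0, "80-100": 60.0}
frac_60_100 = (depth_c["60-80"] + depth_c["80-100"]) / sum(depth_c.values())

def scale_to_1m(observed_0_60, frac_deep=frac_60_100):
    """Scale observed 0-60 cm soil carbon to an estimated 0-100 cm total,
    assuming the 60-100 cm layer holds the same fraction as in E120."""
    return observed_0_60 / (1.0 - frac_deep)

estimate = scale_to_1m(850.0)   # about 1000 g C per m^2 with these numbers
```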

    9. To fit the model and obtain plot-level parameter estimates of I and k, we used all available soil C data from the Cedar Creek Ecosystem Science Reserve website (available publicly from www.cbs.umn.edu/explore/cedarcreek). For E120, surface soil data (0 to 20 cm) were available from years 0, 1, 5, 7, 9, and 11 for all plots, and from year 10 for some plots, and data for 0 to 100 cm were available for years 0 and 11.

      The author used soil carbon data from the Cedar Creek Ecosystem Science Reserve website to obtain plot-level estimates of two parameters: (i) the annual input of carbon into the soil (I) and (ii) the constant rate of decomposition per year (k). For the BigBio experiment, surface soil carbon data (0 to 20 cm) were available from years 0, 1, 5, 7, 9, and 11 for all plots, and from year 10 for some plots. Deeper soil carbon data (0 to 100 cm) were available for years 0 and 11.

    10. In both experiments, soils were treated before initiation of the manipulations. Before planting, methyl bromide was applied to soils in the BioCON experiment, and the uppermost layer of the soil (0 to 5 cm) was relocated off plot for BigBio, to reduce the influence of the seed bank on species composition of the experimental plots in both cases.

      The soil in all plots was treated before the experiments began. Methyl bromide was applied to the BioCON plots, and the uppermost 5 centimeters of soil was relocated off the BigBio plots. In both cases the goal was to ensure that previously existing seeds would not change the species richness of each plot, since the addition of unrecorded plant species would influence the amount of carbon storage occurring in each plot and could skew the data collected over the course of the experiment. The difference in pretreatments between BioCON and BigBio could also have contributed to the different rates of carbon accumulation observed between the two experiments.

    11. These experiments—and our analyses of them—excluded large vertebrate grazers, which can influence grassland carbon storage (47) and, thus, its economic value; our assessment focused solely on species richness and did not consider grazing or other influences on grassland carbon accumulation

      A previous research study, written by D. P. Xiong, suggested that the presence of grazing animals influences grassland carbon storage. By consuming plants, grazing animals decrease the total amount of biomass in grasslands, thereby decreasing the amount of carbon stored in the environment. The author chose not to take the effect of grazing animals into account and looked only at the effects of plant diversity on carbon accumulation.

    12. Thus, any influence of functional groups on carbon accumulation (8, 46) are distributed across treatment levels in our analysis.

      The use of C4 grasses, C3 grasses, legumes, and other forbs across all plots in this study was informed by the work of D. A. Fornara and D. A. Wedin. Because these functional groups were distributed across all treatment levels, the author was able to conclude that differences in carbon storage among plots were due to the varying levels of species richness rather than to the presence of particular functional groups. These differences in carbon storage were then used in the analysis to determine the relationship between species richness and carbon storage.

    13. Species richness was manipulated in subplots (2 m × 2 m) located within the three 20-m-diameter ambient CO2 plots, with 32 randomly assigned replicates for the 1-species treatments

      For BioCON, the author used 2 m × 2 m subplots to observe the effects of increased species richness on carbon storage. The 1-species treatments were replicated in 32 randomly assigned subplots; 15 subplots were given a mixture of 4 different species, and 12 subplots were given a mixture of 16 different plant species.

    14. E120 contained 342 plots laid out as 13-m × 13-m squares with the central 9 m × 9 m actively maintained for the specified species and plant diversity (44).

      A previous work by D. Tilman and coworkers described the experimental layout of this research study. Using 342 plots at the Cedar Creek Ecosystem Science Reserve in Minnesota, United States, the author was able to examine the relationship between species richness and carbon storage in the American grassland. The experiment began in 1994 with 13 m × 13 m plots subjected to varying levels of species richness, and the carbon uptake response was observed. The central 9 m × 9 m area of each plot was actively maintained so that only the specified species, and therefore the intended level of plant diversity, was present.

    15. For plants, we used observed changes in plant carbon content. For soils, we used data on soil carbon and plant productivity to model carbon accumulation as a function of increasing species richness over a 50-year period.

      The author used observed changes in plant carbon content, reflected in plant growth, to measure the amount of carbon being sequestered by the plants.

      For soils, the author used data on soil carbon levels and plant productivity to model the total amount of carbon accumulated, as a function of increasing species richness, over a 50-year period. The author was then able to create a fraction of carbon content, used later in equation (1), to estimate the amount of carbon in the soil during the experiment.

    16. We utilized these data to assess the marginal increase in carbon content with increasing species richness and estimated the economic value of the carbon storage conferred

      The author used data collected from the American grassland plots with increased plant diversity to determine whether carbon uptake in the environment increased relative to the carbon content of the control plots.

      Using the valuation of carbon, the author then estimated the economic worth of the carbon stored in the experimental fields, and graphed the results to show the relationship.

    17. We analyzed data from two experiments, performed in a North American grassland where species richness had been manipulated for over a decade periodic measurements of plant and soil carbon content in this site over time have suggested that both factors increase with species richness (8, 22),

      The author collected data from two experiments, conducted in American grassland fields with varying levels of plant species diversity for over ten years, on the amount of carbon in both the plants and the soil.

      In multiple research papers written by leading scientists D. A. Fornara and P. B. Reich, the results show that increased plant diversity in grassland environments increases the carbon content of both plants and soil. The data in this research paper are consistent with those past findings: an increase in species richness in American grasslands increases carbon storage in the ecosystem over time.

    18. We calculated the marginal change in carbon content with increased richness. Next, we calculated the economic value of species richness for carbon storage in grasslands, using a wide range of estimates of the social cost of carbon compiled by the Interagency Working Group in a recent synthesis used by U.S. federal agencies when estimating the benefits of carbon reductions from application of federal rules and regulations [mid-range estimate, $137.26 per metric ton C (MT C−1), ranging from a low estimate of $41.94 to a high estimate of $400.33 MT C−1; see Materials and Methods] (25).

      The author used the data collected to find the change in the amount of carbon in the American grassland fields when exposed to increased plant diversity. The author then used estimates of the social cost of carbon, compiled by the Interagency Working Group for the United States government, to calculate the economic worth of the carbon stored in the grassland fields. That previous work gave the author a basis for assigning a dollar value to the carbon stored in the plots.
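
      The valuation step is a simple multiplication of stored carbon by the social cost of carbon. The sketch below uses the three SCC estimates quoted in the text with a hypothetical amount of stored carbon:

```python
# Social cost of carbon (SCC) estimates quoted in the text, in dollars per
# metric ton of carbon:
scc = {"low": 41.94, "mid": 137.26, "high": 400.33}

def carbon_value(metric_tons_c, scc_per_ton):
    """Economic value of stored carbon: tons stored times SCC."""
    return metric_tons_c * scc_per_ton

added_c = 10.0   # hypothetical metric tons of carbon stored
values = {level: carbon_value(added_c, price) for level, price in scc.items()}
# values["mid"] is about $1372.60 for this hypothetical amount
```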

    1. Our methodology for identifying these pro-ISIS aggregates was as follows.

      As described in the Supplementary Material, the manual analysis can be broken down into the following steps:

      • Experts searched websites such as VK.com for common hashtags and keywords on a daily basis.
      • A list of aggregates, updated daily, was assembled by the experts to include only those aggregates appearing to express a strong allegiance to ISIS.
      • To find newly created aggregates, the experts analyzed posts and reposts among known aggregates and followed selected profiles that actively publish ISIS news.
      • Newly found aggregates were added to the authors' database.
      • The database of aggregates was reviewed daily to determine which aggregates were still active and which had been shut down.
    2. Our data sets consist of detailed second-by-second longitudinal records of online support activity for ISIS from its 2014 development onward and, for comparison, online civil protestors across multiple countries within the past 3 years

      On a daily basis, experts looked for specific hashtags and keywords that indicated activities related to ISIS or to civil unrest. At the same time each day, these experts logged into VK.com (a social network popular in Europe, comparable to Facebook) and searched for newly created aggregates, which were then added to a database.

    1. Our framework was designed to compute, from the best-available data, an order-of-magnitude estimate of the amount of mismanaged plastic waste potentially entering the ocean worldwide.

      Here the authors restate what the goal of their investigation was.

    1. To determine the effect of starvation on lipid reserves, the total lipid contents of mosquitoes were quantified in either sugar-fed or starved females

      Starvation is the body's response to long periods of fasting. During starvation, the body burns fatty acids and lipid reserves, along with small amounts of muscle tissue, to provide the brain with glucose as an energy source.

    2. Here diet restriction, in vivo depletion of INSr and FOXO using RNA interference (RNAi) and insulin treatments were used to modify insulin signaling and study the cross-talk between insulin and JH in response to starvation.

      INSr is a gene that encodes the insulin receptor, a cell-surface receptor tyrosine kinase. Binding of insulin to this receptor initiates the insulin signaling pathway, which regulates how glucose is absorbed by muscle and fat cells. FOXO is a family of transcription factors, proteins that control the transcription of genetic information from DNA to messenger RNA. FOXO transcription factors help regulate which genes are expressed, and they also signal apoptosis (cellular suicide) when a cell is damaged beyond repair.

    1. burst speed-length relationship

      The relationship between a fish's body length and the speed it can reach in a short burst of energy.

    2. Maximum swimming speed was measured following (Wardle, 1975):

      The following formula was used to determine the maximum swimming speed for each of the fish. The measurements needed are the fork length of a fish, its stride length, and its tail-beat frequency.
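
      Wardle's relation can be sketched as follows, assuming (as is standard for this method) that the maximum tail-beat frequency equals 1/(2 × minimum muscle contraction time) and that stride length is a fixed fraction of fork length; all numbers below are hypothetical, not the paper's measurements:

```python
def max_swim_speed(fork_length_m, stride_frac, min_contraction_s):
    """Maximum swimming speed following Wardle (1975):
    U_max = stride_length * f_max, where
      stride_length = stride_frac * fork_length (meters), and
      f_max = 1 / (2 * min_contraction_s), since one full tail beat
      requires two muscle contractions (one per side of the body)."""
    stride_length = stride_frac * fork_length_m
    f_max = 1.0 / (2.0 * min_contraction_s)
    return stride_length * f_max

# Hypothetical fish: 2 m fork length, stride of 0.7 body lengths,
# 25 ms minimum muscle contraction time.
speed = max_swim_speed(2.0, 0.7, 0.025)   # about 28 m/s
```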

    3. Pilot tests on euthanized rainbow trout (Oncorhynchus mykiss) (10°C) performed at the University of Copenhagen revealed no difference in peak contraction time with increasing stimulus voltage.

      This shows that having more or less voltage wouldn't affect the peak muscle contraction time, so having different fish be stimulated at different voltages wouldn't have affected the minimum contraction time that the authors were looking for during their experiment.

    4. We first applied 10V to a fish, doubling this amount in case of no contraction, using a maximum of 100V

      A small amount of voltage was used at first (10V), and this was then increased by doubling the initial voltage, until contraction occurred. However, a voltage above 100V was never used. To put this amount into perspective, the standard voltage for electrical outlets in the United States is 120V.

    5. post hoc Tukey test

      A statistical test used to confirm where the differences between groups occurred when it's known that there is a statistically significant difference between the means/averages of the groups.

    6. ANOVA

      A statistical method (analysis of variance) that tests whether the means of different groups differ, by comparing the variation between groups with the variation within groups.
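
      The core of a one-way ANOVA is the F statistic, the ratio of between-group to within-group variance. A minimal sketch with made-up data (a significant F would then typically be followed by a post hoc Tukey test to locate which pairs of groups differ):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: the ratio of between-group variance
    (mean square between, MSB) to within-group variance (MSW)."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Three hypothetical groups of muscle contraction times (ms):
f_stat = one_way_anova_f([[20, 22, 21], [30, 31, 29], [25, 24, 26]])
# A large F (here 61.0) indicates the group means differ far more than
# expected from within-group variation alone.
```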

    7. The body temperature at the stimulus location [15, 30, 45, 60, 75% along the fish fork length (Lf) with 0% representing the tip of the head and 100% the fork of the tail] was

      They also measured the muscle temperatures of each fish at the 5 different areas (along the length of the fish from head to tail) where they measured muscle contraction times. The temperature of the muscle would affect whether it contracts faster or slower (warmer is faster).

    8. Sea surface temperature

      The temperature of the water that fish live in also affects their swimming speed, because warmer water generally enables them to propel themselves more efficiently. That's why it was especially important to record the temperatures of the waters from which the fish were collected.

    9. Estimates based on minimum muscle contraction times thus yield the theoretical maximum values attainable by fish

      Measuring the muscle contraction times of the fish after taking them out of the water enabled the researchers to determine the highest potential speed a fish can reach without real-life factors of the fish's environment, like water current, affecting their data. These estimates are representative of the data they would have obtained by observing the fish in action with high-speed cameras.

    10. to measure their minimum muscle contraction times

      It's too difficult to observe maximum speed as it's happening in the wild so the best method would be to observe the physiology of the fish. The faster the muscle of an organism contracts, the faster it can move. As a result, the lower the time it takes for fish muscle to make one contraction, the faster the fish.

    11. predicts that such extreme speed is unlikely

      Using calculations, it should not be possible for sailfish and marlin to swim as fast as previously thought because swimming that fast would cause cavitation bubbles in the fish leading to death.

    1. see (6) for details

      In most scientific journals the Materials and Methods are part of the main text, but in Science they are part of the references and Supplementary Materials. For details on how the researchers developed their pathways and models, and information on the different ecosystem groupings, check out that link.

    1. Post hoc data presented here were generated by SPSS as standard outputs of the analysis, including the adjusted P-values reported throughout the manuscript.

      The authors of this experiment used the SPSS statistical software to run the post hoc analyses presented in the article, including the adjusted P-values reported throughout the manuscript.

    2. The descriptive statistics function was used to analyze the distribution of the data.

      The authors used descriptive statistics (mean, median, mode, etc.) to compare the data collected on oocyte Wolbachia titer with those collected on oocyte size.

    3. For measurement of ovary volume, tissues were dissected from adult flies and imaged using an AmScope MD500 5.0 megapixel digital Camera mounted upon a Jenco ST-F803 dissection microscope set at 1× magnification.

      Using the AmScope digital camera, the authors were able to improve the visualization of the ovaries by having the field of view appear on a computer screen and analyze from there.

    4. Screen shots of these ovary fill diagrams were then imported into Fiji (Image J version 2.0.0-rc-43/1.51d, NIH) for conversion into 8-bit, thresholded black and white images. The area of the ovary fill diagrams was determined in terms of pixels2 by the Analyze Particles function in Fiji. A scale bar was also used to calculate a pixel2 to micron2ratio (9.3025:1) that was applied to all oocyte area data, for presentation and discussion purposes only. Statistical differences were determined through analysis of the primary data in terms of pixel2 units.

      In this experiment, the authors used the Fiji Image J processing software in order to calculate the size and area of the oocytes, providing more insight into how the oocytes were affected.
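
      The unit conversion described above can be sketched as a single division, assuming the 9.3025:1 ratio means 9.3025 pixel² per micron² (the area value below is hypothetical):

```python
PX2_PER_UM2 = 9.3025   # pixel^2 per micron^2, the ratio reported in the text

def px2_to_um2(area_px2):
    """Convert a thresholded particle area from pixel^2 to micron^2."""
    return area_px2 / PX2_PER_UM2

area_um2 = px2_to_um2(18605.0)   # about 2000 micron^2
```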

    5. Three or more experimental replicates were performed for all treatment conditions examined. Significance of differences between conditions was determined by ANOVA analysis of the raw data.

      The authors compared the raw oocyte titer data across the experimental replicates using ANOVA, which tests for significant differences among the group means.

    6. Images were manually processed in Photoshop to remove extraneous signal outside the oocyte, and remaining oocyte puncta were quantified using the Analyze Particles feature in Image J version 2.0.0-rc-43/1.51d (NIH).

      The authors used Photoshop, an image editing software, to remove any extraneous signal outside the oocyte, making the images easier to analyze and interpret.

      Full tutorial and introduction to processing scientific images in photoshop: https://www.youtube.com/watch?v=SbsDtPouggs

    7. confocal images

      The authors collected confocal images through confocal laser scanning microscopy, an optical imaging technique that increases the optical resolution and contrast of a micrograph by using a spatial pinhole to block out-of-focus light during image formation.

    8. All replicates were imaged by laser scanning confocal microscopy on either Leica SP2 or an Olympus FV1200 confocal microscope at 63× magnification with 1.5× zoom.

      Laser scanning was used to magnify the cells and staining enhanced the detail of the structures, shown in black and white for differentiation purposes as well as measurement.

    9. This stock carries the wMel Wolbachia strain as confirmed previously (Christensen et al., 2016). 0−24-hour-old adult flies were selected at random and transferred into new bottles of standard food and aged for 2 days. Then flies were transferred to vials of nutrient-altered food and incubated for 3 days. Controls were run in parallel with all treatment conditions in all experiments.

      The authors selected very young flies at random and standardized their age and diet before the nutrient treatments, ensuring that all flies entered the altered-diet conditions at the same developmental stage and that differences in oocyte Wolbachia titer could be attributed to diet rather than to age.

    10. All feeding experiments were done using flies of the genotype w; Sp/Cyo; Sb/Tm6B, reared on standard food and in a controlled, 25°C environment.

      Using a single, defined genotype reduced genetic variability that could otherwise have confounded the results.

    11. To ensure homogeneous suspensions of nutrient-altered diet preparations, all food vials were immediately transferred to an ice bucket to be cooled with additional stirring every 10 min until the food completely solidified. Kimwipe strips were inserted into the food to wick away excess moisture.

      The authors placed the vials in an ice bath (a bucket of ice) to ensure a homogeneous suspension, meaning that the added nutrients were evenly distributed throughout the food rather than settling out as it solidified.

    12. This standard food was used as a base for all nutrient-altered foods that were prepared in this study (Table S3). The sugar-enriched foods were prepared by first making a stock sugar solution of 20 g sugar in 10 ml ddH2O, solubilized with rounds of 15 s in the microwave and then stirring, repeated until the sugar dissolved. 1.5 ml amounts of these sugar solutions were immediately mixed with 3.5 ml of melted standard food. As aspartame, erythritol, saccharin, and xylitol were not uniformly soluble, the sweetener-enriched foods were generated through direct addition of powder equivalents directly into 5 ml of melted standard food to a final concentration of 1 M (Table S3). Yeast-enriched food was prepared by mixing 1.5 ml of heat-killed yeast paste into 3.5 ml melted standard food. Dually enriched food was prepared through addition of 1.5 ml sugar solution and 1.5 ml heat-killed yeast to 2 ml standard food. Desiccated food was prepared by addition of 2.5 g silica gel (roughly 2.5 ml volume) to vials containing 5 ml standard food (Table S3).

      The recipes for the substance-enriched foods given to the flies.

    13. The impact of diverse dietary sugars on insulin signaling has not been fully defined in D. melanogaster. From the perspective of Wolbachia endosymbiosis, this study suggests that dietary sugars induce different classes of mechanistic responses.

      The researchers' reason for conducting this experiment was to explore the mechanisms controlling Wolbachia concentration, about which very little information exists.

    14. To further investigate how oocyte Wolbachia titer is controlled, this study analyzed the response of wMel Wolbachia to diets enriched in an array of natural sugars and other sweet tastants. Confocal imaging of D. melanogaster oocytes showed that food enriched in dietary galactose, lactose, maltose and trehalose elevated Wolbachia titer.

      The paper attempts to find the mechanism responsible for Wolbachia concentration increase in germ line cells through tests of natural/artificial sugars and yeast.

    1. we illustrate through a series of analyses that the stationary gene partition is superior to the nonstationary partition

      Gene partitioning divides a multigene dataset into subsets of genes that share some property. Here, genes were partitioned according to whether their nucleotide composition is stationary (roughly constant across taxa) or nonstationary (varying among taxa). Nonstationary base composition can mislead tree building, because species may be grouped together by compositional similarity rather than by shared ancestry. The authors show that trees built from the stationary gene partition recover relationships more reliably, which is why the stationary partition is considered superior.

    2. Herein, we demonstrate that these conclusions require substantial revision

      The main goal of this paper is to revise conclusions drawn in previous studies by Rokas and colleagues and by Gee about how phylogenetic trees should be constructed. The authors argue that those earlier conclusions, including the claim that a gene's phylogenetic performance cannot be predicted, require substantial revision.

    3. The authors then carried out a series of analyses

      In this experiment, the authors had to find out the lowest possible amount of data they needed to come up with a correct species tree. In other words, they want to discover the most efficient way possible to create an accurate phylogenetic tree.

    4. this approach is necessary because there are no identifiable parameters that predict the phylogenetic performance of genes (Gee, 2003; Rokas et al. 2003)

      It had been hypothesized that, no matter what, the accuracy of a tree is directly related to the number of genes used to create it. This means that the greater the number of genes studied, the more accurate the tree, with accuracy continuing to improve as genes are added.

      The second part of the hypothesis states that this should be true because there is no known property that can "rank" a gene's ability to contribute to a phylogenetic tree. In other words, since individual genes cannot be judged in advance as better or worse performers, the only way to improve a tree is to add more genes.

    5. We investigated incongruence between stationary and nonstationary partitions further by examining partitioned Bremer support.

      Partitioned Bremer support is a method for assessing how much each data partition contributes support to (or conflict with) each branch of the tree obtained from the combined dataset. When the combined analysis yields several equally parsimonious trees, partitioned Bremer support is usually computed on the consensus tree; however, additional information can be obtained if it is calculated separately for each most-parsimonious tree, or averaged across them, since the values can differ from tree to tree.

    6. We performed parsimony bootstrap analysis of individual genes across all positions to compare the phylogenetic performance of these partitions.

      "Bootstrapping" is a technology based practice for determining the accuracy or precision of many statistical results. For phylogenetic  trees, the bootstrap helps by exampling "confidence" in related organisms (common ancestor) in proportion with organisms from the same "family".

      During this procedure, the authors essentially performed an analysis to determine how divergent, or dissimilar, the genes had come to be.
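
      The resampling step of a bootstrap analysis can be sketched as follows; the toy alignment is hypothetical, and the tree building done on each replicate is omitted:

```python
import random

def bootstrap_columns(alignment, n_reps, seed=0):
    """Resample alignment columns (character positions) with replacement
    to build bootstrap pseudo-alignments. A tree would be built from each
    replicate; the fraction of replicate trees containing a given grouping
    is that grouping's bootstrap support."""
    rng = random.Random(seed)
    n_cols = len(next(iter(alignment.values())))
    replicates = []
    for _ in range(n_reps):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]
        replicates.append({taxon: "".join(seq[c] for c in cols)
                           for taxon, seq in alignment.items()})
    return replicates

# Toy alignment with hypothetical sequences:
aln = {"taxonA": "ACGTAC", "taxonB": "ACGTTC", "taxonC": "TCGTAC"}
replicates = bootstrap_columns(aln, n_reps=100)
```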

  3. Jan 2018
    1. If Ir40a is required for the behavioral response to DEET, one must contemplate that both Ir40a and Orco are necessary for DEET sensation, but that neither pathway is sufficient for repellency on its own.

      The author of this paper is currently doing research on the olfactory receptors that mediate the mosquito’s human and plant host-seeking behavior and the genes that regulate their appetitive drives. This includes research on Ir40a.

    1. Figure 2 indicates reconstructed and estimated shifts in the distribution of major Mediterranean biomes

      These are some of the main general biomes found in the Mediterranean region. The BIOME4 model uses a longer list of specific biome types, and more than 20 were found in the Mediterranean region. For the analysis, this larger list was grouped into ten “aggregated biome types” which are presented in Figure 3. To see how the aggregated biomes were grouped, check out Table S1 in the Supplementary Materials.

    2. The colored areas illustrate the interquartile interval provided by the intermodel variability

      Each of the colored lines is generated by using multiple climate models as input to the BIOME4 model. The solid line indicates the average change predicted from multiple runs of the model. The variation within each model is indicated by the shaded region, which spans the interquartile interval: the middle 50% of results, from the 25th to the 75th percentile. As you can see, some models had higher variability than others, and the predicted changes within the first 50 years overlap between models.

    3. The limitations of a relatively simple ecosystem model are largely offset by two factors. First, this method directly relates the physical environment, including its seasonal variability, and atmospheric CO2 to plant processes and thereby avoids the strong assumptions made by niche models (18). Second, past observations are analyzed with the same process-based model that is used for the future projections, thus providing a more coherent framework for the assessment.

      Though the BIOME4 methods have limitations, there are two major advantages.

      First, the model is based on the underlying processes that connect climate and ecosystems, so it avoids the assumptions made by models that are based on correlating current ecosystem distributions and climate values. For example, these models often assume that the current ecosystem distributions are stable and at equilibrium, rather than in flux.

      Second, the researchers are able to use the same model for both the past reconstructions and for the future projections. This allows for more direct comparison between the two groups.

    4. For the Holocene, BIOME4 was inverted to generate gridded climate patterns by time steps of 100 years and associated ecosystems (“biomes”) from

      To reconstruct Holocene climate variables and biome types, the researchers "inverted" the BIOME4 model, flipping the input and output data. Rather than using climate data as the input, it uses plant data.

      The researchers started by converting the pollen core data for a given time and location into "plant functional type" scores, which BIOME4 uses to rank and select biome types based on the "best match." "Plant functional type" groups species together based on similar characteristics such as respiration rate, response to climate variation, and genetic makeup.

      The researchers then ran the BIOME4 model in "inverse" mode, testing a large number of randomized climate inputs for each time and location. The set of climate inputs that resulted in the best match to the pollen core data is used for the reconstruction.
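      The inverse-mode search described above can be sketched as a simple random search, assuming a hypothetical stand-in forward model in place of BIOME4. The variable names, climate ranges, and mismatch measure below are illustrative assumptions, not the paper's actual procedure:

```python
import random

def inverse_search(forward_model, observed_scores, n_trials=10000, seed=0):
    """Randomly sample climate inputs and keep the one whose modeled
    plant-functional-type scores best match the pollen-derived scores.
    Hypothetical stand-in for BIOME4's inverse mode."""
    rng = random.Random(seed)
    best_climate, best_err = None, float("inf")
    for _ in range(n_trials):
        # Sample a candidate climate (mean temperature in deg C and annual
        # precipitation in mm; ranges are illustrative only).
        climate = {"temp": rng.uniform(-20, 30), "precip": rng.uniform(0, 3000)}
        modeled = forward_model(climate)
        # Sum of squared differences as the mismatch measure.
        err = sum((modeled[k] - observed_scores[k]) ** 2 for k in observed_scores)
        if err < best_err:
            best_climate, best_err = climate, err
    return best_climate, best_err
```

      With enough random trials, the retained climate is the one whose modeled vegetation best matches the pollen record, which is the essential idea behind running a forward ecosystem model "in reverse."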

    5. see table S2 and (6) for details

      In many scientific journals, the Materials and Methods are part of the main text, but in Science they are part of the references and Supplementary Materials. For details on how the researchers developed their pathways and models, and information on the different ecosystem groupings, check out the supplementary information.

    6. 25th, 50th, and 75th percentiles

      These boxes show a range of results from many model runs for each scenario. The dot indicates the 50th percentile (median) result for all runs. The box shows the range containing the middle 50% of the results.
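      For readers who want to reproduce this kind of summary, the three percentiles can be computed directly from a set of model-run values; a minimal sketch (the run values are made up for illustration):

```python
def percentile(values, q):
    """Linear-interpolation percentile (the common 'linear' method),
    with q given in [0, 100]."""
    xs = sorted(values)
    if len(xs) == 1:
        return xs[0]
    pos = (len(xs) - 1) * q / 100.0   # fractional position in sorted data
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    frac = pos - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

# Hypothetical change ratios from nine model runs of one scenario:
runs = [0.8, 1.1, 0.9, 1.4, 1.0, 1.2, 0.7, 1.3, 1.0]
box = {q: percentile(runs, q) for q in (25, 50, 75)}
# box[50] is the dot (median); box[25] and box[75] bound the box.
```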

    1. whole-mount in situ hybridization (WMISH) with species-specific probes, we show that crocodile, lizard, and snake placodes all exhibit spatial expression of Shh

      WMISH is a common technique for visualizing the location of expressed RNAs in embryos.

      In this study, the authors used WMISH to show that Shh was expressed in the placode.

    2. breeding experiments

      Individual lizards with different physical traits were bred together in order to observe how those traits were passed on to the offspring.

    3. each of these dermoepidermal elevations that generate scales in crocodiles, lizards, and snakes occurs at the location of a transient developmental unit that exhibits the characteristics (Fig. 1B) of the mammalian and avian anatomical placode

      The authors of this study expand on the findings of previous studies by showing that the elevations that result in scales correspond to the anatomical placode in mammals and birds, which gives rise to hair and feathers.

    4. skin developmental series (Fig. 1A) in crocodiles (Crocodylus niloticus), bearded dragon lizards (P. vitticeps), and corn snakes (Pantherophis guttatus)

      The authors took successive microscopic images of different body parts in crocodiles, lizards, and snakes, specifically focusing on the places where scales form.

    1. Here we present a framework to calculate the amount of mismanaged plastic waste generated annually by populations living within 50 km of a coast worldwide that can potentially enter the ocean as marine debris.

      The authors use population density data and waste accumulation rates to predict how much plastic might enter the ocean as marine debris.

  4. Dec 2017
    1. The most marked and derived macropatterning of skin in reptiles is observed in snakes

      Compared to other reptiles, snakes' scale development is unique.

    2. proliferating cell nuclear antigen (PCNA) analyses indicate a reduced proliferation rate of the placode epidermal cells

      PCNA is a protein involved in cell division, so it is used as a marker to locate cells that are actively dividing.

      This analysis showed that the cells of the placode were dividing very slowly.

    3. associated with the presence of an anatomical placode presenting all the characteristics observed in avian and mammalian placodes

      The authors of this study provide evidence for the theory that hair, feathers, and scales are homologous structures.

    1. To this end, luciferase RNA flanked by the 5′ and 3′UTRs of PE243, was transfected with increasing amounts of MSI1 into human embryonic kidney (HEK) 293T cells, which do not normally express MSI1 (fig. S4).

      Luciferase RNA flanked by the ZIKV 5′ and 3′ UTRs was transfected into human embryonic kidney cells along with increasing amounts of MSI1 to test the translation-regulating abilities of MSI1. The result was an increase in luciferase translation due to MSI1 (meaning MSI1 may stabilize the viral RNA genome).

    2. Because there was no discernible difference between ZIKV binding and entry into control and KO cells (Fig. 2, G and H), we asked if MSI1 could regulate translation through ZIKV UTRs

      The viral binding affinity was tested using a viral binding assay along with a pseudotyped-particle infectivity assay to test the ability of the virus to infect a cell. The results of the assays did not show a significant change in binding affinity.

    3. Consistently, levels of the viral dsRNA and flavivirus E protein, as well as the infectious titer, were reduced in the KOs (Fig. 2E and fig. S3). Because MSI2 levels were similar between control and KO cells (Fig. 2C), MSI1 and MSI2 are unlikely to have complete functional redundancy in ZIKV replication. Replication of the MR766 strain was also impaired in the KO cells (Fig. 2F)

      Levels of viral dsRNA, flavivirus E protein, and infectious particles were determined using confocal microscopy. Copies of viral RNA were measured after introducing the MR766 strain. The authors use this method to understand how viral replication can increase or, as in this article, decrease due to the inactivation of MSI1. The results showed a decrease in the amounts of viral particles and in MR766 replication.

    4. We then generated MSI1 knockouts (KOs) in U-251 cells by clustered regularly interspaced short palindromic repeats (CRISPR)–Cas9–mediated targeting of exons 8 or 6 of MSI1 (KO1 and KO2, respectively; Fig. 2C and fig. S2). Control cells were obtained through clonal expansion of cells transfected with Cas9 alone. By measuring viral RNA at different times after PE243 infection, a marked reduction of viral load was seen in KO1 and KO2 cells at 24 and 48 hours (Fig. 2D).

      MSI1 knockout cells (KO) were created using CRISPR/Cas9. KO1 and KO2 were then probed with N and C termini MSI1 and MSI2 antibodies. The results showed a decrease in viral load after 24 and 48 hours.

    5. In all three cell types, MSI1 depletion led to a marked reduction in viral RNA levels (Fig. 2, A and B)

      Cells were treated with either a control siRNA (small interfering RNA, used to silence protein-coding genes) or MSI1-targeting siRNA and then infected with PE243. Western blots were run on the proteins from the cells. The observations showed a decrease in viral RNA levels when MSI1 was depleted.

    6. To investigate whether MSI1 also binds ZIKV 3′UTR in vivo, ultraviolet (UV) cross-linking immunoprecipitation (CLIP) of RNA was performed from lysates of PE243-infected U-251 glioblastoma cells, revealing a robust direct interaction between MSI1 and PE243 ZIKV RNA (Fig. 1E)

      UV radiation was used to run a CLIP analysis (cross-linking immunoprecipitation, which uses UV light to covalently link proteins to the RNAs they bind) on infected cells. A western blot was run on the results, which allowed the scientists to detect the strong interaction between MSI1 and PE243 ZIKV RNA.

    1. Additional BRUV survey sites

      BRUVs were placed at sites different from the designated video sites in order to serve as a control for the experiment. The author expected to see distinct measures of site fidelity at these sites, helping to prevent false positives and to confirm that the results gathered from the no-take marine reserves were higher than results measuring site fidelity in open waters. -Sindy

    2. Baited remote underwater video (BRUV)

      BRUV is a system used in marine biology research. By attracting sharks into the field of view of a remote camera, the technique records the diversity, abundance, and behavior of shark species. -Sindy

    3. If sharks exhibit fine-scale site-fidelity to certain parts of GRMR, then the number of detections on a monitor should decrease with distance from the shark's tagging location.

      As a shark gets farther from its initial tagging location, it should be detected less often if it exhibits site fidelity, because a site-faithful shark remains close to its original site. -Sindy

  5. Nov 2017
    1. More information on shark movements and relative abundance in different management zones is needed to understand the extent to which marine reserves benefit Caribbean reef sharks and reef sharks in general.

      This research is centered around uncovering information on the abundance of sharks in zones that are monitored and reserved for marine animals. It tests the efficiency of no-take marine reserves and their correlation with the site fidelity of reef sharks. -Sindy

    2. However, can marine reserves also benefit large, roving reef predators that are potentially mobile throughout their life?

      This question sheds light on whether marine reserves are suitable not only as a permanent safe harbor for the recovery and expansion of a species, but also as temporary habitat for species that are mobile, which might use the space for breeding, for protection, or for a stable source of food and shelter. -Sindy

    1. The interactions between the insulin signaling pathway (ISP) and juvenile hormone (JH) controlling reproductive trade-offs are well documented in insects.

      A signaling pathway is a chain of molecular events that activates a specific function in a cell. Here, communication takes place between two hormonal signals, juvenile hormone and insulin, which together coordinate how an insect allocates energy to development and reproduction. -Nicole Jones

    1. Multiple alignments of protein and nucleotide sequences were implemented using the BIOEDIT program (Hall 1999) and visually inspected for errors.

      BIOEDIT is a computer program that is capable of editing and aligning sequences rapidly and effectively. It is meant to be easy to use and is intended to allow researchers to easily create and analyze basic sequences. It is incredibly useful in this case, as it allows the researchers to quickly analyze sequences from multiple insect species. Learn more at http://www.mbio.ncsu.edu/BioEdit/page2.html -Eri-Ray
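      Under the hood, sequence-alignment tools like the one described rest on standard dynamic programming. A minimal Needleman-Wunsch global-alignment score, with illustrative scoring values rather than BIOEDIT's actual defaults:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score (Needleman-Wunsch) between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # aligning a prefix against all gaps
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]
```

      For example, `nw_score("ACGT", "AGT")` scores three matches and one gap. Real tools add substitution matrices and affine gap penalties on top of this same recurrence.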

    2. It has been found a conserved relationship among dsx/tra/tra-2 across dipterans so that this axis represents the ancestral state of the sex determination cascade,

      The conserved relationship among all dipterans suggests that this relationship evolved long ago. Part A of Figure 1 shows the cascade through which sex is determined in Drosophila, an example of a mechanism that works efficiently in determining sex. Through evolution and natural selection, the most efficient mechanisms would be passed on (as shown in Part A of Figure 1), while inefficient mechanisms would be removed from the population. -Elder

    1. Contraction times at the 45% Lf position

      Different positions along the fish's body led to faster or slower muscle contraction times. The authors used the position that minimized contraction time to estimate the maximum speed of a fish.

    2. also limited slippage of the hypodermic needles during contraction (no slippage was observed during contraction as well).

      Parafilm was used to make sure the needles didn't slip while measuring muscle contraction time, because that would lead to a source of error.

    1. fluorescein isothiocyanate (FITC)–dextran (3 kD) in aCSF, was infused at midday (12 to 2 p.m.) via the cannula implanted in the cisterna magna. In sleeping mice, a robust influx of the fluorescent CSF tracer was noted along periarterial spaces

      The scientists injected a fluorescent tracer, FITC, into a large reservoir of CSF called the cisterna magna.

      This enabled the scientists to detect CSF as it circulated through the brain and diffused into the interstitial space between neurons.

      The scientists observe that the CSF in sleeping mice flows robustly along arteries and diffuses into the brain tissue (the parenchyma).

    2. Periarterial and parenchymal tracer influx was reduced by ~95% in awake as compared with sleeping mice during the 30-min imaging session

      The scientists awaken sleeping mice and inject a second tracer (also small) into their CSF.

      The scientists observe far less CSF flow along arteries and into the brain. This suggests that wakefulness exerts a powerful and rapid effect on CSF circulation.

      This experiment compares two arousal states within the same mouse, rather than comparing the two states in two different mice. This kind of experimental design minimizes the impact of genetic and environmental variability between mice and makes this study extremely powerful.

    3. we repeated the experiments in a new cohort of mice in which all experiments were performed when the animals were awake (8 to 10 p.m.).

      The scientists have already demonstrated that CSF circulation is reduced when they wake a sleeping mouse.

      However, it is possible that this is only true when a mouse is awakened in the middle of its sleep cycle, or that it is a short-lived effect that only matters during the first several minutes of wakefulness.

      The scientists expanded their finding by injecting tracer into awake mice and then anesthetizing them.

    4. P < 0.05, two-way analysis of variance (ANOVA) with Bonferroni test],

      An Analysis of Variance (ANOVA) is a statistical method for determining whether multiple groups being studied are truly different from each other.

      "n = 6 mice" means that six mice were compared in total; this is a small sample, but because of the powerful design of their study (comparing arousal states within the same mouse), the scientists are able to detect significant differences between groups.

    5. Radiolabeled125I-Aβ1-40 was injected intracortically in three groups of animals: freely behaving awake mice, naturally sleeping mice, and animals anesthetized with ketamine/xylazine (fig. S4).

      In order to test whether their findings may be relevant for understanding Alzheimer's disease (AD), the scientists wanted to determine if the clearance (removal) of beta amyloid, Aβ, a protein that accumulates during AD, was more or less efficient during sleep.

      The scientists injected small amounts of a slightly radioactive form of Aβ, then waited for between 10 minutes and 4 hours, giving the brain an opportunity to remove the Aβ.

      The radioactive label is believed to not impact the clearance of Aβ, and allowed the authors to determine how much of the Aβ protein remained in the brain using a gamma counter.
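      Tracer clearance of this kind is often summarized with a first-order (exponential) decay model; a hedged sketch with made-up numbers, not the study's measurements:

```python
import math

def clearance_rate(fraction_remaining, hours):
    """First-order rate constant k from the fraction of tracer remaining
    after a given time, assuming exponential clearance f = exp(-k * t)."""
    return -math.log(fraction_remaining) / hours

def half_life(k):
    """Time for half the tracer to be removed."""
    return math.log(2) / k

# Illustrative only: suppose 25% of injected tracer remains after 2 hours.
k = clearance_rate(0.25, 2.0)   # per hour
t_half = half_life(k)           # -> 1 hour, since 0.25 = (1/2) ** 2
```

      Comparing rate constants fitted this way across the sleeping, awake, and anesthetized groups is one standard way to quantify whether clearance is faster in one state than another.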

  6. Oct 2017
    1. we used several methods to evaluate the degree of consistency among the sgRNAs or shRNAs targeting the top candidate genes

      The authors compared GeCKO screening to a similar screen performed with shRNA.

    2. RNAi Gene Enrichment Ranking (RIGER) algorithm

      The authors used an algorithm to rank the genes enriched in their screen by how likely it is these genes contribute to PLX resistance.

    3. we found enrichment of multiple sgRNAs that target each gene after 14 days of PLX treatment (Fig. 3E), suggesting that loss of these particular genes contributes to PLX resistance.

      After 14 days, the authors saw a change in the distribution of sgRNAs in drug-resistant cells. From the new distribution of sgRNAs, they were able to identify genes that may contribute to PLX resistance.
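      The enrichment idea can be sketched as computing a log2 fold change per sgRNA between pre- and post-treatment read counts and then aggregating per gene. This is a simplified stand-in for the RIGER ranking actually used in the paper, and the counts in the test below are invented:

```python
import math

def gene_enrichment(counts_before, counts_after, guide_to_gene, pseudo=1.0):
    """Median log2 fold change per gene across its sgRNAs.
    A pseudocount avoids taking the log of zero."""
    per_gene = {}
    for guide, gene in guide_to_gene.items():
        lfc = math.log2((counts_after[guide] + pseudo) /
                        (counts_before[guide] + pseudo))
        per_gene.setdefault(gene, []).append(lfc)

    def median(xs):
        xs = sorted(xs)
        n, mid = len(xs), len(xs) // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

    return {gene: median(lfcs) for gene, lfcs in per_gene.items()}
```

      Genes whose sgRNAs consistently gain reads under drug treatment score high, mirroring the paper's logic that enrichment of multiple independent sgRNAs for the same gene points to a real resistance gene rather than noise.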

    4. enrichment of a small group of cells that were rendered drug-resistant by Cas9:sgRNA-mediated modification.

      PLX exposure stops growth in cells with a specific BRAF mutation. Treating a group of cells with PLX will halt the growth of cells without resistance, but cells that are resistant to PLX will continue to grow. This allowed the researchers to isolate drug-resistant cells.

    5. we sought to identify gene knockouts that result in resistance to the BRAF protein kinase inhibitor vemurafenib (PLX) in melanoma

      To test the GeCKO library's effectiveness for positive selection (genetic variation that is beneficial to the cell), the authors tried to use it to identify which genes result in resistance to PLX.

      PLX is a BRAF enzyme inhibitor. BRAF is involved in the uncontrolled (cancerous) cell growth in melanoma.

    6. we conducted a negative selection screen by profiling the depletion of sgRNAs targeting essential survival genes

      To determine how effective the GeCKO library is at knocking out targeted genes, the authors first used a negative selection screen (which tests for deleterious effects).

      They infected cells with the library of sgRNAs. At the end of 14 days, they observed a reduction in the diversity of sgRNAs, as those sgRNAs that targeted genes necessary for survival were lost when cells died.

    7. We designed a library of sgRNAs targeting 5′ constitutive exons (Fig. 2A) of 18,080 genes in the human genome with an average coverage of 3 to 4 sgRNAs per gene (table S1), and each target site was selected to minimize off-target modification

      The authors produced a wide variety of sgRNAs targeting the 5′ ends of constitutive exons, sequences that are included in every transcript of a gene and used to create proteins.

    8. potential of Cas9 for pooled genome-scale functional screening

      As noted above, genome-wide screening has been successfully performed with RNAi. Here, the authors wanted to know if CRISPR could offer a more accurate and precise way to screen genomes.

    1. CSF influx into the cortex of awake, anesthetized, and sleeping mice.

      The authors are looking to see if CSF influx (part of the convective exchange) is different between:

      awake mice, sleeping mice, and anesthetized mice

    2. We tested the alternative hypothesis that Aβ clearance is increased during sleep and that the sleep-wake cycle regulates

      This illustrates how scientists ask questions.

      The authors take an observed phenomenon (sleep) and carefully form testable hypotheses about its function.

      They combine their observation that sleep is restorative and dysregulated sleep is associated with mental illness, with their knowledge of the glymphatic system.

      They hypothesize a connection between the two, which they carefully test in well-designed experiments.

      They choose powerful methods for data collection and analysis, making comparisons within individuals to minimize variability, and systematically recording data at a variety of time points and conditions to increase their probability of answering their questions.

      Finally, they connect their findings into a general model for how sleep, arousal, and glymphatic function could interplay in normally functioning as well as diseased brains.

    3. why lack of sleep impairs brain function

      The scientists are asking two questions in this study:

      "why do we need sleep?"

      "why do we feel bad if we do not get enough sleep?"

    1. But little is known about the mechanisms underlying the evolution of habitat specialization and the extent to which herbivores contribute to phenotypic divergence during the speciation process

      The authors have a clear goal in their experiment. Starting from an observation that lacks an explanation, they build an experiment to identify the natural processes at work.

      -Luisa Bermeo

    1. Receivers were attached with shackles and heavy duty plastic cable-ties to a length of polyurethane braided rope

      Polyester rope is used because it has extreme resistance to abrasion and sunlight. -Sindy

    2. Externally mounted transmitters were fitted to these individuals instead of performing intracoelomic insertion because of inclement weather and rough sea conditions.

      The presence of reef sharks with external transmitters as well as internal transmitters further validates the results obtained. Despite the turbulent weather and rough sea conditions, all sharks were accounted for, providing an accurately measured population pool. -Sindy

    3. Caribbean reef shark populations can benefit from no-take marine reserves and increase in abundance in these areas

      The author expects reef sharks living in no-take marine reserves to be healthier and more abundant than those living elsewhere. No-take marine reserves are protected areas of marine habitat that prohibit extractive human activity. This makes sense given the well-known damage human activities inflict on marine environments. Reef sharks would thrive in a location with adequate natural resources, and because a no-take reserve is essentially a no-fishing area, sharks there are not hunted for their fins, meat, and so on. In a Fiji no-take marine reserve, four times as many sharks were observed as in areas where fishing was allowed.

      https://www.youtube.com/watch?v=mYZ6AIgFMQg

      -Sindy

    1. Additional tests for the presence of local molecular clocks were carried out in the case of SXL (insects) by using the program HyPhy

      HyPhy stands for hypothesis testing using phylogenies. HyPhy is a software package that serves many purposes as well as multiple audiences. It is used primarily for the analysis of genetic sequences using techniques from phylogenetics (the study of evolutionary history), though it is also commonly used in molecular evolution and to study rates and patterns of sequence evolution. It can also be compiled as a shared library and called from other programming environments, making HyPhy well suited to group work. To learn more, click here https://academic.oup.com/bioinformatics/article/21/5/676/220389/HyPhy-hypothesis-testing-using-phylogenies -Jake Barbee

    2. To fill this gap, the present work investigates the levels of variation displayed by five sex-determining proteins across 59 insect species, finding high rates of evolution at basal components of the cascade.

      The authors' goal in this article is to investigate the variation displayed by five sex-determining proteins in a sample of fifty-nine insect species; in doing so, they find high rates of evolution at the basal components of the cascade. To see a simple example of what the authors are trying to determine, click here. https://www.youtube.com/watch?v=7usaaiggDgw -Jake Barbee

    3. The study of the epistatic relationships between Sxl and the other genes involved in sex determination [i.e., transformer (tra), transformer-2 (tra-2), fruitless (fru), and doublesex (dsx)] has revealed a hierarchical interaction among them during development (Baker and Ridge 1980), with the product of one gene controlling the sex-specific splicing of the primary transcript of the gene immediately downstream [reviewed in (Sánchez 2008)] (Fig. 1a).

      Splicing refers to the process of cutting and rejoining RNA transcripts, removing some segments and joining others. This excerpt refers to the phenotypic and genotypic relationship between Sxl and the other sex determinants, which form a hierarchy in which the product of one gene controls the sex-specific splicing of the gene immediately downstream, that is, the next gene below it in the cascade. -Elder

    4. the program committing the embryo to either the male or the female pathway is under the control of the gene Sex lethal (Sxl)

      The "Sex lethal" gene is the sex determining gene of the sexually reproducing Drosophilia Melanogaster, During embryonic development, an XX embryo would have the Sxl gene expressed or active, while the XY embryo would have the Sxl gene suppressed or dormant to ensure the development of a male Drosophilia. -Elder

    5. Fig 1. Schematic representation of the hierarchical epistatic interactions constituting the sex determination cascade in Drosophila [adapted from (Sánchez 2008)] evolving from bottomto top (DSX doublesex, FRU fruitless, TRA-2 transformer-2, TRA transformer, SXL Sex-lethal). a In the absence of X/A signal in males, truncated SXL and TRA proteins will be produced leading to the synthesis of male-specific FRU and DSX that will eventually result in maleness. The major components of the cascade analyzed in the present work are indicated in gray background. b Under the bottom-up hypothesis, genes more recently recruited into sex determining pathways are expected to cause divergence toward the top of the cascade. c According to the developmental constraint hypothesis, genes involved in early aspects of development would be more constrained due to the large deleterious pleiotropic effects of mutations

      The author is using the chart to display the sex-determining proteins of the Drosophila species, in order to better describe how these proteins play an important role in sex determination. The chart also shows how the cascade evolved from the bottom up, creating higher divergence toward the top of the cascade. This would explain why genes recruited earlier are more constrained than those recruited later down the cascade. To better understand sex determination, please watch this video. https://www.youtube.com/watch?v=NQ4Mh_CU15E -Jake Barbee

    1. JH and insulin regulate reproductive output in mosquitoes; both hormones are involved in a complex regulatory network, in which they influence each other and in which the mosquito's nutritional status is a crucial determinant of the network's output.

      The hypothesis is that a mosquito's nutrition affects both insulin sensitivity and juvenile hormone synthesis. Insulin is a hormone that regulates the amount of glucose (sugar) in the blood. Juvenile hormone is an insect hormone involved in maturation and reproduction. So the researchers are saying that the amount of nutrition (food) a mosquito takes in determines how active these two hormones, insulin and juvenile hormone, are in its body.

      Since the researchers hypothesize that how much mosquitoes eat determines how strongly insulin and juvenile hormone act, how much the insects reproduce is also affected. In other words, if the insects eat enough, they will reproduce under better conditions than if they do not, because the hormones that control reproduction are themselves controlled by nutrition.

      -Nicole Jones

    1. Finally, knowing object distance is a prerequisite (or corequisite) in the model for deconfounding size, impedance and shape, so these features would first appear in the torus and higher areas. Although this proposal is not yet based on quantitative simulation or modeling, we believe it may be a useful working hypothesis for interpreting and further exploring parts of the electrosensory nervous system.

      In this sentence, the authors hypothesize that the electrosensory nervous system of the electric fish uses information about the size, shape, and distance of objects in an algorithm that processes and relays information to the fish's brain.

      -Kierra Hobdy

    2. Ultimately, weakly electric fish must extract and interpret any useful signals contained in small-field perturbations superimposed upon the intrinsic EOD pattern. Therefore, a considerable volume of the electric fish brain is devoted to electrosensory processing. For the computational algorithms proposed above to be involved in electrolocation, they must have a plausible neural implementation in the fish’s nervous system. We propose one such projection onto the neural networks in the electric fish brain.

      Electric fish sense their environment by emitting an electric field (the EOD) and detecting perturbations of it with electroreceptors in their skin. After external stimuli alter the field, information is relayed from the electroreceptive organs to the brain. Since this is an essential part of their way of life, the authors know that a large amount of neurons and brain matter are involved in this process. Therefore, they hypothesized that, in order to ensure this sensory information is relayed efficiently and quickly to the brain of the electric fish, there must be an algorithm used by the neural networks of the fish for this process to occur.

      -Kierra Hobdy

    1. Also, the replication “succeeds” when the result is near zero but not estimated with sufficiently high precision to be distinguished from the original effect size.

      Here, the authors describe a problem of judging the replication success by evaluating the replication effect against the original effect size. When the replication effects size is near zero, it could be possible that the data shows no effect, and therefore we would find an unsuccessful replication attempt.

      However, the estimation of the effect size could be imprecise. This means that there could be a lot of “noise” in the data, from random or systematic errors in the measurement. If there was a lot of noise in the data, it could distort our impression of whether the effect is really zero or not.

      We might conclude that a replication with an effect size close to zero was sufficiently different from zero and thus successful, although the effect was really just produced by noise in the data, and the true effect is zero, meaning that the replication could be falsely taken as a success.

    3. We correlated the five indicators evaluating reproducibility with six indicators of the original study (original P value, original effect size, original sample size, importance of the effect, surprising effect, and experience and expertise of original team) and seven indicators of the replication study (replication P value, replication effect size, replication power based on original effect size, replication sample size, challenge of conducting replication, experience and expertise of replication team, and self-assessed quality of replication)

      Last, the authors wanted to know if successfully reproducible studies differed from studies that could not be replicated in a systematic way.

      For this, they checked if a number of differences in the original studies, such as the size of the effect originally reported, was systematically related to successfully replicated studies.

      They also checked if a number of differences in the replication studies themselves, such as the size of the effect of the replication study, related systematically to successful replications.

    4. We conducted fixed-effect meta-analyses using the R package metafor (27) on Fisher-transformed correlations for all study-pairs in subset MA and on study-pairs with the odds ratio as the dependent variable.

      The authors combined the results of each original and replication study to determine if the cumulative joint effect size was significantly different from zero. If the overall effect was significantly different from zero, this could be treated as an indication that the effect exists in reality, and that the original or replication did not erroneously pick up on an effect that did not actually exist.
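      The Fisher transform and fixed-effect pooling mentioned here follow a standard recipe: each correlation r is transformed to z = atanh(r), weighted by its inverse variance (n - 3 for correlations), averaged, and back-transformed. A minimal sketch with invented study values, not the project's data:

```python
import math

def pooled_correlation(studies):
    """Fixed-effect meta-analysis of correlations.
    studies: list of (r, n) pairs; returns the pooled correlation."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z-transform of the correlation
        w = n - 3                # inverse-variance weight, var(z) = 1/(n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the pooled z to r

# Hypothetical original + replication pair:
pooled = pooled_correlation([(0.40, 100), (0.10, 150)])
```

      If the confidence interval around this pooled estimate excludes zero, the combined evidence of original and replication suggests a real, if possibly smaller, effect.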

    5. tested the hypothesis that this proportion is 0.5

      The authors hypothesized that in half the replication studies, the effect would be stronger in the original than the replication.

      The reason for choosing this null hypothesis is that it is the expectation we would have if chance alone determined the effect sizes. There are two possible outcomes for each replication study: (1) its effect size is bigger than that of the original study, or (2) its effect size is smaller than that of the original study.

      If chance alone determined the effect size of the replication, we would see each possible outcome realized in 50% of the cases. If the proportion of replication effects that are bigger than the effect of the original study was significantly different from 50%, it could be concluded that this difference was not random.
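      This comparison of a proportion against 50% can be sketched as an exact binomial test (the counts below are hypothetical, not the study's actual numbers):

```python
from math import comb

def two_sided_binomial_p(k, n):
    """Exact two-sided binomial test against p = 0.5.

    Because the null distribution is symmetric at p = 0.5, the two-sided
    P value is twice the probability of the smaller tail.
    """
    tail = min(k, n - k)
    p_tail = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p_tail)

# Hypothetical count: suppose 83 of 99 pairs had a stronger original effect.
p_value = two_sided_binomial_p(83, 99)
```

      A tiny P value here would mean the imbalance between "original stronger" and "replication stronger" outcomes is very unlikely to arise by chance alone.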

    6. Each replication team conducted the study, analyzed their data, wrote their summary report, and completed a checklist of requirements for sharing the materials and data.

      After completing their replication attempt, independent reviewers checked that each team's procedure was well documented, that it followed the initial replication protocol, and that the statistical analysis on the effects selected for replication was correct.

      Then, all the data were compiled to conduct analyses not only on the individual studies, but about all replication attempts made. The authors wanted to know if studies that replicated and those that did not replicate would be different.

      For instance, they investigated whether studies that replicated were more likely to come from one journal than another, or whether studies that did not replicate differed in their level of statistical significance from studies that could be replicated.

  7. Sep 2017
    1. In parallel, we engineered a new KDM5B construct with a 10X HA-tag SM (HA-KDM5B) (5) to complement FLAG-KDM5B (formerly referred to as SM-KDM5B), as shown in Fig. 4A. As a first application of this technology, we wanted to test if polysomes interact with each other to form higher-order structures that can translate two distinct mRNAs at the same time.

      To determine whether multiple polysomes can interact with one another to translate multiple mRNAs, the authors transfected two KDM5B constructs into cells, one containing the HA SM tag and one containing the FLAG SM tag. These two tags can be detected by different antibodies conjugated to different colored fluorochromes.

    2. To measure the lifetime of Fab binding, we performed fluorescence recovery

      Photobleaching, or FRAP (fluorescence recovery after photobleaching), uses laser light to quench the fluorescence emitted by the fluorescent tag. The fluorescence recovers as bleached Fab antibodies leave the translated proteins and are replaced by fresh, unbleached antibodies. This experiment therefore measures how long the Fab antibodies remain bound to the newly translated proteins. These experiments provided baseline data for determining the rate of translational elongation.
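      An idealized FRAP recovery curve is often modeled as a single exponential; the sketch below uses that simplification, and the exchange rate is an assumed value, not a measurement from the paper.

```python
import math

def frap_recovery(t, k, mobile_fraction=1.0):
    """Idealized FRAP curve: fraction of fluorescence recovered at time t
    after bleaching, assuming a single exchange rate k (per second)."""
    return mobile_fraction * (1.0 - math.exp(-k * t))

def half_time(k):
    """Time for the recovery to reach half of the mobile fraction."""
    return math.log(2) / k

# With an assumed exchange rate of 0.01 per second, half of the bleached
# Fab would be replaced after roughly 69 seconds.
t_half = half_time(0.01)
```

      Fitting such a curve to the measured recovery gives the exchange rate k, and hence the average lifetime of Fab binding.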

    3. To measure how quickly Fab bind polysomes, we microinjected them into cells transfected 6 hours earlier with our KDM5B construct and pre-loaded with MCP.

      This experiment measured how quickly the Fab antibodies bound to the polysomes by injecting the antibodies into cells containing the KDM5B gene and the MCP fluorescent marker.

    4. Besides their brightness, NCT also revealed differences in the mobility of polysomes. We quantified this by measuring the mean squared displacement of tracked polysomes as a function of time.

      The authors quantified the mobility of polysomes by tracking them over time and measuring their mean squared displacement: the average of the squared distance a polysome travels as a function of the elapsed time interval.
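      The mean squared displacement calculation can be sketched as follows (a generic illustration of the method, not the authors' analysis code):

```python
def mean_squared_displacement(track, max_lag=None):
    """Mean squared displacement of a 2-D trajectory.

    track: list of (x, y) positions at evenly spaced time points.
    For each time lag, average the squared distance between all pairs
    of positions separated by that lag. Returns a list whose first
    entry corresponds to lag 1, the second to lag 2, and so on.
    """
    n = len(track)
    max_lag = max_lag or n - 1
    msd = []
    for lag in range(1, max_lag + 1):
        squared = [
            (track[i + lag][0] - track[i][0]) ** 2
            + (track[i + lag][1] - track[i][1]) ** 2
            for i in range(n - lag)
        ]
        msd.append(sum(squared) / len(squared))
    return msd

# A particle drifting at a constant 1 unit per frame along x: its MSD
# grows quadratically with the lag, a signature of directed motion.
track = [(float(t), 0.0) for t in range(10)]
msd = mean_squared_displacement(track, max_lag=3)
```

      Freely diffusing particles instead show an MSD that grows linearly with the lag, so the shape of the curve distinguishes diffusive from directed motion.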

    5. To determine precisely how many nascent chains exist per site, we calibrated fluorescence by imaging a new beta-actin plasmid containing a single 1X FLAG tag rather than the 10X SM FLAG tag

      To determine how many proteins are being translated at any one time from a single molecule of mRNA, beta actin was labelled with only one FLAG tag (rather than 10). Multiple proteins can be translated from one mRNA using polysomes.

    6. As depicted in the inset of Fig. 2D, this allowed us to accurately compare (1) the appearance frequency and brightness, (2) the mobility, and (3) the size of translation sites.

      Using a system to track mRNA during translation, scientists were able to visualize the size of translation sites, the mobility of the molecules, and how bright each molecule appeared after being tagged with fluorescence.

    7. Western blots

      A procedure used to identify proteins by separating them by size (molecular weight) using gel electrophoresis and then detecting them with specific antibodies.

    8. To see if we could also detect translation of smaller proteins, we constructed two plasmids

      To further test the abilities of identifying translation dynamics using the previously identified system, experiments were conducted using smaller proteins, beta-actin and H2B, rather than KDM5B.

    9. To further confirm protein/mRNA spots were translation sites, we treated cells with 4 μg/ml cycloheximide to slow elongation and load more ribosomes per transcript. Consistent with this, spots got brighter

      A fourth experiment was conducted to confirm that the co-located protein-mRNA spots were sites of translation: cycloheximide was added at 4 μg/ml. Cycloheximide slows translation elongation while allowing more ribosomes to attach to a single transcript, consistent with the spots becoming brighter.

    10. To test this, we treated cells with 50 μg/ml of puromycin

      To test whether the co-moving protein-mRNA spots (spots in the images, Fig. 1, C and D, where red and green dots were co-located) were sites of translation, puromycin was added at 50 μg/ml to U2OS cells containing the transiently transfected plasmid. Puromycin inhibits translation while the mRNAs remain present.

  8. Aug 2017
    1. to areas in which changes from scenario RCP2.6 already appear (red areas)

      The authors do not include changes that occur in the RCP2.6L scenario in the Figure 3H map. Why might they have left that out?

    2. for scenarios RCP2.6L, RCP2.6, RCP4.5, and RCP8.5, respectively, at the end of the 21st century.

      The four maps for the future pathways, 3D to 3G, are based on putting the climate data for year 2100 for each pathway into the BIOME4 model, running in "forward" mode.

    3. reconstructed (rec) from pollen for the present

      The first map, 3A, comes from putting pollen core data from the past century into the BIOME4 inversion model, the same method as is used for the past biome map (3B).

    4. o assess the 1.5°C target, we created a fourth class (denoted RCP2.6L) from selected CMIP5 scenarios

      None of the existing RCPs resulted in a projection of 1.5°C, so the researchers created a new model so that they could evaluate the scenario.

    1. To determine whether the benefits afforded by tau reduction were sustained, we examined older mice.

      By examining older hAPP mice with and without tau, the authors could test whether the benefits of tau reduction persisted as the animals aged.

    2. Probe trials, in which the platform was removed and mice were given 1 min to explore the pool, confirmed the beneficial effect of tau reduction

      After the researchers trained all mice to swim to a platform hidden under the water surface, they removed the platform and measured how much time the mice spent in the area where the platform used to be. This way the researchers were able to test the spatial memory of the mice.

    3. hAPP/Tau+/+ mice took longer to master this task (Fig. 1A; P < 0.001). In contrast, hAPP/Tau+/– and hAPP/Tau–/– mice performed at control levels.

      Alzheimer’s mice (with hAPP) that had normal amounts of tau (Tau+/+) took longer to learn to swim to a visible platform than the other five types of mice.

    4. We crossed hAPP mice (11) with Tau–/– mice (12) and examined hAPP mice with two (hAPP/Tau+/+), one (hAPP/Tau+/–), or no (hAPP/Tau–/–) endogenous tau alleles, compared with Tau+/+, Tau+/–, and Tau–/– mice without hAPP (13).

      The authors used mice that were genetically engineered to express a human copy of the amyloid precursor protein (called hAPP mice). hAPP mice are a common animal model of Alzheimer’s disease. They develop amyloid plaques and severe memory and cognitive problems later in life, just like humans with the disease.

      The authors bred these hAPP mice with other mice that were missing both their genes for the tau protein (called Tau-/- mice). From this breeding plan, the authors produced hAPP mice with normal amounts of tau (hAPP/Tau+/+), with half the normal amount of tau (hAPP/Tau+/-), and with no tau (hAPP/Tau-/-).

      They also produced mice without the human APP gene that had normal, half, and no tau.

    1. Blast-related tau phosphorylation was also detected when quantitated as a ratio of phosphorylated tau protein to total tau protein (Fig. 5, E, F, H, and J).

      The authors saw an increase in total tau protein in blast-exposed mice. To confirm that this increase was also true for phosphorylated tau, they computed the ratio of phosphorylated tau to total tau.

    2. we performed immunoblot analysis of tissue homogenates prepared from brains harvested from mice 2 weeks after single-blast or sham-blast exposure

      The authors used western blotting to detect abnormal phosphorylated tau and confirm their electron microscopy observations. They confirmed that levels of phosphorylated tau were elevated in the brains of blast-exposed mice as compared to the control group.

      For more on western blotting, see the Journal of Visualized Experiments:

      http://www.jove.com/science-education/5065/the-western-blot

    3. Gross examination of postmortem brains from both groups of mice was unremarkable and did not reveal macroscopic evidence of contusion, necrosis, hematoma, hemorrhage, or focal tissue damage (Fig. 3, A to F, and fig. S8)

      This is an important "negative" finding that is reported in almost all cases of blast exposure. There is a lack of gross (visible to the naked eye) brain injury in the brains of people that have been exposed to a nonpenetrating blast.

      The fact that this finding is replicated in this study encouraged the authors to explore whether there were "invisible injuries" in the brains of blast-exposed mice.

    4. evaluated pressure tracings in the hippocampus of intact living mice (Fig. 2B) and compared results to the same measurements obtained in isolated mouse heads severed at the cervical spine

      To test the water hammer effect, the authors compared the pressure resulting from a blast wave both inside and outside the head of a living mouse. They then compared these measurements to similar measurements performed on a decapitated mouse head.

      Because the disembodied mouse head had no vascular system or thorax, the water hammer could not contribute to the shock wave pressure inside the head. However, the authors observed similar shock wave pressures in both the live mouse and the mouse head, showing that the water hammer effect cannot be the primary source of brain damage.

    5. ConWep (Conventional Weapons Effects Program)

      Explosion simulation software based on a large database of experimentally obtained blast data. ConWep enables comparative analysis of blasts. The authors used the software to verify that their simulated blasts closely matched explosions that would happen in the field.

    6. Wild-type C57BL/6 male mice

      In animal studies (especially those using mice) it is important for the researchers to identify the genotype of the species they're studying. This helps other researchers either replicate the findings or identify errors in the authors' reasoning due to the genetic makeup of the model organism.

      C57BL/6 mice are the most commonly used nontransgenic mouse in biomedical research. They are called "wild-type" because their genetics have not been changed by humans.

      Mouse colonies like this are inbred to make sure the mice are as genetically identical as possible. This helps eliminate extra, unexpected variables in an experiment that could affect the results in unknown ways.

    7. compressed gas blast tube

      The authors designed a "shock tube" to create controlled blasts. A shock tube is basically a wind tunnel: Pressure is built up on one side and suddenly released, creating a shock wave that travels down the tube.

      Source: Fraunhofer CMI

    8. blast neurotrauma model to investigate mechanistic linkage between blast exposure, CTE neuropathology, and neurobehavioral sequelae.

      After finding identical CTE-linked problems in the brains of military veterans with blast exposure and athletes who suffered head injuries, the authors investigated how blast exposure might cause brain injury using a mouse model.

      Mice are commonly used to model human diseases.

    9. CTE neuropathology in postmortem brains from military veterans with blast exposure and/or concussive injury and young athletes with repetitive concussive injury.

      In this figure, the authors look at the CTE neuropathology (neurological abnormalities) in blast-exposed military veterans and young American football athletes.

      Through postmortem analysis (still the only way to identify CTE), they found the same disease in both groups.

      From these findings, the authors were motivated to investigate whether blast injury would cause the same disease as sport-related head injuries.

    10. Control sections omitting primary antibody demonstrated no immunoreactivity.

      Immunohistochemistry (IHC) is a technique used to identify the biochemical and cellular cause of disease in a tissue.

      First, a tissue is treated with a primary antibody. This antibody binds to the target protein in the tissue.

      Next, the sample is treated with a second antibody that binds to and detects the primary antibody. This second antibody carries a colored or fluorescent tag so that it can be detected with a microscope or other tool.

      Question: Why do scientists use a second, additional antibody instead of just tagging the primary antibody and using that for analysis?

    11. monoclonal antibody Tau-46 (Fig. 1T) directed against phosphorylation-independent tau protein

      Tau-46 is a "pan-tau" stain, meaning it detects all (or many) forms of tau protein, both normal and abnormal.

      In normal control cases, nonphosphorylated tau immunostaining would be diffuse and light. In pathologic cases, phosphorylated tau immunostaining would be present in neurons and glia.