1,180 Matching Annotations
  1. Jun 2019
    1. Cocaine alone increased place preference by 223% compared with saline control

      The experiment was performed to determine the behavioral response in the 4 groups of animals.

      Do these animals have a conditioned place preference or aversion to cocaine? If the place preference to cocaine exists, which group out of 4 show the most sensitization to the CPP?

      In the conditioned place preference test, mice pretreated with cocaine alone showed a 223% increase in preference for the cocaine-paired chamber.
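
      The percentage itself is simple arithmetic. A minimal sketch, with hypothetical session times rather than the paper's data, of how a CPP score like 223% can be computed:

      ```python
      # Toy illustration with hypothetical session times (NOT the paper's data):
      # CPP is often scored as the percent change in time spent in the
      # drug-paired chamber after conditioning relative to before.

      def cpp_preference_pct(time_post: float, time_pre: float) -> float:
          """Percent change in time spent in the drug-paired chamber."""
          return (time_post - time_pre) * 100.0 / time_pre

      pre_conditioning = 300.0   # seconds in the cocaine-paired chamber, pre-test
      post_conditioning = 969.0  # seconds in the same chamber, post-test
      print(cpp_preference_pct(post_conditioning, pre_conditioning))  # → 223.0
      ```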

    2. Mice treated with nicotine for 7 days followed by cocaine showed a significant enhancement of 98% in locomotor activity compared with mice treated with cocaine alone

      Mice treated with nicotine for 7 days, followed by cocaine for 4 days, showed a 98% greater increase in locomotor activity than mice treated with cocaine alone.

    3. Mice treated only with cocaine showed a 58% increase in locomotion (sensitization) compared with controls

      Mice treated with injections of cocaine for 4 days showed a 58% increase in locomotor activity compared with control mice.

    4. Mice treated with nicotine (50 μg/ml) showed the same levels of locomotion (that is, no increase in locomotion compared to day 1) as water controls.

      There is no difference in the distance traveled between animals treated with nicotine in their drinking water and animals given plain drinking water (yellow vs. black bars, Fig. 1A).

    1. We propose that many of the ecological surprises that have confronted society over past centuries—pandemics, population collapses of species we value and eruptions of those we do not, major shifts in ecosystem states, and losses of diverse ecosystem services—were caused or facilitated by altered top-down forcing regimes associated with the loss of native apex consumers or the introduction of exotics.

      The disruption of many biological processes and natural ecosystem functions can be traced to the loss of predators shifting systems to a new state. Alternatively, introducing invasive exotic species that may outcompete native organisms, removing food sources of predators/herbivores, may cause ecosystem collapse via trophic cascade.

    2. Bottom-up forces are ubiquitous and fundamental, and they are necessary to account for the responses of ecosystems to perturbations, but they are not sufficient. Top-down forcing must be included in conceptual overviews if there is to be any real hope of understanding and managing the workings of nature

      Given the numerous instances of trophic cascades noted in the literature, and the critical impacts that top-down forcing can have as noted above, it is necessary for researchers to consider both bottom-up and top-down controls when looking at ecosystem function, change, and decline.

    1. Based on retrograde rabies virus and anterograde AAV tracing, ZI axonal projections to the excitatory neurons of the PVT appear more robust than those from other known regions of the brain involved in food intake, suggesting the ZI is not a minor component

      ZI GABA neurons and their projections to the PVT promote binge-like eating behavior more robustly than any other brain region studied so far.

    2. Bic attenuated photostimulation-evoked feeding (Fig. 2K). That Bic did not completely block photostimulation-evoked food intake could be a diffusion limitation of Bic after application, or ZI VGAT-Cre neurons may coexpress other neurotransmitters responsible for the remaining action.

      The authors found that blocking the function of the GABA receptor (GABA<sub>A</sub> receptor) in the PVT could blunt the increase in food intake mediated by stimulation of ZI terminals in the PVT. This suggests that GABA is an important neurotransmitter underlying this effect.

      The limitations of this approach include that the authors cannot control the spread of the GABA<sub>A</sub> receptor blocker or that other neurotransmitters might be involved.

    1. We found that despite all interactions being novel and primarily involving introduced species, networks were structurally complex and notably similar between scales (local versus regional) and across sites.

      The authors concluded that, overall, the complex network structure of the island's ecosystems was similar at both the local and regional scales, even though these networks were dominated by introduced plant and animal species.

    2. At the local scale, networks had low to intermediate connectance and, unlike the regional network, were not nested. Similar to the regional network, six of seven local networks were specialized and modular, presenting three or four modules (Fig. 2, fig. S3, and table S4)

      At individual sites, the authors again saw species interact with only a subset of the available partners, with specialized species often interacting with the same birds or plants that also appeared in the networks of more generalized species.

  2. May 2019
    1. Brain slice electrophysiology confirmed that optogenetic activation of PSTh glutamatergic neuron terminals in the PVT evoked strong glutamate-mediated postsynaptic excitatory currents in PVT vGlut2-GFP neurons, suggesting a functional role for PSTh glutamate neurons in the synaptic excitation of PVT glutamate neurons

      The authors confirmed that stimulation of the PSTh terminals in the PVT was able to activate PVT neurons, i.e. induce excitatory activity in the PVT cells.

      This confirms the rabies tracing result and shows that the connection between the PSTh and the PVT is functional.

    2. In the absence of available food, optogenetic activation of the VGATZI-PVT pathway evoked a significant preference for the chamber associated with laser stimulation compared with the control chamber

      The mice preferred to spend more time in the compartment where they received stimulation of the ZI-PVT pathway. This suggests that stimulation of the neurons is pleasurable for the animal.

    3. ZI GABA neurons project to multiple brain regions, including the hypothalamus and midline thalamus (fig. S6). We therefore measured the relative contribution of stimulation of ZI somata with selective stimulation of ZI axons targeting the PVT. Stimulation of ZI VGAT cell bodies or VGATZI-PVT terminals in the PVT evoked similar levels of feeding

      This suggests that even though ZI GABA neurons project to other brain regions in addition to the PVT, the PVT projection appears to be the most important mediator of increased food intake.

      This is because ZI GABA cell body stimulation and stimulation of the projections to the PVT evoked similar degrees of food intake.

    4. In our monosynaptic retrograde tracing with Cre-dependent rabies virus, although less robust than the projection from the ZI, we found a substantial projection to PVT glutamate neurons from the parasubthalamic nucleus (PSTh) (Fig. 4I and fig. S11) (27, 28).

      Analysis of the inputs to PVT vGlut2 neurons showed that these neurons receive input from the parasubthalamic nucleus, a brain region with previously described roles in appetite.

    5. Food deprivation lasting 24 hours increased inhibitory synaptic neurotransmission to PVT glutamate neurons

      In food-deprived mice, PVT glutamate neurons receive more inhibitory inputs compared to fed mice. These may come from ZI inhibitory GABA neurons, which the authors demonstrated to have increased activity upon food deprivation.

    6. confirmed that PVT glutamate neurons receive strong and direct innervation from ZI neurons

      The authors confirm their anterograde tracing findings using retrograde tracing to show that ZI GABA neurons send projections to excitatory neurons in the PVT.

    7. VGAT-Cre mice with ChIEF expression, bilateral laser stimulation (20 Hz) in the ZI increased food intake

      When ZI GABA neurons are activated by delivering blue light into the brains of the mice, the mice eat a large amount of food.

    8. ChIEF-tdTomato was selectively expressed in ZI GABA neurons

      The optogenetic tool ChIEF-tdTomato was found to be expressed only in ZI GABA neurons.

    9. Satiety feedback signals can thus attenuate ZI-induced feeding.

      Signals from the body inform the brain that sufficient food has been eaten (satiety). These include release of hormones from the gut as well as stomach distention.

      The finding that ZI-stimulated mice will not eat indefinitely suggests that satiety feedback mechanisms are still intact.

    10. Photostimulation of ZI-PVT inhibitory axons evoked gnawing, but not eating, of nonnutritional wood sticks (fig. S5, A and B); photostimulation leading to food intake eliminated subsequent evoked stick gnawing. A priori wood gnawing had no effect on photostimulation-evoked food intake (fig. S5, C and D).

      The authors conclude that the food intake seen with ZI stimulation is not because the manipulation increases gnawing behavior directed at any object, but directs behavior towards edible sources of food.

    11. After the days of photostimulation were completed, mice showed a significantly reduced food intake compared with that of controls (Fig. 3H).

      The authors measured daily food intake for 15 days during the stimulation protocol. They then continued measuring daily food intake in the absence of stimulation.

      The mice likely reduced their food consumption when the stimulation protocol ceased due to satiety signals that normally prevent overeating in the absence of the manipulation of the ZI to PVT pathway.

    12. Ablation of ZI GABA neurons decreased long-term food intake and reduced body weight gain by 53% over 8 weeks (Fig. 3, J and K).

      When the ZI GABA neurons were killed, the mice could no longer maintain their normal body weight and ate less food than control mice with their ZI GABA neurons intact.

      This suggests that these cells are required for normal food intake and body weight maintenance.

    13. Ablation of PVT vGluT2 neurons substantially increased both food intake and body weight gain for an extended period (16-week study) (fig. S10, G and H).

      The increase in food intake and body weight gain with PVT neuron ablation shows that these cells are important for body weight maintenance.

      This finding is opposite to what was found when ZI GABA neurons were ablated, suggesting that PVT neurons are downstream of ZI GABA neurons and have an opposing effect on food intake and body weight.

    14. photostimulation of ZI VGAT-ChIEF-tdTomato terminals in the PVT evoked GABA-mediated inhibitory currents in PVT vGlut2-GFP neurons (Fig. 2D).

      Stimulation of ZI GABA neuron projections (terminals) in the PVT with blue light led to inhibition of PVT excitatory neurons.

      This finding confirms the rabies tracing result and shows that the connection between ZI neurons and PVT neurons is inhibitory.

    1. The experimental proof of the colinearity of a gene and the polypeptide chain it produces may be confidently expected within the next year or so.

      Crick proposed that a gene was a linear sequence of nucleotides, where each gene encoded a single protein. However, this explanation is a bit too simplistic, especially for higher-level and multicellular organisms.

      In fact, we now know that colinearity is generally the exception, not the rule, in eukaryotic genomes.

      For more information, check out this piece in Nature Education: https://www.nature.com/scitable/topicpage/what-is-a-gene-colinearity-and-transcription-430

    2. That is, either (+ with + with +) or (- with - with -). Whereas a single + or a pair of them (+ with +) makes the gene completely inactive, a set of three, suitably chosen, has some activity.

      DNA with only one additional base generates a completely different protein than that of the original code. The same is true of DNA with two additional bases.

      However, when bases are added in triplets, the generated protein is identical to the native protein with one minor difference: an extra amino acid.

    3. The preliminary results presented so far disclose no clear difference, with respect to the code, between E. coli and mammals, and this is encouraging (10, 13).

      Since mammals and E. coli (a bacterium) are extremely different organisms, it is remarkable that they use the same genetic code. This similarity suggests the genetic code is not species-specific, but universal across all living organisms.

    4. It is possible that some triplets may code more than one amino acid—that is, they may be ambiguous.

      While most amino acids are encoded by at least two codons (with the exception of methionine and tryptophan), the reverse is not true. Each codon specifies just one amino acid or stop signal. Thus, the genetic code is unambiguous.

    5. In general, more than one triplet codes each amino acid.

      In other words, the same amino acid is coded by more than one base triplet. For example, there are six different codon combinations to encode arginine. This property is now known as degeneracy.
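
      Degeneracy can be checked directly from the standard genetic code. A small Python sketch using the NCBI standard code (translation table 1) to count how many codons encode each amino acid:

      ```python
      from collections import Counter
      from itertools import product

      # Standard genetic code (NCBI translation table 1): amino acids for the 64
      # codons enumerated with T, C, A, G varying at each position; '*' = stop.
      AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
      CODONS = ["".join(c) for c in product("TCAG", repeat=3)]
      code = dict(zip(CODONS, AAS))

      # Count how many codons encode each amino acid (its degeneracy).
      degeneracy = Counter(code.values())
      print(degeneracy["R"])                   # arginine → 6
      print(degeneracy["M"], degeneracy["W"])  # methionine, tryptophan → 1 1
      ```

      Only methionine and tryptophan are encoded by a single codon; every other amino acid has two to six synonymous codons.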

    1. One of these regions (~525 kb in size) showed the strongest differentiation in all three contrasts. The region included four genes: high mobility AT-hook 2 (HMGA2), methionine sulfoxide reductase B3 (MSRB3), LEM domain-containing protein 3 (LEMD3), and WNT inhibitory factor 1 (WIF1).

      This index scan allowed the researchers to narrow down the region that showed the strongest differences between the large, medium, and small ground and tree finches.

    2. The results suggest that the phenotypic effects of these loci are small relative to the effect of the HMGA2 locus.

      The investigators concluded that the six other candidate loci have only small effects on beak size, body size, or beak shape relative to the HMGA2 locus.

    3. This SNP showed a highly significant association with beak size, a significant association with body size, and no association with beak shape among medium ground finches (Fig. 2E).

      Investigations within the HMGA2 gene confirmed a strong association with beak size.

      This seems to confirm the fact that HMGA2 played an important role in the evolutionary shift in beak size during the drought.

    4. We have shown that the HMGA2 locus played a critical role in this character shift. The selection coefficient at the HMGA2 locus (s = 0.59 ± 0.14) is comparable in magnitude to the selection differential on the phenotype and is higher than other examples of strong selection, such as loci associated with coat color in mice (s < 0.42) (25). The main implication of our findings is that a single locus facilitates rapid diversification.

      This single HMGA2 locus seems to allow for diversity among finch beak size. However, the exact mechanism for how HMGA2 controls beak size in finches is unknown.

      Would the findings of this study provide enough justification for further studies that would attempt to better understand that function of HMGA2?

    5. Thus, we conclude that the relationship between HMGA2 and fitness was mediated entirely by the effect of this locus on beak size or associated craniofacial bones or muscles; developmental research will be necessary to reveal the underlying mechanism for the association. There is no evidence of pleiotropic effects of the gene on other, unmeasured, traits affecting fitness (table S5).

      The HMGA2 gene is strongly associated with beak size. Beak size is the main factor that allowed some medium ground and tree finches to survive after a drought.

      Thus, the researchers provided a direct link between this particular gene and the fitness of finches.

    6. revealed two major haplotype groups associated with size; 98% of small birds (body weight <16 g) clustered into one group and 82% of the large birds (body weight >17 g) clustered into the other (Fig. 1D). The split between the two haplotypes occurred before the divergence of warbler and nonwarbler finches at the base of the phylogeny (Fig. 1D), about 1 million years ago (Fig. 1C).

      Why was it important to not only construct a phylogenetic tree based on the entire genome sequences (Fig. 1C), but also construct one based on the HMGA2 region (Fig. 1D)?

      What do these two trees reveal about the differing sizes of finches?

    7. We discovered a genomic region containing the HMGA2 gene that varies systematically among Darwin’s finch species with different beak sizes.

      For the first time, researchers have been able to find a gene that controls beak size in Darwin's finches. Why is this finding such an important part of the story of these birds and how they evolved?

    1. Oʻahu’s SDN included 15 bird and 44 plant species connected by 112 distinct links (Fig. 1)

      The authors identified 44 different plant species and 15 different bird species on Oʻahu. Figure 1 depicts each bird (left column) and each plant (right column) as a rectangle; each line connecting a bird to a plant represents a seed dispersal interaction.

    1. in-class exam scores by approximately 61% and entirely eliminated the gap on the FCME

      Taking into account pre-existing differences in students' preparation for the class, values affirmation was highly effective in reducing or eliminating the gender gap.

    1. The transient nature of REPAIR-mediated edits will likely be useful for treating diseases caused by temporary changes in cell state, such as local inflammation, and could also be used to treat disease by modifying the function of proteins involved in disease-related signal transduction.

      Compared with DNA editing, RNA editing is temporary and more easily reversed. This means that RNA editing with REPAIR could be used not only to correct inherited disease-causing mutations, but also to treat conditions that arise later in life. While DNA editing could also address a mutation that arises later in life, that treatment would be permanent. RNA editing is transient, so it could, for example, temporarily modulate cell growth or inhibit inflammation in response to an infection.

      The programmability of RNA and DNA editing by simple base-pairing rules makes these approaches much easier to develop than alternative therapies such as small-molecule inhibitors targeted to specific enzymes. Finding an inhibitor that is specific enough to avoid side effects is difficult; the REPAIR system is a potential solution to this precision problem.

    2. can target either the sense or antisense strand

      Cas9 (as the targeting unit of the DNA base editor) can be directed against the sense or antisense strand. The gRNA pairs with the strand, causing a short sequence of the opposite strand to be displaced. Then, the "free" fragment is accessible for the cytidine deaminase that acts on single-stranded DNA.

      This means that DNA base editors can be used to correct mutations in either strand. By contrast, RNA editors can act only on the mRNA, which restricts them to a single strand.

      In contrast, REPAIR is not constrained by PAM, PFS, and other motifs, meaning it can act on a broader range of sequences (but can only act on transcripts).

    3. The lack of motif for ADAR editing, in contrast with previous literature, is likely due to the increased local concentration of REPAIR at the target site owing to dCas13b binding.

      Previously, it has been shown that the ADAR proteins prefer certain nucleotides at the 5' and 3' positions next to the target sequence (reference 21). However, this is not a strict preference, and some mutants are more strict than others.

      In addition, previous work has shown that when ADAR2 is site-specifically directed at a target, its local concentration is increased and less favorable reactions sometimes occur. 

    4. Deeper sequencing and novel inosine enrichment methods could further refine our understanding of REPAIR specificity in the future.

      In this work, the authors used next-generation sequencing to detect editing events. They compared the obtained sequences (reads) with the genomic sequence. If the genome contained A but the read had G at the same position, they considered that an editing event.

      However, this approach may overestimate the number of edits. First, the sequencing itself generates errors that can be mistakenly attributed to ADAR activity. Second, the gene may have many versions in a genome that differ at some positions, including A>G differences. This leads to some ambiguity in data interpretation.

      To address these problems, several approaches have been suggested to detect inosine modification directly, including chemical modification and antibody-based enrichment. Since this paper was published, newer chemical-modification techniques have been introduced.
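
      The basic read-versus-reference comparison described in this annotation can be sketched as follows. This is a naive toy example, not the authors' actual pipeline, which operates on aligned sequencing data and filters out sequencing errors and known A/G variants:

      ```python
      # Naive sketch (NOT the authors' pipeline): call candidate A-to-I edits at
      # positions where an aligned read shows G over a reference A. Inosine is
      # read as G by sequencers, so A-in-genome / G-in-read suggests editing.

      def candidate_edits(reference: str, read: str) -> list:
          """Positions where the reference has A but the aligned read has G."""
          return [i for i, (ref, obs) in enumerate(zip(reference, read))
                  if ref == "A" and obs == "G"]

      ref_seq = "GATTACAAT"
      read    = "GGTTACGAT"  # hypothetical aligned read
      print(candidate_edits(ref_seq, read))  # → [1, 6]
      ```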

    5. it is unlikely to do so in this case because Cas13b does not bind DNA efficiently and because REPAIR is cytoplasmically localized.

      ADAR activity on DNA-RNA heteroduplexes might be a problem for precise RNA editing by REPAIR. Fortunately, Cas13b predominantly binds RNA and the REPAIR system is sequestered from the DNA in the cytoplasm as a result of the nuclear export sequence (NES) fused to Cas13b.

    6. The overlap in off-targets between the targeting and nontargeting conditions and between REPAIRv1 and BoxB conditions suggests that ADAR2DD drives off-targets independent of dCas13 targeting

      In summary, the authors found two pieces of evidence supporting the hypothesis that the ADAR2 deaminase domain is the main source of off-targets:

      First, they found substantial overlap between the off-target sites of REPAIRv1 with targeting guides and with non-targeting guides.

      Second, they saw overlap in the off-target effects between REPAIR and ADAR2(DD) in a completely different targeting system. This offers strong support that ADAR2 drives off-target effects regardless of the targeting module.

    7. Given the high number of overlapping off-targets between the targeting and nontargeting guide conditions, we reasoned that the off-targets may arise from ADARDD.

      The authors found a lot of overlap between the off-target effects they saw in the targeting and non-targeting conditions, meaning that the REPAIRv1 complex was modifying adenosines even when the gRNA did not guide it. This suggests that ADAR is increasing off-target editing effects independent of dCas13b.
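
      The overlap argument can be illustrated with toy data (invented site IDs, not the paper's): if the off-target sites called under targeting and non-targeting conditions intersect heavily, the deaminase rather than the guide is implicated.

      ```python
      # Toy illustration with invented site IDs (NOT the paper's data): heavy
      # overlap between off-target sites called with targeting vs. non-targeting
      # guides implicates the deaminase domain rather than dCas13b targeting.

      targeting_offtargets = {"site1", "site2", "site3", "site4"}
      nontargeting_offtargets = {"site2", "site3", "site4", "site5"}

      shared = targeting_offtargets & nontargeting_offtargets
      jaccard = len(shared) / len(targeting_offtargets | nontargeting_offtargets)
      print(sorted(shared))     # → ['site2', 'site3', 'site4']
      print(round(jaccard, 2))  # → 0.6
      ```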

    8. These results indicate that Cas13a and Cas13b display similar sequence constraints and sensitivities against mismatches.

      The researchers found that Cas13a and Cas13b had the same sequence requirements for successful target recognition and cleavage. Neither nuclease was affected by changes to the PFS sequence, but both of them functioned less well when there were changes in the middle of the target sequence.

      Another consequence of these findings is that, because changes to PFS sequences (which are at the edge of target sequences) don't affect Cas13 activity, off-target mRNA molecules that differ from the target only in PFS sequences might be mistakenly destroyed by Cas13.

      Fortunately, because of its length (more than 30 nt), the spacer is highly specific and binds unique sequences.

    9. found that PspCas13b had consistently increased levels of knockdown relative to LwaCas13a (average of 92.3% for PspCas13b versus 40.1% knockdown for LwaCas13a)

      As shown in Figures 1C and 1D, the blue line corresponding to the PspCas13b nuclease drops well below the red line corresponding to the LwaCas13a nuclease, indicating stronger knockdown.

      The Gluc luciferase signal after PspCas13b knockdown was around 10% of control for all gRNAs along the transcript, corresponding to an average knockdown efficiency of roughly 90%. The Gluc signal with LwaCas13a was less consistent, but dropped by about 40% on average.
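
      The arithmetic behind these knockdown figures is simply one minus the residual normalized reporter signal. A minimal sketch with hypothetical normalized values:

      ```python
      # Simple arithmetic with hypothetical normalized signals: percent knockdown
      # is one minus the residual reporter signal relative to control.

      def knockdown_pct(signal: float, control: float = 1.0) -> float:
          """Percent knockdown from a normalized luciferase signal."""
          return (1.0 - signal / control) * 100.0

      print(round(knockdown_pct(0.10)))  # residual ~10% signal → 90 (% knockdown)
      print(round(knockdown_pct(0.60)))  # residual ~60% signal → 40 (% knockdown)
      ```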

    10. We selected the top five Cas13b orthologs and the top two Cas13a orthologs for further engineering

      Several Cas13a and Cas13b orthologs decreased the Gluc luciferase signal to as little as 10-30% of control levels. Some Cas13b nucleases demonstrated even greater effectiveness than the previously characterized Cas13a from Leptotrichia wadei.

    1. which enables maximum regeneration with minimal temperature increase

      This means that the water desorption process is highly efficient, not requiring a large amount of energy from heat.

    2. By considering both the adsorption and desorption dynamics, a porosity of 0.7 was predicted to yield the largest quantity of water.

      The results of the simulations show that if the porosity is too low, the adsorption process becomes slower. In contrast, if the porosity is too high, more MOF is required to harvest the same amount of water, and the process again slows. The simulations showed that the ideal porosity was 0.7.

    3. We also report a device based on this MOF that can harvest and deliver water (2.8 liters of water per kilogram of MOF per day at 20% RH) under a nonconcentrated solar flux less than 1 sun (1 kW m–2), requiring no additional power input for producing water at ambient temperature outdoors.

      In this study, the authors demonstrate that a MOF-based device can harvest water from the atmosphere at humidity and temperature conditions similar to a desert, without requiring any more energy than natural sunlight. Since this paper was published, the authors have published another paper with the results of a device test in the Arizona desert. In the "Related Content" tab, see their 2018 Nature Communications paper.

  3. Apr 2019
    1. Our experimental studies confirm that the stable superlubricity regime occurs over a wide range of test conditions; when the load was changed from 0.5 to 3 N, velocity was varied from 0.6 to 25 cm/s, temperature increased from 20°C to 50°C (fig. S15), and the substrate was changed to nickel or bare silicon (fig. S16).

      Superlubricity at a sliding interface depends on the local load and sliding velocity, and thermal effects dominate frictional energy dissipation during sliding. The authors tested a wide range of experimental conditions and showed that their system exhibits stable superlubricity under all of them.

    2. The tribological evolution of a single graphene patch at the nanoscale resembles that of a single asperity contact, whereas the mesoscopic behavior resembles a multiple asperity contact. The friction mechanism at the mesoscale for an ensemble of graphene patches is not different from nanoscale (single patch). The initial tribological state of the patches, as well as the configuration of the patches versus nanodiamonds, dictates the dynamics of scroll formation, which in turn affects the dynamical evolution of COF for the mesoscopic system

      The mechanism of friction reduction at the mesoscale is found to be the same as at the nanoscale. As an ensemble of graphene patches in different initial configurations slides over the nanodiamonds, nanoscrolls form at various orientations; the resulting reduction in contact area leads to ultralow-friction sliding at the larger scale.

    1. changes in sea surface temperature may remain relatively small in this region

      Has this hypothesis been proven true since this paper was published in 2000?

      Use the National Oceanic and Atmospheric Administration's (NOAA) website to investigate changes in ocean temperatures since this study was published: https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/

    2. changes we observe in global ocean heat content may be related to the hemispheric and/or global modal variability of the atmosphere

      The Earth is an interconnected system where conditions in the atmosphere also affect what happens in the oceans. While the authors cannot say for sure that specific climate oscillations are driving the changes in ocean heat content, it is possible to say that changing climates are likely impacting ocean heat content.

    3. increase in ocean heat content preceded the observed warming of sea surface temperature

      The heat content of the atmosphere contributes to the warming of the ocean, so it would seem logical for the surface ocean to warm more quickly than the deep ocean (because the deep ocean is farther from the atmosphere).

      However, data suggests that the heat content of the deep ocean is increasing more quickly than the surface ocean. This further suggests that deep ocean circulation is transporting warmth from the surface to greater depths.

    4. our results support the findings of Hansenet al.

      Levitus et al. found a ∼2 × 10<sup>23</sup> joules warming between the mid-1950s and mid-1990s, which corresponds to a warming rate of 0.3 watts per meter squared.

      Thus, the ocean appears to be absorbing nearly half of the total heat that results from the warming of the Earth.

    5. warming in the western midlatitudes is due to nearly equal contributions by the 0- to 500- and 500- to 1000-m layers

      The western midlatitudes in this case represent the area between 30°-45°N and 80°-40°W. In this area, the surface 300 m of the ocean warmed by 1-2°C from the 1970s to the 1990s (Fig. 3A). The increase is even more extreme in the deeper ocean, which warmed by as much as 8°C (Fig. 3B).

      After examining the data in 500 m increments, Levitus and coauthors determined that most of this warming is occurring in the top 1000 m.

    6. There is a consistent warming signal in each ocean basin, although the signals are not monotonic.

      In each ocean basin, there have been periods when the heat content has increased and periods when the heat has decreased. However, the overall trend shown by the data is that every ocean basin on the planet has been warming over the last 50 years.

      Most of that warming has occurred after the 1950s and 1960s.

    7. Only the Atlantic exhibits a substantial contribution to these basin integrals below 1000-m depth.

      It is only in the North Atlantic that the ocean is warming at deeper depths. In the Pacific and Indian Oceans, warming appears to occur only in the upper 1000 m.

    8. The cooling of the Labrador Sea is composed of nearly equal contributions of about 1.0 W m−2 from each 500-m-thick ocean layer down to 2500-m depth.

      The Labrador Sea is the region between Greenland and Canada around 60°N, 50°W that shows some of the most extreme cooling throughout the water column (Fig. 3B).

      Levitus et al. found that each 500-m-thick layer down to 2500 m contributes about 1.0 W/m<sup>2</sup> to the cooling of the Labrador Sea.

    9. maximum heat storage for this basin occurs at depths exceeding 300 m

      The North Atlantic ocean absorbs a lot of heat, but where is that heat being stored?

      The heat that the ocean absorbs does not remain in the surface ocean, but is instead carried to deeper depths by deep ocean convection currents. From the 1970s to the 1990s, the deep water of the North Atlantic has gained more heat than the surface ocean has over the same period.

    10. demonstrates the opposite picture.

      The deep ocean of the North Atlantic cooled between the period 1970-1974 and the period 1988-1992, two decades later. While higher latitudes show this cooling most strongly, the decline in deep water temperatures appears to have occurred through much of the North Atlantic.

    11. difference field for the two earlier periods shows that much of the North Atlantic was warming between these periods

      Overall, the deep water of the North Atlantic had higher temperatures during the period 1970-1974 than during a period two decades earlier, from 1955-1959.

    12. decadal variability of the upper ocean heat content in each basin is a significant percentage of the range of the annual cycle for each basin

      The change in heat content from one decade to the next is similar in scale to the shift in heat content that happens over the course of a year due to the change in seasons.

      That is, the range of temperature from the highs of summer to lows in winter is similar to the extent of change from one decade to the next.

    13. There is relatively little contribution to the climatological range of heat content from depths below 300 m.

      Nearly all of the seasonal (climatological) swing in heat content occurs in the upper 300 m; below that depth, the ocean is largely insulated from the atmosphere's annual cycle of heating and cooling.

    14. the two basins positively correlated

      While there are periods of increasing and decreasing heat content in the oceans, with swings happening roughly every two decades, the Pacific Ocean has seen an overall increase in temperature over time.

    15. temperature anomaly of 0.37°C

      In 1998, the North Atlantic Ocean was 0.37°C (0.67°F) warmer than the 50-year average.

      Has this trend continued since Levitus et al. published this study in 2000?

      You can find the answer by examining ocean temperature data available from NASA: https://climate.nasa.gov/vital-signs/global-temperature/

    16. In each basin before the mid-1970s, temperatures were nearly all relatively cool, whereas after the mid-1970s these oceans are in a warm state.

      Levitus et al. found that, as time progresses, ocean temperatures increase. Thus, time and ocean temperature are positively correlated.

      Much of this ocean warming has occurred since the mid-1970s.

    1. Some early-life events that are generally thought to affect adult microbiota composition were not associated with microbiota composition variation in our study, including mode of birth [cesarean section (N = 36) or vaginal delivery (N = 1036)], place of birth [home (N = 207) or hospital (N = 899); increased diversity in home-born individuals, FDR>5% when controlling for age], and infant nutrition [breastfed (N = 537) or not breastfed (N = 359)]

      Their study suggests that early-life events, such as mode of birth, place of birth, and infant nutrition, are not significantly correlated with adult microbiome composition.

    2. the second bicluster, consisting of seven genera, including Bacteroides and Parabacteroides, comprised individuals with reduced microbiome diversity. Characterization of these individuals revealed a preference for white, low-fiber bread [bread being the major source of carbohydrates in an average Belgian diet (34)] and higher prevalence of recent amoxicillin treatment.

      The second subset of individuals was characterized by lower microbiome diversity and defined by a bicluster of seven genera (compared with 15 genera in the first subset), including Bacteroides and Parabacteroides. These individuals showed a preference for white, low-fiber bread and were more likely to have recently been treated with the antibiotic amoxicillin.

    3. Moreover, correlations between RBC counts and Faecalibacterium abundances are in line with the known oxygen requirements of this genus

      They found that red blood cell counts were also associated with the relative abundance of Faecalibacterium. As oxygen transport is the most important function of red blood cells, this fits the oxygen requirements of this genus.

    4. 308 samples collected in Papua New Guinea (15), Peru (16), and Tanzania (17) reduced the size of the human core microbiota to 14 genera. Notably, Alistipes, Clostridium IV, Parabacteroides, and all Actinobacteria were excluded from the global core composition

      When data was collected from samples outside of Europe and the United States, the core microbiota (which is defined as a genus that is common to at least 95% of samples) decreased to 14 genera.
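      The "core" criterion can be illustrated with a small sketch (the sample data below are hypothetical, not from the study): a genus counts as core when it appears in at least 95% of samples.

```python
# Toy sketch of the "core microbiota" criterion: a genus is "core"
# when it is detected in at least 95% of samples. Data are made up.
from collections import Counter

samples = [
    {"Bacteroides", "Faecalibacterium", "Alistipes"},
    {"Bacteroides", "Faecalibacterium"},
    {"Bacteroides", "Faecalibacterium", "Prevotella"},
]

counts = Counter(genus for sample in samples for genus in sample)
core = {g for g, c in counts.items() if c / len(samples) >= 0.95}
print(sorted(core))  # ['Bacteroides', 'Faecalibacterium']
```

      With only three toy samples, the 95% threshold reduces to "present in every sample"; with thousands of samples, a genus could be missing from a few percent of them and still count as core.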

    5. core microbiota (i.e., the genera shared by 95% of samples) composed of 17 genera with a median core abundance (MA) of 72.20%

      They found that the core microbiota—defined by being shared by 95% of samples from the United Kingdom, United States, FGFP, and LLDeep data sets—was 17 genera. The median abundance of these genera was 72.20%.

    6. Combined, these data sets comprised a total richness of 664 genera (fig. S2A). Extrapolation estimated total western genus richness at 784 ± 40 (fig. S2B), suggesting that total western richness is still undersampled. Observing total richness would require sampling an estimated additional 40,739 individuals

      The overall microbiota richness of their samples accounted for 664 genera, but does not cover all 784 genera expected for the entire Western population. They calculate that in order to observe all 784 genera, they would need to sample over 40,000 more individuals.

    7. Sixty-nine clinical and questionnaire-based covariates were found associated to microbiota compositional variation with a 92% replication rate.

      The study found 69 covariates associated with microbiome variation, drawn from clinical data (measurements taken by doctors) and questionnaire data (lifestyle information provided by the participants). For 92% of the covariates that could be tested in an independent cohort (the LLDeep study), the association with microbiome composition was replicated.

    1. having about equal amounts of uracil and cytosine in (presumably) random order, will increase the incorporation of the amino acids phenylalanine, serine,  leucine, and proline, and possibly threonine.

      As seen in the figure below, random triplet combinations of U and C are now known to encode for the amino acid residues Crick suspected.

  4. Mar 2019
    1. The key features of success of the new lineage are reproductive isolation based on learned song and morphology, transgressive segregation producing novel phenotypes, and the availability of underexploited food resources.

      Here, the authors list the main reasons that this hybrid lineage survived and reproduced rather than dying out or mating with members of the parental species.

    2. the founder male (5110) was not a G. fortis x G. scandens hybrid as previously hypothesized (12), but a G. conirostris

      This conclusion of the authors is a reclassification of the founder male into a different species than previously thought, based on the genetic analysis.

    3. larger mean bill size than five adults that died

      These data further support that natural selection is acting on bill size, as birds with smaller bills did not survive.

    1. indeed, as expected from our light-scattering measurements and tissue geometry, we found that at least 0.7 mm3 of STN is recruited by light stimulation, which closely matched the actual volume of the STN

      Measuring the area of c-fos activated neurons upon light stimulation provided the authors with support for using an optrode to stimulate the STN since the area activated closely matched the true volume of the STN.

    1. Fig. 4 Time series of 5-year running composites of heat content (1022 J) in the upper 3000 m for each major ocean basin

      This figure shows how ocean temperatures have changed in each ocean basin from 0 to 3000m in depth. Due to gaps in temperature data (especially in the 1950s), Levitus et al. used 5-year averages (composites) instead of looking at annual data (as in Figure 1).

      The authors found that there is an overall increase in temperatures within each ocean basin. This warming is not consistent, but instead has periods of lower temperatures and periods of higher temperatures.

      How does this figure differ from Figure 1 that only looks at 0 to 300m depths?

    2. Atlantic and Pacific Oceans have undergone a net warming since the 1950s and the Indian Ocean has warmed since the mid-1960s

      All of the world's oceans have been warming since at least the mid-1960s, according to measurements from ships and research vessels.

      From the data available, it appears the Atlantic and Pacific started warming before the Indian Ocean, but this result could be due to the fewer number of data points collected in the Indian Ocean.

    3. global volume mean temperature increase for the 0- to 300-meter layer was 0.31°C

      While the ocean as a whole warmed 0.06°C between the 1950s and 1990s, Levitus et al. found that the surface ocean (the top 300 meters, or 984 feet) warmed by 0.31°C.

    4. volume mean warming of 0.06°C

      All of the world's oceans combined (the Atlantic, the Pacific, and the Indian) have warmed by 0.06°C overall between 1948 and 1998.

      But have the oceans all warmed equally, or have some basins or depths warmed more than others?

    1. Depolarization with veratridine after 2 weeks in culture also significantly increased TH, suggesting that mature as well as developing locus ceruleus exhibits plastic responses to depolarization

      This experiment was focused on observing the changes in tyrosine hydroxylase (TH) activity in older cultures. Similar to developing cultures, the old cultures were also exposed to veratridine. 

    1. Focusing on the prevalent concern of BMI increase and suboptimal health, we assessed the sample size needed to evaluate microbiota compositional changes associated to obesity. To do so, we calculated the independent effect sizes of obesity status, gender, age, and BSS on microbiota variation (table S16). This allowed us to estimate that 865 lean (BMI <25) and 865 obese (BMI ≥30) volunteers would be necessary to study microbiota compositional shifts with P < 5% significance level and a power of 80%. When taking into account gender, age, and BSS score as covariates, the estimated sample size was reduced to 535

      Given recent concern over the association between rising BMI and poor health, the authors wanted to know how large a sample would be needed to detect microbiota changes associated with obesity. They conducted a power analysis, which showed that 865 lean and 865 obese volunteers would be required to detect compositional shifts at a 5% significance level with 80% power (that is, an 80% probability of detecting a real effect if one is present). When gender, age, and BSS score were included as covariates, the required sample size dropped to 535.
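      The kind of sample-size calculation described here can be sketched with the standard normal-approximation formula for comparing two groups. The effect size, alpha, and power below are illustrative inputs, not the authors' actual values:

```python
# Sketch of a two-group sample-size calculation (normal approximation):
# n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2 for effect size d.
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)          # quantile for the desired power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_per_group(0.2))  # small effect (Cohen's d = 0.2) -> 393 per group
```

      Including informative covariates reduces unexplained variance and therefore the required sample size, which is consistent with the authors' estimate dropping from 865 to 535.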

    2. Residence type [ranging from countryside (N = 77) over rural village (N = 500), small town (N = 272), suburb (N = 137), to city (N = 102)] during early childhood (up to 5 years old), one of the 69 FGFP microbiome covariates, was linked to adult microbial community composition, with a positive correlation between evenness and residence in more industrialized areas, though not statistically significant (FDR >5%) when correcting for age, gender, and BMI

      They found an association between the type of residence participants lived in as a child (e.g., rural or urban) and their adult microbiota composition. Individuals who grew up in more industrialized areas showed a more even microbial community than those who grew up in rural areas; however, this result was not statistically significant after correcting for age, gender, and BMI.

    3. Interindividual variation in microbiota composition mainly resulted from changes in relative abundance of core taxa

      Microbiome variation between individuals was largely a result of changes in relative abundance of the core taxa.

    4. these results by no means imply that early-life events do not affect microbiota assembly during infancy, nor do they question previous associations with disease or allergy (36, 37); our analyses only indicated that such events were not significantly associated with microbiome composition at adult age in the FGFP cohort

      The authors acknowledge that these early-life events may affect microbiota development during infancy, but did not observe an effect of these events on the microbiota composition at adult age in their cohort.

    5. The cluster was predominantly composed of women, individuals with a lower weight, and participants with a longer transit time, as reflected both by stool consistency and time since previous relief. Both microbiota richness and evenness were elevated in this cluster.

      When clustering the data based on taxonomic characteristics, one subset was predominantly female, having lower weight and low Bristol stool scale score and a longer time since last defecation. These individuals showed more equal and higher quantities of different microbial taxa.

    6. some of the medical conditions targeted by fecal microbiota research have much smaller microbiome effect sizes than commonly assumed. However, some of the covariates that we identified (such as BSS and medication) are currently largely ignored and should be taken into account in future clinical studies

      They conclude that some medical conditions have a much smaller effect on the gut microbiota than previously assumed. However, other covariates, such as stool consistency and medication use, are largely ignored in these types of clinical studies and should be accounted for in the future.

    7. We could detect a 9% difference between taxon proportions with 400 samples per group at a power above 95% and a 5% difference with 500 samples per group at a power of 80%

      The authors showed that with 400 samples per group (800 in total), they could detect a 9% difference in taxon proportions at a power above 95%, and with 500 samples per group (1000 in total), a 5% difference at a power of 80%. The latter means an 80% probability of detecting a true 5% difference in taxon proportions between groups.

    8. intake of several of these substances was associated with community composition variation (Fig. 5A and table S15). The only drugs significantly associated with the abundance of specific genera in phenotype-matched case-control analyses were β-lactam antibiotics (FDR <5%).

      While several medications were associated with variation in overall microbiota composition, only one drug class, the beta-lactam antibiotics, was significantly associated with the abundance of specific genera when matched cases and controls were compared.

    9. The presence of Fusobacterium could not be linked to any of the nonredundant covariates identified in this study, which could indicate the specificity of its association with colorectal cancer

      The authors could not find any correlation between the presence of Fusobacterium and the set of 18 nonredundant covariates, suggesting that its presence is specifically associated with colorectal cancer.

    10. was also negatively associated with insulin resistance risk factors such as BMI and blood triglyceride concentrations

      They found that the abundance of Akkermansia was negatively associated with insulin resistance risk factors. This means that samples with higher amounts of Akkermansia belonged to subjects with lower risk of insulin resistance based on BMI and blood triglyceride values. Insulin resistance is a condition in which your cells cannot use insulin (a hormone) effectively, preventing them from absorbing glucose. This condition may result in diabetes, which is characterized by a buildup of glucose in the blood.

    11. features losing most explanatory power were time since previous relief (also indicative of passage rates), blood uric acid and hemoglobin levels, BMI, gender, and frequency of beer consumption

      Stool consistency correlated with other factors like the time since previous defecation, blood uric acid (a chemical produced by the breakdown of food) and hemoglobin (a protein responsible for transporting oxygen in blood) levels, BMI, gender, and frequency of beer consumption.

    12. 12 out of 20 of the FGFP 99% core genera covary with BSS scores, with overall core abundance increasing in looser stools

      In the FGFP cohort, the researchers observed that the overall abundance of core genera was higher in looser stools.

    13.  Faecalibacterium numbers were, as discussed, dependent on RBC counts, but our analysis did find a decreased abundance in ulcerative colitis patients

      As discussed earlier, Faecalibacterium abundance was associated with red blood cell counts. Even after accounting for this, the analysis still found a reduced abundance of Faecalibacterium in patients with ulcerative colitis.

    14. set of 18 variables (Fig. 3B and table S10) with a cumulative (nonredundant) effect size on community variation of 7.63%. Here, we identified stool consistency as the top single, nonredundant microbiome covariate in the FGFP metadata

      When the 69 significant covariates were modeled together to determine which combination best explained microbiota variation, a nonredundant set of 18 variables emerged, together accounting for 7.63% of the variation. Stool consistency on its own accounted for the largest share of that variation.

    15. Independently of gender, genus richness correlated positively with age, whereas total core abundance decreased

      Researchers found that as both males and females age, the number of genera increases. However, age was negatively correlated with abundance of core genera.

  5. Feb 2019
    1. Although manufacturers may engage in advertising and lobbying to influence consumer preferences and government regulations, a critical collective problem consists of deciding whether governments should regulate the moral algorithms that manufacturers offer to consumers

      Manufacturers may try to sway public opinion and laws regulating AVs, but that doesn't address the bigger issue: Should governments regulate the "moral" algorithms in autonomous vehicles—the algorithms that decide what to do in difficult situations like the ones in the study?

      The authors conclude that moral algorithms may be necessary, but that they create a social dilemma.

    2. As usual, the perceived morality of the sacrifice was high and about the same whether the sacrifice was performed by a human or by an algorithm (median = 70).

      Participants thought that a driver's sacrificing themselves to save other people was highly moral, whether the driver was a human or a computer algorithm.

    3. The algorithm that would kill its passenger to save 10 presented a hybrid profile.

      Unlike the other two algorithms in Figure 3B, which consistently received many points or few points in all of the categories listed, the algorithm that sacrificed its own passenger to save 10 others received high points in some categories and low points in others.

    4. they imagined future AVs as being less utilitarian than they should be.

      The ratings for "What will AVs do?" (whether people think AVs will actually be programmed to sacrifice one passenger over many) are lower than "What should AVs do?" (whether people think they should be programmed to do so.)

      Thus, people are less confident that AVs will actually be programmed to sacrifice their sole passenger to minimize casualties, even though people think that is the most moral approach.

    5. They overwhelmingly expressed a moral preference for utilitarian AVs programmed to minimize the number of casualties (median = 85)

      As shown in the bottom graph of Fig. 2A ("What should AVs do?"), many more people thought it was morally superior for AVs to minimize the number of casualties: There is a high number of responses on the right end of the graph.

      If you sort the responses that everyone gave from lowest to highest and take the middle, or median response, it is also very high, at 85. This indicates that people thought saving as many people as possible was the more moral choice.
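      A quick sketch of that "sort and take the middle" procedure (the ratings below are invented, not the study's data):

```python
# The median is the middle value of the sorted responses; it is the
# statistic behind figures like "median = 85". Ratings are made up.
from statistics import median

ratings = [100, 10, 85, 90, 60]  # hypothetical 0-100 morality ratings
print(median(ratings))  # sorted: 10, 60, 85, 90, 100 -> middle is 85
```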

    6. In study one (n = 182 participants), 76% of participants thought that it would be more moral for AVs to sacrifice one passenger rather than kill 10 pedestrians [with a 95% confidence interval (CI) of 69 to 82].

      The authors determined the percentage of people who thought it would be more moral to sacrifice one passenger for the good of the whole, or vice versa. In this study, 76% of the 182 participants said that one passenger should be sacrificed to save 10 pedestrians.

      A 95% confidence interval of 69 to 82 means that if you repeated this survey many times with new samples of U.S. citizens (only U.S. citizens participated in the study), about 95% of the intervals computed this way would contain the true population percentage.

      Thus, a confidence interval gives you a range of values that you can be fairly sure contains the "true" one. A narrower confidence interval means a more precise estimate.
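      The reported interval can be reproduced (approximately) with the usual normal-approximation formula for a proportion; this is a sketch, not necessarily the exact method the authors used:

```python
# 95% CI for a proportion: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n).
# Study one: 76% of 182 participants favored sacrificing the passenger.
import math

n, p_hat, z = 182, 0.76, 1.96
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - half_width, p_hat + half_width
print(f"{100 * lo:.0f} to {100 * hi:.0f}")  # close to the reported 69 to 82
```

      The small discrepancy at the lower bound (70 here versus the reported 69) is expected: the authors may have used a slightly different interval method, such as the Wilson score interval.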

    7. our results suggest that such regulation could substantially delay the adoption of AVs

      Based on the results in this paper, if governments regulate the "morality" of AVs, people would be less likely to buy them, even if they approve of them.

    8. Finally, participants were much less likely to consider purchasing an AV with such regulation than without (P < 0.001). The median expressed likelihood of purchasing an unregulated AV was 59, compared with 21 for purchasing a regulated AV.

      Overall, people did not approve of government regulation of utilitarian algorithms, and were much less likely to consider buying an AV if they were regulated by the government.

    9. This is a huge gap from a statistical perspective, but it must be understood as reflecting the state of public sentiment at the very beginning of a new public issue and is thus not guaranteed to persist

      The authors note that there is a large gap in their confidence interval, which indicates that there is not public consensus. They point out that this is because autonomous vehicles are a relatively new issue, so different people have thought about it more or less.

      As people get used to the idea and think about it more, the gap is expected to narrow.

    10. When we inquired whether participants would agree to see such moral sacrifices legally enforced, their agreement was higher for algorithms than for human drivers (P < 0.002), but the average agreement still remained below the midpoint of the 0 to 100 scale in each scenario.

      The authors asked the participants if there should be a law requiring self-driving cars and human drivers to sacrifice themselves to minimize casualties.

      Participants did not think that this type of moral situation should be regulated in general, as indicated by their lower approval ratings. However, participants were more likely to think that regulations on algorithms should be enforced, as opposed to human drivers.

    11. without actually wanting to buy one for themselves

      Again, the researchers found that participants thought self-sacrificing AVs are good for other people but would not want to buy one for themselves.

    12. Like the high-valued algorithm, it received high marks for morality (median budget share = 50) and was considered a good algorithm for other people to have (median budget share = 50). But in terms of purchase intention, it received significantly fewer points than the high-valued algorithm (P < 0.001) and was, in fact, closer to the low-valued algorithms (median budget share = 33).

      The high-valued algorithm is the one that sacrificed one pedestrian to save 10 others, which received many points for each of the three categories listed.

      The results showed that participants thought an algorithm that sacrificed one person to save 10 was more moral. The algorithm that sacrificed a passenger to save 10 pedestrians was slightly less preferred than the algorithm that sacrificed a pedestrian to save 10 other pedestrians.

      Participants also thought that other people should have a car programmed to sacrifice its passenger for the greater good, as shown by the high number of points in that category.

      However, like the "low-valued" algorithm that sacrificed one pedestrian to save one other pedestrian, participants were not very willing to buy a car that was programmed to sacrifice its own passenger, as shown by the lower number of points given to this category.

    13. Although the reported likelihood of buying an AV was low even for the self-protective option (median = 50), respondents indicated a significantly lower likelihood (P < 0.001) of buying the AV when they imagined the situation in which they and their family member would be sacrificed for the greater good (median = 19). In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves.

      Even though participants reported that self-driving cars should sacrifice passengers for the greater good, they reported that they were not very likely to buy such a car. Further, they were much less likely to buy an AV that would sacrifice its passengers if it included family members.

      Taken together with previous results, this shows that even though participants thought that the most moral choice was to sacrifice the passengers for the greater good, when thinking about their own lives, the participants still preferred a car that would not do so.

    14. Imagining that a family member was in the AV negatively affected the morality of the sacrifice, as compared with imagining oneself alone in the AV (P = 0.003). But even in that strongly aversive situation, the morality of the sacrifice was still rated above the midpoint of the scale, with a 95% CI of 54 to 66

      Participants thought that having a family member in the car made the decision of sacrificing the car's passengers less moral.

      Even so, participants thought that sacrificing the car's passengers, including a family member, for the greater good was still overall more moral.

    15. robust to treatments in which they had to imagine themselves and another person, particularly a family member, in the AV

      Even in scenarios when a family member was in the car with them, participants thought that sacrificing the passengers in a self-driving car for the greater good was a more moral decision.

    16. their moral approval increased with the number of lives that could be saved (P < 0.001)

      Participants thought that the more people you could save, the more moral the choice was.

      The very low P value (P < 0.001) means that, if there were really no effect, a result this strong would occur by chance less than 1 time in 1000. This provides strong evidence supporting the authors' findings.

    17. However, participants were less certain that AVs would be programmed in a utilitarian manner (67% thought so, with a median rating of 70).

      When the participants in Study 1 were asked whether AVs would actually be programmed to save as many people as possible and sacrifice one passenger for the greater good, they had a more varied response.

      67% percent of the 182 participants thought that AVs would actually be programmed this way.

      This viewpoint is also reflected in the participants' ratings of how moral each choice was. The upper graph of Fig. 2A ("What will AVs do?") shows what people thought AVs would do, and is more evenly distributed than the graph of what AVs should do.

      The median or midpoint value of the participants' responses for the upper graph was 70, which is lower than the median of 85 given for the bottom graph. This suggests that people were less sure that AVs would actually be programmed to minimize casualties, even though most people thought that this is how AVs should be programmed.

    18. Overall, participants strongly agreed that it would be more moral for AVs to sacrifice their own passengers when this sacrifice would save a greater number of lives overall.

      The authors found that, overall, participants thought it was better for a car to sacrifice its own passenger to save a group of pedestrians (because this sacrifice would save a greater number of lives overall).

    19. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

      People who took part in the study do not want the government to regulate the morality of self-driving cars, particularly if they will sacrifice their passengers for the greater good. They are also less likely to buy a car subject to those regulations.

      This may cause more deaths in the long run. Even though AVs are a safer technology, people would not want to buy these vehicles, which would inhibit the widespread adoption of AVs. If fewer people adopt AVs, then the technology (and increases in safety) will improve more slowly.

    1. It appears that the number of nonsense triplets is rather low, since we only occasionally come across them.

      Crick has evidence from his frameshift experiments that some triplets do not code for any amino acids, but these triplets are rare.

    2. Detailed examination of these results shows that they' are exactly what we should expect if the message were read in triplets, starting from one end.

      By adding and subtracting bases from the genetic code, certain patterns arose. These patterns were consistent with three bases coding for one amino acid.

    3. If, for example, all the codons are triplets, then in addition to the correct reading of the message there are two incorrect readings which we shall obtain if we do not start the grouping into sets of three at the right place.

      For example, suppose an mRNA sequence contains ...CCGGAUC...

      Then, depending on where the grouping into triplets begins, the first codon read might be (1) CCG, (2) CGG, or (3) GGA. Only one of these three reading frames is correct; the other two are incorrect.
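      The three possible groupings can be enumerated mechanically. A toy sketch using the sequence from the example above:

```python
# Enumerate the three reading frames of a short RNA sequence: each frame
# starts one base later and groups the remaining bases into triplets.
seq = "CCGGAUC"
frames = [
    [seq[i:i + 3] for i in range(start, len(seq) - 2, 3)]
    for start in range(3)
]
print(frames[0])  # ['CCG', 'GAU']
print(frames[1])  # ['CGG', 'AUC']
print(frames[2])  # ['GGA']
```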

    1. evolutionary importance of rare and chance events

      The authors conclude that chance events were important in this speciation event: these include the founder male's unusual song, unusual migration, his immature age, and the selection event mentioned in the next sentence.

  6. Jan 2019
    1. These data suggest a causal relationship between sexual experience, NPF levels, and ethanol preference.

      Overall, the researchers showed that when they artificially decreased NPF activity, flies preferred alcohol more, and when they artificially increased NPF activity, flies preferred alcohol less. Thus, they were able to establish a causal relationship between NPF and alcohol. Since the amount of NPF in a fly's brain is dependent on sexual history, this would explain their initial finding that sexually deprived flies prefer alcohol more than mated flies do.

    2. Ethanol preference was markedly lower in the rejected, then mated subgroup (Fig. 1E) compared to the subgroup that had only experienced rejection.

      Consistent with earlier results, the researchers found that the rejected subgroup maintained their preference for alcohol. On the other hand, the initially rejected but then mated subgroup demonstrated no alcohol preference. From this, the researchers concluded that it was the experience of sexual deprivation that was causing the rejected males to develop a preference for alcohol.

    3. was consistently higher for the rejected-isolated cohort

      The researchers found that when given a choice between normal food and alcohol-mixed food, male flies who had mated with female flies did not prefer the alcohol, whereas male flies who had been rejected by the female flies did prefer the alcohol.

    1. For guides targeting either KRAS or PPIB, we found that REPAIRv2 had no detectable off-target edits, unlike REPAIRv1, and could effectively edit the on-target adenosine at efficiencies of 27.1% (KRAS) or 13% (PPIB)

      REPAIRv2 showed a moderate decrease in on-target activity, but at the same time it had substantially lower off-target editing.

      There is a trade-off between on-target activity and off-target numbers. Depending on how an editor is being used, it may be more useful to have high activity and low specificity or lower activity and high specificity.

      There is a middle ground where an editor is less active (but still useful) but more accurate.

    2. We analyzed the editing efficiency of these two systems compared with REPAIRv1 and found that the BoxB-ADAR2 and full-length ADAR2 systems demonstrated 50 and 34.5% editing rates, respectively, compared with the 89% editing rate achieved by REPAIRv1

      The authors demonstrated that the constructs with mutated ADAR had enhanced editing activity at both on-target and off-target sites. They also observed that the construct with the full-length ADAR protein was less efficient at editing and had fewer off-target effects.

    3. There was a high degree of overlap in the off-target editing events between ADARDD(E488Q) and all REPAIRv1 off-target edits, supporting the hypothesis that REPAIR off-target edits are driven by dCas13b-independent ADARDD(E488Q) editing events

      This conclusion was drawn from the observation that the ADAR deaminase domain alone produced a substantial share of the off-target edits seen when the full REPAIRv1 system was applied.

      Nevertheless, figure S8C shows that ADAR activity alone cannot account for all off-targets; there are likely other sources of off-target editing. Comparing off-target numbers across the four conditions shows that the REPAIRv1 construct produces more spurious editing than the ADAR domain alone.

    4. We found that all C-terminal truncations tested were still functional and able to restore luciferase signal (fig. S7)

      The REPAIR construct with the largest C-terminal truncation (removal of the end) of dCas13b demonstrated the same level of RNA editing as the original REPAIRv1. This meant that the researchers were able to shorten dCas13b enough to fit into an AAV, but not destroy its ability to bind RNA.

    5. We also observed that 50-nt spacers had an increased propensity for editing at nontargeted adenosines within the sequencing window, likely because of increased regions of duplexed RNA

      The longer, 50-nt spacers showed higher rates of editing but also more off-target effects. This suggests a tradeoff between the efficiency of editing and the specificity.

    6. We found that dCas13b-ADAR1DD(E1008Q) required longer guides to repair the Cluc reporter, whereas dCas13b-ADAR2DD(E488Q) was functional with all guide lengths tested

      The authors found that the length of the guide RNA affects how efficiently the RNA-editing protein complex finds the target sequence.

      In order to find the best construct, it is important to test different targeting conditions, testing as many target sequences as possible while also considering the resources required to test each one.

    1. Instead, this pattern arose from clear turnover in the mitochondrial ancestry of European dogs, most likely as a result of the arrival of East Asian dogs

      If ancient European dogs were the ancestors of modern European dogs, we expect that they would mostly belong to the same haplogroups. This is not the case, however, which tells us that these ancient dogs were replaced by another population of dogs.

    2. this early indigenous dog population in Europe was replaced (at least partially) by the arrival of East Eurasian dogs

      We know that there were indigenous dogs in Europe 15,000 years ago. East Asian and Western Eurasian dogs are estimated to have shared common ancestors 14,000–6,400 years ago, and probably earlier than that (Western Eurasian dogs kept some of their ancestral wolf DNA, which skews the estimated divergence time). The authors suggest that the dogs present 15,000 years ago were replaced by dogs that arrived from East Eurasia.

    3. unequivocally placed the Newgrange dog with modern European dogs

      According to radiocarbon dating, the Newgrange dog is 4,800 years old. Three different ancestry analysis tools left no doubt that the Newgrange dog is most similar to modern European dogs, which tells us that the East Asian and Western Eurasian dogs had already diverged from each other by 4,800 years ago.

  7. Dec 2018
    1. carbonic acid in the air should sink to 0.62 – 0.55 of its present value (lowering of temperature 4° – 5° C.)

      For Ice Age temperatures to occur, Arrhenius calculated that atmospheric CO2 levels must decrease to 0.62 to 0.55 times the current levels.

      This calculation required him to extrapolate from the data in Table VII because the lowest case he modeled was CO2 at 0.67 times current level.
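      Arrhenius' table rests on a roughly logarithmic relation between CO2 concentration and temperature. As a rough consistency check, a minimal sketch using the modern form ΔT = S·ln(C/C0)/ln 2, with a per-doubling sensitivity S near Arrhenius' own high estimate (both the formula's form and the value of S are assumptions here, not figures from this excerpt):

```python
import math

def delta_t(ratio, sensitivity_per_doubling):
    """Temperature change for a given CO2 ratio under a logarithmic forcing law."""
    return sensitivity_per_doubling * math.log(ratio) / math.log(2)

# Arrhenius' ice-age case: CO2 at 0.62-0.55 of the then-present value.
# A per-doubling sensitivity of about 5.5 deg C (an assumption, close to
# Arrhenius' own estimate) reproduces the quoted cooling.
for ratio in (0.62, 0.55):
    print(round(delta_t(ratio, 5.5), 1))
```

      With S ≈ 5.5 °C per doubling, the 0.62 and 0.55 ratios give roughly 3.8 to 4.7 °C of cooling, close to the quoted 4°–5°C.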

    1. Populations of white-sand and terra firme ecotypes of Protium subserratum were attacked by herbivore assemblages differing in both abundance and species composition, exhibited significant differences in growth and defense allocation, and expressed qualitatively different secondary compounds. That these phenotypic differences occur in populations involved in incipient (or recent) speciation is consistent with the hypothesis that herbivores interact with environmental gradients to promote the evolution of habitat specialization in plants

      This is the ultimate summary of the whole paper. It explains why side experiments analyzing different variables were needed to clarify the conclusion that supports the given hypothesis.

    2. The existence of this trade-off has been well supported by many different temperate and tropical studies looking at allocation to growth and defense in plants adapted to different light and nutrient availabilities, both within species

      Differences in nutrient resources can affect plant size and maturity, which relates to the previously cited growth-defense trade-off, because the study reports a correlation between leaf size and defense strategies. This may be due to the distinct assemblage of herbivorous insects that prey on each plant population.

    3. Finally, it is important to recognize that our sampling was limited to juvenile plants and lasted only 12 months. Insect herbivore populations can strongly vary in different years, and thus, it is possible that our sampling missed important herbivores that are associated with P. subserratum.

      This is a very important limitation for putting the results and conclusions of the experiment in perspective. Had the sampling been done differently, the experiment might have revealed different or additional insights into defense allocation and speciation.

    4. We found that insect herbivores collected from Protium subserratum showed strong patterns of dissimilarity across different habitat types (Fig. 3). Moreover, we found significantly more insects feeding on terra firme plants than on white-sand plants, correlating with the large differences in resource availability between the habitat types. Taken together, these results suggest that there exists substantial variation in diversity and abundance of insect herbivores associated with P. subserratumacross white-sand and terra firme habitats.

      The results showed strong variation in insect species composition and abundance across habitat types. As mentioned before, the terra firme lineage had different growth strategies than the white-sand lineage, and the terra firme plants were also found to have more herbivores feeding on them. This fits with the terra firme habitat providing more resources. The herbivores collected also varied across habitats.

    5. unlike most members of the family Burseraceae, P. subserratum does not yield measureable amounts of monoterpenes and only trace amounts of sesquiterpenes

      The two ecotypes did not produce measurable amounts of monoterpenes (a class of organic compounds made by plants) and only trace amounts of sesquiterpenes. Importantly, this result was not anticipated in the scientists' hypothesis and was unusual compared with other members of the plant family.

    6. Terra firme populations exhibited significantly greater height and leaf growth and allocated more to chlorophyll production than white-sand populations in both soil types, demonstrating that different growth strategies have a genetic basis

      The reciprocal transplant experiment with the different soil types showed that the terra firme lineage had greater height, greater leaf growth, and higher chlorophyll production in both soil types, while the white-sand lineage did not. This demonstrated that the growth strategies of the two lineages are significantly different and genetically based.

    7. Differences between sites, habitat types, and their interaction explained 14%, 15%, and 11%, respectively, of the variation in herbivore species composition among the four sampling locations

      This explains how the variation in herbivore species closely tracks the variation in plants at each location. Certain insects prefer to feed on particular, identifiable compounds, which allowed the experimenters to see that the insects distributed themselves according to where the plants were, rather than the plants appearing around certain species of insects.

    8. differences among the dominant herbivores, the species composition of the entire P. subserratum herbivore fauna exhibited high turnover among sites and habitats

      The herbivore abundance and composition data showed that most insects preferred terra firme plants over white-sand plants, and that the majority of the species collected were chrysomelid beetles. Because the plants sampled were in different locations, site differences accounted for only a small percentage of the herbivore variation.

    9. In order to study the evolutionary processes involved in habitat specialization and the role of insect herbivores, an ideal study system would include recently derived sister species, or diverging lineages undergoing incipient speciation in different habitats.

      The main goal of the paper.

    10. four classes of constitutive leaf defenses were identified in the populations of P. subserratum: flavans, flavones, quinic acid derivatives, and oxidized terpenes.

      These results address another question posed in the paper, clarifying variables relevant to habitat specialization. The important point is that leaf chemical defenses were accounted for in the process of understanding how specialization works in P. subserratum.

    11. We found that leaf thickness and leaf toughness did not show a significant effect of lineage, but instead exhibited significant variation related to soil type

      This information answers a question posed in the introduction, filling in knowledge about variables that could affect the main question of habitat specialization.

    1. A collective effort to sequence thousands of invertebrate genomes will only be feasible with participation and commitment from the scientific community. The large breadth of invertebrate diversity will require taxon-specific expertise and integration of traditional biology with molecular advances, both in data generation and analysis. The GIGA team has already expanded beyond initial participants in the first GIGA planning workshop, where the cooperative spirit and ability to work in concert to establish a common platform for data sharing and analysis were demonstrated.

      This database can only be completed with the participation and commitment of the scientific community around the world.

    1. Gentry plots consistently emerge as the best compromise

      Overall, the 0.5-hectare Gentry plots were the best method.

    2. Fig. 1.

      Figure 1 shows how the coefficient of variation (CV, the standard deviation divided by the mean) changed for the different plot types. The smaller plots had a smaller CV, meaning their results varied less relative to the mean than those of the larger plots.
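      The coefficient of variation used here is simply the standard deviation expressed as a fraction of the mean. A minimal sketch with made-up species counts (not data from the paper):

```python
import statistics

def cv(values):
    """Coefficient of variation: standard deviation relative to the mean."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical species counts from repeated plots of two designs:
tight_counts = [48, 50, 52, 49, 51]   # tightly clustered -> low CV
spread_counts = [30, 70, 45, 90, 65]  # widely spread     -> high CV
print(cv(tight_counts) < cv(spread_counts))
```

      A lower CV means the inventory results are more repeatable relative to their average, regardless of the absolute scale of the counts.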

    3. The 1 ha plots, despite their relatively high cost to implement (Table 1), were more efficient to inventory than the smallest plots, although 1 ha plots were still inferior to the 0.5 ha modified Gentry plots.

      The 1-hectare plots were more efficient to inventory than the smallest plots, but the 0.5-hectare modified Gentry plots performed best of all.

    1. Additionally, the study of DNA methylation patterns revealed significant differences between early (T0, T1) and late (T2, T3) stages of the HAB simulation. More precisely, a decrease in genome-wide DNA methylation levels was observed as the simulation progressed.

      The authors further determined that the toxins released by K. brevis during the red tide can influence DNA methylation patterns. By influencing methylation, environmental stress is linked to epigenetics: as exposure time increased (with toxin concentrations remaining stable in the sample), genome-wide DNA methylation decreased.

    2. Based on these results, the application of H2A.X as a biomarker of oxidative stress seems plausible

      Ultimately, the authors observed a positive correlation between H2A.X and oxidative DNA damage, such that H2A.X increases in the presence of oxidative DNA damage (due to the presence of brevetoxins). Based on these results, they concluded that H2A.X is a good indicator of oxidative stress.

    3. This finding has a dual relevance: first, it corroborates the specialization of histone variants and PTMs in the chromatin of molluscs (González- Romero et al., 2012; Rivera-Casas et al., 2016a,b), supporting a role for H2A.X during environmental responses in invertebrates (Suarez-Ulloa et al., 2015) and the evolutionary conservation of this mechanism (Kinner et al., 2008; Lee et al., 2014). Second, it validates the application of commercial antibodies to detect H2A.X in bivalves, opening new avenues for monitoring DNA damage and health in populations of marine invertebrates.

      The authors found that histone variants derived from the major histones, such as H2A.X, are evolutionarily conserved. They also determined that commercial antibodies can detect these histones in oysters, pointing to a possible method for detecting further damage caused by the toxin.

    4. The obtained results revealed an increase in H2A.X, concomitantly with the exposure of Eastern oysters to increasing concentrations of K. brevis (T1, T2), and followed by a slight decrease during the recovery phase (T3, Fig. 3C).

      The phosphorylation of H2A.X appeared to be positively correlated with the Eastern oysters' exposure to K. brevis.

    5. These results suggest that oyster responses to K. brevis exposure do not involve specific modifications in H2A.X, H2A.Z and macroH2A transcription.

      Because transcript levels did not change over the course of the experiment, the authors concluded that the oyster's response to K. brevis does not involve changes in H2A.X, H2A.Z, or macroH2A transcription.

    6. These results, together with the absence of oyster mortality and the stability in water quality parameters throughout the experiment and across treatment groups, support the effectiveness of HAB simulation in ensuring exposure of Eastern oysters to brevetoxins through K. brevis ingestion.

      The results showed that the simulation accurately represented what happens during a HAB.

    1. The advancements in information science technologies to digitalize herbaria records and retrieve the historical phenological information from herbaria, satellite images and field cameras, will be essential to improve our capability to define proximate triggers and forecast the effects of climate change. The very essence of the importance of recovering historic phenological information, and its wide application for conservation, are illustrated by the work of Primack (2014) on the Thoreau records. As technology evolves and Land Surface Phenology becomes more likely, the ubiquity of ground-based phenology and remote sensing approaches will play an increasingly important role for phenology and conservation. This will help answer questions about the timing and drivers of phenological events under climate and land-cover change scenarios, especially in highly diverse and heterogeneous tropical system.

      The application of Land Surface Phenology may become increasingly important as technology advances. As scientists continue to better understand the drivers of phenological changes in the past, it can help us to plan for future changes within complex ecosystems such as tropical ecosystems.

    2. We therefore propose a series of measures and research topics that can increase the contribution of phenology research to conservation science (Box 1). We have described how phenological studies can support conservation management protocols in actively triggering or accelerating the resilience of degraded ecosystems, potentially making a large contribution to the general research framework on global climate and land-use change.

      The authors analyzed how phenology can aid in conservation science. With understanding the importance of this knowledge, scientists have laid out goals for future phenology studies, which includes creating a global model for monitoring changes in phenology to improve biodiversity. The key finding of this paper is understanding the importance of phenology and how it can help in future management discussions.

    3. The understanding and support of ecosystem services provided by biodiversity should take into account the temporal dimension in resource abundance and dynamics across the landscape (Schellhorn et al., 2015).

      Although much attention is paid to what an ecosystem can offer society, managers of ecosystem services in tropical systems need to take into account the relative abundance of resources over time. This is important for sustaining the diversity of the ecosystem while maximizing its productivity for human use.

    4. Recent advances in digital technologies to retrieve historical phenological information from herbaria, satellite images and field cameras will be essential to improve our capability to define proximate triggers, and forecast the effects of climate change.

      The causes for latent reproductive phenology in plants have yet to be determined. However, scientists can now use digital technologies (like digital cameras and remote sensing) to capture phenological events as they occur. This gives them the opportunity to examine changes without having to worry about the amount of time they have to analyze those changes.

    1. Field observations revealed that sheer caterpillar size was a fair defense against these ants;

      The authors observed in the field that sheer caterpillar size was itself a fair defense against these ant predators.

    2.  Pseudomyrmex gracilis received the highest final score at 52, and C. floridanus had the next highest score at 40; both C. ashmeadiand C. planatus received lower scores of 24

      Based on species abundance, Pseudomyrmex gracilis received the highest score.

    1. larger fish have greater thermal inertia and increased cardiac capacity

      Thermal inertia is the ability of a body or object to maintain its temperature when ambient temperature changes. Larger objects have higher thermal inertia, so larger fish lose heat more slowly than smaller fish. Larger fish also have larger hearts, which can pump more blood.

    2. The largest size-based differences in energy intake were also observed in October (Fig. 6 and table S3), indicating that thermal niche expansion in this endothermic species results in high energetic reward.

      The increased temperature range allowed the tuna to forage and obtain energy more efficiently.

    3. Lower energy intake was observed during late summer (August and September), when bluefin tuna are moving up through the Southern California Bight (28° to 32°N).

      Lower energy intake during migrations.

    1. This manipulation significantly reduced ethanol preference in mated males, which have elevated NPF levels, but not in virgin males (Fig. 3, A and B)

      In order to confirm that NPF was causally related to the observed changes in alcohol preference, the researchers manually reduced the amount of NPF receptor the flies could produce in all neurons. They did this by using an RNAi, which can specifically prevent the formation of molecules coded by genes. Since the researchers had previously shown that mated flies have higher levels of NPF than virgin flies, and that this was correlated with a reduced preference for alcohol, they hypothesized that if they manually inhibited NPF's ability to affect the fly, then mated flies would show an increase in their preference for alcohol.

      The researchers found that genetically reducing the number of NPF receptors in the brain for mated flies (who would have had elevated levels of the NPF molecule) caused these flies to develop a preference for alcohol. On the other hand, virgin flies (who would have had average levels of NPF molecules in their brains) were unaffected by this. This points to a causal relationship between the function of NPF and its receptor, and preference for alcohol.

    2. When tested 24 hours later, males in the experimental group demonstrated strong preference for the odor associated with NPF neuron activation. The genetic controls, which did not undergo NPF neuron activation, but were exposed to the same training protocol, developed no odor preference (Fig. 4C). Other controls, which underwent NPF neuron activation but were not exposed to the training protocol, similarly developed no odor preference (fig. S6C).

      The researchers found that the flies who had learned to associate an odor with activation of the NPF circuit preferred that odor in the Y maze, relative to the unpaired/neutral odor.

      Other control flies who underwent partial genetic manipulations but did not experience NPF activity-odor pairings had no odor preference in the maze.

    3. There was no effect on ethanol preference when virgin males were tested at 20°C, when the channel is inactive, but there was aversion to ethanol-supplemented food at 29°C, when the channel is active (Fig. 3, C and D)

      The researchers found that when they manually activated NPF-related neurons (recall that when using dTRPA1, this occurs at a higher temperature), preference for alcohol was significantly diminished, even amongst virgin flies.

    4. Rejected-isolated males showed the lowest transcript levels, virgin-grouped males showed higher levels, and mated-grouped males showed the highest (Fig. 2A and fig. S4). Rejected-isolated males also showed markedly lower NPF protein levels than mated-grouped males

      The researchers found that flies that had experienced sexual rejection had significantly reduced levels of NPF in their brain, and flies that had undergone mating had significantly increased levels of NPF. This supports the hypothesis that sexual history can affect the amount of NPF that gets made in the brain, which can in turn change how much alcohol an organism might consume.

    5. The virgin males showed higher ethanol preference, although in general not quite as high as rejected-isolated males

      The researchers found that even when the male virgin flies were housed in large groups, their preference for alcohol persisted, and that mated flies with similar housing conditions did not prefer alcohol. This indicates that social isolation is not the reason for the observed alcohol preference in virgin flies.

    6. Artificial activation of NPF cells, which occurs at 30°C but not 22°C, had no effect on the initial aversion (fig. S6, A and B), but abolished conditioned preference for ethanol (Fig. 4, D and E)

      When NPF cells are activated in tandem with the ethanol vapour and odor exposure, the preference for the ethanol paired odor (that develops for controls) is completely absent. This indicates that the activity of the NPF cells, which the previous experiment showed was inherently rewarding, interferes with the experience of the ethanol.

    1. although this dog was likely able to digest starch less efficiently than modern dogs, it was able to do so more efficiently than wolves

      The ability to digest starch was important to look at, because it gives an idea of when dogs were domesticated, and who domesticated them—hunter-gatherers (meat-based diet) or farmers (starch-based diet).

    1. In B. fremontii, differential transcript and cell abundance data, along with physical adjacency to crystalline residues, implicate Cyphobasidium in the production of vulpinic acid, either directly or by inducing its synthesis by the lecanoromycete.

      The data presented in this paper mean that it can’t be ruled out that the newly identified basidiomycete is involved in vulpinic acid formation. Questions to be tested in the future include whether basidiomycetes produce the acid directly, or whether their presence causes the other fungal players to produce the acid themselves.

    2. The discovery of ubiquitous yeasts embedded in the cortex raises the prospect that more than one fungus may be involved in its construction, and it could explain why lichens synthesized in vitro from axenically grown ascomycete and algal cultures develop only rudimentary cortex layers

      These Cyphobasidium cells were found throughout the lichen, and show evidence of living their whole life cycle within the lichen. Therefore, at this point, it cannot be ruled out that these basidiomycetes might be integral to the structure, phenotype, and formation of some lichen species.

    3. Consistent with the transcript abundance data, these cells were more abundant in thalli of B. tortuosa (Fig. 3), where they were embedded in secondary metabolite residues (movie S1).

      The imaging results showed that more basidiomycete cells were present in the lichen that produces vulpinic acid (B. tortuosa). This result supported the gene-expression analysis.

    4. As a whole, these data indicate that basidiomycete fungi are ubiquitous and global associates of the world’s most speciose radiation (14) of macrolichens.

      The researchers demonstrated, using rRNA analyses, that the new basidiomycete probably evolved with the lichen species. They showed that different lichens carried different basidiomycete strains. They concluded that basidiomycete symbionts are present in many lichens, and that many lichens carry a specific strain.

    5. Restricting our analyses to Ascomycota and Viridiplantae revealed little differential transcript abundance associated with phenotype

      The scientists found that genes in Ascomycota (the mycobiont in both lichens) and Viridiplantae (both lichen's photobiont) were expressed similarly in both lichens. Therefore, they concluded it was unlikely that a common gene was causing the production of vulpinic acid in one lichen but not the other. Therefore, the phenotypic differences between two lichen species could not be explained simply by looking at gene expression in the symbionts.

  8. Nov 2018
    1. The results of this analysis again revealed a clear East-West geographic pattern across Eurasia associated with the deep phylogenetic split

      When the DNA sequences of 605 dogs were examined, two different groups were identified—the East Asian and Western Eurasian core groups. A dog may fall into either group, or between groups, with elements of both groups in its DNA (admixed).

      The 605 dogs were from different parts of the world, and when represented on a map (as in Figure 1A) it was clear that dogs are mostly of the East Asian core group (in red dots) in East Asia, and mostly of the Western Eurasian core group (in yellow) in Western Eurasia.

    1. strategies for similarity matching in the brain (21) and hashing algorithms for nearest-neighbors search in large-scale information retrieval systems.

      In this paper, the authors were able to use their knowledge of the fly olfactory circuit to improve the performance of a standard LSH algorithm. This suggests that the brain uses computational strategies for similarity searches that are better than what are currently used in data science.

      By studying the way the brain computes information, scientists can create faster and more efficient algorithms for solving the nearest-neighbor search problems that arise in everyday life (e.g. databases and search engines).

    2. Moreover, the sparse, binary random projection achieved a computational savings of a factor of 20 relative to the dense, Gaussian random projection

      Computational savings refers to the efficiency of an algorithm—that is, how much time and space on the computer it requires to run. Especially when working with very large data sets, computational savings is an extremely important factor to consider since analyzing the data can take a very long time (imagine if you had to wait more than a fraction of a second every time you entered search terms into Google).

      Here, the authors note that by modifying their experimental algorithm to include sparse, binary random projections instead of dense, Gaussian random projections, they were able to make the algorithm 20 times more efficient. That's big!

    3. Replacing the dense Gaussian random projection of LSH with a sparse binary random projection did not hurt how precisely nearest neighbors could be identified

      A typical LSH algorithm uses a dense Gaussian random projection, while the fly algorithm uses a sparse binary random projection. To test whether this made any difference in the performance of the two algorithms, the authors modified their LSH algorithm to use sparse binary random projections instead.

      The result was that this made no noticeable difference in the performance of the two algorithms (see Figure 2A).
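      The two projection styles compared here can be sketched in a few lines of NumPy (the dimensions and per-row sparsity below are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 20          # input dimension, projection dimension (illustrative)
x = rng.random(d)      # an input "odor" vector

# Dense Gaussian random projection: every entry drawn from N(0, 1).
dense = rng.normal(size=(m, d))

# Sparse binary random projection: each output row samples only a few
# inputs, mimicking how each Kenyon cell in the fly receives input from
# a small random subset of projection neurons.
sparse = np.zeros((m, d))
for row in sparse:
    row[rng.choice(d, size=6, replace=False)] = 1.0  # 6 of 50 inputs

y_dense, y_sparse = dense @ x, sparse @ x
# Each sparse output needs only a handful of additions (no multiplies),
# versus d multiply-adds for the dense case: the computational savings.
print(y_dense.shape, y_sparse.shape)
```

      Both projections map the 50-dimensional input to 20 dimensions; the sparse binary one does so with far fewer arithmetic operations per output.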

    1. It now seems fairly certain that codons do not overlap. If they did, the change of a single base, due to mutation, should alter two or more (adjacent) amino acids, whereas the typical change is to a single amino acid,

      If codons do not overlap, then a single base change would alter the code for only a single amino acid. If codons do overlap, then a single base change may alter the code for multiple amino acids.
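      The overlap argument can be checked with a toy reading-frame comparison (the sequences are illustrative; each codon stands in for the amino acid it would encode):

```python
def codons_nonoverlapping(rna):
    # Read in consecutive, non-overlapping triplets.
    return [rna[i:i+3] for i in range(0, len(rna) - 2, 3)]

def codons_overlapping(rna):
    # Hypothetical fully overlapping code: a codon starts at every base.
    return [rna[i:i+3] for i in range(len(rna) - 2)]

def n_changed(a, b, reader):
    # Count how many codons differ between the two sequences.
    return sum(x != y for x, y in zip(reader(a), reader(b)))

before, after = "AAAUUUAAA", "AAAUCUAAA"   # one base mutated (index 4)
print(n_changed(before, after, codons_nonoverlapping))  # 1 codon altered
print(n_changed(before, after, codons_overlapping))     # 3 codons altered
```

      A single base change alters one codon under non-overlapping reading but up to three adjacent codons under fully overlapping reading, which is why observing single-amino-acid changes argues against overlap.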

    2. presumably AAA codes lysine. However, since UUU codes phenylalanine, these facts rule out all the foregoing proposed codes.

      Since A pairs with U, the complementary-code hypothesis would predict that AAA codes for the same amino acid as UUU. Instead, AAA codes for a different amino acid (lysine) than UUU (phenylalanine), ruling out the complementary-code hypothesis.

    3. The first direct evidence that this was not so was obtained by my colleagues Bretscher and Grunberg-Manago (8), who showed that a poly (C,A) would stimulate the incorporation of several amino acids.

      Since a strand of RNA lacking uracil encoded for various amino acids, uracil is not necessary to encode for an amino acid.

    4. Thus, one codon for phenylalanine appears to be the sequence UUU

      Since the only base found on the synthetic RNA was uracil (U), Crick concludes that triplets of uracil must encode phenylalanine residues.

    1. We have isolated and characterized a gene, Yob, for the M factor in the malaria mosquito Anopheles gambiae

      The authors identified a genetic culprit, which they call Yob, which explains the factors that initiate the genetic pathway to maleness in Anopheles gambiae mosquito embryos.

    2. Yob represents an excellent tool to be used in transgenic technology to conditionally eliminate female embryos and efficiently produce male-only generations of both malaria-transmitting Anopheles species

      In many genetically modified mosquito-control efforts, it is necessary to release only males into the environment. Male mosquitoes do not bite and may or may not pass their genetic modifications to the next generation (depending on the design of the intervention). The authors propose that by over-expressing Yob in laboratory-raised mosquitoes, scientists will be able to produce male-only populations without the labor-intensive process of sexing pupae.

    3. we observed highly significant male deficiency in mosquitoes surviving transient knockdown of Yob in nonsexed embryos (Fig. 3C). All tested female survivors had the XX karyotype.

      In contrast to the previous experiment, where the authors added more Yob and saw evidence of female death, here they reduced Yob expression and saw evidence of male death: the surviving females had XX karyotypes, likely because knockdown caused under-expression of genes on the male's single X chromosome.

    4. all the GFP-positive individuals developed as phenotypic males, whereas in the GFP-negative group and in the control group of A. gambiae embryos injected with GFP plasmid only, the sex ratio was unbiased (Fig. 3B). The GFP-positive males had the XY karyotype, as indicated by PCR; moreover, they had normally developed reproductive organs, produced motile sperm, and were fertile (table S1). Thus, ectopic delivery of Yob mRNA is lethal to genetic female embryos, but has no discernible effect on the sexual development of genetic males.

      All embryos that underwent successful injection of Yob (as indicated by expression of the fluorescent marker) developed with the appearance of male larvae and had an XY genotype. The authors think that expression of Yob in XX embryos may have killed them. Embryos that were not successfully injected had an approximately 50/50 male-to-female ratio, as expected.

    5. No shift in dsx splicing pattern was observed in cells transfected with transcripts either lacking an initiation codon or containing a premature stop codon, unlike the positive control cells transfected in parallel with the wild-type Yob transcripts

      The altered Yob transcripts did not change the splicing pattern of dsx the way the normal transcript does. Since the alterations (a missing start codon or a premature stop codon) render the transcript nonfunctional as a template for translation, the authors can infer that the normal Yob transcript is translated into a protein.

    6. The analyzed region encompasses two open reading frames (ORFs) longer than 50 codons, of which only the shorter ORF bears a substitution pattern indicative of purifying selection

      In one portion of the Yob region, the authors found evidence of evolutionary selection based on the number, types, and patterns of sequence changes. Other areas of the gene likely also differ between species, but those changes may be more random in pattern.

    7. We observed a significant loss of the female and gain of the male dsx transcript isoforms in cells transfected with Yob mRNA, as compared with control nontransfected cells (Fig. 2, A, B, and D), or cells transfected with nonproductive forms of Yob (see below; Fig. 2D). Transfection experiments in larvae suggest that Yob exerts the same effect on dsx in vivo (fig. S5). This confirms that Yob is involved in the sex determination pathway as a direct or an indirect upstream regulator of dsxsplicing.

      There is less female-form dsx in cells that were transfected with Yob than in cells not transfected, or in cells transfected with non-functional Yob. They found the same results whether the transfection was done in cell culture or in live mosquito larvae. The authors still cannot tell whether Yob interacts directly with dsx, but they now know that Yob acts upstream to affect dsx splicing.

    8. Yob acting upstream of dsx in the sex-determining hierarchy

      Simply by evaluating timing of expression, the authors conclude that Yob must be upstream of dsx. However, this statement is also a hypothesis, as they have not found a direct connection between the two genes. This statement will serve as motivation for their next experiments.

    9. The female isoform of dsx is maternally deposited, but largely degraded in male embryos within 4 hours of oviposition, and only after complete degradation of the female isoform in males is a persistent pattern of sex-specific dsx splicing established

      They find from an mRNA gel that both males and females express a female form of a downstream sex determination gene (dsx) at 1 hour after fertilization. From the previous experiment, the authors know that Yob expression begins at about 2.5 hours after fertilization. In this experiment, the authors see that at 4 hours post fertilization, the female form of dsx begins to disappear in males but not females. By 8 hours, males start expressing the male form of dsx and the female form is completely absent. For the embryo to develop as a male, the maternally deposited female-form transcripts must be removed before male-specific expression can begin.

    10. From the male pool, 21 reads uniquely mapped to the previously characterized scaffold AAAB01008227 derived from the A. gambiae Y chromosome

      Of the ~500,000 mRNA sequences returned, only a very small portion (21) were found to derive from the Y chromosome. While the authors didn't explicitly say this, it is assumed that these sequences were not found in the female pool.

    1. addiction to cocaine (but not to other drugs) accounted for only 9% of the variance

      The fact that cocaine use accounts for only 9% of the observed variance in avoidance response learning indicates that there are other variables (not taken into consideration) that could influence why CUD patients respond to stimuli that they should be avoiding.

    2. Our findings are also in line with evidence

      Cocaine causes physiological changes that lead to impairment of brain regions involved in control functions.

      Cocaine blocks dopamine re-uptake, leading to overstimulation. Overstimulation affects the way individuals assimilate and learn new information. A lack of control over actions after cocaine dependence makes individuals respond based on habit rather than on a structured plan.

    3. Treatment of cocaine addiction should thus focus on training desirable habits that replace habitual drug-taking while protecting CUD patients from aversive consequences that they may fail to avoid.

      The author's findings might have implications for the future treatment of CUD patients.

      Cultivating desirable habits or replacing bad ones could be effective treatments for cocaine use disorder patients. Introducing these types of interventions could be essential to reducing one of the major public health problems that we currently face.