1,113 Matching Annotations
  1. Last 7 days
    1. Despite this, the arms retained sufficient rigidity appropriate for incorporation in the dosage forms.

      Even with a decrease in flexural strength, the arms remained rigid enough to move forward with the experiment. If the rigidity had been affected more drastically, the arms could break inside the stomach. The arms are meant to keep the dosage form large enough that it cannot pass through the pylorus, so breakage could cause the dosage form to exit the stomach early.

    2. The flexural strength of the arms was reduced after 2 weeks of incubation in SGF

      After 2 weeks in the simulated stomach acid, the strength of the caged arms decreased because exposure to the acid affected the integrity of the material the arms are made of. (The percentage of strength decrease at 2 weeks is not stated.)

    3. Solid arms made of Sorona 3015G NC010 were placed in simulated gastric fluid (SGF) for various times

      Simulated gastric fluid mimics stomach fluid; here it is used to test the stability of the polymer that the solid arms are made of over prolonged periods.

    4. understanding the mechanical stability of the polymer at low pH

      Some polymers are reactive at certain pH levels. Tests were performed to understand how this specific polymer behaves when exposed to different pH levels and to ensure it is stable in the low pH of the stomach.

    5. the caged arms had a significantly higher fracture force than V-shaped arms (65.6 ± 7.5 N, n = 6 versus 51.7 ± 5.8 N, n = 6; P < 0.05, Student’s t test).

      The caged arms' fracture force was significantly higher than that of the V-shaped arms. The three-point bend test shows that it takes a higher force to break the caged arms than to break the V-shaped arms.
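
      For readers who want to verify this comparison, the t statistic can be recomputed from just the reported means, standard deviations, and sample sizes. Below is a minimal sketch (assuming equal variances, as in a classic Student's t test), not the authors' code:

      ```python
      from scipy import stats

      # Summary statistics reported above (fracture force in newtons)
      t_stat, p_value = stats.ttest_ind_from_stats(
          mean1=65.6, std1=7.5, nobs1=6,  # caged arms
          mean2=51.7, std2=5.8, nobs2=6,  # V-shaped arms
      )
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05, as reported
      ```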

    6. gastric resident dosage form

      A dosage form that stays in the stomach while it releases the drug.

    1. we printed a neonatal-scale human heart from collagen

      After demonstrating the ability to print smaller components, the authors wanted to test the ability to print a full organ framework from collagen. To do this, they printed an infant-sized heart.

    2. we printed a tri-leaflet heart valve 28 mm in diameter

      The authors wanted to confirm that the printed collagen could function in an adult heart. They printed a heart valve and tested it with a flow system that simulates conditions of a heartbeat.

    3. We next FRESH-printed a model of the left ventricle of the heart

      The authors printed a component of the heart using two materials: collagen ink for structure and a bio-ink containing heart cells for function (contraction).

    1. high-resolution stratigraphic framework

      To analyze what happened in these basin areas, the authors used sequence stratigraphy. They looked for variations in the successive layers of sedimentary rocks and in the composition of the rocks. The order in which the different layers were deposited was carefully recorded. The chronostratigraphy aspect of their work involved tracking changes in the character of the rocks through geologic time. Fossils found in each layer can then be better placed in terms of when these organisms evolved and, in some cases, became extinct.

      Title: Stratigraphic Principles: https://www.biointeractive.org/classroom-resources/stratigraphic-principles

    2. paleomagnetics

      This is a study of the magnetism in rocks induced by Earth's magnetic field. The minerals in certain rocks lock in the direction and strength of the magnetic field at the time of their formation. The authors used this to determine the age of their fossil finds.

    3. CA-ID-TIMS U-Pb-dated volcanic ash

      This technique was used by the authors to date rock strata. Chemical Abrasion Isotope-Dilution Thermal Ionization Mass Spectrometry (CA-ID-TIMS) is a multistep, high-precision technique used to determine the relative amounts of uranium-238 and lead-206 present in zircon crystals.

      Zircon crystals, formed in volcanic or metamorphic rock, are extremely durable and resistant to chemical breakdown; they can survive major geologic events. Over time, new zircon layers form on top of the original crystal, while the center of the zircon remains unchanged, keeping the chemical characteristics of the rock in which it originally formed.

      Every radioactive element has a decay rate. The length of time it takes half of the radioactive atoms in a sample to decay (form into a different element) is referred to as its half-life; the half-life for the decay of uranium-238 to lead-206 is 4.47 billion years. The authors selected zircon crystals from their study area and dated them using CA-ID-TIMS.

      Image of thermal-ionization mass spectrometer: https://www.usgs.gov/media/images/usgs-thermofinnigan-triton-thermal-ionization-mass-spectrometer

    4. two U-Pb radiometric dates

      To determine the age of some of the rocks in their study area, the authors used uranium-lead dating, which is most often applied to volcanic and metamorphic rock and to very old rocks. This technique involves the radioactive decay of U-238 and U-235 into lead (Pb). The 238 and 235 refer to the sum of the number of protons (92) and neutrons in the nucleus of each of these forms of radioactive uranium. The half-life is how long it takes half of a sample to undergo radioactive decay: for U-238 decaying into Pb-206, it is 4.47 billion years; for U-235 decaying into Pb-207, it is 704 million years. Since the two forms of uranium have different half-lives and decay into different forms of lead, they provide a good cross-check when calculating the age of a rock or fossil. This dating technique is best used on rocks that are from 1 million to 4.5 billion years old.
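
      As a worked illustration (not taken from the paper), an age follows from the decay law N_daughter/N_parent = e^(λt) − 1, where λ = ln(2)/half-life. Because the two uranium systems decay at different rates, agreement between the two computed ages is the cross-check described above. The isotope ratios below are hypothetical:

      ```python
      from math import log

      # Decay constants (per year) from the half-lives quoted above
      LAMBDA_238 = log(2) / 4.47e9   # U-238 -> Pb-206
      LAMBDA_235 = log(2) / 7.04e8   # U-235 -> Pb-207

      def age_years(daughter_parent_ratio, decay_constant):
          """Solve ratio = exp(lambda * t) - 1 for t."""
          return log(1 + daughter_parent_ratio) / decay_constant

      # Hypothetical measured radiogenic ratios for a ~66-million-year-old zircon
      t1 = age_years(0.01029, LAMBDA_238)  # Pb-206 / U-238
      t2 = age_years(0.06720, LAMBDA_235)  # Pb-207 / U-235
      print(f"{t1 / 1e6:.1f} Ma and {t2 / 1e6:.1f} Ma")  # agreement = a concordant age
      ```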

    5. The K–Pg boundary is demarcated by the decrease in abundance of Cretaceous pollen taxa

      Fossil pollen reveals how plant species evolved after the K-Pg extinction. The authors compared pollen from both sides of the K-Pg boundary. The authors discovered that immediately after the impact, there were few plants. This clip from HHMI's Rise of the Mammals shows how they obtained pollen and spore samples. 4:28:11 - 5:01:12

    6. concretions and are found in all observed facies

      These are compact masses of mineral matter, usually spherical or disk-shaped. They are carried along by water and become embedded in host rock that has a different composition. Concretions form in sediments before the sediments become rocks. The authors focused on searching for and cracking open concretions. Concretion explained in HHMI's Rise of the Mammals clip 7:32:20 - 9:22:20

    7. Loxolophus sp. [(E) and (F)

      Loxolophus is the oldest placental mammal to be discovered. It has teeth for eating both meat and plants, thus adapting it to a recovering ecosystem. Clip from HHMI's Rise of the Mammals 10:25:10 - 11:18:22

    8. Taeniolabis taoensis [(K) and (L)

      This mammal was a specialist and was proof that mammals were evolving and becoming larger as they specialized. Increased plant diversity fueled this growth. Clip from HHMI's Rise of the Mammals supports this: 11:18:09 - 11:51:06

    9. In addition, the pattern and abundance of vertebrates preserved in all paleoenvironments suggest that by ~700 ka post KPgE the largest mammals (25+ kg) were spatially partitioned across the landscape.

      The fossil record shows that after the K-Pg extinction event, there was an exponential increase in the maximum size of mammals. About 40 million years ago, this size increase leveled off. It is hypothesized that diversification to fill ecological niches was the primary driver of this rapid increase and that environmental temperature and land area have acted to constrain the continued increase. clip from HHMI's Rise of the Mammals 11:51:07 - 12:29:18

    10. Cranial size and lower first molar area were used to estimate mammalian body mass – an important feature that impacts many aspects of the biology and ecology of mammals

      An important step in the authors' research was the determination of the body masses of the fossil mammals they unearthed. Body mass was estimated by measuring the length × width (L×W) of the first lower molar. This has been found to be an accurate indicator of body mass through comparisons of molar size to the body mass of living mammals. These data are important to the authors since body mass has been linked with characteristics such as energy expenditure, gestational period, temperature regulation, and niche ecology. When reconstructing life during the time period being studied, body mass distributions in mammalian communities can be used to infer ancient environmental conditions. Therefore, determining the body mass of a fossil mammal is an important step toward understanding its palaeoecological role. Brain size (cranial size) usually increases with the size of the animal; however, the relationship is less precise, as cranial size does not correlate as closely with body mass as the area of the first molar does.
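
      Such estimates rest on log-log (allometric) regressions calibrated against living mammals, in which ln(body mass) is a linear function of ln(molar area). A minimal sketch of the idea; the coefficients here are illustrative placeholders, not the values the authors used:

      ```python
      from math import exp, log

      def body_mass_grams(length_mm, width_mm, slope=1.5, intercept=3.2):
          """Estimate body mass from first lower molar area (L x W).

          Uses ln(mass) = slope * ln(area) + intercept. The slope and intercept
          here are hypothetical; real studies fit them by regressing molar area
          against body mass in living mammals.
          """
          area = length_mm * width_mm
          return exp(slope * log(area) + intercept)

      # Hypothetical molar measurements from a fossil jaw
      print(f"Estimated body mass: {body_mass_grams(8.0, 6.0) / 1000:.1f} kg")
      ```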

  2. Feb 2020
    1. Preservation of He and its isotope signatures in diamond is supported by He heterogeneities within individual diamonds (18, 20)

      In conducting their experiments, the research team acknowledged the inherent variability in the <sup>3</sup>He/<sup>4</sup>He ratio inside each diamond. To minimize these differences, the scientists used diamonds that were free of inclusions, thereby eliminating the possibility of inconsistencies in helium ratios originating from these inclusions. Then, they crushed the diamonds in a vacuum to verify that 90% of the helium is present in the groundmass of the diamond. This was followed by step-heating the crushed diamond to measure the variability in the <sup>3</sup>He/<sup>4</sup>He ratio in the diamond.

    2. picogram analyses of Pb-Sr isotopes of fluid inclusions

      A picogram is a unit of mass equivalent to one-trillionth (10<sup>-12</sup>) of a gram. A picogram analysis is done by weighing a sample at the picogram scale and then using other analytical techniques on this sample to get meaningful information.

    3. all diamonds show typical sublithospheric features

      In order to confirm the sublithospheric features, the authors characterized the structure of these diamonds using a technique called cathodoluminescence imaging. With this technique, the light emitted by a sample when it is irradiated with an electron beam can be measured.

    4. compositions of basalts provide information

      Measurement of isotopic content of primitive basaltic rocks has been a useful method to understand the exact chemical composition of these old rocks. Learn more in this article from Eos explaining the importance of using these techniques. https://eos.org/features/isotope-geochemists-glimpse-earths-impenetrable-interior

    5. We studied 24 diamonds (1.3 to 6 mm in size) from the Juina-5 and Collier-4 kimberlites and São Luiz River (Juina, Brazil).

      The authors carefully inspected 24 diamonds excavated from the Juina area of Brazil. This location was chosen because diamonds excavated from this area show characteristics similar to those originating in the Earth's transition zone (410 to 660 km depth). The sizes of these diamonds ranged from 1.3 mm to 6 mm, and despite this limitation in size, the authors could still successfully detect the helium trapped in these diamonds.

    6. The carbon isotope compositions of the diamonds

      The carbon isotope compositions were measured using an instrument called a stable isotope ratio mass spectrometer. Watch this video to get an idea of how this instrument works in a science laboratory: https://www.youtube.com/watch?v=SHbzEwMt-1s

    1. the caged arms made using a thermoplastic polymer,

      The arms were made out of a thermoplastic polymer which is a substance that can be heated to become pliable, and upon cooling, it hardens.

    2. mechanical properties of V-shaped arms

      Different forces were applied to the material. The physical reactions to those forces were noted in order to determine the mechanical properties of this material.

      Examples of Mechanical Properties: strength, toughness, brittleness, etc.

    3. Using this dosage form, we have shown 1- to 2-week-long delivery of anti-infectious disease agents previously (23, 25); however, month-long delivery of contraceptives has yet to be achieved.

      The dosage form has been shown to release drug for an extended period of 1 to 2 weeks, but not yet for the month-long target period needed for contraception.

    4. we decided to use PDMS-based polymer matrices

      PDMS-based material was used in this experiment because this material has known uses in other sustained-release products.

    5. a drug-polymer matrix within the sleeve

      Within the structural polymer that makes up the arms, levonorgestrel (LNG) can be loaded for extended release.

    6. an outer sleeve made of a rigid polymer that provides mechanical integrity (structural polymer)

      The polymer making up the outer sleeve is for structural support of the dosage form.

    7. assumes a size larger than that of the pylorus

      After ingestion, the dosage form is released from the capsule and expands to a size larger than the stomach's opening (the pylorus), which prevents it from passing through. This allows for the extended release of the drug.

    8. folding into a capsule to facilitate oral administration

      The dosage form is able to fold into itself in order to become a size capable of fitting inside a capsule for ingestion.

    1. Collagen disks 5 mm thick and 10 mm in diameter were cast in a mold or printed and implanted

      Two types of collagen disks were created: solid disks from a mold, and porous disks made by trapping and removing the microparticles in the FRESH support material. The disks were then tested in mice.

    2. We first focused on FRESH-printing a simplified model of a small coronary artery–scale linear tube from collagen

      The authors tested a printed collagen tube surrounded by a mixture containing collagen gel and C2C12 cells.

      Over 5 days, one tube was left static (no perfusion) and the other underwent active perfusion.

  3. Nov 2019
    1. modular [subsets of species interacting preferentially with each other, forming modules of highly connected species

      Modular networks are groups of species that preferentially interact with each other. In Figure 1B, each module is represented by a distinct color.

      Connectance measures the proportion of realized interactions in a network out of the total number of possible interactions. The authors calculated a high connectance within each module, meaning the bird and plant species use the majority of available interactions.
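
      As a concrete illustration of the metric (not the authors' code), connectance is the number of observed links divided by the number of possible links in a bipartite bird × plant matrix:

      ```python
      import numpy as np

      # Hypothetical bird x plant matrix: 1 = interaction observed
      network = np.array([
          [1, 1, 0, 1],
          [1, 1, 1, 0],
          [0, 1, 1, 0],
      ])

      links = np.count_nonzero(network)               # observed interactions
      possible = network.shape[0] * network.shape[1]  # birds x plants
      print(f"Connectance = {links}/{possible} = {links / possible:.2f}")
      ```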

    2. The novel network was nested [specialist species interacting with proper subsets of partners of the most generalist species

      Networks such as seed dispersal networks consist of specialist species (which interact with only a few, select species) and generalist species (which interact with a broad range of other species). When the partners of a specialist form a subset of the partners of a generalist, the network is said to be nested.

      In Figure 1A, the specialist species are depicted as very thin rectangles to represent the few interactions they have with other species, whereas the generalist species are bigger rectangles to encompass the many interactions they have with other species. One example of nesting is the specialist animal at the bottom of the animal column interacting with the same plant as a generalist animal species, like the top rectangle of the animal column.

    1. injected with 4×10<sup>5</sup> Sa1N tumor cells

      The experiment was repeated with a much larger number of tumor cells injected. This is to ensure that the effects seen rely only on the treatment course and not on other factors.

    2. All control mice injected subcutaneously with 1×10<sup>6</sup> Sa1N cells

      The authors injected groups of 5 mice each with Sa1N fibrosarcoma cells.

  4. Oct 2019
    1. we injected groups of mice with 2×10<sup>6</sup> wild-type 51BLim10 tumor cells and treated them with anti-CTLA-4 beginning on day 0 as before, or beginning 7 days later

      The authors conduct a new set of experiments to check whether administering anti-CTLA-4 after tumors are detected is as effective as administering it at the same time tumors are introduced. If they are able to successfully treat mice after tumors are already established, then maybe this treatment could work for human patients as well!

    2. Mice that had rejected V51BLim10 tumor cells as a result of treatment with anti-CTLA-4 were challenged with 4×10<sup>6</sup> wild-type 51BLim10 cells 70 days after their initial tumor injections

      The authors injected the previously treated mice with unmodified tumor cells. If the mice developed an immune memory, they may be able to clear this tumor even though it does not express B7 and they have not been given anti-CTLA-4 antibodies!

    3. injected with 2×10<sup>6</sup> tumor cells

      The authors decided to check whether changing the tumor dose had an effect. They halved the dose to 2×10<sup>6</sup> and had a group of untreated and a group of anti-CTLA-4-treated mice.

    4. the growth of V51BLim10, a vector control tumor cell line that does not express B7

      This set of experiments was conducted with a variant of the same murine colon cancer tumor cells, but this time the tumors do not express B7. Thus the tumors are not able to provide the secondary signal to T cells.

    5. treated with anti-CTLA-4

      A third group of mice received anti-CTLA-4 antibodies as treatment.

    6. injected with 4×10<sup>6</sup> V51BLim10 tumor cells and left untreated, or treated with anti-CD28

      The mice were split into groups that received different treatment regimens. Two of the groups were untreated, or were treated only with anti-CD28 antibodies.

    7. Two groups were treated with a series of intraperitoneal injections of either anti-CTLA-4 or anti-CD28

      The authors injected groups of mice with tumor cells expressing B7-1 molecules. These mice were then treated with two different regimens. One group of mice was injected with antibodies targeting CTLA-4, and another with antibodies targeting CD28.

    8. untreated controls

      The authors had a control group of mice which were injected with tumor cells expressing B7-1 molecules, but were not treated with any antibodies.

    9. in vivo administration of antibodies to CTLA-4

      The authors injected mice with antibodies that bind CTLA-4.

  5. Sep 2019
    1. on the silicon dioxide (SiO2) substrate by means of a solution process method (figs. S1 and S2), providing a partial coverage on the SiO2 surface (22).

      The authors deposited a few drops of graphene in a colloidal liquid state on an SiO<sub>2</sub> substrate, giving a non-uniform (partial) coverage of graphene after the solution evaporated. Simultaneously, they also deposited a solution containing nanodiamonds to get nanodiamond particles on the SiO<sub>2</sub> surface.

    2. We demonstrate our observation of stable macroscale superlubricity while sliding a graphene-coated surface against a DLC-coated counterface

      The authors have designed and performed superlubricity experiments by sliding DLC-coated stainless steel balls against a graphene surface. However, after analyzing the initial test results, they needed to modify the design by also incorporating nanodiamonds into the system. They anticipated that the nanodiamonds can act as nano-ball bearings, thereby enhancing the mechanical strength of graphene and contributing to superlubricity.

    3. Atomistic simulations

      Molecular dynamics is a computer simulation method that allows for prediction of the time evolution of a system of interacting particles such as atoms and molecules.
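
      A minimal sketch of what such a simulation does at its core: repeatedly advance interacting particles through time with a numerical integrator. This toy example uses a Lennard-Jones potential and the velocity-Verlet scheme; it is a generic illustration, not the simulation setup used in the paper:

      ```python
      import numpy as np

      def lj_forces(pos, epsilon=1.0, sigma=1.0):
          """Pairwise Lennard-Jones forces on each particle (reduced units)."""
          forces = np.zeros_like(pos)
          n = len(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  r_vec = pos[i] - pos[j]
                  r = np.linalg.norm(r_vec)
                  # -dU/dr for U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
                  f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
                  forces[i] += f_mag * r_vec / r
                  forces[j] -= f_mag * r_vec / r
          return forces

      def velocity_verlet_step(pos, vel, dt=0.001, mass=1.0):
          """Advance positions and velocities by one time step."""
          f_old = lj_forces(pos)
          pos = pos + vel * dt + 0.5 * (f_old / mass) * dt ** 2
          vel = vel + 0.5 * (f_old + lj_forces(pos)) / mass * dt
          return pos, vel

      # Three particles in 2D; stepping repeatedly traces their time evolution
      pos = np.array([[0.0, 0.0], [1.5, 0.0], [0.75, 1.3]])
      vel = np.zeros_like(pos)
      for _ in range(100):
          pos, vel = velocity_verlet_step(pos, vel)
      ```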

    1. by using a two-chamber place preference test

      The authors hypothesized that, because the ZI to PVT projection promotes intake of foods that are pleasurable to eat and also makes mice overcome their aversion to light in order to eat that food, stimulation of this pathway is pleasurable or rewarding for the mice.

      They tested this by placing the animals in a box with two identical compartments. The mice were able to freely move around the box. On one side of the chamber the mice received stimulation of their ZI-PVT neurons, whereas the stimulation was turned off when the mice were on the other side.

    2. optogenetic stimulation

      A technique that uses light to control the activity of cells, most commonly neurons, in living animals. The cells are genetically modified to express ion channels that are sensitive to light. Shining light on the neurons changes their activity, allowing scientists to understand the role of the neuron in a given behavior or physiological process.

    3. Anterograde AAV-ChIEF-tdTomato labeling

      Infection of the neurons with tdTomato-tagged AAV allows the projection of the ZI GABA axons to be visualized.

      The authors used this method to determine where in the brain these neurons project to.

    4. we injected Cre recombinase–inducible adeno-associated viruses (AAV) expressing the optogenetic channelrhodopsin-like ChIEF fused with a tdTomato reporter [AAVdj-CAG-DIO-ChIEF-tdTomato (driven by the CAG promoter) (10, 11)] bilaterally into the rostral ZI of vesicular GABA transporter (VGAT)–Cre mice that express Cre recombinase in GABA neurons

      To target a neuron population of interest, e.g. those that express GABA, scientists use genetically modified viruses (AAVs) to deliver proteins into the brain (such as optogenetic tools).

      This is achieved by using two tools: 1) a mouse line that expresses the enzyme Cre recombinase in a specific population of neurons (e.g., those that express the GABA transporter VGAT) and 2) an AAV that expresses an optogenetic protein only in the presence of Cre. The AAV is injected into the brain region of interest in the Cre mice. This AAV has a tdTomato tag, which allows the injection site to be visualized under a fluorescent microscope.

      For further information on these tools, see this video on how optogenetics is used in mice.

      The ZI in both hemispheres of the brain was injected with the AAV (bilaterally), with the region lying towards the front of the brain (rostral) being targeted. The optogenetic tool used (ChIEF) activates neurons when blue light is shone on the cells.

  6. Aug 2019
    1. Given that APC−/− tumors can efficiently transport both glucose and fructose, we sought to determine the metabolic fate of glucose and fructose using 13C isotopic tracing. We isolated tumors from APC−/− mice and exposed them to four different labeling conditions for 10 min ex vivo: 13C-glucose (labeled at all six carbons), 13C-fructose (labeled at all six carbons), 13C-glucose + unlabeled fructose, and 13C-fructose + unlabeled glucose.

      To study glucose and fructose metabolism in tumors, the authors traced the breakdown of the molecules.

      1. The scientists labeled glucose and fructose with a heavy, stable isotope of carbon (<sup>13</sup>C) that can be traced even as the molecule is broken down.
      2. They incubated tumor tissues with labeled glucose, labeled fructose, a mix of labeled glucose + unlabeled fructose, or a mix of labeled fructose + unlabeled glucose. These tumor tissues absorb the sugars and metabolize them. Note: Adding a mixture of sugars to the tumor allows the scientists to determine how the metabolic pathways are related.
      3. The different components after metabolism are then measured in the lab to trace the metabolic pathway of how tumors break down sugars.
    2. on a tumor tissue microarray containing 25 cases of human colon tumors ranging from early-stage adenomas to metastatic carcinoma (fig. S5B)

      In order to investigate tumor metabolism in human tissues, the scientists used a tissue microarray, in which tiny samples of human tumors or tissues are arranged together and studied. They compared metabolism between 25 different tumors of varying severity and normal human intestinal cells.

    3. Given these findings, we hypothesized that fructose in the intestinal lumen might be efficiently transported and metabolized by tumors located in the distal small intestine and colon.

      The authors wanted to test whether tumors near the end (distal) of the intestines or in the colon consume fructose since it can be found in much higher concentrations in the colon than glucose can. Approach: They marked glucose and fructose molecules with C14 (radioactive carbon) which can be traced as the sugars get broken down and metabolized. This way if they find C14 from fructose and glucose in a tumor they can conclude that it metabolizes both sugars.

    4. Indeed, we found that fructose concentration was significantly increased in the colonic lumen (4.4 mM at peak 30 min) in WT mice after an oral bolus of HFCS (fig. S4A), consistent with impaired fructose uptake in the small intestine.

      The scientists repeated experiments from previous work to validate their methods. They fed mice (oral bolus) high fructose corn syrup and then 30 minutes later (to allow for digestion) measured fructose levels in the colon. Similar to previous studies, they found elevated fructose in the colon suggesting the passive transporters in the intestine were saturated and allowed fructose to pass through undigested.

    5. To uncouple the metabolic effects caused directly by HFCS from those caused by HFCS-induced obesity, we treated APC−/− mice with a restricted amount (400 μl of 25% HFCS) of HFCS daily via oral gavage starting the day after tamoxifen injection (referred to as the HFCS group).

      Here they did a two-part experiment: 1) They tested whether high fructose corn syrup itself induced metabolic dysfunction by giving mice a limited amount of high fructose corn syrup so that they did not become obese.

      2) To test the effects of high fructose corn syrup on colorectal tumor formation and growth the authors compared tumor characteristics between mice fed with different amounts of high fructose corn syrup. First, they treated all mice with tamoxifen to activate tumor formation. Then they broke them down into three groups: HFCS- mice that were treated with a limited amount of high fructose corn syrup to prevent obesity, WB- mice that had high fructose corn syrup mixed in with water so they consume a lot of it, and Con- a control group with no high fructose corn syrup administered. They then compared the formation of tumors and their characteristics to look at the effects of high fructose corn syrup.

    6. We first determined the physiological effects of HFCS administered to APC−/− and wild-type (WT) mice

      The scientists were first interested in looking at how high fructose corn syrup affects an entire mouse, and compare the effects on normal mice and their genetically modified mouse (APC -/-). They did this by mixing high fructose corn syrup into their water and allowing them to drink as much as they wanted (ad libitum). They monitored the mice's weight over time.

    7. To untangle the link between sugar consumption, obesity, and cancer, we mimicked SSB consumption in a genetically engineered mouse model of intestinal tumorigenesis. In this model, the adenomatous polyposis coli (APC) gene is deleted in Lgr5+ intestinal stem cells upon systemic tamoxifen injection (Lgr5-EGFP-CreERT2; APCflox/flox, hereafter APC−/− mice) (11, 12).

      The scientists need a mouse model that will develop intestinal tumors so that they can study the effects of sugar-sweetened beverages on the tumor. They manipulated the mouse genes so that after injection of a drug (tamoxifen), a gene in the intestine is deleted and tumors begin to form. With this genetically engineered mouse, the scientists can induce tumor formation in the mouse, then track tumor size and metabolism to look at the effects of high fructose corn syrup in the diet.

    1. tetrodotoxin, which prevents Na+ influx elicited by veratridine, prevented the effects of depolarization

      The sodium influx caused by veratridine was cancelled out by tetrodotoxin exposure: tetrodotoxin blocks the influx of sodium ions, thereby preventing depolarization. Hence, in the presence of both drugs, the increase in substance P proceeds as it does in untreated explants.

    2. one-way analysis of variance

      Used to compare the means of two or more groups.

      Here, the authors used this test to compare the activity of substance P across the following groups: (1) control, (2) in the presence of tetrodotoxin, (3) in the presence of veratridine, and (4) in the presence of both tetrodotoxin and veratridine.

      Read more at Khan Academy.
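
      A minimal sketch of this kind of comparison, using made-up numbers rather than the paper's data:

      ```python
      from scipy import stats

      # Hypothetical substance P activity measurements for the four groups
      control      = [12.1, 11.8, 12.5, 12.0]
      tetrodotoxin = [12.3, 11.9, 12.2, 12.4]
      veratridine  = [4.1, 4.6, 3.9, 4.3]
      both_drugs   = [11.5, 12.0, 11.7, 11.9]

      # One-way ANOVA: do any of the group means differ?
      f_stat, p_value = stats.f_oneway(control, tetrodotoxin, veratridine, both_drugs)
      print(f"F = {f_stat:.2f}, p = {p_value:.2g}")
      # A small p-value means at least one group differs; post hoc tests say which
      ```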

    3. Control

      These are explants obtained from the nucleus locus ceruleus. The explants are placed in a nutrient medium, and no drugs are given to this group. This group serves as a comparison for the groups treated with the drugs.

    4. Autoradiography

      The last step in the dot blot, in which the material of interest is detected using a radioactive probe. Here, the radioactive probes bound to proenkephalin were observed at three time points: 0 days, 1 day, and 3 days.

    5. The medullary explants exhibited a 50-fold rise in [Leu]enkephalin within 4 days, after a 2-day lag period, and continued increasing through 7 days, the longest time examined. In contrast, TH activity remained constant throughout, while PNMT decreased 60 percent in the first 4 hours, maintaining a stable plateau thereafter.

      Medullary explants were obtained from adult rats to understand the mechanism behind the change in transmitter expression.

      Tyrosine hydroxylase (TH) activity stayed consistent throughout the seven-day period. The TH enzyme is responsible for the synthesis of catecholamines, so the consistent activity indicates no change in catecholamine expression in this experiment.

      Next, the PNMT enzyme is responsible for adrenergic expression. PNMT exhibited a 60% decrease in the first four hours and was maintained at that level for the remainder of the time period.

      [Leu]enkephalin (a marker of opiate expression) showed a 50-fold rise within four days, after a 2-day lag, and continued to increase through seven days. There is thus a continuous increase in the expression of opiates as opposed to catecholamines.

    6. immunocytochemical reactivity

      Technique used to mark the target of interest using an antibody-based test. In this study, tyrosine hydroxylase and dopamine β-hydroxylase are the target molecules of interest; antibodies against each enzyme are used to detect them.

    7. Depolarization with veratridine completely blocked the increase of substance P

      Explants were depolarized with veratridine, which prevented the rise in substance P seen in untreated explants.

    8. explanted superior cervical ganglia

      The ganglia were removed from the animal and transferred to a nutrient medium. These explants were maintained in the medium for six months to one year. At several time points, explants were taken and assayed for substance P activity.

    9. in culture

      The neurons are dissected from the animal and grown in a dish. The dish contains supplemental factors and a medium that mimics the composition of the fluid inside the animal.

    10. grown in dissociated cell culture

      Neurons are separated from the animal through mechanical or enzymatic disruption. The separated neurons are transferred to a dish or culture plate. The neurons are maintained in the dish.

  7. Jul 2019
    1. The multiplet nuclei capture rate was comparable to single-cell RNA-seq analysis using the 10× platform

      To see if they were accidentally capturing more than one nucleus at a time, the authors mixed nuclei from mouse and human samples prior to running snRNA-seq. If they saw mouse RNA mixed with human RNA, this meant there was a multiplet (more than one nucleus was captured).

      However, they found that there were very low rates of multiplets, meaning that their experiment was working well.

    2. We aimed to gain insight into cell type–specific transcriptomic changes by performing unbiased single-nucleus RNA sequencing (snRNA-seq) (4) of 41 postmortem tissue samples

      The authors wanted to see if particular cell types have different gene expression in autistic brains. They also examined two different brain areas to see if there are regional differences.

    3. We generated 104,559 single-nuclei gene expression profiles—52,556 from control subjects and 52,003 from ASD patients (data S2)—and detected a median of 1391 genes and 2213 transcripts per nucleus, a yield that is in agreement with a recent snRNA-seq

      The authors calculated the total number of genes expressed in the single-nuclei data of controls and ASD patients and found it was about the same. The median gene number is lower than the transcript number because a gene can have multiple transcript forms (called isoforms).

    4. 10× Genomics platform

      The 10x Genomics system is a platform that isolates single nuclei and generates "libraries" (collections of RNA fragments that can be used to identify particular RNAs) from each nucleus.

    5. To compare changes in ASD to those in patients with sporadic epilepsy only, we generated additional snRNA-seq data from eight PFC samples from patients with sporadic epilepsy and seven age-matched controls (data S1)

      The authors wanted to make sure that any effects they were seeing were specific to ASD, and not epilepsy, so they included patients with epilepsy alone as an additional control.

    6. (fig. S1A; P > 0.1, Mann-Whitney U test)

      This means that the control and ASD subjects didn't differ in age, sex, or RNA quality. This is important because results can be biased by uncontrolled factors (e.g., what if there are more females in one group, and the effect you're seeing is really due to sex?)

      The Mann-Whitney U test is a statistical test that they used to show that there are no significant differences between the controls and ASD subjects.
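
      A minimal sketch of this kind of check, with illustrative ages rather than the study's data:

      ```python
      from scipy import stats

      # Hypothetical subject ages (years) for the two groups
      control_ages = [15, 22, 13, 19, 25, 17, 21]
      asd_ages     = [14, 23, 12, 20, 24, 18, 16]

      # Mann-Whitney U test: nonparametric comparison of two independent groups
      u_stat, p_value = stats.mannwhitneyu(control_ages, asd_ages, alternative="two-sided")
      print(f"U = {u_stat}, p = {p_value:.3f}")  # p > 0.1 -> no detectable difference
      ```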

    1. Relative to the wild-type protein, the evolved triple mutant catalyzes the reaction more than seven times faster, with turnover frequency (TOF) of 46 min–1 (Fig. 1E).

      Via site-saturation mutagenesis, positions V75 and M103 along the protein sequence were identified as sites where mutations were likely to be beneficial, and the amino acids at these positions were randomized (i.e., replaced by random ones). A large number of random variants, which together constitute a library, are produced and then screened in an attempt to discover a highly active variant among them. The evolved triple mutant fits the bill.

    2. a 12-fold improvement over the wild-type protein (Fig. 1D).

      A recombinant protein is a protein encoded by a gene that has been cloned into a system that supports expression of that gene. Modification of the gene by recombinant DNA technology can lead to expression of a mutant protein. In this study, the M100D mutant is 12-fold more active than the wild-type protein (as it occurs in nature).

    3. site-saturation mutagenesis

      M100 is a specific amino acid residue within the protein sequence that has been identified as critical for the protein's function, so it is very important to determine the ideal amino acid for this position. Site-saturation mutagenesis is a form of random mutagenesis that substitutes a specific site with all 20 possible amino acids at once. In this study, this technique is employed to generate a series of enzymes with enhanced activity and enantiospecificity.
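
      The combinatorics of site saturation can be sketched in silico by substituting all 20 amino acids at one position. This is a toy illustration with a hypothetical sequence fragment, not the authors' protocol:

      ```python
      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

      def saturate(sequence, position):
          """All 20 single-residue variants at a given position (0-indexed)."""
          return [sequence[:position] + aa + sequence[position + 1:]
                  for aa in AMINO_ACIDS]

      # Hypothetical five-residue fragment; the final 'M' stands in for M100
      library = saturate("LQKHM", 4)
      print(len(library), "variants, e.g.", library[:3])
      ```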

    4. “Active site” structure of wild-type Rma cyt c showing a covalently bound heme cofactor ligated by axial ligands H49 and M100. Amino acid residues M100, V75, and M103 residing close to the heme iron were subjected to site-saturation mutagenesis.

      The proposed binding mode for the iron-carbene complex is one in which the carbene complex forms so that it takes the place of the axial methionine. The silane may approach from the more exposed side in the wild-type protein, which further explains the observed stereochemistry of the organosilicon product. The V75T, M100D, and M103E mutations may improve reactivity by providing better access of the substrate to the iron center.

      Complete carbene transfer to the protein may be the reason the catalyst becomes inactivated. The activity and lifetime of Rma cyt c may be improved with further mutagenesis.

    5. Carbon–silicon bond forming rates over four generations of Rma cyt c.

      Turnover frequency for each variant relative to the wild-type protein:

      WT: 1
      M100D: 2.8 ± 0.2
      V75T M100D: 4.6 ± 0.3
      V75T M100D M103E: 7.1 ± 0.4

      From these experimental data, it is clear that directed evolution has changed the enzyme from an unselective wild type into a highly enantioselective variant.

    6. In addition, diazo compounds other than Me-EDA could be used for carbon–silicon bond formation

      Additional diazo compounds that were successful were those with R3 = -CH3, -CH2CH3, and -Ph.

    7. Fig. 2 Scope of Rma cyt c V75T M100D M103E-catalyzed carbon–silicon bond formation.
      1. Rma cyt c V75T M100D M103E shows excellent enantioselectivity and turnover over a wide range of substrates. Silicon substrates with weakly electron-donating or activating methyl substituents (4), strongly electron-donating -OMe (5), weakly deactivating -Cl (6), strongly deactivating -CF3 (7), and moderately deactivating -COOMe and -CONMe (9 and 10, respectively) show moderate to excellent turnover and high selectivity. No direct relationship between turnover number and substituent effects emerges from this study. Enantioselectivity is excellent for all substrates. All products were identified using GC-MS, and no traditional organic chemistry techniques were used.
    8. Carbon–silicon bond formation catalyzed by heme and purified heme proteins.

      Readily available heme proteins were screened to identify the one that gave the highest enantioselectivity; this served as the starting point for directed evolution. Purified heme protein, silane, diazo ester, thiosulfate, methyl cyanide, and M9-N buffer (the medium for microbial growth) were stirred at room temperature under anaerobic conditions. Reactions were performed in triplicate. Unreacted starting material was recovered in all cases, and no further purification was carried out.

    1. We assembled and analyzed a dataset of 42 avian SDNs encompassing a broad geographical range, with data from islands (n = 17) and continents (n = 25) in tropical (n = 18) and nontropical (n = 24) areas (table S12). Although some of the other SDNs in the analyses included introduced species [e.g., (7, 34)], SDNs on O‘ahu present an extreme case of dominance by introduced species (>50%), coupled with extinction of all native frugivorous birds

      The authors surveyed data from seed dispersal networks across a variety of habitats, noting that O'ahu is an extreme case: more than half of its species are introduced, and all of its native frugivorous birds have gone extinct.

  8. Jun 2019
    1. The statistical significance of the observed topological patterns was assessed by contrasting observed values for each metric with the confidence interval from null models (13)

      To determine whether an observation is a consequence of a measured phenomenon, and not of chance, researchers must test (and reject) the null hypothesis. A null hypothesis states that a result or observation is due to chance, and so should be disregarded as insignificant.

      In this case, the authors test the significance of the identified patterns in the network by comparing the observed values to a null model, a generated collection of values randomized to produce a pattern based on no ecological mechanism (Gotelli and Graves, 1996). If the observed values fall outside the range of null values defined by the null model's confidence interval, they are considered significant.
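
      A minimal sketch of the null-model logic, using a stand-in metric and a simple shuffle; the authors used ecology-specific null models:

      ```python
      import numpy as np

      rng = np.random.default_rng(42)

      def metric(matrix):
          """Stand-in network metric: spread of animal degrees (row totals)."""
          return matrix.sum(axis=1).std()

      # Hypothetical observed animal x plant interaction matrix
      observed = np.array([[1, 1, 1], [1, 0, 0], [0, 1, 0]])

      # Null model: reshuffle the same number of links randomly across the matrix
      null_values = []
      for _ in range(1000):
          flat = observed.flatten()
          rng.shuffle(flat)
          null_values.append(metric(flat.reshape(observed.shape)))

      lo, hi = np.percentile(null_values, [2.5, 97.5])
      print(f"observed = {metric(observed):.2f}, null 95% CI = [{lo:.2f}, {hi:.2f}]")
      # An observed value outside the null interval is considered significant
      ```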

    2. To what extent are introduced species integrated into seed dispersal networks (SDNs), and do introduced dispersers replace extinct native animals? To investigate these questions, we examined interactions based on 3278 fecal samples from 21 bird species [tables S1 to S3 and (13)] collected over 3 years at seven sites encompassing broad environmental variation across Oʻahu (fig. S1 and table S1).

      The authors wanted to figure out to what extent introduced species are integrated into O'ahu's seed dispersal networks, and whether introduced animals have replaced the island's extinct native seed dispersers.

      Over the course of 3 years, they collected poop samples from 21 different bird species found at 7 different sites across the island of O'ahu. Supplemental figure 1 and table 1 describe the 7 locations' average rainfall, coordinates, and elevation to demonstrate the diversity of these areas. Another set of tables lists the different species and plants (introduced and native) found at each site.

    3. We estimated robustness of animals to the extirpation of plants (assuming bottom-up control) and robustness of plants to the extirpation of animals (top-down control). We simulated two scenarios, one in which order of extirpation was random and another—more extreme—scenario in which order was from the most generalist to the most specialist species. After using a null model correction on each metric to account for variation in sampling intensity and network dimensions across studies (14), we compared the 95% confidence intervals for the O‘ahu networks with the global dataset.

      The authors designed hypothetical scenarios in which generalist or specialist species in a network went extinct, then measured the severity of these extinctions by counting how many additional species would (theoretically) go extinct as a consequence.

      The use of a null model ensures that the results of these simulations are not a coincidence, that is, not due to chance.
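
      A minimal sketch of such an extirpation simulation on a toy bipartite network (the actual analysis used established robustness methods):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical animal x plant matrix: rows are dispersers, columns are plants
      net = np.array([
          [1, 1, 1, 1],  # generalist animal
          [1, 1, 0, 0],
          [0, 0, 1, 0],  # specialist animal
      ])

      def secondary_extinctions(matrix, removal_order):
          """Remove animals one by one; a plant is 'lost' once no disperser remains."""
          m = matrix.copy()
          plants_lost = []
          for animal in removal_order:
              m[animal, :] = 0
              plants_lost.append(int(np.sum(m.sum(axis=0) == 0)))
          return plants_lost

      random_order = rng.permutation(net.shape[0])     # scenario 1: random order
      generalist_first = np.argsort(-net.sum(axis=1))  # scenario 2: most extreme

      print("random:", secondary_extinctions(net, random_order))
      print("generalist-first:", secondary_extinctions(net, generalist_first))
      ```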

    4. To examine interaction dynamics across sites and to test their association with environmental variables, we calculated the dissimilarity (interaction turnover) between pairs of networks, using data limited to species present in the networks.

      The authors compared the similarities and differences between species' interactions across different locations around the island, taking into account the unique environments of each site.

    5. Beckett’s algorithm
    1. veratridine depolarization

      Veratridine is a drug that causes an increase in the sodium influx. The authors used the drug to cause depolarization.

    2. histofluorescence

      Fluorescent markers are used to label catecholamines in the neurons, which are then visualized using fluorescence microscopy.

    1. Locomotor sensitization

      A technique used to measure the movement, or locomotor activity, of an animal in an open field box. It is thought that with repeated administration of the drug, the animals can show an increase in locomotor activity, which is a sign of sensitization.

      Photobeams are placed on the walls of the box to record the movements of the animal. The mice can explore and get used to the test area of the open field.

      The animal is tracked for the distance covered in centimeters using an automated video system. The experiment is repeated on day 1 and on each day following the injection of cocaine (on days 8, 9, 10, 11). The total distance covered by the animal is recorded for each day.

      Watch the technique here at: https://www.jove.com/video/53107/assessment-cocaine-induced-behavioral-sensitization-conditioned-place

    2. CPP measures an animal’s preference for a particular place in its environment, which develops as that place becomes consistently associated with a rewarding stimulus and assumes some of the rewarding effects of the stimulus

      A three-chamber apparatus is constructed with access to two chambers for the animal for this experiment. There are 3 phases to this experiment: pre-conditioning, conditioning, and testing.

      Pre-conditioning: Animal can freely move on any side of the chamber. For each animal, the first preference of the chamber is noted. This was done on days 7 and 8 of the experimental protocol.

      Conditioning: The animals are conditioned with saline on the preferred side and with cocaine on the less preferred side. This was done on days 9, 10, and 11 of the experimental protocol.

      Testing: On day 12 of the experiment, the animals are allowed to have access to both sides of the chamber for 30 minutes. The time spent in the preferred chamber and the less preferred chamber is recorded.

      View the video to learn more about the conditioned place preference protocol. The protocol is describing how to measure craving in animals using morphine as the drug of preference. https://www.jove.com/video/58384/a-conditioned-place-preference-protocol-for-measuring-incubation

    3. long-term potentiation

      Slice electrophysiology is a technique that is widely used to study synaptic plasticity. Brain slices containing the nucleus accumbens region were obtained from the mice.

      For this technique, stimulating and recording electrodes are needed. Recording electrodes measure the electrical activity of the neurons in the area. Stimulating electrodes are used to stimulate dendrites in the brain region, eliciting a response that can be recorded via the recording electrode. The stimulus is given at a rate of 1 per minute.

      The stimulating electrode was placed in the nucleus accumbens, and the recording electrode was placed near to the stimulating electrodes.

      The amplitude or the size of the response can be calculated from each stimulus. Baseline values were obtained. After that, an LTP stimulus was given, and the post-LTP data was collected. The data were normalized to baseline values.

      Watch the video here on LTP is done in hippocampus, a brain region involved in memory: https://www.jove.com/video/2330/preparation-acute-hippocampal-slices-from-rats-transgenic-mice-for

    4. high-frequency stimulation

      The neurons are activated at a high frequency of 100 Hz. The protocol used here: four trains of 100-Hz tetanus, 3 minutes apart.

    5. FosB expression

      Chromatin immunoprecipitation technique is used to measure FosB expression.

      Briefly, the brain tissue was fixed with formaldehyde to crosslink the DNA-binding proteins. The DNA was sheared into small fragments, some of which contain the DNA-binding proteins. Using specific antibodies (against H3 and H4), the DNA-protein complexes were isolated. The proteins were digested, and the DNA was released. The specific DNA sequences of interest were amplified to see if they precipitated with the antibody.

      Watch the video here: https://www.jove.com/science-education/5551/chromatin-immunoprecipitation

    6. PCR

      mRNA was isolated from the brain tissue using the Trizol reagent. RNA was later reverse transcribed to cDNA or complementary DNA using primers of interest. The fold difference of mRNA over control values is calculated and compared across the groups.

      Check the video here on the technique: https://www.youtube.com/watch?v=0MJIbrS4fbQ

    7. Immunoblots

      Protein was extracted from the tissue. Proteins were separated by molecular weight on an SDS-PAGE gel and then transferred to a nitrocellulose membrane.

      The antibodies to H3 and H4 were applied to the membrane to detect the bands of interest.

      Watch the technique here: https://www.jove.com/science-education/5065/the-western-blot

    8. SAHA

      The drug was administered directly to the nucleus accumbens of the mice.

      To do so, the coordinates of the nucleus accumbens were obtained from a mouse brain atlas. The mouse was placed in a stereotaxic frame, and a cannula, guided by these coordinates, was inserted into the brain to inject the drug every day for 7 days.

    9. HDAC activity

      Nuclear fractions were obtained from the mice using a nuclear extraction kit, and HDAC activity was measured with an assay kit.

    10. 14 days after stopping 7 days of nicotine treatment

      The mice receive 7 days of nicotine or water, and then the animals are weaned off the drug for 14 days. Cocaine was administered to animals after day 14 of treatment.

    11. To investigate further the duration of the priming effect of nicotine

      What is the duration of nicotine exposure that is needed to obtain the priming response we see in these animals?

      Does nicotine need to be given closer to another drug or separated a few days apart?

    12. To test further the idea that histone acetylation and deacetylation are key molecular mechanisms for the effect of nicotine on the response to cocaine, we conducted two sets of experiments, one genetic and one pharmacological

      Next, the authors tested the idea of histone acetylation by using a low dosage of theophylline, an HDAC stimulator. In contrast to SAHA, the theophylline should decrease the response to cocaine.

    13. we asked whether we could simulate the effect of nicotine by specifically inhibiting deacetylases with the HDAC inhibitor suberoylanilide hydroxamine acid

      If nicotine is inhibiting HDAC activity, then by using an HDAC inhibitor, we should be able to mimic the effects of nicotine on LTP and FosB expression. This hypothesis was tested by using SAHA, an HDAC inhibitor.

    14. histone deacetylase (HDAC) activity directly in the nuclear fraction of cells in the striatum

      To answer this question, histone deacetylase (HDAC) activity was measured directly in the striatum.

    15. Does the hyperacetylation produced by nicotine result from activation of one or more acetylases or from the inhibition of deacetylases?

      The authors next addressed whether the acetylation of residues is due to an increase in activation of acetylases or due to inhibition of deacetylase.

    16. we used immunoblotting and examined the extent of chromatin modifications in the whole striatum of mice chronically treated with nicotine

      The authors observed the acetylation levels of H3 and H4 after 7 days of nicotine treatment in striatum tissue using chromatin immunoprecipitation and immunoblotting.

    17. whether nicotine enhances FosB expression in the striatum by altering chromatin structure at the FosB promoter and, if so, does it magnify the effect of cocaine?

      The authors asked the question: does nicotine increase FosB expression by altering the chromatin structure at FosB promoter?

    18. we gave cocaine (30 mg/kg) in two protocols: for 24 hours or 7 consecutive days followed by 24 hours of treatment with nicotine

      The animals were given cocaine for either 24 hours or 7 consecutive days, followed by 24 hours of nicotine treatment. FosB mRNA levels were then measured.

    19. does nicotine pretreatment followed by cocaine increase the response to cocaine, whereas the reverse order of drug treatment does not?

      These experiments were performed to determine whether the order of drug treatment matters: does nicotine pretreatment followed by cocaine increase the response to cocaine, while cocaine pretreatment followed by nicotine does not?

    20. We treated mice with nicotine (50 μg/ml) in the drinking water for either 24 hours (Fig. 1A) or 7 days

      Nicotine was added to the drinking water for the mice.

      7-day treatment: The mice drank the nicotine-containing water for 7 days. For the next 4 days, the mice received a cocaine injection intraperitoneally (an injection into the body cavity) once per day, with continuous exposure to nicotine-containing water.

      24 hours of treatment: The mice were exposed to the nicotine-containing water for 24 hours; for the next 4 days, the mice received a cocaine injection (once per day) intraperitoneally, with continuous exposure to nicotine-containing water.

    1. we expected the intervention to be particularly beneficial for women tending to endorse the gender stereotype.

      The authors predicted that the effect of the values affirmation intervention would be greater for women who more strongly endorse the gender stereotypes.

    2. We predicted a reduced gender gap in performance for women who completed the values affirmation.

      The authors' main prediction was that women who completed values affirmation would show a smaller gender gap than women who did not complete values affirmation.

    3. In this randomized double-blind study

      The authors used a double-blind study design, meaning that neither the students nor the teaching assistants working with the students knew the purpose of the study, or to which group each student was assigned. Double-blind studies are meant to reduce unintentional bias on the part of the participant or the researcher, interpreting the data.

    4. We tested whether values affirmation would reduce the gender achievement gap in a 15-week introductory physics course for STEM majors.

      The authors tested whether using a values affirmation intervention in their college physics course could reduce the performance gap between men and women.

    5. The values-affirmation intervention used in this study involves writing about personally important values (such as friends and family). The writing exercise is brief (10 to 15 min) and is unrelated to the subject matter of the course.

      The key variable in this experiment is whether a student experiences the values affirmation intervention.

      In the values affirmation intervention, students briefly write about a value they find personally important.

    1. optogenetics allows genetically targeted photosensitization of individual circuit components

      The specificity of optogenetic treatments is of particular clinical interest and relevance for neuroscientists. Because individual cells can be targeted in the living organism, optogenetics allows scientists to better understand how different brain cells function and communicate.

  9. May 2019
    1. To test whether activation of the VGATZI-PVT inhibitory pathway leads to body weight gain, we selectively photostimulated this pathway for only 5 min every 3 hours over a period of 2 weeks.

      The authors hypothesize that because stimulation of the ZI to PVT pathway evokes a large increase in food intake in a short amount of time, long-term stimulation should lead to weight gain.

    2. To test the time course and efficiency of optogenetic activation of VGATZI-PVT inhibitory inputs to evoke feeding, we used a laser stimulation protocol of 10 s ON (20 Hz) followed by 30 s OFF for more than 20 min to study ZI axon stimulation in PVT brain slices and feeding behavior. Stimulation of ZI axons with this protocol hyperpolarized and inhibited PVT glutamatergic neurons each time the light was activated (Fig. 3A). Mice immediately started feeding for each of the 30 successive trials of ZI axon laser stimulation (Fig. 3B and movie S4). The mean latency to initiate feeding was 2.4 ± 0.6 s when we used laser stimulation of 20 Hz (Fig. 3C).

      The authors followed an optogenetic protocol by intermittently turning on the stimulation light for 10 seconds followed by 30 seconds of no stimulation. With each 10 seconds of light on, they measured how long it took for the mice to begin eating.

    3. We crossed VGAT-Cre mice with vGlut2-GFP mice in which neurons expressing vesicular glutamate transporter (vGlut2) were labeled with green fluorescent protein (GFP) to study whether ZI GABA neurons release synaptic GABA to inhibit PVT glutamate neurons (16, 17).

      The authors bred two different mouse lines together: one parent expressed Cre in VGAT-positive neurons and the other parent expressed a protein that emits green fluorescence (GFP) in vGlut2-positive neurons.

      The researchers then used the offspring of this cross to record from GFP-positive cells in a slice and ask whether VGAT cells in the ZI provide input to these neurons.

    4. Laser stimulation (1 to 20 Hz) evoked depolarizing currents in ZI ChIEF-tdTomato–expressing VGAT neurons tested with whole-cell recording in brain slices, displaying a high-fidelity correspondence with stimulation frequency (Fig. 1B).

      The authors recorded the activity of the ChIEF-expressing neurons in brain slices using electrodes. Stimulating the slice with blue light activated the ChIEF-expressing neurons, causing them to fire in the same pattern with which they were stimulated (i.e., with high fidelity). Hz (hertz) refers to the number of times the light flashes per second; 20 Hz corresponds to 20 flashes of light per second, which caused the neurons to fire 20 times per second.

      This virtual lab demonstrates electrophysiological recordings of neurons: Neurophysiology Virtual Lab

    5. Cre recombinase–dependent rabies virus–mediated monosynaptic retrograde pathway tracing in vGluT2–Cre recombinase mice

      The authors identified the neurons that lie upstream and provide input to PVT neurons.

      They targeted excitatory PVT neurons using vGluT2-Cre mice and used a modified rabies virus that traffics into neurons that provide input to the starting population of cells.

    6. food intake was measured when food was put in a brightly illuminated chamber in a two-chamber light-or-dark conflict test

      Mice were placed in a chamber with two compartments—one with no lights and one brightly illuminated. Mice are innately averse to light and so will usually spend more time in the unlit compartment.

    7. After mice were partially fasted with only 60% of the normal food available during the preceding night, laser stimulation (20 Hz, 10 min ON followed by 10 min OFF, two times) of ChIEF-expressing PVT vGluT2 neurons reduced food intake (Fig. 4, F to H).

      The authors gave the mice a small amount of food to eat overnight, which meant that they were hungry during the experiment. Therefore, control mice commenced eating with short latency at the onset of the stimulation protocol.

    8. To explore the neuronal pathway postsynaptic to the VGATZI-PVT axon terminals, we injected Cre-inducible AAV-ChIEF–tdTomato selectively into the PVT of vGlut2-Cre mice (Fig. 4A and fig. S8A).

      The authors assessed the role of the neurons downstream (postsynaptic) of the ZI neurons that project to the PVT. They examined how food intake was affected when PVT excitatory neurons were optogenetically stimulated.

      Given that GABA is an inhibitory neurotransmitter, the PVT neurons would normally be inhibited when the ZI to PVT projection is active. Thus, when the PVT neurons are directly stimulated, food intake should decrease. Indeed, this is what the authors found.

    9. To test whether ZI GABA neurons exert long-term effects on energy homeostasis, we microinjected AAV-flex-taCasp3-TEVp, which expresses caspase-3 (24), into the ZI of VGAT-Cre mice to selectively ablate ZI GABA neurons (fig. S7).

      The authors selectively killed ZI GABA neurons by using an AAV to express a caspase in these neurons. Caspase-3 is an enzyme that induces cell death.

    10. A chemo-genetic designer receptor exclusively activated by designer drugs (DREADD) was used to test the hypothesis that silencing the cells postsynaptic to ZI GABA axons, the PVT glutamate neurons, would enhance food intake. We injected Cre-inducible AAV5-hSyn-HA-hM4D(Gi)-IRES-mCherry coding for the clozapine-N-oxide (CNO) receptor into the PVT of vGlut2-Cre mice (25, 26) (fig. S9, A and B).

      Silencing of neurons in the PVT that receive input from ZI GABAergic neurons should increase food intake given that these neurons are inhibited by ZI GABA neurons, which increase food intake.

      The authors used a chemogenetic approach in which a modified (DREADD) receptor is expressed in the neurons using AAVs. The receptor is activated specifically by a synthetic drug (CNO) that has no other biological effect.

      The authors used this approach over an optogenetic method to silence the neurons as currently available optogenetic tools for inhibition are not very efficient.

    11. To confirm that PVT vGlut2 neurons were killed by the virus-generated caspase-3, we injected the Cre-dependent reporter construct AAV-tdTomato simultaneously with AAV-flex-taCasp3-TEVp to corroborate that reporter-expressing neurons were absent after selective caspase expression. With coinjection, little tdTomato expression was detected, whereas many cells were detected with injections of AAV-tdTomato by itself, consistent with the elimination of vGluT2 neurons in the PVT (fig. S10, A to D).

      To confirm that the caspase virus was killing cells, a tdTomato reporter, which makes the cells red under a fluorescent microscope, was injected at the same time as the caspase virus.

      The authors found that few tdTomato cells were present in mice that also received the caspase, compared to control mice that were injected with the tdTomato only. Thus, the caspase virus efficiently killed the PVT neurons.

    1. Let us assume that the genetic code is a simple one and ask how many bases code for one amino acid.

      In other words, how many bases in a row translate into one amino acid?

      Let's do a thought experiment (which is considerably cheaper than a laboratory experiment):

      Assume that each amino acid is coded for by two bases in a row. The code would have one of four different bases in the first position of the code (A, G, C, T) and one of four different bases for the second. How many combinations of pairs would be possible?

      For example: (1) A A (2) A G (3) A C (4) A T (5) G A (6) G G (7) G C (8) G T …

      If you continued to write out every combination, you would come up with 16 possible pairs of bases. However, that's four short of the 20 natural amino acids. This is a good sign that two bases is not enough to code for all possible amino acids (and, in fact, we now know that it takes three bases in a row).

      How many combinations would be possible if the code were a grouping of three bases?
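
      To make the counting concrete, here is a minimal sketch (mine, not Crick's) that enumerates the possible base combinations for codes one, two, and three bases long:

      ```python
      # Count how many distinct "code words" the four DNA bases can form
      # when the genetic code reads n bases at a time.
      from itertools import product

      BASES = "AGCT"

      for n in (1, 2, 3):
          combos = ["".join(p) for p in product(BASES, repeat=n)]
          print(f"{n} base(s) per codon: {len(combos)} combinations")

      # Output:
      # 1 base(s) per codon: 4 combinations
      # 2 base(s) per codon: 16 combinations  (4 short of 20 amino acids)
      # 3 base(s) per codon: 64 combinations  (more than enough for 20)
      ```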

    2. The crucial experiment is to put together, by genetic recombination, three mutants of the same type into one gene

      This "frame shift" experiment tests whether the bases are read in singlets, pairs, or triplets.

      https://ghr.nlm.nih.gov/primer/illustrations/frameshift.jpg

    3. These mutations are believed to be due to the addition or subtraction of one or more bases from the genetic message. They are typically produced by acridines, and cannot be reversed by mutagens which merely change one base into another. Moreover, these mutations almost always render the gene completely inactive, rather than partly so.

      By incorporating acridine into genetic material, Crick and coworkers produced mutations in DNA. These mutations led to the addition or deletion of one or more bases in the genetic message.

    1. the Force and Motion Conceptual Evaluation (FMCE)

      A secondary outcome measure was student scores on the Force and Motion Conceptual Evaluation. Because this test is administered throughout the country, the authors can compare their findings to the normal results for the population.

    2. in-class exams

      The main way the authors assessed student performance was to compare scores on multiple-choice exams. These scores are referred to as the main outcome measure.

    3. Students in the control group selected their least important values from the same list and wrote why these values might be important to other people.

      Students in the control group spent time writing about a value that was not important to their identity. This ensures that any difference between the values affirmation group and the control group is due to the act of self-reflection and affirmation, and not simply a result of general writing.

    4. As part of an online survey typically given in the course (week 2), students also indicated their endorsement of the stereotype that men perform better than women in physics.

      Level of endorsement of the stereotype that men perform better than women in physics is a moderating variable. Its value influences how much the values affirmation intervention affects course performance.

    5. attempts to reduce identity threat in authentic classroom contexts have been limited

      One of the key features of this study is that it takes place in an authentic college classroom, rather than being an artificial, one-time laboratory experiment.

    1. We performed a genome-wide screen for loci affecting overall body size in six species of Darwin’s finches that primarily differ in size and size-related traits: the small, medium, and large ground finches, and the small, medium, and large tree finches (Fig. 1, A and B, and table S1)

      The investigators initially had to decide that they were going to use samples from six species of finches. Then, they screened the entire genome of these species to look for genetic variants in different individuals. The objective was to see if any variation at any given location was associated with size and/or size-related traits.

    2. We genotyped a diagnostic SNP for the HMGA2 locus in medium ground finches on Daphne Major that experienced the severe drought in 2004–2005 (n = 71; 37 survived and 34 died) (11).

      To look at the specific finches that experienced the 2004-2005 drought, the researchers genotyped a HMGA2-specific SNP in both survivors and victims (71 total birds).

    3. we genotyped an additional 133 individuals of this species for a haplotype diagnostic SNP (A/G) at nucleotide position 7,003,776 base pairs in scaffold JH739900, ~2.3 kb downstream of HMGA2.

      Lamichhaney and colleagues took a closer look at another 133 birds. Specifically, they investigated a SNP at a certain position in the genome.

      Among the 17 SNPs, researchers knew that the large finches were homozygous (LL in Figure 2D) for one haplotype group and the small finches were homozygous (SS) for another haplotype group. However, what was going on with the medium ground and tree finches?

      This particular SNP was shown to be associated with only beak and body size within these medium finches.

    4. we investigated whether the HMGA2 locus is primarily associated with variation in body size, beak size, or both.

      HMGA2 had been identified as a candidate gene, and the SNPs within the 525-kb region within HMGA2 had been located. Therefore, the researchers attempted to see which trait this gene was specifically related to.

    5. We identified 17 SNPs showing high genetic divergence between large and small ground finches and tree finches (FST > 0.8) at nucleotide sites in highly conserved regions across birds and mammals (PhastCons score > 0.8) (Fig. 2C).

      The researchers again calculated the fixation index; however, this time it was only for the ~525-kb variable region that contained the HMGA2 gene.

      Remember, the fixation index score ranged from 0 (complete sharing of genetic material) to 1 (no sharing of genetic material).

      For a good explanation of a SNP, see this video.

    6. We constructed a maximum-likelihood phylogenetic tree on the basis of this ~525-kb region

      The genome-wide fixation index scan found that a region around 525 kb in size showed the most striking differences.

      Lamichhaney and colleagues constructed another phylogenetic tree based on the alignment of this region in all of the samples.

    7. scan comparing large, medium, and small ground finches and tree finches (Table 1) identified seven independent genomic regions with consistent genetic differentiation (ZFST > 5) in each contrast (Fig. 2A and table S2).

      Researchers performed pairwise, genome-wide fixation index (see definition in the glossary section) scans across the whole genome, using non-overlapping 15-kb windows.

      The following three comparisons were made: 1) Large ground/tree versus medium ground/tree; 2) Large ground/tree versus small ground/tree; and 3) Medium ground/tree versus small ground/tree.

      A compilation of SNP calls yielded 44,767,199 variable sites within or between populations. The fixation index score ranged from 0 (complete sharing of genetic material) to 1 (no sharing of genetic material).

      Each index value was then transformed into a Z-score, which is simply a measure of how many standard deviations below or above the population mean a raw score is. Further analysis was done if the Z-score of the fixation index was greater than five. See the supplementary materials for more information.
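
      A hedged sketch (not the authors' actual pipeline) of the Z-transformation described above:

      ```python
      # Convert per-window FST values into Z-scores (ZFST): the number of
      # standard deviations each window lies above the genome-wide mean.
      import numpy as np

      def z_transform(fst_windows):
          fst = np.asarray(fst_windows, dtype=float)
          return (fst - fst.mean()) / fst.std()

      # Placeholder FST values for 10,000 hypothetical 15-kb windows:
      rng = np.random.default_rng(0)
      fst = rng.beta(1, 20, size=10_000)
      zfst = z_transform(fst)
      print(f"{(zfst > 5).sum()} windows exceed ZFST > 5")
      ```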

    8. We constructed a maximum-likelihood phylogenetic tree on the basis of all 180 genome sequences (Fig. 1C)

      The nucleotide alignment of the variable positions from all 180 samples (60 birds plus 120 from a previous study) allowed the scientists to generate a phylogeny using the software FastTree.

      See here to learn more about FastTree and its maximum-likelihood method.

    9. We combined these data with sequences from 120 birds, including all species of Darwin’s finches and two outgroup species (15)

      DNA extraction and whole-genome sequencing were performed using samples from 60 birds. In addition, 120 bird samples from a previous study (Lamichhaney 2015) were used.

      The sequence reads were analyzed and trimmed using FASTQC and FASTX software, respectively. For the full list of software used, see the supplementary materials.

      The researchers used the genome assembly of a medium ground finch as a reference genome. The reads from the samples were aligned to this reference.

    10. We sequenced 10 birds from each of the six species (total 60 birds) to ~10× coverage per individual, using 2 × 125–base pair paired-end reads. The sequences were aligned to the reference genome from a female medium ground finch (12).

      Lamichhaney and colleagues sequenced a total of 60 birds. Sequencing is a technique used to read the order of bases in DNA.

      For a history of DNA sequencing and assembly, this resource from HHMI BioInteractive is a great tool.

      This video shows specifically how Illumina Sequencing Technology works.

      If a finch genome is 1 Gbp (one billion base pairs), sequencing "to ~10x coverage per individual" would mean that you obtain 10 Gbp of sequencing data.

      "Using 2 x 125-base pair paired-end reads" refers to the fact that the fragments sequenced were sequenced from both ends and not just one. Refer to the videos above for more information.

    11. We then genotyped individuals of the Daphne population of medium ground finches that succumbed or survived during the drought of 2004–2005.

      Genotyping establishes a genetic code for each individual finch. With birds, this can typically be done using plucked feathers, blood, or eggshell membranes.

      The researchers here used blood samples that were collected on FTA paper and then stored at -80°C. FTA paper is treated to bind and protect nucleic acids from degrading.

      This type of scanning is used to identify specific gene markers that are highly variable. The researchers wanted to identify a locus associated with beak size variation.

    1. Compared with REPAIRv1, REPAIRv2 exhibited increased specificity, with a reduction from 18,385 to 20 transcriptome-wide off-targets with high-coverage sequencing (125x coverage, 10 ng of REPAIR vector transfected)

      To more rigorously compare the off-target activity of two systems, the authors performed sequencing with higher coverage.

      Recall that earlier they used 12.5x coverage. Here, they used 125x coverage.

      Why is this important? Cellular genes are expressed at different levels which leads to a different number of individual mRNA molecules. The more abundant a particular molecule is, the easier it is to detect it at a given coverage. When you increase the coverage, you have a chance to catch molecules that are less abundant in the cell.

      This is exactly what happened with REPAIRv1: at 125x coverage, the authors detected off-target edits across the majority of transcripts (18,385 sites, compared with roughly 20,000 protein-coding genes in our genome). By contrast, the REPAIRv2 system was far more specific, producing off-targets roughly 1,000 times less frequently.

    2. We further explored motifs surrounding off-targets for the various specificity mutants

      Inspired by other explorations into 3' and 5' motifs, the authors looked at transcripts with off-target effects—specifically at two nucleotides surrounding the edited adenosine.

    3. A majority of mutants either significantly improved the luciferase activity for the targeting guide or increased the ratio of targeting to nontargeting guide activity, which we termed the specificity score

      The authors looked at two characteristics of the modified protein variants.

      First, they tested whether the mutant had changed its editing activity. This was calculated by looking at the restoration of the Cluc luciferase signal. While the authors didn't necessarily want increased editing activity, they wanted to avoid a loss of editing activity. 

      However, higher catalytic activity sometimes leads to more off-target effects. Therefore, the authors calculated the ratio between the Cluc signal in targeting and non-targeting conditions, i.e., the specificity score. The higher the score, the more specific a mutant variant was.
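
      A minimal sketch of the specificity score (the exact formula is inferred from the text, not quoted from the paper):

      ```python
      # Specificity score: restored Cluc signal with the targeting guide
      # divided by the signal with the non-targeting guide. A higher ratio
      # means less guide-independent (off-target) editing.
      def specificity_score(cluc_targeting, cluc_nontargeting):
          return cluc_targeting / cluc_nontargeting

      # Hypothetical luminescence readings (arbitrary units):
      print(specificity_score(cluc_targeting=900.0, cluc_nontargeting=45.0))  # 20.0
      ```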

    4. we generated an RNA-editing reporter on Cluc by introducing a nonsense mutation [W85X (UGG→UAG)],

      To create the reporter, the researchers "broke" the gene for the Cluc luciferase by introducing a mutation in the UGG codon, changing it to UAG (a nonsense mutation, which signals the ribosome to stop translation). Since this codon was positioned in the beginning of the Cluc transcript, no luciferase was synthesized in the cell.

      The A (adenosine) in the UAG codon was the target for RNA editing. Both Cas13b-mediated RNA recognition and ADAR-mediated editing were required to remove the stop codon at the beginning of the Cluc transcript, which would restore Cluc expression.

      This means that Cluc luminescence would only be seen where editing (both targeting and deamination) was successful.

    5. We next characterized the interference specificity of PspCas13b and LwaCas13a across the mRNA fraction of the transcriptome.

      The next question was to understand the specificity of Cas13 across the whole cellular transcriptome (the portion of the genome that is transcribed). In the previous experiments, the researchers looked at the expression of only one target (unmodified or modified); here, they expanded their view from a single target to every mRNA in the cell.

      To do that, the authors transfected the cells with a Cas13 enzyme (LwaCas13a or PspCas13b), a gRNA targeting Gluc, and a plasmid containing the gene for Gluc.

      The control cells got an irrelevant gRNA instead of the gRNA targeting Gluc. As an additional control for comparison, the authors used shRNA-mediated knockdown of Gluc in a parallel cell culture.

      After 48 hours the researchers collected the cells, extracted mRNA, and determined the sequences and the number of copies for each transcript.

    6. We transfected HEK293FT cells with either LwaCas13a or PspCas13b, a fixed guide RNA targeting the unmodified target sequence, and the mismatched target library corresponding to the appropriate system.

      The cells were transfected with a nuclease, a gRNA for the non-mutated target site and a whole library with all possible plasmid variants. After 48 hours the researchers collected the cells, extracted RNA, and determined which mutated sequences from the library were left uncleaved.

      The transcript with the unmodified sequence was depleted most efficiently so that its level was the lowest after the cleavage. The levels of all other sequences with substitutions decreased to a lesser extent or did not decrease at all. The better Cas13 cut the sequence, the higher the depletion of this sequence was.

      The authors then compared the sequences by their "depletion scores."
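
      A hedged sketch of a depletion score in the spirit described above (the authors' exact definition may differ; see their supplementary materials):

      ```python
      # Log2 fold change in a library member's read count after Cas13
      # treatment, relative to an untreated control. More negative means
      # more strongly depleted, i.e., better cleaved.
      import math

      def depletion_score(count_treated, count_control):
          return math.log2((count_treated + 1) / (count_control + 1))

      # Hypothetical read counts for two library members:
      print(depletion_score(count_treated=20, count_control=2000))    # strongly depleted
      print(depletion_score(count_treated=1900, count_control=2000))  # barely depleted
      ```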

    7. Sequencing showed that almost all PFS combinations allowed robust knockdown

      Substitutions in the PFS motifs did not affect how well Cas13a and Cas13b found and cut the target sequences. As a result, sequences with such substitutions were depleted as successfully as the control sequence, which was unmodified.

    8. To more rigorously define the activity of PspCas13b and LwaCas13a, we designed position-matched guides tiling along both Gluc and Cluc transcripts and assayed their activity using our luciferase reporter assay.

      To figure out which parts of the Gluc or Cluc RNA molecules were the best targets for Cas13, the authors generated a series of gRNA guides where each guide was shifted one to several nucleotides relative to the previous one. This is called tiling.

      In this way, the guides together could cover the whole sequence or a part of a sequence that a researcher was interested in. See figure 4A and 4C or figure 5A for a visual of how the guides were "tiled."

    9. Therefore, we tested the interference activity of the seven selected Cas13 orthologs C-terminally fused to one of six different localization tags without msfGFP.

      The authors took the seven best-performing orthologs from the previous part of the study and replaced the msfGFP domain at the C-terminus of each with one of six different localization sequences. They then tested Gluc knockdown in the same way they had previously tested LwaCas13a.

    10. We transfected human embryonic kidney (HEK) 293FT cells with Cas13-expression, guide RNA, and reporter plasmids and then quantified levels of Cas13 expression and the targeted Gluc 48 hours later

      To directly compare the effectiveness of the Cas13a, b, and c orthologs, the authors transfected cells with two luciferases, a Cas13 ortholog, and two different gRNAs targeting the Gluc luciferase.

      They measured Gluc luciferase activity. Reduced Gluc luciferase activity indicated interference from the Cas13 ortholog and successful targeting by the gRNA.

      They determined the expression of Cas13 to see whether Gluc knockdown depended on the quantity of Cas13 rather than on the specific ortholog.

    11. Here, we describe the development of a precise and flexible RNA base editing technology using the type VI CRISPR-associated RNA-guided ribonuclease (RNase) Cas13

      In this article, the authors describe how they created a system that can edit RNA molecules. They used a Cas13 protein fused to an adenosine deaminase. The Cas13 protein recognizes specific sequences on an RNA molecule, and the adenosine deaminase edits bases, converting A to I (which is functionally read as a G).

      The authors improved the targeting specificity (accuracy) and editing rate (precision) by Cas13 and deaminase mutagenesis, and determined the sequences for which this system is most effective. They showed one application of this technology by correcting a series of disease-causing mutations at the cellular level.

    1. We evaluated the collective scrolling and tribological behavior of many individual graphene patches and created a density distribution of their tribological state in order to assess their contribution to the observed friction

      The authors conducted theoretical simulations of the sliding behavior of an ensemble of graphene sheets to elucidate the macroscale scrolling phenomenon. To explore the mesoscopic friction behavior, they calculated the number density (number of patches per unit volume) as a function of the coefficient of friction and time by grouping the friction coefficients collected over the ensemble of graphene patches.

    2. we performed a large-scale MD simulation for an ensemble of graphene-plus-nanodiamonds present between DLC and the underlying multilayered graphene substrate (fig. S8).

      To understand the transition of friction from the nanoscale to the macroscopic superlubric condition observed in experiments, the authors simulated a mesoscopic scenario: they created and analyzed an ensemble (an assembly of systems) of graphene patches and nanodiamonds between the DLC and the graphene substrate, subjected to sliding friction.

    3. We have simulated the effects of surface chemistry and considered the role of defects

      To understand the role of defects in superlubricity, the authors performed computer simulations by introducing double vacancies and Stone-Wales defects on graphene sheets. Studies were conducted in both dry and humid environments.

    4. DLC-nanodiamond-graphene system in a humid environment

      After experimentally observing the effect of humidity on friction, the authors extended their studies with computer simulations to further analyze the interaction between water molecules and graphene in the DLC-nanodiamond-graphene system in a humid environment.

  10. Apr 2019
    1. This prototype includes a MOF-801 layer (packing porosity of ~0.85, 5 by 5 by 0.31 cm, containing 1.34 g of activated MOF), an acrylic enclosure, and a condenser

      Since this paper was published, the authors have refined and optimized the device and tested it under desert conditions with record-high efficiency.

      See "Related Content" tab for: Kim, Hyunho, et al. "Adsorption-based atmospheric water harvesting device for arid climates." Nature communications 9.1 (2018): 1191.

    2. Experiments were performed in a RH-controlled environmental chamber interfaced with a solar simulator.

      To test the material in a laboratory setup, the authors use an enclosed chamber in which conditions such as humidity, temperature, and solar illumination can be regulated. This guarantees control over the experimental conditions and reproducibility.

    3. activated (solvent removal from the pores) by heating at 150°C under vacuum for 24 hours

      Heating under reduced pressure lowers the boiling point of liquids. This allows all the solvent and water molecules trapped in the MOF to evaporate easily, emptying all the cavities before starting the experiment.

    1. We carried out more detailed analysis of the wear track that

      In order to understand the wear properties of the graphene-nanodiamond compound after the sliding experiments, the authors performed electron microscopy studies, which can reveal the structure of the material in the wear debris.

    2. Raman analysis

      Raman spectroscopy is a chemical analysis technique capable of probing the chemical structure, crystallinity, and molecular interactions of materials.

    3. The contact area normalized with respect to the initial value at t = 0 is ~1 (22), as shown in Fig. 4C

      The authors defined the contact area as the area of graphene atoms that lie within chemical-interaction range of the DLC tip atoms. The normalized contact area is the contact area at any time t divided by the initial contact area at time t = 0 (when the graphene patches are fully expanded).
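
      In symbols (notation assumed from the description above, not taken from the paper):

      ```latex
      A_{\mathrm{norm}}(t) = \frac{A(t)}{A(0)} \approx 1
      ```

      where A(t) is the contact area at time t and A(0) is the initial, fully expanded contact area.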

    4. To further explore the superlubricity mechanism, we performed molecular dynamics (MD) simulations (table S1)

      In order to elucidate the mechanism of graphene nanoscroll formation and the origin of the superlubric state, the authors conducted computer simulation studies.

    5. Our experiments suggest that the humid environment

      To investigate the effect of environmental conditions on nanoscale friction and superlubricity, the authors conducted experiments in humid air in place of dry nitrogen.

    1. computed by subtracting the climatological temperature value (17) for the month in which the profile was measured

      For example, if a temperature profile was taken on Feb. 19th, the authors subtracted from each depth point in that profile the average value for all Februaries at that depth over the last 50 years. This removes the seasonal temperature changes from the data set, allowing the authors to focus on the long-term variability instead.
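
      A minimal sketch (not the authors' code) of this anomaly calculation:

      ```python
      # Subtract the long-term mean for the same calendar month (the
      # climatology) from each observation at a given depth.
      import pandas as pd

      # Hypothetical observations: one row per (date, depth) measurement.
      df = pd.DataFrame({
          "date":  pd.to_datetime(["1972-02-19", "1985-02-07", "1990-02-11"]),
          "depth": [100, 100, 100],      # meters
          "temp":  [10.2, 10.6, 10.9],   # degrees Celsius
      })

      # Mean temperature per (month, depth) over the whole record:
      clim = df.groupby([df["date"].dt.month, "depth"])["temp"].transform("mean")
      df["anomaly"] = df["temp"] - clim
      print(df)
      ```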

    2. computed the contribution to the vertically integrated field shown in Fig. 3B from each 500-m layer

      By examining the ocean in distinct depth increments of 500m each, the authors aimed to determine where most of the cooling and heating of the North Atlantic is occurring.

    3. computed as the time derivative of heat content

      The authors calculated how much heat is being stored in the deep sea by looking at the cumulative temperature change from 1955-59 to 1970-74, as well as from 1970-74 to 1988-92.

      By integrating, or combining, the temperature data for all depths between 0 and 300 m and between 0 and 3000 m, the authors aimed to examine where the heat is going - into the surface ocean or the deep ocean.
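
      In symbols (a standard oceanographic relation; the notation here is assumed, not quoted from the paper), the heat content of a water column down to depth D and the heat storage rate are:

      ```latex
      H(t) = \rho \, c_p \int_{0}^{D} T(z, t) \, dz ,
      \qquad
      \text{heat storage rate} = \frac{dH}{dt}
      ```

      where ρ is seawater density, c_p is its specific heat, T(z, t) is the temperature anomaly at depth z, and D is the integration depth (300 m for the surface ocean, 3000 m for the full column considered).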

    4. running 5-year composites

      The authors combine 5 years of data into one value, typically by averaging all values. A running composite (sometimes known as a moving or running average) is calculated by creating a series of averages of different subsets of the full data set in order to compare to the original data set.

      Calculating a running composite is a common technique used with time series data in order to smooth out short-term fluctuations and highlight longer-term trends or cycles.

      For more information on how to calculate a moving average: http://www.statisticshowto.com/moving-average/
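
      A minimal sketch (not the authors' code) of a running 5-year composite:

      ```python
      # Centered moving average over a 5-year window: each output value is
      # the mean of a year and its four nearest neighbors.
      import numpy as np

      def running_composite(values, window=5):
          kernel = np.ones(window) / window
          return np.convolve(values, kernel, mode="valid")

      # Hypothetical yearly temperature anomalies (degrees C):
      yearly = np.array([0.02, -0.01, 0.05, 0.04, 0.07, 0.03, 0.08])
      print(running_composite(yearly))  # three overlapping 5-year means
      ```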

    5. Computation of the anomaly fields was similar to our earlier work (7), but some procedures were changed

      The authors calculated temperature anomalies by subtracting the seasonal temperature cycle from monthly data values.

      Every now and then, a shipboard temperature instrument can malfunction or be used incorrectly, recording temperatures far higher or far lower than what is realistic. These values need to be excluded to accurately study the oceans. To make sure such errors did not affect the study, the authors considered a particular range of data points, with cutoffs at the higher and lower ends of the range.

      Unlike their previous work, the authors used a less strict cutoff for when data values were considered good enough to use in their analysis. This is because they found that some large-scale temperature features were mistakenly being flagged as "bad" data under the stricter cutoff, despite those features being real and measurable events in the ocean.

    6. yearly and year-season objectively analyzed temperature

      Using data available in the World Ocean Database, Levitus et al. looked at both annual temperature data and average season temperatures within each year (for winter, spring, summer, fall).

      However, because temperature changes over the course of a year due to the changing of the seasons (summers are warm, winters are cold), this seasonality must be taken into account when studying the overall change in ocean temperatures. To "objectively analyze" the data, the natural seasonal temperature cycle was subtracted from each data point in order to focus on the trends over time.

      For each monthly temperature data point, the long-term average temperature for that calendar month was subtracted. The difference between the data point and the monthly average is called an anomaly.

    7. Using these data, yearly, objectively analyzed, gridded analyses of the existing data were prepared and distributed (7) for individual years for the period 1960 to 1990.

      The authors averaged monthly oceanographic data acquired by ships-of-opportunity and research vessels into annual temperature measurements for every year from 1960 to 1990.

      Scientists measure locations on Earth using longitude (180° E ↔ 180° W) and latitude (90° N ↔ 90° S).  Lines of longitude and latitude cross to create a grid across the planet.

      For this study, Levitus et al. combined temperature data for every 1° longitude by 1° latitude area of the ocean. Where multiple ships frequented the same location, those multiple data points were averaged into one value for each matching depth.

    1. unconstrained canonical correspondence analysis

      Refers to a statistical method that searches for multivariate relationships between two data sets.

      This method is most often used in genetics and ecological sciences. Learn more about why, how, and when to use it here.

    1. To investigate PFS constraints on REPAIRv1, we designed a plasmid library that carries a series of four randomized nucleotides at the 5′ end of a target site on the Cluc transcript

      Though the authors had already characterized the PFS preferences for Cas13b, they needed to check that the fusion of Cas13b with ADAR did not change its targeting efficiency and specificity. The researchers also wanted to confirm that the PFS would not constrain RNA editing. This matters because DNA base editors are limited by the PAM requirements of Cas9 and Cpf1, whereas an RNA editor free of sequence constraints could target anywhere in the transcriptome. Therefore, it was important to check the PFS constraints again.

    2. We mutated residues in ADAR2DD(E488Q) previously determined to contact the duplex region of the target RNA (Fig. 6A) (19).

      The researchers mutated amino acids in ADAR involved with binding to the RNA target or catalytic deamination to test whether they affected deamination.

  11. Mar 2019
    1. A gradual increase in homozygosity was then observed over the next five generations (Fig. 2D), as expected from the small number of breeding pairs

      The authors measured genetic diversity through the inbreeding coefficient and average nucleotide diversity, finding that the pattern of decreasing genetic diversity was as expected for an inbreeding population.

    2. inbreeding coefficient (F)

      A statistical calculation that estimates the probability that an individual inherited two identical genes from one ancestor who occurs on both sides of the pedigree. The higher it is, the more inbred the organism is.

      It was used to both help assign the founder bird to a species, and to quantify the genetic diversity of the Big Bird population as it bred amongst itself.
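
      One common marker-based estimator, as a sketch of the general idea (the authors' pedigree- and sequence-based calculations are more involved):

      ```python
      # F = 1 - Hobs/Hexp: the fractional deficit of observed heterozygosity
      # relative to the Hardy-Weinberg expectation. F = 0 means no deficit;
      # values approaching 1 indicate strong inbreeding.
      def inbreeding_coefficient(h_obs, h_exp):
          return 1.0 - h_obs / h_exp

      # Hypothetical heterozygosities:
      print(inbreeding_coefficient(h_obs=0.21, h_exp=0.30))  # F = 0.3
      ```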

    3. We followed the survival and breeding of this individual and its descendants for six generations over the next 31 years.

      The Grants began studying finches on Daphne Major Island in 1972. Along with team members, they returned there every year until 2012 to observe the finches. Observations included the identities of mating pairs, the number of offspring (used to construct pedigrees), and bird mortality.

      The techniques used include catching the birds and banding them (installing identification bands around their legs), as well as taking measurements of the birds (including body mass and beak measurements). Blood samples were also collected for later DNA analysis.

    1. an array of measures ranging from rotational behavior (Fig. 6D) to head position bias and locomotion

      Motor behavior was assessed using amphetamine-induced rotations, head position bias, and locomotion.

      Rotation tests were performed by injecting amphetamine 30 minutes before the trial and placing the animal in an opaque cylinder. Rotations ipsilateral to (toward the same side as) the 6-OHDA lesion (clockwise) were added, and contralateral (opposite-side) rotations were subtracted.

      Head position bias was determined from the number of head tilts over time, counting deviations greater than 10 degrees to the left or right of midline.

      Locomotion was measured using software called Viewer that tracked motion and calculated distance traveled.

    2. To probe the functional connectivity between these layer V projection neurons and STN in the PD animals, we conducted a separated-optrode experiment in anesthetized animals in which the fiber-optic and recording electrodes were placed in two different brain regions in Thy1::ChR2 animals

      Based upon previous findings that the cortex and STN are connected, the investigators wanted to know if driving M1 layer V neurons had an effect on STN neuronal firing and subsequent behavioral output. So they placed an optrode over M1 and a recording electrode in the STN.

    3. we used Thy1::ChR2 transgenic mice (22, 23) in which ChR2 is expressed in projection neurons, and we verified that in Thy1::ChR2 line 18, ChR2-YFP is excluded from cell bodies in the STN but is abundant in afferent fibers

      Thy1 (thymocyte differentiation antigen 1) is expressed in the axonal projections of mature neurons. When ChR2 expression is placed under the control of the Thy1 promoter, the protein is expressed in projection neurons rather than in the somata of local neurons.

    4. Therapeutic effects could arise from driving axonal projections that enter the STN

      In other words, beneficial effects could arise by activating or targeting axonal projections entering the STN, as opposed to direct STN interventions.

    5. Because simple inhibition of excitatory cell bodies in the STN did not affect behavioral pathology and because HFS (90 to 130 Hz) is used for electrical DBS, we used ChR2 to drive high-frequency oscillations in this range within the STN.

      Since inhibiting excitatory neurons in the STN with eNpHR and activating glial cells with ChR2 were both insufficient to correct motor deficits in hemiparkinsonian rats, the authors attempted to mimic high-frequency stimulation using ChR2 in the excitatory neurons.

    6. S100β staining

      S100β, also known as S100 calcium-binding protein B, is expressed by mature astrocytes. S100β can also be used as a peripheral biomarker of blood-brain barrier permeability.

    7. To inhibit the excitatory STN neurons directly, we delivered lentiviruses carrying eNpHR under the CaMKIIα promoter to the right STN of the hemiparkinsonian rats. CaMKIIα::eNpHR labeled with enhanced yellow fluorescent protein (EYFP) expression was specific to excitatory neurons

      Second generation lentiviruses encoding the enhanced halorhodopsin were created using three plasmids.

      One plasmid, containing the gene of interest (eNpHR) under the control of the promoter (CaMKIIα), is called the transfer vector. The transfer vector usually also encodes a fluorescent protein to monitor expression; in this experiment, enhanced yellow fluorescent protein (EYFP) was added after the eNpHR sequence.

      A second plasmid contains the envelope gene, usually VSV-G (from vesicular stomatitis virus), which allows a broader range of cells to be infected. The third plasmid contains all of the packaging genes necessary to create a functional viral particle.

      When all three plasmids are added to HEK293 cells (human embryonic kidney), viral particles are released from the cells and suspended in the culture media. Learn more about lentiviruses here.

    8. The STN is a predominantly excitatory structure (30) embedded within an inhibitory network. This anatomical arrangement enables a targeting strategy for selective STN inhibition (Fig. 1B), in which enhanced NpHR (eNpHR) (21) is expressed under control of the calcium/calmodulin-dependent protein kinase IIα (CaMKIIα) promoter, which is selective for excitatory glutamatergic neurons and not inhibitory cells, fibers of passage, glia, or neighboring structures

      Since the subthalamic nucleus is excitatory, meaning the neurons within release the neurotransmitter glutamate, selectively targeting this region can be accomplished via the promoter calcium/calmodulin-dependent protein kinase II alpha (CaMKIIα). Placing a gene downstream of the CaMKIIα promoter will cause the gene to be selectively expressed only in excitatory neurons.

      The authors placed the gene sequence for halorhodopsin under the control of the CaMKIIα promoter and were able to selectively inhibit the firing of excitatory glutamatergic neurons in the subthalamic nucleus.

    9. based on single-component microbial light-activated regulators of transmembrane conductance and fiber optic– and laser diode–based in vivo light delivery

      Opsins are transmembrane proteins (spanning the cell membrane) that act as ion channels or pumps, allowing ions such as sodium, potassium, hydrogen, chloride, and calcium to move into or out of the cell.

      There are three main families of opsins: channelrhodopsins, halorhodopsins, and bacteriorhodopsins. Each opsin is light-sensitive because of a retinal chromophore within the transmembrane domain of the channel, and each family is sensitive to a specific wavelength of light.

    10. Therefore, optogenetics, in principle, could be used to systematically probe specific circuit elements with defined frequencies of true excitation or inhibition in freely behaving parkinsonian rodents.

      This group has led the field of optogenetics by setting the groundwork for how this "tool" can be employed in neuroscience research.

    1. Dot blot analysis

      A quantitative method to detect mRNA levels in the explants. Here, mRNA levels of proenkephalin are measured using this method.

      Learn more with this video from Abnova.

    2. Denervation

      A technique used to separate or eliminate a particular nerve supply to specific area(s) in the nervous system.

    1. We performed a parametric study, including varying the packing porosity (0.5, 0.7, and 0.9) and layer thickness (1, 3, 5, and 10 mm), and determined the time and amount of harvestable water for a solar flux of 1 sun

      The authors examined different parameters to optimize the water harvesting properties of the material. The porosity and thickness of the material are assumed to be the parameters having the largest effect because they determine the amount of water that can be adsorbed for a chosen compacted adsorbent layer.

    1. Dirichlet multinomial mixtures

      Refers to a computational technique for modeling the probability of microbial metagenomic data by representing the data as a frequency matrix of the number of times each taxon is observed in a sample.
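
      A minimal generative sketch (illustrative only; fitting the mixture requires dedicated inference code) of the Dirichlet-multinomial model underlying this method:

      ```python
      # Community composition is drawn from a Dirichlet prior; observed
      # taxon counts are then a multinomial sample from that composition.
      # A mixture model combines several such components.
      import numpy as np

      rng = np.random.default_rng(0)

      alpha = np.array([2.0, 5.0, 1.0, 0.5])  # hypothetical per-taxon parameters
      n_reads = 1000                          # sequencing depth of one sample

      proportions = rng.dirichlet(alpha)              # latent composition
      counts = rng.multinomial(n_reads, proportions)  # observed taxon counts
      print(counts)   # one row of the taxa-by-sample frequency matrix
      ```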