1,136 Matching Annotations
  1. Last 7 days
    1. A series of x-rays for the three animals treated with long-acting formulation–2 are shown in Fig. 2D.

      Fig. 2D shows X-ray images of the gastric resident dosage forms containing long-acting formulation–2 over 29 days. The dosage form in pig 3 lost two of its arms, but it still remained in the stomach and provided consistent drug concentrations.

    2. The serum concentration of levonorgestrel in pigs treated with Levora tablets is shown in Fig. 2A.

      Fig. 2A shows the concentration of the drug from a Levora tablet in a pig over the course of 2 days. The concentration reaches its peak of 199 ± 56 pg/ml (n = 5) at around 6 hours and drops to 5 ± 4 pg/ml (n = 5) by 48 hours.

    3. We then tested the stability of the interface between the material used to make the central elastomer (Elastollan 1185A10) and the arms of the dosage form (Sorona 3015G NC010) using a cyclic cantilever test. Over a 3-week period, there was progressive weakening of the interface (Fig. 1D).

      This method helps determine the load that can be repeatedly applied to the joint area between the central elastomer and the arms while the joint retains its structural integrity.

    4. arms that had V-shaped grooves in them

      This arm design was initially used because of the higher surface area it allowed in comparison to the other designs.

    5. 45 ± 2 pg/ml (n = 3) and 54 ± 29 pg/ml (n = 3) on days 3 and 11, respectively.

      Using the capsule with three arms, the authors found concentrations of 45 ± 2 pg/ml on day 3 and 54 ± 29 pg/ml on day 11. These results illustrate that the capsule steadily releases the hormone over a 20-day period. One problem with this formulation is that the drug concentration fluctuates significantly over time, which may reduce its effectiveness for pregnancy protection.

    6. Using long-acting formulation–1, we observed a maximal concentration of 55 ± 18 pg/ml (n = 3) on day 17 (Fig. 2B).

      Using the capsule with half of the drug loaded in poly(sebacic anhydride) and the other half in PDMS, the authors illustrated birth control drug delivery for an extended period. They observed a peak concentration of the drug on day 17. This shows that, unlike the daily tablet, their formulation slowly and steadily releases the drug and provides protection over a longer period.

    7. The flexural strength of the arms was reduced after 2 weeks of incubation in SGF

      After a period of 2 weeks in the simulated stomach acid, the strength of the caged arms decreased. The integrity of the polymeric arms is affected by exposure to the acid. The decrease in strength is ~25% after 2 weeks.

    8. assumes a size larger than that of the pylorus;

      After ingestion, the smart pill is released from the capsule and expands to a size larger than the stomach's opening, which prevents it from passing into the small intestine. This allows for the extended release of the birth control drug within the stomach.

    1. we developed an approach that uses rapid pH change to drive collagen self-assembly within a buffered support material, enabling us to (i) use chemically unmodified collagen as a bio-ink, (ii) enhance mechanical properties by using high collagen concentrations of 12 to 24 mg/ml, and (iii) create complex structural and functional tissue architectures. To accomplish this, we developed a substantially improved second generation of the freeform reversible embedding of suspended hydrogels (FRESH v2.0)

      For 3D printing of tissue components, the authors prefer to use collagen in its natural form, but this comes with a challenge. The 3D printing is performed by dispensing a stable bioink composed of collagen through a nozzle. During printing, unmodified collagen transforms into a gel through a self-assembly process (formation of links between the individual collagen structures). This self-assembly process is generally very difficult to control as it is challenging to dissolve collagen and prepare a stable bioink at physiological pH values of 7.4.

      To resolve this issue, the authors prepared bioinks by dissolving collagen in an acidic solution. Then, the authors dispensed this bioink into a support gel which rapidly changes the pH of the bioink (neutralizing it to pH=7.4), thereby promoting the self-assembly process. This allowed the authors to 3D print unmodified collagen, with higher density and strength, in complex shapes and structures.

    2. Finally, to demonstrate organ-scale FRESH v2.0 printing capabilities and the potential to engineer larger scaffolds, we printed a neonatal-scale human heart from collagen

      Previous experiments have only demonstrated small-scale applications of their printing method. So, in their last experiment, the authors printed a collagen structure of an infant-sized heart to show the larger-scale abilities of FRESH v2.0.

    3. A magnetic resonance imaging (MRI)–derived computer-aided design (CAD) model of an adult human heart was created, complete with internal structures such as valves, trabeculae, large veins, and arteries, but lacking smaller-scale vessels.

      A full-scale human heart model was developed in computer software. The overall structure, valves, and large vessels could be created from MRI data and computer-aided design, but smaller vessels (~100 microns) had to be created using a special computer program.

      A small subsection of this heart was selected and printed from the collagen

    4. we printed a tri-leaflet heart valve 28 mm in diameter

      A heart valve was printed with the collagen gel and was strengthened using standardized methods.

      Heart valves are structures with flaps of tissue (leaflets) that open and close. This allows blood to flow out to the rest of the body and prevents it from flowing back into the heart (regurgitation). The valve printed here was 28 mm in diameter, which is within the human range, and was tri-leaflet, meaning it contained three flaps.

      This is an important experiment because heart valves experience extreme forces in the body. If the overall goal for the authors is to 3D print full organs, they need to verify that their collagen can withstand these forces.

    5. We imaged the ventricles top-down to quantify motion of the inner and outer walls (Fig. 3K).

      The authors observed the heart as it contracted to see how the walls thickened during the contractions. This was done because wall thickening is a typical behavior of ventricle contraction.

    6. We next FRESH-printed a model of the left ventricle of the heart using human stem cell–derived cardiomyocytes.

      The authors 3D printed a left ventricle, the bottom left chamber of the heart that pumps blood out to the body. They printed collagen gel for structure, and also used a cellular ink containing human embryonic stem cell-derived cardiomyocytes (hESC-CMs) which are the contracting cells in the heart.

      The ventricle was designed as an ellipsoidal shell with an outer and inner wall printed from collagen gel. The hESC-CMs were printed in the space between the two walls along with 2% cardiac fibroblasts, cells that produce connective tissue.

    7. Collagen disks 5 mm thick and 10 mm in diameter were cast in a mold or printed and implanted in an in vivo murine subcutaneous vascularization model

      Two types of collagen disks were created: i) solid disks from a mold, and ii) porous disks printed by FRESH v2.0.

      These disks were implanted under the skin of live mice to observe cell movement into the porous matrix as a first step towards formation of a network of blood vessels.

      After 3 and 7 days, the disks were removed and examined for cell infiltration.

    8. In FRESH v2.0, we developed a coacervation approach to generate gelatin microparticles with (i) uniform spherical morphology (Fig. 1D), (ii) reduced polydispersity (Fig. 1E), (iii) decreased particle diameter of ~25 μm (Fig. 1F), and (iv) tunable storage modulus and yield stress

      In the second version of FRESH, the authors wanted to improve the gelatin microparticles used in the support bath. To do this, they used coacervation which is a chemical method of producing polymer droplets.

      The particles produced with this method were smaller, more spherical, and had less size variation. Additionally, the method allowed the elasticity and strength of the particles to be varied.

    9. FRESH works by extruding bio-inks within a thermoreversible support bath

      As stated previously, the FRESH method uses a supporting gel that contains tiny particles to support the bio-inks (such as collagen-containing gel) as they are printed. The supporting gel is thermoreversible, which means it can be melted back to a liquid by gentle warming to release the finished print.

    1. The ability of HASEL actuators to self-heal from electrical damage provides the means to scale up devices to produce a large actuation stroke by stacking multiple actuators (Fig. 2A)

      Stacking actuators allows the production of a larger actuation stroke, meaning multiple actuators together provide a longer working movement than a single actuator. Since these actuators self-heal from electrical damage, the actuation stroke can be increased without permanent damage to the system.

  2. Mar 2020
    1. FRESH works by extruding bio-inks within a thermoreversible support bath

      As stated previously, the FRESH method uses a supporting gel that contains tiny particles to support the bio-inks (such as collagen) as they are printed. The gel is thermoreversible, which means it can be melted and re-formed by changing the temperature.

    2. We first focused on FRESH-printing a simplified model of a small coronary artery–scale linear tube from collagen

      The authors 3D printed a model of a small coronary artery with type 1 collagen (1.4 mm inner diameter, ~300-micron thick wall). C2C12 cells in a collagen gel were then printed around the tube to test its ability to support tissue.

      Over 5 days, one tube was left static (no perfusion) and the other underwent active perfusion, which supplied the cells with nutrients.

    1. understanding the mechanical stability of the polymer at low pH

      Tests are performed in order to understand how this specific polymer reacts when introduced to different pH levels. These are done to ensure the polymer is stable in the low pH of the stomach.

    2. 126 ± 24 pg/ml (n = 3) on day 2

      Using the capsule with six loaded arms of PDMS, a maximum concentration was reached on day 2. Note how the maximum concentration is 126 ± 24 pg/ml, as opposed to long-acting formulation–1, which peaked at 55 ± 18 pg/ml.

    3. Despite this, the arms retained sufficient rigidity appropriate for incorporation in the dosage forms.

      Even with a decrease in flexural strength, the arms still have sufficient rigidity to move forward with the experiment. If the rigidity were affected more drastically, the arms could break inside the stomach. The arms are meant to keep the dosage form too large to pass through the pylorus, so arm breakage could lead to premature passage of the dosage form out of the stomach.

    4. Solid arms made of Sorona 3015G NC010 were placed in simulated gastric fluid (SGF) for various times

      Simulated gastric fluid is a man-made fluid that mimics the acidic conditions of the human stomach. It is used to test the stability of the polymer that the solid arms are made of over prolonged periods.

    5. we decided to use PDMS-based polymer matrices

      PDMS-based material was used in this experiment because this material has known uses in other sustained-release products.

    6. folding into a capsule to facilitate oral administration

      The dosage form is able to fold into itself to fit inside a capsule for ingestion.

    1. The use of liquid dielectrics enables HASEL actuators to self-heal from dielectric breakdown. In contrast to solid dielectrics, which are permanently damaged from breakdown, liquid dielectrics immediately return to an insulating state (fig. S5 and movie S1). This characteristic allowed donut HASEL actuators to self-heal from 50 dielectric breakdown events

      HASEL uses a liquid dielectric instead of a solid dielectric. Dielectric breakdown can be thought of like a lightning strike: the sky and the ground are the opposite sides of the dielectric. When the power is too high there is a breakdown, and the opposite sides connect and conduct electricity, like a lightning strike. The connection is extremely powerful, just like a lightning bolt, and will often burn holes through the dielectric. HASEL, by using a liquid dielectric, is protected from these breakdowns: when a hole is burnt through the liquid dielectric, it is filled in by the surrounding liquid.

    2. The actuator with larger electrodes displaced more liquid dielectric, generating a larger strain but a smaller force, because the resulting hydraulic pressure acts over a smaller area (Fig. 1D and fig. S2). Conversely, the actuator with smaller electrodes displaced less liquid dielectric, generating less strain but more force, because the resulting hydraulic pressure acts across a larger area (Fig. 1E).

      The size of the electrodes determines the balance between force and strain. Smaller electrodes squish less dielectric, producing less strain but higher force, because the resulting hydraulic pressure acts across a larger remaining area. Larger electrodes move more liquid, producing higher strain but less force, because the pressure acts over a smaller remaining area. Force is how hard HASEL can push, and strain is how much you see HASEL change shape.

    3. Because Maxwell pressure is independent of the electrode area, actuation force and strain can be scaled by adjusting the ratio of electrode area to total area of the elastomeric shell.

      The electrostatic attraction between the electrodes, the Maxwell pressure, does not depend on the size of the electrodes. This means that strain and force can be traded off by changing how much of the shell the electrodes cover: the larger the electrode area, the greater the strain (and the smaller the force). Think of what a water balloon looks like when you push it down with one finger compared to two. The more fingers you use, the larger the area of the balloon you touch and the higher the sides of the balloon rise.

    4. electrostatic Maxwell stress (20) pressurizes and displaces the liquid dielectric from between the electrodes to the surrounding volume.

      When voltage is applied, the two electrodes carry opposite charges, so they attract one another like opposite poles of a magnet (the force here is electrostatic, not magnetic). The Maxwell stress is the mathematical description of the pressure produced by this attraction, and it explains the pressure change that happens inside the actuator.
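
      For a sense of scale, the squeezing pressure follows the standard parallel-plate Maxwell-stress result (a general electrostatics relation, not a formula quoted from this paper):

      \[ P = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left(\frac{V}{d}\right)^2 \]

      where \(\varepsilon_0\) is the vacuum permittivity, \(\varepsilon_r\) is the relative permittivity of the liquid dielectric, \(V\) is the applied voltage, and \(d\) is the dielectric thickness: doubling the voltage roughly quadruples the pressure.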

    5. After the pull-in transition (Fig. 1A), actuation strain further increases with voltage (Fig. 1B). For this design, hydraulic pressure causes the soft structure to deform into a toroidal or donut shape (Fig. 1C).

      Once the pull-in voltage is passed, voltage and strain increase together: the more voltage that is applied, the more HASEL is squished.

      HASEL is initially a flat disk, but as hydraulic pressure builds it deforms into a donut shape.

    6. As voltage increases from V1 to V2, there is a small increase in actuation strain s. When voltage surpasses a threshold V2, the increase in electrostatic force starts to exceed the increase in mechanical restoring force, causing the electrodes to abruptly pull together (Fig. 1B)

      As you increase the voltage on the HASEL electrodes, the dielectric becomes more strongly polarized and the electrostatic attraction between the electrodes grows. This attraction fights against the pressure of the liquid dielectric in HASEL. The attraction is your finger pushing down on the water balloon, and the dielectric pressure is the balloon resisting your push.

      As voltage is increased, the attractive force becomes greater than the liquid pressure and the electrodes begin to move toward one another. When you finally push hard enough, the balloon begins to squish.

      The threshold voltage high enough to cause this abrupt change marks the pull-in transition.

    7. HASEL actuators, where an elastomeric shell is partially covered by a pair of opposing electrodes and filled with a liquid dielectric (Fig. 1A).

      Researchers chose HASEL's structure because they wanted stable uniform force to be distributed from the actuator.

      When you push on the middle of a water balloon, you raise all the sides around your finger equally. If you push off-center, or unevenly squish one side of the balloon, you get a less equal distribution of pressure.

    8. HASEL actuators generate hydraulic pressure locally via electrostatic forces acting on liquid dielectrics distributed throughout a soft structure.

      HASEL is like a half-filled water balloon. If you press on the middle of the balloon with your finger, you force the water from the middle of the balloon to the outer edges.

      HASEL is filled with a liquid dielectric that won't conduct electricity. The electrodes are placed on the top and bottom of HASEL. When power is supplied, they become oppositely charged and attract, which makes the two sides move toward one another.

      Like your finger on the water balloon, the electrodes squeeze HASEL and force the liquid dielectric to the outside, increasing the hydraulic pressure.

    9. Here, we develop a class of high-performance, versatile, muscle-mimetic soft transducers, termed HASEL (hydraulically amplified self-healing electrostatic) actuators.

      The researchers named this actuator HASEL. They describe HASEL as a high-performance, versatile, muscle-mimetic soft transducer.

      High-performance: compared to other soft-robot actuators, HASEL produces large, fast movements for the energy it uses.

      Versatile: HASEL uses liquid dielectrics that protect it from permanent breakdown.

      Muscle-mimetic: HASEL copies the movement of human muscles, expansion and contraction.

      Soft transducer: HASEL converts electrical energy into mechanical energy.

  3. Feb 2020
    1. the caged arms had a significantly higher fracture force than V-shaped arms (65.6 ± 7.5 N, n = 6 versus 51.7 ± 5.8 N, n = 6; P < 0.05, Student’s t test).

      The caged arms' fracture force was significantly higher than that of the V-shaped arms. The three-point bend test shows that it takes a greater force to break the caged arms than to break the V-shaped arms.
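
      To make the statistics concrete, here is a minimal sketch in Python (the scipy call is real; the individual force values are hypothetical, shaped only to match the reported means and n = 6, not the authors' raw data):

      ```python
      # Two-sample Student's t test on fracture forces (N).
      # Hypothetical values shaped to the reported 65.6 +/- 7.5 and 51.7 +/- 5.8.
      from scipy import stats

      caged    = [65.0, 72.9, 58.2, 66.1, 74.0, 57.4]
      v_shaped = [51.2, 57.4, 45.8, 52.0, 58.0, 45.8]

      t_stat, p_value = stats.ttest_ind(caged, v_shaped)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
      ```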

    2. gastric resident dosage form

      A dosage form that stays in the stomach during dosage release.

    3. the caged arms made using a thermoplastic polymer,

      The arms were made out of a thermoplastic polymer which is a substance that can be heated to become pliable, and upon cooling, it hardens.

    4. mechanical properties of V-shaped arms

      Different forces were applied to the material. The physical reactions to those forces were noted in order to determine the mechanical properties of this material.

      Examples of mechanical properties: strength, toughness, brittleness, etc.

    5. a drug-polymer matrix within the sleeve

      Within the structural polymer that makes up the arms, levonorgestrel (LNG) can be loaded for extended release.

    6. an outer sleeve made of a rigid polymer that provides mechanical integrity (structural polymer)

      The polymer making up the outer sleeve is for structural support of the dosage form.

    1. high-resolution stratigraphic framework

      To analyze what happened in these basin areas, the authors used sequence stratigraphy. They looked for variations in the successive layers of sedimentary rocks and the composition of the rocks. The order in which the different layers were deposited was carefully recorded. The chronostratigraphy aspect of their work involved tracking changes in the character of the rocks through geologic time. Fossils found in each layer can then be better placed in terms of when these organisms evolved, and in some cases, became extinct.

      Title: Stratigraphic Principles: https://www.biointeractive.org/classroom-resources/stratigraphic-principles

    2. paleomagnetics

      This is the study of the magnetism in rocks induced by Earth's magnetic field. The minerals in certain rocks lock in the direction and strength of the magnetic field at the time of their formation. The authors used this to determine the age of their fossil finds.

    3. CA-ID-TIMS U-Pb-dated volcanic ash

      This technique was used by the authors to date rock strata. Chemical Abrasion Isotope-Dilution Thermal Ionization Mass Spectrometry (CA-ID-TIMS) is a multistep, high-precision technique used to determine the relative amounts of uranium-238 and lead-206 present in zircon crystals. Zircon crystals, formed from volcanic or metamorphic rock, are extremely durable and resistant to chemical breakdown. They can survive major geologic events. Over time, new zircon layers form on top of the original crystal. The center of the zircon remains unchanged, keeping the chemical characteristics of the rock in which it originally formed. Every radioactive element has a decay rate. The length of time it takes half of the radioactive atoms in a sample to decay (form into a different element) is referred to as its half-life. The half-life of uranium-238, which decays to lead-206, is 4.47 billion years. The authors selected zircon crystals from their study area and dated them using CA-ID-TIMS.

      Image of thermal-ionization mass spectrometer: https://www.usgs.gov/media/images/usgs-thermofinnigan-triton-thermal-ionization-mass-spectrometer

    4. two U-Pb radiometric dates

      To determine the age of some of the rocks in their study area, the authors used uranium-lead dating. It is most often used to date volcanic and metamorphic rock and very old rocks. This technique involves the radioactive decay of U-238 and U-235 into lead (Pb). The 238 and 235 refer to the sum of the number of protons (92) and neutrons in the nucleus of each of these forms of radioactive uranium. The half-life is how long it takes half of a sample to undergo radioactive decay: for U-238 decaying into Pb-206 it is 4.47 billion years, and for U-235 decaying into Pb-207 it is 704 million years. Since the two forms of uranium have different half-lives and decompose into different forms of lead, they are a good check when calculating the age of a rock or fossil. This dating technique is best used on rocks that are from 1 million to 4.5 billion years old.
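
      The underlying age equation is the standard radioactive-decay relation (a textbook result, not specific to this study):

      \[ t = \frac{1}{\lambda}\ln\!\left(1 + \frac{{}^{206}\text{Pb}}{{}^{238}\text{U}}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}} \]

      where the Pb/U term is the measured ratio of daughter atoms to remaining parent atoms; the matching equation for \({}^{235}\text{U} \rightarrow {}^{207}\text{Pb}\) provides the independent cross-check described above.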

    5. The K–Pg boundary is demarcated by the decrease in abundance of Cretaceous pollen taxa

      Fossil pollen reveals how plant species evolved after the K-Pg extinction. The authors compared pollen from both sides of the K-Pg boundary. The authors discovered that immediately after the impact, there were few plants. This clip from HHMI's Rise of the Mammals shows how they obtained pollen and spore samples. 4:28:11 - 5:01:12

    6. concretions and are found in all observed facies

      These are compact masses of mineral matter, usually spherical or disk-shaped. They are carried along by water and become embedded in host rock that has a different composition. Concretions form in sediments before the sediments become rocks. The authors focused on searching for and cracking open concretions. Concretion explained in HHMI's Rise of the Mammals clip 7:32:20 - 9:22:20

    7. Loxolophus sp. [(E) and (F)

      Loxolophus is the oldest placental mammal discovered in this record. It has teeth for eating both meat and plants, an adaptation suited to a recovering ecosystem. Clip from HHMI's Rise of the Mammals 10:25:10 - 11:18:22

    8. Taeniolabis taoensis [(K) and (L)

      This mammal was a specialist and was proof that mammals were evolving and becoming larger as they specialized. Increased plant diversity fueled this growth. Clip from HHMI's Rise of the Mammals supports this: 11:18:09 - 11:51:06

    9. In addition, the pattern and abundance of vertebrates preserved in all paleoenvironments suggest that by ~700 ka post KPgE the largest mammals (25+ kg) were spatially partitioned across the landscape.

      The fossil record shows that after the K-Pg extinction event, there was an exponential increase in the maximum size of mammals. About 40 million years ago, this size increase leveled off. It is hypothesized that diversification to fill ecological niches was the primary driver of this rapid increase and that environmental temperature and land area have acted to constrain the continued increase. clip from HHMI's Rise of the Mammals 11:51:07 - 12:29:18

    10. Cranial size and lower first molar area were used to estimate mammalian body mass – an important feature that impacts many aspects of the biology and ecology of mammals

      An important step in the authors' research was the determination of the body masses of the fossil mammals they unearthed. Body mass was estimated by measuring the length × width (L×W) of the first lower molar. This has been found to be an accurate indicator of body mass through comparisons to molar size and the body mass of living mammals. These data are important to the authors since body mass has been linked with characteristics such as energy expenditure, gestational period, temperature regulation, and niche ecology. When reconstructing life during the time period being studied, body mass distributions in mammalian communities can be used to infer ancient environmental conditions. Therefore, determining the body mass of a fossil mammal is an important step toward understanding its palaeoecological role. Cranial size usually increases with the size of the animal; however, it does not correlate as closely with body mass as the area of the first molar does.
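
      Such estimates typically rest on an allometric (power-law) regression fit to living mammals; in general form (the coefficients \(\alpha\) and \(\beta\) below are placeholders, not the authors' values):

      \[ \log_{10}(\text{body mass}) = \alpha + \beta\,\log_{10}(L \times W) \]

      where \(L \times W\) is the area of the first lower molar and \(\alpha\) and \(\beta\) are fitted from measurements of living species whose body masses are known.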

    1. Preservation of He and its isotope signatures in diamond is supported by He heterogeneities within individual diamonds (18, 20)

      In conducting their experiments, the research team acknowledged the inherent variability in the <sup>3</sup>He/<sup>4</sup>He ratio inside each diamond. To minimize these differences, the scientists used diamonds that were free of inclusions, thereby eliminating the possibility of inconsistencies in helium ratios originating from inclusions. Then, they crushed the diamonds in a vacuum to verify that 90% of the helium is present in the groundmass of the diamond. This was followed by step-heating the crushed diamond to measure the variability in the <sup>3</sup>He/<sup>4</sup>He ratio in the diamond.

    2. picogram analyses of Pb-Sr isotopes of fluid inclusions

      Picogram is a unit of measurement of weight and it is equivalent to one-trillionth (10<sup>-12</sup>) of a gram. A picogram analysis is done by weighing a sample at the picogram scale and then using other analytical techniques on this sample to get meaningful information.

    3. all diamonds show typical sublithospheric features

      In order to confirm the sublithospheric features, the authors characterized the structure of these diamonds using a technique called cathodoluminescence imaging. With this technique, the light emitted by a sample when irradiated with an electron beam can be measured.

    4. compositions of basalts provide information

      Measurement of isotopic content of primitive basaltic rocks has been a useful method to understand the exact chemical composition of these old rocks. Learn more in this article from Eos explaining the importance of using these techniques. https://eos.org/features/isotope-geochemists-glimpse-earths-impenetrable-interior

    5. We studied 24 diamonds (1.3 to 6 mm in size) from the Juina-5 and Collier-4 kimberlites and São Luiz River (Juina, Brazil).

      The authors carefully inspected 24 diamonds excavated from the Juina area of Brazil. This location was chosen because the diamonds excavated from this area show characteristics similar to those of diamonds from the Earth's transition zone (410 to 660 km depth). The sizes of these diamonds ranged from 1.3 to 6 mm, and in spite of this limited size, the authors could successfully detect the helium trapped in these diamonds.

    6. The carbon isotope compositions of the diamonds

      The carbon isotope compositions were measured using a stable isotope mass spectrometer. Watch this video to get an idea of how this instrument works in a science laboratory: https://www.youtube.com/watch?v=SHbzEwMt-1s

  4. Nov 2019
    1. modular [subsets of species interacting preferentially with each other, forming modules of highly connected species

      Modular networks are groups of species that preferentially interact with each other. In Figure 1B, each module is represented by a distinct color.

      Connectance measures the proportion of interactions taking place in a network out of the total number of possible interactions. The authors calculated a high connectance within each module, meaning that the bird and plant species in a module use the majority of their available interactions.

    2. The novel network was nested [specialist species interacting with proper subsets of partners of the most generalist species

      Networks such as seed dispersal networks consist of specialist species (which interact with only a few, select partners) and generalist species (which interact with a broad range of other species). A network is nested when specialists interact with subsets of the same partners that the generalists interact with.

      In Figure 1A, the specialist species are depicted as very thin rectangles to represent the few interactions they have with other species, whereas the generalist species are bigger rectangles to encompass their many interactions. One example of nesting is the specialist animal at the bottom of the animal column interacting with the same plant as a generalist animal species, like the top rectangle of the animal column.
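
      As a toy illustration (hypothetical, not data from the paper), a perfectly nested network can be written as a bird-by-plant interaction matrix in which each specialist's partners form a subset of each generalist's partners:

      \[ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} \]

      Rows are animals ordered from most generalist (top) to most specialist (bottom), columns are plants, and a 1 marks an observed interaction; every specialist's diet is contained within the diets of the generalists above it.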

    1. injected with 4×10<sup>5</sup> Sa1N tumor cells

      The experiment was repeated with a different dose of tumor cells injected. This was done to ensure that the effects seen depend only on the treatment course and not on other factors.

    2. All control mice injected subcutaneously with 1×10<sup>6</sup> Sa1N cells

      The authors injected groups of 5 mice each with Sa1N cells, which cause fibrosarcoma.

  5. Oct 2019
    1. we injected groups of mice with 2×10<sup>6</sup> wild-type 51BLim10 tumor cells and treated them with anti-CTLA-4 beginning on day 0 as before, or beginning 7 days later

      The authors conduct a new set of experiments to check whether administering anti-CTLA-4 after tumors are detected is as effective as administering it at the same time tumors are introduced. If they are able to successfully treat mice after tumors are already established, then maybe this treatment could work for human patients as well!

    2. Mice that had rejected V51BLim10 tumor cells as a result of treatment with anti-CTLA-4 were challenged with 4×10<sup>6</sup> wild-type 51BLim10 cells 70 days after their initial tumor injections

      The authors injected the mice that had previously been treated with unmodified tumor cells. If the mice developed an immune memory, they may be able to clear this tumor even though it does not express B7 and they have not been given anti-CTLA-4 antibodies!

    3. injected with 2×10<sup>6</sup> tumor cells

      The authors decided to check whether changing the tumor dose has an effect. They halved the dose to 2×10<sup>6</sup> and had one group of untreated and one group of anti-CTLA-4-treated mice.

    4. the growth of V51BLim10, a vector control tumor cell line that does not express B7

      This set of experiments was conducted with a variant of the same murine colon cancer tumor cells, but this time the tumors do not express B7. Thus the tumors are not able to provide the secondary signal to T cells.

    5. treated with anti-CTLA-4

      A third group of mice received anti-CTLA-4 antibodies as treatment.

    6. injected with 4×10<sup>6</sup> V51BLim10 tumor cells and left untreated, or treated with anti-CD28

      The mice were split into groups that received different treatment regimens. Two of the groups were either left untreated or treated only with anti-CD28 antibodies.

    7. Two groups were treated with a series of intraperitoneal injections of either anti-CTLA-4 or anti-CD28

      The authors injected groups of mice with tumor cells expressing B7-1 molecules. These mice were then treated with two different regimens. One group of mice was injected with antibodies targeting CTLA-4, and another with antibodies targeting CD28.

    8. untreated controls

      The authors had a control group of mice which were injected with tumor cells expressing B7-1 molecules, but were not treated with any antibodies.

    9. in vivo administration of antibodies to CTLA-4

      The authors injected mice with antibodies that bind CTLA-4.

  6. Sep 2019
    1. on the silicon dioxide (SiO2) substrate by means of a solution process method (figs. S1 and S2), providing a partial coverage on the SiO2 surface (22).

      The authors have deposited a few drops of graphene in a colloidal liquid state on a SiO<sub>2</sub> substrate to give it a non-uniform coverage of graphene upon evaporating the solution. Simultaneously, they also deposited a solution containing nanodiamonds to get nanodiamond particles on the SiO<sub>2</sub> surface.

    2. We demonstrate our observation of stable macroscale superlubricity while sliding a graphene-coated surface against a DLC-coated counterface

      The authors have designed and performed superlubricity experiments by sliding DLC-coated stainless steel balls against a graphene surface. However, after analyzing the initial test results, they needed to modify the design by also incorporating nanodiamonds into the system. They anticipated that the nanodiamonds can act as nano-ball bearings, thereby enhancing the mechanical strength of graphene and contributing to superlubricity.

    3. Atomistic simulations

      Molecular dynamics is a computer simulation method that allows for prediction of the time evolution of a system of interacting particles such as atoms and molecules.

    1. by using a two-chamber place preference test

      The authors hypothesized that because the ZI to PVT projection promotes intake of foods that are pleasurable to eat and also makes mice overcome their aversion to light in order to eat that food, stimulation of this pathway is pleasurable or rewarding for the mice.

      They tested this by placing the animals in a box with two identical compartments. The mice were able to freely move around the box. On one side of the chamber the mice received stimulation of their ZI-PVT neurons, whereas the stimulation was turned off when the mice were on the other side.

    2. optogenetic stimulation

      A technique that uses light to control the activity of cells, most commonly neurons, in living animals. The cells are genetically modified to express ion channels that are sensitive to light. Shining light on the neurons changes their activity, allowing scientists to understand the role of the neuron in a given behavior or physiological process.

    3. Anterograde AAV-ChIEF-tdTomato labeling

      Infection of the neurons with tdTomato-tagged AAV allows the projection of the ZI GABA axons to be visualized.

      The authors used this method to determine where in the brain these neurons project to.

    4. we injected Cre recombinase–inducible adeno-associated viruses (AAV) expressing the optogenetic channelrhodopsin-like ChIEF fused with a tdTomato reporter [AAVdj-CAG-DIO-ChIEF-tdTomato (driven by the CAG promoter) (10, 11)] bilaterally into the rostral ZI of vesicular GABA transporter (VGAT)–Cre mice that express Cre recombinase in GABA neurons

      To target a neuron population of interest, e.g. those that express GABA, scientists use genetically modified viruses (AAVs) to deliver proteins into the brain (such as optogenetic tools).

      This is achieved by using two tools: 1) a mouse line that expresses the enzyme Cre recombinase in a specific population of neurons (e.g., those that express the GABA transporter VGAT) and 2) an AAV that expresses an optogenetic protein only in the presence of Cre. The AAV is injected into the brain region of interest in the Cre mice. This AAV has a tdTomato tag, which allows the injection site to be visualized under a fluorescent microscope.

      For further information on these tools, see this video on how optogenetics is used in mice.

      The ZI in both hemispheres of the brain was injected with the AAV (bilaterally), with the region lying towards the front of the brain (rostral) being targeted. The optogenetic tool used (ChIEF) activates neurons when blue light is shone on the cells.

  7. Aug 2019
    1. Given that APC−/− tumors can efficiently transport both glucose and fructose, we sought to determine the metabolic fate of glucose and fructose using 13C isotopic tracing. We isolated tumors from APC−/− mice and exposed them to four different labeling conditions for 10 min ex vivo: 13C-glucose (labeled at all six carbons), 13C-fructose (labeled at all six carbons), 13C-glucose + unlabeled fructose, and 13C-fructose + unlabeled glucose.

      To study glucose and fructose metabolism in tumors, they traced the breakdown of the molecules.

      1. The scientists labeled glucose and fructose with a traceable carbon isotope (<sup>13</sup>C, a stable rather than radioactive isotope) that can be followed even as the molecule is broken down.
      2. They incubated tumor tissues with labeled-glucose, labeled-fructose or a mix of labeled-glucose + fructose, or a mix of labeled-fructose + glucose. These tumor tissues absorb the sugars and metabolize them. Note: Adding a mixture of sugars to the tumor allows the scientists to determine how the metabolic pathways are related.
      3. The different components after metabolism are then determined in lab to trace the metabolic pathway of how tumors break down sugars.
    2. on a tumor tissue microarray containing 25 cases of human colon tumors ranging from early-stage adenomas to metastatic carcinoma (fig. S5B)

      In order to investigate tumor metabolism in human tissues, scientists used a tissue microarray, in which tiny samples of human tumors or normal tissues are arranged on a single slide and studied together. They compared the metabolism of 25 different tumors of varying severity to that of normal human intestinal cells.

    3. Given these findings, we hypothesized that fructose in the intestinal lumen might be efficiently transported and metabolized by tumors located in the distal small intestine and colon.

      The authors wanted to test whether tumors near the end (distal) of the intestines or in the colon consume fructose, since fructose reaches much higher concentrations in the colon than glucose does. Approach: They marked glucose and fructose molecules with carbon-14 (radioactive carbon), which can be traced as the sugars are broken down and metabolized. This way, if they find labeled carbon from both fructose and glucose in a tumor, they can conclude that it metabolizes both sugars.

    4. Indeed, we found that fructose concentration was significantly increased in the colonic lumen (4.4 mM at peak 30 min) in WT mice after an oral bolus of HFCS (fig. S4A), consistent with impaired fructose uptake in the small intestine.

      The scientists repeated experiments from previous work to validate their methods. They fed mice (oral bolus) high fructose corn syrup and then, 30 minutes later (to allow for digestion), measured fructose levels in the colon. Similar to previous studies, they found elevated fructose in the colon, suggesting the fructose transporters in the small intestine were saturated and allowed fructose to pass through unabsorbed.

    5. To uncouple the metabolic effects caused directly by HFCS from those caused by HFCS-induced obesity, we treated APC−/− mice with a restricted amount (400 μl of 25% HFCS) of HFCS daily via oral gavage starting the day after tamoxifen injection (referred to as the HFCS group).

      Here they did a two-part experiment: 1) They tested whether high fructose corn syrup itself induced metabolic dysfunction by giving mice a limited amount of high fructose corn syrup so that they did not become obese.

      2) To test the effects of high fructose corn syrup on colorectal tumor formation and growth the authors compared tumor characteristics between mice fed with different amounts of high fructose corn syrup. First, they treated all mice with tamoxifen to activate tumor formation. Then they broke them down into three groups: HFCS- mice that were treated with a limited amount of high fructose corn syrup to prevent obesity, WB- mice that had high fructose corn syrup mixed in with water so they consume a lot of it, and Con- a control group with no high fructose corn syrup administered. They then compared the formation of tumors and their characteristics to look at the effects of high fructose corn syrup.

    6. We first determined the physiological effects of HFCS administered to APC−/− and wild-type (WT) mice

      The scientists were first interested in looking at how high fructose corn syrup affects an entire mouse, and compare the effects on normal mice and their genetically modified mouse (APC -/-). They did this by mixing high fructose corn syrup into their water and allowing them to drink as much as they wanted (ad libitum). They monitored the mice's weight over time.

    7. To untangle the link between sugar consumption, obesity, and cancer, we mimicked SSB consumption in a genetically engineered mouse model of intestinal tumorigenesis. In this model, the adenomatous polyposis coli (APC) gene is deleted in Lgr5+ intestinal stem cells upon systemic tamoxifen injection (Lgr5-EGFP-CreERT2; APCflox/flox, hereafter APC−/− mice) (11, 12).

      The scientists need a mouse model that will develop intestinal tumors so that they can study the effects of sugar-sweetened beverages on the tumor. They manipulated the mouse genes so that after injecting a drug (Tamoxifen) a gene in the intestine is deleted and tumors begin to form. With this genetically engineered mouse the scientists can induce tumor formation of the mouse, then track tumor size and metabolism to look at the effects of high fructose corn syrup in a diet.

    1. tetrodotoxin, which prevents Na+ influx elicited by veratridine, prevented the effects of depolarization

      The sodium influx caused by veratridine was cancelled out by tetrodotoxin exposure. Tetrodotoxin blocks the influx of sodium ions, thereby stopping depolarization. Hence, in the presence of both drugs, the increase in substance P proceeds as it does in untreated explants.

    2. one-way analysis of variance

      Used to compare the means of two or more groups.

      Here, the authors used this test to compare the activity of substance P across the following groups: (1) control, (2) in the presence of tetrodotoxin, (3) in the presence of veratridine, and (4) in the presence of tetrodotoxin and veratridine.

      Read more at Khan Academy.
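
      As a concrete sketch of how such a comparison is run (the scipy call is real; the activity values are hypothetical, invented purely for illustration):

      ```python
      # One-way ANOVA across the four experimental groups described above.
      # Substance P activity values are hypothetical, for illustration only.
      from scipy import stats

      control         = [12.1, 11.8, 12.5, 12.0]
      tetrodotoxin    = [11.9, 12.3, 12.2, 11.7]
      veratridine     = [6.2, 5.8, 6.5, 6.1]
      ttx_veratridine = [11.5, 12.0, 11.8, 12.2]

      f_stat, p_value = stats.f_oneway(control, tetrodotoxin, veratridine, ttx_veratridine)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p -> group means differ
      ```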

    3. Control

      These are explants obtained from the nucleus locus ceruleus. The explants are placed in a nutrient medium and no drugs are provided to this group, which serves as a comparison for the groups treated with the drugs.

    4. Autoradiography

      The last step in the dot blot assay, which detects the material of interest using a radioactive probe. Here, the radioactive probes tagged to proenkephalin were observed at three time points: 0 days, 1 day, and 3 days.

    5. The medullary explants exhibited a 50-fold rise in [Leu]enkephalin within 4 days, after a 2-day lag period, and continued increasing through 7 days, the longest time examined. In contrast, TH activity remained constant throughout, while PNMT decreased 60 percent in the first 4 hours, maintaining a stable plateau thereafter.

      Medullary explants were obtained from adult rats to understand the mechanism behind the different transmitter expression. 

      The tyrosine hydroxylase (TH) activity stayed consistent throughout the seven-day period. The TH enzyme is responsible for the synthesis of catecholamines; the consistent TH activity indicates that there is no change in catecholamine expression in this experiment.

      Next, the PNMT enzyme is responsible for adrenergic expression. PNMT exhibited a 60% decrease in the first four hours and was maintained at that level for the remainder of the time period.

      The [Leu]enkephalin (opiate expression) showed a 50-fold increase within four days and continued to increase through seven days. There is a continuous increase in the expression of opiates as opposed to catecholamines.

    6. immunocytochemical reactivity

      Technique used to mark the target of interest using an antibody-based test. In this study, tyrosine hydroxylase and dopamine β-hydroxylase are the target molecules of interest. Antibodies against tyrosine hydroxylase and dopamine β-hydroxylase are used to detect these molecules.

    7. Depolarization with veratridine completely blocked the increase of substance P

      Explants depolarized with veratridine did not show the increase in substance P levels seen in untreated explants.

    8. explanted superior cervical ganglia

      The ganglia were removed from the animal and transferred to a nutrient medium. These explants were maintained in the medium for six months to one year. At several time points, explants were taken and assessed for substance P activity.

    9. in culture

      The neurons are dissected from the animal and grown in a dish. The dish contains supplemental factors and a medium that mimics the composition of the fluid inside the animal.

    10. grown in dissociated cell culture

      Neurons are separated from the animal through mechanical or enzymatic disruption. The separated neurons are transferred to a dish or culture plate. The neurons are maintained in the dish.

  8. Jul 2019
    1. The multiplet nuclei capture rate was comparable to single-cell RNA-seq analysis using the 10× platform

      In order to see if they were accidentally capturing more than one nucleus at a time, the authors mixed nuclei from mouse and human samples prior to running snRNA-seq. If they saw mouse RNA mixed with human RNA, this meant there was a multiplet (that is, more than one nucleus was captured).

      However, they found very low rates of multiplets, meaning that their experiment was working well.

    2. We aimed to gain insight into cell type–specific transcriptomic changes by performing unbiased single-nucleus RNA sequencing (snRNA-seq) (4) of 41 postmortem tissue samples

      The authors wanted to see if particular cell types have different gene expression in autistic brains. They also examined two different brain areas to see if there are regional differences.

    3. We generated 104,559 single-nuclei gene expression profiles—52,556 from control subjects and 52,003 from ASD patients (data S2)—and detected a median of 1391 genes and 2213 transcripts per nucleus, a yield that is in agreement with a recent snRNA-seq

      The authors calculated the total number of genes expressed in the single-nuclei data of controls and ASD patients and found it was about the same. The median gene number is lower than the transcript number because a gene can have multiple transcript forms (called isoforms).

    4. 10× Genomics platform

      The 10x Genomics system is a platform that isolates single nuclei and generates "libraries" (collections of RNA fragments that can be used to identify particular RNAs) from each nucleus.

    5. To compare changes in ASD to those in patients with sporadic epilepsy only, we generated additional snRNA-seq data from eight PFC samples from patients with sporadic epilepsy and seven age-matched controls (data S1)

      The authors wanted to make sure that any effects they were seeing were specific to ASD, and not epilepsy, so they included patients with epilepsy alone as an additional control.

    6. (fig. S1A; P > 0.1, Mann-Whitney U test)

      This means that the control and ASD subjects didn't differ in age, sex, or RNA quality. This is important because results can be biased by uncontrolled factors (e.g., what if there are more females in one group, and the effect you're seeing is really due to sex?).

      The Mann-Whitney U test is a statistical test that they used to show that there are no significant differences between the controls and ASD subjects.
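
      As a concrete sketch (the scipy call is real; the donor ages below are hypothetical, invented purely for illustration), a matched-groups check would look like:

      ```python
      # Mann-Whitney U test: are the two groups drawn from the same distribution?
      # Donor ages are hypothetical, for illustration only.
      from scipy import stats

      control_ages = [15, 22, 8, 13, 19, 25, 11]
      asd_ages     = [14, 21, 9, 12, 20, 23, 10]

      u_stat, p_value = stats.mannwhitneyu(control_ages, asd_ages, alternative="two-sided")
      print(f"U = {u_stat}, p = {p_value:.3f}")  # p > 0.1 -> no evidence of a difference
      ```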

    1. Relative to the wild-type protein, the evolved triple mutant catalyzes the reaction more than seven times faster, with turnover frequency (TOF) of 46 min–1 (Fig. 1E).

      Via site-saturation mutagenesis, positions V75 and M103 along the protein sequence were identified as likely sites for beneficial mutations and were randomized, i.e., the amino acids at these positions were replaced by random ones. A large number of random variants, which together constitute a library, are produced and then screened in an attempt to discover a highly active variant among them. The evolved triple mutant fits the bill.
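
      For reference, turnover frequency is the standard measure of catalytic speed (a textbook definition, not specific to this paper):

      \[ \text{TOF} = \frac{\text{moles of product formed}}{\text{moles of catalyst} \times \text{time}} \]

      so the reported TOF of 46 min\(^{-1}\) means that, on average, each enzyme molecule converts 46 substrate molecules into product every minute.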

    2. a 12-fold improvement over the wild-type protein (Fig. 1D).

      A recombinant protein is a protein encoded by a gene that has been cloned into a system that supports expression of that gene. Modification of the gene by recombinant DNA technology can lead to expression of a mutant protein. In this study, the M100D mutant is more active than the wild-type protein (as it occurs in nature), with a 12-fold improvement.

    3. site-saturation mutagenesis

      M100 is the specific amino acid residue within the protein sequence that has been identified as critical for the protein's function. It is very important to determine the ideal amino acid residue for this position. Site-saturation mutagenesis is a form of random mutagenesis that allows the substitution of a specific site with all 20 possible amino acids at once. In this study, this technique is employed to generate a series of enzymes with enhanced activity and enantiospecificity.

    4. “Active site” structure of wild-type Rma cyt c showing a covalently bound heme cofactor ligated by axial ligands H49 and M100. Amino acid residues M100, V75, and M103 residing close to the heme iron were subjected to site-saturation mutagenesis.

      The proposed binding mode is that the iron-carbene complex forms in place of the axial methionine. The silane may approach from the more exposed side in the wild-type protein, which further explains the observed stereochemistry of the organosilicon product. The V75T, M100D, and M103E mutations may improve reactivity by providing better access of the substrate to the iron center.

      Complete carbene transfer to the protein may be what inactivates the catalyst. The activity and lifetime of Rma cyt c may be improved with further mutagenesis.

    5. Carbon–silicon bond forming rates over four generations of Rma cyt c.

      Turnover frequency for each variant relative to the wild-type protein:

      - WT: 1
      - M100D: 2.8 ± 0.2
      - V75T M100D: 4.6 ± 0.3
      - V75T M100D M103E: 7.1 ± 0.4

      From these experimental data, it is clear that directed evolution changed the enzyme from an unselective wild type into a highly enantioselective variant.

    6. In addition, diazo compounds other than Me-EDA could be used for carbon–silicon bond formation

      Additional diazo compounds that were successful had R3 = -CH3, -CH2CH3, or -Ph.

    7. Fig. 2 Scope of Rma cyt c V75T M100D M103E-catalyzed carbon–silicon bond formation.
      1. Rma cyt c V75T M100D M103E shows excellent enantioselectivity and turnover over a wide range of substrates. Silane substrates with weakly electron-donating (activating) methyl substituents (4), strongly electron-donating -OMe (5), weakly deactivating -Cl (6), strongly deactivating -CF3 (7), and moderately deactivating -COOMe and -CONMe (9 and 10, respectively) show moderate to excellent turnover and high selectivity. No direct relationship between turnover number and substituent effects emerges from this study. Enantioselectivity is excellent for all substrates. All products were identified using GC-MS, and no traditional organic chemistry techniques were used.
    8. Carbon–silicon bond formation catalyzed by heme and purified heme proteins.

      Heme proteins that were readily available were screened to identify the one that gave the highest enantioselectivity. This served as the starting point for directed evolution. Purified heme protein, silane, diazo ester, thiosulfate, methyl cyanide, and M9-N buffer (the medium for microbial growth) were stirred at room temperature under anaerobic conditions. Reactions were performed in triplicate. Unreacted starting material was recovered in all cases, and no further purification was carried out.

    1. We assembled and analyzed a dataset of 42 avian SDNs encompassing a broad geographical range, with data from islands (n = 17) and continents (n = 25) in tropical (n = 18) and nontropical (n = 24) areas (table S12). Although some of the other SDNs in the analyses included introduced species [e.g., (7, 34)], SDNs on O‘ahu present an extreme case of dominance by introduced species (>50%), coupled with extinction of all native frugivorous birds

      The authors surveyed data from seed dispersal networks across a variety of habitats, noting that O'ahu is an extreme case: more than half of its species are introduced, and all of the island's native frugivorous birds have gone extinct.

  9. Jun 2019
    1. The statistical significance of the observed topological patterns was assessed by contrasting observed values for each metric with the confidence interval from null models (13)

      To determine whether an observation reflects a real phenomenon rather than chance, researchers must test (and reject) the null hypothesis. A null hypothesis states that a result or observation is due to chance, and so should be disregarded as insignificant.

      In this case, the authors test the significance of the identified patterns in the network by comparing these values to a null model, a generated collection of values randomized to produce a pattern based on no ecological mechanism (Gotelli and Graves, 1996). If the observed values fall outside the range of null values defined by the null model's confidence interval, they are considered significant.
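
      The logic of that comparison can be sketched in a few lines of Python (an illustrative stand-in, not the authors' code: the matrix is hypothetical and the metric is a crude proxy for nestedness):

      ```python
      # Compare an observed network metric with its distribution under a null
      # model that reshuffles interactions while keeping size and fill constant.
      import random

      def nested_pairs(matrix):
          # Fraction of species pairs where one species' partners are a subset
          # of the other's -- a crude stand-in for a nestedness metric.
          pairs, nested = 0, 0
          for i in range(len(matrix)):
              for j in range(i + 1, len(matrix)):
                  a = {k for k, v in enumerate(matrix[i]) if v}
                  b = {k for k, v in enumerate(matrix[j]) if v}
                  pairs += 1
                  nested += (a <= b) or (b <= a)
          return nested / pairs

      def shuffled(matrix):
          # Null model: same dimensions and number of links, random placement.
          cells = [c for row in matrix for c in row]
          random.shuffle(cells)
          n = len(matrix[0])
          return [cells[i:i + n] for i in range(0, len(cells), n)]

      observed = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]  # hypothetical bird x plant matrix
      null_dist = sorted(nested_pairs(shuffled(observed)) for _ in range(1000))
      lo, hi = null_dist[25], null_dist[974]        # ~95% confidence interval
      print(f"observed {nested_pairs(observed):.2f} vs null CI [{lo:.2f}, {hi:.2f}]")
      ```

      If the observed value falls outside the null interval, the pattern is unlikely to be a sampling artifact.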

    2. To what extent are introduced species integrated into seed dispersal networks (SDNs), and do introduced dispersers replace extinct native animals? To investigate these questions, we examined interactions based on 3278 fecal samples from 21 bird species [tables S1 to S3 and (13)] collected over 3 years at seven sites encompassing broad environmental variation across Oʻahu (fig. S1 and table S1).

      The authors wanted to figure out how many new plant and animal species are being incorporated into O'ahu's ecosystem through the dispersal of plants' seeds by animals. They also wanted to determine whether non-native animals are responsible for this dispersal.

      Over the course of 3 years they collected poop samples from 21 different birds found in 7 different locations across the island of O'ahu. The supplemental figure 1 and table 1 describe the 7 locations' average rainfall, coordinates, and elevation to demonstrate the diversity of these areas. Another set of tables listed the different species and plants (introduced and native) found at each site.

    3. We estimated robustness of animals to the extirpation of plants (assuming bottom-up control) and robustness of plants to the extirpation of animals (top-down control). We simulated two scenarios, one in which order of extirpation was random and another—more extreme—scenario in which order was from the most generalist to the most specialist species. After using a null model correction on each metric to account for variation in sampling intensity and network dimensions across studies (14), we compared the 95% confidence intervals for the O‘ahu networks with the global dataset.

      The authors simulated scenarios in which species in a network went extinct, either in random order or from the most generalist to the most specialist, then gauged the severity of these extinctions by counting the additional species that would (theoretically) be lost as a consequence.

      The use of a null model correction ensures that the results of these simulations are not artifacts of sampling intensity or network dimensions, that is, not due to chance.

    4. To examine interaction dynamics across sites and to test their association with environmental variables, we calculated the dissimilarity (interaction turnover) between pairs of networks, using data limited to species present in the networks.

      The authors compared the similarities and differences between species' interactions across different locations around the island, taking into account the unique environments of each site.

    5. Beckett’s algorithm
    1. veratridine depolarization

      Veratridine is a drug that increases sodium influx into neurons. The authors used the drug to cause depolarization.

    2. histofluorescence

      Fluorescent markers are used to label catecholamines in the neurons, which are then visualized using fluorescence microscopy.

    1. Locomotor sensitization

      A technique used to measure an animal's movement (locomotor activity) in an open field box. With repeated administration of a drug, animals can show a progressive increase in locomotor activity, which is a sign of sensitization.

      Photobeams are placed on the walls of the box to record the movements of the animal. The mice can explore and get used to the test area of the open field.

      The distance the animal covers (in centimeters) is tracked using an automated video system. Measurements are taken on day 1 and on each day of cocaine injection (days 8, 9, 10, and 11), and the total distance covered is recorded for each day.

      Watch the technique here at: https://www.jove.com/video/53107/assessment-cocaine-induced-behavioral-sensitization-conditioned-place

    2. CPP measures an animal’s preference for a particular place in its environment, which develops as that place becomes consistently associated with a rewarding stimulus and assumes some of the rewarding effects of the stimulus

      For this experiment, a three-chamber apparatus is used, with the animal given access to two of the chambers. The experiment has three phases: pre-conditioning, conditioning, and testing.

      Pre-conditioning: The animal can move freely between the chambers, and each animal's initial chamber preference is noted. This was done on days 7 and 8 of the experimental protocol.

      Conditioning: The animals receive saline in their preferred chamber and cocaine in the less preferred chamber. This was done on days 9, 10, and 11 of the experimental protocol.

      Testing: On day 12 of the experiment, the animals are allowed to have access to both sides of the chamber for 30 minutes. The time spent in the preferred chamber and the less preferred chamber is recorded.

      View the video to learn more about the conditioned place preference protocol. It describes how to measure craving in animals, using morphine as the drug of preference. https://www.jove.com/video/58384/a-conditioned-place-preference-protocol-for-measuring-incubation

    3. long-term potentiation

      Slice electrophysiology is a technique that is widely used to study synaptic plasticity. Brain slices containing the nucleus accumbens region were obtained from the mice.

      For this technique, stimulating and recording electrodes are needed. Recording electrodes measure the electrical activity of neurons in the area. Stimulating electrodes deliver current to fibers in the brain region, eliciting a response that is recorded via the recording electrode. The stimulus is given at a rate of 1 per minute.

      The stimulating electrode was placed in the nucleus accumbens, and the recording electrode was placed near the stimulating electrode.

      The amplitude (size) of the response is measured for each stimulus. Baseline values were obtained; then an LTP-inducing stimulus was given and the post-LTP data were collected. The data were normalized to the baseline values.

      Watch this video on how LTP is studied in the hippocampus, a brain region involved in memory: https://www.jove.com/video/2330/preparation-acute-hippocampal-slices-from-rats-transgenic-mice-for

    4. high-frequency stimulation

      The neurons are activated at a high frequency of 100 Hz. The protocol used here consists of four trains of 100-Hz tetanus delivered 3 minutes apart.

    5. FosB expression

      The chromatin immunoprecipitation (ChIP) technique is used to examine chromatin modifications associated with FosB expression.

      Briefly, the brain tissue was fixed with formaldehyde to crosslink DNA-binding proteins to the DNA. The DNA was sheared into small fragments, some of which contain the bound proteins. Using specific antibodies (against histones H3 and H4), the protein-DNA complexes were isolated. The proteins were then digested, releasing the DNA, and the sequences of interest were amplified to see whether they had precipitated with the antibody.

      Watch the video here: https://www.jove.com/science-education/5551/chromatin-immunoprecipitation

    6. PCR

      mRNA was isolated from the brain tissue using TRIzol reagent. The RNA was then reverse transcribed into complementary DNA (cDNA), and transcripts of interest were amplified using specific primers. The fold difference in mRNA relative to control values was calculated and compared across the groups.

      Check the video here on the technique: https://www.youtube.com/watch?v=0MJIbrS4fbQ

    7. Immunoblots

      Protein was extracted from the tissue, and the proteins were separated by molecular weight on an SDS-PAGE gel, then transferred to a nitrocellulose membrane.

      Antibodies against H3 and H4 were applied to the membrane to detect the bands of interest.

      Watch the technique here: https://www.jove.com/science-education/5065/the-western-blot

    8. SAHA

      The drug was administered directly to the nucleus accumbens of the mice.

      In order to do so, the coordinates of the nucleus accumbens are obtained from a mouse brain atlas. The mouse is placed in a stereotaxic frame, and a cannula is guided into the brain using these coordinates; the drug was injected through the cannula every day for 7 days.

    9. HDAC activity

      Nuclear fractions were obtained from the mice using a nuclear extraction kit, and HDAC activity in these fractions was measured with an HDAC activity assay kit.

    10. 14 days after stopping 7 days of nicotine treatment

      The mice received 7 days of nicotine or plain water, and were then taken off the drug for 14 days. Cocaine was administered after this 14-day withdrawal period.

    11. To investigate further the duration of the priming effect of nicotine

      How long does the priming effect of nicotine last after exposure ends?

      Does nicotine need to be given close in time to the other drug, or can the two treatments be separated by a few days?

    12. To test further the idea that histone acetylation and deacetylation are key molecular mechanisms for the effect of nicotine on the response to cocaine, we conducted two sets of experiments, one genetic and one pharmacological

      Next, the authors further tested the role of histone acetylation by using a low dose of theophylline, an HDAC activator. In contrast to SAHA, theophylline should decrease the response to cocaine.

    13. we asked whether we could simulate the effect of nicotine by specifically inhibiting deacetylases with the HDAC inhibitor suberoylanilide hydroxamic acid

      If nicotine is inhibiting HDAC activity, then by using an HDAC inhibitor, we should be able to mimic the effects of nicotine on LTP and FosB expression. This hypothesis was tested by using SAHA, an HDAC inhibitor.

    14. histone deacetylase (HDAC) activity directly in the nuclear fraction of cells in the striatum

      To answer this question, histone deacetylase (HDAC) activity was measured directly in the nuclear fraction of striatal cells.

    15. Does the hyperacetylation produced by nicotine result from activation of one or more acetylases or from the inhibition of deacetylases?

      The authors next addressed whether the hyperacetylation of these residues is due to increased activation of acetylases or to inhibition of deacetylases.

    16. we used immunoblotting and examined the extent of chromatin modifications in the whole striatum of mice chronically treated with nicotine

      The authors observed the acetylation levels of H3 and H4 after 7 days of nicotine treatment in striatum tissue using chromatin immunoprecipitation and immunoblotting.

    17. whether nicotine enhances FosB expression in the striatum by altering chromatin structure at the FosB promoter and, if so, does it magnify the effect of cocaine?

      The authors asked: does nicotine increase FosB expression by altering the chromatin structure at the FosB promoter, and, if so, does this magnify the effect of cocaine?

    18. we gave cocaine (30 mg/kg) in two protocols: for 24 hours or 7 consecutive days followed by 24 hours of treatment with nicotine

      The mice received cocaine injections (30 mg/kg) for either 24 hours or 7 consecutive days, followed by 24 hours of nicotine treatment. FosB mRNA levels were then measured.

    19. does nicotine pretreatment followed by cocaine increase the response to cocaine, whereas the reverse order of drug treatment does not?

      These experiments were performed to determine whether the order of treatment matters: does nicotine pretreatment followed by cocaine enhance the response to cocaine, while cocaine pretreatment followed by nicotine does not?

    20. We treated mice with nicotine (50 μg/ml) in the drinking water for either 24 hours (Fig. 1A) or 7 days

      Nicotine was added to the drinking water for the mice.

      7 days of treatment: The mice drank the nicotine-containing water for 7 days. For the next 4 days, they received one cocaine injection per day intraperitoneally (into the body cavity) while remaining on the nicotine-containing water.

      24 hours of treatment: The mice were exposed to the nicotine-containing water for 24 hours; over the next 4 days, they received one cocaine injection per day intraperitoneally while remaining on the nicotine-containing water.

    1. we expected the intervention to be particularly beneficial for women tending to endorse the gender stereotype.

      The authors predicted that the effect of the values affirmation intervention would be greater for women who more strongly endorse the gender stereotypes.

    2. We predicted a reduced gender gap in performance for women who completed the values affirmation.

      The authors' main prediction was that the gender gap in performance would be smaller among women who completed the values affirmation than among those who did not.

    3. In this randomized double-blind study

      The authors used a double-blind study design, meaning that neither the students nor the teaching assistants working with them knew the purpose of the study or which group each student was assigned to. Double-blind designs reduce unintentional bias on the part of the participants and of the researchers interpreting the data.

    4. We tested whether values affirmation would reduce the gender achievement gap in a 15-week introductory physics course for STEM majors.

      The authors tested whether using a values affirmation intervention in their college physics course could reduce the performance gap between men and women.

    5. The values-affirmation intervention used in this study involves writing about personally important values (such as friends and family). The writing exercise is brief (10 to 15 min) and is unrelated to the subject matter of the course.

      The key variable in this experiment is whether a student experiences the values affirmation intervention.

      In the values affirmation intervention, students briefly write about a value they find personally important.

    1. optogenetics allows genetically targeted photosensitization of individual circuit components

      The specificity of optogenetic treatments is of particular clinical interest and relevance for neuroscientists. Because individual cells can be targeted in the living organism, optogenetics allows scientists to better understand how different brain cells function and communicate.

  10. May 2019
    1. To test whether activation of the VGATZI-PVT inhibitory pathway leads to body weight gain, we selectively photostimulated this pathway for only 5 min every 3 hours over a period of 2 weeks.

      The authors hypothesize that because stimulation of the ZI to PVT pathway evokes a large increase in food intake in a short amount of time, long-term stimulation should lead to weight gain.

    2. To test the time course and efficiency of optogenetic activation of VGATZI-PVT inhibitory inputs to evoke feeding, we used a laser stimulation protocol of 10 s ON (20 Hz) followed by 30 s OFF for more than 20 min to study ZI axon stimulation in PVT brain slices and feeding behavior. Stimulation of ZI axons with this protocol hyperpolarized and inhibited PVT glutamatergic neurons each time the light was activated (Fig. 3A). Mice immediately started feeding for each of the 30 successive trials of ZI axon laser stimulation (Fig. 3B and movie S4). The mean latency to initiate feeding was 2.4 ± 0.6 s when we used laser stimulation of 20 Hz (Fig. 3C).

      The authors followed an optogenetic protocol by intermittently turning on the stimulation light for 10 seconds followed by 30 seconds of no stimulation. With each 10 seconds of light on, they measured how long it took for the mice to begin eating.

    3. We crossed VGAT-Cre mice with vGlut2-GFP mice in which neurons expressing vesicular glutamate transporter (vGlut2) were labeled with green fluorescent protein (GFP) to study whether ZI GABA neurons release synaptic GABA to inhibit PVT glutamate neurons (16, 17).

      The authors bred two different mouse lines together: one parent expressed Cre in VGAT-positive neurons and the other parent expressed a protein that emits green fluorescence (GFP) in vGlut2-positive neurons.

      The researchers then used the offspring of this cross to record from GFP-positive cells in a slice and ask whether VGAT cells in the ZI provide input to these neurons.

    4. Laser stimulation (1 to 20 Hz) evoked depolarizing currents in ZI ChIEF-tdTomato–expressing VGAT neurons tested with whole-cell recording in brain slices, displaying a high-fidelity correspondence with stimulation frequency (Fig. 1B).

      The authors recorded the activity of the ChIEF-expressing neurons in brain slices using electrodes. Stimulating the slice with blue light activated the ChIEF-expressing neurons, causing them to fire in the same pattern with which they were stimulated (i.e. high fidelity). Hz (hertz) refers to the number of times the light flashes per second, i.e. 20Hz corresponds to 20 flashes of light per second which caused the neurons to fire 20 times per second.

      This virtual lab demonstrates electrophysiological recordings of neurons: Neurophysiology Virtual Lab

    5. Cre recombinase–dependent rabies virus–mediated monosynaptic retrograde pathway tracing in vGluT2–Cre recombinase mice

      The authors identified the neurons that lie upstream and provide input to PVT neurons.

      They targeted excitatory PVT neurons using vGluT2-Cre mice and used a modified rabies virus that traffics into neurons that provide input to the starting population of cells.

    6. food intake was measured when food was put in a brightly illuminated chamber in a two-chamber light-or-dark conflict test

      Mice were placed in a chamber with two compartments—one with no lights and one brightly illuminated. Mice are innately averse to light and so will usually spend more time in the unlit compartment.

    7. After mice were partially fasted with only 60% of the normal food available during the preceding night, laser stimulation (20 Hz, 10 min ON followed by 10 min OFF, two times) of ChIEF-expressing PVT vGluT2 neurons reduced food intake (Fig. 4, F to H).

      The authors gave the mice a small amount of food to eat overnight, which meant that they were hungry during the experiment. Therefore, control mice commenced eating with short latency at the onset of the stimulation protocol.

    8. To explore the neuronal pathway postsynaptic to the VGATZI-PVT axon terminals, we injected Cre-inducible AAV-ChIEF–tdTomato selectively into the PVT of vGlut2-Cre mice (Fig. 4A and fig. S8A).

      The authors assessed the role of the neurons downstream (postsynaptic) of the ZI neurons that project to the PVT. They examined how food intake was affected when PVT excitatory neurons were optogenetically stimulated.

      Given that GABA is an inhibitory neurotransmitter, the PVT neurons would normally be inhibited when the ZI-to-PVT projection is active. Thus, directly stimulating the PVT neurons should decrease food intake. Indeed, this is what the authors found.

    9. To test whether ZI GABA neurons exert long-term effects on energy homeostasis, we microinjected AAV-flex-taCasp3-TEVp, which expresses caspase-3 (24), into the ZI of VGAT-Cre mice to selectively ablate ZI GABA neurons (fig. S7).

      The authors selectively killed ZI GABA neurons by using an AAV to express a caspase in these neurons. Caspase-3 is an enzyme that induces cell death.

    10. A chemo-genetic designer receptor exclusively activated by designer drugs (DREADD) was used to test the hypothesis that silencing the cells postsynaptic to ZI GABA axons, the PVT glutamate neurons, would enhance food intake. We injected Cre-inducible AAV5-hSyn-HA-hM4D(Gi)-IRES-mCherry coding for the clozapine-N-oxide (CNO) receptor into the PVT of vGlut2-Cre mice (25, 26) (fig. S9, A and B).

      Silencing of neurons in the PVT that receive input from ZI GABAergic neurons should increase food intake given that these neurons are inhibited by ZI GABA neurons, which increase food intake.

      The authors used a chemogenetic approach in which a modified (DREADD) receptor is expressed in the neurons using AAVs. The receptor is activated specifically by a synthetic drug (CNO) that has no other biological effect.

      The authors used this approach over an optogenetic method to silence the neurons as currently available optogenetic tools for inhibition are not very efficient.

    11. To confirm that PVT vGlut2 neurons were killed by the virus-generated caspase-3, we injected the Cre-dependent reporter construct AAV-tdTomato simultaneously with AAV-flex-taCasp3-TEVp to corroborate that reporter-expressing neurons were absent after selective caspase expression. With coinjection, little tdTomato expression was detected, whereas many cells were detected with injections of AAV-tdTomato by itself, consistent with the elimination of vGluT2 neurons in the PVT (fig. S10, A to D).

      To confirm that the caspase virus was killing cells, a tdTomato reporter, which makes the cells red under a fluorescent microscope, was injected at the same time as the caspase virus.

      The authors found that few tdTomato cells were present in mice that also received the caspase, compared to control mice that were injected with the tdTomato only. Thus, the caspase virus efficiently killed the PVT neurons.

    1. Let us assume that the genetic code is a simple one and ask how many bases code for one amino acid.

      In other words, how many bases in a row translate into one amino acid?

      Let's do a thought experiment (which is considerably cheaper than a laboratory experiment):

      Assume that each amino acid is coded for by two bases in a row. The code would have one of four different bases in the first position of the code (A, G, C, T) and one of four different bases for the second. How many combinations of pairs would be possible?

      For example: (1) A A (2) A G (3) A C (4) A T (5) G A (6) G G (7) G C (8) G T …

      If you continued to write out every combination, you would come up with 16 possible pairs of bases. However, that's four short of the 20 natural amino acids. This is a good sign that two bases is not enough to code for all possible amino acids (and, in fact, we now know that it takes three bases in a row).

      How many combinations would be possible if the code were a grouping of three bases?
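
      You can check this counting argument directly. A minimal sketch in Python that enumerates every possible base pair and base triplet:

      ```python
      from itertools import product

      bases = "AGCT"
      pairs = ["".join(p) for p in product(bases, repeat=2)]
      triplets = ["".join(t) for t in product(bases, repeat=3)]

      print(len(pairs))     # 16 -- four short of the 20 natural amino acids
      print(len(triplets))  # 64 -- more than enough, so a triplet code works
      ```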

    2. The crucial experiment is to put together, by genetic recombination, three mutants of the same type into one gene

      This "frame shift" experiment tests whether the bases are read in singlets, pairs, or triplets.

      https://ghr.nlm.nih.gov/primer/illustrations/frameshift.jpg

    3. These mutations are believed to be due to the addition or subtraction of one or more bases from the genetic message. They are typically produced by acridines, and cannot be reversed by mutagens which merely change one base into another. Moreover, these mutations almost always render the gene completely inactive, rather than partly so.

      By treating genetic material with acridines, Crick and coworkers produced mutations in DNA that added or deleted one or more bases from the genetic message.

    1. the Force and Motion Conceptual Evaluation (FMCE)

      A secondary outcome measure was student scores on the Force and Motion Conceptual Evaluation. Because this test is administered throughout the country, the authors can compare their findings to the normal results for the population.

    2. in-class exams

      The main way the authors assessed student performance was to compare scores on multiple-choice exams. These scores are referred to as the main outcome measure.

    3. Students in the control group selected their least important values from the same list and wrote why these values might be important to other people.

      Students in the control group spent time writing about a value that was not important to their identity. This ensures that any difference between the values affirmation group and the control group is due to the act of self-reflection and affirmation, and not simply a result of general writing.

    4. As part of an online survey typically given in the course (week 2), students also indicated their endorsement of the stereotype that men perform better than women in physics.

      Level of endorsement of the stereotype that men perform better than women in physics is a moderating variable. Its value influences how much the values affirmation intervention affects course performance.

    5. attempts to reduce identity threat in authentic classroom contexts have been limited

      One of the key features of this study is that it takes place in an authentic college classroom, rather than being an artificial, one-time laboratory experiment.

    1. We performed a genome-wide screen for loci affecting overall body size in six species of Darwin’s finches that primarily differ in size and size-related traits: the small, medium, and large ground finches, and the small, medium, and large tree finches (Fig. 1, A and B, and table S1)

      The investigators selected samples from six species of finches, then screened the entire genome of each species for genetic variants across individuals. The objective was to see whether variation at any given locus was associated with size and/or size-related traits.

    2. We genotyped a diagnostic SNP for the HMGA2 locus in medium ground finches on Daphne Major that experienced the severe drought in 2004–2005 (n = 71; 37 survived and 34 died) (11).

      To look at the specific finches that experienced the 2004-2005 drought, the researchers genotyped a HMGA2-specific SNP in both survivors and victims (71 total birds).

    3. we genotyped an additional 133 individuals of this species for a haplotype diagnostic SNP (A/G) at nucleotide position 7,003,776 base pairs in scaffold JH739900, ~2.3 kb downstream of HMGA2.

      Lamichhaney and colleagues took a closer look at another 133 birds. Specifically, they investigated a SNP at a certain position in the genome.

      At these 17 SNPs, the researchers knew that large finches were homozygous for one haplotype group (LL in Figure 2D) and small finches were homozygous for another (SS). However, what was going on with the medium ground/tree finches?

      This particular SNP was shown to be associated with only beak and body size within these medium finches.

    4. we investigated whether the HMGA2 locus is primarily associated with variation in body size, beak size, or both.

      HMGA2 had been identified as a candidate gene, and the SNPs within the 525-kb region within HMGA2 had been located. Therefore, the researchers attempted to see which trait this gene was specifically related to.

    5. We identified 17 SNPs showing high genetic divergence between large and small ground finches and tree finches (FST > 0.8) at nucleotide sites in highly conserved regions across birds and mammals (PhastCons score > 0.8) (Fig. 2C).

      The researchers again calculated the fixation index. However, this time it was only for the variable region (~525 kb in size) that contained the HMGA2 gene.

      Remember, the fixation index score ranged from 0 (complete sharing of genetic material) to 1 (no sharing of genetic material).

      For a good explanation of a SNP, see this video.

    6. We constructed a maximum-likelihood phylogenetic tree on the basis of this ~525-kb region

      The genome-wide fixation index scan found that a region around 525 kb in size showed the most striking differences.

      Lamichhaney and colleagues constructed another phylogenetic tree based on the alignment of this region in all of the samples.

    7. scan comparing large, medium, and small ground finches and tree finches (Table 1) identified seven independent genomic regions with consistent genetic differentiation (ZFST > 5) in each contrast (Fig. 2A and table S2).

      Researchers performed pairwise fixation index (see definition in the glossary section) scans across the whole genome, using non-overlapping 15-kb windows.

      The following three comparisons were made: 1) Large ground/tree versus medium ground/tree; 2) Large ground/tree versus small ground/tree; and 3) medium ground/tree versus small ground/tree.

      A compilation of SNP calls yielded 44,767,199 variable sites within or between populations. The fixation index score ranged from 0 (complete sharing of genetic material) to 1 (no sharing of genetic material).

      Each index value was then transformed into a Z-score, which is simply a measure of how many standard deviations below or above the population mean a raw score is. Further analysis was done if the Z-score of the fixation index was greater than five. See the supplementary materials for more information.
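
      The transformation itself is straightforward. Here is a minimal sketch in Python using hypothetical FST values (not the authors' pipeline):

      ```python
      import numpy as np

      def z_transform(fst_windows):
          # Express each window's FST as the number of standard deviations
          # above or below the genome-wide mean
          fst = np.asarray(fst_windows, dtype=float)
          return (fst - fst.mean()) / fst.std()

      # Hypothetical FST values for 1,000 non-overlapping 15-kb windows
      rng = np.random.default_rng(1)
      fst = rng.normal(0.05, 0.02, size=1000).clip(0, 1)
      fst[500] = 0.9  # one strongly divergent region

      z = z_transform(fst)
      print(np.flatnonzero(z > 5))  # flags window 500 for further analysis
      ```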

    8. We constructed a maximum-likelihood phylogenetic tree on the basis of all 180 genome sequences (Fig. 1C)

      The nucleotide alignment of the variable positions from all 180 samples (60 birds plus 120 from the previous study) allowed the scientists to generate a phylogeny using the software FastTree.

      See here to learn more about FastTree and its maximum-likelihood method.

    9. We combined these data with sequences from 120 birds, including all species of Darwin’s finches and two outgroup species (15)

      DNA extraction and whole-genome sequencing were performed using samples from 60 birds. In addition, 120 bird samples from a previous study (Lamichhaney 2015) were used.

      The sequence reads were quality-checked and trimmed using the FASTQC and FASTX software, respectively. For the full list of software used, see the supplementary materials.

      The researchers used the genome assembly of a medium ground finch as a reference genome. The reads from the samples were aligned to this reference.

    10. We sequenced 10 birds from each of the six species (total 60 birds) to ~10× coverage per individual, using 2 × 125–base pair paired-end reads. The sequences were aligned to the reference genome from a female medium ground finch (12).

      Lamichhaney and colleagues sequenced a total of 60 birds. Sequencing is a technique used to actually read the DNA.

      For a history of DNA sequencing and assembly, this resource from HHMI BioInteractive is a great tool.

      This video shows specifically how Illumina Sequencing Technology works.

      If a finch genome is 1 Gbp (one billion base pairs), sequencing "to ~10x coverage per individual" means obtaining about 10 Gbp of sequencing data per bird.

      "Using 2 x 125-base pair paired-end reads" refers to the fact that the fragments sequenced were sequenced from both ends and not just one. Refer to the videos above for more information.

    11. We then genotyped individuals of the Daphne population of medium ground finches that succumbed or survived during the drought of 2004–2005.

      Genotyping determines the alleles each individual finch carries at specific loci. With birds, this can typically be done using plucked feathers, blood, or eggshell membranes.

      The researchers here used blood samples that were collected on FTA paper and then stored at -80°C. FTA paper is treated to bind and protect nucleic acids from degrading.

      This type of scanning is used to identify specific gene markers that are highly variable. Researchers wanted to identify a locus that showed beak size variation.

    1. Compared with REPAIRv1, REPAIRv2 exhibited increased specificity, with a reduction from 18,385 to 20 transcriptome-wide off-targets with high-coverage sequencing (125x coverage, 10 ng of REPAIR vector transfected)

      To more rigorously compare the off-target activity of two systems, the authors performed sequencing with higher coverage.

      Recall that earlier they used 12.5x coverage. Here, they used 125x coverage.

      Why is this important? Cellular genes are expressed at different levels which leads to a different number of individual mRNA molecules. The more abundant a particular molecule is, the easier it is to detect it at a given coverage. When you increase the coverage, you have a chance to catch molecules that are less abundant in the cell.

      This is exactly what happened in the experiment with REPAIRv1. At 125x coverage, the authors detected 18,385 transcriptome-wide off-target edits (on the order of the ~20,000 protein-coding genes in the human genome). By contrast, the REPAIRv2 system was far more specific, producing roughly 900-fold fewer off-targets (20 in total).

    2. We further explored motifs surrounding off-targets for the various specificity mutants

      Inspired by other explorations into 3' and 5' motifs, the authors looked at transcripts with off-target effects—specifically at two nucleotides surrounding the edited adenosine.

    3. A majority of mutants either significantly improved the luciferase activity for the targeting guide or increased the ratio of targeting to nontargeting guide activity, which we termed the specificity score

      The authors looked at two characteristics of the modified protein variants.

      First, they tested whether the mutant had changed its editing activity. This was calculated by looking at the restoration of the Cluc luciferase signal. While the authors didn't necessarily want increased editing activity, they wanted to avoid a loss of editing activity. 

      However, higher catalytic activity can also lead to more off-target effects. Therefore, the authors calculated the ratio between the Cluc signal in targeting and non-targeting conditions, i.e. the specificity score. The higher the score, the more specific a mutant variant was.
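
      As a minimal sketch (variable names and readings are ours, not the authors'), the specificity score could be computed from the luminescence measurements like this:

      ```python
      def specificity_score(cluc_targeting, cluc_nontargeting):
          # Ratio of restored luciferase signal with the targeting guide to
          # the background signal with the non-targeting guide; a higher
          # ratio means a more specific mutant
          return cluc_targeting / cluc_nontargeting

      # Hypothetical readings for two mutants with similar on-target activity
      print(specificity_score(1000.0, 50.0))   # 20.0 -> more specific
      print(specificity_score(1000.0, 400.0))  # 2.5  -> more promiscuous
      ```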

    4. we generated an RNA-editing reporter on Cluc by introducing a nonsense mutation [W85X (UGG→UAG)],

      To create the reporter, the researchers "broke" the gene for the Cluc luciferase by introducing a mutation in the UGG codon, changing it to UAG (a nonsense mutation, which signals the ribosome to stop translation). Since this codon was positioned at the beginning of the Cluc transcript, no luciferase was synthesized in the cell.

      The A (adenosine) in the UAG codon was the target for RNA editing. Both Cas13b-mediated RNA recognition and ADAR-mediated editing were required to remove the stop codon at the beginning of the Cluc transcript, which would restore Cluc expression.

      This means that Cluc luminescence would only be seen where editing (both targeting and base conversion) was successful.

    5. We next characterized the interference specificity of PspCas13b and LwaCas13a across the mRNA fraction of the transcriptome.

      The next step was to understand the specificity of Cas13 across the whole cellular transcriptome (the portion of the genome that is transcribed). In the previous experiments, the researchers looked at the expression of only one target (unmodified or modified). Here, they widened their view to every transcript in the cell.

      To do that, the authors transfected the cells with one of the Cas13 enzymes (LwaCas13a or PspCas13b), a gRNA targeting Gluc, and a plasmid carrying the gene for Gluc.

      The control cells got an irrelevant gRNA instead of the gRNA targeting Gluc. As an additional control for comparison, the authors used shRNA-mediated knockdown of Gluc in a parallel cell culture.

      After 48 hours the researchers collected the cells, extracted mRNA, and determined the sequences and the number of copies for each transcript.

    6. We transfected HEK293FT cells with either LwaCas13a or PspCas13b, a fixed guide RNA targeting the unmodified target sequence, and the mismatched target library corresponding to the appropriate system.

      The cells were transfected with a nuclease, a gRNA for the non-mutated target site and a whole library with all possible plasmid variants. After 48 hours the researchers collected the cells, extracted RNA, and determined which mutated sequences from the library were left uncleaved.

      The transcript with the unmodified sequence was depleted most efficiently so that its level was the lowest after the cleavage. The levels of all other sequences with substitutions decreased to a lesser extent or did not decrease at all. The better Cas13 cut the sequence, the higher the depletion of this sequence was.

      The authors then compared the sequences by their "depletion scores."
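
      One way to picture a depletion score (a sketch under our assumptions, not necessarily the authors' exact formula) is as the log-fold drop in a variant's abundance after cleavage:

      ```python
      import numpy as np

      def depletion_scores(counts_control, counts_cas13):
          # Log2 drop in abundance of each library variant after Cas13
          # cleavage, relative to a control condition; larger = better cut
          control = np.asarray(counts_control, dtype=float)
          treated = np.asarray(counts_cas13, dtype=float)
          return np.log2(control + 1) - np.log2(treated + 1)

      # Hypothetical read counts: perfect-match target vs. two mismatched variants
      print(depletion_scores([1000, 1000, 1000], [10, 300, 950]))
      # -> [6.51, 1.73, 0.07]: the perfect match is depleted most strongly
      ```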

    7. Sequencing showed that almost all PFS combinations allowed robust knockdown

      Substitutions in the PFS motifs did not affect how well Cas13a and Cas13b found and cut the target sequences. As a result, sequences with such substitutions were depleted as successfully as the control sequence, which was unmodified.

    8. To more rigorously define the activity of PspCas13b and LwaCas13a, we designed position-matched guides tiling along both Gluc and Cluc transcripts and assayed their activity using our luciferase reporter assay.

      To figure out which parts of the Gluc or Cluc RNA molecules were the best targets for Cas13, the authors generated a series of gRNA guides where each guide was shifted one to several nucleotides relative to the previous one. This is called tiling.

      In this way, the guides together could cover the whole sequence or a part of a sequence that a researcher was interested in. See figure 4A and 4C or figure 5A for a visual of how the guides were "tiled."

    9. Therefore, we tested the interference activity of the seven selected Cas13 orthologs C-terminally fused to one of six different localization tags without msfGFP.

      The authors took the seven selected orthologs from the previous part of the study and replaced the msfGFP domain at the C-terminus of each with one of six different localization sequences. They then tested Gluc knockdown in the same way they previously tested LwaCas13a.

    10. We transfected human embryonic kidney (HEK) 293FT cells with Cas13-expression, guide RNA, and reporter plasmids and then quantified levels of Cas13 expression and the targeted Gluc 48 hours later

      To directly compare the effectiveness of Cas13a, b, and c orthologs, the authors transfected cells with two luciferases, Cas13 and two different gRNAs targeting Gluc luciferase.

      They measured Gluc luciferase activity. Reduced Gluc luciferase activity indicated interference from the Cas13 ortholog and successful targeting by the gRNA.

      They determined the expression of Cas13 to see whether Gluc knockdown depended on the quantity of Cas13 rather than on the specific ortholog.

    11. Here, we describe the development of a precise and flexible RNA base editing technology using the type VI CRISPR-associated RNA-guided ribonuclease (RNase) Cas13

      In this article, the authors describe how they created a system that can edit RNA molecules. They used a Cas13 protein fused to an adenine deaminase. The Cas13 protein recognized specific sequences on an RNA molecule, and the adenine deaminase edited bases, which can convert A to I (which is functionally read as a G).

      The authors improved the targeting specificity (accuracy) and editing rate (precision) by Cas13 and deaminase mutagenesis, and determined the sequences for which this system is most effective. They showed one application of this technology by correcting a series of disease-causing mutations at the cellular level.

    1. We evaluated the collective scrolling and tribological behavior of many individual graphene patches and created a density distribution of their tribological state in order to assess their contribution to the observed friction

      The authors conducted theoretical simulations of the sliding behavior of an ensemble of graphene patches to elucidate the macroscale scrolling phenomenon. To explore the mesoscopic friction behavior, they calculated the number density of patches (number of patches per unit volume) as a function of friction coefficient and time, by binning the friction coefficients collected over the ensemble of graphene patches.
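
      Conceptually (a simplified sketch, not the authors' MD post-processing), such a density distribution can be built by histogramming the per-patch friction coefficients at each time step:

      ```python
      import numpy as np

      def friction_density_map(mu, bins=50):
          # mu: array of shape (n_timesteps, n_patches) holding the friction
          # coefficient of each graphene patch at each time step; returns a
          # (n_timesteps, bins) number-density map plus the bin edges
          mu = np.asarray(mu, dtype=float)
          edges = np.linspace(mu.min(), mu.max(), bins + 1)
          density = np.stack([np.histogram(row, bins=edges)[0] for row in mu])
          return density, edges

      # Toy example: 3 time steps, 5 patches drifting toward low friction
      mu = np.array([[0.30, 0.28, 0.31, 0.29, 0.30],
                     [0.10, 0.25, 0.08, 0.27, 0.12],
                     [0.01, 0.02, 0.01, 0.02, 0.01]])
      density, edges = friction_density_map(mu, bins=6)
      print(density)
      ```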

    2. we performed a large-scale MD simulation for an ensemble of graphene-plus-nanodiamonds present between DLC and the underlying multilayered graphene substrate (fig. S8).

      To understand how friction transitions from the nanoscale to the macroscopic superlubric condition observed in experiments, the authors simulated a mesoscopic scenario. They created and analyzed an ensemble (an assembly of systems) of graphene patches and nanodiamonds between the DLC and the underlying multilayered graphene substrate, subjected to sliding.

    3. We have simulated the effects of surface chemistry and considered the role of defects

      To understand the role of defects in superlubricity, the authors performed computer simulations by introducing double vacancies and Stone-Wales defects on graphene sheets. Studies were conducted in both dry and humid environments.

    4. DLC-nanodiamond-graphene system in a humid environment

      After experimentally observing the effect of humidity on friction, the authors extended their studies with computer simulations to further analyze the interaction between water molecules and graphene in the DLC-nanodiamond-graphene system in a humid environment.

  11. Apr 2019
    1. This prototype includes a MOF-801 layer (packing porosity of ~0.85, 5 by 5 by 0.31 cm, containing 1.34 g of activated MOF), an acrylic enclosure, and a condenser

      Since this paper was published, the authors refined and optimized the device and tested it under desert conditions with record-high efficiency.

      See "Related Content" tab for: Kim, Hyunho, et al. "Adsorption-based atmospheric water harvesting device for arid climates." Nature communications 9.1 (2018): 1191.

    2. Experiments were performed in a RH-controlled environmental chamber interfaced with a solar simulator.

      To test the material in a laboratory setup, the authors use an enclosed chamber in which conditions such as humidity, temperature, and solar illumination can be regulated. This guarantees control over the experimental conditions and reproducibility.

    3. activated (solvent removal from the pores) by heating at 150°C under vacuum for 24 hours

      Heating under reduced pressure lowers the boiling point of liquids. This allows all the solvent and water molecules trapped in the MOF to evaporate easily, emptying all the cavities before starting the experiment.

    1. We carried out more detailed analysis of the wear track that

      In order to understand the wear properties of the graphene-nanodiamond compound after the sliding experiments, the authors performed electron microscopy studies which can reveal the structure of the material in the wear debris.

    2. Raman analysis

      Raman spectroscopy is a chemical analysis technique capable of probing the chemical structure, crystallinity, and molecular interactions of materials.

    3. The contact area normalized with respect to the initial value at t = 0 is ~1 (22), as shown in Fig. 4C

      The authors defined the contact area as the area of graphene atoms that lie within the range of chemical interaction of the DLC tip atoms. The normalized contact area is the contact area at any time t divided by the initial contact area at t = 0 (when the graphene patches are fully expanded).

    4. To further explore the superlubricity mechanism, we performed molecular dynamics (MD) simulations (table S1)

      In order to elucidate the mechanism of graphene nanoscroll formation and the origin of the superlubric state, the authors conducted computer simulation studies.

    5. Our experiments suggest that the humid environment

      To investigate the effect of environmental conditions on nanoscale friction and superlubricity, the authors conducted experiments in humid air in place of dry nitrogen.

    1. computed by subtracting the climatological temperature value (17) for the month in which the profile was measured

      For example, if a temperature profile was taken on February 19th, then for each depth in that profile the authors subtracted the average value for all Februaries over the last 50 years at that depth. This removes the seasonal temperature cycle from the dataset, allowing the authors to focus on the long-term variability instead.
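
      In code terms, the anomaly computation might look like this minimal pandas sketch (the column names are our assumptions, not the authors'):

      ```python
      import pandas as pd

      def remove_seasonal_cycle(profiles, climatology):
          # profiles: DataFrame with columns [date, depth, temp], one row per
          # measurement; climatology: DataFrame with columns [month, depth,
          # temp_clim], the long-term monthly mean at each depth
          out = profiles.copy()
          out["month"] = pd.to_datetime(out["date"]).dt.month
          out = out.merge(climatology, on=["month", "depth"], how="left")
          out["anomaly"] = out["temp"] - out["temp_clim"]  # seasonal cycle removed
          return out
      ```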

    2. computed the contribution to the vertically integrated field shown in Fig. 3B from each 500-m layer

      By examining the ocean in distinct depth increments of 500m each, the authors aimed to determine where most of the cooling and heating of the North Atlantic is occurring.

    1. unconstrained canonical correspondence analysis

      Refers to a statistical method that searches for multivariate relationships between two data sets.

      This method is most often used in genetics and ecological sciences. Learn more about why, how, and when to use it here.

    1. To investigate PFS constraints on REPAIRv1, we designed a plasmid library that carryies a series of four randomized nucleotides at the 5′ end of a target site on the Cluc transcript

      Though the authors had already characterized PFS preferences for Cas13b, they needed to check that the fusion of Cas13b with ADAR did not change its targeting efficiency and specificity. The researchers also wanted to confirm that editing would work regardless of PFS. This matters because DNA base editors are constrained by the PAM requirements of Cas9 and Cpf1; if REPAIR has no strict PFS constraint, RNA editing can target essentially anywhere in the transcriptome. Therefore, it was important to check the PFS constraints again.