- Nov 2024
-
Local file
-
Retreatment indications
Retreatment indications refer to the conditions showing that a root canal (endodontic) treatment needs to be redone. Retreatment is needed when a previously treated tooth fails for various reasons or when new problems develop.
-
extrusion
pushing out
-
Preventing apical extrusion of root canal contents.
-
During Endodontic Treatment
There are several important reasons why the endodontic working length is set approximately 1 mm short of the anatomic apex:
- Difference between the physiologic foramen and the anatomic apex: The anatomic apex (the outermost point of the root tip) and the physiologic foramen (the point where the nerves and vessels exit the tooth) are generally not in exactly the same place. The physiologic foramen usually lies about 0.5-1 mm from the anatomic apex. During treatment, the aim is to keep the root canal filling from extending beyond the physiologic foramen, since overextension can lead to pain and a risk of infection.
- Preventing periapical tissue damage: If the working length extends past the anatomic apex, root canal instruments or filling materials can damage the periapical tissues at the root tip. This increases the risk of pain, infection, and treatment failure.
- Ensuring a proper fill and seal: Keeping the working length 1 mm short prevents the filling material from extruding past the root tip. If filling material extrudes beyond the root, problems such as irritation and chronic inflammation can arise.
- Correcting for radiographic measurement error: Root length measured on radiographs can be misleading because of anatomic and physiologic variation, so a 1 mm safety margin is left to reduce the risk of error.
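As a worked example of the safety margin (the 21 mm measurement below is hypothetical, for illustration only, not a clinical protocol):

```latex
% Working length (WL) derived from a radiographic length measurement,
% with a 1 mm safety margin to stay short of the anatomic apex.
\[
\mathrm{WL} = L_{\mathrm{radiographic}} - 1\,\mathrm{mm}
\qquad\Longrightarrow\qquad
\mathrm{WL} = 21\,\mathrm{mm} - 1\,\mathrm{mm} = 20\,\mathrm{mm}
\]
```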
-
Distinguishing anatomic apex from radiologic apex at the working length determination
This phrase refers to the process of correctly identifying the endpoint of the root canal during root canal treatment. Two key concepts stand out here:
Anatomic apex:
The point where the natural structure of the tooth root ends; generally accepted as the point where the tooth physiologically terminates.
Radiologic apex:
The root tip as seen on the radiographic image. The radiograph may not always reflect the anatomic apex exactly and can be misleading.
-
Iatrogenic defects
Iatrogenic defects are damage or problems that arise unintentionally during medical or dental procedures. The term "iatrogenic" is of Greek origin and means "caused by the physician." Such harm can result from mistakes made during a treatment or procedure, or from unforeseeable side effects.
Examples in dentistry: Root canal perforation: perforating the root in the wrong place during canal treatment. Over-reduction of the tooth: removing more tooth structure than necessary while preparing for a prosthesis or filling. Damage to an adjacent tooth: accidentally harming a neighboring tooth while treating another. Periodontal damage: injury to the gingiva or surrounding tissues.
-
Dimensions of the coronal/radicular pulp chamber and calcifications
-
Follow-up
to follow up
-
Diagnosis of odontogenic and nonodontogenic pathologies
Odontogenic pathologies:
Diseases that originate from the teeth or tooth-associated tissues. Examples: dental abscesses, cysts (odontogenic cysts), tooth-derived tumors.
Nonodontogenic pathologies:
Diseases that are not directly related to the teeth but affect the oral or jaw region. Examples: sinus infections, bone tumors, lesions due to trauma or systemic disease.
-
It captures the image as three-dimensional pixel units called voxels. This allows the image to be of higher resolution.
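A minimal sketch (all values hypothetical) of how voxel size sets the sampling density, and hence resolution, of a 3D volume such as a CBCT scan:

```python
# Minimal sketch: how voxel size relates to image resolution in a 3D scan.
# All numbers are hypothetical, for illustration only.

field_of_view_mm = (80.0, 80.0, 80.0)  # scanned volume (x, y, z), in mm
voxel_size_mm = 0.2                    # isotropic voxel edge length, in mm

# Voxels per axis: smaller voxels -> more samples -> finer spatial detail.
dims = tuple(int(round(extent / voxel_size_mm)) for extent in field_of_view_mm)
total_voxels = dims[0] * dims[1] * dims[2]

print(f"grid: {dims[0]} x {dims[1]} x {dims[2]} = {total_voxels:,} voxels")
# Halving the voxel size doubles the sampling density along each axis,
# increasing the voxel count (and data size) roughly eightfold.
```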
-
Low radiation dose
-
-
chem.libretexts.org
-
salicylic acid
2-hydroxybenzoic acid, per IUPAC nomenclature.
-
-
-
also on the deeper conditions of social soil
The Presencing Institute's Theory U is one approach to co-sensing and co-cultivating the relational soil. Prosocial.world's ACT matrix is a great tool for setting up for co-compassing forward. See here for a demo: https://www.sociocracyforall.org/activating-collective-resilience-and-intimacy-michael-lennon-kathleen-walsh-loubna-echabbi/
-
-
www.biorxiv.org
-
eLife Assessment
This study utilizes an elegant approach to examine valence encoding of the mesolimbic dopamine system. The findings are valuable, demonstrating differential responses of dopamine to the same taste stimulus according to its valence (i.e., appetitive or aversive) and in alignment with distinct behavioral responses. The evidence supporting the claims is convincing, resulting from a well-controlled experimental design with minimal confounds and thorough reporting of the data.
-
Reviewer #1 (Public review):
Summary:
Loh and colleagues investigate valence encoding in the mesolimbic dopamine system. Using an elegant approach, they show that when sucrose, which normally evokes strong dopamine neuron activity and release in the nucleus accumbens, is made aversive via conditioned taste aversion, the same sucrose stimulus later evokes much less dopamine neuron activity and release. Thus, dopamine activity can dynamically track the changing valence of an unconditioned stimulus. These results are important for helping clarify valence- and value-related questions that are the matter of ongoing debate regarding dopamine functions in the field.
Strengths:
This is an elegant way to ask this question: the within-subjects design and the continuity of the stimulus are a strong way to remove a lot of the common confounds that make it difficult to interpret valence-related questions. I think these are valuable studies that help tie up questions in the field while also setting up a number of interesting future directions. There are a number of control experiments and tweaks to the design that help eliminate a number of competing hypotheses regarding the results. The data are clearly presented and contextualized.
Weaknesses for consideration:
The focus on one relatively understudied region of the rat striatum for dopamine recordings could potentially limit generalization of the findings. While this can be determined in future studies, the implications should be further discussed in the current manuscript.
-
Reviewer #2 (Public review):
Summary:
Koh et al. report an interesting manuscript studying dopamine binding in the lateral accumbens shell of rats across the course of conditioned taste aversion. The question being asked here is how does the dopamine system respond to aversion? The authors take advantage of unique properties of taste aversion learning (notably, within-subjects remapping of valence to the same physical stimulus) to address this.
Combining a well-controlled behavioural design (including key unpaired controls) with fibre photometry of dopamine binding via GrabDA and of dopamine neuron activity via GCaMP, together with careful analyses of behaviour (e.g., head movements; home cage ingestion), the authors show that: 1) conditioned taste aversion of sucrose suppresses the activity of VTA dopamine neurons and lateral shell dopamine binding to subsequent presentations of the sucrose tastant; 2) this pattern of activity was similar to the innately aversive tastant quinine; 3) dopamine responses were negatively correlated with behavioural reactivity (inferred taste reactivity); and 4) dopamine responses tracked the contingency between sucrose and illness, because these responses recovered across extinction of the conditioned taste aversion.
Strengths:
There are important strengths here: the use of a well-controlled design, the measurement of both dopamine binding and VTA dopamine neuron activity, the inclusion of an extinction manipulation, and the thorough reporting of the data. I was not especially surprised by these results, but these data are a potentially important piece of the dopamine puzzle (e.g., as the authors note, a salience-based argument struggles to explain these data).
Weaknesses for consideration:
(1) The focus here is on the lateral shell. This is a poorly investigated region in the context of the questions being asked here. Indeed, I suspect many readers might expect a focus on the medial shell. So, I think this focus is important. But I think it does warrant greater attention in both the introduction and discussion. We do know from past work that there can be extensive compartmentalisation of dopamine responses to appetitive and aversive events, and many of the inconsistent findings in the literature can be reconciled by careful examination of where dopamine is assessed. I do think readers would benefit from acknowledgement of this - for example, it is entirely reasonable to suppose that the findings here may be specific to the lateral shell.
(2) Relatedly, I think readers would benefit from an explicit rationale for studying the lateral shell as well as consideration of this in the discussion. We know that there are anatomical (PMID: 17574681), functional (PMID: 10357457), and cellular (PMID: 7906426) differences between the lateral shell and the rest of the ventral striatum. Critically, we know that profiles of dopamine binding during ingestive behaviours there can be highly dissimilar to the rest of ventral striatum (PMID: 32669355). I do think these points are worth considering.
(3) I found the data to be very thoughtfully analysed. But in places I was somewhat unsure:
(a) Please indicate clearly in the text when photometry data show averages across trials versus when they show averages across animals.
(b) I did struggle with the correlation analyses, for two reasons.
(i) First, the key finding here is that the dopamine response to intraoral sucrose is suppressed by taste aversion. So, this will significantly restrict the range of dopamine transients, making interpretation of the correlations difficult.
(ii) Second, the authors report correlations by combining data across groups/conditions. I understand why the authors have done this, but it does risk obscuring differences between the groups. So, my question is: what happens to this trend when the correlations are computed separately for each group? I suspect other readers will share the same question. I think reporting these separate correlations would be very helpful for the field - regardless of the outcome.
(4) Figure 1A is not as helpful as it might be. I do think readers would expect a more precise reporting of GCaMP expression in TH+ and TH- neurons. I also note that many of the nuances in terms of compartmentalisation of dopamine signalling discussed above apply to ventral tegmental area dopamine neurons (e.g. medial v lateral) and this is worth acknowledging when interpreting the results.
-
Reviewer #3 (Public review):
Summary:
This study helps to clarify the mixed literature on dopamine responses to aversive stimuli. While it is well accepted that dopamine in the ventral striatum increases in response to various rewarding and appetitive stimuli, aversive stimuli have been shown to evoke phasic increases or decreases depending on the exact aversive stimulus, behavioral paradigm, and/or dopamine recording method and location examined. Here the authors use a well-designed set of experiments to show differential responses to an appetitive primary reward (sucrose) that later becomes a conditioned aversive stimulus (sucrose previously paired with lithium chloride in a conditioned taste aversion paradigm). The results are interesting and add valuable data to the question of how the mesolimbic dopamine system encodes aversive stimuli; however, the conclusions are strongly stated given that the current data do not necessarily align with prior conflicting data in terms of recording location, and it is not clear exactly how to interpret the generally biphasic dopamine response to the CTA-sucrose, which also evolves over exposures within a single session.
Strengths:
• The authors nicely demonstrate that their two aversive stimuli examined, quinine and sucrose following CTA, evoked aversive facial expressions and paw movements that differed from those following rewarding sucrose to support that the stimuli experienced by the rats differ in valence.
• The authors examined dopamine responses to the exact same sensory stimuli conditioned to have opposing valences, avoiding standard confounds of appetitive and aversive stimuli being sensed by different sensory modalities (i.e., sweet taste vs. electric shock).
• The authors examined multiple measurements of dopamine activity - cell body calcium (GCaMP6f) in midbrain and release in NAc (Grab-DA2h), which is useful as the prior mixed literature on aversive dopamine responses comes from a variety of recording methods.
• Correlations between sucrose preference and dopamine signals demonstrate behavioral relevance of the differential dopamine signals.
• The delayed testing experiment in Figure 7 nicely controls for the effect of time to demonstrate that the "rewarding" dopamine response to sucrose only recovers after multiple extinction sucrose exposures to extinguish the CTA.
Weaknesses for consideration:
• Regional differences in dopamine signaling to aversive stimuli are mentioned in the introduction and discussion. For instance, the idea that dopamine encodes salience is strongly argued against in the discussion, but the paper cited as arguing for that (Kutlu et al. 2021) is recording from the medial core in mice. Given other papers cited in the text about the regional differences in dopamine signaling in the NAc and from different populations of dopamine neurons in midbrain, it is important to mention this distinction with respect to salience signaling. Relatedly, the text says that the lateral NAc shell was targeted for accumbens recordings, but the histology figure looks like the majority of fibers were in the anterior lateral core of the NAc. For the current paper to be a convincing last word on the issue, it would be extremely helpful to have similar recordings done in other parts of the NAc to allow a more thorough comparison against other studies.
• Dopamine release in the NAc never dips below baseline for the conditioned sucrose. Is it possible to really consider this as a signal for valence per se, as opposed to it being a weaker response relative to the original sucrose response?
• Related to this, the main measure of the dopamine signal here, "mean z-score," obscures the temporal dynamics of the aversive dopamine response across a trial. This measure is used to claim that sucrose after CTA is "suppressing" dopamine neuron activity and release, which is true relative to the positive valence sucrose response. However, both GRAB-DA and cell-body GCaMP measurements show clear increases after onset of sucrose infusion before dipping back to baseline or slightly below in the average of all example experiments displayed. One could point to these data to argue either that aversive stimuli cause phasic increases in dopamine (due to the initial increase) or decreases (due to the delayed dip below baseline) depending on the measurement window. Some discussion of the dynamics of the response and how it relates to the prior literature would be useful.
- Would this delayed below-baseline dip be visible with a shorter infusion time?
- Does the max of the increase or the dip of the decrease better correlate with the behavioral measures of aversion (orofacial, paw movements) or sucrose preference than the "mean z-score" measure used here?
- The authors argue strongly in the discussion against the idea that dopamine is encoding "salience." Could this initial peak (also seen in the first few trials of quinine delivery, Fig. 1c color plot) be a "salience" response?
• Related to this, the color plots showing individual trials show a reduction in the increases to positive valence sucrose across conditioning day trials and a flip from infusion-onset increases to delayed increases across test day trials. This evolution across days makes it appear that the last few conditioning day trials would be impossible to discriminate from the first few test day trials in the CTA-paired group. Presumably, given the strength of CTA as a paradigm, the sucrose is already aversive to the animals at the first trial of test day. Why do the authors think the response evolves across this session?
• Given that most of the work is using a conditioned aversive stimulus, the comparison to a primary aversive tastant, quinine, is useful. However, the authors saw basically no dopamine response to the primary aversive tastant quinine (measured only with GRAB-DA) and saw less noticeable decreases following CTA for NAc recordings with GRAB-DA2h than with cell-body GCaMP. Given that they are using the high-affinity version of the GRAB sensor, this calls into question whether this is a true difference in release vs. soma activity or an issue of the high-affinity release sensor making decreases in dopamine levels more difficult to observe.
-
Author response:
Reviewer #1 (Public review):
Summary:
Loh and colleagues investigate valence encoding in the mesolimbic dopamine system. Using an elegant approach, they show that when sucrose, which normally evokes strong dopamine neuron activity and release in the nucleus accumbens, is made aversive via conditioned taste aversion, the same sucrose stimulus later evokes much less dopamine neuron activity and release. Thus, dopamine activity can dynamically track the changing valence of an unconditioned stimulus. These results are important for helping clarify valence- and value-related questions that are the matter of ongoing debate regarding dopamine functions in the field.
Strengths:
This is an elegant way to ask this question: the within-subjects design and the continuity of the stimulus are a strong way to remove a lot of the common confounds that make it difficult to interpret valence-related questions. I think these are valuable studies that help tie up questions in the field while also setting up a number of interesting future directions. There are a number of control experiments and tweaks to the design that help eliminate a number of competing hypotheses regarding the results. The data are clearly presented and contextualized.
Weaknesses for consideration:
The focus on one relatively understudied region of the rat striatum for dopamine recordings could potentially limit generalization of the findings. While this can be determined in future studies, the implications should be further discussed in the current manuscript.
We agree that the manuscript would benefit from providing a stronger rationale for our recording sites and acknowledging the potential for regional differences in dopamine signaling. We have made the following additions to the manuscript:
Added to the Discussion: “Recordings were targeted to the lateral VTA and the corresponding approximate terminal site in the NAc lateral shell (Lammel et al., 2008). Subregional differences in dopamine activity likely contribute to mixed findings on dopamine and affect. For example, dopamine in the NAc lateral shell differentially encodes cues predictive of rewarding sucrose and aversive footshock, which is distinct from NAc medial shell dopamine responses (de Jong et al., 2019). Our findings are similar to prior work from our group targeting recordings to the NAc dorsomedial shell (Hsu et al., 2020; McCutcheon et al., 2012; Roitman et al., 2008): there, intraoral sucrose increased NAc dopamine release while the response in the same rats to quinine was significantly lower.”
Reviewer #2 (Public review):
Summary:
Koh et al. report an interesting manuscript studying dopamine binding in the lateral accumbens shell of rats across the course of conditioned taste aversion. The question being asked here is how does the dopamine system respond to aversion? The authors take advantage of unique properties of taste aversion learning (notably, within-subjects remapping of valence to the same physical stimulus) to address this.
Combining a well-controlled behavioural design (including key unpaired controls) with fibre photometry of dopamine binding via GrabDA and of dopamine neuron activity via GCaMP, together with careful analyses of behaviour (e.g., head movements; home cage ingestion), the authors show that: 1) conditioned taste aversion of sucrose suppresses the activity of VTA dopamine neurons and lateral shell dopamine binding to subsequent presentations of the sucrose tastant; 2) this pattern of activity was similar to the innately aversive tastant quinine; 3) dopamine responses were negatively correlated with behavioural reactivity (inferred taste reactivity); and 4) dopamine responses tracked the contingency between sucrose and illness, because these responses recovered across extinction of the conditioned taste aversion.
Strengths:
There are important strengths here: the use of a well-controlled design, the measurement of both dopamine binding and VTA dopamine neuron activity, the inclusion of an extinction manipulation, and the thorough reporting of the data. I was not especially surprised by these results, but these data are a potentially important piece of the dopamine puzzle (e.g., as the authors note, a salience-based argument struggles to explain these data).
Weaknesses for consideration:
(1) The focus here is on the lateral shell. This is a poorly investigated region in the context of the questions being asked here. Indeed, I suspect many readers might expect a focus on the medial shell. So, I think this focus is important. But I think it does warrant greater attention in both the introduction and discussion. We do know from past work that there can be extensive compartmentalisation of dopamine responses to appetitive and aversive events, and many of the inconsistent findings in the literature can be reconciled by careful examination of where dopamine is assessed. I do think readers would benefit from acknowledgement of this - for example, it is entirely reasonable to suppose that the findings here may be specific to the lateral shell.
As with our response to Reviewer 1, we agree that we should provide further rationale for focusing our recordings on the lateral shell and acknowledge potential differences in dopamine dynamics across NAc subregions. In addition to the changes in the Discussion detailed in our response to Reviewer 1, we have made the following additions to the Introduction:
Added to the Introduction: “NAc lateral shell dopamine differentially encodes cues predictive of rewarding (i.e., sipper spout with sucrose) and aversive stimuli (i.e., footshock), which is distinct from other subregions (de Jong et al., 2019). It is important to note that other regions of the NAc may serve as hedonic hotspots (e.g., dorsomedial shell) or may more closely align with the signaling of salience (e.g., ventromedial shell; Yuan et al., 2021).”
(2) Relatedly, I think readers would benefit from an explicit rationale for studying the lateral shell as well as consideration of this in the discussion. We know that there are anatomical (PMID: 17574681), functional (PMID: 10357457), and cellular (PMID: 7906426) differences between the lateral shell and the rest of the ventral striatum. Critically, we know that profiles of dopamine binding during ingestive behaviours there can be highly dissimilar to the rest of ventral striatum (PMID: 32669355). I do think these points are worth considering.
There are several reasons why dopamine dynamics were recorded in the NAc lateral shell:
(1) Dopamine neurons in more medial aspects of the VTA preferentially target the NAc medial shell and core whereas dopamine neurons in the lateral VTA – our target for VTA DA recordings – project to the lateral shell of the NAc (Lammel et al., 2008). Thus, our goal was to sample NAc release dynamics in areas that receive projections from our cell body recording sites.
(2) Cues predictive of reward availability (i.e., sipper spout with sucrose) and aversive stimuli (i.e., footshock) are differentially encoded by NAc lateral shell dopamine, which is distinct from NAc ventromedial shell dopamine responses (de Jong et al., 2019). These findings suggest a role for NAc lateral shell dopamine in the encoding of a stimulus’s valence, which made the subregion an area of interest for further examination.
(3) With respect to the medial NAc shell specifically, an extensive literature had already shown it to be a ‘hedonic hotspot’ (Morales and Berridge, 2020; Yuan et al., 2021), whereas the ventral portion is more mixed with respect to valence (Yuan et al., 2021). We had previously shown that intraoral infusions of primary taste stimuli of opposing valence (i.e., sucrose and quinine) evoke differential responses in dopamine release within the NAc dorsomedial shell (Roitman et al., 2008). We more recently replicated differential dopamine responses from dopamine cell bodies in the lateral VTA (Hsu et al., 2020) and thus endeavored to examine the possibility of changing dopamine responses in the lateral VTA to the same stimulus as its valence changes. Given these choices, measuring dopamine release in the lateral shell was logical. The field would greatly benefit from continued future work surveying the entirety of the VTA DA projection terminus.
We have included these points of justification in the Introduction and Discussion sections.
(3) I found the data to be very thoughtfully analysed. But in places I was somewhat unsure:
(a) Please indicate clearly in the text when photometry data show averages across trials versus when they show averages across animals.
We have now explicitly indicated in the figure legends of Figures 1, 3, 5, 7, and 8:
(1) In heat maps, each row represents the averaged (across rats) response on that trial.
(2) Traces below heat maps represent the response to infusion averaged first across trials for each rat and then across all rats.
(3) Insets represent the average z-score across the infusion period averaged first across all trials for each rat and then across all rats.
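A minimal sketch of this two-stage averaging (hypothetical array shapes and infusion window; illustrative, not the authors' analysis code):

```python
import numpy as np

# Hypothetical z-scored photometry data: (n_rats, n_trials, n_samples).
rng = np.random.default_rng(0)
data = rng.normal(size=(8, 30, 500))

# (1) Heat map rows: average across rats for each trial -> (n_trials, n_samples).
per_trial = data.mean(axis=0)

# (2) Trace below the heat map: average across trials within each rat first,
#     then across rats -> (n_samples,).
per_rat_trace = data.mean(axis=1)          # (n_rats, n_samples)
grand_trace = per_rat_trace.mean(axis=0)

# (3) Inset: mean z-score over the infusion period, averaged across trials
#     per rat and then across rats (the sample window here is hypothetical).
infusion = slice(100, 300)
per_rat_mean = data[:, :, infusion].mean(axis=(1, 2))  # one value per rat
inset_value = per_rat_mean.mean()
print(per_trial.shape, grand_trace.shape, round(float(inset_value), 4))
```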
(b) I did struggle with the correlation analyses, for two reasons.
(i) First, the key finding here is that the dopamine response to intraoral sucrose is suppressed by taste aversion. So, this will significantly restrict the range of dopamine transients, making interpretation of the correlations difficult.
The overall hypothesis is that the dopamine response would correlate with the valence of a taste stimulus – even and especially when the stimulus remained constant but its valence changed. We inferred valence from the behavioral reactivity to the stimulus – reasoning that an appetitive taste will evoke minimal movement of the nose and paws (presumably because the animals are primarily engaging in small mouth movements associated with ingestion, as shown by the seminal work of Grill and Norgren (1978) and the many studies published by the K.C. Berridge group), whereas an aversive taste will evoke significantly more movement as the rats engage in rejection responses (e.g., forelimb flails, chin rubs, etc.). When we conducted our regression analyses, we endeavored to be as transparent as possible and labeled each symbol based on group (Unpaired vs Paired) and day (Conditioning vs Test). Both behavioral reactivity and dopamine responses change – but only for the Paired rats across days. In this sense, we believe the interpretation is clear. However, the Reviewer raises an important criticism: that there would essentially be a floor effect on dopamine responses. We believe this is mitigated by data acquired across extinction, especially in Figure 9B. There, the observations that dopamine responses fall to near zero but return to pre-conditioning levels in the Paired group, with a strong correlation between dopamine and behavioral reactivity throughout, will hopefully partially allay the Reviewer’s concerns. See Part (ii) below for further support.
(ii) Second, the authors report correlations by combining data across groups/conditions. I understand why the authors have done this, but it does risk obscuring differences between the groups. So, my question is: what happens to this trend when the correlations are computed separately for each group? I suspect other readers will share the same question. I think reporting these separate correlations would be very helpful for the field - regardless of the outcome.
To address this concern, we performed separate regression analyses for Paired and Unpaired rats and provide the table below to detail results where data were combined across groups or separated. Expectedly, all analyses in Paired rats indicated a significant inverse relationship between dopamine and behavioral reactivity. After all, it is only in this group that behavioral reactivity to the taste stimulus changes as a function of conditioning. Perhaps even more striking, in almost all comparisons, even when restricting the regression analysis to Unpaired rats, we still observed a significant inverse relationship between dopamine and behavioral reactivity. We have outlined the separated correlations below (asterisks denote slopes significantly different from 0; * p<0.05; ** p<0.01; *** p<0.005; **** p<0.001):
Author response table 1.
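For illustration, per-group slopes of the kind summarized in the table could be computed as in the following sketch (synthetic data and an assumed inverse relationship; not the authors' code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-rat values for each group: mean dopamine z-score and
# behavioral reactivity, with an inverse relationship built in.
for group in ("Paired", "Unpaired"):
    dopamine = rng.normal(1.0, 0.5, size=30)
    reactivity = -0.8 * dopamine + rng.normal(0.0, 0.3, size=30)

    # Ordinary least-squares fit; res.pvalue tests whether the slope differs from 0.
    res = stats.linregress(dopamine, reactivity)
    print(f"{group}: slope={res.slope:.2f}, r={res.rvalue:.2f}, p={res.pvalue:.3g}")
```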
(4) Figure 1A is not as helpful as it might be. I do think readers would expect a more precise reporting of GCaMP expression in TH+ and TH- neurons. I also note that many of the nuances in terms of compartmentalisation of dopamine signalling discussed above apply to ventral tegmental area dopamine neurons (e.g. medial v lateral) and this is worth acknowledging when interpreting the results.
Others have reported (Choi et al., 2020) and quantified (Hsu et al., 2020) GCaMP6f expression in TH+ neurons. While we did not report these quantifications, our observations were very much in line with previous quantifications from our laboratory (Hsu et al., 2020).
We agree that we should elaborate on VTA subregional differences and have addressed this point above (see responses to Reviewer 1 Weakness #1 and Reviewer 2 Weakness #2).
Reviewer #3 (Public review):
Summary:
This study helps to clarify the mixed literature on dopamine responses to aversive stimuli. While it is well accepted that dopamine in the ventral striatum increases in response to various rewarding and appetitive stimuli, aversive stimuli have been shown to evoke phasic increases or decreases depending on the exact aversive stimulus, behavioral paradigm, and/or dopamine recording method and location examined. Here the authors use a well-designed set of experiments to show differential responses to an appetitive primary reward (sucrose) that later becomes a conditioned aversive stimulus (sucrose previously paired with lithium chloride in a conditioned taste aversion paradigm). The results are interesting and add valuable data to the question of how the mesolimbic dopamine system encodes aversive stimuli; however, the conclusions are strongly stated given that the current data do not necessarily align with prior conflicting data in terms of recording location, and it is not clear exactly how to interpret the generally biphasic dopamine response to the CTA-sucrose, which also evolves over exposures within a single session.
Strengths:
• The authors nicely demonstrate that their two aversive stimuli examined, quinine and sucrose following CTA, evoked aversive facial expressions and paw movements that differed from those following rewarding sucrose to support that the stimuli experienced by the rats differ in valence.
• The authors examined dopamine responses to the exact same sensory stimuli conditioned to have opposing valences, avoiding standard confounds of appetitive and aversive stimuli being sensed by different sensory modalities (i.e., sweet taste vs. electric shock).
• The authors examined multiple measurements of dopamine activity - cell body calcium (GCaMP6f) in midbrain and release in NAc (Grab-DA2h), which is useful as the prior mixed literature on aversive dopamine responses comes from a variety of recording methods.
• Correlations between sucrose preference and dopamine signals demonstrate behavioral relevance of the differential dopamine signals.
• The delayed testing experiment in Figure 7 nicely controls for the effect of time to demonstrate that the "rewarding" dopamine response to sucrose only recovers after multiple extinction sucrose exposures to extinguish the CTA.
Weaknesses for consideration:
(1) Regional differences in dopamine signaling to aversive stimuli are mentioned in the introduction and discussion. For instance, the idea that dopamine encodes salience is strongly argued against in the discussion, but the paper cited as arguing for that (Kutlu et al. 2021) is recording from the medial core in mice. Given other papers cited in the text about the regional differences in dopamine signaling in the NAc and from different populations of dopamine neurons in midbrain, it is important to mention this distinction with respect to salience signaling. Relatedly, the text says that the lateral NAc shell was targeted for accumbens recordings, but the histology figure looks like the majority of fibers were in the anterior lateral core of the NAc. For the current paper to be a convincing last word on the issue, it would be extremely helpful to have similar recordings done in other parts of the NAc to allow a more thorough comparison against other studies.
As the Reviewer notes, NAc dopamine recordings were aimed at the lateral NAc shell. It is possible that some dopamine neurons lying within the anterior lateral core were recorded. Fiber photometry and the size of the fiber optics cannot definitively identify the precise location and number of dopamine neurons from which we recorded. Still, recording sites did not systematically differ between groups. Further, the within-subjects design helps to mitigate any potential biases for one subregion over another. The results presented in the manuscript strongly support a valence code. It is difficult to be the ‘last word’ on this topic and we suspect debate will continue. We used taste stimuli for appetitive and aversive stimuli – whereas many in the field will continue to use other noxious stimuli (e.g. foot shock) that likely recruit different circuits en route to the VTA. And there may very well be a different regional profile for dopamine signaling with different noxious stimuli. Moreover, we used intraoral infusion to avoid confounds of stimulus avoidance and competing motivations (e.g. food or fluid deprivation). We believe that this is one of the most important and unique features of our report. Recent work supports a role for phasic increases in dopamine in avoidance of noxious stimuli (Jung et al., 2024) and it will be critical for the field to reflect on the differences between avoidance and aversion. Moreover, in ongoing studies we aspire to fully survey dopamine signaling in conditioned taste aversion across the medial-lateral and dorsal-ventral axes of the VTA and NAc.
(2) Dopamine release in the NAc never dips below baseline for the conditioned sucrose. Is it possible to really consider this as a signal for valence per se, as opposed to it being a weaker response relative to the original sucrose response?
Indeed, NAc dopamine release to neither intraoral quinine nor aversive sucrose dips below baseline; rather, dopamine binding does not change from pre-infusion baseline levels. It should be noted that VTA dopamine cell body activity does indeed dip below baseline in response to aversive sucrose. Moreover, using fast-scan cyclic voltammetry, we showed that dopamine release dips below baseline in the NAc dorsomedial shell in response to intraoral quinine (Roitman et al., 2008). The differences across recording sites may reflect regional differences, but they may also reflect differences in recording approaches. GRAB_DA2h, used here, has relatively slow kinetics that may obscure dips below baseline (see response to Weakness #8 below).
(3) Related to this, the main measure of the dopamine signal here, "mean z-score," obscures the temporal dynamics of the aversive dopamine response across a trial. This measure is used to claim that sucrose after CTA is "suppressing" dopamine neuron activity and release, which is true relative to the positive valence sucrose response. However, both GRAB-DA and cell-body GCaMP measurements show clear increases after onset of sucrose infusion before dipping back to baseline or slightly below in the average of all example experiments displayed. One could point to these data to argue either that aversive stimuli cause phasic increases in dopamine (due to the initial increase) or decreases (due to the delayed dip below baseline) depending on the measurement window. Some discussion of the dynamics of the response and how it relates to the prior literature would be useful.
We have used the mean z-score to do much of our quantitative analyses, but the Reviewer raises the intriguing possibility that we are masking an initial increase in dopamine release and VTA DA activity evoked by aversive taste by doing so. We included the heat maps in the manuscript to be as transparent as possible about the time course of dopamine responses – both within a trial and across trials. The Reviewer’s point prompted us to reflect further on the heat maps and recognize that trials early in the session often showed a brief increase in dopamine for aversive sucrose, but this response dissipated (NAc dopamine release) or flipped (VTA DA cell body activity) over trials. We now quantitatively characterize this feature by looking at the time course of dopamine responses in each third of the trials (1-10, 11-20, 21-30; see Author response images 1, 2, and 3). As we infer the valence of the stimulus from nose and paw movements (behavioral reactivity), it is especially striking that we see a similar time course for changes in behavior. Collectively, the data may reflect an updating process that is relatively slow and requires experience of the stimulus in a new (aversive) state – that is, a model-free process. While our experiments were not designed to test the updating of dopamine responses and discern their participation in model-based versus model-free learning processes – another debate in the dopamine field (Cone et al., 2016; Deserno et al., 2021) – the data reflect a model-free process. This is further supported in the experiment involving multiple conditioning sessions, where dopamine ‘dips’ are observed in trials 1-10 on Conditioning Day 3 and Extinction Day 1, when the new value of sucrose has been established. Finally, the relatively slow updating of the value of sucrose is reflected in older literature using a continuous intraoral infusion. Using this approach, rats began rejecting the saccharin infusion only after ~2 min rather than immediately (Schafe et al., 1998; Schafe and Bernstein, 1996; Wilkins and Bernstein, 2006).
Author response image 1.
Author response image 2.
Author response image 3.
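A minimal sketch of the trial-binning analysis described above (synthetic data and shapes; illustrative, not the authors' code):

```python
import numpy as np

# Hypothetical per-trial photometry traces for one group: (30 trials, 500 samples).
rng = np.random.default_rng(2)
traces = rng.normal(size=(30, 500))

# Average the traces within each third of the session (trials 1-10, 11-20, 21-30).
thirds = {
    "trials 1-10": traces[0:10].mean(axis=0),
    "trials 11-20": traces[10:20].mean(axis=0),
    "trials 21-30": traces[20:30].mean(axis=0),
}

# Comparing these three mean traces shows whether an early infusion-evoked
# increase dissipates or flips sign as the session progresses.
for label, trace in thirds.items():
    print(label, trace.shape)
```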
(4) Would this delayed below-baseline dip be visible with a shorter infusion time?
While our experiments did not explore this parameter, it would be interesting to parametrically vary infusion durations and examine differences in dopamine responses. However, we believe the most parsimonious explanation is that the ‘dip’ in VTA cell body activity develops as a function of the slow updating of the value of sucrose, reflective of a model-free process. We recognize that this is mere speculation.
(5) Does the max of the increase or the dip of the decrease better correlate with the behavioral measures of aversion (orofacial, paw movements) or sucrose preference than the "mean z-score" measure used here?
It seems plausible that the most extreme value from baseline could correlate better with behavioral measures. However, the time courses to the maximal increase and the maximal decrease differ, and with appetitive sucrose there are often multiple transients throughout a single intraoral infusion. Coupled with the noisy time course of individual components of behavioral reactivity, we determined that averaging data across the whole infusion period (i.e., the mean z-score) was the most objective way we could analyze the dopamine and behavioral responses to taste stimuli.
(6) The authors argue strongly in the discussion against the idea that dopamine is encoding "salience." Could this initial peak (also seen in the first few trials of quinine delivery, fig 1c color plot) be a "salience" response?
Our response above to the potential for ‘mixed’ dopamine responses to aversive sucrose led to additional analyses that support a slow updating of both behavior and dopamine to the new, aversive value of sucrose. Quinine is innately aversive and thus the Reviewer rightly points out that even here we observe an increase in dopamine release evoked by quinine on the first few trials (as observed in the heat map). We’d like to note, though, that the order of stimulus exposure was counterbalanced across rats. In those rats first receiving a sucrose session, quinine initially caused a modest increase in dopamine release during the first 10 trials (which is more pronounced in the first 2 trials). In the subsequent 2 blocks of 10 trials, no such increase was observed. Interestingly, in rats for which quinine was their first stimulus, we did not see an increase in dopamine release on the first few trials (see Author response image 4). We speculate that the initial sucrose session required the value of intraoral infusions to be updated when quinine was delivered to these rats and that, once more, the updating process may be slow and akin to a model-free process. This analysis, at present, is underpowered but will direct future attention in follow-up work.
Author response image 4.
(7) Related to this, the color plots showing individual trials show a reduction in the increases to positive valence sucrose across conditioning day trials and a flip from infusion-onset increases to delayed increases across test day trials. This evolution across days makes it appear that the last few conditioning day trials would be impossible to discriminate from the first few test day trials in the CTA-paired group. Presumably, given the strength of CTA as a paradigm, the sucrose is already aversive to the animals at the first trial of test day. Why do the authors think the response evolves across this session?
As the Reviewer noted, Points 3-7 are related. We have speculated that the evolving dopamine response in Paired rats across test day trials reflects a model-free process. Importantly, as in the manuscript, our additional analyses once again show a tight relationship between behavioral reactivity and the dopamine response across the test session trials. It is important to note, though, that these experiments were not designed to test if responses reflect model-free or model-based processes.
(8) Given that most of the work is using a conditioned aversive stimulus, the comparison to a primary aversive tastant, quinine, is useful. However, the authors saw basically no dopamine response to the primary aversive tastant quinine (measured only with GRAB-DA) and saw less noticeable decreases following CTA for NAc recordings with GRAB-DA2h than with cell-body GCaMP. Given that they are using the high-affinity version of the GRAB sensor, this calls into question whether this is a true difference in release vs. soma activity or an issue of the high-affinity release sensor making decreases in dopamine levels more difficult to observe.
We share the Reviewer’s speculation. Using fast-scan cyclic voltammetry, albeit measuring dopamine concentration in the dorsomedial shell, we observed a clear decrease from baseline with intraoral infusions of quinine (Roitman et al., 2008). With the fiber photometry used here, as the Reviewer notes, GRAB_DA2h is a high-affinity (EC50: 7 nM) dopamine sensor with relatively long off-kinetics (t1/2 decay time: 7300 ms) (Labouesse et al., 2020). It may therefore be much more difficult to observe decreases below baseline using this sensor. The publication of new dopamine sensors with lower affinity, faster kinetics, and greater dynamic range (Zhuo et al., 2024) introduces opportunities for comparison and a greater potential for capturing decreases below baseline. Due to the poorer kinetics associated with GRAB_DA2h, we would not assert that direct comparisons between the GCaMP- and GRAB-based signals observed here represent true differences between somatic and terminal activity.
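For intuition about how slow off-kinetics could mask dips, here is a simple first-order decay calculation using the reported t1/2 (a deliberate simplification, not a full sensor model):

```python
import numpy as np

# GRAB_DA2h off-kinetics: reported t1/2 decay ~7300 ms (Labouesse et al., 2020).
t_half_s = 7.3
k_off = np.log(2) / t_half_s  # first-order rate constant, 1/s

# Fraction of a transient's signal still present t seconds after it ends.
for t in (1.0, 5.0, 10.0):
    print(f"t = {t:>4.1f} s: {np.exp(-k_off * t):.2f} of peak signal remains")

# With so much signal lingering for seconds, a brief decrease in dopamine
# below baseline can be filled in by decay from preceding transients.
```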
References
Choi JY, Jang HJ, Ornelas S, Fleming WT, Fürth D, Au J, Bandi A, Engel EA, Witten IB. 2020. A Comparison of Dopaminergic and Cholinergic Populations Reveals Unique Contributions of VTA Dopamine Neurons to Short-Term Memory. Cell Rep 33. doi:10.1016/j.celrep.2020.108492
Cone JJ, Fortin SM, McHenry JA, Stuber GD, McCutcheon JE, Roitman MF. 2016. Physiological state gates acquisition and expression of mesolimbic reward prediction signals. Proc Natl Acad Sci U S A 113. doi:10.1073/pnas.1519643113
de Jong JW, Afjei SA, Pollak Dorocic I, Peck JR, Liu C, Kim CK, Tian L, Deisseroth K, Lammel S. 2019. A Neural Circuit Mechanism for Encoding Aversive Stimuli in the Mesolimbic Dopamine System. Neuron 101. doi:10.1016/j.neuron.2018.11.005
Deserno L, Moran R, Michely J, Lee Y, Dayan P, Dolan RJ. 2021. Dopamine enhances model-free credit assignment through boosting of retrospective model-based inference. Elife 10. doi:10.7554/eLife.67778
Hsu TM, Bazzino P, Hurh SJ, Konanur VR, Roitman JD, Roitman MF. 2020. Thirst recruits phasic dopamine signaling through subfornical organ neurons. Proc Natl Acad Sci U S A 117:30744–30754. doi:10.1073/pnas.2009233117
Jung K, Krüssel S, Yoo S, An M, Burke B, Schappaugh N, Choi Y, Gu Z, Blackshaw S, Costa RM, Kwon HB. 2024. Dopamine-mediated formation of a memory module in the nucleus accumbens for goal-directed navigation. Nat Neurosci. doi:10.1038/s41593-024-01770-9
Labouesse MA, Cola RB, Patriarchi T. 2020. GPCR-based dopamine sensors—A detailed guide to inform sensor choice for in vivo imaging. Int J Mol Sci. doi:10.3390/ijms21218048
Lammel S, Hetzel A, Häckel O, Jones I, Liss B, Roeper J. 2008. Unique Properties of Mesoprefrontal Neurons within a Dual Mesocorticolimbic Dopamine System. Neuron 57. doi:10.1016/j.neuron.2008.01.022
McCutcheon JE, Ebner SR, Loriaux AL, Roitman MF, Tobler PN. 2012. Encoding of aversion by dopamine and the nucleus accumbens. Front Neurosci 6. doi:10.3389/fnins.2012.00137
Morales I, Berridge KC. 2020. ‘Liking’ and ‘wanting’ in eating and food reward: Brain mechanisms and clinical implications. Physiol Behav. doi:10.1016/j.physbeh.2020.113152
Roitman MF, Wheeler RA, Wightman RM, Carelli RM. 2008. Real-time chemical responses in the nucleus accumbens differentiate rewarding and aversive stimuli. Nat Neurosci 11:1376–1377. doi:10.1038/nn.2219
Schafe GE, Bernstein IL. 1996. Forebrain contribution to the induction of a brainstem correlate of conditioned taste aversion: I. The amygdala. Brain Res 741. doi:10.1016/S0006-8993(96)00906-7
Schafe GE, Thiele TE, Bernstein IL. 1998. Conditioning method dramatically alters the role of amygdala in taste aversion learning. Learning and Memory 5. doi:10.1101/lm.5.6.481
Wilkins EE, Bernstein IL. 2006. Conditioning method determines patterns of c-fos expression following novel taste-illness pairing. Behavioural Brain Research 169. doi:10.1016/j.bbr.2005.12.006
Yuan L, Dou YN, Sun YG. 2021. Topography of reward and aversion encoding in the mesolimbic dopaminergic system. Journal of Neuroscience 39. doi:10.1523/JNEUROSCI.0271-19.2019
Zhuo Y, Luo B, Yi X, Dong H, Miao X, Wan J, Williams JT, Campbell MG, Cai R, Qian T, Li F, Weber SJ, Wang L, Li B, Wei Y, Li G, Wang H, Zheng Y, Zhao Y, Wolf ME, Zhu Y, Watabe-Uchida M, Li Y. 2024. Improved green and red GRAB sensors for monitoring dopaminergic activity in vivo. Nat Methods 21. doi:10.1038/s41592-023-02100-w
-
-
newtone.ai
-
Book your demo now Subtext
Text needed
-
THIS IS A PILL TITLE
Text needed or could be removed
-
Subtext
Text needed or could be removed
-
HeadingSubtext
Text needed
-
-
-
"running saved my life"
"running doesn't have barriers, running means you've got trainers on and you can conquer the world"
-
-
www.biorxiv.org
-
Author response:
Reviewer #1:
We agree with Reviewer 1 that the flexibility of SPRAWL also makes it difficult to interpret its outputs. We consider SPRAWL to be a hypothesis-generation tool for answering simple questions of subcellular localization in a statistically robust manner. In this paper we include examples of how it can be combined with other tools and wet-lab experimentation to build biological intuition. Our hope is that the SPRAWL software, or even the underlying simple statistical ideas, will be of use to others in the field.
Reviewer #2:
We agree with Reviewer #2 that this manuscript does not demonstrate the biological significance of the results of applying SPRAWL to massively multiplexed FISH datasets. We agree that doing so would require additional wet-lab experiments, such as cell-type-specific and isoform-resolved fluorescence in situ hybridization, which we consider beyond the scope of this paper. We believe that the observed correlations between subcellular localization detected by SPRAWL and the differential 3’ UTR usage detected by ReadZS are compelling, although not conclusive, as are the Timp3 experimental studies.
Our understanding is that Baysor is primarily a cell-segmentation algorithm, which is not what SPRAWL attempts to achieve. The Baysor paper states that “cells of a distinct type will give rise to small molecular neighborhoods with stereotypical transcriptional composition, making it possible to interpret such neighborhoods without performing explicit cell segmentation,” which we understand to mean that Baysor identifies spatial groupings of cells with “stereotypical transcriptional composition” rather than subcellular RNA localization. We do not think that SPRAWL and Baysor are directly comparable; instead, Baysor could be used as a step upstream of SPRAWL to potentially improve cell segmentation.
Reviewer #3:
We thank Reviewer #3 for identifying discrepancies in the paper which we addressed to the best of our abilities.
-
-
viewer.athenadocs.nl
-
geographical
cultural
-
screened
small study
-
-
www.biorxiv.org
-
Author response:
Reviewer 1:
Many thanks for your positive review and clear overview of our paper. We also agree with your interpretation of our results that ‘the information that is decodable and the information that is task-relevant may relate in very different ways’ and we could have emphasised this point more in the paper.
With regards to the qualitative similarities between our models and our data, we agree that, because one can achieve any desired level of activity, decoding accuracy, performance, etc., in a model, we focussed on changes over learning of key metrics that are commonly used in the field. Although this can appear qualitative at times because the raw values can differ between the data and our models, our main results are ultimately strongly quantitative (e.g., Fig. 3c,d and Fig. 5f). We note that we could have fine-tuned the models to have similar activity levels, decoding accuracies, etc. to our data, and on the face of it this may have made the results appear more convincing, but we felt that such trivial fine-tuning does not change any of our key results in any fundamental way and is not the aim of computational modelling. The model one chooses to analyse will always be abstracted from biology in some way, by definition.
Reviewer 2:
Thank you very much for your kind comments and clear overview of our paper. We also hope that our paper ‘provides a valuable analysis of the effect of two parameters on representations of irrelevant stimuli in trained RNNs.’
With regards to our suggested mechanism of suppressing dynamically irrelevant stimuli, we are sorry that we did not provide a sufficient explanation of suppressing color representations when they are irrelevant. We provide a longer explanation here. Our mechanism of suppression of dynamically irrelevant stimuli does not suggest that the stimulus becomes un-suppressed later; only the behaviourally relevant variable should be decodable when it is needed (i.e., XOR). Although color decodability did increase slightly in the data and some of the models from the color period to the shape period, it was typically not significant and was therefore not a result that we emphasise in the paper (although this could be analysed further to see if additional mechanisms might explain it). We emphasise throughout that color decoding is typically similar between the color and shape periods (either high or low) and either decreases or increases over time in both periods. We also focus on whether color decodability increases or decreases over learning during the color period when it is irrelevant (which we call ‘early color decoding’). Importantly, decoding of color or shape is not needed to perform the task; only decoding of XOR is needed. For example, in our two-neuron networks, we observe perfect XOR decoding and only 75% decoding of color and shape, and decoding during the shape period is the same as in the network at initialisation before any training. Instead, what we try to explain is that color inputs can generate 0 firing rate during the color period, when that input does not need to be used and is therefore irrelevant (and color decoding decreases during the color period over learning), but these inputs can be combined with shape inputs later to create a perfectly decodable XOR response.
With regards to interpretation of our results based on metabolic cost constraints, we feel that this is an unnecessarily strong criticism to say that it ‘is not backed up by the presented data/analyses.’ All of our models were trained with only a metabolic cost constraint, a noise strength, and a task performance term. Therefore, the results of the models are directly attributable to the strength of metabolic cost that we use. Additionally, although one could in principle pick any of infinitely many different parameters to change and measure the response in an optimized network, varying metabolic cost and noise are two of the most fundamental phenomena that neural circuits must contend with, and many studies have analysed the impact they have on neural circuit dynamics. Furthermore, in line with previous studies (Yang et al., 2019, Whittington et al., 2022, Sussillo et al., 2015, Orhan et al., 2019, Kao et al., 2021, Cueva et al., 2020, Driscoll et al., 2022, Song et al., 2016, Masse et al., 2019, Schimel et al., 2023), we operationalized metabolic cost in our models through L2 firing rate regularization. This cost penalizes high overall firing rates. (Such an operationalization of metabolic cost also makes sense for our models because network performance is based on firing rates rather than subthreshold activities.) There are however alternative conceivable ways to operationalize a metabolic cost; for example L1 firing rate regularization has been used previously when optimizing neural networks and promotes more sparse neural firing. Interestingly, although our L2 is generally conceived to be weaker than L1 regularization, we still found that it encouraged the network to use purely sub-threshold activity in our task. The regularization of synaptic weights may also be biologically relevant because synaptic transmission uses the most energy in the brain compared to other processes (Faria-Pereira et al., 2022, Harris et al., 2012). Additionally, even subthreshold activity could be regularized as it also consumes energy (although orders of magnitude less than spiking (Zhu et al., 2019)). Therefore, future work will be needed to examine how different metabolic costs affect the dynamics of task-optimized networks.
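A minimal sketch of this operationalization (illustrative only; the function name and regularization strength are assumptions, not the authors' training code): the training objective adds an L2 penalty on firing rates, so lower or subthreshold activity is favored wherever the task allows.

```python
import numpy as np

def training_loss(task_loss, rates, reg_strength=1e-3):
    """Task performance term plus an L2 metabolic cost on firing rates."""
    metabolic_cost = reg_strength * np.mean(rates ** 2)  # penalizes high rates
    return task_loss + metabolic_cost

# An L1 variant, reg_strength * np.mean(np.abs(rates)), would instead
# promote sparse firing, as noted in the text.
```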
With regards to color representations in PFC only qualitatively matching those in our models, in line with the comment from Reviewer 1, we agree that, because one can achieve any desired level of activity, decoding accuracy, performance, etc., in a model, we focussed on changes over learning of key metrics that are commonly used in the field. Although this can appear qualitative at times because the raw values can differ between the data and our models, our main results are ultimately strongly quantitative (e.g., Fig. 3c,d and Fig. 5f). We note that we could have fine-tuned the models to have similar activity levels, decoding accuracies, etc. to our data, and on the face of it this may have made the results appear more convincing, but we felt that such trivial fine-tuning does not change any of our key results in any fundamental way and is not the aim of computational modelling. The model one chooses to analyse will always be abstracted from biology in some way, by definition. Finally, we of course note that changes in color decoding could result from other causes, but we focussed on two key phenomena that neural circuits must contend with: noise and metabolic costs. Therefore, it is likely that these two variables play a strong role in stimulus representations in neural circuits.
Reviewer 3:
Thank you very much for your thorough and clear overview of our paper. We agree that it is important to investigate phenomena and manipulations in computational models that would be almost impossible to study in vivo, and we are pleased you found our mathematical analyses rigorous and nicely documented.
Although we agree that it can be useful to study the responses of individual neurons, we focussed on population analyses of all available neurons, without omitting or specifically selecting neurons based on their dynamics. We are also not suggesting that the activities of individual ‘neurons’ in the models and data should be similar, since our models are highly abstract firing rate models. Rather, we were interested in comparing the overall computational strategy between the models and the data, which one can access through population decoding and cross-generalised decoding, and which is arguably the correct level of analysis for such models (and data) given our key questions (Vyas et al., 2020, Churchland et al., 2012, Mante et al., 2013, Ebitz et al., 2021).
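For readers unfamiliar with the approach, a cross-generalised decoding analysis of the kind referred to here can be sketched as follows (hypothetical variable names, not the authors' pipeline): a decoder is trained on population activity from one task epoch and tested on another, so above-chance accuracy indicates a shared population code rather than matched single-neuron responses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_generalised_accuracy(X_train, y_train, X_test, y_test):
    # X_*: trials x neurons population activity from two different epochs;
    # y_*: stimulus labels (e.g., color category) for each trial.
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Accuracy on the held-out epoch measures how well the population code
    # generalises across epochs, independent of single-neuron tuning.
    return decoder.score(X_test, y_test)
```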
We also certainly agree and are more than open to the fact that suppression of irrelevant stimuli may already be happening on the inputs arriving in PFC. Indeed, we actually suggest this as the mechanism in Fig. 5 (together with recurrent circuit dynamics that make use of these inputs).
With regards to the dynamics of the two-neuron networks not being ‘informative of what happens in brain networks’, we agree that these models are very simplified and may only share fundamental similarities with biological neurons. However, we only used them to illustrate the fundamental mechanism for generating 0 firing rate during the color epoch, so that readers can more easily understand it: the entire 2-dimensional state space, and hence the entire computational strategy, can be visualized (Fig. 5a-d). We also note that we did this for both rectified linear and tanh networks, thus showing that such a mechanism is preserved across fundamentally different firing rate nonlinearities. Additionally, after illustrating this fundamental mechanism of networks receiving color information but generating 0 firing rate, we show that the exact same mechanism is at play in the large networks we use throughout the paper (Fig. 5e). We also only compare the large networks to our neural recordings. We do agree, though, that it would be interesting to further compare fundamental similarities and differences between our models and our neural recordings (always at the level of analysis that makes sense for our chosen models) to show that the mechanisms we uncover in our models are also strongly relevant for our data.
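To give a flavour of the kind of mechanism described here, below is a minimal toy simulation of a two-neuron rectified-linear network; the weights, bias, and input are invented for illustration and are not the parameters used in the paper. The constant input shifts the subthreshold activations, yet the firing rates remain at 0, so the stimulus is carried purely subthreshold:

```python
import numpy as np

def simulate(T=100, dt=0.1, tau=1.0):
    W = np.array([[0.0, -1.0], [-1.0, 0.0]])  # mutual inhibition (illustrative)
    x = np.zeros(2)                           # subthreshold activations
    rates = []
    for _ in range(T):
        u = np.array([0.5, 0.5])              # constant "color" input
        r = np.maximum(x, 0.0)                # rectified-linear firing rates
        # A negative bias holds x below threshold: the rates stay at 0 while
        # the subthreshold state x still encodes the presence of the input.
        x = x + dt / tau * (-x + W @ r + u - 1.0)
        rates.append(r.copy())
    return np.array(rates)                    # all zeros despite nonzero input
```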
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
Public Reviews:
Reviewer #1 (Public review):
Summary:
The authors have used full-length single-cell sequencing on a sorted population of human fetal retina to delineate expression patterns associated with the progression of progenitors to rod and cone photoreceptors. They find that rod and cone precursors contain a mix of rod/cone determinants, with a bias in both amounts and isoform balance likely deciding the ultimate cell fate. Markers of early rod/cone hybrids are clarified, and a gradient of lncRNAs is uncovered in maturing cones. Comparison of early rods and cones exposes an enriched MYCN regulon, as well as expression of SYK, which may contribute to tumor initiation in RB1 deficient cone precursors.
Strengths:
(1) The insight into how cone and rod transcripts are mixed together at first is important and clarifies a long-standing notion in the field.
(2) The discovery of distinct active vs inactive mRNA isoforms for rod and cone determinants is crucial to understanding how cells make the decision to form one or the other cell type. This is only really possible with full-length scRNAseq analysis.
(3) New markers of subpopulations are also uncovered, such as CHRNA1 in rod/cone hybrids that seem to give rise to either rods or cones.
(4) Regulon analyses provide insight into key transcription factor programs linked to rod or cone fates.
(5) The gradient of lncRNAs in maturing cones is novel, and while the functional significance is unclear, it opens up a new line of questioning around photoreceptor maturation.
(6) The finding that SYK mRNA is naturally expressed in cone precursors is novel, as previously it was assumed that SYK expression required epigenetic rewiring in tumors.
Weaknesses:
(1) The writing is very difficult to follow. The nomenclature is confusing and there are contradictory statements that need to be clarified.
(2) The drug data is not enough to conclude that SYK inhibition is sufficient to prevent the division of RB1 null cone precursors. Drugs are never completely specific so validation is critical to make the conclusion drawn in the paper.
We thank the reviewer for describing the study’s strengths and weaknesses. In the upcoming revision, we will:
(1) improve the writing and clarify the nomenclature and contradictory statements, particularly those noted in the Reviewer’s Recommendations for Authors; and
(2) scale back the claims related to the role of SYK in the cone precursor response to RB1 loss; we agree that genetic perturbation of SYK is required to prove its role and will perform such analyses in a separate study.
Reviewer #2 (Public review):
Summary:
The authors used deep full-length single-cell sequencing to study human photoreceptor development, with a particular emphasis on the characteristics of photoreceptors that may contribute to retinoblastoma.
Strengths:
This single-cell study captures gene regulation in photoreceptors across different developmental stages, defining post-mitotic cone and rod populations by highlighting their unique gene expression profiles through analyses such as RNA velocity and SCENIC. By leveraging full-length sequencing data, the study identifies differentially expressed isoforms of NRL and THRB in L/M cone and rod precursors, illustrating the dynamic gene regulation involved in photoreceptor fate commitment. Additionally, the authors performed high-resolution clustering to explore markers defining developing photoreceptors across the fovea and peripheral retina, particularly characterizing SYK's role in the proliferative response of cones in the RB loss background. The study provides an in-depth analysis of developing human photoreceptors, with the authors conducting thorough analyses using full-length single-cell RNA sequencing. The strength of the study lies in its design, which integrates single-cell full-length RNA-seq, long-read RNA-seq, and follow-up histological and functional experiments to provide compelling evidence supporting their conclusions. The model of cell type-dependent splicing for NRL and THRB is particularly intriguing. Moreover, the potential involvement of the SYK and MYC pathways with RB in cone progenitor cells aligns with previous literature, offering additional insights into RB development.
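As a hypothetical sketch of the isoform-level comparison that such full-length data enables (not the authors' pipeline), per-cell-type isoform usage for a gene such as NRL could be summarised from a cells-by-isoforms count matrix:

```python
import pandas as pd

def isoform_fractions(counts: pd.DataFrame, cell_types: pd.Series) -> pd.DataFrame:
    # counts: cells x isoforms matrix of full-length read counts for one gene
    # (e.g., NRL isoforms); cell_types: per-cell labels such as 'rod' or
    # 'L/M cone', sharing the same index as counts.
    totals = counts.groupby(cell_types).sum()
    # Each row sums to 1: the isoform usage profile of that cell type.
    return totals.div(totals.sum(axis=1), axis=0)
```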
Weaknesses:
The manuscript feels somewhat unfocused, with a lack of a strong connection between the analysis of developing photoreceptors, which constitutes the bulk of the manuscript, and the discussion on retinoblastoma. Additionally, given the recent publication of several single-cell studies on the developing human retina, it is important for the authors to cross-validate their findings and adjust their statements where appropriate.
We thank the reviewer for summarizing the main findings and for noting the compelling support for the conclusions, the intriguing cell type-dependent splicing of rod and cone lineage factors, and the insights into retinoblastoma development.
We concur that some studies of developing photoreceptors were not well connected to retinoblastoma, which diminished the focus. However, we suggest that it was valuable to highlight how deep, long-read sequencing provided new insights into retinoblastoma. For example, our demonstration of similar rod- and cone-related gene expression in early cones and RB cells addressed concerns with the proposed cone cell-of-origin, adding disease relevance.
We will address the Reviewer’s request to cross-validate our findings with those of other single-cell studies of developing human retina and to adjust the related statements in our upcoming revision.
Reviewer #3 (Public review):
Summary:
The authors use high-depth, full-length scRNA-Seq analysis of fetal human retina to identify novel regulators of photoreceptor specification and retinoblastoma progression.
Strengths:
The use of high-depth, full-length scRNA-Seq to identify functionally important alternatively spliced variants of transcription factors controlling photoreceptor subtype specification, and identification of SYK as a potential mediator of RB1-dependent cell cycle reentry in immature cone photoreceptors.
Human fetal retinal tissue samples were collected between 13 and 19 gestational weeks, and the substantially higher depth of sequencing coverage identifies both rare transcripts and alternative splice forms, representing an important advance over previous droplet-based scRNA-Seq studies of human retinal development.
Weaknesses:
The weaknesses identified are relatively minor. This is a technically strong and thorough study, that is broadly useful to investigators studying retinal development and retinoblastoma.
We thank the reviewer for describing the strengths of the study. Our upcoming revision will address the minor concerns that were raised separately in the Reviewer’s Recommendations for Authors.
-
eLife Assessment
In this paper, the authors use single-cell RNA sequencing to understand post-mitotic cone and rod developmental states and identify cone-specific features that contribute to retinoblastoma genesis. The work is important and the evidence is generally convincing. The findings of rod/cone fate determination at a very early stage are intriguing.
-
-
academic.oup.com academic.oup.com
-
The American message in Noah's prophecy, Palmer implied, was not that blacks had to be enslaved, but that their essential character befitted servitude.
The theological transition from slavery to segregation
- Antebellum: Palmer used Noah's prophecy in Genesis 9 to justify slavery
- Postwar: he deftly reinterpreted the same passage to support racial segregation
- The core idea never changed: blacks were by nature suited to being ruled; only the form of rule could change
The transformation from Ham to Nimrod
Ham (Noah's son) symbolizes the cursed black descendants in the Bible, while Nimrod (Ham's descendant) represents the figure of the rebel and the ambitious. Palmer used this to construct a dual narrative: docile blacks should, like Ham, be content to be ruled, while rebellious blacks are, like Nimrod, dangerous and in need of control.
This reading allowed Southerners both to maintain a paternalistic pity for the "obedient black" and to rationalize the suppression of the "rebellious black".
"Noah's camera": an elegant metaphor
The camera freezes an eternal moment, suggesting that God's will is eternal and unchanging; the lens offers a clear view, suggesting that biblical truth can illuminate reality; the viewfinder selects the frame, suggesting that the world must be understood through a particular perspective.
This intellectual shift preserved continuity (the ideology of racial hierarchy) while adapting to new circumstances (from slavery to segregation), displaying the subtle mental adjustment of Southern conservatives in an age of upheaval.
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
Mfn2tm3Dcc/Mmcd
-
-
www.biorxiv.org www.biorxiv.org
-
https://electron-microscopy.hms.harvard.edu/methods
-
-
Local file Local file
-
BDSC:1104
-
BSDC:458
-
BDSC:26160
-
-
www.nature.com www.nature.com
-
https://emcore.ucsf.edu/ucsf-software
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
psPAX2
-
pMD2.G
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
CL2355
-
-
www.sciencedirect.com www.sciencedirect.com
-
HTB-37
-
CCL-2
-
RRID:AB_476744
-
RRID:AB_10706161
-
RRID:AB_330944
-
RRID:AB_2536530
-
RRID:AB_390722
-
-
www.sciencedirect.com www.sciencedirect.com
-
Cat# TIB-152
-
Cat#006785
-
RRID:AB_394004
-
RRID:AB_2737820
-
RRID:AB_399877
-
RRID:AB_398483
-
RRID:AB_2737852
-
RRID:AB_394618
-
RRID:AB_395000
-
RRID:AB_2737732
-
RRID:SCR_002798
-
RRID:SCR_003070
-
RRID:SCR_008520
-
RRID:SCR_018190
-
RRID:AB_465394
-
RRID:AB_1031062
-
RRID:AB_940405
-
RRID:AB_2614304
-
Jackson Laboratory Cat_032276
Tags
- RRID:CVCL_0367
- RRID:SCR_003070
- RRID:AB_394618
- RRIDCUR:Unresolved
- RRID:AB_1031062
- RRID:AB_399877
- RRID:AB_465394
- RRID:AB_398483
- RRID:SCR_018190
- RRID:AB_2737852
- RRID:IMSR_JAX:006785
- RRID:AB_395000
- RRID:AB_2737732
- RRID:SCR_002798
- RRID:AB_394004
- RRID:AB_2614304
- RRID:IMSR_JAX:032276
- RRID:AB_2737820
- RRID:SCR_008520
- RRID:AB_940405
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Addgene_21915
-
RRID:CVCL_C291
-
RRID:CVCL_6813
-
RRID:CVCL_A221
-
RRID:CVCL_3876
-
RRID:CVCL_4401
-
RRID:CVCL_6122
-
RRID:CVCL_4633
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
SCR_021713
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
RRID:AB_2782966
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
RRID:SCR_022735
-
-
www.biorxiv.org www.biorxiv.org
-
RRID:SCR_021756
-
-
www.biorxiv.org www.biorxiv.org
-
RRID:SCR_009550
-
-
www.biorxiv.org www.biorxiv.org
-
SCR_015656
-
RRID:SCR_015654
-
RRID:SCR_018734
-
-
www.pnas.org www.pnas.org
-
RRID:SCR_009961
-
-
www.nature.com www.nature.com
-
RRID:AB_228307
-
-
www.nature.com www.nature.com
-
RRID:AB_257896
-
RRID:AB_330744
-
RRID:AB_476758
-
RRID:AB_477247
-
RRID:AB_476760
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
RRID:SCR_019057
-
RRID:SCR_018986
-
-
pubs.acs.org pubs.acs.org
-
RRID:SCR_018986
-
RRID:SCR_018302
-
-
www.sciencedirect.com www.sciencedirect.com
-
RRID:AB_3662842
-
RRID:AB_3662838
-
RRID:AB_3662841
-
RRID:AB_3662836
-
RRID:AB_3662840
-
RRID:AB_2814948
-
RRID:AB_10891773
-
RRID:AB_2534080
-
RRID:AB_141637
-
RRID:AB_3662834
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
RRID:SCR_000432
-
RRID:SCR_001622
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
RRID:IMSR_JAX:000664
-
RRID:AB_304362
-
RRID:AB_2279841
-
RRID:AB_2044003
-
RRID:AB_312660
-
RRID:AB_2904311
-
RRID:AB_2658273
-
RRID:AB_2536183
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
RRID:IMSR_JAX:000664
-
RRID:MMRRC_034840-JAX
-
-
www.sciencedirect.com www.sciencedirect.com
-
Plasmid_87453
-
Plasmid_87451
-
RRID:AB_2722564
-
Plasmid_87452
-
RRID:AB_2313768
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
RRID:SCR_019060
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
MMRRC:033000
-
MMRRC:032999
-
MMRRC:032998
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
MMRRC:000041
-
-
www.youtube.com www.youtube.com
-
run of the mini assembly is very straightforward but it's crucial you stick to the format
for - fascism - intervention - mini assembly format - Roger Hallam
fascism - intervention - mini assembly format - Roger Hallam
- first, everyone introduces themselves: their name, background, life journey, reason for attending the mini-assembly
- second, ask what they think is not going well: the community where they live, their country, the world
- third, find similarities between people
- fourth, each person shares a list of 3 or 4 priority things they think should change: policies, ways of organizing, big issues
- fifth, invite each person to run their own mini assembly
-
as with any social group that is a power law curve meaning for instance eighty percent of Trump supporters will change their view if they're listened to consistently maybe 19% are going to be resistant and need a good few conversations for them to at least have doubts and 1% are frankly psychopathic and they're never gonna change
for - stats - Pareto's law - social transformation - fascism, polarization and climate crisis - climate communication - 80% will change if we listen, 19% will require deeper conversations - 1% will not change - Roger Hallam
-
we've come up with micro designs and processes which when repeated should enable assemblies to grow exponentially to create critical campaigns that can take over councils and governments
for - fascism and polarization intervention - scalability - Roger Hallam
-
I'll stick my head out here and say that we are 80% certain of being able to create a mass movement 10 times the size of Extinction Rebellion using this method organizations that can compete with fascism with power by dissolving that power through the same mechanisms Rogers discovered through listening
for - fascism, polarization and climate crisis - climate communications - social intervention - new movement that can be 10x the size of Extinction Rebellion - apply Carl Rogers discovery of listening - Roger Hallam
-
each person will agree to run a mini assembly themselves with fellow activists friends family people you know online or offline whatever works best but really just a few people it's fine five to 10 people
for - Anti-fascist strategy - depolarization strategy - mini assemblies - Roger Hallam
-
the most influential psychologist of the 20th century Carl Rogers as you may know discovered scientifically in the 1950s that listening to people and giving unconditional positive regard was the best way of helping them to heal and grow it was a revolution in understanding of the person and led to a massive change in how we look at psychological distress and the need for counselling
for - trauma - healing - by listening - Carl Rogers
-
why do fascist men stop being fascist yeah you got it when they get a girlfriend
for - fascism - antidote - question - what is a common way to transform fascist men? - answer - they get a girlfriend - Roger Hallam
Tags
- fascism - intervention - mini assembly format - Roger Hallam
- fascism and polarization intervention - scalability - Roger Hallam
- stats - Pareto's law - social transformation - fascism, polarization and climate crisis - climate communication - 80% will change if we listen, 19% will require deeper conversations - 1% will not change - Roger Hallam
- fascism - antidote - question - what is a common way to transform fascist men? - answer - they get a girlfriend - Roger Hallam
- Anti-fascist strategy - depolarization strategy - mini assemblies - Roger Hallam
- fascism, polarization and climate crisis - climate communications - social intervention - new movement that can be 10x the size of Extinction Rebellion - apply Carl Rogers discovery of listening - Roger Hallam
- trauma - healing - by listening - Carl Rogers
-
-
e-learn.ue-varna.bg e-learn.ue-varna.bg
-
Personality structure according to Platonov:
* Substructure of personality orientation - entirely socially conditioned; answers the questions of why people work, why they prefer a given organization, and what does or does not satisfy them in an organizational context
* Substructure of social experience - formed in the process of education; provides information about the person's knowledge, habits, and skills
* Substructure of forms of psychological reflection - developed through exercise; explains why people with the same professional training are not equally able to work successfully under high levels of stress
* Biologically conditioned substructure - the biological characteristics of the individual
-
-
e-learn.ue-varna.bg e-learn.ue-varna.bg
-
The four great neuroses of modern times according to K. Horney:
* Neurosis of attachment - the search for belonging
* Neurosis of power - striving for power and prestige
* Neurosis of submission - conformism, automatism in behavior, apathy
* Neurosis of isolation - flight from people, estrangement
-
Defense mechanisms according to A. Freud:
* Projection - the person is unaware of their own undesirable traits and emotions, unconsciously represses them into the subconscious, and then projects them outward
* Repression - selective forgetting of an event associated with conflict, stress, or traumatic states
* Displacement - the emotional conflict is shifted from one idea or object to another that resembles the original in some of its qualities
* Denial - downplaying the conflict and the aspects of the conflict situation
* Somatization - the tendency to react with bodily rather than psychological manifestations
* Regression - faced with conflict, stress, or frustration, the person may return to an earlier stage in which they felt secure and cared for
* Aggressive behavior - indirect aggression toward others or toward oneself
* Identification - the person takes on the personal characteristics of another person
* Compensation - offsetting psychological or mental deficits through high achievement in some area
* Withdrawal into oneself, or isolation - partial repression in which the wish or the corresponding memory of the past retains access to consciousness, while the emotion is detached from it and cannot reach consciousness
* Asceticism - direct rejection of all consciously experienced pleasures and elimination of pleasure as an aspect of human experience
* Sublimation - the person turns the energies of unacceptable impulses into socially acceptable, creative activity
* Humor - emphasizing the amusing or ironic side of a given conflict or stressor
* Altruism - the person derives satisfaction from being able to serve and help other people through actions useful to them and by sharing their feelings
* Intellectualization - hiding the emotional response behind words and statements that deny the problem
-
Hierarchical classification of psychological defense mechanisms according to the relative degree of maturity of the personality at its individual stages of development:
* Narcissistic defenses - in children and in some people with mental disorders
* Immature defenses - in childhood and adolescence, but also found in mature adults
* Neurotic defenses - in people of various ages under psychological stress and in adults with neurotic problems
* Mature defenses - in the adulthood of mentally healthy people
-
Innate archetypes that must be filled with content, according to Jung:
* Persona - all the characteristics and roles we put on display in order to make a good impression and carry out social interaction
* Shadow - our true psychological feelings that we hide from people; the dark side of the personality, its unacceptable sexual and aggressive impulses, immoral thoughts and feelings
* Animus - the masculine aspect in the collective unconscious of women
* Anima - the feminine aspect in the collective unconscious of men
* Self - the core of the personality around which the remaining elements are organized, and which creates the uniqueness and singularity in the life of each of us
-
Mechanisms of psychological defense according to Freud: * Repression - selective forgetting of material connected with conflict and stress * Suppression - the person avoids stressful thoughts * Denial - the fear-inducing aspects of the situation are selectively attended to and reinterpreted * Projection - attributing one's own characteristics or motivations to other people. Three kinds: 1. attributive - projecting one's own trait onto well-known and respected figures in order to reduce the stress of experiencing it 2. complementary - the person is aware of a certain characteristic or feeling and attributes its cause to another person 3. classical - a given trait or characteristic is repressed into the unconscious and the person does not suspect its existence, yet projects it onto another without realizing its presence in themselves. * Regression - when faced with conflict, stress, and especially frustration, the person may return to an earlier stage of life in which they felt secure. Two types of regression: 1. object regression - a return to a past object of gratification 2. drive regression - a person frustrated in one drive may obtain gratification by working toward the satisfaction of another drive * Identification - the person takes on the personal characteristics of another person in order to cope with something involving uncertainty or unpleasantness * Compensation (Alfred Adler) - an attempt to overcome feelings of inferiority, and the anxiety associated with them, through additional effort in the area in which the person feels inferior * Reaction formation - when threatening repressed material risks returning to consciousness, the person may try to reinforce the repression by using behaviors diametrically opposed to those that would follow from the return of the repressed material * Rationalization - using "good" reasons to justify behaving in a certain way * Intellectualization - an attempt to master instinctual processes by linking them to ideas that can be worked through in consciousness * Sublimation - the person channels the energy of an unacceptable impulse or drive into socially acceptable activity * Displacement - redirecting one's activity from a dangerous to a safe object, or changing the way one acts upon a threatening object
-
-
www.biorxiv.org
-
eLife Assessment
This important study addresses how DNA replication restarts in Escherichia coli in the absence of a functional replication initiator protein DnaA. The authors show that helicase DnaB loading at the replication origin oriC can be executed by PriC under sub-optimal initiation conditions. While the genetic and biochemical evidence is solid, there is so far no direct evidence for PriC acting at oriC in vivo.
-
Reviewer #1 (Public review):
Summary:
This manuscript reports the investigation of PriC activity during DNA replication initiation in Escherichia coli. It is reported that PriC is necessary for the growth and control of DNA replication initiation under diverse conditions where helicase loading is perturbed at the chromosome origin oriC. A model is proposed where PriC loads helicase onto ssDNA at the open complex formed by DnaA at oriC. Reconstituted helicase loading assays in vitro support the model. The manuscript is well-written and has a logical narrative.
Major Questions/Comments:
An important observation here is that a ΔpriC mutant alone displays under-replication, suggesting that this helicase loading pathway is physiologically relevant. Has this PriC phenotype been reported previously? If not, would it be possible to confirm this result using an independent experimental approach (e.g. marker frequency analysis or fluorescent reporter-operator systems)?
Is PriA necessary for the observed PriC activity at oriC? Is there evidence that PriC functions independently of PriA in vivo?
Is PriC helicase loading activity in vivo at the origin direct (the genetic analysis leaves other possibilities tenable)? Could PriC enrichment at oriC be detected using chromatin immunoprecipitation?
-
Reviewer #2 (Public review):
This is a great paper. Yoshida et al. convincingly show that DnaA is not solely responsible for loading the replicative helicase at the E. coli oriC; PriC can also perform this function. Importantly, PriC seems to contribute to helicase loading even in wt cells, albeit to a much lesser degree than DnaA. On the other hand, PriC takes a larger role in helicase loading during aberrant initiation, i.e. when the origin sequence is truncated or when the properties of initiation proteins are suboptimal, highlighted here by mutations in dnaA or dnaC.
This is a major finding because it clearly demonstrates that the two roles of DnaA in the initiation process can be separated: first, forming an open complex at the DUE region by binding/nucleating onto DnaA-boxes, and second, loading the helicase. Whereas these two functions are normally assumed to be coupled, the present data clearly show that they can be separated and that PriC can perform at least part of the helicase loading provided that an area of duplex opening is formed by DnaA.
This puts into question the interpretation of a large body of previous work on mutagenesis of oriC and dnaA to find a minimal oriC/DnaA complex in many bacteria. In other words, mutants in which oriC is truncated/mutated may support the initiation of replication and cell viability only in the presence of PriC. Such mutants are capable of generating single-strand openings but may fail to load the helicase in the absence of PriC. Similarly, dnaA mutants may generate an aberrant complex on oriC that triggers strand opening but is incapable of loading DnaB unless PriC is present.
In the present work, the sequence of experiments presented is logical and the manuscript is clearly written and easy to follow. The very last part regarding PriC in cSDR replication does not add much to the story and may be omitted.
-
Reviewer #3 (Public review):
Summary:
At an abandoned replication fork, loading of the DnaB helicase requires assistance from PriA, PriB, PriC, Rep, and other protein partners, but it does not require the replication initiator protein DnaA. In contrast, nucleotide-dependent DnaA binding at specific functional elements is fundamental for helicase loading at the origin, leading to opening of the DUE region. In this study, however, the authors ask whether, when replication initiation is impeded at the bacterial chromosomal origin oriC, a strategy similar to that used at abandoned replication forks, loading DnaB while bypassing the DnaA interaction step, could be functional. The study by Yoshida et al. suggests that PriC could promote DnaB helicase loading onto chromosomal oriC ssDNA without interacting with the DnaA protein. However, the conclusions drawn from the primarily qualitative data presented in the study may be somewhat overstated and need supporting evidence.
Strengths:
Understanding the mechanism of how DNA replication restarts via reloading of replisomes onto abandoned DNA replication forks is crucial. Notably, this knowledge becomes crucial for understanding how bacterial cells maintain DNA replication from a stalled replication fork when challenging or non-permissive conditions prevail. This critical study combines experiments to address a fundamental question: how DnaB helicase loading could occur when replication initiation is impeded at the chromosomal origin, leading to replication restart.
Weaknesses:
The term colony formation used for a spotting assay could be misleading for apparent reasons. Both assess cell viability and growth; while colony formation is quantitative, spotting is qualitative. Particularly in this study, where differences appear minor but draw significant conclusions, the colony formation assays representing growth versus moderate or severe inhibition are a more precise measure of viability.
Figure 2
The reduced number of cells with two oriC copies in the dnaA46 ΔpriC strain was considered moderate inhibition. When combined with the data from the dnaC2 ΔpriC strain, in which cells contain two origins with or without PriC (indicating no inhibition), the conclusion was drawn that PriC rescues blocked replication by assisting the DnaC-dependent DnaB loading step at oriC ssDNA.
The results provided by Saifi B, Ferat JL. PLoS One. 2012;7(3):e33613 suggest that in an asynchronous dnaA46 ts culture, the rate at which dividing cells start accumulating arrested replication forks might differ (indicated by the two subpopulations, one with a single oriC and the other with two oriC). The DnaA46 protein has significantly reduced ATP binding at 42°C, and growing the strain at 42°C for 40-80 minutes before releasing it at 30°C for 5 minutes raises the possibility that the two subpopulations differ in active ATP-DnaA. The above could be why only 50% of cells contain two oriC. Releasing cells for more time before adding rifampicin and cephalexin could increase the number of cells with two oriC. In contrast, dnaC2 cells have an inactive helicase loader at 42°C but an intact DnaA-ATP population (WT DnaA at 42°C or 30°C should not differ in ATP binding). Once released at 30°C, the reduced but active DnaC population could assist in loading DnaB onto DnaA engaged in normal replication initiation, and such cells should thus appear with two oriC in a PriC-independent manner.
Broadly, the evidence provided by the authors may support the primary hypothesis. Still, it could call for an alternative hypothesis: PriC involvement in stabilizing the DnaA-DnaB complex (this possibility could exist here). Proving the conclusions made from the set of experiments in Figures 2 and 3, which laid the foundations for supporting the primary hypothesis, requires insight into the on/off rates of DnaB loading onto DnaA and the stability of the complexes in the presence or absence of PriC. I have a few other reasons to consider the latter arguments.
Figure 3
One should consider the fact that dnaA46 is present in these cells. Overexpressing plasmid-borne dnaAFH could produce mixed multimers containing subunits of DnaA46 (reduced ATP binding) and DnaAFH (reduced DnaB binding). Both have intact DnaA-DnaA oligomerization ability. Cooperativity between the two functions within a subpopulation of the two DnaA variants may compensate for the individual deficiencies, creating a population of active protein, which in the presence of PriC could lead to the promotion of stable DnaA:DnaBC complexes able to initiate replication. In light of the results presented in Hayashi et al., J Biol Chem. 2020 Aug 7;295(32):11131-11143, where the mutant DnaBL160A was shown to be impaired in DnaA binding while retaining active helicase function, and growth was still inhibited, how could the hypothesis presented in this manuscript be explained? If PriC-assisted helicase loading could bypass the DnaA interaction, then how should growth inhibition in a strain carrying DnaBL160A be described? Seeing the results in light of the alternative possibility, that PriC assists in stabilizing the DnaA:DnaBC complex, is more compatible with the previously published data.
Figure 4
Overexpression of DiaA could titrate out a larger fraction of the DnaA population. This could be aggravated in the absence of PriC (DiaA could titrate out more DnaA): the complex formed between DnaA and DnaBC is not stable, hence reduced DUE opening and replication initiation, leading to growth inhibition (Fig. 4A ∆priC-pNA135). Figure 7C: Again, in the absence of PriC, the reduced stability of the DnaA:DnaBC complex leaves more DnaA to be titrated out by DiaA, and thus less Form I*. However, adding PriC stabilizes the DnaA:DnaBC hetero-complexes, with reduced DnaA titration by DiaA, producing additional Form I*. Adding a panel with DnaBL160A, which does not interact with DnaA but retains helicase activity, could be helpful. Would the inclusion of PriC increase the ability of the mutant helicase to produce additional Form I*?
Figure 5
The interpretation is that colony formation of the Left-oriC ∆priC double mutant was markedly compromised at 37°C (Figure 5B), and that the growth defects of the Left-oriC mutant at 25°C and 30°C were aggravated. However, prima facie, the relative differences in the growth of cells containing and lacking PriC are similar. Quantitative colony-forming data are required to support these claims. Otherwise, it is slightly confusing.
A minor suggestion is to include cells expressing PriC from a plasmid to show that supplying PriC reverses the growth defects of the dnaA46 and dnaC2 strains at non-permissive temperatures. The same should be added at other appropriate places.
-
-
hzyiruai.feishu.cn
-
English introduction
Romantic art, emerging in the late 18th century and flourishing throughout the 19th century, was part of the larger Romantic movement that spanned literature, music, and philosophy. It arose as a reaction against the rationality and strict formalism of the Enlightenment and Neoclassicism. Romantic art emphasized emotion, imagination, and individuality, celebrating the sublime beauty of nature, the exotic, and the power of human emotions.
Characteristics of Romantic Art
Emotion Over Reason - Romantic art prioritized emotional depth over rational thought. Paintings often depicted intense feelings such as awe, fear, love, or despair, encouraging the viewer's emotional engagement.
The Sublime and Nature - The sublime, a concept emphasizing nature's grandeur and power, played a central role. Romantic artists portrayed landscapes as vast, untamed, and sometimes terrifying, reflecting both the beauty and unpredictability of nature.
Individualism and Heroism - Romantic art celebrated individual experience, particularly the heroic, the mysterious, or the misunderstood. Subjects often included solitary figures, rebel leaders, or mythical heroes.
The Exotic and the Supernatural - Fascination with the exotic, the mysterious, and the supernatural was another hallmark. Artists often depicted foreign lands, Gothic ruins, or mythical and dreamlike scenes.
Rich and Dynamic Color - Romantic painters used dramatic contrasts and rich palettes to heighten the emotional intensity of their works. They explored bold, expressive brushwork and atmospheric effects.
Freedom of Composition - Rejecting rigid neoclassical structures, Romantic art embraced more dynamic and fluid compositions. Diagonal lines and dramatic perspectives created movement and energy.
Key Themes in Romantic Art
- Nature as a Reflection of Emotion
- Nature was often a central subject, not merely as a setting but as a mirror of human emotions.
-
Example: Caspar David Friedrich’s “Wanderer Above the Sea of Fog” depicts a lone figure contemplating an overwhelming landscape, symbolizing introspection and the sublime.
-
Historical and Revolutionary Themes
- Romantic artists depicted contemporary events, particularly revolutions and wars, with a dramatic and emotional flair.
-
Example: Eugène Delacroix’s “Liberty Leading the People”, which commemorates the 1830 French Revolution.
-
Mythology and Folklore
- Myths and legends were revisited as expressions of universal truths and emotions.
-
Example: William Blake’s visionary works, such as “The Great Red Dragon and the Woman Clothed with the Sun”, blend biblical imagery with mythological themes.
-
The Exotic and the Oriental
- Fascination with the Middle East, North Africa, and Asia led to “Orientalist” works depicting imagined and idealized exotic settings.
-
Example: Jean-Léon Gérôme’s “The Snake Charmer”.
-
The Supernatural and the Gothic
- Gothic horror and the supernatural captivated Romantic artists, reflecting the fascination with the unknown and the mysterious.
- Example: Henry Fuseli’s “The Nightmare”, a haunting portrayal of terror and dream states.
Prominent Artists and Works
- Caspar David Friedrich (1774–1840)
- A German painter known for his sublime landscapes that evoke introspection and spiritual awe.
-
Key Works: “Wanderer Above the Sea of Fog”, “The Abbey in the Oakwood”.
-
Eugène Delacroix (1798–1863)
- A French painter celebrated for his dynamic compositions and vibrant use of color.
-
Key Works: “Liberty Leading the People”, “The Death of Sardanapalus”.
-
William Blake (1757–1827)
- An English painter, poet, and printmaker known for his visionary works that blend spirituality and myth.
-
Key Works: “The Ancient of Days”, “The Great Red Dragon and the Woman Clothed with the Sun”.
-
J.M.W. Turner (1775–1851)
- A British artist celebrated for his atmospheric landscapes and exploration of light and color.
-
Key Works: “The Fighting Temeraire”, “Rain, Steam and Speed”.
-
Francisco Goya (1746–1828)
- A Spanish painter whose works transitioned from Enlightenment ideals to dark, Romantic themes.
- Key Works: “The Third of May 1808”, “Saturn Devouring His Son”.
Techniques and Innovations
Brushwork and Color - Romantic artists experimented with loose and expressive brushwork, moving away from the precision of Neoclassicism. They used rich, often symbolic color schemes to enhance the emotional impact.
Light and Atmosphere - Light was a crucial element, often employed to create dramatic contrasts or suggest transcendence and mystery.
Scale and Perspective - Large-scale canvases were common, with dramatic use of perspective to draw viewers into the scene.
Influence and Legacy
Impact on Later Movements - Romantic art directly influenced later artistic movements such as Symbolism, Realism, and Impressionism. Its emphasis on emotion, individuality, and the sublime carried forward into Modernism and beyond.
Cultural Impact - Romantic art shaped perceptions of nature, history, and mythology, leaving a lasting legacy in both visual and literary culture.
Museums and Collections - Romantic artworks are displayed in major museums worldwide, including the Louvre (Paris), the Prado (Madrid), and the National Gallery (London).
Conclusion
Romantic art remains a powerful testament to the human spirit's capacity for emotion, imagination, and wonder. Through its rich color, dramatic composition, and profound themes, Romanticism continues to resonate with audiences, inviting them to explore the depths of nature, history, and the human soul.
-
-
www.biorxiv.org
-
eLife Assessment
This is a useful report of a spatially-extended model to study the complex interactions between immune cells, fibroblasts, and cancer cells, providing insights into how fibroblast activation can influence tumor progression. The model opens up new possibilities for studying fibroblast-driven effects in diverse settings, which is crucial for understanding potential tumor microenvironment manipulations that could enhance immunotherapy efficacy. While the results presented are solid and follow logically from the model's assumptions, some of these assumptions may require further validation, as they appear to oversimplify certain aspects in light of complex experimental findings, system geometry, and general principles of active matter research.
-
Reviewer #1 (Public review):
The authors present an important work where they model some of the complex interactions between immune cells, fibroblasts and cancer cells. The model takes into account the increased ECM production of cancer-associated fibroblasts. These fibres trap the cancer but also protect it from immune system cells. In this way, these fibroblasts' actions both promote and hinder cancer growth. By exploring different scenarios, the authors can model different cancer fates depending on the parameters regulating cancer cells, immune system cells and fibroblasts. In this way, the model explores non-trivial scenarios. An important weakness of this study is that, though it is inspired by NSCLC tumors, it is restricted to modelling circular tumor lesions and does not explore the formation of ramified tumors, as in NSCLC. In this way, it is only a general model, and it is not clear how it can be adapted to simulate more realistic tumor morphologies.
-
Reviewer #2 (Public review):
Summary:
The authors develop a computational model (and a simplified version thereof) to treat an extremely important issue regarding tumor growth. Specifically, it has been argued that fibroblasts have the ability to support tumor growth by creating physical conditions in the tumor microenvironment that prevent the relevant immune cells from entering into contact with, and ultimately killing, the cancer cells. This inhibition is referred to as immune exclusion. The computational approach follows standard procedures in the formulation of models for mixtures of different material species, adapted to the problem at hand by making a variety of assumptions as to the activity of different types of fibroblasts, namely "normal" versus "cancer-associated". The model itself is relatively complex, but the authors do a convincing job of analyzing possible behaviors and attempting to relate these to experimental observations.
Strengths:
As mentioned, the authors do an excellent job of analyzing the behavior of their model both in its full form (which includes spatial variation of the concentrations of the different cellular species) and in its simplified mean field form. The model itself is formulated based on established physical principles, although the extent to which some of these principles apply to active biological systems is not clear (see Weaknesses). The results of the model do offer some significant insights into the critical factors which determine how fibroblasts might affect tumor growth; these insights could lead to new experimental ways of unraveling these complex sets of issues and enhancing immunotherapy.
Weaknesses:
Models of the form being studied here rely on a large number of assumptions regarding cellular behavior. Some of these seemed questionable, based on what we have learned about active systems. The problem of T cell infiltration as well as the patterning of the extracellular matrix (ECM) by fibroblasts necessarily involve understanding cell motion and cell interactions due e.g. to cell signaling. Adopting an approach based purely on physical systems driven by free energies alone does not consider the special role that active processes can play, both in motility itself and in the type of self-organization that can occur due to these cell-cell interactions. This to me is the primary weakness of this paper.
A separate weakness concerns the assumption that fibroblasts affect T cell behavior primarily by just making a more dense ECM. There are a number of papers in the cancer literature (see, for some examples, Carstens, J., Correa de Sampaio, P., Yang, D. et al. Spatial computation of intratumoral T cells correlates with survival of patients with pancreatic cancer. Nat Commun 8, 15095 (2017); Sun, Xiujie, Bogang Wu, Huai-Chin Chiang, Hui Deng, Xiaowen Zhang, Wei Xiong, Junquan Liu et al. "Tumour DDR1 promotes collagen fibre alignment to instigate immune exclusion." Nature 599, no. 7886 (2021): 673-678) that seem to indicate that density alone is not a sufficient indicator of T cell behavior. Instead, the organization of the ECM (for example, its anisotropy) could be playing a much more essential role than is given credit for here. This possibility is hinted at in the Discussion section but deserves much more emphasis.
Finally, the mixed version of the model is, from a general perspective, not very different from many other published models treating the ecology of the tumor microenvironment (for a survey, see Arabameri A, Asemani D, Hadjati J (2018), A structural methodology for modeling immune-tumor interactions including pro-and anti-tumor factors for clinical applications. Math Biosci 304:48-61). There are even papers in this literature that specifically investigate effects due to allowing cancer cells to instigate changes in other cells from being tumor-inhibiting to tumor-promoting. This feature occurs not only for fibroblasts but also for example for macrophages which can change their polarization from M1 to M2. There needed to be some more detailed comparison with this existing literature.
-
-
www.biorxiv.org
-
eLife Assessment
This manuscript presents important information as to how adolescent alcohol exposure (AIE) alters pain behavior and relevant neurocircuits, with compelling data. The manuscript focuses on how AIE alters the basolateral amygdala to PFC (PV interneurons) to periaqueductal gray circuit, resulting in feed-forward inhibition. The manuscript is a detailed study of the role of alcohol exposure in regulating the circuit and reflexive pain; however, the role of the PV interneurons in mechanistically modulating this feed-forward circuit could be more strongly supported.
-
Reviewer #1 (Public review):
Summary:
In this manuscript by Obray et al., the authors show that adolescent ethanol exposure increases mechanical allodynia in adulthood. Additionally, they show that BLA-mediated inhibition of the prelimbic cortex is reduced, resulting in increased excitability in neurons that then project to vlPAG. This effect was mediated by BLA inputs onto PV interneurons. The primary finding of the manuscript is that these AIE-induced changes further impact acute pain processing in the BLA-PrL-vlPAG circuit, albeit behavioral readouts after inducing acute pain were not different between AIE rats and controls. These results provide novel insights into how AIE can have long-lasting effects on pain-related behaviors and neurophysiology.
Strengths:
The manuscript was very well written and the experiments were rigorously conducted. The inclusion of both behavioral and neurophysiological circuit recordings was appropriate and compelling. The attention to SABV and appropriate controls was well thought out. The Discussion provided novel ideas for how to think about AIE and chronic pain and proposed several interesting mechanisms. This was a very well-executed set of experiments.
Weaknesses:
There is a mild disconnect between behavioral readout (reflexive pain) and neural circuits of interest (emotional). Considering that this circuit is likely engaged in the aversiveness of pain, it would have been interesting to see how carrageenan and/or AIE impacted non-reflexive pain measures. Perhaps this would reveal a potentiated or dysregulated phenotype that matches the neurophysiological changes reported. However, this critique does not take away from the value of the paper or its conclusions.
-
Reviewer #2 (Public review):
Summary:
The study by Obray et al. entitled "Adolescent alcohol exposure promotes mechanical allodynia and alters synaptic function at inputs from the basolateral amygdala to the prelimbic cortex" investigated how adolescent intermittent ethanol exposure (AIE) affects the BLA -> PL circuit, with an emphasis on PAG-projecting PL neurons, and how AIE changes mechanical and thermal nociception. The authors found that AIE increased mechanical, but not thermal, nociception, and an injection of an inflammatory agent did not produce changes in an ethanol-dependent manner. Physiologically, a variety of AIE-specific effects were found in PL neuron firing at BLA synapses, suggestive of AIE-induced alterations in neurotransmission at BLA-PVIN synapses.
Strengths:
This was a comprehensive examination of the effects of AIE on this neural circuit, with an in-depth dissection of the various neuronal connections within the PL.
Sex was included as a biological variable, yet there were little to no sex differences in AIE's effects, suggestive of similar adaptations in males and females.
-
Reviewer #3 (Public review):
Summary:
Obray et al. investigate the long-lasting effects of adolescent intermittent ethanol (AIE) in rats, a model of alcohol dependence, on a neural circuit within the prefrontal cortex. The studies are focused on inputs from the basolateral amygdala (BLA) onto parvalbumin (PV) interneurons and pyramidal cells that project to the periaqueductal gray (PAG). The authors found that AIE increased BLA excitatory drive onto parvalbumin interneurons and increased BLA feedforward inhibition onto PAG-projecting neurons.
Strengths:
Fully powered cohorts of male and female rodents are used, and the design incorporates both AIE and an acute pain model. The authors used several electrophysiological techniques to assess synaptic strength and excitability from a few complementary angles. The design and statistical analysis are sound, and the strength of evidence supporting synaptic changes following AIE is solid.
Weaknesses:
(1) There is incomplete evidence supporting some of the conclusions drawn in this manuscript. The authors claim that the changes in feedforward inhibition onto pyramidal cells are due to the changes in parvalbumin interneurons, but evidence is not provided to support that idea. PV cells do not spontaneously fire action potentials in slices (nor do they receive high levels of BLA activity while at rest in slices). It is possible that spontaneous GABA release from PV cells is increased after AIE, but the authors did not report sIPSC frequency. Second, the authors did not determine that PV cells mediate the feedforward BLA op-IPSCs and changes following AIE (this would require a manipulation to reduce/block PV-IN activity). This limitation in results and interpretation is important because prior work shows BLA-PFC feedforward IPSCs can be driven by somatostatin cells. Cholecystokinin cells are also abundant basket cells in PFC and have recently been shown to mediate feedforward inhibition from the thalamus and ventral hippocampus, so it's also possible that CCK cells are involved in the effects observed here.
(2) The authors conclude that the changes in this circuit likely mediate long-lasting hyperalgesia, but this is not addressed experimentally. In some ways, the focused nature of the study is a benefit in this regard, as there is extensive prior literature linking this circuit with pain behaviors in alternative models (e.g., SNI), but it should be noted that these studies have not assessed hyperalgesia stemming from prior alcohol exposure. While the current studies do not include a causative behavioral manipulation, the strength of the association between BLA-PL-PAG function and hyperalgesia could be bolstered by current data if there were relationships detected between electrophysiological properties and hyperalgesia. Have the authors assessed this? In addition, this study is limited by not addressing the specificity of synaptic adaptations to the BLA-PL-PAG circuit. For instance, PL neurons send reciprocal projections to BLA and send direct projections to the locus coeruleus (which the authors note is an important downstream node of the PAG for regulating pain).
(3) I have some concerns about methodology. First, 5-ms is a long light pulse for optogenetics and might induce action-potential independent release. Does TTX alone block op-EPSCs under these conditions? Second, PV cells express a high degree of calcium-permeable AMPA receptors, which display inward rectification at positive holding potentials due to blockade from intracellular polyamines. Typically, this is controlled/promoted by including spermine in the internal solution, but I do not believe the authors did that. Nonetheless, the relatively low A/N ratios for this cell type suggest that CP-AMPA receptors were not sampled with the +40/+40 design of this experiment, raising concerns that the majority of AMPA receptors in these cells were not sampled during this experiment. Finally, it should be noted that asEPSC frequency can also reflect changes in a number of functional/detectable synapses. This measurement is also fairly susceptible to differences in inter-animal differences in ChR2 expression. There are other techniques for assessing presynaptic release probability (e.g., PPR, MK-801 sensitivity) that would improve the interpretation of these studies if that is intended to be a point of emphasis.
(4) In a few places in the manuscript, results following voluntary drinking experiments (especially Salling et al. and Sicher et al.) are discussed without clear distinction from prior work in vapor models of dependence.
(5) Discussion (lines 416-420). The authors describe some differing results with the literature and mention that the maximum current injection might be a factor. To me, this does not seem like the most important factor and potentially undercuts the relevance of the findings. Are the cells undergoing a depolarization block? Did the authors observe any changes in the rheobase or AP threshold? On the other hand, a more likely difference between this and previous work is that the proportion of PAG-projecting cells is relatively low, so previous work in L5 likely sampled many types of pyramidal cells that project to other areas. This is a key example where additional studies by the current group assessing a distinct or parallel set of pyramidal cells would aid in the interpretation of these results and help to place them within the existing literature. Along these lines, PAG-projecting neurons are Type A cells with significant hyperpolarization sag. Previous studies showed that adolescent binge drinking stunts the development of HCN channel function and ensuing hyperpolarization sag. Have the authors observed this in PAG-projecting cells? Another interesting membrane property worth exploring with the existing data set is the afterhyperpolarization / SK channel function.
-
-
learn.cantrill.io
-
Welcome back to stage 5 of this advanced demo series.
And in this stage you're going to be adding a load balancer and auto scaling group to provision and terminate instances automatically based on the load of the system.
By adding a load balancer you'll also abstract connections away from individual instances which will allow elastic scaling and self-healing if any of the instances have problems.
Now the first step to moving towards this elastic architecture is to create the load balancer.
To do that move to the EC2 console, scroll down and toward the bottom under load balancing click on load balancers.
Go ahead and click on create load balancer and it's going to be an application load balancer that we're creating.
So click on create.
We're going to be calling the load balancer A4L WordPress ALB.
It's going to be an internet facing load balancer which means the nodes of the load balancer will be allocated with public IP addressing.
And we want the IP address type for this demonstration to be IP version 4.
Okay so now we need to select the subnets that the load balancer nodes will be placed into.
So first make sure that the animals for life VPC is selected so A4L VPC.
And then check the box next to US East 1A, 1B and 1C.
For US East 1A I want you to select the SN-PUB-A which is the public subnet inside Availability Zone A so US East 1A.
For US East 1B I want you to select the public subnet in AZB so SN-PUB-B.
And then lastly for US East 1C we'll be selecting the SN-PUB-C.
So this configures the subnets that the load balancer nodes will be placed into because they're public subnets and because we have the scheme set to internet facing these nodes will be provided with public IP addressing.
Next under security groups click on the cross to delete the default security group.
And then click in the drop down and go ahead and select A4L VPC-SG load balancer.
Now there will be some random characters afterwards; that's okay, just make sure you select A4L VPC-SG load balancer.
Now scroll down and under listeners and routing make sure that the protocol is set to HTTP and the port is set to 80.
Application load balancers work using target groups and so we need to define a target group to forward the traffic to.
Now we don't currently have any target groups which have been created so we need to go ahead and click on create target group.
Now under basic configuration the target type is going to be instances so make sure that that's selected.
Under target group name just enter A4L WordPress ALBTG.
Scroll down further still make sure the protocol is set to HTTP and port is set to 80 on this screen as well.
Make sure the VPC is set to A4L VPC.
The protocol version by default should be HTTP1 you can leave that as the default.
Under health checks make sure the health check protocol is HTTP and the health check path is / (a forward slash).
Once that's set go ahead and click next.
Now we won't be adding any instances to the target group these can either be added manually or a target group can be integrated with an autoscaling group and that's something that we'll be configuring later in this advanced demo.
For now just scroll down to the bottom and click create target group.
Then go back to the previous tab click on the refresh icon and then select the A4L WordPress ALBTG from the drop down.
Now we won't be picking any add-on services so you don't need to check the AWS global accelerator.
Just scroll down to the bottom and click create load balancer.
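As an aside, if you wanted to script this instead of clicking through the console, the equivalent AWS CLI calls look roughly like the sketch below. The VPC, subnet, and security group IDs are hypothetical placeholders for the values in your own environment, and the names are hyphenated variants because load balancer and target group names can't contain spaces in the API.
# Create the target group first (HTTP on port 80, health check on /)
aws elbv2 create-target-group \
  --name A4L-WordPress-ALBTG \
  --protocol HTTP --port 80 \
  --vpc-id vpc-EXAMPLE \
  --target-type instance \
  --health-check-protocol HTTP --health-check-path /
# Create the internet-facing application load balancer in the three public subnets
aws elbv2 create-load-balancer \
  --name A4L-WordPress-ALB \
  --type application --scheme internet-facing --ip-address-type ipv4 \
  --subnets subnet-PUB-A subnet-PUB-B subnet-PUB-C \
  --security-groups sg-LOADBALANCER
# Wire the HTTP:80 listener to the target group, using the ARNs returned above
aws elbv2 create-listener \
  --load-balancer-arn ALB-ARN-FROM-PREVIOUS-CALL \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=TG-ARN-FROM-FIRST-CALL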
Next click on view load balancer and then select the load balancer that you've just started creating. We'll need to create another parameter in the parameter store, and for that we'll need the DNS name of the load balancer.
So go ahead and click on the little symbol next to that to copy that into your clipboard.
Next you'll need to move back to the parameter store.
Now because we're automating this environment, we need a way for all of the EC2 instances to know the DNS name of the load balancer. This will be used as a workaround to the fact that the IP addresses are hard coded into the database, so we need an automatic way of exposing the load balancer DNS name to the EC2 instances.
Click on create parameter. For the parameter name use /A4L/WordPress/ALBDNSNAME, that's forward slash A4L, forward slash WordPress, forward slash ALBDNSNAME, for application load balancer DNS name. For description put DNS name of the application load balancer for WordPress.
We're going to be picking a standard tier parameter.
It's going to be a string parameter.
The data type is going to be text, and in value go ahead and paste the DNS name of the load balancer which you just copied into your clipboard. Then scroll down to the bottom and click on create parameter.
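For reference, the same parameter could be created from the CLI along these lines. This is a sketch; the exact name casing is an assumption, so match whatever your user data script later reads.
# DNS name copied from the load balancer's details page
ALBDNS=your-alb-dns-name-here.us-east-1.elb.amazonaws.com
aws ssm put-parameter \
  --name /A4L/WordPress/ALBDNSNAME \
  --description "DNS name of the application load balancer for WordPress" \
  --type String --tier Standard \
  --value "$ALBDNS"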
Now the next thing we're going to do is to update the launch template and this is quite a complex update so you need to understand exactly what we're doing.
Currently and I've mentioned this a few times throughout this demo series the IP address of the first EC2 instance that's used for a WordPress deployment is hard coded into the database.
Now this is fine if it's a static IP address but if it's not or if you're using multiple EC2 instances then you can't use IP addresses because they change both on an individual EC2 instance and if you're scaling using multiple instances.
So we need to replace this hard coded value with the DNS name of the load balancer.
So that's what we're going to do.
We're going to update the launch template with some final configuration so that it can adjust this configuration replacing the IP address with the DNS name of the load balancer.
So go back to the EC2 console, click on launch templates, select the WordPress launch template, and click on actions, then modify template (create new version).
Under the template version description we're going to use app only, uses EFS file system defined in /a4l/wordpress/efs/fsid and then ALB home added to the WP database.
So we're going to make some on the fly adjustments to the WordPress database when every instance is provisioned to make sure that the load balancer DNS name is set to be the home URL for WordPress.
So again scroll all the way down to the bottom because we're using an older template as the foundation for this one.
All of the values will be pre-populated.
Expand advanced details and scroll all the way down to user data and then just expand this text entry to make it slightly easier to interact with.
As with the previous step position your cursor at the end of this top line and press enter twice.
We need to add the first two lines of script which will bring in the application load balancer DNS name into an environment variable using systems manager parameter store.
So now this instance when it's provisioning has the DNS name of the load balancer.
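Those two lines are likely something close to the following sketch, which reads the parameter created earlier into an ALBDNSNAME environment variable; the region and the exact parameter name casing are assumptions based on this demo.
# Fetch the ALB DNS name from Systems Manager Parameter Store
ALBDNSNAME=$(aws ssm get-parameters --region us-east-1 \
  --names /A4L/WordPress/ALBDNSNAME \
  --query 'Parameters[0].Value' --output text)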
Now next move all the way down to the bottom of this user data.
So the last step that we want a machine to do when it's provisioning is to perform this update of the database.
So there's a fairly large block of text which you need to copy from this stage's text instructions.
It's stage five and you need to paste this into the bottom of this file.
So right at the bottom, after the last two existing statements, paste in this block.
So this should start with the cat command on the top line of what you've just pasted in, and all the way down at the bottom it should end with /home/ec2-user/update_wp_ip.sh.
Essentially what this does is to bring in the WordPress configuration file to get the current authentication details for the database.
So all these lines at the top are just designed to get the authentication information.
So the DB name, the DB user and the DB password.
This line runs a database query to get the old value, the original hard coded IP address of the EC2 instance.
So this is pulling in the original hard coded IP address.
Then we're going to take the load balancer DNS name and run a series of SQL commands to update the database, moving from that hard coded IP to using the ALB DNS name.
Now what this is actually doing is this line here is creating a script file and it's going to put into this script file everything until this EOF directive.
So scrolling down this means that everything between these two lines is going to be stored in this script.
Then we're going to make the script executable using chmod 755.
We're going to echo the path to this script into /etc/rc.local, which is run every time the instance is started up.
And then finally we're going to run this script once to update this information right here and now.
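As a reference, here is a minimal sketch of what that pasted block plausibly looks like, reconstructed from the description above. The authoritative version is the block in the stage's text instructions; the wp-config.php parsing approach, the wp_options/wp_posts table updates, and the parameter name casing are assumptions.
cat >> /home/ec2-user/update_wp_ip.sh << 'EOF'
#!/bin/bash
# Pull the current DB credentials out of the WordPress config file
source <(php -r 'require("/var/www/html/wp-config.php"); echo("DB_NAME=".DB_NAME."; DB_USER=".DB_USER."; DB_PASSWORD=".DB_PASSWORD."; DB_HOST=".DB_HOST);')
# Old value: the hard coded address WordPress stored at install time
OLD_URL=$(mysql -u $DB_USER -h $DB_HOST -p$DB_PASSWORD $DB_NAME -e 'select option_value from wp_options where option_id = 1;' | grep http)
# New value: the ALB DNS name held in Parameter Store
ALBDNSNAME=$(aws ssm get-parameters --region us-east-1 --names /A4L/WordPress/ALBDNSNAME --query 'Parameters[0].Value' --output text)
# Replace the old address with the ALB DNS name wherever it appears
mysql -u $DB_USER -h $DB_HOST -p$DB_PASSWORD $DB_NAME -e "UPDATE wp_options SET option_value = replace(option_value,'$OLD_URL','http://$ALBDNSNAME') WHERE option_name = 'home' OR option_name = 'siteurl';"
mysql -u $DB_USER -h $DB_HOST -p$DB_PASSWORD $DB_NAME -e "UPDATE wp_posts SET post_content = replace(post_content,'$OLD_URL','http://$ALBDNSNAME');"
EOF
chmod 755 /home/ec2-user/update_wp_ip.sh
echo "/home/ec2-user/update_wp_ip.sh" >> /etc/rc.local
/home/ec2-user/update_wp_ip.sh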
So this new version of the launch template essentially changes what this hard coded IP address is every time to be the DNS name of the load balancer.
It means if we ever change the DNS name of this load balancer this script will automatically correct this hard coded value.
Now this is a thing specific to WordPress and there are many situations where you'll have applications which have certain nuances that you need to be aware of when creating elastic architectures.
This is the one for WordPress.
So now that we've made these changes go ahead and click on create template version to create that new version of this launch template.
Click on launch templates, and now for the final time we need to update the default version.
So make sure this launch template is selected.
Click on actions, scroll down, and select set default version. Click in the drop down; the current default version is version three, and we want version four, so select that and then click set as default version.
Now that means the launch template is updated and we can now provision instances in a fully elastic way.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break maybe stretch your legs or make a coffee.
Now part two will continue immediately from this point so go ahead complete this video and when you're ready I look forward to you joining me in part two.
-
-
202411-typehint.gihyo-python-monthly.pages.dev
-
SQL
「データベースから取得した文字列」 ("a string retrieved from the database") might be easier to read.
-
文字列リテラル (string literal)
This didn't quite click for me on first reading, so how about adding something like the following?
「文字列リテラル、つまりダブルクオートやシングルクオートで文字列を宣言した文字列のみ」 (i.e., "string literals, meaning only strings declared with double or single quotes")
It might be a bit long and hard to read, though... (a clearer phrasing would be even better.)
-
Fettet
typo: Ferret
-
ユーザー向けのユーザー向けの
This phrase appears to be repeated twice.
-
Python 3.11でtypingモジュールにLiteralStringが追加されました (LiteralString was added to the typing module in Python 3.11)
The documentation below states "Note that LiteralString is a special form used solely for type checking.", so it might be worth mentioning that LiteralString was not added as a subclass of str; it is strictly a construct used for static type checking. Some readers might otherwise wonder why running type("seven") returns <class 'str'>.
-
-
learn.cantrill.io
-
Welcome back to stage 4 of this advanced demo series.
Now in stage 4, we're going to perform the last step before we can make this a truly elastic and scalable design.
And we're going to migrate the wp-content folder which stores these priceless animal images from the EC2 instance onto EFS which is the elastic file system.
This is a shared network file system that we can use to store images or other content in a resilient way outside of the life cycle of these individual EC2 instances.
So to do that, we need to move back to the AWS console, click on the services drop down and type EFS.
Right click and open the EFS console in a new tab.
Once that's opened, click on create file system.
Now we're going to step through the full configuration options so rather than using this simplified user interface, go ahead and click on customize.
So the first step is to create the file system itself.
So for name, go ahead and call this a4l-wordpress-content.
Leave the storage class as standard.
These cat images are critical data and so we are going to leave automatic backups enabled.
And we're also going to leave lifecycle management set to the default, so 30 days since the last access. For throughput mode pick bursting, which links the throughput to the size of the storage.
Then expand additional settings.
You've got two performance modes, general purpose and max IO.
For this demonstration, go ahead and select general purpose.
Max IO is for very specific high performance scenarios.
For 99% of use cases, you should select general purpose.
Now also go ahead and untick enable encryption of data at rest.
If this were a production scenario, you would leave this on.
But for this demo, which is focusing on architecture evolution, it simplifies the implementation if we disable it.
So go ahead and make sure that encryption is disabled.
Once you've done that, that's all of the file system specific options that we need to configure.
So go ahead and click on next.
In this part, you're configuring the EFS mount targets, which are the network interfaces in the VPC, which your instances will connect with.
So click in the virtual private cloud drop down and then pick A4L VPC.
So this is the VPC that these mount targets are going to go into.
Now, each of the mount targets is secured by a security group.
The first thing we need to do is to strip off the default security group for the VPC.
So click in the crosses next to each of these security groups.
Now, you should have three rows, one for each availability zone.
So in my case, US East 1A, 1B and 1C; make sure that you've got the same selected.
So one row for each availability zone: A, B and C.
Now in the subnet drop down for the US East 1A row, I want you to go ahead and pick SN-AP-A.
So this should be 10.16.32.0/20.
For the US East 1B row, I want you to go ahead and pick SN-AP-B.
This should be 10.16.96.0/20.
And then finally for the US East 1C row, I want you to go ahead and pick SN-AP-C, which should be 10.16.160.0/20.
Now for all three rows within the security groups drop down, I want you to go ahead and select A4LVPC-SGEFS.
Again, for each of these, it will have some randomness after it, but just make sure you pick the right one.
A4LVPC-SGEFS.
And you need to pick that for each of the three rows.
Make sure you pick the right one because if you don't, it will impact your ability to connect.
So those are the mount targets configured, and they'll be allocated an IP address in each of these subnets automatically, which will allow you to connect to them.
At this point, go ahead and click on Next.
You can configure some additional file system policies.
This is entirely optional.
We won't be using that.
So just go ahead and click on Next.
And then on the review screen, scroll all the way down to the bottom and just click on Create.
Now the file system itself will initially show as being in the creating state and it will then change to available.
Go ahead and click on the file system itself.
Click on the Network tab and then just scroll down and these are the mount targets which are being created.
Now in order to configure our EC2 instance, we will need all of these mount targets to be in the available state.
But what we can do to save some time is we can note down the file system ID of this EFS file system.
So this is this value.
You can see it at the top header here or you can see it in this row at the top.
Just note that down and copy that into your clipboard because we need to configure another parameter to point at this file system ID.
Because remember when we're scaling things automatically, it's always best practice to use the parameter store to store configuration information.
So click on Services, type Sys which are the first few letters of Systems Manager and open that in a new tab.
Once you're at the Systems Manager console, go ahead and click on Parameter Store and then you need to click Create Parameter to create a new parameter.
We're going to call this parameter /A4L/Wordpress/EFSFSID.
So that's EFS for Elastic File System, FS for File System and then ID: the EFS file system ID.
For description, put File System ID for WordPress content (wp-content), and that will help us know exactly what this parameter is for.
As before, we'll be picking the standard tier, the type will be string, the data type will be text and then into the value, just go ahead and paste that file system ID.
And once you've done all that, you can go ahead and click on Create Parameter.
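If you prefer the command line, the equivalent parameter creation looks roughly like the following; the file system ID shown is a placeholder.

  # Sketch: create the EFS file system ID parameter via the AWS CLI.
  aws ssm put-parameter \
    --name /A4L/Wordpress/EFSFSID \
    --description "File System ID for WordPress content (wp-content)" \
    --type String \
    --tier Standard \
    --value fs-0123456789abcdef0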
Once that's done, go back to the EFS console and if required, just hit refresh and make sure that all of these mount targets are in the available state.
This is what it should look like with all three showing a green tick and available.
Once that's the case, go to the EC2 console because now we're going to configure our EC2 instance to connect to this file system.
So go to Running Instances, locate the WordPress-LT instance, right click, select Connect, choose Session Manager and then click on Connect.
And this will open Session Manager console to the EC2 instance.
As always, type sudo bash and press Enter, cd and press Enter, and then type clear and press Enter again, just to clear the screen, making it easier to see.
Now, even though EFS is based on NFS, which is a standard, in order to get EC2 instances to connect to EFS, we need to install an additional tools package.
And to do that, we use this command.
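For reference, on Amazon Linux 2023 that command is presumably just the following; the package name is the same amazon-efs-utils package we add to the launch template later in this stage.

  # Install the EFS mount helper used to mount EFS file systems.
  sudo dnf -y install amazon-efs-utils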
So type or paste that in and press Enter to install the EFS support package.
Once that's installed again, I'm going to clear the screen to make it easier to see.
Then I'm going to move to the web root folder by typing cd /var/www/html.
And what I'm going to do is to move the entire wp-content folder somewhere else.
So if I just go inside this folder to illustrate exactly what it looks like and then do a list, you'll see that inside there are plugins, themes and uploads.
And inside those folders are any media assets used by WordPress.
So I'm just going to type cd .. to move back up a level out of this folder.
And then I'm going to move this entire folder to the /tmp folder, which is a temporary folder.
So mv wp-content /tmp, and that moves that entire folder to the temporary folder.
Then we're going to create a new folder.
So sudo mkdir wp-content.
This will be the mount point for the EFS file system.
So I'm making an empty directory.
Then I'm going to clear the screen and then paste in the next two commands from the lesson instructions.
And this populates an environment variable called EFSFSID with the value from the parameter you just created in the parameter store.
So this is now the file system ID of the EFS file system.
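As a sketch, that pair of commands presumably looks something like this, with the region assumed:

  # Read the EFS file system ID out of Parameter Store into a variable.
  EFSFSID=$(aws ssm get-parameters --region us-east-1 \
    --names /A4L/Wordpress/EFSFSID \
    --query 'Parameters[0].Value' --output text)
  # Confirm the variable is populated.
  echo $EFSFSID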
Now there's a file called fstab which exists in the /etc folder, and this contains a list of file systems which are mounted on this EC2 instance.
Initially this only has the single line for the boot volume.
What we're going to do is add an additional line to this fstab file.
And this line is going to configure the EC2 instance so that it mounts our EFS file system on boot every single time.
And this is this command.
So it echoes this line.
So the file system ID from the environment variable.
We're going to mount it to the folder that we just created.
So the wp-content folder and these are all of the file system options.
So we're going to put that into the fstab file.
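The echo presumably takes a shape like the following; treat the exact mount options as an assumption, since they come from the lesson instructions:

  # Append an EFS entry to /etc/fstab so it mounts on every boot.
  echo -e "$EFSFSID:/ /var/www/html/wp-content efs _netdev,tls 0 0" >> /etc/fstab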
So if we now cat this file, it's got this extra line.
And this means this file system will be mounted whenever the operating system starts.
And we can force this just for now by running mount -a -t efs defaults.
And this will mount the EFS file system onto this EC2 instance.
We can verify that by doing a df -k.
And the bottom line should show us that we've now got this EFS file system mounted as the wp-content folder.
So this is the folder that WordPress expects its media to be inside.
Now all that remains is for us to migrate the existing data that we moved to the temporary folder back in to wp-content.
And to do that we use this command.
So we're using the mv command to move /tmp/wp-content/*, so any files and folders.
And then we're moving it back into /var/www/html/wp-content.
So this is the EFS file system.
So run that and that will copy the data back to EFS, which remember is now mounted where WordPress expects it to be.
Now that might take a few moments to complete.
Once it's done, we just need to fix up the permissions.
So run this command: chown -R ec2-user:apache /var/www.
So this just reestablishes permissions and ownership of everything in this particular part of the file system.
This just makes sure we won't have any problems going forward.
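Taken together, the content migration and permissions fix amount to something like this sketch:

  # Move the saved content back onto the now EFS-backed folder,
  # then reset ownership so Apache and ec2-user can access it.
  mv /tmp/wp-content/* /var/www/html/wp-content/
  chown -R ec2-user:apache /var/www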
Now at this point we're going to use the reboot command to restart this instance.
And if everything goes well, the instance should start, the EFS file system should be loaded and WordPress should have access to all of this wp-content, which is now running from a network file system.
So go ahead type reboot and press enter.
Press enter again just to make sure that you are disconnected, and I am.
So that's good.
So now I need to wait a few minutes for this EC2 instance or at least its operating system to restart.
So I'll go ahead and close down this session manager tab.
Go back to the EC2 console.
After waiting a few minutes, I'll right click select connect check session manager click on connect.
Assuming the instance has restarted, I'll be back at the prompt.
And if I do a df -k, if everything's working as expected, the EFS file system will still be mounted into the directory that we configured.
If I go back to the EC2 console and just copy down the instance's public IP version 4 address, either refresh the tab if you've still got it open or paste in the IP address and reload that page.
And if everything's working as expected, all of these high quality critical cat pictures should still load from the WordPress blog.
So now at this point when we're interacting with the application, both the database and the wp-content both exist away from the EC2 instance.
And this means we're now in a position where we can scale the EC2 instance without worrying about the data or the media for any of the posts.
And this means we can now further evolve this architecture to be fully elastic.
Now there is one more thing that we need to do before moving on to the next stage of the demo and implementing this final step towards a fully elastic architecture.
And that's that we need to update the launch template to include this updated configuration so that it uses EFS.
To do that, go back to the EC2 console, go to launch templates, select the launch template.
So check the box, click on the actions drop down, select modify template, create new version.
For template version description, use "App only, uses EFS file system defined in" followed by the name of the parameter store value that contains the file system ID.
So this is just the description.
Now again, because we're creating a new version, it will populate all of the configuration with the previous template version.
But I'll need you to scroll all the way down to the bottom, expand advanced details and scroll all the way down.
Again, we're going to make some edits to the user data.
So expand this box a little bit to make it easier to read.
What I'll need you to do is to put your cursor after the end of this top line and just press enter twice to make some space and then paste in this set of configuration.
And again, this is stored within the instructions for this stage of the demo series that will just populate an environment variable with the file system ID that it will get from the parameter store.
Scroll down and next you're looking for a software installation line.
You're looking for this line, the line that performs the installation of the MariaDB server, the Apache web server and the wget utility.
Position your cursor after the word stress and then press space.
And then I'll want you to add this text followed by a space, which is amazon-efs-utils.
Next, scroll down a little bit further and you're looking for the line that says systemctl start httpd.
Click at the end to position your cursor at the end of that line, then press enter twice to add some space, and then paste in this next block, also contained within this lesson's instructions.
What this does is to make a wp-content folder before we install WordPress, configure the ownership of the entire folder tree, add the line for EFS to the fstab file, and then mount this EFS file system into /var/www/html/wp-content.
And this means that when we're automatically provisioning this instance before we install WordPress, we're creating and mounting this EFS file system.
And then we go on to installing WordPress, configuring the database and performing the final fix of all of the permissions at that folder structure.
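Putting those pieces together, the inserted user data block is presumably along these lines, with the parameter name, region and mount options assumed:

  # Fetch the EFS file system ID, prepare the mount point,
  # and mount EFS before WordPress is installed.
  EFSFSID=$(aws ssm get-parameters --region us-east-1 \
    --names /A4L/Wordpress/EFSFSID \
    --query 'Parameters[0].Value' --output text)
  mkdir -p /var/www/html/wp-content
  chown -R ec2-user:apache /var/www/
  echo -e "$EFSFSID:/ /var/www/html/wp-content efs _netdev,tls 0 0" >> /etc/fstab
  mount -a -t efs defaults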
Next, scroll down.
We're done with all of the launch template user data configuration.
Just go ahead and click on create template version.
We need to make this new version the default.
So click on launch templates, select the WordPress launch template, click on actions, scroll down, select set default version, click in the dropdown.
Version two should currently be the default.
Change that to version three and click set as default version.
So at this point, you've further evolved the architecture.
Now we have both the database for WordPress stored in RDS and the wp-content data stored within the Elastic File System.
So we've solved many of the applications limitations.
We can scale the database independently of the application.
We've stored the media files separate from the instance.
So now we can scale the instance freely out or in without risking the media or the database.
We do still have two final limitations, which we'll be fixing together in the next stage of this demo series.
One is that customers still connect to the instance directly so we don't have any health checks.
We don't have any auto healing capabilities and we're limited to how we can scale.
And then finally, the IP address of the instance is still hard coded into the database.
And so even if we did provision additional instances, WordPress would expect all of the data to be loaded from that one single original instance.
And to allow us to scale, we have to resolve both of those problems.
At this point though, you've done everything required in stage four.
So go ahead, complete this video.
And when you're ready, I look forward to you joining me in stage five of this advanced demo series.
-
-
learn.cantrill.io
-
Welcome back and in stage three of this demo series, you're going to change the single server architecture that's on screen now and move towards something a little more scalable.
You're going to migrate the database from the EC2 instance into a separate RDS instance and that means each of these can scale independently, so you can grow or shrink the database independently of the EC2 instance.
It also means that the data in the database lives past the lifecycle of the EC2 instance and this is required for later stages in the demo where you want to scale in and out based on load.
So let's go ahead and do that.
So you'll need to be at the AWS console, click on services and in the find services drop down, type RDS and then open that in a new tab.
Now we're going to create a subnet group first and a subnet group is what allows RDS to select from a range of subnets to put its databases inside.
In this case, we'll be giving RDS a selection of three subnets, so SN-DB-A, B and C.
So three availability zones which it can choose to deploy database instances into.
So to do that, look on the left hand menu and just click on subnet groups.
Click on create DB subnet group.
For name, call it WordPress RDS subnet group.
Under description, just type RDS subnet group for WordPress.
In the VPC drop down, select the A4L VPC.
Scroll down a little and then under availability zones, click in the drop down and check the box next to US East 1A, 1B and 1C because we have database subnets in each of those availability zones and these were created as part of the infrastructure cloud formation template that you applied at the start of this advanced demo.
Once you've selected those availability zones, next we need to pick the subnets inside those availability zones that the databases will go into.
So click in the subnets drop down.
Now you could go to the VPC console and get the IP address ranges that correspond to the different database subnets but I'm going to save us some time.
So in US East 1A, you need to pick 10.16.16.0/20.
That's the database subnet in availability zone A.
In availability zone B, you need to pick 10.16.80.0/20.
That's the database subnet in AZB.
And then in US East 1C, you need to pick 10.16.144.0/20, because that's the database subnet in availability zone C.
So now you've selected the three availability zones, the three subnets in those availability zones so you can scroll down and click on create.
So that creates the database subnet group that RDS uses in order to select which subnets database instances should go into.
The next step is to actually create the RDS instance itself.
And to start with, we're going to use a free tier eligible database.
So go ahead and click on databases, click on create database, select standard create.
RDS is capable of using lots of different database engines, but we're going to select MySQL.
So select MySQL.
Scroll down and under version, put the version number that's inside this lesson's description.
AWS regularly make changes and instead of using the version you see on this video, pick the one that's inside this lesson's description.
Scroll down.
Under templates, click on free tier because this will make sure that we're only selecting options that are eligible under the free tier.
And we want to keep the first part of this demo series completely within the AWS free tier.
Now under DB instance identifier, we need to give this instance a name.
So delete this placeholder and then just enter a4lwordpress.
Now for master username and password, we need to enter the values from the parameter store that we entered previously.
So click on services, start typing sys and then right click on systems manager and open in a new tab.
Go to the parameter store, look for the DB user parameter and then copy what's in the value field and then go back to the RDS console and paste that in for master username.
So that should be a4lwordpressuser.
Do the same for the master password.
So for that, you need to go back to the parameter store, and this time you're looking for the A4L WordPress DB password parameter.
So select that.
Once you're here, click on show and then copy the value for this parameter.
Once you've got that value, paste it into both the master and confirm password boxes.
Scroll down further still and now you need to pick the database instance size.
Now because we've selected free tier eligible, we can only select db.t3.micro.
Or in some cases, this may be slightly different, but it's only going to allow you to pick free tier eligible instance types.
So we can leave that selected.
It is the default because we picked free tier only.
Now scroll down to connectivity.
Under the virtual private cloud VPC, click in the drop down and select the A4L VPC.
So this defines the VPC that this database is going into.
Once you've selected that, make sure for subnet group, you've got WordPress RDS subnet group selected.
Choose no for publicly accessible, and then for existing VPC security groups, I want you to go ahead and click on the cross next to default, then click in the drop down and select A4LVPC-SGDatabase.
And again, this will have some randomness on the end, but that's perfectly okay.
So select A4LVPC-SGDatabase.
Under availability zone preference, select US East 1A.
This makes sure this database just to start off with is in the same availability zone as the EC2 instance.
Scroll down further still, go past database authentication and then expand additional configuration.
And this is important because we need to set an initial database name.
So for the initial database name, we'll need to go back to the parameter store.
This time we need the value for the A4L WordPress DB name parameter.
So select that and then copy its value.
So copy that into the clipboard, go back to the RDS console and paste that in for the initial database name.
And that should be a4lwordpressdb.
At this point, we can leave everything else as default.
So scroll all the way down to the bottom and click on create database.
Now this can take anywhere up to 30 minutes to create the database and it will need to be fully ready before you move on to the next step.
So now's a great time to pause this video, go and grab a coffee and wait for this database to become available, at which point you can resume the video.
Now that this database instance is available, the next thing to do is to migrate the actual WordPress data.
And to do that, we need to move back to the EC2 console.
So open the EC2 console, locate WordPress-LT, select that instance, right click, select connect, choose session manager and then click on connect.
We're going to perform the migration from this instance itself.
To start with, run sudo bash and press enter, cd and press enter, and then type clear and press enter.
We're going to be running some commands which are in the text instructions for this stage of the demo series.
The first set of commands will load data from the parameter store into environment variables within the operating system.
So go ahead and copy all of the first block of commands and paste it in to this terminal.
This will load the DB password, the DB root password, DB user, DB name and DB endpoint all into environment variables and make sure to press enter on the last line just to complete that command.
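Each line in that block follows the same pattern; as one hedged example, with the parameter name and region assumed:

  # Load the database user from Parameter Store into a shell variable.
  DbUser=$(aws ssm get-parameters --region us-east-1 \
    --names /A4L/Wordpress/DBUser \
    --query 'Parameters[0].Value' --output text)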
Next we're going to export the data from the local MariaDB database instance and we'll do that using this command.
So mysqldump -h followed by the database endpoint, which at this point will be localhost, then -u and the database user, then -p and the DB password, and finally the DB name, all supplied via those environment variables.
And then we direct the output of this command into a file called a4lwordpress.sql, which is a database export file.
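Reconstructed from that description, the export command looks like this; the environment variable names are assumptions:

  # Dump the local WordPress database to an export file.
  mysqldump -h $DbEndpoint -u $DbUser -p$DbPassword $DbName > a4lwordpress.sql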
So the best way is to copy and paste this out of the lesson instructions and then press enter. Then run an ls -la and just make sure that you've got that a4lwordpress.sql file; this is an export of the current SQL database for WordPress.
Now next we need to change the parameter in parameter store for DB endpoint so that it points at our new RDS instance.
So go back to the RDS console, click on the a4lwordpress instance and then copy this endpoint name into your clipboard.
So it should start with a4lwordpress, then some randomness, then the region, then rds, and then amazonaws.com.
So copy all of that into your clipboard and then either open the systems manager console and go to the parameter store or if you still got it open in a previous tab then you can open that tab.
So click on parameter store to list all the parameters.
Now at this point we're going to delete one of these parameters and it needs to be a deletion because we're going to recreate it.
Please make sure that you do delete it and recreate it rather than just editing the value for the existing parameter because that won't work.
You'll need to select the checkbox next to /A4L/Wordpress/DBEndpoint and then click on delete.
And once you've done that click on delete parameters to confirm that deletion and we're going to create a new parameter with the same name.
So click on create parameter, and for name put /A4L/Wordpress/DBEndpoint, which is the same name as before.
For description put WordPress endpoint name.
We're going to use the standard tier again.
It's going to be a string type.
The data type is going to be text and then in the value paste in the RDS endpoint that you just copied into your clipboard.
And once you've done that scroll down and click on create parameter.
Go back to the session manager tab that you've got open to the instance and we need to refresh the environment variable with the updated parameter store parameter.
So to do that copy and paste this next block of commands and this updates the db endpoint with the new RDS DNS name.
Once we've updated that then we can run the mysql command to load in the a4lwordpress.sql export into the RDS instance and that's using this command.
So again, mysql -h followed by the RDS endpoint name, which is in that environment variable, then specifying the DB user, DB password and DB name, and then directing the command to load in the contents of this file.
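Again reconstructed from the description, with variable names assumed:

  # Import the export file into the RDS instance.
  mysql -h $DbEndpoint -u $DbUser -p$DbPassword $DbName < a4lwordpress.sql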
So if we paste all that in and press enter that imports that database export into RDS.
So now RDS has the same data as our local MariaDB installation.
Now to finalize the migration, we need to update the WordPress configuration file.
So instead of pointing at the local MariaDB instance, it points at RDS.
And we can do that using sed, performing a replace of localhost with the contents of the DB endpoint environment variable, which remember now contains the DNS name for the RDS instance.
And the location of the file that we'll be performing this replace on is /var/www/html/wp-config.php, which is the WordPress configuration file.
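As a sketch, with the variable name assumed:

  # Point WordPress at RDS by replacing localhost in wp-config.php.
  sed -i "s/localhost/$DbEndpoint/" /var/www/html/wp-config.php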
So paste that in and press enter, and that's reconfigured WordPress so that it talks to the RDS instance for the database functionality.
Lastly, we can run these commands to both disable MariaDB, so it doesn't start every time the operating system boots, and set it to stopped right now.
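Those two commands are presumably just:

  # Stop MariaDB now and prevent it starting on boot.
  systemctl disable mariadb
  systemctl stop mariadb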
So now MariaDB is no longer running on this EC2 instance.
So we can verify that the functionality of our application is still there by going back to the EC2 console.
Select WordPress-LT and just copy this public IP address into your clipboard.
If you already have it open in an existing tab you can refresh.
It should still load the blog, and we've still got the same best animals blog post.
But now WordPress is loading the data for this blog post from the RDS instance.
Now to be really clear, at this point, when you create a blog post, WordPress has two different sets of data.
It has the data of the blog post so the text, the metadata, the author, the date and time, the permissions, the published status and many other things they're stored in the database.
But any media, any content for this blog post, is still stored locally in a directory called wp-content.
That is still on the EC2 instance. All that we've migrated in this stage of the demo is the database itself, from MariaDB through to RDS.
Now before we finish with this stage of the demo series there's one final task and that's to update the launch template so we can launch additional EC2 instances.
But using this new configuration so pointing at the RDS instance.
So to do that go back to the EC2 console and click on launch templates.
Click in the checkbox next to the wordpress launch template.
Select the actions drop down and then locate and click modify template create new version.
Now for the description we're going to put single server app only.
So we're indicating with this version of the launch template we no longer have the database inside the instance itself.
Now because we're creating this from a previous version all of the boxes will be pre-populated.
What we need to do is to update the user data.
So go all the way down to the bottom and expand advanced details scroll all the way down to the bottom of that and find the user data box.
And I find it's easier if we just expand it to make it slightly easier to see.
There are a number of things which we need to adjust in this user data.
First just scroll down and you need to locate this block of commands.
So systemctl enable and systemctl start.
What we need to remove are the lines that refer to MariaDB.
So the top one is systemctl enable mariadb; select that and delete it. Then locate systemctl start mariadb; select that and delete it.
So that prevents MariaDB starting on the EC2 instance.
Now because we're using an RDS instance we also need to remove this line which attempts to set the root password of the MariaDB database instance.
We don't need that anymore so delete that.
Scroll all the way down to the bottom and look for this block.
So it starts with echo "CREATE DATABASE" and the DB name, and it finishes with rm /tmp/db.setup.
This is the block that creates the database within MariaDB, creates the user and sets all of the permissions.
But because we're using RDS now we don't need to do any of this so we're going to delete this block as well.
Once you've done that you can go ahead and click on create template version and this will create a new version but this time designed to use RDS.
Once you've done that go back to the launch template screen and click on the launch template.
We need to change it so that the new version is the default version that's used whenever we launch instances from this template.
So click on the launch template.
Once that's loaded you'll see we're currently on version one.
Change this to version two and you'll see the updated details and then click on the actions drop down.
Select set default version.
In the dialogue make sure that version two is shown under template version and then click on set as default version.
And at this point version two or the one which uses RDS is now set as default and this means when we use this template to launch any instances this is the version that will be used by default.
Now at this point that's everything that I wanted you to do in stage three of this demo series.
So you've migrated the data for a working WordPress installation from a local MariaDB database instance through to RDS.
And that's essential to be able to scale this application because now the data is outside of the lifecycle of the EC2 instance.
So we know that for any scale in or out events it won't impact the relational or SQL based data.
It also means that we can scale the database independently of the WordPress application instances.
So that helps us reach the desired outcome of a fully elastic architecture.
Now at this point we've actually fixed many of the limitations of this design.
At this point the only things that we need to fix are the application media.
So the WordPress content which still resides in a folder local to the EC2 instance.
So we need to migrate this out so that we can scale the instances in and out without risking that data.
The other things that are still limiting factors are that customers are still connecting directly to the instance.
So we need to resolve that by using a load balancer and the IP address of the instance is still hard coded into the database.
So if this EC2 instance fails for whatever reason and we provision a new one, it won't function because WordPress expects everything to be loaded from this IP address.
So that's something we need to resolve.
But at this point that's everything you need to do in stage three.
In stage four you'll be migrating these images from the EC2 instance into an elastic file system.
And that's one of the last stages that we need to do before we can make this a fully elastic design.
So go ahead complete this video and when you're ready I'll look forward to you joining me in stage four of this advanced demo series.
-
-
learn.cantrill.io
-
Welcome back to stage two of this advanced demo lesson, and again I've included full instructions attached to this lesson.
And this stage of the demo will be another one where you're entering lots of commands because you're going to automate the build of the WordPress application instance.
So again, I would recommend opening the instructions for this demo lesson and copy and pasting the commands rather than typing them out by hand.
Now at this point in the advanced demo series, you're going to have a leftover instance that you used to manually install WordPress in the previous stage.
It should be called WordPress-Manual.
So I'm going to want you to go ahead and right click on that and select terminate instance and confirm that process to remove this instance from your AWS account.
We're going to be setting up exactly the same single instance deployment of WordPress, so both the database and the application on the same instance.
But instead of manually building this, we're going to be using a launch template.
So from the EC2 console, just go ahead and click on launch templates under instances.
The first step is to create a launch template for our WordPress application.
So go ahead and click on create launch template.
Now launch templates are actually a new version of launch configurations that were previously used with auto scaling groups.
Launch templates allow you to either launch instances manually using the template or they can be part of auto scaling groups.
But what a launch template allows you to do is to specify all of the configuration in advance to launch an instance and that template can be used to launch one or many instances.
So we're going to create a launch template which will automate the installation of WordPress, MariaDB and perform all of the configuration.
And a launch template can actually have many different versions, which is a feature we'll use throughout this demo series as we evolve the design.
So the first step is to name this template and we're going to call it WordPress.
Under template version description, go ahead and enter single server DB and app.
And then check this box which says provide guidance to help me set up a template that I can use with EC2 auto scaling.
We're not immediately going to set it up as part of an auto scaling group, but it will help us highlight any options which are required if we want to use it with an auto scaling group.
Now launch templates can actually be created from scratch or they can be based on a previous template version.
If we expand source template, you're able to specify a template which this template is based on.
But in this case, we're creating one from scratch so we won't set any of those options.
Now just scroll down.
So the next thing we're going to define in this launch template is the AMI that we're going to use.
So go ahead and click on Quickstart.
And once this has changed, we're going to use the same AMI we've been using previously.
So I want you to go ahead and click on Amazon Linux, specifically Amazon Linux 2023.
It should be the SSD volume type.
It should be listed as free tier eligible and just make sure that you've got 64 bit x86 selected.
And then scroll down further still and in the instance type drop down, we're looking for the T series of instances.
And then you need to select the one that's free tier eligible.
In most cases, this will be T2.micro, but select whichever is free tier eligible.
We want to keep this advanced demo as much as possible within the free tier.
Scroll down again and for key pair, just make sure that it says don't include in the launch template.
Move down further still to network settings.
Then make sure select existing security group is selected.
And then in the security groups drop down, click in that and make sure that you select A4LVPC-SGWordpress.
So this is the security group which will automatically be associated with any instances launched using this launch template.
So select A4LVPC-SGWordpress, and there will be some randomness after this.
That's fine.
Just make sure you select the SG WordPress group and then we can scroll down further still.
Now we can leave storage volumes as default.
We won't set any resource tags.
We won't do any configuration of network interfaces, but I will want you to expand advanced details.
There are a few things that we need to set within advanced details.
The first is an IAM instance profile.
So click in this drop down and then make sure that you pick the A4LVPC WordPress instance profile.
Again, there will be some randomness.
That's fine.
What this is doing is creating the configuration which will attach an instance role to this EC2 instance.
And this instance role is going to provide all the permissions required to interact with the parameter store and the elastic file system and anything else that this instance requires.
And this was pre-created on your behalf using the cloud formation template.
Next, scroll down further still and look for credit specification.
Remember, this is the same option that you set when launching an instance manually.
Now, as before, it's always best to set this to unlimited.
But if you are using a brand new AWS account, then it's possible that AWS won't allow you to use this option.
So you should probably go ahead and pick standard.
It won't make that much of a difference.
I'm going to pick unlimited, but I do suggest if you are using a fairly new account, you go ahead and select standard.
So that's the configuration for the instance, the base level configuration.
What I want you to do now though is to scroll all the way down to the bottom and there's a user data box.
This user data allows us to specify bootstrapping information to automatically configure our EC2 instances.
So into this user data box, I want you to paste the entire code snippet within stage 2B of this stage's instructions.
And again, they're attached to this lesson.
The top line should be #!/bin/bash -xe.
And then if you scroll all the way down to the bottom, the last line should be rm /tmp/db.setup.
And now we can see we've pasted this entire user data.
Once you've done that, go ahead and click on create launch template.
Now that user data that you just pasted in is essentially all of the commands that you ran in the previous stage of the demo.
Only instead of pasting them one by one, you've defined them within the user data.
So this simply automates the process end to end.
So to test this, go ahead and click on launch templates towards the top of the screen.
It should show that you have a single launch template.
It's called WordPress.
The default version is one and the latest version is one.
And as we move throughout this demo series, the latest version and the default version will change.
So just keep an eye on those as we go.
For now, though, I want you to click in the checkbox next to this launch template, click on actions and then launch instance from template.
So this is going to launch an EC2 instance using this launch template.
We're asked to choose a launch template and a version and define the number of instances and we can leave all of these as the defaults.
If we just scroll down, you'll see how it's pre-populating all of these values with the configuration from the launch template.
And that's what we want.
Under key pair name, just select to proceed without a key pair not recommended.
And that's the default value.
Scroll down further still.
Even the networking configuration is partially pre-populated.
The only thing we need to do is specify a subnet that this instance will be launched into.
And when we configure auto scaling groups to use this launch template, the auto scaling group will configure the subnets on our behalf.
Because we're launching an instance directly from the launch template, we have to specify this subnet.
So click in the subnet dropdown and then look for SN-PUB-A.
Because we're going to deploy this WordPress instance into the public subnet in Availability Zone A.
So select that.
Scroll down.
Look for the resource tag section and click on add tag.
We're going to add a tag to the instance launched by this template.
So into key, just type name and then for value, use WordPress-LT.
And this will just tell us that this is an instance launched using the launch template.
Once you've entered those, just scroll all the way down to the bottom and click launch instance.
And this will launch an EC2 instance using this template.
And this will automate everything that we had to do in the previous stage manually.
So this saves us significant time and it enables us to use automation in later stages of this demo series.
So now go ahead and click on the instance ID in this success box and this will take you to the EC2 console.
Just give this instance a couple of minutes to finish its build process.
Even though we're automating the process, it does still take some time to perform the installation and the configuration of all of those different components.
So go ahead and just copy the public IP version 4 address of this instance into your clipboard.
And then after you've waited a few minutes, open that in a new tab.
If you get an error or it opens with a blank page, then you just need to give it a few minutes longer.
But when it's finished, it should show the same WordPress installation screen.
Once it does load the installation screen, we're going to follow the same process.
So site title is Catagram, username is Admin.
Enter the same password and then enter the fake test@test.com email address.
Then click on install WordPress.
Then click on login.
Enter admin again.
Enter the password.
Click on login.
It looks as though our automated WordPress build has worked because the dashboard has loaded.
Click on posts.
Delete the default post.
Click on add new.
For the title, the best animals again, click on the plus, select gallery, click on upload.
And again, pick a selection of animal pictures and click on open.
Remember, this is a new EC2 instance.
So terminating the previous instance also deleted the data that was stored on it.
Once these images have uploaded, click on publish and then publish again to upload the images to the EC2 instance and store the data within the database.
So remember two components, the data stored in the database and the images or media stored locally on the EC2 instance.
Click on view post to make sure that this loads correctly.
It does.
So that means the automatic build has worked okay.
Everything's functioning as we expect.
This has been an automatic build of a functional WordPress application.
Now, the only thing that's changed from the previous stage of this advanced demo series is we've automated the build of this instance.
It still has much the same limitations as the previous stage.
So while we can improve the build time and we can use launch templates to support further automation, the database and application are still on the same instance.
So neither can scale without the other.
The database of the application is still located on that instance, meaning scale in or out operations risk this data.
The WordPress content store is also stored locally on the instance.
So again, any scale in or out operations risk the media that's stored locally as well as the database.
Customers still connect directly to the instance, which means we can't perform health checks or automatically heal any failed instances.
For this, we need a load balancer which we'll be looking at in later stages of this demo series.
And of course, the IP address of the instance is still hard coded into the database.
So this is something else we need to resolve as we move through the demo series.
With that being said, though, that is everything that you needed to do in stage two of this demo series.
So in this stage, you've automated the build of the WordPress instance using a launch template.
Now, in stage three, you're going to migrate the data from the local database on EC2 into RDS.
And this will move the data out of the lifecycle of the EC2 instance.
And this makes it easier to scale.
So in stage three, you're going to perform that migration and then update the launch template to take account of that configuration change.
So go ahead and complete this stage of the demo lesson.
And when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now that we've created all of those, we need to go ahead and install WordPress on our EC2 instance.
So move back to instances.
By now, the instance should be in the running state.
Right-click, select Connect, change it to Session Manager, and then go ahead and click on Connect.
This will allow us to connect into the EC2 instance without worrying about direct network access or having an SSH key pair.
Once you're connected, go ahead and type sudo bash and press Enter, then type cd and press Enter, and then type clear and press Enter.
And that will just clear the screen to make everything easy to see.
Now, at this point, there are a lot of commands that you'll need to type in to manually install WordPress.
Now, you can copy and paste these out of the text instructions for this stage of the demo lesson.
But while you're doing so, I want you to imagine that you'd have to type these in one by one, because I want you to get an appreciation for just how long this install would take if you were doing it entirely manually.
So first, we need to set some environment variables on this instance with the parameters that we've just stored in Parameter Store.
So go ahead and copy all of this set of commands out of this stage's instructions, and this will set environment variables on this instance with values from the Parameter Store.
And again, imagine how long this would take if you had to type all of this manually.
Once we've got those variables configured, next we need to just update the operating system on the instance, make sure it's running with all the patches, and just update the package repositories.
And we can do that with this command.
The next set of commands in this stage's instructions install prerequisites.
So this is the MariaDB database server, the Apache web server, WGet, some libraries, and a stress test utility.
So go ahead and paste in the next block of commands to install all of these packages.
Now, again, this is something that we will automate later in this demo series, but I want you to have an appreciation for just how long this takes.
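On Amazon Linux 2023 that install presumably resembles the following; the exact package list may differ slightly from the lesson instructions:

  # Install the web server, database server and supporting packages.
  dnf -y install wget php-mysqlnd httpd php-fpm php-mysqli \
    mariadb105-server php-json php stress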
I'll type clear again to clear the screen, and then the next set of commands will start up the web server and the database server and ensure that both of them start up automatically when the instance operating system is first started.
So if we restart this instance, both of these services will start up automatically.
Again, make sure you press enter on the last command to make sure that starts up successfully.
So that's the Apache web server and MariaDB that are both started and set to automatically start on operating system boot.
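As a sketch, those commands amount to:

  # Start both services now and enable them at every boot.
  systemctl enable httpd mariadb
  systemctl start httpd mariadb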
Again, I'll clear the screen, and the next command that you'll run sets the root password for the MariaDB database server.
So this is mysqladmin, and you're setting the password for the root user, using the environment variable that we created earlier with values taken from the parameter store.
So that sets the root password for the local database instance.
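That command presumably looks like this, with the variable name assumed:

  # Set the MariaDB root password from the environment variable.
  mysqladmin -u root password $DbRootPassword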
Next, we're going to download and install WordPress, and we do that with the next block of commands.
So this first downloads the WordPress package.
It moves into the web root directory.
It expands that package and then clears up after itself.
So now we have WordPress installed.
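Reconstructed from that description, the block looks roughly like:

  # Download WordPress into the web root, unpack it and clean up.
  wget http://wordpress.org/latest.tar.gz -P /var/www/html
  cd /var/www/html
  tar -zxvf latest.tar.gz
  cp -rvf wordpress/* .
  rm -R wordpress
  rm latest.tar.gz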
Again, I'll clear the screen to make it easier to see.
This next set of commands replaces some placeholders in the wp-config.php file, which is the configuration for WordPress, and it replaces the placeholders with values taken earlier from the parameter store.
So this is how we're configuring WordPress to be able to connect to the local MariaDB database server.
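As a sketch, with the placeholder names taken from the standard WordPress sample config and the variable names assumed:

  # Create wp-config.php from the sample and substitute placeholders
  # with the values loaded earlier from Parameter Store.
  cp ./wp-config-sample.php ./wp-config.php
  sed -i "s/'database_name_here'/'$DbName'/" wp-config.php
  sed -i "s/'username_here'/'$DbUser'/" wp-config.php
  sed -i "s/'password_here'/'$DbPassword'/" wp-config.php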
The next block of commands that we use will fix up the permissions of all of this directory structure, so we don't have any problems accessing these files or any other security issues.
Again, make sure you press Enter on the last command, and then we're almost done.
The last step is to actually create the WordPress database, create the WordPress database user, set the password, and then grant permission on that database to that user.
So these are all steps that we need to do because we're using a self-managed MariaDB database instance.
So paste in this next block of commands and press Enter.
So this has created a db.setup file with a number of SQL commands, and then it's used the MySQL utility to run those commands, which have created the database, the database user, and set permissions, and then it's cleared up the temporary file after all of that's been done.
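Based on that description, the block presumably looks like this, with variable names assumed:

  # Build a temporary SQL script that creates the WordPress database
  # and user, grants permissions, runs it as root, then cleans up.
  echo "CREATE DATABASE $DbName;" >> /tmp/db.setup
  echo "CREATE USER '$DbUser'@'localhost' IDENTIFIED BY '$DbPassword';" >> /tmp/db.setup
  echo "GRANT ALL ON $DbName.* TO '$DbUser'@'localhost';" >> /tmp/db.setup
  echo "FLUSH PRIVILEGES;" >> /tmp/db.setup
  mysql -u root --password=$DbRootPassword < /tmp/db.setup
  rm /tmp/db.setup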
And at this point, that's all of the configuration needed.
We've installed WordPress, we've installed MariaDB, we've started them both up, we've corrected permissions, and adjusted the configuration files.
Now you've had the ability to copy and paste these commands from the lesson instructions, but imagine if you had to type them in all one by one.
It would take much longer, and it's also something that's prone to many errors.
That's something important to keep in mind as we move through this advanced demo.
So the next step is to move back to the EC2 console.
Make sure you've got the WordPress-Manual instance selected, and then copy down the IP version 4 public address into your clipboard. Make sure that you do copy the public IP address, and don't click on the open address link, because that uses HTTPS, which we're not using.
So go ahead and open that in a new browser tab.
Now this is going to take you to the setup screen for WordPress.
We're going to perform a quick setup.
So under site title, I want you to enter Catagram.
Under username, I want you to enter Admin.
We'll keep things simple.
For password, enter the Animals for Life password that we've been using in previous steps.
Under email, go ahead and enter a fake email address, and then click on Install WordPress.
That'll perform the final installation steps, at which point you can click on login.
You'll need to enter the Admin username together with the password that you've just chosen, and then click on login.
So this is the WordPress dashboard, and this suggests that our WordPress application is working absolutely fine.
So to test it, just go to posts.
We're going to delete the default post of Hello World.
Once done, go ahead and click on Add New.
You can just close down this Welcome to Block Editor dialog.
Under title, use the best animal and then S, because we might have more than one animal, and then just put an exclamation mark at the end for effect.
Click on the plus underneath that title, select Gallery.
Click on Upload, select some animal pictures to upload.
If you don't have any, you can go to Google Images and download some cat or dog or gerbil or guinea pig pictures.
Anything that you want, chickens, snakes, just select a couple of animal pictures to upload, and then click on Open.
And then once they've uploaded, you can go ahead and click on Publish, and then Publish again, and this will publish this post.
And what it's doing in order to publish it is it's uploading the images into a local image store that's called wp-content.
And in addition to that, it's storing the metadata for this post into the local MariaDB database.
So there are two different places that data is stored, the local content store, as well as the database.
So keep that in mind as we move on throughout this lesson.
At this point, click on View Post.
Just verify the post loads, it does.
So that means everything's working as expected.
Now, the configuration that you've just implemented has a number of important limitations.
The first is that the application and database have been built manually, which takes time and doesn't allow automation.
It's been slow and annoying, and that's very much the intention.
Additionally, the database and the application are on the same instance.
Neither of them can scale without the other.
The database of the application is stored on an EC2 instance, and that means that scaling in or out risks data in this database.
The application media, so the content, is also stored local to the instance in a folder called wp-content, and this means again, any scaling events in or out risk this media.
Additionally, customer connections are directly to an instance, which prevents us from doing any form of scaling, automatic healing, or any health checks.
One final point about WordPress that isn't commonly known is that the IP address of the instance is actually hard-coded into the database.
Now, where this starts to exhibit problems is when running inside AWS because EC2 instances don't have static IP addresses.
If we go back to the EC2 console, right-click on this instance, and then stop the instance.
Remember, a stop and start of an instance will not force the change of the public IP address of the instance, so restarting it isn't enough.
You need to stop and then start.
Watch what happens when the instance fully moves into a stop state.
First, it loses this public IP address and it moves into the stop state.
If I right-click and then select start, that will take a few moments, but what will happen is once it's fully started, it will have a different IP version 4 public address.
So now if I copy that IP address into my clipboard, move back to the tab where the website was previously open, and then open this new IP address in a different browser tab and note how it doesn't load.
Even though the IP address is correct, it's not loading our WordPress website.
The reason for that is the application is hard-coded with the IP address that was used to install WordPress.
And so what it's attempting to do now is reference the old IP address.
It's trying to contact the previous EC2 instance.
Now, this is crucial because it prevents us from scaling the application.
If we create new EC2 instances, they'll all point back at this instance.
Even if we fix the database and content issues, we need to resolve the ability of WordPress to scale.
And don't worry, we'll look at that later in this demo series.
For now, that's everything you needed to do in stage one of this advanced demo.
You've manually created a WordPress application with the application and database running on the same instance.
In stage two, you're going to automate this process.
So go ahead, complete this part of the demo series, and when you're ready, I'll look forward to you joining me in stage two.
-
-
www.gutenberg.org
-
The question we are deciding with so little consciousness of what it involves is this: What shall we do with our natural resources? Upon the final answer that we shall make to it hangs the success or failure of this Nation in accomplishing its manifest destiny.
This passage underscores the gravity of the decision regarding the use and management of natural resources, emphasizing that the outcome will determine the success or failure of the nation in fulfilling its "manifest destiny." The rhetorical question "What shall we do with our natural resources?" highlights the urgency and responsibility involved, suggesting that the consequences of this decision are far-reaching and fundamental to the nation's future.
-
If we are to have prosperity in this country, it will be because we have an abundance of natural resources available for the citizen. In other words, as the minds of the children are guided toward the idea of foresight, just to that extent, and probably but little more, will the generations that are coming hereafter be able to carry through the great task of making this Nation what its manifest destiny demands that it shall be.
This passage connects prosperity to the availability of natural resources and suggests that the future success of the nation depends on the foresight instilled in children regarding resource management. The idea that "prosperity" hinges on an "abundance of natural resources" speaks to a classical economic view where the foundation of national wealth is often seen in terms of resource extraction and consumption. However, the emphasis on guiding children toward "foresight" introduces an important consideration of sustainability and long-term planning, which is essential in discussions about how societies can balance development with resource conservation.
-
The question of the conservation of our natural resources is not a simple question, but it requires, and will increasingly require, thinking out along lines directed to the fundamental economic basis upon which this Nation exists. I think it can not be disputed that the natural resources exist for and belong to the people; and I believe that the part of the work which falls to the women (and it is no small part) is to see to it that the children, who will be the men and women of the future, have their share of these resources uncontrolled by monopoly and unspoiled by waste.
The mention of women playing a significant role in ensuring that future generations have access to these resources speaks to a broader discourse on gendered responsibilities, often associated with caretaking and sustainability. This connects to discussions in lectures about the intersection of environmental conservation and social justice, where marginalized groups—particularly women—are frequently positioned as stewards of both the environment and future generations.
-
The people of this country have lost vastly more than they can ever regain by gifts of public property, forever and without charge, to men who gave nothing in return. It is true that, we have made superb material progress under this system, but it is not well for us to rejoice too freely in the slices the special interests have given us from the great loaf of the property of all the people.
This passage reflects a critique of the unequal distribution of public resources and the systemic prioritization of special interests over the collective good. The idea that "gifts of public property" are given "forever and without charge" to individuals or entities who contribute "nothing in return" ties directly to discussions about the commodification of public assets and the tension between private gain and public welfare. It raises questions about the ethical and economic implications of privatizing resources that should ostensibly benefit all citizens.
-
-
learn.cantrill.io
-
Welcome back and in this advanced demo lesson you're going to get the chance to experience how to do a practical architecture evolution.
Now one of the things that I find very common amongst my students is that they complete the certification, and as soon as they get their first job interview, many of which have an architectural scenario component, they struggle with how to get started and how to design an architecture for a given scenario.
So in this advanced demo series you're going to step through and evolve an architecture yourself.
So you'll start with a single EC2 instance running the WordPress blogging engine and this single instance will be running the application itself, the database and it will be storing the content for all of the blog posts.
And for this example we're going to assume it's an animal pictures blog.
Now crucially in this first stage you're going to build this server manually to experience all of the different components that need to operate to produce this web application.
Once you've built the instance manually next you'll replicate the process but using a launch template to provide automatic provisioning of this WordPress application but crucially it will still be the one single WordPress instance.
Next you'll perform a database migration moving the MySQL database off the EC2 instance and running it on a dedicated RDS instance.
So now the database, the data of this application will exist outside the life cycle of the EC2 instance and this is the first step of moving towards a fully elastic scalable architecture.
Next once you've migrated the database instead of storing the content locally on the EC2 instance you'll provision the Elastic File System or EFS which provides a network based resilient shared file system and you'll migrate all of the content for the WordPress application from the instance to this Elastic File System.
Once done these are all the components required to move this architecture to be fully elastic and that is being able to scale out or in based on load on that system.
So the next step will be to move away from your customers connecting directly to this single EC2 instance.
Instead you'll provision an auto scaling group which will allow instances to scale out or in as required, and you'll configure an Elastic Load Balancer to point at that auto scaling group, so your customers will connect in via the application load balancer rather than connecting to the instances directly. This will abstract your customers away from the instances and allow your system to be fully resilient, self-healing and fully elastically scalable.
So by completing this advanced demo lesson you'll learn how to get started with scenario based questions as part of job interviews.
With that being said let's go ahead and get started and to do that we need to move to the AWS console.
To get started you're going to need to be logged in to a full AWS account without any restrictions you should be logged in as an admin user.
If you're watching this demo as part of any of my courses then you need to use the general AWS account so that's the management account of the AWS organization which we've set up in the course and as always please make sure that you've selected the northern Virginia region.
Now attached to this lesson are two links one of them is a one-click provision for the base infrastructure of this advanced demo lesson and the other is a link to the GitHub repository which contains text-based instructions for every stage of this advanced demo.
So to start with go ahead and click on the one-click provisioning link.
This is going to take you to the quick create stack page and everything should be pre-populated.
The stack name should say A4LVPC all you need to do is check this acknowledgement box and then go ahead and click on create stack.
Now you'll need to wait for this stack to move from create in progress to create complete before you can continue with the demo so go ahead and pause the video and you can resume it once this stack is in a create complete state.
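If you'd rather script this wait than watch the console, a minimal boto3 sketch like the one below blocks until the stack reaches CREATE_COMPLETE; it assumes the A4LVPC stack name from this lesson and the us-east-1 region the demo requires.

```python
# Sketch: block until the A4LVPC CloudFormation stack finishes creating.
# Assumes the stack name from this lesson and the us-east-1 region.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="A4LVPC")  # raises a WaiterError if creation fails
print("Stack A4LVPC is CREATE_COMPLETE")
```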
So now this stack's moved into the create complete state the first stage of this advanced demo series is to manually create a single instance WordPress deployment.
Now this CloudFormation template has created the architecture that you can see on screen now so the VPC together with the three tier architecture so database application and public split across three different availability zones.
So what you're going to do in this first part of this demo series is to create this single EC2 instance and you're going to do it manually so that you can experience all of the associated limitations.
So make sure that you do have the text-based instructions open and the link for those is attached to this lesson because it will make it easier because you can copy and paste any commands or any configuration items.
The first thing to do though is to click on services and then type EC2 into the services drop down and click on EC2 to move to the EC2 console.
We're going to be launching our WordPress instance so what I need you to do is to click on launch instance and then again on launch instance.
Now you should be fairly familiar with creating an EC2 instance so we're going to go through this part relatively quickly.
So first you need to name the EC2 instance so go ahead and enter WordPress - manual in the name box and then scroll down and select Amazon Linux specifically Amazon Linux 2023 and just make sure that it's shown as free tier eligible.
Simply make sure that it says 64-bit x86.
Once set scroll down again and go to the instance type box, click in the drop down and just make sure that you have a free tier eligible instance selected.
For most people this should be T2.micro but just make sure that it's an equivalent sized instance which is under the free tier.
Continue scrolling down and under the key pair box just click in the drop down and select proceed without a key pair because we won't be connecting to this instance using an SSH key we'll be using session manager.
Once selected scroll down further still and click on edit next to network settings.
In the VPC box make sure that A4LVPC is selected.
This is the animals for life VPC created by the one click deployment.
Then under subnet make sure that SN-PUB-A is selected.
This is the public subnet in availability zone A.
Below this, make sure that both auto assign public IP and auto assign IPv6 IP are set to enable.
Once done scroll down again and next to firewall security groups check the box to say to select an existing security group.
And then in the drop down make sure that you pick A4LVPC-SG WordPress.
Now this will be followed by some randomness and that's okay just make sure that it's the SG-WordPress security group.
This will allow us to connect into this instance using TCP port 80 which is HTTP.
Once selected, scroll down; we won't be making any changes to the storage, as we'll be using the default of 8 GiB of gp3 storage.
Below this expand advanced details and there are a couple of things that we need to change.
First click on the drop down under IAM instance profile and just make sure that you select the A4LVPC-WordPress instance profile and again this will have some randomness after it and that's okay.
Scroll down and next you're looking for a box which says credit specification.
Now for this my preference is that you select unlimited because this will make the performance of the EC2 instance potentially better than not selecting anything at all or selecting standard.
Now on brand new AWS accounts it's relatively common that you can't select unlimited.
AWS generally don't allow you to select unlimited until the account has a billing history.
So you might want to select standard here to avoid any problems.
I'm going to select unlimited because my account allows it but if you've got a new AWS account then go ahead and select standard.
If you do choose to select unlimited and you do receive an error then you can go ahead and repeat this process but select standard.
So go ahead and select standard in your case and then scroll down and that's everything that we need to set at this point.
Everything else looks good so go ahead and click on launch instance.
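For reference, the same launch settings chosen in the console could be expressed with boto3. This is a hedged sketch rather than the lesson's method: the AMI, subnet, security group and instance profile identifiers are placeholders you'd replace with the real values created by the one-click deployment.

```python
# Sketch: the console launch above, expressed with boto3.
# All IDs below are placeholders for values created by the
# A4LVPC one-click deployment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: Amazon Linux 2023, 64-bit x86
    InstanceType="t2.micro",           # free tier eligible
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-xxxxxxxx",        # placeholder: SN-PUB-A
        "Groups": ["sg-xxxxxxxx"],            # placeholder: SG-WordPress
        "AssociatePublicIpAddress": True,     # auto assign public IPv4
        "Ipv6AddressCount": 1,                # auto assign IPv6 (subnet needs an IPv6 range)
    }],
    IamInstanceProfile={"Name": "A4LVPC-WordPressInstanceProfile"},  # placeholder name
    CreditSpecification={"CpuCredits": "standard"},  # or "unlimited" if your account allows it
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "WordPress-Manual"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```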
So now that our instance is provisioning just go ahead and click on instances at the top and that will allow us to monitor the progress.
Now we'll need this to be in a running state before we perform the WordPress installation but there's one more set of steps that I want to do first.
Now throughout this advanced demo lesson we're going to be taking this single instance WordPress application and moving it towards a fully scalable or an elastically scalable design.
Now to do that we need to move away from statically setting any configuration options so we're going to make use of the parameter store which is part of systems manager and we're going to create some parameters that our automatic build processes later in this demo will utilize.
For now we're going to be performing everything manually but we'll still be using these variables because it will simplify what we have to type in the EC2 instance.
So go ahead and click on services.
Start typing systems manager and then once you see it populated in the list you can right click and open that in a new tab.
Once you're at the systems manager console on the left under application management just locate parameter store and click it to move to the parameter store console and we're going to create a number of parameters.
Now if you're watching this demo as part of my courses you may already have some parameters listed on this screen.
If you have any existing ones which begin with forward slash A4L then go ahead and delete them before continuing.
So go ahead and click on create parameter. The exact naming for each of these is in the full instructions contained in the GitHub repository which is attached to this lesson, so make sure you've got that open; it will make things significantly easier and less prone to errors.
We're going to create a number of parameters for WordPress and the first is the database username so the username that will have permissions on the WordPress database.
So I want you to set the name to forward slash A4L forward slash WordPress forward slash DB user.
For description, enter WordPress database user. You can set the tier for the parameter to standard or advanced; to keep things in the free tier we're going to use standard. It's going to be a string parameter, the data type is going to be text, and the value needs to be our actual database username, so for this demonstration we're going to use A4L WordPress user. Enter that and click on create parameter.
Now we're going to be moving more quickly, now that you've seen the process. Our next parameter is going to be the database name, so enter this in the name field. For description, WordPress database name; again standard, string, data type of text. The value is going to be the WordPress database name, so A4L WordPress DB. Scroll down and click on create parameter.
Next is going to be the database endpoint, so the host name that WordPress will connect to. For name, enter A4L WordPress DB endpoint.
For the description, WordPress endpoint name; again standard, string, text for data type. To start with, because the database is on the same instance as the application, the value will be localhost, so enter that and go ahead and click on create parameter.
Next we'll be creating a parameter to store the password of the WordPress user, so click on create parameter. This time it's A4L WordPress DB password; for description, WordPress DB password. Again standard tier, but this time it's going to be a secure string. For KMS key source, use current account, and then for KMS key ID it will be alias/aws/ssm, which is the default KMS key for this service. For value, go ahead and enter a strong password. Again, this is for the WordPress user that has permissions to access the database, so if this were production it would need to be a strong password. Now I recommend that you use the same password as I'm using in this demo; it uses number-letter substitution and I know that it works with all of the different system components. I've included this password in the text-based instructions and I do recommend that you use it in your demo as well. Go ahead and enter something in this value, then scroll down and click on create parameter.
Then, one last time, click on create parameter again. For name this time, A4L WordPress DB root password; this is the root password for the local database server that's running on the EC2 instance. For description, WordPress DB root password; standard again, and then again a secure string because we're storing a password. KMS key source is my current account; leave everything else as default, and then enter another strong password. If this were production this would generally be different from the previous password, but as this is a demo you should use the same strong password as you used previously. Whichever you choose, go ahead and enter that into the value box and then click on create parameter.
Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee. Part two will continue immediately from this point, so go ahead, complete this video, and when you're ready I look forward to you joining me in part two.
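Before moving on, for reference, the parameters created above can also be scripted with boto3's put_parameter. This is a hedged sketch: the exact parameter names and values live in the text-based instructions, so the casing used here and both password values are placeholders rather than the lesson's literal values.

```python
# Sketch: create the demo's Parameter Store entries.
# Names/casing are assumptions; check the text-based instructions.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

plain = {
    "/A4L/Wordpress/DBUser":     ("WordPress database user", "a4lwordpressuser"),
    "/A4L/Wordpress/DBName":     ("WordPress database name", "a4lwordpressdb"),
    "/A4L/Wordpress/DBEndpoint": ("WordPress endpoint name", "localhost"),
}
for name, (desc, value) in plain.items():
    ssm.put_parameter(Name=name, Description=desc, Value=value,
                      Type="String", Tier="Standard")

secure = {
    "/A4L/Wordpress/DBPassword":     "REPLACE_WITH_STRONG_PASSWORD",
    "/A4L/Wordpress/DBRootPassword": "REPLACE_WITH_STRONG_PASSWORD",
}
for name, value in secure.items():
    # SecureString values are encrypted with the account's default
    # SSM KMS key (alias/aws/ssm) when no KeyId is specified.
    ssm.put_parameter(Name=name, Value=value,
                      Type="SecureString", Tier="Standard")
```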
-
-
learn.cantrill.io
-
Welcome back, and in this demo lesson you're going to get the chance to quickly experience how session stickiness works with load balancers.
Now it's going to be a pretty brief demo lesson because I've tried to automate much of the infrastructure configuration that you've already done by this point in the course.
I want to focus this demo lesson purely on the session stickiness configuration, so let's jump in and get started. We're going to start by applying a CloudFormation template which will create the basic infrastructure that we need.
So I'm going to move across to the AWS console.
Now to start with make sure you're logged into an AWS account and the user that you're using has admin privileges on that account and you've got the Northern Virginia region selected.
Now attached to this demo lesson and in the demo instructions is a one click link that you can use to deploy the infrastructure so go ahead and click on that link.
It'll take you to a quick create stack screen and all you'll need to do is to scroll all the way down to the bottom, check this capabilities box and then click on create stack.
Now that can take anywhere from five to ten minutes to create so while that's creating let's talk through the architecture that you'll be using for this demo.
The template which you're currently applying will create this architecture. It creates a VPC and then, inside of that, three public subnets, one in each AZ. Then it creates an auto scaling group, and linked to this is a launch template providing instance build directives.
The auto scaling group is set to create six EC2 instances, two in each AZ and then it creates a load balancer configured to run from each public subnet.
So this is the architecture that's going to exist in the AWS account once the cloud formation stack has finished creating.
Now in this demo you're first going to connect to the load balancer with session stickiness disabled.
This means that each time you connect to the load balancer the connection can be sent to any of the six instances, meaning that each of them has roughly a 16.67% (one in six) chance of getting a connection.
So you'll first connect to the load balancer in this configuration.
Once you've seen how that looks you're going to enable session stickiness and see how that affects the architecture.
What will happen is that the first time you connect to the load balancer with session stickiness enabled, a cookie called AWSALB will be generated and returned to your browser.
Unfortunately for this guy it's not that type of cookie.
What happens next is that any connections made while the cookie is valid are locked to one specific EC2 instance and they'll be locked to that instance until the cookie expires or that instance fails its health check at which point any connections will move to a different EC2 instance.
Now at this point let's move back to the console and just check how the cloud formation creation process is going.
At this point mine is still in a create in progress and you'll need this stack to be in a create complete state before moving on.
So go ahead and pause the video and resume it once this changes to create complete.
Okay so now the stack is in a create complete status we're good to move on and the first thing we'll need to do is verify that all of the six EC2 instances are functioning as they should be.
So to do that go ahead and click on services and then type EC2 in the find services box and open that in a new tab then move to that tab and click on instances running.
Now again this might look a little bit different in your account that's okay what we need to do is select each of these instances in turn and we're looking for the instance public IP version 4 DNS name.
So go ahead and locate the public IP version 4 DNS field and just copy that into your clipboard and then open that in a new tab.
The instance should load and it should show an instance ID, a random color background and an animated cat gif.
Now I want you to go ahead and open each of the remaining five instances each in its own tab so let's do that next.
So select the second instance, scroll down, locate the public DNS address and then open that in a new tab.
You'll see this has a different color background and a different animated cat gif.
We'll do the third instance; again, a different color background and a different cat gif.
We'll do the fourth, once again different background, different gif.
Do the fifth, different background, different gif and then finally the sixth instance.
So we have each of the six EC2 instances, all with a different background and a different cat gif.
Next, scroll down in the menu on the left and click on load balancers.
You should see a load balancer whose name starts with ALB-ALB followed by some randomness; that's fine.
Select that, copy the load balancer's DNS name into your clipboard, and then open that in a new tab.
So this opens the load balancer and if you refresh that a few times you'll see that it moves between all of the EC2 instances.
Now it could load the same instance twice or it might cycle through the same EC2 instances but you should see as you refresh it's cycling between all of the available instances and that's because we don't have session stickiness enabled.
It's just doing a round robin approach to select different back-end instances within the target group.
So each time we refresh there's a chance that it will move to a different back-end instance.
Now let's assume at this point that we have an application which doesn't handle state in an external way so it stores the state on the EC2 instance itself.
Well, to enable session stickiness with application load balancers, we do it on a target group basis.
So click on target groups and then click on the target group to go into its configuration.
Locate and click on the attributes tab.
Click on edit next to attributes.
To enable session stickiness all we have to do is check this box, select load balancer generated cookie, and then pick a validity period for the cookie that's generated by the application load balancer.
So go ahead and leave this value as 1 but then click on the drop-down and change this from days to minutes.
Once you've done that click on save changes and now session stickiness is enabled on this load balancer.
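The same change can be scripted as a target group attribute update. In this hedged boto3 sketch the target group ARN is a placeholder, and the 60-second duration matches the one-minute validity chosen above.

```python
# Sketch: enable ALB-generated cookie stickiness on a target group.
# The ARN is a placeholder for the demo's target group.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/EXAMPLE/abc123",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},                # the AWSALB cookie
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "60"}, # 1 minute
    ],
)
```

Setting stickiness.enabled back to "false" reverses the change, which is what the tidy-up step at the end of this demo does through the console.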
If we go back to the tab that we have open to the load balancer and just keep hitting refresh you might notice initially that it changes to a new instance but at a certain point if you keep clicking it will lock to a specific EC2 instance and won't change.
So now we're on this particular EC2 instance; it's got this instance ID, and even though we keep hitting refresh, the background and the cat gif remain the same.
Now I can demonstrate the way this works using Firefox. If I go to the menu bar, click on Tools, then Browser Tools, then Web Developer Tools, and click on the Storage tab, you'll be able to see that as part of accessing this load balancer I've got two cookies, and the one that we're interested in is AWSALB.
This is the cookie that controls the session stickiness. Every time I access this load balancer, from the first point when this cookie is generated, the browser passes this cookie back to the load balancer, which knows which back-end EC2 instance I should be connected to, so I will stay connected to this back-end instance until the cookie expires or the instance fails its health check.
So let's test that. What I want you to do is copy down the instance ID that you're connected to (it will be different for you) and just pay attention to the last few digits of the instance ID.
Now go back to the EC2 console, go to the dashboard and then instances running, locate the instance you just noted down the ID for, right click, stop that instance, and confirm.
We'll give that instance a few moments to stop. If we go back to the load balancer tab and keep hitting refresh, now that the instance is in a stopped state the load balancer detects that it's no longer valid, and so I immediately switch to a brand new EC2 instance. The cookie generated by the load balancer is updated to lock me to this new EC2 instance, and I wouldn't have any idea that the original back-end instance had failed and no longer responds to requests, other than the fact that I can see I've changed instances, because I've built the instances to highlight which instance ID is being used.
Now if I go back to the EC2 console, select this instance again, right click and this time start the instance, I won't reconnect to that original back-end instance even though it has started up again, because I'm now locked to this new instance. There's also a chance that while the instance was in a stopped state, because this has been configured to use Elastic Load Balancer health checks, the system detected that the instance was in a failed state and instructed the auto scaling group to terminate it and replace it with a new one. So don't be surprised if, when you try to start this instance up, it's in a terminated state; that's okay, the system is working as intended.
So back to the load balancer tab I'll just keep hitting refresh and what we'll see is after the cookie expires there's always a chance that we could be moved onto a new EC2 instance.
To return the configuration back to how it was at the start of the demo, go back to the EC2 console, go down to target groups, open this target group, click on the attributes tab and then edit, uncheck the stickiness box, and save the changes. At this point the cookie that's generated will no longer lock our connections to one specific back-end instance, so over time, if we keep refreshing this page, we should be moved between different back-end EC2 instances, because we no longer have session stickiness.
Now that's all I really wanted to highlight in this demo lesson. I just wanted to give you some practical exposure to how the session stickiness feature of application load balancers works, because this is something that you need to understand for the exam. Essentially, if your application doesn't handle state externally to individual EC2 instances, then you need the load balancer to make sure that any connections from a given user always end up on the same EC2 instance, and the way to do that is with application load balancer controlled session stickiness. Now remember, this does come with some negatives. It means that the load balancer is not able to distribute load as efficiently across each of the back-end instances. While session stickiness is enabled, customers are locked to one particular EC2 instance, and even if customers locked to one instance generate much more load than customers locked to other instances, the load balancer doesn't have the same level of flexibility to distribute connections. So where possible, applications should be designed so they handle sessions externally to the instances, and then you should not have session stickiness enabled; this is the way to ensure well-performing elastic architectures. Now at this point, that's everything that you need to do in this demo lesson; all that remains is to tidy up the environment. So go back to the CloudFormation console, click on stacks, click in the box next to the ALB stack, click on delete, and then click delete stack, which will delete the stack and all of the infrastructure that it created at the start of this demo lesson. At this point, congratulations, you've successfully completed this demo lesson and implemented the architecture that's on screen now, as well as experienced how an application load balancer handles session stickiness. So I hope you enjoyed the demo; go ahead, complete this video, and when you're ready I look forward to you joining me in the next lesson.
-
-
learn.cantrill.io
-
Welcome back and in this brief lesson I want to cover two features of the Elastic Load Balancer series of products and those features are SSL offload and session stickiness.
Now you'll need to be aware of the architecture of both of these for the exam.
The implementation details aren't required, the theory of the architecture is what matters, so let's jump in and get started.
Now there are three ways that a load balancer can handle secure connections and these three ways are bridging, pass through and offload.
Each of these comes with their pros and their cons and for the exam and to be a good solutions architect you need to understand the architecture and the positives and negatives of them all.
So let's step through each of these in turn.
So first we've got bridging mode and this is actually the default mode of an application load balancer.
With bridging mode one or more clients makes one or more connections to a load balancer and that load balancer is configured so that its listener uses HTTPS and this means that SSL connections occur between the client and the load balancer.
So connections are decrypted, known as being terminated, on the load balancer itself, and this means that the load balancer needs an SSL certificate which matches the domain name that the application uses.
And it also means in theory that AWS do have some level of access to that certificate and that's important if you have strong security frameworks that you need to stay inside of.
So if you're in a situation where you need to be really careful about where your certificates are stored then potentially you might have a problem with bridged mode.
Once the secure connection from the client has been terminated on the load balancer the load balancer makes second connections to the back end compute resources EC2 instances in this example.
Remember HTTPS is just HTTP with a secure wrapper.
So when the SSL connection comes from the client to the front-facing, listener side of the load balancer, it gets terminated, which essentially means that the SSL wrapper is removed from the unencrypted HTTP which is inside.
So the load balancer has access to the HTTP which it can understand and use to make decisions.
So the important thing to understand is that an application load balancer in bridging mode can actually see the HTTP traffic.
It can take actions based on the contents of HTTP and this is the reason why this is the default mode for the application load balancer.
And it's also the reason why the application load balancer requires an SSL certificate because it needs to decrypt any data that's being encrypted by the client.
It needs to decrypt it first then interpret it then create new encrypted sessions between it and the back end EC2 instances.
Now this also means that the EC2 instances will need matching SSL certificates.
So certificates which match the domain name that the application is using.
So the elastic load balancer will re-encrypt the HTTP within a secure wrapper and deliver this to the EC2 instances which will use the SSL certificate to decrypt that encrypted connection.
So the instances need the SSL certificates to be located on them, as well as the compute to be able to perform those cryptographic operations.
So in bridging mode which is the default, every EC2 instance at the back end needs to perform cryptographic operations.
And for high volume applications the overhead of performing these operations can be significant.
So the positives of this method is that the elastic load balancer gets to see the unencrypted HTTP and can take actions based on what's contained in this plain text protocol.
The method does have negatives though because the certificate does need to be stored on the load balancer itself and that's a risk.
And then the EC2 instances also need a copy of that certificate which is an admin overhead and they need the compute to be able to perform the cryptographic operations.
So those are two pretty important negatives that can play a part on which connection method you select for any architectures that you design.
Now next we have SSL pass through and this architecture is very different.
With this method the client connects but the load balancer just passes that connection along to one of the back end instances.
It doesn't decrypt it at all.
The connection encryption is maintained between the client and the back end instances.
The instances still need to have the SSL certificates installed but the load balancer doesn't.
Specifically it's a network load balancer which is able to perform this style of connection architecture.
The load balancer is configured to listen using TCP.
So this is important.
It means that it can see the source and destination IP addresses and ports.
So it can make basic decisions about which instance to send traffic to, i.e. the process of performing the load balancing.
But it never touches the encryption.
The encrypted connection exists as one encrypted tunnel between the client all the way through to one of the back end instances.
Now using this method means that AWS never need to see the certificate that you use.
It's managed and controlled entirely by you.
You can even use a cloud HSM appliance which I'll talk about later in the course to make this even more secure.
The negative though is that you don't get to perform any load balancing based on the HTTP part because that's never decrypted.
It's never exposed to the network load balancer, and the instances still need to have the certificates and still need to perform the cryptographic operations, which uses compute.
Now the last method that we have is SSL offload and with this architecture clients connect to the load balancer in the same way using HTTPS.
The connections use HTTPS and are terminated on the load balancer and so it needs an SSL certificate which matches the name that's used by the application.
But the load balancer is configured to connect to the back end instances using HTTP so the connections are never encrypted again.
What this means is that from a customer perspective data is encrypted between them and the load balancer.
So at all times while using the public internet data is encrypted but it transits from the load balancer to the EC2 instances in plain text form.
It means that while a certificate is required on the load balancer it's not needed on the EC2 instances.
The EC2 instances only need to handle HTTP traffic and because of that they don't need to perform any cryptographic operations which reduces the per instance overhead and also potentially means you can use smaller instances.
The downside is that data is in plain text form across AWS's network but if this isn't a problem then it's a very effective solution.
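To make the offload pattern concrete, here's a hedged boto3 sketch: the listener terminates HTTPS using an ACM certificate, while the target group it forwards to uses plain HTTP, so traffic to the instances is unencrypted. All ARNs and IDs are placeholders; for bridging mode you'd instead create the target group with Protocol="HTTPS" so the back-end connection is re-encrypted.

```python
# Sketch: SSL offload on an application load balancer.
# TLS terminates at the listener; the target group speaks plain HTTP.
# All ARNs and IDs below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

target_group = elbv2.create_target_group(
    Name="app-http-targets",
    Protocol="HTTP",            # offload: unencrypted to the instances
    Port=80,
    VpcId="vpc-xxxxxxxx",       # placeholder
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/EXAMPLE/abc123",
    Protocol="HTTPS",           # terminated on the load balancer
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroupArn"]}],
)
```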
So now that we've talked about the different connection architectures now let's quickly talk about stickiness.
Connection stickiness is a pretty important concept to understand for anybody designing a scalable solution using load balancers.
Now let's look at an example architecture.
We have our customer Bob, a load balancer and a set of back end EC2 instances.
If we have no session stickiness then for any sessions which Bob or anyone else makes they're distributed across all of the back end instances based on fair balancing and any health checks.
So generally this means a fairly equal distribution of connections across all back end instances.
The problem with this approach though is that if the application doesn't handle sessions externally every time Bob lands on a new instance it would be like he's starting again.
He would need to log in again and fill his shopping cart again.
Applications need to be designed to handle state appropriately: an application which uses stateless EC2 instances, where the state is handled in, say, DynamoDB, can use this non-sticky architecture and operate without any problems.
But if the state is stored on a particular server then you can't have sessions being fully load balanced across all of the different servers because every time a connection moves to a different server it will impact the user experience.
Now there is an option available within Elastic Load Balancers called session stickiness and within an application load balancer this is enabled on a target group.
Now what this means is that if enabled the first time that a user makes a request the load balancer generates a cookie called AWSALB.
And this cookie has a duration which you define when enabling the feature and a valid duration is anywhere between 1 second and 7 days.
If you enable this option it means that every time a single user accesses this application the cookie is provided along with the request and it means that for this one particular cookie sessions will be sent always to the same back end instance.
So in this case all connections will go to EC2-2 for this one particular user.
Now this situation of sending sessions to the same server will continue until one of two things occurs.
The first thing is that if we have a server failure so in this example if EC2-2 fails then this one particular user will be moved over to a different EC2 instance.
And the second thing which can occur to change this session stickiness is that the cookie can expire.
As soon as the cookie expires and disappears the whole process will repeat over again and the user will receive a new cookie and be allocated a new back end instance.
Session stickiness is designed to allow an application to function using a load balancer if the state of the user session is stored on an individual server.
The problem with this method is that it can cause uneven load on back end servers because a single user even if he or she is causing significant amounts of load will only ever use one single server.
Where possible applications should be designed to use stateless servers.
So holding the session or user state somewhere else so not on the EC2 instance but somewhere else like DynamoDB.
And if you do that if you host the session externally it means that the EC2 instances are completely stateless and load balancing can be performed automatically by the load balancer without using cookies in a completely fair and balanced way.
So that's everything I wanted to cover about connection stickiness and that's now the end of this lesson.
I just wanted to quickly cover two pretty important techniques that you might need to be aware of for the exam.
So at this point go ahead and complete the video and when you're ready as always I'll look forward to you joining me in the next lesson.
-
-
data-feminism.mitpress.mit.edu
-
Patton’s approach to incorporating culture, context, and nuance took the form of direct contact with and centering the perspectives of the youth whose behaviors his group sought to study
I think the text following this is a really important example as to why cultural diversity is so crucial in creating systems. If something is to be used by anyone then it should be understood by everyone. In order to make that possible lots of different perspectives are necessary. Unfortunately the people who profit off the creation of systems do not care if it works equally for everyone, so long as it works for them and their demographic and that they make money off of it.
-
-
illuminem.com
-
Rogan Hallam’s award-winning research at King’s College London demonstrates that when people sit in small circles to discuss a social issue (with biscuits on the table!) for most of a public meeting, 80% leave feeling empowered. In contrast, only 20% feel empowered after a conventional meeting with a series of speakers and no small group discussion.
for - TPC network - validation
-
-
learn.cantrill.io
-
Welcome back and in this lesson I want to talk about auto-scaling groups and health checks.
Now this is going to be fairly brief but it's something that's really important for the exam.
So let's jump in and get started.
Auto-scaling groups assess the health of instances within that group using health checks.
And if an instance fails a health check then it's replaced within the auto-scaling group.
So this is a method of automatically healing the instances within the auto-scaling group.
Now there are three different types of health checks which can be used with auto-scaling groups.
We have EC2 which is the default.
We have ELB checks which can be enabled on an auto-scaling group.
And then we have custom health checks.
Now with EC2 checks which are the default any of these statuses is viewed as unhealthy.
So essentially anything but the instance running is viewed as unhealthy.
So if it's stopping, if it's stopped, terminated, if it's shutting down or if it's impaired meaning it doesn't have two out of two status checks then it's viewed as unhealthy.
We also have the option of using load balancer health checks and for an instance to be viewed as healthy when this option is used the instance needs to be both running and it needs to be passing the load balancer health check.
Now this is important because if you're using an application load balancer then these checks can be application aware.
So you can define a specific page of that application that can be used as a health check.
You can do text pattern matching and this can be checked using an application load balancer.
So when you integrate this with an auto-scaling group the checks that that auto-scaling group is capable of performing become much more application aware.
Finally we have custom health checks and this is where an external system can be integrated and mark instances as healthy or unhealthy.
So this allows you to extend the functionality of these auto-scaling group health checks by implementing a process specific to your business or using an external tool.
Now I also want to introduce the concept of a health check grace period.
So by default this is 300 seconds or 5 minutes and essentially this is a configurable value which needs to expire before health checks will take effect on a specific instance.
So in this particular case if you select 300 seconds then it means that a system has 5 minutes to launch the system, to perform any bootstrapping and then any application start-up procedures or configuration before it can fail a health check.
So this is really useful if you're performing bootstrapping with your EC2 instances which are launched by the auto-scaling group.
Now this is an important one because it does come up on the exam and it's often a cause of an auto-scaling group continuously provisioning and then terminating instances.
If you don't have a sufficiently long health check grace period then you can be in a situation where the health checks start taking effect before the applications have finished configuring and at that point it will be viewed as unhealthy, terminated and a new instance will be provisioned and that process will repeat over and over again.
So you need to know how long your application instances take to launch, bootstrap and then perform any configuration processes and that's how long you need to set your health check grace period to be.
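Both settings discussed here are plain attributes of the auto scaling group. As a hedged sketch with a placeholder group name, switching an existing group to ELB health checks with a five-minute grace period might look like this:

```python
# Sketch: use ELB health checks and give instances five minutes
# to bootstrap before checks take effect. Group name is a placeholder.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

asg.update_auto_scaling_group(
    AutoScalingGroupName="example-asg",
    HealthCheckType="ELB",        # EC2 status checks plus the load balancer health check
    HealthCheckGracePeriod=300,   # seconds; size this to your launch + bootstrap time
)
```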
Now that's everything I wanted to cover in this brief theory lesson.
I just wanted to make sure that you understand the options that you have available for health checks within auto-scaling groups.
With that being said go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.
-
-
press.rebus.community
-
. The only constant in digital culture is change, which may sound cliché, but the underlying ICT structures shift so often that it can be difficult for cultural trends to take hold.
Digital culture is changing rapidly: one moment we like our cheesy drippy, and the next we are yelling MUUUUSSSTTTTTTAAAARRRRDDD at the top of our lungs because of digital culture. It changes so fast that sometimes we forget about trends that happened a month ago. I like that, because everything gets so overused, but at the same time digital culture has such a hold on us. I challenge you, the reader, to take a day away from your phone and see what it feels like. I used to do it a lot, and trust me, it feels so weird. It makes you feel so lost and so confused, like you don't have a single clue what is going on in the world.
-
Dating in real life (IRL) is changing as more and more people use dating apps and websites.
Yes, this is statistically proven as well: more people meet on dating apps or social media than in real-life social situations. The mass change of what used to be normal social interaction is happening rapidly; coffee shops and bookstores suffer from the loss of hopeless romantics camping the romance section hoping to find a partner. This rise of more internet-oriented meeting is, of course, due to our development with technology.
-
“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
There is nothing private about the internet; the internet is public information. Every Tweet, Insta post, and Snapchat is public to the world regardless of whether your account is private or not. It makes me sad that this is the way it is. To not have any privacy in a space where we should have privacy is very concerning; it's like having cameras in your bathroom. Yet at the same time we do get some funny stuff from it, like funny posts of cats or funny videos of our friends doing goofy things.
-
-
engl252fa24.commons.gc.cuny.edu
-
how the writing process entailed an interrogation of the absence, not only of cognitive interest, but of “major” feelings around the topic: “the process of writing Weather was about trying to move from thinking about what is happening to feeling the immensity and sadness of it
Offill conveyed this very well throughout the novel; I found myself experiencing this idea that something was purposely not being acknowledged. So to know that this is what she was thinking about is very inspiring.
-
At the centre of our collective inability to apprehend the climate crisis is our failure to imagine ourselves as anything other than the centre of, well, everything. The Anthropocene, as noted by Daniel Cordel and Diletta de Cristofaro in their introduction to C21 Literature’s special issue on the topic, challenges us “to think beyond the human even though we inevitably cannot escape that subject position”
How can we "think beyond human thought" when there's an inability in humans to understand that they aren't at the center of everything.
-
The text prises open a space of dissonance between the affective knowledge of climate change as it appears in our everyday lives (trivial, innocent, diffuse) and the intellectual knowledge that the broader situation is immense, terrifying and very serious.
Offill's novel hit close to home with the way it used the protagonist's internal anxiety to really sell this idea that danger awaits even when you're not directly experiencing it.
-
-
pressbooks.library.torontomu.ca
-
Spam messages, in the form of emails and texts, are “unsolicited commercial messages” sent either to advertise a new product or to trick people into sharing sensitive information through a process called phishing (more about phishing below). Canadian Anti-Spam Legislation (CASL) protects individuals by outlining clear rules about digital communication processes for businesses. Sending spam messages is a direct violation of the law. Businesses that send unsolicited emails to individuals are legally required to provide an “unsubscribe” option for those who may have wanted the messages at one time but who have changed their minds and no longer want them.
I just kept wondering if companies like email providers could do more to combat phishing, such as integrating AI-powered filters or offering mandatory short training sessions for employees in industries prone to cyberattacks. I would never have known half the risks I was exposing my information to if it weren't for knowing someone who works in cybersecurity, so broadening that reach is important.
-
Have you ever considered why products you searched for on Amazon show up in your Facebook feed, pop up in your Google search results, or appear on YouTube in advertisements? Cookies—small pieces of data with a unique ID placed on your device by websites—are online tracking tools that enable this to happen. Cookies can store your website-specific browsing behaviour and any site-specific customization (for example, your location preferences), as well as keep track of items added to a cart on online shopping sites, such as Amazon. In addition, they can track your purchases, content you’ve viewed, and your clicking behaviour. The biggest concern with cookies is that they enable targeted online advertising by sharing your usage and browsing data with advertisers. In addition, certain adve
Cookies and tracking make online experiences smoother but raise significant privacy concerns. I hadn't noticed any privacy violations when I clicked "allow" to cookies because I was never taught about it. While first-party cookies enhance functionality, third-party cookies build extensive profiles about users, which could feel invasive.
-
Let’s face it, very few people read the “terms and conditions,” or the “terms of use” agreements prior to installing an application (app). These agreements are legally binding, and clicking “I agree” may permit apps (the companies that own them) to access your: calendar, camera, contacts, location, microphone, phone, or storage, as well as details and information about your friends. While some applications require certain device permissions to support functionality—for example, your camera app will most likely need to access your phone’s storage to save the photos and videos you capture—other permissions are questionable. Does a camera app really need access to your microphone? Think about the privacy implications of this decision.
This shows how digital footprints impact our lives. It raises important questions like how much of our private information we unconsciously trade for convenience. Many people might underestimate the long-term implications of leaving digital traces, such as identity theft or targeted manipulation.
-
When downloading an app, stop and consider: Have you read the app’s terms of use? Do you know what you’re giving the app permission to access? (e.g., your camera, microphone, location information, contacts, etc.) Can you change the permissions you’ve given the app without affecting its functionality? Who gets access to the data collected through your use of the app, and how will it be used? What kind of privacy options does the app offer?
I think most people skip app terms and policies, but it's important to understand what the app is gaining access to and what that means for our privacy. Having a digital citizenship curriculum could help make everyone aware of the importance of privacy.
-
-
psycnet.apa.org
-
showed no significant differences between groups (p ≥ .11) on any dependent measure except METS, with universal lower than track and waitlist on pre-METS score (11.34 ± 1.78, 14.32 ± 3.35, and 14.46 ± 3.75, respectively), F(2, 35) = 5.24, p < .0
It's interesting that there is no significant difference.
-
-
emilyliu.me
-
a web forum
This may be a new way to look at web annotations as well
-
What happens if Bluesky has to shut down as a company?
-
-
learn.cantrill.io
-
Welcome back and in this lesson I want to quickly cover a pretty advanced feature of auto scaling groups and that's auto scaling group life cycle hooks.
So let's jump in and take a look at what these are and how they work.
So life cycle hooks allow you to configure custom actions which can occur during auto scaling group actions.
So you can define actions which occur either during instance launch transitions or instance terminate transitions.
So what this allows you to do is when an auto scaling group scales out or scales in it will either launch or terminate instances and normally this process is completely under the control of the auto scaling group.
So as soon as it makes a decision to provision or terminate an instance this process happens with no ability for you to influence the outcome.
What life cycle hooks do is when you create them instances are paused within the launch or terminate flow and they pause or wait in this state until one of two things happen.
The first is a configurable timeout, which by default is 3600 seconds; when that timeout expires, the auto scaling group action will either continue or be abandoned, depending on the hook's configuration.
The alternative is that, once you've performed whatever custom activity you need, you can explicitly resume the process using the complete lifecycle action operation.
Now in addition to this life cycle hooks can either be integrated with EventBridge or SNS notifications which allow your systems to perform event driven processing based on the launch or termination of EC2 instances within an auto scaling group.
So let's look at how this looks visually.
So let's start with a simple auto scaling group.
If we configure instance launch and terminate hooks this is what it might look like.
So normally when an auto scaling group gets a scale out situation an instance will be launched and it starts off in the pending state.
When it completes it will move into the in service state but this gives us no opportunity to perform any custom activities.
What we could do is define a life cycle hook and hook into the instance launch transition.
So if we do hook into this transition the instance would move from pending to pending wait and it would wait in this state.
This allows us to perform a set of custom actions.
An example might be to load or index some data which might take some time and during this time the instance stays in this state.
Once done it will move from a pending wait state to a pending proceed state and from there it would move into the in service state.
So this is the process when configuring a life cycle hook for this part of an EC2 instances life cycle.
It's these extra steps the wait and proceed which allows the opportunity to run custom actions and the same happens in reverse if we define an instance terminate hook.
What would normally happen when a scaling event happens would be the instance would move from a terminating state to a terminated state and again we wouldn't have the ability to perform any custom actions.
Well what we could do is define a life cycle hook to hook into that instead the instance would move from terminating to terminating wait where it would wait for a timeout.
Now by default this is 3600 seconds and it would wait at this point or until we ran the complete life cycle action operation.
We could use this time period to maybe back up some data or logs or otherwise tidy up the instance prior to its termination and once the timeout expired or when we explicitly call complete life cycle action then it would move from terminating wait to terminating proceed and then finally through to the terminated state.
Now life cycle hooks can integrate as I mentioned previously with SNS for transition notifications and EventBridge can also be used to initiate other processes based on the hooks in an event-driven way.
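As a hedged boto3 sketch, a terminate-side hook and its explicit completion might look like the following; the group name, hook name and instance ID are all placeholders.

```python
# Sketch: a terminate-transition lifecycle hook. Names are placeholders.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Pause terminating instances in Terminating:Wait for up to an hour (the default).
asg.put_lifecycle_hook(
    AutoScalingGroupName="example-asg",
    LifecycleHookName="backup-logs-before-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,      # seconds to wait
    DefaultResult="CONTINUE",   # what happens if the timeout expires
)

# ... your custom action runs here (e.g. copying logs off the instance),
# after which you explicitly resume the termination:
asg.complete_lifecycle_action(
    AutoScalingGroupName="example-asg",
    LifecycleHookName="backup-logs-before-terminate",
    InstanceId="i-0123456789abcdef0",   # placeholder
    LifecycleActionResult="CONTINUE",
)
```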
Now that's everything I wanted to cover about lifecycle hooks, so at this point go ahead and complete this lesson, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back, and in this lesson I want to cover in a little bit more detail something I've touched on earlier, and that's Auto Scaling Group scaling policies.
So let's just jump in and get started.
One thing many students get confused over is whether scaling policies are required on an Auto Scaling Group.
Now you'll see in demos elsewhere in the course that this is not the case.
They can be created without any scaling policies, and they work just fine.
When created without any scaling policies it means that an Auto Scaling Group has static values for min size, max size and desired capacity.
Now if you hear the term manual scaling that actually refers to when you manually adjust these values.
Now this is useful in testing or urgent situations or when you need to hold capacity at a fixed number of instances for example as a cost control measure.
Now in addition to manual scaling we also have different types of dynamic scaling which allow you to scale the capacity of your Auto Scaling Group in response to changing demand.
So there are a few different types of dynamic scaling and I want to introduce them here and then cover them in a little bit more detail.
At a high level each of these adjusts the desired capacity of an Auto Scaling Group based on a certain criteria.
First we have simple scaling and with this one you define actions which occur when an alarm moves into an alarm state.
For example by adding one instance if CPU utilization is above 40% or removing one instance if CPU utilization is below 40%.
This helps infrastructure scale out and in based on demand.
The problem is that this scaling is inflexible.
It's adding or removing a static amount based on the state of an alarm.
So it's simple but it's not all that efficient.
Step scaling increases or decreases the desired capacity based on a set of scaling adjustments known as step adjustments that vary based on the size of the alarm breach.
So you can define upper and lower bounds.
For example you can pick a CPU level which you want say 50% and you can say that if the actual CPU is between 50 and 60% then do nothing.
If the CPU is between 60 and 70% then add one instance, if the CPU is between 70 and 80% add two instances, and then finally if the CPU is between 80 and 100% add three instances; and you can do the same in reverse.
The same step changes apply as the CPU goes below 50%, only removing rather than adding instances.
Now generally step scaling is always better than simple because it allows you to adjust better to changing load patterns on the system.
Next we have target tracking which comes with a predefined set of metrics.
Currently this is CPU utilization, average network in, average network out and ALB request count per target.
Now the premise is simple enough you define an ideal value so the target that you want to track against for that metric for example you might say that you want 50% CPU on average.
The auto scaling group then calculates the scaling adjustment based on the metric and the target value all automatically.
The auto scaling group keeps the metric at the value that you want and it adjusts the capacity as required to make that happen, so the further away the actual value of the metric is from your target value, the more extreme the action, either adding or removing compute.
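As a minimal sketch, the 50% average CPU example might look like this in boto3; the group and policy names are assumptions.

```python
# Minimal sketch (Python / boto3): target tracking against average CPU.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # assumed name
    PolicyName="track-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # the ideal value; adjustments are calculated automatically
    },
)
```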
Then lastly it's possible to scale based on an SQS queue, and this is a common architecture for a worker pool, where you can increase or decrease capacity based on the approximate number of messages visible. So as more messages are added to the queue the auto scaling group increases in capacity to process messages, and then as the queue empties the group scales back in to reduce costs.
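Here's a hedged sketch of the queue-based variant, using target tracking against the queue's ApproximateNumberOfMessagesVisible metric; AWS documentation generally suggests tracking a "backlog per instance" custom metric instead, and the names and target value here are assumptions.

```python
# Minimal sketch (Python / boto3): scale a worker pool on SQS queue depth.
# Queue/group names and the target value are illustrative assumptions; a
# common refinement is a custom "backlog per instance" metric instead.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="worker-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "work-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # aim to keep roughly 100 messages visible
    },
)
```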
Now one really common area of confusion is the difference between simple scaling and step scaling.
AWS recommends step scaling over simple scaling at this point in time, but it's important to understand why, so let's take a look visually.
Let's start with some simple scaling and I want to explain this using the same auto scaling group but at three points in time.
The auto scaling group is initially configured with a minimum of one, a maximum of four and a desired capacity of one, and that means right now we're going to have one out of a maximum of four instances provisioned and operational. Let's also assume that the current average CPU is 10%.
Now with simple scaling we create or use an existing alarm as a guide.
Let's say that we decide to use the average CPU utilization so we create two different scaling rules.
The first says that if average CPU is above 50% then add two instances, and another removes two instances if the CPU is below 50%.
With this type of scaling, if the CPU suddenly jumped to say 60%, then the top rule would apply, and this rule would add two instances, changing the desired capacity from one to three.
This value is still within the minimum of one and the maximum of four, and so two additional instances would be provisioned, with room for a fourth.
If the CPU usage dropped to say 10%, then the second rule would apply, and the desired capacity would be reduced by two or set to the minimum, so in this case it would change from three to one.
Two instances would be terminated and the auto scaling group would be running with one instance and a capacity for three more as required.
Now this works but it's not very flexible.
Whatever the load, whether it's 1% over what you want or 50% over, two instances are added, and the same is used in reverse.
Whether it's 1% below what you want or 50% below, the same two instances are always removed. So with simple scaling you're adding or removing the same amount no matter how extreme the increases and decreases in the metric that you're monitoring.
With step scaling it's more flexible. You're still checking an alarm, but for step scaling you can define rules with steps: you can define an alarm which triggers when the CPU is above 50% and one which triggers when the CPU is below 50%, and you can create steps which adjust capacity based on how far away from that value it is.
So in this case, if the CPU usage is between 50 and 59%, do nothing; between 60 and 69%, add one; between 70 and 79%, add two; and between 80 and 100%, add three. And the same in reverse: between 40 and 49, do nothing; between 30 and 39, remove one; between 20 and 29, remove two; and between 0 and 19, remove three.
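One way to express the scale-out half of those bands is a step scaling policy; here's a minimal sketch, assuming the triggering alarm's threshold is set at 60% so the 50-59% band produces no action (step bounds are offsets from that threshold, and all names are assumptions).

```python
# Minimal sketch (Python / boto3): the scale-out half of the step bands.
# Assumes the triggering CloudWatch alarm has a 60% threshold, so below 60%
# nothing happens; the bounds below are offsets from that threshold.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # assumed name
    PolicyName="step-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        # 60-69% CPU -> add one instance
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 10, "ScalingAdjustment": 1},
        # 70-79% CPU -> add two instances
        {"MetricIntervalLowerBound": 10, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 2},
        # 80%+ CPU -> add three instances
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)
```

A mirrored scale-in policy, attached to a below-50% alarm, would use negative ScalingAdjustment values.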
So let's say that we had an auto scaling group at six points in time. We start with the auto scaling group on the left, and let's say that it has 5% load, with the same minimum of one and maximum of four as the previous example. The policy is trying to remove three instances with this level of CPU, but as the auto scaling group has a minimum of one, it starts with one instance, with a capacity for a further three as required.
If our application receives a massive amount of incoming load, let's say that the CPU usage increases to 100%. This is an extreme example, but based on the scaling policy this would add three instances, taking us to the maximum value of four. So our auto scaling group now has four instances running, which is also the maximum value for that group.
Now at this point with the same amount of incoming load the increased number of instances is probably going to reduce the average CPU.
Let's say that it reduces it to 55%. Well, this causes no change; instances are neither added nor removed, because anything in the range of 40 to 59% means zero change.
Next, say that the load on the system reduces, so CPU drops to 5%, and this removes three instances, dropping the desired capacity down to one with the option for a further three instances as required.
Next the average CPU stays at 5%, but the minimum of the auto scaling group is one, so the number of instances stays the same even though the step scaling rule would attempt to remove three instances at this level. So we always have the minimum number of instances as defined within the minimum value of the auto scaling group.
Now maybe we end the day with some additional load on the system; let's say for example that the CPU usage goes to 60%, and this adds one additional instance. So you should be able to see by now that step scaling is great for variable load, where you need to control how systems scale out and in.
It allows you to handle large increases and decreases in load much better than simple scaling: how extreme the increase or decrease is determines how many units of compute are added or removed. It's not static like simple scaling, and that's the main difference between simple and step, the ability to scale in different ways based on how extreme the load changes are.
With that being said though, that's everything I wanted to cover in this lesson. Go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
-
-
www.tandfonline.com
-
Sept
talks about how it is possible the study did not gather the best possible sample, but it is probably good enough
-
explanation for the null finding among men is that detachment and depression are not issues that influence smoking among male college students
mental health issues do not affect the males that much
-
study indicate that female smokers exhibit higher levels of nicotine dependence than do their male counterparts. Consistent with the study hypotheses, attachment mediated the relationship between gender and nicotine dependence; depression moderated this relationship. Lower levels of attachment mediated the gender-dependence relationship. Therefore, female smokers who did not perceive themselves as "connected" with their peer group were more dependent on nicotine. Moreover, female smokers with elevated depressive symptoms were also more dependent on nicotine.
summary of the findings
-
all students enrolled in introductory psychology classes at Texas Tech and has been approved by the Institutional Review Board (IRB)
a good study that was approved and checked for bias
-
women are twice as likely to experience major depression and that the onset of major depression coincides with the first years of college, 22 the ability of nicotine to dispel depression effectively 23,24 may act as a powerful smoking reinforcer for young first-year college women.
when it comes to women, nicotine dependence and depression go hand in hand and have a strong relationship
-
adjustment to college during the initial years is strongly associated with depression, particularly for young female students
is this why so many people start vaping at this time in their lives?
-
enhanced social affiliation of tobacco use rather than the pharmacological effects of cigarette smoking
females want to fit in a lot more than males tend to do
-
familial relations motivates young people to seek other models (smoking friends) in their environment
social pressure and the need to fit in affect females and their nicotine addictions
-
-
docdrop.org
-
teachers do most of the thinking,
Just like cognitive coaching!
-
Well, they show up, and we have to sit in a room all day and hear about stuff we already know. The sessions are boring, so we sit there and talk about … "I'd hate that too," Devona agreed. "But what if your trainer of trainers met you on your floor, got to know you, and really listened to and affirmed you? What if you became comfortable telling her where you wanted to improve? What if the trainer of trainers worked with you, showed you exactly how to improve in your chosen area by working with your patients, and then watched you and gave you helpful suggestions and support until you could easily do the new skill?" "Oh, I'd love that," said Devona's friend. "That's what I do," said Devona. That is what instructional coaches do. Shoulder to shoulder with teachers, they share teaching strategies that help teachers meet … To accomplish this, we have found that instructional coaches are … effective when they do two things: (a) position teachers as partners … coaching really is two teachers talking with each other.
I am sad that this is their experience, and I know it is what many teachers feel as well. How do we change this?
-