- Oct 2024
-
www.medrxiv.org
-
Reviewer #1 (Public Review):
Padilha et al. aimed to find prospective metabolite biomarkers in serum of children aged 6-59 months that were indicative of neurodevelopmental outcomes. The authors leveraged data and samples from the cross-sectional Brazilian National Survey on Child Nutrition (ENANI-2019), and an untargeted multisegment injection-capillary electrophoresis-mass spectrometry (MSI-CE-MS) approach was used to measure metabolites in serum samples (n=5004) which were identified via a large library of standards. After correlating the metabolite levels against the developmental quotient (DQ), or the degree to which age-appropriate developmental milestones were achieved as evaluated by the Survey of Well-being of Young Children, serum concentrations of phenylacetylglutamine (PAG), cresol sulfate (CS), hippuric acid (HA) and trimethylamine-N-oxide (TMAO) were significantly negatively associated with DQ. Examination of the covariates revealed that the negative associations of PAG, HA, TMAO and valine (Val) with DQ were specific to younger children (-1 SD or 19 months old), whereas creatinine (Crtn) and methylhistidine (MeHis) had significant associations with DQ that changed direction with age (negative at -1 SD or 19 months old, and positive at +1 SD or 49 months old). Further, mediation analysis demonstrated that PAG was a significant mediator for the relationship of delivery mode, child's diet quality and child fiber intake with DQ. HA and TMAO were additional significant mediators of the relationship of child fiber intake with DQ.
Strengths of this study include the large cohort size and study design allowing for sampling at multiple time points along with neurodevelopmental assessment and a relatively detailed collection of potential confounding factors including diet. The untargeted metabolomics approach was also robust and comprehensive, allowing for level 1 identification of a wide breadth of potential biomarkers. Given their methodology, the authors should be able to achieve their aim of identifying candidate serum biomarkers of neurodevelopment for early childhood. The results of this work would be of broad interest to researchers who are interested in understanding the biological underpinnings of development and also for tracking development in pediatric populations, as it provides insight for putative mechanisms and targets from a relevant human cohort that can be probed in future studies. Such putative mechanisms and targets are currently lacking in the field due to challenges in conducting these kinds of studies, so this work is important.
However, in the manuscript's current state, the presentation and analysis of data impede the reader from fully understanding and interpreting the study's findings. Particularly, the handling of confounding variables is incomplete. There is a different set of confounders listed in Table 1 versus Supplementary Table 1 versus Methods section Covariates versus Figure 4. For example, Region is listed in Supplementary Table 1 but not in Table 1, and Mode of Delivery is listed in Table 1 but not in Supplementary Table 1. Many factors are listed in Figure 4 that aren't mentioned anywhere else in the paper, such as gestational age at birth or maternal pre-pregnancy obesity.
The authors utilize the directed acyclic graph (DAG) in Figure 4 to justify the further investigation of certain covariates over others. However, the lack of inclusion of the microbiome in the DAG, especially considering that most of the study findings were microbial-derived metabolite biomarkers, appears to be a fundamental flaw. Sanitation and micronutrients are proposed by the authors to have no effect on the host metabolome, yet sanitation and micronutrients have both been demonstrated in the literature to affect microbiome composition, which can in turn affect the host metabolome.
Additionally, the authors emphasized as part of the study selection criteria the following:
"Due to the costs involved in the metabolome analysis, it was necessary to further reduce the sample size. Then, samples were stratified by age groups (6 to 11, 12 to 23, and 24 to 59 months) and health conditions related to iron metabolism, such as anemia and nutrient deficiencies. The selection process aimed to represent diverse health statuses, including those with no conditions, with specific deficiencies, or with combinations of conditions. Ultimately, through a randomized process that ensured a balanced representation across these groups, a total of 5,004 children were selected for the final sample (Figure 1)."
Therefore, anemia and nutrient deficiencies are assumed by the reader to be important covariates, yet, the data on the final distribution of these covariates in the study cohort is not presented, nor are these covariates examined further.
The inclusion of specific covariates in Table 1, Supplementary Table 1, the statistical models, and the mediation analysis is thus currently biased as it is not well justified.
Finally, it is unclear what the partial-least squares regression adds to the paper, other than to discard potentially interesting metabolites found by the initial correlation analysis.
-
Reviewer #2 (Public Review):
A strength of the work lies in the number of children Padilha et al. were able to assess (5,004 children aged 6-59 months) and in the extensive screening that the Authors performed for each participant. This type of large-scale study is uncommon in low-to-middle-income countries such as Brazil.
The Authors employ several approaches to narrow down the number of potentially causally associated metabolites.
Could the Authors justify on what basis the minimum dietary diversity score was dichotomized? Were sensitivity analyses undertaken to assess the effect of this dichotomization on the associations reported in the article? Consumption of each food group may have a differential effect that is obscured by this dichotomization.
Could the Authors specify the statistical power associated with each analysis?
Could the Authors describe in detail which metric they used to measure how predictive the PLSR models are, and how they determined what the "optimal" number of components was?
The Authors use directed acyclic graphs (DAG) to identify confounding variables of the association between metabolites and DQ. Could the dataset generated by the Authors have been used instead? Not all confounding variables identified in the literature may be relevant to the dataset generated by the Authors.
Were the systematic reviews or meta-analyses used in the DAG performed by the Authors, or were they based on previous studies? If the former, more information about the methodology employed and the studies included should be provided by the Authors.
Approximately 72% of children included in the analyses lived in households with a monthly income above the Brazilian minimum wage. The cohort is also biased towards households with a higher level of education. Both of these measures correlate with developmental quotient. Could the Authors discuss how this may have affected their results and how generalizable they are?
Further to this, could the Authors describe how inequalities in access to care in the Brazilian population may have affected their results? Could they have included a measure of this possible discrepancy in their analyses?
The Authors state that the results of their study may be used to track children at risk for developmental delays. Could they discuss the potential for influencing policies and guidelines to address delayed development due to malnutrition and/or limited access to certain essential foods?
-
Reviewer #3 (Public Review):
The ENANI-2019 study provides valuable insights into child nutrition, development, and metabolomics in Brazil, highlighting both challenges and opportunities for improving child health outcomes through targeted interventions and further research.
Strengths of the methods and results:
(1) The study utilizes data from the already existing ENANI-2019 cohort. This cohort choice allows for longitudinal assessments and exploration of associations between metabolites and developmental outcomes. In addition, it conserved resources, which are scarce in all settings in the current scenario.
(2) The study aims to investigate the relationship between circulating metabolites (exposure) and early childhood development (outcome), specifically developmental quotient (DQ). The objectives are clearly stated, which facilitates focused research questions and hypotheses. The population that is studied is clearly described.
(3) The study accessed a large number of children under five years, with blood collected from a final sample size of 5,004 children. The exclusion of infants under six months due to venipuncture challenges and lack of reference values highlights practical considerations in research design.
The study sample reflects a diverse range of children in terms of age, sex distribution, weight status, maternal education, and monthly family income. This diversity enhances the generalizability of findings across different sociodemographic groups within Brazil.
(4) The study uses standardized measures (e.g., DQ assessments) and chronological age. Confounding variables, such as the child's age, diet quality, and nutritional status, are carefully considered and incorporated into analyses through a Directed Acyclic Graph (DAG). The mean DQ of 0.98 indicates overall developmental norms among the studied children, with variations noted across different demographic factors such as age, region, and maternal education. The prevalence of Minimum Dietary Diversity (MDD) being met by 59.3% of children underscores dietary patterns and their potential impact on health outcomes. The association between nutritional status (weight-for-height z-scores) and developmental outcomes (DQ) provides insights into the interplay between nutrition and child development.
The study identified key metabolites associated with developmental quotient (DQ):
Component 1: Branched-chain amino acids (Leucine, Isoleucine, Valine).
Component 2: Uremic toxins (Cresol sulfate, Phenylacetylglutamine).
Component 3: Betaine and amino acids (Glutamine, Asparagine).
The study focused on several serum metabolites such as PAG (phenylacetylglutamine), CS (p-cresyl sulfate), HA (hippuric acid), TMAO (trimethylamine-N-oxide), MeHis (methylhistidine), and Crtn (creatinine). These metabolites are implicated in various metabolic pathways linked to gut microbiota activity, amino acid metabolism, and dietary factors.
These metabolites explained a significant portion of both metabolite variance (39.8%) and DQ variance (4.3%). The study suggests that these metabolites can be used as proxy measures of the gut microbiome in children.
(5) The use of partial least squares regression (PLSR) with cross-validation (80% training, 20% testing) is a robust approach to identify metabolites predictive of DQ and minimizes overfitting. This model allows outliers to remain outliers, for transparency.
The Directed Acyclic Graph (DAG) identifies and adjusts for confounding variables (e.g., child's diet quality, nutritional status) and strengthens the validity of findings by controlling for potential biases.
Developmental and gender differences were studied by testing interactions with the child's age and sex.
Mediation analysis exploring metabolites as potential mediators provides insights into underlying pathways linking exposures (e.g., diet, microbiome) with DQ.
The use of Benjamini-Hochberg correction for multiple comparisons and bootstrap tests (5,000 iterations) enhances the reliability of results by controlling false discovery rates and assessing significance robustly.
Significant correlations between serum metabolites and DQ, particularly negative associations with certain metabolites like PAG and CS, suggest potential biomarkers or pathways influencing developmental outcomes. Notably, these associations varied with age, suggesting different metabolic impacts during early childhood development.
Weaknesses:
(1) The data collected were incomplete, especially those related to breastfeeding history and birth weight. These have been mentioned in the limitations of the study, yet they might have been potential confounders or even factors leading to the particular identified metabolite state of the population.
(2) Tests other than mediation analysis might have been used to ensure reliability and robustness of the data. Describing how the data were processed, the data cleaning methods, how outliers were handled, and sensitivity analyses would ensure robustness of the findings.
(3) The generalizability of the data is not sound, especially considering that the children mostly belonged to a higher socioeconomic group in Brazil, with mother or caregiver education being above a certain level. Comparative studies with children from other socio-economic groups and other cohorts might have been useful. Consideration of sample size adequacy and power analysis might have helped in generalizing the findings.
(4) Caution is needed in interpreting causality from these data because of the nature of the study design. Discussing alternative explanations and potential confounding factors in more depth could strengthen the conclusions.
Appraisal
(1) The aims of the study were to identify associations between children's serum metabolome and early childhood development. This aim was met. The results do confirm their conclusions.
Impact of the work on the field
(1) Unless the actual gut microbiome of children in this age group is examined directly, through analysis of gut bacteria or gastrointestinal examination, the causal effect of the gut metabolome on early childhood development cannot be established with certainty. Because this may not be possible in every situation, proxy methods such as the one elucidated here might be useful, considering the risk-benefit ratio.
(2) More research is needed on this theme through longitudinal studies to validate these findings and explore underlying pathways involving gut-brain interactions and metabolic dysregulation.
Other readings: Readers are advised to read research from other countries and in other languages to understand the connection between the gut microbiome, metabolite spectra, and child development, and to study the effect of these factors on children's mental development as well.
Readers might consider the following questions:
(1) Should investigators study the families through direct observation of diet and other factors to look for a connection between food intake, the gut microbiome, and child development?
(2) Does the mother's gut microbiome influence the child's microbiome? Can the mother or caregiver's microbiome influence early childhood development?
(3) Is developmental quotient enough to study early childhood development? Is it comprehensive enough?
-
-
drive.google.com
-
Reptiles are the only ectothermic amniotes and are therefore a pivotal group to study, as they can provide important insights into both the evolution of the immune system and the functioning of the immune system in an ecological setting.
-
-
www.biorxiv.org
-
eLife Assessment
This important work addresses the role of Marcks/Markcksl during spinal cord development and regeneration. The study is exceptional in combining molecular approaches to understand the mechanisms of tissue regeneration with behavioural assays, which is not commonly employed in the field. The data presented is convincing and comprehensive, using many complementary methodologies.
-
Reviewer #1 (Public Review):
In this manuscript, El Amri et al. are exploring the role of Marcks and Marcksl1 proteins during spinal cord development and regeneration in Xenopus. Using two different techniques to knock down their expression, they argue that these proteins are important for neural progenitor proliferation and neurite outgrowth in both contexts. Finally, using a pharmacological approach, they suggest that Marcks and Marcksl1 work by modulating the activity of PLD and the levels of PIP2, whilst PKC could modulate Marcks activity.
The strength of this manuscript resides in the ability of the authors to knock down the expression of 4 different genes using 2 different methods to assess the role of this protein family during early development and regeneration at the late tadpole stage. This has always been a limiting factor in the field as the tools to perform conditional knockouts in Xenopus are very limited. However, this will not really be applicable to essential genes as it relies on the general knockdown of protein expression. The generation of antibodies able to detect endogenous Marcks/Marcksl1 is also a powerful tool to assess the extent to which the expression of these proteins is down-regulated.
Whilst there is a great amount of data provided in this manuscript and there is strong evidence to show that Marcks proteins are important for spinal cord development and regeneration, their roles in both contexts are not explored fully. The description of the effect of knocking down Marcks/Marcksl1 on neurons and progenitors is rather superficial and the evidence for the underlying mechanism underpinning their roles is not very convincing.
-
Reviewer #2 (Public Review):
M. El Amri et al. investigated the functions of Marcks and Marcks-like 1 during spinal cord (SC) development and regeneration in Xenopus laevis. The authors rigorously performed loss-of-function experiments with morpholino knock-down and CRISPR knock-out, combined with rescue experiments, in the developing spinal cord of embryos and in regenerating spinal cord at the tadpole stage.
For the assays in the developing spinal cord, a unilateral approach (knock-down/out on only one side of the embryo) allowed the authors to assess the gene functions by directly comparing one side (e.g. mutated SC) to the other (e.g. wild-type SC on the other side). For the assays in regenerating SC, the authors microinjected CRISPR reagents into 1-cell-stage embryos. When the embryos (F0 crispants) grew up to tadpoles (stage 50), the SC was transected. They then assessed neurite outgrowth and progenitor cell proliferation. The validation of the phenotypes was mostly based on the quantification of immunostaining images (neurite outgrowth: acetylated tubulin; neural progenitors: Sox2, Sox3; proliferation: EdU, PH3), which are simple but robust enough to support their conclusions. In both SC development and regeneration, the authors found that Marcks and Marcksl1 were necessary for neurite outgrowth and neural progenitor cell proliferation.
The authors performed rescue experiments on morpholino knock-down and CRISPR knock-out conditions by Marcks and Marcksl1 mRNA injection for SC development, and by pharmacological treatments for SC development and regeneration. The unilateral mRNA injection rescued the loss-of-function phenotype in the developing SC. To explore the signalling role of these molecules, they rescued the loss-of-function animals with pharmacological reagents. They used S1P (a PLD activator), FIPI (a PLD inhibitor), NMI (a PIP2 synthesis activator) and ISA-2011B (a PIP2 synthesis inhibitor). The authors found that the activator treatments rescued neurite outgrowth and progenitor cell proliferation in loss-of-function conditions. From these results, the authors proposed that PIP2 and PLD are the mediators of Marcks and Marcksl1 for neurite outgrowth and progenitor cell proliferation during SC development and regeneration. The results of the rescue experiments are particularly important to assess gene functions in loss-of-function assays; therefore, the conclusions are solid. In addition, they performed gain-of-function assays by unilateral Marcks or Marcksl1 mRNA injection, showing that the injected side of the SC had more neurite outgrowth and proliferative progenitors. These conclusions are consistent with the loss-of-function phenotypes and the rescue results. Importantly, the authors showed the linkage of the phenotype and functional recovery by behavioral testing, which clearly showed that the crispants with SC injury swam less distance than wild types with SC injury at 10 days post surgery.
Prior to the functional assays, the authors analyzed the expression pattern of the genes by in situ hybridization and immunostaining in developing embryos and regenerating SC. They confirmed that the amount of protein expression was significantly reduced in the loss-of-function samples by immunostaining with the specific antibodies that they made for Marcks and Marcksl1. Although the expression patterns during embryogenesis are mostly known from previous work, the data provided appropriate information to readers about the expression and showed the efficiency of the knock-out as well.
MARCKS family genes have been known to be expressed in the nervous system. However, few studies have focused on their function in nerves. This research introduced these genes as new players during SC development and regeneration. These findings could attract broader interest from people working on nervous system disease models and in the medical field. Although it is a typical requirement for loss-of-function assays in Xenopus laevis, I believe that the efficient knock-out of four genes by CRISPR/Cas9 was derived from their dedication to designing, testing and validating the gRNAs and is exemplary.
Weaknesses:
1) Why did the authors choose Marcks and Marcksl1?
The authors mentioned that these genes were identified in a recent proteomic analysis comparing SC-regenerative tadpoles and non-regenerative froglets (Line (L) 54-57). However, although it seems the proteomic analysis was their own dataset, the authors did not mention any details of how promising genes were selected for the functional assays (this article). In the proteomic analysis, there must be other candidate genes that might be more likely factors related to SC development and regeneration based on previous studies, but it was unclear what the criteria for selecting Marcks and Marcksl1 were.
2) Gene knock-out experiments with F0 crispants
The authors described that they designed and tested 18 sgRNAs to find the most efficient and consistent gRNAs (L191-195). However, this cannot practically guarantee the same phenotypes, due to, for example, different injection timing, different strains of Xenopus laevis, etc. Although the authors mention the concerns about mosaicism themselves (L180-181, L289-292), and the immunostaining results nicely showed uniformly reduced Marcks and Marcksl1 expression in the crispants, they did not refer to this issue explicitly.
3) Limitations of pharmacological compound rescue
In the methods part, the authors describe that they performed titration experiments for the drugs (L702-704), which is a minimal requirement for this type of assay. However, it is known that even when a well-characterized drug is applied, if it is used at different concentrations, the drug could target different molecules (Gujral TS et al., 2014, PNAS). Therefore, it is difficult to eliminate the possibility of side effects and off-target effects by testing only a few compounds.
-
Reviewer #3 (Public Review):
El Amri et al conducted an analysis on the function of marcks and marcksl in Xenopus spinal cord development and regeneration. Their study revealed these proteins are crucial for neurite outgrowth and cell proliferation, including Sox2+ progenitors. Furthermore, they suggested these genes may act through the PLD pathway. The study is well-executed with appropriate controls and validation experiments, distinguishing it from typical regeneration research by including behavioral assays. The manuscript is commendable for its quantifications, literature referencing, careful conclusions, and detailed methods. Conclusions are well-supported by the experiments performed in this study. Overall, this manuscript contributes to the field of spinal cord regeneration and sets a good example for future research in this area.
-
-
learn.cantrill.io
-
Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.
Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.
So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.
Now there are a few things that you need to do before we get started.
First, as always, make sure that you're logged in to the IAM admin user of the general AWS account and you'll also need the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and click that now.
This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.
Now everything's pre-configured you just need to check this box at the bottom and click on create stack.
Now that's going to take a few minutes to create and we need this to be in a create complete state.
So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.
So now this stack is in a create complete state and we're good to continue.
Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.
Now I'm a fan of typing all commands in manually because I personally think that it helps you learn, but if you are the type of person who has a habit of making mistakes when typing long commands out, then you can copy and paste from this document to avoid any typos.
Now one final thing before we finish at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.
If you're going to do that then you should pre sign up for a Docker Hub account if you don't already have one and the link for this is included attached to this lesson.
If you already have a Docker Hub account then you're good to continue.
Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.
Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.
So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.
Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.
So to do that just right-click and then select connect.
You need to pick Session Manager from the tabs across the top here and then just click on connect.
Now that will take a few minutes but once connected you should see this prompt.
So it should say sh-, then a version number, and then a dollar sign.
Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.
The Docker engine is the thing that allows Docker containers to run on this EC2 instance.
So we need to install the Docker engine package and we'll do that using this command.
So we're using sudo to get admin permissions, then the package manager DNF, then install, then Docker.
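For reference, the install command in the lesson commands document is most likely the standard Amazon Linux 2023 form; the exact package name is an assumption here:

  sudo dnf install docker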
So go ahead and run that and that will begin the installation of Docker.
It might take a few moments to complete it might have to download some prerequisites and you might have to answer that you're okay with the install.
So press Y for yes and then press enter.
Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.
So sudo again to get admin permissions, and then service, and then the Docker service, and then start.
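Put together, that command is most likely:

  sudo service docker start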
So type that and press enter and that starts the Docker service.
Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.
So the simplest way to do that is to type docker, then a space, then ps, and press enter.
Now you're going to get an error.
This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.
We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.
So sudo for admin permissions, then usermod, then -a, then -G for group, then the docker group, and then ec2-user.
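In other words, the command is most likely:

  sudo usermod -a -G docker ec2-user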
Now that will allow a local user of this system, specifically ec2-user, to be able to interact with the Docker engine.
Okay so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.
So the next thing is we need to log out and log back in of this instance.
So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.
So connect back in to this EC2 instance.
Now once you're connected back into this EC2 instance we need to run another command which switches us to ec2-user, so it basically logs us in as ec2-user.
So that's this command, and the result of this would be the same as if you directly logged in to ec2-user.
Now the reason we're doing it this way is because we're using session manager so that we don't need a local SSH client or to worry about SSH keys.
We can directly log in via the console UI; we just then need to switch to ec2-user.
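The switch command referred to here is most likely:

  sudo su - ec2-user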
So run this command and press enter and we're now logged into the instance as ec2-user. To test everything's okay we need to use a command with the Docker engine, and that command is docker ps, and if everything's okay you shouldn't see any output beyond this list of headers.
What we've essentially done is told the Docker engine to give us a list of any running containers, and even though we don't have any, it hasn't errored; it's simply displayed this empty list, and that means everything's okay.
So good job.
Now, what I've done to speed things up: if you just run an ls and press enter, you'll see the instance has been configured to download the sample application that we're going to be using, and that's what the file container.zip is within this folder.
I've configured the instance to automatically extract that zip file which has created the folder container.
So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.
Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.
Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.
It's a simple web page: we've got index.html, which is the index, we have a number of images which this index.html contains, and then we have a Dockerfile.
Now this docker file is the thing that the docker engine will use to create our docker image.
I want to spend a couple of moments just stepping you through exactly what's within this docker file.
So I'm going to move across to my text editor and this is the docker file that's been automatically downloaded to your EC2 instance.
Each of these lines is a directive to the docker engine to perform a specific task and remember we're using this to create a docker image.
This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.
This next line sets the maintainer label; it's essentially a brief description of what the image is and who's maintaining it, and in this case it's just a placeholder of Animals for Life.
This next line runs a command, specifically the yum command, to install some software, specifically the Apache web server.
This next command, COPY, copies files from the local directory when you use the docker command to create an image. So it's copying that index.html file from this local folder that I've just been talking about, and it's going to put it inside the docker image at this path. So it's going to copy index.html to /var/www/html, and this is where an Apache web server expects this index.html to be located.
This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.
This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.
In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.
Now this doesn't actually perform any configuration it simply tells the docker engine what port is exposed in this case port 80 which is HTTP.
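Putting those directives together, the Dockerfile described here looks roughly like the following sketch; the exact base image tag, file names and ENTRYPOINT arguments are assumptions based on the description above:

  # Base image: version 8 of the Red Hat Universal Base Image
  FROM redhat/ubi8
  # Maintainer label, a placeholder description of the image
  LABEL maintainer="Animals for Life"
  # Install the Apache web server
  RUN yum -y install httpd
  # Copy the index page to where Apache serves content from
  COPY index.html /var/www/html/
  # Copy the jpegs referenced by index.html
  COPY *.jpg /var/www/html/
  # Run Apache in the foreground when a container starts from this image
  ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
  # Document that the web server listens on port 80 (HTTP)
  EXPOSE 80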
Now this docker file is going to be used when we run the next command which is to create a docker image.
So essentially this file is the same docker file that's been downloaded to your EC2 instance and that's what we're going to run next.
So this is the next command within the lesson commands document and this command builds a container image.
What we're essentially doing is giving it the location of the docker file.
This dot at the end denotes the working directory, so it's here where it's going to find the Dockerfile and any associated files that the Dockerfile uses.
So we're going to run this command and this is going to create our docker image.
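The build command is most likely of this form; the image name containerofcats is an assumption based on the filter used later in the lesson:

  docker build -t containerofcats .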
So let's go ahead and run this command.
It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.
Remember from the theory lesson each line within the docker file generally creates a new file system layer so a new layer of a docker image and that's how docker images are efficient because you can reuse those layers.
Now in this case this has been successful.
We've successfully built a docker image with this ID, so it's given it a unique ID, and it's tagged this docker image with the :latest tag.
So this means that we have a docker image that's now stored on this EC2 instance.
Now I'll go ahead and clear the screen to make it easier to see and let's go ahead and run the next command which is within the lesson commands document and this is going to show us a list of images that are on this EC2 instance but we're going to filter based on the name container of cats and this will show us the docker image which we've just created.
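That listing command is probably along these lines (again, the image name is assumed):

  docker images --filter reference=containerofcats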
So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.
So this is the command that we're going to use it's the next one within the lesson commands document.
It's docker run and then it's telling it to map port 80 on the container with port 80 on the EC2 instance and it's telling it to use the container of cats image and if we run that command docker is going to take the docker image that we've got on this EC2 instance run it to create a running container and we should be able to interact with that container.
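Based on that description, the run command is most likely:

  docker run -p 80:80 containerofcats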
So go back to the AWS console, click on Instances, and look for the a4l-public EC2 instance that's in the running state.
I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.
Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.
Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.
We just need to use the IP address directly.
So copy that into your clipboard, open a new tab, and then open that IP address, and now we can see the amazing application: "if it fits, i sits, in a container, in a container". This amazing looking enterprise application is what's contained in the docker image that you just created, and it's now running inside a container based off that image.
So that's great everything's working as expected and that's running locally on the EC2 instance.
Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.
You can either use my docker image which is this image that I've just created or you can use your own docker image.
If you're going to use my docker image then you can skip this next step.
You don't need a docker hub account and you don't need to upload your image.
If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.
So I'm going to move back to the Session Manager tab, press Control+C to exit out of this running container, and then type clear to clear the screen and make it easier to see.
Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.
So it's docker space login space double hyphen username equals and then your username.
So if you're doing this in your own environment you need to delete this placeholder and type your username.
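So the login command takes this form, where your-docker-hub-username is a placeholder for your own username:

  docker login --username=your-docker-hub-username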
I'm going to type my username because I'll be uploading this image to my docker hub.
So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.
So I'm going to paste in my password if you're logging into your docker hub you should use your password.
Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.
Now you don't have to worry about the security message, because whilst your Docker Hub password is going to be stored on the EC2 instance, shortly we're going to terminate this instance, which will remove all traces of this password from this machine.
Okay, so again we're going to upload our docker image to Docker Hub, so let's run this command again, and you'll see that, because we're just using the docker images command, we can see the base image as well as our image.
So we can see red hat UBI 8.
We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.
So this is the top line in my case container of cats latest and then the image ID.
So then we need to run this command so docker space tag and then the image ID that you've just copied into your clipboard and then a space and then your docker hub username.
In my case it's actrl with 1L if you're following along you need to use your own username and then forward slash and then the name of the image that you want this to be stored as on docker hub so I'm going to use container of cats.
So that's the command you need to use so docker tag and then your image ID for container of cats and then your username forward slash container of cats and press enter and that's everything we need to do to prepare to upload this image to docker hub.
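As a concrete sketch of that tag command (the image ID and username here are placeholders):

  docker tag 1234abcd5678 your-docker-hub-username/containerofcats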
So the last command that we need to run is the command to actually upload the image to Docker Hub, and that command is docker push. So we're going to push the image to Docker Hub, then we need to specify the Docker Hub username (again this is my username, but if you're doing this in your environment it needs to be your username), and then forward slash, and then the image name, in my case container of cats, and then colon latest. Once you've got all that, go ahead and press enter, and that's going to push the docker image that you've just created up to your Docker Hub account. Once it's up there it means that we can deploy from that docker image to other EC2 instances and even ECS, and we're going to do that in a later demo in this section of the course.
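So the push command will look something like this (the username is a placeholder):

  docker push your-docker-hub-username/containerofcats:latest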
Now that's everything that you need to do in this demo lesson. You've essentially installed and configured the Docker engine, you've used a Dockerfile to create a docker image from some local assets, you've tested that docker image by running a container using that image, and then you've uploaded that image to Docker Hub, and as I mentioned before, we're going to use that in a future demo lesson in this section of the course.
Now the only thing that remains to do is to clear up the infrastructure that we've used in this demo lesson. So go ahead and close down all of these extra tabs and go back to the CloudFormation console. This is the stack that's been created by the one-click deployment link, so all you need to do is select this stack, it should be called EC2 docker, then click on delete and confirm that deletion, and that will return the account to the same state as it was in at the start of this demo lesson.
Now that is everything you need to do in this demo lesson I hope it's been useful and I hope you've enjoyed it so go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
social-media-ethics-automation.github.io
-
Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.
That is a very valid concern. AI-generated content, such as from ChatGPT, tends to spam online platforms like email and Wikipedia with misinformation, making people not trust the platforms. Because Wikipedia, for example, enables users to edit entries, it is highly susceptible to the addition of false information. There are systems in place for moderation, but it's tough to keep up with how quickly AI can generate content. It requires stronger editorial controls and awareness on the part of users to maintain the reliability of such platforms.
-
Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:
This is a great example of using social media for the right cause, and it explains how the context matters. It shows that ethical trolling can be done to get social justice for those who have been wronged, forcing such a big company to act right. It's interesting to see how the company's decision backfired because of trolling.
-
People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.
I don't have a problem with this kind of poisoning. When you stand up to a company like Kellogg's that has money and expensive lawyers and cares only about its bottom line, it needs to be done. The David and Goliath of it all begs for action in fighting against unfair working conditions for the ordinary worker.
-
-
pressbooks.lib.jmu.edu
-
Does anyone know the original Italian word for "work"?
-
What did the Italian word for "work" convey in Montessori's time?
-
[MAPS 2024 conversation] Italian translations of the term "work": * "meaningful activity" * "play" (lavora), i.e., "meaningful play". Context: English translations of Montessori's original writing. The Italian carries different meanings than the English translations. Historical context matters as it relates to the meaning of terms.
-
-
www.dailymaverick.co.za
-
for - polycrisis - organized crime - Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18 - book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024 - https://viahtml.hypothes.is/proxy/https://shop.dailymaverick.co.za/product/man-alone-mandelas-top-cop-exposing-south-africas-ceaseless-sabotage/?_gl=11mkyl5s_gcl_auODI2MTMxODEuMTcyNjI0MDAwMg.._gaNzQ5NDM3NzE0LjE3MjMxODY0NzY._ga_Y7XD5FHQVG*MTcyOTM1MjgwOS4xLjAuMTcyOTM1MjgxOS41MC4wLjkyNTE5MDk2OA..
summary - This article revolves around the research of South African crime reporter Caryn Dolley on the organized web of crime in South Africa - She discusses the nexus of - trans-national drug cartels - local Cape Town gangs - South African state collusion with gangs - in her new book: Man Alone: Mandela's Top Cop - Exposing South Africa's Ceaseless Sabotage - It illustrates how on-the-ground efforts to fight crime are failing because they do not effectively address this criminal nexus - The book follows the life of retired top police investigator Andre Lincoln whose expose paints the deep level of criminal activity spanning government, trans-national criminal networks and local gangs - Such organized crime takes a huge toll on society and is an important contributor to the polycrisis. - Non-linear approaches are necessary to tackle this systemic problem - One possibility is a trans-national citizen-led effort
Tags
- book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024
- polycrisis - organized crime
- Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18
Annotators
URL
-
-
www.nytimes.com
Tags
- Drivers and Impacts of the Record-Breaking 2023 Wildfire Season in Canada
- by: Manuela Andreoni
- Canada
- Brendan Byrne
- flash droughts
- Natural Resources Canada
- regeneration failure
- Ellen Whitman
- increasing risk of wildfires
- Carbon emissions from the 2023 Canadian wildfires
- Marc-André Parisien
Annotators
URL
-
-
social-media-ethics-automation.github.io
-
Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.
I completely agree with this. As TikTok gained popularity with its short videos, many other platforms quickly adopted this feature for creating and sharing short-form content. Instagram introduced Reels, and YouTube launched Shorts, both experiencing significant growth as a result. Even Spotify has now incorporated a similar short video format.
-
One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.
I like the algorithm social media platforms use because it shows me content that I like to see. I have always wondered how social media sites make money from the ads; anytime I get an ad on any platform, I always skip it if I can.
-
So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.
Social media achieved this goal long ago, as this generation is on their phones all day. For example, every day when I check my screen time it's over 8 hours or more, and 70% of that time is spent on TikTok. By using data mining, the app has fairly accurately figured out what phase of life I am in, and every TikTok that I see is relatable, so I feel a connection with it. For example, if someone goes through a break-up, their whole FYP will be filled with TikToks about a break-up, about how someone went through the same thing, or something comforting, keeping them hooked to it. As for me, whatever I am going through in my life, it's like my TikTok knows all of it and shows exactly those kinds of posts. In this way, I can see how data mining may be used to extract my conversations with my friends, and how what I like and repost depending on my mood is also being tracked.
-
-
www.youtube.com
-
modern science
The advent of
-
the axial age saw the incorporation of the inner dialogue into the human sense of self agency
Axial age
-
a word is the body of a concept a concept is the soul of a word
Well said
-
the ploma
Preloma?
-
the collective unconscious is in fact the implicate noetic realm
Hegel absolute spirit foreshadows this
-
participants in the Stream of becoming
Stream of becoming
-
the truths of science
X
-
our cognitive faculties
our cognitive faculties are imperfect machines which have been haphazardly assembled by the blind watchmaker of algorithmic natural selection
-
-
pierce.instructure.com
-
The Tuskegee Experiment based on information presented in different genres
-
-
www.theatlantic.com
-
In psychology, the belief that only conservatives can be authoritarians, and that therefore only conservative authoritarians warrant serious study, has proved self-reinforcing over the course of decades.
!
-
Intriguingly, the researchers found some common traits between left-wing and right-wing authoritarians, including a “preference for social uniformity, prejudice towards different others, willingness to wield group authority to coerce behavior, cognitive rigidity, aggression and punitiveness towards perceived enemies, outsized concern for hierarchy, and moral absolutism.”
!
-
But one reason left-wing authoritarianism barely shows up in social-psychology research is that most academic experts in the field are based at institutions where prevailing attitudes are far to the left of society as a whole. Scholars who personally support the left’s social vision—such as redistributing income, countering racism, and more—may simply be slow to identify authoritarianism among people with similar goals.
!
-
-
www.biorxiv.org
-
Overall Assessment (4/5)
Summary: The authors provide a software tool NeuroVar that helps visualizing genetic variations and gene expression profiles of biomarkers in different neurological diseases.
Technical Release criteria
Is the language of sufficient quality? * The language of the document is of sufficient quality. I did not notice any major issues.
Is there a clear statement of need explaining what problems the software is designed to solve and who the target audience is? * Yes, the authors provide a statement of need. The authors mention that there is a need for a specialized software tool to identify genes from transcriptomic data and genetic variations such as SNPs, specifically for neurological diseases. Perhaps the authors could expand a bit in the introduction on how they chose the diseases; e.g., stroke is not listed among the neurological diseases.
Is the source code available, and has an appropriate Open Source Initiative license been assigned to the code? * Yes, the source code is available on GitHub under the following link: https://github.com/omicscodeathon/neurovar. Additionally, the authors deposited the source code and additional supplementary data in a permanent repository with Zenodo under the following DOI: https://zenodo.org/records/13375493. They also provided test data: https://zenodo.org/records/13375591. I was able to download and access the complete set of data.
As Open Source Software are there guidelines on how to contribute, report issues or seek support on the code? * I did not find any way to contribute, report issues or seek support. I would recommend that the authors add this information to the Github README file.
Is the code executable? * Yes, I could execute the code using RStudio with R 4.3.3.
Is the documentation provided clear and user friendly? * The documentation is provided and is user friendly. I was able to install, test and run the tool using RStudio. The authors may consider also offering a simple website link for the R Shiny tool if possible; this may enable access also for scientists who are not familiar with R. It is especially great that the authors provided a demonstration video. I was able to reproduce the steps. However, I would recommend adding more information to the YouTube video; e.g., a reference to the preprint/paper and the GitHub link would be helpful to connect the data. Perhaps the authors could also expand a bit on the possibilities to export data from their software, and provide different formats, e.g., PDF/PNG/JPEG. I think this is important for many researchers to export their outputs, e.g., from the heatmaps.
Is installation/deployment sufficiently outlined in the paper and documentation, and does it proceed as outlined? * I could follow the installation process, but perhaps the authors could describe how to download from GitHub in a few more details, as some scientists may have trouble with it. An installation video (in addition to the video demonstration of the NeuroVar Shiny App) might also be helpful.
Is there a clearly-stated list of dependencies, and is the core functionality of the software documented to a satisfactory level? * Yes, dependencies are listed and are installed automatically. It worked for me with Rstudio version 4.3.3. In the manuscript and in the
Have any claims of performance been sufficiently tested and compared to other commonly-used packages? * not applicable
Are there (ideally real world) examples demonstrating use of the software? * Yes, the authors use the example of epilepsy, specifically focal epilepsy and the gene of interest DEPDC5. I replicated their search and got the same results. However, I find that the labelling of the gene's transcript in Figure 1 could be a bit clearer; e.g., it is not clear to me what transcript start and end refer to. It might also be helpful if the authors provided an example dataset for the expression data that is loaded in the software by default. Furthermore, the authors present case study results using RNAseq in ALS patients with mutations in the FUS, TARDBP, SOD1 and VCP genes.
Is test data available, either included with the submission or openly available via cited third party sources (e.g. accession numbers, data DOIs, etc.)? * Yes, the authors provide test data with a DOI: https://zenodo.org/records/13375591.
Is automated testing used or are there manual steps described so that the functionality of the software can be verified? * Automated testing is not used, as far as I can assess.
Overall Recommendation: * Accept with revisions
Reviewer Information: Ruslan Rust is an assistant professor in neuroscience and physiology at University of Southern California working on stem cell therapies on stroke. His lab is particularly interested in working with genomic data and the development of new biomarkers for stroke, AD and other neurological diseases.
Dr. Ruslan Rust's profile on ResearchHub: https://www.researchhub.com/author/4945925
ResearchHub Peer Reviewer Statement: This peer review has been uploaded from ResearchHub as part of a paid peer review initiative. ResearchHub aims to accelerate the pace of scientific research using novel incentive structures.
-
-
thewasteland.info
-
whirlpool.
The whirlpool contrasts with moments of stillness and clarity in the poem. It underscores the tension between chaos and order, reflecting the desire for meaning in a fragmented world. The whirlpool serves as a reminder of the relentless motion of time and the challenges of finding stability.
-
The river sweats Oil and tar
The lines "The river sweats / Oil and tar" reflect the industrial pollution of the environment and symbolizes the decay and corruption present in modern life. The river, typically a symbol of life and renewal is assigned a certain vitality and is transformed into a site of contamination, highlighting themes of desolation and moral decline in the post-war world.
-
Twit twit twit Jug jug jug jug jug jug So rudely forc’d. Tereu
In "The Waste Land," the lines "Twit twit twit / Jug jug jug jug jug jug / So rudely forc'd" evoke a jarring and fragmented sense of communication, drawing from the myth of Tereus, Procne, and Philomela. This reference introduces themes of violence, loss, and the disruption of natural order. The repetition of "twit" and "jug" creates a rhythmic yet unsettling sound, almost mocking in its simplicity. It highlights the stark contrast between the complexity of human emotion and the reduced, animalistic quality of the sounds. This mirrors the broader themes of disconnection and alienation throughout the poem. The reference to Tereus—who brutally silenced Philomela by cutting out her tongue—serves as a potent metaphor for silencing and trauma. In this context, the nymphs and their experiences are connected to loss and violence, underscoring the idea that beauty and vitality are often subjected to brutal realities.
-
departed.
The indentation of “departed” draws attention to the unusual experience of the nymphs, who traditionally symbolize beauty, love, and the natural world, often associated with life and abundance. However, in Eliot’s context, their presence serves to contrast the barrenness and emptiness of modern existence. Also, decapitalizing “departed” shifts the agency of the myths and implies a more passive experience as they have been swept away and lost without active control over their fate. This loss of agency aligns with the themes in "The Waste Land," where characters often feel powerless in the face of societal decay and personal disillusionment. The experience of the nymphs can be interpreted as a reflection of unfulfilled longing and the impact of a fragmented society on intimate relationships. Instead of celebrating love and connection, their references evoke a sense of nostalgia for a more vibrant, meaningful past that has been lost. This mirrors the sorrow expressed in Psalm 137, where the Israelites long for their homeland, suggesting a universal longing for wholeness and the deep human need for connection.
Ultimately, the nymphs' experience in "The Waste Land" draws attention to the contrast between the idealized past and the stark reality of the present, reinforcing the poem's exploration of loss, longing, and the search for identity in a desolate world. The line "Departed, have left no addresses" from "The Waste Land" resonates deeply with the themes in Psalm 137, particularly the sense of dislocation and absence. In Psalm 137, the Israelites lament their exile in Babylon, feeling disconnected from their homeland and traditions. The line evokes a profound sense of loss and the inability to return to a place of belonging, mirroring the mournful sentiment of having no way to communicate or reconnect with what has been left behind. Both texts express a longing for something lost and the pain of separation, emphasizing the emotional weight of exile. Just as the Israelites mourn their captivity and the destruction of their identity, Eliot's line suggests a broader existential crisis where individuals feel untethered in a fragmented world, underscoring the despair and disconnection prevalent in both works.
-
HURRY UP PLEASE ITS TIME
Eliot artfully weaves imagery and language that evokes quietude into the fabric of the poem, creating a body of work whose essence personifies forms of silence. The poem possesses a hushed quality, behaving similarly to a curse word, as if to engage and think with the poem is taboo. Yet, when read, the assemblage of fragmented imagery, allusions, ambiguous language and voice, or lack thereof, engenders a profusion of sound. Eliot’s use of syntax in “A Game of Chess” depicts the unexpected resonance of unsaid speech, drawing attention to the hidden yet audible nature of cognition. The capitalization of “HURRY UP PLEASE ITS TIME,” a noticeable shift from the earlier lowercase dialogue, intends to evoke a semblance of sound while maintaining the generally quiet disposition of the poem. Eliot's interplay with cognition and sound probes the potency of unsaid speech, revealing how the silence between words carries as much meaning as spoken language itself, inviting readers to consider the depths of thought and emotion that lie beneath the surface of expression.
-
The Chair she sat in, like a burnished throne,
I am drawn to the parallels between T.S. Eliot’s The Waste Land and Baudelaire’s “A Martyred Woman,” particularly their shared exploration of the suffering and sacrifice of women. Both works present women as embodiments of beauty intertwined with pain. In Baudelaire’s poem, the “martyred woman” is depicted as suffering yet noble, while Eliot’s female characters often reflect a sense of despair and emotional turmoil despite their allure. Baudelaire explicitly frames women as martyrs, suggesting that their beauty is a source of suffering. Similarly, Eliot’s portrayal of women suggests that they endure personal sacrifices and struggles, often reflecting broader societal issues. This martyrdom emphasizes the emotional toll placed on women. Both poets critique the societal roles imposed on women. Baudelaire highlights how women are idealized yet subjected to suffering, while Eliot’s women often navigate a fragmented identity within a patriarchal context, exposing the emptiness behind romanticized notions of femininity. In both texts, women experience deep alienation. Baudelaire's martyred figures are isolated in their suffering, while Eliot’s women, such as Lil or the clairvoyante, illustrate the emotional disconnect prevalent in modern life, reinforcing feelings of loneliness and despair.
-
'That corpse you planted last year in your garden,
Baudelaire juxtaposes the beauty of art and nature with the harsh realities of life, often reflecting on the dualities of pleasure and suffering. The poems frequently capture the essence of modern urban life, particularly in Paris, highlighting the alienation and moral ambiguity found in the city. Baudelaire delves into themes of vice and corruption, examining how they coexist with beauty. He often portrays sin as an integral part of human nature. Despite the dark themes, there are moments of seeking transcendence through art, love, and spirituality, hinting at the possibility of redemption amid despair. Interestingly, Baudelaire positions the poet as a visionary who can perceive the deeper truths of existence, navigating the complexities of the human condition.
The line "that corpse you planted last year in your garden" embodies themes of beauty and decay; the imagery of the corpse juxtaposed with the idea of a garden symbolizes the intersection of life and death. It suggests that what might typically be seen as beautiful (a garden) is tainted by decay and mortality. This line hints at buried past sins or traumas, implying that the speaker is grappling with unresolved issues that refuse to remain hidden. The corpse can symbolize guilt or repressed memories that disrupt the facade of normalcy. The garden, often a symbol of natural beauty and cultivation, contrasts sharply with the idea of a corpse. This reflects the alienation and spiritual emptiness of modern life, where even beauty is intertwined with death. The act of planting a corpse can be seen as a perverse twist on the natural cycle of life, suggesting a disruption in the natural order. It points to the theme of regeneration but in a way that is grotesque and unsettling. This line encapsulates Eliot’s task of confronting uncomfortable truths. It suggests that to understand the modern condition, one must acknowledge the darker aspects of existence.
-
from the hyacinth garden,
Eliot weaves themes of beauty, love, and loss inspired by the story of Apollo and Hyacinth into the fabric of “The Waste Land,” particularly the cycles of life and death, the transient nature of beauty, and the emotional desolation of the modern world. The tale of Apollo, the god of light and music, and Hyacinth, his beloved, emphasizes the intensity of love and the tragedy of loss. Hyacinth's death, caused by an accidental injury from Apollo’s discus, illustrates how beauty can be fleeting and how love can lead to deep sorrow. In the myth, Hyacinth is transformed into a flower after his death, symbolizing the idea of regeneration. However, in "The Waste Land," this regeneration is complicated by the poem’s pervasive sense of despair and fragmentation. The cycles of life and death are depicted, but they often feel broken or unfulfilled. Eliot contrasts the mythic beauty of Apollo and Hyacinth with the barrenness of the modern world. The decorated imagery of the myth serves to heighten the bleakness of contemporary existence, where love and beauty seem diminished or lost amidst urban decay and spiritual emptiness. The reference to this myth also connects to the broader cultural and literary heritage that Eliot draws upon throughout "The Waste Land." It reflects his engagement with themes of mythology, art, and the human condition, suggesting that ancient stories continue to resonate, even in a fractured modern context.
-
Quando fiam uti chelidon
10.18
Does “The Waste Land” end on a positive note? In debating with myself, I found my answer to remain hopelessly inconclusive. In the final section of the poem, it seems that our protagonist, in a role similar to a quester, has finally arrived at the Waste Land’s “Chapel Perilous” following the hopeful “violet hour” (380). Still, readers are left clueless regarding whether the desired task of regeneration has been completed. In what seems to be the most climactic scene, a rooster announces the arrival of rain from the chapel rooftop, yet two details keep me unnerved about this resolution:
Firstly, where on Earth did the rain go? The “damp gust” is responsible for “bringing [the] rain,” yet this action is trapped in an unfinished, infinitive state (394-5). In fact, the “black clouds,” confined in a distant mountain chain, can never rejuvenate the withering land in the riverbanks and valleys (397).
In addition, the cock, the announcer of the rain, is itself heavily connected to the uncertain state between life and death. Firstly, the animal figures in Ariel’s song “Hark, hark! I hear / [...] Cry, Cock a diddle dow” in Shakespeare’s Tempest, which brings to mind the fabricated death of Alonso, King of Naples. Secondly, the word is mentioned in another Shakespearean play, Hamlet, in the specific context of King Hamlet’s appearance as a ghost (ghost-hood and fabricated deaths suggest a similar border state between life and death). This brings even greater uncertainty regarding the cock’s ability to announce or direct genuine revitalization.
This sense of incompletion persists until the very last stanza, in which border states, including the shore that the speaker sits at (between water and land) and the London Bridge (between life and death/Inferno), figure heavily. In addition, the insufficiency of Philomela’s transformation is emphasized once again. The line “quando fiam uti chelidon” merely anticipates a future gaining of a voice similar to that of the swallow’s, yet the task is essentially unfulfillable – while both sexes of the swallow can sing, only the male nightingale sings (429). Philomela’s metamorphosis still does not liberate her from her silence, a reminder of her subjugation. It is, once again, an incomplete renewal at best.
-
falling down falling down falling down
This is one of many times in the poem where repetition like this occurs. This is similar to "The Vigil of Venus", where the line "Tomorrow may loveless, may lover tomorrow make love" is repeated several times throughout the poem. Interestingly, the line itself is almost repetition but not quite, which makes the idea of love in the poem feel like an ever-changing thing that isn't stagnant. Meanwhile, "The Waste Land"'s use of "falling down falling down falling down," through its insistent and exact repetition, seems to show an action that cannot be undone and is damaging, like the London Bridge falling down.
-
My friend
In Angela’s annotation for this line, she interrogates the true nature of friendship, claiming that friendship in “The Waste Land” appears in relation to “indifference” and “superficiality” (Li). She cites Bradley as one of her sources, specifically, "a common understanding being admitted, how much does that imply? What is the minimum of sameness that we need suppose to be involved in it?" (Bradley, 6). The word “understanding” specifically caught my attention, as it is central to the Brihadaranyaka Upanishad. This line of “The Waste Land” is in reference to the part of the Upanishad that means “give”: “Then the human beings said to him, ‘Teach us, father.’ He spoke to them the same syllable DA. ‘Did you understand?’ ‘We understood,’ they said. ‘You told us, “Give (datta)”’” (Brihadaranyaka, Chapter 2). Yet, although the humans were instructed to give, Eliot appears to extend this scene, resuming it when the humans reflect upon the past, asking “what have we given?”
The deception and failure of friendship that Angela identifies as it relates to this line may also provide an answer to the shortcomings of the humans to “give.” Before the line Angela quotes, Bradley states, “what, however, we are convinced of, is briefly this, that we understand and, again, are ourselves understood” (Bradley, 6). Very clearly, Bradley accuses the human race of being under an illusion of understanding one another. If they are under the illusion of understanding, then the credibility of the humans in the Upanishad is completely undermined when they say that they “understand” what datta means. Possibly, they misunderstand what it means to “give,” or, Eliot may be making the claim that they misunderstood the meaning of datta itself as it exists in the universe of the poem. With this in mind, it makes sense that the humans are unable to point to what they’ve given in “The Waste Land.” They are left without direction, and, according to Bradley, they are condemned to failure in connecting, or “giving” themselves to one another. Even “my friend” implies an antithesis to “give”--possession. Eliot seems to agree with Bradley’s proposal that friendship, relationship, true exchange between one person and another is something beyond human understanding.
-
Only at nightfall, aethereal rumours Revive for a moment a broken Coriolanus
Coming back to what I said in a previous annotation about actions getting darker as night comes, this seems to flip that idea on its head a bit when saying "Only at nightfall, aethereal rumours / Revive for a moment a broken Coriolanus". Coriolanus is a Shakespeare character who is notably a bit of an antihero, so these lines seem to say that "aethereal rumours" at nightfall are what temporarily redeem Coriolanus, despite a previous annotation of mine arguing that people's actions get darker as the night falls. For Coriolanus, it seems to be the opposite.
This is also interesting when you consider Francis Herbert Bradley's Appearance and Reality where he argues that much of what humans perceive is an illusion, which makes it hard for people to truly connect with each other. This makes me wonder if these "aethereal rumours" are then actually other people and not supernatural beings, but Eliot is referring to them this way to show the true distance between ourselves and the reality of other people.
-
Who is the third who walks always beside you?
Both this stanza and P. Marudanayagum's "Retelling of an Indian Legend" deal with a mysterious other. In the legend, the vial (verandah) has enough space for one person to lie on, two people to sit on, or three people to stand on. Once three people are standing on the vial, they feel a fourth presence but don't know who it is, before realizing it's Lord Vishnu (a Hindu God). Following the logic of this legend, a mysterious presence in a space where it's not physically possible for the presence to fit inside is probably a God or other supernatural thing. However, this stanza shows two, not three, people that are standing, and their space isn't limited, but there's also a mysterious presence. There's definitely a lot to unpack here, and I'd welcome any theories about it, but I desperately need to go to sleep and can't properly theorize at this point.
-
Quando fiam uti chelidon—O swallow swallow
The sixth line of Eliot’s final stanza in “The Waste Land” reads, “Quando fiam uti chelidon”, or “when shall I be as the swallow”. This line was taken from the Pervigilium Veneris, translated by Allen Tate, which recalls the story of Philomela, an Athenian princess who was raped by a king and later turned into a bird. In order to gain a better sense of Eliot’s reference, we can look at it in the context of the stanza in the Pervigilium Veneris, which reads “She sings, we are silent. When will my spring come? Shall I find my voice when I shall be as the swallow? … Silent, I lost the muse. Return, Apollo!”. The mention of spring harkens back to the beginning of “The Waste Land”, where spring is a major theme. In the Pervigilium Veneris, Philomela attributes spring to herself, calling it “my spring”, suggesting that spring represents her own rebirth and restoration. Thus, we might be able to interpret Eliot’s “spring” in a similar manner. Philomela’s seeking out of her voice is also interesting in terms of “The Waste Land”, which is built on fragmented dialogue and ever-changing voices. Interestingly, Philomela seems to have lost “the muse”, or divine inspiration, and in frustration she calls out to Apollo to inspire her once again. Eliot, through his biblical references and prayers, seems to be calling out to the divine, perhaps for his own inspiration as well. Another significant part of the Pervigilium Veneris is the repeating line, “Tomorrow may loveless, may lover tomorrow make love.” Through this repeating and ambiguous line, the reader can get a sense of the future, and of the contrast between lovelessness and making love in that future. The word “may” expresses possibility, but can also be interpreted as expressing a wish, or hope. In the final stanza, this phrase shifts into, “Tomorrow let loveless, let lover tomorrow make love.” The newly introduced word, “let”, seems to acknowledge how fate is in the hands of the gods, as it is more of a direct expression of desire. Ultimately this repetition and prayer falls in line with similar repetitions such as “HURRY UP PLEASE ITS TIME” in “The Waste Land”, suggesting Eliot’s intensifying attempts at communication with the divine.
-
We think of the key, each in his prison Thinking of the key, each confirms a prison Only at nightfall, aethereal rumours
While reading this stanza of “What the Thunder Said”, I instantly connected Eliot’s mention of aethereal rumours to “Appearance and Reality” by Francis Herbert Bradley. Bradley’s philosophical essay attempts to examine and explain interactions between souls. In particular, Bradley mentions ether while discussing the possibility of direct communication between souls (as in soul-to-soul communication without the use of bodies). Bradley explains that this communication would occur by “a medium extended in space, and of course, like ‘ether,’ quite material.” Thus ether, while material, is equated to the direct impressions of one soul on another. With this understanding of ether, we can interpret “aethereal rumours” to be ones not concerned with the external environment or human bodies but, rather, spiritual messages that transcend the normal methods of bodily communication, such as the voice. However, Bradley seems to doubt the existence of this ethereal communication, and proceeds to worry, stating “If such alterations of our bodies are the sole means which we possess for conveying what is in us, can we be sure that in the end we really have conveyed it?”. Essentially, Bradley shares his fears that humans are unable to fully represent their souls through their bodies. Interestingly, Eliot’s two previous lines seem to evoke a similar notion of distorted communication between souls. Eliot states, “We think of the key, each in his prison / Thinking of the key, each confirms a prison”. In these lines, the people’s thoughts are collective and similar, but each individual has his own prison. When regarding the word “key”, one might think of a physical key to the prison; however, I argue that the word “key” instead refers to the ethereal communication between souls discussed by Bradley. A key is defined as “a thing that provides a means of understanding something”, such as “the key to the code”, or “the key to the riddle”. With this understanding of a key, we can interpret Eliot’s prisons as what Bradley would describe as limits of the bodily expression of the soul. These prisons seem to be “confirmed” by the existence of this “key”, which might represent another concern that the bodily methods of communication are only seen as limits due to the yearning for ethereal soul-to-soul communication.
-
A woman drew her long black hair out tight And fiddled whisper music on those strings
Beginning this stanza of “What the Thunder Said”, Eliot describes a woman who manipulated her hair and “fiddled whisper music on those strings”. Interpreting “those strings” as the woman’s own hair, we see a curious instance of a woman using her body as an instrument to play music. Of course, we must acknowledge that realistically, one can’t make any substantial sounds with their hair, and thus we can interpret her “whisper music” as imagined, or only perceived by her. In terms of the human body, especially in relation to hair, we can further understand this passage by looking at page 298 of the Visuddhi-Magga. This page discusses the superficiality of beauty and the ego, as it declares that the human body is repulsive. The repulsiveness of the human body is argued, as the Visuddhi-Magga reads, “When any part of the body becomes detached, as, for instance, the hair of the head … people are unwilling so much as to touch it”. According to the Visuddhi-Magga, humans assign significance and beauty to discardable parts of their body, and when those parts are discarded, humans view them with disgust. When comparing the teachings of the Visuddhi-Magga with the long-haired woman, there seems to be a contrast in appreciation for the human body. While the Visuddhi-Magga argues that the body, especially the hair, is repulsive, the woman is using her own hair as an instrument, something of significance and beauty in and of itself. I believe another important aspect of this analysis lies in the consideration of Eliot’s notion of “conceptual death”. In “The Waste Land” Eliot has challenged the reader’s literal understanding of death, and instead seems to propose the idea that death is a complex and cultural state that cannot be so easily defined. Literally, our hair is dead, but when attached to our body, it becomes a part of a living thing, and thus seems to gain significance through what I argue is “conceptual vitality”. Interpreting the lesson of the Visuddhi-Magga, hair loses its “vitality” when it is cut off, and becomes recognizably repulsive. Though it was always dead, it has lost its significance to the body. I would argue that the woman using her hair as an instrument is an affirmation of the hair’s significance to herself, and thus, a part of her own conceptual vitality.
-
Here is no water but only rock
Psalm 63 describes longing for God in a place with no water, while this stanza describes longing for water whilst pointing out the abundance of rock. In Psalm 63, it even says of God, "My soul thirsteth for thee," which equates God to water in a sense. When looking at this section of "The Waste Land" together with Psalm 63, it makes this part seem notably unreligious.
-
A current under sea Picked his bones in whispers.
This line, which seemingly emphasizes how water can kill/take apart human beings, draws a contrast to Corinthians, which states "All were baptized into Moses in the cloud and in the sea," which appears to show how water is used to "baptize" someone into a religion. I think the difference between the water usages in these respective works stems from Eliot looking at some of the more literal actions of water while Corinthians looks at more figurative, religious uses of it.
-
Past the Isle of Dogs.
Eliot references the "Isle of Dogs". Matthew 7:6 states "Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you." I find this interesting because I essentially understand it to say "be careful who you associate with because not all people are good," which draws a contrast to this stanza, which generally seems to lack intention. For example, it says "The barges drift" and references "Drifting logs," which implies a lack of control over the circumstances. This is all very interesting because in Matthew, "the dogs" seemingly refer to people you find yourself associating with if you become too careless with your actions, and going "Past the Isle of Dogs" feels similarly unintentional.
-
And gropes his way, finding the stairs unlit . . .
Interestingly, throughout this entire long stanza, the night seems to become darker as the actions become darker. First, we're just in the "violet hour", then time passes throughout the stanza, and it ends with "And gropes his way, finding the stairs unlit" (Eliot, 248), after Tiresias has raped a woman. The way light and darkness are used here draws a contrast to how they're used in Fragment 149 of a Sappho poem, where she refers to "Bringing everything that shining Dawn scattered, you bring the sheep, you bring the goat, you bring the child back to its mother" (Sappho). Here, darkness and nighttime are seen as things that bring people/animals together in a pleasurable way by reuniting them, whereas in this stanza Tiresias and a woman are brought together at night, but he rapes her, thereby correlating darkness and nighttime with darker actions in "The Waste Land".
-
Sweet Thames, run softly, till I end my song.
This is a clear reference to Edmund Spenser's poem "Prothalamion". I think it might represent Eliot trying to get closer to that aesthetic of the 1500s, when the world was generally more untouched. In the next lines he says "The river bears no empty bottles, sandwich papers, / Silk handkerchiefs, cardboard boxes, cigarette ends / Or other testimony of summer nights" (Eliot, 176-178), which I think touches on the theme of industrialization. Therefore, Eliot may be referencing Spenser to feel closer to that pre-industrial world.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.
Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.
And also if you are following along, you need to be logged in as the I am admin user to the general AWS account.
So the management account of the organization and have the Northern Virginia region selected.
Now again, this is going to be very brief.
So it's probably not worth doing in your own environment unless you really want to.
Now what I want to demonstrate is termination protection.
So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.
Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.
And this is assuming that the instance is currently in a running state.
Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.
But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.
And that can result in data loss, which isn't ideal.
What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.
If you click that option, you get this dialogue where you can enable termination protection.
So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.
So I'm going to enable it and click on save.
And now that instance is protected against termination.
If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.
The message states that the instance (giving the instance ID) may not be terminated, and that you should modify its disableApiTermination instance attribute and then try again.
So this instance is now protected against accidental termination.
Now this presents a number of advantages.
One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.
So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.
So you have the option of role separation.
You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.
So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.
It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.
So you might not have this for development and test environments, but for anything in production, this might be a standard feature.
If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
So this is a really useful feature to be aware of.
And for the SysOps exam, it's essential that you understand when and where you'd use this feature.
And for both the SysOps and the developer exams, you should pay attention to this, disable API termination.
You might be required to know which attribute needs to be modified in order to allow terminations.
So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.
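For reference, here is a minimal sketch of the same flow using the AWS CLI; the instance ID is a hypothetical placeholder and the exact error wording can vary slightly between CLI versions.
# Enable termination protection (instance ID is a placeholder)
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --disable-api-termination
# A terminate attempt now fails with an OperationNotPermitted-style error
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
# Remove the protection again before terminating
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-disable-api-termination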
At this point though, that is everything that I wanted to cover about this feature.
So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.
One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.
Now logically, the default is stop, but if you are running an environment where you don't want to consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.
You might not want to have an account with lots of stopped instances.
You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
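If you prefer to script this rather than use the console, a minimal sketch with the AWS CLI might look like the following; the instance ID is a placeholder.
# Change the instance-initiated shutdown behaviour from the default (stop) to terminate
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-initiated-shutdown-behavior "{\"Value\": \"terminate\"}"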
Now at this point, that is everything I wanted to cover.
If you were following along with this in your own environment, you do need to clear up the infrastructure.
So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.
And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.
If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.
So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson either you're going to get the experience or you can watch me interacting with an Amazon machine image.
So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.
It had an install of WordPress, it had the cowsay application installed, and it had a custom login banner.
Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.
So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.
The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.
So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.
Now if I go ahead and change the region, maybe from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, what we'll see is that we go back to the same area of the console, only now we won't see any AMIs, and that's because an AMI is tied to the region in which it's created.
Every AMI belongs in one region and it has a unique AMI ID.
So let's move back to Northern Virginia.
Now we are able to copy AMIs between regions, and this allows us to make one AMI and use it for a global infrastructure platform.
So we can right-click and select Copy AMI, then select the destination region.
For this example, let's say that I did want to copy it to Ohio, so I would select that in the drop-down.
It would allow me to change the name if I wanted, or I could keep it the same.
For the description, it would show that it's been copied from this AMI ID in this region, and then it would have the existing description at the end.
So at this point I'm going to go ahead and click Copy AMI, and that process has now started.
If I close down this dialogue and then change the region from us-east-1 to us-east-2, we now have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region.
If we go ahead and click on Snapshots under Elastic Block Store, then we're going to see the snapshot or snapshots which belong to this AMI.
Now depending on how busy AWS is, it can take a few minutes for the snapshots to appear on this screen, so just go ahead and keep refreshing until they appear.
In our case we only have the one which is the boot volume that's used for our custom AMI.
Now the time taken to copy a snapshot between regions depends on many factors: what the source and destination regions are and the distance between the two, the size of the snapshot, and the amount of data it contains.
It can take anywhere from a few minutes to much, much longer, so this is not an immediate process.
Once the snapshot copy completes, the AMI copy process will complete, and that AMI is then available in the destination region.
But an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.
AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.
If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.
So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID.
They are different AMIs, completely different AMIs; you're creating a new one as part of the copy process.
So while the data is going to be the same, conceptually they are completely separate objects, and that's critical for you to understand both for production usage and when answering any exam questions.
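As a rough illustration, the copy can also be done with the AWS CLI; the AMI ID and name below are placeholders, the command is issued against the destination region, and it returns a brand new AMI ID.
# Copy an AMI from us-east-1 into us-east-2 (source AMI ID is a placeholder)
aws ec2 copy-image \
    --region us-east-2 \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --name "a4l-wordpress-copy"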
Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.
So if I right-click on this AMI and edit AMI permissions, we can see that by default an AMI is private.
Being private means that it's only accessible within the AWS account which created the AMI, and so only identities within that account that you grant permissions are able to access it and use it.
Now you can change the permissions of the AMI: you could set it to be public, and if you set it to public it means that any AWS account can access this AMI, so you need to be really careful if you select this option, because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.
A much safer way, if you do want to share the AMI with anyone else, is to keep it private but explicitly add other AWS accounts that are able to interact with this AMI.
So I could click in this box, and then, for example, if I clicked on Services and moved to the AWS Organizations service, opened that in a new tab, and chose to share this AMI with my production account, I would select my production account ID and add it into this box, which would grant my production AWS account the ability to access this AMI.
Now note that there's also this checkbox, and this adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.
Generally, if you are sharing an AMI to another account inside your organization, then you can afford to be relatively liberal with permissions; so if you're sharing this internally I would definitely check this box, and that gives full permissions on the AMI as well as the snapshots, so that anyone can create volumes from those snapshots as well as accessing the AMI.
So these are all things that you need to consider.
Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.
If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.
While you do need to be careful of that as well when explicitly sharing with specific accounts, generally in that case you're going to be sharing with trusted entities.
You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.
There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.
Now if you want to, you can also share an AMI with an organizational unit or an organization, and you can do that using this option.
This makes it easier if you want to share an AMI with all AWS accounts within your organization.
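For completeness, here is a hedged CLI sketch of sharing an AMI with a single account; the AMI ID, snapshot ID and account ID are all placeholders.
# Grant one account launch permission on the AMI
aws ec2 modify-image-attribute \
    --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=111122223333}]"
# Optionally also allow that account to create volumes from the backing snapshot
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 111122223333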
At this point though I'm not going to do that we don't need to do that in this demo.
What we're going to do now though is move back to US-East-2.
That's everything I wanted to cover in this demo lesson.
Now that this AMI is available, we can right click and select Deregister, and then move back to US-East-1; now that we've finished this demo lesson, we can do the same process with this AMI.
So we can right click, select Deregister, and that will remove that AMI.
Click on Snapshots; this is the snapshot created for this AMI, so we need to delete this as well: right click, delete that snapshot, confirm that, and we'll need to do the same process in the region that we copied the AMI and the snapshots to.
So select US-East-2; it should be the only snapshot in the region, but make sure it is the correct one, then right click, delete, and confirm that deletion, and now you've cleared up all of the extra things created within this demo lesson.
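The same cleanup can be scripted; this is only a sketch with placeholder IDs, run once per region, since the copied AMI and snapshot in us-east-2 are separate objects from the originals in us-east-1.
# us-east-1: the original AMI and its backing snapshot
aws ec2 deregister-image --image-id ami-0123456789abcdef0 --region us-east-1
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0 --region us-east-1
# us-east-2: the copied AMI and its snapshot (different IDs)
aws ec2 deregister-image --image-id ami-0fedcba9876543210 --region us-east-2
aws ec2 delete-snapshot --snapshot-id snap-00aabbccddeeff001 --region us-east-2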
Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.
Go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So the first step is to shut down this instance.
So we don't want to create an AMI from a running instance because that can cause consistency issues.
So we're going to close down this tab.
We're going to return to instances, right-click, and we're going to stop the instance.
We need to acknowledge this and then we need to wait for the instance to change into the stopped state.
It will start with stopping.
We'll need to refresh it a few times.
There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.
So this is going to create an AMI.
And first we need to give the AMI a name.
So let's go ahead and use Animals for Life template WordPress.
And we'll use the same for Description.
Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.
It's going to create a block device mapping, which maps those snapshots onto a particular device ID.
And it's going to use the same device ID as this instance is using.
So it's going to set up the storage in the same way.
It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.
So you'll see here that it's using EBS.
It's got the original device ID.
The volume type is set to the same as the volume that our instance is using, and the size is set to 8.
Now you can adjust the size during this process as well as being able to add volumes.
But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.
Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.
If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.
You can force that not to occur, so create an AMI without rebooting.
But again, that's even less ideal.
The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
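The same stop-then-create flow can be expressed with the AWS CLI; this is only a sketch, and the instance ID is a placeholder (the name and description match the ones used in this lesson).
# Stop the instance first so the snapshot is fully consistent
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# Create the AMI from the stopped instance
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "Animals for Life template WordPress" \
    --description "Animals for Life template WordPress"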
So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.
Now that process will take some time.
If we just scroll down, look under Elastic Block Store and click on Snapshots.
You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.
So that's the first step.
So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.
So that needs to complete first.
Initially it's going to be in a pending state.
We'll need to give that a few moments to complete.
If we move to AMIs, we'll see that the AMI is also being created.
It is in a pending state and it's waiting for that snapshot to complete.
Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.
And the time taken to create a snapshot can vary.
The initial snapshot always takes much longer because it has to take that full copy of data.
And obviously depending on the size of the original volume and how much data is being used, will influence how long a snapshot takes to create.
So the more data, the larger the volume, the longer the snapshot will take.
After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from a pending status.
So let's just refresh it.
After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.
So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.
We've shut down the EC2 instance and generated an AMI from that instance.
And now we have this AMI in a state where we can use it to create additional instances.
So we're going to do that.
We're going to launch an additional instance using this AMI.
While we're doing this, I want you to consider exactly how much quicker this process now is.
So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.
So right click on this AMI and select launch.
Now this will step you through the launch process for an EC2 instance.
You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.
You'll be asked to select all of the normal configuration options.
So first let's put a name for this instance.
So we'll use the name "Instance from AMI".
Then we'll scroll down.
As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.
Scroll down.
You'll need to specify an instance type just as normal.
We'll use a free tier eligible instance.
This is likely to be t2.micro or t3.micro.
Below that, go ahead and click and select "Proceed without a key pair (not recommended)".
Scroll down.
We'll need to enter some networking settings.
So click on Edit next to Network Settings.
Click in VPC and select A4L-VPC1.
Click in Subnet and make sure that SN-Web-A is selected.
Make sure the boxes below are both set to Enable for the auto-assign IP settings.
Under Firewall, click on Select Existing Security Group.
Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.
And that will have something random at the end.
That's absolutely fine.
Select that.
Scroll down.
And notice that the storage is configured exactly the same as the instance which you generated this AMI from.
Everything else looks good.
So we can go ahead and click on Launch Instance.
So this is launching an instance using our custom created AMI.
So let's close down this dialog and we'll see the instance initially in a pending state.
Remember, this is launching from our custom AMI.
So it won't just have the base Amazon Linux 2 operating system.
Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.
So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.
So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.
So let's give this a few minutes to launch.
Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.
Now one thing you will need to change because we're using a custom AMI, AWS can't necessarily detect the correct username to use.
And so you might see sometimes it says root.
Just go ahead and change this to EC2-user and then go ahead and click Connect.
And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.
So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.
If we go back to the AWS console and select Instances, make sure we still have the instance from AMI selected and then locate its public IPv4 address.
Don't use this link, because that will use HTTPS; instead, copy the IP address into your clipboard and open that in a new tab.
Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.
So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.
If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.
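To give a sense of how this looks when automated, here is a minimal CLI sketch of launching an instance from the custom AMI; every ID here is a placeholder standing in for the A4L VPC values chosen in the console.
# Launch one instance from the custom AMI (all IDs are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --associate-public-ip-address \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=instance-from-ami}]'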
And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.
Now that's everything that I wanted to cover in this demo lesson.
You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.
So let's clear up all of the infrastructure that we've used in this lesson.
So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.
So right click and then go to terminate instance.
You'll need to confirm that.
That will start the process of termination.
Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.
So we're going to need to leave this in place.
So we're not going to delete the AMI or the snapshots created within this lesson.
Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.
And that will remove all of the infrastructure that we've created within this demo lesson.
And at this point, that's everything that I wanted you to do in this demo.
So go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.
So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.
Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?
Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.
What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.
So this is a fairly common use case so let's jump in and get started.
Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link, go ahead and click that link.
This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.
We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.
Okay so that stacks now moved into a create complete state, we're good to continue with the demo.
Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon machine image so also attached to this lesson is the lessons command document which contains all of those commands so go ahead and open that document.
Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation and that's the case we're running the same manual installation process as part of setting up our animals for life AMI so you're going to need all of these commands but as you've already experienced them in the previous demo lesson I'm going to run through them a lot quicker in this demo lesson so go back to the AWS console and we need to move to the EC2 area of the console so click on the services drop down, type EC2 into this search box and then open that in a new tab.
Once you're there, go ahead and click on running instances and close down any dialogues about console changes, because we want to maximize the amount of screen space that we have.
We're going to connect to this A4L public EC2 instance; this is the instance that we're going to use to create our AMI.
So we're going to set the instance up manually how we want it to be, and then we're going to use it to generate an AMI.
So we need to connect to this instance: right click, select Connect, and we're going to use EC2 Instance Connect to do the work within our browser, so make sure the username is EC2-user and then connect to this instance.
Then once connected, we're going to run through the commands to install WordPress really quickly.
We're going to start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press enter.
Now we're going to run through all of the next set of commands really quickly because you used them in the previous demo lesson.
So first we're going to go ahead and install the MariaDB server, Apache and the Wget utility.
While that's installing, copy all of the commands from step 3; these are commands which enable and start Apache and MariaDB, so go ahead and paste all four of those in and press enter.
So now Apache and MariaDB are both set to start when the instance boots, as well as being set to currently started.
I'll just clear the screen to make this easier to see.
Next we're going to set the DB root password; again, that's this command, using the contents of the variable that you set at the start.
Next we download WordPress; once it's downloaded, we move into the web root folder, we extract the download, and we copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root.
Once we've done that, we remove the WordPress folder itself and then we tidy up by deleting the download.
I'm going to clear the screen.
We copy the template configuration file into its final file name, so wp-config.php, and then we're going to replace the placeholders in that file.
We're going to start with the database name, using the variable that you set at the start; next we're going to use the database user, which you also set at the start; and finally the database password.
And then we're going to set the ownership on all of these files to be the Apache user and the Apache group.
Clear the screen.
Next we need to create the DB setup script that I demonstrated in the previous demo, so we need to run a collection of commands: the first to enter the create database command, the next to enter the create user command and set that password, the next to grant permissions on the database to that user, and then one to flush the permissions.
Then we need to run that script using the MySQL command line interface, which runs all of those commands and performs all of those operations, and then we tidy up by deleting that file.
Now at this point we've done the exact same process that we did in the previous demo: we've installed and set up WordPress.
If everything's working okay, we can go back to the AWS console, click on Instances, select the running a4l-public EC2 instance and copy down its IP address; again, make sure you copy that down, don't click this link, and then open that in a new tab.
If everything's working as expected, you should see the WordPress installation dialogue.
Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that when anyone uses this AMI, they're also greeted with this installation dialogue.
So we're going to leave this at this point; we're not going to perform the installation.
Instead, we're going to go back to the EC2 instance.
Now, because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance.
To do that, we're going to install an animal themed utility called cowsay.
I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what cowsay does, I'm going to run cowsay "oh hi", and if all goes well we see a cow, drawn in ASCII art, saying the "oh hi" message that we just typed.
So we're going to use this to create a message of the day welcome when anyone connects to this EC2 instance.
To do that, we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano to create this file: /etc/update-motd.d/40-cow.
This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance, so we're going to copy in these two lines and then press enter.
This means that when anyone logs into the EC2 instance, they're going to get an animal themed welcome.
So use Ctrl+O to save that file and Ctrl+X to exit, then clear the screen to make it easier to see.
We're going to make sure that the file we've just edited has the correct permissions, then we're going to force an update of the message of the day, and this is what's going to be displayed when anyone logs into this instance.
And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance, so we're going to use this command to reboot it.
Just to illustrate how this works, I'm going to close down that tab and return to the EC2 console and give this a few moments to restart.
That should have rebooted by now, so we're going to select it, right click, go to Connect, and again use EC2 Instance Connect.
Assuming everything's working, now when we connect to the instance we'll see an animal themed login banner.
So this is just a nice way that we can ensure that anyone logging into this instance understands (a) that it uses the Amazon Linux 2 AMI and (b) that it belongs to Animals for Life.
So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, and we've customized the banner, and now we're going to use this as our template instance to create our AMI that can then be used to launch other instances.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
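As a rough sketch of the banner setup described above (the package installation step and the exact banner text are assumptions; the lesson's commands document is authoritative):
# Install the cowsay utility (exact package source may differ on Amazon Linux 2)
sudo yum install -y cowsay
# Create the message-of-the-day script; the banner text here is illustrative
sudo tee /etc/update-motd.d/40-cow >/dev/null <<'EOF'
#!/bin/sh
cowsay "Amazon Linux 2 AMI - Animals for Life"
EOF
# Make it executable, regenerate the MOTD, then reboot
sudo chmod 755 /etc/update-motd.d/40-cow
sudo update-motd
sudo reboot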
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So this is the folder containing the WordPress installation files.
Now there's one particular file that's really important, and that's the configuration file.
So there's a file called WP-config-sample, and this is actually the file that contains a template of the configuration items for WordPress.
So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.
So we're going to create a copy of this file with the correct name.
And to do that, we run this command.
So we're copying the template or the sample file to its real file name, so wp-config.php.
And this is the name that WordPress expects when it initially loads its configuration information.
So run that command, and that now means that we have a live config file.
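That copy amounts to something like this, assuming you're still inside the web root (/var/www/html):

sudo cp ./wp-config-sample.php ./wp-config.php   # turn the sample/template file into the live config file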
Now this command isn't in the instructions, and you don't need to do this; I'll just take a moment to open up this file.
I'm just demonstrating what's in this file for your benefit.
But if I run sudo nano wp-config.php, this is how the file looks.
So this has got all the configuration information in.
So it stores the database name, the database user, the database host, and lots of other information.
Now notice how it has some placeholders.
So this is where we would need to replace the placeholders with the actual configuration information.
So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.
Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.
We're going to use the Linux utility sed, or S-E-D.
And this is a utility which can perform a search and replace within a text file.
It's actually much more complex and capable than that.
It can perform many different manipulation operations.
But for this demonstration, we're going to use it as a simple search and replace.
Now we're going to do this a number of times.
First, we're going to run this command, which is going to replace this placeholder.
Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.
We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.
So this is going to replace the placeholder with our actual database name.
So I'm going to enter that so you can do the same.
We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.
So use that command as well.
And then lastly, it will do the same for the database password.
So type or copy and paste this command and press enter.
And that now means that this wp-config has the actual configuration information inside.
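As a sketch, the three replacements use sed's in-place substitute syntax against the stock placeholders in wp-config-sample.php, with the variable names assumed from the start of the demo:

sudo sed -i "s/'database_name_here'/'$DBName'/g" wp-config.php    # database name
sudo sed -i "s/'username_here'/'$DBUser'/g" wp-config.php         # database user
sudo sed -i "s/'password_here'/'$DBPassword'/g" wp-config.php     # database password (the / delimiter breaks if the value contains a slash)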
And just to demonstrate that, you don't need to do this part.
I'll just do it to demonstrate.
If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.
So I'm going to control X out of that and then clear the screen.
And that concludes the configuration for the WordPress application.
So now it's ready.
Now it knows how to communicate with the database.
What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.
And to do that, we use this command.
So we're using the chown command to set the ownership of all of the files in this folder, and any subfolders, to the Apache user and the Apache group.
And the Apache user and Apache group belong to the web server.
So this just makes sure that the web server is able to access and control all of the files in the web root folder.
So run that command and press enter.
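The ownership change is essentially this, assuming the web root is /var/www/html and the web server runs as the apache user and group:

sudo chown apache:apache -R /var/www/html   # recursively hand the whole web root to the web server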
And that concludes the installation part of the WordPress application.
There's one final thing that we need to do and that's to create the database that WordPress will use.
So I'm going to clear the screen to make it easy to see.
Now what we're going to do in order to configure the database is we're going to make a database setup script.
We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.
So what we need to do is enter the commands into this file that will create the database.
After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.
Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.
So we're going to run a number of commands.
These are all in the lessons commands document.
The first one is this.
So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.
Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.
So forward slash TMP, forward slash DB setup.
So run that and that command is going to create the WordPress database.
Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.
This is going to create our WordPress database user.
It's going to set its password and then it's going to append this text to the DB setup file that we're creating.
Now all of these are actually database commands that we're going to execute within the MariaDB database.
So enter that to add that line to DB.setup.
Then we have another line which uses the same architecture as the ones above it.
It echoes the text.
It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.
And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
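Collected together, the script-building commands look roughly like this; the variable names are assumed from the start of the demo, and /tmp/db.setup is my rendering of the spoken file name:

echo "CREATE DATABASE $DBName;" >> /tmp/db.setup
echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> /tmp/db.setup
echo "GRANT ALL PRIVILEGES ON $DBName.* TO '$DBUser'@'localhost';" >> /tmp/db.setup
echo "FLUSH PRIVILEGES;" >> /tmp/db.setup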
So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.
So cat and then space forward slash TMP, forward slash DB.setup.
So as you'll see it's replaced all of these variable names with the actual contents.
So this is what the contents of this script actually looks like.
So these are commands which will be run by the MariaDB database platform.
To run those commands we use this.
So this is the MySQL command line interface.
So we're using MySQL to connect to the MariaDB database server.
We're using the username of root.
We're passing in the password and then using the contents of the DB root password variable.
And then once we've authenticated to the database, we're passing in the contents of our DB.setup script.
And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.
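That command is along these lines: authenticate to MariaDB as root using the password variable, and feed the script in on standard input:

mysql -u root --password=$DBRootPassword < /tmp/db.setup   # runs every SQL statement in the setup script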
So go ahead and press enter.
That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.
And then lastly just to keep things secure because we don't want to leave files laying around on the file system with authentication information inside.
We're just going to run this command to delete this DB.setup file.
Okay, so that concludes the setup process for WordPress.
It's been a fairly long intensive process but that now means that we have an installation of WordPress on this EC2 instance, a database which has been installed and configured.
So now what we can do is to go back to the AWS console, click on instances.
We need to select the A4L-PublicEC2 and then we need to locate its IP address.
Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.
Instead, just copy the IP address into your clipboard and then open that in a new tab.
If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.
So pick English, United States for the language.
For the blog title, just put all the cats and then admin as the username.
You can accept the default strong password.
Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.
It doesn't have to be a correct one.
So I normally use test@test.com and then go ahead and click on install WordPress.
You should see a success dialog.
Go ahead and click on login.
Username will be admin, the password that you just copied into your clipboard and then click on login.
And there you go.
We've got a working WordPress installation.
We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.
And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.
So this has been a slow process.
It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.
Can you imagine doing this twice?
What about 10 times?
What about a hundred times?
It gets pretty annoying pretty quickly.
In reality, this is never done manually.
We use automation or infrastructure as code systems such as cloud formation.
And as we move through the course, you're going to get experience of using all of these different methods.
Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.
And these are really awesome to use.
And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.
For now though, that is the end of this demo lesson.
Now what we're going to do is to clear up our account.
So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.
To do that, just move back to the AWS console.
If you still have the cloud formation tab open and move back to that tab, otherwise click on services and then click on cloud formation.
If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select delete, and then confirm that deletion.
And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.
So from this point onward in the course, we're going to start using automation.
Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.
So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.
So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.
Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.
The problem is that most courses don't take this any further.
But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.
What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.
We're going to be doing this so you can get the experience of how not to do things.
My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.
As we move through the course we can compare this to various different automated ways of installing software within AWS.
So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.
By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.
It's going to be architecturally monolithic with everything running on the one single instance.
In this case that means both the application and the database.
The design is fairly straightforward.
It's just the Animals for Life VPC.
We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.
So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.
At this point it's time to get started and implement this architecture.
So let's go ahead and switch over to our AWS console.
To get started with this demo lesson you're going to need to do a few preparation steps.
First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.
So go ahead and open the one-click deployment link that's attached to this lesson.
That link is going to take you to the Quick Create Stack screen.
Everything should be pre-populated.
The stack name should be WordPress.
All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.
And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.
So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.
Also attached to this lesson is a Lessons Command document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.
So go ahead and open that as well.
So that should look something like this and these are all of the commands that we're going to be using.
So these are the commands that perform a manual WordPress installation.
Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.
So click on the Services drop-down and then locate EC2 in this All Services part of the screen.
If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.
And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.
Go ahead and right-click on this instance.
This is the instance we'll be installing WordPress within.
So right-click, select Connect.
We're going to be using our browser to connect to this instance so we'll be using Instance Connect just verify that the username is EC2-user and then go ahead and connect to this instance.
Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.
It's not just about saving time and effort.
It's also about error reduction and the ability to keep things consistent.
Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.
So we're going to create four variables.
One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.
So let's start off by using the pre-populated values from the Lesson Commands document.
So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.
So for example, dbname and press Enter and that will show us the value stored within that variable.
So now we can use these at later points of the installation.
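The variables themselves are plain shell assignments; the values below are placeholders, not the ones from the Lesson Commands document:

DBName='a4lwordpress'         # name of the WordPress database (placeholder value)
DBUser='a4lwordpress'         # database user WordPress will connect as (placeholder value)
DBPassword='CHANGE-ME'        # that user's password (placeholder value)
DBRootPassword='CHANGE-ME'    # root/admin password for the database server (placeholder value)
echo $DBName                  # quick check that a variable holds what you expect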
So at this point I'm going to clear the screen to keep it easy to see and stage two at this installation process is to install some system software.
So there are a few things that we need to install in order to allow a WordPress installation.
We'll install those using the DNF package manager.
We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the database server, which is mariadb-server, the Apache web server, which is httpd, and then a utility called wget, which we're going to use to download further components of the installation.
So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
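The install is a single dnf call along these lines; treat the package list as a sketch, since the exact names (and any PHP packages WordPress needs) come from the Lesson Commands document:

sudo dnf install -y mariadb105-server httpd wget   # database server, Apache web server and the wget download utility (mariadb105-server assumed for Amazon Linux 2023)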
They're done so I'll clear the screen to keep this easy to read.
Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.
So to do that we need to enable and start those services.
So enabling and starting means that both of the services are started right now and that they'll also start if the machine reboots.
So first we'll use this command.
So we're using admin privileges again, systemctl which allows us to start and stop system processes and then we use enable and then HTTPD which is the web server.
So type and press enter and that ensures that the web server is enabled.
We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.
So type or copy and paste and press enter.
So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.
So we need to use the same structure of command but instead of enable we need to start both of these processes.
So first the web server and then the database server.
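The four service commands follow the same pattern:

sudo systemctl enable httpd     # web server starts automatically at boot
sudo systemctl enable mariadb   # database server starts automatically at boot
sudo systemctl start httpd      # start the web server now
sudo systemctl start mariadb    # start the database server now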
So that means the EC2 instance now has a running web and database server, both of which are required for WordPress.
So I'll clear the screen to keep this easy to read.
Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.
So this is the username and password that will be used to perform all of the initial configuration of the database server.
Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.
So we're using the DB root password variable that we configured right at the start.
So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
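The password command being referred to is something like this, reusing the variable set at the start:

sudo mysqladmin -u root password $DBRootPassword   # set the MariaDB root user's password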
The next step which is step 5 is to install the WordPress application files.
Now to do that we need to install these files inside what's known as the web root.
So whenever you browse to a web server either using an IP address or a DNS name if you don't specify a path so if you just use the server name for example netflix.com then it loads those initial files from a folder known as the web root.
Now on this particular server the web root is stored in /var/www/html so we need to download WordPress into that folder.
Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.
So we're giving it admin privileges and we're using Wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.
So /var/www/html.
So go ahead and copy and paste or type that in and press enter.
That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.
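That download amounts to this, assuming the standard latest.tar.gz archive from wordpress.org:

sudo wget http://wordpress.org/latest.tar.gz -P /var/www/html   # fetch WordPress straight into the web root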
Next we need to move into that folder so cd space /var/www/html and press enter.
We need to use a Linux utility called tar to extract that file.
So sudo and then tar and then the command line options -zxvf and then the name of the file so latest.tar.gz So copy and paste or type that in and press enter and that will extract the WordPress download into this folder.
So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.
Now we actually don't want them inside a WordPress folder.
We want them directly inside the web root.
So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.
So it's going to copy everything inside WordPress into the current working directory which is the web root directory.
So enter that and that copies all of those files.
And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.
And then lastly for the installation part we need to tidy up the mess that we've made.
So we need to delete this WordPress folder and the download file that we just created.
So to do that we'll run an rm -r and then WordPress to delete that folder.
And then we'll delete the download with sudo rm and then a space and then the name of the file.
So latest.tar.gz.
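Put together, the extract-and-tidy sequence just described looks roughly like this:

cd /var/www/html              # move into the web root
sudo tar -zxvf latest.tar.gz  # extract the download, which creates a wordpress folder
sudo cp -rvf wordpress/* .    # copy the application files up into the web root itself
sudo rm -r wordpress          # remove the now-redundant wordpress folder
sudo rm latest.tar.gz         # delete the download to tidy up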
And that means that we have a nice clean folder.
So I'll clear the screen to make it easy to see.
And then I'll just do another listing.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
-
-
www.biorxiv.org
-
Editors Assessment:
PhysiCell is an open source multicellular systems simulator for studying many interacting cells in dynamic tissue microenvironments. As part of the PhysiCell ecosystem of tools and modules, this paper presents a PhysiCell addon, PhysiMeSS (MicroEnvironment Structures Simulation), which allows the user to accurately represent the extracellular matrix (ECM) as a network of fibres. It can specify rod-shaped microenvironment elements such as the matrix fibres (e.g. collagen) of the ECM, allowing the PhysiCell user to investigate physical interactions with cells and other fibres. Reviewers asked for additional clarification on a number of features, and the paper now makes clear that future releases will provide full 3D compatibility and will include work on fibrogenesis, i.e. the creation of new ECM fibres by cells.
This evaluation refers to version 1 of the preprint
-
Abstract: The extracellular matrix is a complex assembly of macro-molecules, such as collagen fibres, which provides structural support for surrounding cells. In the context of cancer metastasis, it represents a barrier that migrating cells need to degrade in order to leave the primary tumor and invade further tissues. Agent-based frameworks, such as PhysiCell, are often used to represent the spatial dynamics of tumor evolution. However, they typically only implement cells as agents, which are represented by either a circle (2D) or a sphere (3D). In order to accurately represent the extracellular matrix as a network of fibres, we require a new type of agent represented by a segment (2D) or a cylinder (3D). In this article, we present PhysiMeSS, an addon of PhysiCell, which introduces a new type of agent to describe fibres and their physical interactions with cells and other fibres. The PhysiMeSS implementation is publicly available at https://github.com/PhysiMeSS/PhysiMeSS, as well as in the official PhysiCell repository. We also provide simple examples to describe the extended possibilities of this new framework. We hope that this tool will serve to tackle important biological questions such as diseases linked to dysregulation of the extracellular matrix, or the processes leading to cancer metastasis.
This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.136), and has published the reviews under the same license. It is also part of GigaByte’s PhysiCell Ecosystem series for tools that utilise or build upon the PhysiCell platform: https://doi.org/10.46471/GIGABYTE_SERIES_0003 These reviews are as follows.
Reviewer 1. Erika Tsingos
One important aspect that the authors need to be aware of and mention explicitly is that their algorithm for fiber set-up leads to differences in fiber concentration and orientation at the boundary, because fibers that are not wholly contained in the simulation box are discarded. The effect of this choice can be seen upon close inspection of Figure 2: In the left panel, fibers align tangentially to the boundary, so locally the orientation is not isotropic. Similarly, in Figure 2 middle and right panels, the left and right boundaries have lower local fiber concentration. This issue could potentially affect the outcome of a simulation, so it's important that readers are made aware so that if necessary they can address this with a modified algorithm.
Minor comments:
In the abstract, the phrasing implies agent-based frameworks are only used for tumour evolution. I would rephrase such that it is clear that tumour evolution is one example among many possible applications.
I suggest adding a dash to improve readability in the following sentence in the introduction: "However, we note that the applications of PhysiMeSS stretch beyond those wanting to model the ECM -- as the new cylindrical/rod-shaped agents could be used to model blood vessel segments or indeed create obstacles within the domain."
In the implementation section, add a short sentence to clarify if PhysiMeSS is "backwards compatible" with older PhysiCell models that do not use the fiber agent.
Notation in equations: A single vertical line is absolute value, and two vertical lines is Euclidean norm?
The explanation of Equation 1 implies that the threshold v_{max} should limit the parallel force, but the text does not explicitly say if ||v|| is restricted to be less or equal to v_{max}. Is that the case?
In Equation 2, I don't see the need to square the terms in parenthesis. If |v*l_f| is an absolute value it is always positive. Since l_f is normalized the value of the dot product is only between 0 and the magnitude of v. Am I missing something?
Are p_x and p_y in the moment arm magnitude coordinates with respect to the fiber center?
Table 2: It would be helpful to have a separate column with the corresponding symbols used throughout the text and equations.
Figure 5/6: Missing crosslinker color legend.
Typos/grammar:
"As an aside, an not surprisingly," --> As an aside, and not surprisingly,
"This may either be because as a cell tries to migrate through the domain fibres which act as obstacles in the cell’s path," --> remove the word "which"
Reviewer 2. Jinseok Park
Noel et al. introduce PhysiMess - a new PhysiCell Addon for ECM remodeling. This new addon is a powerful tool to simulate ECM remodeling and has the potential to be applied to mechanobiology research, which makes my enthusiasm high. I would like to give a few suggestions.
1) Basically, it is an addon of PhysiCell. So, I suggest describing PhysiCell and how to add the addon for readers who are not familiar with these tools. Also, screen captures of tool manipulation would be very helpful.
2) Figure 2 and 3 exhibit the outcome of the addon showing ECM remodeling. I would suggest to show actual ECM images modeled by the addon.
3) The equations reflect four interactions, and in my understanding, the authors describe cell-fibre, fiber-cell, and fiber-fiber interactions. I suggest generating an example corresponding to each interaction's modulation and explaining how the add-on results explain the physiological phenomena. For instance, focal adhesion may be a key modulator of cell-fibre or fiber-cell interaction, presumably, alpha or beta fiber. I would demonstrate how the different parameters generate different results and explain the physiological situation modeled by the results.
4) Similarly, Figure 5 and Figure 6 only show one example and no comparison with other conditions. For example, it would be better to exhibit no pressure/pressure conditions. It may help readers estimate how the pressure impacts cell proliferation.
Reviewer 3. Simon Syga
The presented paper "PhysiMeSS - A New PhysiCell Addon for Extracellular Matrix Modelling" is a useful extension to the popular simulation framework PhysiCell. It enables the simulation of cell populations interacting with the extracellular matrix, which is represented by a set of line segments (2D) or cylinders (3D). These represent a new kind of agent in the simulation framework. The paper outlines the basic implementation, properties and interactions of these agents. I recommend publication after a small set of minor issues have been addressed. Please refer to the attached marked-up PDF file for these minor issues and suggestions. https://gigabyte-review.rivervalleytechnologies.com/download-api-file?ZmlsZV9wYXRoPXVwbG9hZHMvZ3gvVFIvNTUwL2d4LVRSLTE3MTk5NDYwNjlfU1kucGRm
-
-
learn.cantrill.io
-
Welcome back and in this video we're going to interact with instance store volumes.
Now this part of the demo does come at a cost.
This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.
The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.
If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.
So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.
So I'm going to click on launch instances.
We're going to name the instance, Instance Store Test so put that in the name box.
Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.
Scroll down and then in the instance type box click and we need to find a different type of instance.
This is going to be one that supports instance store volumes.
So scroll down and we're looking for m5dn.large.
This is a type of instance which includes one instance store volume.
So select that then scroll down a little bit more and under key pair click in the box and select proceed without a key pair not recommended.
Scroll down again and under network settings click on edit.
Click in the VPC drop down and select a4l-vpc1.
Under subnet make sure sn-web-a is selected.
Make sure enabled is selected for both of the auto assign public IP drop downs.
Then we're going to select an existing security group click the drop down select the EBS demo instance security group.
It will have some random after it but that's okay.
Then scroll down and under storage we're going to leave all of the defaults.
What you are able to do though is to click on show details next to instance store volumes.
This will show you the instance store volumes which are included with this instance.
You can see that we have one instance store volume it's 75 GB in size and it has a slightly different device name.
So /dev/nvme0n1.
Now all of that looks good so we're just going to go ahead and click on launch instance.
Then click on view all instances and initially it will be in a pending state and eventually it will move into a running state.
Then we should probably wait for the status check column to change from initializing to 2 out of 2.
Go ahead and pause the video and wait for this status check to change to be fully green.
It should show 2 out of 2 status checks.
That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.
Before we do though, just go ahead and select the instance and note the instance's public IP version 4 address.
Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.
So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.
So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.
We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.
Now many of these commands that we'll be using should by now be familiar.
Just refer back to the lessons command document if you're unsure because we'll be using all of the same commands.
First we need to list all of the block devices which are attached to this instance and we can do that with LSBLK.
This time it looks a little bit different because we're using instance store rather than EBS additional volumes.
So in this particular case I want you to look for the 8G volume so this is the root volume.
This represents the boot or root volume of the instance.
Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.
Now to check that we can verify whether there's a file system on this instance store volume.
If we run this command, so the same command we've used previously, sudo file -s and then the ID of this volume, so /dev/nvme1n1, you'll see it reports data.
And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.
We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.
So press enter to create a file system on this raw block device this instance store volume and then we can run this command again to verify that it now has a file system.
To mount it we can follow the same process that we did in the earlier stages of this demo series.
We'll need to create a directory for this volume to be mounted into this time we'll call it forward slash instance store.
So create that folder and then we're going to mount this device into that folder so sudo mount then the device id and then the mount point or the folder that we've previously created.
So press enter and that means that this block device this instance store volume is now mounted into this folder.
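As a recap, the whole sequence is roughly the following, assuming the instance store volume shows up as /dev/nvme1n1 and using /instancestore as my rendering of the spoken mount point:

lsblk                                    # list block devices; the 75 GiB device is the instance store volume
sudo file -s /dev/nvme1n1                # reports "data" while there's no file system on the device
sudo mkfs -t xfs /dev/nvme1n1            # create an XFS file system on the instance store volume
sudo mkdir /instancestore                # create the mount point
sudo mount /dev/nvme1n1 /instancestore   # mount the volume into that folder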
And if we run a df space -k and press enter you can see that it's now mounted.
Now we're going to move into that folder by typing cd space forward slash instance store and to keep things efficient we're going to create a file called instance store dot txt.
And rather than using an editor we'll just use sudo touch and then the name of the file and this will create an empty file.
If we do an LS space -la and press enter you can see that that file exists.
So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.
Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.
Restarting is different than stop and start.
So to do that we're going to close this tab move back to the ec2 console so click on instances right click on instance store test and select reboot instance and then confirm that.
Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.
Then right click and select connect.
Using instance connect go ahead and connect back to the instance.
And again if it appears to hang at this point then you can just wait for a few moments and then connect again.
But in this case I've left it long enough and I'm connected back into the instance.
Now once I'm back in the instance if I run a df space -k and press enter note how that file system is not mounted after the reboot.
Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.
But what we can do is do an LS BLK again to list the block device.
We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.
To do that we run this command.
So it's mounting the same volume ID the same device ID into the same folder.
So go ahead and run that command and press enter.
Then if we use cd space forward slash and then instance store press enter and then do an LS space -la we can see that this file is still there.
Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.
Restarting an EC2 instance does not move the instance from one EC2 host to another.
And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.
Now we're going to do something different though.
Close this tab down.
Move back to instances.
Again pay special attention to this IP address.
Now we're going to right click and stop the instance.
So go ahead and do that and confirm it if you're doing this in your own environment.
Watch this public IP v4 address really carefully.
We'll need to wait for the instance to move into a stopped state which it has and if we select the instance note how the public IP version 4 address has been unallocated.
So this instance is now not running on an EC2 host.
Let's right click.
Go to start instance and start it up again.
Only to give that a few moments again.
It'll move into a running state but notice how the public IP version 4 address has changed.
This is a good indication that the instance has moved from one EC2 host to another.
So let's give this instance a few moments to start up.
And once it has right click, select connect and then go ahead and connect to the instance using instance connect.
Once connected go ahead and run an LS BLK and press enter and you'll see it appears to have the same instance store volume attached to this instance.
It's using the same ID and it's the same size.
But let's go ahead and verify the contents of this device using this command.
So sudo file -s and then the device ID of the instance store volume.
Press enter, and note how it shows data.
So even though we created a file system in the previous step after we've stopped and started the instance, it appears this instance store volume has no data.
Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.
But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.
And that means that it has access to completely different instance store volumes than it did on that previous host.
It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.
When you stop and start an EC2 instance, or when anything else causes the instance to move from one host to another, all of the data is lost.
So instance store volumes are ephemeral.
They're not persistent and you can't rely on them to keep your data safe.
And it's really important that you understand that distinction.
If you're doing the developer or sysop streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.
The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.
EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.
Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.
So let's go ahead and tidy up the infrastructure.
Close down this tab, click on instances.
If you follow this last part of the demo in your own environment, go ahead and right click on the instance store test instance and terminate that instance.
That will delete it along with any associated resources.
We'll need to wait for this instance to move into the terminated state.
So give that a few moments.
Once that's terminated, go ahead and click on services and then move back to the cloud formation console.
You'll see the stack that you created using the one click deploy at the start of this lesson.
Go ahead and select that stack, click on delete and then delete stack.
And that's going to put the account back in the same state as it was at the start of this lesson.
So it will remove all of the resources that have been created.
And at that point, that's the end of this demo series.
So what did you learn?
You learned that EBS volumes are created within one specific availability zone.
EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.
You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.
And then you can use snapshots to create volumes in different availability zones.
I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.
You've also seen in this part of the demo series that instance store volumes can be used.
They are included with many instance types, but if the instance moves between EC2 hosts, so if an instance is stopped and then started, or if an EC2 host has hardware problems, then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.
So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.
At this point though, thanks for watching and doing this demo.
I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.
-
-
otcabrina.weebly.com
-
nd the front paws and backside of our dog
Great!
-
It is relatively easy to move from this position, especially for a 4 year old
As soon as he lets go of the dog, he will become much less stable.
-
internal rotation in the right leg
Looks like slight external rotation of the left and possible internal rotation of the right. Hard to tell from this angle.
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
We just need to give this a brief moment to perform that reboot.
So just wait a couple of moments and once you have right click again, select Connect.
We're going to use EC2 instance connect again.
Make sure the user's correct and then click on Connect.
Now, if it doesn't immediately connect you to the instance, if it appears to have frozen for a couple of seconds, that's fine.
It just means that the instance hasn't completed its restart.
Wait for a brief while longer and then attempt another connect.
This time you should be connected back to the instance and now we need to verify whether we can still see our volume attached to this instance.
So do a DF space -k and press Enter and you'll note that you can't see the file system.
That's because before we rebooted this instance, we used the mount command to manually mount the file system on our EBS volume into the EBS test folder.
Now that's a manual process.
It means that while we could interact with that before the reboot, it doesn't automatically mount that file system when the instance restarts.
To do that, we need to configure it to auto-mount when the instance starts up.
So to do that, we need to get the unique ID of the EBS volume, which is attached to this instance.
And to get that, we run sudo blkid.
Now press Enter and that's going to list the unique identifier of all of the volumes attached to this instance.
You'll see the boot volume listed as /dev/xvda1 and the EBS volume that we've just attached listed as /dev/xvdf.
So we need the unique ID of the volume that we just added.
So that's the one next to xvdf.
So go ahead and select this unique identifier.
You'll need to make sure that you select everything between the speech marks and then copy that into your clipboard.
Next, we need to edit the FSTAB file, which controls which file systems are mounted by default.
So we're going to run sudo and then space nano, which is our editor, and then a space, and then forward slash etc, which is the configuration directory on Linux, another forward slash and then fstab, and press Enter.
And this is the configuration file for which file systems are mounted by our instance.
And we're going to add a similar line.
So first we need to use uuid, which is the unique identifier, and then the equal symbol.
And then we need to paste in that unique ID that we just copied to our clipboard.
Once that's pasted in, press Space.
This is the ID of the EBS volume, so the unique ID.
Next, we need to provide the place where we want that volume to be mounted.
And that's the folder we previously created, which is forward slash EBS test.
Then a space, we need to tell the OS which file system is used, which is xfs, and then a space.
And then we need to give it some options.
You don't need to understand what these do in detail.
We're going to use defaults, all one word, and then a comma, and then no fail.
So once you've entered all of that, press Ctrl+O to save that file, and Enter, and then Ctrl+X to exit.
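The resulting fstab entry looks something like the line below; the UUID is whatever blkid reported for your volume, and /ebstest is my rendering of the spoken mount point:

UUID=<uuid-from-blkid>  /ebstest  xfs  defaults,nofail   # auto-mount at boot; nofail lets the instance boot even if the volume is missing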
Now this will be mounted automatically when the instance starts up, but we can force that process by typing sudo mount -a.
And this will perform a mount of all of the volumes listed in the FS tab file.
So go ahead and press Enter.
Now if we do a df space-k and press Enter, you'll see that our EBS volume once again is mounted within the EBS test folder.
So I'm going to clear the screen, then I'm going to move into that folder, press Enter, and then do an ls space-la, and you'll see that our amazing test file still exists within this folder.
And that shows that the data on this file system is persistent, and it's available even after we reboot this EC2 instance, and that's different than instance store volumes, which I'll be demonstrating later on.
At this point, we're going to shut down this instance because we won't be needing it anymore.
So close down this tab, click on Instances, right-click on instance one-AZA, and then select Stop Instance.
You'll need to confirm it, refresh that and wait for it to move into a stopped state.
Once it has stopped, go down and click on Volumes, select the EBS test volume, right-click and detach it.
We're going to detach this volume from the instance that we've just stopped.
You'll need to confirm that, and that will begin the process and it will detach that volume from the instance, and this demonstrates how EBS volumes are completely separate from EC2 instances.
You can detach them and then attach them to other instances, keeping the data that's on that volume.
Just keep refreshing.
We need to wait for that to move into an available state, and once it has, we're going to right-click, select Attach Volume, click inside the instance box, and this time, we're going to select instance two-AZA.
It should be the only one listed now in a running state.
So select that and click on Attach.
Just refresh that and wait for that to move into an in-use state, which it is, then move back to instances, and we're going to connect to the instance that we just attached that volume to.
So select instance two-AZA, right-click, select Connect, and then connect to that instance.
Once we connected to that instance, remember this is an instance that we haven't interacted with this EBS volume with.
So this instance has no initial configuration of this EBS volume, and if we do a DF-K, you'll see that this volume is not mounted on this instance.
What we need to do is do an LS, BLK, and this will list all of the block devices on this instance.
You'll see that it's still using XVDF because this is the device ID that we configured when attaching the volume.
Now, if we run this command, so sudo file -s and then the device ID of this EBS volume, notice how now it shows a file system on this EBS volume because we created it on the previous instance.
We don't need to go through all of the process of creating the file system because EBS volumes persist past the lifecycle of an EC2 instance.
You can interact with an EBS volume on one instance and then move it to another and the configuration is maintained.
We're going to follow the same process.
We're going to create a folder called EBSTEST.
Then we're going to mount the EBS volume using the device ID into this folder.
We're going to move into this folder and then if we do an LS, space-LA, and press Enter, you'll see the test file that you created in the previous step.
It still exists and all of the contents of that file are maintained because the EBS volume is persistent storage.
So that's all I wanted to verify with this instance that you can mount this EBS volume on another instance inside the same availability zone.
At this point, close down this tab and then click on Instances and we're going to shut down this second EC2 instance.
So right-click and then select Stop Instance and you'll need to confirm that process.
Wait for that instance to change into a stop state and then we're going to detach the EBS volume.
So that's moved into the stopped state.
We can select Volumes, right-click on this EBSTEST volume, detach the volume and confirm that.
Now next, we want to mount this volume onto the instance that's in Availability Zone B and we can't do that because EBS volumes are located in one specific availability zone.
Now to allow that process, we need to create a snapshot.
Snapshots are stored on S3 and replicated between multiple availability zones in that region and snapshots allow us to take a volume in one availability zone and move it into another.
So right-click on this EBS volume and create a snapshot.
Under Description, just use EBSTESTSNAP and then go ahead and click on Create Snapshot.
Just close down any dialogues, click on Snapshots and you'll see that a snapshot is being created.
Now depending on how much data is stored on the EBS volume, snapshots can either take a few seconds or anywhere up to several hours to complete.
This snapshot is a full copy of all of the data that's stored on our original EBS volume.
But because the snapshot is stored in S3, it means that we can take this snapshot, right-click, create volume and then create a volume in a different availability zone.
Now you can change the volume type, the size and the encryption settings at this point, but we're going to leave everything the same and just change the availability zone from US-EAST-1A to US-EAST-1B.
So select 1B in availability zone, click on Add Tag.
We're going to give this a name to make it easier to identify.
For the value, we're going to use EBS Test Volume-AZB.
So enter that and then create the volume.
Close down any dialogues and at this point, what we're doing is using this snapshot which is stored inside S3 to create a brand new volume inside availability zone US-EAST-1B.
At this point, once the volume is in an available state, make sure you select the right one, then we can right-click, we can attach this volume and this time when we click in the instance box, you'll see the instance that's in availability zone 1B.
So go ahead and select that and click on Attach.
Once that volume is in use, go back to Instances, select the third instance, right-click, select Connect, choose Instance Connect, verify the username and then connect to the instance.
Now we're going to follow the same process with this instance.
So first, we need to list all of the attached block devices using LSBLK.
You'll see the volume we've just created from that snapshot, it's using device ID XVDF.
We can verify that it's got a file system using the command that we've used previously and it's showing an XFS file system.
Next, we create our folder which will be our mount point.
Then we mount the device into this mount point using the same command as we've used previously, move into that folder and then do a listing using LS-LA and you should see the same test file you created earlier, and if you cat this file, it should have the same contents.
This volume has the same contents because it's created from a snapshot that we created of the original volume and so its contents will be identical.
Go ahead and close down this tab to this instance, select instances, right click, stop this instance and then confirm that process.
Just wait for that instance to move into the stopped state.
We're going to move back to volumes, select the EBS test volume in availability zone 1B, detach that volume and confirm it and then just move to snapshots and I want to demonstrate how you have the option of right clicking on a snapshot.
You can copy the snapshot and choose a different region.
So as well as snapshots giving you the option of moving EBS volume data between availability zones, you can also use snapshots to copy data between regions.
Now I'm not going to do this process but I could select a different region, for example, Asia Pacific Sydney and copy that snapshot to the Sydney region.
But there's no point doing that here, because we'd just have to remember to clean it up afterwards.
That process is fairly simple and will allow us to copy snapshots between regions.
It might take some time again depending on the amount of data within that snapshot but it is a process that you can perform either as part of data migration or disaster recovery processes.
So go ahead and click on cancel and at this point we're just going to clear things up because this is the end of this first phase of this demo lesson.
So right click on this snapshot and just delete the snapshot and confirm that.
Then go to volumes, select the volume in US East 1A, right click, delete that volume and confirm.
Select the volume in US East 1B, right click, delete volume and confirm.
And that just means we've tidied up both of those EBS volumes within this account.
Now that's the end of this first stage of this set of demo lessons.
All the steps until this point have been part of the free tier within AWS.
What follows won't be part of the free tier.
We're going to be creating a larger instance size and this will have a cost attached, but I want to use it to demonstrate instance store volumes, how you can interact with them, and some of their unique characteristics.
So I'm going to move into a new video and this new video will have an associated charge.
So you can either watch me perform the steps or you can do it within your own environment.
Now go ahead and complete this video and when you're ready, you can move on to the next video where we're going to investigate instance store volumes.
-
-
learn.cantrill.io
-
Welcome back and we're going to use this demo lesson to get some experience of working with EBS and Instance Store volumes.
Now before we get started, this series of demo videos will be split into two main components.
The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.
The second component will be based on Instance Store volumes and will be using larger instances which are not included within the free tier.
So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.
If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.
The second component is entirely optional but I just wanted to warn you of the potential cost in advance.
Now to get started with this demo, you're going to need to deploy some infrastructure.
To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.
Now attached to this demo is a one click deployment link to deploy the infrastructure.
So go ahead and click on that link.
That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.
Now you're going to need this to be in a create complete state before you continue with this demo.
So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.
Okay, now that's finished and the stack is in a create complete state.
You're also going to be running some commands within EC2 instances as part of this demo.
Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.
So go ahead and open that link in a separate browser window or separate browser tab.
It should look something like this and we're going to be using this throughout the lesson.
Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.
So instance one, instance two and instance three.
So the next thing to do is to move across to the EC2 console.
So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.
Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.
Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.
We have instance one - az a.
We have instance two - az a and then instance one - az b.
So these are three instances, two of which are in availability zone A and one which is in availability zone B.
Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.
And what you'll see is three EBS volumes, each of which is eight GIB in size.
Now these are all currently in use.
You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.
So on each of these volumes is the operating system running on those EC2 instances.
Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.
So click on the create volume button.
The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.
We've got GP2 and GP3 which are the general purpose storage types.
We're going to use GP3 for this demo lesson.
You could also select one of the provisioned IOPS volumes.
So this is currently IO1 or IO2.
And with both of these volume types, you're able to define IOPS separately from the size of the volume.
So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.
Now IO1 was the first type of provisioned IOPS SSD introduced by AWS, and more recently they've introduced IO2, an enhanced version which provides even higher levels of performance.
In addition to that we do have the non-SSD volume types.
So SC1 which is cold HDD, ST1 which is throughput optimized HDD and then of course the original magnetic type which is now legacy and AWS don't recommend this for any production usage.
For this demo lesson we're going to go ahead and select GP3.
So select that.
Next you're able to pick a size in GIB for the volume.
We're going to select a volume size of 10 GIB.
Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.
At this point I want you to go ahead and select US-EAST-1A.
When creating a volume you're also able to specify a snapshot as the basis for that volume.
So if you want to restore a snapshot into this volume you can select that here.
At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.
We're going to be talking about encryption later in this section of the course.
You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.
We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.
For the key we're going to use name.
For the value we're going to put EBS test volume.
So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.
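As a side note, the same volume could be created from the AWS CLI with something like the following; this is a sketch rather than part of the guided steps, and the tag value mirrors the name used in the console:
aws ec2 create-volume \
  --volume-type gp3 \
  --size 10 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=EBS-test-volume}]'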
Just close down any dialogues and then just pay attention to the different states that this volume goes through.
It begins in a creating state.
This is where the storage is being provisioned and then made available by the EBS product.
If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.
And that's what we're going to do so we're going to right click and select attach volume.
Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.
EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.
So if we select the instance box you'll only see instances in that same availability zone.
Now at this point go ahead and select instance 1 in availability zone A.
Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.
So this is how the volume is going to be exposed to the EC2 instance.
So if we want to interact with this instance inside the operating system this is the device that we'll use.
Now different operating systems might see this in slightly different ways.
So as this warning suggests certain Linux kernels might rename SDF to XVDF.
So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.
How we can identify it and how we can configure it within the operating system for use.
And I'm going to demonstrate that in the next part of this demo lesson.
So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.
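Again purely as a reference, the equivalent AWS CLI call would look roughly like this, where the volume and instance IDs are placeholders for the ones in your account:
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf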
Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.
We're going to go ahead and connect to instance 1 in availability zone A.
This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.
So right click on this instance and select connect, and then you could either connect with an SSH client or use Instance Connect; to keep things simple we're going to connect from our browser, so select the EC2 Instance Connect option, make sure the username is set to ec2-user and then click on connect.
So now we're connected to this EC2 instance, and it's at this point that we'll start needing the commands that are listed inside the lesson commands document, which again is attached to this lesson.
So first we need to list all the block devices which are connected to this instance and we're going to use the LSBLK command.
Now if you're not comfortable with Linux don't worry just take this nice and slowly and understand at a high level all the commands that we're going to run.
So the first one is LSBLK and this is list block devices.
So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.
You'll see the root device; this is the device that's used to boot the instance and it contains the instance operating system, and you'll see that it's 8 gig in size. And then this is the EBS volume that we just attached to this instance.
You'll see that device ID so XVDF and you'll see that it's 10 gig in size.
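The output should look something like this, although the exact device names and numbers can vary by instance type; the important thing is the 8 GiB root volume and the new 10 GiB volume:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0  10G  0 disk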
Now what we need to do next is check whether there is a file system on this block device.
So this block device we've created it with EBS and then we've attached it to this instance.
Now we know that it's blank but it's always safe to check if there's any file system on a block device.
So to do that we run this command.
So we're going to check are there any file systems on this block device.
So press enter and if you see just data that indicates that there isn't any file system on this device and so we need to create one.
Under Linux you can only mount file systems, and so we need to create a file system on this raw block device, this EBS volume.
So to do that we run this command.
So sudo again is just giving us admin permissions on this instance.
mkfs is going to make a file system.
We specify the file system type with -t and then xfs, which is a type of file system, and then we're telling it to create this file system on this raw block device, which is the EBS volume that we just attached.
So press enter and that will create the file system on this EBS volume.
We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.
So now we can see the difference.
Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.
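For reference, the two commands used here look like this, assuming the volume is attached as /dev/xvdf as in this demo:
sudo file -s /dev/xvdf      # "data" means there is no file system yet
sudo mkfs -t xfs /dev/xvdf  # create an XFS file system on the raw device
sudo file -s /dev/xvdf      # should now report an XFS file system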
Now the way that Linux works is we mount a file system to a mount point which is a directory.
So we're going to create a directory using this command.
mkdir makes a directory and we're going to call the directory /ebstest.
So this creates it at the top level of the file system.
The forward slash signifies root, which is the top level of the file system tree, and we're going to make a folder inside there called ebstest.
So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.
And to do that we use this command, mount.
So mount takes a device ID, in this case /dev/xvdf.
So this is the raw block device containing the file system we just created and it's going to mount it into this folder.
So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.
And we can verify that by running df -k.
And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.
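Put together, the mount point and mount commands from this part look like this, again assuming the /dev/xvdf device name used in this demo:
sudo mkdir /ebstest            # create the folder that will be the mount point
sudo mount /dev/xvdf /ebstest  # mount the EBS volume's file system into it
df -k                          # the new file system should appear on the last line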
At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.
So cd, which is change directory, then /ebstest, and press enter, and that will move you into that folder.
Once we're in that folder we're going to create a test file.
So we're going to use this command, sudo nano, which is a text editor, and we're going to call the file amazingtestfile.txt.
So type that command in and press enter and then go ahead and type a message.
It can be anything you just need to recognize it as your own message.
So I'm going to use cats are amazing and then some exclamation marks.
Then I'm going to press Ctrl+O and Enter to save that file, and then Ctrl+X to exit. Again, clear the screen to make it easier to see.
And then I'm going to do an ls -la and press enter just to list the contents of this folder.
So as you can see we've now got this amazingtestfile.txt.
And if we cat the contents of this, so cat amazingtestfile.txt, you'll see the unique message that you just typed in.
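So, as a quick recap, the test file steps look like this; nano is interactive, so the message itself is typed inside the editor:
cd /ebstest
sudo nano amazingtestfile.txt  # type a message, Ctrl+O then Enter to save, Ctrl+X to exit
ls -la                         # the new file should be listed
cat amazingtestfile.txt        # and show the message you typed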
So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.
So the next step that I want you to do is to reboot this EC2 instance.
To do that type sudo space and then reboot and press enter.
Now this will disconnect you from this session.
So you can go ahead and close down this tab and go back to the EC2 console.
Just go ahead and click on instances.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
-
-
medium.com medium.com
-
Effective collaboration is essential for mutual learning.
for - Deep Humanity - intertwingled individual / collective learning - evolutionary learning journey - symmathesy - mutual learning - Nora Bateson
-
preliminary ground-setting
for - co-creative collaboration - preliminary groundwork
comment - How many times have I seen people come together with good intention to collaborate on some meaningful project, only for the project to fall apart some time later due to differences that emerge later on? - Without laying the proper framework for engagement and conflict resolution, we cannot prevent future conflicts from emerging - What is that proper framework? - What variables bring people closer together? - What variables drive people further apart? - We must identify those variables. They are complex because each one of us sees reality from our own unique perspective
-
for - Medium article - co-creative collaboration - Donna Nelham
summary - Donna takes us on a deep dive into the word collaboration, what is needed to forge deep and meaningful collaboration, and why it often fails - She introduces the term "collaboration washing" (like greenwashing) into our lexicon - This article is a provocation for a deep dive into what it means to collaborate - The questions we ask ourselves will lead us back to the most fundamental philosophical questions of self and other and how we formed these
-
Humans are naturally communal social beings with innate abilities to live and work together. However, living through the western influenced Industrial Age, our interdependence and interconnectedness with one another and our living planet has been on a steady downward spiral — de-emphasized, compromised and downgraded.
for - separation - reference - The three great separations
separation - reference - The three great separations - https://hyp.is/go?url=https%3A%2F%2Finthesetimes.com%2Farticle%2Findustrial-agricultural-revolution-planet-earth-david-korten&group=world
-
What conditions nurture collaboration?🔮 What conditions prevent or squash it?🔮 Can we expand our collective collaborative literacy with a wider, deeper repertoire to navigate wisely and well through the inherently messy and often difficult iterations of true collaboration?
for - questions - collaboration literacy - Donna Nelham - to - book - The Birth and Death of Meaning - Ernest Becker -
questions - collaboration - Donna Nelham - These three questions are all related - To get to the root of collaboration, it is helpful to examine the roots of human psychology to understand the fundamental relationship between - the individual and - the group - In his work "The Birth and Death of Meaning", Ernest Becker argues, citing other peers, that - the self concept needs to emerge for effective group collaboration to develop and - the self concept requires others in order to construct it - Hence, the other is already implicated in the construction of our own self - In Deep Humanity terminology, we call this intertwingledness of the self and other the "individual / collective gestalt"
to - book - The Birth and Death of Meaning - Ernest Becker - https://hyp.is/40fZHv9CEe6bTovrYzF92A/www.themortalatheist.com/blog/the-birth-and-death-of-meaning-ernest-becker
-
Capacity for deep collaboration calls for…
for - adder - for deep collaboration - article - Co-creative Collaboration - Donna Nelham
adder - for deep collaboration - article - Co-creative Collaboration - Donna Nelham - symmathesy - mutual learning - Nora Bateson - https://hyp.is/_V3NAk4UEe6Z6btu_1LIkA/norabateson.wordpress.com/2015/11/03/symmathesy-a-word-in-progress/
-
collaboration washing
for - portmanteau - collaboration washing - Donna Nelham
portmanteau - collaboration washing - Donna Nelham - like greenwashing - nice!
Tags
- symmathesy - mutual learning - Nora Bateson
- adder - for deep collaboration - article - Co-creative Collaboration - Donna Nelham
- portmanteau - collaboration washing - Donna Nelham
- to - book - The Birth and Death of Meaning - Ernest Becker
- Medium article - co-creative collaboration - Donna Nelham
- co-creative collaboration - preliminary ground-setting
- Deep Humanity - intertwingled individual / collective learning - evolutionary learning journey
- questions - collaboration literacy - Donna Nelham
- separation - reference - The three great separations
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.
In this demo lesson you're going to add private internet access capability using NAT gateways.
So you're going to be applying a cloud formation template which creates this base infrastructure.
It's going to be the animals for life VPC with infrastructure in each of three availability zones.
So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.
Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.
In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.
Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.
It means that if any of the availability zones fails, the others will continue operating, because the route tables attached to the private subnets point at the NAT gateway within that same availability zone.
So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.
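As a rough AWS CLI sketch of that per-AZ pattern, this is what one availability zone would look like; the demo itself uses the console, and every ID below is a placeholder:
# allocate an elastic IP and create a NAT gateway in the public web subnet of AZ A
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-webA111 --allocation-id eipalloc-aaaa1111
# create a private route table for AZ A with a default route via that NAT gateway
aws ec2 create-route-table --vpc-id vpc-a4l11111
aws ec2 create-route --route-table-id rtb-privA111 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-aaaa1111
# associate the route table with each private subnet in AZ A (app, db, reserved)
aws ec2 associate-route-table --route-table-id rtb-privA111 --subnet-id subnet-appA111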
Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.
So let's go ahead and move across to our AWS console and get started implementing this architecture.
Okay so now we're at the AWS console. As always, just make sure that you're logged in to the general AWS account as the IAM admin user, and you'll need to have the Northern Virginia region selected.
Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you created up until that point, so the animals for life VPC as well as the bastion host and the associated networking.
So you should have a relatively clean AWS account.
So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.
So attached to this demo lesson is a one-click deployment link so go ahead and open that link.
That's going to take you to a quick create stack screen.
Everything should be pre-populated. The stack name should be a4l. Just scroll down to the bottom, check this capabilities box and then click on create stack.
Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.
So go ahead and pause the video, wait for your stack to change into create complete, and then we're good to continue.
Okay, so now this stack's moved into a create complete state, we're good to continue.
So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.
To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.
This is an EC2 instance, a private EC2 instance, so it doesn't have any public internet connectivity, and we're going to use it to test our NAT gateway functionality.
So go ahead and click on this icon under physical ID, and this is going to move you to the EC2 console where you'll be able to see this a4l-internal-test instance.
Now currently in my case it's showing as running but the status check is showing as initializing.
Now we'll need this instance to finish provisioning before we can continue with the demo.
What should happen is this status check should change from initializing to two out of two status checks and once you're at that point you should be able to right click and select connect and choose session manager and then have the option of connecting.
Now you'll see that I don't because this instance hasn't finished its provisioning process.
So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.
Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.
So go ahead and pause the video and when you can connect to the instance you're good to continue.
Okay so in my case it took about five minutes for this to change to two out of two checks passed, and then another five minutes before I could connect to this EC2 instance.
So I can right click on here and click connect.
I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.
Now the reason why you're able to connect to this private instance is because we're using Session Manager, and I'll explain exactly how this product works elsewhere in the course. Essentially it allows us to connect to an EC2 instance with no public internet connectivity, and it's using VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course. What you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping and then a space and 1.1.1.1 and press enter, you'll note that we don't have any public internet connectivity. That's because this instance doesn't have a public IP version 4 address and it's not in a subnet with a route table which points at the internet gateway.
This EC2 instance has been deployed into the application A subnet, which is a private subnet, and it also doesn't have a public IP version 4 address.
So at this point what we need to do is go ahead and deploy our NAT gateways, and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IP version 4 internet, so let's go ahead and do that.
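The connectivity test itself is just a ping to a well-known public address; it will time out for now and should start getting replies once the NAT gateways and routes are in place (press Ctrl+C to stop it):
ping 1.1.1.1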
Now to do that we need to be back at the main AWS console. Click in the services search box at the top, type VPC, and then right click and open that in a new tab.
Once you've done that, go ahead and move to that tab, and once you're there click on NAT gateways and then create NAT gateway.
Okay, so once you're here you'll need to specify a few things: you'll need to give the NAT gateway a name, you'll need to pick a public subnet for the NAT gateway to go into, and then you'll need to give the NAT gateway an elastic IP address, which is an IP address which doesn't change.
So first we'll set the name of the NAT gateway, and we'll use a4l (for animals for life), then -vpc1, -natgw and then -a, so a4l-vpc1-natgw-a, because this is going into availability zone A.
Next we'll need to pick the public subnet that the NAT gateway will be going into, so click on the subnet drop down and select the web A subnet, which is the public subnet in availability zone A, so sn-web-a.
Now we need to give this NAT gateway an elastic IP. It doesn't currently have one, so we need to click on allocate elastic IP, which gives it an allocation.
Don't worry about the connectivity type we'll be covering that elsewhere in the course just scroll down to the bottom and create the NAT gateway.
Now this process will take some time and so we need to go ahead and create the two other NAT gateways.
So click on NAT gateways at the top and then we're going to create a second NAT gateway.
So go ahead and click on create NAT gateway again. This time we'll call the NAT gateway a4l-vpc1-natgw-b, and this time we'll pick the web B subnet, so sn-web-b. Allocate an elastic IP again and click on create NAT gateway.
Then we'll follow the same process a third time. So click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP, and then scroll down and click on create NAT gateway.
At this point we've got three NAT gateways being created and they're all in a pending state.
If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll to the right or left and see details of these IPs. If we wanted, we could release these IPs back to the account once we'd finished with them.
Now at this point you need to go ahead and pause the video and resume it once all three of those NAT gateways have moved away from the pending state. We need them to be in an available state, ready to go, before we can continue with this demo. So go ahead and pause, and resume once all three have changed to an available state.
Okay, so all of these are now in an available state, so that means they're good to go and they're providing service.
Now if you scroll to the right in this list you're able to see additional information about these NAT gateways, so you can see the elastic and private IP address, the VPC, and then the subnet that each of these NAT gateways is located in.
What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways.
So right click on route tables and open that in a new tab. We need to create a new route table for each of the availability zones.
So go ahead and click on create route table. First we need to pick the VPC for this route table, so click on the VPC drop down and select the animals for life VPC, so a4l-vpc1. Once selected, go ahead and name the route table. We're going to keep the naming scheme consistent, so a4l-vpc1-rt-private-a, with rt for route table. So enter that and click on create.
Then close that dialogue down and create another route table. This time we'll use the same naming scheme, but of course this time it will be rt-private-b. Select the animals for life VPC and click on create.
Close that down and then finally click on create route table again, this time a4l-vpc1-rt-private-c. Again click on the VPC drop down, select the animals for life VPC and then click on create.
So that's going to leave us with three route tables, one for each availability zone.
What we need to do now is create a default route within each of these route tables, and that route is going to point at the NAT gateway in the same availability zone.
So select the route table rt-private-a and then click on the routes tab. Once you've selected the routes tab, click on edit routes and we're going to add a new route. It's going to be the IP version 4 default route of 0.0.0.0/0. Then click on target, pick NAT gateway, and we're going to pick the NAT gateway in availability zone A. Because we named them, it's easy to select the relevant one from this list, so go ahead and pick a4l-vpc1-natgw-a. Because this is the route table for availability zone A, we need to pick the matching NAT gateway. So save that and close.
Now we'll do the same process for the route table in availability zone B. Make sure the routes tab is selected, click on edit routes, click on add route, again 0.0.0.0/0, then for target pick NAT gateway and pick the NAT gateway that's in availability zone B, so natgw-b. Once you've done that, save the route table.
Then select the route table in availability zone C, so select rt-private-c, make sure the routes tab is selected and click on edit routes. Again we'll be adding a route, the IP version 4 default route, so 0.0.0.0/0. Select a target, go to NAT gateway and pick the NAT gateway in availability zone C, so natgw-c. Once you've done that, save the route table.
Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place, so let's move back to our private instance. And we can see that it's not actually working.
The reason for this is that although we have created these routes, we haven't actually associated these route tables with any of the subnets. Subnets in a VPC which don't have an explicit route table association are associated with the main route table. We need to explicitly associate each of these route tables with the subnets inside that same AZ.
So let's go ahead and pick rt-private-a, and we'll go through in order. Select it, click on the subnet associations tab, edit subnet associations, and then you need to pick all of the private subnets in AZ A. That's the reserved subnet, so reserved-A, the app subnet, so app-A, and the DB subnet, so db-A. All of these are the private subnets in availability zone A. Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table. We're going to resolve that now by associating this route table with those subnets. So click on save, and this will associate all of the private subnets in AZ A with the AZ A route table.
Now we're going to do the same process for AZ B and AZ C, and we'll start with AZ B. So select the private B route table, click on subnet associations, edit subnet associations, select application B, database B and then reserved B, and then scroll down and save the associations.
Then select the private C route table, click on subnet associations, edit subnet associations, select reserved C, database C and then application C, and then scroll down and save those associations.
Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to Session Manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work. That's because we now have a NAT gateway providing service to each of the private subnets in all three availability zones.
Okay, so that's everything you needed to cover in this demo lesson. Now it's time to clean up the account and return it to the same state as it was at the start of this demo lesson. From this point on within the course you're going to be using automation, and so we can remove all of the configuration that we've done inside this demo lesson.
The first thing we need to do is reverse the route table changes that we've made. So go ahead and select the rt-private-a route table, select subnet associations, edit the subnet associations and just uncheck all of these subnets. This will return them to being associated with the main route table, so scroll down and click on save.
Do the same for rt-private-b, so deselect all of these associations and click on save, and then the same for rt-private-c: select it, go to subnet associations, edit them, remove all of these subnets and click on save.
Next, select all of these private route tables, the ones that we created in this lesson. Select them all, click on the actions drop down, then delete route table and confirm by clicking delete route tables.
Go to NAT gateways on the left, and we need to select each of the NAT gateways in turn. So select A, click on actions, delete NAT gateway, type delete and click delete. Then select B and do the same process: actions, delete NAT gateway, type delete, click delete. And finally the same for C: select the C NAT gateway, click on actions, delete NAT gateway, type delete to confirm and click on delete.
Now we're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted. If yours aren't deleted, if they're still listed in a deleting state, then go ahead and pause the video and resume once all of these have changed to deleted.
At this point all of the NAT gateways have deleted, so you can go ahead and click on elastic IPs, and we need to release each of these IPs. So select one of them, click on actions, release elastic IP addresses and click release, and do the same process for the other two.
Once that's done, move back to the CloudFormation console, select the stack which was created by the one click deployment at the start of the lesson, click on delete and then confirm that deletion. That will remove the CloudFormation stack and any resources created as part of this demo, and once that finishes deleting, the account has been returned to the same state as it was at the start of this demo lesson.
So I hope this demo lesson has been useful. Just to reiterate what you've done: you've created three NAT gateways for a region resilient design, you've created three route tables, one in each availability zone, added a default IP version 4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in those availability zones. So you've implemented a regionally resilient NAT gateway architecture, and that's a great job, because that's a pretty complex demo. It's functionality that will be really useful if you're using AWS in the real world or if you have to answer any exam questions on NAT gateways.
With that being said, at this point you have cleaned up the account and deleted all of the resources, so go ahead and complete this video, and when you're ready, I'll see you in the next one.
-
-
arxiv.org arxiv.org
-
Data construction prompt. Fig. 6 shows the prompt used for Chinese distillation data construction. We follow Zhou et al. (2024) to design the prompt for Chinese data construction. We adopt the data construction prompt of Pile-NER-type, since it shows the best performance as in (Zhou et al., 2024).
[Figure 6: Data construction prompt for Chinese open domain NER.]
Data processing. Following (Zhou et al., 2024), we chunk the passages sampled from the Sky corpus to texts of a max length of 256 tokens and randomly sample 50K passages. Due to limited computation resources, we sample the first twenty files in the Sky corpus for data construction, since the size of the entire Sky corpus is beyond the processing capability of our machines. We conduct the same data processing procedures including output filtering and negative sampling as in UniNER. Specifically, the negative sampling strategy for entity types is applied with a probability proportional to the frequency of entity types in the entire con
Sky-NER data construction process (Chinese open NER): - Prompt construction: based on the strategy from the UniversalNER paper. - Data processing: data is collected by chunking passages from the Sky corpus into texts with a maximum length of 256 tokens and randomly sampling 50K passages.
-
Inference with out-domain examples. During inference, since examples from the automatically constructed data are not aligned with the domains and schemas of the human-annotated benchmarks, we refer to them as out-domain examples. Fig. 4 shows the results of inference with out-domain examples using diverse retrieval strategies. We use the model trained with the NN strategy here. After applying example filtering such as BM25 scoring, inference with out-domain examples shows improvements compared to the baseline, suggesting the need for example filtering when implementing RAG with out-domain examples
Inference with out-domain examples: during inference, because examples from the automatically constructed dataset have domains and formats that differ from the human-annotated data, these examples are called out-domain.
In the experiment in Figure 4, the RA-IT model is trained with the NN retrieval strategy. After applying the BM25 filter, inference with out-domain examples shows improvements over the baseline, which shows the importance of adding a filter when applying RAG with out-domain examples.
-
Training with diverse retrieval strategies. Fig. 3 visualizes the results of training with various retrieval strategies. We conduct inference with and without examples for each strategy, and set the retrieval strategy at inference to be the same as at training. The most straightforward method, NN, shows the best performance, suggesting the benefits of semantically similar examples. The Random strategy, though inferior to NN, also shows improvements, indicating that random examples might introduce some general information about the NER task to the model. Meanwhile, inference with examples does not guarantee improvements and often hurts performance. This may be due to the differences in annotation schema between the automatically constructed data and the human-annotated benchmarks
[Figure 4: Impacts of inference with out-domain examples using various retrieval strategies. The average F1 value of the evaluated benchmarks is reported. "w/o exmp." means inference without examples. Applying an example filtering strategy such as BM25 filtering benefits RAG with out-domain examples.]
[Figure 5: Impacts of inference with in-domain examples. The average F1 value of the evaluated benchmarks is reported. "N-exmp." means an example pool of size N. Sufficient in-domain examples are helpful for RAG.]
Training with different retrieval strategies: shown in Figure 3. Inference is conducted with or without reference examples for each retrieval strategy, and the retrieval strategy used during training and during inference is the same.
The results show that NN is the best retrieval strategy, which shows the importance of semantically similar reference examples. Meanwhile, inference with examples does not guarantee improvement and often negatively affects the results.
-
Diverse retrieval strategies. The following strategies are explored in the subsequent analysis. (1) Nearest neighbor (NN), the strategy used in the main experiments, retrieves k nearest neighbors of the current sample. (2) Nearest neighbor with BM25 filter (NN, BM), where we apply BM25 scoring to filter out NN examples not passing a predefined threshold. Samples with no satisfying examples are used with the vanilla instruction template. (3) Diverse nearest neighbor (DNN), retrieves K nearest neighbors with K >> k and randomly selects k examples from them. (4) Diverse nearest neighbor with BM25 filter (DNN, BM), filters out DNN examples not reaching the BM25 threshold. (5) Random, uniformly selects k random examples. (6) Mixed nearest neighbors (MixedNN), mixes the use of the NN and random retrieval strategies with the ratio of NN set to α.
Main retrieval strategies: - Nearest neighbor (NN): the strategy used in the main experiments, which retrieves the k examples closest to the query sample. - NN with a BM25 filter (NN, BM): a BM25 filter is used to filter out NN examples whose similarity does not pass a predefined threshold. - Diverse NN: retrieves K NN examples with K >> k, then randomly selects k examples from those K. - Random. - Mixed NN: combines the NN and random selection strategies, with the NN ratio set to alpha.
-
We explore the impacts of diverse retrieval strategies. We conduct the analysis on the 5K data size for cost saving, as the effect of RA-IT is consistent across various data sizes as shown in Section 3.4. We report the average results of the evaluated benchmarks here
Analysis: this analysis is conducted to explore the impact of different retrieval strategies. The analysis is performed on a data sample of size 5K.
-
The main results are summarized in Tables 1 and 2 respectively. We report the results of inference without examples for RA-IT here, since we found this setting exhibits more consistent improvements. The impacts of inference with examples are studied in Section 3.5. As shown in the tables, RA-IT shows consistent improvements on English and Chinese across various data sizes. This is presumably because the retrieved context enhances the model
Main results: shown in Tables 1 and 2. Note that the experiments in these two tables perform inference without few-shot examples, because this inference setting yields more consistent improvements in the results.
The results show that RA-IT achieves the best results. The reason for this improvement is presumably that the retrieved context enhances the model's understanding of the input, which demonstrates the need for context-augmented instruction examples.
-
We conduct a preliminary study on IT data efficiency in targeted distillation for open NER by exploring the impact of various data sizes: [0.5K, 1K, 5K, 10K, 20K, 30K, 40K, 50K]. We use vanilla IT for the preliminary study. Results are visualized in Fig. 2. The following observations are consistent in English and Chinese: (1) a small data size already surpasses ChatGPT's performance. (2) Performance improves as the data size increases to 10K or 20K, but begins to decline and then remains at a certain level as the data size further increases to 50K. Recent work on IT data selection, Xia et al. (2024); Ge et al. (2024); Du et al. (2023), also finds superior performance with only a limited data size. We leave selecting more beneficial IT data for IE as future work. Accordingly, we conduct main experiments on 5K, 10K and 50K data sizes
[Figure 2: Preliminary study of IT data efficiency for open NER in English (left) and Chinese (right) scenarios, where the training data are Pile-NER and Sky-NER respectively. Average zero-shot results of the evaluated benchmarks are illustrated. The performance does not necessarily improve as the data increases.]
Preliminary study on data efficiency: a preliminary study is conducted to evaluate the efficiency of IT data in targeted distillation for the open NER task, by exploring the impact of data at various sizes: [0.5K, 1K, 5K, ...]. Vanilla IT is used for this study.
Conclusions drawn: - A small amount of data can already surpass ChatGPT's performance. - Results improve as the data size grows (from 10K to 20K), but begin to decline and then stabilize at a certain level as the data continues to grow to 50K. Recent studies on IT data selection also find that small, limited-size datasets perform surprisingly well.
-
Training data: For English, we use the training data Pile-NER released by Zhou et al. (2024). For Chinese, we use the training data Sky-NER constructed in this paper as described in Section 3.2. We use LoRA (Hu et al., 2021) to train models. Retrieval: We adopt GTE-large (Li et al., 2023) to generate text embeddings and set k = 2 in the main experiments. Evaluation: We mainly focus on zero-shot evaluation. For English, we adopt the benchmarks CrossNER, MIT-Movie and MIT-Restaurant following Zhou et al. (2024). For Chinese, we collect eight benchmarks across diverse domains, details of which are in Appendix D. We report micro-F1 values
Experimental setup: - LLMs: LLaMA-3-3B and Qwen-1.5.7B. - Datasets: for English, the Pile-NER dataset is used; for Chinese, the Sky-NER dataset constructed by the authors themselves is used. LoRA is used to train the LLMs. - Retrieval model: GTE-large is used to generate sentence embeddings, and the number of similar examples retrieved is 2. - Evaluation: focused on zero-shot evaluation.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Platforms also collect information on how users interact with the site. They might collect information like (they don't necessarily collect all this, but they might): when users are logged on and logged off, who users interact with, what users click on, what posts users pause over, where users are located, and what users send in direct messages to each other.
I find it scary that these platforms monitor every move we make on their sites, especially checking our direct messages with others. Our direct messages aren't as private as we think if these platforms are sitting there collecting this data.
-
-
static1.squarespace.com static1.squarespace.com
-
Who made the water, the raft, the trinity of Earth-Creators? Like many California creation epics, the Maidu account seems to begin in the middle of the story. Mysteriously, elements of the world seem to have always been present, their existence apparently beyond question or speculation.
This creation story is interesting to me because it makes me wonder if the earth is being depicted as the "god" of the story. In most of the creation stories I am familiar with, the "god" of the story is the only thing present at the beginning, and its existence is never really questioned. Earth Initiate does not appear to be an all-powerful being in this story, so I'm curious whether a "god" was present in their beliefs or not.
-
-
www.theguardian.com www.theguardian.com
-
The stress that the world's water systems are under will mean that in 2030 the demand for water will be 40% higher than the supply. The report of the Global Commission on the Economics of Water finds that without radical countermeasures, half of the world's food production will be at risk in the coming 25 years. Despite the interconnectedness of global water resources, water is still not managed as a global common good. https://www.theguardian.com/environment/2024/oct/16/global-water-crisis-food-production-at-risk
-
-
-
Never before has the CO2 concentration in the atmosphere risen as sharply as in the past year, namely by 3.37 parts per million (ppm). The concentration now stands at 422 ppm. This increase was caused above all by the very low CO2 uptake of ocean and land sinks. https://taz.de/Hiobsbotschaft-fuers-Klima/!6040258/
-
-
mlpp.pressbooks.pub mlpp.pressbooks.pub
-
The 1935 Social Security Act provided for old-age pensions, unemployment insurance, and economic assistance for both the elderly and dependent children.
This was the creation of Social Security numbers, right? Also, how did it allow the elderly to retire?
-
At the time of the stock market crash, southerners were already underpaid, underfed, and undereducated.
Out of context, but were the farmers/southerners still able to have pets, like dogs? I know that farmers usually have at least one dog, or a cat.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
It is likely that you have more in common with that reality TV star than you care to admit. We tend to focus on personality traits in others that we feel are important to our own personality. What we like in ourselves, we like in others, and what we dislike in ourselves, we dislike in others (McCornack, 2007). If you admire a person's loyalty, then loyalty is probably a trait that you think you possess as well. If you work hard to be positive and motivated and suppress negative and unproductive urges within yourself, you will likely think harshly about those negative traits in someone else. After all, if you can suppress your negativity, why can't they do the same? This way of thinking isn't always accurate or logical, but it is common.
To me this has never even registered in my head. I am going to focus on this the next time my girlfriend is watching reality TV. I know that I am most aware that I tend to root for the underdogs in most scenarios. I want the one who was counted out to win. I wonder how that relates to my personality. I know I always admire the extroverts, but I felt like that was because I am not very extroverted and wanted to be like them. Interesting self-observation for me to try in the coming days.
-
This simple us/them split affects subsequent interaction, including impressions and attributions. For example, we tend to view people we perceive to be like us as more trustworthy, friendly, and honest than people we perceive to be not like us (Brewer, 1999).
I am currently working on a construction site here in Boise. I am from Tennessee and all my coworkers are from Kentucky. One day a coworker told me the superintendent didn't like me. Obviously confused, since we had only been working together for 3 days, I asked why. My coworker told me that simply because I am not from Kentucky, he did not trust me or think I was a capable worker, because of where I grew up. I know it's not fair, but the only thing I can do is prove him wrong and help him recognize that his inherent bias is not always correct.
-
First impressions are enduring because of the primacy effect, which leads us to place more value on the first information we receive about a person. So if we interpret the first information we receive from or about a person as positive, then a positive first impression will form and influence how we respond to that person as the interaction continues.
This bit of information reminds me of a few studies and lawsuits that have occurred in the last decade or two regarding names on job applications. The inquiries focused on the concept that someone's name being less culturally familiar to a recruiter would negatively bias an applicant's chances of getting to the interview stage. This effect was studied using identical resumes with different names associated to measure employer responses. This seems like a great example of the primacy effect making biases that are sometimes difficult to identify more obvious.
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
In conclusion, it is important that primary care physicians get well versed with the future AI advances and the new unknown territory the world of medicine is heading toward.
The conclusion summarizes how physicians should get used to AI because it will soon be a big part of their work.
-
Some studies have been documented where AI systems were able to outperform dermatologists in correctly classifying suspicious skin lesions.[18] This because AI systems can learn more from successive cases and can be exposed to multiple cases within minutes, which far outnumber the cases a clinician could evaluate in one mortal lifetime.
This shows that AI can also take jobs away as well as make them better.
-
In conclusion, the physicians who used documentation support such as dictation assistance or medical scribe services engaged in more direct face time with patients than those who did not use these services
This shows that physicians using AI save more time and are able to interact with patients more.
-
The Da Vinci robotic surgical system developed by Intuitive surgicals has revolutionized the field of surgery especially urological and gynecological surgeries.
This paragraph shows how AI is being used in surgery: robots are mimicking surgeons to perform surgery.
-
Radiology is the branch that has been the most upfront and welcoming to the use of new technology.
This paragraph talks about how Radiology is using AI. Radiology uses AI to help identify abnormal and normal scans more quickly, especially in busy hospitals with fewer staff.
-
A lot of AI is already being utilized in the medical field, ranging from online scheduling of appointments, online check-ins in medical centers, digitization of medical records, reminder calls for follow-up appointments and immunization dates for children and pregnant females to drug dosage algorithms and adverse effect warnings while prescribing multidrug combinations.
This shows the different ways AI is being utilized in medicine.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
This can be highly problematic, as the employees would basically be logged onto your accounts and could even view your posts set to the "only me" privacy setting. This reminds me of how someone I know was mistreated by their manager and had an issue over their wages, so right before handing in her resignation letter she leaked the company's database, which included budgeting and the balance sheet, by posting it on Twitter.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Listening to people who are different from us is a key component of developing self-knowledge. This may be uncomfortable, because our taken-for-granted or deeply held beliefs and values may become less certain when we see the multiple perspectives that exist.
Listening to the thoughts and opinions of people with differing cultures or political opinions with the intention to understand, instead of respond, is such a powerful tool. It can help dismantle prejudices, make you a better advocate for your own values, and/or help you practice giving people room to communicate what they are really intending to say rather than giving preloaded responses. I think most people would benefit greatly from engaging in this kind of practice on a regular basis.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Self-discrepancy theory states that people have beliefs about and expectations for their actual and potential selves that do not always match up with what they actually experience (Higgins, 1987).
I have experienced this kind of expectation-to-reality relationship in some of my personal relationships. These people had an idea of what they could be if they could just stop being inadequate, an idea that only served to generate shame and guilt. Often, there was never any real grounding for the things they expected of themselves, but they felt the weight of those expectations as if they were an undeniable reflection of their potential. I am sure much of this is related to external social expectations that are later internalized. These expectations seem to rarely serve as drivers for someone to be more productive and more often seem to break people down and make them overall less likely to engage with life.
-
If a man wants to get into better shape and starts an exercise routine, he may be discouraged by his difficulty keeping up with the aerobics instructor or running partner and judge himself as inferior, which could negatively affect his self-concept.
One of our recent lectures identified the importance of an improvement mindset. Tools like these could help avoid developing unrealistic expectations that ultimately dissuade attempts at self improvement. They could provide an interpretive lens to contextualize feedback in ways that are more constructive.
-
-
files.eric.ed.gov files.eric.ed.gov
-
Student parents identified the need for establishing a physical space on campus that worked for them and their families
This would allow lots of parents to seek help with their assignments and other questions if their kids were welcomed in the space. As a mom, this would be extremely helpful.
-
“I didn’t know that [the college] was giving out hotspots or the internet and thencomputers. I had to go buy my own computer, which I put on my credit card and I’m stillpaying for it as of right now. And then internet, I’m barely hanging in there to pay forthat because the school gives internet, but it was so slow that me and my son couldn’tbe on the internet at the same time.”
It is sad that many students, especially student parents, have to go through this.
-
Some community college staff members identified a lack of campus-wide understanding and awareness of the needs of studentparents.
Staff should be more aware of the needs of student parents seeking help to achieve their goals.
-
Current statewide and federal data systems do not adequatelyrecord the number of student parents in higher education
Thus, student parents don't receive all the help they need.
-
Meeting the financial demands of monthly rent, child care, children’s clothing, college expenses, utilities, and other set expenses is not attainable for most without the use of student loans
Student parents seeking a higher education should receive more help, as they are trying to better themselves and the lives of their children.
-
Often, college administrators are not empathetic to the unique needs of student parents, nor are their institutions equipped with the financial resources to help student parents navigate their journey.
making it harder for those parents seeking a higher education
-
he needs and demands of student parents matter in the higher education landscape as more than one in five college students have a dependent and do not earn college degrees at the same rate as their childless peers
There should be more resources for student parents so they can achieve their goals. They are just as important as the average student.
-
-
docdrop.org docdrop.org Untitled3
-
Federal funding for campus child care is limited and favors 4-year institutions
The majority of student parents attend community colleges, so why does most of the funding go to 4-year universities?
-
Figure 1. Proportion of Community Colleges with On-Campus Child Care, Nationally, 2001-2008
The chart shows that on-campus childcare has decreased, yet the demand for higher education has increased.
-
over 80 percent reported that the availability of child care was very important in the decision to attend college
Childcare is an important factor for student parents; without it, they wouldn't be able to attend school.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
For something unexpected to become salient, it has to reach a certain threshold of difference. If you walked into your regular class and there were one or two more students there than normal, you may not even notice. If you walked into your class and there was someone dressed up as a wizard, you would probably notice.
I can see this effect happen even where you might not think expectation would matter very much. Many times someone has stopped to say something to me, but the content of the statement was outside my current mode of thinking. A perfectly understandable statement can become completely unintelligible purely because the context of the message did not prepare the receiver to comprehend it. If something like this can happen with straightforward comprehension, the effect must be exacerbated by the complexity or obscurity of the intended communication.
-
-
pb-dev-01.cognella.com pb-dev-01.cognella.com
-
Chapter 1 Introduction Many Europeans thought that Africa’s history was not important. They argued that Africans were inferior to Europeans, and they used this idea to help justify slavery. Africa was by no means inferior to Europe. The people who suffered the most from the Transatlantic Slave trade were civilized, organized, and technologically advanced peoples, long before the arrival of European slavers. Egypt was the first of many great African civilizations, existing for about 2,000 years before Rome was built. It lasted thousands of years and achieved many magnificent and incredible things in the fields of science, mathematics, medicine, technology and the arts. In the west of Africa, the kingdom of Ghana was a vast Empire that traded in gold, salt, and copper between the ninth and thirteenth centuries. The kingdoms of Benin and Ife were led by the Yoruba people and sprang up between the 11th and 12th centuries. The Ife civilization goes back as far as 500 B.C. and its people made objects from bronze, brass, copper, wood, and ivory. From the thirteenth to the fifteenth century, the kingdom of Mali had an organized trading system, with gold dust and agricultural produce being exported. Cowrie shells were used as a form of currency and gold, salt and copper were traded. Between 1450–1550, the Songhai Kingdom grew very powerful and prosperous. It had a well-organized system of government and a developed currency, and it imported fabrics from Europe. Timbuktu became one of the most important places in the world as its libraries and universities were meeting places for poets, scholars, and artists from around Africa and the Arab World. Figure 1.1 Forms of slavery existed in Africa before Europeans arrived. However, African slavery was different from what was to come. People were enslaved as punishment for a crime, payment for a debt or as a prisoner of war; most enslaved people were captured in battle. In some kingdoms, temporary slavery was a punishment for some crimes. In some cases, enslaved people could work to buy their freedom. Children of enslaved people did not automatically become slaves. Chapter Objectives: After this chapter, students will be able to: explain the significance of the Middle Passage; identify the stages of the Trans-Atlantic Slave Trade; use primary and interactive sources to analyze the beginnings of the slave trade and the Middle Passage; and define the economic, moral, and political ideologies of implementing and justifying the slave trade. Guiding Questions. Directions: As you engage with the content in this chapter, keep the following questions in mind. Look for the information that provides answers to these questions and deepens your understanding. How did slavery become synonymous with African enslavement? What were the routes of the first slave ships? What stimulated the slave trade? What makes African slavery different from other forms of slavery? Resistance was an important part of life for enslaved people. What were some of the ways in which they resisted being enslaved?
Figure 1.2Interactive Map Key Terms, People, Places, and EventsTrans-Atlantic Slave TradeBenin and IfeSonghai KingdomBarracoonsElminaNautical technologyBartolomeu DiasChristopher ColumbusHispaniolaGuanchesTainosFernando II of Aragon and Isabel I of CastileLaws of Burgos and Laws of GranadaEmperor Charles VNicolas OvandoIndiesEnriquillo’s RevoltQuobna Ottobah CugoanoPoint of No ReturnMiddle PassageOlaudah EquianoThumb screwsZongThe Dolben ActSection I: Introducing the Slave Trade and New World SlaveryIntroduction to Reading #1: Interesting Narrative of the Life of Olaudah EquianoThe personal accounts of enslaved individuals such as Olaudah Equiano are critical in understanding the harsh realities of the slave trade and the Middle Passage as well as demonstrating the ways in which captive Africans resisted their new station in life and fought for abolition. Olaudah Equiano (c. 1745–1797) was an African born (Kingdom of Benin) writer and abolitionist who documents in his memoir his journey from being captured at eleven years old, the Middle Passage, and working throughout the British Atlantic World as an explorer and merchant before settling in Europe as a free man, converting to Christianity and fought for the abolishment of the slave trade. The following excerpt comes from his memoirs, published in 1789. Reading 1.1Olaudah Equiano Describes the Middle Passage, 1789Olaudah EquianoOlaudah Equiano, Selection from “The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African, written by Himself,” The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African, written by Himself, pp. 51–54. 1790.At last, when the ship we were in had got in all her cargo, they made ready with many fearful noises, and we were all put under deck, so that we could not see how they managed the vessel. But this disappointment was the least of my sorrow. The stench of the hold while we were on the coast was so intolerably loathsome, that it was dangerous to remain there for any time, and some of us had been permitted to stay on the deck for the fresh air; but now that the whole ship’s cargo were confined together, it became absolutely pestilential. The closeness of the place, and the heat of the climate, added to the number in the ship, which was so crowded that each had scarcely room to turn himself, almost suffocated us. This produced copious perspirations, so that the air soon became unfit for respiration, from a variety of loathsome smells, and brought on a sickness among the slaves, of which many died, thus falling victims to the improvident avarice, as I may call it, of their purchasers. This wretched situation was again aggravated by the galling of the chains, now become insupportable; and the filth of the necessary tubs, into which the children often fell, and were almost suffocated. The shrieks of the women, and the groans of the dying, rendered the whole a scene of horror almost inconceivable. Happily perhaps for myself I was soon reduced so low here that it was thought necessary to keep me almost always on deck; and from my extreme youth I was not put in fetters. In this situation I expected every hour to share the fate of my companions, some of whom were almost daily brought upon deck at the point of death, which I began to hope would soon put an end to my miseries. Often did I think many of the inhabitants of the deep much more happy than myself; I envied them the freedom they enjoyed, and as often wished I could change my condition for theirs. 
Every circumstance I met with served only to render my state more painful, and heighten my apprehensions, and my opinion of the cruelty of the whites. One day they had taken a number of fishes; and when they had killed and satisfied themselves with as many as they thought fit, to our astonishment who were on the deck, rather than give any of them to us to eat, as we expected, they tossed the remaining fish into the sea again, although we begged and prayed for some as well we cold, but in vain; and some of my countrymen, being pressed by hunger, took an opportunity, when they thought no one saw them, of trying to get a little privately; but they were discovered, and the attempt procured them some very severe floggings.One day, when we had a smooth sea, and a moderate wind, two of my wearied countrymen, who were chained together (I was near them at the time), preferring death to such a life of misery, somehow made through the nettings, and jumped into the sea: immediately another quite dejected fellow, who, on account of his illness, was suffered to be out of irons, also followed their example; and I believe many more would soon have done the same, if they had not been prevented by the ship’s crew, who were instantly alarmed. Those of us that were the most active were, in a moment, put down under the deck; and there was such a noise and confusion amongst the people of the ship as I never heard before, to stop her, and get the boat to go out after the slaves. However, two of the wretches were drowned, but they got the other, and afterwards flogged him unmercifully, for thus attempting to prefer death to slavery. In this manner we continued to undergo more hardships than I can now relate; hardships which are inseparable from this accursed trade. – Many a time we were near suffocation, from the want of fresh air, which we were often without for whole days together. This, and the stench of the necessary tubs, carried off many. During our passage I first saw flying fishes, which surprised me very much: they used frequently to fly across the ship, and many of them fell on the deck. I also now first saw the use of the quadrant. I had often with astonishment seen the mariners make observations with it, and I could not think what it meant. They at last took notice of my surprise; and one of them, willing to increase it, as well as to gratify my curiosity, made me one day look through it. The clouds appeared to me to be land, which disappeared as they passed along. This heightened my wonder: and I was now more persuaded than ever that I was in another world, and that every thing about me was magic. At last we came in sight of the island of Barbadoes, at which the whites on board gave a great shout, and made many signs of joy to us. https://youtu.be/PmQvofAiZGAThe Arrival of European TradersDuring the fifteenth and sixteenth centuries, European traders started to get involved in the slave trade. European traders took interest in African nations and kingdoms, such as Ghana and Mali because of their complex trading networks. Shortly after, traders became interested in trading in human beings, taking people from western Africa to Europe and the Americas. Initially, this began on a small scale but due to the slave trade, it grew during the seventeenth and eighteenth centuries, as European countries conquered many of the Caribbean islands and much of North and South America. Europeans who settled in the Americas were attracted by the idea of owning their own land and not having to work for someone else. 
Convicts from Britain were sent to work on the plantations but there were never enough. To satisfy the growing demand for labor, Europeans purchased African people.They wanted the enslaved people to work in mines and on tobacco plantations in South America and on sugar plantations in the West Indies. Millions of Africans were enslaved and forced across the Atlantic, to labor in plantations in the Caribbean and America. Once Europeans became involved, slavery changed, leading to generations of peoples being taken from their homelands and enslaved. Children whose parents were enslaved became slaves as well.How Were They Enslaved?The major means of enslaving Africans were warfare, raiding and kidnapping, though people were enslaved through judicial processes, debt as well as drought and famine in regions where rainfall was scarce. Violence was another form utilized to enslave people. Warfare was used as a source to captured people in the regions of the Senegambia, the Gold Coast, the Slave Coast (Bight of Benin) and Angola. Raiding and kidnapping seemed to have dominated in the Bight of Biafra. Many captives were forced to travel long distances from the areas they called home to the coast, which meant there was an increase in the risk of deaths.Slave factories, dungeons, and forts were erected along the coast of West Africa, housing captured Africans in holding pens (barracoons) awaiting passage throughout the New World. They were equipped with up to a hundred guns and cannons to defend European interests on the coast, by keeping competitors away. There were nearly one hundred castles spread along the coast. The forts had the same simple design, with narrow windowless stone dungeons for captured Africans and fine residences for Europeans. The largest of these forts was Elmina. The fort had been fought over by the Portuguese, the Dutch and the British. At the height of the trade, Elmina housed 400 company personnel, including the company director, as well as 300 forts. The whole commerce surrounding the slave trade had created a town outside the castle, of about 1000 Africans. In other cases, the enslaved Africans were kept on board the ships, until sufficient numbers were captured, waiting perhaps for months in cramped conditions, before setting sail.The Ethnic Groups of the EnslavedThe British traders covered the West African coast from Senegal in the north to the Congo in the south, occasionally venturing to take slaves from South-East Africa in present day Mozambique. Many venues on the African Atlantic coast were more desirable to traders looking for the supply of enslaved people than others. This appeal was reliant on the level of support from the chieftains instead of topographical barriers or the demography of local populations. While some African rulers fought against the slave trade, other African rulers were willing participants, supplying European traders with the enslaved people they wanted. As the demand for African labor grew, some African traders began capturing other Africans and selling them to European traders. The Portuguese, French, and British often helped these rulers in wars against their enemies. African rulers had their own stake in the trade. Those who were willing to supply enslaved Africans became very rich and powerful as well as strongly armed with guns from Europe. The numbers of wars increased, and they became more violent because of the European guns and weapons. 
Many Africans died for every enslaved person who was eventually sold.The enslaved Africans included a combination of ethnic groups. However, after 1660, over half of the Africans capture and taken away by British ships came from just three regions—the Bight of Biafra, the Gold Coast, and Central Africa. Within the Bight of Biafra two venues, Old Calabar on the Cross River and Bonny in the Niger Delta were the major suppliers of the enslaved boarding British ships. The top three ethnic groups that accounted for the number of enslaved Africans within the British slave trade were the Igbos from the Bight of Biafra, the Akan from the Gold Coast and the Bantu from Central Africa.The Portuguese Slave Trade in AfricaUp to the late medieval era, southern Europe instituted a significant market for North African merchants who brought commodities like gold as well as a small numbers of slaves in caravans across the Sahara Desert. During the early fifteenth century, advances in nautical technology, permitted Portuguese sailors to travel south along Africa’s Atlantic coast in looking for a direct maritime route to gold-producing regions in West Africa. Founded in 1482 near the town of Elmina in present-day Ghana, São Jorge da Mina gave the Portuguese better access to sources of West African gold.By the mid-1440s, a trading post was established on the small island off the coast of present-day Mauritania. The Portuguese established similar trading “factories” with the goal of tapping into local commercial networks. Portuguese traders acquired captives for export and numerous West African commodities such as ivory, peppers, textiles, wax, grain, and copper. They established colonies on previously uninhabited Atlantic African islands that would later serve as gathering areas for captives and commodities to be shipped to Iberia, and then to the Americas. By the 1460s, the Portuguese began colonizing the Cape Verde Islands (Cabo Verde). Additionally, the Portuguese sailors encountered the islands of São Tomé and Príncipe around 1470 with colonization beginning in the 1490s. These islands served as entrepôts for Portuguese commerce across western Africa.In 1453, the Ottoman Empire’s successful capture of Constantinople (Istanbul), Western Europe’s main source for spices, silks, and other luxury goods produced in the Arab World and Asia, added further incentive for European overseas expansion. In 1488, following years of Portuguese expeditions sailing along western Africa’s coastlines, Portuguese navigator Bartolomeu Dias famously sailed around the Cape of Good Hope. As a result, this opened up European access to the Indian Ocean. By the end of the century, Portuguese merchants surpasses Islamic commercial, political, and military grips in North Africa and in the eastern Mediterranean. A major outcome of Portuguese overseas expansion during this time was an intense rise in Iberian access to sub-Saharan trade networks. The following century gave way to Portugal’s expansion into western Africa leading Iberian merchants to recognize the economic opportunity of a widespread slave trading business.The Spanish and New World SlaverySpain was the first to make widespread use of enslaved Africans as a labor force in the colonial Americas. After his 1492 voyage, with support from the Spanish Crown and roughly one thousand Spanish colonists, Genoese merchant Christopher Columbus established the first European colony in the Americas on the island of Hispaniola. 
It has been reported that Columbus had previous involvement trading in West Africa and had visited the Canary Islands, where the Guanches had been enslaved by the Spanish and exported to Spain. While Columbus’ interests were mainly in gold, he realized Caribbean islanders’ value as slaves.In early 1495, preparing to return to Spain, he loaded his ships with five hundred enslaved Taínos from Hispaniola. Consequently, only three hundred survived. Spanish monarchs, Fernando II of Aragon and Isabel I of Castile, quickly cut his slaving activities short, attempting to compensate for the gold that was not flowing in. However, forced Amerindian labor grew progressively vital for the Spanish Royal policies. These policies were contradictory in a number of ways. While the Spanish Crown intended to protect Amerindians from abuse, they also expected them to accept Spanish rule, embrace Catholicism, and become accustom to a work regimen that was designed to make Spain’s overseas colonies profitable. In 1501, the royals ordered Hispaniola’s governor to return all property stolen from Taínos, and to pay them wages for the labor they performed. Additional reforms were outlined in the Laws of Burgos (1512), and later in the Laws of Granada (1526), however, they have been largely ignored by Spanish colonists. In the meantime, Spain’s royals granted colonists dominion over Amerindian subjects, convincing Indigenous populations to perform labor. This was an adaptation of the medieval encomienda, a quasi-feudal system in which Iberian Christians who performed military service were authorized to rule people and oversee resources in lands taken from Iberian Muslims.In spite of their opposition to the trans-Atlantic slave trade of Amerindians, the Crown allowed their enslavement and sale within the Americas. The first half of the sixteenth century saw Spanish colonists conducting raids throughout the Caribbean, transporting captives from Central America, northern South America, and Florida to Hispaniola and other Spanish colonies. There were two key arguments used to defend the enslavement of Amerindians. The first concept was “just war” against anyone who rebelled against the Crown or did not accept Christianity. The second concept was ransom meaning that any Amerindian held captive were eligible for purchase with the intention to Christianize them as well as rescue them from supposedly cannibalistic captors. The Spanish colonizers soon realized that forced enslavement and labor of Indigenous groups was not a feasible option. While the physical demands were intense, diseases such as smallpox, measles, chicken pox, and typhus devastated Indigenous populations, thus leading to a workforce that could not be sustained. Proponents of reform spoke out against Spanish colonization and abuses towards Amerindians, stating that it was deplorable on the grounds of religion and morality. Due to this mass decline of Indigenous populations, Emperor Charles V passed a series of laws in the 1540s known as the “New Laws of the Indies for the Good Treatment and Preservation of the Indians,” or just the “New Laws.”Among these new laws was the 1542 royal decree that abolished Amerindian slavery. Also, it was no longer a requirement for Indigenous people to provide free labor and Spanish colonists’ children could no longer inherit encomiendas. There were some oppositions to these changes from colonists in Mexico and Peru; places where colonists owned encomiendas similar to small kingdoms. 
As colonists complained and pushed back against the decree, some of the New Laws were partially enforced and some traditional practices were partially restored. On the contrary, Spanish colonists responding to declining Indigenous population began to search elsewhere for laborers to fulfill demand. As the Portuguese slave trade flourished, they set their sights on Africa.The Early Trans-Atlantic Slave TradeThe first political leader to manage the trans-Atlantic slave trade was Nicolas Ovando. He imported African captives from Spain to the island of Hispaniola. In 1502, Ovando became the third governor of the “Indies” following Christopher Columbus and Francisco de Bobadilla. Ovando was accused of indoctrinating Amerindians by the Catholic monarchs who argued that since they were converts, they should not have any contact with Muslims, Jews, or Protestants. Thus, the monarchs barred North African “Moorish” captives from being transported to the New World, however they allowed black captives and other captives who were born in Spain or Portugal. While Ovando at first resisted the trans-Atlantic slave trade, letters exchanged between Ovando and Spain after 1502 referred to captives exclusively as “negros,” or “blacks.”When the first captives arrived in Hispaniola, many immediately began resisting by escaping into the mountains and launching raids against Spanish settlements. In 1503, due to fears of African captives escaping and influencing Amerindians to revolt, Ovando petitioned the Spanish government to ban the trans-Atlantic slave trade. Shortly after, the indigenous of Hispaniola incited an uprising known as Enriquillo’s Revolt (1519–1533). This revolt demonstrates overlap with increasing African resistance and probably involved some involvement with enslaved Africans. In 1505, the governor sent a request to King Fernando II for seventeen captives to be sent to the mines in Hispaniola. To up the ante, the king used the labor of captives to increase gold production, and sent one hundred black captives from Spain directly to the governor. Over the next several years, the labor of African captives proved to be so effective that Ovando had 250 more African transported from Europe to work in the gold and copper mines.Between 1501 and 1518, the trans-Atlantic slave trade was comprised of Africans who were transported from Iberia. The Spanish Crown prohibited direct traffic from Africa because they feared that African captives would bring their African spiritual and religious practices to Indigenous populations thus interfering with Christian indoctrination. While the number of captive Africans was relatively low at this time, Hispaniola’s thriving population saw a dramatic decline from 60,000 to less than 20,000 from 1508–1518. Therefore, colonists needed laborers to maintain the colony’s gold mines and sugar industry. While the connection between race and slavery did not fully develop into a rigid racial hierarchy until the colonization of the Americas, specifically, North America, the Spanish Crown was adamant that African captives would come from sub-Saharan Africa.Section II: Passages to the New WorldIntroduction to Reading #2: Narrative of the Enslavement of Quobna Ottobah Cugoano, A Native of AfricaLike the plight of Equiano, Quobna Ottobah Cugoano (c. 1757– ?) was born in modern day Ghana and captured at the age of thirteen by a fellow African and sold to the British and forced into slavery. 
His memoir discusses his experiences during the Middle Passage and enslavement on a sugar cane plantation in Grenada located in the Caribbean. In 1772, after working on the plantation for two years, he was bought by an Englishman and taken to England. Here he converted to Christianity, obtained his freedom, and learn to read and write. He built relationships with Blacks in Britain such as Equiano and become involved in the movement to abolish the slave trade. The following excerpt provides some context into the first-hand experiences of the horrors of the Middle Passage from the point of view of Cugoano. Reading 1.2Narrative of the Enslavement of Ottabah Cugoano, A Native of AfricaOttabah CugoanoOttabah Cugoano, “Narrative of the Enslavement of Ottabah Cugoano, A Native of Africa,” The Negro’s Memorial; or, Abolitionist’s Catechism; by an Abolitionist, ed. Thomas Fisher, pp. 120–127. 1824.The following artless narrative, as given to the public by the subject of it, in 1787, fell into the hands of the author of the foregoing pages when they were nearly completed, and after that portion of his work to which it more particularly belonged had been printed off. It is, nevertheless, a narrative of such high interest, and exhibits the Slave-trade and Slavery in such striking colors, throwing light upon not a few of the most important facts which form the argument of this work, that he could not resist the temptation to give it in an appendix, leaving it to operate unassisted upon the minds of his readers, and to inspire them, according to their respective mental constitutions, either with admiration or detestation of the SLAVE-TRADE and NEGRO SLAVERY.I was early snatched away from my native country, with about eighteen or twenty more boys and girls, as we were playing in a field. We lived but a few days' journey from the coast where we were kidnapped, and as we were decoyed and drove along, we were soon conducted to a factory, and from thence, in the fashionable way of traffic, consigned to Grenada. Perhaps it may not be amiss to give a few remarks, as some account of myself, in this transposition of captivity.I was born in the city of Agimaque, on the coast of Fantyn; my father was a companion to the chief in that part of the country of Fantee, and when the old king died I was left in his house with his family; soon after I was sent for by his nephew, Ambro Accasa, who succeeded the old king in the chiefdom of that part of Fantee, known by the name of Agimaque and Assince. I lived with his children, enjoying peace and tranquillity, about twenty moons, which, according to their way of reckoning time, is two years. I was sent for to visit an uncle, who lived at a considerable distance from Agimaque. The first day after we set out we arrived at Assinee, and the third day at my uncle's habitation, where I lived about three months, and was then thinking of returning to my father and young companion at Agimaque; but by this time I had got well acquainted with some of the children of my uncle's hundreds of relations, and we were some days too venturesome in going into the woods to gather fruit and catch birds, and such amusements as pleased us. One day I refused to go with the rest, being rather apprehensive that something might happen to us; till one of my playfellows said to me, "Because you belong to the great men, you are afraid to “venture your carcase, or else of the bounsam,” which is the devil. 
This enraged me so much, that I set a resolution to join the rest, and we went into the woods, as usual but we had not been above two hours, before our troubles began, when several great ruffians came upon us suddenly, and said we had committed a fault against their lord, and we must go and answer for it ourselves before him.Some of us attempted, in vain, to run away, but pistols and cutlasses were soon introduced, threatening, that if we offered to stir, we should all lie dead on the spot. One of them pretended to be more friendly than the rest, and said that he would speak to their lord to get us clear, and desired that we should follow him; we were then immediately divided into different parties, and drove after him. We were soon led out of the way which we knew, and towards evening, as we came in sight of a town, they told us that this great man of theirs lived there, but pretended it was too late to go and see him that night. Next morning there came three other men, whose language differed from ours, and spoke to some of those who watched us all the night; but he that pretended to be our friend with the great man, and some others, were gone away. We asked our keeper what these men had been saying to them, and they answered, that they had been asking them and us together to go and feast with them that day, and that we must put off seeing the great man till after, little thinking that our doom was so nigh, or that these villains meant to feast on us as their prey. We went with them again about half a day's journey, and came to a great multitude of people, having different music playing; and all the day after we got there, we were very merry with the music, dancing, and singing. Towards the evening, we were again persuaded that we could not get back to where the great man lived till next day; and when bed-time came, we were separated into different houses with different people. When the next morning came, I asked for the men that brought me there, and for the rest of my companions; and I was told that they were gone to the sea-side, to bring home some rum, guns, and powder, and that some of my companions were gone with them, and that some were gone to the fields to do something or other. This gave me strong suspicion that there was some treachery in the case, and I began to think that my hopes of returning home again were all over. I soon became very uneasy, not knowing what to do, and refused to eat or drink, for whole days together, till the man of the house told me that he would do all in his power to get me back to my uncle; then I eat a little fruit with him, and had some thoughts that I should be sought after, as I would be then missing at home about five or six days. I inquired every day if the men had come back, and for the rest of my companions, but could get no answer of any satisfaction. I was kept about six days at this man's house, and in the evening there was another man came, and talked with him a good while and I heard the one say to the other he must go, and the other said, the sooner the better; that man came out and told me that he knew my relations at Agimaque, and that we must set out to-morrow morning, and he would convey me there. Accordingly we set out next day, and travelled till dark, when we came to a place where we had some supper and slept. He carried a large bag, with some gold dust, which he said he had to buy some goods at the sea-side to take with him to Agimaque. 
Next day we travelled on, and in the evening came to a town, where I saw several white people, which made me afraid that they would eat me, according to our notion, as children, in the inland parts of the country. This made me rest very uneasy all the night, and next morning I had some victuals brought, desiring me to eat and make haste, as my guide and kidnapper told me that he had to go to the castle with some company that were going there, as he had told me before, to get some goods. After I was ordered out, the horrors I soon saw and felt, cannot be well described; I saw many of my miserable countrymen chained two and two, some handcuffed, and some with their hands tied behind. We were conducted along by a guard, and when we arrived at the castle, I asked my guide what I was brought there for, he told me to learn the ways of the browfow, that is, the white-faced people. I saw him take a gun, a piece of cloth, and some lead for me, and then he told me that he must now leave me there, and went off. This made me cry bitterly, but I was soon conducted to a prison, for three days, where I heard the groans and cries of many, and saw some of my fellow-captives. But when a vessel arrived to conduct us away to the ship, it was a most horrible scene; there was nothing to be heard but the rattling of chains, smacking of whips, and the groans and cries of our fellow-men. Some would not stir from the ground, when they were lashed and beat in the most horrible manner. I have forgot the name of this infernal fort; but we were taken in the ship that came for us, to another that was ready to sail from Cape Coast. When we were put into the ship, we saw several black merchants coming on board, but we were all drove into our holes, and not suffered to speak to any of them. In this situation we continued several days in sight of our native land; but I could find no good person to give any information of my situation to Accasa at Agimaque. And when we found ourselves at last taken away, death was more preferable than life; and a plan was concerted amongst us, that we might burn and blow up the ship, and to perish all together in the flames: but we were betrayed by one of our own countrywomen, who slept with some of the headmen of the ship, for it was common for the dirty filthy sailors to take the African women and lie upon their bodies; but the men were chained and pent up in holes. It was the women and boys which were to burn the ship, with the approbation and groans of the rest; though that was prevented, the discovery was likewise a cruel bloody scene.But it would be needless to give a description of all the horrible scenes which we saw, and the base treatment which we met with in this dreadful captive situation, as the similar cases of thousands, which suffer by this infernal traffic, are well known. Let it suffice to say that I was thus lost to my dear indulgent parents and relations, and they to me. All my help was cries and tears, and these could not avail, nor suffered long, till one succeeding woe and dread swelled up another. Brought from a state of innocence and freedom, and, in a barbarous and cruel manner, conveyed to a state of horror and slavery, this abandoned situation may be easier conceived than described. From the time that I was kidnapped, and conducted to a factory, and from thence in the brutish, base, but fashionable way of traffic, consigned to Grenada, the grievous thoughts which I then felt, still pant in my heart; though my fears and tears have long since subsided. 
And yet it is still grievous to think that thousands more have suffered in similar and greater distress, Under the hands of barbarous robbers, and merciless task-masters; and that many, even now, are suffering in all the extreme bitterness of grief and woe, that no language can describe. The cries of some, and the sight of their misery, may be seen and heard afar; but the deep-sounding groans of thousands, and the great sadness of their misery and woe, under the heavy load of oppressions and calamities inflicted upon them, are such as can only be distinctly known to the ears of Jehovah Sabaoth.This Lord of Hosts, in his great providence, and in great mercy to me, made a way for my deliverance from Grenada. Being in this dreadful captivity and horrible slavery, without any hope of deliverance, for about eight or nine months, beholding the most dreadful scenes of misery and cruelty, and seeing my miserable companions often cruelly lashed, and, as it were, cut to pieces, for the most trifling faults; this made me often tremble and weep, but I escaped better than many of them. For eating a piece of sugar-cane, some were cruelly lashed, or struck over the face, to knock their teeth out. Some of the stouter ones, I suppose, often reproved, and grown hardened and stupid with many cruel beatings and lashings, or perhaps faint and pressed with hunger and hard labour, were often committing trespasses of this kind, and when detected, they met with exemplary punishment. Some told me they had their teeth pulled out, to deter others, and to prevent them from eating any cane in future. Thus seeing my miserable companions and countrymen in this pitiful, distressed, and horrible situation, with all the brutish baseness and barbarity attending it, could not but fill my little mind horror and indignation. But I must own, to the shame of my own countrymen, that I was first kidnapped and betrayed by some of my own complexion, who were the first cause of my exile, and slavery; but if there were no buyers there would be no sellers. So far as I can remember, some of the Africans in my country keep slaves, which they take in war, or for debt; but those which they keep are well fed, and good care taken of them, and treated well; and as to their clothing, they differ according to the custom of the country. But I may safely say, that all the poverty and misery that any of the inhabitants of Africa meet with among themselves, is far inferior to those inhospitable regions of misery which they meet with in the West-Indies, where their hard-hearted overseers have neither Regard to the laws of God, nor the life of their fellow-men.Thanks be to God, I was delivered from Grenada, and that horrid brutal slavery. A gentleman coming to England took me for his servant, and brought me away, where I soon found my situation become more agreeable. After coming to England, and seeing others write and read, I had a strong desire to learn, and getting what assistance I could, I applied myself to learn reading and writing, which soon became my recreation, pleasure, and delight; and when my master perceived that I could write some, he sent me to a proper school for that purpose to learn. Since, I have endeavoured to improve my mind in reading, and have sought to get all the intelligence I could, in my situation of life, towards the state of my brethren and countrymen in complexion, and of the miserable situation of those who are barbarously sold into captivity, and unlawfully held in slavery. 
https://youtu.be/S72vvfBTQwsTrans-Atlantic Slave TradeThe Transatlantic Slave Trade had three stages. During STAGE 1, slave ships departed from British ports like London, Liverpool, and Bristol making the journey to West Africa, carrying goods such as cloth, guns, ironware, and drink that had been made in Britain. On the West African coast, these goods would be traded for men, women, and children who had been captured by slave traders or bought from African chiefs.The second stage saw dealers kidnap people from villages up to hundreds of miles inland. One such person was Quobna Ottobah Cugoano who described how the slavers attacked with pistols and threatened to kill those who did not obey. The captives were forced to march long distances with their hands tied behind their backs and their necks connected by wooden yokes. The traders held the enslaved Africans until a ship appeared, and then sold them to a European or African captain. It often took a long time for a captain to fill his ship. He rarely filled his ship in one spot. Instead, he would spend three to four months sailing along the coast, looking for the fittest and cheapest slaves. Ships would sail up and down the coast filling their holds with enslaved Africans. This part of the journey, the coast, is referred to as the Point of No Return.During the horrifying Middle Passage, enslaved Africans were tightly packed onto ships that would carry them to their final destination. Numerous cases of violent resistance by Africans against slave ships and their crews were documented. The final stage, STAGE 3 occurred at the destination in the New World where enslaved Africans were sold to the highest bidder at slave auctions. They belonged to the plantation owner, like any other possession, and had no rights at all. Enslaved Africans were often punished very harshly and often resisted their enslavement in many ways, from revolution to silent, personal resistance. Some refused to be enslaved and took their own lives. Sometimes pregnant women preferred abortion to bringing a child into slavery. On the plantations, many enslaved Africans tried to slow down the pace of work by pretending to be ill, causing fires, or “accidentally” breaking tools.Running away was also a form of resistance. Some escaped to South America, England, northern American cities, or Canada. Additionally, enslaved people led hundreds of revolts, rebellions, and uprisings. Approximately two-thirds of enslaved Africans taken to the Americas ended up on sugar plantations. Sugar was used to sweeten another crop harvested by enslaved Africans in the West Indies—coffee. With the money made from the sale of enslaved Africans, goods such as sugar, coffee and tobacco were bought and carried back to Britain for sale. The ships were loaded with produce from the plantations for the voyage home. Resistance took many forms, some individual, some collective. Enslaved people resisted capture and imprisonment, attacked slave ships from the shore and engaged in shipboard revolts, fighting to free themselves and others. It is important to remember that there was resistance throughout the Transatlantic Slave Trade system beginning when Africans were first kidnapped. In some cases, resistance involved attacks from the shore, as well as ‘insurrections' aboard ships. Some captive Africans refused to be enslaved and took their own lives by jumping from slave ships or refusing to eat. 
As the system of slavery expanded, resistance will be demonstrated in various ways.Middle PassageThe Middle Passage refers to the part of the trade where Africans, densely packed onto ships, were transported across the Atlantic to the West Indies. The voyage took three to four months and, during this time, the enslaved people mostly lay chained in rows on the floor of the hold or on shelves that ran around the inside of the ships' hulls. There were no more than six hundred enslaved people on each ship. Captives from different nations were mixed together, making it difficult for them to communicate. Men were separated from women and children.Olaudah Equiano was a former enslaved African, seaman, and merchant who wrote an autobiography depicting the horrors of slavery and lobbied Parliament for its abolition. In his biography, he records he was born in what is now Nigeria, kidnapped and sold into slavery as a child. He then endured the middle passage on a slave ship bound for the New World.A great deal of sources remain such as captain's logbooks, memoirs, and shipping company records, all of which describe life on ships. For example, when asked if the slaves had ‘room to turn themselves or lie easy', a Dr Thomas Trotter replied: “By no means. The slaves that are out of irons are laid spoonways … and closely locked to one another. It is the duty of the first mate to see them stowed in this manner every morning … and when the ship had much motion at sea … they were often miserably bruised against the deck or against each other … I have seen the breasts heaving … with all those laborious and anxious efforts for life…” To the contrary, during a Parliamentary investigation, a witness to the slave trade, Robert Norris, described how “‘delightful' the slave ships were, arguing that enslaved people had sufficient room, air, and provisions. When upon deck, they made merry and amused themselves with dancing … In short, the voyage from Africa to the West Indies was one of the happiest periods of their life!”Horrors of the JourneyThe Middle Passage was a system that brutalized both sailors and enslaved people. The captain had total authority over those aboard the ship and was answerable to nobody. Captives usually outnumbered the crew by ten to one, so they were whipped or put in thumb screws if there was any sign of rebellion. Despite this, resistance was common. The European crews made sure that the captives were fed and forced them to exercise. On all ships, the death toll was high. Between 1680 and 1688, 23 out of every 100 people taken aboard the ships of the Royal African Company died in transit. When disease began to spread, the dying were sometimes thrown overboard. In November 1781, around 470 slaves were crammed aboard the slave ship Zong. During the voyage to Jamaica, many got sick. Seven crew and sixty Africans died. Captain Luke Collingwood ordered the sick enslaved Africans, 133 in total, thrown overboard, only one survived.When the Zong arrived back in England, its owners claimed for the value of the slaves from their insurers. They argued that they had little water, and the sick Africans posed a threat to the remaining cargo and crew. In 1783, the owners won their case. This case did much to illustrate the horrors of the trade and sway public opinion against it. The death toll amongst sailors was also terribly high, roughly twenty percent. Sometimes the crew would be harshly treated on purpose during the ‘middle passage'. 
Fewer hands were required on the third leg and wages could be saved if the sailors jumped ship in the West Indies. It was not uncommon to see injured sailors living in the Caribbean and North American ports. The Dolben Act was passed in 1788, which fixed the number of enslaved people in proportion to the ship's size, but conditions were still horrendous. Research has shown that a man was given a space of 6 feet by 1 foot 4 inches; a woman 5 feet by 1 foot 4 inches and girls 4 feet 6 inches by 1 foot.ReferencesBailey, Anne. Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame. Boston: Beacon Press, 2005.Mustakeem, Sowande. Slavery at Sea: Terror, Sex, and Sickness in the Middle Passage. Champaign, IL: University of Illinois Press, 2016.Smallwood, Stephanie. Saltwater Slavery: A Middle Passage from Africa to American Diaspora. Cambridge: Harvard University Press, 2008.Figure CreditsFig. 1.1: Copyright © by Grin20 (CC BY-SA 2.5) at https://commons.wikimedia.org/wiki/File:Africa_slave_Regions.svg.Fig. 1.2: Copyright © by Sémhur (CC BY-SA 3.0) at https://commons.wikimedia.org/wiki/File:Triangular_trade.png.Fig. 1.3: Copyright © by SimonP (CC BY-SA 2.0) at https://commons.wikimedia.org/wiki/File:Triangle_trade2.png.
Can I annotate an entire chapter?
-
-
www.ribbonfarm.com www.ribbonfarm.com
-
runaway explosion
Why would it be that? Because there's too much work that requires management and headcount?
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification
In the context of social media, replication refers to the process where content, or a modified version of it, is shared and distributed across platforms, reaching more viewers. When users share or remix content, the new version may include changes or additions, which are then carried forward as the content continues to spread. This creates a cycle where the modified version becomes the basis for future iterations, allowing both the original and the altered content to reach even larger audiences over time.
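To make the replication-with-modification idea concrete, here is a minimal Python sketch (my own illustration, not code from the textbook); the `Post` class and `reshare` function are hypothetical names invented for this example.

```python
# Hypothetical illustration (not from the textbook) of replication with
# modification: each reshare copies a post, and any edit made along the
# way is carried forward into every later copy.

class Post:
    def __init__(self, text, num_modifications=0):
        self.text = text                          # current content, including edits
        self.num_modifications = num_modifications

def reshare(post, edit=None):
    """Return a copy of the post; if an edit is supplied, the copy
    (and every copy made from it later) carries that edit forward."""
    if edit is None:
        return Post(post.text, post.num_modifications)
    return Post(post.text + " " + edit, post.num_modifications + 1)

original = Post("Local shelter is looking for volunteers this weekend")
copy1 = reshare(original)                               # exact replication
copy2 = reshare(copy1, edit="(update: Saturday only)")  # modified replication
copy3 = reshare(copy2)                                  # inherits the modification

print(copy3.text)               # includes the "(update: Saturday only)" edit
print(copy3.num_modifications)  # 1
```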
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content.
Knowing how recommendation algorithms work, users - especially content creators - will often adjust their strategies to amplify their content, such as by boosting engagement and using trending topics. This is critical for creators who rely on social media for income, as higher visibility can lead to more opportunities for monetization. However, this also raises ethical issues, as it can sometimes encourage sensationalism or low-quality content that exploits the system.
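As a rough illustration of that incentive, here is a small Python sketch (my own toy model, not any platform's actual algorithm); the weights and field names are invented for the example.

```python
# Toy engagement-based recommender (invented weights, not a real platform's
# algorithm). It shows why content optimized for engagement signals tends
# to be amplified over content that is merely liked.

def engagement_score(post):
    # In this toy model, shares are weighted more heavily than likes.
    return 2.0 * post["shares"] + 1.0 * post["likes"]

def recommend(posts, k=2):
    """Return the top-k posts ranked by engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)[:k]

posts = [
    {"title": "thoughtful essay",  "likes": 40, "shares": 5},
    {"title": "sensational claim", "likes": 25, "shares": 30},
    {"title": "cute cat photo",    "likes": 60, "shares": 10},
]

for post in recommend(posts):
    print(post["title"], engagement_score(post))
# The sensational post ranks first despite having the fewest likes,
# because the scoring function rewards shares -- the kind of incentive
# creators learn to optimize for.
```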
-
-
-
This is probably the most intuitive tip
Test annotation!
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
We mentioned Design Justice earlier, but it is worth reiterating again here that design justice includes considering which groups get to be part of the design process itself.
It is essential that Design Justice emphasizes not only the outcome of design but also who is involved in the process. If only dominant groups are part of the decision-making, we risk creating systems that unintentionally harm or exclude marginalized groups. Ensuring that all voices are represented can lead to more inclusive, equitable design solutions that truly serve diverse communities.
-
-
drive.google.com drive.google.com
-
Most songwriters, for instance, rely on a time-honored verse-chorus-verse pattern, and few people would call Shakespeare uncreative because he didn’t invent the sonnet or the dramatic forms that he used to such dazzling effect. Even the most avant-garde, cutting-edge artists like improvisational jazz musicians need to master the basic forms that their work improvises on, departs from, and goes beyond, or else their work will come across as uneducated child’s play
I understand these examples, but for some reason I don't think it is the same. Some songwriters do rely on the time-honored verse-chorus-verse pattern, but many don't. That is why I think writers can use their own language, their own creativity, and their own writing styles to make a great paper.
-
sophisticated thinking and writing, and they often require a great deal of practice and instruction to use successfully.
This reminds me of how we have been talking about writing to meet the status quo of the perfect paper.
-
Students are quick to see that no one person owns a conventional formula like “on the one hand . . . on the other hand. . . .” Phrases like “a controversial issue” are so commonly used and recycled that they are generic—community property that can be freely used without fear of committing plagiarism.
I am currently watching an episode of Gilmore Girls where one of the main characters brings up the question of whether commonly used catchphrases can count as plagiarism. Sometimes I live in fear of committing plagiarism: a thought comes into my mind and I get nervous that I read or heard it somewhere and will get flagged. At times I worry more about whether the sources are peer reviewed or in MLA/APA format than about the actual paper. It is hard to know what is and isn't plagiarism.
-
At strategic moments throughout your text, we recommend that you include what we call “return sentences.
Return sentences are a great writing technique, and I will make sure to use them in my next essay. The other templates are also very useful and give me a better understanding of what the writers are trying to teach us.
-
you could start with an illustrative quotation, a revealing fact or statistic, or—as we do in this chapter
I like to use a statistic at the beginning of my essay because I think it grabs people's attention and makes them curious about the rest of the essay.
-
to keep an audience engaged, a writer needs to explain what he or she is responding to
A writer should know their audience and work to keep them engaged, not lost, during the speech. It is also better to present each piece of information at the right time so the audience is comfortable with what is being said.
-
Because our speaker failed to mention what others had said about Dr. X’s work, he left his audience unsure about why he felt the need to say what he was saying
It is important to show why you are bringing in others' words and how doing so helps you introduce your own points.
-
-
alvinntnu.github.io alvinntnu.github.io
-
"male", "female"
Please note that I corrected the typos here. Alvin
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important paper demonstrates that different PKA subtypes exhibit distinct subcellular localization at rest in CA1 neurons. The authors provide compelling evidence that when all tested PKA subtypes are activated by norepinephrine, catalytic subunits translocate to dendritic spines but regulatory subunits remain unmoved. Furthermore, PKA-dependent regulation of synaptic plasticity and transmission can be supported only by wildtype, dissociable PKA, but not by inseparable PKA.
-
Reviewer #1 (Public review):
Summary:
This is a short self-contained study with a straightforward and interesting message. The paper focuses on settling whether PKA activation requires dissociation of the catalytic and regulatory subunits. This debate has been ongoing for ~ 30 years, with renewed interest in the question following a publication in Science, 2017 (Smith et al.). Here, Xiong et al demonstrate that fusing the R and C subunits together (in the same way as Smith et al) prevents the proper function of PKA in neurons. This provides further support for the dissociative activation model - it is imperative that researchers have clarity on this topic since it is so fundamental to building accurate models of localised cAMP signalling in all cell types. Furthermore, their experiments highlight that C subunit dissociation into spines is essential for structural LTP, which is an interesting finding in itself. They also show that preventing C subunit dissociation reduces basal AMPA receptor currents to the same extent as knocking down the C subunit. Overall, the paper will interest both cAMP researchers and scientists interested in fundamental mechanisms of synaptic regulation.
Strengths:
The experiments are technically challenging and well executed. Good use of control conditions, e.g. untransfected controls in Figure 4.
Weaknesses:
The novelty is lessened given the same team has shown dissociation of the C subunit into dendritic spines from RIIbeta subunits localised to dendritic shafts before (Tillo et al., 2017). Nevertheless, the experiments with RII-C fusion proteins are novel and an important addition.
-
Reviewer #2 (Public review):
Summary:
PKA is a major signaling protein that has long been studied and is vital for synaptic plasticity. Here, the authors examine the mechanism of PKA activity and specifically focus on addressing the question of PKA dissociation as a major mode of its activation in dendritic spines. This would potentially make it possible to determine the precise mechanisms of PKA activation and address how it maintains spatial and temporal signaling specificity.
Strengths:
The results convincingly show that PKA activity is governed by the subcellular localization in dendrites and spines and is mediated via subunit dissociation. The authors make use of organotypic hippocampal slice cultures, where they use pharmacology, glutamate uncaging, and electrophysiological recordings.
Overall, the experiments and data presented are well executed. The experiments all show that at least in the case of synaptic activity, distribution of PKA-C to dendritic spines is necessary and sufficient for PKA-mediated functional and structural plasticity.
The authors were able to persuasively support their claim that PKA subunit dissociation is necessary for its function and localization in dendritic spines. This conclusion is important to better understand the mechanisms of PKA activity and its role in synaptic plasticity.
Weaknesses:
While the experiments are indeed convincing and well executed, the data presented is similar to previously published work from the Zhong lab (Tillo et al., 2017, Zhong et al 2009). This reduces the novelty of the findings in terms of re-distribution of PKA subunits, which was already established, at least to some degree.
-
Reviewer #3 (Public review):
Summary:
Xiong et al. investigated the debated mechanism of PKA activation using hippocampal CA1 neurons under pharmacological and synaptic stimulations. Examining all major PKA-R isoforms in these neurons, they found that a portion of PKA-C dissociates from PKA-R and translocates into dendritic spines following norepinephrine bath application. Additionally, their use of a non-dissociable form of PKA demonstrates its essential role in structural long-term potentiation (LTP) induced by two-photon glutamate uncaging, as well as in maintaining normal synaptic transmission, as verified by electrophysiology. This study presents a valuable finding on the activation-dependent re-distribution of PKA catalytic subunits in CA1 neurons, a process vital for synaptic functionality. The robust evidence provided by the authors makes this work particularly relevant for biologists seeking to understand PKA activation mechanisms, its downstream effects, and synaptic plasticity.
Strengths:
The study is methodologically robust, particularly in the application of two-photon imaging and electrophysiology. The experiments are well-designed with effective controls and a comprehensive analysis. The credibility of the data is further enhanced by the research team's previous works in related experiments. The study provides sufficient evidence to support the classical model of PKA activation via dissociation in neurons.
Weaknesses:
No specific weaknesses are noted in the current study; future research could provide additional insights by exploring PKA dissociation under varied physiological conditions, particularly in vivo, to further validate and expand upon these findings.
-
Author response:
The following is the authors’ response to the original reviews.
New Experiments
(1) Activation-dependent dynamics of PKA with the RIα regulatory subunit, adding to the answer to Reviewers 1 and 2. To determine the dynamics of all PKA isoforms, we have added experiments that used PKA-RIα as the regulatory subunit. We found differential translocation between PKA-C (co-expressed with PKA-RIα) and PKA-RIα (Figure 1–figure supplement 3), similar to the results when PKA-RIIα or PKA-RIβ was used.
(2) PKA-C dynamics elicited by a low concentration of norepinephrine, addressing Reviewer 3’s comment. We have found that PKA-C (co-expressed with RIIα) exhibited similar translocation into dendritic spines in the presence of a 5x lowered concentration (2 μM) of norepinephrine, suggesting that the translocation occurs over a wide range of stimulus strengths (Figure 1-figure supplement 2).
Reviewer #1 (Public Review):
Summary:
This is a short self-contained study with a straightforward and interesting message. The paper focuses on settling whether PKA activation requires dissociation of the catalytic and regulatory subunits. This debate has been ongoing for ~ 30 years, with renewed interest in the question following a publication in Science, 2017 (Smith et al.). Here, Xiong et al demonstrate that fusing the R and C subunits together (in the same way as Smith et al) prevents the proper function of PKA in neurons. This provides further support for the dissociative activation model - it is imperative that researchers have clarity on this topic since it is so fundamental to building accurate models of localised cAMP signalling in all cell types. Furthermore, their experiments highlight that C subunit dissociation into spines is essential for structural LTP, which is an interesting finding in itself. They also show that preventing C subunit dissociation reduces basal AMPA receptor currents to the same extent as knocking down the C subunit. Overall, the paper will interest both cAMP researchers and scientists interested in fundamental mechanisms of synaptic regulation.
Strengths:
The experiments are technically challenging and well executed. Good use of control conditions, e.g. untransfected controls in Figure 4.
We thank the reviewer for their accurate summarization of the position of the study in the field and for the positive evaluation of our study.
Weaknesses:
The novelty is lessened given the same team has shown dissociation of the C subunit into dendritic spines from RIIbeta subunits localised to dendritic shafts before (Tillo et al., 2017). Nevertheless, the experiments with RII-C fusion proteins are novel and an important addition.
We thank the reviewer for noticing our earlier work. The first part of the current work is indeed an extension of previous work, as we have articulated in the manuscript. However, this extension is important because recent studies suggested that the majority of PKA-RIIβ is axonally localized. The primary PKA subtypes in the soma and dendrite are likely PKA-RIβ or PKA-RIIα. Although it is conceivable that the results from PKA-RIIβ can be extended to the other subunits, given the current debate in the field regarding PKA dissociation (or not), it remains important to conclusively demonstrate that these other regulatory subunit types also support PKA dissociation within intact cells in response to a physiological stimulant. To complete the survey for all PKA-R isoforms, we have now added data for PKA-RIα (New Experiment #1), as they are also expressed in the brain (e.g., https://www.ncbi.nlm.nih.gov/gene/5573). Additionally, as the reviewer points out, our second part is a novel addition to the literature.
Reviewer #2 (Public Review):
Summary:
PKA is a major signaling protein that has been long studied and is vital for synaptic plasticity. Here, the authors examine the mechanism of PKA activity and specifically focus on addressing the question of PKA dissociation as a major mode of its activation in dendritic spines. This would potentially allow us to determine the precise mechanisms of PKA activation and address how it maintains spatial and temporal signaling specificity.
Strengths:
The results convincingly show that PKA activity is governed by the subcellular localization in dendrites and spines and is mediated via subunit dissociation. The authors make use of organotypic hippocampal slice cultures, where they use pharmacology, glutamate uncaging, and electrophysiological recordings.
Overall, the experiments and data presented are well executed. The experiments all show that at least in the case of synaptic activity, the distribution of PKA-C to dendritic spines is necessary and sufficient for PKA-mediated functional and structural plasticity.
The authors were able to persuasively support their claim that PKA subunit dissociation is necessary for its function and localization in dendritic spines. This conclusion is important to better understand the mechanisms of PKA activity and its role in synaptic plasticity.
We thank the reviewer for their positive evaluation of our study.
Weaknesses:
While the experiments are indeed convincing and well executed, the data presented is similar to previously published work from the Zhong lab (Tillo et al., 2017, Zhong et al 2009). This reduces the novelty of the findings in terms of re-distribution of PKA subunits, which was already established. A few alternative approaches for addressing this question: targeting localization of endogenous PKA, addressing its synaptic distribution, or even impairing within intact neuronal circuits, would highly strengthen their findings. This would allow us to further substantiate the synaptic localization and re-distribution mechanism of PKA as a critical regulator of synaptic structure, function, and plasticity.
We thank the reviewer for noticing our earlier work. The first part of the current work is indeed an extension of previous work, as we have articulated in the manuscript. However, this extension is important because recent studies suggested that the majority of PKA-RIIβ is axonally localized. The primary PKA subtypes in the soma and dendrite are likely PKA-RIβ or PKA-RIIα. Although it is conceivable that the results from PKA-RIIβ can be extended to the other subunits, given the current debate in the field regarding PKA dissociation (or not), it remains important to conclusively demonstrate that these other regulatory subunit types also support PKA dissociation within intact cells in response to a physiological stimulant. To complete the survey for all PKA-R isoforms, we have now added data for PKA-RIα (New Experiment #1), as they are also expressed in the brain (e.g., https://www.ncbi.nlm.nih.gov/gene/5573). Additionally, as Reviewer 1 points out, our second part is a novel addition to the literature.
We also thank the reviewer for suggesting the experiments to examine PKA’s synaptic localization and dynamics as a key mechanism underlying synaptic structure and function. We agree that this is a very interesting topic. At the same time, we feel that this mechanistic direction is open-ended at this time and beyond what we try to conclude within this manuscript: prevention of PKA dissociation in neurons affects synaptic function. Therefore, we will save the suggested direction for future studies. We hope the reviewer understands.
Reviewer #3 (Public Review):
Summary:
Xiong et al. investigated the debated mechanism of PKA activation using hippocampal CA1 neurons under pharmacological and synaptic stimulations. Examining the two major PKA isoforms in these neurons, they found that a portion of PKA-C dissociates from PKA-R and translocates into dendritic spines following norepinephrine bath application. Additionally, their use of a non-dissociable form of PKA demonstrates its essential role in structural long-term potentiation (LTP) induced by two-photon glutamate uncaging, as well as in maintaining normal synaptic transmission, as verified by electrophysiology. This study presents a valuable finding on the activation-dependent re-distribution of PKA catalytic subunits in CA1 neurons, a process vital for synaptic functionality. The robust evidence provided by the authors makes this work particularly relevant for biologists seeking to understand PKA activation and its downstream effects essential for synaptic plasticity.
Strengths:
The study is methodologically robust, particularly in the application of two-photon imaging and electrophysiology. The experiments are well-designed with effective controls and a comprehensive analysis. The credibility of the data is further enhanced by the research team's previous works in related experiments. The conclusions of this paper are mostly well supported by data. The research fills a significant gap in our understanding of PKA activation mechanisms in synaptic functioning, presenting valuable insights backed by empirical evidence.
We thank the reviewer for their positive evaluation of our study.
Weaknesses:
The physiological relevance of the findings regarding PKA dissociation is somewhat weakened by the use of norepinephrine (10 µM) in bath applications, which might not accurately reflect physiological conditions. Furthermore, the study does not address the impact of glutamate uncaging, a well-characterized physiologically relevant stimulation, on the redistribution of PKA catalytic subunits, leaving some questions unanswered.
We agree with the Reviewer that testing under physiological conditions is critical, especially given the current debate in the literature. That is why we tested PKA dynamics induced by the physiological stimulant, norepinephrine. It has been suggested that, near the release site, local norepinephrine concentrations can be as high as tens of micromolar (Courtney and Ford, 2014). Based on this study, we have chosen a mid-range concentration (10 μM). At the same time, in light of the Reviewer’s suggestion, we have now also tested PKA-RIIα dissociation at a 5x lower concentration of norepinephrine (2 μM; New Experiment #2). The activation and translocation of PKA-C is also readily detectable under this condition, to a degree comparable to when 10 μM norepinephrine was used.
Regarding the suggested glutamate uncaging experiment, it is extremely challenging because of finite signal-to-noise ratios in our experiments. From our past studies, we know that activated PKA-C can diffuse three dimensionally, with a fraction as membrane-associated proteins and the other as cytosolic proteins. Although we have evidence that its membrane affinity allows it to become enriched in dendritic spines, it is not known (and is unlikely) that activated PKA-C is selectively targeted to a particular spine. Glutamate uncaging of a single spine presumably would locally activate a small number of PKA-C. It would be very difficult to trace the 3D diffusion of this small number of molecules in the presence of surrounding resting-state PKA-C molecules. Finally, we hope the reviewer agrees that, regardless of the result of the glutamate uncaging experiment, the above new experiment (New Experiment #2) already indicates that certain physiologically relevant stimuli can drive PKA-C dissociation from PKA-R and translocation to spines, supporting our conclusion.
Reviewer #2 (Recommendations For The Authors):
It was a pleasure reading your paper, and the results are well-executed and well-presented.
My main and only recommendations are two ways to further expand the scope of the findings.
First, I believe addressing the endogenous localization of PKA-C subunit before and after PKA activation would be highly important to validate these claims. Overexpression of tagged proteins often shows vastly different subcellular distribution than their endogenous counterparts. Recent technological advances with CRISPR/Cas9 gene editing (Suzuki et al Nature 2016 and Gao et al Neuron 2019 for example) which the Zhong lab recently contributed to (Zhong et al 2021 eLife) allow us to tag endogenous proteins and image them in fixed or live neurons. Any experiments targeting endogenous PKA subunits that support dissociation and synaptic localization following activation would be very informative and greatly increase the novelty and impact of their findings.
We agree that addressing the endogenous PKA dynamics is important. However, despite recent progress, endogenous labeling using CRISPR-based methods remains challenging and requires extensive optimization. This is especially true for signaling proteins whose endogenous abundance is often low. We have tried to label PKA catalytic subunits and regulatory subunits using both the homologous recombination-based method SLENDR and our own non-homologous end joining-based method CRISPIE. We did not succeed, in part because it is very difficult to see any signal under wide-field fluorescence conditions, which makes it difficult to screen different constructs for optimizing parameters. It is also possible that, at the endogenous abundance, the label is just not bright enough to be seen. Nevertheless, for both PKA type Iβ and type IIα that we studied in this manuscript, we have correlated the measured parameters (specifically, Spine Enrichment Index or SEI) with the overexpression level (Figure 1-figure supplement 1). We found that they are not strongly correlated with the expression level under our conditions. When extrapolated to non-overexpression conditions, our conclusion remains valid.
To overcome the inability to label endogenous PKA subunits using CRISPR-based methods, we have also attempted a conditional knock-in method called ENABLED, which we previously developed, to label PKA-Cα. In preliminary results, we found that endogenously labeled PKA was very dim. However, in a subset of cells that were bright enough to be quantified, the PKA catalytic subunit indeed translocated to dendritic spines upon stimulation (see Additional Fig. 1 on the next page), corroborating our results using overexpression. These results, however, are not ready to be published because characterization of the mouse line takes time and, at this moment, the signal-to-noise ratio remains low. We hope that the reviewer can understand.
Author response image 1.
Endogenous PKA-Cα translocates to dendritic spines upon activation.
Second, experiments which would advance and validate these findings in vivo would be highly valuable. This could be achieved in a number of ways - one would be overexpression of tagged PKA versions and examining sub-cellular distribution before and after physiological activation in vivo. Another possibility is in vivo perturbation - one would speculate that disruption or tethering of PKA subunits to the dendrite would lead to cell-specific functional and structural impairments. This could be achieved in a similar manner to the in vitro experiments, with a PKA KO and replacement strategy of the tethered C-R plasmid, followed by structural or functional examination of neurons.
I would like to state that these experiments are not essential in my opinion, but any improvements in one of these directions would greatly improve and extend the impact and findings of this paper.
We thank the reviewer for the suggestion and the understanding. The suggested in vivo experiments are fascinating. However, in vivo imaging of dendritic spine morphology is already in itself challenging. The difficulty greatly increases when trying to detect partial, likely transient translocation of a signaling protein. It is also very difficult to knock down endogenous PKA while simultaneously expressing the R-C construct in a large number of cells to achieve detectable circuit or behavioral effect (and hope that compensation does not happen over weeks). We hope the reviewer agrees that these experiments would be their own project and go beyond the time and scope of the current study.
Reviewer #3 (Recommendations For The Authors):
Please elaborate on the methods used to visualize PKA-RIIα and PKA-RIβ subunits.
As suggested, we have now included additional details for visualizing PKA-Rs in the text. Specifically, we write (pg. 5): “…, as visualized using expressed PKA-R-mEGFP in separate experiments (Figs. 1A-1C).”.
-
-
-
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
Summary:
The authors examined the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). Using all-atom molecular dynamics simulations, they identified four distinct classes of salt dependence in the phase separation of intrinsically disordered proteins (IDPs), which can be predicted based on their amino acid composition. However, the simulations and analysis, in their current form, are inadequate and incomplete.
Strengths:
The authors attempt to unravel the mechanistic insights into the interplay between salt and protein phase separation, which is important given the complex behavior of salt effects on this process. Their effort to correlate the influence of salt on the low-complexity domain of hnRNPA1 (A1-LCD) with a range of other proteins known to undergo salt-dependent phase separation is an interesting and valuable topic.
Weaknesses:
(1) The simulations performed are not sufficiently long (Figure 2A) to accurately comment on phase separation behavior. The simulations do not appear to have converged well, indicating that the system has not reached a steady state, rendering the analysis of the trajectories unreliable.
We have extended the simulations for an additional 500 ns, to 1500 ns. The last 500 ns show reasonably good convergence (see Figure 2A).
(2) The majority of the data presented shows no significant alteration with changes in salt concentration. However, the authors have based conclusions and made significant comments regarding salt activities. The absence of error bars in the data representation raises questions about its reliability. Additionally, the manuscript lacks sufficient scientific details of the calculations.
We have now included error bars. With the error bars, the salt dependences of all the calculated properties (except for Rg) show a clear trend. Additionally, we have expanded the descriptions of our calculations (p. 15-16).
(3) In Figures 2B and 2C, the changes in the radius of gyration and the number of contacts do not display significant variations with changes in salt concentration. The change in the radius of gyration with salt concentration is less than 1 Å, and the number of contacts does not change by at least 1. The authors' conclusions based on these minor changes seem unfounded.
The variation of ~ 1 Å for the calculated Rg is similar to the counterpart for the experimental Rg. As for the number of contacts, note that this property is presented on a per-residue basis, so a value of 1 means that each residue picks up one additional contact, or each protein chain gains a total of 131 contacts, when the salt concentration is increased from 50 to 1000 mM.
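For readers tracking the arithmetic, a minimal Python sketch (placeholder values, not the simulation data) of how a per-residue gain of one contact translates into ~131 additional contacts for a 131-residue chain:

```python
import numpy as np

# Placeholder per-residue contact counts for one 131-residue A1-LCD chain;
# the numbers are illustrative, not taken from the trajectories.
n_residues = 131
rng = np.random.default_rng(0)
contacts_50mM = rng.uniform(2.0, 3.0, n_residues)
contacts_1000mM = contacts_50mM + 1.0   # each residue gains ~1 contact at high salt

# A per-residue gain of 1 corresponds to ~131 extra contacts per chain.
extra_per_chain = (contacts_1000mM - contacts_50mM).sum()
print(f"Extra contacts per chain: {extra_per_chain:.0f}")   # 131
```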
Reviewer #2 (Public Review):
This is an interesting computational study addressing how salt affects the assembly of biomolecular condensates. The simulation data are valuable as they provide a degree of atomistic details regarding how small salt ions modulate interactions among intrinsically disordered proteins with charged residues, namely via Debye-like screening that weakens the effective electrostatic interactions among the polymers, or through bridging interactions that allow interactions between like charges from different polymer chains to become effectively attractive (as illustrated, e.g., by the radial distribution functions in Supplementary Information). However, this manuscript has several shortcomings:
(i) Connotations of the manuscript notwithstanding, many of the authors' concepts about salt effects on biomolecular condensates have been put forth by theoretical models, at least back in 2020 and even earlier. Those earlier works afford extensive information such as considerations of salt concentrations inside and outside the condensate (tie-lines). But the authors do not appear to be aware of this body of prior works and therefore missed the opportunity to build on these previous advances and put the present work with its complementary advantages in structural details in the proper context.
(ii) There are significant experimental findings regarding salt effects on condensate formation [which have been modeled more recently] that predate the A1-LCD system (ref.19) addressed by the present manuscript. This information should be included, e.g., in Table 1, for sound scholarship and completeness.
(iii) The strengths and limitations of the authors' approach vis-à-vis other theoretical approaches should be discussed with some degree of thoroughness (e.g., how the smallness of the authors' simulation system may affect the nature of the "phase transition" and the information that can be gathered regarding salt concentration inside vs. outside the "condensate" etc.). Accordingly, this manuscript should be revised to address the following. In particular, the discussion in the manuscript should be significantly expanded by including references mentioned below as well as other references pertinent to the issues raised.
(1) The ability to use atomistic models to address the questions at hand is a strength of the present work. However, presumably because of the computational cost of such models, the "phase-separated" "condensates" in this manuscript are extremely small (only 8 chains). An inspection of Fig. 1 indicates that while the high-salt configuration (snapshot, bottom right) is more compact and droplet-like than the low-salt configuration (top right), it is not clear that the 50 mM NaCl configuration can reasonably correspond to a dilute or homogeneous phase (without phase separation) or just a condensate with a lower protein concentration because the chains are still highly associated. One may argue that they become two droplets touching each other (the chains are not fully dispersed throughout the simulation box, unlike in typical coarse-grained simulations of biomolecular phase separation). While it may not be unfair to argue from this observation that the condensed phase is less stable at low salt, this raises critical questions about the adequacy of the approach as a stand-alone source of theoretical information. Accordingly, an informative discussion of the limitation of the authors' approach and comparisons with results from complementary approaches such as analytical theories and coarse-grained molecular dynamics will be instructive, even imperative, especially since such results exist in the literature (please see below).
We now discuss the limitations of our all-atom simulations and also other approaches (p. 13; see below).
(2) The aforementioned limitation is reflected by the authors' choice of using Dmax as a sort of phase separation order parameter. However, no evidence was shown to indicate that Dmax exhibits a two-state-like distribution expected of phase separation. It is also not clear whether a Dmax value corresponding to the linear dimension of the simulation box was ever encountered in the authors' simulated trajectories such that the chains can be reliably considered to be essentially fully dispersed as would be expected for the dilute phase. Moreover, as the authors have noted in the second paragraph of the Results, the variation of Dmax with simulation time does not show a monotonic rank order with salt concentration. The authors' explanation is equivalent to stipulating that the simulation system has not fully equilibrated, inevitably casting doubt on at least some of the conclusions drawn from the simulation data.
First off, with the extended simulations, the Dmax values converge to a tiered rank order, with successively decreasing values from low salt (50 mM) to intermediate salt (150 and 300 mM) to high salt (500 and 1000 mM). Secondly, as we now state (p. 13), our low-salt simulations mimic a homogeneous solution whereas our high-salt simulations mimic the dense phase of a phase-separated system. The intermediate-salt simulations also mimic the dense phase but at a somewhat lower concentration (hence the intermediate Dmax value).
(3) With these limitations, is it realistic to estimate possible differences in salt concentration between the dilute and condensed phases in the present work? These features, including tie-lines, were shown to be amenable to analytical theory and coarse-grained molecular dynamics simulation (please see below).
The differences in salt effects that we report do not represent those between two phases. Rather, as explained in the preceding reply, they represent differences between a homogeneous solution at low salt and the dense phase at higher salt. We also acknowledge salt effects calculated by analytical theory and coarse-grained simulations (p. 13).
(4) In the comparison in Fig.2B between experimental and simulated radius of gyration as a function of [NaCl], there is an outlier among the simulated radii of gyration at [NaCl] ~ 250 mM. An explanation should be offered.
After extending the simulations and analyzing the last 500 ns, the Rg data no longer show an outlier though still have some fluctuations from one salt concentration to another.
(5) The phenomenon of no phase separation at zero and low salt and phase separation at higher salt has been observed for the IDP Caprin1 and several of its mutants [Wong et al., J Am Chem Soc 142, 2471-2489 (2020) [https://pubs.acs.org/doi/full/10.1021/jacs.9b12208], see especially Fig. 9 of this reference]. This work should be included in the discussion and added to Table 1.
We now have added Caprin1 to Table 1 (new ref 26) and discuss this paper (p. 13).
(6) The authors stated in the Introduction that "A unifying understanding of how salt affects the phase separation of IDPs is still lacking". While it is definitely true that much remains to be learned about salt effects on IDP phase separation, the advances that have already been made regarding salt effects on IDP phase separation are more abundant than conveyed by this narrative. For instance, an analytical theory termed rG-RPA was put forth in 2020 to provide a uniform (unified) treatment of salt, pH, and sequence-charge-pattern effects on polyampholytes and polyelectrolytes (corresponding to the authors' low net charge and high net charge cases). This theory offers a means to predict salt-IDP tie-lines and a comprehensive account of salt effects on polyelectrolytes resulting in a lack of phase separation at extremely low salt and subsequent salt-enhanced phase separation (similar to the case the authors studied here) and in some cases re-entrant phase separation or dissolution [Lin et al., J Chem Phys 152, 045102 (2020) [https://doi.org/10.1063/1.5139661]]. This work is highly relevant and it already provided a conceptual framework for the authors' atomistic results and subsequent discussion. As such, it should definitely be a part of the authors' discussion.
We now cite this paper (new ref 34) in Introduction (p. 4). We also discuss its results for Caprin1 (new ref 18; p. 13).
(7) Bridging interactions by small ions resulting in effective attractive interactions among polyelectrolytes leading to their phase separation have been demonstrated computationally by Orkoulas et al., Phys Rev Lett 90, 048303 (2003) [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.90.048303]. This result should also be included in the discussion.
We now cite this paper (new ref 41; p. 11).
(8) More recently, the salt-dependent phase separations of Caprin1, its RtoK variants and phosphorylated variant (see item #5 above) were modeled (and rationalized) quite comprehensively using rG-RPA, field-theoretic simulation, and coarse-grained molecular dynamics [Lin et al., arXiv:2401.04873 [https://arxiv.org/abs/2401.04873]], providing additional data supporting a conceptual perspective put forth in Lin et al. J Chem Phys 2020 (e.g., salt-IDP tie-lines, bridging interactions, reentrance behaviors etc.) as well as in the authors' current manuscript. It will be very helpful to the readers of eLife to include this preprint in the authors' discussion, perhaps as per the authors' discretion along the manner in which other preprints are referenced and discussed in the current version of the manuscript.
We now cite this paper (new ref 18) and discuss it along with new ref 26 in Discussion (p. 13).
Reviewer #3 (Public Review):
Summary:
This study investigates the salt-dependent phase separation of A1-LCD, an intrinsically disordered region of hnRNPA1 implicated in neurodegenerative diseases. The authors employ all-atom molecular dynamics (MD) simulations to elucidate the molecular mechanisms by which salt influences A1-LCD phase separation. Contrary to typical intrinsically disordered protein (IDP) behavior, A1-LCD phase separation is enhanced by NaCl concentrations above 100 mM. The authors identify two direct effects of salt: neutralization of the protein's net charge and bridging between protein chains, both promoting condensation. They also uncover an indirect effect, where high salt concentrations strengthen pi-type interactions by reducing water availability. These findings provide a detailed molecular picture of the complex interplay between electrostatic interactions, ion binding, and hydration in IDP phase separation.
Strengths:
Novel Insight: The study challenges the prevailing view that salt generally suppresses IDP phase separation, highlighting A1-LCD's unique behavior.
Rigorous Methodology: The authors utilize all-atom MD simulations, a powerful computational tool, to investigate the molecular details of salt-protein interactions.
Comprehensive Analysis: The study systematically explores a wide range of salt concentrations, revealing a nuanced picture of salt effects on phase separation.
Clear Presentation: The manuscript is well-written and logically structured, making the findings accessible to a broad audience.
Weaknesses:
Limited Scope: The study focuses solely on the truncated A1-LCD, omitting simulations of the full-length protein. This limitation reduces the study's comparative value, as the authors note that the full-length protein exhibits typical salt-dependent behavior. A comparative analysis would strengthen the manuscript's conclusions and broaden its impact.
Perhaps we did not impress on the reviewer how expensive the all-atom MD simulations on A1-LCD were: the systems each contained half a million atoms and the simulations took many months to complete. That said, we agree with the reviewer that, ideally, a comparative study on a protein showing the typical screening class of salt dependence would have made our work more complete. However, we are confident of the conclusions for several reasons. First, the three salt effects – charge neutralization, bridging, and strengthening of pi-type interactions – revealed by the all-atom simulations are physically sound and well-supported by other studies. Second, these effects led us to develop a unified picture for the salt dependence of homotypic phase separation, in the form of a predictor for the classes of salt dependence based on amino-acid composition. This predictor works well for nearly 30 proteins. Third, recent studies using analytical theory and coarse-grained simulations (new ref 18) also strongly support our conclusions.
Reviewer #1 (Recommendations For The Authors):
(1) In Figure 1, the color scheme should be updated and the figure remade, as the current set of color choices makes it very difficult to distinguish the magenta spheres.
We have increased the sizes of ions in Figure 1 to make them distinguishable.
(2) Within the framework of atomistic simulations, the influence of salt concentration alteration on protein conformational plasticity is worth investigating. This could be correlated (with proper details) with the effect of salt-concentration-modulated protein aggregation behavior.
We now use RMSF to measure conformational plasticity, which shows a clear salt-dependent trend with a 27% reduction in fluctuations from 50 mM to 1000 mM NaCl (new Fig. S1).
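For context, a minimal sketch of a per-atom RMSF calculation on an aligned trajectory; the coordinates below are synthetic stand-ins, not the authors' data, and the ~27% reduction is imposed by construction purely to illustrate the comparison:

```python
import numpy as np

def rmsf(coords):
    """Per-atom root-mean-square fluctuation.

    coords: (n_frames, n_atoms, 3) array, assumed already aligned to a common
    reference so that rigid-body motion has been removed.
    """
    mean_pos = coords.mean(axis=0)                     # average structure
    disp2 = ((coords - mean_pos) ** 2).sum(axis=-1)    # squared deviation per frame and atom
    return np.sqrt(disp2.mean(axis=0))                 # shape (n_atoms,)

# Synthetic trajectories: fluctuations at high salt scaled down by ~27%
rng = np.random.default_rng(1)
traj_50mM = rng.normal(scale=1.0, size=(500, 131, 3))
traj_1000mM = rng.normal(scale=0.73, size=(500, 131, 3))
print(rmsf(traj_50mM).mean(), rmsf(traj_1000mM).mean())
```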
(3) The authors should mention the protein concentrations employed in the simulations and whether these are consistent with experimentally used concentrations.
We have mentioned the initial concentration (3.5 mM). We now further state that this concentration is maintained in the low-salt simulations, indicating absence of phase separation, but is increased to 23 mM in the high-salt simulations, indicating phase separation. The latter value is consistent with the measured concentrations in the dense phase (last two paragraphs of p. 5).
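As a rough consistency check (not a calculation from the paper), the relation between chain count, occupied volume, and molar concentration can be sketched as follows; the box volume is a hypothetical value chosen only to reproduce ~3.5 mM for 8 chains:

```python
N_A = 6.02214076e23   # Avogadro's number, 1/mol

def concentration_mM(n_chains, volume_nm3):
    """Concentration in mM of n_chains occupying volume_nm3 (1 nm^3 = 1e-24 L)."""
    return n_chains / (N_A * volume_nm3 * 1e-24) * 1e3

print(concentration_mM(8, 3800.0))         # ~3.5 mM: 8 chains in ~3800 nm^3 (illustrative volume)
print(concentration_mM(8, 3800.0 / 6.6))   # ~23 mM if the same chains occupy ~1/6.6 of that volume
```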
(4) It would be useful to test the salt effect for at least two extreme salt concentrations at various protein concentrations, consistent with experimental protein concentration ranges.
In simulation studies of short peptides (ref 37), we have shown that the initial concentration does not affect the final concentration in the dense phase, as expected for phase-separation systems. We expect that the same will be true for the A1-LCD system at intermediate and high salt where phase separation occurs. Though this expectation could be tested by simulations at a different initial protein concentration, such simulations would be expensive but unlikely to yield new physical insight.
(5) Importantly, the simulations do not appear to have converged well enough (Figure 2A). The authors should extend the simulation trajectories to ensure the system has reached a steady state.
We extended the simulations for an additional 500 ns, which now appear to show convergence. In Figure 2A we now see Dmax values converge to a tiered rank order, with successively decreasing values from low salt (50 mM) to intermediate salt (150 and 300 mM) to high salt (500 and 1000 mM).
(6) The authors mention "phase separation" in the title, but with only a 1 μs simulation trajectory, it is not possible to simulate a phenomenon like phase separation accurately. Since atomistic simulations cannot realistically capture phase separation on this timescale, a coarse-grained approach is more suitable. To properly explore salt effects in the context of phase separation, long timescale simulation trajectories should be considered. Otherwise, the data remain unreliable.
Our all-atom simulations revealed rich salt effects that might have been missed in coarse-grained simulations. It is true that coarse-grained models allow the simulations of the phase separation process, but as we have recently demonstrated (refs 36 and 37), all-atom simulations on the μs timescale are also able to capture the spontaneous phase separation of peptides and small IDPs. A1-LCD is much larger than those systems, so we had to use a relatively small chain number (8 chains here vs 64 used in ref 37 and 16 used in ref 37). Still, we observe the condensation into a dense phase at high salt. We discuss the pros and cons of all-atom vs. coarse-grained simulations on p. 13.
(7) In Figure 5E, the plot does not show that g(r) has reached 1. If it does, the authors should show the full curve. The same issue remains with supplementary figures 1, 2, 3, etc.
We now show the approach to 1 in the insets of Figs. S2, S3, S4, and 5E.
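For readers unfamiliar with the normalization, a self-contained sketch (synthetic points, not the simulation data) of why g(r) approaches 1 at large separations once pair counts are divided by the ideal-gas expectation:

```python
import numpy as np

def pair_rdf(pos_a, pos_b, box, r_max, n_bins=100):
    """Cross g(r) between two particle sets in a cubic periodic box of side `box`.

    Counts are normalized by the ideal-gas expectation, so g(r) -> 1 at large r
    for an uncorrelated (bulk-like) distribution.
    """
    diff = pos_a[:, None, :] - pos_b[None, :, :]
    diff -= box * np.round(diff / box)            # minimum-image convention
    dist = np.linalg.norm(diff, axis=-1).ravel()

    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dist, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shell_vol * (len(pos_b) / box ** 3) * len(pos_a)
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal

rng = np.random.default_rng(2)
L = 60.0
a, b = rng.uniform(0, L, (400, 3)), rng.uniform(0, L, (400, 3))
r, g = pair_rdf(a, b, L, r_max=L / 2)
print(np.round(g[-5:], 2))   # ~1.0 at large r for uncorrelated points
```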
(8) None of the data is represented with error bars. The authors should include error bars in their data representations.
We have now included error bars in all graphs that report average values.
(9) The authors state that "the net charge of the system reduces to only +8 at 1000 mM NaCl (Figure 3C)" but do not explain how this was calculated.
We now add this explanation in methods (p. 16).
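Purely as an illustration of one way such a tally could be made (a guess at the bookkeeping, not the procedure described in the methods), the system net charge could be taken as the formal protein charge plus bound counterions within a distance cutoff:

```python
import numpy as np

def n_bound_ions(ion_pos, protein_pos, cutoff=3.0):
    """Count ions with at least one protein atom within `cutoff` (arbitrary units here)."""
    # Naive O(N*M) distance check; a production analysis would use a neighbor list.
    d = np.linalg.norm(ion_pos[:, None, :] - protein_pos[None, :, :], axis=-1)
    return int((d.min(axis=1) < cutoff).sum())

# Placeholder coordinates and charges, for illustration only
rng = np.random.default_rng(3)
protein_pos = rng.uniform(0, 100, (1000, 3))
na_pos = rng.uniform(0, 100, (300, 3))
cl_pos = rng.uniform(0, 100, (300, 3))
charge_per_chain = 4                      # placeholder net charge per chain
formal_charge = 8 * charge_per_chain      # 8 chains in the system

net_charge = formal_charge + n_bound_ions(na_pos, protein_pos) - n_bound_ions(cl_pos, protein_pos)
print(net_charge)
```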
(10). The authors mention "similar to the role played by ATP molecules in driving phase separation of positively charged IDPs." However, ATP can inhibit aggregation, and its induction of phase separation is concentration-dependent. Given ATP's large aromatic moiety, its comparison to ions is not straightforward and is more complex. This comparison can be at best avoided.
In this context we are comparing the bridging capability of ATP molecules in driving phase separation of positively charged IDPs in ref 36 to the bridging capability of the ions here. In ref 36 the authors show ATP bridging interactions between protein chains similar to what we show here with ions.
(11) Many calculations are vaguely represented. The process for calculating the number of bridging ions, for example, is not well documented. The authors should provide sufficient details to allow for the reproducibility of the data.
We have now expanded the methods section to include more detailed information on calculations done.
Reviewer #3 (Recommendations For The Authors):
Include error bars or standard deviations for all results averaged over four replicates, particularly for the number of ions and contacts per residue. This would provide a clearer picture of the data's reliability and variability.
We have now included error bars in all graphs that report averaged values.
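A trivial sketch of the kind of uncertainty implied, i.e., a mean with its standard error (or standard deviation) across the four replicate runs; the per-replicate values are placeholders:

```python
import numpy as np

replicates = np.array([23.1, 21.8, 24.0, 22.5])   # placeholder per-replicate averages
mean = replicates.mean()
sem = replicates.std(ddof=1) / np.sqrt(len(replicates))
print(f"{mean:.1f} +/- {sem:.1f}")
```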
Strengthen the support for the conclusion that "each Arg sidechain often coordinates two Cl- ions, multiple backbone carbonyls often coordinate a single Na+ ion." While Fig. 3A clearly demonstrates Arg-Cl- coordination, the Na+ coordination claim for a 131-residue protein requires further clarification. Consider including the integration profile of radial distribution functions for Na+ ions to bolster this assertion.
We now report the number of Na+ ions that coordinate with multiple backbone carbonyls (p. 7) as well as the number of Na+ ions that bridge between A1-LCD chains via coordination with multiple backbone carbonyls (p. 9). Please note that Figure 4A right panel displays an example of Na+ coordinating with multiple backbone carbonyls.
Address the following typographical errors in the main text:
- Page 11, line 25: "distinct classes of sat dependence" should be "distinct classes of salt dependence"
- Page 14, line 9: "for Cl- and 3.0 and 5.4 A" should be "for Cl- and 3.0 and 5.4 Å"
- Page 14, line 18: "As a control, PRDFs for water were also calculated" should be "As a control, RDFs for water were also calculated" (assuming PRDF was meant to be RDF)
We have now corrected these typos.
Consider expanding the study to include simulations of the full-length protein to provide a more comprehensive comparison between the truncated A1-LCD and the complete protein's behavior in various salt concentrations.
As we explained above, even with eight chains of A1-LCD, which has 131 residues, the systems already contain half a million atoms each and the all-atom simulations took many months to complete. Full-length A1 has 314 residues so a multi-chain system would be too large to be feasible for all-atom simulations.
-
eLife Assessment
In this potentially important study, the authors conducted atomistic simulations to probe the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). The authors have identified both direct and indirect mechanisms of salt modulation, provided explanations for four distinct classes of salt dependence, and proposed a model for predicting protein properties from amino acid composition. There is a range of opinions regarding the strength of evidence, with some considering the evidence incomplete due to limitations in the length and statistical errors of the computationally intensive atomistic MD simulations.
-
Reviewer #1 (Public review):
Summary:
The authors examined the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). Using all-atom molecular dynamics simulations, they identified four distinct classes of salt dependence in the phase separation of intrinsically disordered proteins (IDPs), which can be predicted based on their amino acid composition. However, the simulations and analysis, in their current form, are inadequate and incomplete.
Strengths:
The authors attempt to unravel the mechanistic insights into the interplay between salt and protein phase separation, which is important given the complex behavior of salt effects on this process. Their effort to correlate the influence of salt on the low-complexity domain of hnRNPA1 (A1-LCD) with a range of other proteins known to undergo salt-dependent phase separation is an interesting and valuable topic.
Weaknesses:
Based on the reviewer's assessment of the manuscript, the following points were raised:
(1) The simulation duration is too short to draw comprehensive conclusions about phase separation.
(2) There are concerns regarding the convergence of the simulations, particularly as highlighted in Figure 2A.
(3) The simulation begins with a protein concentration of 3.5 mM ("we built an 8-copy model for the dense phase (with an initial concentration of 3.5 mM)"), which is high for phase separation studies. The reviewer questions the use of the term "dense phase" and suggests that the authors conduct a clearer analysis depicting the coexistence of both the dilute and dense phases to represent a steady state. Without this, the realism of the described phenomena is doubtful. Commenting on phase separation under conditions that don't align with typical phase separation parameters is not acceptable.
(4) The inference that "Each Arg sidechain often coordinates two Cl- ions simultaneously, but each Lys sidechain coordinates only one Cl- ion" is questioned. According to Supplementary Figure 2A, Lys seems to coordinate with Cl- ions more frequently than Arg.
(5) The authors are requested to update the figure captions for Supplementary Figures 2 and 3, specifying which system the analyses were performed on.
(6) It is difficult to observe a clear trend due to irregularities in the data. Although the authors have included a red dotted line in the figures, the trend is not monotonic. The reviewer expresses concerns about significant conclusions drawn from these figures (e.g., Figure 2C, Figure 5A, Supplementary Figure 1).
(7) Given the error in the radius of gyration (Rg) calculations, the reviewer questions the validity of drawing conclusions from this data.
(8) The pair correlation function values in Figure 5E and Supplementary Figure 4 show only minor differences, and the reviewer questions whether these differences are significant.
(9) Previous reports suggest that, upon self-assembly, protein chains extend within the condensate, leading to a decrease in intramolecular contacts. However, the authors show an increase in intramolecular contacts with increasing salt concentration (Figure 2C), which contradicts prior studies. The reviewer advises the authors to carefully review this and provide justification.
(10) A systematic comparison of estimated parameters with varying salt concentrations is required. Additionally, the authors should provide potential differences in salt concentrations between the dilute and condensed phases.
(11) The reviewer finds that the majority of the data presented shows no significant alteration with changes in salt concentration, yet the authors have made strong conclusions regarding salt activity.
The manuscript lacks sufficient scientific details of the calculations.
-
Reviewer #2 (Public review):
This is an interesting computational study addressing how salt affects the assembly of biomolecular condensates. The simulation data are valuable as they provide a degree of atomistic details regarding how small salt ions modulate interactions among intrinsically disordered proteins with charged residues, namely via Debye-like screening that weakens the effective electrostatic interactions among the polymers, or through bridging interactions that allow interactions between like charges from different polymer chains to become effectively attractive (as illustrated, e.g., by the radial distribution functions in Supplementary Information). However, this manuscript has several shortcomings:
(i) Connotations of the manuscript notwithstanding, many of the authors' concepts about salt effects on biomolecular condensates have been put forth by theoretical models, at least back in 2020 and even earlier. Those earlier works afford extensive information such as considerations of salt concentrations inside and outside the condensate (tie-lines). But the authors do not appear to be aware of this body of prior works and therefore missed the opportunity to build on these previous advances and put the present work with its complementary advantages in structural details in the proper context.
(ii) There are significant experimental findings regarding salt effects on condensate formation [which have been modeled more recently] that predate the A1-LCD system (ref.19) addressed by the present manuscript. This information should be included, e.g., in Table 1, for sound scholarship and completeness.
(iii) The strengths and limitations of the authors' approach vis-à-vis other theoretical approaches should be discussed with some degree of thoroughness (e.g., how the smallness of the authors' simulation system may affect the nature of the "phase transition" and the information that can be gathered regarding salt concentration inside vs. outside the "condensate" etc.).
Comments on revised version:
The authors have adequately addressed my previous concerns and suggestions. The manuscript is now significantly improved. The new results and analyses provided by the authors represent a substantial advance in our understanding of the role of electrostatics in the assembly of biomolecular condensates.
-
Reviewer #3 (Public review):
Summary:
This study investigates the salt-dependent phase separation of A1-LCD, an intrinsically disordered region of hnRNPA1 implicated in neurodegenerative diseases. The authors employ all-atom molecular dynamics (MD) simulations to elucidate the molecular mechanisms by which salt influences A1-LCD phase separation. Contrary to typical intrinsically disordered protein (IDP) behavior, A1-LCD phase separation is enhanced by NaCl concentrations above 100 mM. The authors identify two direct effects of salt: neutralization of the protein's net charge and bridging between protein chains, both promoting condensation. They also uncover an indirect effect, where high salt concentrations strengthen pi-type interactions by reducing water availability. These findings provide a detailed molecular picture of the complex interplay between electrostatic interactions, ion binding, and hydration in IDP phase separation.
Strengths:
• Novel Insight: The study challenges the prevailing view that salt generally suppresses IDP phase separation, highlighting A1-LCD's unique behavior.
• Rigorous Methodology: The authors utilize all-atom MD simulations, a powerful computational tool, to investigate the molecular details of salt-protein interactions.
• Comprehensive Analysis: The study systematically explores a wide range of salt concentrations, revealing a nuanced picture of salt effects on phase separation.
• Clear Presentation: The manuscript is well-written and logically structured, making the findings accessible to a broad audience.
Weaknesses:
• Limited Scope: The study focuses solely on the truncated A1-LCD, omitting simulations of the full-length protein. This limitation reduces the study's comparative value, as the authors note that the full-length protein exhibits typical salt-dependent behavior. However, given the much larger size of the full-length protein, it is acceptable to omit it given the current computing resources available.
Overall, this manuscript represents a significant contribution to the field of IDP phase separation. The authors' findings provide valuable insights into the molecular mechanisms by which salt modulates this process, with potential implications for understanding and treating neurodegenerative diseases.
-
-
mlpp.pressbooks.pub mlpp.pressbooks.pub
-
Skills mattered less and less in an industrialized, mass-producing economy, and their strength as individuals seemed ever smaller and less significant when companies grew in size and power and managers gained wealth and political influence. Long hours, dangerous working conditions, and the difficulty of supporting a family on meager and unpredictable wages compelled workers to organize armies of labor and battle against the power of capital.
This is a good point about how the struggle for power often comes with inhumane treatment.
-
-
-
Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators and received higher ratings from external judges for quality, clarity, informativeness, and perceived fairness
What do we mean by preferred? What do we know about the collective shadow that was not harvested?
-
goal of maximizing group approval ratings
A very limited intention - just to maximise approval. What about accessing the quantum potential?
-
individual
We like the term Undividual rather than Individual. Individual means undivided whole, but we have come to understand it as separate.
-
based on the personal opinions and critiques
Why only opinions and critiques? What about potentials? What about shadows? What about Qualia?
-
We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse.
This is the same process as Quaker Clerks of Meetings have been doing for nearly 400 years
-
The AI’s statements were more clear, logical, and informative without alienating minority perspectives
This shows the importance of language. Yet, the language can come FROM the group rather than be PUT TO the group.
-
consensus
What if consensus at the group meeting does not last after the meeting is over?
-
discussants
In Dialogue, DISCUSS comes from Percuss or Concuss - beating an idea to death. In Dialogue, the idea is held open for us all to witness and explore beneath its symptom, its explicate, by delving into its implicate as an undivided wholeness.
-
To act collectively, groups must reach agreement;
Yes. And the collective must also be able to reach disagreement and still stay in Dialogic relations
-
-
Local file Local file
-
We're going to have to control your tongue," the dentist says, pulling out all the metal from my mouth. Silver bits plop and tinkle into the basin. My mouth is a motherlode.
metaphor
-
-
popular.info popular.info
-
Previously, there was an advisory committee comprised of five librarians and five community members. As a result of the change, the librarians were removed from the Committee, and the determinations of the new Committee, which consisted of five non-librarians, became binding
jfc
-
The decision was made after the government of Montgomery County, under pressure from right-wing activists, removed librarians from the process of reviewing children's books and replaced them with a "Citizens Review Committee."
what the fuck?? THIS IS LITERALLY PART OF WHY WE GO TO GRADUATE SCHOOL
-
-
www.edutopia.org www.edutopia.org
-
there’s no need to hide because struggle and failure are neutralized, normalized, and even celebrated.
I appreciate how they discussed being open to getting an answer wrong and working with those students, with patience, to help them gain a better understanding of the topic.
-
Walking toward equity will help us to create inclusive, 21st-century classrooms.
Allowing access to resources that enhance a student's learning is a prime example of a well-structured classroom.
-
1. Know every child:
By embracing "storientation," educators can learn about students' interests, families, and experiences outside of school. This approach counters the tendency to rely on a single narrative, fostering a deeper understanding t
-
. Practice lean-in assessment
The main idea is that "lean-in assessment" is crucial for understanding each student's unique learning journey. By engaging with students and observing their approaches to tasks, strengths, and challenges, educators can gather valuable insights that standardized tests cannot provide.
-
Flex your routines:
The key point is that flexibility in teaching routines is essential for effective instruction. While structured mini-lessons can be useful, they may not meet the diverse needs of all learners.
-
Make it safe to fail
The central message is that creating a safe space for failure in the classroom is essential for learning. By framing failure as valuable data rather than a source of shame, students can openly acknowledge their struggles.
-
View culture as a resource:
The main idea is that culture should be viewed as a valuable resource in education. Ignoring students' identities diminishes their experiences and potential for learning. Recognizing and engaging with students' cultural backgrounds allows them to better understand and connect with challenging content. Encouraging students to share their backgrounds fosters a supportive environment that values diversity and enhances learning for everyone.
-
If we’re committed to the success of every child, we must acknowledge the uneven playing field that exists for many: ELLs, students with special needs, children experiencing trauma or relentless poverty, and students of color who confront unconscious biases about their capacity. Walking toward equity will help us to create inclusive, 21st-century classrooms.
The key message is that achieving success for every child requires recognizing and addressing the inequalities faced by specific groups, including English language learners, students with special needs, and those impacted by trauma or poverty
-
In an equitable classroom, there’s no need to hide because struggle and failure are neutralized, normalized, and even celebrated.
Conveying to students that where they are in their educational journey does not define them as individuals, and instilling a joy for understanding and learning that differs among students, is where the true importance lies.
-
but she was losing learners in the process.
Although mini-lessons may be taught with the hope of reaching all students, not all students learn the same way or are at the same point in their educational journey, so it is important to offer additional support when the need becomes apparent.
-
An equity stance pushes us to couple high expectations with a commitment to every child’s success.
Ensuring the success of every student individually is crucial to achieving an equitable classroom that values where each student is in their learning.
-
Finally, don’t be culture-blind. When we ignore students’ identities, we efface who they are in the world and lose a rich resource for learning
Yes, it's important to know about all of the students' identities and to read books about different cultures to show everyone and broaden their horizons.
-
Teach students that failure is just another form of data. When a child feels shame about his learning gaps, he’ll hide behind quiet compliance or bravado and acting out.
It's important to teach them to learn from their mistakes so they won't be ashamed of the mistakes everyone makes, because no one is perfect.
-
If pulling a student out of an activity to support him or her makes you uncomfortable, notice your discomfort and try not to let it control your decisions.
Sometimes a student needs a break because they're overstimulated, and it's nothing to be uncomfortable about. You should explain that we all have our moments and may need a quick breather before coming back in ready to finish the lesson effectively.
-
. Practice lean-in assessment: As you gather a student’s human story, start to piece together his or her learning story.
The more you know about how a student works, the better you can develop lesson plans that build on those skills and expand from there.
-
Become a warm demander: Author Lisa Delpit describes warm demanders as teachers who “expect a great deal of their students, convince them of their own brilliance
It's important to let students know that they're capable of accomplishing anything they want to do; give them the confidence to pursue it and they'll succeed.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This manuscript presents a valuable new quantitative crosslinking mass spectrometry approach using novel isobaric crosslinkers. The data are solid and the method has potential for a broad application in structural biology if more isobaric crosslinking channels are available and the quantitative information of the approach is exploited in more depth.
-
Reviewer #1 (Public review):
Summary:
Crosslinking mass spectrometry has become an important tool in structural biology, providing information about protein complex architecture, binding sites and interfaces, and conformational changes. One key challenge of this approach represents the quantitation of crosslinking data to interrogate differential binding states and distributions of conformational states.
Here, Luo and Ranish present a novel class of isobaric crosslinkers ("Qlinkers"), conduct proof-of-concept benchmarking experiments on known protein complexes, and show example applications on selected target proteins. The data are solid and this could well be an exciting, convincing new approach in the field if the quantitation strategy is made more comprehensive and the quantitative power of isobaric labeling is fully leveraged as outlined below. It's a promising proof-of-concept, and potentially of broad interest for structural biologists.
Strengths:
The authors demonstrate the synthesis, application, and quantitation of their "Q2linkers", enabling relative quantitation of two conditions against each other. In benchmarking experiments, the Q2linkers provide accurate quantitation in mixing experiments. Then the authors show applications of Q2linkers on MBP, Calmodulin, selected transcription factors, and polymerase II, investigating protein binding, complex assembly, and conformational dynamics of the respective target proteins. For known interactions, their findings are in line with previous studies, and they show some interesting data for TFIIA/TBP/TFIIB complex formation and conformational changes in pol II upon Rbp4/7 binding.
Weaknesses:
This is an elegant approach but the power of isobaric mass tags is not fully leveraged in the current manuscript.
First, "only" Q2linkers are used. This means only two conditions can be compared. Theoretically, higher-plexed Qlinkers should be accessible and would also be needed to make this a competitive method against other crosslinking quantitation strategies. As it is, two conditions can still be compared relatively easily using LFQ - or stable-isotope-labeling based approaches. A "Q5linker" would be a really useful crosslinker, which would open up comprehensive quantitative XLMS studies.
Second, the true power of isobaric labeling, accurate quantitation across multiple samples in a single run, is not fully exploited here. The authors only show differential trends for their interaction partners or different conformational states and do not make full quantitative use of their data or conduct statistical analyses. This should be investigated in more detail, e.g. examine Qlinker quantitation of MBP incubated with different concentrations of maltose or Calmodulin incubated with different concentrations of CBPs. Does Qlinker quantitation match ratios predicted using known binding constants or conformational state populations? Is it possible to extract ratios of protein populations in different conformations, assembly, or ligand-bound states?
With these two points addressed this approach could be an important and convincing tool for structural biologists.
Comments on latest version:
I raised only two points which they have not addressed: Higher multiplexing of Qlinkers (1) and experiments to assess the statistical power of their quantitation strategy (2).
I can see that point (1) requires substantial experimental efforts and synthesis of novel Qlinkers would be months of work. It is an editorial decision whether the limited quantitative power of the "2-plex" approach they have right now is sufficient to support publication in eLife. While I like the approach, I feel it falls short of its potential in its current form.
For point (2), the authors did not do any supporting experiments. They claim "higher plex Qlinkers" would need to be available, but I suggested experiments that can be done even with Q2linkers: Using one of the two channels as a reference channel (similar the Super-SILAC strategy published in 2010 by Geiger et al; using an isotope-labeled channel as a stable reference channel between different experiments and LC-MS runs), they could do time-courses or ligand-concentration-series with the other channel and then show that Qlinkers allow quantitative monitoring of the different populations (e.g. conformations or ligand-bound proteins).
As an additional point, I was a bit surprised to read that the quantitation evaluation in Figure 1 is based on a single experiment (reviewer response document page 6, line 2 in the authors' reply). I strongly suggest this be repeated a few times so that a proper statistical test of the experimental reproducibility of Qlinkers can be conducted.
In summary, the authors declined to do any experimental work to address my concerns.
-
Reviewer #2 (Public review):
The regulation of protein function heavily relies on the dynamic changes in the shape and structure of proteins and their complexes. These changes are widespread and crucial. However, examining such alterations presents significant challenges, particularly when dealing with large protein complexes in conditions that mimic the natural cellular environment. Therefore, much emphasis has been put on developing novel methods to study protein structure, interactions, and dynamics. Crosslinking mass spectrometry (CSMS) has established itself as such a prominent tool in recent years. However, doing this in a quantitative manner to compare structural changes between conditions has proven to be challenging due to several technical difficulties during sample preparation. Luo and Ranish introduce a novel set of isobaric labeling reagents, called Qlinkers, to allow for a more straightforward and reliable way to detect structural changes between conditions by quantitative CSMS (qCSMS).
The authors do an excellent job describing the design choices of the isobaric crosslinkers and how they have been optimized to allow for efficient intra- and inter-protein crosslinking to provide relevant structural information. Next, they do a series of experiments to provide compelling evidence that the Qlinker strategy is well suited to detect structural changes between conditions by qCSMS. First, they confirm the quantitative power of the newly developed isobaric crosslinkers by a controlled mixing experiment. Then they show that they can indeed recover known structural changes in a set of purified proteins (complexes) - starting with single subunit proteins up to a very large 0.5 MDa multi-subunit protein complex - the polII complex.
The authors give a very measured and fair assessment of this novel isobaric crosslinker and its potential power to contribute to the study of protein structure changes. They show that indeed their novel strategy picks up expected structural changes, changes in surface exposure of certain protein domains, changes within a single protein subunit but also changes in protein-protein interactions. However, they also point out that not all expected dynamic changes are captured and that there is still considerable room for improvement (many not limited to this crosslinker specifically but many crosslinkers used for CSMS).
Taken together the study presents a novel set of isobaric crosslinkers that indeed open up the opportunity to provide better qCSMS data, which will enable researchers to study dynamic changes in the shape and structure of proteins and their complexes.
Comments on latest version:
The authors have not really addressed most of the concerns. They have added minimal discussion points to the text. This is okay from my perspective as eLife's policy is to leave it up to the authors of how strongly to consider the reviewers' comments. I should add that I do fully agree with the other reviewer that the quantitative assessment from Figure 1 should have been done in triplicates at least and that this would actually be essential.
-
Author response:
The following is the authors’ response to the previous reviews.
Reviewer #1 (Public review):
Summary:
Crosslinking mass spectrometry has become an important tool in structural biology, providing information about protein complex architecture, binding sites and interfaces, and conformational changes. One key challenge of this approach represents the quantitation of crosslinking data to interrogate differential binding states and distributions of conformational states.
Here, Luo and Ranish present a novel class of isobaric crosslinkers ("Qlinkers"), conduct proof-of-concept benchmarking experiments on known protein complexes, and show example applications on selected target proteins. The data are solid and this could well be an exciting, convincing new approach in the field if the quantitation strategy is made more comprehensive and the quantitative power of isobaric labeling is fully leveraged as outlined below. It's a promising proof-of-concept, and potentially of broad interest for structural biologists.
Strengths:
The authors demonstrate the synthesis, application, and quantitation of their "Q2linkers", enabling relative quantitation of two conditions against each other. In benchmarking experiments, the Q2linkers provide accurate quantitation in mixing experiments. Then the authors show applications of Q2linkers on MBP, Calmodulin, selected transcription factors, and polymerase II, investigating protein binding, complex assembly, and conformational dynamics of the respective target proteins. For known interactions, their findings are in line with previous studies, and they show some interesting data for TFIIA/TBP/TFIIB complex formation and conformational changes in pol II upon Rbp4/7 binding.
Weaknesses:
This is an elegant approach but the power of isobaric mass tags is not fully leveraged in the current manuscript.
First, "only" Q2linkers are used. This means only two conditions can be compared. Theoretically, higher-plexed Qlinkers should be accessible and would also be needed to make this a competitive method against other crosslinking quantitation strategies. As it is, two conditions can still be compared relatively easily using LFQ - or stable-isotope-labeling based approaches. A "Q5linker" would be a really useful crosslinker, which would open up comprehensive quantitative XLMS studies.
We agree that a multiplexed Qlinker approach would be very useful. The multiplexed Qlinkers are more difficult and more expensive to synthesize. We are currently working on different schemes for synthesizing multiplexed Qlinkers.
Second, the true power of isobaric labeling, accurate quantitation across multiple samples in a single run, is not fully exploited here. The authors only show differential trends for their interaction partners or different conformational states and do not make full quantitative use of their data or conduct statistical analyses. This should be investigated in more detail, e.g. examine Qlinker quantitation of MBP incubated with different concentrations of maltose or Calmodulin incubated with different concentrations of CBPs. Does Qlinker quantitation match ratios predicted using known binding constants or conformational state populations? Is it possible to extract ratios of protein populations in different conformations, assembly, or ligand-bound states?
With these two points addressed this approach could be an important and convincing tool for structural biologists.
We agree that multiplexed Qlinkers would open the door to exciting avenues of investigation such as studying conformational state populations. We plan to conduct the suggested experiments when multiplexed Qlinkers are available.
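As a rough illustration of the comparison the reviewer proposes (quantitation versus ratios predicted from known binding constants), the sketch below computes the ligand-bound fraction expected from a simple single-site binding model; the Kd and maltose concentrations are hypothetical placeholders, not values from this study.

```python
# Minimal sketch (assumed values, not from the manuscript): expected bound
# fraction from a single-site binding model, f_bound = [L] / (Kd + [L]).
import numpy as np

Kd = 1.0                                        # assumed dissociation constant, in uM
maltose = np.array([0.1, 0.5, 1.0, 5.0, 20.0])  # assumed ligand concentrations, in uM

fraction_bound = maltose / (Kd + maltose)       # expected fraction of protein in the bound state
for conc, fb in zip(maltose, fraction_bound):
    print(f"[maltose] = {conc:5.1f} uM -> expected bound fraction = {fb:.2f}")
```

Under such a model, the reporter-ion ratio of a crosslink or monolink between two ligand conditions would be expected to track the ratio of bound fractions, which is the kind of check the reviewer describes.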
Reviewer #2 (Public review):
The regulation of protein function heavily relies on the dynamic changes in the shape and structure of proteins and their complexes. These changes are widespread and crucial. However, examining such alterations presents significant challenges, particularly when dealing with large protein complexes in conditions that mimic the natural cellular environment. Therefore, much emphasis has been put on developing novel methods to study protein structure, interactions, and dynamics. Crosslinking mass spectrometry (CSMS) has established itself as such a prominent tool in recent years. However, doing this in a quantitative manner to compare structural changes between conditions has proven to be challenging due to several technical difficulties during sample preparation. Luo and Ranish introduce a novel set of isobaric labeling reagents, called Qlinkers, to allow for a more straightforward and reliable way to detect structural changes between conditions by quantitative CSMS (qCSMS).
The authors do an excellent job describing the design choices of the isobaric crosslinkers and how they have been optimized to allow for efficient intra- and inter-protein crosslinking to provide relevant structural information. Next, they do a series of experiments to provide compelling evidence that the Qlinker strategy is well suited to detect structural changes between conditions by qCSMS. First, they confirm the quantitative power of the newly developed isobaric crosslinkers by a controlled mixing experiment. Then they show that they can indeed recover known structural changes in a set of purified proteins (complexes) - starting with single subunit proteins up to a very large 0.5 MDa multi-subunit protein complex - the polII complex.
The authors give a very measured and fair assessment of this novel isobaric crosslinker and its potential power to contribute to the study of protein structure changes. They show that indeed their novel strategy picks up expected structural changes, changes in surface exposure of certain protein domains, changes within a single protein subunit but also changes in protein-protein interactions. However, they also point out that not all expected dynamic changes are captured and that there is still considerable room for improvement (many not limited to this crosslinker specifically but many crosslinkers used for CSMS).
Taken together the study presents a novel set of isobaric crosslinkers that indeed open up the opportunity to provide better qCSMS data, which will enable researchers to study dynamic changes in the shape and structure of proteins and their complexes. However, in its current form, some aspects of the study should be expanded upon in order for the research community to assess the true power of these isobaric crosslinkers. Specifically:
Although the authors do mention some of the current weaknesses of their isobaric crosslinkers and qCSMS in general, more detail would be extremely helpful. Throughout the article a few key numbers (or even discussions) that would allow one to better evaluate the sensitivity (and the applicability) of the method are missing. This includes:
(1) Throughout all the performed experiments it would be helpful to provide information on how many peptides are identified per experiment and how many actually have a crosslinker attached to them.
As the goal of the experiments is to maximize identification of crosslinked peptides which tend to have higher charge states, we targeted ions with charge states of 3+ or higher in our MS acquisition settings for CLMS, and ignored ions with 2+ charge states, which correspond to many of the normal (i.e., not crosslinked) peptides that are identified by MS. As a result, normal peptides are less likely to be identified by the MS procedure used in our CLMS experiments compared to MS settings typically used to identify normal peptides. Our settings may also fail to identify some mono-modified peptides. Like most other CLMS methods, the total number of identified crosslinked peptide spectra is usually less than 1% of the total acquired spectra and we normally expect the crosslinked species to be approximately 1% of the total peptides.
We added information about the number of crosslinked and monolinked peptides identified in the pol I benchmarking experiments (line 173). The number of crosslinks and monolinks identified in the pol II +/- a-amanitin experiment, the TBP/TFIIA/TFIIB experiment and the pol II experiment +/- Rpb4/7 are also provided.
(2) Of all the potential lysines that can be modified - how many are actually modified? Do the authors have an estimate for that? It would be interesting to evaluate in a denatured sample the modification efficiency of the isobaric crosslinker (as an upper limit, as here all lysines should be accessible) and then also in a native sample. For example, in the MBP experiment, the authors report the change of one mono-linked peptide in samples containing maltose relative to the one not containing maltose. The authors then give a great description of why this fits to known structural changes. What is missing here is a bit of what changes were expected overall, which ones the authors would have expected to pick up with their method, and why they have not been picked up. For example, were they picked up as modified by the crosslinker but not differential? I think this is important to discuss appropriately throughout the manuscript to help the reader evaluate/estimate the potential sensitivity of the method. There are passages where the authors do an excellent job doing that - for example when they mention the missed site that they expected to see in the initial pol II experiments (lines 191 to 207). This kind of "power analysis" should be heavily discussed throughout the manuscript so that the reader is better informed of what sensitivity can be expected from applying this method.
Regarding the Pol II complex experiment described in Figures 4 and 5, out of the 277 lysine residues in the complex, 207 were identified as monolinked residues (74.7%), and 817 crosslinked pairs out of 38,226 potential pairs (2.1%) were observed. The ability of CLMS to detect proximity/reactivity changes may be impacted by several factors including 1) the (low) abundance of crosslinked peptides in complex mixtures, 2) the presence of crosslinkable residues in close proximity with appropriate orientation, and 3) the ability to generate crosslinked peptides by enzymatic digestion that are amenable to MS analysis (i.e., the peptides have appropriate m/z’s and charge states, the peptides ionize well, the peptides produce sufficient fragment ions during MS2 analysis to allow confident identification). Future efforts to enrich crosslinked peptides prior to MS analysis may improve sensitivity.
It is very difficult to estimate the modification efficiency of Qlinker (or many other crosslinkers) based on peptide identification results. One major reason for this is that trypsin is not able to cleave after a crosslinker-modified lysine residue. As a result, the peptides generated after the modification reaction have different lengths, compositions, charge states, and ionization efficiencies compared to unmodified peptides. These differences make it very difficult to estimate the modification efficiencies based on the presence/absence of certain peptide ions, and/or the intensities of the modified and unmodified versions of a peptide. Also, 2+ ions which correspond to many normal (i.e., unmodified) peptides were excluded by our MS acquisition settings.
It is also very difficult to predict which structural changes are expected and which crosslinked peptides and/or modified peptides can be observed by MS. This is especially true when the experiment involves proteins containing unstructured regions such as the experiments involving Pol II, and TBP, TFIIA and TFIIB. Since we are at the early stages of using qCLMS to study structural changes, we are not sure which changes we can expect to observe by qCLMS. Additional applications of Qlinker-CLMS are needed to better understand the types of structural changes that can be studied using the approach.
We hope that our discussions of some of the limitations of CLMS for detecting conformational/reactivity changes provide the reader with an understanding of the sensitivity that can be expected with the approach. At the end of the paragraph about the pol II α-amanitin experiment we say, “Unfortunately, no Q2linker-modified peptides were identified near the site where α-amanitin binds. This experiment also highlights one of the limitations of residue-specific, quantitative CLMS methods in general. Reactive residues must be available near the region of interest, and the modified peptides must be identifiable by mass spectrometry.” In the section about Rpb4/7-induced structural changes in pol II we describe the under-sampling issue. And in the last paragraph we reiterate these limitations and say, “This implies that this strategy, like all MS-based strategies, can only be used for interpretation of positively identified crosslinks or monolinks. Sensitivity and under sampling are common problems for MS analysis of complex samples.”
(3) It would be very helpful to provide information on how much better (or not) the Qlinker approach works relative to label-free qCLMS. One is missing the reference to a potential qCLMS gold standard (data set) or if such a dataset is not readily available, maybe one of the experiments could be performed by label-free qCLMS. For example, one of the differential biosensor experiments would have been well suited.
We agree with the reviewer that it will be very helpful to establish gold standard datasets for CLMS. As we further develop and promote this technology, we will try to establish a standardized qCLMS.
Reviewer #1 (Recommendations for the authors):
Only a very minor point:
I may have missed it but it's not really clear how many independent experiments were used for the benchmarking quantitation and mixing experiments for Figure 1. What is the reproducibility across experiments on average and on a per-peptide basis?
Otherwise, I think the approach would really benefit from at least "Q5linkers" or even "Q10linkers", if possible. And then conduct detailed quantitative studies, either using dilution series or maybe investigating the kinetics of complex formation.
We used a sample of BSA crosslinked peptides to optimize the MS settings, establish the MS acquisition strategies and test the quantification schemes. The data in Figure 1 are based on one experiment, in which we used ~150 µg of purified pol I complexes from a 6 L culture. We added this information to the Figure 1 legend. We also provide information about the reproducibility of peptide quantification by plotting the observed and expected ratios for each monolinked and crosslinked peptide identified in all of the runs in Figure S3.
We agree with the reviewer that the Qlinker approach would be even more attractive if multiplex Qlinker reagents were designed. The multiplexed Qlinkers are more difficult and more expensive to synthesize. We are currently working on different schemes for synthesizing multiplexed Qlinkers.
Reviewer #2 (Recommendations for the authors):
In addition to the public review I have the following recommendations/questions:
(1) The first part of the results section where the synthesis of the crosslinker is explained is excellent for mass spec specialists, but problematic for general readers - either more info should be provided (e.g. b1+ ions - most readers will have no idea why that is) - or potentially it could be simplified here and the details shifted to Materials and Methods for the expert reader. The same is true below for the length of spacer arms.
However - in general this level of detail is great - but it can impact the ease of understanding for the more mass-spec-inclined but non-expert reader.
We have added the following sentence to assist the general reader: A b1+ ion is an ion with a charge state of +1 corresponding to the first N-terminal amino acid residue after breakage of the first peptide bond (lines 126-128).
(2) The Calmodulin experiment (lines 239 to 257) - it is a very nice result that they see the change in the crosslinked peptide between residues K78-K95, but the monolinks are not just detected as described in the text but actually go 2 fold up. This would have been actually a bit expected if the residues are now too far away to be still crosslinked that the monolinks increase. In this case, this counteraction of monolinks to crosslinked sites can also be potentially used as a "selection criteria" for interesting sites that change. Is that a possible interpretation or do the authors think that upregulation of the monolinks is a coincidence and should not be interpreted?
We agree with the reviewer that both monolinks and crosslinks can be used as potential indicators for some changes. However, it is much more difficult to interpret the abundance information from monolinks because, unlike crosslinks, there is little associated structural/proximity information with monolinks. Because it is difficult to understand the reason(s) for changes in monolink abundance, we concentrate on changes in crosslink abundances, which provide proximity/structural information about the crosslinked residues.
(3) Lines 267 to 274: a small thing but the structural information provided is quite dense I have to say. Maybe simplify or accompany with some supplemental figures?
We agree that the structural information is a bit dense especially for readers who are not familiar with the pol II system. We added a reference to Figure 3c (line 177) to help the reader follow the structural information.
As qCLMS is still a relatively new approach for studying conformational changes, the utility of the approach for studying different types of conformational changes is still unclear. Thus, one of the goals of the experiments is to demonstrate the types of conformational changes that can be detected by Q2linkers. We hope that the detailed descriptions will help structural biologists understand the types of conformational changes that can be detected using Qlinkers.
(4) Line 280: explain maybe why the sample was fractionated by SCX (I guess to separate the different complexes?).
SCX was used to reduce the complexity of the peptide mixtures. As the samples are complex and crosslinked peptides are of low abundance compared to normal peptides, SCX can separate the peptides based on their positive charges. Larger peptides and peptides with higher charge states, such as crosslinked peptides, tend to elute at higher salt concentration during SCX chromatography. The use of SCX to fractionate complex peptide mixtures is described in the “General crosslinking protocol and workflow optimization” section of the Methods, and we added a sentence to explain why the sample was fractionated by SCX (lines 278-279).
(5) Lines 354 to 357: "This suggests that the inability to identity most of these crosslinked peptides in both experiments is mainly due to under-sampling during mass spectrometry analysis of the complex samples, rather than the absence of the crosslinked peptides in one of the experiments."
This is an extremely important point for the interpretation of missing values - have the authors tried to also collect the mass spec data with DIA which is better in recovery of the same peptide signals between different samples? I realize that these are isobaric samples so DIA measurements per se are not useful as the quantification is done on the reporter channels in the MS2, but it would at least give a better idea if the missing signals were simply not picked up for MS2 as claimed by the authors or the modified peptides are just not present. Another possibility is for the authors to at least try to use a "match between the run" function as can be done in Maxquant. One of the strengths of the method is that it is quantitative and two states are analyzed together, but as can be seen in this experiment, more than two states might want to be compared. In such cases, the under-sampling issue (if that is indeed the cause) makes interpretation of many sites hard (due to missing values) and it would be interesting if for example, an analysis approach with a "match between the runs" function could recover some of the missing values.
We agree that undersampling/missing values is an important issue that needs to be addressed more thoroughly. This also highlights the importance of qCLMS, as conclusions about structural changes based on the presence/absence of certain crosslinked species in database search results may be misleading if the absence of a species is due to under-sampling. We have not tried to collect the data with DIA since we would lose the quantitative information. It would be interesting to see if match between runs can recover some of the missing values. While this could provide evidence to support the under-sampling hypothesis, it would not recover the quantitative information.
We recommend performing label swap experiments and focusing downstream analysis on the crosslinks/monolinks that are identified in both experiments. Future development of multiplexed Qlinker reagents should help to alleviate under-sampling issues. See response to Reviewer #1.
(6) Lines 375 to 393 (the whole paragraph): extremely detailed and not easy to follow. Is that level of detail necessary to drive home that point or could it be visualized in enough detail to help follow the text?
We agree that the paragraph is quite detailed, but we feel that the level of detail is necessary to describe the types of conformational changes that can be detected by the quantitative crosslinking data, and also to illustrate the challenges of interpreting the structural basis for some crosslink abundance changes even when high resolution structural data exists.
To make it easier to follow, we added a sentence to the legend of Figure 5b. “In the holo-pol II structure (right), Switch 5 bending pulls Rpb1:D1442 away from K15, breaking the salt bridge that is formed in the core pol II structure (left). The increase in the abundances of the Rpb1:15-Rpb6:76 and Rpb1:15-Rpb6:72 crosslinks in holo-pol II is likely attributed to the salt bridge between K15 and D1442 in core pol II which impedes the NHS ester-based reaction between the epsilon amino group of K15 and the crosslinker.”
(7) Final paragraph in the results section - lines 397 and 398: "All of the intralinks involving Rpb4 are more abundant in holo-pol II as expected." If I understand that experiment correctly the intralinks with Rpb4 should not be present at all as Rpb4 has been deleted. Is that due to interference between the 126 and 127 channels in MS2? If so, then this also sets a bit of the upper limit of quantitative differences that can be seen. The authors should at least comment on that "limitation".
Yes, we shouldn’t detect any Rpb4 peptides in the sample derived from the Rpb4 knockout strain. The signal from Rpb4 peptides in the ∆Rpb4 sample is likely due to co-eluting ions. To clarify, we changed the text to:
All of the intralinks involving Rpb4 are more abundant in the holo-pol II sample (even though we don’t expect any reporter ion signal from Rpb4 peptides derived from the ∆Rpb4 pol II sample, we still observed reporter ion signals from the channel corresponding to the ∆Rpb4 sample, potentially due to the presence of low abundance, co-eluting ions) (lines 395-399).
(8) Materials and Methods - line 690: I am probably missing something but why were two different mass additions to lysine added to the search (I would have expected only one for the crosslinker)?
The 297 Da modification is for monolinked peptides in which one end of the crosslinker is hydrolyzed and an 18 Da water molecule is added. The 279 Da modification is for crosslinks and sometimes for looplinks (crosslinks involving two lysine residues on the same tryptic peptide).
-
-
www.carnegie.org www.carnegie.org
-
beyond our power to alter, and therefore to be accepted and made the best of. It is a waste of time to criticize the inevitable.
for - quote / critique - it is upon us, beyond our power to alter, and therefore to be accepted and made the best of. It is a waste of time to criticize the inevitable. - Andrew Carnegie - The Gospel of Wealth - alternatives - to - mainstream companies - cooperatives - Peer to Peer - Decentralized Autonomous Organization (DAO) - Fair Share Commons - B Corporations - Worker owned companies
quote / critique - it is upon us, beyond our power to alter, and therefore to be accepted and made the best of. It is a waste of time to criticize the inevitable. - Andrew Carnegie - The Gospel of Wealth - This is a defeatist attitude that does not look for a condition where both enormous inequality AND universal squalor can be eliminated - Today, there are a growing number of alternative ideas which can challenge this claim such as: - Cooperatives - example - Mondragon corporation with 70,000 employees - B Corporations - Fair Share Commons - Peer to Peer - Worker owned companies - Cosmolocal organizations - Decentralized Autonomous Organization (DAO)
-
Thus is the problem of Rich and Poor to be solved. The laws of accumulation will be left free; the laws of distribution free. Individualism will continue, but the millionaire will be but a trustee for the poor; intrusted for a season with a great part of the increased wealth of the community, but administering it for the community far better than it could or would have done for itself.
for - quote / critique / question - Thus is the problem of Rich and Poor to be solved. The laws of accumulation will be left free; the laws of distribution free. Individualism will continue, but the millionaire will be but a trustee for the poor; intrusted for a season with a great part of the increased wealth of the community, but administering it for the community far better than it could or would have done for itself. - The Gospel of Wealth - Andrew Carnegie
quote / critique / question - Thus is the problem of Rich and Poor to be solved. The laws of accumulation will be left free; the laws of distribution free. Individualism will continue, but the millionaire will be but a trustee for the poor; intrusted for a season with a great part of the increased wealth of the community, but administering it for the community far better than it could or would have done for itself. - The Gospel of Wealth - Andrew Carnegie - The problem with this reasoning is that it is circular - Rewarding oneself an extreme and unfettered amount of wealth for one's entrepreneurship skills creates inequality in the first place - Competition that destroys other corporations ends up reducing jobs - At the end of life, the rich entrepreneur desires to give back to society the wealth that (s)he originally stole - If one rewarded innovation in reasonable amounts instead of unreasonable amounts, the problem of inequality could be largely mitigated in the first place whilst still recognizing and rewarding individual effort and ingenuity
-
The price we pay for this salutary change is, no doubt, great.
for - quote / critique - The price we pay for this salutary change is, no doubt, great - Andrew Carnegie
quote / critique - The price we pay for this salutary change is, no doubt, great - Andrew Carnegie - Carnegie goes on to write that the great freedoms offered by industrial mass production have an unavoidable price to be paid - Successful manufacturing and production cooperatives, B-Corporations, worker-owned companies, etc. have disproved that it is an either-or situation. - Consider the case of the Spanish manufacturing giant, Mondragon, a federation of worker cooperatives employing 70,000 people located in Spain - where this price is NOT paid - Carnegie's essay reflects a perspective based on the time when he was alive - Were Carnegie alive today to witness the natural conclusion of his trend of progress in the Anthropocene, he would witness - extreme pollution levels of industrial mass production threatening to destabilize human civilization itself - astronomical wealth inequality - And these two are linked: - wealth inequality - a handful of elites have the same wealth as the bottom half of humanity - carbon inequality - that same handful pollutes as much as the bottom half of humanity
to - Mondragon cooperative - explore - https://hyp.is/GeIKao1rEe-9jA_97_KRBg/exploremondragon.com/en/ - Oxfam wealth and carbon inequality reports - https://jonudell.info/h/facet/?max=100&expanded=true&user=stopresetgo&exactTagSearch=true&any=oxfam
-
destruction of Individualism
for - critique - destruction of Individualism - The Gospel of Wealth - Andrew Carnegie - individual / collective Gestalt - Deep Humanity
critique - destruction of Individualism - The Gospel of Wealth - Andrew Carnegie - From a Deep Humanity perspective, the individual and the collective are intertwingled - This is the individual / collective gestalt - Communism and Capitalism are both extreme poles - the truth lies somewhere in the middle - which acknowledges both our individual AND collective nature simultaneously - and works to balance them
-
the right of the laborer to his hundred dollars in the savings bank, and equally the legal right of the millionaire to his millions.
for - critique - extreme wealth inequality cannot be avoided for the greater improvement of society - The Gospel of Wealth - Andrew Carnegie - stats - Mondragon corporation - comparison of pay difference between highest paid and lowest paid - adjacency - Gandhi quote - Andrew Carnegie beliefs in The Gospel of Wealth
critique - extreme wealth inequality cannot be avoided for the greater improvement of society - The Gospel of Wealth - Andrew Carnegie - It's a matter of degree - Wealth differences within US corporations of 344 to 1 are obscene and not necessary, as proven by - Wealth difference of 6 to 1 in Mondragon federation of cooperatives - To quote - Gandhi, there is enough to meet everyone's needs but not enough to meet everyone's greed - The great problem with such large wealth disparity is that those who know how to game the system can earn obscene amounts of money - and since the concept of luxury goods is made desirable and proportional to monetary wealth, it creates a positive feedback loop of insatiability - The combination of engaging in ever greater luxury lifestyle and power is intoxicating and addictive
to - stats - Mondragon corporation - comparison of pay difference between highest paid and lowest paid - https://hyp.is/QAxx-o14Ee-_HvN5y8aMiQ/www.csmonitor.com/Business/2024/0513/income-inequality-capitalism-mondragon-corporation
-
That this talent for organization and management is rare among men is proved by the fact that it invariably secures for its possessor enormous rewards, no matter where or under what laws or conditions.
for - critique - extreme wealth a reward for rare management skills - Andrew Carnegie - The Gospel of Wealth - Mondragon counterexample - to - stats - Mondragon pay difference between highest and lowest paid - article - In this Spanish town, capitalism actually works for the workers - Christian Science Monitor - Erika Page - 2024, June 7
critique - extreme wealth a reward for rare management skills - Andrew Carnegie - The Gospel of Wealth - Mondragon counterexample - This is invalidated today by large successful cooperatives such as Mondragon
to - stats - Mondragon corporation - comparison of pay difference between highest paid and lowest paid - https://hyp.is/QAxx-o14Ee-_HvN5y8aMiQ/www.csmonitor.com/Business/2024/0513/income-inequality-capitalism-mondragon-corporation
Tags
- critique - extreme wealth a reward for rare management skills - Andrew Carnegie - The Gospel of Wealth - Mondragon counterexample
- adjacency - Gandhi quote - Andrew Carnegie beliefs in The Gospel of Wealth
- critique - destruction of Individualism - The Gospel of Wealth - Andrew Carnegie - individual / collective Gestalt - Deep Humanity
- quote / critique - it is upon us, beyond our power to alter, and therefore to be accepted and made the best of. It is a waste of time to criticize the inevitable. - Andrew Carnegie - The Gospel of Wealth
- quote / critique / question - Thus is the problem of Rich and Poor to be solved. The laws of accumulation will be left free; the laws of distribution free. Individualism will continue, but the millionaire will be but a trustee for the poor; intrusted for a season with a great part of the increased wealth of the community, but administering it for the community far better than it could or would have done for itself. - The Gospel of Wealth - Andrew Carnegie
- to - Mondragon cooperative - explore
- critique - extreme wealth inequality cannot be avoided for the greater improvement of society - The Gospel of Wealth - Andrew Carnegie
- alternatives - to - mainstream companies - cooperatives - Peer to Peer - Decentralized Autonomous Organization (DAO) - Fair Share Commons - B Corporations - Worker owned companies
- quote / critique - The price we pay for this salutary change is, no doubt, great - Andrew Carnegie
- stats - Mondragon corporation - comparison of pay difference between highest paid and lowest paid
- Oxfam wealth and carbon inequality reports
- to - stats - Mondragon pay difference between highest and lowest paid - article - In this Spanish town, capitalism actually works for the workers - Christian Science Monitor - Erika Page - 2024, June 7
-
-
www.britannica.com www.britannica.com
-
William I the Conqueror
first Norman King
-
-
www.poetryfoundation.org www.poetryfoundation.org
-
Next thing an unwary mouse Bares his flank
A mouse the size of a New York City rat jumped out at the cat, and the cat got scared for its life because the mouse was bigger than him.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This useful study by Nandy and colleagues examined relationships between behavioral state, neural activity in cortical area V4, and trial-by-trial variability in the ability to detect weak visual stimuli. They present solid evidence indicating that certain changes in arousal and eye-position stability, along with patterns of synchrony in the activity of neurons in different layers of V4, can show modest correspondences to changes in the ability to correctly detect a stimulus. These findings are likely to be of interest to those who seek a deeper understanding of circuit mechanisms that underlie perception.
-
Reviewer #1 (Public review):
Summary:
In this study, Nandy and colleagues examine neural, physiological and behavioral correlates of perceptual variability in monkeys performing a visual change detection task. They used a laminar probe to record from area V4 while two macaque monkeys detected a small change in stimulus orientation that occurred at a random time in one of two locations, focusing their analysis on stimulus conditions where the animal was equally likely to detect (hit) or not-detect (miss) a briefly presented orientation change (target). They discovered two behavioral and physiological measures that are significantly different between hit and miss trials - pupil size tends to be slightly larger on hits vs. misses, and monkeys are more likely to miss the target on trials in which they made a microsaccade shortly before target onset. They also examined multiple measures of neural activity across the cortical layers and found some measures that are significantly different between hits and misses.
Strengths:
Overall the study is well executed and the analyses are appropriate (with some possible caveats discussed below).
Weaknesses:
I have two remaining concerns. First, with the exception of the pre-target microsaccades, the correlates of perceptual variability (differences between hits and misses) appear to be weak and disconnected. The GLM analysis of the predictive power of trial outcome based on the behavioral and neural measures is only discussed at the end of the paper. This analysis shows that some of the measures have no significant predictive power, while others cannot be examined using the GLM analysis because these measures cannot be estimated in single trials. Given these weak and disconnected effects, my overall sense is that the current results provide a limited advance to our understanding of the neural basis of perceptual variability.
In addition, because the authors combine data across stimulus contrasts, I am somewhat uneasy about the possible confounding effect of contrast. As expected, stimulus contrast affected the probability of hits vs. misses. Independently, contrast may have affected some of the physiological measurements. Therefore, showing that contrast is not the source of the covariations between the physiological/behavioral measurements and perception can be challenging, and I am not convinced that the authors have ruled this out as a possible confound. It is unclear why the authors had to vary contrast in the first place, and why the analyses had to be done by combining the data across contrasts or by ignoring contrast as a variable (e.g., in the GLM analysis).
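As a rough sketch of the analysis the reviewer has in mind, a logistic GLM that includes stimulus contrast as a covariate could be compared against a contrast-only model; the column names (hit, pupil, microsaccade, contrast) and the file name are hypothetical, not taken from the study.

```python
# Hedged sketch: test whether behavioral/physiological measures predict
# hit vs. miss over and above stimulus contrast. All column names and the
# data file are hypothetical placeholders.
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

df = pd.read_csv("trials.csv")  # hypothetical: one row per near-threshold trial

# Full model: physiological predictors plus contrast as a categorical covariate
full = smf.logit("hit ~ pupil + microsaccade + C(contrast)", data=df).fit(disp=False)

# Reduced model: contrast only
reduced = smf.logit("hit ~ C(contrast)", data=df).fit(disp=False)

# Likelihood-ratio test: do pupil size and pre-target microsaccades add
# predictive power beyond contrast?
lr = 2 * (full.llf - reduced.llf)
dof = full.df_model - reduced.df_model
print(full.summary())
print("LR test p-value:", stats.chi2.sf(lr, dof))
```

If the likelihood-ratio test remained significant with contrast in the model, contrast alone would be unlikely to explain the covariation between the physiological measures and perception.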
-
-
qcengl110.commons.gc.cuny.edu qcengl110.commons.gc.cuny.edu
-
the First-Year Writing Committee
Hi! Just showing you what an annotation looks like in context and also explaining what this is a bit more.
The FYW Committee is a committee composed of full-timers at QC who teach in the first-year writing program and / or are experts in Writing Studies. We make decisions about stuff that happens in the FYW program. In the past, we've also done things like review three-year contract materials.
-
-
uta.pressbooks.pub uta.pressbooks.pub
-
As part of this project, the colleges created a search filter allowing students to easily find open and affordable courses (Goodman 2017). Shortly after, some instructors reported concerns that the filter might actually deter students from signing up for their classes, so the project team reduced the visibility of the course markings (Goodman 2017).
Ah, I wish I could access their SIS and see the filter.
-
-
docdrop.org docdrop.org
-
were
"Were" should be "was" as it is referring to the word "average," a singular quantity. "Were" would be used if the author was referring to multiple averages.
-
-
docdrop.org docdrop.org
-
This process was repeated 3 times with sugar water.
No values for the sugar concentrations are specified for the three repeated trials. It is clear from the graph that the author repeated the experiment with different concentrations of sugar water. These values need to be included in the description.
-
-
online.clackamas.edu online.clackamas.edu
-
Liquid is calculated by subtracting the initial mass from the final mass
It is unclear if the author is trying to make a general statement about calculations or if the author is trying to relay what actually happened during the experiment. The verb "is" must be changed to "was" to be in the past passive tense. And, the "liquid" needs to be specified. For ease of understanding, the previous sentence can be combined with this one: "Then, the mass of the liquid within the beaker was calculated by subtracting the initial mass from the final mass."
-
-
www.americanyawp.com www.americanyawp.com
-
I find as much as I can do to manufacture cloathing for my family which would else be Naked.
Abigail is doing what she can to provide for her family; she is currently very busy, which is probably why she hasn't attempted to make saltpeter (a white powder used to make gunpowder and preserve food).
-
Many grown person[s] are now sick with it, in this [street?] 5. It rages much in other Towns.
In the second sentence of this paragraph Abigail mentions she is attending the sick, one of her neighbors, and in this highlighted section she mentions how the illness has affected many other people and towns. Most likely, just as Abigail was doing, other women were also helping nurse the sick.
Thoughts: I find Abigail's actions important here because during a war it is vital to be in good health. Illnesses can bring death and also weaken people, decreasing the helping hands that could help them win this war.
-
I have lately seen a small Manuscrip de[s]cribing the proportions for the various sorts of powder, fit for cannon, small arms and pistols. If it would be of any Service your way I will get it transcribed and send it to you
Here Abigail is being resourceful. She mentions that her husband asked if she had made saltpeter (a white powder used to make gunpowder and preserve food).
She has not attempted it yet, but she has information on who can provide that material to her husband should he need it, and she offers to make the purchase and send it to him.
Thoughts: Abigail is helping her husband by providing materials that would be vital to aid them in war, yet her husband is not taking her seriously about giving women equal rights.
-
-
www.theguardian.com www.theguardian.com
-
You argue there are certain situations where we could replace the animals we experiment on with humans…
Animal experimentation has been going on for hundreds of years and has helped with pharmaceutical and illness research. Doing the same experiments on humans would not work because of the extent of the experiments and the types of tests performed.
-
-
online.clackamas.edu online.clackamas.edu
-
rod, by
The comma is unnecessary. Also, using the preposition "by" when referring to subtraction creates confusion. The described values should be flipped: "The volume was calculated by subtracting the initial volume of the water in the graduated cylinder from the volume of the graduated cylinder after the metal rod had been added." To further clarify, numeric values should be used.
-
-
www.csmonitor.com www.csmonitor.com
-
The income disparity between the highest- and lowest-paid employees in Mondragon’s cooperatives is capped at a ratio of 6-to-1, compared with a typical ratio of 344-to-1 in the United States. (It’s typically 77-to-1 in Spain.)
for - stats - Mondragon corporation - pay difference comparison between highest paid and lowest paid - from - essay - The Gospel of Wealth - Andrew Carnegie - Carnegie organization
from - essay - The Gospel of Wealth - Andrew Carnegie - Carnegie organization - https://hyp.is/dIoiDo16Ee-0n2OpOK3lwg/www.carnegie.org/about/our-history/gospelofwealth/
stats - Mondragon corporation - comparison of pay difference between highest paid and lowest paid - Mondragon - 6 to 1 - typical US - 344 to 1 - typical Spain - 77 to 1
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s: Race Political leanings Interests Susceptibility to financial scams Being prone to addiction (e.g., gambling)
This makes a lot of sense but is also incredibly scary to know that someone is being watched and most likely being manipulated. The susceptibility to scams and proneness to addiction are probably the most frightening things. I feel that it is just evil to track people's vulnerabilities and possibly use them against them.
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
Review #1:
Also, they observed no difference in the binding free energy of phosphatidylserine with wild-type TREM2-Ig and mutant TREM2-Ig, which is a bit inconsistent with previous experimental studies reported in Journal of Biological Chemistry 293 (2018), Alzheimer's and Dementia 17, 475-488 (2021), and Cell 160, 1061-1071 (2015).
We directly note this contrast with experimental findings in the body of our work, particularly given the known limitations of free energy calculations in MD simulations, as outlined in the Limitations section. Our claim is that the loss of function in the R47H variant extends beyond decreased binding affinities and also impacts binding patterns. As stated in our manuscript: ‘Our observations for both sTREM2 and TREM2 indicate that R47H-induced dysfunction may result not only from diminished ligand binding but also an impaired ability to discriminate between different ligands in the brain, proposing a novel mechanism for loss-of-function.’
Although the authors made significant efforts to run a number of simulations for multiple models, nearly 17 microseconds in total, none of the simulations has been repeated independently at least a couple of times, which makes it difficult for me to consider this finding technically sound. Most of the important conclusions that the authors claimed, including the results that oppose previous research, have been made on a single run, which raises the question of whether this observation can be reproduced if the simulation were repeated independently. Although the authors state the sampling number and length of the MD simulations in the current manuscript as a limitation of this study, this must be carefully considered before drawing conclusions, rather than concluding based on a single run.
The reviewer raises an interesting point regarding the repetition of individual simulations, a consideration we carefully evaluated during the design of this study. However, we believe our approach—running multiple independent models of the same system—offers a more rigorous methodology than simply repeating simulations of the same docked model. This strategy allows us to sample several distinct starting configurations, thereby minimizing biases introduced by docking algorithms and single-model reliance.
In our study, we demonstrate that within the 150 ns timescale of our protein/ligand (PL) simulations, the relatively small ligands are able to move from their initial docking positions to a specific binding site. While ideally, replicates of these independent models would further strengthen the findings, this was not computationally feasible given the unprecedented total duration of our simulations. Importantly, our conclusions are seldom based on the results of a single protein/PL simulation.
Moreover, the ergodic hypothesis suggests that over sufficiently long timescales, simulations will explore all accessible states. Additionally, we have performed several replicate simulations of our WT and R47H Ig-like domain models in solution, specifically to investigate CDR2 loop dynamics.
In this case, since the system involves only the protein and lacks the independent replicates seen in the protein/PL simulations, these runs were chosen to effectively capture the stochastic nature of CDR2 loop movement.
sTREM2 shows a neuroprotective effect in AD, even with the R47H mutation, as evidenced by the authors based on their simulations. sTREM2 is known to bind Aβ within AD and reduce Aβ aggregation, whereas the R47H mutant increases Aβ aggregation. I wonder why the authors did not consider Aβ as a ligand for their simulation studies. As a reader in this field, I would prefer to know the protective mechanism of sTREM2 in Aβ aggregation as influenced by the stalk domain.
Our initial approach for this study used Aβ as a ligand rather than phospholipids. However, we noted the difficulties in simulating Aβ, particularly in choosing relevant Aβ structures and oligomeric states (n-mers). We believe that phospholipids represent an equally pertinent ligand for TREM2, given its critical role in lipid sensing and metabolism. Furthermore, there is growing recognition in the AD research community of the need to move beyond Aβ and focus on other understudied pathological mechanisms.
In a similar manner, why is only one mutation, R47H, considered in the study? More severe mutations, such as T66M, are reported to disrupt tethering between these CDRs. Although T66M is not associated with AD, the protective mechanism of the stalk domain would presumably not differ among diseases. Therefore, it would be interesting to see whether the findings also hold for T66M.
Most previous studies explored the mechanism of CDR destabilization by the mutant, such as changes in secondary structure and residue-wise interloop interaction patterns. Neither this nor the detailed residue-wise interactions that are changed by the mutant, or that are important for ligand binding or the stalk domain, are considered in this manuscript.
These are both excellent points that deserve extensive investigation. While R47H is the most common and most extensively studied mutation in the literature, an extensive catalog of other mutations remains important to explore. We are currently preparing two separate publications that will address these gaps in more detail, as doing so was beyond the scope of the present study.
Comparisons between the wild-type, the mutant, and the other complex structures must be supported by appropriate statistical calculations in order to state that the observed differences between structures are significant. Since autocorrelation is a major concern when estimating statistical differences from MD simulation data, the authors could consider bootstrap calculations to assess statistical significance.
We are currently working to address this comment to strengthen the validity of our results and statistical conclusions in the revised manuscript.
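One concrete way to act on this suggestion is a moving-block bootstrap over a per-frame observable, which respects autocorrelation by resampling contiguous blocks of frames rather than individual frames. The sketch below is a generic illustration with hypothetical inputs (variable names and block size are placeholders), not the analysis that will appear in the revised manuscript.
```python
# Minimal sketch (assumed inputs): block bootstrap of a per-frame observable
# (e.g., an interaction energy time series) to obtain a confidence interval
# that accounts for autocorrelation by resampling contiguous blocks of frames.
import numpy as np

def block_bootstrap_ci(series, block_size, n_boot=10000, ci=95, seed=None):
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    n_blocks = len(series) // block_size
    # Trim so the series divides evenly into blocks, then reshape into blocks.
    blocks = series[: n_blocks * block_size].reshape(n_blocks, block_size)
    means = np.empty(n_boot)
    for i in range(n_boot):
        picked = rng.integers(0, n_blocks, size=n_blocks)  # resample blocks with replacement
        means[i] = blocks[picked].mean()
    lo, hi = np.percentile(means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return series.mean(), (lo, hi)

# Hypothetical usage with per-frame energies from WT and R47H trajectories;
# block_size should exceed the estimated autocorrelation time (in frames).
# wt_mean, wt_ci = block_bootstrap_ci(wt_energy, block_size=50)
# mut_mean, mut_ci = block_bootstrap_ci(r47h_energy, block_size=50)
# Non-overlapping intervals would indicate a difference robust to autocorrelation.
```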
Reviewer #2:
The authors state that reported differences in ligand binding between TREM2 and sTREM2 remain unexplained, and the authors cite two lines of evidence. The first line of evidence, which is true, is that there are differences between lipid binding assays and lipid signaling assays. However, signaling assays do not directly measure binding. Secondly, the authors cite Kober et al 2021 as evidence that sTREM2 and TREM2 showed different affinities for Aβ1-42 in a direct binding assay. Unfortunately, when Kober et al measured the binding of sTREM2 and Ig-TREM2 to Aβ they reported statistically identical affinities (Kd = 3.8 ± 2.9 µM vs 5.1 ± 3.7 µM) and concluded that the stalk did not contribute measurably to Aβ binding.
We appreciate the reviewer’s insight and acknowledge the need to clarify our interpretation of Kober et al. (2021). We will adjust and refocus how we reference this evidence from Kober et al. in our revised manuscript.
In line with these findings, our energy calculations reveal that sTREM2 exhibits weaker—but still not statistically significant—binding affinities for phospholipids compared to TREM2. These results suggest that while overall binding affinity might be similar, differences in binding patterns or specific lipid interactions could still contribute to functional differences observed between TREM2 and sTREM2.
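To illustrate why the two reported affinities are treated as statistically indistinguishable, a quick check can be run from the published summary statistics alone; the replicate count below is a hypothetical placeholder, since it is not stated in the excerpt.
```python
# Minimal sketch (assumed replicate count): Welch's t-test from the reported
# summary statistics for Kd of sTREM2 vs Ig-TREM2 binding to Abeta (Kober et al. 2021).
from scipy.stats import ttest_ind_from_stats

n = 3  # hypothetical number of independent measurements per construct
t_stat, p_value = ttest_ind_from_stats(
    mean1=3.8, std1=2.9, nobs1=n,   # sTREM2: Kd = 3.8 +/- 2.9 uM
    mean2=5.1, std2=3.7, nobs2=n,   # Ig-TREM2: Kd = 5.1 +/- 3.7 uM
    equal_var=False,                # Welch's t-test (unequal variances)
)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # large p -> no detectable difference in Kd
```
With uncertainties this large relative to the difference in means, no plausible replicate count would yield a significant difference, which is consistent with the conclusion that the stalk does not measurably change Aβ affinity.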
The authors appear to take simulations of the Ig domain (without any stalk) as a surrogate for the full-length, membrane-bound TREM2. They compare the Ig domain to a sTREM2 model that includes the stalk. While it is fully plausible that the stalk could interact with and stabilize the Ig domain, the authors need to demonstrate why the full-length TREM2 could not interact with its own stalk and why the isolated Ig domain is a suitable surrogate for this state.
We believe this is a major limitation of all computational work on TREM2 to date, and of experimental work that uses only the Ig-like domain. This is extensively discussed in the limitations section of our paper. Accordingly, we are currently working toward a manuscript that will present the first biologically relevant model of TREM2 in a membrane and will challenge the current paradigm of using the Ig-like domain as an experimental surrogate for TREM2.
-
eLife Assessment
This useful manuscript addresses key molecular mechanisms underlying the neuroprotective roles of soluble TREM2 in neurodegenerative diseases. The study will advance our understanding of the damaging effects of known TREM2 mutations and may also explain why soluble TREM2 can antagonize Aβ aggregation. However, the primary method, MD simulation, suffers from limited sampling, rendering the results incomplete for definitive conclusions.
-
Reviewer #1 (Public review):
In this manuscript, Saeb et al. report the mechanistic roles of the flexible stalk domain in sTREM2 function using molecular dynamics simulations. They describe interesting molecular bases for why sTREM2 shows protective effects in AD, such as the partial extracellular stalk domain promoting the binding preference and stability of sTREM2 with its ligands even in the presence of the known AD-risk mutation R47H. Furthermore, they found that the stalk domain itself acts as a site for ligand binding by providing an expanded surface, termed 'Expanded Surface 2', together with the Ig-like domain. Also, they observed no difference in the binding free energy of phosphatidylserine with wild-type TREM2-Ig versus mutant TREM2-Ig, which is somewhat inconsistent with previous experimental reports (Journal of Biological Chemistry 293, 2018; Alzheimer's and Dementia 17, 475-488, 2021; Cell 160, 1061-1071, 2015).
Although the authors made significant efforts to run a number of simulations for multiple models, totaling nearly 17 microseconds, none of the simulations was repeated independently at least a couple of times, which makes it difficult to consider these findings technically sound. Most of the important conclusions the authors claim, including results opposite to previous research, are based on a single run, which raises the question of whether the observations could be reproduced if the simulations were repeated independently. Although the authors acknowledge the sampling number and length of the MD simulations as a limitation of this study, this must be carefully considered before drawing conclusions from a single run.
sTREM2 shows a neuroprotective effect in AD, even with the R47H mutation, as the authors show based on their simulations. sTREM2 is known to bind Aβ in AD and reduce Aβ aggregation, whereas the R47H mutant increases Aβ aggregation. I wonder why the authors did not consider Aβ as a ligand in their simulation studies. As a reader in this field, I would like to know the protective mechanism of sTREM2 in Aβ aggregation as influenced by the stalk domain.
In a similar manner, why is only one mutation, R47H, considered in the study? More severe mutations, such as T66M, are reported to disrupt tethering between these CDRs. Although T66M is not associated with AD, the protective mechanism of the stalk domain would presumably not differ among diseases. Therefore, it would be interesting to see whether the findings also hold for T66M.
Most previous studies explored the mechanism of CDR destabilization by the mutant, such as changes in secondary structure and residue-wise interloop interaction patterns. Neither this nor the detailed residue-wise interactions that are changed by the mutant, or that are important for ligand binding or the stalk domain, are considered in this manuscript.
Comparisons between the wild-type, the mutant, and the other complex structures must be supported by appropriate statistical calculations in order to state that the observed differences between structures are significant. Since autocorrelation is a major concern when estimating statistical differences from MD simulation data, the authors could consider bootstrap calculations to assess statistical significance.
-
Reviewer #2 (Public review):
Significance:
TREM2 is an immunomodulatory receptor expressed on myeloid cells and microglia in the brain. TREM2 consists of a single immunoglobular (Ig) domain that leads into a flexible stalk, transmembrane helix, and short cytoplasmic tail. Extracellular proteases can cleave TREM2 in its stalk and produce a soluble TREM2 (sTREM2). TREM2 is genetically linked to Alzheimer's disease (AD), with the strongest association coming from an R47H variant in the Ig domain. Despite intense interest, the full TREM2 ligand repertoire remains elusive, and it is unclear what function sTREM2 may play in the brain. The central goal of this paper is to assess the ligand-binding role of the flexible stalk that is generated during the shedding of TREM2. To do this, the authors simulate the behavior of constructs with and without stalk. However, it is not clear why the authors chose to use the isolated Ig domain as a surrogate for full-length TREM2. Additionally, experimental binding evidence that is misrepresented by the authors contradicts the proposed role of the stalk.
Summary and strengths:
The authors carry out MD simulations of WT and R47H TREM2 with and without the flexible stalk. Simulations are carried out for apo TREM2 and for TREM2 in complex with various lipids. They compare results using just the Ig domain to results including the flexible stalk that is retained following cleavage to generate sTREM2. The computational methods are well-described and should be reproducible. The long simulations are a strength, as exemplified in Figure 2A where a CDR2 transition happens at ~400-600 ns. The stalk has not been resolved in structural studies, but the simulations suggest the intriguing and readily testable hypothesis that the stalk interacts with the Ig domain and thereby contributes to the stability of the Ig domain and to ligand binding. I suspect biochemists interested in TREM2 will make testing this hypothesis a high priority.
Weaknesses:
Unfortunately, the work suffers from two fundamental flaws.
(1) The authors state that reported differences in ligand binding between TREM2 and sTREM2 remain unexplained, and the authors cite two lines of evidence. The first line of evidence, which is true, is that there are differences between lipid binding assays and lipid signaling assays. However, signaling assays do not directly measure binding. Secondly, the authors cite Kober et al 2021 as evidence that sTREM2 and TREM2 showed different affinities for Aβ1-42 in a direct binding assay. Unfortunately, when Kober et al measured the binding of sTREM2 and Ig-TREM2 to Aβ they reported statistically identical affinities (Kd = 3.8 ± 2.9 µM vs 5.1 ± 3.7 µM) and concluded that the stalk did not contribute measurably to Aβ binding.
(2) The authors appear to take simulations of the Ig domain (without any stalk) as a surrogate for the full-length, membrane-bound TREM2. They compare the Ig domain to a sTREM2 model that includes the stalk. While it is fully plausible that the stalk could interact with and stabilize the Ig domain, the authors need to demonstrate why the full-length TREM2 could not interact with its own stalk and why the isolated Ig domain is a suitable surrogate for this state.
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
Z-scores make it easier to compare a specific value to others in the distribution
important
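A minimal numerical illustration of the annotated point, using made-up data: converting a value to a z-score expresses it in standard deviations from the mean, which makes it directly comparable to the rest of the distribution.
```python
# Minimal sketch with made-up data: a z-score expresses how many standard
# deviations a value lies from the mean of its distribution.
import numpy as np

scores = np.array([55, 62, 70, 71, 68, 74, 80, 66])  # hypothetical exam scores
value = 80

z = (value - scores.mean()) / scores.std(ddof=1)
print(f"z-score of {value}: {z:.2f}")  # roughly 1.5 SD above the group mean
```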
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
The study provides a valuable showcase of a workflow for large-scale characterization of drug mechanisms of action using proteomics, in which the on- and off-targets of 166 compounds were determined using proteome solubility analysis in living cells and cell lysates. The evidence supporting the authors' claims is solid; however, the inclusion of more replicate experiments and greater statistical rigor would have strengthened the study. This work will be of broad interest to medicinal chemists, toxicologists, computational biologists, and biochemists.
-
-
vimeo.com vimeo.com
-
"I'm always trying to get back to the 20s a little bit." <br /> —John Dickerson, in Field Notes interview (2016) https://vimeo.com/169725470
Dickerson says he's got two screens on the computer in his office as well as an iPad and a phone. But he's also got a notebook that "does only one thing". He also still has an old black lacquer Underwood (No. 4, 5, or 6?) on his office desk.
Wonder if he uses it?
-