1. Oct 2024
    1. The thymus and spleen remain active during the summer and involute in winter, when the animal is in hibernation.

    2. Reptiles are the only ectothermic amniotes, and therefore a pivotal group for providing insights into both the evolution of the immune system and its functioning in an ecological setting.

    1. eLife Assessment

      This important work advances our understanding of factors influencing early childhood development. The large sample size and methodology applied make the findings of this study convincing; however, support for some of the claims made by the authors is incomplete. The work will be of interest to researchers in developmental science and early childhood pediatrics.

    2. Reviewer #1 (Public Review):

      Padilha et al. aimed to find prospective metabolite biomarkers in serum of children aged 6-59 months that were indicative of neurodevelopmental outcomes. The authors leveraged data and samples from the cross-sectional Brazilian National Survey on Child Nutrition (ENANI-2019), and an untargeted multisegment injection-capillary electrophoresis-mass spectrometry (MSI-CE-MS) approach was used to measure metabolites in serum samples (n=5004), which were identified via a large library of standards. After correlating the metabolite levels against the developmental quotient (DQ), or the degree to which age-appropriate developmental milestones were achieved as evaluated by the Survey of Well-being of Young Children, serum concentrations of phenylacetylglutamine (PAG), cresol sulfate (CS), hippuric acid (HA) and trimethylamine-N-oxide (TMAO) were significantly negatively associated with DQ. Examination of the covariates revealed that the negative associations of PAG, HA, TMAO and valine (Val) with DQ were specific to younger children (-1 SD or 19 months old), whereas creatinine (Crtn) and methylhistidine (MeHis) had significant associations with DQ that changed direction with age (negative at -1 SD or 19 months old, and positive at +1 SD or 49 months old). Further, mediation analysis demonstrated that PAG was a significant mediator for the relationship of delivery mode, child's diet quality and child fiber intake with DQ. HA and TMAO were additional significant mediators of the relationship of child fiber intake with DQ.

      Strengths of this study include the large cohort size and a study design allowing for sampling at multiple time points along with neurodevelopmental assessment and a relatively detailed collection of potential confounding factors, including diet. The untargeted metabolomics approach was also robust and comprehensive, allowing for level 1 identification of a wide breadth of potential biomarkers. Given their methodology, the authors should be able to achieve their aim of identifying candidate serum biomarkers of neurodevelopment for early childhood. The results of this work would be of broad interest to researchers who want to understand the biological underpinnings of development and to those tracking development in pediatric populations, as it provides insight into putative mechanisms and targets from a relevant human cohort that can be probed in future studies. Such putative mechanisms and targets are currently lacking in the field due to challenges in conducting these kinds of studies, so this work is important.

      However, in the manuscript's current state, the presentation and analysis of data impede the reader from fully understanding and interpreting the study's findings. Particularly, the handling of confounding variables is incomplete. There is a different set of confounders listed in Table 1 versus Supplementary Table 1 versus Methods section Covariates versus Figure 4. For example, Region is listed in Supplementary Table 1 but not in Table 1, and Mode of Delivery is listed in Table 1 but not in Supplementary Table 1. Many factors are listed in Figure 4 that aren't mentioned anywhere else in the paper, such as gestational age at birth or maternal pre-pregnancy obesity.

      The authors utilize the directed acyclic graph (DAG) in Figure 4 to justify the further investigation of certain covariates over others. However, the lack of inclusion of the microbiome in the DAG, especially considering that most of the study findings were microbial-derived metabolite biomarkers, appears to be a fundamental flaw. Sanitation and micronutrients are proposed by the authors to have no effect on the host metabolome, yet sanitation and micronutrients have both been demonstrated in the literature to affect microbiome composition, which can in turn affect the host metabolome.

      Additionally, the authors emphasized as part of the study selection criteria the following,<br /> "Due to the costs involved in the metabolome analysis, it was necessary to further reduce the sample size. Then, samples were stratified by age groups (6 to 11, 12 to 23, and 24 to 59 months) and health conditions related to iron metabolism, such as anemia and nutrient deficiencies. The selection process aimed to represent diverse health statuses, including those with no conditions, with specific deficiencies, or with combinations of conditions. Ultimately, through a randomized process that ensured a balanced representation across these groups, a total of 5,004 children were selected for the final sample (Figure 1)."

      Therefore, anemia and nutrient deficiencies are assumed by the reader to be important covariates, yet, the data on the final distribution of these covariates in the study cohort is not presented, nor are these covariates examined further.

      The inclusion of specific covariates in Table 1, Supplementary Table 1, the statistical models, and the mediation analysis is thus currently biased as it is not well justified.

      Finally, it is unclear what the partial-least squares regression adds to the paper, other than to discard potentially interesting metabolites found by the initial correlation analysis.

    3. Reviewer #2 (Public Review):

      A strength of the work lies in the number of children Padilha et al. were able to assess (5,004 children aged 6-59 months) and in the extensive screening that the Authors performed for each participant. This type of large-scale study is uncommon in low-to-middle-income countries such as Brazil.<br /> The Authors employ several approaches to narrow down the number of potentially causally associated metabolites.<br /> Could the Authors justify on what basis the minimum dietary diversity score was dichotomized? Were sensitivity analyses undertaken to assess the effect of this dichotomization on the associations reported in the article? Consumption of each food group may have a differential effect that is obscured by this dichotomization.<br /> Could the Authors specify the statistical power associated with each analysis?<br /> Could the Authors describe in detail which metric they used to measure how predictive the PLSR models are, and how they determined what the "optimal" number of components was?<br /> The Authors use directed acyclic graphs (DAG) to identify confounding variables of the association between metabolites and DQ. Could the dataset generated by the Authors have been used instead? Not all confounding variables identified in the literature may be relevant to the dataset generated by the Authors.<br /> Were the systematic reviews or meta-analyses used in the DAG performed by the Authors, or were they based on previous studies? If the latter, more information about the methodology employed and the studies included should be provided by the Authors.<br /> Approximately 72% of children included in the analyses lived in households with a monthly income above the Brazilian minimum wage. The cohort is also biased towards households with a higher level of education. Both of these measures correlate with developmental quotient.
Could the Authors discuss how this may have affected their results and how generalizable they are?<br /> Further to this, could the Authors describe how inequalities in access to care in the Brazilian population may have affected their results? Could they have included a measure of this possible discrepancy in their analyses?<br /> The Authors state that the results of their study may be used to track children at risk for developmental delays. Could they discuss the potential for influencing policies and guidelines to address delayed development due to malnutrition and/or limited access to certain essential foods?

    4. Reviewer #3 (Public Review):

      The ENANI-2019 study provides valuable insights into child nutrition, development, and metabolomics in Brazil, highlighting both challenges and opportunities for improving child health outcomes through targeted interventions and further research.

      Strengths of the methods and results:<br /> (1) The study utilizes data from the pre-existing ENANI-2019 cohort. This choice allows for longitudinal assessments and exploration of associations between metabolites and developmental outcomes. In addition, it conserves research resources, which are scarce in all settings.<br /> (2) The study aims to investigate the relationship between circulating metabolites (exposure) and early childhood development (outcome), specifically the developmental quotient (DQ). The objectives are clearly stated, which facilitates focused research questions and hypotheses. The population studied is clearly described.<br /> (3) The study accessed a large number of children under five years, with blood collected from a final sample size of 5,004 children. The exclusion of infants under six months due to venipuncture challenges and a lack of reference values highlights practical considerations in research design.<br /> The study sample reflects a diverse range of children in terms of age, sex distribution, weight status, maternal education, and monthly family income. This diversity enhances the generalizability of findings across different sociodemographic groups within Brazil.<br /> (4) The study uses standardized measures (e.g., DQ assessments) and chronological age. Confounding variables, such as the child's age, diet quality, and nutritional status, are carefully considered and incorporated into analyses through a Directed Acyclic Graph (DAG). The mean DQ of 0.98 indicates overall developmental norms among the studied children, with variations noted across demographic factors such as age, region, and maternal education. The prevalence of Minimum Dietary Diversity (MDD) being met by 59.3% of children underscores dietary patterns and their potential impact on health outcomes.
      The association between nutritional status (weight-for-height z-scores) and developmental outcomes (DQ) provides insights into the interplay between nutrition and child development.<br /> The study identified key metabolites associated with developmental quotient (DQ):<br /> Component 1: Branched-chain amino acids (Leucine, Isoleucine, Valine).<br /> Component 2: Uremic toxins (Cresol sulfate, Phenylacetylglutamine).<br /> Component 3: Betaine and amino acids (Glutamine, Asparagine).<br /> The study focused on several serum metabolites, such as PAG (phenylacetylglutamine), CS (p-cresyl sulfate), HA (hippuric acid), TMAO (trimethylamine-N-oxide), MeHis (methylhistidine), and Crtn (creatinine). These metabolites are implicated in various metabolic pathways linked to gut microbiota activity, amino acid metabolism, and dietary factors.<br /> These metabolites explained a significant portion of both metabolite variance (39.8%) and DQ variance (4.3%). The study suggests that these metabolites can be used as proxy measures of the gut microbiome in children.<br /> (5) The use of partial least squares regression (PLSR) with cross-validation (80% training, 20% testing) is a robust approach to identify metabolites predictive of DQ and minimizes overfitting. This model allows outliers to remain outliers for transparency.<br /> The Directed Acyclic Graph (DAG) identifies and adjusts for confounding variables (e.g., the child's diet quality and nutritional status) and strengthens the validity of findings by controlling for potential biases.
      Developmental and gender differences were studied by testing interactions with the child's age and sex.<br /> Mediation analysis exploring metabolites as potential mediators provides insights into underlying pathways linking exposures (e.g., diet, microbiome) with DQ.<br /> The use of Benjamini-Hochberg correction for multiple comparisons and bootstrap tests (5,000 iterations) enhances the reliability of results by controlling false discovery rates and assessing significance robustly.

      Significant correlations between serum metabolites and DQ, particularly negative associations with certain metabolites like PAG and CS, suggest potential biomarkers or pathways influencing developmental outcomes. Notably, these associations varied with age, suggesting different metabolic impacts during early childhood development.

      Weaknesses:<br /> (1) The data collected were incomplete, especially those related to breastfeeding history and birth weight. These are mentioned in the limitations of the study, yet they might have been potential confounders or even factors leading to the particular identified metabolite state of the population.<br /> (2) Tests other than mediation analysis might have been used to ensure the reliability and robustness of the data. Describing how the data were processed, the data cleaning methods, how outliers were handled, and any sensitivity analyses would ensure the robustness of the findings.<br /> (3) The generalizability of the data is not sound, especially considering that the children mostly belonged to a higher socioeconomic group in Brazil, with mother or caregiver education being above a certain level. Comparative studies with children from other socio-economic groups and other cohorts might have been useful. Consideration of sample size adequacy and power analysis might have helped in generalizing the findings.<br /> (4) Caution is needed in interpreting causality from these data because of the nature of the study design. Discussing alternative explanations and potential confounding factors in more depth could strengthen the conclusions.

      Appraisal<br /> (1) The aim of the study was to identify associations between children's serum metabolome and early childhood development. This aim was met, and the results support the authors' conclusions.<br /> Impact of the work on the field

      (1) Unless the actual gut microbiome of children in this age group is examined directly, through analysis of gut bacteria or gastrointestinal examination, the causal effect of the gut metabolome on early childhood development cannot be established with certainty. Because this may not be possible in every situation, proxy methods such as the one elucidated here might be useful, considering the risk-benefit ratio.<br /> (2) More research is needed on this theme through longitudinal studies to validate these findings and explore underlying pathways involving gut-brain interactions and metabolic dysregulation.<br /> Other readings: Readers are advised to read research from other countries and in other languages to understand the connection between the gut microbiome, metabolite spectra, and child development, and to study the effect of these factors on children's mental development as well.

      Readers might consider the following questions:<br /> (1) Should investigators study the families through direct observation of diet and other factors to look for a connection between food taken in and gut microbiome and child development?<br /> (2) Can an examination of the mother's gut microbiome influence the child's microbiome? Can the mother or caregiver's microbiome influence early childhood development?<br /> (3) Is developmental quotient enough to study early childhood development? Is it comprehensive enough?

    1. eLife Assessment

      This important work addresses the role of Marcks/Markcksl during spinal cord development and regeneration. The study is exceptional in combining molecular approaches to understand the mechanisms of tissue regeneration with behavioural assays, which is not commonly employed in the field. The data presented is convincing and comprehensive, using many complementary methodologies.

    2. Reviewer #1 (Public Review):

      In this manuscript, El Amri et al. explore the role of Marcks and Marcksl1 proteins during spinal cord development and regeneration in Xenopus. Using two different techniques to knock down their expression, they argue that these proteins are important for neural progenitor proliferation and neurite outgrowth in both contexts. Finally, using a pharmacological approach, they suggest that Marcks and Marcksl1 work by modulating the activity of PLD and the levels of PIP2, whilst PKC could modulate Marcks activity.<br /> The strength of this manuscript resides in the authors' ability to knock down the expression of 4 different genes using 2 different methods to assess the role of this protein family during early development and regeneration at the late tadpole stage. This has always been a limiting factor in the field, as the tools to perform conditional knockouts in Xenopus are very limited. However, this approach will not really be applicable to essential genes, as it relies on the general knockdown of protein expression. The generation of antibodies able to detect endogenous Marcks/Marcksl1 is also a powerful tool to assess the extent to which the expression of these proteins is down-regulated.<br /> Whilst there is a great amount of data provided in this manuscript and there is strong evidence that Marcks proteins are important for spinal cord development and regeneration, their roles in both contexts are not explored fully. The description of the effect of knocking down Marcks/Marcksl1 on neurons and progenitors is rather superficial, and the evidence for the underlying mechanism underpinning their roles is not very convincing.

    3. Reviewer #2 (Public Review):

      El Amri et al. investigated the functions of Marcks and Marcks-like 1 during spinal cord (SC) development and regeneration in Xenopus laevis. The authors rigorously performed loss-of-function experiments with morpholino knock-down and CRISPR knock-out, combined with rescue experiments, in the developing spinal cord of the embryo and the regenerating spinal cord at the tadpole stage.

      For the assays in the developing spinal cord, a unilateral approach (knock-down/out on only one side of the embryo) allowed the authors to assess gene function by directly comparing one side (e.g. mutated SC) to the other (e.g. wild-type SC on the other side). For the assays in the regenerating SC, the authors microinjected CRISPR reagents into 1-cell stage embryos. When the embryos (F0 crispants) grew up to tadpoles (stage 50), the SC was transected. They then assessed neurite outgrowth and progenitor cell proliferation. The validation of the phenotypes was mostly based on the quantification of immunostaining images (neurite outgrowth: acetylated tubulin; neural progenitors: Sox2, Sox3; proliferation: EdU, PH3), which are simple but robust enough to support their conclusions. In both SC development and regeneration, the authors found that Marcks and Marcksl1 were necessary for neurite outgrowth and neural progenitor cell proliferation.<br /> The authors performed rescue experiments on morpholino knock-down and CRISPR knock-out conditions by Marcks and Marcksl1 mRNA injection for SC development, and by pharmacological treatments for SC development and regeneration. The unilateral mRNA injection rescued the loss-of-function phenotype in the developing SC. To explore the signalling role of these molecules, they rescued the loss-of-function animals with pharmacological reagents. They used S1P (a PLD activator), FIPI (a PLD inhibitor), NMI (a PIP2 synthesis activator), and ISA-2011B (a PIP2 synthesis inhibitor). The authors found that the activator treatments rescued neurite outgrowth and progenitor cell proliferation in loss-of-function conditions. From these results, the authors proposed that PIP2 and PLD are the mediators of Marcks and Marcksl1 for neurite outgrowth and progenitor cell proliferation during SC development and regeneration. The results of the rescue experiments are particularly important for assessing gene functions in loss-of-function assays; therefore, the conclusions are solid.
      In addition, they performed gain-of-function assays by unilateral Marcks or Marcksl1 mRNA injection, showing that the injected side of the SC had more neurite outgrowth and proliferative progenitors. The conclusions are consistent with the loss-of-function phenotypes and the rescue results. Importantly, the authors showed the link between the phenotype and functional recovery by behavioural testing, which clearly showed that the crispants with SC injury swam less distance than wild types with SC injury at 10 days post-surgery.<br /> Prior to the functional assays, the authors analyzed the expression pattern of the genes by in situ hybridization and immunostaining in the developing embryo and regenerating SC. They confirmed that protein expression was significantly reduced in the loss-of-function samples by immunostaining with the specific antibodies that they made for Marcks and Marcksl1. Although the expression patterns during embryogenesis are mostly known from previous work, the data provided appropriate information to readers about the expression and showed the efficiency of the knock-out as well.

      MARCKS family genes have been known to be expressed in the nervous system. However, few studies focus on their function in nerves. This research introduces these genes as new players during SC development and regeneration. These findings could attract broader interest from researchers in the nervous system disease modelling and medical fields. Although it is a typical requirement for loss-of-function assays in Xenopus laevis, I believe that the efficient knock-out of four genes by CRISPR/Cas9 was derived from their dedicated design, testing, and validation of the gRNAs, and is exemplary.

      Weaknesses:<br /> 1) Why did the authors choose Marcks and Marcksl1?<br /> The authors mention that these genes were identified in a recent proteomic analysis comparing SC-regenerative tadpoles and non-regenerative froglets (Line (L) 54-57). However, although it seems the proteomic analysis was their own dataset, the authors did not explain in detail how promising genes were selected for the functional assays in this article. In the proteomic analysis, there must be other candidate genes that might be more likely factors related to SC development and regeneration based on previous studies, but it is unclear what the criteria for selecting Marcks and Marcksl1 were.

      2) Gene knock-out experiments with F0 crispants<br /> The authors describe that they designed and tested 18 sgRNAs to find the most efficient and consistent gRNAs (L191-195). However, this cannot guarantee the same phenotypes in practice, due to, for example, different injection timing or different strains of Xenopus laevis. Although the authors mention the concerns of mosaicism themselves (L180-181, L289-292), and the immunostaining results nicely show uniformly reduced Marcks and Marcksl1 expression in the crispants, they did not address this issue explicitly.

      3) Limitations of pharmacological compound rescue<br /> In the methods, the authors describe that they performed titration experiments for the drugs (L702-704), which is a minimal requirement for this type of assay. However, it is known that even a well-characterized drug can target different molecules when applied at different concentrations (Gujral TS et al., 2014 PNAS). Therefore, it is difficult to eliminate the possibility of side effects and off-target effects by testing only a few compounds.

    4. Reviewer #3 (Public Review):

      El Amri et al. conducted an analysis of the function of Marcks and Marcksl1 in Xenopus spinal cord development and regeneration. Their study revealed that these proteins are crucial for neurite outgrowth and cell proliferation, including of Sox2+ progenitors. Furthermore, they suggested that these genes may act through the PLD pathway. The study is well-executed, with appropriate controls and validation experiments, distinguishing it from typical regeneration research by including behavioral assays. The manuscript is commendable for its quantifications, literature referencing, careful conclusions, and detailed methods. The conclusions are well-supported by the experiments performed in this study. Overall, this manuscript contributes to the field of spinal cord regeneration and sets a good example for future research in this area.

    1. Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.

      Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.

      So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.

      Now there are a few things that you need to do before we get started.

      First, as always, make sure that you're logged in to the IAM admin user of the general AWS account, and you'll also need the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link so go ahead and click that now.

      This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.

      Now everything's pre-configured you just need to check this box at the bottom and click on create stack.

      Now that's going to take a few minutes to create and we need this to be in a create complete state.

      So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.

      So now this stack is in a create complete state and we're good to continue.

      Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.

      Now I'm a fan of typing all commands in manually because I personally think that it helps you learn, but if you are the type of person who has a habit of making mistakes when typing long commands out, then you can copy and paste from this document to avoid any typos.

      Now one final thing before we finish at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.

      If you're going to do that then you should sign up for a Docker Hub account in advance if you don't already have one, and the link for this is attached to this lesson.

      If you already have a Docker Hub account then you're good to continue.

      Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.

      Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.

      So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.

      Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.

      So to do that just right-click and then select connect.

      You need to pick Session Manager from the tabs across the top here and then just click on connect.

      Now that will take a few minutes but once connected you should see this prompt.

      So it should say sh-, then a version number, and then a dollar sign.

      Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.

      The Docker engine is the thing that allows Docker containers to run on this EC2 instance.

      So we need to install the Docker engine package and we'll do that using this command.

      So we're using sudo to get admin permissions, then the package manager dnf, then install, then docker.

      So go ahead and run that and that will begin the installation of Docker.

      It might take a few moments to complete, it might have to download some prerequisites, and you might have to answer that you're okay with the install.

      So press Y for yes and then press enter.

      Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.

      So sudo again to get admin permissions, then service, then the docker service, and then start.

      So type that and press enter and that starts the Docker service.
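      For reference, the two commands used so far can be sketched together like this (package and service names are as described in this lesson for the Amazon Linux instance; they may differ on other distributions):

      ```shell
      # Install the Docker engine package using the dnf package manager
      sudo dnf install docker

      # Start the Docker daemon so containers can run on this instance
      sudo service docker start
      ```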

      Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.

      So the simplest way to do that is to type docker, space, ps, and press enter.

      Now you're going to get an error.

      This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.
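      The exact wording varies by Docker version, but the error typically looks something like this (a sketch, not the verbatim output you will see):

      ```shell
      docker ps
      # permission denied while trying to connect to the Docker daemon socket
      # at unix:///var/run/docker.sock
      ```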

      We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.

      So sudo for admin permissions, then usermod, then -a to append, then -G for the group, then the docker group, and then ec2-user.

      Now that will allow a local user of this system, specifically ec2-user, to be able to interact with the Docker engine.
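      Written out, the command for this step is (with -a appending the user to the supplementary group named by -G):

      ```shell
      # Append ec2-user to the supplementary group 'docker'
      sudo usermod -a -G docker ec2-user
      ```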

      Okay, so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.

      So the next thing is we need to log out of this instance and log back in.

      So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.

      So connect back in to this EC2 instance.

      Now once you're connected back into this EC2 instance we need to run another command which switches us to ec2-user, so it basically logs us in as ec2-user.

      So that's this command, and the result of this would be the same as if you directly logged in as ec2-user.

      Now the reason we're doing it this way is because we're using session manager so that we don't need a local SSH client or to worry about SSH keys.

      We can directly log in via the console UI; we just then need to switch to ec2-user.
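      The transcript doesn't spell this command out, but a common way to switch to ec2-user from a Session Manager shell is the following (an assumption based on the description above; check the lesson commands document for the exact command):

      ```shell
      # Switch to ec2-user with a login shell so its environment is loaded
      # (assumption: this is the command the lesson refers to)
      sudo su - ec2-user
      ```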

      So run this command and press enter, and we're now logged into the instance as ec2-user. To test everything's okay we need to use a command with the Docker engine, and that command is docker ps. If everything's okay you shouldn't see any output beyond a list of column headers.

      What we've essentially done is told the Docker engine to give us a list of any running containers, and even though we don't have any, it hasn't errored; it's simply displayed an empty list, and that means everything's okay.
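      If you ever want to double-check that the group change took effect after logging back in, here's a small portable check (it assumes only that id -nG lists the current user's group names):

      ```shell
      # Print whether the current user is a member of the 'docker' group
      if id -nG | grep -qw docker; then
        echo "docker group: yes"
      else
        echo "docker group: no"
      fi
      ```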

      So good job.

      Now, something I've done to speed things up: the instance has been configured to download the sample application that we're going to be using. If you just run ls and press enter, you'll see it as the file container.zip within this folder.

      I've configured the instance to automatically extract that zip file which has created the folder container.

      So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.

      Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.

      Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.

      It's a simple web page: we've got index.html, which is the index, we have a number of images which this index.html contains, and then we have a Dockerfile.

      Now this Dockerfile is the thing that the Docker engine will use to create our Docker image.

      I want to spend a couple of moments just stepping you through exactly what's within this Dockerfile.

      So I'm going to move across to my text editor, and this is the Dockerfile that's been automatically downloaded to your EC2 instance.

      Each of these lines is a directive to the Docker engine to perform a specific task, and remember, we're using this to create a Docker image.

      This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.

      This next line sets the maintainer label; it's essentially a brief description of what the image is and who's maintaining it, and in this case it's just a placeholder of animals for life.

      This next line runs a command specifically the yum command to install some software specifically the Apache web server.

      This next command, copy, copies files from the local directory when you use the docker command to create an image. So it's copying that index.html file from this local folder that I've just been talking about, and it's going to put it inside the Docker image at this path. So it's going to copy index.html to /var/www/html, and this is where an Apache web server expects index.html to be located.

      This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.

      This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.

      In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.

      Now this doesn't actually perform any configuration; it simply tells the Docker engine what port is exposed, in this case port 80, which is HTTP.
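      Putting the directives above together, the Dockerfile being described looks roughly like this. This is a sketch reconstructed from the narration, so the base image tag, label value, and file names are assumptions; the file downloaded to your instance is the authoritative version.

```dockerfile
# Base: version 8 of the Red Hat Universal Base Image (tag assumed)
FROM redhat/ubi8

# Maintainer label: a brief description of the image and who maintains it
LABEL maintainer="Animals4Life"

# Install the Apache web server using yum
RUN yum -y install httpd

# Copy the web page and the six images into the path Apache serves from
COPY index.html /var/www/html/
COPY *.jpg /var/www/html/

# Run Apache in the foreground when a container starts from this image
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]

# Document that the image exposes port 80 (HTTP)
EXPOSE 80
```

      Each instruction here maps one-to-one onto the lines walked through above, and each generally becomes its own image layer.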

      Now this Dockerfile is going to be used when we run the next command, which is to create a Docker image.

      So essentially this file is the same Dockerfile that's been downloaded to your EC2 instance, and that's what we're going to use next.

      So this is the next command within the lesson commands document and this command builds a container image.

      What we're essentially doing is giving it the location of the docker file.

      This dot at the end specifies the working directory, so it's here that Docker is going to find the Dockerfile and any associated files that the Dockerfile uses.

      So we're going to run this command and this is going to create our docker image.

      So let's go ahead and run this command.
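      As a sketch, the build command being described is along these lines; the image name containerofcats is an assumption based on the narration, and the trailing dot is the build context (the current directory), so your lesson commands document is authoritative:

```
# Build an image from the Dockerfile in the current directory,
# tagging it as containerofcats (name assumed from the narration)
docker build -t containerofcats .
```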

      It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.

      Remember from the theory lesson, each line within the Dockerfile generally creates a new file system layer, so a new layer of a Docker image, and that's how Docker images are efficient, because you can reuse those layers.

      Now in this case this has been successful.

      We've successfully built a Docker image with this ID, so it's given it a unique ID, and it's tagged this Docker image with the tag :latest.

      So this means that we have a docker image that's now stored on this EC2 instance.

      Now I'll go ahead and clear the screen to make it easier to see, and let's run the next command within the lesson commands document. This is going to show us a list of images that are on this EC2 instance, but filtered based on the name container of cats, and this will show us the Docker image which we've just created.

      So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.

      So this is the command that we're going to use it's the next one within the lesson commands document.

      It's docker run, and then it's telling it to map port 80 on the container to port 80 on the EC2 instance, and it's telling it to use the container of cats image. If we run that command, Docker is going to take the Docker image that we've got on this EC2 instance and run it to create a running container, and we should be able to interact with that container.
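      As a sketch, the run command maps port 80 on the instance to port 80 in the container (again assuming the image was tagged containerofcats; check your lesson commands document for the exact name):

```
# Map host port 80 to container port 80 and start a container
# from the image built earlier (image name assumed)
docker run -p 80:80 containerofcats
```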

      So go back to the AWS console, click on Instances, and look for a4l-public, the EC2 instance that's in the running state.

      I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.

      Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.

      Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.

      We just need to use the IP address directly.

      So copy that into your clipboard, open a new tab, and then open that IP address, and now we can see the amazing application: "if it fits, i sits, in a container, in a container". This amazing-looking enterprise application is what's contained in the Docker image that you just created, and it's now running inside a container based off that image.

      So that's great everything's working as expected and that's running locally on the EC2 instance.

      Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.

      You can either use my docker image which is this image that I've just created or you can use your own docker image.

      If you're going to use my docker image then you can skip this next step.

      You don't need a docker hub account and you don't need to upload your image.

      If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.

      So I'm going to move back to the Session Manager tab, press Ctrl+C to exit out of this running container, and type clear to clear the screen and make it easier to see.

      Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.

      So it's docker space login space double hyphen username equals and then your username.

      So if you're doing this in your own environment you need to delete this placeholder and type your username.

      I'm going to type my username because I'll be uploading this image to my docker hub.

      So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.

      So I'm going to paste in my password if you're logging into your docker hub you should use your password.

      Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.

      Now you don't have to worry about the security message, because whilst your Docker Hub password is going to be stored on the EC2 instance, shortly we're going to terminate this instance, which will remove all traces of this password from this machine.

      Okay, so again, we're going to upload our Docker image to Docker Hub. Let's run this command again, and you'll see that because we're just using the docker images command, we can see the base image as well as our image.

      So we can see red hat UBI 8.

      We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.

      So this is the top line in my case container of cats latest and then the image ID.

      Then we need to run this command: docker space tag, then the image ID that you've just copied into your clipboard, then a space, and then your Docker Hub username.

      In my case it's actrl with 1L; if you're following along you need to use your own username, then forward slash, and then the name of the image that you want this to be stored as on Docker Hub, so I'm going to use container of cats.

      So that's the command you need to use so docker tag and then your image ID for container of cats and then your username forward slash container of cats and press enter and that's everything we need to do to prepare to upload this image to docker hub.

      So the last command that we need to run is the command to actually upload the image to Docker Hub, and that command is docker space push. So we're going to push the image to Docker Hub. Then we need to specify the Docker Hub username (again this is my username, but if you're doing this in your environment it needs to be your username), then forward slash, then the image name, in my case container of cats, and then colon latest. Once you've got all that, go ahead and press enter, and that's going to push the Docker image that you've just created up to your Docker Hub account. Once it's up there, it means that we can deploy from that Docker image to other EC2 instances and even ECS, and we're going to do that in a later demo in this section of the course.
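      The tag-and-push sequence described above follows this shape, with placeholders kept for the values elided in the narration (your actual image ID and Docker Hub username):

```
# Re-tag the local image under your Docker Hub namespace
docker tag <image-id> <dockerhub-username>/containerofcats

# Push the tagged image to Docker Hub
docker push <dockerhub-username>/containerofcats:latest
```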

      Now that's everything that you need to do in this demo lesson. You've essentially installed and configured the Docker engine, used a Dockerfile to create a Docker image from some local assets, tested that Docker image by running a container using that image, and then uploaded that image to Docker Hub, and as I mentioned before, we're going to use that in a future demo lesson in this section of the course.

      Now the only thing that remains is to clean up the infrastructure that we've used in this demo lesson. So go ahead and close down all of these extra tabs and go back to the CloudFormation console. This is the stack that's been created by the one-click deployment link, so all you need to do is select this stack, it should be called EC2 docker, then click on delete and confirm that deletion, and that will return the account to the same state as it was in at the start of this demo lesson.

      Now that is everything you need to do in this demo lesson. I hope it's been useful and I hope you've enjoyed it. So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.

    1. Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.

      That is a very valid concern. AI-generated content, such as output from ChatGPT, can flood online platforms like email and Wikipedia with misinformation, eroding people's trust in those platforms. Because Wikipedia, for example, enables users to edit entries, it is highly susceptible to the addition of false information. There are moderation systems in place, but it's tough to keep up with how quickly AI can generate content. Maintaining the reliability of such platforms requires stronger editorial controls and awareness on the part of users.

    2. Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:

      This is a great example of using social media for a worthy cause, and it shows how context matters. It suggests that trolling can be done ethically to seek social justice for those who have been wronged, forcing such a big company to respond. It's interesting to see how the company's decision backfired because of trolling.

  2. pressbooks.lib.jmu.edu
    1. Does anyone know the original Italian word for "work"?

    2. What did the Italian word for "work" convey in Montessori's time?

    3. Work

      Language evolution! Historical discussion.

    4. [MAPS 2024 conversation] Italian translations of the term "work": "meaningful activity"; "play" (lavora), i.e., "meaningful play". Context: English translations of Montessori's original writing. The Italian terms have different meanings than their English translations. Historical context matters as it relates to the meaning of terms.

    1. for - polycrisis - organized crime - Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18 - book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024 - https://viahtml.hypothes.is/proxy/https://shop.dailymaverick.co.za/product/man-alone-mandelas-top-cop-exposing-south-africas-ceaseless-sabotage/?_gl=11mkyl5s_gcl_auODI2MTMxODEuMTcyNjI0MDAwMg.._gaNzQ5NDM3NzE0LjE3MjMxODY0NzY._ga_Y7XD5FHQVG*MTcyOTM1MjgwOS4xLjAuMTcyOTM1MjgxOS41MC4wLjkyNTE5MDk2OA..

      summary - This article revolves around the research of South African crime reporter Caryn Dolley on the organized web of crime in South Africa - She discusses the nexus of - trans-national drug cartels - local Cape Town gangs - South African state collusion with gangs - in her new book: Man Alone: Mandela's Top Cop - Exposing South Africa's Ceaseless Sabotage - It illustrates how on-the-ground efforts to fight crime are failing because they do not effectively address this criminal nexus - The book follows the life of retired top police investigator Andre Lincoln, whose exposé reveals the depth of criminal activity spanning government, trans-national criminal networks and local gangs - Such organized crime takes a huge toll on society and is an important contributor to the polycrisis - Non-linear approaches are necessary to tackle this systemic problem - One possibility is a trans-national citizen-led effort

    1. Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.

      I completely agree with this. As TikTok gained popularity with its short videos, many other platforms quickly adopted this feature for creating and sharing short-form content. Instagram introduced Reels, and YouTube launched Shorts, both experiencing significant growth as a result. Even Spotify has now incorporated a similar short video format.

    2. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      I like the algorithm social media platforms use because it shows me content that I like to see. I have always wondered how social media sites make money from the ads; anytime I get an ad on any platform, I always skip it if I can.

    1. Social newborn

      Broader /related Term: Third Plane of Development; Adolescence

    1. modern science

      The advent of

    2. transcendental Purity

      Fallen material world

    3. imperfection and impurity

      Complex imperfection

    4. the axial age saw the incorporation of the inner dialogue into the human sense of self agency

      Axial age

    5. a word is the body of a concept a concept is the soul of a word

      Well said

    6. Alfred North Whitehead in 1927

      Creativity

    7. the ploma

      Preloma?

    8. the collective unconscious is in fact the implicate noetic realm

      Hegel absolute spirit foreshadows this

    9. noata shamelessly lifted from Edmund

      X

    10. participants in the Stream of becoming

      Stream of becoming

    11. intricate self- metamorphic and purposive complexes of prehension or experiential relationships

      Ditto

    12. by theories which stray even further

      From

    13. mind warping mathematical toys

      X

    14. underlying conceptual Inc coherence

      Incoherence

    15. the truths of science

      X

    16. our cognitive faculties

      our cognitive faculties are imperfect machines which have been haphazardly assembled by the blind

      watchmaker of algorithmic natural selection

    17. a new form of technological mysticism

      Mysticism

    18. self-confirmation engines

      Technological mysticism

    1. The current global coral bleaching event is already the fourth in 25 years. Temperatures across a large part of the tropical seas were 3° above average this summer. In conversation with la Repubblica, coral expert Roberto Danovaro explains that a quarter of global coral stocks have already been lost. Coral reefs, the ecosystems with the greatest biodiversity, are particularly vulnerable to global heating. https://www.repubblica.it/green-and-blue/dossier/negazionisti-climatici/2024/10/07/news/barriera_corallina_sbiancamento_crisi_clima_roberto_danovaro-423531902/

    1. In psychology, the belief that only conservatives can be authoritarians, and that therefore only conservative authoritarians warrant serious study, has proved self-reinforcing over the course of decades.

      !

    2. “powerful pressures to maintain discipline among members, advocate aggressive and censorious means of stifling opposition, [and] believe in top-down absolutist leadership.”

      !

    3. Intriguingly, the researchers found some common traits between left-wing and right-wing authoritarians, including a “preference for social uniformity, prejudice towards different others, willingness to wield group authority to coerce behavior, cognitive rigidity, aggression and punitiveness towards perceived enemies, outsized concern for hierarchy, and moral absolutism.”

      !

    4. But one reason left-wing authoritarianism barely shows up in social-psychology research is that most academic experts in the field are based at institutions where prevailing attitudes are far to the left of society as a whole. Scholars who personally support the left’s social vision—such as redistributing income, countering racism, and more—may simply be slow to identify authoritarianism among people with similar goals.

      !

    1. menselijk handelen

      Human acts: buying, borrowing, renting, etc.

    2. Blote rechtsfeiten

      Bare legal facts: being born, dying, ageing, etc.

    1. Overall Assessment (4/5)

      Summary: The authors provide a software tool NeuroVar that helps visualizing genetic variations and gene expression profiles of biomarkers in different neurological diseases.

      Technical Release criteria

      Is the language of sufficient quality? * The language of the document is of sufficient quality. I did not notice any major issues.

      Is there a clear statement of need explaining what problems the software is designed to solve and who the target audience is? * Yes, the authors provide a statement of need. They mention that there is a need for a specialized software tool to identify genes from transcriptomic data and genetic variations such as SNPs, specifically for neurological diseases. Perhaps the authors could expand in the introduction on how they chose the diseases; e.g., stroke is not listed among the neurological diseases.

      Is the source code available, and has an appropriate Open Source Initiative license been assigned to the code? * Yes, the source code is available on GitHub under the following link: https://github.com/omicscodeathon/neurovar. Additionally, the authors deposited the source code and additional supplementary data in a permanent repository on Zenodo under the following DOI: https://zenodo.org/records/13375493. They also provided test data: https://zenodo.org/records/13375591. I was able to download and access the complete set of data.

      As Open Source Software are there guidelines on how to contribute, report issues or seek support on the code? * I did not find any way to contribute, report issues or seek support. I would recommend that the authors add this information to the Github README file.

      Is the code executable? * Yes, I could execute the code using Rstudio 4.3.3

      Is the documentation provided clear and user friendly? * The documentation is provided and is user friendly. I was able to install, test and run the tool using RStudio. The authors may consider also offering a simple website link for the R Shiny tools, if possible. This may enable access for scientists who are not familiar with R. It is especially great that the authors provided a demonstration video. I was able to reproduce the steps. However, I would recommend adding more information to the YouTube video; e.g., a reference to the preprint/paper and the GitHub link would be helpful to connect the data. Perhaps the authors could also expand a bit on the possibilities to export data from their software, and provide different formats, e.g., PDF/PNG/JPEG. I think this is important for many researchers to export their outputs, e.g., from the heatmaps.

      Is installation/deployment sufficiently outlined in the paper and documentation, and does it proceed as outlined? * I could follow the installation process, but perhaps the authors could describe how to download from GitHub in more detail, as some scientists may have trouble with it. An installation video (in addition to the video demonstration of the NeuroVar Shiny App) might also be helpful.

      Is there a clearly-stated list of dependencies, and is the core functionality of the software documented to a satisfactory level? * Yes, dependencies are listed and are installed automatically. It worked for me with Rstudio version 4.3.3. In the manuscript and in the

      Have any claims of performance been sufficiently tested and compared to other commonly-used packages? * not applicable

      Are there (ideally real world) examples demonstrating use of the software? * Yes, the authors use the example of epilepsy, focal epilepsy and the gene of interest DEPDC5. I replicated their search and got the same results. However, I find that the label in Figure 1 for the gene's transcript could be a bit clearer; e.g., it is not clear to me what transcript start and end refer to. It might also be helpful if the authors provided an example dataset for the expression data that is loaded in the software by default. Furthermore, the authors present case study results using RNA-seq in ALS patients with mutations in the FUS, TARDBP, SOD1, and VCP genes.

      Is test data available, either included with the submission or openly available via cited third party sources (e.g. accession numbers, data DOIs, etc.)? * Yes, the authors provide test data with a DOI: https://zenodo.org/records/13375591.

      Is automated testing used or are there manual steps described so that the functionality of the software can be verified? * Automated testing is not used, as far as I can tell.

      Overall Recommendation: * Accept with revisions

      Reviewer Information: Ruslan Rust is an assistant professor in neuroscience and physiology at University of Southern California working on stem cell therapies on stroke. His lab is particularly interested in working with genomic data and the development of new biomarkers for stroke, AD and other neurological diseases.

      Dr. Ruslan Rust's profile on ResearchHub: https://www.researchhub.com/author/4945925

      ResearchHub Peer Reviewer Statement: This peer review has been uploaded from ResearchHub as part of a paid peer review initiative. ResearchHub aims to accelerate the pace of scientific research using novel incentive structures.

    1. Lyon, governed by the French Greens, is responding to global heating with a "permeable city" strategy. Part of this is letting water soak into the ground rather than draining it away in every single new building project, without exception. A vice-president of the region explains the strategy, and the state's unwillingness to support the city financially, in conversation with Libération on the occasion of the extreme rainfall in the Rhône department: https://www.liberation.fr/societe/inondations-dans-la-metropole-de-lyon-nous-payons-des-annees-damenagements-urbains-qui-nont-pas-tenu-compte-du-dereglement-climatique-20241018_FT2OJG5YNVFWBJMHM37NJGB634/?redirected=1

    1. しいます

      します

    2. ウェジェット

      ウィジェット

    3. づつ

      ずつ

    4. ごと

    5. グラフを表現する機能があります

      Streamlit seems to use this internally, so it might be worth introducing it somewhere:

      https://altair-viz.github.io/

    6. t.session_state.dices.append

      Rather than rewriting this, how about putting the following just before the if statement?

      dices = st.session_state.dices

      (Then the rest wouldn't need to be rewritten.)
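      The suggestion above works because assigning a list to a new name in Python creates an alias to the same object, not a copy; appends through the alias are visible through the original reference. A minimal sketch, using a plain dict to stand in for st.session_state (an assumption purely for illustration):

```python
# A plain dict standing in for st.session_state (illustration only).
session_state = {"dices": []}

# Bind a short local name to the same list object: an alias, not a copy.
dices = session_state["dices"]

# Appending through the alias mutates the single shared list...
dices.append(4)

# ...so the change is visible through the original reference as well.
print(session_state["dices"])  # [4]
```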

    7. if "dices" not in st.session_state: # セッションデータの初期化 st.session_state.dices = []

      The Streamlit sample code does it this way too, but it checks for existence with not in and then initializes via attribute access, which struck me as a bit tricky. I'd like that point explained.

      st.session_state["dices"] = [] seems to behave the same way.

    8. これらを

      Which things does "these" (これら) refer to?

    9. 上記のステップの3つ目

      If you phrase it this way, it would be clearer to turn the steps above into a numbered list and write "up to step 3".

    10. Here, I'd like to see an app that simply passes various data types to st.write, with screenshots showing how each one is displayed nicely.

    11. サンプルアプリ(2)

      This one should also be given a name.

    12. それ相応

      It's hard to tell what それ相応 ("correspondingly appropriate") is referring to.

      I wonder who is supposed to consider it appropriate.

      An expression like "appropriate" or "suited to the data type" might work better.

    13. プロパティ

      Shouldn't this be "argument" (引数)?

    14. 以下のとおりです。

      This jumps straight to the result; I'd like two states shown: the initial state, and the state after entering text and pressing the button.

      (An animated GIF would make me happy!)

    15. randam

      typo: random

    16. 入力されたもを

      typo: 入力されたものを?

    17. # 入力ボックス

      The code looks cluttered; I think it would be more readable to put comments on the line above, and to insert blank lines where the functionality clearly changes.

    18. st.text_input

      I'd like an explanation just before this.

      Something like "Below, we explain ○○."

    19. splited_text

      Apparently the past tense of split is split, so split_text is fine.

      https://www.eigo-bu.com/vocab/pp/split

      Since it's a list, a name like words would also work.

    20. choice

      I'd prefer choiced.

    21. replace("　", " ")

      Converting full-width spaces to half-width is unrelated to the main topic, and simply using

      text.split() would be fine.
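      The point above can be verified in plain Python: str.split() with no arguments splits on runs of any Unicode whitespace, including the full-width ideographic space (U+3000), so the replace step isn't needed:

```python
import random

# split() with no arguments handles half-width and full-width spaces alike
text = "apple　banana cherry"  # note the full-width space after "apple"
words = text.split()
print(words)  # ['apple', 'banana', 'cherry']

# pick one word at random, as the sample app does
print(random.choice(words))
```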

    22. スペース区切りの文字列から一つの単語を選択する

      This title doesn't match what the code passes to st.title().

      What is the intent of this title?

    23. 内容

      Please write this out in sentences.

    24. サンプルアプリ(1

      "(1)" is hard to follow, so I think it would be better to give it a name, e.g.:

      サンプル - ランダム選択アプリ ("Sample: random selection app")

    25. 起動

      Something like 「アプリを起動」 ("Launch the app").

    26. ```bash

      Something seems to have been written incorrectly here.

    27. import streamlit as st

      Please include a caption:

      ```{code-block} python :caption: app.py

      import streamlit ...

      ```

    28. st.title("サンプルアプリ")

      I'm in the camp that wants a blank line here.

    29. 多くの依存パッケージがあり、pandasなども依存しており多くのパッケージがインストールされます。

      The phrasing is redundant. Something like the following would be better:

      pandasなど多くの依存パッケージが一緒にインストールされます。 ("Many dependency packages, such as pandas, are installed along with it.")

    30. # venvの作成と有効化

      Overall, I think these would be better as captions rather than comments.

    31. venvについては

      Neither the April article nor previous authors' articles explain venv in any depth, so the venv explanation may not be needed.

      https://terada-202410-streamlit.gihyo-python-monthly.pages.dev/2024/202404

    32. されている

      している

      Streamlit is the subject here, so it doesn't need to be passive.

    33. 開発開発

      typo

    34. 複雑な処理を

      I think you're trying to say that the complex processing happens on the server side while the front end stays simple, but I don't think that will come across as written.

    35. これらの

      "これら" (these) appears twice in a row, which is hard to read, and does this "these" refer to something different from the previous one? It would be better to be concrete rather than using pronouns.

    36. Single sentences contain too many commas (「、」), which makes them hard to read. Please tidy this up.

    37. これはら

      typo: これらは

    38. 機能にフォーカスを当てて、よく使う機能を紹介します

      "機能" (feature) is repeated, so you could drop the first occurrence.

    1. whirlpool.

      The whirlpool contrasts with moments of stillness and clarity in the poem. It underscores the tension between chaos and order, reflecting the desire for meaning in a fragmented world. The whirlpool serves as a reminder of the relentless motion of time and the challenges of finding stability.

    2. The river sweats Oil and tar

      The lines "The river sweats / Oil and tar" reflect the industrial pollution of the environment and symbolize the decay and corruption present in modern life. The river, typically a symbol of life and renewal, is assigned a certain vitality and transformed into a site of contamination, highlighting themes of desolation and moral decline in the post-war world.

    3. Twit twit twit Jug jug jug jug jug jug So rudely forc’d. Tereu

      In "The Waste Land," the lines "Twit twit twit / Jug jug jug jug jug jug / So rudely forc'd" evoke a jarring and fragmented sense of communication, drawing from the myth of Tereus, Procne, and Philomela. This reference introduces themes of violence, loss, and the disruption of natural order. The repetition of "twit" and "jug" creates a rhythmic yet unsettling sound, almost mocking in its simplicity. It highlights the stark contrast between the complexity of human emotion and the reduced, animalistic quality of the sounds. This mirrors the broader themes of disconnection and alienation throughout the poem. The reference to Tereus—who brutally silenced Philomela by cutting out her tongue—serves as a potent metaphor for silencing and trauma. In this context, the nymphs and their experiences are connected to loss and violence, underscoring the idea that beauty and vitality are often subjected to brutal realities.

    4. departed.

      The indentation of “departed” draws attention to the unusual experience of the nymphs, who traditionally symbolize beauty, love, and the natural world, often associated with life and abundance. However, in Eliot’s context, their presence serves to contrast the barrenness and emptiness of modern existence. Also, decapitalizing “departed” shifts the agency of the myths and implies a more passive experience as they have been swept away and lost without active control over their fate. This loss of agency aligns with the themes in "The Waste Land," where characters often feel powerless in the face of societal decay and personal disillusionment. The experience of the nymphs can be interpreted as a reflection of unfulfilled longing and the impact of a fragmented society on intimate relationships. Instead of celebrating love and connection, their references evoke a sense of nostalgia for a more vibrant, meaningful past that has been lost. This mirrors the sorrow expressed in Psalm 137, where the Israelites long for their homeland, suggesting a universal longing for wholeness and the deep human need for connection.

      Ultimately, the nymphs' experience in "The Waste Land" draws attention to the contrast between the idealized past and the stark reality of the present, reinforcing the poem's exploration of loss, longing, and the search for identity in a desolate world. The line "Departed, have left no addresses" from "The Waste Land" resonates deeply with the themes in Psalm 137, particularly the sense of dislocation and absence. In Psalm 137, the Israelites lament their exile in Babylon, feeling disconnected from their homeland and traditions. The line evokes a profound sense of loss and the inability to return to a place of belonging, mirroring the mournful sentiment of having no way to communicate or reconnect with what has been left behind. Both texts express a longing for something lost and the pain of separation, emphasizing the emotional weight of exile. Just as the Israelites mourn their captivity and the destruction of their identity, Eliot's line suggests a broader existential crisis where individuals feel untethered in a fragmented world, underscoring the despair and disconnection prevalent in both works.

    5. HURRY UP PLEASE ITS TIME

      Eliot artfully weaves imagery and language that evokes quietude into the fabric of the poem, creating a body of work whose essence personifies forms of silence. The poem possesses a hushed quality, behaving similarly to a curse word, as if to engage and think with the poem is taboo. Yet, when read, the assemblage of fragmented imagery, allusions, ambiguous language and voice, or lack thereof, engenders a profusion of sound. Eliot’s use of syntax in “A Game of Chess” depicts the unexpected resonance of unsaid speech, drawing attention to the hidden yet audible nature of cognition. The capitalization of “HURRY UP PLEASE ITS TIME,” a noticeable shift from the earlier lowercase dialogue, intends to evoke a semblance of sound while maintaining the generally quiet disposition of the poem. Eliot's interplay with cognition and sound probes the potency of unsaid speech, revealing how the silence between words carries as much meaning as spoken language itself, inviting readers to consider the depths of thought and emotion that lie beneath the surface of expression.

    6. The Chair she sat in, like a burnished throne,

      I am drawn to the parallels between T.S. Eliot’s The Waste Land and Baudelaire’s “A Martyred Woman,” particularly their shared exploration of the suffering and sacrifice of women. Both works present women as embodiments of beauty intertwined with pain. In Baudelaire’s poem, the “martyred woman” is depicted as suffering yet noble, while Eliot’s female characters often reflect a sense of despair and emotional turmoil despite their allure. Baudelaire explicitly frames women as martyrs, suggesting that their beauty is a source of suffering. Similarly, Eliot’s portrayal of women suggests that they endure personal sacrifices and struggles, often reflecting broader societal issues. This martyrdom emphasizes the emotional toll placed on women. Both poets critique the societal roles imposed on women. Baudelaire highlights how women are idealized yet subjected to suffering, while Eliot’s women often navigate a fragmented identity within a patriarchal context, exposing the emptiness behind romanticized notions of femininity. In both texts, women experience deep alienation. Baudelaire's martyred figures are isolated in their suffering, while Eliot’s women, such as Lil or the clairvoyante, illustrate the emotional disconnect prevalent in modern life, reinforcing feelings of loneliness and despair.

    7. 'That corpse you planted last year in your garden,

      Baudelaire juxtaposes the beauty of art and nature with the harsh realities of life, often reflecting on the dualities of pleasure and suffering. The poems frequently capture the essence of modern urban life, particularly in Paris, highlighting the alienation and moral ambiguity found in the city. Baudelaire delves into themes of vice and corruption, examining how they coexist with beauty. He often portrays sin as an integral part of human nature. Despite the dark themes, there are moments of seeking transcendence through art, love, and spirituality, hinting at the possibility of redemption amid despair. Interestingly, Baudelaire positions the poet as a visionary who can perceive the deeper truths of existence, navigating the complexities of the human condition.

      The line "that corpse you planted last year in your garden" embodies themes of beauty and decay; the imagery of the corpse juxtaposed with the idea of a garden symbolizes the intersection of life and death. It suggests that what might typically be seen as beautiful (a garden) is tainted by decay and mortality. This line hints at buried past sins or traumas, implying that the speaker is grappling with unresolved issues that refuse to remain hidden. The corpse can symbolize guilt or repressed memories that disrupt the facade of normalcy. The garden, often a symbol of natural beauty and cultivation, contrasts sharply with the idea of a corpse. This reflects the alienation and spiritual emptiness of modern life, where even beauty is intertwined with death. The act of planting a corpse can be seen as a perverse twist on the natural cycle of life, suggesting a disruption in the natural order. It points to the theme of regeneration but in a way that is grotesque and unsettling. This line encapsulates Eliot’s task of confronting uncomfortable truths. It suggests that to understand the modern condition, one must acknowledge the darker aspects of existence.

    8. from the hyacinth garden,

      Eliot weaves themes of beauty, love, and loss inspired by the story of Apollo and Hyacinth into the fabric of “The Waste Land,” particularly the cycles of life and death, the transient nature of beauty, and the emotional desolation of the modern world. The tale of Apollo, the god of light and music, and Hyacinth, his beloved, emphasizes the intensity of love and the tragedy of loss. Hyacinth's death, caused by an accidental injury from Apollo’s discus, illustrates how beauty can be fleeting and how love can lead to deep sorrow. In the myth, Hyacinth is transformed into a flower after his death, symbolizing the idea of regeneration. However, in "The Waste Land," this regeneration is complicated by the poem’s pervasive sense of despair and fragmentation. The cycles of life and death are depicted, but they often feel broken or unfulfilled. Eliot contrasts the mythic beauty of Apollo and Hyacinth with the barrenness of the modern world. The decorated imagery of the myth serves to heighten the bleakness of contemporary existence, where love and beauty seem diminished or lost amidst urban decay and spiritual emptiness. The reference to this myth also connects to the broader cultural and literary heritage that Eliot draws upon throughout "The Waste Land." It reflects his engagement with themes of mythology, art, and the human condition, suggesting that ancient stories continue to resonate, even in a fractured modern context.

    9. Quando fiam uti chelidon

      10.18

      Does “The Waste Land” end on a positive note? In debating with myself, I found my answer to remain hopelessly inconclusive. In the final section of the poem, it seems that our protagonist, in a role similar to a quester, has finally arrived at the Waste Land’s “Chapel Perilous” following the hopeful “violet hour” (380). Still, readers are left clueless regarding whether the desired task of regeneration has been completed. In what seems to be the most climactic scene, a rooster announces the arrival of rain from the chapel rooftop, yet two details keep me unnerved about this resolution:

      Firstly, where on Earth did the rain go? The “damp gust” is responsible for “bringing [the] rain,” yet this action is trapped in an unfinished, infinitive state (394-5). In fact, the “black clouds,” confined in a distant mountain chain, can never rejuvenate the withering land in the riverbanks and valleys (397).

      In addition, the cock, the announcer of the rain, is itself heavily connected to the uncertain state between life and death. Firstly, the animal figures in Ariel’s song “Hark, hark! I hear / [...] Cry, Cock a diddle dow” in Shakespeare’s Tempest, which brings to mind the fabricated death of Alonso, King of Naples. Secondly, the word is mentioned in another Shakespearian play, Hamlet, in the specific context of King Hamlet’s appearance as a ghost (ghost-hood and fabricated deaths suggest a similar border state between life and death). This brings even greater uncertainty regarding the cock’s ability of announcing/directing genuine revitalization.

      This sense of incompletion persists until the very last stanza, in which border states, including the shore that the speaker sits at (between water and land) and the London Bridge (between life and death/Inferno), figure heavily. In addition, the insufficiency of Philomela’s transformation is emphasized once again. The line “quando fiam uti chelidon” merely anticipates a future gaining of a voice similar to that of the swallow’s, yet the task is essentially unfulfillable – while both sexes of the swallow can sing, only the male nightingale sings (429). Philomela’s metamorphosis still does not liberate her from her silence, a reminder of her subjugation. It is, once again, an incomplete renewal at best.

    10. falling down falling down falling down

      This is one of many times in the poem where repetition like this occurs. This is similar to "The Vigil of Venus," where the line "Tomorrow may loveless, may lover tomorrow make love" is repeated several times throughout the poem. Interestingly, that line is itself almost repetition but not quite, which makes the idea of love in the poem feel like an ever-changing thing that isn't stagnant. Meanwhile, "The Waste Land"'s use of "falling down falling down falling down," through its insistent and exact repetition, seems to show an action that cannot be undone and is damaging, like the London Bridge falling down.

    11. My friend

      In Angela’s annotation for this line, she interrogates the true nature of friendship, claiming that friendship in “The Waste Land” appears in relation to “indifference” and “superficiality” (Li). She cites Bradley as one of her sources, specifically, "a common understanding being admitted, how much does that imply? What is the minimum of sameness that we need suppose to be involved in it?" (Bradley, 6). The word “understanding” specifically caught my attention, as it is central to the Brihadaranyaka Upanishad. This line of “The Waste Land” is in reference to the part of the Upanishad that means “give”: “Then the human beings said to him, ‘Teach us, father.’ He spoke to them the same syllable DA. ‘Did you understand?’ ‘We understood,’ they said. ‘You told us, “Give (datta)”’” (Brihadaranyaka, Chapter 2). Yet, although the humans were instructed to give, Eliot appears to extend this scene, resuming it when the humans reflect upon the past, asking “what have we given?”

      The deception and failure of friendship that Angela identifies as it relates to this line may also provide an answer to the shortcomings of the humans to “give.” Before the line Angela quotes, Bradley states, “what, however, we are convinced of, is briefly this, that we understand and, again, are ourselves understood” (Bradley, 6). Very clearly, Bradley accuses the human race of being under an illusion of understanding one another. If they are under the illusion of understanding, then the credibility of the humans in the Upanishad is completely undermined when they say that they “understand” what datta means. Possibly, they misunderstand what it means to “give,” or, Eliot may be making the claim that they misunderstood the meaning of datta itself as it exists in the universe of the poem. With this in mind, it makes sense that the humans are unable to point to what they’ve given in “The Waste Land.” They are left without direction, and, according to Bradley, they are condemned to failure in connecting, or “giving” themselves to one another. Even “my friend” implies an antithesis to “give”--possession. Eliot seems to agree with Bradley’s proposal that friendship, relationship, true exchange between one person and another is something beyond human understanding.

    12. Only at nightfall, aethereal rumours Revive for a moment a broken Coriolanus

      Coming back to what I said in a previous annotation about actions getting darker as night comes, this seems to flip that idea on its head a bit when saying "Only at nightfall, aethereal rumours / Revive for a moment a broken Coriolanus". Coriolanus is a Shakespeare character who is notably a bit of an antihero, so these lines seem to say that "aethereal rumors" at nightfall are what temporarily redeem Coriolanus, despite a previous annotation of mine arguing that peoples' actions get darker as the night falls. For Coriolanus, it seems to be the opposite.

      This is also interesting when you consider Francis Herbert Bradley's Appearance and Reality where he argues that much of what humans perceive is an illusion, which makes it hard for people to truly connect with each other. This makes me wonder if these "aethereal rumours" are then actually other people and not supernatural beings, but Eliot is referring to them this way to show the true distance between ourselves and the reality of other people.

    13. Who is the third who walks always beside you?

      Both this stanza and P. Marudanayagum's "Retelling of an Indian Legend" deal with a mysterious other. In the legend, the vial (verandah) has enough space for one person to lie on, two people to sit on, or three people to stand on. Once three people are standing on the vial, they feel a fourth presence but don't know who it is, before realizing it's Lord Vishnu (a Hindu God). Following the logic of this legend, a mysterious presence in a space where it's not physically possible for the presence to fit inside is probably a God or other supernatural thing. However, this stanza shows two, not three, people that are standing, and their space isn't limited, but there's also a mysterious presence. There's definitely a lot to unpack here, and I'd welcome any theories about it, but I desperately need to go to sleep and can't properly theorize at this point.

    14. Quando fiam uti chelidon—O swallow swallow

      The sixth line of Eliot’s final stanza in “The Waste Land” reads, “Quando fiam uti chelidon”, or “when shall I be as the swallow”. This line was taken from the Pervigilium Veneris, translated by Allen Tate, which recalls the story of Philomela, an Athenian princess who was raped by a king and later turned into a bird. To gain a better sense of Eliot’s reference, we can look at it in the context of the stanza in the Pervigilium Veneris, which reads, “She sings, we are silent. When will my spring come? Shall I find my voice when I shall be as the swallow? … Silent, I lost the muse. Return, Apollo!”. The mention of spring harkens back to the beginning of “The Waste Land”, where spring is a major theme. In the Pervigilium Veneris, Philomela attributes spring to herself, calling it “my spring”, suggesting that spring represents her own rebirth and restoration. Thus, we might interpret Eliot’s “spring” in a similar manner. Philomela’s seeking out of her voice is also interesting in terms of “The Waste Land”, which is built on fragmented dialogue and ever-changing voices. Interestingly, Philomela seems to have lost “the muse”, or divine inspiration, and in frustration she calls out to Apollo to inspire her once again. Eliot, through his biblical references and prayers, seems to be calling out to the divine, perhaps for his own inspiration as well. Another significant part of the Pervigilium Veneris is the repeated line, “Tomorrow may loveless, may lover tomorrow make love.” Through this repeating and ambiguous line, the reader gets a sense of the future, and of the contrast between lovelessness and making love in that future. The word “may” expresses possibility, but can also be interpreted as expressing a wish, or hope.
      In the final stanza, this phrase shifts into, “Tomorrow let loveless, let lover tomorrow make love.” The newly introduced word, “let”, seems to acknowledge how fate is in the hands of the gods, as it is a more direct expression of desire. Ultimately this repetition and prayer falls in line with similar repetitions such as “HURRY UP PLEASE ITS TIME” in “The Waste Land”, suggesting Eliot’s intensifying attempts at communication with the divine.

    15. We think of the key, each in his prison Thinking of the key, each confirms a prison Only at nightfall, aethereal rumours

      While reading this stanza of “What the Thunder Said”, I instantly connected Eliot’s mention of aethereal rumours to Appearance and Reality by Francis Herbert Bradley. Bradley’s philosophical essay attempts to examine and explain interactions between souls. In particular, Bradley mentions ether while discussing the possibility of direct communication between souls (as in soul-to-soul communication without the use of bodies). Bradley explains that this communication would occur by “a medium extended in space, and of course, like ‘ether,’ quite material”. Thus ether, while material, is equated to the direct impressions of one soul upon another. With this understanding of ether, we can interpret “aethereal rumours” to be ones not concerned with the external environment or human bodies, but rather spiritual messages that transcend the normal methods of bodily communication, such as the voice. However, Bradley seems to doubt the existence of this ethereal communication, and proceeds to worry, stating, “If such alterations of our bodies are the sole means which we possess for conveying what is in us, can we be sure that in the end we really have conveyed it?”. Essentially, Bradley shares his fear that humans are unable to fully represent their souls through their bodies. Interestingly, Eliot’s two previous lines seem to evoke a similar notion of distorted communication between souls. Eliot states, “We think of the key, each in his prison / Thinking of the key, each confirms a prison”. In these lines, the people’s thoughts are collective and similar, but each individual has his own prison. When regarding the word “key”, one might think of a physical key to the prison; however, I argue that the word “key” instead refers to the ethereal communication between souls discussed by Bradley. A key is defined as “a thing that provides a means of understanding something”, such as “the key to the code”, or “the key to the riddle”.
      With this understanding of a key, we can interpret Eliot’s prisons as what Bradley would describe as the limits of the bodily expression of the soul. These prisons seem to be “confirmed” by the existence of this “key”, which might represent another concern: that the bodily methods of communication are only seen as limits because of the yearning for ethereal soul-to-soul communication.

    1. Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.

      Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.

      And also if you are following along, you need to be logged in as the IAM admin user of the general AWS account.

      So the management account of the organization and have the Northern Virginia region selected.

      Now again, this is going to be very brief.

      So it's probably not worth doing in your own environment unless you really want to.

      Now what I want to demonstrate is termination protection.

      So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.

      Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.

      And this is assuming that the instance is currently in a running state.

      Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.

      But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.

      And that can result in data loss, which isn't ideal.

      What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.

      If you click that option, you get this dialogue where you can enable termination protection.

      So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.

      So I'm going to enable it and click on save.

      And now that instance is protected against termination.

      If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.

      The dialogue says that the instance, followed by its instance ID, may not be terminated, and that you should modify its 'disableApiTermination' instance attribute and then try again.

      So this instance is now protected against accidental termination.
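      The same console steps can be reproduced from the AWS CLI; a minimal sketch, assuming a placeholder instance ID:

      ```bash
      # Enable termination protection (the instance ID below is a placeholder)
      aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --disable-api-termination

      # Attempting to terminate now fails, citing the disableApiTermination attribute
      aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

      # Disable protection again when the instance genuinely should be terminated
      aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --no-disable-api-termination
      ```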

      Now this presents a number of advantages.

      One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.

      So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.

      So you have the option of role separation.

      You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.

      So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.

      It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.
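      One hedged sketch of how that role separation might be expressed as an IAM policy: senior administrators receive permission to modify the attribute, while other administrators receive only ec2:TerminateInstances. The policy and its name below are illustrative, not taken from the lesson:

      ```bash
      # Illustrative policy for senior admins only: permission to change the
      # disableApiTermination attribute. Junior admins would get a policy with
      # ec2:TerminateInstances but NOT ec2:ModifyInstanceAttribute.
      cat > allow-disable-termination-protection.json <<'EOF'
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "ec2:ModifyInstanceAttribute",
            "Resource": "*"
          }
        ]
      }
      EOF
      aws iam create-policy \
        --policy-name AllowDisableTerminationProtection \
        --policy-document file://allow-disable-termination-protection.json
      ```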

      So you might not have this for development and test environments, but for anything in production, this might be a standard feature.

      If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
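      For example, when launching instances from scripts rather than CloudFormation, protection can be switched on at launch time; a sketch with a placeholder AMI ID:

      ```bash
      # Launch with termination protection already enabled
      # (the AMI ID below is a placeholder)
      aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --disable-api-termination
      ```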

      So this is a really useful feature to be aware of.

      And for the SysOps exam, it's essential that you understand when and where you'd use this feature.

      And for both the SysOps and the developer exams, you should pay attention to this 'disableApiTermination' attribute.

      You might be required to know which attribute needs to be modified in order to allow terminations.

      So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.

      At this point though, that is everything that I wanted to cover about this feature.

      So to revert it, right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.

      One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.

      Now logically, the default is stop, but if you are running an environment where you don't consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.

      You might not want to have an account with lots of stopped instances.

      You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
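      This default can also be changed from the CLI; a sketch, again using a placeholder instance ID:

      ```bash
      # Change what happens on an OS-level shutdown (the default is 'stop')
      aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --instance-initiated-shutdown-behavior Value=terminate
      ```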

      Now at this point, that is everything I wanted to cover.

      If you were following along with this in your own environment, you do need to clear up the infrastructure.

      So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.

      And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.

      If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.

      So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.

      Welcome back, and in this demo lesson you're either going to get hands-on experience or you can watch me interacting with an Amazon Machine Image.

      So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.

      It had an install of WordPress, and it had the cowsay application installed and a custom login banner.

      Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.

      So let's go to the EC2 console. If you are following along with this in your own environment, do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization, and that you have the Northern Virginia region selected.

      The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.

      So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.

      Now if I go ahead and change the region, maybe from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, what we'll see is that we go back to the same area of the console, only now we won't see any AMIs. That's because an AMI is tied to the region in which it's created.

      Every AMI belongs in one region and it has a unique AMI ID.

      So let's move back to Northern Virginia.

      Now we are able to copy AMIs between regions, and this allows us to make one AMI and use it for a global infrastructure platform. So we can right-click and select Copy AMI, then select the destination region. For this example, let's say that I did want to copy it to Ohio; I would select that in the drop-down. It would allow me to change the name if I wanted, or I could keep it the same. For the description, it would show that it's been copied from this AMI ID in this region, and then it would have the existing description at the end.

      So at this point I'm going to go ahead and click Copy AMI, and that process has now started. If I close down this dialogue and then change the region from us-east-1 to us-east-2, now we have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region. If we go ahead and click on Snapshots under Elastic Block Store, then we're going to see the snapshot or snapshots which belong to this AMI.
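      The same copy can be kicked off from the CLI; a sketch with a placeholder source AMI ID:

      ```bash
      # Copy the AMI from us-east-1 into us-east-2; note that the command
      # returns a brand new AMI ID in the destination region
      aws ec2 copy-image \
        --source-region us-east-1 \
        --source-image-id ami-0123456789abcdef0 \
        --region us-east-2 \
        --name "animals-for-life-ami-copy"
      ```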

      Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.

      In our case we only have the one which is the boot volume that's used for our custom AMI.

      Now the time taken to copy a snapshot between regions depends on many factors: what the source and destination regions are and the distance between the two, the size of the snapshot, and the amount of data it contains. It can take anywhere from a few minutes to much, much longer, so this is not an immediate process.

      Once the snapshot copy completes, the AMI copy process will complete and that AMI is then available in the destination region. But an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.

      AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.

      If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.

      So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID. They are completely different AMIs; you're creating a new one as part of the copy process.

      So while the data is going to be the same, conceptually they are completely separate objects, and that's critical for you to understand, both for production usage and when answering any exam questions.

      Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.

      So if I right-click on this AMI and edit AMI permissions by default an AMI is private.

      Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.

      Now you can change the permission of the AMI you could set it to be public and if you set it to public it means that any AWS account can access this AMI and so you need to be really careful if you select this option because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.

      A much safer way is if you do want to share the AMI with anyone else then you can select private but explicitly add other AWS accounts to be able to interact with this AMI.

      So I could click in this box. For example, if I clicked on Services and moved to the AWS Organizations service (I'll open that in a new tab), let's say that I chose to share this AMI with my production account. I would select my production account ID and then add it into this box, which would grant my production AWS account the ability to access this AMI.

      Now note that there's also this checkbox, and this adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.

      Generally, if you are sharing an AMI to another account inside your organization, then you can afford to be relatively liberal with permissions. So if you're sharing this internally, I would definitely check this box, and that gives full permissions on the AMI as well as the snapshots, so that anyone can create volumes from those snapshots as well as accessing the AMI.
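      From the CLI, these are two separate grants: a launch permission on the AMI and a create volume permission on its snapshot. A sketch, with placeholder AMI, snapshot, and account IDs:

      ```bash
      # Grant a specific account launch permission on the AMI
      # (all IDs below are placeholders)
      aws ec2 modify-image-attribute \
        --image-id ami-0123456789abcdef0 \
        --launch-permission "Add=[{UserId=111122223333}]"

      # The console checkbox maps to createVolumePermission on the underlying
      # snapshot(s), which has to be granted separately on the CLI
      aws ec2 modify-snapshot-attribute \
        --snapshot-id snap-0123456789abcdef0 \
        --attribute createVolumePermission \
        --operation-type add \
        --user-ids 111122223333
      ```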

      So these are all things that you need to consider.

      Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.

      If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.

      While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.

      You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.

      There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.

      Now if you want to, you could also share an AMI with an organizational unit or organization, and you can do that using this option.

      This makes it easier if you want to share an AMI with all AWS accounts within your organization.

      At this point though I'm not going to do that we don't need to do that in this demo.

      What we're going to do now though is move back to US-East-2.

      That's everything I wanted to cover in this demo lesson.

      Now that this copied AMI is available, we can right click and select Deregister, and then move back to US-East-1. And now that we've done this demo lesson, we can do the same process with the original AMI.

      So we can right click, select Deregister, and that will remove that AMI.

      Click on Snapshots. This is the snapshot created by this AMI, so we need to delete it as well: right-click, select Delete Snapshot, and confirm. We'll need to do the same process in the region that we copied the AMI and the snapshot to.

      So select US-East-2. It should be the only snapshot in the region, but make sure it is the correct one, then right-click, select Delete and confirm the deletion. Now you've cleared up all of the extra things created within this demo lesson.
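      The per-region cleanup clicks map to two CLI calls: `deregister-image` for the AMI and `delete-snapshot` for its backing snapshot. Echoed here as a dry run with placeholder IDs; repeat with the appropriate `--region` for each region the AMI was copied to.

      ```shell
      # Dry-run sketch of the cleanup: deregister the AMI, then delete its
      # snapshot, in each region involved. IDs are placeholders.
      AMI_ID="ami-0abcdef1234567890"
      SNAPSHOT_ID="snap-0abcdef1234567890"

      for REGION in us-east-1 us-east-2; do
        echo aws ec2 deregister-image --image-id "$AMI_ID" --region "$REGION"
        echo aws ec2 delete-snapshot --snapshot-id "$SNAPSHOT_ID" --region "$REGION"
      done
      ```

      Note that deregistering an AMI does not delete its snapshots, which is why the console walkthrough above deletes them as a separate step.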

      Now that's everything I wanted to cover. I just wanted to give you an overview of how to work with AMIs from the console UI, from a copying and sharing perspective.

      Go ahead and complete this video and when you're ready I look forward to you joining me in the next.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So the first step is to shut down this instance.

      So we don't want to create an AMI from a running instance because that can cause consistency issues.

      So we're going to close down this tab.

      We're going to return to instances, right-click, and we're going to stop the instance.

      We need to acknowledge this and then we need to wait for the instance to change into the stopped state.

      It will start with stopping.

      We'll need to refresh it a few times.

      There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.

      So this is going to create an AMI.

      And first we need to give the AMI a name.

      So let's go ahead and use Animals for Life template WordPress.

      And we'll use the same for Description.

      Now what this process is going to do is create a snapshot of any EBS volumes that this instance is using.

      It's going to create a block device mapping, which maps those snapshots onto a particular device ID.

      And it's going to use the same device ID as this instance is using.

      So it's going to set up the storage in the same way.

      It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.

      So you'll see here that it's using EBS.

      It's got the original device ID.

      The volume type is set to the same as the volume that our instance is using, and the size is set to 8.

      Now you can adjust the size during this process as well as being able to add volumes.

      But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.

      Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.

      If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.

      You can force that not to occur and create the AMI without rebooting.

      But again, that's even less ideal.

      The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
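      For reference, the console's Create Image action corresponds to the `create-image` CLI call, which has a `--no-reboot` flag for the "without rebooting" case mentioned above. Echoed as a dry run; the instance ID and name are placeholders.

      ```shell
      # Dry-run sketch: create an AMI from a (preferably stopped) instance.
      # Instance ID and image name are placeholders.
      INSTANCE_ID="i-0abcdef1234567890"

      echo aws ec2 create-image \
        --instance-id "$INSTANCE_ID" \
        --name "a4l-template-wordpress" \
        --description "Animals for Life template WordPress"

      # From a running instance you could append --no-reboot, but as noted
      # above that risks creating a storage-inconsistent image.
      ```

      Stopping the instance first, as the lesson does, sidesteps the reboot question entirely and guarantees consistent storage.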

      So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.

      Now that process will take some time.

      If we just scroll down, look under Elastic Block Store and click on Snapshots.

      You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.

      So that's the first step.

      So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.

      So that needs to complete first.

      Initially it's going to be in a pending state.

      We'll need to give that a few moments to complete.

      If we move to AMIs, we'll see that the AMI is also being created.

      It's in a pending state, waiting for that snapshot to complete.

      Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.

      And the time taken to create a snapshot can vary.

      The initial snapshot always takes much longer because it has to take that full copy of data.

      And obviously the size of the original volume, and how much data is being used, will influence how long a snapshot takes to create.

      So the more data, the larger the volume, the longer the snapshot will take.

      After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from the pending status.

      So let's just refresh it.

      After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.

      So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.

      We've shut down the EC2 instance and generated an AMI from that instance.

      And now we have this AMI in a state where we can use it to create additional instances.

      So we're going to do that.

      We're going to launch an additional instance using this AMI.

      While we're doing this, I want you to consider exactly how much quicker this process now is.

      So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.

      So right click on this AMI and select launch.

      Now this will step you through the launch process for an EC2 instance.

      You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.

      You'll be asked to select all of the normal configuration options.

      So first let's put a name for this instance.

      So we'll use the name "Instance from AMI".

      Then we'll scroll down.

      As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.

      Scroll down.

      You'll need to specify an instance type just as normal.

      We'll use a free tier eligible instance.

      This is likely to be t2.micro or t3.micro.

      Below that, go ahead and click and select "Proceed without a key pair (not recommended)".

      Scroll down.

      We'll need to enter some networking settings.

      So click on Edit next to Network Settings.

      Click in VPC and select A4L-VPC1.

      Click in Subnet and make sure that SN-Web-A is selected.

      Make sure the boxes below are both set to Enable for the auto-assign IP settings.

      Under Firewall, click on Select Existing Security Group.

      Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.

      And that will have some random characters at the end.

      That's absolutely fine.

      Select that.

      Scroll down.

      And notice that the storage is configured exactly the same as the instance which you generated this AMI from.

      Everything else looks good.

      So we can go ahead and click on Launch Instance.

      So this is launching an instance using our custom created AMI.

      So let's close down this dialog and we'll see the instance initially in a pending state.

      Remember, this is launching from our custom AMI.

      So it won't just have the base Amazon Linux 2 operating system.

      Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.

      So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.

      So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.

      So let's give this a few minutes to launch.

      Once it's launched, we'll select it, right-click, select Connect, and then connect into it using EC2 Instance Connect.

      Now there's one thing you will need to change: because we're using a custom AMI, AWS can't necessarily detect the correct username to use.

      And so you might see sometimes it says root.

      Just go ahead and change this to EC2-user and then go ahead and click Connect.

      And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.

      So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.

      If we go back to the AWS console and select Instances, make sure we still have the instance from AMI selected, and then locate its public IPv4 address.

      Don't use the open address link, because that will use HTTPS; instead, copy the IP address into your clipboard and open it in a new tab.

      Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.

      So we've massively reduced the ongoing effort required to launch an Animals for Life standard-build configuration.

      If we use this AMI to launch hundreds or thousands of instances, each and every time we're saving all the time and effort required to perform this configuration, and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.

      And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.

      Now that's everything that I wanted to cover in this demo lesson.

      You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.

      So let's clear up all of the infrastructure that we've used in this lesson.

      So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.

      So right click and then go to terminate instance.

      You'll need to confirm that.

      That will start the process of termination.

      Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.

      So we're going to need to leave this in place.

      So we're not going to delete the AMI or the snapshots created within this lesson.

      Verify that the instance has been terminated. Once it has, click on Services, go to CloudFormation, select the AMI demo stack, select Delete and then confirm the deletion.

      And that will remove all of the infrastructure that we've created within this demo lesson.

      And at this point, that's everything that I wanted you to do in this demo.

      So go ahead, complete this video.

      And when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.

      So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.

      Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?

      Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.

      What we're going to do together in this demo lesson is create an Amazon Linux AMI for the Animals for Life business, but one which includes some custom configuration and an install of WordPress, ready and waiting to be initially configured.

      This is a fairly common use case, so let's jump in and get started.

      Now in order to perform this demo you're going to need some infrastructure. Make sure you're logged into the general AWS account (the management account of the organization) and, as always, make sure that you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link, go ahead and click that link.

      This will open the Quick Create Stack screen, and it should automatically be populated with the AMI demo stack name. Just scroll down to the bottom, check the capabilities acknowledgement box and then click on Create Stack.

      We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.

      Okay, so that stack's now moved into a create complete state, and we're good to continue with the demo.

      Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon Machine Image, so also attached to this lesson is the lesson commands document, which contains all of those commands. Go ahead and open that document.

      Now you might recognize these as the same commands that you used when you were performing the manual WordPress installation, and that's because we're running the same manual installation process as part of setting up our Animals for Life AMI.

      You're going to need all of these commands, but as you've already experienced them in the previous demo lesson, I'm going to run through them a lot quicker this time.

      So go back to the AWS console. We need to move to the EC2 area of the console, so click on the Services drop-down, type EC2 into the search box and then open that in a new tab.

      Once you're there, go ahead and click on Running Instances and close down any dialogs about console changes; we want to maximize the amount of screen space that we have.

      We're going to connect to the A4L-Public EC2 instance. This is the instance that we're going to use to create our AMI, so we're going to set it up manually, exactly how we want it to be, and then use it to generate an AMI.

      So right-click, select Connect, and we're going to use EC2 Instance Connect to do the work within our browser. Make sure the username is EC2-user and then connect to this instance.

      Once connected, we're going to run through the commands to install WordPress really quickly. We start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press Enter.

      Now we're going to run through the next set of commands really quickly because you used them in the previous demo lesson. First we install the MariaDB server, Apache and the wget utility.

      While that's installing, copy all of the commands from step 3; these are the commands which enable and start Apache and MariaDB. Go ahead and paste all four of those in and press Enter. So now Apache and MariaDB are both set to start when the instance boots, as well as being started right now. I'll just clear the screen to make this easier to see.

      Next we set the DB root password; again, that's this command, using the contents of the variable that you set at the start.

      Next we download WordPress. Once it's downloaded, we move into the web root folder, extract the download, and copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root. Once we've done that, we remove the WordPress folder itself and then tidy up by deleting the download. I'm going to clear the screen.

      We copy the template configuration file into its final file name, wp-config.php. Then we replace the placeholders in that file: we start with the database name, using the variable that you set at the start; next the database user, which you also set at the start; and finally the database password. Then we set the ownership of all of these files to the Apache user and the Apache group. Clear the screen.

      Next we need to create the DB setup script, as demonstrated in the previous demo. So we run a collection of commands: the first enters the create database command; the next enters the create user command and sets that password; the next grants permissions on the database to that user; and the last flushes the permissions. Then we run that script using the MySQL command line interface, which runs all of those commands and performs all of those operations, and then we tidy up by deleting that file.

      Now at this point we've done the exact same process that we did in the previous demo: we've installed and set up WordPress. If everything's working okay, we can go back to the AWS console, click on Instances, select the running A4L-Public EC2 instance and copy down its IP address. Again, make sure you copy the IP address; don't click the open address link. Then open that in a new tab.

      If everything's working as expected, you should see the WordPress installation dialog. Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that when anyone uses this AMI, they're also greeted with this installation screen. So we're going to leave it at this point, not perform the installation, and instead go back to the EC2 instance.

      Now because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance. To do that, we're going to install an animal-themed utility called cowsay. I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what cowsay does, I'm going to run cowsay "oh hi". If all goes well, we see a cow using ASCII art saying the "oh hi" message that we just typed.

      So we're going to use this to create a message-of-the-day welcome shown when anyone connects to this EC2 instance. To do that, we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano to create /etc/update-motd.d/40-cow. This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance. We're going to copy in these two lines and then press Enter, which means that when anyone logs into the EC2 instance, they're going to get an animal-themed welcome. Use Ctrl+O to save that file and Ctrl+X to exit.

      Clear the screen to make it easier to see. We're going to make sure that the file we've just edited has the correct permissions, then force an update of the message of the day; this is what's displayed when anyone logs into this instance. And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance using this command.

      Just to illustrate how this works, I'm going to close down that tab and return to the EC2 console, and give it a few moments to restart. That should have rebooted by now, so we're going to select it, right-click, go to Connect, and again use EC2 Instance Connect. Assuming everything's working, when we connect to the instance we'll now see an animal-themed login banner. This is just a nice way to ensure that anyone logging into this instance understands (a) that it uses the Amazon Linux 2 AMI and (b) that it belongs to Animals for Life.

      So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, and we've customized the banner. Now we're going to use this as our template instance to create the AMI, which can then be used to launch other instances.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part two will continue immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
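      The message-of-the-day step can be sketched like this. It's a minimal sketch under assumptions: it writes to a temporary path rather than the real /etc/update-motd.d/40-cow, and the exact two lines used in the lesson's commands document may differ from the script body shown here.

      ```shell
      # Sketch of the animal-themed login banner file (assumes cowsay is
      # installed). Writing to a temp file for illustration; on the instance
      # the path is /etc/update-motd.d/40-cow and it must be executable.
      MOTD_FILE="$(mktemp)"
      printf '%s\n' '#!/bin/sh' \
        'cowsay "Amazon Linux 2 AMI - Animals for Life"' > "$MOTD_FILE"
      chmod 755 "$MOTD_FILE"
      cat "$MOTD_FILE"
      ```

      On the instance, running `sudo update-motd` afterwards regenerates the banner so the change takes effect at the next login.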

    1. Video summary [00:00:23] - [00:32:19]:

      This video explores the history of the republican school in France, its debates and open questions, highlighting its evolution since 1792 and its link with the Republic.

      Highlights:
      + [00:00:23] Introduction and context
        * Presentation of Jean-François Chanet
        * Goals of the association of history and geography teachers
        * Importance of the republican school
      + [00:01:01] History of the republican school
        * Link with the Republic since 1792
        * The Ferry laws and the unification of the state
        * The dark period under the Vichy regime
      + [00:02:21] Current debates and questions
        * Secularism and republican values
        * Adapting to contemporary challenges
        * Importance of preserving fundamental values
      + [00:05:01] Historical examples and anecdotes
        * Gaston Bonheur and his book
        * The role of schoolteachers and schools
        * The impact of wars on education
      + [00:10:00] Unity and separation
        * Separation of morality and religion
        * Separation of the sexes and of social classes
        * Competition between public and religious schools

      Video summary [00:32:22] - [01:03:04]:

      This part of the video explores the evolution of the republican school in France, focusing on the social and educational transformations since the 1960s.

      Highlights:
      + [00:32:22] Single-classroom schools
        * Longevity despite urbanization
        * Feminization of the teaching profession
        * Teacher mobility
      + [00:34:00] Transformation of schools
        * Mixing of the sexes
        * Separation by age
        * Increase in coeducational schools
      + [00:38:00] The problem of grade repetition
        * High repetition rates
        * Impact on the length of schooling
        * Learning difficulties
      + [00:42:00] School inequalities
        * Attendance at rural schools
        * Disparities between center and periphery
        * Collapse of the birth rate during the war
      + [00:50:00] Educational reforms
        * Political debates on the reforms
        * Importance of schoolteachers
        * Criticism of the inequalities perpetuated by the school

      Video summary [01:03:07] - [01:34:05]:

      This part of the video explores the history of and debates around the republican school in France, focusing on educational reforms and the social challenges they faced.

      Highlights:
      + [01:03:07] Debates on educational reforms
        * Importance of political consensus
        * Historical opposition to education laws
        * Complexity of major reforms
      + [01:05:01] Literary and social criticism
        * Zola and the Dreyfus affair
        * Jules Romains and education
        * Criticism of school inequalities
      + [01:09:02] Evolution of secondary education
        * Accessibility and inequalities
        * Criticism from practitioners
        * Reforms and resistance
      + [01:17:01] The concept of the "école unique"
        * Post-war ideas
        * Obstacles and resistance
        * Persistent social differences
      + [01:25:03] Jean Zay's reforms
        * Extension of compulsory schooling
        * Introduction of career guidance
        * Criticism and impact of the reforms

      Video summary [01:34:08] - [01:37:07]:

      This part of the video explores the challenges and crises of republican education in France, focusing on Charles Péguy's reflections on education and society.

      Highlights:
      + [01:34:08] Propaganda and emancipation
        * Spreading ideas to emancipate minds
        * The republican problem of the school
        * Opposition between the mystical and the political
      + [01:34:50] Charles Péguy and education
        * Péguy, an orphan and a brilliant pupil
        * His exceptional school career
        * Died in the war in 1914
      + [01:35:28] Crises of education
        * Crises of life and crises of teaching
        * Education reflects society
        * Modern society and its educational challenges

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So this is the folder containing the WordPress installation files.

      Now there's one particular file that's really important, and that's the configuration file.

      So there's a file called wp-config-sample.php, and this is the file that contains a template of the configuration items for WordPress.

      So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.

      So we're going to create a copy of this file with the correct name.

      And to do that, we run this command.

      So we're copying the template or the sample file to its real file name, so wp-config.php.

      And this is the name that WordPress expects when it initially loads its configuration information.

      So run that command, and that now means that we have a live config file.

      Now this command isn't in the instructions, but if I just take a moment to open up this file, you don't need to do this.

      I'm just demonstrating what's in this file for your benefit.

      But if I run sudo nano wp-config.php, this is how the file looks.

      So this has got all the configuration information in.

      So it stores the database name, the database user, the database host, and lots of other information.

      Now notice how it has some placeholders.

      So this is where we would need to replace the placeholders with the actual configuration information.

      So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.

      Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.

      We're going to use the Linux utility sed, or S-E-D.

      And this is a utility which can perform a search and replace within a text file.

      It's actually much more complex and capable than that.

      It can perform many different manipulation operations.

      But for this demonstration, we're going to use it as a simple search and replace.

      Now we're going to do this a number of times.

      First, we're going to run this command, which is going to replace this placeholder.

      Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.

      We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.

      So this is going to replace the placeholder with our actual database name.

      So I'm going to enter that so you can do the same.

      We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.

      So use that command as well.

      And then lastly, it will do the same for the database password.

      So type or copy and paste this command and press enter.

      And that now means that this wp-config has the actual configuration information inside.

      And just to demonstrate that, you don't need to do this part.

      I'll just do it to demonstrate.

      If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.

      So I'm going to control X out of that and then clear the screen.
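      The three sed commands all follow the same search-and-replace pattern. Here's a minimal local sketch using a throwaway file; the variable value is illustrative, not the course's real database name.

      ```shell
      # Local sketch of the placeholder replacement performed on wp-config.php.
      # Uses a throwaway file and an example value.
      DBName="a4lwordpress"
      printf '%s\n' "define( 'DB_NAME', 'database_name_here' );" \
        > /tmp/wp-config-demo.php
      # Replace the WordPress placeholder with the variable's contents
      sed -i "s/database_name_here/${DBName}/" /tmp/wp-config-demo.php
      cat /tmp/wp-config-demo.php
      ```

      The same pattern, with different placeholder strings and variables, handles the database user and password lines.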

      And that concludes the configuration for the WordPress application.

      So now it's ready.

      Now it knows how to communicate with the database.

      What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.

      And to do that, we use this command.

      So we're using the chown command to set the ownership of all of the files in this folder, and any subfolders, to the Apache user and the Apache group.

      And the Apache user and Apache group belong to the web server.

      So this just makes sure that the web server is able to access and control all of the files in the web root folder.

      So run that command and press enter.

      And that concludes the installation part of the WordPress application.

      There's one final thing that we need to do and that's to create the database that WordPress will use.

      So I'm going to clear the screen to make it easy to see.

      Now what we're going to do in order to configure the database is we're going to make a database setup script.

      We're going to put this script inside the /tmp folder and we're going to call it DB.setup.

      So what we need to do is enter the commands into this file that will create the database.

      After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.

      Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.

      So we're going to run a number of commands.

      These are all in the lessons commands document.

      The first one is this.

      So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.

      Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.

      So that's /tmp/DB.setup.

      So run that and that command is going to create the WordPress database.

      Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.

      This is going to create our WordPress database user.

      It's going to set its password and then it's going to append this text to the DB setup file that we're creating.

      Now all of these are actually database commands that we're going to execute within the MariaDB database.

      So enter that to add that line to DB.setup.

      Then we have another line which uses the same architecture as the ones above it.

      It echoes the text.

      It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.

      And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.

      So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.

      So run cat /tmp/DB.setup.

      So as you'll see it's replaced all of these variable names with the actual contents.

      So this is what the contents of this script actually looks like.
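      The four script-building commands above can be sketched like this; the variable values here are illustrative placeholders standing in for the ones set at the start of the demo.

      ```shell
      # Sketch of building the DB setup script with variable substitution.
      # Values are illustrative placeholders, not the course's credentials.
      DBName="a4lwordpress"
      DBUser="a4lwordpressuser"
      DBPassword="example-password"

      echo "CREATE DATABASE ${DBName};" > /tmp/db.setup
      echo "CREATE USER '${DBUser}'@'localhost' IDENTIFIED BY '${DBPassword}';" >> /tmp/db.setup
      echo "GRANT ALL ON ${DBName}.* TO '${DBUser}'@'localhost';" >> /tmp/db.setup
      echo "FLUSH PRIVILEGES;" >> /tmp/db.setup
      cat /tmp/db.setup
      ```

      On the instance, the resulting statements are then fed to MariaDB with something along the lines of `mysql -u root -p"${DBRootPassword}" < /tmp/db.setup`, as described next.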

      So these are commands which will be run by the MariaDB database platform.

      To run those commands we use this.

      So this is the MySQL command line interface.

      So we're using MySQL to connect to the MariaDB database server.

      We're using the username of root.

      We're passing in the password and then using the contents of the DB root password variable.

      And then once we authenticate to the database, we're passing in the contents of our DB.setup script.

      And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.

      So go ahead and press enter.

      That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.

      And then lastly, just to keep things secure, because we don't want to leave files lying around on the file system with authentication information inside.

      We're just going to run this command to delete this DB.setup file.
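      The run-and-tidy-up steps look roughly like this; the root password is a hypothetical stand-in, the placeholder file just makes the sketch self-contained, and the guard means it degrades gracefully on a machine without the mysql client installed.

      ```shell
      DBRootPassword='example-root-pw'   # hypothetical; the lesson sets this earlier

      # Placeholder so this sketch stands alone; in the lesson, /tmp/DB.setup
      # already contains the SQL built up in the previous steps.
      echo 'FLUSH PRIVILEGES;' > /tmp/DB.setup

      # Authenticate to MariaDB as root and run every line of the script.
      if command -v mysql >/dev/null 2>&1; then
        mysql -u root --password="$DBRootPassword" < /tmp/DB.setup || true
      fi

      # Remove the script so no credentials are left on the filesystem
      # (the lesson uses sudo rm /tmp/DB.setup).
      rm -f /tmp/DB.setup
      ```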

      Okay, so that concludes the setup process for WordPress.

      It's been a fairly long intensive process but that now means that we have an installation of WordPress on this EC2 instance, a database which has been installed and configured.

      So now what we can do is to go back to the AWS console, click on instances.

      We need to select the A4L-PublicEC2 and then we need to locate its IP address.

      Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.

      Instead, just copy the IP address into your clipboard and then open that in a new tab.

      If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.

      So pick English, United States for the language.

      For the blog title, just put all the cats and then admin as the username.

      You can accept the default strong password.

      Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.

      It doesn't have to be a correct one.

      So I normally use test@test.com and then go ahead and click on install WordPress.

      You should see a success dialog.

      Go ahead and click on login.

      Username will be admin, the password that you just copied into your clipboard and then click on login.

      And there you go.

      We've got a working WordPress installation.

      We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.

      And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.

      So this has been a slow process.

      It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.

      Can you imagine doing this twice?

      What about 10 times?

      What about a hundred times?

      It gets pretty annoying pretty quickly.

      In reality, this is never done manually.

      We use automation or infrastructure as code systems such as CloudFormation.

      And as we move through the course, you're going to get experience of using all of these different methods.

      Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.

      And these are really awesome to use.

      And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.

      For now though, that is the end of this demo lesson.

      Now what we're going to do is to clear up our account.

      So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.

      To do that, just move back to the AWS console.

      If you still have the CloudFormation tab open, move back to that tab; otherwise click on Services and then click on CloudFormation.

      If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select Delete, and then confirm that deletion.

      And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.

      So from this point onward in the course, we're going to start using automation.

      Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.

      So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.

      So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.

      At this point though, that's everything I wanted to cover.

      So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.

    1. substantive criminal law (materieel strafrecht)

      Wetboek van Strafrecht (Sr)

    2. procedural criminal law (formeel strafrecht)

      Wetboek van Strafvordering (Sv)

    1. Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.

      Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.

      The problem is that most courses don't take this any further.

      But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.

      What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.

      We're going to be doing this so you can get the experience of how not to do things.

      My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.

      As we move through the course we can compare this to various different automated ways of installing software within AWS.

      So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.

      By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.

      It's going to be architecturally monolithic with everything running on the one single instance.

      In this case that means both the application and the database.

      The design is fairly straightforward.

      It's just the Animals for Life VPC.

      We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.

      So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.

      At this point it's time to get started and implement this architecture.

      So let's go ahead and switch over to our AWS console.

      To get started with this demo lesson you're going to need to do a few preparation steps.

      First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.

      So go ahead and open the one-click deployment link that's attached to this lesson.

      That link is going to take you to the Quick Create Stack screen.

      Everything should be pre-populated.

      The stack name should be WordPress.

      All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.

      And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.

      So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.

      Also attached to this lesson is a Lessons Command document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.

      So go ahead and open that as well.

      So that should look something like this and these are all of the commands that we're going to be using.

      So these are the commands that perform a manual WordPress installation.

      Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.

      So click on the Services drop-down and then locate EC2 in this All Services part of the screen.

      If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.

      And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.

      Go ahead and right-click on this instance.

      This is the instance we'll be installing WordPress within.

      So right-click, select Connect.

      We're going to be using our browser to connect to this instance, so we'll be using Instance Connect. Just verify that the username is ec2-user and then go ahead and connect to this instance.

      Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.

      It's not just about saving time and effort.

      It's also about error reduction and the ability to keep things consistent.

      Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.

      So we're going to create four variables.

      One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.

      So let's start off by using the pre-populated values from the Lesson Commands document.

      So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.

      So for example, dbname and press Enter and that will show us the value stored within that variable.

      So now we can use these at later points of the installation.
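      As a sketch, the four variables and the echo check look like this; the values are hypothetical stand-ins for the ones in the Lesson Commands document.

      ```shell
      # Hypothetical values standing in for the Lesson Commands document's entries.
      DBName='a4lwordpress'
      DBUser='a4lwordpress'
      DBPassword='example-user-pw'
      DBRootPassword='example-root-pw'

      # echo with a dollar prefix prints the contents of a variable.
      echo $DBName

      # Save a copy of two of the values so they can be checked later (demo only).
      printf '%s\n' "$DBName" "$DBUser" > /tmp/demo-vars.txt
      ```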

      So at this point I'm going to clear the screen to keep things easy to see, and stage two of this installation process is to install some system software.

      So there are a few things that we need to install in order to allow a WordPress installation.

      We'll install those using the DNF package manager.

      We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the MariaDB database server, the Apache web server which is httpd, and then a utility called wget which we're going to use to download further components of the installation.

      So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.

      They're done so I'll clear the screen to keep this easy to read.

      Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.

      So to do that we need to enable and start those services.

      So enabling and starting means that both of the services are started right now and that they'll both start automatically if the machine reboots.

      So first we'll use this command.

      So we're using admin privileges again, systemctl which allows us to start and stop system processes and then we use enable and then HTTPD which is the web server.

      So type and press enter and that ensures that the web server is enabled.

      We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.

      So type or copy and paste and press enter.

      So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.

      So we need to use the same structure of command but instead of enable we need to start both of these processes.

      So first the web server and then the database server.

      So that means the EC2 instance now has a running web server and database server, both of which are required for WordPress.

      So I'll clear the screen to keep this easy to read.

      Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.

      So this is the username and password that will be used to perform all of the initial configuration of the database server.

      Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.

      So we're using the DB root password variable that we configured right at the start.

      So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
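      The command shape is roughly the following; treat the use of mysqladmin and the variable value as assumptions rather than the lesson's verbatim command, and note the guard lets the sketch run harmlessly on a machine without MariaDB or sudo.

      ```shell
      DBRootPassword='example-root-pw'   # hypothetical; set at the start of the demo

      # Set the database root user's password (assumption: mysqladmin, which
      # ships with the MariaDB packages, is the tool used here).
      if command -v mysqladmin >/dev/null 2>&1 && command -v sudo >/dev/null 2>&1; then
        sudo mysqladmin -u root password "$DBRootPassword" || true  # ignore errors if no server is running
      else
        echo 'mysqladmin/sudo not available; command shown for illustration' > /tmp/mysqladmin.note
      fi
      ```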

      The next step which is step 5 is to install the WordPress application files.

      Now to do that we need to install these files inside what's known as the web root.

      So whenever you browse to a web server, either using an IP address or a DNS name, if you don't specify a path (so if you just use the server name, for example netflix.com) then it loads those initial files from a folder known as the web root.

      Now on this particular server the web root is stored in /var/www/html, so we need to download WordPress into that folder.

      Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.

      So we're giving it admin privileges and we're using Wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.

      So /var/www/html.

      So go ahead and copy and paste or type that in and press enter.

      That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.

      Next we need to move into that folder, so cd space /var/www/html, and press enter.

      We need to use a Linux utility called tar to extract that file.

      So sudo and then tar and then the command line options -zxvf and then the name of the file, so latest.tar.gz. So copy and paste or type that in and press enter and that will extract the WordPress download into this folder.

      So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.

      Now we actually don't want them inside a WordPress folder.

      We want them directly inside the web root.

      So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.

      So it's going to copy everything inside WordPress into the current working directory which is the web root directory.

      So enter that and that copies all of those files.

      And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.

      And then lastly for the installation part we need to tidy up the mess that we've made.

      So we need to delete this WordPress folder and the archive file that we downloaded.

      So to do that we'll run an rm -r and then WordPress to delete that folder.

      And then we'll delete the download with sudo rm and then a space and then the name of the file.

      So latest.tar.gz.

      And that means that we have a nice clean folder.
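      The extract, flatten, and tidy-up sequence can be simulated end to end in a scratch folder; this builds a tiny stand-in archive locally instead of downloading the real latest.tar.gz, so every path and file name here is hypothetical.

      ```shell
      # Build a tiny stand-in for latest.tar.gz (one placeholder file in wordpress/).
      demo=/tmp/webroot-demo
      rm -rf "$demo" && mkdir -p "$demo/src/wordpress"
      echo '<?php // placeholder' > "$demo/src/wordpress/index.php"
      tar -C "$demo/src" -zcf "$demo/latest.tar.gz" wordpress

      cd "$demo"                # stands in for the web root, /var/www/html
      tar -zxvf latest.tar.gz   # extracts everything into a wordpress/ subfolder
      cp -rvf wordpress/. .     # "." is the current folder: copy the contents up a level
      rm -r wordpress           # tidy up: remove the now-redundant folder...
      rm latest.tar.gz          # ...and the downloaded archive
      ls -la                    # index.php now sits directly in the "web root"
      ```

      The trailing /. on the cp source is what makes it copy the folder's contents, including hidden files, rather than the folder itself.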

      So I'll clear the screen to make it easy to see.

      And then I'll just do another listing.

      Okay so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. the "factuality" of ChatGPT or, more prosaically, penalizes "hallucinations" more heavily

      I didn't understand this

    2. alignment

      a key concept

    3. cost but also, above all, risks

      What is the cost, and what is also the biggest risk?

    4. ChatGPT

      A text generator built by OpenAI, capable of interacting with humans.

    1. Editors Assessment:

      PhysiCell is an open source multicellular systems simulator for studying many interacting cells in dynamic tissue microenvironments. As part of the PhysiCell ecosystem of tools and modules, this paper presents a PhysiCell addon, PhysiMeSS (MicroEnvironment Structures Simulation), which allows the user to accurately represent the extracellular matrix (ECM) as a network of fibres. This can specify rod-shaped microenvironment elements such as the matrix fibres (e.g. collagen) of the ECM, giving the PhysiCell user the ability to investigate physical interactions with cells and other fibres. Reviewers asked for additional clarification on a number of features, and the paper now makes clear that future releases will provide full 3D compatibility and will include work on fibrogenesis, i.e. the creation of new ECM fibres by cells.

      This evaluation refers to version 1 of the preprint

      Abstract: The extracellular matrix is a complex assembly of macro-molecules, such as collagen fibres, which provides structural support for surrounding cells. In the context of cancer metastasis, it represents a barrier for the cells, which the migrating cells need to degrade in order to leave the primary tumor and invade further tissues. Agent-based frameworks, such as PhysiCell, are often used to represent the spatial dynamics of tumor evolution. However, typically they only implement cells as agents, which are represented by either a circle (2D) or a sphere (3D). In order to accurately represent the extracellular matrix as a network of fibres, we require a new type of agent represented by a segment (2D) or a cylinder (3D). In this article, we present PhysiMeSS, an addon of PhysiCell, which introduces a new type of agent to describe fibres and their physical interactions with cells and other fibres. The PhysiMeSS implementation is publicly available at https://github.com/PhysiMeSS/PhysiMeSS, as well as in the official PhysiCell repository. We also provide simple examples to describe the extended possibilities of this new framework. We hope that this tool will serve to tackle important biological questions such as diseases linked to dysregulation of the extracellular matrix, or the processes leading to cancer metastasis.

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.136), and has published the reviews under the same license. It is also part of GigaByte’s PhysiCell Ecosystem series for tools that utilise or build upon the PhysiCell platform: https://doi.org/10.46471/GIGABYTE_SERIES_0003 These reviews are as follows.

      Reviewer 1. Erika Tsingos

      One important aspect that the authors need to be aware of and mention explicitly is that their algorithm for fiber set-up leads to differences in fiber concentration and orientation at the boundary, because fibers that are not wholly contained in the simulation box are discarded. The effect of this choice can be seen upon close inspection of Figure 2: In the left panel, fibers align tangentially to the boundary, so locally the orientation is not isotropic. Similarly, in Figure 2 middle and right panels, the left and right boundaries have lower local fiber concentration. This issue could potentially affect the outcome of a simulation, so it's important that readers are made aware so that if necessary they can address this with a modified algorithm. ----- Minor comments: In the abstract, the phrasing implies agent-based frameworks are only used for tumour evolution. I would rephrase such that it is clear that tumour evolution is one example among many possible applications. I suggest adding a dash to improve readability in the following sentence in the introduction: "However, we note that the applications of PhysiMeSS stretch beyond those wanting to model the ECM -- as the new cylindrical/rod-shaped agents could be used to model blood vessel segments or indeed create obstacles within the domain." In the implementation section, add a short sentence to clarify if PhysiMeSS is "backwards compatible" with older PhysiCell models that do not use the fiber agent. Notation in equations: A single vertical line is absolute value, and two vertical lines is Euclidean norm? The explanation of Equation 1 implies that the threshold v_{max} should limit the parallel force, but the text does not explicitly say if ||v|| is restricted to be less or equal to v_{max}. Is that the case? In Equation 2, I don't see the need to square the terms in parenthesis. If |v*l_f| is an absolute value it is always positive. 
Since l_f is normalized the value of the dot product is only between 0 and the magnitude of v. Am I missing something? Are p_x and p_y in the moment arm magnitude coordinates with respect to the fiber center? Table 2: It would be helpful to have a separate column with the corresponding symbols used throughout the text and equations. Figure 5/6: Missing crosslinker color legend. ----- Typos/grammar: "As an aside, an not surprisingly," --> As an aside, and not surprisingly, "This may either be because as a cell tries to migrate through the domain fibres which act as obstacles in the cell’s path," --> remove the word "which"

      Reviewer 2. Jinseok Park

      Noel et al. introduce PhysiMess - a new PhysiCell Addon for ECM remodeling. This new addon is a powerful tool to simulate ECM remodeling and has the potential to be applied to mechanobiology research, which makes my enthusiasm high. I would like to give a few suggestions. 1) Basically, it is an addon of PhysiCell. So, I suggest describing PhysiCell and how to add the addon for readers who are not familiar with these tools. Also, screen captures of tool manipulation would be very helpful. 2) Figure 2 and 3 exhibit the outcome of the addon showing ECM remodeling. I would suggest to show actual ECM images modeled by the addon. 3) The equations reflect four interactions, and in my understanding, the authors describe cell-fibre, fiber-cell, and fiber-fiber interactions. I suggest generating an example corresponding to each interaction's modulation and explaining how the add-on results explain the physiological phenomena. For instance, focal adhesion may be a key modulator of cell-fibre or fiber-cell interaction, presumably, alpha or beta fiber. I would demonstrate how the different parameters generate different results and explain the physiological situation modeled by the results. 4) Similarly, Figure 5 and Figure 6 only show one example and no comparison with other conditions. For example, It would be better to exhibit no pressure/pressure conditions. It may help readers estimate how the pressure impacts cell proliferation.

      Reviewer 3. Simon Syga

      The presented paper "PhysiMeSS - A New PhysiCell Addon for Extracellular Matrix Modelling" is a useful extension to the popular simulation framework PhysiCell. It enables the simulation of cell populations interacting with the extracellular matrix, which is represented by a set of line segments (2D) or cylinders (3D). These represent a new kind of agent in the simulation framework. The paper outlines the basic implementation, properties and interactions of these agents. I recommend publication after a small set of minor issues has been addressed. Please refer to the attached marked-up PDF file for these minor issues and suggestions. https://gigabyte-review.rivervalleytechnologies.com/download-api-file?ZmlsZV9wYXRoPXVwbG9hZHMvZ3gvVFIvNTUwL2d4LVRSLTE3MTk5NDYwNjlfU1kucGRm

    1. Welcome back and in this video we're going to interact with instant store volumes.

      Now this part of the demo does come at a cost.

      This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.

      The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.

      If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.

      So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.

      So I'm going to click on launch instances.

      We're going to name the instance, Instance Store Test so put that in the name box.

      Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.

      Scroll down and then in the instance type box click and we need to find a different type of instance.

      This is going to be one that supports instance store volumes.

      So scroll down and we're looking for m5dn.large.

      This is a type of instance which includes one instance store volume.

      So select that, then scroll down a little bit more, and under key pair click in the box and select 'Proceed without a key pair (not recommended)'.

      Scroll down again and under network settings click on edit.

      Click in the VPC drop down and select a4l-vpc1.

      Under subnet make sure sn-web-a is selected.

      Make sure enabled is selected for both of the auto assign public IP drop downs.

      Then we're going to select an existing security group, so click the drop down and select the EBS demo instance security group.

      It will have some random characters after it, but that's okay.

      Then scroll down and under storage we're going to leave all of the defaults.

      What you are able to do though is to click on show details next to instance store volumes.

      This will show you the instance store volumes which are included with this instance.

      You can see that we have one instance store volume it's 75 GB in size and it has a slightly different device name.

      So /dev/nvme0n1.

      Now all of that looks good so we're just going to go ahead and click on launch instance.

      Then click on view all instances and initially it will be in a pending state and eventually it will move into a running state.

      Then we should probably wait for the status check column to change from initializing to 2 out of 2.

      Go ahead and pause the video and wait for this status check to change to be fully green.

      It should show 2 out of 2 status checks.

      That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.

      Before we do though, just go ahead and select the instance and note the instance's public IPv4 address.

      Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.

      So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.

      So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.

      We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.

      Now many of these commands that we'll be using should by now be familiar.

      Just refer back to the lessons command document if you're unsure because we'll be using all of the same commands.

      First we need to list all of the block devices which are attached to this instance and we can do that with lsblk.

      This time it looks a little bit different because we're using instance store rather than EBS additional volumes.

      So in this particular case I want you to look for the 8G volume so this is the root volume.

      This represents the boot or root volume of the instance.

      Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.

      Now to check that we can verify whether there's a file system on this instance store volume.

      If we run this command, so the same command we've used previously, so sudo file -s and then the ID of this volume, so /dev/nvme1n1, you'll see it reports data.

      And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.

      We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.

      So press enter to create a file system on this raw block device, this instance store volume, and then we can run this command again to verify that it now has a file system.
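      You can see the same before-and-after behaviour of file -s without an instance store volume by using a plain file as a stand-in for the raw device; mkfs.ext4 is used here as an assumption because it can build a filesystem inside a regular file without root, whereas the lesson formats the real device in place.

      ```shell
      # A 16 MiB zero-filled file stands in for the raw /dev/nvme1n1 device.
      dd if=/dev/zero of=/tmp/fakevol bs=1M count=16 status=none

      # No filesystem yet, so "file -s" just reports "data".
      if command -v file >/dev/null 2>&1; then
        file -s /tmp/fakevol | tee /tmp/fakevol.before
      fi

      # Create a filesystem on the raw space, then check again.
      if command -v mkfs.ext4 >/dev/null 2>&1; then
        mkfs.ext4 -q -F /tmp/fakevol
        if command -v file >/dev/null 2>&1; then
          file -s /tmp/fakevol   # now reports filesystem metadata, not "data"
        fi
      fi
      ```

      The first check printing "data" is exactly the signal the lesson relies on to say there's no filesystem present yet.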

      To mount it we can follow the same process that we did in the earlier stages of this demo series.

      We'll need to create a directory for this volume to be mounted into; this time we'll call it /instancestore.

      So create that folder and then we're going to mount this device into that folder, so sudo mount, then the device ID and then the mount point, the folder that we previously created.

      So press enter and that means that this block device this instance store volume is now mounted into this folder.

      And if we run a df space -k and press enter you can see that it's now mounted.

      Now we're going to move into that folder by typing cd /instancestore, and to keep things efficient we're going to create a file called instancestore.txt.

      And rather than using an editor we'll just use sudo touch and then the name of the file and this will create an empty file.

      If we do an ls -la and press enter you can see that that file exists.
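      In sketch form, using a scratch directory in place of the /instancestore mount point (the folder and file names here are illustrative):

      ```shell
      mkdir -p /tmp/instancestore-demo   # stands in for the mounted /instancestore
      cd /tmp/instancestore-demo
      touch instancestore.txt            # lesson uses sudo touch; creates an empty file
      ls -la                             # the new empty file shows up in the listing
      ```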

      So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.

      Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.

      Restarting is different than stop and start.

      So to do that we're going to close this tab, move back to the EC2 console, click on Instances, right-click on Instance Store Test, select Reboot Instance and then confirm that.

      Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.

      Then right click and select connect.

      Using instance connect go ahead and connect back to the instance.

      And again if it appears to hang at this point then you can just wait for a few moments and then connect again.

      But in this case I've left it long enough and I'm connected back into the instance.

      Now once I'm back in the instance if I run a df space -k and press enter note how that file system is not mounted after the reboot.

      Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.

      But what we can do is do an lsblk again to list the block devices.

      We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.

      To do that we run this command.

      So it's mounting the same volume ID the same device ID into the same folder.

      So go ahead and run that command and press enter.

      Then if we use cd space forward slash and then instance store press enter and then do an LS space -la we can see that this file is still there.

      Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.

      Restarting an EC2 instance does not move the instance from one EC2 host to another.

      And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.
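
      As a recap, the command sequence from this part of the demo looks something like this. Note that /dev/nvme1n1 is a placeholder device ID; use whatever lsblk reports for the instance store volume on your own instance.

```shell
# Recap sketch of the steps above; /dev/nvme1n1 is a placeholder --
# use the device ID that lsblk shows for your instance store volume.
lsblk                                    # list block devices
sudo mkdir -p /instancestore             # create the mount point
sudo mount /dev/nvme1n1 /instancestore   # manual mount (not persistent)
cd /instancestore
sudo touch instancestore.txt             # create an empty marker file
ls -la                                   # confirm it exists
# ...reboot the instance, reconnect, then:
lsblk                                    # the volume is still attached
sudo mount /dev/nvme1n1 /instancestore   # remount it manually
ls -la /instancestore                    # the marker file survives a reboot
```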

      Now we're going to do something different though.

      Close this tab down.

      Move back to instances.

      Again pay special attention to this IP address.

      Now we're going to right click and stop the instance.

      So go ahead and do that and confirm it if you're doing this in your own environment.

      Watch this public IP v4 address really carefully.

      We'll need to wait for the instance to move into a stopped state, which it has, and if we select the instance, note how the public IPv4 address has been unallocated.

      So this instance is now not running on an EC2 host.

      Let's right click.

      Go to start instance and start it up again.

      We'll need to give that a few moments again.

      It'll move into a running state, but notice how the public IPv4 address has changed.

      This is a good indication that the instance has moved from one EC2 host to another.

      So let's give this instance a few moments to start up.

      And once it has right click, select connect and then go ahead and connect to the instance using instance connect.

      Once connected go ahead and run an LS BLK and press enter and you'll see it appears to have the same instance store volume attached to this instance.

      It's using the same ID and it's the same size.

      But let's go ahead and verify the contents of this device using this command.

      So sudo file space -s space and then the device ID of the instance store volume.

      Press enter, and note how it shows just data.

      So even though we created a file system in the previous step after we've stopped and started the instance, it appears this instance store volume has no data.
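
      You can reproduce that "data" output on any Linux machine: file -s reports just "data" when it can't recognise a file system or any other known format. This sketch uses an ordinary zero-filled file as a stand-in for the wiped block device.

```shell
# "file -s" prints "data" when no file system or known format is detected.
# A zero-filled regular file stands in here for the wiped block device.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1024 count=16 2>/dev/null
file -s "$tmp"
```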

      Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.

      But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.

      And that means that it has access to completely different instance store volumes than it did on that previous host.

      It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.

      When you stop and start an EC2 instance, or when anything else causes the instance to move from one host to another, all of that data is lost.

      So instance store volumes are ephemeral.

      They're not persistent and you can't rely on them to keep your data safe.

      And it's really important that you understand that distinction.

      If you're doing the developer or sysops streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.

      The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.

      EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.

      Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.

      So let's go ahead and tidy up the infrastructure.

      Close down this tab, click on instances.

      If you follow this last part of the demo in your own environment, go ahead and right click on the instance store test instance and terminate that instance.

      That will delete it along with any associated resources.

      We'll need to wait for this instance to move into the terminated state.

      So give that a few moments.

      Once that's terminated, go ahead and click on services and then move back to the cloud formation console.

      You'll see the stack that you created using the one click deploy at the start of this lesson.

      Go ahead and select that stack, click on delete and then delete stack.

      And that's going to put the account back in the same state as it was at the start of this lesson.

      So it will remove all of the resources that have been created.

      And at that point, that's the end of this demo series.

      So what did you learn?

      You learned that EBS volumes are created within one specific availability zone.

      EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.

      You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.

      And then you can use snapshots to create volumes in different availability zones.

      I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.

      You've also seen in this part of the demo series that instance store volumes can be used.

      They are included with many instance types, but if the instance moves between EC2 hosts, so if an instance is stopped and then started, or if an EC2 host has hardware problems, then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.

      So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.

      At this point though, thanks for watching and doing this demo.

      I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.

    1. positio

      Lovely!

    2. nd the front paws and backside of our dog

      Great!

    3. It is relatively easy to move from this position, especially for a 4 year old

      As soon as he lets go of the dog, he will become much less stable.

    4. lung

      lunge?

    5. internal rotation in the right leg

      Looks like slight external rotation of the left and possible internal rotation of the right. Hard to tell from this angle.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      We just need to give this a brief moment to perform that reboot.

      So just wait a couple of moments and once you have right click again, select Connect.

      We're going to use EC2 instance connect again.

      Make sure the user's correct and then click on Connect.

      Now, if it doesn't immediately connect you to the instance, if it appears to have frozen for a couple of seconds, that's fine.

      It just means that the instance hasn't completed its restart.

      Wait for a brief while longer and then attempt another connect.

      This time you should be connected back to the instance and now we need to verify whether we can still see our volume attached to this instance.

      So do a DF space -k and press Enter and you'll note that you can't see the file system.

      That's because before we rebooted this instance, we used the mount command to manually mount the file system on our EBS volume into the EBS test folder.

      Now that's a manual process.

      It means that while we could interact with that before the reboot, it doesn't automatically mount that file system when the instance restarts.

      To do that, we need to configure it to auto-mount when the instance starts up.

      So to do that, we need to get the unique ID of the EBS volume, which is attached to this instance.

      And to get that, we run a sudo space blkid.

      Now press Enter and that's going to list the unique identifier of all of the volumes attached to this instance.

      You'll see the boot volume listed as /dev/xvda1 and the EBS volume that we've just attached listed as /dev/xvdf.

      So we need the unique ID of the volume that we just added.

      So that's the one next to xvdf.

      So go ahead and select this unique identifier.

      You'll need to make sure that you select everything between the speech marks and then copy that into your clipboard.

      Next, we need to edit the FSTAB file, which controls which file systems are mounted by default.

      So we're going to run a sudo and then space nano, which is our editor, and then a space, and then forward slash ETC, which is the configuration directory on Linux, another forward slash and then FSTAB and press Enter.

      And this is the configuration file for which file systems are mounted by our instance.

      And we're going to add a similar line.

      So first we need to use uuid, which is the unique identifier, and then the equal symbol.

      And then we need to paste in that unique ID that we just copied to our clipboard.

      Once that's pasted in, press Space.

      This is the ID of the EBS volume, so the unique ID.

      Next, we need to provide the place where we want that volume to be mounted.

      And that's the folder we previously created, which is forward slash EBS test.

      Then a space, we need to tell the OS which file system is used, which is xfs, and then a space.

      And then we need to give it some options.

      You don't need to understand what these do in detail.

      We're going to use defaults, all one word, and then a comma, and then no fail.

      So once you've entered all of that, press Ctrl+O to save that file, and Enter, and then Ctrl+X to exit.
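
      The finished line in /etc/fstab should look something like this. The UUID shown here is a placeholder; use the value you copied from the blkid output, and note the mount point is the /ebstest folder created earlier.

```shell
# /etc/fstab entry (UUID is a placeholder -- use the value from "sudo blkid")
UUID=aebf131c-6957-451e-8d34-59f3e37b8c8f  /ebstest  xfs  defaults,nofail
```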

      Now this will be mounted automatically when the instance starts up, but we can force that process by typing sudo space mount space -a.

      And this will perform a mount of all of the volumes listed in the FS tab file.

      So go ahead and press Enter.

      Now if we do a df space-k and press Enter, you'll see that our EBS volume once again is mounted within the EBS test folder.

      So I'm going to clear the screen, then I'm going to move into that folder, press Enter, and then do an ls space-la, and you'll see that our amazing test file still exists within this folder.

      And that shows that the data on this file system is persistent, and it's available even after we reboot this EC2 instance, and that's different than instance store volumes, which I'll be demonstrating later on.

      At this point, we're going to shut down this instance because we won't be needing it anymore.

      So close down this tab, click on Instances, right-click on instance one-AZA, and then select Stop Instance.

      You'll need to confirm it, refresh that and wait for it to move into a stopped state.

      Once it has stopped, go down and click on Volumes, select the EBS test volume, right-click and detach it.

      We're going to detach this volume from the instance that we've just stopped.

      You'll need to confirm that, and that will begin the process and it will detach that volume from the instance, and this demonstrates how EBS volumes are completely separate from EC2 instances.

      You can detach them and then attach them to other instances, keeping the data that's on that volume.

      Just keep refreshing.

      We need to wait for that to move into an available state, and once it has, we're going to right-click, select Attach Volume, click inside the instance box, and this time, we're going to select instance two-AZA.

      It should be the only one listed now in a running state.

      So select that and click on Attach.

      Just refresh that and wait for that to move into an in-use state, which it is, then move back to instances, and we're going to connect to the instance that we just attached that volume to.

      So select instance two-AZA, right-click, select Connect, and then connect to that instance.

      Once we're connected to that instance, remember this is an instance that we haven't used to interact with this EBS volume.

      So this instance has no initial configuration of this EBS volume, and if we do a DF-K, you'll see that this volume is not mounted on this instance.

      What we need to do is do an LS, BLK, and this will list all of the block devices on this instance.

      You'll see that it's still using XVDF because this is the device ID that we configured when attaching the volume.

      Now, if we run this command, so sudo, file, -s, and then the device ID of this EBS volume, notice how now it shows a file system on this EBS volume because we created it on the previous instance.

      We don't need to go through all of the process of creating the file system because EBS volumes persist past the lifecycle of an EC2 instance.

      You can interact with an EBS volume on one instance and then move it to another and the configuration is maintained.

      We're going to follow the same process.

      We're going to create a folder called EBSTEST.

      Then we're going to mount the EBS volume using the device ID into this folder.

      We're going to move into this folder and then if we do an LS, space-LA, and press Enter, you'll see the test file that you created in the previous step.

      It still exists and all of the contents of that file are maintained because the EBS volume is persistent storage.

      So that's all I wanted to verify with this instance that you can mount this EBS volume on another instance inside the same availability zone.

      At this point, close down this tab and then click on Instances and we're going to shut down this second EC2 instance.

      So right-click and then select Stop Instance and you'll need to confirm that process.

      Wait for that instance to change into a stop state and then we're going to detach the EBS volume.

      So that's moved into the stopped state.

      We can select Volumes, right-click on this EBSTEST volume, detach the volume and confirm that.

      Now next, we want to mount this volume onto the instance that's in Availability Zone B and we can't do that because EBS volumes are located in one specific availability zone.

      Now to allow that process, we need to create a snapshot.

      Snapshots are stored on S3 and replicated between multiple availability zones in that region and snapshots allow us to take a volume in one availability zone and move it into another.

      So right-click on this EBS volume and create a snapshot.

      Under Description, just use EBSTESTSNAP and then go ahead and click on Create Snapshot.

      Just close down any dialogues, click on Snapshots and you'll see that a snapshot is being created.

      Now depending on how much data is stored on the EBS volume, snapshots can either take a few seconds or anywhere up to several hours to complete.

      This snapshot is a full copy of all of the data that's stored on our original EBS volume.

      But because the snapshot is stored in S3, it means that we can take this snapshot, right-click, create volume and then create a volume in a different availability zone.

      Now you can change the volume type, the size and the encryption settings at this point, but we're going to leave everything the same and just change the availability zone from US-EAST-1A to US-EAST-1B.

      So select 1B in availability zone, click on Add Tag.

      We're going to give this a name to make it easier to identify.

      For the value, we're going to use EBS Test Volume-AZB.

      So enter that and then create the volume.

      Close down any dialogues and at this point, what we're doing is using this snapshot which is stored inside S3 to create a brand new volume inside availability zone US-EAST-1B.
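
      For reference, the same snapshot-and-restore flow can also be performed from the AWS CLI. This is a sketch only; the volume and snapshot IDs here are placeholders.

```shell
# Sketch of the console steps using the AWS CLI; the IDs are placeholders.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "EBSTESTSNAP"
# ...wait for the snapshot to complete, then restore it into us-east-1b:
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1b \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=EBSTestVolume-AZB}]'
```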

      At this point, once the volume is in an available state, make sure you select the right one, then we can right-click, we can attach this volume and this time when we click in the instance box, you'll see the instance that's in availability zone 1B.

      So go ahead and select that and click on Attach.

      Once that volume is in use, go back to Instances, select the third instance, right-click, select Connect, choose Instance Connect, verify the username and then connect to the instance.

      Now we're going to follow the same process with this instance.

      So first, we need to list all of the attached block devices using LSBLK.

      You'll see the volume we've just created from that snapshot, it's using device ID XVDF.

      We can verify that it's got a file system using the command that we've used previously and it's showing an XFS file system.

      Next, we create our folder which will be our mount point.

      Then we mount the device into this mount point using the same command as we've used previously, move into that folder and then do a listing using LS-LA and you should see the same test file you created earlier, and if you cat this file, it should have the same contents.

      This volume has the same contents because it's created from a snapshot that we created of the original volume and so its contents will be identical.

      Go ahead and close down this tab to this instance, select instances, right click, stop this instance and then confirm that process.

      Just wait for that instance to move into the stopped state.

      We're going to move back to volumes, select the EBS test volume in availability zone 1B, detach that volume and confirm it and then just move to snapshots and I want to demonstrate how you have the option of right clicking on a snapshot.

      You can copy the snapshot and choose a different region.

      So as well as snapshots giving you the option of moving EBS volume data between availability zones, you can also use snapshots to copy data between regions.

      Now I'm not going to do this process but I could select a different region, for example, Asia Pacific Sydney and copy that snapshot to the Sydney region.

      But there's no point doing that here, because we'd just have to remember to clean it up afterwards.

      That process is fairly simple and will allow us to copy snapshots between regions.

      It might take some time again depending on the amount of data within that snapshot but it is a process that you can perform either as part of data migration or disaster recovery processes.

      So go ahead and click on cancel and at this point we're just going to clear things up because this is the end of this first phase of this demo lesson.

      So right click on this snapshot and just delete the snapshot and confirm that.

      Then go to volumes, select the volume in US East 1A, right click, delete that volume and confirm.

      Select the volume in US East 1B, right click, delete volume and confirm.

      And that just means we've tidied up both of those EBS volumes within this account.

      Now that's the end of this first stage of this set of demo lessons.

      All the steps until this point have been part of the free tier within AWS.

      What follows won't be part of the free tier.

      We're going to be creating a larger instance size and this will have a cost attached, but I want to use it to demonstrate instance store volumes, how you can interact with them and some of their unique characteristics.

      So I'm going to move into a new video and this new video will have an associated charge.

      So you can either watch me perform the steps or you can do it within your own environment.

      Now go ahead and complete this video and when you're ready, you can move on to the next video where we're going to investigate instance store volumes.

      Welcome back and we're going to use this demo lesson to get some experience of working with EBS and Instance Store volumes.

      Now before we get started, this series of demo videos will be split into two main components.

      The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.

      The second component will be based on Instance Store volumes and will be using larger instances which are not included within the free tier.

      So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.

      If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.

      The second component is entirely optional but I just wanted to warn you of the potential cost in advance.

      Now to get started with this demo, you're going to need to deploy some infrastructure.

      To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.

      Now attached to this demo is a one click deployment link to deploy the infrastructure.

      So go ahead and click on that link.

      That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.

      Now you're going to need this to be in a create complete state before you continue with this demo.

      So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.

      Okay, now that's finished and the stack is in a create complete state.

      You're also going to be running some commands within EC2 instances as part of this demo.

      Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.

      So go ahead and open that link in a separate browser window or separate browser tab.

      It should look something like this and we're going to be using this throughout the lesson.

      Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.

      So instance one, instance two and instance three.

      So the next thing to do is to move across to the EC2 console.

      So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.

      Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.

      Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.

      We have instance one - az a.

      We have instance two - az a and then instance one - az b.

      So these are three instances, two of which are in availability zone A and one which is in availability zone B.

      Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.

      And what you'll see is three EBS volumes, each of which is eight GIB in size.

      Now these are all currently in use.

      You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.

      So on each of these volumes is the operating system running on those EC2 instances.

      Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.

      So click on the create volume button.

      The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.

      We've got GP2 and GP3 which are the general purpose storage types.

      We're going to use GP3 for this demo lesson.

      You could also select one of the provisioned IOPS volumes.

      So this is currently IO1 or IO2.

      And with both of these volume types, you're able to define IOPS separately from the size of the volume.

      So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.

      Now IO1 was the first type of provisioned IOPS SSD introduced by AWS, and more recently they've introduced IO2, an enhanced version which provides even higher levels of performance.

      In addition to that we do have the non-SSD volume types.

      So SC1 which is cold HDD, ST1 which is throughput optimized HDD and then of course the original magnetic type which is now legacy and AWS don't recommend this for any production usage.

      For this demo lesson we're going to go ahead and select GP3.

      So select that.

      Next you're able to pick a size in GIB for the volume.

      We're going to select a volume size of 10 GIB.

      Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.

      At this point I want you to go ahead and select US-EAST-1A.

      When creating volume you're also able to specify a snapshot as the basis for that volume.

      So if you want to restore a snapshot into this volume you can select that here.

      At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.

      We're going to be talking about encryption later in this section of the course.

      You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.

      We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.

      For the key we're going to use name.

      For the value we're going to put EBS test volume.

      So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.

      Just close down any dialogues and then just pay attention to the different states that this volume goes through.

      It begins in a creating state.

      This is where the storage is being provisioned and then made available by the EBS product.

      If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.

      And that's what we're going to do so we're going to right click and select attach volume.

      Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.

      EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.

      So if we select the instance box you'll only see instances in that same availability zone.

      Now at this point go ahead and select instance 1 in availability zone A.

      Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.

      So this is how the volume is going to be exposed to the EC2 instance.

      So if we want to interact with this instance inside the operating system this is the device that we'll use.

      Now different operating systems might see this in slightly different ways.

      So as this warning suggests certain Linux kernels might rename SDF to XVDF.

      So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.

      How we can identify it and how we can configure it within the operating system for use.

      And I'm going to demonstrate that in the next part of this demo lesson.

      So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.

      Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.

      We're going to go ahead and connect to instance 1 in availability zone A.

      This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.

      So right click on this instance and select connect, and then you could either connect with an SSH client or use instance connect. To keep things simple we're going to connect from our browser, so select the EC2 instance connect option, make sure the username is set to EC2-user and then click on connect.

      So now we connected to this EC2 instance and it's at this point that we'll start needing the commands that are listed inside the lesson commands document and again this is attached to this lesson.

      So first we need to list all the block devices which are connected to this instance and we're going to use the LSBLK command.

      Now if you're not comfortable with Linux don't worry just take this nice and slowly and understand at a high level all the commands that we're going to run.

      So the first one is LSBLK and this is list block devices.

      So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.

      You'll see the root device this is the device that's used to boot the instance it contains the instance operating system you'll see that it's 8 gig in size and then this is the EBS volume that we just attached to this instance.

      You'll see that device ID so XVDF and you'll see that it's 10 gig in size.

      Now what we need to do next is check whether there is a file system on this block device.

      So this block device we've created it with EBS and then we've attached it to this instance.

      Now we know that it's blank but it's always safe to check if there's any file system on a block device.

      So to do that we run this command.

      So we're going to check are there any file systems on this block device.

      So press enter and if you see just data that indicates that there isn't any file system on this device and so we need to create one.

      You can only mount file systems under Linux and so we need to create a file system on this raw block device this EBS volume.

      So to do that we run this command.

      So sudo again is just giving us admin permissions on this instance.

      MKFS is going to make a file system.

      We specify the file system type with hyphen t and then XFS which is a type of file system and then we're telling it to create this file system on this raw block device which is the EBS volume that we just attached.

      So press enter and that will create the file system on this EBS volume.

      We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.

      So now we can see the difference.

      Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.

      Now the way that Linux works is we mount a file system to a mount point which is a directory.

      So we're going to create a directory using this command.

      MKDIR makes a directory and we're going to call the directory forward slash EBS test.

      So this creates it at the top level of the file system.

      This signifies root which is the top level of the file system tree and we're going to make a folder inside here called EBS test.

      So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.

      And to do that we use this command, mount.

      So mount takes a device ID, so forward slash dev forward slash xvdf.

      So this is the raw block device containing the file system we just created and it's going to mount it into this folder.

      So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.

      And we can verify that by running a df space hyphen k.

      And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.
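Collected in one place, the mount steps above look like this (the directory name is the one used in this demo; run these on the instance):

```shell
# Create the mount point at the top level of the file system tree.
sudo mkdir /ebstest

# Mount the file system on the EBS volume into that directory.
sudo mount /dev/xvdf /ebstest

# Verify: the bottom line of the output should show /dev/xvdf
# mounted on /ebstest.
df -k
```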

      At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.

      So cd which is change directory space forward slash EBS test and then press enter and that will move you into that folder.

      Once we're in that folder we're going to create a test file.

      So we're going to use this command, so sudo nano, which is a text editor, and we're going to call the file amazing test file dot txt.

      So type that command in and press enter and then go ahead and type a message.

      It can be anything you just need to recognize it as your own message.

      So I'm going to use cats are amazing and then some exclamation marks.

      Then I'm going to press control o and enter to save that file and then control x to exit again clear the screen to make it easier to see.

      And then I'm going to do an LS space hyphen LA and press enter just to list the contents of this folder.

      So as you can see we've now got this amazing test file dot txt.

      And if we cat the contents of this so cat amazing test file dot txt you'll see the unique message that you just typed in.

      So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.
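If you prefer to script this step rather than use nano interactively, the same test file can be created and verified like this. It's shown here under /tmp so it can run anywhere; in the demo the directory is the /ebstest mount point, which needs sudo:

```shell
# Create the folder and the test file non-interactively (echo instead of nano).
mkdir -p /tmp/ebstest
echo 'cats are amazing!!!' > /tmp/ebstest/amazingtestfile.txt

# List the folder contents and print the file, as done in the demo.
ls -la /tmp/ebstest
cat /tmp/ebstest/amazingtestfile.txt
```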

      So the next step that I want you to do is to reboot this EC2 instance.

      To do that type sudo space and then reboot and press enter.

      Now this will disconnect you from this session.

      So you can go ahead and close down this tab and go back to the EC2 console.

      Just go ahead and click on instances.

      Okay, so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. Effective collaboration is essential for mutual learning.

      for - Deep Humanity - intertwingled individual / collective learning - evolutionary learning journey - symmathesy - mutual learning - Nora Bateson

    2. preliminary ground-setting

      for - co-creative collaboration - preliminary groundwork

      comment - How many times have I seen people come together with good intentions to collaborate on some meaningful project, only for the project to fall apart some time later due to differences that emerge later on? - Without laying the proper framework for engagement and conflict resolution, we cannot prevent future conflicts from emerging - What is that proper framework? - What variables bring people closer together? - What variables drive people further apart? - We must identify those variables. They are complex because each one of us sees reality from our own unique perspective

    3. for - Medium article - co-creative collaboration - Donna Nelham

      summary - Donna takes us on a deep dive into the word collaboration, what is needed to forge deep and meaningful collaboration, and why it often fails - She introduces the term "collaboration washing" (like greenwashing) into our lexicon - This article is a provocation for a deep dive into what it means to collaborate - The questions we ask ourselves will lead us back to the most fundamental philosophical questions of self and other and how we formed these

    1. rumination

      Rumination is repeatedly and persistently thinking about things in the past, usually your own feelings or problems

    2. endogenous

      from within

    3. exogenous

      having an external cause or origin.

    1. Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.

      In this demo lesson you're going to add private internet access capability using NAT gateways.

      So you're going to be applying a cloud formation template which creates this base infrastructure.

      It's going to be the animals for life VPC with infrastructure in each of three availability zones.

      So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.

      Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.

      In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.

      Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.

      It means that if any of the availability zones fails, the others will continue operating, because the route tables attached to the private subnets point at the NAT gateway within that same availability zone.

      So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.

      Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.

      So let's go ahead and move across to our AWS console and get started implementing this architecture.

      Okay so now we're at the AWS console as always just make sure that you're logged in to the general AWS account as the IAM admin user and you'll need to have the Northern Virginia region selected.

      Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you've created up until that point so the animals for life VPC as well as the Bastion host and the associated networking.

      So you should have a relatively clean AWS account.

      So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.

      So attached to this demo lesson is a one-click deployment link so go ahead and open that link.

      That's going to take you to a quick create stack screen.

      Everything should be pre-populated the stack name should be a4l just scroll down to the bottom check this capabilities box and then click on create stack.

      Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.

      So go ahead, pause the video, wait for your stack to change into create complete and then we're good to continue.

      Okay so now this stack's moved into a create complete state we're good to continue.

      So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.

      To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.

      This is an EC2 instance, a private EC2 instance, so this doesn't have any public internet connectivity, and we're going to use this to test NAT gateway functionality.

      So go ahead and click on this icon under physical ID and this is going to move you to the EC2 console and you'll be able to see this a4l - internal - test instance.

      Now currently in my case it's showing as running but the status check is showing as initializing.

      Now we'll need this instance to finish provisioning before we can continue with the demo.

      What should happen is this status check should change from initializing to two out of two status checks and once you're at that point you should be able to right click and select connect and choose session manager and then have the option of connecting.

      Now you'll see that I don't because this instance hasn't finished its provisioning process.

      So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.

      Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.

      So go ahead and pause the video and when you can connect to the instance you're good to continue.

      Okay so in my case it took about five minutes for this to change to two out of two checks past and then another five minutes before I could connect to this EC2 instance.

      So I can right click on here and pick connect.

      I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.

      Now the reason why you're able to connect to this private instance is because we're using Session Manager, and I'll explain exactly how this product works elsewhere in the course. Essentially it allows us to connect into an EC2 instance with no public internet connectivity, and it's using VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course. But what you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping and then a space and then 1.1.1.1 and press enter, you'll note that we don't have any public internet connectivity. That's because this instance doesn't have a public IP version 4 address and it's not in a subnet with a route table which points at the internet gateway.

      This EC2 instance has been deployed into the app A subnet which is a private subnet and it also doesn't have a public IP version 4 address.

      So at this point what we need to do is go ahead and deploy our NAT gateways and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IP version 4 internet so let's go ahead and do that.

      Now to do that we need to be back at the main AWS console click in the services search box at the top type VPC and then right click and open that in a new tab.

      Once you do that go ahead and move to that tab and once you're there click on NAT gateways and create a NAT gateway.

      Okay so once you're here you'll need to specify a few things you'll need to give the NAT gateway a name you'll need to pick a public subnet for the NAT gateway to go into and then you'll need to give the NAT gateway an elastic IP address which is an IP address which doesn't change.

      So first we'll set the name of the NAT gateway and we'll use a4l-vpc1-natgw, with a4l standing for animals for life, and then -a because this is going into availability zone A.

      Next we'll need to pick the public subnet that the NAT gateway will be going into so click on the subnet drop down and then select the web A subnet which is the public subnet in availability zone A, so sn-web-a.

      Now we need to give this NAT gateway an elastic IP it doesn't currently have one so we need to click on allocate elastic IP which gives it an allocation.

      Don't worry about the connectivity type we'll be covering that elsewhere in the course just scroll down to the bottom and create the NAT gateway.

      Now this process will take some time and so we need to go ahead and create the two other NAT gateways.

      So click on NAT gateways at the top and then we're going to create a second NAT gateway.

      So go ahead and click on create NAT gateway again. This time we'll call the NAT gateway a4l-vpc1-natgw-b and this time we'll pick the web B subnet, so sn-web-b. Allocate an elastic IP again and click on create NAT gateway.

      Then we'll follow the same process a third time. So click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP and then scroll down and click on create NAT gateway.

      At this point we've got the three NAT gateways that are being created and they're all in a pending state. If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll to the right or left and see details on these IPs. If we wanted, we could release these IPs back to the account once we'd finished with them.

      Now at this point you need to go ahead and pause the video and resume it once all three of those NAT gateways have moved away from the pending state. We need them to be in an available state, ready to go, before we can continue with this demo. So go ahead and pause, and resume once all three have changed to an available state.

      Okay so all three are now in an available state, so that means they're good to go and providing service. If you scroll to the right in this list you're able to see additional information about these NAT gateways, so you can see the elastic and private IP address, the VPC, and the subnet that each of these NAT gateways is located in.

      What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways. So right click on route tables and open in a new tab, and we need to create a new route table for each of the availability zones.

      So go ahead and click on create route table. First we need to pick the VPC for this route table, so click on the VPC drop down and then select the animals for life VPC, so a4l-vpc1. Once selected, go ahead and name the route table. We're going to keep the naming scheme consistent, so a4l-vpc1-rt-private-a, with rt for route table and private a for the availability zone. So enter that and click on create.

      Then close that dialogue down and create another route table. This time we'll use the same naming scheme, but of course this time it will be rt-private-b. Select the animals for life VPC and click on create. Close that down and then finally click on create route table again, this time a4l-vpc1-rt-private-c. Again click on the VPC drop down, select the animals for life VPC and then click on create. So that's going to leave us with three route tables, one for each availability zone.

      What we need to do now is create a default route within each of these route tables, and that route is going to point at the NAT gateway in the same availability zone.

      So select the route table private a and then click on the routes tab. Once you've selected the routes tab, click on edit routes, and we're going to add a new route. It's going to be the IP version 4 default route of 0.0.0.0/0. Then click on target, pick NAT gateway, and we're going to pick the NAT gateway in availability zone A. Because we named them, it's easy to select the relevant one from this list, so go ahead and pick a4l-vpc1-natgw-a. Because this is the route table in availability zone A, we need to pick the matching NAT gateway. So save that and close.

      Now we'll do the same process for the route table in availability zone B. Make sure the routes tab is selected, click on edit routes, click on add route, again 0.0.0.0/0, and then for target pick NAT gateway and then pick the NAT gateway that's in availability zone B, so natgw-b. Once you've done that, save the route table.

      Next select the route table in availability zone C, so select rt-private-c, make sure the routes tab is selected and click on edit routes. Again we'll be adding a route, the IP version 4 default route, so 0.0.0.0/0. Select a target, go to NAT gateway and pick the NAT gateway in availability zone C, so natgw-c. Once you've done that, save the route table.

      Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place. So let's move back to our private instance, and we can see that it's not actually working.

      Now the reason for this is that although we have created these routes, we haven't actually associated these route tables with any of the subnets. Subnets in a VPC which don't have an explicit route table association are associated with the main route table. We need to explicitly associate each of these route tables with the subnets inside that same AZ.

      So let's go ahead and pick rt-private-a. We'll go through in order, so select it, click on the subnet associations tab and edit subnet associations, and then you need to pick all of the private subnets in AZ A. That's the reserved subnet, so sn-reserved-a, the app subnet, so sn-app-a, and the DB subnet, so sn-db-a. All of these are the private subnets in availability zone A. Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table. We're going to resolve that now by associating this route table with those subnets, so click on save, and this will associate all of the private subnets in AZ A with the AZ A route table.

      Now we're going to do the same process for AZ B and AZ C, and we'll start with AZ B. Select the private B route table, click on subnet associations, edit subnet associations, select application B, database B and then reserved B, and then scroll down and save the associations. Then select the private C route table, click on subnet associations, edit subnet associations, select reserved C, database C and application C, and then scroll down and save those associations.

      Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to session manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work. That's because we now have a NAT gateway providing service to each of the private subnets in all of the three availability zones.

      Okay, so that's everything you needed to cover in this demo lesson. Now it's time to clean up the account and return it to the same state as it was at the start of this demo lesson. From this point on within the course you're going to be using automation, and so we can remove all the configuration that we've done inside this demo lesson.

      The first thing we need to do is to reverse the route table changes that we've made. So go ahead and select the rt-private-a route table, go to subnet associations, edit the subnet associations and just uncheck all of these subnets. This will return them to being associated with the main route table. Scroll down and click on save. Do the same for rt-private-b, so deselect all of these associations and click on save, and then the same for rt-private-c: select it, go to subnet associations, edit them, remove all of these subnets and click on save.

      Next select all of these private route tables, the ones that we created in this lesson. Select them all, click on the actions drop down, then delete route table, and confirm by clicking delete route tables.

      Go to NAT gateways on the left, and we need to select each of the NAT gateways in turn. So select A, click on actions, delete NAT gateway, type delete and click delete. Then select B and do the same process: actions, delete NAT gateway, type delete, click delete. And finally the same for C: select the C NAT gateway, click on actions and delete NAT gateway. You'll need to type delete to confirm, then click on delete.

      Now we're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted. If yours aren't, if they're still listed in a deleting state, then go ahead and pause the video and resume once all of these have changed to deleted.

      At this point all of the NAT gateways have deleted, so you can go ahead and click on elastic IPs, and we need to release each of these IPs. Select one of them, click on actions, release elastic IP addresses, and click release. Do the same process for the other two.

      Once that's done, move back to the CloudFormation console, select the stack which was created by the one-click deployment at the start of the lesson, click on delete and then confirm that deletion. That will remove the CloudFormation stack and any resources created as part of this demo. At that point, once that finishes deleting, the account has been returned to the same state as it was at the start of this demo lesson.

      So I hope this demo lesson has been useful. Just to reiterate what you've done: you've created three NAT gateways for a region-resilient design, you've created three route tables, one in each availability zone, added a default IP version 4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in those availability zones. So you've implemented a regionally resilient NAT gateway architecture, and that's a great job. That's a pretty complex demo, but it's going to be functionality that will be really useful if you're using AWS in the real world or if you have to answer any exam questions on NAT gateways.

      With that being said, at this point you have cleaned up the account and deleted all the resources, so go ahead, complete this video, and when you're ready, I'll see you in the next.
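For reference, the per-AZ NAT gateway setup done in the console above can also be sketched with the AWS CLI. The subnet and route table IDs below are placeholders you would substitute with your own; repeat the sequence for availability zones B and C:

```shell
# Allocate an elastic IP for the NAT gateway.
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query AllocationId --output text)

# Create the NAT gateway in the public web subnet for this AZ.
NATGW_ID=$(aws ec2 create-nat-gateway \
  --subnet-id subnet-EXAMPLE-WEB-A \
  --allocation-id "$ALLOC_ID" \
  --query NatGateway.NatGatewayId --output text)

# Wait until the NAT gateway leaves the pending state.
aws ec2 wait nat-gateway-available --nat-gateway-ids "$NATGW_ID"

# Add the IPv4 default route to the private route table for the same AZ,
# then associate that route table with each private subnet in the AZ.
aws ec2 create-route --route-table-id rtb-EXAMPLE-PRIVATE-A \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NATGW_ID"
aws ec2 associate-route-table --route-table-id rtb-EXAMPLE-PRIVATE-A \
  --subnet-id subnet-EXAMPLE-APP-A
```

These commands require configured AWS credentials and would create billable resources, so treat them as a sketch of the console steps rather than something to run as-is.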

      Data construction prompt. Fig. 6 shows the prompt used for Chinese distillation data construction. We follow Zhou et al. (2024) to design the prompt for Chinese data construction. We adopt the data construction prompt of Pile-NER-type, since it shows the best performance as in (Zhou et al., 2024).

      Figure 6: Data construction prompt for Chinese open domain NER.

      Data processing. Following (Zhou et al., 2024), we chunk the passages sampled from the Sky corpus to texts of a max length of 256 tokens and randomly sample 50K passages. Due to limited computation resources, we sample the first twenty files in the Sky corpus for data construction, since the size of the entire Sky corpus is beyond the processing capability of our machines. We conduct the same data processing procedures including output filtering and negative sampling as in UniNER. Specifically, the negative sampling strategy for entity types is applied with a probability proportional to the frequency of entity types in the entire con…

      Sky-NER (Chinese open NER) data construction process: - Prompt construction: based on the strategy from the UniversalNER paper. - Data processing: data is collected by chunking passages from the Sky corpus into texts with a max length of 256 tokens and randomly sampling 50K passages.

      Inference with out-domain examples. During inference, since examples from the automatically constructed data are not aligned with the domains and schemas of the human-annotated benchmarks, we refer to them as out-domain examples. Fig. 4 shows the results of inference with out-domain examples using diverse retrieval strategies. We use the model trained with the NN strategy here. After applying example filtering such as BM25 scoring, inference with out-domain examples shows improvements compared to the baseline, suggesting the need for example filtering when implementing RAG with out-domain examples

      Inference with out-domain examples: During inference, because examples from the automatically constructed dataset have domains and formats that differ from the human-annotated data, these examples are called out-domain examples.

      In the experiment shown in Figure 4, the RA-IT model is trained with the NN retrieval strategy. After applying the BM25 filter, inference with out-domain examples shows improvement over the baseline, which demonstrates the importance of adding a filter when applying RAG with out-domain examples.

      Training with diverse retrieval strategies. Fig. 3 visualizes the results of training with various retrieval strategies. We conduct inference with and without examples for each strategy, and set the retrieval strategy at inference the same as at training. The most straightforward method, NN, shows the best performance, suggesting the benefits of semantically similar examples. The Random strategy, though inferior to NN, also shows improvements, indicating that random examples might introduce some general information about the NER task to the model. Meanwhile, inference with examples does not guarantee improvements and often hurts performance. This may be due to the differences of the annotation schema between the automatically constructed data and the human-annotated benchmarks

      Figure 4: Impacts of inference with out-domain examples using various retrieval strategies. The average F1 value of the evaluated benchmarks is reported. w/o exmp. means inference without examples. Applying an example filtering strategy such as BM25 filtering benefits RAG with out-domain examples.

      Figure 5: Impacts of inference with in-domain examples. The average F1 value of the evaluated benchmarks is reported. N-exmp. means the example pool of size N. Sufficient in-domain examples are helpful for RAG.

      Training with diverse retrieval strategies: shown in Figure 3. Inference is conducted with and without reference examples for each retrieval strategy, and the retrieval strategy used during training and inference is the same.

      The results show that NN is the best retrieval strategy, which highlights the importance of semantically similar reference examples. Meanwhile, inference with examples does not guarantee improvement and often negatively affects the results.

      Diverse retrieval strategies. The following strategies are explored in the subsequent analysis. (1) Nearest neighbor (NN), the strategy used in the main experiments, retrieves the k nearest neighbors of the current sample. (2) Nearest neighbor with BM25 filter (NN, BM), where we apply BM25 scoring to filter out NN examples not passing a predefined threshold. Samples with no satisfying examples are used with the vanilla instruction template. (3) Diverse nearest neighbor (DNN), retrieves K nearest neighbors with K >> k and randomly selects k examples from them. (4) Diverse nearest neighbor with BM25 filter (DNN, BM), filters out DNN examples not reaching the BM25 threshold. (5) Random, uniformly selects k random examples. (6) Mixed nearest neighbors (MixedNN), mixes the use of the NN and random retrieval strategies with the ratio of NN set to α.

      The main retrieval strategies: - Nearest neighbor (NN): the strategy used in the main experiments; retrieves the k examples nearest to the query sample. - NN with BM25 filter (NN, BM): the BM25 filter is used to remove NN examples whose similarity does not pass a certain threshold. - Diverse NN: retrieves K NN examples with K >> k, then randomly selects k examples from those K. - Random. - Mixed NN: combines NN with random selection, where NN is chosen with ratio α.

      We explore the impacts of diverse retrieval strategies. We conduct the analysis on the 5K data size for cost saving, as the effect of RA-IT is consistent across various data sizes as shown in Section 3.4. We report the average results of the evaluated benchmarks here

      Analysis: this analysis is performed to explore the impact of different retrieval strategies. It is conducted on a data sample of size 5K.

      The main results are summarized in Table 1 and 2 respectively. We report the results of inference without examples for RA-IT here, since we found this setting exhibits more consistent improvements. The impacts of inference with examples are studied in Section 3.5. As shown in the tables, RA-IT shows consistent improvements on English and Chinese across various data sizes. This is presumably because the retrieved context enhances the model…

      Main results: shown in Table 1 and Table 2. Note that the experiments in these two tables perform inference without few-shot examples, because this setting yields more consistent improvements.

      The results show that RA-IT performs best. The improvement is presumably because the retrieved context enhances the model's understanding of the input, demonstrating the value of context-augmented instruction samples.

      We conduct a preliminary study on IT data efficiency in targeted distillation for open NER by exploring the impact of various data sizes: [0.5K, 1K, 5K, 10K, 20K, 30K, 40K, 50K]. We use vanilla IT for the preliminary study. Results are visualized in Fig. 2. The following observations are consistent in English and Chinese: (1) a small data size already surpasses ChatGPT's performance. (2) Performance improves as the data size is increased to 10K or 20K, but begins to decline and then remains at a certain level as the data size is further increased to 50K. Recent work on IT data selection, Xia et al. (2024); Ge et al. (2024); Du et al. (2023), also finds superior performance with only a limited data size. We leave selecting more beneficial IT data for IE as future work. Accordingly, we conduct the main experiments on 5K, 10K and 50K data sizes

      Figure 2: Preliminary study of IT data efficiency for open NER in English (left) and Chinese (right) scenarios, where the training data are Pile-NER and Sky-NER respectively. Average zero-shot results of evaluated benchmarks are illustrated. The performance does not necessarily improve as the data increases.

      Preliminary study for evaluating data efficiency: a preliminary study is conducted to evaluate the efficiency of the IT dataset in targeted distillation for the open NER task, by exploring the impact of data at various sizes: [0.5K, 1K, 5K, ...]. Vanilla IT is used for this study.

      Conclusions drawn: - A small amount of data can already surpass ChatGPT's performance. - Results improve as the data size grows (from 10K to 20K), but begin to decline and then settle at a certain level as the data continues to increase to 50K. Recent studies on IT data selection also find that small datasets of limited size can perform better.

    1. 五彩 (Wucai) is currently free to use; later it will probably charge on a subscription basis, and I doubt there will be a one-time purchase option. But I very much support paying for tools that work well, after all everyone needs to eat, as long as the price is reasonable.


    1. The French "Minister for the Ecological Transition", Agnès Pannier-Runacher, is threatening to resign if the current budget plan does not allocate more than the currently earmarked funds for climate adaptation and climate protection. Agnès Pannier-Runacher belongs to the left wing of the Macron camp. The new French government is conservative in character and depends on toleration by the far-right Rassemblement National.

    1. Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might): when users are logged on and logged off who users interact with What users click on what posts users pause over where users are located what users send in direct messages to each other

      I find it scary that these platforms monitor every move we make on their sites, especially checking our direct messages with others. Our direct messages aren't as private as we think if these platforms are sitting there collecting this data.

    1. Who made the water, the raft, the trinity of Earth-Creators? Like many California creation epics, the Maidu account seems to begin in the middle of the story. Mysteriously, elements of the world seem to have always been present, their existence apparently beyond question or speculation.

      This creation story is interesting to me because it makes me wonder if the earth is being depicted as the "god" of the story. In most of the creation stories I am familiar with, the "god" of the story is the only thing present at the beginning, and its existence is never really questioned. Earth Initiate does not appear to be an all-powerful being in this story, so I'm curious whether a "god" was present in their beliefs or not.

    1. The standard deviation is the average deviation from the mean.

      Is this statement false, given that the lecture said: intuitively, the standard deviation is something like the average (absolute) deviation from the mean, though not quite the same thing because of the squaring and the conversion back via the square root?
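      A small numeric illustration of the distinction raised in this note, with made-up data: because of the squaring and square root, the standard deviation generally differs from the mean absolute deviation (and is never smaller than it):

      ```python
      import math

      def mean_abs_dev(xs):
          """Average of the absolute deviations from the mean."""
          m = sum(xs) / len(xs)
          return sum(abs(x - m) for x in xs) / len(xs)

      def std_dev(xs):
          """Population standard deviation: root of the mean squared deviation."""
          m = sum(xs) / len(xs)
          return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

      data = [0, 0, 6]                 # mean = 2
      # mean_abs_dev(data) = (2 + 2 + 4) / 3      ≈ 2.67
      # std_dev(data)      = sqrt((4 + 4 + 16)/3) ≈ 2.83
      ```

      For symmetric two-point data such as `[1, 5]` the two measures happen to coincide, which is why the "felt" description in the lecture is close but not exact.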

    1. Many New York lawyers, including prominent prosecutors, support a resolution to criminally prosecute the big oil companies. The firms are accused of having sold fossil fuels for decades without disclosing or taking into account the dangers known to them. A charge of reckless endangerment of human life is being demanded. For that, it does not have to be proven that the corporations caused the deaths of specific people. https://www.theguardian.com/us-news/2024/oct/17/new-york-big-oil-fueling-climate-disasters

    1. The stress on the world's water systems will mean that in 2030 the demand for water will be 40% higher than the supply. The report of the Global Commission on the Economics of Water finds that, without radical countermeasures, half of the world's food production will be at risk within the coming 25 years. Despite the interconnectedness of global water resources, water is still not managed as a global common good. https://www.theguardian.com/environment/2024/oct/16/global-water-crisis-food-production-at-risk

      Report: https://economicsofwater.watercommission.org/

    1. In its latest report, the International Energy Agency #IEA finds, among other things, that extreme weather events driven by global heating increasingly endanger energy security. It calls for substantially higher investment in energy grids and storage on the one hand, and in infrastructure in particularly energy-poor countries on the other. https://taz.de/Internationale-Energieagentur-warnt/!6043317/

    1. Never before has the CO2 concentration in the atmosphere risen as sharply as in the past year, namely by 3.37 parts per million (ppm). The concentration now stands at 422 ppm. The increase was caused above all by the very low CO2 uptake of the ocean and land sinks. https://taz.de/Hiobsbotschaft-fuers-Klima/!6040258/

    1. Video summary [00:00:00] - [00:27:00]:

      This video presents five critical currents of urban sociology, focusing on social Catholicism and the work of Paul-Henri Chambard de Lauwe. It explores his influence and his contributions to French urban sociology.

      Highlights: + [00:00:00] Introduction of the five currents * Presentation of the method * Focus on social Catholicism * Importance of Chambard de Lauwe + [00:01:09] Life and context of Chambard de Lauwe * Aristocratic and Catholic origins * Evolution toward left-wing ideas * Training as an anthropologist + [00:04:44] Relationship between the Church and the city * Rupture after the French Revolution * Retreat of the Church into the countryside * Return of the Church to the cities in the 20th century + [00:09:55] Chambard's founding works * Publication of "Paris et l'agglomération parisienne" * Importance of the maps and illustrations * Study of working-class neighborhoods + [00:12:00] Concept of the city as a matrix * Culture and space inseparable * Impact of the destruction of working-class neighborhoods * Transformation of social and cultural practices

      Video summary [00:27:04] - [00:57:07]:

      This video explores the critical currents of urban sociology, focusing on tenants' associations, urban Marxism, and urban social movements. It examines how these currents influenced urban policy and urban transformations.

      Highlights: + [00:27:04] Tenants' associations * Creation of community services * Importance of collective laundries * Tenants' committees to improve daily life + [00:29:29] Urban Marxism * Application of Karl Marx's theories to urban transformations * Influence of the French Communist Party * Rapid disappearance of urban Marxism after 1982 + [00:35:30] The city as a market * Urban transformation for financial gain * Influence of the big construction groups * Capital gains generated by housing and infrastructure + [00:44:02] The city as a system * Interdependence of local authorities, the state, and private groups * State monopoly capitalism * Management of urban services by private groups + [00:47:02] Urban social movements * Mobilizations for collective services * Demands for better housing and facilities * No junction with the workers' movement toward a total revolution

      Video summary [00:57:12] - [01:23:18]:

      This video presents a critical analysis of urban sociology through the work of Henri Lefebvre, with an emphasis on his concept of the "right to the city".

      Highlights: + [00:57:12] Introduction to Henri Lefebvre * An atypical intellectual of the Trente Glorieuses * Author of 80 books on a range of subjects * Known for his work on Marxism and everyday life + [01:01:00] Work on the city * Seven books on the urban question * "Le droit à la ville" and "La révolution urbaine" * Importance of the city as a place of events + [01:10:00] Concept of the "right to the city" * The title of a book and a powerful idea * Often misinterpreted by local elected officials * Based on the idea that the city is the place of events + [01:15:00] Lefebvre's definition of the city * The city as the place where events occur * Importance of events in the making of society * Examples of micro-events and major events + [01:19:00] Events and urbanism * Events can create temporary cities * The example of techno festivals * Impact of events on the perception of the city

      Video summary [01:23:20] - [01:50:50]:

      This video explores the critical currents of urban sociology, focusing on events and the rights of residents of the urban peripheries.

      Highlights: + [01:23:20] Local events and their importance * Local events attract people from various regions * They contribute to community life * They are often underestimated compared to the big urban centers + [01:25:02] The right to the city according to Lefebvre * Equal recognition of residents of the peripheries * Importance of local events for urban dignity * Critique of the current practices of elected officials + [01:31:02] The CeRFI and its contributions * An independent collective led by Félix Guattari * Influence of Michel Foucault and Lacanian psychoanalysis * Original, non-academic publications + [01:39:01] The city as a disciplinary apparatus * Historical exploration of road networks and city plans * Impact of industrial capitalism on urbanism * Analysis of mining towns and the grands ensembles + [01:46:42] Deterioration and territorialization * Concept of deterritorialization * Psychiatric care in open, community settings * Influence on the construction of France's new towns

      Video summary [01:50:52] - [02:11:22]:

      This part of the video explores critical approaches in urban sociology, focusing on sector psychiatry and urban semiology.

      Highlights: + [01:50:52] Psychopolis and alternatives * The problem of managing the mentally ill * The "psychopolis" proposal * An alternative solution using F3 apartments + [01:54:00] Sector psychiatry * Set up in the new towns * Patients housed in apartments * Follow-up by health professionals + [01:57:03] Urban semiology * Study of urban meaning and experience * Critique of urban planners and state power * Importance of representations and perceptions + [02:05:27] Research directions in urban semiology * The city as a language * Sensory appropriation of spaces * Relationship between urban appropriation and psychic history

    1. Many CO2 offset deals with Chinese firms offering certificates for alleged "upstream emission reductions" are probably fraudulent. By crediting such alleged reductions, Austrian firms such as OMV, Shell Austria and MOL Austria tried to meet the mandated 13% share of renewable energy in their products. The public prosecutor's office has so far not pursued a corresponding complaint filed by the Austrian climate protection ministry, but the evidence is clear. https://www.derstandard.at/story/3000000239520/millionen-betrugsverdacht-rund-um-co2-ausgleichsgeschaefte-mit-china-weitet-sich-aus

    1. Video summary [00:00:05] - [00:28:55]:

      This video presents the first elements of critical urban sociology, focusing on the work of various urban sociologists of the Trente Glorieuses.

      Highlights: + [00:00:05] Introduction to critical urban sociology * Quick presentation of the works * Importance of urban and social transformations * Context of the Trente Glorieuses + [00:02:28] Sociological theories of the 1950s-1980s * Current validity of the theories * Importance of a critical eye * Examples of Marxist-leaning theories + [00:05:52] Urban and intellectual revolution * Brutal urban transformations * Intellectual effervescence of the Trente Glorieuses * Public debates led by intellectuals + [00:12:13] Characteristics of critical urban sociology * Young, politically engaged sociologists * A radical, theoretical sociology * Importance of the intellectual context + [00:20:38] Critique of urbanism and planning * Opposition to state-led urbanism * Critique of centralized planning * The engaged, sometimes aggressive tone of the sociologists

      Video summary [00:28:57] - [00:35:29]:

      This video presents the key elements of critical urban sociology, with an emphasis on urban planning, the engagement of sociologists, and the contradictions inherent in the discipline.

      Highlights: + [00:28:57] Critique of urban planning * Segmentation of society * Specialization of tasks * Loss of the total reality + [00:30:29] Engagement of sociologists * The importance of getting involved * Understanding social movements * Being with the social actors + [00:31:31] Contradictions in sociology * Unconscious actors vs. engagement * Contradictions in the works * The example of Manuel Castells + [00:33:01] Evolution of urban sociology * Transition from scientism to heroism * Prolific output of works * Differences from contemporary sociology

    1. Video summary [00:00:00] - [00:09:38]:

      This video deals with the second urban surge in France, which took place after the Second World War and lasted about fifty years. It examines the dynamics of mass urbanization and the factors that contributed to this growth.

      Highlights: + [00:00:00] Start of the second urban surge * Begins after the Second World War * Lasts about fifty years * Sends large populations toward the cities + [00:00:40] Urban growth according to INSEE * Data from 1962 to 1990 * Observations on urban cores and peri-urban rings * Significant variation in population percentage + [00:02:27] Generalized urbanization * Affects all agglomerations * Slowdown after the Trente Glorieuses * Peri-urbanization keeps growing + [00:05:00] Factors behind the urban surge * Mass industrialization * The baby boom * The wars of decolonization + [00:07:01] Housing problems * Destruction of housing during the war * Shortage of social housing * Appearance of shantytowns and substandard housing

    1. Video summary [00:00:04] - [00:20:21]:

      This video explores the urban transformations in France during the Trente Glorieuses, a period of economic growth and rapid modernization between 1945 and 1975. It highlights the social, economic and cultural changes that profoundly reorganized urban space.

      Highlights: + [00:00:04] Introduction to the Trente Glorieuses * A term popularized by Jean Fourastié * Rapid modernization of France * Comparison of two villages before and after this period + [00:04:35] Transformation of Rennes * Urban expansion with new neighborhoods * Replacement of farmland by residential developments * Development of urban infrastructure + [00:11:02] Social and territorial dynamics * Gentrification of the city centers * Rapid creation of the grands ensembles * Peri-urbanization and the spread of single-family houses + [00:16:46] The central role of the state * Steering of the urban transformations * Lack of local powers * Economic growth sustained by the state + [00:19:52] Impact on society * Extremely low unemployment rate * Complete transformation of society and the economy * Importance of the Trente Glorieuses for understanding today's issues

    1. Video summary [00:00:00] - [00:28:48]:

      The video explores the evolution of the banlieues in the 19th century, focusing on three main ideas: the invention of the working-class banlieues, their role in solving political problems, and the urban transformation of that era.

      Highlights: + [00:00:00] Introduction and objectives * Three main ideas * Invention of the banlieues between 1880 and 1900 * Urban transformation to solve political problems + [00:02:01] Urban expansion * The city spills beyond its fortifications * Urban sprawl and peri-urbanity * Decoupling between the reality and the image of cities + [00:07:11] The urban surge of the 19th century * Continuous urbanization since the 11th century * Two major accelerations: 1850-1900 and 1950-1980 * Concentration of the population in the large agglomerations + [00:12:06] Urbanization and industrialization * Links with mass industrialization * Immigration and hard living conditions * Segregation and insalubrity of the working-class neighborhoods + [00:21:00] Reforms and urban sociology * Reformers and sociologists such as Le Play and Villermé * Field surveys and political initiatives * Examples from Manchester and working-class living conditions

      Video summary [00:28:50] - [00:56:51]:

      This part of the video explores the evolution of cities in the 19th century, in particular social segregation and urban transformations. It highlights the hard living conditions of the workers and the urban revolutions that resulted.

      Highlights: + [00:28:50] Social segregation in Manchester and Paris * Workers confined to industrial districts * Separation of the social classes * Hard living conditions + [00:31:00] Urban revolutions in the 19th century * Emerging socialist and Marxist theories * Workers' revolts in the cities * Importance of the revolutions for urban history + [00:39:00] Transforming cities to solve social problems * Destruction of the working-class city centers * Creation of working-class banlieues * Improvement of living conditions + [00:45:00] The role of Georges Eugène Haussmann in Paris * Expropriation and urban transformation * Creation of grand boulevards and green spaces * Impact on the working-class population

      Video summary [00:56:53] - [01:20:57]:

      This video explores the evolution of the Parisian banlieues in the 19th century, focusing on the social and economic transformations that led to their creation.

      Highlights: + [00:56:53] Population density and gentrification * Falling density in the central districts * Gentrification of the city centers under Haussmann * Displacement of workers to the peripheries + [01:00:00] Invention of the banlieue * Displacement of workers beyond the fortifications * Hard living conditions in the shantytowns * Emergence of new communes to house the workers + [01:04:00] Economic and industrial logics * Relocation of polluting firms to the banlieue * Firms' need for more space * Creation of housing near the new industrial zones + [01:09:00] Census and demographic growth * First census of the banlieue in 1891 * Rapid growth of the working-class populations * Importance of workers in the new communes + [01:13:00] Urban secession and specialization of space * Separation of the social classes in urban space * Concentration of economic and cultural activities in the center * Negative perception of the banlieues and their residents

    1. For the first time, a study systematically demonstrates the influence of droughts and increasing aridity on internal migration in many different countries. Those who migrate are above all members of middle income groups, who have the resources needed to do so. Climate-driven migration contributes markedly to urbanization. https://www.derstandard.at/story/3000000240733/mehr-binnenmigration-durch-klimawandel

      Study: https://www.nature.com/articles/s41558-024-02165-1.epdf?sharing_token=zQaNIIlE0D5VSVhiEeWSRdRgN0jAjWel9jnR3ZoTv0N5BsSsWDa3LuiqvifrZZqQ9PHrGw0G8JwyXN4l5XLwHLyMEPxhNDlwsm_I7HyLLBL-PIsL8iWYBirASOxKiB3OvY5CyEDs2OqdYzcj0HqqPZGigOJmwF7H97HsKHpUv2tEjBvnMf7i4DKmBH78sfFsx7iymr6A4PFpKfrKe6IDSxkyQgZFpa8kBrt8lM6HkbU%3D&tracking_referrer=www.derstandard.at

    1. Video summary [00:00:01] - [00:29:22]:

      This video explores the concept of urban secession, focusing on gated communities and the private spaces replacing public spaces.

      Highlights: + [00:00:01] Gated communities * Historical development in South America * Current worldwide presence * Examples in France and elsewhere + [00:01:22] Evolution of urban spaces * Closing-off of residences and apartment buildings * Increased security with cameras and gates * Reduced public access + [00:02:46] The example of Cœur Défense * An office tower at La Défense, Paris * Senior executives and their way of life * Exclusive services for employees + [00:10:00] Impact on society * Isolation of executives from social problems * Separation of the social classes * Disappearance of public spaces + [00:23:00] Social question vs. urban question * Definition and evolution of the two concepts * Importance of territories in social belonging * Diverging sociological perspectives

      Video summary [00:29:25] - [00:51:12]:

      This part of the video explores social and territorial dynamics in modern cities, emphasizing gentrification, the peri-urban population, and territories of relegation. It also addresses territorial conflicts and urban policies.

      Highlights: + [00:29:25] Gentrification and urban security * Presence of courteous security guards * Pacified urban spaces * A diverse gentrified population + [00:30:47] The peri-urban way of life * Owners of houses with gardens * Stable employment and two cars * Professional diversity but a homogeneous way of life + [00:31:37] Territories of relegation * Marginalized populations in the grands ensembles * High unemployment rate * Social hardship and stigmatization + [00:34:26] Territorial conflicts (NIMBY) * Refusal of local construction projects (high schools, shopping centers) * Protection of social homogeneity * Discreet but influential mobilizations + [00:38:10] Urban policies and social tensions * Increased power of local authorities * Urbanism as a tool for managing tensions * Growing importance of the urban question

    1. Video summary [00:00:00] - [00:29:05]:

      This video presents a course in critical urban sociology taught by Éric Breton, a senior lecturer in the sociology department. It addresses urban mobility, urban theories, and the social production of territories.

      Highlights: + [00:00:00] Introduction and presentation * Éric Breton introduces himself * He explains his areas of expertise * He introduces the urban sociology course + [00:01:10] The different forms of mobility * Physical mobility (bicycle, walking, car) * Residential mobility (frequent moves) * Virtual mobility (internet, media) + [00:06:00] Social production of territories * The city is produced by social transformations * Importance of social conflicts in urban production * Historical examples of urban transformations + [00:12:00] Key periods in urban history * 1850-1900: invention of the banlieues * 1950-1980: creation of the grands ensembles * 2000-2020: peri-urbanization and gentrification + [00:22:00] Urban theories * Urban Marxism (Manuel Castells) * The theories of Michel Foucault * Contributions of Chambard de Lauwe and Henri Lefebvre

      Video summary [00:29:10] - [00:49:12]:

      This video deals with critical urban sociology, focusing on segregation and urban secession. It explores how these phenomena shape social structure and integration in cities.

      Highlights: + [00:29:10] Definition of segregation * Assignment of a territory to a social group * Ethnic, religious, geographic, gender segregation * The example of the city of Rennes + [00:34:01] Shared public space * Rue Le Bastard in Rennes as a shared space * Importance of public space for social mixing * The city as a place where diversity is integrated + [00:37:00] Evolution toward urban secession * Shrinking of shared spaces * Impact on society and the rise of exclusionary discourse * Jacques Donzelot's theory of urban secession + [00:40:05] Examples of urban enclosure * Residential closures in Marseille * Gated communities such as Pont Royal * Impact of fear and insecurity on urbanism

    1. Fiscal

      This word means relating to government revenue, especially taxes.

    2. The 1935 Social Security Act provided for old-age pensions, unemployment insurance, and economic assistance for both the elderly and dependent children.

      This was the creation of Social Security Numbers, right? Also, how did it allow the elderly to retire?

    3. At the time of the stock market crash, southerners were already underpaid, underfed, and undereducated.

      Out of context, but were the farmers/southerners still able to have pets, like dogs? I know that farmers usually have at least one dog, or a cat.

    4. In 1932, nearly 2,300 banks collapsed, taking credit, personal deposits, and people’s life savings with them.

      I would be so angry if this happened to me. Honestly this would really suck, what if a teenager was saving for college? I would be so upset.

    1. It is likely that you have more in common with that reality TV star than you care to admit. We tend to focus on personality traits in others that we feel are important to our own personality. What we like in ourselves, we like in others, and what we dislike in ourselves, we dislike in others (McCornack, 2007). If you admire a person’s loyalty, then loyalty is probably a trait that you think you possess as well. If you work hard to be positive and motivated and suppress negative and unproductive urges within yourself, you will likely think harshly about those negative traits in someone else. After all, if you can suppress your negativity, why can’t they do the same? This way of thinking isn’t always accurate or logical, but it is common.

      To me this has never even registered in my head. I am going to focus on this the next time my girlfriend is watching reality TV. I know that I tend to root for the underdogs in most scenarios; I want the one who was counted out to win. I wonder how that relates to my personality. I know I always admire extroverts, but I felt like that was because I am not very extroverted and wanted to be like them. Interesting self-observation for me to try in the coming days.

    2. This simple us/them split affects subsequent interaction, including impressions and attributions. For example, we tend to view people we perceive to be like us as more trustworthy, friendly, and honest than people we perceive to be not like us (Brewer, 1999).

      I am currently working on a construction site here in Boise. I am from Tennessee, and all my coworkers are from Kentucky. One day a coworker told me the superintendent didn't like me. Obviously confused, since we had only been working together for three days, I asked why. My coworker told me that simply because I am not from Kentucky, he did not trust me or think I was a capable worker, all because of where I grew up. I know it's not fair, but the only thing I can do is prove him wrong and help him recognize that his inherent bias is not always correct.

    1. In conclusion, it is important that primary care physicians get well versed with the future AI advances and the new unknown territory the world of medicine is heading toward.

      The conclusion summarizes how physicians should get used to AI because it will soon be a big part of their work.

    2. Some studies have been documented where AI systems were able to outperform dermatologists in correctly classifying suspicious skin lesions.[18] This is because AI systems can learn more from successive cases and can be exposed to multiple cases within minutes, which far outnumber the cases a clinician could evaluate in one mortal lifetime.

      This shows that AI can take jobs away as well as make them better.

    3. In conclusion, the physicians who used documentation support such as dictation assistance or medical scribe services engaged in more direct face time with patients than those who did not use these services

      This shows that physicians using AI save more time and are able to interact with patients more.

    4. Primary care physicians can use AI to take their notes, analyze their discussions with patients, and enter required information directly into EHR systems.

      This shows another way physicians use AI in their exams.

    5. The Da Vinci robotic surgical system developed by Intuitive surgicals has revolutionized the field of surgery especially urological and gynecological surgeries.

      This paragraph shows how AI is being used in surgery. Robots are mimicking surgeons to perform surgery.

    6. Radiology is the branch that has been the most upfront and welcoming to the use of new technology.

      This paragraph talks about how Radiology is using AI. Radiology uses AI to help identify abnormal and normal scans more quickly, especially in busy hospitals with fewer staff.

    7. A lot of AI is already being utilized in the medical field, ranging from online scheduling of appointments, online check-ins in medical centers, digitization of medical records, reminder calls for follow-up appointments and immunization dates for children and pregnant females to drug dosage algorithms and adverse effect warnings while prescribing multidrug combinations.

      This shows the different ways AI is being utilized in medicine.

    1. Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women

      This can be highly problematic, as the employees would basically be logged into your account and could even view posts set to the "only-me" privacy setting. This reminds me of how someone I know was mistreated by their manager and had a dispute over their wages, so right before handing in her resignation letter she leaked the company's database by posting it on Twitter, including its budget and balance sheet.

    1. When Elon Musk purchased Twitter, he also was purchasing access to all Twitter Direct Messages

      This can be concerning, as we tend to use social media sites like Instagram to chat with our friends and family, which includes a lot of personal information that we wouldn't want anyone else to know. Now that I have read this, I will think twice before saying something very personal over social media messages and will use my phone's SMS instead, because on social media there is always a third party tracking your actions, which feels like an invasion of privacy.

    1. Listening to people who are different from us is a key component of developing self-knowledge. This may be uncomfortable, because our taken-for-granted or deeply held beliefs and values may become less certain when we see the multiple perspectives that exist.

      Listening to the thoughts and opinions of people with differing cultures or political opinions with the intention to understand, instead of to respond, is such a powerful tool. It can help dismantle prejudices, make you a better advocate for your own values, and help you practice giving people room to communicate what they really intend to say rather than giving preloaded responses. I think most people would benefit greatly from engaging in this kind of practice on a regular basis.

    1. Self-discrepancy theory states that people have beliefs about and expectations for their actual and potential selves that do not always match up with what they actually experience (Higgins, 1987).

      I have experienced this kind of expectation-to-reality relationship in some of my personal relationships. These people had an idea of what they could be if they could just stop being inadequate, an idea that only served to generate shame and guilt. Often there was never any real grounding for the things they expected of themselves, but they felt the weight of those expectations as if they were an undeniable reflection of their potential. I am sure much of this is related to external social expectations that are later internalized. These expectations seem to rarely serve as drivers to be more productive; more often they seem to break people down and make them overall less likely to engage with life.