1. Last 7 days
    1. women accessing microfinance

      foreshadowing the week on social entrepreneurship

    2. We have now all been given the freedom

      He doesn't mean this as a good thing...

    3. The endless drive to exceed one’s capacities across hitherto distinct spheres of life activity

      this is what I was referring to earlier ... gotta get more, be more, be fitter, healthier, happier, wealthier.

      Is the "entrepreneurial self" a constant work-in-progress exemplifying the ethos of the Daft Punk song "harder, better, faster, stronger"? In fact, the lyrics have a definite Neo-liberal twist to them (something lost in the recycling of the song by Kanye West with his version, "Stronger"):

      "Work it harder Make it better Do it faster Makes us stronger More than ever Hour after Our work is Never over

    4. Entrepreneurship would appear at first glance to exemplify such a mode of indirect control sans responsibility.

      Yep - so much of entrepreneurship is Neo-liberalism writ large! You're in it for yourself because you can't count on anyone else (especially the state). Collective forms of entrepreneurship are the exception, rather than the rule (here I'm thinking in particular of indigenous versions...)

    5. The production of subjects responsible only for themselves

      Clearly, Szeman is critical of our contemporary condition in which the figure of the entrepreneur is cast as the salve for all our problems yet also produces "subjects responsible only for themselves." We ought not be fooled into thinking Szeman is advocating for entrepreneurialism -- instead, he's alerting everyone to how it's a tsunami overtaking culture (and advocating that we stay alert to its potential dangers...)

    6. It is a mechanism of self hood and subject formation that begins from the premise that there is no one to count on, no one who can do anything for you other than you yourself.

      This certainly seems to sum up the attitude of a lot of entrepreneurs -- and seems to describe the "common sense" reality of their endeavours. Do you think that this describes your reality?

    7. Governments cannot be entrepreneurial, nor can NGOs.

      This is a contested statement. Certainly some governments and NGOs would say the exact opposite!

    8. The demand to produce ever more is part of a system in which an imperative exists to enjoy and to become ever more.

      remember this when we get to the later weeks that highlight themes of self-actualization, well-being, and wellness. Are we ever content with what we have? What does it mean to be constantly striving for more? Does entrepreneurship encourage only this (or anything approximating a sense of "balance" also...)?

    9. Should we not welcome the cracks that might appear in the operations of biopolitics at its fullest operation?

      Even with all of the criticisms he's outlined, Szeman ends on a conditional but hopeful note. This reminds me of a song lyric:

      Leonard Cohen sang "Ring the bells that still can ring / Forget your perfect offering / There is a crack, a crack in everything / That's how the light gets in."

      Which leads me to my final note...

    10. creating an enterprise and creating a self is the same activity.

      This strikes me as really important for my vision of the course. Part of what we're going to be examining this term is how this idea of "the entrepreneurial self" is constituted. A passion project or a side-hustle isn't the only thing an entrepreneurial person is building (or working on). They're also constructing themselves...

    11. The status of entrepreneurship as a new common sense of subjectivity and economic practice

      Remember at the beginning of the article when Szeman says "we are all entrepreneurs now" (p. 472)? He doesn't mean that we are all creating business start-ups. Rather, he's suggesting that there is a spirit-of-the-times wherein entrepreneurship has become this new common-sense reality. It is both a dominant way of thinking about how we ought to act, AND an informal rulebook for how economies (and other forms of practice) ought to function too... In other words, entrepreneurship isn't just about undertaking profit-making (and risk-inducing) economic practices in capitalism. Rather, it's about undertaking a new subjectivity, a new identity when it comes to how we think of ourselves, how we relate to others, and how we respond to our wider social, cultural, political, and economic environment.

    12. the entrepreneur is the neo-liberal subject par excellence

      remember this term (neoliberal) for two weeks hence!

    13. The figure of the entrepreneur embodies the values and attributes that are celebrated as essential for the economy to operate smoothly and for the contemporary human being to flourish.

      remember this rhetorical nod to "flourishing" (which we'll revisit in earnest in the 2nd or 3rd last week of the semester...)

    14. I love this comment. It links brilliantly to next week's content...

    15. the entrepreneur is abstracted and universalized into a model for all citizens
    16. s is, in the main, inevitable in the new world of the devices and gadgets that increasingly mediate our lives. What made this article about Boomtrain distin
    17. definitely stay tuned in week 3 (when we talk about the "auto-preneur") as the success vs. hardship struggle becomes paramount. Also, passion projects are the very root of "unconventional" entrepreneurs (perhaps even more so than "conventional" ones...)

    18. This is the "modus operandi" for the whole course! What does this mean (for you)?

    19. Gary Vee is a fine example! You've also linked perfectly to next week, as we idolize such figures, making them into heroes of either capitalist achievement or pinnacles of success in other fields (like sports -- some of which aren't necessarily roads to financial success, like a lot of Olympic pursuits...).

    20. To quote a silly Netflix series about baking escapades, you "nailed it".

    1. “The water utility did not have a corrosion-control plan…”

      Masten, Davies, and McElmurry, “Flint Water Crisis: What Happened and Why?,” 26.

    2. Still, officials did not implement adequate corrections for another few months.

      Pauli, “The Flint water crisis,” 5.

    3. Legionellosis, a form of pneumonia, attacks the lungs and can spread through infected drinking water.

      Mayo Clinic, “Legionnaires’ Disease - Symptoms & Causes - Mayo Clinic,” last modified May 24, 2021. https://www.mayoclinic.org/diseases-conditions/legionnaires-disease/symptoms-causes/syc-20351747.

    4. In addition to those contaminants, during the summers of 2014 and 2015, 91 cases of Legionellosis were reported.

      Masten, Davies, and McElmurry, “Flint Water Crisis: What Happened and Why?,” 24.

    5. The trihalomethane content far exceeded the levels allowed under the federal Safe Drinking Water Act, but it went unchecked by both federal and state governments for months.

      Pauli, “The Flint water crisis,” 2.

    6. These chemicals can cause many health problems, including cancer.

      Jennifer Byrd, “Trihalomethanes in Water,” Water Filter Guru, last modified July 5, 2024. https://waterfilterguru.com/trihalomethanes-in-water/.

    7. By the summer of 2014, water testing revealed higher than acceptable levels of E. coli, trihalomethane concentrations, lead, and other harmful chemicals.

      Masten, Davies, and McElmurry, “Flint Water Crisis: What Happened and Why?,” 23.

    8. In particular, discoloration included a red tinge, which is associated with iron corrosion.

      Masten, Davies, and McElmurry, “Flint Water Crisis: What Happened and Why?,” 31.

    9. “Sufficient pilot testing and corrosion studies were not commissioned and completed… Furthermore, since the Flint plant had not been fully operational in almost 50 years, was understaffed, and some of the staff were undertrained, it is not surprising that it was difficult to achieve effective treatment.”

      Masten, Davies, and McElmurry, “Flint Water Crisis: What Happened and Why?,” 31.

    10. An emergency manager from the Michigan state government supported the switch.

      Peter Christensen, David A. Keiser, and Gabriel E. Lade, “Economic Effects of Environmental Crises: Evidence From Flint, Michigan,” American Economic Journal. Economic Policy 15, no. 1 (February 1, 2023): 201.

    11. in 1967 city officials began purchasing treated Lake Huron water through the Detroit Water and Sewage Department (DWSD).

      Susan J Masten, Simon H Davies, and Shawn P McElmurry, “Flint Water Crisis: What Happened and Why?” American Water Works Association 108, no. 12 (December 1, 2016): 23.

    12. “structural racism”.

      Pauli, “The Flint water crisis,” 4.

    13. “one of the most significant environmental contamination events in recent American history”

      Benjamin J Pauli, “The Flint water crisis,” WIREs Water 7, no. 3 (March 12, 2020): 1.

    1. the state predictable from the outside (i.e., the state describing the knowledge of the experience from the point of view of an external observer), which we call epistemic

      for - definition - epistemic

      definition - epistemic - the internal state of another, as predicted and described from the point of view of an outside observer

    2. the internally experienced quantum state, since it corresponds to a definite experience–not to a random choice–must be pure, and we call it ontic.

      for - definition - ontic

      definition - ontic - an internally experienced quantum state that corresponds to a definite experience and must therefore be pure
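
      A standard way to make this ontic (pure) vs epistemic (mixed) distinction precise - my gloss in textbook density-matrix notation, not a quotation from the paper - is:

      $$\rho_{\text{ontic}} = |\psi\rangle\langle\psi|, \qquad \operatorname{Tr}\big(\rho_{\text{ontic}}^{2}\big) = 1$$

      $$\rho_{\text{epistemic}} = \sum_k p_k\,|\psi_k\rangle\langle\psi_k|, \qquad \operatorname{Tr}\big(\rho_{\text{epistemic}}^{2}\big) \le 1$$

      The external observer, not knowing which definite experience occurred, can at best assign the probabilistic mixture; only the experiencer's own state is a single pure state.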

    3. for - Giacomo Mauro D'Ariano - Federico Faggin - Hard Problem and Free Will: An Information-Theoretical Approach - consciousness research

      from

    1. Virtual urbanism is the engine of augmented reality, of that promised land of a new paradigm.

      And this is the environment in which LLMs are growing up.

    2. definition of this humanism. We are used to saying that language, in a way, defines the human. Yet it seems to me, following the urbanists, but also the founding myths of our cultures, that what is proper to the human is also the way we shape and inhabit space.

      Code creates linguistic architectures, structural units that go on to make up even larger units (e.g. social networks).

    3. The digital, by contrast, seems destined for variation and diversity: an MP3 file can easily be converted to WAV or OGG format, etc. To appreciate the challenges of the digital, as well as its promises, we must, it seems to me, take this radical difference seriously in its full scope. For it touches every aspect of our culture, a culture of information and knowledge, a culture of exchange. These objects, to borrow Yves Jeanneret's fine expression [2: Yves Jeanneret, 2008, Penser la trivialité, vol. 1: La vie…], are cultural beings, with their own dynamics and intelligence

      Software programs are cultural beings, with their own dynamics and intelligence.

    1. telomere-proximal regions in MEN ts mutant cells. A subtelomeric region

      Subtelomeres are composed of two regions: a telomere-proximal region and a telomere-distal region.

    2. TEM1, CDC15, MOB1 or DBF2

      Components involved in the MEN

    3. We hypothesized that high Cdk activity inhibits DNA synthesis in metaphase, and that the inhibition of Cdk during mitotic exit enables synthesis to complete.

      Q: how does Cdk activity, specifically the inhibition of Cdk at the end of mitosis, affect DNA synthesis?

      High Cdk activity during metaphase (M checkpoint) stalls DNA replication at this point.

      And the inhibition of Cdk during mitotic exit should allow cells to move on in the cell cycle and complete DNA synthesis.

    4. Our results suggest that cells arrested in metaphase for prolonged periods of time do not undergo DNA synthesis during the arrest

      The presence of ssDNA during anaphase in 44% of yeast cells (Figures 1-3) may suggest that there is no DNA synthesis occurring during mitosis in these cells.

    5. Interestingly, deletion of the RAD9 checkpoint gene abolished nuclear division delays and chromatin bridge formation in response to challenges in DNA synthesis during mitosis

      Rad9 = DNA damage response during G2

      When the gene was knocked-out, the inhibition of nuclear division and chromatin bridges were abolished.

      RAD9 = mediates stalled DNA rep.

    6. On the other hand, this result also suggests that mitotic DNA synthesis promotes nuclear division

      If pol3 is required for both DNA synthesis and for nuclear division, then the two actions must be interdependent.

    7. maximize population-level growth rate while simultaneously exploring greater genetic space

      Selects for regions that evolve faster - may produce helpful mutations

    8. c

      Quantified in boxplots:

      • Duration of chromatin bridges and time to divide nuclear contents were greater in cells with ssDNA.
    9. b,c

      Overall, figure 3 shows that 44% of cells have ssDNA during anaphase of mitosis. This seems to correlate with the slowing of cell cycle progression and an increase in the duration that chromosome bridges exist and the time for nuclear division.

    10. b

      In the 44% of cells that had ssDNA during anaphase, more chromatin bridges were observed.

    11. Cells expressing Spc42-mCherry to visualise the spindle poles, and Rfa2-GFP to visualise RPA foci, were grown to mid-log phase and imaged every 6 min for 3 h at 30 °C. RPA foci are present in all cells in S-phase and persist during anaphase in 44% of cells (arrows). Time from anaphase onset is indicated in minutes. Scale bar, 2 µm.

      GFP > single-stranded DNA = can be associated with DNA synthesis
      mCherry > spindle poles = appear in prophase, break down in telophase

      The arrows in panel a point to signs of single stranded DNA that persists during anaphase (anaphase is identified by appearance of red spindle poles).

    12. a

      44% of cells had single stranded DNA in anaphase

    1. Overwrite Kibo.Config key values

      This step is no longer required.

    2. If the assets are not available at the link, use the paypal-multiparty branch.

      The multiparty changes are now available on the master branch; no one should need to use the paypal-multiparty branch anymore.

    1. MvP : "Direct Multi-view Multi-person 3D Pose Estimation" Tao Wang, Jianfeng Zhang, Yujun Cai, Shuicheng Yan, Jiashi Feng

      Influential paper on learning consistent skeletal models of human pose from multiview images

    2. MvP designs a novel geometrically guided attention mechanism, called projective attention, to more precisely fuse the cross-view information for each joint.

      question: what is projective attention?

    1. I'm often asked to describe the “advantages” of free software. But the word “advantages” is too weak when it comes to freedom. Life without freedom is oppression, and that applies to computing as well as every other activity in our lives.
    1. A proprietary program puts its developer or owner in a position of power over its users. This power is in itself an injustice.
    2. Of course, the developer usually does not do this out of malice, but rather to profit more at the users' expense. That does not make it any less nasty or more legitimate.
    3. Power corrupts; the proprietary program's developer is tempted to design the program to mistreat its users.
    4. Yielding to that temptation has become ever more frequent; nowadays it is standard practice. Modern proprietary software is typically an opportunity to be tricked, harmed, bullied or swindled.
    5. Software designed to function in a way that mistreats the user is called malware.
    6. Microsoft is using malware tactics to get users to switch to their web browser, Microsoft Edge, and their search engine, Microsoft Bing. When users launch the Google Chrome browser Microsoft injects a pop up advertisement in the corner of the screen advising users to switch to Bing. Microsoft also imported users Chrome browsing data without their knowledge or consent.
    1. Since the number of queries is larger than the actual number of people, we train an MLP-based classifier fβ(.) to predict a score for each query based on the appearance term to remove the “empty” ones.

      Initially there are more queries than there are actual pedestrians. A classifier is trained to prune out the non-people.

    2. Really interesting and innovative method that uses multi-view data for human pose estimation and pedestrian detection.

    3. We adopt a hierarchical query embedding scheme proposed in [36] to reduce the number of learnable parameters.

      A hierarchical scheme to reduce learnable parameters; if you know something about the model's structure, exploiting it is good!

    4. Most closely related to our work, MvP [36] extends DETR for multi-view 3D human pose estimation.

      mostly based on [36]

    1. Nothing could exceed the intentness with which this scientific gardener examined every shrub which grew in his path: it seemed as if he was looking into their inmost nature, making observations in regard to their creative essence, and discovering why one leaf grew in this shape and another in that, and wherefore such and such flowers differed among themselves in hue and perfume. Nevertheless, in spite of this deep intelligence on his part, there was no approach to intimacy between himself and these vegetable existences. On the contrary, he avoided their actual touch or the direct inhaling of their odors with a caution that impressed Giovanni most disagreeably; for the man’s demeanor was that of one walking among malignant influences, such as savage beasts, or deadly snakes, or evil spirits, which, should he allow them one moment of license, would wreak upon him some terrible fatality. It was strangely frightful to the young man’s imagination to see this air of insecurity in a person cultivating a garden, that most simple and innocent of human toils, and which had been alike the joy and labor of the unfallen parents of the race. Was this garden, then, the Eden of the present world? And this man, with such a perception of harm in what his own hands caused to grow,—was he the Adam?

      The gardener's cautious demeanor, avoiding the plants as if they were "deadly snakes," shows the dangerous aspect of nature in the story. Nature is depicted as a harmful force that should not be underestimated.

    1. https://docutopia.sustrato.red/juliana:autonomo

      It would be useful to add to this document not only the original HedgeDoc/Docutopia links but also the links to the repository.

    2. Tutorial Markdown https://docutopia.sustrato.red/juliana:markdown

      Keep in mind that subsections usually follow a continuous heading hierarchy. Replace:

      ### Tutorial ...

      with

      ## Tutorial ...

    1. The commercial license prohibits reverse engineering and tampering with our license key mechanism unlocking paid features so that we can run a compliant and fair commercial business.
    1. Chapter 4 introduces the Formula² hierarchical formula language. Chapter 5 introduces the Madata JavaScript API and federated authentication architecture. Chapter 3 introduces the Mavo HTML language. Chapter 7 introduces Lifesheets, a domain-specific visual application builder for building Mavo applications for personal tracking.

      Weird that these are not in order.

    1. We demonstrate here that bulk protein content partitions to wastewater solids. Using a combination of western blotting, ELISA, and mass spectrometry, we identify a robust repertoire of intact human antibodies, predominantly secreted IgA

      Wastewater has a lot of junk that could bind to the sandwich ELISA non-specifically.

      To validate this, I was wondering if there would be a good negative control antigen binder that you could look for - for example, some Ebola antibodies that you won't expect to be in this wastewater?

    1. Checkerboard Is there anything else more relevant here? Maybe Discover the North Country?

      Remove smile test and financial aid

    2. Virtual Tour I don't love this.

    3. How to Visit - Not accurate. Visit options?

      Separate accordion: **How to Get to Campus**
      Link to Maps, Directions, and Parking
      Consider adding this as a sub menu link on the left nav
      Add a section for car rentals
      Add a note about car sharing apps

      Maps, Directions, and Parking page: include Regional Accommodations; update Bus Travel section with ADM notes

      Regional Accommodations: consider adding this as a sub menu link on the left nav

      ALT: Diff cards - Everything you need to know about visiting / Visit Options / Interviews / How to Get Here / Campus parking and maps / Regional Accommodations

  2. learn.foundry.com
    1. The Markup tool being used to annotate areas of the scene that require lighting and animation.

      Missing video or bad URL below.

    1. Joint Public Review:

      Summary:

      This study retrospectively analyzed clinical data to develop a risk prediction model for pulmonary hypertension in high-altitude populations. This finding holds clinical significance as it can be used for intuitive and individualized prediction of pulmonary hypertension risk in these populations. The strength of evidence is high, utilizing a large cohort of 6,603 patients and employing statistical methods such as LASSO regression. The model demonstrates satisfactory performance metrics, including AUC values and calibration curves, enhancing its clinical applicability.

      Strengths:

      (1) Large Sample Size: The study utilizes a substantial cohort of 6,603 subjects, enhancing the reliability and generalizability of the findings.

      (2) Robust Methodology: The use of advanced statistical techniques, including least absolute shrinkage and selection operator (LASSO) regression and multivariate logistic regression, ensures the selection of optimal predictive features.

      (3) Clinical Utility: The developed nomograms are user-friendly and can be easily implemented in clinical settings, particularly in resource-limited high-altitude regions.

      (4) Performance Metrics: The models demonstrate satisfactory performance, with strong AUC values and well-calibrated curves, indicating accurate predictions.

      Weaknesses:

      (1) Lack of External Validation: The models were validated internally, but external validation with cohorts from other high-altitude regions is necessary to confirm their generalizability.

      (2) Simplistic Predictors: The reliance on ECG and basic demographic data may overlook other potential predictors that could improve the models' accuracy and predictive power.

      (3) Regional Specificity: The study's cohort is limited to Tibet, and the findings may not be directly applicable to other high-altitude populations without further validation.

    1. city engineers and technocrats intentionally neglect the water infrastructure in Premnagar, a Muslim settlement in Mumbai.

      Anand, Nikhil. “Municipal Disconnect: On Abject Water and Its Urban Infrastructures.” Ethnography 13, no. 4 (April 12, 2012): 487–509. https://doi.org/10.1177/1466138111435743.

    2. Informing communities about effective water use and equipping them with tools to monitor water contamination levels is an essential first step

      “Public Health Engineering Department.” WBPHED. Accessed July 29, 2024. https://www.wbphed.gov.in/en/laboratories/map.

    3. Numerous efforts have been made to manage water resources and ensure adequate supplies of good quality water for the global population.

      Johnson, S.P. The earth summit: The United Nations conference on environment and development (UNCED). Verfass. Recht Übersee 1994, 28, 134–135.

    4. These issues have collectively led to a severe scarcity of safe water, affecting 35 million people.
    5. Residents see their water as contaminated, which reflects their exclusion from a modern citizenship that comes with treated water. Mumbai isn’t the only place in India facing this crisis, driven by various factors such as pollution, inefficient agricultural practices, insufficient government planning, and relentless urban sprawl.

      Shiao, Tien. “3 Maps Explain India’s Growing Water Risks.” Trellis, July 24, 2024. https://trellis.net/article/3-maps-explain-indias-growing-water-risks/.

      ICAR. “DARE/ICAR Annual Report 2014-15.” Annual Report, May 27, 2014. https://doi.org/10.30875/977ff2df-en.

    6. There are claims of development in India, but in Mumbai, 10 wells were installed, some placed next to toilets where sewage contamination can seep in. How can this be considered development?

      Anand, Nikhil. “Municipal Disconnect: On Abject Water and Its Urban Infrastructures.” Ethnography 13, no. 4 (April 12, 2012): 487–509. https://doi.org/10.1177/1466138111435743.

    1. The Jal Board, an agency responsible for water supply and sewage management in the National Capital Territory of Delhi, India, was indicted for spending $200 million on pollution clean-up without achieving tangible results.

      Mehta, Prashant. Impending Water Crisis in India and Comparing Clean Water Standards Among Developing and Developed Nations, 2012.

    2. Additionally, water pollution exacerbates economic challenges, as communities are forced to invest in expensive water purification methods.

      Khatun, Rozina. Water Pollution: Causes, Consequences, Prevention Method and Role of WBPHED with Special Reference from Murshidabad District, 2017.

    3. Along with the short-term effects of water pollution, there are long-term effects that can harm generations to come. Polluted water can severely affect various organs in the human body, leading to heart and kidney injuries.

      Khatun, Rozina. Water Pollution: Causes, Consequences, Prevention Method and Role of WBPHED with Special Reference from Murshidabad District, 2017.

    4. The drought had a significant economic impact, causing the agricultural GDP growth rate to collapse to 0.5%, well below the population growth rate of 1.4%.

      Gulati, Ashok, and Pritha Banerjee. “Emerging Water Crisis in India: Key Issues and Way Forward.” Indian Journal of Economics, 2016.

    5. In 2014 and 2015, India experienced consecutive droughts that led to crop failures and livestock losses. Some regions, such as the Marathwada region of Maharashtra, were more dependent on rainfall for agriculture and thus more severely affected.

      Gulati, Ashok, and Pritha Banerjee. “Emerging Water Crisis in India: Key Issues and Way Forward.” Indian Journal of Economics, 2016.

    6. Water stress occurs when a country's annual water supplies drop below 1,700 cubic meters per person. While this situation may lead to occasional water shortages, the country might still manage its water resources effectively. In contrast, water scarcity is a more severe condition that arises when water supplies fall below 1,000 cubic meters per person. At this critical level, a country faces significant challenges that can threaten food production, undermine economic development, and harm ecosystems. Water scarcity represents a dire shortage of water resources, leading to serious socio-economic and environmental issues.

      Mehta, Prashant. Impending Water Crisis in India and Comparing Clean Water Standards Among Developing and Developed Nations, 2012.

    1. that you can easily teach inside your limited weekly sessions…  that students will be inspired to try

      Delete "inspired to try" (you use it below); mention short sessions - this is a differentiator

    2. help you help

      reword. Empower you to?

    3. 20+

      keep # consistent

    4. I love collaborating with expert educators in the world of “student success strategies” and am super psyched about our first annual Art of Motivating Students Summit!

      what you learned/discovered/what was the aha moment & therefore why you created this summit

    5. upgrade

      improve?

    6. Question whether you can make it through, not to mention your students.

      reduce bullets

    7. Is your battery draining too?

      great

    8. As a coach or tutor, you’re getting a lot of things right: You have a deep love for helping students succeed and you’ve collected a toolkit of strategies that you know will help your students (if only they would follow through. sigh); But… Your students are quickly losing steam as the school year progresses. Disappointing grades and mounting overwhelm is killing momentum. Learning disabilities require specialized strategies and accommodations you haven’t been trained in. It doesn’t help that students are facing anxiety and depression like no generation before!

      excellent

    9. practical

      actionable, that you can implement = this is your differentiator

    10. energize

      like this

    11. finish the semester strong

      use in banner?

    12. tutors

      to be able to do what?

    1. Reviewer #5 (Public Review):

      After reading the manuscript and the concerns raised by reviewer 2, I see both sides of the argument - the relative location of the trigeminal nucleus versus the inferior olive is quite different in elephants (and different from previous studies in elephants), but when there is a large disproportionate magnification of a behaviorally relevant body part at most levels of the nervous system (certainly in the cortex and thalamus), you can get major shifting in the location of different structures. In the case of the elephant, it looks like there may be a lot of shifting. Something that is compelling is that the number of modules separated by the myelin bands corresponds to the number of trunk folds, which is different in the different elephants. This sort of modular division based on body parts is a general principle of mammalian brain organization (demonstrated beautifully for the cuneate and gracile nuclei in primates, VP in most species, and S1 in a variety of mammals such as the star-nosed mole and duck-billed platypus). I don't think these relative changes in the brainstem would require major genetic programming - although some surely exists. Rodents and elephants have been independently evolving for over 60 million years, so there is a substantial amount of time for changes in each lineage to occur.

      I agree that the authors have identified the trigeminal nucleus correctly, although comparisons with more out-groups would be needed to confirm this (although I'm not suggesting that the authors do this). I also think the new figure (which shows previous divisions of the brainstem versus their own) allows the reader to consider these issues for themselves. When reviewing this paper, I actually took the time to go through atlases of other species and even look at some of my own data from highly derived species. Establishing homology across groups based only on relative location is tough, especially when there appear to be large shifts in the relative location of structures. My thoughts are that the authors did an extraordinary amount of work on obtaining, processing and analyzing this extremely valuable tissue. They document their work with images of the tissue and their arguments for their divisions are solid. I feel that they have earned the right to speculate - with qualifications - which they provide.

    1. Author response:

      Reviewer #3 (Public Review):

      (1) Conditions on growth and interaction rates for feasibility and stability. The authors approach this using a mean field approximation, and it is important to note that there is no particular temperature dependence assumed here: as far as it goes, this analysis is completely general for arbitrary Lotka-Volterra interactions.

      However, the starting point for the authors' mean field analysis is the statement that "it is not possible to meaningfully link the structure of species interactions to the exact closed-form analytical solution for [equilibria] 𝑥^*_𝑖 in the Lotka-Volterra model."

      I may be misunderstanding, but I don't agree with this statement. The time-independent equilibrium solution with all species present (i.e. at non-zero abundances) takes the form

      x^* = A^{-1}r

      where A^{-1} is the inverse of the community matrix A, and r is the vector of growth rates. The exceptions to this would be when one or more species has abundance = 0, or A is not invertible. I don't think the authors intended to tackle either of these cases, but maybe I am misunderstanding that.

      So to me, the difficulty here is not in writing a closed-form solution for the equilibrium x^*, it is in writing the inverse matrix as a nice function of the entries of the matrix A itself, which is where the authors want to get to. In this light, it looks to me like the condition for feasibility (i.e. that all x^* are positive, which is necessary for an ecologically-interpretable solution) is maybe an approximation for the inverse of A---perhaps valid when off-diagonal entries are small. A weakness then for me was in understanding the range of validity of this approximation, and whether it still holds when off-diagonal entries of A (i.e. inter-specific interactions) are arbitrarily large. I could not tell from the simulation runs whether this full range of off-diagonal values was tested.
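
      A quick sketch of the algebra at issue (my reconstruction, assuming a sign convention in which competition enters with a minus sign; the paper's own conventions may differ):

      $$\frac{dx_i}{dt} = x_i\Big(r_i - \sum_j A_{ij}\,x_j\Big) \;\Rightarrow\; x^{*} = A^{-1} r \ \text{at an interior (all-positive) equilibrium.}$$

      Writing $A = D(I + B)$ with $D = \mathrm{diag}(A_{ii})$ and $B_{ij} = A_{ij}/A_{ii}$ for $i \neq j$, a first-order (weak-interaction) expansion of the inverse gives

      $$x_i^{*} \approx \frac{r_i}{A_{ii}} - \sum_{j \neq i} \frac{A_{ij}}{A_{ii}}\,\frac{r_j}{A_{jj}},$$

      and feasibility then amounts to every such term being positive. This expansion is only trustworthy when off-diagonal interactions are small relative to the diagonal, which is presumably the range-of-validity question being raised here.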

      We thank the reviewer for pointing this out and we agree that the language used is imprecise. The GLV model is solvable using the matrix inversion method but as they note, this does not give an interpretable expression in terms of the system parameters. This is important as we aim to build understanding of how these parameters (which in turn depend on temperature) affect the richness in communities. We have made this clearer in lines 372-379.

      In regards to the validity of the approximation we have significantly increased the detail of the method in the manuscript, including the assumptions it makes (lines 384-393). In general the method assumes that any individual interaction has a weak effect on abundance. This will fail when the variation in interactions becomes too strong but should be robust to changes in the average interaction strength across the community.

      As a secondary issue here, it would have been helpful to understand whether the authors' feasible solutions are always stable to small perturbations. In general, I would expect this to be an additional criterion needed to understand diversity, though as the authors point out there are certain broad classes of solutions where feasibility implies stability.

      As the reviewer notes, previous work using the GLV model by ? has shown that feasibility almost surely implies stability in the GLV. Thus we expect that our richness estimates derived from feasibility will closely resemble those from stability. We have amended the main text to make this argument clear on lines 321-335.

      (2) I did not follow the precise rationale for selecting the temperature dependence of growth rate and interaction rates, or how the latter could be tested with empirical data, though I do think that in principle this could be a valuable way to understand the role of temperature dependence in the Lotka-Volterra equations.

      First, as the authors note, "the temperature dependence of resource supply will undoubtedly be an important factor in microbial communities"

      Even though resources aren't explicitly modeled here, this suggests to me that at some temperatures, resource supply will be sufficiently low for some species that their growth rates will become negative. For example, if temperature dependence is such that the limiting resource for a given species becomes too low to balance its maintenance costs (and hence mortality rate), it seems that the net growth rate will be negative. The alternative would be that temperature affects resource availability, but never such that a limiting resource leads to a negative growth rate when a taxon is rare.

      On the other hand, the functional form for the distribution of growth rates (eq 3) seems to imply that growth rates are always positive. I could imagine that this is a good description of microbial populations in a setting where the resource supply rate is controlled independently of temperature, but it wasn't clear how generally this would hold.

      We thank the reviewer for their comment. The assumption of positive growth rates is indeed a feature of the Boltzmann-Arrhenius model of temperature dependence. We use the Boltzmann-Arrhenius model due to the dependence of growth on metabolic rate. As metabolic rate is ultimately determined by biochemical kinetics, its temperature dependence is well described by the Boltzmann-Arrhenius. In addition to this reasoning, there is a wealth of empirical evidence supporting the use of the Boltzmann-Arrhenius to describe the temperature dependence of growth rate in microbes.

      Ultimately the temperature dependence of resource supply is not something we can directly consider in our model. As such we have to assume that resource supply is sufficient to maintain positive growth rates in the community. Note that this assumption only requires that resource supply is sufficient to maintain positive growth rates (i.e. the maximal growth rate of species in isolation), not that resource supply is sufficient to maintain growth in the presence of intra- and interspecific competition. We have updated the manuscript in lines 156-159 to make these assumptions more clear.

      Secondly, while I understand that the growth rate in the exponential phase for a single population can be measured to high precision in the lab as a function of temperature, the assumption for the form of the interaction rates' dependence on temperature seems very hard to test using empirical data. In the section starting L193, the authors seem to fit the model parameters using growth rate dependence on temperature, but then assume that it is reasonable to "use the same thermal response for growth rates and interactions". I did not follow this, and I think a weakness here is in not providing clear evidence that the functional form assumed in Equation (4) actually holds.

      The reviewer is correct: it is very difficult to measure interaction coefficients experimentally and, to our knowledge, there is little to no data available on their empirical temperature responses. As a best guess, we use the observed variation in thermal physiology parameters for growth rate as a proxy, assuming that interactions must also depend on the metabolic rates of the interacting species (see also the response to comment 8).

    1. eLife assessment

      This important study builds on a previous publication, demonstrating that T. brucei has a continuous endomembrane system, which probably facilitates high rates of endocytosis. Using a range of cutting-edge approaches, the authors present compelling evidence that an actomyosin system, with the myosin TbMyo1 as an active molecular motor, is localized close to and can associate with the endosomal system in the bloodstream form of Trypanosoma brucei. It shows convincingly that both actin and Myo I play a role in the organization and integrity of the endosomal system: both RNAi-mediated depletion of Myo1, and treatment of the cells with latrunculin A resulted in endomembrane disruption. This work should be of interest to cell biologists and microbiologists working on the cytoskeleton, and unicellular eukaryotes.

    1. for - The projected timing of climate departure from recent variability - Camilo Mora et al. - 6th mass extinction - biodiversity loss

      Summary
      - This is an extremely important paper with a startling conclusion about the magnitude of the social and economic impacts of the biodiversity disruption coming down the pipeline
      - It is likely that very few governments are prepared to adapt to these levels of ecosystemic disruption
      - Climate departure is defined as an index of the year when:
        - the projected mean climate of a given location moves to a state that is continuously outside the bounds of historical variability
      - Climate departure is projected to happen regardless of how aggressive our climate mitigation pathway is
      - The business-as-usual (BAU) scenario in the study is RCP8.5 and leads to a global climate departure mean of 2047 (+/- 14 years s.d.), while
      - the more aggressive RCP4.5 scenario (which we are currently far from) leads to a global climate departure mean of 2069 (+/- 18 years s.d.)
      - So regardless of how aggressively we mitigate, we cannot avoid climate departure.
      - What consequences will this have on economies around the world? How will we adapt?
      - The world is not prepared for the vast ecosystem changes, which will reshape our entire economy all around the globe.

      to - Nature publication - https://hyp.is/3wZrokX9Ee-XrSvMGWEN2g/www.nature.com/articles/nature12540 - Climate Departure map of major cities around the globe - 2013

    1. Author response:

      Reviewer #3 (Public Review):

      The paper by Rai and colleagues examines the transcriptional response of Candida glabrata, a common human fungal pathogen, during interaction with macrophages. They use RNA PolII profiling to identify not just the total transcripts but instead focus on the actively transcribing genes. By examining the profile over time, they identify particular transcripts that are enriched at each timepoint, and build a hierarchical model for how a transcription factor, Xbp1, may regulate this response. Due to technical difficulties in identifying direct targets of Xbp1 during infection, the authors then turn to the targets of Xbp1 during cellular quiescence.

      The authors have generated a large and potentially impactful dataset, examining the responses of C. glabrata at an important host-pathogen interface. However, the conclusions that the authors make are not well supported by the data. The ChIP-seq is interesting, but the authors make conclusions about the biological processes that are differentially regulated without testing them experimentally. Because Candida glabrata has a significant percentage of its genome without GO term annotation, the GO term enrichment analysis is less useful than in a model organism. To support these claims, the authors should test the specific phenotypes and validate that the transcriptional signature is observed at the protein level.

      Additionally, the authors should include images of the infections, along with measurements of phagocytosis, to show that the time points are appropriate. At 30 minutes, are C. glabrata actually internalized or just associated? This may explain the difference in adherence genes at the early timepoint. For example, in Lines 123-132, the authors could measure the timing of ROS production by macrophages to determine when these attacks are deployed, instead of speculating based on the increased transcription of DNA damage response genes. Potentially, other factors could be influencing the expression of these proteins. At the late stage of infection, the authors should measure whether the C. glabrata cells are proliferating, or if they have escaped the macrophage, as other fungi can during infection. This may explain some of the increase in transcription of genes related to proliferation.

      An additional limitation to the interpretation of the data is that the authors should put their work in the context of the existing literature on C. albicans temporal adaptation to macrophages, including recent work from Munoz (doi: 10.1038/s41467-019-09599-8), Tucey (doi: 10.1016/j.cmet.2018.03.019), and Tierney (doi: 10.3389/fmicb.2012.00085), among others.

      When comparing the transcriptional profile between WT and the xbp1 mutant, it is not clear whether the authors compared the strains under non-stress conditions. The authors should include an analysis comparing the wild-type to the xbp1 mutant in the absence of macrophage stress, as the authors' claims of precocious transcription may be a function of overall decreased transcriptional repression, even in the absence of the macrophage stress. The different cut-offs used to call peaks in the two strain backgrounds are also somewhat concerning - it is not clear to me whether that will obscure the transcriptional signature of each of the strains. Additionally, the authors go on to show that the xbp1 mutant has a significant proliferation defect in macrophages, so potentially this could confound the PolII binding sites if the cells are dying.

      In the section on hierarchical analysis of transcription factors, at least one epistasis experiment should have been performed to validate the functional interaction between Xbp1 and a particular transcription factor. If the authors propose a specific motif, they should test this experimentally through EMSA assays to fully test that the motif is functional.

      The jump from macrophages to quiescent culture is also not well justified. If the transcriptional program is so dynamic during a timecourse of macrophage infection, it is hard to translate the findings from a quiescent culture to this host environment.

      Overall, there is a strong beginning and the focus on active transcription in the macrophage is an exciting approach. However, the conclusions need additional experimental evidence.

      We thank this reviewer for the critical analysis of our manuscript and the comments.

      We fully agree that the jump from macrophages to quiescent culture is also not well justified. We have successfully performed CgXbp1 ChIP-seq during macrophage infection and have rewritten the manuscript according to the new results. With the CgXbp1 ChIP-seq data during macrophage infection added, we have removed the data related to quiescence to focus the paper on the macrophage response. Because of this, we have also removed the DNA binding motif analysis from this work and will report the findings in a separate manuscript comparing CgXbp1 bindings between macrophage response and quiescence.

      As mentioned above, the RNAPII ChIP-seq time course experiment compared RNAP occupancies at different times during infection to the first infection time point. We did not calculate relative to the data in the absence of stress (e.g. before infection), because Xbp1 was expressed at a low level and induced by stresses. Hence its role under no stress conditions is expected to be less than inside macrophages. In addition, up-regulation of its target genes depends on the presence of their transcriptional activators under the experimental conditions, which is going to be very different in normal growth media (RPMI or YPD; i.e. before infection) versus inside macrophages. Hence, comparing to normal growth media would not show the real CgXbp1 effects, and/or the CgXbp1 effect might be different. In fact, this can be seen from the new RNAseq analysis of wildtype and Cgxbp1∆ C. glabrata cells in the presence and absence of fluconazole (which was added to the revised manuscript to study CgXbp1’s role in fluconazole resistance). The result shows that CgXbp1 (which was expressed at a low level) has a very small effect on global expression and the up-regulated genes are mainly related to transmembrane transport. More importantly, the effect of the Cgxbp1∆ mutant on TCA cycle and amino acid biosynthesis genes’ expression during macrophage infection is not observed when the mutant is grown under normal growth conditions (YPD without fluconazole). Therefore, the results show that CgXbp1 has condition-specific effects on global gene expression, which is also dependent on the transcriptional activators present in the cell. The result of the new RNAseq analysis of wildtype and Cgxbp1∆ C. glabrata cells in the absence of fluconazole is described in lines 329-339 as follows: “On the other hand, 135 genes were differentially expressed in the Cgxbp1∆ mutant during normal exponential growth (i.e. no fluconazole treatment) (Figure 6c) with up-regulated genes highly enriched with the “transmembrane transport” function and down-regulated genes associated with different metabolic processes (e.g. carbohydrate, glycogen and trehalose metabolism, carbon metabolism, nucleotide metabolism, and transmembrane transport, etc.) (Supplementary Table 12). Interestingly, the TCA cycle and amino acid biosynthesis genes, whose expressions were accelerated in the Cgxbp1∆ mutant during macrophage infection (Figure 3C, 3D), were not affected by the loss of CgXbp1 function under normal growth conditions (i.e. in YPD media without fluconazole) (Supplementary Figure 11, Supplementary Table 11), suggesting that the overall (direct and indirect) effects of CgXbp1 are condition-specific.”

      For the comment about RNAPII bindings being affected by dying cells, our observation of reduced proliferation does not mean that the cells were dying, because we did observe an increase in cell numbers over time (i.e. the cells were proliferating), but the rate of proliferation was slower in the Cgxbp1∆ mutant compared to wildtype. Presumably, the reduced proliferation and/or growth within macrophages is due to poorer adaptation to and a compromised response to macrophages.

      We have also discussed our findings in the context of the suggested (and other) literature in various parts of the Discussion.

      Reviewer #4 (Public Review):

      Macrophages are the first line of defense against invading pathogens. C. glabrata must interact with these cells as do all pathogens seeking to establish an infection. Here, a ChIP-seq approach is used to measure levels of RNA polymerase II across Cg genes in a macrophage infection assay. Differential gene expression is analyzed with increasing time of infection. These differentially expressed genes are compared at the promoter level to identify potential transcription factors that may be involved in their regulation. A factor called CgXbp1, named on the basis of its similarity with the S. cerevisiae Xbp1 protein, is characterized. ChIP-seq is done on CgXbp1 using in vitro grown cells and a potential binding site is identified. Evidence is provided that CgXbp1 affects virulence in a Galleria system and that this factor might impact azole resistance.

      As the authors point out, candidiasis associated with C. glabrata has dramatically increased in the recent past. Understanding the unique aspects of this Candida species would be of great value in trying to unravel the basis of the increasing fungal disease caused by C. glabrata. The use of ChIP-seq analysis to assess the time-dependent association of RNA polymerase II with Cg genes is a nice approach. Identification of CgXbp1 as a potential participant in the control of this gene expression program is also interesting. Unfortunately, this work suffers by comparison to a significant amount of previous effort that renders the progress detailed here incremental at best.

      I agree that their ChIP-seq time course of RNA polymerase II distribution across the Cg genome is both elegant and an improvement on previous microarray experiments. However, these microarray experiments were carried out 14 years ago and while the current work is certainly at higher resolution, little more can be gleaned from the current work. The authors argue that standard transcriptional analysis is compromised by transcript stability effects. I would suggest that, while no approach is without issues, quite a bit has been learned from approaches like RNA-seq and there are recent developments to this technique that allow for a focus on newly synthesized mRNA (thiouridine labeling).

      The CgXbp1 characterization relies heavily on work from S. cerevisiae. This is disappointing as conservation of functional links between C. glabrata and S. cerevisiae is not always predictable.

      The effects caused by loss of CgXBP1 on virulence (Figure 4) may be statistically significant but are modest. No comparison is shown for another gene that has already been accepted to have a role in virulence to allow determination of the biological importance of this effect.

      The phenotypic effects of the loss of XBP1 on azole resistance look rather odd (Figure 6). The appearance of fluconazole resistant colonies in the xbp1 null strain occurs at a very low frequency and seems to resemble the appearance of rho0 cells in the population. The vast majority of xbp1 null cells do not exhibit increased growth compared to wild-type in the presence of fluconazole.

      Irrespective of the precise explanation, more analysis should be performed to confirm that CgXbp1 is negatively regulating the genes suggested in Figure 6A to be responsible for the increased fluconazole resistance.

      Additionally, the entire analysis of CgXbp1 is based on ChIP-seq performed using cells grown under very different conditions that the RNA polymerase II study. Evidence should be provided that the presumptive CgXbp1 target genes actually impact the expression profiles established earlier.

      We thank this reviewer for the critical analysis of our manuscript. We have done the following to address the comments; as a result, the manuscript is significantly improved.

      • The ChIP-seq data of Xbp1 in macrophages has been successfully generated and the result is now presented in Figure 2C-2F and lines 182-227 of the revised manuscript. With this addition, we have removed the ChIP-seq data related to quiescence from the revised manuscript and rewritten the manuscript to focus on the role of Xbp1 in macrophages.

      • We agree that the conservation of functional links between C. glabrata and S. cerevisiae is not always predictable. That’s the reason why we did not solely rely on the S. cerevisiae network for inferring Xbp1’s functions, and had undertaken several different ways (e.g. ChIP-seq of Xbp1 and characterization of the Cgxbp1∆ mutant) to delineate its functions.

      • We also agree that the virulence effect is modest, but it is, nevertheless, an effect that may contribute to the overall virulence of C. glabrata. Since virulence is a pleiotropic trait involving many genes and every gene affects different aspects of the complex process, we feel that it is not fair to penalize a given gene based on its (weaker) effect relative to another gene. Therefore, we respectfully disagree that another gene should be included for benchmarking the effect.

      • We have measured C. glabrata cell numbers in a time course experiment. The result (presented in Figure 4A) showed that there was an increase in cell number at the end of the macrophage infection time course experiment (e.g. 8 hr). We have highlighted this information on lines 278-283.

      • Additional analysis of the fluconazole resistance phenotype of the Cgxbp1∆ mutant has been added, including standard MIC assays. The results are presented in Figure 5C-5E.

      • As suggested and to understand the role of CgXbp1 on fluconazole resistance, we have now carried out RNAseq analysis of WT and the Cgxbp1∆ mutant in the presence and absence of fluconazole. The genes differentially controlled in the Cgxbp1∆ mutant have been identified and a proposed model on how CgXbp1 affects fluconazole resistance is added to Figure 7 in the revised manuscript.

    1. Like functions, classes can be defined inside another expression, passed around, returned, assigned, and so on.

      Then why do we need classes at all, if they do what functions already do? Syntactic sugar?
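
      A minimal sketch (mine, not from the tutorial; Animal and makeLoudClass are made-up names for illustration) of what "first-class" means here - a class expression can be assigned, passed, and returned like any other value:

      ```js
      // A class expression assigned to a variable, just like a function expression.
      const Animal = class {
        constructor(name) { this.name = name; }
        speak() { return `${this.name} makes a sound`; }
      };

      // A class can be passed as an argument...
      function describe(SomeClass) {
        return new SomeClass("generic").speak();
      }

      // ...and returned from a function (a simple class factory).
      function makeLoudClass(Base) {
        return class extends Base {
          speak() { return super.speak().toUpperCase(); }
        };
      }

      const LoudAnimal = makeLoudClass(Animal);
      console.log(describe(Animal));              // "generic makes a sound"
      console.log(new LoudAnimal("dog").speak()); // "DOG MAKES A SOUND"
      ```

      Even so, class syntax is more than sugar over constructor functions: class bodies run in strict mode, class constructors throw if called without new, and class methods are non-enumerable, so the guarantees differ from plain functions.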

    1. for - The projected timing of climate departure from recent variability - Camilo Mora et al. - 6th mass extinction - biodiversity loss

      paper details
      - title: The projected timing of climate departure from recent variability
      - authors: Camilo Mora, Abby G. Frazier, Ryan J. Longman, Rachel S. Dacks, Maya M. Walton, Eric J. Tong, Joseph J. Sanchez, Lauren R. Kaiser, Yuko O. Stender, James M. Anderson, Christine M. Ambrosino, Iria Fernandez-Silva, Louise M. Giuseffi, Thomas W. Giambelluca
      - date: 9 October 2013
      - publication: Nature 502, 183-187 (2013)
      - https://doi.org/10.1038/nature12540
      - https://www.nature.com/articles/nature12540

      to - https://hyp.is/0BdCglsHEe-2CteEQbOBfw/www.researchgate.net/publication/257598710_The_projected_timing_of_climate_departure_from_recent_variability


    1. Author response:

      Reviewer #1 (Public Review):

      The authors conducted cross-species comparisons between the human brain and the macaque brain to disentangle the specific characteristics of structural development of the human brain. Although previous studies had revealed similarities and differences in brain anatomy between the two species by spatially aligning the brains, the authors made the comparison along the chronological axis by establishing models for predicting the chronological ages with the inputting brain structural features. The rationale is actually clear given that brain development occurs over time in both. More interestingly, the model trained on macaque data was better able to predict the age of humans than the human-trained model was at predicting macaque age. This revealed a brain cross-species age gap (BCAP) that quantified the discrepancy in brain development between the two species, and the authors even found this BCAP measure was associated with performance on behavioral tests in humans. Overall, this study provides important and novel insights into the unique characteristics of human brain development. The authors have employed a rigorous scientific approach, reflecting diligent efforts to scrutinize the patterns of brain age models across species. The clarity of the rationale, the interpretability of the methods, and the quality of the presentation all contribute to the strength of this work.

      We are grateful for your helpful and thorough review and for your positive assessment of our manuscript. Following your recommendations, we have added more analytic details that have strengthened our paper. Thank you again for your input.

      Reviewer #2 (Public Review):

      In the current study, Li et al. developed a novel approach that aligns chronological age to a cross-species brain age prediction model to investigate the evolutionary effect. This method revealed some interesting findings, like the brain-age gap of the macaque model in predicting human age will increase as chronological age increases, suggesting an evolutionary alignment between the macaque brain and the human brain in the early stage of development. This study exhibits ample novelty and research significance. However, I still have some concerns regarding the reliability of the current findings.

      We thank you for the positive and appreciative feedback on our work and the insightful comments, which we have addressed below.

      Question 1: Although the authors named their new method a "cross-species" model, the current study only focused on the prediction between humans and macaques. It would be better to discuss whether their method can also generalize to cross-species examination of other species (e.g., C. elegans), which may provide more comprehensive evolutionary insights. Also, other future directions with their new method are worth discussing.

      We appreciate your insightful comment regarding the generalizability of our model to other species. As you note, we performed only a human-macaque cross-species study and did not include other species. We focused on humans and macaques because the macaque is considered one of the closest primates to humans apart from chimpanzees, and is therefore regarded as the best model for studying human brain evolution. However, our proposed method has limitations that restrict its generalizability to other species, e.g., C. elegans. First, our model was trained on MRI data, which limits its applicability to species for which such data are unavailable; this technological requirement poses a barrier to broader cross-species application. Second, our current model is based on homologous brain atlases available for both humans and macaques, and the lack of comparable atlases for other species further restricts the model's generalizability. We have discussed this limitation in the revised manuscript and outlined potential future directions to overcome these challenges, including the need for comparable imaging techniques and standardized brain atlases across a wider range of species to enhance the model's applicability and broaden our understanding of cross-species neurodevelopmental patterns.

      On page 15, lines 11-18

      “However, the existing limitation should be noted regarding the generalizability of our proposed approach for cross-species brain comparison. Our current model relies on homologous brain atlases, and the lack of comparable atlases for other species restricts its broader applicability. To address this limitation, future research should focus on developing prediction models that do not depend on atlases. For instance, 3D convolutional neural networks could be trained directly on raw MRI data for age prediction. These deep learning models may offer greater flexibility for cross-species applications once the training within species is complete. Such advancements would significantly enhance the model's adaptability and expand its potential for comparative neuroscience studies across a wider range of species.”
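
      For illustration, a toy sketch of the atlas-free direction mentioned in the quoted passage, i.e., regressing age directly from an MRI volume with a 3D convolutional network (PyTorch; layer sizes and names are placeholders, not the authors' implementation):

      ```python
      import torch
      import torch.nn as nn

      class AgeRegressor3D(nn.Module):
          """Toy 3D CNN age regressor: works on raw (resampled) volumes,
          so no homologous atlas or parcellation is required."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                  nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                  nn.AdaptiveAvgPool3d(1),       # global pooling -> (N, 16, 1, 1, 1)
              )
              self.head = nn.Linear(16, 1)       # predicted age

          def forward(self, x):                  # x: (N, 1, D, H, W)
              return self.head(self.features(x).flatten(1)).squeeze(1)

      model = AgeRegressor3D()
      age_hat = model(torch.randn(2, 1, 64, 64, 64))   # two dummy volumes -> two ages
      ```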

      Question 2: Algorithm of the prediction model. In the Methods section, the authors only described how they chose features, but provided no description of the algorithm (e.g., support vector regression) they used. Please add relevant descriptions to the Methods.

      Thank you for your comment. We apologize for not providing sufficient details about the model training process in our initial submission. In our study, we used a linear regression model for prediction. We have provided more details on the prediction algorithm in our response to Reviewer #1; for your convenience, we have attached them below.

      For details on the prediction algorithm:

      “A linear regression model was adopted for intra- and inter-species age prediction. The model was built in three main steps. 1) Feature selection: two steps were required to extract the final features. The first step was a preliminary extraction: all human or macaque participants were divided into 10 folds, with 9 folds used for model training and 1 fold for model testing. Preliminary features were chosen as those significantly associated with age (p < 0.01) when computing Pearson's correlation coefficients between all 260 features and the actual ages of the subjects in the 9 training folds. This process was repeated 100 times. Because the preliminary features were not exactly the same each time, we further analyzed them using two criteria to determine the final features: common features and minimum mean absolute error (min MAE). Common features were the preliminary features selected in all 100 repetitions of preliminary model training. The min MAE features were the preliminary features with the smallest MAE during the 100 model tests for predicting age. After these feature selections, we obtained two sets of features: 62 macaque features and 225 human features (common features), and 117 macaque features and 239 human features (min MAE). In addition, to exclude the influence of unequal numbers of features in humans and macaques, we also selected the first 62 features in both species to test model prediction performance. 2) Model construction: we built linear age prediction models using 10-fold cross-validation based on the selected features for humans and macaques separately. The linear model parameters were obtained from the training set and applied to the test set for prediction. This process was also repeated 100 times. 3) Prediction: with the above results, we obtained the optimal linear prediction models for humans and macaques. We then performed intra-species and inter-species brain age prediction, i.e., the human model predicting human age, the human model predicting macaque age, the macaque model predicting macaque age, and the macaque model predicting human age. Three sets of features (62 macaque and 225 human features; 117 macaque and 239 human features; 62 macaque and 62 human features) were used to test the prediction models for cross-validation and to exclude effects of different numbers of features in humans and macaques. In the main text, we report the brain age prediction, brain developmental, and evolutionary analyses based on the common features; results obtained using the other two feature sets are shown in the supplementary materials. Prediction performance was evaluated by calculating Pearson's correlation and the MAE between actual and predicted ages.”
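
      A compact sketch of the quoted three-step pipeline (my paraphrase in Python; `X_macaque`, `X_human`, `age_macaque`, and `age_human` are hypothetical arrays of homologous features and ages, and the fold handling is simplified):

      ```python
      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.linear_model import LinearRegression

      def common_age_features(X, age, n_repeats=100, p_thresh=0.01, seed=0):
          # Step 1 (feature selection): keep features whose Pearson correlation
          # with age is significant (p < 0.01) in the training subjects of every
          # repeat -- the "common features" of the quoted description.
          rng = np.random.default_rng(seed)
          n_subj = len(age)
          common = None
          for _ in range(n_repeats):
              train = rng.permutation(n_subj)[: int(0.9 * n_subj)]   # ~9/10 of subjects
              pvals = np.array([pearsonr(X[train, j], age[train])[1]
                                for j in range(X.shape[1])])
              kept = set(np.where(pvals < p_thresh)[0])
              common = kept if common is None else common & kept
          return sorted(common)

      # Steps 2-3 (model construction and intra-/inter-species prediction):
      feats = common_age_features(X_macaque, age_macaque)
      model = LinearRegression().fit(X_macaque[:, feats], age_macaque)
      pred_human = model.predict(X_human[:, feats])    # macaque model -> human ages
      bcap = pred_human - age_human                    # brain cross-species age gap
      ```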

      Question 3: Sex difference. The sex difference results are strange to me. For example, in the second row of Figure Supplement 3A, different models show different correlation patterns, but why their Pearson's r is all equal to 0.3939? If they are only typo errors, please correct them. The authors claimed that they found no sex difference. However, the results in Figure Supplement 3 show that, the female seems to have poorer performance in predicting macaque age from the human model. Moreover, accumulated studies have reported sex differences in developing brains (Hines, 2011; Kurth et al., 2021). I think it is also worth discussing why sex differences can't be found in the evolutionary effect.

      Reference:

      Hines, M. (2011). Gender development and the human brain. Annual review of neuroscience, 34, 69-88.

      Kurth, F., Gaser, C., & Luders, E. (2021). Development of sex differences in the human brain. Cognitive Neuroscience, 12(3-4), 155-162.

      It is recommended that the authors explore different prediction models for different species. Maybe macaques are suitable for linear prediction models, and humans are suitable for nonlinear prediction models.

      Thank you for pointing out the typos and for your comments on sex differences. In Figure Supplement 3A, there were typos in the Pearson's r values, and we have corrected them in the updated Figure 2-figure supplement 3. For details, please see the updated Figure 2-figure supplement 3 and the following figure.

      Regarding gender effects, we acknowledge your point about the importance of gender differences in understanding brain evolution and development. In our study, however, our primary goal was to develop a robust age prediction model by maximizing the number of training samples. To mitigate gender-related effects in our main results, we incorporated gender as a covariate in the ComBat harmonization process. The supplementary analysis separating the data by gender was conducted only to demonstrate the stability of our proposed cross-species age prediction model, not to investigate gender differences. Although our results showed that gender-specific models could still significantly predict chronological age, we refrained from emphasizing their performance in gender-specific species comparisons because the predicted differences are difficult to interpret: for cross-species prediction, it is not clear that a higher Pearson's r between actual and predicted age reflects more conserved evolution in males or females. In addition, we deliberately adopted the same, rather than different, prediction models for humans and macaques in order to establish a comparable model between species. In general, a nonlinear model can achieve better prediction accuracy than a linear one, so using different models for different species would make cross-species prediction unfair. Importantly, our study aimed to develop a new index based on the same prediction models to quantify brain evolutionary differences, i.e., the brain cross-species age gap (BCAP), rather than to rely on traditional statistical analyses; using different prediction models for different species could introduce method-related bias and affect the accuracy of BCAP. We therefore adopted the linear model with the best intra-species prediction performance for cross-species prediction. Although our main goal was to establish a stable cross-species prediction model, and the models built using either male or female subjects performed well in cross-species prediction, we agree that how to characterize evolutionary gender differences without bias using machine learning approaches requires further investigation, given the many reports of gender differences in the developing human brain. Indeed, whether macaque brains show the same gender differences as humans is an interesting scientific question in its own right. We have therefore included a discussion of how machine learning methods could be used to study evolutionary gender differences in the revised manuscript.

      On page 15, lines 18-23 and page 16, line 1-4

      “Many studies have reported sex differences in developing human brains (Hines, 2011; Kurth, Gaser, & Luders, 2021); however, whether macaque brains show similar sex differences is still unknown. We used a machine learning method for cross-species prediction to quantify brain evolution, and the established prediction models were stable even when using only male or female data, which may indicate that the proposed cross-species prediction model has no evolutionary sex difference. However, the fact that a stable prediction model can be established in either male or female participants does not mean that there are no evolutionary sex differences, since a quantitative comparative analysis is lacking. In the future, more objective, quantifiable, and stable indices based on machine learning methods need to be developed to further identify sex differences in the evolved brain.”

      Reviewer #3 (Public Review):

      The authors identified a series of WM and GM features that correlated with age in human and macaque structural imaging data. The data was gathered from the HCP and WA studies, which was parcellated in order to yield a set of features. Features that correlated with age were used to train predictive intra and inter-species models of human and macaque age. Interestingly, while each model accurately predicted the corresponding species age, using the macaque model to predict human age was more accurate than the inverse (using the human model to predict macaque age). In addition, the prediction error of the macaque model in predicting human age increased with age, whereas the prediction error of the human model predicting macaque age decreased with age.

      After elaboration of the predictive models, the authors classified the features for prediction into human-specific, macaque-specific and common to human and macaque, where they most notably found that macaque-only and common human-macaque areas were located mainly in gray matter, with only a few human-specific features found in gray matter. Furthermore, the authors found significant correlations between BCAP and picture vocabulary (positive correlation) test and visual sensitivity (negative correlation) test. Several white matter tracts (AF, OR, SLFII) were also identified showing a correlation with BCAP.

      Thank you for providing this excellent summary. We appreciate your thorough review and concise overview of our work.

      STRENGTHS AND WEAKNESSES

      The paper brings an interesting perspective on the evolutionary trajectories of human and non-human primate brain structure, and its relation to behavior and cognition. Overall, the methods are robust and support the theoretical background of the paper. However, the overall clarity of the paper could be improved. There are many convoluted sentences and there seems to be both repetition across the different sections and unclear or missing information. For example, the Introduction does not clearly state the research questions, rather just briefly mentions research gaps existing in the literature and follows by describing the experimental method. It would be desirable to clearly state the theoretical background and research questions and leave out details on methodology. In addition, the results section repeats a lot of what is already stated in the methods. This could be further simplified and make the paper much easier to read.

      In the discussion, authors mention that "findings about cortex expansion are inconsistent and even contradictory", a more convincing argument could be made by elaborating on why the cortex expansion index is inadequate and how BCAP is more accurate.

      Thank you for highlighting the interesting aspects of our work. We apologize for the lack of clarity in certain parts of our manuscript. Following your valuable suggestions, we have revised the manuscript to reduce unnecessary repetition and to state our research question more clearly in the Introduction. Specifically, unlike previous comparative-neuroscience analyses of human and macaque evolution, this study embeds the chronological axis into the cross-species evolutionary analysis: it constructs a linear brain age prediction model for humans and macaques and quantitatively describes the degree of evolution. The brain-structure-based cross-species age prediction model and the cross-species brain age gap proposed in this study further remove the inherent developmental effects of humans and macaques from cross-species evolutionary comparisons, providing new perspectives and approaches for studying cross-species development. Regarding the repetition in the Results section, we have simplified it for clarity. Regarding the comparison between the cortical expansion index and BCAP, we would like to emphasize that the cortical expansion index was derived without fully considering cross-species alignment along the chronological axis; specifically, this index does not correspond to a particular developmental stage, but rather reflects a direct comparison between the two species. In contrast, BCAP addresses this limitation by using a prediction model to establish alignment (or misalignment) between species at the individual level. BCAP may therefore serve as a more flexible and nuanced tool for cross-species brain comparison.

      STUDY AIMS AND STRENGTH OF CONCLUSIONS

      Overall, the methods are robust and support the theoretical background of the paper, but it would be good to state the specific research questions -even if exploratory in nature- more specifically. Nevertheless, the results provide support for the research aims.

      Thank you for this excellent suggestion. We have revised our Introduction to state the specific research question, as mentioned above.

      IMPACT OF THE WORK AND UTILITY OF METHODS AND DATA TO THE COMMUNITY

      This study is a good first step in providing a new insight into the neurodevelopmental trajectories of humans and non-human primates besides the existing cortical expansion theories.

      Thank you for your encouraging comment.

      ADDITIONAL CONTEXT:

      It should be clearly stated both in the abstract and methods that the data used for the experiment came from public databases.

      Thank you for your suggestion. We have added this information to both the Abstract and the Methods. For details, please see page 2, line 9 in the Abstract; page 16, lines 10-11 and page 17, lines 6-10 in the Materials and Methods section.

    1. Author response:

      Reviewer #1 (Public Review):

      Using structural analysis, Bonchuk and colleagues demonstrate that the TTK-like BTB/POZs of insects form stable hexameric assemblies composed of trimers of POZ dimers, a configuration observed consistently across both homomultimers and heteromultimers, which are known to be formed by TTK-like BTB/POZ domains. The structural data is comprehensive, unambiguous, and further supported by theoretical fold prediction analyses. In particular the judicious complementation of experiments and fold prediction is commendable. This study now adds an important cog that might help generalize the general principles of the evolution of multimerization in members of this fold family.

      I strongly feel that enhancing the inclusivity of the discussion would strengthen the paper. Below, I suggest some additional points for consideration for the same.

      Major points.

      1) It would be valuable to discuss alternative multimer assembly interfaces, considering the diverse ways POZs can multimerize. For instance, the Potassium channel POZ domains form tetramers. A comparison of their inter-subunit interface with that of TTK and non-TTK POZs could provide insightful contrasts.

      Thank you for the suggestion; we have added this important comparison, as well as a comparison with recently published structures of filament-forming BTB domains.

      2) The so-called TTK motif, despite its unique sequence signature, essentially corresponds to the N-terminal extension observed in other "non-TTK" proteins such as Miz-1. Given Miz-1's structure, it becomes evident that the utilization of the N-terminal extension for dimerization is shared with the TTK family, suggesting a common evolutionary origin in metazoan transcription factors. Early phylogenetic trees (e.g. in PMID: 9917379) support the grouping of the TTK-like POZs with other animal Transcription factors containing POZ domains such as those with Kelch repeats further suggesting that the extension might be ancestral. Structural investigations by modeling prominent examples or comparing known structures of similar POZ domains, could support this inference. Control comparisons with POZ domains from fungi, plants and amoebozoans like Dictyostelium could offer additional insights.

      We performed AlphaFold2-Multimer modeling of dimers of all BTB domains from the most ancestral metazoan clades, Placozoa and Porifera, along with BTBs from Choanoflagellata, the unicellular eukaryotes closest to the first metazoans. The presence of an N-terminal beta-sheet was evaluated. KLHL-type BTBs are present in all eukaryotes and are likely the predecessors of ZBTB domains. According to AlphaFold modeling of dimers, all KLHL-BTB domains of plants and basal metazoans have the alpha1 helix, but most of these domains do not possess the additional N-terminal beta-strand (beta1) characteristic of ZBTB domains. We found only one KLHL-BTB with such an N-terminal extension in the choanoflagellate proteome (Uniprot ID: AA9VCT1_MONBE), one in the Dictyostelium proteome (Q54F31_DICDI), and 7 (out of 43 BTB domains in total) and 13 (out of 81) such domains in the Trichoplax and Amphimedon proteomes, respectively. There was no significant sequence similarity of the beta1 element at the level of primary sequence. However, most of these domains bear a 3-box/BACK extension and represent typical KLHL-BTBs that are members of E3 ubiquitin-ligase complexes; they are often associated with a protein-protein interacting MATH domain or WD40 repeats. We found only one protein in the Trichoplax proteome with a beta1 strand but devoid of the 3-box/BACK (B3RQ74_TRIAD), thus resembling ZBTB topology. The emergence of BTB domains of this subtype therefore likely occurred early in metazoan evolution; at that point ZBTBs were not yet associated with zinc fingers. According to our survey, the actual fusion of the ZBTB domain with zinc-finger domains occurred early in bilaterian evolution, since proteins with this domain architecture are not found in Radiata but are present in basal Protostomia and Deuterostomia clades. The TTK-type sequence is characteristic only of Arthropoda and emerged early in their evolution. We have added all these data to the article.

      3) Exploring the ancestral presence of the aforementioned extension in metazoan transcription factors could serve as a foundation for understanding the evolutionary pathway of hexamerization. This analysis could shed light on exposed structural regions that had the potential to interact post-dimerization with the N-terminal extension and also might provide insights into the evolution of multimer interfaces, as observed in the Potassium channel.

      We have added this important comparison, as well as a comparison with recent structures of filament-forming BTB domains.

      4) Considering the role of conserved residues in the multimer interface is crucial. Reference to conserved residues involved in multimer formation, such as discussed in PMID: 9917379, would enrich the discussion.

      We have updated our description of the multimer interface with respect to residue conservation.

      Reviewer #2 (Public Review):

      BTB domains are protein-protein interaction domains found in diverse eukaryotic proteins, including transcription factors. It was previously known that many of the Drosophila transcription factor BTB domains are of the TTK-type - these are defined as having a highly-conserved motif, FxLRWN, at their N-terminus, and they thereby differ from the mammalian BTB domains. Whereas the well-characterised mammalian BTB domains are dimeric, several Drosophila TTK-BTB domains notably form multimers and function as chromosome architectural proteins. The aims of this work were (i) to determine the structural basis of multimerisation of the Drosophila TTK-BTB domains, (ii) to determine how different Drosophila TTK-BTB domains interact with each other, and (iii) to investigate the evolution of this subtype of BTB domain.

      The work significantly advances our understanding of the biology of BTB domains. The conclusions of the paper are mostly well-supported, although some aspects need clarification:

      Hexameric organisation of the TTK-type BTB domains:

      Using cryo-EM, the authors showed that the CG6765 TTK-type BTB domain forms a hexameric assembly in which three "classic" BTB dimers interact via a beta-sheet interface involving the B3 strand. This is particularly interesting, as this region of the BTB domain has recently been implicated in protein-protein interactions in a mammalian BTB-transcription factor, MIZ1. SEC-MALS analysis indicated that the LOLA TTK-type BTB domain is also hexameric, and SAXS data was consistent with a hexameric assembly of the CG6765- and LOLA BTB domains.

      The data regarding the hexameric organisation is convincing. However, interpreting the role of specific regions of the BTB domain is difficult because the description of the molecular contacts lacks depth.

      Heteromeric interactions between TTK-type BTB domains:

      The authors use yeast two-hybrid assays to study heteromeric interactions between various Drosophila TTK-type BTB domains. Such assays are notorious for producing false positives, and this needs to be mentioned. Although the authors suggest that the heteromeric interactions are mediated via the newly-identified B3 interaction interface, there is no evidence to support this, since mutation of B3 yielded insoluble proteins.

      We are aware that Y2H can give false positive results in cases where the BTB domain fused to the DNA-binding domain can activate reporter genes. Therefore, all tested BTB domains were examined for their ability to activate transcription. Furthermore, the assays with non-TTK-type BTB domains, which showed almost no interactions, provide an additional negative control. We have added a corresponding disclaimer in the text. We agree that our data do not explain the basis for heteromeric interactions. Designing mutations in the B3 beta-sheet proved complicated, and using biochemical methods to study the principles of heteromer assembly also does not seem feasible, since most TTK-type BTBs tend to aggregate and are difficult to express and purify. Most importantly, the demonstrated ability to assemble heteromers through B3 in a few tested pairs cannot be extrapolated to all pairs; some of them may still use a different mechanism. We used AlphaFold to predict possible mechanisms of heteromer assembly. AlphaFold suggested that both the B3 and the conventional dimerization interfaces can mediate heteromeric interactions, with one preferred over the other in different pairs. Thus, the presence of two potential heteromerization interfaces most likely extends the heteromerization capability of these domains. We have changed the text accordingly.

      Evolution of the TTK-type BTB domains:

      The authors carried out a bioinformatics analysis of BTB proteins and showed that most of the Drosophila BTB transcription factors (24 out of 28) are of the TTK-type. They investigated how the TTK-type BTB domains emerged during evolution, and showed that these are only found in Arthropoda, and underwent lineage-specific expansion in the modern phylogenetic groups of insects. These findings are well-supported by the evidence.

    2. Reviewer #2 (Public Review):

      BTB domains are protein-protein interaction domains found in diverse eukaryotic proteins, including transcription factors. It was previously known that many of the Drosophila transcription factor BTB domains are of the TTK-type - these are defined as having a highly-conserved motif, FxLRWN, at their N-terminus, and they thereby differ from the mammalian BTB domains. Whereas the well-characterised mammalian BTB domains are dimeric, several Drosophila TTK-BTB domains notably form multimers and function as chromosome architectural proteins. The aims of this work were (i) to determine the structural basis of multimerisation of the Drosophila TTK-BTB domains, (ii) to determine how different Drosophila TTK-BTB domains interact with each other, and (iii) to investigate the evolution of this subtype of BTB domain.

      The work significantly advances our understanding of the biology of BTB domains. The conclusions of the paper are mostly well-supported, although some aspects need clarification:

      Hexameric organisation of the TTK-type BTB domains:
      Using cryo-EM, the authors showed that the CG6765 TTK-type BTB domain forms a hexameric assembly in which three "classic" BTB dimers interact via a beta-sheet interface involving the B3 strand. This is particularly interesting, as this region of the BTB domain has recently been implicated in protein-protein interactions in a mammalian BTB-transcription factor, MIZ1. SEC-MALS analysis indicated that the LOLA TTK-type BTB domain is also hexameric, and SAXS data was consistent with a hexameric assembly of the CG6765- and LOLA BTB domains.

      The data regarding the hexameric organisation is convincing. However, interpreting the role of specific regions of the BTB domain is difficult because the description of the molecular contacts lacks depth.

      Heteromeric interactions between TTK-type BTB domains:
      The authors use yeast two-hybrid assays to study heteromeric interactions between various Drosophila TTK-type BTB domains. Such assays are notorious for producing false positives, and this needs to be mentioned. Although the authors suggest that the heteromeric interactions are mediated via the newly-identified B3 interaction interface, there is no evidence to support this, since mutation of B3 yielded insoluble proteins.

      Evolution of the TTK-type BTB domains:
      The authors carried out a bioinformatics analysis of BTB proteins and showed that most of the Drosophila BTB transcription factors (24 out of 28) are of the TTK-type. They investigated how the TTK-type BTB domains emerged during evolution, and showed that these are only found in Arthropoda, and underwent lineage-specific expansion in the modern phylogenetic groups of insects. These findings are well-supported by the evidence.

    1. It’s wild to think about how new technologies are changing the way we think about teaching and learning

      I love the opening sentence and how well it ties into the concept of this new piece of technology we are using. Very innovative.

    1. Author response:

      Reviewer #1 - Public Review

      This report describes work aiming to delineate multi-modal MRI correlates of psychopathology from a large cohort of children of 9-11 years from the ABCD cohort. While uni-modal characterisations have been made, the authors rightly argue that multi-modal approaches in imaging are vital to comprehensively and robustly capture modes of large-scale brain variation that may be associated with pathology. The primary analysis integrates structural and resting-state functional data, while post-hoc analyses on subsamples incorporate task and diffusion data. Five latent components (LCs) are identified, with the first three, corresponding to p-factor, internal/externalising, and neurodevelopmental Michelini Factors, described in detail. In addition, associations of these components with primary and secondary RSFC functional gradients were identified, and LCs were validated in a replication sample via assessment of correlations of loadings.

      1.1) This work is clearly novel and a comprehensive study of associations within this dataset. Multi-modal analyses are challenging to perform, but this work is methodologically rigorous, with careful implementation of discovery and replication assessments, and primary and exploratory analyses. The ABCD dataset is large, and behavioural and MRI protocols seem appropriate and extensive enough for this study. The study lays out comprehensive associations between MRI brain measures and behaviour that appear to recapitulate the established hierarchical structure of psychopathology.

      We thank Reviewer 1 for appreciating our methods and findings, and we address their suggestions below:

      1.2) The work does have weaknesses, some of them acknowledged. There is limited focus on the strength of observed associations. While the latent component loadings seem reliably reproducible in the behavioural domain, this is considerably less the case in the imaging modalities. A considerable proportion of statistical results focuses on spatial associations in loadings between modalities - it seems likely that these reflect intrinsic correlations between modalities, rather than associations specific to any latent component.

      We appreciate the Reviewer’s comment, and minimized the reporting of correlations between the loadings from the different modalities in the revised Results (specifically subsections on LC1, LC2, and LC3). We now refer to Table S4 in each subsection for this information: “Spatial correlations between modality-specific loadings are reported in Supplementary file 1c.”

      For completeness, we report the intrinsic correlations between the different modalities in Supplementary file 1c (P.19):

      “Lastly, although the current work aimed to reduce intrinsic correlations between variables within a given modality through running a PCA before the PLS approach, intrinsic correlations between measures and modalities may potentially be a remaining factor influencing the PLS solution. We, thus, provided an additional overview of the intrinsic correlations between the different neuroimaging data modalities in the supporting results (Supplementary file 1c).”

      1.3) Assessment of associations with functional gradients is similarly a little hard to interpret. Thus, it is hard to judge the implications for our understanding of the neurophysiological basis of psychopathology and the ability of MRI to provide clinical tools for, say, stratification.

      We now provide additional context, including a rising body of theoretical and empirical work, that outlines the value of functional gradients and cortical hierarchies in the understanding of brain development and psychopathology. Please see P.26.

      “Initially demonstrated at the level of intrinsic functional connectivity (Margulies et al., 2016), follow up work confirmed a similar cortical patterning using microarchitectural in-vivo MRI indices related to cortical myelination (Burt et al., 2018; Huntenburg et al., 2017; Paquola et al., 2019), post-mortem cytoarchitecture (Goulas et al., 2018; Paquola et al., 2020, 2019), or post-mortem microarray gene expression (Burt et al., 2018). Spatiotemporal patterns in the formation and maturation of large-scale networks have been found to follow a similar sensory-to-association axis; moreover, there is the emerging view that this framework may offer key insights into brain plasticity and susceptibility to psychopathology (Sydnor et al., 2021). In particular, the increased vulnerability of transmodal association cortices in late childhood and early adolescence has been suggested to relate to prolonged maturation and potential for plastic reconfigurations of these systems (Paquola et al., 2019; Park et al., 2022b). Between mid-childhood and early adolescence, heteromodal association systems such as the default network become progressively more integrated among distant regions, while being more differentiated from spatially adjacent systems, paralleling the development of cognitive control, as well as increasingly abstract and logical thinking. [...] This suggests that neurodevelopmental difficulties might be related to alterations in various processes underpinned by sensory and association regions, as well as the macroscale balance and hierarchy of these systems, in line with previous findings in several neurodevelopmental conditions, including autism, schizophrenia, as well as epilepsy, showing a decreased differentiation between the two anchors of this gradient (Hong et al., 2019). In future work, it will be important to evaluate these tools for diagnostics and population stratification. In particular, the compact and low dimensional perspective of gradients may provide beneficial in terms of biomarker reliability as well as phenotypic prediction, as previously demonstrated using typically developing cohorts (Hong et al. 2020) On the other hand, it will be of interest to explore in how far alterations in connectivity along sensory-to-transmodal hierarchies provide sufficient graduality to differentiate between specific psychopathologies, or whether they, as the current work suggests, mainly reflect risk for general psychopathology and atypical development.”

      1.4) The observation of a recapitulation of psychopathology hierarchy may be somewhat undermined by the relatively modest strength of the components in the imaging domain.

      We thank the Reviewer for this comment and have now acknowledged this limitation in the revised Discussion, P.23.

      “The p factor, internalizing, externalizing, and neurodevelopmental dimensions were each associated with distinct morphological and intrinsic functional connectivity signatures, although these relationships varied in strength.”

      1.5) The task fMRI was assessed with a fairly basic functional connectivity approach, not using task timings to more specifically extract network responses.

      In the revised Discussion on P.24, we acknowledge that more in-depth analyses of task-based fMRI may have offered additional insights into state-dependent changes in functional architecture.

      “While the current work derived main imaging signatures from resting-state fMRI as well as grey matter morphometry, we could nevertheless demonstrate associations to white matter architecture (derived from diffusion MRI tractography) and recover similar dimensions when using task-based fMRI connectivity. Despite subtle variations in the strength of observed associations, the latter finding provided additional support that the different behavioral dimensions of psychopathology more generally relate to alterations in functional connectivity. Given that task-based fMRI data offers numerous avenues for analytical exploration, our findings may motivate follow-up work assessing associations to network- and gradient-based response strength and timing with respect to external stimuli across different functional states.”

      1.6) Overall, the authors achieve their aim to provide a detailed multimodal characterisation of MRI correlations of psychopathology. Code and data are available and well organised and should provide a valuable resource for researchers wanting to understand MRI-based neural correlates of psycho-pathology-related behavioural traits in this important age group. It is largely a descriptive study, with comparisons to previous uni-modal work, but without particularly strong testing of neuroscience hypotheses.

      We thank the Reviewer for recognizing the detail and rigor of our data-driven study and the extensive code and data documentation.

      Reviewer #2 - Public Review

      In "Multi-modal Neural Correlates of Childhood Psychopathology" Krebets et al. integrate multi-modal neuroimaging data using machine learning to delineate dissociable links to diverse dimensions of psychopathology in the ABCD sample. This paper had numerous strengths including a superb use of a large resource dataset, appropriate analyses, beautiful visualizations, clear writing, and highly interpretable results from a data-driven analysis. Overall, I think it would certainly be of interest to a general readership. That being said, I do have several comments for the authors to consider.

      We thank Dr Satterthwaite for the positive evaluation and helpful comments.

      2.1) Out-of-sample testing: while the permutation testing procedure for the PLS is entirely appropriate, without out-of-sample testing the reported effect sizes are likely inflated.

      As discussed in the editorial summary of essential revisions, we agree that out-of-sample prediction indeed provides stronger estimates of generalizability. We assess this by applying the PCA coefficients derived from the discovery cohort imaging data to the replication cohort imaging data. The resulting PCA scores and behavioral data were then z-scored using the mean and standard deviation of the replication cohort. The SVD weights derived from the discovery cohort were applied to the normalized replication cohort data to derive imaging and behavioral composite scores, which were used to recover the contribution of each imaging and behavioral variable to the LCs (i.e., loadings). Out-of-sample replicability of imaging (mean r=0.681, S.D.=0.131) and behavioral (mean r=0.948, S.D.=0.022) loadings was generally high across LCs 1-5. This analysis is reported in the revised manuscript (P.18).

      “Generalizability of reported findings was also assessed by directly applying PCA coefficients and latent components weights from the PLS analysis performed in the discovery cohort to the replication sample data. Out-of-sample prediction was overall high across LCs1-5 for both imaging (mean r=0.681, S.D.=0.131) and behavioral (mean r=0.948, S.D.=0.022) loadings.”
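
      For concreteness, a rough NumPy sketch of this out-of-sample check (illustrative only; matrix names such as `pca_coefs`, `u_weights`, and `v_weights` are placeholders rather than the authors' variables):

      ```python
      import numpy as np

      def out_of_sample_loadings(X_rep, Y_rep, pca_coefs, u_weights, v_weights):
          # Project replication imaging data with the discovery-derived PCA basis,
          # z-score within the replication sample, apply the discovery SVD weights
          # to obtain composite scores, and recover loadings as correlations
          # between each variable and those composites.
          X_pc = X_rep @ pca_coefs
          Xz = (X_pc - X_pc.mean(0)) / X_pc.std(0)
          Yz = (Y_rep - Y_rep.mean(0)) / Y_rep.std(0)
          img_scores = Xz @ u_weights                   # imaging composites per LC
          beh_scores = Yz @ v_weights                   # behavioral composites per LC

          def loadings(A, S):
              return np.array([[np.corrcoef(A[:, j], S[:, k])[0, 1]
                                for k in range(S.shape[1])]
                               for j in range(A.shape[1])])

          return loadings(Xz, img_scores), loadings(Yz, beh_scores)
      ```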

      2.2) Site/family structure: it was unclear how site/family structure were handled as covariates.

      Only unrelated participants were included in discovery and replication samples (see P.6). The site variable was regressed out of the imaging and behavioral data prior to the PLS analysis using the residuals from a multiple linear model which also included age, age2, sex, and ethnicity. This is now clarified on P.29:

      “Prior to the PLS analysis, effects of age, age2, sex, site, and ethnicity were regressed out from the behavioral and imaging data using a multiple linear regression to ensure that the LCs would not be driven by possible confounders (Kebets et al., 2021, 2019; Xia et al., 2018). The imaging and behavioral residuals of this procedure were input to the PLS analysis.”
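
      A minimal sketch of this confound-regression step (illustrative; categorical covariates such as site would need to be dummy-coded, which is omitted here):

      ```python
      import numpy as np

      def residualize(data, covariates):
          # Regress the covariate columns (age, age^2, sex, site, ethnicity, ...)
          # out of every data column and return the residuals used as PLS input.
          C = np.column_stack([np.ones(len(data)), covariates])   # design with intercept
          beta, *_ = np.linalg.lstsq(C, data, rcond=None)
          return data - C @ beta

      # imaging_resid = residualize(imaging_data, confounds)
      # behavior_resid = residualize(behavior_data, confounds)   # then run PLS
      ```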

      2.3) Anatomical features: I was a bit surprised to see volume, surface area, and thickness all evaluated - and that there were several comments on the correspondence between the SA and volume in the results section. Given that cortical volume is simply a product of SA and CT (and mainly driven by SA), this result may be pre-required.

      As suggested, we reduced the reporting of correlations between the loadings from the different modalities in the revised Results (specifically subsections on LC1, LC2, and LC3). Instead, we now refer to Table S4 in each subsection for this information: “Spatial correlations between modality-specific loadings are reported in Supplementary file 1c.”

      We also reran the PLS analysis while only including thickness and surface area as our structural metrics, to account for potential redundancy of these measures with volume. This analysis and associated findings are reported on P.36 and P.19:

      “As cortical volume is a result of both thickness and surface area, we repeated our main PLS analysis while excluding cortical volume from our imaging metrics and report the consistency of these findings with our main model.”

      “Third, to account for redundancy within structural imaging metrics included in our main PLS model (i.e., cortical volume is a result of both thickness and surface area), we also repeated our main analysis while excluding cortical volume from our imaging metrics. Findings were very similar to those in our main analysis, with an average absolute correlation of 0.898±0.114 across imaging composite scores of LCs 1-5.”

      2.4) Ethnicity: the rationale for regressing ethnicity from the data was unclear and may conflict with current best practices.

      We thank the Reviewer for this comment. In light of recent discussions on including this covariate in large datasets such as ABCD (e.g., Saragosa-Harris et al., 2022), we elaborate on our rationale for including this variable in our model in the revised manuscript on P.30:

      “Of note, the inclusion of ethnicity as a covariate in imaging studies has been recently called into question. In the present study, we included this variable in our main model as a proxy for social inequalities relating to race and ethnicity alongside biological factors (age, sex) with documented effects on brain organization and neurodevelopmental symptomatology queried in the CBCL.”

      We also assess the replicability of our analyses when removing race and ethnicity covariates prior to computing the PLS analysis and correlating imaging and behavioral composite scores across both models. We report resulting correlations in the revised manuscript (P.37, 19, and 27):

      “We also assessed the replicability of our findings when removing race and ethnicity covariates prior to computing the PLS analysis and correlating imaging and behavioral composite scores across both models.”

      “Moreover, repeating the PLS analysis while excluding this variable as a model covariate yielded overall similar imaging and behavioral composites scores across LCs to our original analysis. Across LCs 1-5, the average absolute correlations reached r=0.636±0.248 for imaging composite scores, and r=0.715±0.269 for behavioral composite scores. Removing these covariates seemed to exert stronger effects on LC3 and LC4 for both imaging and behavior, as lower correlations across models were specifically observed for these components.”

      “Although we could consider some socio-demographic variables and proxies of social inequalities relating to race and ethnicity as covariates in our main model, the relationship of these social factors to structural and functional brain phenotypes remains to be established with more targeted analyses.”

      2.5) Data quality: the authors did an admirable job in controlling for data quality in the analyses of functional connectivity data. However, it is unclear if a comparable measure of data quality was used for the T1/dMRI analyses. This likely will result in inflated effect sizes in some cases; it has the potential to reduce sensitivity to real effects.

      We agree that data quality was not accounted for in our analysis of T1w- and diffusion-derived metrics. We now accounted for T1w image quality by adding manual quality control ratings to the regressors applied to all structural imaging metrics prior to performing the PLS analysis, and reported the consistency of this new model with original findings. See P.36, P.19:

      “We also considered manual quality control ratings as a measure of T1w scan quality. This metric was included as a covariate in a multiple linear regression model accounting for potential confounds in the structural imaging data, in addition to age, age2, sex, site, ethnicity, ICV, and total surface area. Downstream PLS results were then benchmarked against those obtained from our main model.”

      “Considering scan quality in T1w-derived metrics (from manual quality control ratings) yielded similar results to our main analysis, with an average correlation of 0.986±0.014 across imaging composite scores.”

      As for diffusion imaging, we also regressed out effects of head motion in addition to age, age2, sex, site, and ethnicity from FA and MD measures and reported the consistency with our original results (P.36, P.19):

      “We tested another model which additionally included head motion parameters as regressors in our analyses of FA and MD measures, and assessed the consistency of findings from both models.”

      “Additionally considering head motion parameters from diffusion imaging metrics in our model yielded consistent results to those in our main analyses (mean r=0.891, S.D.=0.103; r=0.733-0.998).”

      Reviewer #3 - Public Review

      In this study, the authors utilized the Adolescent Brain Cognitive Development dataset to investigate the relationship between structural and functional brain network patterns and dimensions of psychopathology. They identified multiple components, including a general psychopathology (p) factor that exhibited a strong association with multimodal imaging features. The connectivity signatures associated with the p factor and neurodevelopmental dimensions aligned with the sensory-to-transmodal axis of cortical organization, which is linked to complex cognition and psychopathology risk. The findings were consistent across two separate subsamples and remained robust when accounting for variations in analytical parameters, thus contributing to a better understanding of the biological mechanisms underlying psychopathology dimensions and offering potential brain-based vulnerability markers.

      3.1) An intriguing aspect of this study is the integration of multiple neuroimaging modalities, combining structural and functional measures, to comprehensively assess the covariance with various symptom combinations. This approach provides a multidimensional understanding of the risk patterns associated with mental illness development.

      We thank the Reviewer for acknowledging the multimodal approach, and for the constructive suggestions.

      3.2) The paper delves deeper into established behavioral latent variables such as the p factor, internalizing, externalizing, and neurodevelopmental dimensions, revealing their distinct associations with morphological and intrinsic functional connectivity signatures. This sheds light on the neurobiological underpinnings of these dimensions.

      We are happy to hear the Reviewer appreciates the gain in understanding neural underpinnings of dimensions of psychopathology resulting from the current work.

      3.3) The robustness of the findings is a notable strength, as they were validated in a separate replication sample and remained consistent even when accounting for different parameter variations in the analysis methodology. This reinforces the generalizability and reliability of the results.

      We appreciate that the Reviewer found our robustness and generalizability assessment convincing.

      3.4) Based on their findings, the authors suggest that the observed variations in resting-state functional connectivity may indicate shared neurobiological substrates specific to certain symptoms. However, it should be noted that differences in resting-state connectivity between groups can stem from various factors, as highlighted in the existing literature. For instance, discrepancies in the interpretation of instructions during the resting state scan can influence the results. Hence, while their findings may indicate biological distinctions, they could also reflect differences in behavior.

      For the ABCD dataset, resting-state fMRI scans were acquired with eyes open during passive viewing of a crosshair, and are thus homogenized across participants. We acknowledge, however, that there may still be state-to-state fluctuations contributing to the findings, and this is now discussed in the revised Discussion, on P.28. Note, however, that prior literature has generally suggested rather modest impacts of cognitive and daily variation on resting-state functional networks, compared with the much more dominant inter-individual and inter-group factors.

      “Finally, while prior research has shown that resting-state fMRI networks may be affected by differences in instructions and study paradigm (e.g., with respect to eyes open vs closed) (Agcaoglu et al., 2019), the resting-state fMRI paradigm is homogenized in the ABCD study to be passive viewing of a centrally presented fixation cross. It is nevertheless possible that there were slight variations in compliance and instructions that contributed to differences in associated functional architecture. Notably, however, there is a mounting literature based on high-definition fMRI acquisitions suggesting that functional networks are mainly dominated by common organizational principles and stable individual features, with substantially more modest contributions from task-state variability (Gratton et al. 2018). These findings, thus, suggest that resting-state fMRI markers can serve as powerful phenotypes of psychiatric conditions, and potential biomarkers (Abraham et al., 2017; Gratton et al., 2020; Parkes et al., 2020).”

      3.5) The authors conducted several analyses to investigate the relationship between imaging loadings associated with latent components and the principal functional gradient. They found several associations between principal gradient scores and both within- and between-network resting-state functional connectivity (RSFC) loadings. Assessing the analysis presented here proves challenging due to the nature of relating loadings, which are partly based on the RSFC, to gradients derived from RSFC. Consequently, a certain level of correlation between these two variables would be expected, making it difficult to determine the significance of the authors' findings. It would be more intriguing if a direct correlation between the composite scores reflecting behavior and the gradients were to yield statistically significant results.

      We thank the Reviewer for this comment, and agree that investigating gradient-behavior relationships could offer additional insights into the neural basis of psychiatric symptomatology. However, the current analysis pipeline precludes such a direct comparison, which would have to be performed on a region-by-region basis across the span of the cortical gradient; the behavioral loadings are provided for each CBCL item rather than for cortical regions.

      The Reviewer also evokes concerns of potential circularity in our analysis, as we compared imaging loadings, which are partially based on RSFC, and gradient values generated from the same RSFC data. In response to this comment, we cross-validated our findings using an RSFC gradient derived from an independent dataset (HCP), showing highly consistent findings to those presented in the manuscript. This correlation is now reported in the Results section P.15.

      “A similar pattern of findings was observed when cross-validating between- and within-network RSFC loadings to a RSFC gradient derived from an independent dataset (HCP), with strongest correlations seen for between-network RSFC loadings for LC1 and LC3 (LC1: r=0.50, pspin<0.001; LC3: r=0.37, pspin<0.001).”

      We furthermore note similar correlations between imaging loadings and T1w/T2w ratio in the same participants, a proxy of intracortical microstructure and hierarchy (Glasser et al., 2011). These findings are now detailed in the revised Results, P.15-16:

      “Of note, we obtain similar correlations when using T1w/T2w ratio in the same participants, a proxy of intracortical microstructure and hierarchy (Glasser et al., 2011). Specifically, we observed the strongest association between this microstructural marker of the cortical hierarchy and between-network RSFC loadings related to LC1 (r=-0.43, pspin<0.001).”

      3.6) Lastly, regarding the interpretation of the first identified latent component, I have some reservations. Upon examining the loadings, it appears that LC1 primarily reflects impulse control issues rather than representing a comprehensive p-factor. Furthermore, it is worth noting that within the field, there is an ongoing debate concerning the interpretation and utilization of the p-factor. An insightful publication on this topic is "The p factor is the sum of its parts, for now" (Fried et al, 2021), which explains that the p-factor emerges as a result of a positive manifold, but it does not necessarily provide insights into the underlying mechanisms that generated the data.

      We thank the Reviewer for this comment, and have added greater nuance to the discussion of the association with the p factor. We furthermore discuss some of the ongoing debate about the use of the p factor, and cite the recommended publication on P.27.

      “Other factors have also been suggested to impact the development of psychopathology, such as executive functioning deficits, earlier pubertal timing, negative life events (Brieant et al., 2021), maternal depression, or psychological factors (e.g., low effortful control, high neuroticism, negative affectivity). Inclusion of such data could also help to add further mechanistic insights into the rather synoptic proxy measure of the p factor itself (Fried et al., 2021), and to potentially assess shared and unique effects of the p factor vis-à-vis highly correlated measures of impulse control.”

    1. Author response:

      Reviewer #2 (Public Review):

      This is, to my knowledge, the most scalable method for phylogenetic placement that uses likelihoods. The tool has an interesting and innovative means of using gaps, which I haven’t seen before. In the validation the authors demonstrate superior performance to existing tools for taxonomic annotation (though there are questions about the setup of the validation as described below).

      The program is written in C with no library dependencies. This is great. However, I wasn’t able to try out the software because the linking failed on Debian 11, and the binary artifact made by the GitHub Actions pipeline was too recent for my GLIBC/kernel. It’d be nice to provide a binary for people stuck on older kernels (our cluster is still on Ubuntu 18.04). Also, would it be hard to publish your .zipped binaries as packages?

      We have provided a binary (and zipped package) that supports Ubuntu 18.04 in GitHub Actions (https://github.com/lpipes/tronko/actions/runs/9947708087). This should facilitate the use of our software on older systems like yours. We were not able to test the binary, however, since GitHub did not seem to find any nodes with Ubuntu 18.04. It is important to note that Ubuntu 18.04 is deprecated. The latest version of Ubuntu is 24.04, and we recommend that users upgrade to newer, supported versions of their operating systems to benefit from the latest security updates and features.

      Thank you for publishing your source files for the validation on zenodo. Please provide a script that would enable the user to rerun the analysis using those files, either on zenodo or on GitHub somewhere.

      We have posted all datasets as well as scripts to Zenodo.

      The validations need further attention as follows.

      First, the authors have chosen data sets that are not well-aligned with real-world use cases for this software, and as a result, its applicability is difficult to determine. The leave-one-species-out experiment made use of COI gene sequences representing 253 species from the order Charadriiformes, which includes bird species such as gulls and terns. What is the reasoning for selecting this data set given the objective of demonstrating the utility of Tronko for large scale community profiling experiments, which by their nature tend to include microorganisms as subjects? If the authors are interested in evaluating COI (or another gene target) as a marker for characterizing the composition of eukaryotic populations, is the heterogeneity and species distribution of bird species within order Charadriiformes comparable to what one would expect in populations of organisms that might actually be the target of a metagenomic analysis?

      Our reasoning for selecting Charadriiformes is that these species are often misidentified for each other and there is a heavy reliance on COI for their species identification. This choice allows us to demonstrate Tronko’s ability to handle difficult and realistic identification challenges. Additionally, we aimed to simulate a challenging dataset to effectively differentiate between the methods used, showcasing Tronko’s robustness. Including more distantly related bird species would have simplified the identification process, which would not serve our objective of demonstrating the utility of Tronko for distinguishing closely related species. It is also important to note that all methods used the exact same reference database which is not always the case in other species assignment comparative studies.

      Furthermore, while our study uses bird species, the principles and techniques applied are broadly applicable to other taxa, including microorganisms. By selecting a dataset known for its identification difficulties, we underscore Tronko’s potential utility in a wide range of taxonomic profiling scenarios, including those involving high heterogeneity and closely related species, such as in microbial communities.

      Second, it appears that experiments evaluating performance for 16S were limited to reclassification of sequencing data from mock communities described in two publications, Schirmer (2015, 49 bacteria and 10 archaea, all environmental), and Gohl (2016; 20 bacteria - this is the widely used commercial mock community from BEI, all well-known human pathogens or commensals). The authors performed a comparison with kraken2, metaphlan2, and MEGAN using both the default database for each as well as the same database used for Tronko (kudos for including the latter). This pair of experiments provides a reasonable high-level indication of Tronko’s performance relative to other tools, but the total number of organisms is very limited, and particularly limited with respect to the human microbiome. It is also important to point out that these mock communities are composed primarily of type strains and provide limited species-level heterogeneity. The performance of these classification tools on type strains may not be representative of what one would find in natural samples. Thus, the leave-one-individual-out and leave-one-species-out experiments would have been more useful and informative had they been applied to extended 16S data sets representing more ecologically realistic populations.

      We thank the reviewer for this comment and we have included both an additional bacterial mock community dataset from Lluch et al. (2015) and an additional leave-one-species-out experiment. We describe how this leave-one-species-out dataset was constructed in our previous response to ‘Essential Revisions’ #1. We also added Figures 5, S5, and S6.

      Finally, the authors should describe the composition of the databases used for classification as well as the strategy (and toolchain) used to select reference sequences. What databases were the reference sequences drawn from and by what criteria? Were the reference databases designed to reflect the composition of the mock communities (and if so, are they limited to species in those communities, or are additional related species included), or have the authors constructed general purpose reference databases? How many representatives of each species were included (on average), and were there efforts to represent a diversity of strains for each species? The methods should include a section detailing the construction of the data sets: as illustrated in this very study, the choice of reference database influences the quality of classification results, and the authors should explain the process and design considerations for database construction.

      To construct our databases, we used CRUX (Curd et al., 2018). This is described in the Methods section under ‘Custom 16S and COI Tronko-build reference database construction’. All of the leave-one-out tests used downsampled versions of these two databases. It is beyond the scope of the manuscript to discuss how CRUX works. Additionally, we added the following text:

      To compare the new method (Tronko) to previous methods, we constructed reference databases for COI and 16S for common amplicon primer sets using CRUX (See Methods for exact primers used).

    1. Author response:

      Reviewer #1 (Public Review):

      In this manuscript, Perez-Lopez et al. examine the function of the chemokine CCL28, which is expressed highly in mucosal tissues during infection, but its role during infection is poorly understood. They find that CCL28 promotes neutrophil accumulation in the intestines of mice infected with Salmonella and in the lungs of mice infected with Acinetobacter. They find that Ccl28-/- mice are highly susceptible to Salmonella infection, and highly resistant and protected from lethality following Acinetobacter infection. They find that neutrophils express the CCL28 receptors CCR3 and CCR10. CCR3 was pre-formed and intracellular and translocated to the cell surface following phagocytosis or inflammatory stimuli. They also find that CCL28 stimulation of CCR3 promoted neutrophil antimicrobial activity, ROS production, and NET formation, using a combination of primary mouse and human neutrophils for their studies. Overall, the authors' findings provide new and fundamental insight into the role of the CCL28:CCR3 chemokine:chemokine receptor pair in regulating neutrophil recruitment and effector function during infection with the intestinal pathogen Salmonella Typhimurium and the lung pathogen Acinetobacter baumanii.

      We would like to thank the reviewer for their positive assessment of our work and for providing us with constructive comments that have helped us to improve the manuscript.

      Reviewer #2 (Public Review):

      In this manuscript by Perez-Lopez et al., the authors investigate the role of the chemokine CCL28 during bacterial infections in mucosal tissues. This is a well-written study with exciting results. They show a role for CCL28 in promoting neutrophil accumulation to the guts of Salmonella-infected mice and to the lung of mice infected with Acinetobacter. Interestingly, the functional consequences of CCL28 deficiency differ between infections with the two different pathogens, with CCL28-deficiency increasing susceptibility to Salmonella, but increasing resistance to Acinetobacter. The underlying mechanistic reasons for this suggest roles for CCL28 in enhanced neutrophil antimicrobial activity, production of reactive oxygen species, and formation of extracellular traps. However, additional experiments are required to shore up these mechanisms, including addressing the role of other CCL28-dependent cell types and further characterization of neutrophils from CCL28-deficient mice.

      We would like to thank the reviewer for the positive assessment of our work and for providing us with constructive comments that have helped us to improve the manuscript.

      Reviewer #3 (Public Review):

      The manuscript by Perez-Lopez and colleagues uses a combination of in vivo studies using knockout mice and elegant in vitro studies to explore the role of the chemokine CCL28 during bacterial infection on mucosal surfaces. Using the streptomycin model of Salmonella Typhimurium (S. Tm) infection, the authors demonstrate that CCL28 is required for neutrophil influx in the intestinal mucosa to control pathogen burden both locally and systemically. Interestingly, CCL28 plays the opposite role in a model lung infection by Acinetobacter baumanii, as Ccl28-/- mice are protected from Acinetobacter infection. Authors suggest that the mechanism by which CCL28 plays a role during bacterial infection is due to its role in modulating neutrophil recruitment and function.

      We would like to thank the reviewer for the positive assessment of our work and for providing us with constructive comments that have helped us to improve the manuscript.

      The major strengths of the manuscript are:

      The novelty of the findings that are described in the manuscript. The role of the chemokine CCL28 in modulating neutrophil function and recruitment in mucosal surfaces is intriguing and novel.

      Authors use Ccl28-/- mice in their studies, a mouse strain that has only recently been available. To assess the impact of CCL28 on mucosal surfaces during pathogen-induced inflammation, the authors choose not one but two models of bacterial infection (S. Tm and A. baumanii). This approach increases the rigor and impact of the data presented.

      Authors combine the elegant in vivo studies using Ccl28 -/- with in vitro experiments that explore the mechanisms by which CCL28 affects neutrophil function.

      The major weaknesses of the manuscript in its present form are:

      Authors use different time points in the S. Tm model to characterize the influx of immune cells and pathology. They do not provide a clear justification as to why distinct time points were chosen for their analysis.

      The reviewer raises a good point. As discussed in the detailed response to the reviewers, we have now generated extensive results at different time points and included these in the revised manuscript.

      Authors provide puzzling data that Ccl28-/- mice have the same numbers of CCR3- and CCR10-expressing neutrophils in the mucosa during infection. It is unclear why the lack of CCL28 expression would not affect the recruitment of neutrophils that express the receptors (CCR3 and CCR10) for this chemokine. Thus, these results need to be better explained.

      As discussed in the detailed response to the reviewers, we clarified that Ccl28-/- mice have reduced numbers of neutrophils in the mucosa during infection, but the percentage of CCR3+ and CCR10+ neutrophils does not change. We provide additional discussion of this point in the manuscript and in the response to the reviewers.

      The in vitro studies focus primarily on characterizing how CCL28 affects the function of neutrophils in response to S. Tm infection. There is a lack of data to demonstrate whether Acinetobacter affects CCR3 and CCR10 expression and recruitment to the cell surface and whether CCL28 plays any role in this process.

      We agree and have performed additional studies with Acinetobacter and CCL28, which we discuss in greater detail below in the response to the reviewers.

    1. we've learned the hard way, actually, over the past 50 years, that we don't solve sustainability problems by only raising awareness. It's not enough. You also need some top-down influence on what I call keystone actors, to get key players in the economy or key decision makers to move.

      for - climate crisis - raising awareness alone - is not enough - need to also influence top down keystone actors

      climate crisis - raising awareness alone - is not enough - need to also influence top down keystone actors - This is only part of the story, the other part is developing a coherent, unified, bottom up movement - While statistics show a majority of people of most countries now take climate change seriously, it's not translating into TIMELY and APPROPRIATE ACTION and BEHAVIOUR CHANGE - The common person is still captured by the pathological economic system - (S)he still prioritises increasingly more precarious survival over all other concerns, including environmental - This is because most survival activity is still intimately tied to ecological degradation - The common person is not sufficiently educated about the threat level. - And even if they were, there does not yet exist any process to unify these collective concerns to trigger the appropriate leverage point of bottom up collective action

    1. for - Federico Faggin - quantum physics - consciousness

      summary - Federico Faggin is a physicist and microelectronic engineer who was the developer of the world's first microprocessor at Intel, the Intel 4004 CPU. - Now he focuses his attention on developing a robust and testable theory of consciousness based on quantum information theory. - What sets Federico apart from other scientists who are studying consciousness is a series of profound personal 'awakening'-type experiences, which have led to a psychological dissolution of the sense of self bounded by his physical body - This profound experience led him to claim with unshakable certainty that our individual consciousness is far greater than our normal mundane experience of it - Having a science and engineering background, Faggin has set out to validate his experiences with a new scientific theory of Consciousness, Information and Physicality (CIP) and Operational Probabilistic Theory (OPT)

      to - Federico Faggin's website - https://hyp.is/JTGs6lr9Ee-K8-uSXD3tsg/www.fagginfoundation.org/what-we-do/j - Federico Faggin and paper: - Hard Problem and Free Will: - an information-theoretical approach - https://hyp.is/styU2lofEe-11hO02KJC8w/link.springer.com/chapter/10.1007/978-3-030-85480-5_5

    1. RealSense 455 fixed cameras, namely the navigation and the manipulation camera, both with a vertical field of view of 59° and capable of 1280×720 RGB-D image capture. The navigation camera is placed looking in the agent's forward direction and points slightly down, with the horizon at a nominal 30°

      sim2real transfer

    1. …wants to study at the Uni Münster from the winter semester can apply for a place until 15 July 2024…

      Another overlap

    2. …the winter semester at the Uni Münster can apply for a place until 15 July 2024. This deadline applies to admission-restricted…

      Overlap

    3. …apply for a place. This deadline applies to admission-restricted subjects and only to applicants from Germany or the EU/EEA states. For applicants from non-EU and non-EEA states, the deadline ends on 31 May. An overview of the subjects is available in the Studienführer, which also offers information material and subject-specific advisory services. Anyone opting for an admission-free degree programme can enrol from the beginning of August until 4 October 2024. Applications for a place are submitted exclusively online. Studienführer der ZSB © 2024 Uni MS. Studium an der Universität Münster © Uni MS - Peter Lessmann. Part-time study at the Universität Münster: for over 20 years, professionals and managers from all sectors have used the University of Münster's extensive part-time study programme. The university offers them and their teams more than 20 high-quality, practice-oriented Master's/MBA programmes, almost 30 university certificates and numerous seminars in the fields of business, administration, medicine, law and the non-profit sector. They benefit from fully-fledged university degrees, the latest scientific findings and renowned lecturers. The university's continuing-education providers are the "Professional School – Universität Münster" and the "JurGrad" (law). Research: the excellence clusters of the Universität Münster. The two clusters of excellence that the University of Münster has secured under the Excellence Strategy of the federal and state governments span departments from the humanities and social sciences as well as the natural sciences. The Department of Mathematics was successful with the cluster of excellence "Mathematik Münster: Dynamik - Geometrie - Struktur". The cluster of excellence "Religion und Politik", now funded for the third time, comprises around 200 humanities and social-science scholars from 20 disciplines. © Judith Kraft. Research profile: the Universität Münster conducts internationally leading research in mathematics, the humanities and social sciences, the natural sciences and the life sciences. The researchers involved combine excellent basic research with application-oriented approaches. Impact areas. Coordinated research programmes. Tenure track: the Universität Münster consistently supports early-career researchers through tran

      This one is overarching

    4. …to recover from agricultural residues

      Now that's something

    1. NPC; chromatin organization; cross-linking mass spectrometry; cryo-electron tomography; cryo-focused-ion-beam milling; in-cell structural biology; integrative modeling; mRNA transport; nuclear basket; nuclear pore complex; subtomogram analysis.

      test 123

    2. modeling, we computed a model of the basket in yeast and mammals that revealed how a hub of nucleoporins (Nups) in the nuclear ring binds to basket-forming Mlp/Tpr proteins: the coiled-coil domains of Mlp/Tpr form the struts of the basket, while their unstructured termini constitute the basket distal densities, which potentially serve as a docking site for mRNA preprocessing before nucleocytoplasmic transport.

      cemal okka

    3. One such structure is the cage-like nuclear basket. Despite its crucial roles in mRNA surveillance and chromatin organization, an architectural understanding has remained elusive. Using in-cell cryo-electron tomography and subtomogram analysis, we explored the NPC's structural variations and the nuclear basket across fungi (yeast; S. cerevisiae), mammals (mouse; M. musculus), and protozoa (T. gondii).

      Whatnot

    1. Welcome back, and in this video, I want to talk about AWS Control Tower. This is a product which is becoming required knowledge if you need to use AWS in the real world. And because of this, it's starting to feature more and more in all of the AWS exams. I want this to be a lesson applicable to all of the AWS study paths, so think of this as a foundational lesson. And if required, for the course that you're studying, I might be going into additional detail. We do have a lot to cover, so let's jump in and get started.

      At a high level, Control Tower has a simple but wide-ranging job, and that's to allow the quick and easy setup of multi-account environments. You might be asking, "Doesn't AWS Organizations already do that?" Well, kind of. Control Tower actually orchestrates other AWS services to provide the functionality that it does, and one of those services is AWS Organizations. But it goes beyond that. Control Tower uses Organizations, IAM Identity Center, which is the product formerly known as AWS SSO. It also uses CloudFormation, AWS Config, and much more. You can think of Control Tower as another evolution of AWS Organizations adding significantly more features, intelligence, and automation.

      There are a few different parts of Control Tower which you need to understand, and it's worth really focusing on understanding the distinction now because we're going to be building on this later. First, we've got the Landing Zone, and simply put, this is the multi-account environment part of Control Tower. This is what most people will be interacting with when they think of Control Tower. Think of this like AWS Organizations only with superpowers. It provides, via other AWS services, single sign-on and ID Federation so you can use a single login across all of your AWS accounts, and even share this with your existing corporate identity store. And this is provided using the IAM Identity Center, again, the service formerly known as AWS SSO. It also provides centralized logging and auditing, and this uses a combination of CloudWatch, CloudTrail, AWS Config, and SNS. Everything else in the Control Tower product surrounds this Landing Zone, and I'll show you how this looks later in this lesson.

      Control Tower also provides guardrails, again, more detail on this is coming up soon. But these are designed to either detect or mandate rules and standards across all AWS accounts within the Landing Zone. You also have the Account Factory, which provides really cool automation for account creation, and adds features to standardize the creation of those accounts. This goes well beyond what AWS Organizations can do on its own, and I'll show you how this works over the rest of this lesson. And if applicable, for the path that you're studying, there will be a demo coming up elsewhere in the course. Finally, there's a dashboard which offers a single-page oversight of the entire organization. At a high level, that's what you get with Control Tower.

      Now, things always make more sense visually, so let's step through this high-level architecture, and I hope this will add a little bit more context. We start with Control Tower itself, which, like AWS Organizations, is something you create from within an AWS account. And this account becomes the management account of the Landing Zone. At this topmost level, within the management account, we have Control Tower itself, which orchestrates everything. We have AWS Organizations, and as you've already experienced, this provides the multi-account structure, so organizational units and service control policies. And then, we have single sign-on provided by the IAM Identity Center, which historically was known as AWS SSO. This allows for, as the name suggests, single sign-on, which means we can use the same set of internal or federated identities to access everything in the Landing Zone that we have permissions to. This works in much the same way as AWS SSO worked, but it's all set up and orchestrated by Control Tower.

      When Control Tower is first set up, it generally creates two organizational units: the foundational organizational unit, which by default is called Security, and a custom organizational unit, which by default is named Sandbox. Inside the foundational or Security organizational unit, Control Tower creates two AWS accounts, the Audit account and the Log Archive account. The Log Archive account is for users that need access to all logging information for all of your enrolled accounts within the Landing Zone. Examples of things stored within this account are AWS Config and CloudTrail logs; they're kept in this account so that they're isolated. You have to explicitly grant access to this account, and it offers a secure, read-only archive account for logging.

      The Audit account is for your users who need access to the audit information made available by Control Tower. You can also use this account as a location for any third-party tools to perform auditing of your environment. It's in this account that you might use SNS for notifications of changes to governance and security policies, and CloudWatch for monitoring Landing Zone wide metrics. It's at this point where Control Tower becomes really awesome because we have the concept of an Account Factory. Think of this as a team of robots who are creating, modifying, or deleting AWS accounts as your business needs them. And this can be interacted with both from the Control Tower console or via the Service Catalog.

      Within the custom organizational unit, Account Factory will create AWS accounts in a fully automated way as many of them as you need. The configuration of these accounts is handled by Account Factory. So, from an account and networking perspective, you have baseline or cookie-cutter configurations applied, and this ensures a consistent configuration across all AWS accounts within your Landing Zone. Control Tower utilizes CloudFormation under the covers to implement much of this automation, so expect to see stacks created by the product within your environment. And Control Tower uses both AWS Config and Service Control Policies to implement account guardrails. And these detect drifts away from governance standards, or prevent those drifts from occurring in the first place.

      At a high level, this is how Control Tower looks. Now the product can scale from simple to super complex. This is a product which you need to use in order to really understand. And depending on the course that you're studying, you might have the opportunity to get some hands-on later in the course. If not, don't worry, that means that you only need this high-level understanding for the exam.

      Let's move on and look at the various parts of Control Tower in a little bit more detail. Let's quickly step through the main points of the Landing Zone. It's a feature designed to allow anyone to implement a well-architected, multi-account environment, and it has the concept of a home region, which is the region that you initially deploy the product into, for example, us-east-1. You can explicitly allow or deny the usage of other AWS regions, but the home region, the one that you deploy into, is always available. The Landing Zone is built using AWS Organizations, AWS Config, CloudFormation, and much more. Essentially, Control Tower is a product which brings the features of lots of different AWS products together and orchestrates them.

      I've mentioned that there's a concept of the foundational OU, by default called the Security OU, and within this, Log Archive and Audit AWS accounts. And these are used mainly for security and auditing purposes. You've also got the Sandbox OU which is generally used for testing and less rigid security situations. You can create other organizational units and accounts, and for a real-world deployment of Control Tower, you're generally going to have lots of different organizational units. Potentially, even nested ones to implement a structure which works for your organization.

      Landing Zone utilizes the IAM Identity Center, again, formerly known as AWS SSO, to provide SSO or single sign-on services across multiple AWS accounts within the Landing Zone, and it's also capable of ID Federation. And ID Federation simply means that you can use your existing identity stores to access all of these different AWS accounts. The Landing Zone provides monitoring and notifications using CloudWatch and SNS, and you can also allow end users to provision new AWS accounts within the Landing Zone using Service Catalog.

      This is the Landing Zone at a high level. Let's next talk about guardrails. Guardrails are essentially rules for multi-account governance. Guardrails come in three different types: mandatory, strongly recommended, or elective. Mandatory ones are always applied. Strongly recommended are obviously strongly recommended by AWS. And elective ones can be used to implement fairly niche requirements, and these are completely optional.

      Guardrails themselves function in two different ways. We have preventative, and these stop you doing things within your AWS accounts in your Landing Zone, and these are implemented using Service Control policies, which are part of the AWS Organizations product. These guardrails are either enforced or not enabled, so you can either enforce them or not. And if they're enforced, it simply means that any actions defined by that guardrail are prevented from occurring within any of your AWS accounts. An example of this might be to allow or deny usage of AWS regions, or to disallow bucket policy changes within accounts inside your Landing Zone.
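
      To make this concrete, here is a minimal sketch (not from the lesson itself) of what a preventative region-deny guardrail amounts to under the hood: a Service Control Policy attached to an OU. The allowed regions, exempted global services, policy name and OU id are all placeholders.

      ```python
      import json
      import boto3

      # Illustrative sketch of a region-deny SCP, similar in spirit to Control Tower's
      # preventative "deny usage of AWS regions" guardrail. All values are placeholders.
      REGION_DENY_SCP = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "DenyOutsideAllowedRegions",
                  "Effect": "Deny",
                  # Global services call us-east-1 endpoints, so they are typically exempted.
                  "NotAction": ["iam:*", "organizations:*", "sts:*", "cloudfront:*", "support:*"],
                  "Resource": "*",
                  "Condition": {
                      "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-2"]}
                  },
              }
          ],
      }

      org = boto3.client("organizations")
      policy = org.create_policy(
          Name="deny-non-approved-regions",            # placeholder name
          Description="Preventative guardrail sketch",
          Type="SERVICE_CONTROL_POLICY",
          Content=json.dumps(REGION_DENY_SCP),
      )
      org.attach_policy(
          PolicyId=policy["Policy"]["PolicySummary"]["Id"],
          TargetId="ou-xxxx-xxxxxxxx",                 # placeholder OU id
      )
      ```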

      The second functional type of guardrail is detective, and you can think of this as a compliance check. This uses AWS Config rules and allows you to check that the configuration of a given thing within an AWS account matches what you define as best practice. These type of guardrails are either clear, in violation, or not enabled. And an example of this would be a detective guardrail to check whether CloudTrail is enabled within an AWS account, or whether any EC2 instances have public IPv4 addresses associated with those instances. The important distinction to understand here is that preventative guardrails will stop things occurring, and detective guardrails will only identify those things. So, guardrails are a really important security and governance construct within the Control Tower product.
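
      Underneath, a detective guardrail is an AWS Config rule. A minimal sketch, assuming boto3 and a placeholder rule name, using the AWS managed rule that checks whether CloudTrail is enabled:

      ```python
      import boto3

      # Illustrative sketch of a detective guardrail: an AWS Config managed rule that
      # reports whether CloudTrail is enabled in the account. The rule name is a placeholder.
      config = boto3.client("config")
      config.put_config_rule(
          ConfigRule={
              "ConfigRuleName": "check-cloudtrail-enabled",   # placeholder
              "Description": "Detective check: is CloudTrail enabled?",
              "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"},
          }
      )
      ```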

      Lastly, I want to talk about the Account Factory itself. This is essentially a feature which allows automated account provisioning, and this can be done by either cloud administrators or end users with appropriate permissions. And this automated provisioning includes the application of guardrails, so any guardrails which are defined can be automatically applied to these automatically provisioned AWS accounts.

      Because these accounts can be provisioned by end users (think of these as members of your organization), either those members or anyone else that you define can be given admin permissions on an AWS account which is automatically provisioned. This allows you to have a truly self-service, automatic process for provisioning AWS accounts, so you can allow any member of your organization, within tightly controlled parameters, to provision accounts for any purpose which you define as okay. And that person will be given admin rights over that AWS account. These can be long-running accounts or short-term accounts. These accounts are also configured with standard account and network configuration. If you have any organizational policies for how networking or any account settings are configured, these automatically provisioned accounts will come with this configuration. And this includes things like the IP addressing used by VPCs within the accounts, which could be automatically configured to avoid things like addressing overlap. And this is really important when you're provisioning accounts at scale.

      The Account Factory allows accounts to be closed or repurposed, and this whole process can be tightly integrated with a business's SDLC or software development life cycle. So, as well as doing this from the console UI, the Control Tower product and Account Factory can be integrated using APIs into any SDLC processes that you have within your organization. If you need accounts to be provisioned as part of a certain stage of application development, or you want accounts to be provisioned as part of maybe client demos or software testing, then you can do this using the Account Factory feature.
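
      Because Account Factory is surfaced through Service Catalog, account requests can also be scripted. The sketch below assumes boto3; the product id, provisioning artifact id and parameter names are placeholders you would look up in your own Landing Zone rather than values from this lesson.

      ```python
      import boto3

      # Sketch only: requesting a new account from the Account Factory product in
      # Service Catalog. Ids, names and parameter keys are placeholders.
      sc = boto3.client("servicecatalog")
      sc.provision_product(
          ProductId="prod-xxxxxxxxxxxxx",               # Account Factory product id (placeholder)
          ProvisioningArtifactId="pa-xxxxxxxxxxxxx",    # product version id (placeholder)
          ProvisionedProductName="team-x-sandbox-account",
          ProvisioningParameters=[
              {"Key": "AccountName", "Value": "team-x-sandbox"},
              {"Key": "AccountEmail", "Value": "aws+team-x@example.com"},
              {"Key": "SSOUserEmail", "Value": "owner@example.com"},
              {"Key": "SSOUserFirstName", "Value": "Team"},
              {"Key": "SSOUserLastName", "Value": "X"},
              {"Key": "ManagedOrganizationalUnit", "Value": "Sandbox"},
          ],
      )
      ```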

      At this point, that is everything I wanted to cover at this high level about Control Tower. If you need practical experience of Control Tower for the course that you are studying, there will be a demo lesson coming up elsewhere in the course, which gives you that practical experience. Don't be concerned if this is the only lesson that there is, or if there's this lesson plus additional deep-dive theory. I'll make sure, for whatever course you're studying, you have enough exposure to Control Tower.

      With that being said, though, that is the end of this high-level video. So go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and welcome to this CloudTrail demo where we're going to set up an organizational trail and configure it to log data for all accounts in our organization to S3 and CloudWatch logs.

      The first step is that you'll need to be logged into the IAM admin user of the management account of the organization. As a reminder, this is the general account. To set up an organizational trail, you always need to be logged into the management account. To set up individual trails, you can do that locally inside each of your accounts, but it's always more efficient to use an organizational trail.

      Now, before we start the demonstration, I want to talk briefly about CloudTrail pricing. I'll make sure this link is in the lesson description, but essentially there is a fairly simple pricing structure to CloudTrail that you need to be aware of.

      The 90-day history that's enabled by default in every AWS account is free. You don't get charged for that; it comes free by default with every AWS account. Next, you have the ability to get one copy of management events free in every region in each AWS account. This means creating one trail that's configured for management events in each region in each AWS account, and that comes for free. If you create any additional trails, so you get any additional copies of management events, they are charged at two dollars per 100,000 events. That won't apply to us in this demonstration, but you need to be aware of that if you're using this in production.

      Logging data events comes at a charge regardless of the number, so we're not going to enable data events for this demo lesson. But if you do enable it, then that comes at a charge of 10 cents per 100,000 events, irrespective of how many trails you have. This charge applies from the first time you're logging any data events.

      What we'll be doing in this demo lesson is setting up an organizational trail which will create a trail in every region in every account inside the organization. But because we get one for free in every region in every account, we won't incur any charges for the CloudTrail side of things. We will be charged for any S3 storage that we use. However, S3 also comes with a free tier allocation for storage, which I don't expect us to breach.

      With that being said, let's get started and implement this solution. To do that, we need to be logged in to the console UI again in the management account of the organization. Then we need to move to the CloudTrail console. If you've been here recently, it will be in the Recently Visited Services. If not, just type CloudTrail in the Find Services box and then open the CloudTrail console.

      Once you're at the console, you might see a screen like this. If you do, then you can just click on the hamburger menu on the left and then go ahead and click on trails. Now, depending on when you're doing this demo, if you see any warnings about a new or old console version, make sure that you select the new version so your console looks like what's on screen now.

      Once you're here, we need to create a trail, so go ahead and click on create trail. To create a trail, you're going to be asked for a few important pieces of information, the first of which is the trail name. For trail name, we're going to use "animals4life.org," so just go ahead and enter that. By default, with this new UI version, when you create a trail, it's going to create it in all AWS regions in your account. If you're logged into the management account of the organization, as we are, you also have the ability to enable it for all regions in all accounts of your organization. We're going to do that because this allows us to have one single logging location for all CloudTrail logs in all regions in all of our accounts, so go ahead and check this box.

      By default, CloudTrail stores all of its logs in an S3 bucket. When you're creating a trail, you have the ability to either create a new S3 bucket to use or you can use an existing bucket. We're going to go ahead and create a brand new bucket for this trail. Bucket names within S3 need to be globally unique, so it needs to be a unique name across all regions and across all AWS accounts. We're going to call this bucket starting with "CloudTrail," then a hyphen, then "animals-for-life," another hyphen, and then you'll need to put a random number. You’ll need to pick something different from me and different from every other student doing this demo. If you get an error about the bucket name being in use, you just need to change this random number.
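
      For reference, when the console creates this bucket for you it also attaches a bucket policy allowing CloudTrail to write into it. A sketch of the kind of policy involved, with placeholder bucket name and account id, looks roughly like this:

      ```python
      import json

      # Sketch of a CloudTrail delivery bucket policy. Bucket name and account id
      # are placeholders; an organizational trail also writes under the org id prefix.
      BUCKET = "cloudtrail-animals-for-life-1337"
      ACCOUNT_ID = "111111111111"

      bucket_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "AWSCloudTrailAclCheck",
                  "Effect": "Allow",
                  "Principal": {"Service": "cloudtrail.amazonaws.com"},
                  "Action": "s3:GetBucketAcl",
                  "Resource": f"arn:aws:s3:::{BUCKET}",
              },
              {
                  "Sid": "AWSCloudTrailWrite",
                  "Effect": "Allow",
                  "Principal": {"Service": "cloudtrail.amazonaws.com"},
                  "Action": "s3:PutObject",
                  "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
                  "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
              },
          ],
      }
      print(json.dumps(bucket_policy, indent=2))
      ```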

      You're also able to specify if you want the log files stored in the S3 bucket to be encrypted. This is done using SSE-KMS encryption. This is something that we'll be covering elsewhere in the course, and for production usage, you would definitely want to use it. For this demonstration, to keep things simple, we're not going to encrypt the log files, so go ahead and untick this box.

      Under additional options, you're able to select log file validation, which adds an extra layer of security. This means that if any of the log files are tampered with, you have the ability to determine that. This is a really useful feature if you're performing any account-level audits. In most production situations, I do enable this, but you can also elect to have an SNS notification delivery. So, every time log files are delivered into this S3 bucket, you can have a notification. This is useful for production usage or if you need to integrate this with any non-AWS systems, but for this demonstration, we'll leave this one unchecked.

      You also have the ability, as well as storing these log files into S3, to store them in CloudWatch logs. This gives you extra functionality because it allows you to perform searches, look at the logs from a historical context inside the CloudWatch logs user interface, as well as define event-driven processes. You can configure CloudWatch logs to scan these CloudTrail logs and, in the event that any particular piece of text occurs in the logs (e.g., any API call, any actions by a user), you can generate an event that can invoke, for example, a Lambda function or spawn some other event-driven processing. Don't worry if you don't understand exactly what this means at this point; I'll be talking about all of this functionality in detail elsewhere in the course. For this demonstration, we are going to enable CloudTrail to put these logs into CloudWatch logs as well, so check this box. You can choose a log group name within CloudWatch logs for these CloudTrail logs. If you want to customize this, you can, but we're going to leave it as the default.

      As with everything inside AWS, if a service is acting on our behalf, we need to give it the permissions to interact with other AWS services, and CloudTrail is no exception. We need to give CloudTrail the ability to interact with CloudWatch logs, and we do that using an IAM role. Don’t worry, we’ll be talking about IAM roles in detail elsewhere in the course. For this demonstration, just go ahead and select "new" because we're going to create a new IAM role that will give CloudTrail the ability to enter data into CloudWatch logs.

      Now we need to provide a role name, so go ahead and enter "CloudTrail_role_for_CloudWatch_logs" and then an underscore and then "animals_for_life." The name doesn’t really matter, but in production settings, you'll want to make sure that you're able to determine what these roles are for, so we’ll use a standard naming format. If you expand the policy document, you'll be able to see the exact policy document or IAM policy document that will be used to give this role the permissions to interact with CloudWatch logs. Don’t worry if you don’t fully understand policy documents at this point; we’ll be using them throughout the course, and over time you'll become much more comfortable with exactly how they're used. At a high level, this policy document will be attached to this role, and this is what will give CloudTrail the ability to interact with CloudWatch logs.
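
      To give a sense of what that policy document contains, here is a rough sketch of the permissions CloudTrail needs in order to deliver events to CloudWatch Logs. The log group ARN is a placeholder, and the console-generated policy is scoped more tightly to specific log streams.

      ```python
      import json

      # Sketch of the CloudTrail-to-CloudWatch-Logs role policy. The ARN is a placeholder.
      LOG_GROUP_ARN = "arn:aws:logs:us-east-1:111111111111:log-group:aws-cloudtrail-logs-example:*"

      role_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                  "Resource": LOG_GROUP_ARN,
              }
          ],
      }
      print(json.dumps(role_policy, indent=2))
      ```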

      At this point, just scroll down; that's everything that we need to do, so go ahead and click on "next." Now, you'll need to select what type of events you want this trail to log. You’ve got three different choices. The default is to log only management events, so this logs any events against the account or AWS resources (e.g., starting or stopping an EC2 instance, creating or deleting an EBS volume). You've also got data events, which give you the ability to log any actions against things inside resources. Currently, CloudTrail supports a wide range of services for data event logging. For this demonstration, we won't be setting this up with data events initially because I’ll be covering this elsewhere in the course. So, go back to the top and uncheck data events.

      You also have the ability to log insight events, which can identify any unusual activity, errors, or user behavior on your account. This is especially useful from a security perspective. For this demonstration, we won’t be logging any insight events; we’re just going to log management events. For management events, you can further filter down to read or write or both and optionally exclude KMS or RDS data API events. For this demo lesson, we’re just going to leave it as default, so make sure that read and write are checked. Once you've done that, go ahead and click on "next." On this screen, just review everything. If it all looks good, click on "create trail."
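
      If you would rather script this than click through the console, the same trail configuration can be expressed with boto3 roughly as follows. The bucket, log group and role values are the placeholders used earlier, not anything you must reuse verbatim.

      ```python
      import boto3

      ct = boto3.client("cloudtrail")

      # Create an all-regions, organization-wide trail (requires the management account).
      ct.create_trail(
          Name="animals4life.org",
          S3BucketName="cloudtrail-animals-for-life-1337",
          IsMultiRegionTrail=True,
          IsOrganizationTrail=True,
          IncludeGlobalServiceEvents=True,
          EnableLogFileValidation=True,
          CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111111111111:log-group:aws-cloudtrail-logs-example:*",
          CloudWatchLogsRoleArn="arn:aws:iam::111111111111:role/CloudTrail_role_for_CloudWatch_logs_animals_for_life",
      )

      # Management events only, both read and write, matching the console defaults.
      ct.put_event_selectors(
          TrailName="animals4life.org",
          EventSelectors=[{"ReadWriteType": "All", "IncludeManagementEvents": True, "DataResources": []}],
      )

      ct.start_logging(Name="animals4life.org")
      ```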

      Now, if you get an error saying the S3 bucket already exists, you'll just need to choose a new bucket name. Click on "edit" at the top, change the bucket name to something that's globally unique, and then follow that process through again and create the trail.

      After a few moments, the trail will be created. It should say "US East Northern Virginia" as the home region. Even though you didn't get the option to select it because it's selected by default, it is a multi-region trail. Finally, it is an organizational trail, which means that this trail is now logging any CloudTrail events from all regions in all accounts in this AWS organization.

      Now, this isn't real-time, and when you first enable it, it can take some time for anything to start to appear in either S3 or CloudWatch logs. At this stage, I recommend that you pause the video and wait for 10 to 15 minutes before continuing, because the initial delivery of that first set of log files through to S3 can take some time. So pause the video, wait 10 to 15 minutes, and then you can resume.

      Next, right-click the link under the S3 bucket and open that in a new tab. Go to that tab, and you should start to see a folder structure being created inside the S3 bucket. Let's move down through this folder structure, starting with CloudTrail. Go to US East 1 and continue down through this folder structure.

      In my case, I have quite a few of these log files that have been delivered already. I'm going to pick one of them, the most recent, and just click on Open. Depending on the browser that you're using, you might have to download and then uncompress this file. Because I'm using Firefox, it can natively open the GZ compressed file and then automatically open the JSON log file inside it.

      So this is an example of a CloudTrail event. We're able to see the user identity that actually generates this event. In this case, it's me, I am admin. We can see the account ID that this event is for. We can see the event source, the event name, the region, the source IP address, the user agent (in this case, the console), and all of the relevant information for this particular interaction with the AWS APIs are logged inside this CloudTrail event.

      Don’t worry if this doesn’t make a lot of sense at this point. You’ll get plenty of opportunities to interact with this type of logging event as you go through the various theory and practical lessons within the course. For now, I just want to highlight exactly what to expect with CloudTrail logs.
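
      If you want to inspect these files programmatically rather than through the browser, a minimal sketch, assuming boto3 and the placeholder bucket name from earlier, might look like this:

      ```python
      import gzip
      import json
      import boto3

      s3 = boto3.client("s3")
      bucket = "cloudtrail-animals-for-life-1337"   # placeholder bucket name

      # Find one delivered log file under the standard AWSLogs/ prefix (assumes at least
      # one file has already been delivered) and print a few fields per event.
      resp = s3.list_objects_v2(Bucket=bucket, Prefix="AWSLogs/")
      key = next(o["Key"] for o in resp["Contents"] if "/CloudTrail/" in o["Key"])
      body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

      for event in json.loads(gzip.decompress(body))["Records"]:
          print(
              event["eventTime"],
              event["eventSource"],
              event["eventName"],
              event["awsRegion"],
              event["sourceIPAddress"],
              event["userIdentity"].get("arn", "unknown"),
          )
      ```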

      Since we’ve enabled all of this logging information to also go into CloudWatch logs, we can take a look at that as well. So back at the CloudTrail console, if we click on Services and then type CloudWatch, wait for it to pop up, locate Logs underneath CloudWatch, and then open that in a new tab.

      Inside CloudWatch, on the left-hand menu, look for Logs, and then Log Groups, and open that. You might need to give this a short while to populate, but once it does, you should see a log group for the CloudTrail that you’ve just created. Go ahead and open that log group.

      Inside it, you’ll see a number of log streams. These log streams will start with your unique organizational code, which will be different for you. Then there will be the account number of the account that it represents. Again, these will be different for you. And then there’ll be the region name. Because I’m only interacting with the Northern Virginia region, currently, the only ones that I see are for US East 1.

      In this particular account that I’m in, the general account of the organization, if I look at the ARN (Amazon Resource Name) at the top or after US East 1 here, this number is my account number. This is the account number of my general account. So if I look at the log streams, you’ll be able to see that this account (the general account) matches this particular log stream. You’ll be able to do the same thing in your account. If you look for this account ID and then match it with one of the log streams, you'll be able to pull the logs for the general AWS account.

      If I go inside this particular log stream, as CloudTrail logs any activity in this account, all of that information will be populated into CloudWatch logs. And that’s what I can see here. If I expand one of these log entries, we’ll see the same formatted CloudTrail event that I just showed you in my text editor. So the only difference when using CloudWatch logs is that the CloudTrail events also get entered into a log stream in a log group within CloudWatch logs. The format looks very similar.

      Returning to the CloudTrail console, one last thing I want to highlight: if you expand the menu on the left, whether you enable a particular trail or not, you’ve always got access to the event history. The event history stores a log of all CloudTrail events for the last 90 days for this particular account, even if you don’t have a specific trail enabled. This is standard functionality. What a trail allows you to do is customize exactly what happens to that data. This area of the console, the event history, is always useful if you want to search for a particular event, maybe check who’s logged onto the account recently, or look at exactly what the IAM admin user has been doing within this particular AWS account.
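
      The event history can also be queried from code. A small sketch using boto3's lookup_events, here filtering for console logins:

      ```python
      import boto3

      ct = boto3.client("cloudtrail")
      resp = ct.lookup_events(
          LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
          MaxResults=10,
      )
      for e in resp["Events"]:
          print(e["EventTime"], e.get("Username", "-"), e["EventName"])
      ```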

      The reason why we created a trail is to persistently store that data in S3 as well as put it into CloudWatch logs, which gives us that extra functionality. With that being said, that’s everything I wanted to cover in this demo lesson.

      One thing you need to be aware of is that S3, as a service, provides a certain amount of resources under the free tier available in every new AWS account, so you can store a certain amount of data in S3 free of charge. The problem with CloudTrail, and especially organizational trails, is that they generate quite a large number of requests. In addition to storage space, the free tier also only includes a certain number of requests per month.

      If you leave this CloudTrail enabled for the duration of your studies, for the entire month, it is possible that this will go slightly over the free tier allocation for requests within the S3 service. You might see warnings that you’re approaching a billable threshold, and you might even get a couple of cents of bill per month if you leave this enabled all the time. To avoid that, if you just go to Trails, open up the trail that you’ve created, and then click on Stop Logging. You’ll need to confirm that by clicking on Stop Logging, and at that point, no logging will occur into the S3 bucket or into CloudWatch logs, and you won’t experience those charges.
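
      The same Stop Logging action can be performed via the API if you prefer; a one-line sketch with boto3, using the trail name from this demo:

      ```python
      import boto3

      # Stops log delivery for the trail without deleting it; start_logging re-enables it.
      boto3.client("cloudtrail").stop_logging(Name="animals4life.org")
      ```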

      For any production usage, the low cost of this service means that you would normally leave it enabled in all situations. But to keep costs within the free tier for this course, you can, if required, just go ahead and stop the logging. If you don’t mind a few cents per month of S3 charges for CloudTrail, then by all means, go ahead and leave it enabled.

      With that being said, that’s everything I wanted to cover in this demo lesson. So go ahead, complete the lesson, and when you're ready, I look forward to you joining me in the next.

    1. Mathematical economics
    1. Welcome to this lesson, where I'm going to be introducing CloudTrail.

      CloudTrail is a product that logs API actions which affect AWS accounts. If you stop an instance, that's logged. If you change a security group, that's logged too. If you create or delete an S3 bucket, that's logged by CloudTrail. Almost everything that can be done to an AWS account is logged by this product.

      Now, I want to quickly start with the CloudTrail basics. The product logs API calls or account activities, and every one of those logged activities is called a CloudTrail event. A CloudTrail event is a record of an activity in an AWS account. This activity can be an action taken by a user, a role, or a service.

      CloudTrail by default stores the last 90 days of CloudTrail events in the CloudTrail event history. This is an area of CloudTrail which is enabled by default in AWS accounts. It's available at no cost and provides 90 days of history on an AWS account.

      If you want to customize CloudTrail in any way beyond this 90-day event history, you need to create a trail. We'll be looking at the architecture of a trail in a few moments' time.

      CloudTrail events can be one of three different types: management events, data events, and insight events. If applicable to the course you are studying, I'll be talking about insight events in a separate video. For now, we're going to focus on management events and data events.

      Management events provide information about management operations performed on resources in your AWS account. These are also known as control plane operations. Think of things like creating an EC2 instance, terminating an EC2 instance, creating a VPC. These are all control plane operations.

      Data events contain information about resource operations performed on or in a resource. Examples of this might be objects being uploaded to S3 or objects being accessed from S3, or when a Lambda function is invoked. By default, CloudTrail only logs management events because data events are often much higher volume. Imagine if every access to an S3 object was logged; it could add up pretty quickly.

      A CloudTrail trail is the unit of configuration within the CloudTrail product. It's a way you provide configuration to CloudTrail on how to operate. A trail logs events for the AWS region that it's created in. That's critical to understand. CloudTrail is a regional service.

      When you create a trail, it can be configured to operate in one of two ways: as a one-region trail or as an all-regions trail. A single-region trail is only ever in the region that it's created in, and it only logs events for that region. An all-regions trail, on the other hand, can be thought of as a collection of trails in every AWS region, but it's managed as one logical trail. It also has the additional benefit that if AWS adds any new regions, the all-regions trail is automatically updated.

      A trail also has a specific configuration item which determines whether it only logs events for the region that it's in or whether it also logs global service events. Most services log events in the region where the event occurred. For example, if you create an EC2 instance in AP Southeast 2, it’s logged to that region. A trail would need to be either a one-region trail in that region or an all-regions trail to capture that event.

      A very small number of services log events globally to one region. For example, global services such as IAM, STS, or CloudFront are very globally-focused services and always log their events to US East 1, which is Northern Virginia. These types of events are called global service events, and a trail needs to have this enabled in order to log these events. This feature is normally enabled by default if you create a trail inside the user interface.

      AWS services are largely split up into regional services and global services. When these different types of services log to CloudTrail, they either log in the region that the event is generated in or they log to US East 1 if they are global services. So, when you're diagnosing problems or architecting solutions, if the logs you are trying to reach are generated by global services like IAM, STS, or CloudFront, these will be classified as global service events and that will need to be enabled on a trail.

      Otherwise, a trail will only log events for the isolated region that it’s created in. When you create a trail, it is one of two types: one-region or all-regions. A one-region trail is always isolated to that one region, and you would need to create one-region trails in every region if you wanted to do it manually. Alternatively, you could create an all-regions trail, which encompasses all of the regions in AWS and is automatically updated as AWS adds new regions.
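
      As a hedged sketch of what creating an all-regions trail looks like programmatically, the following uses boto3; the trail and bucket names are placeholders, and the S3 bucket must already exist with a bucket policy that lets CloudTrail write to it:

      ```python
      import boto3

      cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

      # Create a trail that covers every region and also captures global service events.
      cloudtrail.create_trail(
          Name="example-all-regions-trail",          # placeholder name
          S3BucketName="example-cloudtrail-bucket",  # placeholder bucket with a suitable policy
          IsMultiRegionTrail=True,
          IncludeGlobalServiceEvents=True,
          EnableLogFileValidation=True,
      )

      # Unlike the console, the API does not start logging automatically.
      cloudtrail.start_logging(Name="example-all-regions-trail")
      ```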

      Once you’ve created a trail, management events and data events are captured by the trail based on whether it's isolated to one region or set to all regions. An all-regions trail captures management events and, if enabled, data events. Data events are not enabled by default and must be explicitly switched on when creating or updating a trail. The trail will then listen to everything that's occurring in the account.
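
      Data events are switched on per trail using event selectors. A minimal sketch, assuming the trail above and a placeholder bucket:

      ```python
      import boto3

      cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

      # Keep logging management events and add S3 object-level data events for one bucket.
      cloudtrail.put_event_selectors(
          TrailName="example-all-regions-trail",
          EventSelectors=[
              {
                  "ReadWriteType": "All",
                  "IncludeManagementEvents": True,
                  "DataResources": [
                      {
                          "Type": "AWS::S3::Object",
                          # the trailing slash means every object in the bucket
                          "Values": ["arn:aws:s3:::example-data-bucket/"],
                      }
                  ],
              }
          ],
      )
      ```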

      Remember that the CloudTrail event history is limited to 90 days. However, when you create a trail, you can be much more flexible. A trail can store its events in an S3 bucket that you define, and the log files delivered to that bucket can be kept there indefinitely; you are only charged for the storage used in S3. These logs are stored as a set of compressed JSON log files, which consume minimal space. Being JSON formatted, they can be read by any tooling capable of reading standard JSON files, which is a great feature of CloudTrail.
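
      Because the delivered objects are just gzipped JSON, reading one back is straightforward. This sketch assumes placeholder bucket and key names; the key layout shown in the comment is the standard CloudTrail delivery prefix:

      ```python
      import gzip
      import json

      import boto3

      s3 = boto3.client("s3")

      # Keys look like AWSLogs/<account-id>/CloudTrail/<region>/<year>/<month>/<day>/<file>.json.gz
      obj = s3.get_object(
          Bucket="example-cloudtrail-bucket",
          Key="AWSLogs/111111111111/CloudTrail/us-east-1/2024/01/01/example.json.gz",
      )

      records = json.loads(gzip.decompress(obj["Body"].read()))["Records"]
      for record in records:
          print(record["eventTime"], record["eventSource"], record["eventName"])
      ```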

      Another option is that CloudTrail can be integrated with CloudWatch Logs, allowing data to be stored in that product. CloudTrail can take all the logging data it generates and, in addition to putting it into S3, it can also put it into CloudWatch Logs. Once it's in CloudWatch Logs, you can use that product to search through it or use a metric filter to take advantage of the data stored there. This makes it much more powerful and gives you access to many more features if you use CloudWatch Logs versus S3.

      One of the more recent additions to the CloudTrail product is the ability to create an organizational trail. If you create this trail from the management account of an organization, it can store all the information for all the accounts inside that organization. This provides a single management point for all API and account events across every account in the organization, which is super powerful and makes managing multi-account environments much easier.

      So, we need to talk through some important elements of CloudTrail point by point. CloudTrail is enabled by default on AWS accounts, but it’s only the 90-day event history that’s enabled by default. You don’t get any storage in S3 unless you configure a trail. Trails are how you can take the data that CloudTrail’s got access to and store it in better places, such as S3 and CloudWatch Logs.

      The default for trails is to store management events only, which includes management plane events like creating an instance, stopping an instance, terminating an instance, creating or deleting S3 buckets, and logins to the console. Anything interacting with AWS products and services from a management perspective is logged by default in CloudTrail. Data events need to be specifically enabled and come at an extra cost. I’ll discuss this in more detail in the demo lesson, as you need to be aware of the pricing of CloudTrail. Much of the service is free, but there are certain elements that do carry a cost, especially if you use it in production.

      Most AWS services log data to the same region that the service is in. There are a few specific services, such as IAM, STS, and CloudFront, which are classified as true global services and log their data as global service events to US East 1. A trail needs to have global service events enabled to capture that data.

      That’s critical and might come up as an exam question. What you will also definitely find coming up as an exam-style question is whether to use CloudTrail for real-time logging. This is one of the limitations of the product: it is not real-time. CloudTrail typically delivers log files within 15 minutes of the account activity occurring and generally publishes log files multiple times per hour. This means you can't rely on CloudTrail to provide a complete and exhaustive list of events right up to the point you're looking. Sometimes it takes a few minutes for the data to arrive in S3 or CloudWatch Logs. Keep this in mind if you face any exam questions about real-time logging: CloudTrail is not the product.

      Okay, so that's the end of the theory in this lesson. It's time for a demo. In the next lesson, we’ll be setting up an organizational trail within our AWS account structure. We’ll configure it to capture all the data for all our member accounts and our management account, storing this data in an S3 bucket and CloudWatch Logs within the management account. I can’t wait to get started. It’s a fun one and will prove very useful for both the exam and real-world usage.

      So go ahead, complete this video, and when you're ready, you can join me in the demo lesson.

    1. eLife assessment

      This is a useful study on sex differences in gene expression across organs of four mouse taxa, although there are some shortcomings in the data analyses and interpretations that should be better placed in the broader context of the current literature. Hence, the evidence in the current form is incomplete, with several overstated key conclusions.

    2. Reviewer #1 (Public Review):

      The authors describe a comprehensive analysis of sex-biased expression across multiple tissues and species of mouse. Their results are broadly consistent with previous work, and their methods are robust, as the large volume of work in this area has converged toward a standardized approach.

      I have a few quibbles with the findings, and the main novelty here is the rapid evolution of sex-biased expression over shorter evolutionary intervals than previously documented, although this is not statistically supported. The other main findings, detailed below, are somewhat overstated.

      (1) In the introduction, the authors conflate gametic sex, which is indeed largely binary (with small sperm, large eggs, no intermediate gametic form, and no overlap in size) with somatic sexual dimorphism, which can be bimodal (though sometimes is even more complicated), with a large variance in either sex and generally with a great deal of overlap between males and females. A good appraisal of this distinction is at https://doi.org/10.1093/icb/icad113. This distinction in gene expression has been recognized for at least 20 years, with observations that sex-biased expression in the soma is far less than in the gonad.

      For example, the authors frame their work with the following statement: "The different organs show a large individual variation in sex-biased gene expression, making it impossible to classify individuals in simple binary terms. Hence, the seemingly strong conservation of binary sex-states does not find an equivalent underpinning when one looks at the gene-expression makeup of the sexes"

      The authors use this conflation to set up a straw man argument, perhaps in part due to recent political discussions on this topic. They seem to be implying one of two things. a) That previous studies of sex-biased expression of the soma claim a binary classification. I know of no such claim, and many have clearly shown quite the opposite, particularly studies of intra-sexual variation, which are common - see https://doi.org/10.1093/molbev/msx293, https://doi.org/10.1371/journal.pgen.1003697, https://doi.org/10.1111/mec.14408, https://doi.org/10.1111/mec.13919, https://doi.org/10.1111/j.1558-5646.2010.01106.x for just a few examples. Or b) They are the first to observe this non-binary pattern for the soma, but again, many have observed this. For example, many have noted that reproductive or gonad transcriptome data cluster first by sex, but somatic tissue clusters first by species or tissue, then by sex (https://doi.org/10.1073/pnas.1501339112, https://doi.org/10.7554/eLife.67485).

      Figure 4 illustrates the conceptual difference between bimodal and binary sexual conceptions. This figure makes it clear that males and females have different means, but in all cases the distributions are bimodal.

      I would suggest that the authors heavily revise the paper with this more nuanced understanding of the literature and sex differences in their paper, and place their findings in the context of previous work.

      (2) The authors also claim that "sexual conflict is one of the major drivers of evolutionary divergence already at the early species divergence level." However, making the connection between sex-biased genes and sexual conflict remains fraught. Although it is tempting to use sex-biased gene expression (or any form of phenotypic dimorphism) as an indicator of sexual conflict, resolved or not, as many have pointed out, one needs measures of sex-specific selection, ideally fitness, to make this case (https://doi.org/10.1086/595841, 10.1101/cshperspect.a017632). In many cases, sexual dimorphism can arise in one sex only without conflict (e.g. 10.1098/rspb.2010.2220). As such, sex-biased genes alone are not sufficient to discriminate between ongoing and resolved conflict.

      (3) To make the case that sex-biased genes are under selection, the authors report alpha values in Figure 3B. Alpha value comparisons like this over large numbers of genes often have high variance. Are any of the values for male-, female-, and unbiased genes significantly different from one another? This is needed to make the claim of positive selection.

    3. Reviewer #2 (Public Review):

      The manuscript by Xie and colleagues presents transcriptomic experiments that measure gene expression in eight different tissues taken from adult female and male mice from four species. These data are used to make inferences regarding the evolution of sex-biased gene expression across these taxa. The experimental methods and data analysis are appropriate; however, most of the conclusions drawn in the manuscript have either been previously reported in the literature or are not fully supported by the data.

      There are two ways the manuscript could be modified to better strengthen the conclusions.

      First, some of the observed differences in gene expression have very little to no effect on other phenotypes, and are not relevant to medicine or fitness. Selectively neutral gene expression differences have been inferred in previous studies, and consistent with that work, sex-biased and between-species expression differences in this study may also be enriched for selectively neutral expression differences. This idea is supported by the analysis of expression variance, which indicates that genes that show sex-biased expression also tend to show more inter-individual variation. This perspective is also supported by the MK analysis of molecular evolution, which suggests that positive selection is more prevalent among genes that are sex-biased in both mus and dom, and genes that switch sex-biased expression are under less selection at the level of both protein-coding sequence and gene expression.

      As an aside, I was confused by (line 176): "implying that the enhanced positive selection pressure is triggered by their status of being sex-biased in either taxon." - don't the MK values suggest an excess of positive selection on genes that are sex-biased in both taxa?

      Without an estimate of the proportion of differentially expressed genes that might be relevant for broader physiological or organismal phenotypes, it is difficult to assess the accuracy and relevance of the manuscript's conclusions. One (crude) approach would be to analyze subsets of genes stratified by the magnitude of expression differences; while there is a weak relationship between expression differences and fitness effects, on average large gene expression differences are more likely to affect additional phenotypes than small expression differences. Another perspective would be to compare the within-species variance to the between-species variance to identify genes with an excess of the latter relative to the former (similar logic to an MK test of amino acid substitutions).

      Second, the analysis could be more informative if it distinguished between genes that are expressed across multiple tissues in both sexes but may show greater expression in one sex than the other, versus genes with specialized function expressed solely in (usually) reproductive tissues of one sex (e.g. ovary-specific genes). One approach to quantify this distinction would be metrics like those defined by [Yanai I, et al. 2005. Genome-wide midrange transcription profiles reveal expression-level relationships in human tissue specification. Bioinformatics 21:650-659.] These approaches can be used to separate out groups of genes by the extent to which they are expressed in both sexes versus genes that are primarily expressed in sex-specific tissue such as testes or ovaries. This more fine-grained analysis would also potentially inform the section describing the evolution/conservation of sex-biased expression: I expect there must be genes with conserved expression specifically in ovaries or testes (these are ancient animal structures!) but these may have been excluded by the requirement that genes be sex-biased and expressed in at least two organs.

      There are at least three examples of statements in the discussion that at the moment misinterpret the experimental results.

      The discussion frames the results in the context of sexual selection and sexually antagonistic selection, but these concepts are not synonymous. Sexual selection can shape phenotypes that are specific to one sex, causing no antagonism; and fitness differences between males and females resulting from sexually antagonistic variation in somatic phenotypes may not be acted on by sexual selection. Furthermore, the conditions promoting, and the consequences of, both kinds of selection can be different, so they should be treated separately for the purposes of this discussion.

      The discussion claims that "Our data show that sex-biased gene expression evolves extremely fast", but a comparison or expectation for the rate of evolution is not provided. Many other studies have used comparative transcriptomics to estimate rates of gene expression evolution between species, including mice; are the results here substantially and significantly different from those previous studies? Furthermore, the experimental design does not distinguish between those gene expression phenotypes that are fixed between species as compared to those that are polymorphic within one or more species, which prevents straightforward interpretation of differences in gene expression as interspecific differences.

      The conclusion that "Our results show that most of the genetic underpinnings of sex differences show no long-term evolutionary stability, which is in strong contrast to the perceived evolutionary stability of two sexes" - seems beyond the scope of this study. This manuscript does not address the genetic underpinnings of sex differences (this would involve eQTL or the like), rather it looks at sex differences in gene expression phenotypes. Simply addressing the question of phenotypic evolutionary stability would be more informative if genes expressed specifically in reproductive tissues were separated from somatic sex-biased genes to determine if they show similar patterns of expression evolution.

    4. Reviewer #3 (Public Review):

      This manuscript reports some interesting and important patterns. The results on sex-bias in different tissues and across four taxa would benefit from alternative (or additional) presentation styles. In my view, the most important results are with respect to alpha (the fraction of beneficial amino acid changes) in relation to sex-bias (though the authors have made this a somewhat minor point in this version).

      The part that the authors emphasize I don't find very interesting (i.e., the sexes have overlapping expression profiles in many nongonadal tissues), nor do I believe they have the appropriate data necessary to convincingly demonstrate this (which would require multiple measures from the same individual).

      This study reports several interesting patterns with respect to sex differences in gene expression across organs of four mouse taxa. An alternative presentation of the data would yield a clearer and more convincing case that the patterns the authors claim are legitimate.

      I recommend that the authors clarify what qualifies as "sex-bias".

    5. Author response:

      We appreciate the time of the reviewers and their detailed comments, which will help to improve the manuscript.

      We are sorry that at least one reviewer seems to have had the impression that we have conflated issues about gonadal and non-gonadal sex phenotypes. This referee suggests that we should use Sharpe et al. (2023) to develop our concepts. However, what is discussed in Sharpe et al. was already the guiding principle for our study (although we did not know this paper before). In our paper, we introduce the gonadal binary sex (which is self-evidently also the basis for creating the dataset in the first place, because we needed to separate males from females) and then go on to the question of (adult) sex phenotypes for the rest of the paper. The gonadal data are included only as a comparison for contrasting the patterns in the non-gonadal tissues.

      Our study presents the largest systematic dataset so far on the evolution of sex-biased gene expression. It is also the first that explores the patterns of individual variation in sex-biased gene expression, and the SBI is an entirely new procedure to directly visualize these variance patterns in an intuitive way (note that the relative position of the distributions along the X-axis is indeed not relevant). The results are actually quite nuanced (e.g. the rather dynamic changes seen in mouse kidney and liver comparisons) and certainly go beyond what would have been predictable based on the current literature.

      Also, we should like to point out that our study contradicts recent conclusions published in high profile journals, which had suggested that a substantial set of sex-biased genes has conserved functions between humans and mice and that mice can therefore be informative for gender-specific medicine studies. Our data suggest that only a very small set of genes are conserved in their sex-biased expression. These are epigenetic regulator genes, and it will therefore be interesting in the future to focus on their roles in generating the differences between sexual phenotypes in given species.

      We will be happy to use the referee comments to clarify all of these points in a revised version. But we do not think that our "evidence is incomplete" or that there are several "overstated key conclusions". We have used all canonical statistical analyses that are typically used in papers on sex-biased gene expression, as acknowledged by reviewers 1 and 2. The additional statistical analyses that are requested are not within the scope of such papers, but could be the subject of separate general studies, independent of the sex-bias analysis (e.g. the role of highly expressed genes versus lowly expressed genes, or the analysis of the fraction of neutrally evolving loci).

      Finally, it is unclear why the overall rating of the paper is at the lowest possible category ("useful study"), given that it adds a substantial amount of data and new insights into the exploration of the non-binary nature of sexual phenotypes.

    1. Welcome to this lesson, where I'm going to introduce the theory and architecture of CloudWatch Logs.

      I've already covered the metrics side of CloudWatch earlier in the course, and I'm covering the logs part now because you'll be using it when we cover CloudTrail. In the CloudTrail demo, we'll be setting up CloudTrail and using CloudWatch Logs as a destination for those logs. So, you'll need to understand it, and we'll be covering the architecture in this lesson. Let's jump in and get started.

      CloudWatch Logs is a public service. The endpoint to which applications connect is hosted in the AWS public zone. This means you can use the product from within AWS VPCs, from on-premises environments, and even from other cloud platforms, assuming that you have network connectivity as well as AWS permissions.

      The CloudWatch Logs product allows you to store, monitor, and access logging data. Logging data, at a very basic level, consists of a piece of data and a timestamp. The timestamp generally includes the year, month, day, hour, minute, second, and timezone. There can be more fields, but at a minimum, it's generally a timestamp and some data.

      CloudWatch Logs has built-in integrations with many AWS services, including EC2, VPC Flow Logs, Lambda, CloudTrail, Route 53, and many more. Any services that integrate with CloudWatch Logs can store data directly inside the product. Security for this is generally provided by using IAM roles or service roles.

      For anything outside AWS, such as logging custom application or OS logs on EC2, you can use the unified CloudWatch agent. I’ve mentioned this before and will be demoing it later in the EC2 section of the course. This is how anything outside of AWS products and services can log data into CloudWatch Logs. So, it’s either AWS service integrations or the unified CloudWatch agent. There is a third way, using development kits for AWS to implement logging into CloudWatch Logs directly into your application, but that tends to be covered in developer and DevOps AWS courses. For now, just remember either AWS service integrations or the unified CloudWatch agent.

      CloudWatch Logs is also capable of taking logging data and generating a metric from it, using something known as a metric filter. Imagine a situation where you have a Linux instance, and one of the operating system log files records any failed connection attempts via SSH. If this logging information was injected into CloudWatch Logs, a metric filter can scan those logs constantly. Anytime it sees a mention of a failed SSH connection, it can increment a metric within CloudWatch. You can then have alarms based on that metric, and I’ll be demoing that very thing later in the course.

      Let’s look at the architecture visually because I'll be showing you how this works in practice in the CloudTrail demo, which will be coming up later in the section. Architecturally, CloudWatch Logs looks like this: It’s a regional service. So, for this example, let’s assume we’re talking about us-east-1.

      The starting point is our logging sources, which can include AWS products and services, mobile or server-based applications, external compute services (virtual or physical servers), databases, or even external APIs. These sources inject data into CloudWatch Logs as log events.

      Log events consist of a timestamp and a message block. CloudWatch Logs treats this message as a raw block of data. It can be anything you want, but there are ways the data can be interpreted, with fields and columns defined. Log events are stored inside log streams, which are essentially a sequence of log events from the same source.

      For example, if you had a log file stored on multiple EC2 instances that you wanted to inject into CloudWatch Logs, each log stream would represent the log file for one instance. So, you’d have one log stream for instance one and one log stream for instance two. Each log stream is an ordered set of log events for a specific source.

      We also have log groups, which are containers for multiple log streams of the same type of logging. Continuing the example, we would have one log group containing everything for that log file. Inside this log group would be different log streams, each representing one source. Each log stream is a collection of log events. Every time an item was added to the log file on a single EC2 instance, there would be one log event inside one log stream for that instance.
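
      To make the group, stream, and event hierarchy concrete, here is a minimal boto3 sketch that creates one of each; the names are placeholders, and in practice the unified agent or an AWS service integration does this for you:

      ```python
      import time

      import boto3

      logs = boto3.client("logs", region_name="us-east-1")

      # Log group: the container that also carries retention and permission settings.
      logs.create_log_group(logGroupName="/example/app/auth-log")

      # Log stream: one ordered sequence of events from a single source, e.g. one EC2 instance.
      logs.create_log_stream(
          logGroupName="/example/app/auth-log",
          logStreamName="i-0123456789abcdef0",
      )

      # Log event: a timestamp (in milliseconds) plus a raw message block.
      logs.put_log_events(
          logGroupName="/example/app/auth-log",
          logStreamName="i-0123456789abcdef0",
          logEvents=[
              {"timestamp": int(time.time() * 1000), "message": "Failed SSH login from 203.0.113.10"}
          ],
      )
      ```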

      A log group also stores configuration settings, such as retention settings and permissions. When we define these settings on a log group, they apply to all log streams within that log group. It’s also where metric filters are defined. These filters constantly review any log events for any log streams in that log group, looking for certain patterns, such as an application error code or a failed SSH login. When detected, these metric filters increment a metric, and metrics can have associated alarms. These alarms can notify administrators or integrate with AWS or external systems to take action.
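
      A metric filter is defined on the log group. A sketch along the lines of the failed-SSH example, where the filter pattern, metric name, and namespace are all illustrative choices:

      ```python
      import boto3

      logs = boto3.client("logs", region_name="us-east-1")

      # Increment a custom metric every time a log event mentions a failed SSH login.
      logs.put_metric_filter(
          logGroupName="/example/app/auth-log",
          filterName="failed-ssh-logins",
          filterPattern='"Failed SSH login"',
          metricTransformations=[
              {
                  "metricName": "FailedSSHLogins",
                  "metricNamespace": "Example/SSH",
                  "metricValue": "1",
              }
          ],
      )
      ```

      A CloudWatch alarm could then be created against the Example/SSH FailedSSHLogins metric in the usual way, which is the notification path described above.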

      CloudWatch Logs is a powerful product. This is the high-level architecture, but don’t worry—you’ll get plenty of exposure to it throughout the course because many AWS products integrate with CloudWatch Logs and use it to store their logging data. We’ll be coming back to this product time and again as we progress through the course. CloudTrail uses CloudWatch Logs, Lambda uses CloudWatch Logs, and VPC Flow Logs use CloudWatch Logs. There are many examples of AWS products where we’ll be integrating them with CloudWatch Logs.

      I just wanted to introduce it at this early stage of the course. That’s everything I wanted to cover in this theory lesson. Thanks for watching. Go ahead, complete this video, and when you’re ready, join me in the next.

    1. Welcome back, and in this demo lesson, I want to give you some experience working with Service Control Policies (SCPs).

      At this point, you've created the AWS account structure which you'll be using for the remainder of the course. You've set up an AWS organization, with the general account that created it becoming the management account. Additionally, you've invited the production AWS account into the organization and created the development account within it.

      In this demo lesson, I want to show you how you can use SCPs to restrict what identities within an AWS account can do. This is a feature of AWS Organizations.

      Before we dive in, let's tidy up the AWS organization. Make sure you're logged into the general account, the management account of the organization, and then navigate to the organization's console. You can either type that into the 'Find Services' box or select it from 'Recently Used Services.'

      As discussed in previous lessons, AWS Organizations allows you to organize accounts with a hierarchical structure. Currently, there's only the root container of the organization. To create a hierarchical structure, we need to add some organizational units. We will create a development organizational unit and a production organizational unit.

      Select the root container at the top of the organizational structure. Click on "Actions" and then "Create New." For the production organizational unit, name it 'prod.' Scroll down and click on "Create Organizational Unit." Next, do the same for the development unit: select the root container, click on "Actions," and then "Create New." Under 'Name,' type 'dev,' scroll down, and click on "Create Organizational Unit."

      Now, we need to move our AWS accounts into these relevant organizational units. Currently, the Development, Production, and General accounts are all contained in the root container, which is the topmost point of our hierarchical structure.

      To move the accounts, select the Production AWS account, click on "Actions," and then "Move." In the dialogue that appears, select the Production Organizational Unit and click "Move." Repeat this process for the Development AWS account: select the Development AWS account, click "Actions," then "Move," and select the 'dev' OU before clicking "Move."

      Now, we've successfully moved the two AWS accounts into their respective organizational units. If you select each organizational unit in turn, you can see that 'prod' contains the production AWS account, and 'dev' contains the development AWS account. This simple hierarchical structure is now in place.
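
      The same structure can be built programmatically from the management account. A hedged boto3 sketch in which the member account IDs are placeholders:

      ```python
      import boto3

      # Must be called with credentials for the management account.
      org = boto3.client("organizations")

      root_id = org.list_roots()["Roots"][0]["Id"]

      # Create the two organizational units under the root container.
      prod_ou = org.create_organizational_unit(ParentId=root_id, Name="prod")
      dev_ou = org.create_organizational_unit(ParentId=root_id, Name="dev")

      # Move each member account out of the root container and into its OU.
      org.move_account(
          AccountId="222222222222",  # placeholder: production account ID
          SourceParentId=root_id,
          DestinationParentId=prod_ou["OrganizationalUnit"]["Id"],
      )
      org.move_account(
          AccountId="333333333333",  # placeholder: development account ID
          SourceParentId=root_id,
          DestinationParentId=dev_ou["OrganizationalUnit"]["Id"],
      )
      ```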

      To prepare for the demo part of this lesson where we look at SCPs, move back to the AWS console. Click on AWS, then the account dropdown, and switch roles into the production AWS account by selecting 'Prod' from 'Role History.'

      Once you're in the production account, create an S3 bucket. Type S3 into the 'Find Services' box or find it in 'Recently Used Services' and navigate to the S3 console. Click on "Create Bucket." For the bucket name, call it 'catpics' followed by a random number, since S3 bucket names must be globally unique (and lowercase). I’ll use 1, lots of 3s, and then 7. Ensure you select the US East 1 region for the bucket. Scroll down and click "Create Bucket."

      After creating the bucket, go inside it and upload some files. Click on "Add Files," then download the cat picture linked to this lesson to your local machine. Upload this cat picture to the S3 bucket by selecting it and clicking "Open," then "Upload" to complete the process.

      Once the upload finishes, you can view the picture of Samson. Click on it to see Samson looking pretty sleepy. This demonstrates that you can currently access the Samson.jpg object while operating within the production AWS account.

      The key point here is that you’ve assumed an IAM role. By switching roles into the production account, you’ve assumed the role called "organization account access role," which has the administrator access managed policy attached.

      Now, we’ll demonstrate how this can be restricted using SCPs. Move back to the main AWS console. Click on the account dropdown and switch back to the general AWS account. Navigate to AWS Organizations, then Policies. Currently, most options are disabled, including Service Control Policies, Tag Policies, AI Services Opt-out Policies, and Backup Policies.

      Click on Service Control Policies and then "Enable" to activate this functionality. This action adds the "Full AWS Access" policy to the entire organization, which imposes no restrictions, so all AWS accounts maintain full access to all AWS services.

      To create our own service control policy, download the file named DenyS3.json linked to this lesson and open it in a code editor. This SCP contains two statements. The first statement is an allow statement with an effect of allow, action as star (wildcard), and resource as star (wildcard). This replicates the full AWS access SCP applied by default. The second statement is a deny statement that denies any S3 actions on any AWS resource. This explicit deny overrides the explicit allow for S3 actions, resulting in access to all AWS services except S3.

      Copy the content of the DenyS3.json file into your clipboard. Move back to the AWS console, go to the policy section, and select Service Control Policies. Click "Create Policy," delete the existing JSON in the policy box, and paste the copied content. Name this policy "Allow all except S3" and create it.
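
      For reference, the same policy can also be created from the management account with boto3; the policy document below simply restates the two statements described above:

      ```python
      import json

      import boto3

      org = boto3.client("organizations")  # credentials for the management account

      scp_document = {
          "Version": "2012-10-17",
          "Statement": [
              # Statement 1: replicate the default FullAWSAccess behaviour.
              {"Effect": "Allow", "Action": "*", "Resource": "*"},
              # Statement 2: explicitly deny every S3 action; explicit deny always wins.
              {"Effect": "Deny", "Action": "s3:*", "Resource": "*"},
          ],
      }

      policy = org.create_policy(
          Name="Allow all except S3",
          Description="Allow everything except S3",
          Type="SERVICE_CONTROL_POLICY",
          Content=json.dumps(scp_document),
      )
      print(policy["Policy"]["PolicySummary"]["Id"])
      ```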

      Now, go to AWS Accounts on the left menu, select the prod OU, and click on the Policies tab. Attach the new policy "Allow all except S3" by clicking "Attach" in the applied policies box. We will also detach the full AWS access policy directly attached. Check the box next to full AWS access, click "Detach," and confirm by clicking "Detach Policy."
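
      Attaching the new policy to the prod OU and detaching the default one looks like this in boto3. The OU and policy IDs are placeholders; p-FullAWSAccess is the fixed ID of the AWS-managed default policy:

      ```python
      import boto3

      org = boto3.client("organizations")  # credentials for the management account

      prod_ou_id = "ou-exam-11111111"       # placeholder: the prod OU's ID
      deny_s3_policy_id = "p-examplepol"    # placeholder: the "Allow all except S3" policy ID

      # Attach the custom SCP first so the OU is never left without an attached policy,
      # then remove the default FullAWSAccess policy.
      org.attach_policy(PolicyId=deny_s3_policy_id, TargetId=prod_ou_id)
      org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=prod_ou_id)
      ```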

      Now, the only service control policy directly attached to production is "Allow all except S3," which allows access to all AWS products and services except S3.

      To verify, go back to the main AWS console and switch roles into the production AWS account. Go to the S3 console and you should receive a permissions error, indicating that you don't have access to list buckets. This is because the SCP attached to the production account explicitly denies S3 access. Access to other services remains unaffected, so you can still interact with EC2.

      If we switch back to the general account, reattach the full AWS access policy, and detach "Allow all except S3," the production account will regain access to S3. By following the same process, you’ll be able to access the S3 bucket and view the object once again.

      This illustrates how SCPs can be used to restrict access for identities within an AWS account, in this case, the production AWS account.

      To clean up, delete the bucket. Select the catpics bucket, click "Empty," type "permanently delete," and select "Empty." Once that's done, you can delete the bucket by selecting it, clicking "Delete," confirming the bucket name, and then clicking "Delete Bucket."

      You’ve now demonstrated full control over S3, evidenced by successfully deleting the bucket. This concludes the demo lesson. You’ve created and applied an SCP that restricts S3 access, observed its effects, and cleaned up. We’ll discuss more about boundaries and restrictions in future lessons. For now, complete this video, and I'll look forward to seeing you in the next lesson.

    1. Welcome back, and in this lesson, I'll be talking about service control policies, or SCPs. SCPs are a feature of AWS Organizations which can be used to restrict AWS accounts. They're an essential feature to understand if you are involved in the design and implementation of larger AWS platforms. We've got a lot to cover, so let's jump in and get started.

      At this point, this is what our AWS account setup looks like. We've created an organization for Animals4life, and inside it, we have the general account, which from now on I'll be referring to as the management account, and then two member accounts, so production, which we'll call prod, and development, which we'll be calling dev. All of these AWS accounts are within the root container of the organization. That's to say they aren't inside any organizational units. In the next demo lesson, we're going to be adding organizational units, one for production and one for development, and we'll be putting the member accounts inside their respective organizational units.

      Now, let's talk about service control policies. The concept of a service control policy is simple enough. It's a policy document, a JSON document, and these service control policies can be attached to the organization as a whole by attaching them to the root container, or they can be attached to one or more organizational units. Lastly, they can even be attached to individual AWS accounts. Service control policies inherit down the organization tree. This means if they're attached to the organization as a whole, so the root container of the organization, then they affect all of the accounts inside the organization. If they're attached to an organizational unit, then they impact all accounts directly inside that organizational unit, as well as all accounts within OUs inside that organizational unit. If you have nested organizational units, then by attaching them to one OU, they affect that OU and everything below it. If you attach service control policies to one or more accounts, then they just directly affect those accounts that they're attached to.

      Now, I mentioned in an earlier lesson that the management account of an organization is special. One of the reasons it's special is that even if the management account has service control policies attached, either directly via an organizational unit, or on the root container of the organization itself, the management account is never affected by service control policies. This can be both beneficial and it can be a limitation, but as a minimum, you need to be aware of it as a security practice. Because the management account can't be restricted using service control policies, I generally avoid using the management account for any AWS resources. It's the only AWS account within AWS Organizations which can't be restricted using service control policies. As a takeaway, just remember that the management account is special and it's unaffected by any service control policies, which are attached to that account either directly or indirectly.

      Now, service control policies are account permissions boundaries. What I mean by that is they limit what the AWS account can do, including the Account Root User within that account. I talked earlier in the course about how you can't restrict an Account Root User. And that is true. You can't directly restrict what the Account Root User of an AWS account can do. The Account Root User always has full permissions over that entire AWS account, but with a service control policy, you're actually restricting what the account itself can do, specifically any identities within that account. So you're indirectly restricting the Account Root User because you're reducing the allowed permissions on the account; you're also reducing what the effective permissions on the Account Root User are. This is a really fine detail to understand. You can never restrict the Account Root User. It will always have 100% access to the account, but if you restrict the account, then in effect, you're also restricting the Account Root User.

      Now, you might apply a service control policy to prevent any usage of that account outside a known region, for example, us-east-1. You might also apply a service control policy which only allows a certain size of EC2 instance to be used within the account. Service control policies are a really powerful feature for any larger, more complex AWS deployments. The critical thing to understand about service control policies is they don't grant any permissions. Service control policies are just a boundary. They define the limit of what is and isn't allowed within the account, but they don't grant permissions. You still need to give identities within that AWS account permissions to AWS resources, but any SCPs will limit the permissions that can be assigned to individual identities.
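
      As a hedged illustration of that region-lock idea, an SCP document along these lines denies anything requested outside us-east-1. In practice you would also carve out genuinely global services (IAM, STS, CloudFront and so on) from the deny, which this sketch omits for brevity:

      ```python
      # Illustrative SCP document only: deny any action requested outside us-east-1.
      # Real deployments normally exclude global service actions via NotAction.
      region_lock_scp = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Deny",
                  "Action": "*",
                  "Resource": "*",
                  "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
              }
          ],
      }
      ```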

      You can use service control policies in two ways. You can block by default and allow certain services, which is an allow list, or you can allow by default and block access to certain services, which is a deny list. The default is a deny list. When you enable SCPs on your organization, AWS applies a default policy called FullAWSAccess to the organization and all OUs within it. This means that in the default implementation, service control policies have no effect, since nothing is restricted. As a reminder, service control policies don't grant permissions, but when SCPs are enabled there is an implicit default deny, just like with IAM policies; if there were no initial allow, everything would be denied. So the default is this full access policy, which essentially means no restrictions, and it has the effect of making SCPs a deny list architecture: you add any restrictions that you want on AWS accounts within the organization. For example, you could add another policy, such as one called DenyS3, which adds a deny for the entire set of S3 API operations, effectively denying S3. Remember that SCPs don't actually grant any access rights; they establish which permissions can be granted in an account. The same priority rules apply as with IAM policies: explicit deny, then explicit allow, then the implicit default deny. Anything explicitly allowed in an SCP is a service which can have access granted to identities within that account, unless there's also an explicit deny in an SCP, in which case access to that service cannot be granted, because an explicit deny always wins. And in the absence of either, if we didn't have this FullAWSAccess policy in place, the implicit deny would block access to everything.

      The benefit of using deny lists is that the foundation is a wildcard allow of all actions on all resources, so as AWS adds new products and services to the platform, they are automatically covered by that allow. This keeps the admin overhead low: you simply add an explicit deny for any services you want to block. In certain situations, you might need to be more conscious about usage in your accounts, and that's where you'd use allow lists. Implementing allow lists is a two-part architecture. One part is to remove the FullAWSAccess policy, which leaves only the implicit default deny in place and active, and the other is to add any services which you want to allow into a new policy, in this case S3 and EC2. So in this architecture, we wouldn't have FullAWSAccess; we would be explicitly allowing S3 and EC2 access. No matter what permissions identities in this account are given, they would only ever be allowed to access S3 and EC2. This is more secure because you have to explicitly state which services can be allowed access for users in those accounts, but it's much easier to make a mistake and block access to services which you didn't intend to. It's also much more admin overhead, because you have to add services as your business requirements dictate; you can't simply have access to everything and deny the services you don't want. With this type of architecture, you have to explicitly add each and every service which you want identities within the account to be able to access. Generally, I would suggest using a deny list architecture because, simply put, it's much lower admin overhead.

      Before we go into a demo, I want to visually show you how SCPs affect permissions. This is visually how SCPs impact permissions within an AWS account. The orange circle on the left represents the services that identities in the account have been granted access to using identity policies. The red circle on the right represents the services an SCP allows access to. So the SCP states that the three services in the middle and the service on the right are allowed as far as the SCP is concerned, while the identity policies applied to identities within the account, the orange circle on the left, grant access to four different services: the three in the middle and the one on the left.

      Only permissions which are allowed within identity policies in the account and which are also allowed by a service control policy are actually active. The permission on the right has no effect: while it's allowed within the SCP, an SCP doesn't grant access to anything, it just controls what can and can't be allowed by identity policies within the account, and no identity policy grants access to that service. The permission on the left is allowed within an identity policy, but it's not effectively allowed because it's not allowed within the SCP. So only permissions which appear in both an identity policy and an SCP are actually allowed; that permission on the left has no effect because it's not within the service control policy, so it's denied.
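
      As a toy way to think about that overlap (this is just the set logic, not how AWS evaluates policies internally), the action names below are illustrative:

      ```python
      # Effective permissions are the intersection of what identity policies grant
      # and what the applicable SCPs allow.
      identity_policy_grants = {"s3:GetObject", "ec2:RunInstances", "dynamodb:GetItem", "sqs:SendMessage"}
      scp_allows = {"s3:GetObject", "ec2:RunInstances", "dynamodb:GetItem", "kms:Decrypt"}

      effective_permissions = identity_policy_grants & scp_allows
      print(effective_permissions)  # only actions present in both sets are actually usable
      ```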

      At an associate level, this is what you need to know for the exam. It's just simply understanding that your effective permissions for identities within an account are the overlap between any identity policies and any applicable SCPs. This is going to make more sense if you experience it with a demo, so this is what we're going to do next. Now that you've set up the AWS organization for the Animals4life business, it's time to put some of this into action. So I'm going to finish this lesson here and then in the next lesson, which is a demo, we're going to continue with the practical part of implementing SCPs. So go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back! In this demo lesson, you're going to create the AWS account structure which you'll use for the remainder of the course. At this point, you need to log in to the general AWS account. I’m currently logged in as the IAM admin user of my general AWS account, with the Northern Virginia region selected.

      You’ll need either two different web browsers or a single web browser like Firefox that supports different sessions because we’ll be logged into multiple AWS accounts at once. The first task is to create the AWS organization. Since I'm logged in to a standard AWS account that isn’t part of an AWS organization, it’s neither a management account nor a member account. We need to move to the AWS Organizations part of the console and create the organization.

      To start, go to "Find Services," type "Organizations," and click to move to the AWS Organizations console. Once there, click "Create Organization." This will begin the process of creating the AWS organization and convert the standard account into the management account of the organization. Click on "Create Organization" to complete the process. Now, the general account is the management account of the AWS organization.
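
      The console click is equivalent to a single API call made from the standard account. A minimal sketch, where FeatureSet="ALL" matches the full-featured organization the console creates:

      ```python
      import boto3

      org = boto3.client("organizations")  # credentials for the standard (soon-to-be management) account

      # Creates the organization with all features enabled (consolidated billing plus SCPs, etc.)
      # and turns the calling account into the management account.
      response = org.create_organization(FeatureSet="ALL")
      print(response["Organization"]["Id"])
      ```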

      You might see a message indicating that a verification email has been sent to the email address associated with the general AWS account. Click the link in that email to verify the address and continue using AWS Organizations. If you see this notification, verify the email before proceeding. If not, you can continue.

      Now, open a different web browser, or a separate browser session in a browser like Firefox, and log in to the production AWS account. Ensure this is a separate session; if unsure, use a different browser to maintain logins to both the management and production accounts. I’ll log in to the IAM admin user of the production AWS account.

      With the production AWS account logged in via a separate browser session, copy the account ID for the production AWS account from the account dropdown. Then, return to the browser session with the general account, which is now the management account of the organization. We’ll invite the production AWS account into this organization.

      Click on "Add Account," then "Invite Account." Enter either the email address used while signing up or the account ID of the production account. I’ll enter the account ID. If you’re inviting an account you administer, no notes are needed. However, if the account is administered by someone else, you may include a message. After entering the email or account ID, scroll down and click "Send Invitation."

      Depending on your AWS account, you might receive an error message about too many accounts within the organization. If so, log a support request to increase the number of allowed accounts. If no error message appears, the invite process has begun.

      Next, accept the invite from the production AWS account. Switch to the browser session where you're logged in to the production AWS account, move to the Organizations console, and click "Invitations" on the middle left. You should see the invitation sent to the production AWS account. Click "Accept" to complete the process of joining the organization. Now, the production account is a member of the AWS organization.

      To verify, return to the general account tab and refresh. You should now see two AWS accounts: the general and the production accounts. Next, I’ll demonstrate how to role switch into the production AWS account, now a member of the organization.

      When adding an account to an organization, you can either invite an existing account or create a new one within the organization. If creating a new account, a role is automatically created for role switching. If inviting an existing account, you need to manually add this role.

      To do this, switch to the browser or session where you're logged into the production AWS account. Click on the services search box, type IAM, and move to the IAM console to create IAM roles. Click on "Create Role," select "Another AWS Account," and enter the account ID of the general AWS account, which is now the management account.

      Copy the account ID of the general AWS account into the account ID box, then click "Next." Attach the "AdministratorAccess" policy to this role. On the next screen, name the role "OrganizationAccountAccessRole" with uppercase O, A, A, and R, and note that "Organization" uses the U.S. spelling with a Z. Click "Create Role."

      In the role details, select "Trust Relationships" to verify that the role trusts the account ID of your general AWS account, which allows identities within the general account to assume this role.
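
      What that console wizard builds can also be sketched with boto3, run with credentials for the production account; the management account ID is a placeholder:

      ```python
      import json

      import boto3

      iam = boto3.client("iam")  # credentials for the production account

      # Trust policy: allow identities in the general (management) account to assume this role.
      trust_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder management account ID
                  "Action": "sts:AssumeRole",
              }
          ],
      }

      iam.create_role(
          RoleName="OrganizationAccountAccessRole",
          AssumeRolePolicyDocument=json.dumps(trust_policy),
      )

      # Give the role full admin rights, matching what Organizations creates automatically.
      iam.attach_role_policy(
          RoleName="OrganizationAccountAccessRole",
          PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
      )
      ```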

      Next, switch back to the general AWS account. Copy the account ID for the production AWS account because we will switch into it using role switch. In the AWS console, click on the account dropdown and select "Switch Roles." Paste the production account ID into the account ID box, and enter the role name "OrganizationAccountAccessRole" with uppercase O, A, A, and R.

      For the display name, use "Prod" for production, and pick red as the color for easy identification. Click "Switch Role" to switch into the production AWS account. You’ll see the red color and "Prod" display name indicating a successful switch.

      To switch back to the general account, click on "Switch Back." In the role history section, you can see shortcuts for switching roles. Click "Prod" to switch back to the production AWS account using temporary credentials granted by the assumed role.
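
      Role switching in the console is essentially an assume-role call under the hood. A sketch with a placeholder production account ID:

      ```python
      import boto3

      sts = boto3.client("sts")  # credentials for the general (management) account

      # Assume the role in the production account and get temporary credentials.
      creds = sts.assume_role(
          RoleArn="arn:aws:iam::222222222222:role/OrganizationAccountAccessRole",  # placeholder
          RoleSessionName="prod-admin",
      )["Credentials"]

      # Use the temporary credentials to act inside the production account.
      prod_s3 = boto3.client(
          "s3",
          aws_access_key_id=creds["AccessKeyId"],
          aws_secret_access_key=creds["SecretAccessKey"],
          aws_session_token=creds["SessionToken"],
      )
      print(prod_s3.list_buckets()["Buckets"])
      ```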

      Now, let’s create the development AWS account within our organization. Close the browser window or tab with the production AWS account as it’s no longer needed. Return to the AWS Organizations console, click "Add Account," and then "Create Account." Name the account "Development," following the same naming structure used for general and production accounts.

      Provide a unique email address for the development AWS account. Use the same email structure you’ve used for previous accounts, such as "Adrian+TrainingAWSDevelopment" for consistency.

      In the box for the role name, use "OrganizationAccountAccessRole" with uppercase O, A, A, and R, and the U.S. spelling. Click "Create" to create the development account. If you encounter an error about too many accounts, you might need to request an increase in the account limit.

      The development account will be created within the organization, and this may take a few minutes. Refresh to see the new development account with its own account ID. Copy this account ID for the switch role dialogue.
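
      Creating an account inside the organization is an asynchronous API call. A sketch with a placeholder email address:

      ```python
      import time

      import boto3

      org = boto3.client("organizations")  # credentials for the management account

      # Kick off account creation; AWS also creates OrganizationAccountAccessRole inside it.
      status = org.create_account(
          Email="example+TrainingAWSDevelopment@example.com",  # placeholder unique email
          AccountName="Development",
          RoleName="OrganizationAccountAccessRole",
      )["CreateAccountStatus"]

      # Creation is asynchronous, so poll until it succeeds or fails.
      while status["State"] == "IN_PROGRESS":
          time.sleep(10)
          status = org.describe_create_account_status(
              CreateAccountRequestId=status["Id"]
          )["CreateAccountStatus"]

      print(status["State"], status.get("AccountId"))
      ```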

      Click on the account dropdown, select "Switch Roles," and enter the new development account ID. For the role name, use "OrganizationAccountAccessRole" and for the display name, use "Dev" for development with yellow as the color for distinction. Click "Switch Role" to switch into the development AWS account.

      In the AWS console, you’ll see the new development account. You can switch directly between the general, production, and development accounts using role switch shortcuts. AWS automatically created the "OrganizationAccountAccessRole" in the development account.

      In summary, you now have three AWS accounts: the general AWS account (management account), the production AWS account, and the development AWS account. This completes the account structure for the course. Complete this video, and I'll look forward to seeing you in the next lesson.

    1. Welcome to this lesson, where I'll be introducing AWS Organizations. AWS Organizations is a product that allows larger businesses to manage multiple AWS accounts in a cost-effective way with little to no management overhead.

      Organizations is a product that has evolved significantly over the past few years, and it's worthwhile to step through that evolution to understand all of its different features. We’ve got a lot to cover, so let's jump in and get started.

      Without AWS Organizations, many large businesses would face the challenge of managing numerous AWS accounts. In the example onscreen, there are four accounts, but I've worked with some larger enterprises with hundreds of accounts and have heard of even more. Without AWS Organizations, each of these accounts would have its own pool of IAM users as well as separate payment methods. Beyond 5 to 10 accounts, this setup becomes unwieldy very quickly.

      AWS Organizations is a simple product to understand. You start with a single AWS account, which I'll refer to as a standard AWS account from now on. A standard AWS account is an AWS account that is not part of an organization. Using this standard AWS account, you create an AWS Organization.

      It’s important to understand that the organization isn't created within this account; you're simply using the account to create the organization. This standard AWS account that you use to create the organization then becomes the Management Account for the organization. The Management Account used to be called the Master Account. If you hear either of these terms—Management Account or Master Account—just know that they mean the same thing.

      This is a key point to understand with regards to AWS Organizations because the Management Account is special for two reasons, which I’ll explain in this lesson. For now, I’ll add a crown to this account to indicate that it’s the Management Account and to help you distinguish it from other AWS accounts.

      Using this Management Account, you can invite other existing standard AWS accounts into the organization. Since these are existing accounts, they need to approve the invites to join the organization. Once they do, those Standard Accounts will become part of the AWS Organization.

      When standard AWS accounts join an AWS Organization, they change from being Standard Accounts to being Member Accounts of that organization. Organizations have one and only one Management or Master Account and then zero or more Member Accounts.

      You can create a structure of AWS accounts within an organization, which is useful if you have many accounts and need to group them by business units, functions, or even the development stage of an application. The structure within AWS Organizations is hierarchical, forming an inverted tree.

      At the top of this tree is the root container of the organization. This is just a container for AWS accounts at the top of the organizational structure. Don’t confuse this with the Account Root User, which is the admin user of an AWS account. The organizational root is just a container within an AWS Organization, which can contain AWS accounts, including Member Accounts or the Management Account.

      As well as containing accounts, the organizational root can also contain other containers, known as organizational units (OUs). These organizational units can contain AWS accounts, Member Accounts, or the Management Account, or they can contain other organizational units, allowing you to build a complex nested AWS account structure within Organizations.

      Again, please don’t confuse the organizational root with the AWS Account Root User. The AWS Account Root User is specific to each AWS account and provides full permissions over that account. The root of an AWS Organization is simply a container for AWS accounts and organizational units and is the top level of the hierarchical structure within AWS Organizations.

      One important feature of AWS Organizations is consolidated billing. With the example onscreen now, there are four AWS accounts, each with its own billing information. Once these accounts are added to an AWS Organization, the individual billing methods for the Member Accounts are removed. Instead, the Member Accounts pass their billing through to the Management Account of the organization.

      In the context of consolidated billing, you might see the term Payer Account. The Payer Account is the AWS account that contains the payment method for the organization. So, if you see Master Account, Management Account, or Payer Account, know that within AWS Organizations, they all refer to the same thing: the account used to create the organization and the account that contains the payment method for all accounts within the AWS Organization.

      Using consolidated billing within an AWS Organization means you receive a single monthly bill contained within the Management Account. This bill covers the Management Account and all Member Accounts of the organization. One bill contains all the billable usage for all accounts within the AWS Organization, removing a significant amount of financial admin overhead for larger businesses. This alone would be worth creating an organization for most larger enterprises.

      But it gets better. With AWS, certain services become cheaper the more you use them, and for certain services, you can pay in advance for cheaper rates. When using Organizations, these benefits are pooled, allowing the organization to benefit as a whole from the spending of each AWS account within it.

      AWS Organizations also features a service called Service Control Policies (SCPs), which allows you to restrict what AWS accounts within the organization can do. These are important, and I’ll cover them in their own dedicated lesson, which is coming up soon. I wanted to mention them now as a feature of AWS Organizations.
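
      To give you an early feel for what's coming, here's a minimal sketch of an SCP. This is a generic illustration rather than anything we'll build in this course; SCPs use the same JSON format as IAM policies, and a deny statement like this one could, for example, prevent Member Accounts from leaving the organization:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "DenyLeavingTheOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*"
          }
        ]
      }
      ```

      Attached to an organizational unit or account, a statement like this restricts what identities in the affected Member Accounts can do, regardless of the IAM permissions they hold. The dedicated SCP lesson covers the real detail.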

      Before we go through a demo where we'll create an AWS Organization and set up the final account structure for this course, I want to cover two other concepts. You can invite existing accounts into an organization, but you can also create new accounts directly within it. All you need is a valid, unique email address for the new account, and AWS will handle the rest. Creating accounts directly within the organization avoids the invite process required for existing accounts.

      Using an AWS Organization changes what is best practice in terms of user logins and permissions. With Organizations, you don’t need to have IAM Users inside every single AWS account. Instead, IAM roles can be used to allow IAM Users to access other AWS accounts. We’ll implement this in the following demo lesson. Best practice is to have a single account for logging in, which I’ve shown in this diagram as the Management Account of the organization. Larger enterprises might keep the Management Account clean and have a separate account dedicated to handling logins.

      Both approaches are fine, but be aware that the architectural pattern is to have a single AWS account that contains all identities for logging in. Larger enterprises might also have their own existing identity system and may use Identity Federation to access this single identity account. You can either use internal AWS identities with IAM or configure AWS to allow Identity Federation so that your on-premises identities can access this designated login account.

      From there, we can use this account with these identities and utilize a feature called role switching. Role switching allows users to switch roles from this account into other Member Accounts of the organization. This process assumes roles in these other AWS accounts. It can be done from the console UI, hiding much of the technical complexity, but it’s important to understand how it works. Essentially, you either log in directly to this login account using IAM identities or use Identity Federation to gain access to it, and then role switch into other accounts within the organization.
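
      As a rough sketch of what makes role switching possible, the identities in the login account need an ordinary permissions policy allowing sts:AssumeRole against a role in each Member Account. The account ID below is a placeholder, and the role name assumes the default OrganizationAccountAccessRole that AWS Organizations creates in accounts created within the organization; your setup may use a different role name:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowSwitchingIntoMemberAccount",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/OrganizationAccountAccessRole"
          }
        ]
      }
      ```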

      I’ll discuss this in-depth as we progress through the course. The next lesson is a demo where you’ll implement this yourself and create the final AWS account structure for the remainder of the course.

      Okay, so at this point, it's time for a demo. As I mentioned, you'll be creating the account structure you'll use for the rest of the course. At the start, I demoed creating AWS accounts, including a general AWS account and a production AWS account. In the next lesson, I’ll walk you through creating an AWS Organization using this general account, which will become the Management Account for the AWS Organization. Then, you'll invite the existing production account into the organization, making it a Member Account. Finally, you'll create a new account within the organization, which will be the Development Account.

      I’m excited for this, and it’s going to be both fun and useful for the exam. So, go ahead and finish this video, and when you're ready, I look forward to you joining me in the next lesson, which will be a demo.

    1. Welcome back.

      In this lesson, I want to continue immediately from the last one by discussing when and where you might use IAM roles. By talking through some good scenarios for using roles, I want to make sure that you're comfortable identifying the situations where you would choose to use an IAM role and where you wouldn't, because that's essential for real-world AWS usage and for answering exam questions correctly.

      So let's get started.

      One of the most common uses of roles within the same AWS account is for AWS services themselves. AWS services operate on your behalf and need access rights to perform certain actions. An example of this is AWS Lambda. Now, I know I haven't covered Lambda yet, but it's a function-as-a-service product. What this means is that you give Lambda some code and create a Lambda function. This function, when it runs, might do things like start and stop EC2 instances, perform backups, or run real-time data processing. What it does exactly isn't all that relevant for this lesson. The key thing, though, is that a Lambda function, as with most AWS things, has no permissions by default. A Lambda function is not an AWS identity. It's a component of a service, and so it needs some way of getting permissions to do things when it runs. Running a Lambda function is known as a function invocation or a function execution in Lambda terminology.

      Anything that's not an AWS identity, such as an application or a script running on a piece of compute hardware somewhere, would normally need to be given permissions on AWS using access keys. Rather than hard-coding access keys into your Lambda function, there's a better way. To provide these permissions, we can create an IAM role known as a Lambda execution role. This execution role has a trust policy which trusts the Lambda service, meaning Lambda is allowed to assume that role whenever a function is executed. The role also has a permissions policy which grants access to AWS products and services.
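
      As a hedged illustration, the trust policy on a Lambda execution role generally looks something like this, trusting the Lambda service principal to assume the role:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      ```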

      When the function runs, it uses the sts:AssumeRole operation, and the Security Token Service (STS) generates temporary security credentials. These temporary credentials are used by the runtime environment in which the Lambda function runs to access AWS resources, based on the permissions granted by the role's permissions policy. The code runs in a runtime environment, and it's the runtime environment that assumes the role. The runtime environment gets these temporary security credentials, and the whole environment that the code runs inside can then use those credentials to access AWS resources.

      So why would you use a role for this? What makes this scenario perfect for using a role? Well, if we didn't use a role, you would need to hard-code permissions into the Lambda function by explicitly providing access keys for that function to use. Where possible, you should avoid doing that because, A, it's a security risk, and B, it causes problems if you ever need to change or rotate those access keys. It's always better for AWS products and services, where possible, to use a role, because when a role is assumed, it provides a temporary set of credentials with enough time to complete a task, and then these are discarded.

      For a given Lambda function, you might have zero copies running at once, one copy, 50 copies, a hundred copies, or even more. Because this number is unknown and can't be determined in advance, remember the rule from the previous lesson: if you don't know the number of principals, or if there are multiple or an uncertain number of them, a role is likely the ideal identity to use. In this case, creating a role and allowing Lambda to assume it to obtain temporary credentials is the ideal way of providing these permissions. Whenever an AWS service does something on your behalf, a role is the preferred option, because you don't need to provide any static credentials.

      Okay, so let's move on to the next scenario.

      Another situation where roles are useful is emergency or out-of-the-usual situations. Here’s a familiar scenario that you might find in a workplace. This is Wayne, and Wayne works in a business's service desk team. This team is given read-only access to a customer's AWS account so that they can keep an eye on performance. The idea is that anything more risky than this read-only level of access is handled by a more senior technical team. We don't want to give Wayne's team long-term permissions to do anything more destructive than this read-only access, but there are always going to be situations which occur when we least want them, normally 3:00 a.m. on a Sunday morning, when a customer might call with an urgent issue where they need Wayne's help to maybe stop or start an instance, or maybe even terminate an EC2 instance and recreate it.

      So 99% of the time, Wayne and his team are happy with this read-only access, but there are situations when he needs more. This is a break-glass style situation, named after the physical-world practice of keeping a key for something behind glass. It might be a key for a room that a certain team doesn't normally have access to, or maybe for a safe or a filing cabinet. Whatever it is, the glass provides a barrier, meaning that when people break it, they really mean to break it. It's a confirmation step: if you break a piece of glass to get a key, there needs to be an intention behind it. Anyone can break the glass and retrieve the key, but having the glass means the action only happens when it's really needed. At other times, whatever the key is for remains locked. And you can also tell when it's been used and when it hasn't.

      A role can perform the same thing inside an AWS account. Wayne can assume an emergency role when absolutely required. When he does, he'll gain additional permissions based on the role's permissions policy. For a short time, Wayne will, in effect, become the role. This access will be logged and Wayne will know to only use the role under exceptional circumstances. Wayne’s normal permissions can remain at read-only, which protects him and the customer, but he can obtain more if required when it’s really needed. So that’s another situation where a role might be a great solution.
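
      As a concrete illustration of how such an emergency role might be locked down, here's a sketch of a trust policy for it. The account ID and user name are placeholders, and the MFA condition is just one possible way of adding a confirmation step; it isn't something the scenario itself specifies:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:user/wayne" },
            "Action": "sts:AssumeRole",
            "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
          }
        ]
      }
      ```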

      Another scenario when roles come in handy is when you're adding AWS into an existing corporate environment. You might have an existing physical network and an existing provider of identities, known as an identity provider, that your staff use to log into various systems. For the sake of this example, let’s just say that it's Microsoft Active Directory. In this scenario, you might want to offer your staff single sign-on, known as SSO, allowing them to use their existing logins to access AWS. Or you might have upwards of 5,000 staff identities. Remember, there’s the 5,000 IAM user limit. So for a corporation with more than 5,000 staff, you can’t offer each of them an IAM user. That is beyond the capabilities of IAM.

      Roles are often used when you want to reuse your existing identities for use within AWS. Why? Because external accounts can’t be used directly. You can’t access an S3 bucket directly using an Active Directory account. Remember this fact. External accounts or external identities cannot be used directly to access AWS resources. You can’t directly use Facebook, Twitter, or Google identities to interact with AWS. There is a separate process which allows you to use these external identities, which I’ll be talking about later in the course.

      Architecturally, what happens is you allow an IAM role inside your AWS account to be assumed by one of the external identities, which is in Active Directory in this case. When the role is assumed, temporary credentials are generated and these are used to access the resources. There are ways that this is hidden behind the console UI so that it appears seamless, but that's what happens behind the scenes. I'll be covering this in much more detail later in the course when I talk about identity federation, but I wanted to introduce it here because it is one of the major use cases for IAM roles.

      The reason roles are so important when an existing identity provider such as Active Directory is involved is, again, the 5,000 IAM user limit per account. If your business has more than 5,000 staff identities, you can't simply create an IAM user for each of them, even if you wanted to. 5,000 is a hard limit, and it can't be changed. And even if you could create more than 5,000 IAM users, would you actually want to manage that many extra sets of credentials? Using a role in this way, so giving permissions to an external identity provider and allowing external identities to assume this role, is called ID Federation. It means you have a small number of roles to manage, and external identities can use these roles to access your AWS resources.

      Another common situation where you might use roles is if you're designing the architecture for a popular mobile application. Maybe it's a ride-sharing application which has millions of users. The application needs to store and retrieve data from a database product in AWS, such as DynamoDB. Now, I've already explained two very important but related concepts on the previous screen. Firstly, that when you interact with AWS resources, you need to use an AWS identity. And secondly, that there’s this 5,000 IAM user limit per account. So if you were designing an application with this many users which needs access to AWS resources, and you could only use IAM users, you'd have a problem because of this 5,000 user limit. It’s a hard limit and it can’t be raised.

      Now, this is a problem which can be fixed with a process called Web Identity Federation, which uses IAM roles. Most mobile applications that you’ve used, you might have noticed they allow you to sign in using a web identity. This might be Twitter, Facebook, Google, and potentially many others. If we utilize this architecture for our mobile application, we can trust these web identities and allow them to assume an IAM role, based on that role’s trust policy. They can then assume that role, gain access to temporary security credentials, and use those credentials to access AWS resources, such as DynamoDB. This is a form of Web Identity Federation, and I'll be covering it in much more detail later in the course.
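
      To make this concrete, here's a minimal sketch of what the trust policy on such a role can look like when it trusts a web identity provider. Facebook is used purely as an example provider, and real implementations usually add conditions (for example, matching the application's ID) which are omitted here:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "Federated": "graph.facebook.com" },
            "Action": "sts:AssumeRoleWithWebIdentity"
          }
        ]
      }
      ```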

      The use of roles in this situation has many advantages. First, there are no AWS credentials stored in the application, which makes it a much more preferred option from a security point of view. If an application is exploited for whatever reason, there’s no chance of credentials being leaked, and it uses an IAM role which you can directly control from your AWS account. Secondly, it makes use of existing accounts that your customers probably already have, so they don't need yet another account to access your service. And lastly, it can scale to hundreds of millions of users and beyond. It means you don’t need to worry about the 5,000 user IAM limit. This is really important for the exam. There are very often questions on how you can architect solutions which will work for mobile applications. Using ID Federation, so using IAM roles, is how you can accomplish that. And again, I'll be providing much more information on ID Federation later in the course.

      Now, one scenario I want to cover before we finish up this lesson is cross-account access. In an upcoming lesson, I’ll be introducing AWS Organizations and you will get to see this type of usage in practice. It’s actually how we work in a multi-account environment. Picture the scenario that's on screen now: two AWS accounts, yours and a partner account. Let’s say your partner organization offers an application which processes scientific data and they want you to store any data inside an S3 bucket that’s in their account. Your account has thousands of identities, and the partner IT team doesn’t want to create IAM users in their account for all of your staff. In this situation, the best approach is to use a role in the partner account. Your users can assume that role, get temporary security credentials, and use those to upload objects. Because the IAM role in the partner account is an identity in that account, using that role means that any objects that you upload to that bucket are owned by the partner account. So it’s a very simple way of handling permissions when operating between accounts.
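
      A hedged sketch of the trust policy on that role in the partner account might look like the following, where 111122223333 is a placeholder for your AWS account ID. Trusting the account root in this way allows identities in your account, subject to their own permissions, to assume the role:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      ```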

      Roles can be used cross-account to give access to individual resources like S3 in the onscreen example, or you can use roles to give access to a whole account. You’ll see this in the upcoming AWS Organization demo lesson. In that lesson, we’re going to configure a role in each of the different AWS accounts that we’ll be using for this course so that it can be assumed from the general account. It means you won’t need to log in to all of these different AWS accounts. It makes multi-account management really simple.

      I hope by this point you start to get a feel for when roles are used. Even if you’re a little vague, you will learn more as you go through the course. For now, just a basic understanding is enough. Roles are difficult to understand at first, so don’t worry if you’re still a little confused at this point. I promise you, as we go through the course and you get more experience, it will become second nature.

      So at this point, that’s everything I wanted to cover. Thanks for watching. Go ahead and complete this video, and when you're ready, join me in the next lesson.

    1. Welcome back.

      Over the next two lessons, I'll be covering a topic which is usually one of the most difficult identity-related topics in AWS to understand, and that's IAM roles. In this lesson, I'll step through how roles work, their architecture, and how you technically use a role. In the following lesson, I'll compare roles to IAM users and go into a little bit more detail on when you generally use a role, so some good scenarios which fit using an IAM role. My recommendation is that you watch both these lessons back to back in order to fully understand IAM roles.

      So let's get started.

      A role is one type of identity which exists inside an AWS account. The other type, which we've already covered, is the IAM user. Remember the term "principal" that I introduced in the previous few lessons? This is a physical person, application, device, or process which wants to authenticate with AWS. We defined authentication as proving to AWS that you are who you say you are. If you authenticate, and if you are authorized, you can then access one or more resources.

      I also previously mentioned that an IAM user is generally designed for situations where a single principal uses that IAM user. I’ve talked about the way that I decide if something should use an IAM user: if I can imagine a single thing—one person or one application—who uses an identity, then generally under most circumstances, I'd select to use an IAM user.

      IAM roles are also identities, but they're used very differently from IAM users. A role is generally best suited to be used by an unknown number of principals, or by multiple principals, not just one. This might be multiple IAM users inside the same AWS account, or it could be humans, applications, or services inside or outside of your AWS account who make use of that role. If you can't identify the number of principals which will use an identity, then it could be a candidate for an IAM role. Or if you have more than 5,000 principals, because of the number limit for IAM users, it could also be a candidate for an IAM role.

      Roles are also generally used on a temporary basis. Something becomes that role for a short period of time and then stops. The role isn't something that represents you. A role is something which represents a level of access inside an AWS account. It's a thing that can be used, short term, by other identities. These identities assume the role for a short time, they become that role, they use the permissions that that role has, and then they stop being that role. It’s not like an IAM user, where you log in and it’s a long-term representation of you. With a role, you essentially borrow the permissions for a short period of time.

      I want to make a point of stressing that distinction. If you're an external identity—like a mobile application, maybe—and you assume a role inside my AWS account, then you become that role and you gain access to any access rights that that role has for a short time. You essentially become an identity in my account for a short period of time.

      Now, this is the point where most people get a bit confused, and I was no different when I first learned about roles. What's the difference between logging in as an IAM user and assuming a role? In both cases, you get the access rights that the identity has.

      Before we get to the end of this pair of lessons, so this one and the next, I think it's gonna make a little bit more sense, and definitely, as you go through the course and get some practical exposure to roles, I know it's gonna become second nature.

      IAM users can have identity permissions policies attached to them, either inline JSON or via attached managed policies. We know now that these control what permissions the identity gets inside AWS. So whether these policies are inline or managed, they're properly referred to as permissions policies—policies which grant, so allow or deny, permissions to whatever they’re associated with.

      IAM roles have two types of policies which can be attached: the trust policy and the permissions policy. The trust policy controls which identities can assume that role. With the onscreen example, identity A is allowed to assume the role because identity A is allowed in the trust policy. Identity B is denied because that identity is not specified as being allowed to assume the role in the trust policy.

      The trust policy can reference different things. It can reference identities in the same account, so other IAM users, other roles, and even AWS services such as EC2. A trust policy can also reference identities in other AWS accounts. As you'll learn later in the course, it can even allow anonymous usage of that role and other types of identities, such as Facebook, Twitter, and Google.

      If a role gets assumed by something which is allowed to assume it, then AWS generates temporary security credentials and these are made available to the identity which assumed the role. Temporary credentials are very much like access keys, which I covered earlier in the course, but instead of being long-term, they're time-limited. They only work for a certain period of time before they expire. Once they expire, the identity will need to renew them by reassuming the role, and at that point, new credentials are generated and given again to the identity which assumed the role.

      These temporary credentials will be able to access whatever AWS resources are specified within the permissions policy. Every time the temporary credentials are used, the access is checked against this permissions policy. If you change the permissions policy, the permissions of those temporary credentials also change.

      Roles are real identities and, just like IAM users, roles can be referenced within resource policies. So if a role can access an S3 bucket because a resource policy allows it or because the role permissions policy allows it, then anything which successfully assumes the role can also access that resource.

      You’ll get a chance to use roles later in this section when we talk about AWS Organizations. We’re going to take all the AWS accounts that we’ve created so far and join them into a single organization, which is AWS’s multi-account management product. Roles are used within AWS Organizations to allow us to log in to one account in the organization and access different accounts without having to log in again. They become really useful when managing a large number of accounts.

      When you assume a role, temporary credentials are generated by an AWS service called STS, the Security Token Service. The operation that's used to assume the role and get the credentials is sts:AssumeRole.

      In this lesson, I focused on the technical aspect of roles—mainly how they work. I’ve talked about the trust policy, the permissions policy, and how, when you assume a role, you get temporary security credentials. In the next lesson, I want to step through some example scenarios of where roles are used, and I hope by the end of that, you’re gonna be clearer on when you should and shouldn’t use roles.

      So go ahead, finish up this video, and when you’re ready, you can join me in the next lesson.

    1. Welcome back and welcome to this demo of the functionality provided by IAM Groups.

      What we're going to do in this demo is use the same architecture that we had in the IAM users demo (the Sally user and those two S3 buckets), but we’re going to migrate the permissions that the Sally user has from the user to a group that Sally is a member of.

      Before we get started, just make sure that you are logged in as the IAM admin user of the general AWS account. As always, you’ll need to have the Northern Virginia region selected.

      Attached to this video is a demo files link that will download all of the files you’re going to use throughout the demo. To save some time, go ahead and click on that link and start the file downloading. Once it’s finished, go ahead and extract it; it will create a folder containing all of the files you’ll need as you move through the demo.

      You should have deleted all of the infrastructure that you used in the previous demo lesson. So at this point, we need to go ahead and recreate it. To do that, attached to this lesson is a one-click deployment link. So go ahead and click that link. Everything is pre-populated, so you need to make sure that you put in a suitable password that doesn’t breach any password policy on your account. I’ve included a suitable default password with some substitutions, so that should be okay for all common password policies.

      Scroll down to the bottom, click on the capabilities checkbox, and then create the stack. That’ll take a few moments to create, so I’m going to pause the video and resume it once that stack creation has completed.

      Okay, so that’s created now. Click on Services and open the S3 console in a new tab. This can be a normal tab. Go to the Cat Pics bucket, click Upload, add file, locate the demo files folder that you downloaded and extracted earlier. Inside that folder should be a folder called Cat Pics. Go in there and then select merlin.jpg. Click on Open and Upload. Wait for that to finish.

      Once it’s finished, go back to the console, go to Animal Pics, click Upload again, add files. This time, inside the Animal Pics folder, upload thaw.jpg. Click Upload. Once that’s done, go back to CloudFormation, click on Resources, and click on the Sally user. Inside the Sally user, click on Add Permissions, Attach Policies Directly, select the "Allow all S3 except cats" policy, click on Next, and then Add Permissions.

      So that brings us to the point where we were in the IAM users demo lesson. That’s the infrastructure set back up in exactly the same way as we left the IAM users demo. Now we can click on Dashboard. You’ll need to copy the IAM users sign-in link for the general account. Copy that into your clipboard.

      You’re going to need a separate browser, ideally, a fully separate browser. Alternatively, you can use a private browsing tab in your current browser, but it’s just easier to understand probably for you at this point in your learning if you have a separate browser window. I’m going to use an isolated tab because it’s easier for me to show you.

      You’ll need to paste in this IAM URL because now we’re going to sign into this account using the Sally user. Go back to CloudFormation, click on Outputs, and you’ll need the Sally username. Copy that into your clipboard. Go back to this separate browser window and paste that in. Then, back to CloudFormation, go to the Parameters tab and get the password for the Sally user. Enter the password that you chose for Sally when you created the stack.

      Then move across to the S3 console and just verify that the Sally user has access to both of these buckets. The easiest way of doing that is to open both of these animal pictures. We’ll start with Thor. Thor’s a big doggo, so it might take some time for him to load in. There we go, he’s loaded in. And the Cat Pics bucket. We get access denied because remember, Sally doesn’t have access to the Cat Pics bucket. That’s as intended.

      Now we’ll go back to our other browser window—the one where we logged into the general account as the IAM admin user. This is where we’re going to make the modifications to the permissions. We’re going to change the permissions over to using a group rather than directly on the Sally user.

      Click on the Resources tab first and select Sally to move across to the Sally user. Note how Sally currently has this managed policy directly attached to her user. Step one is to remove that. So remove this managed policy from Sally. Detach it. This now means that Sally has no permissions on S3. If we go back to the separate browser window where we’ve got Sally logged in and then hit refresh, we see she doesn’t have any permissions now on S3.

      Now back to the other browser, back to the one where we logged in as IAM admin, click on User Groups. We’re going to create a Developers group. Click on Create New Group and call it Developers. That’s the group name. Then, down at the bottom here, this is where we can attach a managed policy to this group. We’re going to attach the same managed policy that Sally had previously directly on her user—Allow all S3 except cats.

      Type "allow" into the filter box and press Enter. Then check the box to select this managed policy. We could also directly at this stage add users to this group, but we’re not going to do that. We’re going to do that as a separate process. So click on 'Create Group'.

      So that’s the Developers group created. Notice how there are not that many steps to create a group, simply because it doesn’t offer that much in the way of functionality. Open up the group. The only options you see here are 'User Membership' and any attached permissions. Now, as with a user, you can attach inline policies or managed policies, and we’ve got the managed policy.

      What we’re going to do next is click on Users and then Add Users to Group. We’re going to select the Sally IAM user and click on Add User. Now our IAM user Sally is a member of the Developers group, and the Developers group has this attached managed policy that allows them to access everything on S3 except the Cat Pics bucket.

      Now if I move back to my other browser window where I’ve got the Sally user logged in and then refresh, now that the Sally user has been added to that group, we’ve got permissions again over S3. If I try to access the Cat Pics bucket, I won’t be able to because that managed policy that the Developers team has doesn’t include access for this. But if I open the Animal Pics bucket and open Thor again—Thor’s a big doggo, so it’ll take a couple of seconds—it will load in that picture absolutely fine.

      So there we go, there’s Thor. That’s pretty much everything I wanted to demonstrate in this lesson. It’s been a nice, quick demo lesson. All we’ve done is create a new group called Developers, added Sally to this Developers group, removed the managed policy giving access to S3 from Sally directly, and added it to the Developers group that she’s now a member of. Note that no matter whether the policy is attached to Sally directly or attached to a group that Sally is a member of, she still gets those permissions.

      That’s everything I wanted to cover in this demo lesson. So before we finish up, let’s just tidy up our account. Go to Developers and then detach this managed policy from the Developers group. Detach it, then go to Groups and delete the Developers group because it wasn’t created as part of the CloudFormation template.

      Then, as the IAM admin user, open up the S3 console. We need to empty both of these buckets. Select Cat Pics, click on Empty. You’ll need to type or copy and paste 'Permanently Delete' into that box and confirm the deletion. Click Exit. Then select the Animal Pics bucket and do the same process. Copy and paste 'Permanently Delete' and confirm by clicking on Empty and then Exit.

      Now that we’ve done that, we should have no problems opening up CloudFormation, selecting the IAM stack, and then hitting Delete. Note if you do have any errors deleting this stack, just go into the stack, select Events, and see what the status reason is for any of those deletion problems. It should be fairly obvious if it can’t delete the stack because it can’t delete one or more resources, and it will give you the reason why.

      That said, assuming the stack deletion worked successfully, we’ve now cleaned up our account. That’s everything I wanted to cover in this demo lesson. Go ahead, complete this video, and when you’re ready, I’ll see you in the next lesson.

    1. Welcome back.

      In this lesson, I want to briefly cover IAM groups, so let's get started.

      IAM groups, simply put, are containers for IAM users. They exist to make organizing large sets of IAM users easier. You can't log in to IAM groups, and IAM groups have no credentials of their own. The exam might try to trick you on this one, so it's definitely important that you remember you cannot log into a group. If a question or answer suggests logging into a group, it's just simply wrong. IAM groups have no credentials, and you cannot log into them. So they're used solely for organizing IAM users to make management of IAM users easier.

      So let's look at a visual example. We've got an AWS account, and inside it we've got two groups: Developers and QA. In the Developers group, we've got Sally and Mike. In the QA group, we've got Nathalie and Sally. Now, the Sally user—so the Sally in Developers and the Sally in the QA group—that's the same IAM user. An IAM user can be a member of multiple IAM groups. So that's important to remember for the exam.

      Groups give us two main benefits. First, they allow effective administration-style management of users. We can make groups that represent teams, projects, or any other functional groups inside a business and put IAM users into those groups. This helps us organize.

      Now, the second benefit, which builds off the first, is that groups can actually have policies attached to them. This includes both inline policies and managed policies. In the example on the screen now, the Developers group has a policy attached, as does the QA group. There’s also nothing to stop IAM users, who are themselves within groups, from having their own inline or managed policies. This is the case with Sally.

      When an IAM user such as Sally is added as a member of a group—let’s say the Developers group—that user gets the policies attached to that group. Sally gains the permissions of any policies attached to the Developers group and any other groups that that user is a member of. So Sally also gets the policies attached to the QA group, and Sally has any policies that she has directly.

      With this example, Sally is a member of the Developers group, which has one policy attached, a member of the QA group with an additional policy attached, and she has her own policy. AWS merges all of those into a set of permissions. So effectively, she has three policies associated with her user: one directly, and one from each of the group memberships that her user has.

      When you're thinking about the allow or deny permissions in policy statements for users that are in groups, you need to consider those which apply directly to the user and their group memberships. Collect all of the policy allows and denies that a user has directly and from their groups, and apply the same deny-allow-deny rule to them as a collection. Evaluating whether you're allowed or denied access to a resource doesn’t become any more complicated; it’s just that the source of those allows and denies can broaden when you have users that are in multiple IAM groups.

      I mentioned last lesson that an IAM user can be a member of up to 10 groups and there is a 5,000 IAM user limit for an account. Neither of those are changeable; they are hard limits. There’s no effective limit for the number of users in a single IAM group, so you could have all 5,000 IAM users in an account as members of a single IAM group.

      Another common area of trick questions in the exam is around the concept of an all-users group. There isn't actually a built-in all-users group inside IAM, so you don’t have a single group that contains all of the members of that account like you do with some other identity management solutions. In IAM, you could create a group and add all of the users in that account into the group, but you would need to create and manage it yourself. So that doesn’t exist natively.

      Another really important limitation of groups is that you can’t have any nesting. You can’t have groups within groups. IAM groups contain users and IAM groups can have permissions attached. That’s it. There’s no nesting, and groups cannot be logged into; they don’t have any credentials.

      Now, there is a limit of 300 groups per account, but this can be increased with a support ticket.

      There’s also one more point that I want to make at this early stage in the course. This is something that many other courses tend to introduce later on or at a professional level, but it's important that you understand this from the very start. I'll show you later in the course how policies can be attached to resources, for example, S3 buckets. These policies, known as resource policies, can reference identities. For example, a bucket could have a policy associated with it that allows Sally access to that bucket. That’s a resource policy. It controls access to a specific resource and allows or denies identities to access that bucket.

      It does this by referencing these identities using an ARN, or Amazon Resource Name. Users and IAM roles, which I'll be talking about later in the course, can be referenced in this way. So a policy on a resource can reference IAM users and IAM roles by using the ARN. A bucket could give access to one or more users or to one or more roles, but groups are not a true identity. They can’t be referenced as a principal in a policy. A resource policy cannot grant access to an IAM group. You can grant access to IAM users, and those users can be in groups, but a resource policy cannot grant access to an IAM group. It can’t be referred to in this way. You couldn’t have a resource policy on an S3 bucket and grant access to the Developers group and then expect all of the developers to access it. That’s not how groups work. Groups are just there to group up IAM users and allow permissions to be assigned to those groups, which the IAM users inherit.
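
      To illustrate the point, here's a minimal sketch of a bucket resource policy that grants access to an IAM user by ARN. The bucket name and account ID are placeholders. Note that the Principal is a user; putting a group here would not work, because a group is not a true identity:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:user/sally" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
          }
        ]
      }
      ```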

      So this is an important one to remember, whether you are answering an exam question that involves groups, users, and roles or resource policies, or whether you're implementing real-world solutions. It’s easy to overestimate the features that a group provides. Don’t fall into the trap of thinking that a group offers more functionality than it does. It’s simply a container for IAM users. That’s all it’s for. It can contain IAM users and have permissions associated with it; that’s it. You can’t log in to them and you can’t reference them from resource policies.

      Okay, so that’s everything I wanted to cover in this lesson. Go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back.

      In this demo lesson, we're going to explore IAM users. This is the first type of identity that we've covered in AWS. We'll use the knowledge gained from the IAM policy documents lesson to assign permissions to an IAM user in our AWS account.

      To get started, you'll need to be logged in as the IAM admin user to the general AWS account and have the Northern Virginia region selected.

      Attached to this lesson are two links. The first is a one-click deployment link, which will deploy the infrastructure needed for this demo. The second link will download the files required for this demo. Click the demo files link to start the download, and then click the one-click deployment link to begin the deployment.

      Earlier in the course, you created an IAM user that you should be logged into this account with. I won't go through the process of creating an IAM user again. Instead, we'll use CloudFormation to apply a template that will create an IAM user named Sally, along with two S3 buckets for this demonstration and a managed policy. Enter a password for the Sally user as a parameter in the CloudFormation stack. Use a password that’s reasonably secure but memorable and typeable. This password must meet the password policy assigned to your AWS account, which typically requires a minimum length of eight characters and a mix of character types, including uppercase, lowercase, numbers, and certain special characters. It also cannot be identical to your AWS account name or email address.

      After entering the password, scroll down, check the capabilities box, and click on create stack. Once the stack is created, switch to your text editor to review what the template does. It asks for a parameter (Sally's password) and creates several resources.

      There are logical resources called 'catpix' and 'animalpix,' both of which are S3 buckets. Another logical resource is 'Sally,' an IAM user. This IAM user has a managed policy attached to it, which references an ARN for a managed policy that will be shown once the stack is complete. It also sets the login profile, including the password, and requires a password reset upon first login.

      The managed policy created allows access to S3 but denies access to the CatPix S3 bucket. This setup is defined in the policy logical resource, which you’ll see once the stack is complete.
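
      As a rough sketch of that managed policy (the real bucket name is generated by CloudFormation, so the ARNs here are placeholders), it combines an allow covering all of S3 with an explicit deny scoped to the CatPix bucket and its objects:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowAllS3",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
          },
          {
            "Sid": "DenyCatPixBucket",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
              "arn:aws:s3:::catpix-bucket-placeholder",
              "arn:aws:s3:::catpix-bucket-placeholder/*"
            ]
          }
        ]
      }
      ```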

      Returning to the AWS console, refresh the page. You should see the four created resources: the AnimalPix S3 bucket, the CatPix S3 bucket, the IAM Managed Policy, and the Sally IAM User.

      CloudFormation generates resource names by taking the stack name ("iam"), the logical resource name defined in the template ("animal_pics," "cat_pics," and "sally"), and adding some randomness to ensure unique physical IDs.

      Now, open the Sally IAM user by clicking the link under Resources. Note that Sally has an attached managed policy called IAM User Change Password, which allows her to change her password upon first login.

      Go to Policies in the IAM console to see the managed policies inside the account. The IAM User Change Password policy is one of the AWS managed policies and allows Sally to change her password.

      Next, click on Dashboard and ensure that you have the IAM users sign-in link on your clipboard. Open a private browser tab or a separate browser to avoid logging out of your IAM admin user. Use this separate tab to access the IAM sign-in page for the general AWS account.

      Retrieve Sally’s username from CloudFormation by clicking on outputs and copying it. Paste this username into the IAM username box on the sign-in page, and enter the password you chose for Sally. Click on sign in.

      After logging in as Sally, you’ll need to change the password. Enter the old password, then choose and confirm a new secure password. This is possible due to the managed policy assigned to Sally that allows her to change her password.

      Once logged in, test Sally’s permissions by navigating to the EC2 console. You’ll encounter API errors because Sally lacks the permissions to use EC2. Check the S3 console and note that you also won't have permissions to list any S3 buckets, even though we know at least two were created. This demonstrates that IAM users start with no permissions beyond any that are explicitly granted; in Sally's case, only the policy allowing her to change her password.

      Locate and extract the zip file downloaded from the demo files link. Inside the extracted folder, open the file named S3_FullAdminJSON. This JSON policy document grants full access to any S3 actions on any S3 resource.
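
      If you want to sanity-check the file's contents before attaching it, a policy granting full access to any S3 action on any S3 resource typically looks like this (a sketch; the downloaded file may differ cosmetically):

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
          }
        ]
      }
      ```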

      Assign this as an inline policy to Sally by copying the JSON policy document and pasting it into the IAM console. Go to the IAM area, open the Sally user, go to the Permissions tab, and click Add Permissions, then Create Inline Policy. Select the JSON tab, delete any existing document, and paste in the JSON policy.

      Review the policy, name it S3 Admin Inline, and click Create Policy. Sally will now have this S3 Admin Inline policy in addition to the IAM User Change Password managed policy.

      Switch to the browser or tab logged in as Sally and refresh the page. You should now be able to see the S3 buckets. Upload a file to both the AnimalPix and CatPix buckets to verify permissions. For example, upload thor.jpg to AnimalPix and merlin.jpg to CatPix.

      To ensure you can read from the CatPix bucket, click on merlin.jpg and select open. You should see the file, confirming that you have access.

      Return to the browser logged in as the IAM admin user. Open the Sally user and delete the S3 Admin Inline policy. This will remove her access rights over S3.

      In the other browser or tab logged in as Sally, refresh the page. You should now see access denied errors for S3, because with the inline policy removed, Sally no longer has any S3 permissions.

      Finally, return to the IAM admin browser or tab. Click on Add Permissions for Sally and attach the managed policy created by the CloudFormation template, "allow all S3 except cats." This policy has two statements: one allowing all S3 actions and another explicitly denying access to the CatPix bucket.

      Verify this by refreshing the page logged in as Sally. You should be able to interact with all S3 buckets except the CatPix bucket.

      To conclude, this demo showed how to apply different types of policies to an IAM user, including inline and managed policies. We demonstrated how these policies affect effective permissions.

      For cleanup, delete the managed policy attachment from Sally. In the S3 console, empty both the CatPix and AnimalPix buckets by typing "permanently delete" and clicking empty. Return to CloudFormation, select the IAM stack, and hit delete to clean up all resources created by this stack.

      That covers everything for this demo. Complete this video, and when you're ready, join me in the next lesson.

    1. Welcome back.

      And in this lesson, I want to finish my coverage of IAM users.

      You already gained some exposure to IAM users earlier in the course. Remember, you created an IAM admin user in both your general and production AWS accounts. As well as creating these users, you secured them using MFA, and you attached an AWS managed policy to give this IAM user admin rights in both of those accounts.

      So for now, I just want to build upon your knowledge of IAM users by adding some extra detail that you'll need for the exam. So let's get started.

      Now, before I go into more detail, let's just establish a foundation. Let's use a definition. Simply put, IAM users are an identity used for anything requiring long-term AWS access. For example, humans, applications, or service accounts. If you need to give something access to your AWS account, and if you can picture one thing, one person or one application—so James from accounts, Mike from architecture, or Miles from development—99% of the time you would use an IAM user.

      If you need to give an application access to your AWS account, for example, a backup application running on people's laptops, then each laptop generally would use an IAM user. If you have a need for a service account which needs to access AWS, then generally this will also use an IAM user. If you can picture one thing, a named thing, then 99% of the time, the correct identity to select is an IAM user. Remember this, because it will help in the exam.

      IAM starts with a principal. And this is a word which represents an entity trying to access an AWS account. At this point, it's unidentified. Principals can be individual people, computers, services, or a group of any of those things. For a principal to be able to do anything, it needs to authenticate and be authorized. And that's the process that I want to step through now.

      A principal, which in this example, is a person or an application, makes requests to IAM to interact with resources. Now, to be able to interact with resources, it needs to authenticate against an identity within IAM. An IAM user is an identity which can be used in this way.

      Authentication is this first step. Authentication is a process where the principal on the left proves to IAM that it is an identity that it claims to be. So an example of this is that the principal on the left might claim to be Sally, and before it can use AWS, it needs to prove that it is indeed Sally. And it does this by authenticating.

      Authentication for IAM users is done either using username and password or access keys. These are both examples of long-term credentials. Generally, username and passwords are used if a human is accessing AWS and accessing via the console UI. Access keys are used if it's an application, or as you experienced earlier in the course, if it's a human attempting to use the AWS Command Line tools.

      Now, once a principal goes through the authentication process, the principal is now known as an authenticated identity. An authenticated identity has been able to prove to AWS that it is indeed the identity that it claims to be. So it needs to be able to prove that it's Sally. And to prove that it's Sally, it needs to provide Sally's username and password, or be able to use Sally's secret access key, which is a component of the access key set. If it can do that, then AWS will know that it is the identity that it claims to be, and so it can start interacting with AWS.

      Once the principal becomes an authenticated identity, then AWS knows which policies apply to the identity. So in the previous lesson, I talked about policy documents, how they could have one or more statements, and if an identity attempted to access AWS resources, then AWS would know which statements apply to that identity. That's the process of authorization.

      So once a principal becomes an authenticated identity, and once that authenticated identity tries to upload to an S3 bucket or terminate an EC2 instance, then AWS checks that that identity is authorized to do so. And that's the process of authorization. So they're two very distinct things. Authentication is how a principal can prove to IAM that it is the identity that it claims to be using username and password or access keys, and authorization is IAM checking the statements that apply to that identity and either allowing or denying that access.

      Okay, let's move on to the next thing that I want to talk about, which is Amazon Resource Names, or ARNs. ARNs do one thing, and that's to uniquely identify resources within any AWS account. When you're working with resources using the command line or APIs, you need a way to refer to those resources unambiguously. ARNs allow you to refer to a single resource, if needed, or in some cases, a group of resources using wildcards.

      Now, this is required because things can be named in a similar way. You might have an EC2 instance in your account with similar characteristics to one in my account, or you might have two instances in your account but in different regions with similar characteristics. ARNs can always identify single resources, whether they're individual resources in the same account or in different accounts.

      Now, ARNs are used in IAM policies which are generally attached to identities, such as IAM users, and they have a defined format. Now, there are some slight differences depending on the service, but as you go through this course, you'll gain enough exposure to be able to confidently answer any exam questions that involve ARNs. So don't worry about memorizing the format at this stage, you will gain plenty of experience as we go.

      These are two similar, yet very different ARNs. They both look to identify something related to the catgifs bucket. They specify the S3 service. They don't need to specify a region or an account because the naming of S3 is globally unique. If I use a bucket name, then nobody else can use that bucket name in any account worldwide.

      The difference between these two ARNs is the forward slash star on the end at the second one. And this difference is one of the most common ways mistakes can be made inside policies. It trips up almost all architects or admins at one point or another. The top ARN references an actual bucket. If you wanted to allow or deny access to a bucket or any actions on that bucket, then you would use this ARN which refers to the bucket itself. But a bucket and objects in that bucket are not the same thing.

      This ARN references anything in that bucket, but not the bucket itself. So by specifying forward slash star, that's a wild card that matches any keys in that bucket, so any object names in that bucket. This is really important. These two ARNs don't overlap. The top one refers to just the bucket and not the objects in the bucket. The bottom one refers to the objects in the bucket but not the bucket itself.

      Now, some actions that you want to allow or deny in a policy operate at a bucket level or actually create buckets. And this would need something like the top ARN. Some actions work on objects, so it needs something similar to the bottom ARN. And you need to make sure that you use the right one. In some cases, creating a policy that allows a set of actions will need both. If you want to allow access to create a bucket and interact with objects in that bucket, then you would potentially need both of these ARNs in a policy.
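
      Here's a hedged sketch of how the two ARNs might appear together in a single policy. The specific actions are illustrative examples of bucket-level versus object-level operations, not a definitive list:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "BucketLevelAccess",
            "Effect": "Allow",
            "Action": ["s3:CreateBucket", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::catgifs"
          },
          {
            "Sid": "ObjectLevelAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::catgifs/*"
          }
        ]
      }
      ```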

      ARNs are collections of fields separated by colons. If you see a double colon, it means that field has been left empty because it doesn't need to be specified. So in this example, you'll see a number of double colons, because you don't need to specify the region or account number for an S3 bucket; the bucket name is globally unique. A star can also be used, which is a wildcard.

      Now, keep in mind they're not the same thing. Not specifying a region and specifying a star don't mean the same thing. You might use a star when you want to refer to all regions inside an AWS account; maybe you want to give permissions to interact with EC2 in all regions, and you can't simply omit the field. The only place you'll generally use the double colon is when something doesn't need to be specified, whereas you'd use a star when you want to refer to a wildcard collection of things. So they're not the same thing. Keep that in mind, and I'll give you plenty of examples as we go through the course.

      So the first field is the partition, and this is the partition that the resource is in. For standard AWS regions, the partition is AWS. If you have resources in other partitions, the partition is AWS-hyphen-partition name. This is almost never anything but AWS. But for example, if you do have resources in the China Beijing region, then this is AWS-cn.

      The next part is the service, and this is the service namespace that identifies the AWS product. For example, S3, IAM, or RDS. The next field is the region, which is the region that the resource you're referring to resides in. Some ARNs do not require a region, so this might be omitted, and certain ARNs require a wildcard. You'll gain exposure through the course as to what different services require for their ARNs.

      The next field is the account ID. This is the account ID of the AWS account that owns the resource. So for example, 123456789012. So if you're referring to an EC2 instance in a certain account, you will have to specify the account number inside the ARN. Some resources don't require that, so this example is S3 because it is globally unique across every AWS account. You don't need to specify the account number.

      And then at the end, we've got resource or resource type. And the content of this part of the ARN varies depending on the service. A resource identifier can be the name or ID of an object. For example, user forward slash Sally or instance forward slash and then the instance ID, or it can be a resource path. But again, I'm only introducing this at this point. You'll get plenty of exposure as you go through the course. I just want to give you this advanced knowledge so you know what to expect.

      So let's quickly talk about an exam PowerUp. I tend not to pad my course with facts and figures for their own sake, but some of them are important, and this is one such occasion.

      Now first, you can only ever have 5,000 IAM users in a single account. IAM is a global service, so this is a per account limit, not per region. And second, an IAM user can be a member of 10 IAM groups. So that's a maximum. Now, both of these have design impacts. You need to be aware of that.

      What it means is that if you have a system which requires more than 5,000 identities, then you can't use one IAM user for each identity. So this might be a limit for internet scale applications with millions of users, or it might be a limit for large organizations which have more than 5,000 staff, or it might be a limit when large organizations are merging together. If you have any scenario or a project with more than 5,000 identifiable users, so identities, then it's likely that IAM users are not the right identity to pick for that solution.

      Now, there are solutions which fix this. We can use IAM roles or Identity Federation, and I'll be talking about both of those later in the course. But in summary, it means using your own existing identities rather than using IAM users. And I'll be covering the architecture and the implementation of this later in the course.

      At this stage, I want you to take away one key fact, and that is this 5,000 user limit. If you are faced with an exam question which mentions more than 5,000 users, or talks about an application that's used on the internet which could have millions of users, and if you see an answer saying create an IAM user for every user of that application, that is the wrong answer. Generally with internet scale applications, or enterprise access or company mergers, you'll be using Federation or IAM roles. And I'll be talking about all of that later in the course.

      Okay, so that's everything I wanted to cover in this lesson. So go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back. In this lesson, I want to start by covering an important aspect of how AWS handles security, specifically focusing on IAM policies.

      IAM policies are a type of policy that gets attached to identities within AWS. As you've previously learned, identities include IAM users, IAM groups, and IAM roles. You’ll use IAM policies frequently, so it’s important to understand them for the exam and for designing and implementing solutions in AWS.

      Policies, once you understand them, are actually quite simple. I’ll walk you through the components and give you an opportunity to experiment with them in your own AWS account. Understanding policies involves three main stages: first, understanding their architecture and how they work; second, gaining the ability to read and understand the policy; and finally, learning to write your own. For the exam, understanding their architecture and being able to read them is sufficient. Writing policies will come as you work through the course and gain more practical experience.

      Let's jump in. An IAM identity policy, or IAM policy, is essentially a set of security statements for AWS. It grants or denies access to AWS products and features for any identity using that policy. Identity policies, also known as policy documents, are created using JSON. Familiarity with JSON is helpful, but if you're new to it, don’t worry—it just requires a bit more effort to learn.

      This is an example of an identity policy document that you would use with a user, group, or role. At a high level, a policy document consists of one or more statements. Each statement is enclosed in curly braces and grants or denies permissions to AWS services.

      When an identity attempts to access AWS resources, it must prove its identity through a process known as authentication. Once authenticated, AWS knows which policies apply to that identity, and each policy can contain multiple statements. AWS also knows which resources you’re trying to interact with and what actions you want to perform on those resources. AWS reviews all relevant statements one by one to determine the permissions for a given identity accessing a particular resource.

      A statement consists of several parts. The first part is a statement ID, or SID, which is optional but helps identify the statement and its purpose. For example, "full access" or "DenyCatBucket" indicates what the statement does. Using these identifiers is considered best practice.

      Every interaction with AWS involves two main elements: the resource and the actions attempted on that resource. For instance, if you’re interacting with an S3 bucket and trying to add an object, the statement will only apply if it matches both the action and the resource. The action part of a statement specifies one or more actions, which can be very specific or use wildcards (e.g., s3:* for all S3 operations). Similarly, resources can be specified individually or in lists, and wildcards can refer to all resources.

      The final component is the effect, which is either "allow" or "deny." The effect determines what AWS does if the action and resource parts of the statement match the attempted operation. If the effect is "allow," access is granted; if it’s "deny," access is blocked. An explicit deny always takes precedence over an explicit allow. If neither applies, the default implicit deny prevails.

      In scenarios where there are multiple policies or statements, AWS evaluates all applicable statements. If there's an explicit deny, it overrides any explicit allows. If there's no explicit deny but there is an explicit allow, access is granted. If neither an allow nor a deny applies, the default implicit deny prevails.
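      As a minimal sketch of how these parts fit together (the bucket name catpics is again just a placeholder), a policy might combine a broad allow with a targeted deny. Because the explicit deny takes precedence, an identity with this policy could use any S3 action on any resource except that one bucket and its objects:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "FullAccess",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
          },
          {
            "Sid": "DenyCatBucket",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
              "arn:aws:s3:::catpics",
              "arn:aws:s3:::catpics/*"
            ]
          }
        ]
      }
      ```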

      Lastly, there are two main types of policies: inline policies and managed policies. Inline policies are directly attached to individual identities, making them isolated and cumbersome to manage for large numbers of users. Managed policies are created as separate objects and can be attached to multiple identities, making them more efficient and easier to manage. AWS provides managed policies, but you can also create and manage customer managed policies tailored to your specific needs.

      Before concluding, you’ll have a chance to gain practical experience with these policies. For now, this introduction should give you a solid foundation. Complete the video, and I look forward to seeing you in the next lesson.

    1. librarians must increasingly prioritize fostering students' ability to critically evaluate AI-generated content because of the continuous advancements in these technologies

      this is framed as AI literacy

    2. confirmation” stage of Diffusion of Innovation

      EvoLLLution (2019) Effective Change Management: The Five Stages of the Innovation-Decision Process. Available at: https://evolllution.com/technology/tech-tools-and-resources/effective-change-management-the-five-stages-of-the-innovation-decision-process (Accessed: 15 August 2024).

      Rahul, K. (2023) Diffusion of Innovations - What Is It, Examples, Elements, Stages, WallStreetMojo. Available at: https://www.wallstreetmojo.com/diffusion-of-innovations/ (Accessed: 15 August 2024).

      Sahin, I. (2006) 'Detailed review of Rogers' diffusion of innovations theory and educational technology-related studies based on Rogers' theory', The Turkish Online Journal of Educational Technology, 5(2).

    3. impact

      this paragraph covers the history

    4. Artificial intelligence (AI) encompasses diverse technologies that enable machines to simulate human cognitive capabilities. The subset of AI known as generative artificial intelligence (genAI) immerses itself in extensive datasets and learns from them. This learning enables it to create original content such as text, images, audio, and video based on its comprehension of the acquired information.

      definition of AI

    1. Treating dissent. PUNITIVE PSYCHIATRY

      The opening thesis is that psychiatry is built on the social norms of a culture, a time, or a country. In other words, social norms are the construct by which particular aspects of behaviour get classed as unhealthy.

      For the people who lived in that context, this was considered normal and correct; no one even questioned the conclusions of the senior scientists. It was simply a norm that people were used to.

      In other words, psychiatry depends on social factors and on the needs of society.

      In 1972 the question of inhumane treatment practices was raised, with particular attention to "sluggish schizophrenia", which, as I understand it, had no sound basis.

      As an example from the cultural context of the USA: drapetomania (from a word meaning to bolt, to run away), diagnosed when a Black person held as a slave wanted to run away from their "home", to escape subjugation to a white man. That "mental illness" has since been recognised as absurd.

      Also, masturbation was classed as a mental illness until 1969.

      And a couple of women were kept in psychiatric confinement for 50 years under the "illness" of moral deficiency, because they had children out of wedlock.

    1. eLife assessment

      This paper provides valuable findings related to the impact and timing of exogenous interleukin 2 (IL-2) on the balance of exhausted (Tex) versus effector (Teff) cells that differentiate from precursor exhausted T cells (Tpex) during chronic viral infection. While the data appear solid, the overall claim that IL-2 suppresses Tpex is only partially supported, with the rationale for the timing of IL-2 treatment and its underlying mechanisms remaining unclear.

    2. Reviewer #1 (Public Review):

      Summary:

      The title states "IL-2 enhances effector function but suppresses follicular localization of CD8+ T cells in chronic infection", which the data in the paper show but which does not seem to be the major goal of the authors. As stated in the short assessment above, the goal of this work seems to be to connect IL-2 signals, mostly given exogenously, to the differentiation of progenitor T cells (TPEX) that help sustain effector T cell responses against chronic viral infection (TEX/TEFF). The authors mostly use chronic LCMV infection in mice as their model of choice, together with flow cytometry, fluorescent microscopy, and some in vitro assays, to explore how IL-2 regulates TPEX and TEX/TEFF differentiation. Gain- and loss-of-function experiments are also conducted to explore the roles of IL-2 signaling and BLIMP-1 in regulating these processes. Lastly, a loose connection of their mouse findings on TPEX/TEX cells to a clinical study using low-dose IL-2 treatment in SLE patients is attempted.

      Strengths:

      (1) The impact of IL-2 treatment on TPEX/TEX differentiation is very clear.

      (2) The flow cytometry data are convincing and state-of-the-art.

      Weaknesses:

      (1) The title appears disconnected from the major focus of the work.

      (2) The number of TPEX cells is not changed. IL-2 treatment increases the number of TEFF cells, and the proportion of TPEX is therefore lower, suggesting that it does not target TPEX formation. The conclusion about an inhibitory role of IL-2 treatment on TPEX formation therefore seems largely overstated.

      (3) Are the expanded TEX/TEFF cells really effectors? Only GrB and some cell surface markers (CD44, CD62L) are monitored. Other functions should be included, e.g., CD107a, IFNg, TNF, chemokines - Tbet?

      (4) The rationale for the IL-2 treatment timing is unclear. It seems that it is given at the time of T cell contraction, which is interesting compared to the early treatment that ablates TPEX generation. Maybe this should really be explored further?

      (5) The TGFb/IL6/IL2 in vitro experiment does not bring much to the paper.

      (6) The Figure 2 data try to provide an explanation for a prior lack of difference in viral titers after IL2 treatment. It is hard to be convinced by these tissue section data as presented. It also begs the question of how the host would benefit from the low dose IL-2 treatment if IL-2 TEFF are not contributing to viral control as a result of their inappropriate localization to viral reservoirs.

      (7) It is unclear what the STAT5CA and BLIMP-1 KO experiments in Figure 3 add to the story that is not already expected/known.

      (8) The connection to the low-dose IL-2 treatment in SLE patients is very loose and weak. The IL-2 used in that setting is likely not a ligand that preferentially signals through CD122 either. SLE is different from a chronic viral infection, and the question of timing seems critical based on all the data shown in this manuscript. So it is very difficult to make any robust link to the mechanistic data.

      (9) It is really unclear what the take-home message is. IL-2 signals via STAT5, and BLIMP1 is a known target, as published by many groups including this one, so these results are largely expected. The observation that TEFF may be differentially localized in the white pulp (WP) area is interesting, but no mechanism is really provided (presumably CXCR5, but again expected). Also, all of these observations are highly dependent on the timing of IL-2 administration, which is fascinating but not explored at all. This also limits the significance, since the underlying mechanisms are unknown and we do not know when such a treatment would have to be given.

    3. Reviewer #2 (Public Review):

      This study utilized the LCMV Docile infection model, which induces chronic and persistent infection in mice, leading to T cell exhaustion and dysfunction. Through exogenous IL-2 fusion protein treatment during the late stage of infection, the researchers found that IL-2 treatment significantly enlarges the antigen-specific effector CD8 T cell population, expanding the CXCR5-TCF1- exhausted population (Tex) while maintaining the size of the CXCR5+TCF1+ population of precursors of exhausted T cells (Tpex). This preservation of the Tpex population's self-renewing capacity allows for sustained T cell proliferation and antiviral responses.

      The authors discovered a dual effect of IL-2 treatment: it decreases CXCR5 expression on Tpex cells, restricting their entry into the B cell follicle. This may explain why IL-2 treatment has little impact on overall viral control. However, this finding also suggests a potential application of IL-2 treatment for autoimmune diseases, as it can suppress specific immune responses within the B cell follicle. Using imaging-based approaches, the team provided direct evidence that IL-2 treatment shifts the viral load to concentrate within the B cell follicle, correlating with the observed decrease in CXCR5 expression.

      Further, the researchers showed that ectopic expression of constitutively active STAT5, downstream of IL-2 induced cytokine signaling, in P14 TCR transgenic T cells (specific for an LCMV epitope), drove the T cell population toward the CXCR5- Tex phenotype over the CXCR5+ Tpex cells in vivo. Additionally, abrogating Blimp1, upregulated by active IL-2-phosphorylated STAT5 signaling, restored the CXCR5+ Tpex population.

      Building on these results, the researchers used an engineered IL-2 fusion protein, ANV410, targeting the beta-chain of the IL-2 receptor CD122, which successfully replicated their earlier findings. Importantly, the Tpex-sustaining effect of IL-2 was only observed when treatment was administered during the late stage of infection, as early treatment suppressed Tpex cell generation. Immune profiling of SLE patients undergoing low-dose IL-2 treatment showed a similar reduction in the CXCR5+ Tpex cell population.

      This study provides compelling data on the physiological consequences of IL-2 treatment during chronic viral infection. By leveraging the chronic and persistent LCMV Docile infection model, the researchers identified the temporal effects of IL-2 fusion protein treatment, offering strategic insights for therapies targeting cancer and autoimmune diseases.

    1. formation and transcendence, and that our best hope for achieving the

      m

    2. newspapers, in print,

      every time i see one i take pictures.

      i feel like these things are "ancient holy relics" and we just don't understand the worth of things like "the early edition" and "the late one" anymore--

      the kinds of things that caused a new die to be cast, and a whole "what's the world staring at" to change

    1. Ketamine has been found to have rapid and potent antidepressant activity. However, despite the ubiquitous brain expression of its molecular target, the N-methyl-d-aspartate receptor (NMDAR), it was not clear whether there is a selective, primary site for ketamine's antidepressant action. We found that ketamine injection in depressive-like mice specifically blocks NMDARs in lateral habenular (LHb) neurons, but not in hippocampal pyramidal neurons.

      This is a test.

    1. Train ride length: Line 1: 10 minutes per loop; Line 2: 20 minutes per loop; Line 3: 30 minutes per loop

      Train / service times

    2. If walking, friends and family entering the park on foot can enter through the gates at "Zone J Biancheng, Zone F Gate No. 6, Zone H Sanjiaxiang, Zone L Liubao, Zone M Kangqiao, Zone D South Gate, Zone B South Gate";

      Walking / sightseeing

    3. (Personal experience: by 10 a.m. that day it was already jammed, and after lunch it was even worse, so we had no choice but to park in a lot outside and walk 1 km over with her; a total failure.)

      Timing: arrive before 10 a.m.

    1. Let's go straight to the solution

      Recursive solution:

      ```Java
      class Solution {
          // Recursively merges two sorted linked lists into one sorted list.
          // Assumes the standard LeetCode ListNode singly-linked-list definition.
          public ListNode mergeTwoLists(ListNode list1, ListNode list2) {
              // Base cases: if either list is empty, the merged result is the other list.
              if (list1 == null) {
                  return list2;
              }
              if (list2 == null) {
                  return list1;
              }

              // Take the smaller head, then recursively merge the remainder behind it.
              if (list1.val <= list2.val) {
                  list1.next = mergeTwoLists(list1.next, list2);
                  return list1;
              } else {
                  list2.next = mergeTwoLists(list1, list2.next);
                  return list2;
              }
          }
      }
      ```