1. Oct 2024
    1. Lyon, governed by the French Greens, is responding to global heating with a "strategy of the permeable city". Part of this is letting water seep into the ground on every single new construction project instead of channeling it away. A vice-president of the region explains the strategy, and the national government's unwillingness to support the city financially, in an interview with Libération occasioned by the extreme rainfall in the Rhône département: https://www.liberation.fr/societe/inondations-dans-la-metropole-de-lyon-nous-payons-des-annees-damenagements-urbains-qui-nont-pas-tenu-compte-du-dereglement-climatique-20241018_FT2OJG5YNVFWBJMHM37NJGB634/?redirected=1

    1. しいます

      Typo: should be します.

    2. ウェジェット

      Typo: should be ウィジェット (widget).

    3. づつ

      Typo: should be ずつ.

    4. ごと

    5. グラフを表現する機能があります ("it has features for rendering charts")

      Streamlit appears to use this library internally, so it might be worth introducing it somewhere:

      https://altair-viz.github.io/

    6. t.session_state.dices.append

      Rather than rewriting this, couldn't you just put

      dices = st.session_state.dices

      before the if statement? (Then none of the other lines would need rewriting.)
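
      A minimal sketch of why this suggestion works: binding a local name to the list stored in session state creates an alias to the same list object, not a copy, so appends through the alias are visible through the original reference too. (A plain dict stands in for st.session_state here; the dices key is taken from the article's example.)

      ```python
      # A plain dict stands in for st.session_state in this illustration.
      session_state = {"dices": []}

      # Bind a local name once, before the if statement, as suggested.
      dices = session_state["dices"]  # alias to the same list object, not a copy

      dices.append(4)
      dices.append(2)

      # Appends made through the alias are visible via the original reference.
      print(session_state["dices"])  # [4, 2]
      ```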

    7. if "dices" not in st.session_state: # セッションデータの初期化 st.session_state.dices = []

      Streamlit's own sample code does it this way too, but I found it a little tricky that the existence check uses "not in" (key-style access) while the initialization uses attribute access. I'd like that explained.

      st.session_state["dices"] = [] seems to behave the same way.
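
      The equivalence noted above can be illustrated with a toy class that routes both attribute access and key access through one underlying dict, which is roughly how st.session_state behaves. This is a sketch for explanation only, not Streamlit's actual implementation:

      ```python
      class StateLike:
          """Toy object where attribute access and key access share one dict."""

          def __init__(self):
              object.__setattr__(self, "_data", {})

          def __setattr__(self, name, value):
              self._data[name] = value

          def __getattr__(self, name):
              try:
                  return self._data[name]
              except KeyError:
                  raise AttributeError(name)

          def __setitem__(self, key, value):
              self._data[key] = value

          def __getitem__(self, key):
              return self._data[key]

          def __contains__(self, key):
              return key in self._data


      state = StateLike()
      if "dices" not in state:   # key-style existence check...
          state.dices = []       # ...attribute-style initialization
      print(state["dices"])      # prints [] (both styles reach the same dict)
      ```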

    8. これらを ("these")

      What exactly does "these" refer to here?

    9. 上記のステップの3つ目 ("the third of the steps above")

      If you phrase it this way, it would be clearer to turn the steps above into a numbered list and write "through step 3".

    10. At this point, I'd like you to build an app that just passes various data types to st.write and show screenshots of each one being displayed nicely.

    11. サンプルアプリ(2) ("Sample app (2)")

      Please give this one a name as well.

    12. それ相応

      It's hard to tell what それ相応 ("suitably", "as befits it") refers to here.

      I also wonder: suitable in whose judgment?

      An expression like 適切な ("appropriate") or データ型にあった ("suited to the data type") would work just as well.

    13. プロパティ ("property")

      Do you mean an argument (引数)?

    14. 以下のとおりです。("It looks as follows.")

      This jumps straight to the result; I'd like two screenshots: the initial state, and the state after entering text and pressing the button.

      (An animated GIF would be even nicer!)

    15. randam

      typo: random

    16. 入力されたもを

      typo: 入力されたものを?

    17. # 入力ボックス ("# input box")

      The code feels cluttered; I think it would read better with each comment on its own line above the code, plus blank lines between the major functional sections.

    18. st.text_input

      I'd like an explanation just before this.

      Something like "Next, let's look at ○○."

    19. splited_text

      Apparently the past participle of "split" is "split", so split_text is fine.

      https://www.eigo-bu.com/vocab/pp/split

      Since it's a list, a name like words would also work.

    20. choice

      I'd prefer choiced.

    21. replace(" ", " ")

      Rewriting full-width spaces to half-width ones is unrelated to the main topic; wouldn't a simple

      text.split() be enough?
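
      This simplification can be checked directly: Python's str.split() with no argument splits on any Unicode whitespace, including the full-width space (U+3000), so the replace() step is unnecessary. A small sketch (the variable names words and choiced follow the suggestions above; the sample string is made up):

      ```python
      import random

      text = "りんご　みかん banana"  # mixes a full-width and a half-width space

      words = text.split()            # no-arg split handles both kinds of space
      print(words)                    # ['りんご', 'みかん', 'banana']

      choiced = random.choice(words)
      print(choiced in words)         # True
      ```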

    22. スペース区切りの文字列から一つの単語を選択する ("select one word from a space-separated string")

      This heading doesn't match what the code passes to st.title().

      What is the intent of this heading?

    23. 内容 ("Contents")

      Please write this out as full sentences.

    24. サンプルアプリ(1

      "(1)" is hard to follow, so I think it would be better to give the app a name, e.g.

      サンプル - ランダム選択アプリ ("Sample: random selection app")

      or something like that.

    25. 起動 ("Launch")

      Something like アプリを起動 ("Launch the app")

      would be better.

    26. ```bash

      The markup here looks like it might be written incorrectly.

    27. import streamlit as st

      Please add a caption:

      ```{code-block} python :caption: app.py

      import streamlit ...

      ```

    28. st.title("サンプルアプリ")

      Personally I'd prefer a blank line before this.

    29. 多くの依存パッケージがあり、pandasなども依存しており多くのパッケージがインストールされます。

      The phrasing is redundant. Something like

      pandasなど多くの依存パッケージが一緒にインストールされます。("Many dependencies, such as pandas, are installed along with it.")

      would be better.

    30. # venvの作成と有効化 ("# create and activate a venv")

      Throughout, these would be better as captions than as code comments.

    31. venvについては ("as for venv")

      Neither the April article nor any of the previous authors' articles explained venv in this much depth, so the venv explanation could probably be dropped.

      https://terada-202410-streamlit.gihyo-python-monthly.pages.dev/2024/202404

    32. されている

      Should be している.

      Streamlit is the agent here, so the passive voice isn't needed.

    33. 開発開発

      typo

    34. 複雑な処理を ("complex processing")

      I think you mean that the complex processing happens on the server side while the front end stays simple, but as written that won't come across.

    35. これらの ("these")

      これら appears twice in a row, which reads poorly, and this これら seems to refer to something different from the previous one. Wouldn't it be better to write it out concretely instead of using a pronoun?

    36. Individual sentences contain too many commas (、), which makes them hard to read. Please tidy them up.

    37. これはら

      typo: これらは

    38. 機能にフォーカスを当てて、よく使う機能を紹介します ("focusing on features, we introduce commonly used features")

      機能 ("features") appears twice; the first occurrence could simply be deleted.

    1. whirlpool.

      The whirlpool contrasts with moments of stillness and clarity in the poem. It underscores the tension between chaos and order, reflecting the desire for meaning in a fragmented world. The whirlpool serves as a reminder of the relentless motion of time and the challenges of finding stability.

    2. The river sweats Oil and tar

      The lines "The river sweats / Oil and tar" reflect the industrial pollution of the environment and symbolize the decay and corruption present in modern life. The river, typically a symbol of life and renewal, is assigned a certain vitality and transformed into a site of contamination, highlighting themes of desolation and moral decline in the post-war world.

    3. Twit twit twit Jug jug jug jug jug jug So rudely forc’d. Tereu

      In "The Waste Land," the lines "Twit twit twit / Jug jug jug jug jug jug / So rudely forc'd" evoke a jarring and fragmented sense of communication, drawing from the myth of Tereus, Procne, and Philomela. This reference introduces themes of violence, loss, and the disruption of natural order. The repetition of "twit" and "jug" creates a rhythmic yet unsettling sound, almost mocking in its simplicity. It highlights the stark contrast between the complexity of human emotion and the reduced, animalistic quality of the sounds. This mirrors the broader themes of disconnection and alienation throughout the poem. The reference to Tereus—who brutally silenced Philomela by cutting out her tongue—serves as a potent metaphor for silencing and trauma. In this context, the nymphs and their experiences are connected to loss and violence, underscoring the idea that beauty and vitality are often subjected to brutal realities.

    4. departed.

      The indentation of “departed” draws attention to the unusual experience of the nymphs, who traditionally symbolize beauty, love, and the natural world, often associated with life and abundance. However, in Eliot’s context, their presence serves to contrast the barrenness and emptiness of modern existence. Also, decapitalizing “departed” shifts the agency of the nymphs and implies a more passive experience, as they have been swept away and lost without active control over their fate. This loss of agency aligns with the themes in "The Waste Land," where characters often feel powerless in the face of societal decay and personal disillusionment. The experience of the nymphs can be interpreted as a reflection of unfulfilled longing and the impact of a fragmented society on intimate relationships. Instead of celebrating love and connection, references to them evoke a sense of nostalgia for a more vibrant, meaningful past that has been lost. This mirrors the sorrow expressed in Psalm 137, where the Israelites long for their homeland, suggesting a universal longing for wholeness and the deep human need for connection.

      Ultimately, the nymphs' experience in "The Waste Land" draws attention to the contrast between the idealized past and the stark reality of the present, reinforcing the poem's exploration of loss, longing, and the search for identity in a desolate world. The line "Departed, have left no addresses" from "The Waste Land" resonates deeply with the themes in Psalm 137, particularly the sense of dislocation and absence. In Psalm 137, the Israelites lament their exile in Babylon, feeling disconnected from their homeland and traditions. The line evokes a profound sense of loss and the inability to return to a place of belonging, mirroring the mournful sentiment of having no way to communicate or reconnect with what has been left behind. Both texts express a longing for something lost and the pain of separation, emphasizing the emotional weight of exile. Just as the Israelites mourn their captivity and the destruction of their identity, Eliot's line suggests a broader existential crisis where individuals feel untethered in a fragmented world, underscoring the despair and disconnection prevalent in both works.

    5. HURRY UP PLEASE ITS TIME

      Eliot artfully weaves imagery and language that evokes quietude into the fabric of the poem, creating a body of work whose essence personifies forms of silence. The poem possesses a hushed quality, behaving similarly to a curse word, as if to engage and think with the poem were taboo. Yet, when read, the assemblage of fragmented imagery, allusions, ambiguous language and voice, or lack thereof, engenders a profusion of sound. Eliot’s use of syntax in “A Game of Chess” depicts the unexpected resonance of unsaid speech, drawing attention to the hidden yet audible nature of cognition. The capitalization of “HURRY UP PLEASE ITS TIME,” a noticeable shift from the earlier lowercase dialogue, seems intended to evoke a semblance of sound while maintaining the generally quiet disposition of the poem. Eliot's interplay with cognition and sound probes the potency of unsaid speech, revealing how the silence between words carries as much meaning as spoken language itself, inviting readers to consider the depths of thought and emotion that lie beneath the surface of expression.

    6. The Chair she sat in, like a burnished throne,

      I am drawn to the parallels between T.S. Eliot’s The Waste Land and Baudelaire’s “A Martyred Woman,” particularly their shared exploration of the suffering and sacrifice of women. Both works present women as embodiments of beauty intertwined with pain. In Baudelaire’s poem, the “martyred woman” is depicted as suffering yet noble, while Eliot’s female characters often reflect a sense of despair and emotional turmoil despite their allure. Baudelaire explicitly frames women as martyrs, suggesting that their beauty is a source of suffering. Similarly, Eliot’s portrayal of women suggests that they endure personal sacrifices and struggles, often reflecting broader societal issues. This martyrdom emphasizes the emotional toll placed on women. Both poets critique the societal roles imposed on women. Baudelaire highlights how women are idealized yet subjected to suffering, while Eliot’s women often navigate a fragmented identity within a patriarchal context, exposing the emptiness behind romanticized notions of femininity. In both texts, women experience deep alienation. Baudelaire's martyred figures are isolated in their suffering, while Eliot’s women, such as Lil or the clairvoyante, illustrate the emotional disconnect prevalent in modern life, reinforcing feelings of loneliness and despair.

    7. 'That corpse you planted last year in your garden,

      Baudelaire juxtaposes the beauty of art and nature with the harsh realities of life, often reflecting on the dualities of pleasure and suffering. The poems frequently capture the essence of modern urban life, particularly in Paris, highlighting the alienation and moral ambiguity found in the city. Baudelaire delves into themes of vice and corruption, examining how they coexist with beauty. He often portrays sin as an integral part of human nature. Despite the dark themes, there are moments of seeking transcendence through art, love, and spirituality, hinting at the possibility of redemption amid despair. Interestingly, Baudelaire positions the poet as a visionary who can perceive the deeper truths of existence, navigating the complexities of the human condition.

      The line "that corpse you planted last year in your garden" embodies themes of beauty and decay; the imagery of the corpse juxtaposed with the idea of a garden symbolizes the intersection of life and death. It suggests that what might typically be seen as beautiful (a garden) is tainted by decay and mortality. This line hints at buried past sins or traumas, implying that the speaker is grappling with unresolved issues that refuse to remain hidden. The corpse can symbolize guilt or repressed memories that disrupt the facade of normalcy. The garden, often a symbol of natural beauty and cultivation, contrasts sharply with the idea of a corpse. This reflects the alienation and spiritual emptiness of modern life, where even beauty is intertwined with death. The act of planting a corpse can be seen as a perverse twist on the natural cycle of life, suggesting a disruption in the natural order. It points to the theme of regeneration but in a way that is grotesque and unsettling. This line encapsulates Eliot’s task of confronting uncomfortable truths. It suggests that to understand the modern condition, one must acknowledge the darker aspects of existence.

    8. from the hyacinth garden,

      Eliot weaves themes of beauty, love, and loss inspired by the story of Apollo and Hyacinth into the fabric of “The Waste Land,” particularly the cycles of life and death, the transient nature of beauty, and the emotional desolation of the modern world. The tale of Apollo, the god of light and music, and Hyacinth, his beloved, emphasizes the intensity of love and the tragedy of loss. Hyacinth's death, caused by an accidental injury from Apollo’s discus, illustrates how beauty can be fleeting and how love can lead to deep sorrow. In the myth, Hyacinth is transformed into a flower after his death, symbolizing the idea of regeneration. However, in "The Waste Land," this regeneration is complicated by the poem’s pervasive sense of despair and fragmentation. The cycles of life and death are depicted, but they often feel broken or unfulfilled. Eliot contrasts the mythic beauty of Apollo and Hyacinth with the barrenness of the modern world. The decorated imagery of the myth serves to heighten the bleakness of contemporary existence, where love and beauty seem diminished or lost amidst urban decay and spiritual emptiness. The reference to this myth also connects to the broader cultural and literary heritage that Eliot draws upon throughout "The Waste Land." It reflects his engagement with themes of mythology, art, and the human condition, suggesting that ancient stories continue to resonate, even in a fractured modern context.

    9. Quando fiam uti chelidon

      10.18

      Does “The Waste Land” end on a positive note? In debating with myself, I found my answer to remain hopelessly inconclusive. In the final section of the poem, it seems that our protagonist, in a role similar to a quester, has finally arrived at the Waste Land’s “Chapel Perilous” following the hopeful “violet hour” (380). Still, readers are left clueless regarding whether the desired task of regeneration has been completed. In what seems to be the most climactic scene, a rooster announces the arrival of rain from the chapel rooftop, yet two details keep me unnerved about this resolution:

      Firstly, where on Earth did the rain go? The “damp gust” is responsible for “bringing [the] rain,” yet this action is trapped in an unfinished, infinitive state (394-5). In fact, the “black clouds,” confined in a distant mountain chain, can never rejuvenate the withering land in the riverbanks and valleys (397).

      In addition, the cock, the announcer of the rain, is itself heavily connected to the uncertain state between life and death. Firstly, the animal figures in Ariel’s song “Hark, hark! I hear / [...] Cry, Cock a diddle dow” in Shakespeare’s Tempest, which brings to mind the fabricated death of Alonso, King of Naples. Secondly, the word is mentioned in another Shakespearian play, Hamlet, in the specific context of King Hamlet’s appearance as a ghost (ghost-hood and fabricated deaths suggest a similar border state between life and death). This brings even greater uncertainty regarding the cock’s ability to announce or direct genuine revitalization.

      This sense of incompletion persists until the very last stanza, in which border states, including the shore that the speaker sits at (between water and land) and the London Bridge (between life and death/Inferno), figure heavily. In addition, the insufficiency of Philomela’s transformation is emphasized once again. The line “quando fiam uti chelidon” merely anticipates a future gaining of a voice similar to that of the swallow’s, yet the task is essentially unfulfillable – while both sexes of the swallow can sing, only the male nightingale sings (429). Philomela’s metamorphosis still does not liberate her from her silence, a reminder of her subjugation. It is, once again, an incomplete renewal at best.

    10. falling down falling down falling down

      This is one of many times in the poem where repetition like this occurs. This is similar to "The Vigil of Venus" where the line "Tomorrow may loveless, may lover tomorrow make love" is repeated several times throughout the poem. Interestingly, the line itself is almost repetition but not quite, which makes the idea of love in the poem feel like an ever-changing thing that isn't stagnant. Meanwhile, "The Waste Land"'s use of "falling down falling down falling down," through its insistent and exact repetition, seems to show an action that cannot be undone and is damaging, like the London Bridge falling down.

    11. My friend

      In Angela’s annotation for this line, she interrogates the true nature of friendship, claiming that friendship in “The Waste Land” appears in relation to “indifference” and “superficiality” (Li). She cites Bradley as one of her sources, specifically, "a common understanding being admitted, how much does that imply? What is the minimum of sameness that we need suppose to be involved in it?" (Bradley, 6). The word “understanding” specifically caught my attention, as it is central to the Brihadaranyaka Upanishad. This line of “The Waste Land” is in reference to the part of the Upanishad that means “give”: “Then the human beings said to him, ‘Teach us, father.’ He spoke to them the same syllable DA. ‘Did you understand?’ ‘We understood,’ they said. ‘You told us, “Give (datta)”’” (Brihadaranyaka, Chapter 2). Yet, although the humans were instructed to give, Eliot appears to extend this scene, resuming it when the humans reflect upon the past, asking “what have we given?”

      The deception and failure of friendship that Angela identifies as it relates to this line may also provide an answer to the shortcomings of the humans to “give.” Before the line Angela quotes, Bradley states, “what, however, we are convinced of, is briefly this, that we understand and, again, are ourselves understood” (Bradley, 6). Very clearly, Bradley accuses the human race of being under an illusion of understanding one another. If they are under the illusion of understanding, then the credibility of the humans in the Upanishad is completely undermined when they say that they “understand” what datta means. Possibly, they misunderstand what it means to “give,” or, Eliot may be making the claim that they misunderstood the meaning of datta itself as it exists in the universe of the poem. With this in mind, it makes sense that the humans are unable to point to what they’ve given in “The Waste Land.” They are left without direction, and, according to Bradley, they are condemned to failure in connecting, or “giving” themselves to one another. Even “my friend” implies an antithesis to “give”: possession. Eliot seems to agree with Bradley’s proposal that friendship, relationship, true exchange between one person and another is something beyond human understanding.

    12. Only at nightfall, aethereal rumours Revive for a moment a broken Coriolanus

      Coming back to what I said in a previous annotation about actions getting darker as night comes, this seems to flip that idea on its head a bit when saying "Only at nightfall, aethereal rumours / Revive for a moment a broken Coriolanus". Coriolanus is a Shakespeare character who is notably a bit of an antihero, so these lines seem to say that "aethereal rumors" at nightfall are what temporarily redeem Coriolanus, despite a previous annotation of mine arguing that peoples' actions get darker as the night falls. For Coriolanus, it seems to be the opposite.

      This is also interesting when you consider Francis Herbert Bradley's Appearance and Reality where he argues that much of what humans perceive is an illusion, which makes it hard for people to truly connect with each other. This makes me wonder if these "aethereal rumours" are then actually other people and not supernatural beings, but Eliot is referring to them this way to show the true distance between ourselves and the reality of other people.

    13. Who is the third who walks always beside you?

      Both this stanza and P. Marudanayagum's "Retelling of an Indian Legend" deal with a mysterious other. In the legend, the vial (verandah) has enough space for one person to lie on, two people to sit on, or three people to stand on. Once three people are standing on the vial, they feel a fourth presence but don't know who it is, before realizing it's Lord Vishnu (a Hindu God). Following the logic of this legend, a mysterious presence in a space where it's not physically possible for the presence to fit inside is probably a God or other supernatural thing. However, this stanza shows two, not three, people that are standing, and their space isn't limited, but there's also a mysterious presence. There's definitely a lot to unpack here, and I'd welcome any theories about it, but I desperately need to go to sleep and can't properly theorize at this point.

    14. Quando fiam uti chelidon—O swallow swallow

      The 6th line of Eliot’s final stanza in “The Waste Land” reads, “Quando fiam uti chelidon”, or “when shall I be as the swallow”. This line was taken from the Pervigilium Veneris, translated by Allen Tate, which recalls the story of Philomela, an Athenian princess who was raped by a king, and later turned into a bird. In order to gain a better sense of Eliot’s reference, we can look at it in the context of the stanza in the Pervigilium Veneris, which reads “She sings, we are silent. When will my spring come? Shall I find my voice when I shall be as the swallow? … Silent, I lost the muse. Return, Apollo!”. The mention of spring harkens back to the beginning of “The Waste Land”, where spring plays a major theme. In the Pervigilium Veneris, Philomela attributes spring to herself, calling it “my spring”, suggesting that spring represents her own rebirth and restoration. Thus, we might be able to interpret Eliot’s “spring” in a similar manner. Philomela’s seeking out of her voice is also interesting in terms of “The Waste Land”, which is built on fragmented dialogue and ever-changing voices. Interestingly, Philomela seems to have lost “the muse”, or the divine inspiration, and in frustration, she calls out to Apollo to inspire her once again. Eliot, through his biblical references and prayers, seems to be calling out to the divine, perhaps for his own inspiration as well. Another significant part of the Pervigilium Veneris is the repeating line, “Tomorrow may loveless, may lover tomorrow make love.” Through this repeating and ambiguous line, the reader can get a sense of the future, and the contrast between lovelessness and making love in that future. The word “may” expresses possibility, but can also be interpreted as expressing a wish, or hope. At the final stanza, this phrase shifts into, “Tomorrow let loveless, let lover tomorrow make love.” The newly introduced word, “let”, seems to acknowledge how fate is in the hands of the gods, as it is more of a direct expression of desire. Ultimately this repetition and prayer falls in line with similar repetitions such as “HURRY UP PLEASE ITS TIME” in “The Waste Land”, suggesting Eliot’s intensifying attempts at communication with the divine.

    15. We think of the key, each in his prison Thinking of the key, each confirms a prison Only at nightfall, aethereal rumours

      While reading this stanza of “What the Thunder Said”, I instantly connected Eliot’s mention of aethereal rumours to “Appearance and Reality” by Francis Herbert Bradley. Bradley’s philosophical essay attempts to examine and explain interactions between souls. In particular, Bradley mentions ether while discussing the possibility of direct communication between souls (as in soul-to-soul communication without the use of bodies). Bradley explains that this communication would occur by “a medium extended in space, and of course, like ‘ether,’ quite material”. Thus ether, while material, is equated to the direct impressions of one soul on another. With this understanding of ether, we can interpret “aethereal rumours” as ones not concerned with the external environment or human bodies, but rather as spiritual messages that transcend the normal methods of bodily communication, such as the voice. However, Bradley seems to doubt the existence of this ethereal communication, and proceeds to worry, stating “If such alterations of our bodies are the sole means which we possess for conveying what is in us, can we be sure that in the end we really have conveyed it?”. Essentially, Bradley shares his fear that humans are unable to fully represent their souls through their bodies. Interestingly, Eliot’s two previous lines seem to evoke a similar notion of distorted communication between souls. Eliot states, “We think of the key, each in his prison / Thinking of the key, each confirms a prison”. In these lines, the people’s thoughts are collective and similar, but each individual has his own prison. When regarding the word “key”, one might think of a physical key to the prison; however, I argue that the word “key” instead refers to the ethereal communication between souls discussed by Bradley. A key is defined as “a thing that provides a means of understanding something”, such as “the key to the code”, or “the key to the riddle”. With this understanding of a key, we can interpret Eliot’s prisons as what Bradley would describe as the limits of the bodily expression of the soul. These prisons seem to be “confirmed” by the existence of this “key”, which might represent another concern: that the bodily methods of communication are only seen as limits due to the yearning for ethereal soul-to-soul communication.

    16. A woman drew her long black hair out tight And fiddled whisper music on those strings

      Beginning this stanza of “What the Thunder Said”, Eliot describes a woman who manipulated her hair and “fiddled whisper music on those strings”. Interpreting “those strings” as the woman’s own hair, we find a curious instance of a woman using her body as an instrument to play music. Of course, we must acknowledge that realistically one can’t make any substantial sound with one’s hair, and thus her “whisper music” is imagined, or perceived only by her. In terms of the human body, especially in relation to hair, we can further understand this passage by looking at page 298 of the Visuddhi-Magga. This page discusses the superficiality of beauty and the ego, as it declares that the human body is repulsive. The repulsiveness of the human body is argued as the Visuddhi-Magga reads, “When any part of the body becomes detached, as, for instance, the hair of the head … people are unwilling so much as to touch it”. According to the Visuddhi-Magga, humans assign significance and beauty to discardable parts of their body, and when those parts are discarded, humans view them with disgust. When comparing the teachings of the Visuddhi-Magga with the long-haired woman, there seems to be a contrast in appreciation for the human body. While the Visuddhi-Magga argues that the body, especially the hair, is repulsive, the woman is using her own hair as an instrument, something of significance and beauty in and of itself. I believe another important aspect of this analysis lies in the consideration of Eliot’s notion of “conceptual death”. In “The Waste Land” Eliot has challenged the reader’s literal understanding of death, and instead seems to propose the idea that death is a complex and cultural state that cannot be so easily defined. Literally, our hair is dead, but when attached to our body, it becomes a part of a living thing, and thus seems to gain significance through what I argue is “conceptual vitality”. Interpreting the lesson of the Visuddhi-Magga, hair loses its “vitality” when it is cut off, and becomes recognizably repulsive. Though it was always dead, it has lost its significance to the body. I would argue that the woman using her hair as an instrument is an affirmation of the hair’s significance to herself, and thus, a part of her own conceptual vitality.

    17. Here is no water but only rock

      Psalm 63 describes longing for God in a place with no water, while this stanza describes longing for water whilst pointing out the abundance of rock. In Psalm 63, it even says of God, "My soul thirsteth for thee," which equates God to water in a sense. When looking at this section of "The Waste Land" together with Psalm 63, it makes this part seem notably unreligious.

    1. Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.

      Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.

      And also if you are following along, you need to be logged in as the IAM admin user to the general AWS account.

      So the management account of the organization and have the Northern Virginia region selected.

      Now again, this is going to be very brief.

      So it's probably not worth doing in your own environment unless you really want to.

      Now what I want to demonstrate is termination protection.

      So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.

      Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.

      And this is assuming that the instance is currently in a running state.

      Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.

      But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.

      And that can result in data loss, which isn't ideal.

      What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.

      If you click that option, you get this dialogue where you can enable termination protection.

      So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.

      So I'm going to enable it and click on save.

      And now that instance is protected against termination.

      If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.

      The error says that the instance, and then the instance ID, may not be terminated, and that you should modify its disableApiTermination instance attribute and then try again.

      So this instance is now protected against accidental termination.

      Now this presents a number of advantages.

      One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.

      So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.

      So you have the option of role separation.

      You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.

      So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.

      It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.

      So you might not have this for development and test environments, but for anything in production, this might be a standard feature.

      If you're provisioning instances automatically using CloudFormation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
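
      As a reference sketch, the same attribute can be toggled outside the console with the AWS CLI. The instance ID below is a placeholder, and the commands assume you have credentials configured:

      ```shell
      # Enable termination protection on an instance (placeholder instance ID).
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --disable-api-termination

      # Check the current value of the disableApiTermination attribute.
      aws ec2 describe-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --attribute disableApiTermination

      # Disable it again so the instance can be terminated.
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --no-disable-api-termination
      ```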

      So this is a really useful feature to be aware of.

      And for the SysOps exam, it's essential that you understand when and where you'd use this feature.

      And for both the SysOps and the developer exams, you should pay attention to this disableApiTermination attribute.

      You might be required to know which attribute needs to be modified in order to allow terminations.

      So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.

      At this point though, that is everything that I wanted to cover about this feature.

      So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.

      One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.

      Now logically, the default is stop, but if you are running an environment where you don't want to consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.

      You might not want to have an account with lots of stopped instances.

      You might want the default behavior to be terminate, but this is a relatively niche feature. In most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
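      This setting is also an instance attribute, so it can be changed without the console. A hedged AWS CLI sketch (the instance ID is a placeholder):

      ```shell
      # Make this instance terminate (rather than stop) when it shuts
      # itself down from inside the OS.
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --instance-initiated-shutdown-behavior Value=terminate
      ```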

      Now at this point, that is everything I wanted to cover.

      If you were following along with this in your own environment, you do need to clear up the infrastructure.

      So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.

      And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.

      If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.

      So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.

    1. Welcome back, and in this demo lesson you're either going to get hands-on experience yourself, or you can watch me, interacting with an Amazon Machine Image.

      So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.

      It had an install of WordPress, it had the cowsay application installed, and a custom login banner.

      Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.

      So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.

      The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.

      So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.

      Now if I go ahead and change the region, maybe from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, what we'll see is that we go back to the same area of the console, only now we won't see any AMIs. That's because an AMI is tied to the region in which it's created.

      Every AMI belongs in one region and it has a unique AMI ID.

      So let's move back to Northern Virginia.

      Now we are able to copy AMIs between regions. This allows us to make one AMI and use it for a global infrastructure platform. So we can right-click and select Copy AMI, then select the destination region. For this example, let's say that I did want to copy it to Ohio; I would select that in the drop-down. It would allow me to change the name if I wanted, or I could keep it the same. For the description, it would show that it's been copied from this AMI ID in this region, followed by the existing description.
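      The same copy can be done from the AWS CLI; a minimal sketch, assuming the IDs and name shown are placeholders:

      ```shell
      # Copy an AMI from us-east-1 into us-east-2. The command is run
      # against the destination region; a new, different AMI ID is
      # returned immediately while the snapshot copy continues in the
      # background.
      aws ec2 copy-image \
          --region us-east-2 \
          --source-region us-east-1 \
          --source-image-id ami-0123456789abcdef0 \
          --name "Animals for Life AMI"
      ```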

      So at this point I'm going to go ahead and click Copy AMI, and that process has now started. If I close down this dialogue and then change the region from us-east-1 to us-east-2, we now have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region. If we go ahead and click on Snapshots under Elastic Block Store, then we're going to see the snapshot or snapshots which belong to this AMI.

      Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.

      In our case we only have the one which is the boot volume that's used for our custom AMI.

      Now the time taken to copy a snapshot between regions depends on many factors: what the source and destination regions are and the distance between the two, plus the size of the snapshot and the amount of data it contains. It can take anywhere from a few minutes to much, much longer, so this is not an immediate process.

      Once the snapshot copy completes then the AMI copy process will complete and that AMI is then available in the destination region but an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.

      AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.

      If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.

      So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID. They are completely different AMIs; you're creating a new one as part of the copy process.

      So while the data is going to be the same conceptually they are completely separate objects and that's critical for you to understand both for production usage and when answering any exam questions.

      Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.

      So if I right-click on this AMI and edit AMI permissions by default an AMI is private.

      Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.

      Now you can change the permission of the AMI you could set it to be public and if you set it to public it means that any AWS account can access this AMI and so you need to be really careful if you select this option because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.

      A much safer way is if you do want to share the AMI with anyone else then you can select private but explicitly add other AWS accounts to be able to interact with this AMI.

      So I could click in this box. For example, if I clicked on Services, moved to the AWS Organizations service, and opened that in a new tab, let's say that I chose to share this AMI with my production account: I'd select my production account ID and then add it into this box, which would grant my production AWS account the ability to access this AMI.

      Now note that there's also this checkbox, which adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.

      Generally, if you are sharing an AMI to another account inside your organization, you can afford to be relatively liberal with permissions. So if you're sharing this internally, I would definitely check this box; that gives full permissions on the AMI as well as the snapshots, so that the other account can create volumes from those snapshots as well as accessing the AMI.

      So these are all things that you need to consider.

      Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.
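      The console steps above can also be expressed with the AWS CLI. A hedged sketch, where the AMI ID, snapshot ID, and account ID are all placeholders:

      ```shell
      # Grant a specific account launch permission on the AMI...
      aws ec2 modify-image-attribute \
          --image-id ami-0123456789abcdef0 \
          --launch-permission "Add=[{UserId=111122223333}]"

      # ...and create-volume permission on the AMI's backing snapshot,
      # which is what the console checkbox described above does.
      aws ec2 modify-snapshot-attribute \
          --snapshot-id snap-0123456789abcdef0 \
          --attribute createVolumePermission \
          --operation-type add \
          --user-ids 111122223333
      ```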

      If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.

      While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.

      You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.

      There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.

      Now if you want to, you can also share an AMI with an organizational unit or an entire organization, and you can do that using this option.

      This makes it easier if you want to share an AMI with all AWS accounts within your organization.

      At this point though I'm not going to do that we don't need to do that in this demo.

      What we're going to do now though is move back to US-East-2.

      That's everything I wanted to cover in this demo lesson.

      Now that this copied AMI is available, we can right-click and select Deregister, then move back to us-east-1, where we can do the same process with the original AMI.

      So we can right-click, select Deregister, and that will remove that AMI.

      Click on Snapshots; this is the snapshot created by this AMI, so we need to delete this as well. Right-click, delete that snapshot, and confirm. We'll need to do the same process in the region that we copied the AMI and the snapshots to.

      So select us-east-2. It should be the only snapshot in the region, but make sure it is the correct one; right-click, delete, confirm that deletion, and now you've cleared up all of the extra things created within this demo lesson.

      Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.

      Go ahead and complete this video and when you're ready I look forward to you joining me in the next.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So the first step is to shut down this instance.

      So we don't want to create an AMI from a running instance because that can cause consistency issues.

      So we're going to close down this tab.

      We're going to return to instances, right-click, and we're going to stop the instance.

      We need to acknowledge this and then we need to wait for the instance to change into the stopped state.

      It will start with stopping.

      We'll need to refresh it a few times.

      There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.

      So this is going to create an AMI.

      And first we need to give the AMI a name.

      So let's go ahead and use Animals for Life template WordPress.

      And we'll use the same for Description.

      Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.

      It's going to create a block device mapping, which maps those snapshots onto a particular device ID.

      And it's going to use the same device ID as this instance is using.

      So it's going to set up the storage in the same way.

      It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.

      So you'll see here that it's using EBS.

      It's got the original device ID.

      The volume type is set to the same as the volume that our instance is using, and the size is set to 8.

      Now you can adjust the size during this process as well as being able to add volumes.

      But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.

      Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.

      If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.

      You can force that not to occur, so create an AMI without rebooting.

      But again, that's even less ideal.

      The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
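      As a CLI sketch of this step (the instance ID is a placeholder; the name matches the one used in this lesson):

      ```shell
      # Create an AMI from the stopped instance. Because the instance is
      # already stopped, reboot behaviour is a non-issue; for a running
      # instance you could add --no-reboot, at the cost of consistency,
      # as discussed above.
      aws ec2 create-image \
          --instance-id i-0123456789abcdef0 \
          --name "Animals for Life template WordPress" \
          --description "Animals for Life template WordPress"
      ```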

      So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.

      Now that process will take some time.

      If we just scroll down, look under Elastic Block Store and click on Snapshots.

      You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.

      So that's the first step.

      So in creating the AMI, what needs to happen first is a snapshot of each of the EBS volumes attached to that EC2 instance.

      So that needs to complete first.

      Initially it's going to be in a pending state.

      We'll need to give that a few moments to complete.

      If we move to AMIs, we'll see that the AMI is also being created.

      It's in a pending state, waiting for that snapshot to complete.

      Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.

      And the time taken to create a snapshot can vary.

      The initial snapshot always takes much longer because it has to take that full copy of data.

      And obviously the size of the original volume and how much data is being used will influence how long a snapshot takes to create.

      So the more data, the larger the volume, the longer the snapshot will take.

      After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from the pending status.

      So let's just refresh it.

      After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.

      So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.

      We've shut down the EC2 instance and generated an AMI from that instance.

      And now we have this AMI in a state where we can use it to create additional instances.

      So we're going to do that.

      We're going to launch an additional instance using this AMI.

      While we're doing this, I want you to consider exactly how much quicker this process now is.

      So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.

      So right click on this AMI and select launch.

      Now this will step you through the launch process for an EC2 instance.

      You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.

      You'll be asked to select all of the normal configuration options.

      So first let's put a name for this instance.

      So we'll use the name "Instance from AMI".

      Then we'll scroll down.

      As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.

      Scroll down.

      You'll need to specify an instance type just as normal.

      We'll use a free tier eligible instance.

      This is likely to be t2.micro or t3.micro.

      Below that, go ahead and click and select "Proceed without a key pair (not recommended)".

      Scroll down.

      We'll need to enter some networking settings.

      So click on Edit next to Network Settings.

      Click in VPC and select A4L-VPC1.

      Click in Subnet and make sure that SN-Web-A is selected.

      Make sure the boxes below are both set to Enable for the auto-assign IP settings.

      Under Firewall, click on Select Existing Security Group.

      Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.

      And that will have some random characters at the end.

      That's absolutely fine.

      Select that.

      Scroll down.

      And notice that the storage is configured exactly the same as the instance which you generated this AMI from.

      Everything else looks good.

      So we can go ahead and click on Launch Instance.

      So this is launching an instance using our custom created AMI.

      So let's close down this dialog and we'll see the instance initially in a pending state.

      Remember, this is launching from our custom AMI.

      So it won't just have the base Amazon Linux 2 operating system.

      Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.

      So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.

      So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.
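      Launching from the custom AMI can also be scripted; a minimal sketch, where every ID is a placeholder and the instance type matches the free-tier choice used in this lesson:

      ```shell
      # Launch one or more instances from the custom AMI; every instance
      # gets the baked-in WordPress install and banner. Increase --count
      # to launch 10 or 100 identical instances.
      aws ec2 run-instances \
          --image-id ami-0123456789abcdef0 \
          --instance-type t3.micro \
          --count 1 \
          --subnet-id subnet-0123456789abcdef0 \
          --security-group-ids sg-0123456789abcdef0
      ```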

      So let's give this a few minutes to launch.

      Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.

      Now there's one thing you will need to change: because we're using a custom AMI, AWS can't necessarily detect the correct username to use.

      And so you might see sometimes it says root.

      Just go ahead and change this to EC2-user and then go ahead and click Connect.

      And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.

      So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.

      If we go back to the AWS console and select Instances, make sure we still have the Instance from AMI selected and then locate its public IPv4 address.

      Don't use this link, because that will use HTTPS. Instead, copy the IP address into your clipboard and open that in a new tab.

      Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.

      So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.

      If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.

      And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.

      Now that's everything that I wanted to cover in this demo lesson.

      You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.

      So let's clear up all of the infrastructure that we've used in this lesson.

      So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.

      So right click and then go to terminate instance.

      You'll need to confirm that.

      That will start the process of termination.

      Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.

      So we're going to need to leave this in place.

      So we're not going to delete the AMI or the snapshots created within this lesson.

      Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.

      And that will remove all of the infrastructure that we've created within this demo lesson.

      And at this point, that's everything that I wanted you to do in this demo.

      So go ahead, complete this video.

      And when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.

      So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.

      Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?

      Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.

      What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.

      So this is a fairly common use case so let's jump in and get started.

      Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link, go ahead and click that link.

      This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.

      We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.

      Okay so that stacks now moved into a create complete state, we're good to continue with the demo.

      Now you're going to be using some command-line commands within an EC2 instance as part of creating an Amazon Machine Image, so also attached to this lesson is the lesson commands document, which contains all of those commands. Go ahead and open that document.

      Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation, and that's the case: we're running the same manual installation process as part of setting up our Animals for Life AMI. You're going to need all of these commands, but as you've already seen them in the previous demo lesson, I'm going to run through them a lot quicker this time. So go back to the AWS console; we need to move to the EC2 area of the console, so click on the Services dropdown, type EC2 into the search box, and then open that in a new tab.

      Once you're there, go ahead and click on Running Instances and close down any dialogues about console changes; we want to maximize the amount of screen space that we have. We're going to connect to the A4L-Public EC2 instance; this is the instance that we're going to use to create our AMI. We're going to set the instance up manually, exactly how we want it to be, and then use it to generate an AMI. So right-click, select Connect, and use EC2 Instance Connect to do the work within the browser; make sure the username is ec2-user and then connect to the instance.

      Once connected, we're going to run through the commands to install WordPress really quickly. We start again by setting the variables that we'll use throughout the installation, so you can go ahead and copy and paste those straight in and press Enter. Now we're going to run through the next set of commands rapidly, because you used them in the previous demo lesson. First, we install the MariaDB server, Apache, and the Wget utility. While that's installing, copy all of the commands from step 3; these are the commands which enable and start Apache and MariaDB. Paste all four of those in and press Enter, and now Apache and MariaDB are both set to start when the instance boots, as well as being started right now. I'll just clear the screen to make this easier to see.

      Next we set the DB root password, again using the contents of the variable that you set at the start. Next we download WordPress; once it's downloaded, we move into the web root folder, extract the download, and copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root. Once we've done that, we remove the WordPress folder itself and tidy up by deleting the download. I'm going to clear the screen. We copy the template configuration file into its final file name, wp-config.php, and then replace the placeholders in that file: first the database name, using the variable that you set at the start, then the database user, which you also set at the start, and finally the database password. Then we set the ownership on all of these files to the Apache user and the Apache group, and clear the screen.

      Next we need to create the DB setup script, as demonstrated in the previous demo. We run a collection of commands: the first to enter the create-database command, the next to enter the create-user command and set that password, the next to grant permissions on the database to that user, and then one to flush the permissions. Then we run that script using the MySQL command-line interface, which runs all of those commands and performs all of those operations, and we tidy up by deleting that file.

      At this point we've done the exact same process that we did in the previous demo: we've installed and set up WordPress. If everything's working okay, we can go back to the AWS console, click on Instances, select the running A4L-Public EC2 instance, and copy down its IP address; again, make sure you copy the address rather than clicking the link. Open that in a new tab, and if everything's working as expected, you should see the WordPress installation dialogue. This time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that anyone who uses this AMI is also greeted with this installation screen. So we're going to leave it at this point, not perform the installation, and instead go back to the EC2 instance.

      Now because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance. To do that, we're going to install an animal-themed utility called cowsay. I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what cowsay does, I'm going to run cowsay with an "oh hi" message. If all goes well, we see a cow drawn in ASCII art saying the "oh hi" message that we just typed.

      We're going to use this to create a message-of-the-day welcome shown when anyone connects to this EC2 instance. To do that, we create a file inside the configuration folder of this EC2 instance, using sudo nano to create /etc/update-motd.d/40-cow. This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance. So copy in these two lines and press Enter; this means that when anyone logs in to the EC2 instance, they're going to get an animal-themed welcome. Use Ctrl+O to save the file and Ctrl+X to exit, then clear the screen. We make sure that the file we've just edited has the correct permissions, and then we force an update of the message of the day, so this is what will be displayed when anyone logs in to this instance. Finally, now that we've completed this configuration, we reboot the EC2 instance using this command.

      Just to illustrate how this works, I'm going to close down that tab, return to the EC2 console, and give it a few moments to restart. That should have rebooted by now, so we select the instance, right-click, go to Connect, and again use EC2 Instance Connect. Assuming everything's working, when we connect to the instance we'll see an animal-themed login banner. This is just a nice way to ensure that anyone logging in to this instance understands, A, that it uses the Amazon Linux 2 AMI, and B, that it belongs to Animals for Life.

      So we've created this instance using the Amazon Linux 2 AMI, performed the WordPress installation and initial configuration, and customized the banner, and now we're going to use this as our template instance to create our AMI, which can then be used to launch other instances. Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
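      The banner steps above can be sketched as a short shell sequence. This is a hedged reconstruction: the exact contents of the 40-cow file come from the lesson commands document, so the cowsay line shown here is illustrative only.

      ```shell
      # Create the message-of-the-day script (its real two lines are in
      # the lesson commands document; the body below is a plausible
      # placeholder).
      sudo nano /etc/update-motd.d/40-cow
      #   #!/bin/sh
      #   cowsay "Animals for Life - Amazon Linux 2"

      # Ensure the script is executable, regenerate the message of the
      # day, and restart the instance so the banner takes effect.
      sudo chmod 755 /etc/update-motd.d/40-cow
      sudo update-motd
      sudo reboot
      ```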

    1. Résumé de la vidéo [00:00:23][^1^][1] - [00:32:19][^2^][2]:

      Cette vidéo explore l'histoire de l'école républicaine en France, ses débats et ses interrogations, en mettant en lumière son évolution depuis 1792 et son lien avec la République.

      Temps forts: + [00:00:23][^3^][3] Introduction et contexte * Présentation de Jean-François Chanet * Objectifs de l'association des professeurs d'histoire-géographie * Importance de l'école républicaine + [00:01:01][^4^][4] Histoire de l'école républicaine * Lien avec la République depuis 1792 * Lois Ferry et unification de l'État * Période noire sous le régime de Vichy + [00:02:21][^5^][5] Débats et interrogations actuels * Laïcité et valeurs républicaines * Adaptation aux défis contemporains * Importance de préserver les valeurs fondamentales + [00:05:01][^6^][6] Exemples historiques et anecdotes * Gaston Bonheur et son livre * Rôle des instituteurs et des écoles * Impact des guerres sur l'éducation + [00:10:00][^7^][7] Unité et séparation * Séparation de la morale et de la religion * Séparation des sexes et des classes sociales * Concurrence entre écoles publiques et religieuses

      Résumé de la vidéo [00:32:22][^1^][1] - [01:03:04][^2^][2]:

      Cette partie de la vidéo explore l'évolution de l'école républicaine en France, en mettant l'accent sur les transformations sociales et éducatives depuis les années 60.

      Temps forts: + [00:32:22][^3^][3] Écoles à classe unique * Longévité malgré l'urbanisation * Féminisation du corps enseignant * Mobilité des enseignants + [00:34:00][^4^][4] Transformation des écoles * Regroupement des sexes * Séparation des âges * Augmentation des écoles mixtes + [00:38:00][^5^][5] Problème du redoublement * Taux de redoublement élevé * Impact sur la durée des études * Difficultés d'apprentissage + [00:42:00][^6^][6] Inégalités scolaires * Fréquentation des écoles rurales * Disparités entre centre et périphérie * Effondrement de la natalité pendant la guerre + [00:50:00][^7^][7] Réformes éducatives * Débats politiques sur les réformes * Importance des instituteurs * Critiques des inégalités perpétuées par l'école

      Video summary [01:03:07] - [01:34:05]:

      This part of the video explores the history of, and debates around, the republican school in France, focusing on educational reforms and the social challenges they encountered.

      Highlights:
      + [01:03:07] Debates on educational reforms
        * Importance of political consensus
        * Historical opposition to education laws
        * Complexity of major reforms
      + [01:05:01] Literary and social criticism
        * Zola and the Dreyfus affair
        * Jules Romain and education
        * Criticism of educational inequalities
      + [01:09:02] Evolution of secondary education
        * Accessibility and inequalities
        * Criticism from practitioners
        * Reforms and resistance
      + [01:17:01] The concept of the single school
        * Post-war ideas
        * Obstacles and resistance
        * Persistent social differences
      + [01:25:03] Jean Zay's reforms
        * Lengthening of schooling
        * Introduction of career guidance
        * Criticism and impact of the reforms

      Video summary [01:34:08] - [01:37:07]:

      This part of the video explores the challenges and crises of republican education in France, focusing on Charles Péguy's reflections on education and society.

      Highlights:
      + [01:34:08] Propaganda and emancipation
        * Spreading ideas to emancipate minds
        * The republican problem of the school
        * Opposition between mystique and politics
      + [01:34:50] Charles Péguy and education
        * Péguy, an orphan and a brilliant pupil
        * His exceptional school career
        * Died in the war in 1914
      + [01:35:28] Crises of teaching
        * Crises of life and crises of teaching
        * Teaching reflects society
        * Modern society and its educational challenges

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So this is the folder containing the WordPress installation files.

      Now there's one particular file that's really important, and that's the configuration file.

      So there's a file called WP-config-sample, and this is actually the file that contains a template of the configuration items for WordPress.

      So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.

      So we're going to create a copy of this file with the correct name.

      And to do that, we run this command.

      So we're copying the template or the sample file to its real file name, so wp-config.php.

      And this is the name that WordPress expects when it initially loads its configuration information.

      So run that command, and that now means that we have a live config file.
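      As a sketch, the copy step works like this; the file names follow the standard WordPress layout, and a scratch directory stands in here for the real web root:

      ```shell
      # Scratch directory standing in for the web root (/var/www/html on the instance)
      cd "$(mktemp -d)"

      # Stand-in for the template file WordPress ships with
      echo "<?php /* template config */" > wp-config-sample.php

      # Copy the sample to the real file name WordPress loads at startup
      cp ./wp-config-sample.php ./wp-config.php

      ls
      ```

      The sample file is left in place; only the copy under the proper name is read by WordPress.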

      Now this next command isn't in the instructions; I'm just going to take a moment to open up this file, and you don't need to do this yourself.

      I'm just demonstrating what's in this file for your benefit.

      But if I run sudo nano and then wp-config.php, this is how the file looks.

      So this has got all the configuration information in.

      So it stores the database name, the database user, the database host, and lots of other information.

      Now notice how it has some placeholders.

      So this is where we would need to replace the placeholders with the actual configuration information.

      So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.

      Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.

      We're going to use the Linux utility sed, or S-E-D.

      And this is a utility which can perform a search and replace within a text file.

      It's actually much more complex and capable than that.

      It can perform many different manipulation operations.

      But for this demonstration, we're going to use it as a simple search and replace.

      Now we're going to do this a number of times.

      First, we're going to run this command, which is going to replace this placeholder.

      Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.

      We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.

      So this is going to replace the placeholder with our actual database name.

      So I'm going to enter that so you can do the same.

      We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.

      So use that command as well.

      And then lastly, it will do the same for the database password.

      So type or copy and paste this command and press enter.

      And that now means that this wp-config has the actual configuration information inside.
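      The three sed substitutions can be sketched like this; the placeholder names below are assumptions based on the standard wp-config-sample.php, and a scratch file stands in for the real config:

      ```shell
      # Scratch copy standing in for /var/www/html/wp-config.php
      cd "$(mktemp -d)"
      cat > wp-config.php <<'EOF'
      define( 'DB_NAME', 'database_name_here' );
      define( 'DB_USER', 'username_here' );
      define( 'DB_PASSWORD', 'password_here' );
      EOF

      # Variables as set at the start of the demo (example values, not the lesson's)
      DBName=a4lwordpress
      DBUser=a4lwordpress
      DBPassword=examplepassword

      # sed -i edits the file in place: s/SEARCH/REPLACE/ for each placeholder
      sed -i "s/database_name_here/$DBName/" wp-config.php
      sed -i "s/username_here/$DBUser/" wp-config.php
      sed -i "s/password_here/$DBPassword/" wp-config.php

      cat wp-config.php
      ```

      Because the variable references are inside double quotes, the shell expands them before sed ever sees the expression, so the replacement text is the actual configuration value.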

      And just to demonstrate that, you don't need to do this part.

      I'll just do it to demonstrate.

      If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.

      So I'm going to control X out of that and then clear the screen.

      And that concludes the configuration for the WordPress application.

      So now it's ready.

      Now it knows how to communicate with the database.

      What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.

      And to do that, we use this command.

      So we're using the chown command to set the ownership of all of the files in this folder and any subfolders to be the Apache user and the Apache group.

      And the Apache user and Apache group belong to the web server.

      So this just makes sure that the web server is able to access and control all of the files in the web root folder.

      So run that command and press enter.
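      The ownership change is a single recursive chown; on the instance it would look something like this (a sketch assuming the apache user and group and the default web root, and needing root privileges, so it is not something to run elsewhere):

      ```shell
      # Recursively hand the web root to the web server's user and group
      sudo chown apache:apache -R /var/www/html
      ```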

      And that concludes the installation part of the WordPress application.

      There's one final thing that we need to do and that's to create the database that WordPress will use.

      So I'm going to clear the screen to make it easy to see.

      Now what we're going to do in order to configure the database is we're going to make a database setup script.

      We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.

      So what we need to do is enter the commands into this file that will create the database.

      After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.

      Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.

      So we're going to run a number of commands.

      These are all in the Lesson Commands document.

      The first one is this.

      So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.

      Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.

      So forward slash TMP, forward slash DB setup.

      So run that and that command is going to create the WordPress database.

      Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.

      This is going to create our WordPress database user.

      It's going to set its password and then it's going to append this text to the DB setup file that we're creating.

      Now all of these are actually database commands that we're going to execute within the MariaDB database.

      So enter that to add that line to DB.setup.

      Then we have another line which uses the same structure as the ones above it.

      It echoes the text.

      It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.

      And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
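      The script-building steps above can be sketched end to end: each echo expands the variables and appends one SQL statement to the file. The exact statement text is an assumption in the spirit of the lesson, and a temp file stands in for /tmp/DB.setup:

      ```shell
      # Variables as set at the start of the demo (example values)
      DBName=a4lwordpress
      DBUser=a4lwordpress
      DBPassword=examplepassword

      setup=$(mktemp)   # stands in for /tmp/DB.setup

      # Each echo expands the variables, then >> appends the line to the script
      echo "CREATE DATABASE $DBName;" >> "$setup"
      echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> "$setup"
      echo "GRANT ALL PRIVILEGES ON $DBName.* TO '$DBUser'@'localhost';" >> "$setup"
      echo "FLUSH PRIVILEGES;" >> "$setup"

      cat "$setup"
      ```

      The resulting file contains plain SQL with the real values substituted in, ready to be fed to the database server.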

      So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.

      So cat and then space forward slash TMP, forward slash DB.setup.

      So as you'll see it's replaced all of these variable names with the actual contents.

      So this is what the contents of this script actually looks like.

      So these are commands which will be run by the MariaDB database platform.

      To run those commands we use this.

      So this is the MySQL command line interface.

      So we're using MySQL to connect to the MariaDB database server.

      We're using the username of root.

      We're passing in the password and then using the contents of the DB root password variable.

      And then once we authenticate to the database, we're passing in the contents of our DB.setup script.

      And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.

      So go ahead and press enter.

      That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.
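      That final step, as described, is a single MySQL CLI invocation redirecting the script into the server; a sketch of what it would look like on the instance (not run here, since it needs the live MariaDB server):

      ```shell
      # Authenticate as root using the variable set at the start of the demo,
      # then run every statement in the setup script
      mysql -u root --password=$DBRootPassword < /tmp/db.setup
      ```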

      And then lastly, just to keep things secure: we don't want to leave files lying around on the file system with authentication information inside.

      So we're just going to run this command to delete this DB.setup file.

      Okay, so that concludes the setup process for WordPress.

      It's been a fairly long, intensive process, but it means that we now have an installation of WordPress on this EC2 instance, and a database which has been installed and configured.

      So now what we can do is to go back to the AWS console, click on instances.

      We need to select the A4L-PublicEC2 and then we need to locate its IP address.

      Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.

      Instead, just copy the IP address into your clipboard and then open that in a new tab.

      If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.

      So pick English, United States for the language.

      For the blog title, just put all the cats and then admin as the username.

      You can accept the default strong password.

      Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.

      It doesn't have to be a correct one.

      So I normally use test@test.com and then go ahead and click on install WordPress.

      You should see a success dialog.

      Go ahead and click on login.

      Username will be admin, the password that you just copied into your clipboard and then click on login.

      And there you go.

      We've got a working WordPress installation.

      We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.

      And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.

      So this has been a slow process.

      It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.

      Can you imagine doing this twice?

      What about 10 times?

      What about a hundred times?

      It gets pretty annoying pretty quickly.

      In reality, this is never done manually.

      We use automation or infrastructure as code systems such as CloudFormation.

      And as we move through the course, you're going to get experience of using all of these different methods.

      Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.

      And these are really awesome to use.

      And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.

      For now though, that is the end of this demo lesson.

      Now what we're going to do is to clear up our account.

      So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.

      To do that, just move back to the AWS console.

      If you still have the CloudFormation tab open, move back to that tab; otherwise click on Services and then click on CloudFormation.

      If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select Delete, and then confirm that deletion.

      And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.

      So from this point onward in the course, we're going to start using automation.

      Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.

      So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.

      So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.

      At this point though, that's everything I wanted to cover.

      So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.

    1. noata shamelessly lifted from Edmund

      X

    2. participants in the Stream of becoming

      Stream of becoming

    3. intricate self- metamorphic and purposive complexes of prehension or experiential relationships

      Ditto

    4. by theories which stray even further

      From

    5. mind warping mathematical toys

      X

    6. underlying conceptual Inc coherence

      Incoherence

    7. the truths of science

      X

    8. our cognitive faculties

      our cognitive faculties are imperfect machines which have been haphazardly assembled by the blind

      watchmaker of algorithmic natural selection

    9. a new form of technological mysticism

      Mysticism

    10. self-confirmation engines

      Technological mysticism

    1. substantive criminal law (materiële strafrecht)

      Wetboek van Strafrecht (Sr)

    2. formal (procedural) criminal law (formele strafrecht)

      Wetboek van Strafvordering (SV)

    1. Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.

      Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.

      The problem is that most courses don't take this any further.

      But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.

      What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.

      We're going to be doing this so you can get the experience of how not to do things.

      My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.

      As we move through the course we can compare this to various different automated ways of installing software within AWS.

      So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.

      By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.

      It's going to be architecturally monolithic with everything running on the one single instance.

      In this case that means both the application and the database.

      The design is fairly straightforward.

      It's just the Animals for Life VPC.

      We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.

      So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.

      At this point it's time to get started and implement this architecture.

      So let's go ahead and switch over to our AWS console.

      To get started with this demo lesson you're going to need to do a few preparation steps.

      First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.

      So go ahead and open the one-click deployment link that's attached to this lesson.

      That link is going to take you to the Quick Create Stack screen.

      Everything should be pre-populated.

      The stack name should be WordPress.

      All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.

      And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.

      So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.

      Also attached to this lesson is a Lesson Commands document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.

      So go ahead and open that as well.

      So that should look something like this and these are all of the commands that we're going to be using.

      So these are the commands that perform a manual WordPress installation.

      Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.

      So click on the Services drop-down and then locate EC2 in this All Services part of the screen.

      If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.

      And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.

      Go ahead and right-click on this instance.

      This is the instance we'll be installing WordPress within.

      So right-click, select Connect.

      We're going to be using our browser to connect to this instance, so we'll be using Instance Connect. Just verify that the username is ec2-user and then go ahead and connect to this instance.

      Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.

      It's not just about saving time and effort.

      It's also about error reduction and the ability to keep things consistent.

      Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.

      So we're going to create four variables.

      One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.

      So let's start off by using the pre-populated values from the Lesson Commands document.

      So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.

      So for example, dbname and press Enter and that will show us the value stored within that variable.

      So now we can use these at later points in the installation.
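      As a sketch, setting and checking the four variables looks like this; the values here are examples, not the pre-populated ones from the Lesson Commands document:

      ```shell
      # Shell variables holding the configuration used by the rest of the install
      DBName=a4lwordpress
      DBUser=a4lwordpress
      DBPassword=examplepassword
      DBRootPassword=examplerootpassword

      # echo with a $ prefix prints a variable's current value
      echo $DBName
      ```

      Every later command references these variables instead of hard-coded values, which is what makes the rest of the install consistent and repeatable.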

      So at this point I'm going to clear the screen to keep it easy to see, and stage two of this installation process is to install some system software.

      So there are a few things that we need to install in order to allow a WordPress installation.

      We'll install those using the DNF package manager.

      We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the database server, which is mariadb-server, the Apache web server, which is httpd, and then a utility called wget, which we're going to use to download further components of the installation.

      So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
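      The install is a single dnf invocation along these lines; the exact package names are an assumption (on Amazon Linux 2023 the MariaDB package is versioned, while on other distributions it may simply be mariadb-server):

      ```shell
      # -y answers yes to prompts; wget is used later to fetch WordPress itself
      sudo dnf install -y mariadb105-server httpd wget
      ```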

      That's done, so I'll clear the screen to keep this easy to read.

      Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.

      So to do that we need to enable and start those services.

      So enabling and starting means that both of the services are started right now, and they'll also start if the machine reboots.

      So first we'll use this command.

      So we're using admin privileges again, systemctl which allows us to start and stop system processes and then we use enable and then HTTPD which is the web server.

      So type and press enter and that ensures that the web server is enabled.

      We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.

      So type or copy and paste and press enter.

      So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.

      So we need to use the same structure of command but instead of enable we need to start both of these processes.

      So first the web server and then the database server.
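      Taken together, the enable-then-start sequence described above would be typed on the instance as:

      ```shell
      # enable: start automatically at boot; start: start right now
      sudo systemctl enable httpd
      sudo systemctl enable mariadb
      sudo systemctl start httpd
      sudo systemctl start mariadb
      ```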

      So that means the EC2 instance now has a running web and database server both of which are required for WordPress.

      So I'll clear the screen to keep this easy to read.

      Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.

      So this is the username and password that will be used to perform all of the initial configuration of the database server.

      Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.

      So we're using the DB root password variable that we configured right at the start.

      So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
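      Setting the root password of a fresh MariaDB install is typically done with mysqladmin; a sketch of the command the lesson describes, using the variable set earlier (run on the instance, not here):

      ```shell
      # Set the MariaDB root user's password from the DBRootPassword variable
      sudo mysqladmin -u root password $DBRootPassword
      ```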

      The next step which is step 5 is to install the WordPress application files.

      Now to do that we need to install these files inside what's known as the web root.

      So whenever you browse to a web server either using an IP address or a DNS name if you don't specify a path so if you just use the server name for example netflix.com then it loads those initial files from a folder known as the web root.

      Now on this particular server the web root is stored in /var/www/html so we need to download WordPress into that folder.

      Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.

      So we're giving it admin privileges and we're using Wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.

      So /var/www/html.

      So go ahead and copy and paste or type that in and press enter.

      That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.

      Next we need to move into that folder, so cd, space, /var/www/html, and press enter.

      We need to use a Linux utility called tar to extract that file.

      So sudo and then tar and then the command line options -zxvf and then the name of the file, so latest.tar.gz. So copy and paste or type that in and press enter, and that will extract the WordPress download into this folder.

      So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.

      Now we actually don't want them inside a WordPress folder.

      We want them directly inside the web root.

      So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.

      So it's going to copy everything inside WordPress into the current working directory which is the web root directory.

      So enter that and that copies all of those files.

      And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.

      And then lastly for the installation part we need to tidy up the mess that we've made.

      So we need to delete this WordPress folder and the download file that we just created.

      So to do that we'll run an rm -r and then WordPress to delete that folder.

      And then we'll delete the download with sudo rm and then a space and then the name of the file.

      So latest.tar.gz.

      And that means that we have a nice clean folder.
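      The download-extract-copy-tidy sequence above can be sketched end to end; here a locally built tarball stands in for the latest.tar.gz download from wordpress.org, and a scratch directory stands in for /var/www/html:

      ```shell
      # Build a stand-in for wordpress.org's latest.tar.gz
      src=$(mktemp -d)
      mkdir -p "$src/wordpress"
      echo "<?php // index" > "$src/wordpress/index.php"
      tar -C "$src" -czf "$src/latest.tar.gz" wordpress

      # Scratch web root standing in for /var/www/html
      cd "$(mktemp -d)"
      cp "$src/latest.tar.gz" .   # in the lesson: sudo wget downloads this file here

      tar -zxvf latest.tar.gz     # extract the archive into ./wordpress
      cp -rvf wordpress/* .       # copy the app files into the web root itself
      rm -r wordpress             # tidy up the extracted folder
      rm latest.tar.gz            # and the downloaded archive

      ls
      ```

      The end state matches the lesson: the application files sit directly in the web root, with neither the wordpress folder nor the download left behind.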

      So I'll clear the screen to make it easy to see.

      And then I'll just do another listing.

      Okay so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. the “factuality” of ChatGPT or, more prosaically, penalizes the “hallucinations” more heavily

      I didn't understand this

    2. alignment (l’alignement)

      key concept

    3. cost but also, above all, risks

      What is the cost, and what is also the greatest risk?

    4. ChatGPT

      This is a text generator made by OpenAI, capable of interacting with humans.

    1. Editors Assessment:

      PhysiCell is an open source multicellular systems simulator for studying many interacting cells in dynamic tissue microenvironments. As part of the PhysiCell ecosystem of tools and modules, this paper presents a PhysiCell addon, PhysiMeSS (MicroEnvironment Structures Simulation), which allows the user to accurately represent the extracellular matrix (ECM) as a network of fibres. It can specify rod-shaped microenvironment elements such as the matrix fibres (e.g. collagen) of the ECM, giving the PhysiCell user the ability to investigate physical interactions with cells and other fibres. Reviewers asked for additional clarification on a number of features, and the paper now makes clear that future releases will provide full 3D compatibility and will include work on fibrogenesis, i.e. the creation of new ECM fibres by cells.

      This evaluation refers to version 1 of the preprint

    2. Abstract: The extracellular matrix is a complex assembly of macromolecules, such as collagen fibres, which provides structural support for surrounding cells. In the context of cancer metastasis, it represents a barrier that migrating cells need to degrade in order to leave the primary tumor and invade further tissues. Agent-based frameworks, such as PhysiCell, are often used to represent the spatial dynamics of tumor evolution. However, they typically only implement cells as agents, which are represented by either a circle (2D) or a sphere (3D). In order to accurately represent the extracellular matrix as a network of fibres, we require a new type of agent represented by a segment (2D) or a cylinder (3D). In this article, we present PhysiMeSS, an addon of PhysiCell, which introduces a new type of agent to describe fibres and their physical interactions with cells and other fibres. The PhysiMeSS implementation is publicly available at https://github.com/PhysiMeSS/PhysiMeSS, as well as in the official PhysiCell repository. We also provide simple examples to describe the extended possibilities of this new framework. We hope that this tool will serve to tackle important biological questions such as diseases linked to dysregulation of the extracellular matrix, or the processes leading to cancer metastasis.

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.136), and has published the reviews under the same license. It is also part of GigaByte’s PhysiCell Ecosystem series for tools that utilise or build upon the PhysiCell platform: https://doi.org/10.46471/GIGABYTE_SERIES_0003 These reviews are as follows.

      Reviewer 1. Erika Tsingos

      One important aspect that the authors need to be aware of and mention explicitly is that their algorithm for fibre set-up leads to differences in fibre concentration and orientation at the boundary, because fibres that are not wholly contained in the simulation box are discarded. The effect of this choice can be seen upon close inspection of Figure 2: in the left panel, fibres align tangentially to the boundary, so locally the orientation is not isotropic. Similarly, in the Figure 2 middle and right panels, the left and right boundaries have lower local fibre concentration. This issue could potentially affect the outcome of a simulation, so it's important that readers are made aware so that, if necessary, they can address this with a modified algorithm.

      Minor comments:
      * In the abstract, the phrasing implies agent-based frameworks are only used for tumour evolution. I would rephrase such that it is clear that tumour evolution is one example among many possible applications.
      * I suggest adding a dash to improve readability in the following sentence in the introduction: "However, we note that the applications of PhysiMeSS stretch beyond those wanting to model the ECM -- as the new cylindrical/rod-shaped agents could be used to model blood vessel segments or indeed create obstacles within the domain."
      * In the implementation section, add a short sentence to clarify if PhysiMeSS is "backwards compatible" with older PhysiCell models that do not use the fibre agent.
      * Notation in equations: a single vertical line is absolute value, and two vertical lines is Euclidean norm?
      * The explanation of Equation 1 implies that the threshold v_{max} should limit the parallel force, but the text does not explicitly say if ||v|| is restricted to be less than or equal to v_{max}. Is that the case?
      * In Equation 2, I don't see the need to square the terms in parenthesis. If |v*l_f| is an absolute value it is always positive. Since l_f is normalized, the value of the dot product is only between 0 and the magnitude of v. Am I missing something?
      * Are p_x and p_y in the moment arm magnitude coordinates with respect to the fibre center?
      * Table 2: it would be helpful to have a separate column with the corresponding symbols used throughout the text and equations.
      * Figure 5/6: missing crosslinker color legend.

      Typos/grammar:
      * "As an aside, an not surprisingly," --> "As an aside, and not surprisingly,"
      * "This may either be because as a cell tries to migrate through the domain fibres which act as obstacles in the cell's path," --> remove the word "which"

      Reviewer 2. Jinseok Park

      Noel et al. introduce PhysiMeSS - a new PhysiCell Addon for ECM remodeling. This new addon is a powerful tool to simulate ECM remodeling and has the potential to be applied to mechanobiology research, which makes my enthusiasm high. I would like to give a few suggestions.

      1) Basically, it is an addon of PhysiCell. So, I suggest describing PhysiCell and how to add the addon for readers who are not familiar with these tools. Also, screen captures of tool manipulation would be very helpful.

      2) Figures 2 and 3 exhibit the outcome of the addon showing ECM remodeling. I would suggest showing actual ECM images modeled by the addon.

      3) The equations reflect four interactions, and in my understanding, the authors describe cell-fibre, fibre-cell, and fibre-fibre interactions. I suggest generating an example corresponding to each interaction's modulation and explaining how the addon results explain the physiological phenomena. For instance, focal adhesion may be a key modulator of cell-fibre or fibre-cell interaction, presumably, alpha or beta fiber. I would demonstrate how the different parameters generate different results and explain the physiological situation modeled by the results.

      4) Similarly, Figure 5 and Figure 6 only show one example and no comparison with other conditions. For example, it would be better to exhibit no pressure/pressure conditions. It may help readers estimate how the pressure impacts cell proliferation.

      Reviewer 3. Simon Syga

      The presented paper "PhysiMeSS - A New PhysiCell Addon for Extracellular Matrix Modelling" is a useful extension to the popular simulation framework PhysiCell. It enables the simulation of cell populations interacting with the extracellular matrix, which is represented by a set of line segments (2D) or cylinders (3D). These represent a new kind of agent in the simulation framework. The paper outlines the basic implementation, properties and interactions of these agents. I recommend publication after a small set of minor issues have been addressed. Please refer to the attached marked-up PDF file for these minor issues and suggestions. https://gigabyte-review.rivervalleytechnologies.com/download-api-file?ZmlsZV9wYXRoPXVwbG9hZHMvZ3gvVFIvNTUwL2d4LVRSLTE3MTk5NDYwNjlfU1kucGRm

    1. Welcome back and in this video we're going to interact with instance store volumes.

      Now this part of the demo does come at a cost.

      This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.

      The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.

      If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.

      So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.

      So I'm going to click on launch instances.

      We're going to name the instance, Instance Store Test so put that in the name box.

      Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.

      Scroll down and then in the instance type box click and we need to find a different type of instance.

      This is going to be one that supports instance store volumes.

      So scroll down and we're looking for m5dn.large.

      This is a type of instance which includes one instance store volume.

      So select that then scroll down a little bit more and under key pair click in the box and select proceed without a key pair not recommended.

      Scroll down again and under network settings click on edit.

      Click in the VPC drop down and select a4l-vpc1.

      Under subnet make sure sn-web-a is selected.

      Make sure enabled is selected for both of the auto assign public IP drop downs.

      Then we're going to select an existing security group click the drop down select the EBS demo instance security group.

      It will have some random characters after it but that's okay.

      Then scroll down and under storage we're going to leave all of the defaults.

      What you are able to do though is to click on show details next to instance store volumes.

      This will show you the instance store volumes which are included with this instance.

      You can see that we have one instance store volume it's 75 GB in size and it has a slightly different device name.

      So /dev/nvme0n1.

      Now all of that looks good so we're just going to go ahead and click on launch instance.

      Then click on view all instances and initially it will be in a pending state and eventually it will move into a running state.

      Then we should probably wait for the status check column to change from initializing to 2 out of 2.

      Go ahead and pause the video and wait for this status check to change to be fully green.

      It should show 2 out of 2 status checks.

      That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.

      Before we do though just go ahead and select the instance and just note the instances public IP version 4 address.

      Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.

      So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.

      So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.

      We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.

      Now many of these commands that we'll be using should by now be familiar.

      Just refer back to the lessons command document if you're unsure because we'll be using all of the same commands.

      First we need to list all of the block devices which are attached to this instance and we can do that with lsblk.

      This time it looks a little bit different because we're using instance store rather than EBS additional volumes.

      So in this particular case I want you to look for the 8G volume so this is the root volume.

      This represents the boot or root volume of the instance.

      Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.

      Now to check that we can verify whether there's a file system on this instance store volume.

      If we run this command, so the same command we've used previously, so sudo file -s and then the ID of this volume, so /dev/nvme1n1, you'll see it reports data.

      And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.

      We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.

      So press enter to create a file system on this raw block device this instance store volume and then we can run this command again to verify that it now has a file system.

      To mount it we can follow the same process that we did in the earlier stages of this demo series.

      We'll need to create a directory for this volume to be mounted into this time we'll call it forward slash instance store.

      So create that folder and then we're going to mount this device into that folder, so sudo mount, then the device ID and then the mount point or the folder that we've previously created.

      So press enter and that means that this block device this instance store volume is now mounted into this folder.

      And if we run a df space -k and press enter you can see that it's now mounted.

      Now we're going to move into that folder by typing cd space forward slash instance store and to keep things efficient we're going to create a file called instance store dot txt.

      And rather than using an editor we'll just use sudo touch and then the name of the file and this will create an empty file.

      If we do an ls space -la and press enter you can see that that file exists.
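      For reference, here is the full command sequence from this part of the demo, in the style of the lesson commands document. This is a sketch: the device name /dev/nvme1n1 and the folder name /instancestore are assumptions based on the narration, and may differ on your instance, so check the lsblk output first.

      ```shell
      lsblk                                   # list attached block devices
      sudo file -s /dev/nvme1n1               # reports "data": no file system yet
      sudo mkfs -t xfs /dev/nvme1n1           # create an XFS file system on the volume
      sudo mkdir /instancestore               # create a mount point
      sudo mount /dev/nvme1n1 /instancestore  # mount the volume into that folder
      df -k                                   # confirm the new file system is mounted
      cd /instancestore
      sudo touch instancestore.txt            # create an empty test file
      ls -la                                  # confirm the file exists
      ```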

      So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.

      Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.

      Restarting is different than stop and start.

      So to do that we're going to close this tab move back to the ec2 console so click on instances right click on instance store test and select reboot instance and then confirm that.

      Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.

      Then right click and select connect.

      Using instance connect go ahead and connect back to the instance.

      And again if it appears to hang at this point then you can just wait for a few moments and then connect again.

      But in this case I've left it long enough and I'm connected back into the instance.

      Now once I'm back in the instance if I run a df space -k and press enter note how that file system is not mounted after the reboot.

      Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.

      But what we can do is do an lsblk again to list the block devices.

      We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.

      To do that we run this command.

      So it's mounting the same volume ID the same device ID into the same folder.

      So go ahead and run that command and press enter.

      Then if we use cd space forward slash and then instance store press enter and then do an LS space -la we can see that this file is still there.

      Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.

      Restarting an EC2 instance does not move the instance from one EC2 host to another.

      And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.

      Now we're going to do something different though.

      Close this tab down.

      Move back to instances.

      Again pay special attention to this IP address.

      Now we're going to right click and stop the instance.

      So go ahead and do that and confirm it if you're doing this in your own environment.

      Watch this public IP v4 address really carefully.

      We'll need to wait for the instance to move into a stopped state, which it has, and if we select the instance, note how the public IP version 4 address has been unallocated.

      So this instance is now not running on an EC2 host.

      Let's right click.

      Go to start instance and start it up again.

      We need to give that a few moments again.

      It'll move into a running state but notice how the public IP version 4 address has changed.

      This is a good indication that the instance has moved from one EC2 host to another.

      So let's give this instance a few moments to start up.

      And once it has right click, select connect and then go ahead and connect to the instance using instance connect.

      Once connected go ahead and run an lsblk and press enter and you'll see it appears to have the same instance store volume attached to this instance.

      It's using the same ID and it's the same size.

      But let's go ahead and verify the contents of this device using this command.

      So sudo file space -s space and then the device ID of the instance store volume.

      So press enter, and note how it shows data.

      So even though we created a file system in the previous step after we've stopped and started the instance, it appears this instance store volume has no data.

      Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.

      But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.

      And that means that it has access to completely different instance store volumes than it did on that previous host.

      It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.

      When you stop and start an EC2 instance, or anything else happens which means the instance moves from one host to another, all of the data is lost.

      So instance store volumes are ephemeral.

      They're not persistent and you can't rely on them to keep your data safe.

      And it's really important that you understand that distinction.

      If you're doing the developer or sysop streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.

      The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.

      EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.

      Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.

      So let's go ahead and tidy up the infrastructure.

      Close down this tab, click on instances.

      If you follow this last part of the demo in your own environment, go ahead and right click on the instance store test instance and terminate that instance.

      That will delete it along with any associated resources.

      We'll need to wait for this instance to move into the terminated state.

      So give that a few moments.

      Once that's terminated, go ahead and click on services and then move back to the cloud formation console.

      You'll see the stack that you created using the one click deploy at the start of this lesson.

      Go ahead and select that stack, click on delete and then delete stack.

      And that's going to put the account back in the same state as it was at the start of this lesson.

      So it will remove all of the resources that have been created.

      And at that point, that's the end of this demo series.

      So what did you learn?

      You learned that EBS volumes are created within one specific availability zone.

      EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.

      You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.

      And then you can use snapshots to create volumes in different availability zones.

      I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.

      You've also seen in this part of the demo series that instance store volumes can be used.

      They are included with many instance types but if the instance moves between EC2 hosts, so if an instance is stopped and then started, or if an EC2 host has hardware problems, then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.

      So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.

      At this point though, thanks for watching and doing this demo.

      I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.

    1. positio

      Lovely!

    2. nd the front paws and backside of our dog

      Great!

    3. It is relatively easy to move from this position, especially for a 4 year old

      As soon as he lets go of the dog, he will become much less stable.

    4. lung

      lunge?

    5. internal rotation in the right leg

      Looks like slight external rotation of the left and possible internal rotation of the right. Hard to tell from this angle.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      We just need to give this a brief moment to perform that reboot.

      So just wait a couple of moments and once you have right click again, select Connect.

      We're going to use EC2 instance connect again.

      Make sure the user's correct and then click on Connect.

      Now, if it doesn't immediately connect you to the instance, if it appears to have frozen for a couple of seconds, that's fine.

      It just means that the instance hasn't completed its restart.

      Wait for a brief while longer and then attempt another connect.

      This time you should be connected back to the instance and now we need to verify whether we can still see our volume attached to this instance.

      So do a df space -k and press Enter and you'll note that you can't see the file system.

      That's because before we rebooted this instance, we used the mount command to manually mount the file system on our EBS volume into the EBS test folder.

      Now that's a manual process.

      It means that while we could interact with that before the reboot, it doesn't automatically mount that file system when the instance restarts.

      To do that, we need to configure it to auto-mount when the instance starts up.

      So to do that, we need to get the unique ID of the EBS volume, which is attached to this instance.

      And to get that, we run a sudo space blkid.

      Now press Enter and that's going to list the unique identifier of all of the volumes attached to this instance.

      You'll see the boot volume listed as /dev/xvda1 and the EBS volume that we've just attached listed as /dev/xvdf.

      So we need the unique ID of the volume that we just added.

      So that's the one next to xvdf.

      So go ahead and select this unique identifier.

      You'll need to make sure that you select everything between the speech marks and then copy that into your clipboard.
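      If you'd rather not copy the UUID by hand, you can extract it with a little shell. Below is a minimal sketch; the blkid-style output line and the UUID in it are made up for illustration only:

      ```shell
      # A sample line in the style of blkid output (placeholder UUID).
      line='/dev/xvdf: UUID="069b1c2a-7d3e-4f10-9c2b-1234567890ab" TYPE="xfs"'

      # Pull out just the value between the quotes after UUID=.
      uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
      echo "$uuid"   # prints 069b1c2a-7d3e-4f10-9c2b-1234567890ab
      ```

      On a real instance, sudo blkid -s UUID -o value /dev/xvdf prints just the UUID directly, which saves the copy-and-paste step entirely.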

      Next, we need to edit the FSTAB file, which controls which file systems are mounted by default.

      So we're going to run a sudo and then space nano, which is our editor, and then a space, and then forward slash etc, which is the configuration directory on Linux, another forward slash and then fstab, and press Enter.

      And this is the configuration file for which file systems are mounted by our instance.

      And we're going to add a similar line.

      So first we need to use UUID, which is the unique identifier, and then the equals symbol.

      And then we need to paste in that unique ID that we just copied to our clipboard.

      Once that's pasted in, press Space.

      This is the ID of the EBS volume, so the unique ID.

      Next, we need to provide the place where we want that volume to be mounted.

      And that's the folder we previously created, which is forward slash EBS test.

      Then a space, we need to tell the OS which file system is used, which is xfs, and then a space.

      And then we need to give it some options.

      You don't need to understand what these do in detail.

      We're going to use defaults, all one word, and then a comma, and then nofail, also all one word.

      So once you've entered all of that, press Ctrl+O to save that file, and Enter, and then Ctrl+X to exit.
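      Putting those pieces together, the finished line in /etc/fstab looks something like the following. The UUID shown is a placeholder, and the mount point /ebstest is an assumption based on the folder name used earlier in this demo; substitute your own values:

      ```
      UUID=069b1c2a-7d3e-4f10-9c2b-1234567890ab  /ebstest  xfs  defaults,nofail  0  0
      ```

      The trailing "0 0" are the standard dump and fsck-order fields; if omitted they default to 0.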

      Now this will be mounted automatically when the instance starts up, but we can force that process by typing sudo space mount space -a.

      And this will perform a mount of all of the volumes listed in the fstab file.

      So go ahead and press Enter.

      Now if we do a df space-k and press Enter, you'll see that our EBS volume once again is mounted within the EBS test folder.

      So I'm going to clear the screen, then I'm going to move into that folder, press Enter, and then do an ls space-la, and you'll see that our amazing test file still exists within this folder.

      And that shows that the data on this file system is persistent, and it's available even after we reboot this EC2 instance, and that's different than instance store volumes, which I'll be demonstrating later on.

      At this point, we're going to shut down this instance because we won't be needing it anymore.

      So close down this tab, click on Instances, right-click on instance one-AZA, and then select Stop Instance.

      You'll need to confirm it, refresh that and wait for it to move into a stopped state.

      Once it has stopped, go down and click on Volumes, select the EBS test volume, right-click and detach it.

      We're going to detach this volume from the instance that we've just stopped.

      You'll need to confirm that, and that will begin the process and it will detach that volume from the instance, and this demonstrates how EBS volumes are completely separate from EC2 instances.

      You can detach them and then attach them to other instances, keeping the data that's on that volume.

      Just keep refreshing.

      We need to wait for that to move into an available state, and once it has, we're going to right-click, select Attach Volume, click inside the instance box, and this time, we're going to select instance two-AZA.

      It should be the only one listed now in a running state.

      So select that and click on Attach.

      Just refresh that and wait for that to move into an in-use state, which it is, then move back to instances, and we're going to connect to the instance that we just attached that volume to.

      So select instance two-AZA, right-click, select Connect, and then connect to that instance.

      Once we connected to that instance, remember this is an instance that we haven't interacted with this EBS volume with.

      So this instance has no initial configuration of this EBS volume, and if we do a df -k, you'll see that this volume is not mounted on this instance.

      What we need to do is do an lsblk, and this will list all of the block devices on this instance.

      You'll see that it's still using XVDF because this is the device ID that we configured when attaching the volume.

      Now, if we run this command, so sudo file -s and then the device ID of this EBS volume, notice how now it shows a file system on this EBS volume because we created it on the previous instance.

      We don't need to go through all of the process of creating the file system because EBS volumes persist past the lifecycle of an EC2 instance.

      You can interact with an EBS volume on one instance and then move it to another and the configuration is maintained.

      We're going to follow the same process.

      We're going to create a folder called EBSTEST.

      Then we're going to mount the EBS volume using the device ID into this folder.

      We're going to move into this folder and then if we do an ls space -la and press Enter, you'll see the test file that you created in the previous step.

      It still exists and all of the contents of that file are maintained because the EBS volume is persistent storage.

      So that's all I wanted to verify with this instance that you can mount this EBS volume on another instance inside the same availability zone.

      At this point, close down this tab and then click on Instances and we're going to shut down this second EC2 instance.

      So right-click and then select Stop Instance and you'll need to confirm that process.

      Wait for that instance to change into a stop state and then we're going to detach the EBS volume.

      So that's moved into the stopped state.

      We can select Volumes, right-click on this EBSTEST volume, detach the volume and confirm that.

      Now next, we want to mount this volume onto the instance that's in Availability Zone B and we can't do that because EBS volumes are located in one specific availability zone.

      Now to allow that process, we need to create a snapshot.

      Snapshots are stored on S3 and replicated between multiple availability zones in that region and snapshots allow us to take a volume in one availability zone and move it into another.

      So right-click on this EBS volume and create a snapshot.

      Under Description, just use EBSTESTSNAP and then go ahead and click on Create Snapshot.

      Just close down any dialogues, click on Snapshots and you'll see that a snapshot is being created.

      Now depending on how much data is stored on the EBS volume, snapshots can either take a few seconds or anywhere up to several hours to complete.

      This snapshot is a full copy of all of the data that's stored on our original EBS volume.

      But because the snapshot is stored in S3, it means that we can take this snapshot, right-click, create volume and then create a volume in a different availability zone.

      Now you can change the volume type, the size and the encryption settings at this point, but we're going to leave everything the same and just change the availability zone from US-EAST-1A to US-EAST-1B.

      So select 1B in availability zone, click on Add Tag.

      We're going to give this a name to make it easier to identify.

      For the value, we're going to use EBS Test Volume-AZB.

      So enter that and then create the volume.

      Close down any dialogues and at this point, what we're doing is using this snapshot which is stored inside S3 to create a brand new volume inside availability zone US-EAST-1B.

      At this point, once the volume is in an available state, make sure you select the right one, then we can right-click, we can attach this volume and this time when we click in the instance box, you'll see the instance that's in availability zone 1B.

      So go ahead and select that and click on Attach.

      Once that volume is in use, go back to Instances, select the third instance, right-click, select Connect, choose Instance Connect, verify the username and then connect to the instance.

      Now we're going to follow the same process with this instance.

      So first, we need to list all of the attached block devices using lsblk.

      You'll see the volume we've just created from that snapshot, it's using device ID XVDF.

      We can verify that it's got a file system using the command that we've used previously and it's showing an XFS file system.

      Next, we create our folder which will be our mount point.

      Then we mount the device into this mount point using the same command as we've used previously, move into that folder and then do a listing using ls -la and you should see the same test file you created earlier, and if you cat this file, it should have the same contents.

      This volume has the same contents because it's created from a snapshot that we created of the original volume and so its contents will be identical.

      Go ahead and close down this tab to this instance, select instances, right click, stop this instance and then confirm that process.

      Just wait for that instance to move into the stopped state.

      We're going to move back to volumes, select the EBS test volume in availability zone 1B, detach that volume and confirm it and then just move to snapshots and I want to demonstrate how you have the option of right clicking on a snapshot.

      You can copy the snapshot and choose a different region.

      So as well as snapshots giving you the option of moving EBS volume data between availability zones, you can also use snapshots to copy data between regions.

      Now I'm not going to do this process but I could select a different region, for example, Asia Pacific Sydney and copy that snapshot to the Sydney region.
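      That console action maps onto a single AWS CLI call. Here is a sketch, assuming a placeholder snapshot ID; note that copy-snapshot is issued against the destination region:

      ```shell
      # Copy a snapshot from us-east-1 to ap-southeast-2 (Sydney).
      # snap-0123456789abcdef0 is a placeholder; use your snapshot's ID.
      aws ec2 copy-snapshot \
        --region ap-southeast-2 \
        --source-region us-east-1 \
        --source-snapshot-id snap-0123456789abcdef0 \
        --description "Copy of EBSTESTSNAP"
      ```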

      But there's no point doing that here because we'd just have to remember to clean it up afterwards.

      That process is fairly simple and will allow us to copy snapshots between regions.

      It might take some time again depending on the amount of data within that snapshot but it is a process that you can perform either as part of data migration or disaster recovery processes.

      So go ahead and click on cancel and at this point we're just going to clear things up because this is the end of this first phase of this demo lesson.

      So right click on this snapshot and just delete the snapshot and confirm that.

      Then go to volumes, select the volume in US East 1A, right click, delete that volume and confirm.

      Select the volume in US East 1B, right click, delete volume and confirm.

      And that just means we've tidied up both of those EBS volumes within this account.

      Now that's the end of this first stage of this set of demo lessons.

      All the steps until this point have been part of the free tier within AWS.

      What follows won't be part of the free tier.

      We're going to be creating a larger instance size and this will have a cost attached but I want to use it to demonstrate instance store volumes and how you can interact with them and some of their unique characteristics.

      So I'm going to move into a new video and this new video will have an associated charge.

      So you can either watch me perform the steps or you can do it within your own environment.

      Now go ahead and complete this video and when you're ready, you can move on to the next video where we're going to investigate instance store volumes.

    1. Welcome back and we're going to use this demo lesson to get some experience of working with EBS and Instance Store volumes.

      Now before we get started, this series of demo videos will be split into two main components.

      The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.

      The second component will be based on Instance Store volumes and will be using larger instances which are not included within the free tier.

      So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.

      If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.

      The second component is entirely optional but I just wanted to warn you of the potential cost in advance.

      Now to get started with this demo, you're going to need to deploy some infrastructure.

      To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.

      Now attached to this demo is a one click deployment link to deploy the infrastructure.

      So go ahead and click on that link.

      That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.

      Now you're going to need this to be in a create complete state before you continue with this demo.

      So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.

      Okay, now that's finished and the stack is in a create complete state.

      You're also going to be running some commands within EC2 instances as part of this demo.

      Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.

      So go ahead and open that link in a separate browser window or separate browser tab.

      It should look something like this and we're going to be using this throughout the lesson.

      Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.

      So instance one, instance two and instance three.

      So the next thing to do is to move across to the EC2 console.

      So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.

      Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.

      Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.

      We have instance1-az-a.

      We have instance2-az-a, and then instance1-az-b.

      So these are three instances, two of which are in availability zone A and one which is in availability zone B.

      Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.

      And what you'll see is three EBS volumes, each of which is 8 GiB in size.

      Now these are all currently in use.

      You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.

      So on each of these volumes is the operating system running on those EC2 instances.

      Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.

      So click on the create volume button.

      The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.

      We've got GP2 and GP3 which are the general purpose storage types.

      We're going to use GP3 for this demo lesson.

      You could also select one of the provisioned IOPS volumes.

      So this is currently IO1 or IO2.

      And with both of these volume types, you're able to define IOPS separately from the size of the volume.

      So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.

      Now IO1 was the first type of provisioned IOPS SSD introduced by AWS and more recently they've introduced IO2 and enhanced it which provides even higher levels of performance.

      In addition to that we do have the non-SSD volume types.

      So SC1, which is Cold HDD, ST1, which is Throughput Optimized HDD, and then of course the original Magnetic type, which is now legacy and which AWS doesn't recommend for any production usage.

      For this demo lesson we're going to go ahead and select GP3.

      So select that.

      Next you're able to pick a size in GiB for the volume.

      We're going to select a volume size of 10 GiB.

      Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.

      At this point I want you to go ahead and select US-EAST-1A.

      When creating a volume you're also able to specify a snapshot as the basis for that volume.

      So if you want to restore a snapshot into this volume you can select that here.

      At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.

      We're going to be talking about encryption later in this section of the course.

      You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.

      We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.

      For the key we're going to use name.

      For the value we're going to put EBS test volume.

      So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.

      Just close down any dialogues and then just pay attention to the different states that this volume goes through.

      It begins in a creating state.

      This is where the storage is being provisioned and then made available by the EBS product.

      If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.

      And that's what we're going to do so we're going to right click and select attach volume.

      Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.

      EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.

      So if we select the instance box you'll only see instances in that same availability zone.

      Now at this point go ahead and select instance 1 in availability zone A.

      Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.

      So this is how the volume is going to be exposed to the EC2 instance.

      So if we want to interact with this instance inside the operating system this is the device that we'll use.

      Now different operating systems might see this in slightly different ways.

      So as this warning suggests, certain Linux kernels might rename sdf to xvdf.

      So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.

      How we can identify it and how we can configure it within the operating system for use.

      And I'm going to demonstrate that in the next part of this demo lesson.

      So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.

      Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.

      We're going to go ahead and connect to instance 1 in availability zone A.

      This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.

      So right click on this instance and select connect.

      You could either connect with an SSH client or use Instance Connect, and to keep things simple we're going to connect from our browser, so select the EC2 Instance Connect option, make sure the username is set to ec2-user and then click on connect.

      So now we're connected to this EC2 instance, and it's at this point that we'll start needing the commands listed inside the lesson commands document, which again is attached to this lesson.

      So first we need to list all the block devices which are connected to this instance, and we're going to use the lsblk command.

      Now if you're not comfortable with Linux, don't worry; just take this nice and slowly and understand at a high level all the commands that we're going to run.

      So the first one is lsblk, which stands for list block devices.

      So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.

      You'll see the root device, which is used to boot the instance and contains the instance operating system; you'll see that it's 8 GiB in size. And then this is the EBS volume that we just attached to this instance.

      You'll see that device ID, so xvdf, and you'll see that it's 10 GiB in size.

      Now what we need to do next is check whether there is a file system on this block device.

      So this block device we've created it with EBS and then we've attached it to this instance.

      Now we know that it's blank but it's always safe to check if there's any file system on a block device.

      So to do that we run this command.

      So we're going to check are there any file systems on this block device.

      So press enter, and if you see just "data", that indicates there isn't any file system on this device, and so we need to create one.

      You can only mount file systems under Linux and so we need to create a file system on this raw block device this EBS volume.

      So to do that we run this command.

      So sudo again is just giving us admin permissions on this instance.

      mkfs is going to make a file system.

      We specify the file system type with -t and then xfs, which is a type of file system, and then we're telling it to create this file system on the raw block device, which is the EBS volume that we just attached.

      So press enter and that will create the file system on this EBS volume.

      We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.

      So now we can see the difference.

      Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.

      Now the way that Linux works is we mount a file system to a mount point which is a directory.

      So we're going to create a directory using this command.

      mkdir makes a directory, and we're going to call the directory /ebstest.

      So this creates it at the top level of the file system.

      The forward slash signifies root, which is the top level of the file system tree, and we're going to make a folder inside here called ebstest.

      So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.

      And to do that we use this command, mount.

      So mount takes a device ID, so /dev/xvdf.

      So this is the raw block device containing the file system we just created and it's going to mount it into this folder.

      So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.

      And we can verify that by running df -k.

      And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.
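      Taken together, the steps so far match what's in the lesson commands document. A consolidated version looks like this (assuming the volume is attached as /dev/xvdf, as in this demo; these commands need to run on the instance itself):

      ```shell
      # List the block devices attached to the instance
      lsblk

      # Check whether the new device has a file system ("data" means raw/blank)
      sudo file -s /dev/xvdf

      # Create an XFS file system on the raw EBS volume
      sudo mkfs -t xfs /dev/xvdf

      # Create a mount point and mount the volume into it
      sudo mkdir /ebstest
      sudo mount /dev/xvdf /ebstest

      # Verify the new file system is mounted
      df -k
      ```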

      At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.

      So type cd /ebstest (cd is change directory), then press enter and that will move you into that folder.

      Once we're in that folder we're going to create a test file.

      So we're going to use this command, sudo nano, which is a text editor, and we're going to call the file amazingtestfile.txt.

      So type that command in and press enter and then go ahead and type a message.

      It can be anything you just need to recognize it as your own message.

      So I'm going to use cats are amazing and then some exclamation marks.

      Then I'm going to press control+O and enter to save that file, then control+X to exit, and again clear the screen to make it easier to see.

      And then I'm going to run ls -la and press enter, just to list the contents of this folder.

      So as you can see, we've now got this amazingtestfile.txt.

      And if we cat the contents of this, so cat amazingtestfile.txt, you'll see the unique message that you just typed in.

      So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.
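      If you'd rather not use an interactive editor, the same test file can be created non-interactively; this sketch uses a folder under /tmp so it works on any Linux machine, even without the EBS mount in place:

      ```shell
      # Create a working folder (stand-in for the /ebstest mount point)
      mkdir -p /tmp/ebstest-demo
      cd /tmp/ebstest-demo

      # Write the test message without opening an editor
      echo 'cats are amazing!!!' > amazingtestfile.txt

      # List the folder and read the message back
      ls -la
      cat amazingtestfile.txt
      ```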

      So the next step that I want you to do is to reboot this EC2 instance.

      To do that, type sudo reboot and press enter.

      Now this will disconnect you from this session.

      So you can go ahead and close down this tab and go back to the EC2 console.

      Just go ahead and click on instances.

      Okay, so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. Effective collaboration is essential for mutual learning.

      for - Deep Humanity - intertwingled individual / collective learning - evolutionary learning journey - symmathesy - mutual learning - Nora Bateson

    2. preliminary ground-setting

      for - co-creative collaboration - preliminary groundwork

      comment - How many times have I seen people come together with good intentions to collaborate on some meaningful project, only for the project to fall apart some time later due to differences that emerge later on? - Without laying the proper framework for engagement and conflict resolution, we cannot prevent future conflicts from emerging - What is that proper framework? - What variables bring people closer together? - What variables drive people further apart? - We must identify those variables. They are complex because each one of us sees reality from our own unique perspective

    3. for - Medium article - co-creative collaboration - Donna Nelham

      summary - Donna takes us on a deep dive into the word collaboration, what is needed to forge deep and meaningful collaboration, and why it often fails - She introduces the term "collaboration washing" (like greenwashing) into our lexicon - This article is a provocation for a deep dive into what it means to collaborate - The questions we ask ourselves will lead us back to the most fundamental philosophical questions of self and other and how we formed these

    1. rumination

      Rumination is thinking repeatedly and at length about things in the past, usually your own feelings or problems

    2. endogenous

      from within

    3. exogenous

      having an external cause or origin.

    1. How did Paul learn about forgiveness?
    2. WHAT JESUS' FOLLOWERS LEARNED ABOUT REPENTANCE

      .h2

    3. HOW JEHOVAH HELPS SINNERS TO REPENT

      .h2

    4. WHAT JEHOVAH TAUGHT ISRAEL ABOUT REPENTANCE

      .h2

    1. Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.

      In this demo lesson you're going to add private internet access capability using NAT gateways.

      So you're going to be applying a cloud formation template which creates this base infrastructure.

      It's going to be the animals for life VPC with infrastructure in each of three availability zones.

      So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.

      Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.

      In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.

      Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.

      It means that if any of the availability zones fail then each of the others will continue operating because these route tables which are attached to the private subnets they point at the NAT gateway within that availability zone.

      So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.

      Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.

      So let's go ahead and move across to our AWS console and get started implementing this architecture.

      Okay so now we're at the AWS console as always just make sure that you're logged in to the general AWS account as the I am admin user and you'll need to have the Northern Virginia region selected.

      Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you created up until that point, so the animals for life VPC as well as the Bastion host and the associated networking.

      So you should have a relatively clean AWS account.

      So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.

      So attached to this demo lesson is a one-click deployment link so go ahead and open that link.

      That's going to take you to a quick create stack screen.

      Everything should be pre-populated the stack name should be a4l just scroll down to the bottom check this capabilities box and then click on create stack.

      Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.

      So go ahead, pause the video, wait for your stack to change into create complete, and then we're good to continue.

      Okay, so now this stack's moved into a create complete state, we're good to continue.

      So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.

      To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.

      This is a private EC2 instance, so it doesn't have any public internet connectivity, and we're going to use it to test our NAT gateway functionality.

      So go ahead and click on this icon under physical ID and this is going to move you to the EC2 console and you'll be able to see this a4l - internal - test instance.

      Now currently in my case it's showing as running but the status check is showing as initializing.

      Now we'll need this instance to finish provisioning before we can continue with the demo.

      What should happen is this status check should change from initializing to two out of two status checks and once you're at that point you should be able to right click and select connect and choose session manager and then have the option of connecting.

      Now you'll see that I don't because this instance hasn't finished its provisioning process.

      So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.

      Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.

      So go ahead and pause the video and when you can connect to the instance you're good to continue.

      Okay so in my case it took about five minutes for this to change to two out of two checks past and then another five minutes before I could connect to this EC2 instance.

      So I can right click on it and select connect.

      I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.

      Now the reason why you're able to connect to this private instance is because we're using Session Manager, and I'll explain exactly how this product works elsewhere in the course.

      Essentially it allows us to connect into an EC2 instance with no public internet connectivity, and it's using VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course.

      What you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping 1.1.1.1 and press enter, you'll note that we don't have any public internet connectivity.

      That's because this instance doesn't have a public IPv4 address and it's not in a subnet whose route table points at the internet gateway.

      This EC2 instance has been deployed into the app A subnet, which is a private subnet, and it also doesn't have a public IPv4 address.

      So at this point what we need to do is go ahead and deploy our NAT gateways, and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IPv4 internet, so let's go ahead and do that.

      Now to do that we need to be back at the main AWS console click in the services search box at the top type VPC and then right click and open that in a new tab.

      Once you do that, go ahead and move to that tab; once you're there, click on NAT gateways and then create NAT gateway.

      Okay, so once you're here you'll need to specify a few things: a name for the NAT gateway, the public subnet for the NAT gateway to go into, and an elastic IP address, which is an IP address which doesn't change.

      So first we'll set the name of the NAT gateway, and we'll use a4l-vpc1-natgw-a, so a4l for animals for life and -a because this is going into availability zone A.

      Next we'll need to pick the public subnet that the NAT gateway will be going into, so click on the subnet drop down and select the web A subnet, which is the public subnet in availability zone A, so sn-web-a.

      Now we need to give this NAT gateway an elastic IP. It doesn't currently have one, so click on allocate elastic IP, which gives it an allocation.

      Don't worry about the connectivity type we'll be covering that elsewhere in the course just scroll down to the bottom and create the NAT gateway.
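      If you prefer the command line, the same NAT gateway creation could be sketched with the AWS CLI like this (the subnet and allocation IDs below are placeholders, not values from this demo):

      ```shell
      # Allocate an elastic IP for the NAT gateway
      aws ec2 allocate-address --domain vpc

      # Create the NAT gateway in the public web A subnet, using the
      # AllocationId returned by the previous command
      aws ec2 create-nat-gateway \
        --subnet-id subnet-EXAMPLE \
        --allocation-id eipalloc-EXAMPLE \
        --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=a4l-vpc1-natgw-a}]'
      ```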

      Now this process will take some time and so we need to go ahead and create the two other NAT gateways.

      So click on NAT gateways at the top and then we're going to create a second NAT gateway.

      So go ahead and click on create NAT gateway again. This time we'll call the NAT gateway a4l-vpc1-natgw-b and we'll pick the web B subnet, so sn-web-b. Allocate an elastic IP again and click on create NAT gateway.

      Then we'll follow the same process a third time. Click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP, then scroll down and click on create NAT gateway.

      At this point we've got three NAT gateways being created, all in a pending state.

      If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll left or right to see details on these IPs. If we wanted, we could release these IPs back to the account once we'd finished with them.

      Now at this point you need to go ahead and pause the video, and resume it once all three of those NAT gateways have moved away from the pending state. We need them to be in an available state, ready to go, before we can continue with this demo.

      Okay, so all three are now in an available state, which means they're providing service. If you scroll to the right in this list you can see additional information about these NAT gateways: the elastic and private IP addresses, the VPC, and the subnet that each NAT gateway is located in.

      What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways.

      So right click on route tables and open it in a new tab. We need to create a new route table for each of the availability zones, so go ahead and click on create route table.

      First we need to pick the VPC for this route table, so click on the VPC drop down and select the animals for life VPC, so a4l-vpc1. Once selected, name the route table. We're going to keep the naming scheme consistent, so a4l-vpc1-rt-privatea, with rt for route table, then click on create.

      Close that dialogue down and create another route table. This time it will be rt-privateb. Select the animals for life VPC and click on create. Close that down and finally click on create route table again, this time a4l-vpc1-rt-privatec. Again click on the VPC drop down, select the animals for life VPC and click on create.

      That leaves us with three route tables, one for each availability zone. What we need to do now is create a default route within each of these route tables, and that route is going to point at the NAT gateway in the same availability zone.

      So select the route table rt-privatea and click on the routes tab. Once you've selected the routes tab, click on edit routes and we're going to add a new route: the IPv4 default route of 0.0.0.0/0. Click on target, pick NAT gateway, and select the NAT gateway in availability zone A, so a4l-vpc1-natgw-a. Because we named them, it's easy to pick the relevant one from the list, and because this is the route table for availability zone A we need the matching NAT gateway. Save the route table.

      Now we'll do the same for the route table in availability zone B. Make sure the routes tab is selected, click on edit routes, click on add route, enter 0.0.0.0/0, and for the target pick NAT gateway and select natgw-b. Once you've done that, save the route table.

      Next select the route table in availability zone C, so rt-privatec. Make sure the routes tab is selected, click on edit routes and again add the IPv4 default route of 0.0.0.0/0, select a target, go to NAT gateway and pick natgw-c. Once you've done that, save the route table.

      Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place. So let's move back to our private instance, and we can see that it's not actually working.

      The reason is that although we've created these routes, we haven't actually associated the route tables with any of the subnets. Subnets in a VPC which don't have an explicit route table association are associated with the main route table, so we need to explicitly associate each of these route tables with the subnets inside the same AZ.

      So let's go ahead and pick rt-privatea, going through in order. Select it, click on the subnet associations tab, then edit subnet associations, and pick all of the private subnets in AZ A: the reserved subnet (reserved-a), the app subnet (app-a) and the DB subnet (db-a). Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table. We're going to resolve that by associating this route table with those subnets, so click on save, and this will associate all of the private subnets in AZ A with the AZ A route table.

      Now we do the same process for AZ B and AZ C, starting with AZ B. Select the private B route table, click on subnet associations, edit subnet associations, select application B, database B and reserved B, then scroll down and save the associations. Then select the private C route table, click on subnet associations, edit subnet associations, select reserved C, database C and application C, then scroll down and save those associations.

      Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to Session Manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work. That's because we now have a NAT gateway providing service to each of the private subnets in all three availability zones.

      Okay, so that's everything you needed to cover in this demo lesson. Now it's time to clean up the account and return it to the same state as it was in at the start of this demo lesson. From this point on within the course you're going to be using automation, so we can remove all the configuration that we've done inside this demo lesson.

      The first thing we need to do is reverse the route table changes. Select the rt-privatea route table, go to subnet associations, edit the subnet associations, and uncheck all of these subnets; this returns them to being associated with the main route table, so scroll down and click on save. Do the same for rt-privateb, deselecting all of its associations and clicking save, and then the same for rt-privatec: select it, go to subnet associations, edit them, remove all of the subnets and click on save.

      Next select all of these private route tables, the ones we created in this lesson, click on the actions drop down, choose delete route table, and confirm by clicking delete route tables.

      Go to NAT gateways on the left. We need to delete each of the NAT gateways in turn. Select A, click on actions, choose delete NAT gateway, type delete and click delete. Then select B and do the same: actions, delete NAT gateway, type delete, click delete. Finally the same for C: select the C NAT gateway, click on actions, choose delete NAT gateway, type delete to confirm, and click on delete.

      We're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted. If yours are still listed in a deleting state, go ahead and pause the video and resume once all of them have changed to deleted.

      At this point all of the NAT gateways have deleted, so go ahead and click on elastic IPs. We need to release each of these IPs, so select one of them, click on actions, choose release elastic IP addresses, and click release. Do the same for the other two.

      Once that's done, move back to the CloudFormation console, select the stack which was created by the one-click deployment at the start of the lesson, click on delete, and confirm the deletion. That will remove the CloudFormation stack and any resources created as part of this demo, and once it finishes deleting, the account has been returned to the same state as it was in at the start of this demo lesson.

      So I hope this demo lesson has been useful. To reiterate what you've done: you've created three NAT gateways for a region-resilient design, created three route tables, one in each availability zone, added a default IPv4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in that availability zone. So you've implemented a regionally resilient NAT gateway architecture, and that's a great job; it's a pretty complex demo, but it's functionality that will be really useful if you're using AWS in the real world or if you have to answer any exam questions on NAT gateways.

      With that being said, you've now cleaned up the account and deleted all the resources, so go ahead and complete this video, and when you're ready, I'll see you in the next.
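      As a reference, the per-AZ routing configuration performed in the console could be sketched with the AWS CLI like this (all IDs are placeholders; the same steps would be repeated for AZ B and AZ C):

      ```shell
      # Create a route table for the private subnets in AZ A
      aws ec2 create-route-table --vpc-id vpc-EXAMPLE \
        --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=a4l-vpc1-rt-privatea}]'

      # Add the IPv4 default route pointing at the AZ A NAT gateway
      aws ec2 create-route --route-table-id rtb-EXAMPLE \
        --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE

      # Associate the route table with each private subnet in AZ A
      aws ec2 associate-route-table --route-table-id rtb-EXAMPLE --subnet-id subnet-EXAMPLE
      ```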

    1. Data construction prompt. Fig. 6 shows the prompt used for Chinese distillation data construction. We follow Zhou et al. (2024) to design the prompt for Chinese data construction. We adopt the data construction prompt of Pile-NER-type, since it shows the best performance as in (Zhou et al., 2024). [Figure 6: Data construction prompt for Chinese open-domain NER.] Data processing. Following (Zhou et al., 2024), we chunk the passages sampled from the Sky corpus to texts of a max length of 256 tokens and randomly sample 50K passages. Due to limited computation resources, we sample the first twenty files in Sky corpus for data construction, since the size of the entire Sky corpus is beyond the processing capability of our machines. We conduct the same data processing procedures including output filtering and negative sampling as in UniNER. Specifically, the negative sampling strategy for entity types is applied with a probability proportional to the frequency of entity types in the entire con

      The Sky-NER (Chinese open NER) data construction process: - Prompt construction: based on the strategy from the UniversalNER paper. - Data processing: collect data by chunking passages from the Sky corpus into texts with a maximum length of 256 tokens, then randomly sampling 50K passages.
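The two processing steps described above (chunking to a 256-token maximum, then UniNER-style negative sampling of entity types with probability proportional to their frequency) can be sketched as follows. The whitespace tokenizer and the toy frequency table are stand-ins, not the paper's actual implementation:

```python
import random

def chunk_passages(passages, max_tokens=256):
    """Split passages into chunks of at most max_tokens.
    Whitespace tokenization is a simplification for illustration."""
    chunks = []
    for text in passages:
        tokens = text.split()
        for i in range(0, len(tokens), max_tokens):
            chunks.append(" ".join(tokens[i:i + max_tokens]))
    return chunks

def sample_negative_types(gold_types, type_freq, k, rng):
    """Sample k entity types absent from this sample's gold types, with
    probability proportional to each type's frequency in the whole dataset."""
    candidates = [t for t in type_freq if t not in gold_types]
    weights = [type_freq[t] for t in candidates]
    return rng.choices(candidates, weights=weights, k=k)

rng = random.Random(0)
chunks = chunk_passages(["word " * 600], max_tokens=256)   # 600 tokens -> 3 chunks
negs = sample_negative_types({"person"},
                             {"person": 50, "location": 30, "event": 5},
                             k=2, rng=rng)
```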

    2. Inference with out-domain examples. During inference, since examples from the automatically constructed data are not aligned with the domains and schemas of the human-annotated benchmarks, we refer to them as out-domain examples. Fig. 4 shows the results of inference with out-domain examples using diverse retrieval strategies. We use the model trained with the NN strategy here. After applying example filtering such as BM25 scoring, inference with out-domain examples shows improvements compared to the baseline, suggesting the need for example filtering when implementing RAG with out-domain examples

      Inference with out-domain examples: during inference, because examples from the automatically constructed dataset do not match the domains and formats of the human-annotated data, these examples are called out-domain.

      In the experiment in Figure 4, the RA-IT model is trained with the NN retrieval strategy. After applying the BM25 filter, inference with out-domain examples shows improvements over the baseline, which demonstrates the importance of adding a filter when applying RAG with out-domain examples.
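The BM25 example filtering referred to above can be sketched with a plain textbook BM25 (k1 = 1.5, b = 0.75); the threshold and the toy corpus are illustrative, not the paper's settings:

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Textbook BM25 score of doc for query; corpus is a list of token lists."""
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    tf = Counter(doc)
    n_docs = len(corpus)
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score

def filter_examples(query, examples, corpus, threshold):
    """Keep retrieved examples whose BM25 score passes the threshold.
    An empty result would mean: fall back to the vanilla instruction template."""
    return [ex for ex in examples if bm25_score(query, ex, corpus) >= threshold]

corpus = [
    ["steve", "jobs", "founded", "apple"],
    ["the", "cat", "sat", "on", "the", "mat"],
    ["apple", "released", "a", "new", "phone"],
]
query = ["apple", "phone"]
kept = filter_examples(query, corpus, corpus, threshold=1.0)
```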

    3. Training with diverse retrieval strategies. Fig. 3 visualizes the results of training with various retrieval strategies. We conduct inference with and without examples for each strategy, and set the retrieval strategy of inference the same as that of training. The most straightforward method, NN, shows the best performance, suggesting the benefits of semantically similar examples. The Random strategy, though inferior to NN, also shows improvements, indicating that random examples might introduce some general information about the NER task to the model. Meanwhile, inference with examples does not guarantee improvements and often hurts performance. This may be due to the differences in annotation schema between the automatically constructed data and the human-annotated benchmarks. [Figure 4: Impacts of inference with out-domain examples using various retrieval strategies. The average F1 value of the evaluated benchmarks is reported. "w/o exmp." means inference without examples. Applying an example filtering strategy such as BM25 filtering benefits RAG with out-domain examples.] [Figure 5: Impacts of inference with in-domain examples. The average F1 value of the evaluated benchmarks is reported. "N-exmp." means an example pool of size N. Sufficient in-domain examples are helpful for RAG.]

      Training with diverse retrieval strategies: shown in Figure 3. Inference is conducted with and without reference examples for each retrieval strategy, and the retrieval strategy used in training and in inference is the same.

      The results show that NN is the best retrieval strategy, which highlights the importance of semantically similar reference examples. Meanwhile, inference with examples does not guarantee improvement and often hurts the results.

    4. Diverse retrieval strategies. The following strategies are explored in the subsequent analysis. (1) Nearest neighbor (NN), the strategy used in the main experiments, retrieves the k nearest neighbors of the current sample. (2) Nearest neighbor with BM25 filter (NN, BM), where we apply BM25 scoring to filter out NN examples not passing a predefined threshold. Samples with no qualifying examples are used with the vanilla instruction template. (3) Diverse nearest neighbor (DNN) retrieves K nearest neighbors with K >> k and randomly selects k examples from them. (4) Diverse nearest neighbor with BM25 filter (DNN, BM) filters out DNN examples not reaching the BM25 threshold. (5) Random uniformly selects k random examples. (6) Mixed nearest neighbors (MixedNN) mixes the use of the NN and random retrieval strategies, with the ratio of NN set to α.

      The main retrieval strategies: - Nearest neighbor (NN): the strategy used in the main experiments; retrieves the k examples closest to the query sample. - NN with BM25 filter (NN, BM): the BM25 filter removes NN examples whose similarity does not pass a predefined threshold. - Diverse NN (DNN): retrieves K nearest-neighbor examples with K >> k, then randomly selects k of those K examples. - Random. - Mixed NN (MixedNN): combines NN and random selection, with the NN ratio set to α.
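A minimal sketch of several of the strategies listed above, with cosine similarity over toy vectors standing in for GTE-large embeddings; the parameter names k, K and alpha follow the description, everything else is an assumption for illustration:

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nn(query_vec, pool, k):
    """(1) Nearest neighbor: the k most similar examples."""
    ranked = sorted(pool, key=lambda ex: cosine(query_vec, ex["vec"]), reverse=True)
    return ranked[:k]

def dnn(query_vec, pool, k, big_k, rng):
    """(3) Diverse NN: take K >> k neighbors, then sample k of them."""
    return rng.sample(nn(query_vec, pool, big_k), k)

def random_pick(pool, k, rng):
    """(5) Random: k uniformly sampled examples."""
    return rng.sample(pool, k)

def mixed_nn(query_vec, pool, k, alpha, rng):
    """(6) MixedNN: use NN with probability alpha, otherwise random."""
    if rng.random() < alpha:
        return nn(query_vec, pool, k)
    return random_pick(pool, k, rng)

rng = random.Random(0)
pool = [
    {"id": "e1", "vec": (1.0, 0.0)},
    {"id": "e2", "vec": (0.9, 0.1)},
    {"id": "e3", "vec": (0.0, 1.0)},
]
top2 = nn((1.0, 0.0), pool, k=2)
```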

    5. We explore the impacts of diverse retrieval strategies. We conduct the analysis on the 5K data size for cost saving, as the effect of RA-IT is consistent across various data sizes, as shown in Section 3.4. We report the average results of the evaluated benchmarks here

      Analysis: this analysis is conducted to explore the impact of the different retrieval strategies. It is carried out on the 5K-sample data size.

    6. The main results are summarized in Tables 1 and 2 respectively. We report the results of inference without examples for RA-IT here, since we found this setting exhibits more consistent improvements. The impacts of inference with examples are studied in Section 3.5. As shown in the tables, RA-IT shows consistent improvements on English and Chinese across various data sizes. This is presumably because the retrieved context enhances the model

      Main results: shown in Tables 1 and 2. Note that the experiments in these two tables perform inference without few-shot examples, because this setting yields more consistent improvements.

      The results show that RA-IT achieves the best performance. This improvement is presumably because the retrieved context strengthens the model's understanding of the input, which demonstrates the need for context-augmented instruction samples.

    7. We conduct a preliminary study on IT data efficiency in targeted distillation for open NER by exploring the impact of various data sizes: [0.5K, 1K, 5K, 10K, 20K, 30K, 40K, 50K]. We use vanilla IT for the preliminary study. Results are visualized in Fig. 2. The following observations are consistent in English and Chinese: (1) a small data size already surpasses ChatGPT's performance. (2) Performance improves as the data size increases to 10K or 20K, but begins to decline and then remains at a certain level as the data size further increases to 50K. Recent work on IT data selection (Xia et al., 2024; Ge et al., 2024; Du et al., 2023) also finds superior performance with only a limited data size. We leave selecting more beneficial IT data for IE as future work. Accordingly, we conduct the main experiments on 5K, 10K and 50K data sizes. [Figure 2: Preliminary study of IT data efficiency for open NER in English (left) and Chinese (right) scenarios, where the training data are Pile-NER and Sky-NER respectively. Average zero-shot results of evaluated benchmarks are illustrated. The performance does not necessarily improve as the data increases.]

      Preliminary study of data efficiency: a preliminary study is conducted to evaluate the efficiency of the IT dataset in targeted distillation for open NER, by exploring the impact of data at various sizes: [0.5K, 1K, 5K, ...]. Vanilla IT is used for this study.

      Conclusions drawn: - A small amount of data can already surpass ChatGPT's performance. - Results improve as the data size grows (up to 10K or 20K), but begin to decline and then level off at a certain point as the data keeps growing toward 50K. Recent studies on IT data selection also find that small, limited-size datasets can be superior.

    8. Training data: For English, we use the training data Pile-NER released by Zhou et al. (2024). For Chinese, we use the training data Sky-NER constructed in this paper, as described in Section 3.2. We use LoRA (Hu et al., 2021) to train models. Retrieval: We adopt GTE-large (Li et al., 2023) to generate text embeddings and set k = 2 in the main experiments. Evaluation: We mainly focus on zero-shot evaluation. For English, we adopt the benchmarks CrossNER, MIT-Movie and MIT-Restaurant, following Zhou et al. (2024). For Chinese, we collect eight benchmarks across diverse domains, details of which are in Appendix D. We report the micro-F1 value

      Experiments: - LLM backbones: LLaMA-3-3B and Qwen-1.5-7B. - Datasets: For English, Pile-NER is used; for Chinese, Sky-NER, built by the authors themselves, is used. LoRA is used when training the LLMs. - Retrieval model: GTE-large is used to generate sentence embeddings, and the number of similar examples retrieved is 2. - Evaluation: focuses on zero-shot evaluation.
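Micro-F1, the value reported in the evaluation above, pools true positives, false positives and false negatives over all samples before computing precision and recall. A sketch, assuming entities are compared as (span, type) pairs:

```python
def micro_f1(gold_sets, pred_sets):
    """Micro-averaged F1 over entity mentions, pooled across all samples.
    Each element of gold_sets / pred_sets is a set of (span, type) pairs."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)   # predicted and correct
        fp += len(pred - gold)   # predicted but wrong
        fn += len(gold - pred)   # missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [{("Paris", "location"), ("IBM", "organization")}, {("Alice", "person")}]
pred = [{("Paris", "location")}, {("Alice", "person"), ("Bob", "person")}]
# Pooled counts: tp = 2, fp = 1, fn = 1, so precision = recall = 2/3.
```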

    1. 五彩 is currently free to use; it will probably move to subscription pricing later, and I doubt there will ever be a one-time purchase option. But I fully support paying for tools that work well (everyone needs to eat, after all), as long as the price is reasonable.


    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      I like the algorithm social media platforms use because it shows me content that I like to see. I have always wondered how social media sites make money from ads; anytime I get an ad on any platform, I skip it if I can.

    2. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      Social media achieved this goal long ago, as this generation is on their phones all day. For example, every day when I check my screen time, it's over 8 hours, and 70% of that time is spent on TikTok. Through data mining, the app has figured out what phase of life I am in, and every TikTok I see is relatable, so I feel a connection with it. For example, if someone goes through a break-up, their whole FYP will be filled with TikToks about break-ups, about someone who went through the same thing, or about something comforting, keeping them hooked. As for me, whatever I am going through in my life, it's like my TikTok knows all of it and shows exactly matching posts. This makes me think about how data mining may be used to extract my conversations with my friends, and how what I like and repost depending on my mood is also being tracked.

    1. France's "Minister for the Ecological Transition", Agnès Pannier-Runacher, is threatening to resign unless the current budget plan allocates more funds for climate adaptation and climate protection than currently foreseen. Agnès Pannier-Runacher belongs to the left wing of the Macron camp. The new French government is conservative in character and depends on toleration by the far-right Rassemblement National.

    1. Platforms also collect information on how users interact with the site. They might collect information like (they don't necessarily collect all this, but they might): when users are logged on and logged off; who users interact with; what users click on; what posts users pause over; where users are located; what users send in direct messages to each other.

      I find it scary that these platforms monitor every move we make on their sites, especially their checking our direct messages with others. Our direct messages aren't as private as we think if these platforms are sitting there collecting this data.

    1. Who made the water, the raft, the trinity of Earth-Creators? Like many California creation epics, the Maidu account seems to begin in the middle of the story. Mysteriously, elements of the world seem to have always been present, their existence apparently beyond question or speculation.

      This creation story is interesting to me because it makes me wonder if the earth is being depicted as the "god" of the story. In most of the creation stories I am familiar with, the "god" of the story is the only thing present at the beginning, and its existence is never really questioned. Earth Initiate does not appear to be an all-powerful being in this story, so I'm curious whether a "god" was present in their beliefs or not.

    1. The standard deviation is the average deviation from the mean.

      Is this statement wrong, given that it was said: intuitively, the standard deviation is something like the average deviation (in absolute values) from the mean, but because of the squaring and the conversion back via the square root it is not quite the same?
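The distinction raised in the note can be checked numerically: because squaring weights large deviations more heavily, the standard deviation generally differs from the mean absolute deviation, so "average deviation from the mean" is only an intuition, not an exact definition. A small sketch (the sample values are chosen purely for illustration):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    """Population standard deviation: square root of the mean squared deviation."""
    m = mean(xs)
    return math.sqrt(mean([(x - m) ** 2 for x in xs]))

def mean_abs_dev(xs):
    """Mean absolute deviation from the mean."""
    m = mean(xs)
    return mean([abs(x - m) for x in xs])

data = [2, 4, 4, 4, 5, 5, 7, 9]   # mean is 5
# std_dev(data) = 2.0, but mean_abs_dev(data) = 1.5: related, yet not the same.
```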

    1. Many New York lawyers, including prominent prosecutors, support a resolution to criminally prosecute the major oil companies. The firms are accused of having sold fossil fuels for decades without disclosing or taking into account the dangers known to them. What is demanded is a charge of reckless endangerment of human life, for which there is no need to prove that the companies caused the deaths of specific people. https://www.theguardian.com/us-news/2024/oct/17/new-york-big-oil-fueling-climate-disasters

    1. The stress on the world's water systems will mean that by 2030 the demand for water will be 40% higher than the supply. The report of the Global Commission on the Economics of Water finds that, without radical countermeasures, half of the world's food production is at risk over the coming 25 years. Despite the interconnectedness of global water resources, water is not yet managed as a global common good. https://www.theguardian.com/environment/2024/oct/16/global-water-crisis-food-production-at-risk

      Report: https://economicsofwater.watercommission.org/

    1. In its latest report, the International Energy Agency #IEA notes, among other things, that extreme weather events driven by global heating increasingly endanger energy security. It calls for substantially higher investment, on the one hand in energy grids and storage, and on the other in infrastructure in the most energy-poor countries. https://taz.de/Internationale-Energieagentur-warnt/!6043317/

    1. Never before has the CO2 concentration in the atmosphere risen as sharply as in the past year, namely by 3.37 parts per million (ppm). The concentration now stands at 422 ppm. The main cause of this increase was the very low CO2 uptake by ocean and land sinks. https://taz.de/Hiobsbotschaft-fuers-Klima/!6040258/

    1. Summary of the video [00:00:00] - [00:27:00]:

      This video presents five critical currents of urban sociology, focusing on social Catholicism and the work of Paul-Henri Chombart de Lauwe. It explores his influence and his contributions to French urban sociology.

      Highlights: + [00:00:00] Introduction of the five currents * Presentation of the method * Focus on social Catholicism * Importance of Chombart de Lauwe + [00:01:09] Life and context of Chombart de Lauwe * Aristocratic and Catholic origins * Evolution toward left-wing ideas * Training as an anthropologist + [00:04:44] The relationship between the Church and the city * Rupture after the French Revolution * The Church's retreat into the countryside * The Church's return to the cities in the 20th century + [00:09:55] Chombart's founding works * Publication of "Paris et l'agglomération parisienne" * Importance of maps and illustrations * Study of working-class neighborhoods + [00:12:00] The concept of the city as a matrix * Culture and space as inseparable * Impact of the destruction of working-class neighborhoods * Transformation of social and cultural practices

      Summary of the video [00:27:04] - [00:57:07]:

      This video explores the critical currents of urban sociology, focusing on tenants' associations, urban Marxism, and urban social movements. It examines how these currents influenced urban policy and urban transformations.

      Highlights: + [00:27:04] Tenants' associations * Creation of community services * Importance of collective laundries * Tenants' committees to improve daily life + [00:29:29] Urban Marxism * Application of Karl Marx's theories to urban transformations * Influence of the French Communist Party * Rapid disappearance of urban Marxism after 1982 + [00:35:30] The city as a market * Urban transformation for financial gain * Influence of the large construction (BTP) groups * Surplus value generated by housing and infrastructure + [00:44:02] The city as a system * Interdependence of local authorities, the state and private groups * State monopoly capitalism * Management of urban services by private groups + [00:47:02] Urban social movements * Mobilizations for collective services * Demands for better housing and facilities * No convergence with the workers' movement toward a total revolution

      Summary of the video [00:57:12] - [01:23:18]:

      This video presents a critical analysis of urban sociology through the work of Henri Lefebvre, with an emphasis on his concept of the "right to the city".

      Highlights: + [00:57:12] Introduction to Henri Lefebvre * An atypical intellectual of the Trente Glorieuses * Author of 80 books on various subjects * Known for his work on Marxism and everyday life + [01:01:00] Works on the city * Seven books on the urban question * "Le droit à la ville" and "La révolution urbaine" * Importance of the city as a place of events + [01:10:00] The concept of the "right to the city" * The title of a book and a powerful idea * Often misinterpreted by local elected officials * Based on the idea that the city is the place of events + [01:15:00] Lefebvre's definition of the city * The city as the place where events occur * Importance of events in the making of society * Examples of micro-events and major events + [01:19:00] Events and urbanism * Events can create temporary cities * The example of techno festivals * Impact of events on the perception of the city

      Summary of the video [01:23:20] - [01:50:50]:

      This video explores the critical currents of urban sociology, focusing on events and the rights of the inhabitants of the urban peripheries.

      Highlights: + [01:23:20] Local events and their importance * Local events attract people from various regions * They contribute to community life * They are often underestimated compared to the large urban centers + [01:25:02] The right to the city according to Lefebvre * Equal recognition of the inhabitants of the peripheries * Importance of local events for urban dignity * Critique of the current practices of elected officials + [01:31:02] The CERFI and its contributions * An independent collective led by Félix Guattari * Influence of Michel Foucault and Lacanian psychoanalysis * Original, non-academic publications + [01:39:01] The city as a disciplinary apparatus * Historical exploration of road networks and city plans * Impact of industrial capitalism on urbanism * Analysis of mining towns and the grands ensembles + [01:46:42] Deterioration and territorialization * The concept of deterritorialization * Psychiatric hospitalization in an open setting * Influence on the construction of the French new towns

      Summary of the video [01:50:52] - [02:11:22]:

      This part of the video explores critical approaches in urban sociology, focusing on sector-based psychiatry and urban semiology.

      Highlights: + [01:50:52] Psychopolis and alternatives * The problem of managing the mentally ill * The "psychopolis" proposal * An alternative solution with F3 apartments + [01:54:00] Sector-based psychiatry * Implementation in the new towns * Housing patients in apartments * Follow-up by health professionals + [01:57:03] Urban semiology * The study of meaning and urban experience * Critique of urban planners and state powers * Importance of representations and perceptions + [02:05:27] Research directions in urban semiology * The city as a language * Sensory appropriation of spaces * The relationship between urban appropriation and psychic history

    1. Many CO2 offset deals with Chinese companies offering certificates for alleged "upstream emission reduction" are presumably fraudulent. By counting such alleged reductions, Austrian companies such as OMV, Shell Austria and MOL Austria tried to meet the mandated share of 13% renewable energy in their products. The public prosecutor's office has so far not pursued a corresponding complaint filed by the Austrian climate protection ministry, but the evidence is clear. https://www.derstandard.at/story/3000000239520/millionen-betrugsverdacht-rund-um-co2-ausgleichsgeschaefte-mit-china-weitet-sich-aus

    1. Summary of the video [00:00:05] - [00:28:55]:

      This video presents the first elements of critical urban sociology, focusing on the work of various urban sociologists of the Trente Glorieuses.

      Highlights: + [00:00:05] Introduction to critical urban sociology * A quick overview of the works * Importance of urban and social transformations * The context of the Trente Glorieuses + [00:02:28] Sociological theories of the 1950s-1980s * The current validity of the theories * The importance of a critical eye * Examples of Marxist-leaning theories + [00:05:52] Urban and intellectual revolution * Brutal urban transformations * The intellectual effervescence of the Trente Glorieuses * Public debates led by intellectuals + [00:12:13] Characteristics of critical urban sociology * Young, politically engaged sociologists * A radical and theoretical sociology * Importance of the intellectual context + [00:20:38] Critique of urbanism and planning * Opposition to state urbanism * Critique of centralized planning * The engaged, sometimes aggressive tone of the sociologists

      Summary of the video [00:28:57] - [00:35:29]:

      This video presents the key elements of critical urban sociology, with an emphasis on urban planning, the engagement of sociologists, and the contradictions inherent in the discipline.

      Highlights: + [00:28:57] Critique of urban planning * Segmentation of society * Specialization of tasks * Loss of the total reality + [00:30:29] The engagement of sociologists * The importance of getting involved * Understanding social movements * Being with the social actors + [00:31:31] Contradictions in sociology * Unconscious actors vs. engagement * Contradictions within the works * The example of Manuel Castells + [00:33:01] Evolution of urban sociology * The transition from scientism to heroism * A prolific output of works * Differences from contemporary sociology

    1. Summary of the video [00:00:00] - [00:09:38]:

      This video deals with the second urban surge in France, which took place after the Second World War and lasted about fifty years. It examines the dynamics of mass urbanization and the factors that contributed to this growth.

      Highlights: + [00:00:00] The beginning of the second urban surge * Begins after the Second World War * Lasts about fifty years * Sends large populations toward the cities + [00:00:40] Urban growth according to INSEE * Data from 1962 to 1990 * Observations on urban cores and peri-urban rings * Significant variation in the percentage of the population + [00:02:27] Generalized urbanization * Affects all agglomerations * Slowdown after the Trente Glorieuses * Peri-urbanization keeps growing + [00:05:00] Factors behind the urban surge * Massive industrialization * The baby boom * The wars of decolonization + [00:07:01] Housing problems * Destruction of housing during the war * A social housing deficit * The appearance of shantytowns and substandard housing

    1. Summary of the video [00:00:04] - [00:20:21]:

      This video explores urban transformations in France during the Trente Glorieuses, a period of economic growth and rapid modernization between 1945 and 1975. It highlights the social, economic and cultural changes that profoundly reorganized urban space.

      Highlights: + [00:00:04] Introduction to the Trente Glorieuses * A term popularized by Jean Fourastié * The rapid modernization of France * A comparison of two villages before and after this period + [00:04:35] The transformation of Rennes * Urban expansion with new neighborhoods * Replacement of agricultural areas by residential estates * Development of urban infrastructure + [00:11:02] Social and territorial dynamics * Gentrification of city centers * Rapid creation of the grands ensembles * Peri-urbanization and the spread of single-family houses + [00:16:46] The central role of the state * Steering of urban transformations * Absence of local planning powers * Economic growth supported by the state + [00:19:52] Impact on society * An extremely low unemployment rate * A complete transformation of society and the economy * The importance of the Trente Glorieuses for understanding today's issues

    1. Summary of the video [00:00:00] - [00:28:48]:

      The video explores the evolution of the banlieues in the 19th century, focusing on three main ideas: the invention of the working-class banlieues, their role in solving political problems, and the urban transformation of the period.

      Highlights: + [00:00:00] Introduction and objectives * Three main ideas * The invention of the banlieues between 1880 and 1900 * Urban transformation to solve political problems + [00:02:01] Urban expansion * The city outgrows its fortifications * Urban sprawl and peri-urbanity * A decoupling between the reality and the image of cities + [00:07:11] The urban surge of the 19th century * Continuous urbanization since the 11th century * Two major accelerations: 1850-1900 and 1950-1980 * Concentration of the population in the large agglomerations + [00:12:06] Urbanization and industrialization * Links with massive industrialization * Immigration and difficult living conditions * Segregation and the insalubrity of working-class neighborhoods + [00:21:00] Reforms and urban sociology * Reformers and sociologists such as Le Play and Villermé * Field surveys and political initiatives * The examples of Manchester and working-class living conditions

      Summary of the video [00:28:50] - [00:56:51]:

      This part of the video explores the evolution of cities in the 19th century, in particular social segregation and urban transformations. It highlights the difficult living conditions of workers and the urban revolutions that resulted.

      Highlights: + [00:28:50] Social segregation in Manchester and Paris * Workers confined to industrial districts * Separation of social classes * Difficult living conditions + [00:31:00] Urban revolutions in the 19th century * Emerging socialist and Marxist theories * Workers' revolts in the cities * The importance of revolutions for urban history + [00:39:00] Transforming cities to solve social problems * Destruction of working-class city centers * Creation of working-class banlieues * Improvement of living conditions + [00:45:00] The role of Georges-Eugène Haussmann in Paris * Expropriation and urban transformation * Creation of grand boulevards and green spaces * Impact on the working-class population

      Summary of the video [00:56:53] - [01:20:57]:

      This video explores the evolution of the Parisian banlieues in the 19th century, focusing on the social and economic transformations that led to their creation.

      Highlights: + [00:56:53] Population density and gentrification * Falling density in the central districts * Gentrification of the city centers under Haussmann * Displacement of workers toward the peripheries + [01:00:00] The invention of the banlieue * Displacement of workers beyond the fortifications * Difficult living conditions in the shantytowns * The emergence of new communes to house the workers + [01:04:00] Economic and industrial logics * Relocation of polluting companies to the banlieue * Companies' need for more space * Creation of housing near the new industrial zones + [01:09:00] Censuses and demographic growth * The first census of the banlieue in 1891 * Rapid growth of the working-class populations * The importance of workers in the new communes + [01:13:00] Urban secession and the specialization of space * Separation of social classes in urban space * Concentration of economic and cultural activities in the center * Negative perception of the banlieues and their inhabitants

    1. A study demonstrates for the first time, systematically and across many different countries, the influence of droughts and increasing aridity on internal migration. Those who migrate are above all members of middle-income groups, who have the resources needed to do so. Climate-driven migration contributes significantly to urbanization. https://www.derstandard.at/story/3000000240733/mehr-binnenmigration-durch-klimawandel

      Study: https://www.nature.com/articles/s41558-024-02165-1.epdf?sharing_token=zQaNIIlE0D5VSVhiEeWSRdRgN0jAjWel9jnR3ZoTv0N5BsSsWDa3LuiqvifrZZqQ9PHrGw0G8JwyXN4l5XLwHLyMEPxhNDlwsm_I7HyLLBL-PIsL8iWYBirASOxKiB3OvY5CyEDs2OqdYzcj0HqqPZGigOJmwF7H97HsKHpUv2tEjBvnMf7i4DKmBH78sfFsx7iymr6A4PFpKfrKe6IDSxkyQgZFpa8kBrt8lM6HkbU%3D&tracking_referrer=www.derstandard.at

    1. Résumé de la vidéo [00:00:01][^1^][1] - [00:29:22][^2^][2]:

      Cette vidéo explore le concept de la sécession urbaine, en se concentrant sur les communautés fermées et les espaces privés qui remplacent les espaces publics.

      Highlights:

      + [00:00:01][^3^][3] Gated communities
        * Historical development in South America
        * Current worldwide presence
        * Examples in France and elsewhere
      + [00:01:22][^4^][4] Evolution of urban spaces
        * Closing-off of residential complexes and apartment buildings
        * Increased security with cameras and gates
        * Reduced public access
      + [00:02:46][^5^][5] The example of Cœur Défense
        * A building at La Défense, Paris
        * Senior executives and their way of life
        * Exclusive services for employees
      + [00:10:00][^6^][6] Impact on society
        * Isolation of executives from social problems
        * Separation of social classes
        * Disappearance of public spaces
      + [00:23:00][^7^][7] The social question vs. the urban question
        * Definition and evolution of the two concepts
        * Importance of territories in social belonging
        * Diverging sociological perspectives

      Video summary [00:29:25][^1^][1] - [00:51:12][^2^][2]:

      This part of the video explores social and territorial dynamics in modern cities, with an emphasis on gentrification, the peri-urban population, and territories of relegation. It also addresses territorial conflicts and urban policies.

      Highlights:

      + [00:29:25][^3^][3] Gentrification and urban security
        * Presence of courteous security guards
        * Pacified urban spaces
        * A diverse gentrified population
      + [00:30:47][^4^][4] The peri-urban way of life
        * Owners of houses with gardens
        * Stable employment and two cars
        * Professional diversity but a homogeneous way of life
      + [00:31:37][^5^][5] Territories of relegation
        * Marginalized populations in the large housing estates
        * High unemployment rates
        * Social hardship and stigmatization
      + [00:34:26][^6^][6] Territorial conflicts (NIMBY)
        * Rejection of local construction projects (high schools, shopping centers)
        * Protection of social homogeneity
        * Discreet but influential mobilizations
      + [00:38:10][^7^][7] Urban policies and social tensions
        * Increased power of local authorities
        * Urban planning as a tool for managing tensions
        * Growing importance of the urban question

    1. Video summary [00:00:00][^1^][1] - [00:29:05][^2^][2]:

      This video presents a course in critical urban sociology given by Éric Breton, an associate professor (maître de conférences) in the sociology department. It addresses urban mobility, urban theories, and the social production of territories.

      Highlights:

      + [00:00:00][^3^][3] Introduction and presentation
        * Éric Breton introduces himself
        * He explains his areas of expertise
        * He introduces the urban sociology course
      + [00:01:10][^4^][4] The different forms of mobility
        * Physical mobility (cycling, walking, driving)
        * Residential mobility (frequent moves)
        * Virtual mobility (internet, media)
      + [00:06:00][^5^][5] The social production of territories
        * The city is produced by social transformations
        * The importance of social conflicts in urban production
        * Historical examples of urban transformations
      + [00:12:00][^6^][6] Key periods of urban history
        * 1850-1900: invention of the suburbs
        * 1950-1980: creation of the large housing estates
        * 2000-2020: peri-urbanization and gentrification
      + [00:22:00][^7^][7] Urban theories
        * Urban Marxism (Manuel Castells)
        * The theories of Michel Foucault
        * Contributions of Chombart de Lauwe and Henri Lefebvre

      Video summary [00:29:10][^1^][1] - [00:49:12][^2^][2]:

      This video deals with critical urban sociology, focusing on segregation and urban secession. It explores how these phenomena influence social structure and integration in cities.

      Highlights:

      + [00:29:10][^3^][3] Definition of segregation
        * Assignment of a territory to a social group
        * Ethnic, religious, geographical, and gender-based segregation
        * The example of the city of Rennes
      + [00:34:01][^4^][4] Shared public space
        * Rue Le Bastard in Rennes as a shared space
        * Importance of public space for social mixing
        * The city as a place of integration of diversity
      + [00:37:00][^5^][5] Evolution toward urban secession
        * Shrinking of shared spaces
        * Impact on society and the rise of exclusionary discourse
        * Jacques Donzelot's theory of urban secession
      + [00:40:05][^6^][6] Examples of urban enclosure
        * Residential enclosures in Marseille
        * Gated communities such as Pont Royal
        * Impact of fear and insecurity on urbanism

    1. Fiscal

      This word means relating to government revenue, especially taxes.

    2. The 1935 Social Security Act provided for old-age pensions, unemployment insurance, and economic assistance for both the elderly and dependent children.

      This was the creation of Social Security Numbers, right? Also, how did it allow the elderly to retire?

    3. At the time of the stock market crash, southerners were already underpaid, underfed, and undereducated.

      Out of context, but were the farmers/southerners still able to have pets, like dogs? I know that farmers usually have at least 1 dog? Or a Cat?

    4. In 1932, nearly 2,300 banks collapsed, taking credit, personal deposits, and people’s life savings with them.

      I would be so angry if this happened to me. Honestly, this would really suck. What if a teenager was saving for college? I would be so upset.

    1. It is likely that you have more in common with that reality TV star than you care to admit. We tend to focus on personality traits in others that we feel are important to our own personality. What we like in ourselves, we like in others, and what we dislike in ourselves, we dislike in others (McCornack, 2007). If you admire a person’s loyalty, then loyalty is probably a trait that you think you possess as well. If you work hard to be positive and motivated and suppress negative and unproductive urges within yourself, you will likely think harshly about those negative traits in someone else. After all, if you can suppress your negativity, why can’t they do the same? This way of thinking isn’t always accurate or logical, but it is common.

      To me this has never even registered in my head. I am going to focus on this the next time my girlfriend is watching reality TV. I know that I tend to root for the underdogs in most scenarios. I want the one who was counted out to win. I wonder how that relates to my personality. I know I always admire the extroverts, but I felt like that was because I am not very extroverted and wanted to be like them. Interesting self-observation for me to try in the coming days.

    2. This simple us/them split affects subsequent interaction, including impressions and attributions. For example, we tend to view people we perceive to be like us as more trustworthy, friendly, and honest than people we perceive to be not like us (Brewer, 1999).

      I am currently working on a construction site here in Boise. I am from Tennessee and all my coworkers are from Kentucky. One day a coworker told me the superintendent didn't like me. Obviously confused, since we had only been working together for 3 days, I asked why. My coworker told me that simply because I am not from Kentucky, he did not trust me or think I was a capable worker because of where I grew up. I know it's not fair, but the only thing I can do is prove him wrong and help him recognize that his inherent bias is not always correct.

    3. First impressions are enduring because of the primacy effect, which leads us to place more value on the first information we receive about a person. So if we interpret the first information we receive from or about a person as positive, then a positive first impression will form and influence how we respond to that person as the interaction continues.

      This bit of information reminds me of a few studies and lawsuits that have occurred in the last decade or two regarding names on job applications. The inquiries focused on the concept that someone's name being less culturally familiar to a recruiter would negatively bias an applicant's chances of getting to the interview stage. This effect was studied using identical resumes with different names associated to measure employer responses. This seems like a great example of the primacy effect making biases that are sometimes difficult to identify more obvious.

    1. In conclusion, it is important that primary care physicians get well versed with the future AI advances and the new unknown territory the world of medicine is heading toward.

      The conclusion summarizes how physicians should get used to AI because it will soon be a big part of their work.

    2. Some studies have been documented where AI systems were able to outperform dermatologists in correctly classifying suspicious skin lesions.[18] This because AI systems can learn more from successive cases and can be exposed to multiple cases within minutes, which far outnumber the cases a clinician could evaluate in one mortal lifetime.

      This shows that AI can take jobs away as well as make them better.

    3. In conclusion, the physicians who used documentation support such as dictation assistance or medical scribe services engaged in more direct face time with patients than those who did not use these services

      This shows that physicians using AI save more time and are able to interact with patients more.

    4. Primary care physicians can use AI to take their notes, analyze their discussions with patients, and enter required information directly into EHR systems.

      This shows another way physicians use AI in their exams

    5. The Da Vinci robotic surgical system developed by Intuitive surgicals has revolutionized the field of surgery especially urological and gynecological surgeries.

      This paragraph shows how AI is being used in surgery. Robots are mimicking surgeons to perform surgery.

    6. Radiology is the branch that has been the most upfront and welcoming to the use of new technology.

      This paragraph talks about how Radiology is using AI. Radiology uses AI to help identify abnormal and normal scans more quickly, especially in busy hospitals with fewer staff.

    7. A lot of AI is already being utilized in the medical field, ranging from online scheduling of appointments, online check-ins in medical centers, digitization of medical records, reminder calls for follow-up appointments and immunization dates for children and pregnant females to drug dosage algorithms and adverse effect warnings while prescribing multidrug combinations.

      This shows the different ways AI is being utilized in medicine.

    1. Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women

      This can be highly problematic, as the employees would essentially be logged into your account and could even view posts set to the "only-me" privacy setting. This reminds me of how someone I know was mistreated by their manager and had a dispute over their wages; right before handing in her resignation letter, she leaked the company's database by posting it on Twitter, including the budget and the balance sheet.

    1. When Elon Musk purchased Twitter, he also was purchasing access to all Twitter Direct Messages

      This can be concerning, as we tend to use social media sites like Instagram to chat with our friends and family, which includes a lot of personal information that we wouldn’t want anyone else to know. Now that I have read this, I will think twice before saying something very personal over social media messages and will use my phone’s SMS instead, because on social media there is always a third party tracking your actions, which feels like an invasion of privacy.

    1. Listening to people who are different from us is a key component of developing self-knowledge. This may be uncomfortable, because our taken-for-granted or deeply held beliefs and values may become less certain when we see the multiple perspectives that exist.

      Listening to the thoughts and opinions of people with differing cultures or political opinions with the intention to understand, instead of to respond, is such a powerful tool. It can help dismantle prejudices, make you a better advocate for your own values, and/or help you practice giving people room to communicate what they are really intending to say rather than giving preloaded responses. I think most people would benefit greatly from engaging in this kind of practice on a regular basis.

    1. Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:

      This is a great example of using social media for the right cause and explaining how the context matters. It shows that ethical trolling can be done to get social justice for those who have been wronged, forcing such a big company to act right. It's interesting to see how the company's decision backfired using trolling.

    1. Self-discrepancy theory states that people have beliefs about and expectations for their actual and potential selves that do not always match up with what they actually experience (Higgins, 1987).

      I have experienced this kind of expectation-to-reality relationship in some of my personal relationships. These people had an idea of what they could be if they could just stop being inadequate, an idea that only served to generate shame and guilt. Often there was never any real grounding for the things they expected of themselves, but they felt the weight of those expectations as if they were an undeniable reflection of their potential. I am sure much of this is related to external social expectations that are later internalized. These expectations seem to rarely serve as drivers for someone to be more productive; more often they seem to break people down and make them overall less likely to engage with life.

    2. If a man wants to get into better shape and starts an exercise routine, he may be discouraged by his difficulty keeping up with the aerobics instructor or running partner and judge himself as inferior, which could negatively affect his self-concept.

      One of our recent lectures identified the importance of an improvement mindset. Tools like these could help avoid developing unrealistic expectations that ultimately dissuade attempts at self improvement. They could provide an interpretive lens to contextualize feedback in ways that are more constructive.

    1. Long waitlists for on-campus child care impacted student parents’ ability to secure stable and affordable child care.

      makes it difficult for students to attend classes on campus

    2. Student parents identified the need for establishing a physical space on campus that worked for them and their families

      would allow lots of parents to seek help with their assignments and other questions if their kids were welcome in the space. As a mom, I would find this extremely helpful.

    3. “I didn’t know that [the college] was giving out hotspots or the internet and then computers. I had to go buy my own computer, which I put on my credit card and I’m still paying for it as of right now. And then internet, I’m barely hanging in there to pay for that because the school gives internet, but it was so slow that me and my son couldn’t be on the internet at the same time.”

      sad that many students, especially student parents, have to go through this.

    4. Some community college staff members identified a lack of campus-wide understanding and awareness of the needs of student parents.

      staff should be more aware of the needs of student parents seeking help to achieve their goals

    5. Current statewide and federal data systems do not adequately record the number of student parents in higher education

      thus, student parents don't receive all the help they need.

    6. Meeting the financial demands of monthly rent, child care, children’s clothing, college expenses, utilities, and other set expenses is not attainable for most without the use of student loans

      Student parents seeking a higher education should receive more help as they are trying to better themselves and the lives of their children.

    7. Often, college administrators are not empathetic to the unique needs of student parents, nor are their institutions equipped with the financial resources to help student parents navigate their journey.

      making it harder for those parents seeking a higher education

    8. The needs and demands of student parents matter in the higher education landscape as more than one in five college students have a dependent and do not earn college degrees at the same rate as their childless peers

      There should be more resources for student parents so they can achieve their goals. They are just as important as the average student.

    1. Federal funding for campus child care is limited and favors 4-year institutions

      the majority of student parents attend community college, so why is most of the funding going to 4 year universities?

    2. 11 percent of single mothers say that they go to school full time, work full time, and care for dependents more than 30 hours a week,

      finding balance as a mother is so difficult and at times you feel burnt out.

    3. Figure 1. Proportion of Community Colleges with On-Campus Child Care, Nationally, 2001-2008

      the chart shows that on campus childcare has decreased, yet the demand for higher education has increased.

    4. Three-quarters of single parents in college are women.”

      women single parents are trying to better themselves and earn a higher education

    5. over 80 percent reported that the availability of child care was very important in the decision to attend college

      childcare is an important factor for student parents, without childcare they wouldn't be able to attend school.

    6. 1.7 million (27 percent) are parents.

      a large share of college students are parents

    1. For something unexpected to become salient, it has to reach a certain threshold of difference. If you walked into your regular class and there were one or two more students there than normal, you may not even notice. If you walked into your class and there was someone dressed up as a wizard, you would probably notice.

      I can see this effect happening where you may not think expectation would matter very much. There are many times when someone has stopped to say something to me, but the content of the statement is outside of my current mode of thinking. A perfectly understandable statement can become completely unintelligible purely because the context of the message did not prepare the receiver to comprehend it. If something like this can happen in the case of straightforward comprehension, the effect must be exacerbated by the complexity or obscurity of the intended communication.

    1. Chapter 1 Introduction

      Many Europeans thought that Africa’s history was not important. They argued that Africans were inferior to Europeans, and they used this to help justify slavery. Africa was by no means inferior to Europe. The people who suffered the most from the Transatlantic Slave Trade were civilized, organized, and technologically advanced peoples, long before the arrival of European slavers. Egypt was the first of many great African civilizations, existing for about 2,000 years before Rome was built. It lasted thousands of years and achieved many magnificent and incredible things in the fields of science, mathematics, medicine, technology and the arts. In the west of Africa, the kingdom of Ghana was a vast empire that traded in gold, salt, and copper between the ninth and thirteenth centuries. The kingdoms of Benin and Ife were led by the Yoruba people and sprang up between the 11th and 12th centuries. The Ife civilization goes back as far as 500 B.C. and its people made objects from bronze, brass, copper, wood, and ivory. From the thirteenth to the fifteenth century, the kingdom of Mali had an organized trading system, with gold dust and agricultural produce being exported. Cowrie shells were used as a form of currency and gold, salt and copper were traded. Between 1450–1550, the Songhai Kingdom grew very powerful and prosperous. It had a well-organized system of government and a developed currency, and it imported fabrics from Europe. Timbuktu became one of the most important places in the world as libraries and universities were meeting places for poets, scholars, and artists from around Africa and the Arab World.

      Figure 1.1

      Forms of slavery existed in Africa before Europeans arrived. However, African slavery was different from what was to come. People were enslaved as punishment for a crime, payment for a debt or as a prisoner of war; most enslaved people were captured in battle.
In some kingdoms, temporary slavery was a punishment for some crimes. In some cases, enslaved people could work to buy their freedom. Children of enslaved people did not automatically become slaves.

Chapter Objectives

After this chapter, students will be able to:

* Explain the significance of the Middle Passage
* Identify the stages of the Trans-Atlantic Slave Trade
* Use primary and interactive sources to analyze the beginnings of the slave trade and the Middle Passage
* Define the economic, moral, and political ideologies of implementing and justifying the slave trade

Guiding Questions

Directions: As you engage with the content in this chapter, keep the following questions in mind. Look for the information that provides answers to these questions and deepens your understanding.

* How did slavery become synonymous with African enslavement?
* What were the routes of the first slave ships?
* What stimulated the slave trade?
* What makes African slavery different than other forms of slavery?
* Resistance was an important part of life for enslaved people. What were some of the ways in which they resisted being enslaved?

Figure 1.2 Interactive Map

Key Terms, People, Places, and Events: Trans-Atlantic Slave Trade; Benin and Ife; Songhai Kingdom; Barracoons; Elmina; Nautical technology; Bartolomeu Dias; Christopher Columbus; Hispaniola; Guanches; Tainos; Fernando II of Aragon and Isabel I of Castile; Laws of Burgos and Laws of Granada; Emperor Charles V; Nicolas Ovando; Indies; Enriquillo’s Revolt; Quobna Ottobah Cugoano; Point of No Return; Middle Passage; Olaudah Equiano; Thumb screws; Zong; The Dolben Act

Section I: Introducing the Slave Trade and New World Slavery

Introduction to Reading #1: Interesting Narrative of the Life of Olaudah Equiano

The personal accounts of enslaved individuals such as Olaudah Equiano are critical in understanding the harsh realities of the slave trade and the Middle Passage, as well as demonstrating the ways in which captive Africans resisted their new station in life and fought for abolition.
Olaudah Equiano (c. 1745–1797) was an African-born (Kingdom of Benin) writer and abolitionist who documents in his memoir his journey from being captured at eleven years old, through the Middle Passage, and his work throughout the British Atlantic World as an explorer and merchant before settling in Europe as a free man, converting to Christianity, and fighting for the abolition of the slave trade. The following excerpt comes from his memoirs, published in 1789.

Reading 1.1: Olaudah Equiano Describes the Middle Passage, 1789

Olaudah Equiano

Olaudah Equiano, Selection from “The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African, written by Himself,” The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African, written by Himself, pp. 51–54. 1790.

At last, when the ship we were in had got in all her cargo, they made ready with many fearful noises, and we were all put under deck, so that we could not see how they managed the vessel. But this disappointment was the least of my sorrow. The stench of the hold while we were on the coast was so intolerably loathsome, that it was dangerous to remain there for any time, and some of us had been permitted to stay on the deck for the fresh air; but now that the whole ship’s cargo were confined together, it became absolutely pestilential. The closeness of the place, and the heat of the climate, added to the number in the ship, which was so crowded that each had scarcely room to turn himself, almost suffocated us. This produced copious perspirations, so that the air soon became unfit for respiration, from a variety of loathsome smells, and brought on a sickness among the slaves, of which many died, thus falling victims to the improvident avarice, as I may call it, of their purchasers. This wretched situation was again aggravated by the galling of the chains, now become insupportable; and the filth of the necessary tubs, into which the children often fell, and were almost suffocated.
The shrieks of the women, and the groans of the dying, rendered the whole a scene of horror almost inconceivable. Happily perhaps for myself I was soon reduced so low here that it was thought necessary to keep me almost always on deck; and from my extreme youth I was not put in fetters. In this situation I expected every hour to share the fate of my companions, some of whom were almost daily brought upon deck at the point of death, which I began to hope would soon put an end to my miseries. Often did I think many of the inhabitants of the deep much more happy than myself; I envied them the freedom they enjoyed, and as often wished I could change my condition for theirs. Every circumstance I met with served only to render my state more painful, and heighten my apprehensions, and my opinion of the cruelty of the whites. One day they had taken a number of fishes; and when they had killed and satisfied themselves with as many as they thought fit, to our astonishment who were on the deck, rather than give any of them to us to eat, as we expected, they tossed the remaining fish into the sea again, although we begged and prayed for some as well we cold, but in vain; and some of my countrymen, being pressed by hunger, took an opportunity, when they thought no one saw them, of trying to get a little privately; but they were discovered, and the attempt procured them some very severe floggings.One day, when we had a smooth sea, and a moderate wind, two of my wearied countrymen, who were chained together (I was near them at the time), preferring death to such a life of misery, somehow made through the nettings, and jumped into the sea: immediately another quite dejected fellow, who, on account of his illness, was suffered to be out of irons, also followed their example; and I believe many more would soon have done the same, if they had not been prevented by the ship’s crew, who were instantly alarmed. 
Those of us that were the most active were, in a moment, put down under the deck; and there was such a noise and confusion amongst the people of the ship as I never heard before, to stop her, and get the boat to go out after the slaves. However, two of the wretches were drowned, but they got the other, and afterwards flogged him unmercifully, for thus attempting to prefer death to slavery. In this manner we continued to undergo more hardships than I can now relate; hardships which are inseparable from this accursed trade. – Many a time we were near suffocation, from the want of fresh air, which we were often without for whole days together. This, and the stench of the necessary tubs, carried off many. During our passage I first saw flying fishes, which surprised me very much: they used frequently to fly across the ship, and many of them fell on the deck. I also now first saw the use of the quadrant. I had often with astonishment seen the mariners make observations with it, and I could not think what it meant. They at last took notice of my surprise; and one of them, willing to increase it, as well as to gratify my curiosity, made me one day look through it. The clouds appeared to me to be land, which disappeared as they passed along. This heightened my wonder: and I was now more persuaded than ever that I was in another world, and that every thing about me was magic. At last we came in sight of the island of Barbadoes, at which the whites on board gave a great shout, and made many signs of joy to us.

https://youtu.be/PmQvofAiZGA

The Arrival of European Traders

During the fifteenth and sixteenth centuries, European traders started to get involved in the slave trade. European traders took interest in African nations and kingdoms, such as Ghana and Mali, because of their complex trading networks. Shortly after, traders became interested in trading in human beings, taking people from western Africa to Europe and the Americas.
Initially, this began on a small scale, but due to the slave trade it grew during the seventeenth and eighteenth centuries, as European countries conquered many of the Caribbean islands and much of North and South America. Europeans who settled in the Americas were attracted by the idea of owning their own land and not having to work for someone else. Convicts from Britain were sent to work on the plantations, but there were never enough. To satisfy the growing demand for labor, Europeans purchased African people. They wanted the enslaved people to work in mines and on tobacco plantations in South America and on sugar plantations in the West Indies. Millions of Africans were enslaved and forced across the Atlantic, to labor in plantations in the Caribbean and America. Once Europeans became involved, slavery changed, leading to generations of peoples being taken from their homelands and enslaved. Children whose parents were enslaved became slaves as well.

How Were They Enslaved?

The major means of enslaving Africans were warfare, raiding and kidnapping, though people were also enslaved through judicial processes, debt, as well as drought and famine in regions where rainfall was scarce. Violence was another means utilized to enslave people. Warfare was used as a means to capture people in the regions of the Senegambia, the Gold Coast, the Slave Coast (Bight of Benin) and Angola. Raiding and kidnapping seem to have dominated in the Bight of Biafra. Many captives were forced to travel long distances from the areas they called home to the coast, which meant there was an increase in the risk of death.

Slave factories, dungeons, and forts were erected along the coast of West Africa, housing captured Africans in holding pens (barracoons) awaiting passage to the New World. They were equipped with up to a hundred guns and cannons to defend European interests on the coast, by keeping competitors away. There were nearly one hundred castles spread along the coast.
The forts had the same simple design, with narrow windowless stone dungeons for captured Africans and fine residences for Europeans. The largest of these forts was Elmina. The fort had been fought over by the Portuguese, the Dutch and the British. At the height of the trade, Elmina housed 400 company personnel, including the company director, as well as 300 forts. The whole commerce surrounding the slave trade had created a town outside the castle, of about 1000 Africans. In other cases, the enslaved Africans were kept on board the ships, until sufficient numbers were captured, waiting perhaps for months in cramped conditions, before setting sail.

The Ethnic Groups of the Enslaved

The British traders covered the West African coast from Senegal in the north to the Congo in the south, occasionally venturing to take slaves from South-East Africa in present-day Mozambique. Many venues on the African Atlantic coast were more desirable to traders looking for the supply of enslaved people than others. This appeal was reliant on the level of support from the chieftains rather than topographical barriers or the demography of local populations. While some African rulers fought against the slave trade, other African rulers were willing participants, supplying European traders with the enslaved people they wanted. As the demand for African labor grew, some African traders began capturing other Africans and selling them to European traders. The Portuguese, French, and British often helped these rulers in wars against their enemies. African rulers had their own stake in the trade. Those who were willing to supply enslaved Africans became very rich and powerful, as well as strongly armed with guns from Europe. The number of wars increased, and they became more violent because of the European guns and weapons. Many Africans died for every enslaved person who was eventually sold. The enslaved Africans included a combination of ethnic groups.
However, after 1660, over half of the Africans captured and taken away by British ships came from just three regions—the Bight of Biafra, the Gold Coast, and Central Africa. Within the Bight of Biafra, two venues, Old Calabar on the Cross River and Bonny in the Niger Delta, were the major suppliers of the enslaved people boarding British ships. The top three ethnic groups among the enslaved Africans in the British slave trade were the Igbo from the Bight of Biafra, the Akan from the Gold Coast, and the Bantu from Central Africa.

The Portuguese Slave Trade in Africa

Up to the late medieval era, southern Europe constituted a significant market for North African merchants, who brought commodities like gold, as well as small numbers of slaves, in caravans across the Sahara Desert. During the early fifteenth century, advances in nautical technology permitted Portuguese sailors to travel south along Africa's Atlantic coast in search of a direct maritime route to gold-producing regions in West Africa. Founded in 1482 near the town of Elmina in present-day Ghana, São Jorge da Mina gave the Portuguese better access to sources of West African gold.

By the mid-1440s, a trading post had been established on a small island off the coast of present-day Mauritania. The Portuguese established similar trading "factories" with the goal of tapping into local commercial networks. Portuguese traders acquired captives for export along with numerous West African commodities such as ivory, peppers, textiles, wax, grain, and copper. They established colonies on previously uninhabited Atlantic African islands that would later serve as gathering areas for captives and commodities to be shipped to Iberia, and then to the Americas. By the 1460s, the Portuguese had begun colonizing the Cape Verde Islands (Cabo Verde). The Portuguese also encountered the islands of São Tomé and Príncipe around 1470, with colonization beginning in the 1490s.
These islands served as entrepôts for Portuguese commerce across western Africa. In 1453, the Ottoman Empire's capture of Constantinople (Istanbul), Western Europe's main source for spices, silks, and other luxury goods produced in the Arab world and Asia, added further incentive for European overseas expansion. In 1488, following years of Portuguese expeditions along western Africa's coastlines, the Portuguese navigator Bartolomeu Dias famously sailed around the Cape of Good Hope, opening European access to the Indian Ocean. By the end of the century, Portuguese merchants had begun to bypass Islamic commercial, political, and military control in North Africa and the eastern Mediterranean. A major outcome of Portuguese overseas expansion during this time was a sharp rise in Iberian access to sub-Saharan trade networks. Over the following century, Portugal's expansion into western Africa led Iberian merchants to recognize the economic opportunity of a widespread slave-trading business.

The Spanish and New World Slavery

Spain was the first to make widespread use of enslaved Africans as a labor force in the colonial Americas. After his 1492 voyage, with support from the Spanish Crown and roughly one thousand Spanish colonists, the Genoese merchant Christopher Columbus established the first European colony in the Americas on the island of Hispaniola. It has been reported that Columbus had previously traded in West Africa and had visited the Canary Islands, where the Guanches had been enslaved by the Spanish and exported to Spain. While Columbus's interests were mainly in gold, he recognized the Caribbean islanders' value as slaves. In early 1495, preparing to return to Spain, he loaded his ships with five hundred enslaved Taínos from Hispaniola; only three hundred survived the voyage.
The Spanish monarchs, Fernando II of Aragon and Isabel I of Castile, quickly cut his slaving activities short, while attempting to compensate for the gold that was not flowing in. Nevertheless, forced Amerindian labor became progressively vital to Spanish royal policies. These policies were contradictory in a number of ways. While the Spanish Crown intended to protect Amerindians from abuse, it also expected them to accept Spanish rule, embrace Catholicism, and become accustomed to a work regimen designed to make Spain's overseas colonies profitable. In 1501, the royals ordered Hispaniola's governor to return all property stolen from Taínos and to pay them wages for the labor they performed. Additional reforms were outlined in the Laws of Burgos (1512), and later in the Laws of Granada (1526); however, they were largely ignored by Spanish colonists. In the meantime, Spain's royals granted colonists dominion over Amerindian subjects, compelling Indigenous populations to perform labor. This was an adaptation of the medieval encomienda, a quasi-feudal system in which Iberian Christians who performed military service were authorized to rule people and oversee resources in lands taken from Iberian Muslims.

In spite of its opposition to the trans-Atlantic slave trade of Amerindians, the Crown allowed their enslavement and sale within the Americas. In the first half of the sixteenth century, Spanish colonists conducted raids throughout the Caribbean, transporting captives from Central America, northern South America, and Florida to Hispaniola and other Spanish colonies. Two key arguments were used to defend the enslavement of Amerindians. The first was "just war" against anyone who rebelled against the Crown or did not accept Christianity. The second was ransom: any Amerindian held captive was eligible for purchase, with the intention of Christianizing them as well as rescuing them from supposedly cannibalistic captors.
The Spanish colonizers soon realized that the forced enslavement and labor of Indigenous groups was not a feasible option. The physical demands were intense, and diseases such as smallpox, measles, chicken pox, and typhus devastated Indigenous populations, leaving a workforce that could not be sustained. Proponents of reform spoke out against Spanish colonization and abuses toward Amerindians, arguing that they were deplorable on religious and moral grounds. In response to this mass decline of Indigenous populations, Emperor Charles V passed a series of laws in the 1540s known as the "New Laws of the Indies for the Good Treatment and Preservation of the Indians," or simply the "New Laws."

Among these was a 1542 royal decree that abolished Amerindian slavery. Indigenous people were no longer required to provide free labor, and Spanish colonists' children could no longer inherit encomiendas. There was some opposition to these changes from colonists in Mexico and Peru, places where colonists ruled encomiendas like small kingdoms. As colonists complained and pushed back against the decree, some of the New Laws were only partially enforced and some traditional practices were partially restored. Meanwhile, Spanish colonists responding to the declining Indigenous population began to search elsewhere for laborers to fulfill demand. As the Portuguese slave trade flourished, they set their sights on Africa.

The Early Trans-Atlantic Slave Trade

The first political leader to manage the trans-Atlantic slave trade was Nicolas Ovando, who imported African captives from Spain to the island of Hispaniola. In 1502, Ovando became the third governor of the "Indies," following Christopher Columbus and Francisco de Bobadilla. Ovando was charged by the Catholic monarchs with indoctrinating the Amerindians; the monarchs argued that, as converts, the Amerindians should not have any contact with Muslims, Jews, or Protestants.
Thus, the monarchs barred North African "Moorish" captives from being transported to the New World; however, they allowed black captives and other captives who were born in Spain or Portugal. While Ovando at first resisted the trans-Atlantic slave trade, letters exchanged between Ovando and Spain after 1502 referred to captives exclusively as "negros," or "blacks."

When the first captives arrived in Hispaniola, many immediately began resisting by escaping into the mountains and launching raids against Spanish settlements. In 1503, fearing that African captives would escape and incite Amerindians to revolt, Ovando petitioned the Spanish government to ban the trans-Atlantic slave trade. Years later, the Indigenous people of Hispaniola mounted an uprising known as Enriquillo's Revolt (1519–1533), which overlapped with increasing African resistance and probably involved enslaved Africans as well. In 1505, the governor sent a request to King Fernando II for seventeen captives to be sent to the mines in Hispaniola. Hoping to increase gold production through captive labor, the king sent one hundred black captives from Spain directly to the governor. Over the next several years, the labor of African captives proved so effective that Ovando had 250 more Africans transported from Europe to work in the gold and copper mines.

Between 1501 and 1518, the trans-Atlantic slave trade consisted of Africans transported from Iberia. The Spanish Crown prohibited direct traffic from Africa because it feared that African captives would bring their spiritual and religious practices to Indigenous populations, interfering with Christian indoctrination. While the number of captive Africans was relatively low at this time, Hispaniola's Indigenous population saw a dramatic decline, from 60,000 to fewer than 20,000 between 1508 and 1518. Colonists therefore needed laborers to maintain the colony's gold mines and sugar industry.
While the connection between race and slavery did not fully develop into a rigid racial hierarchy until the colonization of the Americas, specifically North America, the Spanish Crown was adamant that African captives would come from sub-Saharan Africa.

Section II: Passages to the New World

Introduction to Reading #2: Narrative of the Enslavement of Quobna Ottobah Cugoano, A Native of Africa

Like Equiano, Quobna Ottobah Cugoano (c. 1757–?) was born in modern-day Ghana, captured at the age of thirteen by a fellow African, sold to the British, and forced into slavery. His memoir discusses his experiences during the Middle Passage and his enslavement on a sugar cane plantation in Grenada, in the Caribbean. In 1772, after he had worked on the plantation for two years, he was bought by an Englishman and taken to England. There he converted to Christianity, obtained his freedom, and learned to read and write. He built relationships with Blacks in Britain such as Equiano and became involved in the movement to abolish the slave trade. The following excerpt provides a first-hand account of the horrors of the Middle Passage from Cugoano's point of view.

Reading 1.2
Narrative of the Enslavement of Ottabah Cugoano, A Native of Africa
Ottabah Cugoano

Ottabah Cugoano, "Narrative of the Enslavement of Ottabah Cugoano, A Native of Africa," The Negro's Memorial; or, Abolitionist's Catechism; by an Abolitionist, ed. Thomas Fisher, pp. 120–127. 1824.

The following artless narrative, as given to the public by the subject of it, in 1787, fell into the hands of the author of the foregoing pages when they were nearly completed, and after that portion of his work to which it more particularly belonged had been printed off.
It is, nevertheless, a narrative of such high interest, and exhibits the Slave-trade and Slavery in such striking colors, throwing light upon not a few of the most important facts which form the argument of this work, that he could not resist the temptation to give it in an appendix, leaving it to operate unassisted upon the minds of his readers, and to inspire them, according to their respective mental constitutions, either with admiration or detestation of the SLAVE-TRADE and NEGRO SLAVERY.I was early snatched away from my native country, with about eighteen or twenty more boys and girls, as we were playing in a field. We lived but a few days' journey from the coast where we were kidnapped, and as we were decoyed and drove along, we were soon conducted to a factory, and from thence, in the fashionable way of traffic, consigned to Grenada. Perhaps it may not be amiss to give a few remarks, as some account of myself, in this transposition of captivity.I was born in the city of Agimaque, on the coast of Fantyn; my father was a companion to the chief in that part of the country of Fantee, and when the old king died I was left in his house with his family; soon after I was sent for by his nephew, Ambro Accasa, who succeeded the old king in the chiefdom of that part of Fantee, known by the name of Agimaque and Assince. I lived with his children, enjoying peace and tranquillity, about twenty moons, which, according to their way of reckoning time, is two years. I was sent for to visit an uncle, who lived at a considerable distance from Agimaque. 
The first day after we set out we arrived at Assinee, and the third day at my uncle's habitation, where I lived about three months, and was then thinking of returning to my father and young companion at Agimaque; but by this time I had got well acquainted with some of the children of my uncle's hundreds of relations, and we were some days too venturesome in going into the woods to gather fruit and catch birds, and such amusements as pleased us. One day I refused to go with the rest, being rather apprehensive that something might happen to us; till one of my playfellows said to me, "Because you belong to the great men, you are afraid to venture your carcase, or else of the bounsam," which is the devil. This enraged me so much, that I set a resolution to join the rest, and we went into the woods, as usual; but we had not been above two hours, before our troubles began, when several great ruffians came upon us suddenly, and said we had committed a fault against their lord, and we must go and answer for it ourselves before him.

Some of us attempted, in vain, to run away, but pistols and cutlasses were soon introduced, threatening, that if we offered to stir, we should all lie dead on the spot. One of them pretended to be more friendly than the rest, and said that he would speak to their lord to get us clear, and desired that we should follow him; we were then immediately divided into different parties, and drove after him. We were soon led out of the way which we knew, and towards evening, as we came in sight of a town, they told us that this great man of theirs lived there, but pretended it was too late to go and see him that night. Next morning there came three other men, whose language differed from ours, and spoke to some of those who watched us all the night; but he that pretended to be our friend with the great man, and some others, were gone away.
We asked our keeper what these men had been saying to them, and they answered, that they had been asking them and us together to go and feast with them that day, and that we must put off seeing the great man till after, little thinking that our doom was so nigh, or that these villains meant to feast on us as their prey. We went with them again about half a day's journey, and came to a great multitude of people, having different music playing; and all the day after we got there, we were very merry with the music, dancing, and singing. Towards the evening, we were again persuaded that we could not get back to where the great man lived till next day; and when bed-time came, we were separated into different houses with different people. When the next morning came, I asked for the men that brought me there, and for the rest of my companions; and I was told that they were gone to the sea-side, to bring home some rum, guns, and powder, and that some of my companions were gone with them, and that some were gone to the fields to do something or other. This gave me strong suspicion that there was some treachery in the case, and I began to think that my hopes of returning home again were all over. I soon became very uneasy, not knowing what to do, and refused to eat or drink, for whole days together, till the man of the house told me that he would do all in his power to get me back to my uncle; then I eat a little fruit with him, and had some thoughts that I should be sought after, as I would be then missing at home about five or six days. I inquired every day if the men had come back, and for the rest of my companions, but could get no answer of any satisfaction. 
I was kept about six days at this man's house, and in the evening there was another man came, and talked with him a good while and I heard the one say to the other he must go, and the other said, the sooner the better; that man came out and told me that he knew my relations at Agimaque, and that we must set out to-morrow morning, and he would convey me there. Accordingly we set out next day, and travelled till dark, when we came to a place where we had some supper and slept. He carried a large bag, with some gold dust, which he said he had to buy some goods at the sea-side to take with him to Agimaque. Next day we travelled on, and in the evening came to a town, where I saw several white people, which made me afraid that they would eat me, according to our notion, as children, in the inland parts of the country. This made me rest very uneasy all the night, and next morning I had some victuals brought, desiring me to eat and make haste, as my guide and kidnapper told me that he had to go to the castle with some company that were going there, as he had told me before, to get some goods. After I was ordered out, the horrors I soon saw and felt, cannot be well described; I saw many of my miserable countrymen chained two and two, some handcuffed, and some with their hands tied behind. We were conducted along by a guard, and when we arrived at the castle, I asked my guide what I was brought there for, he told me to learn the ways of the browfow, that is, the white-faced people. I saw him take a gun, a piece of cloth, and some lead for me, and then he told me that he must now leave me there, and went off. This made me cry bitterly, but I was soon conducted to a prison, for three days, where I heard the groans and cries of many, and saw some of my fellow-captives. But when a vessel arrived to conduct us away to the ship, it was a most horrible scene; there was nothing to be heard but the rattling of chains, smacking of whips, and the groans and cries of our fellow-men. 
Some would not stir from the ground, when they were lashed and beat in the most horrible manner. I have forgot the name of this infernal fort; but we were taken in the ship that came for us, to another that was ready to sail from Cape Coast. When we were put into the ship, we saw several black merchants coming on board, but we were all drove into our holes, and not suffered to speak to any of them. In this situation we continued several days in sight of our native land; but I could find no good person to give any information of my situation to Accasa at Agimaque. And when we found ourselves at last taken away, death was more preferable than life; and a plan was concerted amongst us, that we might burn and blow up the ship, and to perish all together in the flames: but we were betrayed by one of our own countrywomen, who slept with some of the headmen of the ship, for it was common for the dirty filthy sailors to take the African women and lie upon their bodies; but the men were chained and pent up in holes. It was the women and boys which were to burn the ship, with the approbation and groans of the rest; though that was prevented, the discovery was likewise a cruel bloody scene.But it would be needless to give a description of all the horrible scenes which we saw, and the base treatment which we met with in this dreadful captive situation, as the similar cases of thousands, which suffer by this infernal traffic, are well known. Let it suffice to say that I was thus lost to my dear indulgent parents and relations, and they to me. All my help was cries and tears, and these could not avail, nor suffered long, till one succeeding woe and dread swelled up another. Brought from a state of innocence and freedom, and, in a barbarous and cruel manner, conveyed to a state of horror and slavery, this abandoned situation may be easier conceived than described. 
From the time that I was kidnapped, and conducted to a factory, and from thence in the brutish, base, but fashionable way of traffic, consigned to Grenada, the grievous thoughts which I then felt, still pant in my heart; though my fears and tears have long since subsided. And yet it is still grievous to think that thousands more have suffered in similar and greater distress, under the hands of barbarous robbers, and merciless task-masters; and that many, even now, are suffering in all the extreme bitterness of grief and woe, that no language can describe. The cries of some, and the sight of their misery, may be seen and heard afar; but the deep-sounding groans of thousands, and the great sadness of their misery and woe, under the heavy load of oppressions and calamities inflicted upon them, are such as can only be distinctly known to the ears of Jehovah Sabaoth.

This Lord of Hosts, in his great providence, and in great mercy to me, made a way for my deliverance from Grenada. Being in this dreadful captivity and horrible slavery, without any hope of deliverance, for about eight or nine months, beholding the most dreadful scenes of misery and cruelty, and seeing my miserable companions often cruelly lashed, and, as it were, cut to pieces, for the most trifling faults; this made me often tremble and weep, but I escaped better than many of them. For eating a piece of sugar-cane, some were cruelly lashed, or struck over the face, to knock their teeth out. Some of the stouter ones, I suppose, often reproved, and grown hardened and stupid with many cruel beatings and lashings, or perhaps faint and pressed with hunger and hard labour, were often committing trespasses of this kind, and when detected, they met with exemplary punishment. Some told me they had their teeth pulled out, to deter others, and to prevent them from eating any cane in future.
Thus seeing my miserable companions and countrymen in this pitiful, distressed, and horrible situation, with all the brutish baseness and barbarity attending it, could not but fill my little mind with horror and indignation. But I must own, to the shame of my own countrymen, that I was first kidnapped and betrayed by some of my own complexion, who were the first cause of my exile, and slavery; but if there were no buyers there would be no sellers. So far as I can remember, some of the Africans in my country keep slaves, which they take in war, or for debt; but those which they keep are well fed, and good care taken of them, and treated well; and as to their clothing, they differ according to the custom of the country. But I may safely say, that all the poverty and misery that any of the inhabitants of Africa meet with among themselves, is far inferior to those inhospitable regions of misery which they meet with in the West-Indies, where their hard-hearted overseers have neither regard to the laws of God, nor the life of their fellow-men.

Thanks be to God, I was delivered from Grenada, and that horrid brutal slavery. A gentleman coming to England took me for his servant, and brought me away, where I soon found my situation become more agreeable. After coming to England, and seeing others write and read, I had a strong desire to learn, and getting what assistance I could, I applied myself to learn reading and writing, which soon became my recreation, pleasure, and delight; and when my master perceived that I could write some, he sent me to a proper school for that purpose to learn. Since, I have endeavoured to improve my mind in reading, and have sought to get all the intelligence I could, in my situation of life, towards the state of my brethren and countrymen in complexion, and of the miserable situation of those who are barbarously sold into captivity, and unlawfully held in slavery.
https://youtu.be/S72vvfBTQws

Trans-Atlantic Slave Trade

The trans-Atlantic slave trade had three stages. During STAGE 1, slave ships departed from British ports like London, Liverpool, and Bristol and made the journey to West Africa, carrying goods such as cloth, guns, ironware, and drink that had been made in Britain. On the West African coast, these goods were traded for men, women, and children who had been captured by slave traders or bought from African chiefs.

In the second stage, dealers kidnapped people from villages up to hundreds of miles inland. One such person was Quobna Ottobah Cugoano, who described how the slavers attacked with pistols and threatened to kill those who did not obey. The captives were forced to march long distances with their hands tied behind their backs and their necks connected by wooden yokes. The traders held the enslaved Africans until a ship appeared, and then sold them to a European or African captain. It often took a long time for a captain to fill his ship; he rarely filled it in one spot. Instead, he would spend three to four months sailing along the coast, looking for the fittest and cheapest slaves, and ships would sail up and down the coast filling their holds with enslaved Africans. This part of the journey, along the coast, is referred to as the Point of No Return. During the horrifying Middle Passage, enslaved Africans were tightly packed onto ships that would carry them to their final destination. Numerous cases of violent resistance by Africans against slave ships and their crews were documented.

The final stage, STAGE 3, occurred at the destination in the New World, where enslaved Africans were sold to the highest bidder at slave auctions. They belonged to the plantation owner, like any other possession, and had no rights at all. Enslaved Africans were often punished very harshly, and they resisted their enslavement in many ways, from revolution to silent, personal resistance.
Some refused to be enslaved and took their own lives. Sometimes pregnant women preferred abortion to bringing a child into slavery. On the plantations, many enslaved Africans tried to slow down the pace of work by pretending to be ill, causing fires, or "accidentally" breaking tools. Running away was also a form of resistance. Some escaped to South America, England, northern American cities, or Canada. Additionally, enslaved people led hundreds of revolts, rebellions, and uprisings.

Approximately two-thirds of enslaved Africans taken to the Americas ended up on sugar plantations. Sugar was used to sweeten another crop harvested by enslaved Africans in the West Indies—coffee. With the money made from the sale of enslaved Africans, goods such as sugar, coffee, and tobacco were bought and carried back to Britain for sale. The ships were loaded with produce from the plantations for the voyage home.

Resistance took many forms, some individual, some collective. Enslaved people resisted capture and imprisonment, attacked slave ships from the shore, and engaged in shipboard revolts, fighting to free themselves and others. It is important to remember that there was resistance throughout the trans-Atlantic slave trade system, beginning when Africans were first kidnapped. In some cases, resistance involved attacks from the shore as well as 'insurrections' aboard ships. Some captive Africans refused to be enslaved and took their own lives by jumping from slave ships or refusing to eat. As the system of slavery expanded, resistance would be demonstrated in various ways.

Middle Passage

The Middle Passage refers to the part of the trade in which Africans, densely packed onto ships, were transported across the Atlantic to the West Indies. The voyage took three to four months and, during this time, the enslaved people mostly lay chained in rows on the floor of the hold or on shelves that ran around the inside of the ships' hulls.
There were no more than six hundred enslaved people on each ship. Captives from different nations were mixed together, making it difficult for them to communicate. Men were separated from women and children.

Olaudah Equiano was a formerly enslaved African, seaman, and merchant who wrote an autobiography depicting the horrors of slavery and lobbied Parliament for its abolition. In his autobiography, he records that he was born in what is now Nigeria, kidnapped, and sold into slavery as a child. He then endured the Middle Passage on a slave ship bound for the New World.

A great many sources remain, such as captains' logbooks, memoirs, and shipping company records, all of which describe life on the ships. For example, when asked if the slaves had 'room to turn themselves or lie easy', Dr Thomas Trotter replied: "By no means. The slaves that are out of irons are laid spoonways … and closely locked to one another. It is the duty of the first mate to see them stowed in this manner every morning … and when the ship had much motion at sea … they were often miserably bruised against the deck or against each other … I have seen the breasts heaving … with all those laborious and anxious efforts for life…" By contrast, during a Parliamentary investigation, a witness to the slave trade, Robert Norris, described how 'delightful' the slave ships were, arguing that enslaved people had sufficient room, air, and provisions: "When upon deck, they made merry and amused themselves with dancing … In short, the voyage from Africa to the West Indies was one of the happiest periods of their life!"

Horrors of the Journey

The Middle Passage was a system that brutalized both sailors and enslaved people. The captain had total authority over those aboard the ship and was answerable to nobody. Captives usually outnumbered the crew by ten to one, so they were whipped or put in thumbscrews at any sign of rebellion. Despite this, resistance was common.
The European crews made sure that the captives were fed and forced them to exercise. On all ships, the death toll was high. Between 1680 and 1688, 23 out of every 100 people taken aboard the ships of the Royal African Company died in transit. When disease began to spread, the dying were sometimes thrown overboard. In November 1781, around 470 slaves were crammed aboard the slave ship Zong. During the voyage to Jamaica, many got sick; seven crew and sixty Africans died. Captain Luke Collingwood ordered the sick enslaved Africans, 133 in total, thrown overboard; only one survived. When the Zong arrived back in England, its owners claimed the value of the slaves from their insurers, arguing that the ship had had little water and that the sick Africans posed a threat to the remaining cargo and crew. In 1783, the owners won their case. The case did much to illustrate the horrors of the trade and to sway public opinion against it.

The death toll amongst sailors was also terribly high, roughly twenty percent. Sometimes the crew were harshly treated on purpose during the 'middle passage': fewer hands were required on the third leg, and wages could be saved if the sailors jumped ship in the West Indies. It was not uncommon to see injured sailors living in the Caribbean and North American ports. The Dolben Act, passed in 1788, fixed the number of enslaved people in proportion to the ship's size, but conditions were still horrendous. Research has shown that a man was given a space of 6 feet by 1 foot 4 inches; a woman, 5 feet by 1 foot 4 inches; and girls, 4 feet 6 inches by 1 foot.

References

Bailey, Anne. Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame. Boston: Beacon Press, 2005.
Mustakeem, Sowande. Slavery at Sea: Terror, Sex, and Sickness in the Middle Passage. Champaign, IL: University of Illinois Press, 2016.
Smallwood, Stephanie. Saltwater Slavery: A Middle Passage from Africa to American Diaspora. Cambridge: Harvard University Press, 2008.

Figure Credits

Fig. 1.1: Copyright © by Grin20 (CC BY-SA 2.5) at https://commons.wikimedia.org/wiki/File:Africa_slave_Regions.svg.
Fig. 1.2: Copyright © by Sémhur (CC BY-SA 3.0) at https://commons.wikimedia.org/wiki/File:Triangular_trade.png.
Fig. 1.3: Copyright © by SimonP (CC BY-SA 2.0) at https://commons.wikimedia.org/wiki/File:Triangle_trade2.png.

      Can I annotate an entire chapter?

    1. For social media content, replication means that the content (or a copy or modified version) gets seen by more people. Additionally, when a modified version gets distributed, future replications of that version will include the modification

      In the context of social media, replication refers to the process where content, or a modified version of it, is shared and distributed across platforms, reaching more viewers. When users share or remix content, the new version may include changes or additions, which are then carried forward as the content continues to spread. This creates a cycle where the modified version becomes the basis for future iterations, allowing both the original and the altered content to reach even larger audiences over time.
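The replication-with-modification cycle described above can be sketched as a toy simulation. This is a hypothetical illustration only; the function name `replicate`, the share counts, and the mutation probability are invented for this example, not taken from the text:

```python
import random

def replicate(content, n_shares, mutate_prob=0.5):
    """One generation of sharing: each share copies the content and
    sometimes appends a modification that later copies carry forward."""
    copies = []
    for i in range(n_shares):
        copy = list(content)          # replication: a fresh copy is made
        if random.random() < mutate_prob:
            copy.append(f"edit-{i}")  # a remix / modification
        copies.append(copy)
    return copies

random.seed(0)
generation1 = replicate(["original"], n_shares=3)

# A modified copy becomes the basis for the next generation, so its
# modification is included in every future replication of that version.
basis = max(generation1, key=len)
generation2 = replicate(basis, n_shares=3)
assert all(copy[:len(basis)] == basis for copy in generation2)
```

Every generation-2 copy begins with the generation-1 modification, mirroring how an altered version of a post becomes the basis for further spread while the audience grows.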

    1. Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content.

      Knowing how recommendation algorithms work, users - especially content creators - will often adjust their strategies to get their content amplified, for example by boosting engagement and using trending topics. This is critical for creators who rely on social media for income, as higher visibility can lead to more opportunities for monetization. However, it also raises ethical issues, as it can encourage sensationalism or low-quality content designed to exploit the system.

    1. We mentioned Design Justice earlier, but it is worth reiterating again here that design justice includes considering which groups get to be part of the design process itself.

      Design Justice emphasizes not only the outcome of design but also who is involved in the process. If only dominant groups are part of the decision-making, we risk creating systems that unintentionally harm or exclude marginalized groups. Ensuring that all voices are represented can lead to more inclusive, equitable design solutions that truly serve diverse communities.

    1. putting in your oar

      I hear this phrase again. I have only heard it in this paper and in MLA Guide to Digital Literacy.

    2. Most songwriters, for instance, rely on a time-honored verse-chorus-verse pattern, and few people would call Shakespeare uncreative because he didn’t invent the sonnet or the dramatic forms that he used to such dazzling effect. Even the most avant-garde, cutting-edge artists like improvisational jazz musicians need to master the basic forms that their work improvises on, departs from, and goes beyond, or else their work will come across as uneducated child’s play

      I understand these examples, but for some reason I don't think it is the same... Some songwriters rely on the time-honored verse-chorus-verse pattern, but many don't. That is why I think that writers can use their own language, their own creativity, and their own writing styles to in turn make a great paper.

    3. sophisticated thinking and writing, and they often require a great deal of practice and instruction to use successfully.

      This reminds me of how we have been talking about writing to meet the status quo of the perfect paper.

    4. Students are quick to see that no one person owns a conventional formula like “on the one hand . . . on the other hand. . . .” Phrases like “a controversial issue” are so commonly used and recycled that they are generic—community property that can be freely used without fear of committing plagiarism.

      I am currently watching an episode of Gilmore Girls where one of the main characters brings up the question of whether commonly used catchphrases can count as plagiarism. Sometimes I live in fear of committing plagiarism. Sometimes a thought comes into my mind and I get nervous that I read or heard it somewhere and will get flagged for plagiarism. At times I worry more about whether the sources are peer reviewed or in MLA/APA format than about the actual paper. It is hard to know what is and isn't plagiarism.

    1. eLife Assessment

      This important paper demonstrates that different PKA subtypes exhibit distinct subcellular localization at rest in CA1 neurons. The authors provide compelling evidence that when all tested PKA subtypes are activated by norepinephrine, catalytic subunits translocate to dendritic spines but regulatory subunits remain unmoved. Furthermore, PKA-dependent regulation of synaptic plasticity and transmission can be supported only by wildtype, dissociable PKA, but not by inseparable PKA.

    2. Reviewer #1 (Public review):

      Summary:

      This is a short self-contained study with a straightforward and interesting message. The paper focuses on settling whether PKA activation requires dissociation of the catalytic and regulatory subunits. This debate has been ongoing for ~ 30 years, with renewed interest in the question following a publication in Science, 2017 (Smith et al.). Here, Xiong et al. demonstrate that fusing the R and C subunits together (in the same way as Smith et al.) prevents the proper function of PKA in neurons. This provides further support for the dissociative activation model - it is imperative that researchers have clarity on this topic since it is so fundamental to building accurate models of localised cAMP signalling in all cell types. Furthermore, their experiments highlight that C subunit dissociation into spines is essential for structural LTP, which is an interesting finding in itself. They also show that preventing C subunit dissociation reduces basal AMPA receptor currents to the same extent as knocking down the C subunit. Overall, the paper will interest both cAMP researchers and scientists interested in fundamental mechanisms of synaptic regulation.

      Strengths:

      The experiments are technically challenging and well executed. Good use of control conditions, e.g. untransfected controls in Figure 4.

      Weaknesses:

      The novelty is lessened given the same team has shown dissociation of the C subunit into dendritic spines from RIIbeta subunits localised to dendritic shafts before (Tillo et al., 2017). Nevertheless, the experiments with RII-C fusion proteins are novel and an important addition.

    3. Reviewer #2 (Public review):

      Summary:

      PKA is a major signaling protein which has been long studied and is vital for synaptic plasticity. Here, the authors examine the mechanism of PKA activity and specifically focus on addressing the question of PKA dissociation as a major mode of its activation in dendritic spines. This would potentially allow us to determine the precise mechanisms of PKA activation and address how it maintains spatial and temporal signaling specificity.

      Strengths:

      The results convincingly show that PKA activity is governed by the subcellular localization in dendrites and spines and is mediated via subunit dissociation. The authors make use of organotypic hippocampal slice cultures, where they use pharmacology, glutamate uncaging, and electrophysiological recordings.

      Overall, the experiments and data presented are well executed. The experiments all show that, at least in the case of synaptic activity, distribution of PKA-C to dendritic spines is necessary and sufficient for PKA-mediated functional and structural plasticity.

      The authors were able to persuasively support their claim that PKA subunit dissociation is necessary for its function and localization in dendritic spines. This conclusion is important to better understand the mechanisms of PKA activity and its role in synaptic plasticity.

      Weaknesses:

      While the experiments are indeed convincing and well executed, the data presented is similar to previously published work from the Zhong lab (Tillo et al., 2017, Zhong et al 2009). This reduces the novelty of the findings in terms of re-distribution of PKA subunits, which was already established, at least to some degree.

    4. Reviewer #3 (Public review):

      Summary:

      Xiong et al. investigated the debated mechanism of PKA activation using hippocampal CA1 neurons under pharmacological and synaptic stimulations. Examining all major PKA-R isoforms in these neurons, they found that a portion of PKA-C dissociates from PKA-R and translocates into dendritic spines following norepinephrine bath application. Additionally, their use of a non-dissociable form of PKA demonstrates its essential role in structural long-term potentiation (LTP) induced by two-photon glutamate uncaging, as well as in maintaining normal synaptic transmission, as verified by electrophysiology. This study presents a valuable finding on the activation-dependent re-distribution of PKA catalytic subunits in CA1 neurons, a process vital for synaptic functionality. The robust evidence provided by the authors makes this work particularly relevant for biologists seeking to understand PKA activation mechanisms, its downstream effects, and synaptic plasticity.

      Strengths:

      The study is methodologically robust, particularly in the application of two-photon imaging and electrophysiology. The experiments are well-designed with effective controls and a comprehensive analysis. The credibility of the data is further enhanced by the research team's previous works in related experiments. The study provides sufficient evidence to support the classical model of PKA activation via dissociation in neurons.

      Weaknesses:

      No specific weaknesses are noted in the current study; future research could provide additional insights by exploring PKA dissociation under varied physiological conditions, particularly in vivo, to further validate and expand upon these findings.

    5. Author response:

      The following is the authors’ response to the original reviews.

      New Experiments

      (1) Activation-dependent dynamics of PKA with the RIα regulatory subunit, adding to the answer to Reviewers 1 and 2. To determine the dynamics of all PKA isoforms, we have added experiments that used PKA-RIα as the regulatory subunit. We found differential translocation between PKA-C (co-expressed with PKA-RIα) and PKA-RIα (Figure 1–figure supplement 3), similar to the results when PKA-RIIα or PKA-RIβ was used.

      (2) PKA-C dynamics elicited by a low concentration of norepinephrine, addressing Reviewer 3’s comment. We have found that PKA-C (co-expressed with RIIα) exhibited similar translocation into dendritic spines in the presence of a 5x lowered concentration (2 μM) of norepinephrine, suggesting that the translocation occurs over a wide range of stimulus strengths (Figure 1-figure supplement 2).

      Reviewer #1 (Public Review):

      Summary:

      This is a short self-contained study with a straightforward and interesting message. The paper focuses on settling whether PKA activation requires dissociation of the catalytic and regulatory subunits. This debate has been ongoing for ~ 30 years, with renewed interest in the question following a publication in Science, 2017 (Smith et al.). Here, Xiong et al. demonstrate that fusing the R and C subunits together (in the same way as Smith et al.) prevents the proper function of PKA in neurons. This provides further support for the dissociative activation model - it is imperative that researchers have clarity on this topic since it is so fundamental to building accurate models of localised cAMP signalling in all cell types. Furthermore, their experiments highlight that C subunit dissociation into spines is essential for structural LTP, which is an interesting finding in itself. They also show that preventing C subunit dissociation reduces basal AMPA receptor currents to the same extent as knocking down the C subunit. Overall, the paper will interest both cAMP researchers and scientists interested in fundamental mechanisms of synaptic regulation.

      Strengths:

      The experiments are technically challenging and well executed. Good use of control conditions, e.g. untransfected controls in Figure 4.

      We thank the reviewer for their accurate summarization of the position of the study in the field and for the positive evaluation of our study.

      Weaknesses:

      The novelty is lessened given the same team has shown dissociation of the C subunit into dendritic spines from RIIbeta subunits localised to dendritic shafts before (Tillo et al., 2017). Nevertheless, the experiments with RII-C fusion proteins are novel and an important addition.

      We thank the reviewer for noticing our earlier work. The first part of the current work is indeed an extension of previous work, as we have articulated in the manuscript. However, this extension is important because recent studies suggested that the majority of PKA-RIIβ is axonally localized. The primary PKA subtypes in the soma and dendrite are likely PKA-RIβ or PKA-RIIα. Although it is conceivable that the results from PKA-RIIβ can be extended to the other subunits, given the current debate in the field regarding PKA dissociation (or not), it remains important to conclusively demonstrate that these other regulatory subunit types also support PKA dissociation within intact cells in response to a physiological stimulant. To complete the survey for all PKA-R isoforms, we have now added data for PKA-RIα (New Experiment #1), as it is also expressed in the brain (e.g., https://www.ncbi.nlm.nih.gov/gene/5573). Additionally, as the reviewer points out, our second part is a novel addition to the literature.

      Reviewer #2 (Public Review):

      Summary:

      PKA is a major signaling protein that has been long studied and is vital for synaptic plasticity. Here, the authors examine the mechanism of PKA activity and specifically focus on addressing the question of PKA dissociation as a major mode of its activation in dendritic spines. This would potentially allow us to determine the precise mechanisms of PKA activation and address how it maintains spatial and temporal signaling specificity.

      Strengths:

      The results convincingly show that PKA activity is governed by the subcellular localization in dendrites and spines and is mediated via subunit dissociation. The authors make use of organotypic hippocampal slice cultures, where they use pharmacology, glutamate uncaging, and electrophysiological recordings.

      Overall, the experiments and data presented are well executed. The experiments all show that, at least in the case of synaptic activity, the distribution of PKA-C to dendritic spines is necessary and sufficient for PKA-mediated functional and structural plasticity.

      The authors were able to persuasively support their claim that PKA subunit dissociation is necessary for its function and localization in dendritic spines. This conclusion is important to better understand the mechanisms of PKA activity and its role in synaptic plasticity.

      We thank the reviewer for their positive evaluation of our study.

      Weaknesses:

      While the experiments are indeed convincing and well executed, the data presented is similar to previously published work from the Zhong lab (Tillo et al., 2017, Zhong et al 2009). This reduces the novelty of the findings in terms of re-distribution of PKA subunits, which was already established. A few alternative approaches for addressing this question: targeting localization of endogenous PKA, addressing its synaptic distribution, or even impairing within intact neuronal circuits, would highly strengthen their findings. This would allow us to further substantiate the synaptic localization and re-distribution mechanism of PKA as a critical regulator of synaptic structure, function, and plasticity.

      We thank the reviewer for noticing our earlier work. The first part of the current work is indeed an extension of previous work, as we have articulated in the manuscript. However, this extension is important because recent studies suggested that the majority of PKA-RIIβ is axonally localized. The primary PKA subtypes in the soma and dendrite are likely PKA-RIβ or PKA-RIIα. Although it is conceivable that the results from PKA-RIIβ can be extended to the other subunits, given the current debate in the field regarding PKA dissociation (or not), it remains important to conclusively demonstrate that these other regulatory subunit types also support PKA dissociation within intact cells in response to a physiological stimulant. To complete the survey for all PKA-R isoforms, we have now added data for PKA-RIα (New Experiment #1), as it is also expressed in the brain (e.g., https://www.ncbi.nlm.nih.gov/gene/5573). Additionally, as Reviewer 1 points out, our second part is a novel addition to the literature.

      We also thank the reviewer for suggesting the experiments to examine PKA’s synaptic localization and dynamics as a key mechanism underlying synaptic structure and function. We agree that this is a very interesting topic. At the same time, we feel that this mechanistic direction is open ended at this time and beyond what we try to conclude within this manuscript: prevention of PKA dissociation in neurons affects synaptic function. Therefore, we will save the suggested direction for future studies. We hope the reviewer understands.

      Reviewer #3 (Public Review):

      Summary:

      Xiong et al. investigated the debated mechanism of PKA activation using hippocampal CA1 neurons under pharmacological and synaptic stimulations. Examining the two major PKA isoforms in these neurons, they found that a portion of PKA-C dissociates from PKA-R and translocates into dendritic spines following norepinephrine bath application. Additionally, their use of a non-dissociable form of PKA demonstrates its essential role in structural long-term potentiation (LTP) induced by two-photon glutamate uncaging, as well as in maintaining normal synaptic transmission, as verified by electrophysiology. This study presents a valuable finding on the activation-dependent re-distribution of PKA catalytic subunits in CA1 neurons, a process vital for synaptic functionality. The robust evidence provided by the authors makes this work particularly relevant for biologists seeking to understand PKA activation and its downstream effects essential for synaptic plasticity.

      Strengths:

      The study is methodologically robust, particularly in the application of two-photon imaging and electrophysiology. The experiments are well-designed with effective controls and a comprehensive analysis. The credibility of the data is further enhanced by the research team's previous works in related experiments. The conclusions of this paper are mostly well supported by data. The research fills a significant gap in our understanding of PKA activation mechanisms in synaptic functioning, presenting valuable insights backed by empirical evidence.

      We thank the reviewer for their positive evaluation of our study.

      Weaknesses:

      The physiological relevance of the findings regarding PKA dissociation is somewhat weakened by the use of norepinephrine (10 µM) in bath applications, which might not accurately reflect physiological conditions. Furthermore, the study does not address the impact of glutamate uncaging, a well-characterized physiologically relevant stimulation, on the redistribution of PKA catalytic subunits, leaving some questions unanswered.

      We agree with the Reviewer that testing under physiological conditions is critical, especially given the current debate in the literature. That is why we tested PKA dynamics induced by the physiological stimulant, norepinephrine. It has been suggested that, near the release site, local norepinephrine concentrations can be as high as tens of micromolar (Courtney and Ford, 2014). Based on this study, we have chosen a mid-range concentration (10 μM). At the same time, in light of the Reviewer’s suggestion, we have now also tested PKA-RIIα dissociation at a 5x lower concentration of norepinephrine (2 μM; New Experiment #2). The activation and translocation of PKA-C is also readily detectable under this condition, to a degree comparable to when 10 μM norepinephrine was used.

      Regarding the suggested glutamate uncaging experiment, it is extremely challenging because of finite signal-to-noise ratios in our experiments. From our past studies, we know that activated PKA-C can diffuse three dimensionally, with a fraction as membrane-associated proteins and the other as cytosolic proteins. Although we have evidence that its membrane affinity allows it to become enriched in dendritic spines, it is not known (and is unlikely) that activated PKA-C is selectively targeted to a particular spine. Glutamate uncaging of a single spine presumably would locally activate a small number of PKA-C. It would be very difficult to trace the 3D diffusion of this small number of molecules in the presence of surrounding resting-state PKA-C molecules. Finally, we hope the reviewer agrees that, regardless of the result of the glutamate uncaging experiment, the above new experiment (New Experiment #2) already indicates that certain physiologically relevant stimuli can drive PKA-C dissociation from PKA-R and translocation to spines, supporting our conclusion.

      Reviewer #2 (Recommendations For The Authors):

      It was a pleasure reading your paper, and the results are well-executed and well-presented.

      My main and only recommendations are two ways to further expand the scope of the findings.

      First, I believe addressing the endogenous localization of PKA-C subunit before and after PKA activation would be highly important to validate these claims. Overexpression of tagged proteins often shows vastly different subcellular distribution than their endogenous counterparts. Recent technological advances with CRISPR/Cas9 gene editing (Suzuki et al Nature 2016 and Gao et al Neuron 2019 for example) which the Zhong lab recently contributed to (Zhong et al 2021 eLife) allow us to tag endogenous proteins and image them in fixed or live neurons. Any experiments targeting endogenous PKA subunits that support dissociation and synaptic localization following activation would be very informative and greatly increase the novelty and impact of their findings.

      We agreed that addressing the endogenous PKA dynamics is important. However, despite recent progress, endogenous labeling using CRISPR-based methods remains challenging and requires extensive optimization. This is especially true for signaling proteins whose endogenous abundance is often low. We have tried to label PKA catalytic subunits and regulatory subunits using both the homologous recombination-based method SLENDR and our own non-homologous end joining-based method CRISPIE. We did not succeed, in part because it is very difficult to see any signal under wide-field fluorescence conditions, which makes it difficult to screen different constructs for optimizing parameters. It is also possible that, at the endogenous abundance, the label is just not bright enough to be seen. Nevertheless, for both PKA type Iβ and type IIα that we studied in this manuscript, we have correlated the measured parameters (specifically, Spine Enrichment Index or SEI) with the overexpression level (Figure 1-figure supplement 1). We found that they are not strongly correlated with the expression level under our conditions. By extrapolating to non-overexpression conditions, our conclusion remains valid.

      To overcome the inability to label endogenous PKA subunits using CRISPR-based methods, we have also attempted a conditional knock-in method called ENABLED, which we previously developed, to label PKA-Cα. In preliminary results, we found that endogenously labeled PKA was very dim. However, in a subset of cells that were bright enough to be quantified, the PKA catalytic subunit indeed translocated to dendritic spines upon stimulation (see Author response image 1 below), corroborating our results using overexpression. These results, however, are not ready to be published because characterization of the mouse line takes time and, at this moment, the signal-to-noise ratio remains low. We hope that the reviewer can understand.

      Author response image 1.

      Endogenous PKA-Cα translocates to dendritic spines upon activation.

      Second, experiments which would advance and validate these findings in vivo would be highly valuable. This could be achieved in a number of ways - one would be overexpression of tagged PKA versions and examining sub-cellular distribution before and after physiological activation in vivo. Another possibility is in vivo perturbation - one would speculate that disruption or tethering of PKA subunits to the dendrite would lead to cell-specific functional and structural impairments. This could be achieved in a similar manner to the in vitro experiments, with a PKA KO and replacement strategy of the tethered C-R plasmid, followed by structural or functional examination of neurons.

      I would like to state that these experiments are not essential in my opinion, but any improvements in one of these directions would greatly improve and extend the impact and findings of this paper.

      We thank the reviewer for the suggestion and the understanding. The suggested in vivo experiments are fascinating. However, in vivo imaging of dendritic spine morphology is already in itself challenging. The difficulty greatly increases when trying to detect partial, likely transient translocation of a signaling protein. It is also very difficult to knock down endogenous PKA while simultaneously expressing the R-C construct in a large number of cells to achieve detectable circuit or behavioral effect (and hope that compensation does not happen over weeks). We hope the reviewer agrees that these experiments would be their own project and go beyond the time and scope of the current study.

      Reviewer #3 (Recommendations For The Authors):

      Please elaborate on the methods used to visualize PKA-RIIα and PKA-RIβ subunits.

      As suggested, we have now included additional details for visualizing PKA-Rs in the text. Specifically, we write (pg. 5): “…, as visualized using expressed PKA-R-mEGFP in separate experiments (Figs. 1A-1C).”.

    1. Livros

      javascript:(function(){ document.querySelector('.navbar-subheader_title__URgJZ h2').textContent = "6º Aniversário Igreja Apostólica Missionária Actos Resgate Santo André"; })();

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review): 

      Summary: 

      The authors examined the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). Using all-atom molecular dynamics simulations, they identified four distinct classes of salt dependence in the phase separation of intrinsically disordered proteins (IDPs), which can be predicted based on their amino acid composition. However, the simulations and analysis, in their current form, are inadequate and incomplete.

      Strengths: 

      The authors attempt to unravel the mechanistic insights into the interplay between salt and protein phase separation, which is important given the complex behavior of salt effects on this process. Their effort to correlate the influence of salt on the low-complexity domain of hnRNPA1 (A1-LCD) with a range of other proteins known to undergo salt-dependent phase separation is an interesting and valuable topic. 

      Weaknesses: 

      (1) The simulations performed are not sufficiently long (Figure 2A) to accurately comment on phase separation behavior. The simulations do not appear to have converged well, indicating that the system has not reached a steady state, rendering the analysis of the trajectories unreliable.

      We have extended the simulations for an additional 500 ns, to 1500 ns. The last 500 ns show reasonably good convergence (see Figure 2A).

      (2) The majority of the data presented shows no significant alteration with changes in salt concentration. However, the authors have based conclusions and made significant comments regarding salt activities. The absence of error bars in the data representation raises questions about its reliability. Additionally, the manuscript lacks sufficient scientific details of the calculations.  

      We have now included error bars. With the error bars, the salt dependences of all the calculated properties (except for Rg) show a clear trend. Additionally, we have expanded the descriptions of our calculations (p. 15-16).

      (3) In Figures 2B and 2C, the changes in the radius of gyration and the number of contacts do not display significant variations with changes in salt concentration. The change in the radius of gyration with salt concentration is less than 1 Å, and the number of contacts does not change by at least 1. The authors' conclusions based on these minor changes seem unfounded. 

      The variation of ~1 Å in the calculated Rg is similar to that of the experimental Rg. As for the number of contacts, note that this property is presented on a per-residue basis, so a value of 1 means that each residue picks up one additional contact, or each protein chain gains a total of 131 contacts, when the salt concentration is increased from 50 to 1000 mM.

      Reviewer #2 (Public Review): 

      This is an interesting computational study addressing how salt affects the assembly of biomolecular condensates. The simulation data are valuable as they provide a degree of atomistic details regarding how small salt ions modulate interactions among intrinsically disordered proteins with charged residues, namely via Debye-like screening that weakens the effective electrostatic interactions among the polymers, or through bridging interactions that allow interactions between like charges from different polymer chains to become effectively attractive (as illustrated, e.g., by the radial distribution functions in Supplementary Information). However, this manuscript has several shortcomings: 

      (i) Connotations of the manuscript notwithstanding, many of the authors' concepts about salt effects on biomolecular condensates have been put forth by theoretical models, at least back in 2020 and even earlier. Those earlier works afford extensive information such as considerations of salt concentrations inside and outside the condensate (tie-lines). But the authors do not appear to be aware of this body of prior works and therefore missed the opportunity to build on these previous advances and put the present work with its complementary advantages in structural details in the proper context.

      (ii) There are significant experimental findings regarding salt effects on condensate formation [which have been modeled more recently] that predate the A1-LCD system (ref.19) addressed by the present manuscript. This information should be included, e.g., in Table 1, for sound scholarship and completeness. 

      (iii) The strengths and limitations of the authors' approach vis-à-vis other theoretical approaches should be discussed with some degree of thoroughness (e.g., how the smallness of the authors' simulation system may affect the nature of the "phase transition" and the information that can be gathered regarding salt concentration inside vs. outside the "condensate" etc.). Accordingly, this manuscript should be revised to address the following. In particular, the discussion in the manuscript should be significantly expanded by including references mentioned below as well as other references pertinent to the issues raised. 

      (1) The ability to use atomistic models to address the questions at hand is a strength of the present work. However, presumably because of the computational cost of such models, the "phase-separated" "condensates" in this manuscript are extremely small (only 8 chains). An inspection of Fig.1 indicates that while the high-salt configuration (snapshot, bottom right) is more compact and droplet-like than the low-salt configuration (top right), it is not clear whether the 50 mM NaCl configuration corresponds to a dilute or homogeneous phase (without phase separation) or merely a condensate with a lower protein concentration, because the chains are still highly associated. One may argue that they become two droplets touching each other (the chains are not fully dispersed throughout the simulation box, unlike in typical coarse-grained simulations of biomolecular phase separation). While it may not be unfair to argue from this observation that the condensed phase is less stable at low salt, this raises critical questions about the adequacy of the approach as a stand-alone source of theoretical information. Accordingly, an informative discussion of the limitation of the authors' approach and comparisons with results from complementary approaches such as analytical theories and coarse-grained molecular dynamics will be instructive, even imperative, especially since such results exist in the literature (please see below).

      We now discuss the limitations of our all-atom simulations and also other approaches (p. 13; see below).

      (2) The aforementioned limitation is reflected by the authors' choice of using Dmax as a sort of phase separation order parameter. However, no evidence was shown to indicate that Dmax exhibits a two-state-like distribution expected of phase separation. It is also not clear whether a Dmax value corresponding to the linear dimension of the simulation box was ever encountered in the authors' simulated trajectories such that the chains can be reliably considered to be essentially fully dispersed as would be expected for the dilute phase. Moreover, as the authors have noted in the second paragraph of the Results, the variation of Dmax with simulation time does not show a monotonic rank order with salt concentration. The authors' explanation is equivalent to stipulating that the simulation system has not fully equilibrated, inevitably casting doubt on at least some of the conclusions drawn from the simulation data.

      First, with the extended simulations, the Dmax values converge to a tiered rank order, with successively decreasing values from low salt (50 mM) to intermediate salt (150 and 300 mM) to high salt (500 and 1000 mM). Second, as we now state (p. 13), our low-salt simulations mimic a homogeneous solution whereas our high-salt simulations mimic the dense phase of a phase-separated system. The intermediate-salt simulations also mimic the dense phase but at a somewhat lower concentration (hence the intermediate Dmax value).

      (3) With these limitations, is it realistic to estimate possible differences in salt concentration between the dilute and condensed phases in the present work? These features, including tie-lines, were shown to be amenable to analytical theory and coarse-grained molecular dynamics simulation (please see below).  

      The differences in salt effects that we report do not represent those between two phases. Rather, as explained in the preceding reply, they represent differences between a homogeneous solution at low salt and the dense phase at higher salt. We also acknowledge salt effects calculated by analytical theory and coarse-grained simulations (p. 13).

      (4) In the comparison in Fig.2B between experimental and simulated radius of gyration as a function of [NaCl], there is an outlier among the simulated radii of gyration at [NaCl] ~ 250 mM. An explanation should be offered.  

      After extending the simulations and analyzing the last 500 ns, the Rg data no longer show an outlier, though they still fluctuate somewhat from one salt concentration to another.
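      For readers wishing to reproduce this kind of analysis, the radius of gyration compared in Fig. 2B is, per frame, the mass-weighted RMS distance of atoms from their center of mass; the following is a minimal numpy sketch (illustrative, not the analysis code used in the manuscript).

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Rg = sqrt(sum_i m_i |r_i - r_com|^2 / sum_i m_i)."""
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        masses = np.ones(len(coords))  # unweighted if no masses given
    masses = np.asarray(masses, dtype=float)
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

# Example: four unit-mass points at the corners of a unit square
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(radius_of_gyration(pts))  # sqrt(0.5) ~ 0.707
```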

      (5) The phenomenon of no phase separation at zero and low salt and phase separation at higher salt has been observed for the IDP Caprin1 and several of its mutants [Wong et al., J Am Chem Soc 142, 2471-2489 (2020) [https://pubs.acs.org/doi/full/10.1021/jacs.9b12208], see especially Fig.9 of this reference]. This work should be included in the discussion and added to Table 1.

      We now have added Caprin1 to Table 1 (new ref 26) and discuss this paper (p. 13).

      (6) The authors stated in the Introduction that "A unifying understanding of how salt affects the phase separation of IDPs is still lacking". While it is definitely true that much remains to be learned about salt effects on IDP phase separation, the advances that have already been made regarding salt effects on IDP phase separation are more abundant than conveyed by this narrative. For instance, an analytical theory termed rG-RPA was put forth in 2020 to provide a uniform (unified) treatment of salt, pH, and sequence-charge-pattern effects on polyampholytes and polyelectrolytes (corresponding to the authors' low net charge and high net charge cases). This theory offers a means to predict salt-IDP tie-lines and a comprehensive account of salt effects on polyelectrolytes, resulting in a lack of phase separation at extremely low salt and subsequent salt-enhanced phase separation (similar to the case the authors studied here) and in some cases re-entrant phase separation or dissolution [Lin et al., J Chem Phys 152, 045102 (2020) [https://doi.org/10.1063/1.5139661]]. This work is highly relevant, as it already provided a conceptual framework for the authors' atomistic results and subsequent discussion. As such, it should definitely be a part of the authors' discussion.

      We now cite this paper (new ref 34) in Introduction (p. 4). We also discuss its results for Caprin1 (new ref 18; p. 13).

      (7) Bridging interactions by small ions resulting in effective attractive interactions among polyelectrolytes leading to their phase separation have been demonstrated computationally by Orkoulas et al., Phys Rev Lett 90, 048303 (2003) [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.90.048303]. This result should also be included in the discussion. 

      We now cite this paper (new ref 41; p. 11).

      (8) More recently, the salt-dependent phase separations of Caprin1, its RtoK variants and phosphorylated variant (see item #5 above) were modeled (and rationalized) quite comprehensively using rG-RPA, field-theoretic simulation, and coarse-grained molecular dynamics [Lin et al., arXiv:2401.04873 [https://arxiv.org/abs/2401.04873]], providing additional data supporting a conceptual perspective put forth in Lin et al. J Chem Phys 2020 (e.g., salt-IDP tie-lines, bridging interactions, reentrance behaviors etc.) as well as in the authors' current manuscript. It will be very helpful to the readers of eLife to include this preprint in the authors' discussion, perhaps as per the authors' discretion along the manner in which other preprints are referenced and discussed in the current version of the manuscript. 

      We now cite this paper (new ref 18) and discuss it along with new ref 26 in Discussion (p. 13).

      Reviewer #3 (Public Review): 

      Summary: 

      This study investigates the salt-dependent phase separation of A1-LCD, an intrinsically disordered region of hnRNPA1 implicated in neurodegenerative diseases. The authors employ all-atom molecular dynamics (MD) simulations to elucidate the molecular mechanisms by which salt influences A1-LCD phase separation. Contrary to typical intrinsically disordered protein (IDP) behavior, A1-LCD phase separation is enhanced by NaCl concentrations above 100 mM. The authors identify two direct effects of salt: neutralization of the protein's net charge and bridging between protein chains, both promoting condensation. They also uncover an indirect effect, where high salt concentrations strengthen pi-type interactions by reducing water availability. These findings provide a detailed molecular picture of the complex interplay between electrostatic interactions, ion binding, and hydration in IDP phase separation. 

      Strengths: 

      Novel Insight: The study challenges the prevailing view that salt generally suppresses IDP phase separation, highlighting A1-LCD's unique behavior. 

      Rigorous Methodology: The authors utilize all-atom MD simulations, a powerful computational tool, to investigate the molecular details of salt-protein interactions. 

      Comprehensive Analysis: The study systematically explores a wide range of salt concentrations, revealing a nuanced picture of salt effects on phase separation. 

      Clear Presentation: The manuscript is well-written and logically structured, making the findings accessible to a broad audience. 

      Weaknesses: 

      Limited Scope: The study focuses solely on the truncated A1-LCD, omitting simulations of the full-length protein. This limitation reduces the study's comparative value, as the authors note that the full-length protein exhibits typical salt-dependent behavior. A comparative analysis would strengthen the manuscript's conclusions and broaden its impact.

      Perhaps we did not impress on the reviewer how expensive the all-atom MD simulations on A1-LCD were: the systems each contained half a million atoms and the simulations took many months to complete. That said, we agree with the reviewer that, ideally, a comparative study on a protein showing the typical screening class of salt dependence would have made our work more complete. However, we are confident of the conclusions for several reasons. First, the three salt effects (charge neutralization, bridging, and strengthening of pi-type interactions) revealed by the all-atom simulations are physically sound and well-supported by other studies. Second, these effects led us to develop a unified picture for the salt dependence of homotypic phase separation, in the form of a predictor for the classes of salt dependence based on amino-acid composition. This predictor works well for nearly 30 proteins. Third, recent studies using analytical theory and coarse-grained simulations (new ref 18) also strongly support our conclusions.

      Reviewer #1 (Recommendations For The Authors): 

      (1) In Figure 1, the color scheme should be updated and the figure remade, as the current set of color choices makes it very difficult to distinguish the magenta spheres.  

      We have increased the sizes of ions in Figure 1 to make them distinguishable.

      (2) Within the framework of atomistic simulations, the influence of salt concentration alteration on protein conformational plasticity is worth investigating. This could be correlated (with proper details) with the effect of salt-concentration-modulated protein aggregation behavior. 

      We now use RMSF to measure conformational plasticity, which shows a clear salt-dependent trend with a 27% reduction in fluctuations from 50 mM to 1000 mM NaCl (new Fig. S1).
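      RMSF is the per-atom root-mean-square deviation from the time-averaged position over an aligned trajectory; the following is a minimal numpy sketch of the quantity reported (illustrative only; trajectory alignment to a reference is assumed to have been done and is omitted here):

```python
import numpy as np

def rmsf(trajectory):
    """Per-atom root-mean-square fluctuation over frames.

    trajectory: array of shape (n_frames, n_atoms, 3), assumed to be
    already aligned to a common reference frame.
    """
    traj = np.asarray(trajectory, dtype=float)
    mean_pos = traj.mean(axis=0)                     # (n_atoms, 3)
    sq_dev = ((traj - mean_pos) ** 2).sum(axis=2)    # (n_frames, n_atoms)
    return np.sqrt(sq_dev.mean(axis=0))              # (n_atoms,)

# Toy trajectory: atom 0 is static, atom 1 oscillates +/-1 along x
traj = np.zeros((2, 2, 3))
traj[0, 1, 0] = 1.0
traj[1, 1, 0] = -1.0
print(rmsf(traj))  # [0. 1.]
```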

      (3) The authors should mention the protein concentrations employed in the simulations and whether these are consistent with experimentally used concentrations.  

      We have mentioned the initial concentration (3.5 mM). We now further state that this concentration is maintained in the low-salt simulations, indicating absence of phase separation, but is increased to 23 mM in the high-salt simulations, indicating phase separation. The latter value is consistent with the measured concentrations in the dense phase (last two paragraphs of p. 5).

      (4) It would be useful to test the salt effect for at least two extreme salt concentrations at various protein concentrations, consistent with experimental protein concentration ranges.  

      In simulation studies of short peptides (ref 37), we have shown that the initial concentration does not affect the final concentration in the dense phase, as expected for phase-separating systems. We expect that the same will be true for the A1-LCD system at intermediate and high salt where phase separation occurs. Though this expectation could be tested by simulations at a different initial protein concentration, such simulations would be expensive but unlikely to yield new physical insight.

      (5) Importantly, the simulations do not appear to have converged well enough (Figure 2A). The authors should extend the simulation trajectories to ensure the system has reached a steady state.  

      We extended the simulations for an additional 500 ns, and they now appear to show convergence. In Figure 2A, the Dmax values now converge to a tiered rank order, with successively decreasing values from low salt (50 mM) to intermediate salt (150 and 300 mM) to high salt (500 and 1000 mM).

      (6) The authors mention "phase separation" in the title, but with only a 1 μs simulation trajectory, it is not possible to simulate a phenomenon like phase separation accurately. Since atomistic simulations cannot realistically capture phase separation on this timescale, a coarse-grained approach is more suitable. To properly explore salt effects in the context of phase separation, long timescale simulation trajectories should be considered. Otherwise, the data remain unreliable. 

      Our all-atom simulations revealed rich salt effects that might have been missed in coarse-grained simulations. It is true that coarse-grained models allow the simulation of the phase separation process, but as we have recently demonstrated (refs 36 and 37), all-atom simulations on the μs timescale are also able to capture the spontaneous phase separation of peptides and small IDPs. A1-LCD is much larger than those systems, so we had to use a relatively small chain number (8 chains here vs the 64 and 16 chains used in refs 36 and 37). Still, we observe the condensation into a dense phase at high salt. We discuss the pros and cons of all-atom vs. coarse-grained simulations on p. 13.

      (7) In Figure 5E, the plot does not show that g(r) has reached 1. If it does, the authors should show the full curve. The same issue remains with supplementary figures 1, 2, 3, etc.  

      We now show the approach to 1 in the insets of Figs. S2, S3, S4, and 5E.
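      The normalization that makes g(r) approach 1 at large r divides the observed pair counts in each radial shell by the counts expected for a uniform system at the same density; below is a self-contained numpy sketch of this convention (illustrative, not the manuscript's analysis code).

```python
import numpy as np

def pair_rdf(coords, box, r_max, n_bins=40):
    """Radial distribution function g(r) in a cubic box with
    minimum-image periodic boundary conditions.

    Normalized so that g(r) -> 1 for an ideal (uniform) system.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    # All pair displacement vectors, wrapped by the minimum-image convention
    d = coords[:, None, :] - coords[None, :, :]
    d -= box * np.round(d / box)
    dist = np.sqrt((d ** 2).sum(axis=2))[np.triu_indices(n, k=1)]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dist, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    ideal = 0.5 * n * density * shell_vol  # expected pair counts if uniform
    return 0.5 * (edges[:-1] + edges[1:]), counts / ideal

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(800, 3))  # uniform "ideal gas" of points
r, g = pair_rdf(pts, box=10.0, r_max=4.0)
print(g[-5:])  # all close to 1 for a uniform system
```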

      (8) None of the data is represented with error bars. The authors should include error bars in their data representations. 

      We have now included error bars in all graphs that report average values.

      (9) The authors state that "the net charge of the system reduces to only +8 at 1000 mM NaCl (Figure 3C)" but do not explain how this was calculated. 

      We now add this explanation in methods (p. 16).
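      The explanation in Methods is the authoritative one; purely as a generic illustration of how a system's formal net charge can be tallied from amino-acid composition plus condensate-bound ions, a sketch follows (hypothetical helper functions, not the authors' code; the toy sequence is not A1-LCD).

```python
# Formal charge per residue at neutral pH (His treated as neutral here)
RESIDUE_CHARGE = {"R": +1, "K": +1, "D": -1, "E": -1}

def sequence_net_charge(seq, n_term=+1, c_term=-1):
    """Formal net charge of one chain at neutral pH, termini included."""
    return n_term + c_term + sum(RESIDUE_CHARGE.get(aa, 0) for aa in seq.upper())

def system_net_charge(seq, n_chains, bound_na, bound_cl):
    """Protein formal charge plus the charge of condensate-bound ions."""
    return n_chains * sequence_net_charge(seq) + bound_na - bound_cl

# Toy example (hypothetical sequence): 2 positive, 2 negative, termini cancel
print(sequence_net_charge("GRGKSDGE"))  # 0
```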

      (10) The authors mention "similar to the role played by ATP molecules in driving phase separation of positively charged IDPs." However, ATP can inhibit aggregation, and its induction of phase separation is concentration-dependent. Given ATP's large aromatic moiety, its comparison to ions is not straightforward and is more complex. This comparison is best avoided.

      In this context we are comparing the bridging capability of ATP molecules in driving phase separation of positively charged IDPs in ref 36 to the bridging capability of the ions here. In ref 36 the authors show ATP bridging interactions between protein chains similar to what we show here with ions.

      (11) Many calculations are vaguely represented. The process for calculating the number of bridging ions, for example, is not well documented. The authors should provide sufficient details to allow for the reproducibility of the data. 

      We have now expanded the methods section to include more detailed information on calculations done.

      Reviewer #3 (Recommendations For The Authors): 

      Include error bars or standard deviations for all results averaged over four replicates, particularly for the number of ions and contacts per residue. This would provide a clearer picture of the data's reliability and variability. 

      We have now included error bars in all graphs that report averaged values.

      Strengthen the support for the conclusion that "each Arg sidechain often coordinates two Cl- ions, multiple backbone carbonyls often coordinate a single Na+ ion." While Fig. 3A clearly demonstrates Arg-Cl- coordination, the Na+ coordination claim for a 131-residue protein requires further clarification. Consider including the integration profile of radial distribution functions for Na+ ions to bolster this assertion.

      We now report the number of Na+ ions that coordinate with multiple backbone carbonyls (p. 7) as well as the number of Na+ ions that bridge between A1-LCD chains via coordination with multiple backbone carbonyls (p. 9). Please note that Figure 4A right panel displays an example of Na+ coordinating with multiple backbone carbonyls.

      Address the following typographical errors in the main text:

      - Page 11, line 25: "distinct classes of sat dependence" should be "distinct classes of salt dependence"
      - Page 14, line 9: "for Cl- and 3.0 and 5.4 A" should be "for Cl- and 3.0 and 5.4 Å"
      - Page 14, line 18: "As a control, PRDFs for water were also calculated" should be "As a control, RDFs for water were also calculated" (assuming PRDF was meant to be RDF)

      We have now corrected these typos.

      Consider expanding the study to include simulations of the full-length protein to provide a more comprehensive comparison between the truncated A1-LCD and the complete protein's behavior in various salt concentrations. 

      As we explained above, even with eight chains of A1-LCD, which has 131 residues, the systems already contain half a million atoms each and the all-atom simulations took many months to complete. Full-length A1 has 314 residues so a multi-chain system would be too large to be feasible for all-atom simulations.

    2. eLife Assessment

      In this potentially important study, the authors conducted atomistic simulations to probe the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). The authors have identified both direct and indirect mechanisms of salt modulation, provided explanations for four distinct classes of salt dependence, and proposed a model for predicting protein properties from amino acid composition. There is a range of opinions regarding the strength of evidence, with some considering the evidence as incomplete due to the limitations in the length and statistical errors of the computationally intense atomistic MD simulations.

    3. Reviewer #1 (Public review):

      Summary:

      The authors examined the salt-dependent phase separation of the low-complexity domain of hnRNPA1 (A1-LCD). Using all-atom molecular dynamics simulations, they identified four distinct classes of salt dependence in the phase separation of intrinsically disordered proteins (IDPs), which can be predicted based on their amino acid composition. However, the simulations and analysis, in their current form, are inadequate and incomplete.

      Strengths:

      The authors attempt to unravel the mechanistic insights into the interplay between salt and protein phase separation, which is important given the complex behavior of salt effects on this process. Their effort to correlate the influence of salt on the low-complexity domain of hnRNPA1 (A1-LCD) with a range of other proteins known to undergo salt-dependent phase separation is an interesting and valuable topic.

      Weaknesses:

      Based on the reviewer's assessment of the manuscript, the following points were raised:

      (1) The simulation duration is too short to draw comprehensive conclusions about phase separation.

      (2) There are concerns regarding the convergence of the simulations, particularly as highlighted in Figure 2A.

      (3) The simulation begins with a protein concentration of 3.5 mM ("we built an 8-copy model for the dense phase (with an initial concentration of 3.5 mM)"), which is high for phase separation studies. The reviewer questions the use of the term "dense phase" and suggests that the authors conduct a clearer analysis depicting the coexistence of both the dilute and dense phases to represent a steady state. Without this, the realism of the described phenomena is doubtful. Commenting on phase separation under conditions that don't align with typical phase separation parameters is not acceptable.

      (4) The inference that "Each Arg sidechain often coordinates two Cl- ions simultaneously, but each Lys sidechain coordinates only one Cl- ion" is questioned. According to Supplementary Figure 2A, Lys seems to coordinate with Cl- ions more frequently than Arg.

      (5) The authors are requested to update the figure captions for Supplementary Figures 2 and 3, specifying which system the analyses were performed on.

      (6) It is difficult to observe a clear trend due to irregularities in the data. Although the authors have included a red dotted line in the figures, the trend is not monotonic. The reviewer expresses concerns about significant conclusions drawn from these figures (e.g., Figure 2C, Figure 5A, Supplementary Figure 1).

      (7) Given the error in the radius of gyration (Rg) calculations, the reviewer questions the validity of drawing conclusions from this data.

      (8) The pair correlation function values in Figure 5E and Supplementary Figure 4 show only minor differences, and the reviewer questions whether these differences are significant.

      (9) Previous reports suggest that, upon self-assembly, protein chains extend within the condensate, leading to a decrease in intramolecular contacts. However, the authors show an increase in intramolecular contacts with increasing salt concentration (Figure 2C), which contradicts prior studies. The reviewer advises the authors to carefully review this and provide justification.

      (10) A systematic comparison of estimated parameters with varying salt concentrations is required. Additionally, the authors should provide potential differences in salt concentrations between the dilute and condensed phases.

      (11) The reviewer finds that the majority of the data presented shows no significant alteration with changes in salt concentration, yet the authors have made strong conclusions regarding salt activity.

      The manuscript lacks sufficient scientific details of the calculations.