- Oct 2024
-
Local file
-
Prime examples of human-driven evolution come from fisheries, as the selection pressure imposed by intense harvest has caused pronounced shifts in growth rates and reproductive timing in many commercially important stocks, potentially reducing yields and impeding recovery from overfishing
Humans are affecting the underlying genomic basis of fisheries.
-
-
millercenter.org
-
I can assure you that it is safer to keep your money in a reopened bank than under the mattress.
This is true due to the risk of getting robbed.
-
It is possible that when the banks resume a very few people who have not recovered from their fear may again begin withdrawals.
It is a good idea to rebuild trust within the community.
-
As a result we start tomorrow, Monday, with the opening of banks in the twelve Federal Reserve Bank cities—those banks which on first examination by the Treasury have already been found to be all right.
It's smart to play into what people want.
-
Because of undermined confidence on the part of the public, there was a general rush by a large portion of our population to turn bank deposits into currency or gold. A rush so great that the soundest banks could not get enough currency to meet the demand.
This makes sense, as people were in a big panic then.
-
-
reactormag.com
-
Shelley’s frown had deepened. Elliott was being cruel. He knew he was. But he was also telling the truth, and Tatiana needed to hear it.
This was after Elliott had laid into Tatiana about using the dragon for show rather than for purposes that might actually be valuable to her in the long run.
-
His stomach sank. Tatiana was a royal pain in the ass, and her cruelty to the dragon was unfathomable, but in this case, at least, she wasn’t crazy.
This is a way Elliott shows that, even though Tatiana can be troublesome, he still felt some empathy for her in this situation.
-
-
millercenter.org
-
Through this program of action we address ourselves to putting our own national house in order and making income balance outgo. Our international trade relations, though vastly important, are in point of time and necessity secondary to the establishment of a sound national economy.
I think it's very smart to focus on rebuilding the country's economy.
-
Happiness lies not in the mere possession of money; it lies in the joy of achievement, in the thrill of creative effort
I think this is only partly true, since another form of happiness comes from the relief that having money brings.
-
In such a spirit on my part and on yours we face our common difficulties. They concern, thank God, only material things. Values have shrunken to fantastic levels; taxes have risen; our ability to pay has fallen; government of all kinds is faced by serious curtailment of income; the means of exchange are frozen in the currents of trade; the withered leaves of industrial enterprise lie on every side; farmers find no markets for their produce; the savings of many years in thousands of families are gone.
I find it interesting that he's framing this as a problem he too faces.
-
This is preeminently the time to speak the truth, the whole truth, frankly and boldly.
I like the fact that he's trying to be honest.
-
-
pressbooks.pub
-
Paley argues that organisms are analogous to human-created artifacts in that they involve a complex arrangement of parts that serve some useful function, where even slight alterations in the complex arrangement would mean that the useful function was no longer served
How many things had to go perfectly for me to exist in the flesh?
-
-
mlpp.pressbooks.pub
-
Survivors of the Great Depression and their children the “baby boomers” would not quickly forget the hard times or the fact that government had helped end them. Historians debate when the New Deal ended. Some identify the Fair Labor Standards Act of 1938 as the last major New Deal legislation.
It's interesting to see the major effect the Great Depression had on America.
-
Southern farmers earned on average $183 per year at a time when farmers on the West Coast made more than four times that. Worse, they were producing cotton and corn, crops that paid little while depleting the soil.
It's crazy how large the gap in earnings was between farmers on the West Coast and farmers in the South.
-
Hoover had entered office with widespread popular support, but by the end of 1929 the economic collapse had overwhelmed his presidency. Hoover and his advisors assumed, and then desperately hoped, that the sharp economic decline was just a temporary downturn; part of the inevitable boom-bust cycles that stretched back through America’s commercial history.
I think it's interesting how thoroughly the economic collapse overwhelmed his presidency.
-
On Thursday, October 24, stock market prices plummeted. Ten billion dollars in investments (equivalent to about $100 billion today) disappeared in a matter of hours
I think it's so crazy how so much money can disappear in the blink of an eye.
-
Despite serious problems in the industrial and agricultural economies, most Americans in 1929 and 1930 believed the nation would bounce back quickly. President Herbert Hoover reassured an audience in 1930 that “the depression is over.” But the president was not simply guilty of false optimism. Hoover had made many mistakes. During his 1928 election campaign, he had promoted higher tariffs to encourage consumption of U.S.-produced products and to protect American farmers from foreign competition. Spurred by the ongoing agricultural depression, Hoover signed the highest tariff in American history, the Smoot-Hawley Tariff of 1930, just as global markets began to crumble. Other countries retaliated and tariff walls rose across the globe. Between 1929 and 1932, international trade dropped from $36 billion to only $12 billion. American exports fell by 78%.
I found this passage really shocking. It’s surprising that many people thought the economy would bounce back quickly when things were so bad. Hoover saying “the depression is over” feels almost unreal. The Smoot-Hawley Tariff made things worse, causing trade to drop a lot. This shows how quickly hope can turn into trouble, especially with bad decisions.
-
Although the crash stunned the nation, it exposed deeper, underlying problems with the American economy in the 1920s. The stock market’s rise did not really represent the health of the overall economy, and the overwhelming majority of Americans had no personal stake in Wall Street. The market’s collapse, no matter how dramatic, did not by itself destroy the American economy. Instead, the crash exposed factors such as rising inequality, declining demand, rural collapse, overextended investors, and a bursting speculative bubble that all combined to plunge the nation into the Great Depression. Despite resistance from Populists and Progressives, the gap between rich and poor had widened throughout the early twentieth century. In the aggregate, Americans were better off in 1929 than in 1919 and both production and consumption had grown. Per capita income had risen 10% for all Americans in the 1920s, but 75% for the wealthiest. The return of conservative politics in the 1920s had reinforced federal policies that exacerbated this divide. High import tariffs, low corporate and personal taxes, easy credit and low interest rates overwhelmingly favored wealthy investors who spent their money on luxury goods and speculative investments in the rapidly rising stock market.
I found this information really surprising. It’s shocking that while the stock market was doing well, most Americans weren’t getting richer. The gap between the rich and poor was huge, with the wealthiest seeing their incomes rise a lot while everyday people struggled. It’s hard to believe that the government supported policies that helped the rich even more. This shows that a strong economy doesn’t mean everyone is doing well, and we need to pay attention to these inequalities to prevent future problems.
-
The exact causes of the Stock Market Crash that began the Great Depression is still being debated by economists and historians, but most agree that a huge speculative bubble had formed during the Roaring Twenties. Although most Americans had little savings and only the richest 2.5 percent invested in stocks, those who did often borrowed to do so. Most stock purchases were made on “margin”, which meant shares could be bought with money borrowed from brokers. Often, margin accounts allowed buyers to borrow 90% to 95% of the money they needed to complete a transaction. That meant a speculator could buy $1,000 in shares for $50 or $100. This was a great deal if the value of the shares rose quickly. If a trader could make a 10% gain on $1,000 in shares (or $100) that had only cost her $50 and a couple of dollars in interest on the loan, she would be way ahead. And share prices seemed to be rising steadily. One reason for this, of course, was the demand generated by all this margin buying, which also meant that everybody was able to buy ten to twenty times more shares than they could actually afford.
I found it surprising how easily people could buy stocks with borrowed money during the 1920s. They only needed to put down a small percentage, like $50 to buy $1,000 worth of shares. It worked well while prices went up, but when the market dropped, they couldn’t pay back their loans, which helped cause the big crash. It’s shocking how this risky system led to the stock market collapse and eventually the Great Depression.
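To make the passage's arithmetic concrete, here is a minimal sketch in Python. The $1,000 position, $50 down payment, and "couple of dollars" of interest are the passage's own figures; the exact $2 interest charge is an illustrative assumption.

```python
# Margin arithmetic from the passage: $1,000 of shares bought with $50 down
# and $950 borrowed from a broker (a 95% margin loan).
own_money = 50.0
loan = 950.0
shares_value = own_money + loan          # $1,000 position

# A 10% rise adds $100 to the position; the whole gain belongs to the buyer.
gain = 0.10 * shares_value               # $100
interest = 2.0                           # "a couple of dollars" of interest (assumed)
profit = gain - interest                 # $98 earned on $50 of the buyer's own money
print(profit / own_money)                # ~1.96, i.e. roughly a 196% return

# The same leverage in reverse: a 10% drop loses $100, twice the buyer's
# entire $50 stake, which is why falling prices triggered forced selling.
loss = 0.10 * shares_value
print(loss / own_money)                  # 2.0: the stake is wiped out twice over
```

The same leverage that turned a 10% rise into a ~196% gain turns a 5% fall into a total loss, which is the mechanism behind the margin calls described in the annotation.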
-
Although the belief that economic prosperity was universal was exaggerated at the time and has been overstated by many historians, excitement over the stock market and the possibility of making speculative fortunes permeated popular culture in the 1920s. A Hollywood musical, High Society Blues, captured the hope of instant prosperity. Ironically, the movie didn’t reach theaters until after the market crash. “I’m in the Market for You,” a musical number from the film, used the stock market as a metaphor for love: You’re going up, up, up in my estimation / I want a thousand shares of your caresses, too / We’ll count the hugs and kisses / When dividends are due / ’Cause I’m in the market for you. But just as the song was being recorded in 1929, the stock market reached its peak, crashed, and brought an abrupt end to the seeming prosperity of the Roaring Twenties. The Great Depression had arrived.
I found this pretty funny! The idea of using stock market terms for love is clever, but it’s ironic that the song came out just before the stock market crashed. It’s surprising how quickly the mood changed from excitement to the Great Depression.
-
Despite the unprecedented actions he took in his first year in office, Franklin Roosevelt’s approach to combatting the Great Depression was not unanimously supported. Some critics found FDR’s relief programs too conservative. He had been careful to work within the limits of presidential authority and congressional cooperation. And unlike Europe, where several nations had turned toward state-run economies, fascism, and socialism, Roosevelt’s New Deal showed his reluctance to radically alter America’s foundational economic and social structures.
Roosevelt's use of presidential authority to bring about long-lasting change is interesting.
-
When he was nominated as the Democratic Party’s presidential candidate in July 1932, Roosevelt promised, “a new deal for the American people.” Newspaper editors seized on the phrase “new deal,” and it became shorthand for Roosevelt’s program to address the Great Depression. Roosevelt crushed Hoover, winning more counties than any previous candidate in American history. He spent the months between his election and inauguration traveling, planning, and assembling a team of advisors which became famous as Roosevelt’s “Brain Trust” of academics and experts. On March 4, 1933, in his first inaugural address, Roosevelt declared, “This great Nation will endure as it has endured, will revive and will prosper. So, first of all, let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” In his first days in office, Roosevelt and his advisors prepared, submitted, and passed laws designed to halt the worst effects of the Great Depression. His administration threw the federal government headlong into the fight against the Depression.
I'm glad President Roosevelt won by such a majority, because I feel Herbert Hoover would have done worse for the country.
-
As the United States slid deeper into the Great Depression, individuals, families, and communities faced the frightening and bewildering failure of institutions on which they had depended. The fortunate were spared the worst effects, and a few even profited from it, but by the end of 1932 the crisis had become so deep and so widespread that most Americans had suffered directly. Facing unemployment and declining wages, Americans slashed expenses. The rich could survive by simply deferring vacations and regular consumer purchases. Middle- and working-class Americans might rely on credit at neighborhood stores, default on utility bills, or skip meals. Those who could borrowed from relatives or took boarders into their homes. Many poor families “doubled up” in tenements. The most desperate camped on public lands in “Hoovervilles,” spontaneous shantytowns that dotted America’s cities, depending on bread lines and street-corner peddling. The emotional and psychological shocks of unemployment only added to the material difficulties of the Depression. Social workers and charity officials often found the unemployed suffering from feelings of futility, anger, bitterness, confusion, and shame. These feelings affected the rural poor as well as the urban.
I wonder when this all took place. I believe it was while Hoover was president, and I've heard he was seen as a selfish, corrupt president because he was not helping his people.
-
While most of the “Bonus Army” left Washington after the bill’s defeat, some stayed in protest. They were unemployed and homeless war veterans, but Hoover called the remaining protesters “insurrectionists” and ordered them to leave. When thousands ignored Hoover’s order, he sent General Douglas MacArthur. Accompanied by local police, the U.S. Army infantry, cavalry, tanks, and a machine gun squadron, MacArthur evicted the Bonus Army and burned the tent city. National media covered the disaster as troops attacked veterans, chased down men and women, tear-gassed children, and torched the shantytown. Several veterans were killed in the attack.
That's super crazy: these old soldiers sacrificed their lives to help the country in war, only to be treated like this.
-
Sympathy for migrants, however, accelerated late in the Depression when Hollywood made a movie of The Grapes of Wrath.
I wonder how sympathy for migrants differed before and after the movie was made. Was it the increased awareness of their struggles, or the similarities between their struggles and those of US citizens at the time?
-
Other countries retaliated and tariff walls rose across the globe
Seems kind of obvious that other countries would want to retaliate if the US was charging so much on imports while paying far less on its own exports.
-
They were unemployed and homeless war veterans, but Hoover called the remaining protesters “insurrectionists” and ordered them to leave.
This is such a horrible response to those in need. The veterans worked for the US to help its cause, and they're just being tossed aside.
-
That meant a speculator could buy $1,000 in shares for $50 or $100.
I'm confused about how this wasn't seen as a problem. To me, it seems like people were getting a large sum of money from just a small buy-in. How did $1,000 worth of shares come from just $50 to $100?
-
-
public.wsu.edu
-
"Captain, shall I keep her making for that light north, sir?"
The light could be a symbol for the afterlife and by continuing on the group could be ultimately racing towards their end.
-
"What do you think of those life-saving people? Ain't they peaches?" "Funny they haven't seen us." "Maybe they think we're out here for sport! Maybe they think we're fishin'. Maybe they think we're damned fools."
Could be a demonstration of natural selection, showing how, despite the group's best efforts, they still fall short of safety.
-
IT would be difficult to describe the subtle brotherhood of men that was here established on the seas. No one said that it was so. No one mentioned it. But it dwelt in the boat, and each man felt it warm him.
Tragedy will often bring an unlikely group together and allow them to bond.
-
A young man thinks doggedly at such times. On the other hand, the ethics of their condition was decidedly against any open suggestion of hopelessness. So they were silent. "Oh, well," said the captain, soothing his children, "we'll get ashore all right."
Creates a dark and cynical tone for the story and may foreshadow tragedy later on since they are all aware that there is a chance they will die at sea.
-
Many a man ought to have a bath-tub larger than the boat which here rode upon the sea. These waves were most wrongfully and barbarously abrupt and tall, and each froth-top was a problem in small boat navigation.
The rough sea environment coincides with naturalism's tendency to place stories in harsh environments.
-
-
www.ribbonfarm.com
-
successfully negotiating more money and/or power
Or leave the company.
-
cynically play out the now-illogical re-org anyway
Need to have a bias for action and recognize when you're succumbing to the sunk cost fallacy.
-
-
learningcpp.org
-
cout << h->key << endl;
Changing the numbering to reflect the list of options would be clearer. In this case, "2. cout << h->key << endl;"
-
-
socialsci.libretexts.org
-
Of the four theorists reviewed above (Freud, Erikson, Piaget, and Vygotsky) which theorist’s ideas about development most closely match your own beliefs about how people develop and why?
Every theory offers insight into a child's development, helping us understand social, cultural, emotional, physical, and cognitive growth. For me, Vygotsky's theory is the most essential, since it holds that we develop cognitively within society and through our own experiences.
-
How does the division of chores impact or not impact your household?
If all members of the household collaborated on chores, it would greatly help everyone, because we would all be putting in the time, effort, and attitude needed to keep a clean, healthy home and to work together.
-
What is the main role you have in your family system? What boundaries do you have or wish you had?
The main role I have in my family is to be a mother and a provider. I take care of my kids, help around the house, and bring income into the house. One boundary I wish I had is more time with my family, because communication and relationships are important to me, but I am always working and my kids are always busy.
-
-
static1.squarespace.com
-
The most common, mundane things of the body, the village, the earth—these too, in the Indian mind, were suffused with a history of sacredness and power
This last line is very powerful. It affirms the rootedness of Native Americans in Californian lands as their beliefs and myths themselves were centered around the mundane things that were "the body, the village, the earth".
-
Strange yet concrete figures in a magical ambience—the material that myths are made of is very close indeed to the material dreams are made of.
It's rare to find myths with such human, tangible features that we can relate to.
-
According to this plan, people are going to be. There are going to be people on this earth. On this earth there will be plenty of food for the people! According to this plan there will be many different kinds of food for the people! Clover in plenty will grow, grain, acorns, nuts!"
I really love the idea of just "going to be." Like others have touched upon, this creation story is distinct from others I know about in that its intentions are plain and simple: they want the people to live freely and happily--as compared to having to earn happiness and basic necessities like food via strict customs and/or values.
-
-
social-media-ethics-automation.github.io
-
It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:
This is a basic principle we learned in science class and experiments: the variables in an experiment must be questioned seriously, and no other factors should be allowed to confound the results or introduce bias. People's attention is sometimes drawn to absurd or strange news, so they easily spread those wrong messages.
-
It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:
This is what we learned in science class and experiments: always make sure the setup is accurate, so that no other elements or factors introduce errors into the result. It also relates to the content of a class I am taking this quarter: Calling Bullshit.
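A tiny simulation of the point both notes are making, sketched in Python with only the standard library (the counts of series and points are arbitrary illustrative choices): scan enough unrelated variables and some pair will look strongly correlated purely by chance.

```python
# Spurious correlation demo: 50 independent random series of 10 points each.
# With 1,225 possible pairs, some pair is almost guaranteed to correlate
# strongly even though every series is pure noise.
import itertools
import random
import statistics

random.seed(0)
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(50)]

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

best_pair = max(itertools.combinations(range(len(series)), 2),
                key=lambda p: abs(pearson(series[p[0]], series[p[1]])))
print(best_pair, round(pearson(series[best_pair[0]], series[best_pair[1]]), 2))
# Typically reports |r| near 0.9: a "relationship" that is pure chance.
```

The more pairs you test, the more extreme the best chance correlation becomes, which is exactly why large datasets make spurious correlations easy to find.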
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:
Most of the time, social media platforms want to gather information and track people's actions. Registration asks not only for basic contact information like email and phone number; gender, interests, and inviting friends are built into the purpose of using social media and are included as unskippable steps.
-
-
en.wikipedia.org
-
focused on greater realism
What does realism mean here?
-
The plot is based on an Italian tale written by Matteo Bandello
Wow, I didn't know that Romeo and Juliet is from some other story!
-
Romeo and Juliet belongs to a tradition of tragic romances stretching back to antiquity.
What does antiquity mean?
-
-
envirodatagov.org
-
Americans now face energy scarcity
The Annual Energy Outlook (AEO) report by the US Energy Information Administration surveys long-term energy trends and is widely respected by traditional energy producers and analysts across many fields. The 2023 AEO concluded that "we project that the United States will remain a net exporter of petroleum products and natural gas through 2050 in all AEO2023 cases." Further, "As a result, in AEO2023, we see renewable generating capacity growing in all regions of the United States in all cases. Across all cases, compared with 2022, solar generating capacity grows by about 325% to 1019% by 2050, and wind generating capacity grows by about 138% to 235%. We see growth in installed battery capacity in all cases to support this growth in renewables. Across the span of AEO cases, relative to 2022, natural gas generating capacity ranges from an increase of between 20% to 87% through 2050." https://www.eia.gov/outlooks/aeo/narrative/index.php#ExecutiveSummary
-
a new energy crisis
This chapter provides no evidence that the US is experiencing an energy crisis. To wit, much evidence, such as record oil production, suggests the opposite: https://www.forbes.com/sites/rrapier/2024/03/12/eia-confirms-historic-us-oil-production-record/
-
-
doc.cat-v.org
-
If one profiles what is going on in this whole process, it becomes clear that I/O dominates. Of the cpu cycles expended, most go into conversion to and from intermediate file formats.
-
-
learningcpp.org
-
denominator * x
I believe only the numerator needs to be multiplied here.
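A minimal sketch of the fix this note is suggesting, written here in Python rather than the page's C++ and with hypothetical names (the page's actual code is not reproduced above): multiplying a fraction by an integer should scale the numerator only, since scaling the denominator would divide instead.

```python
# Multiplying the fraction n/d by an integer x: only the numerator is scaled,
# because n/d * x == (n * x) / d.
def multiply_fraction_by_int(numerator: int, denominator: int, x: int):
    return numerator * x, denominator

# Scaling the denominator instead would compute n / (d * x), i.e. division by x.
assert multiply_fraction_by_int(1, 2, 3) == (3, 2)   # 1/2 * 3 == 3/2
```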
-
-
www.youtube.com
-
the essential feature of the implicate order is
B
-
Sensations are localized within extended geometrical space conceptual feelings or no mat
Noemata
-
noetic
Sensory and the noetic
-
modern science
The advent of
-
transcendental Purity
Fallen material world
-
imperfection and impurity
Complex imperfection
-
the axial age saw the incorporation of the inner dialogue into the human sense of self agency
Axial age
-
a word is the body of a concept; a concept is the soul of a word
Well said
-
Alfred North Whitehead in 1927
Creativity
-
the ploma
Pleroma?
-
the collective unconscious is in fact the implicate noetic realm
Hegel's absolute spirit foreshadows this.
-
noemata shamelessly lifted from Edmund
X
-
participants in the Stream of becoming
Stream of becoming
-
intricate self-metamorphic and purposive complexes of prehension or experiential relationships
Ditto
-
by theories which stray even further
From
-
mind warping mathematical toys
X
-
-
moderna.historicas.unam.mx
-
Regarding the PAN's pro-dialogue stances, we agree with Soledad Loaeza's argument, which underlines how clearly this party understood its objective: the pursuit of power.
Fact.
-
-
openclassrooms.com
-
pedagogy.
Test @
-
Mathieu
test
-
-
drive.google.com
-
E-activity is the name usually applied to the structure of active and interactive online training. E-activities can be used in various ways, but they share some common characteristics. E-activities enable active, participatory online learning, individually or in groups. They are important because they employ principles useful for learning as well as a choice of appropriate technologies.
After reading the shared materials, here is my contribution on what seems essential: The selection of e-activities is crucial for effective teaching and learning in virtual environments. Almenara, Osuna & Cejudo (2014) suggest criteria for selecting the best e-activity, including consideration of learners' characteristics, the mobilization of different aptitudes, the promotion of diverse competencies, and greater motivation. E-activities are crucial to student learning, supporting objectives and integrating into the curriculum. Motivation is essential to reinforce learning needs. Immediate feedback is crucial for understanding progress and identifying areas for improvement. E-activities enable active, participatory online learning using appropriate principles and technologies. The choice of an e-activity structure depends on pedagogical, didactic, and methodological options.
-
-
www.lazaruscorporation.co.uk
-
In his post Raw dog the open web! Jason says (quite correctly): www.fromjason.xyz Monoculture is winning. The Fortune 500 has shrink-wrapped our zeitgeist and we are suffocating culturally. But, we can fight back by bookmarking a web page or sharing a piece of art unsanctioned by our For Your Page. To do that we must get out there and raw dog that open web. In our current digital landscape, where a corporate algorithm tells us what to read, watch, drink, eat, wear, smell like, and sound like, human curation of the web is an act of revolution. A simple list of hyperlinks published under a personal domain name is subversive. Curation is punk.
I love how this blogpost creates a highlighted link to the original post which they're quoting along with the commanding words "View in context at www.fromjason.xyz".
-
-
edtech.dk
-
v
Should be \(u\)
-
-
fcichos.github.io
-
Hello, I just tried to test the first exercise, but unfortunately got the following error:
Tests could not be run. Jobe server request failed. HTTP response from Jobe was 502: null
Is everyone else having the same problem?
-
-
www.reddit.com
-
Does anyone know how do they make new platens?
reply to u/General-Writing1764 at https://old.reddit.com/r/typewriters/comments/1g7a8y5/does_anyone_know_how_do_they_make_new_platens/
I'm guessing that JJ Short is taking the original, removing the rubber, placing the core into a mold, and pouring in new material which hardens. Once done, they put it on a lathe and turn it down to the appropriate (original) diameter. Potentially they're sanding the final couple of thousandths of an inch for finish.
I'd imagine that if you asked them, they could/would confirm this general process.
The only other shop I've heard doing platen work is Bob at Typewriter Muse, but I haven't gone through his YouTube videos to see what his process looks like. (I'm pretty sure he documents some of it there.)
-
-
thewasteland.info
-
Dry bones can harm no one.
The assertion that these bones "can harm no one" introduces a paradox. While they signify death and the end of vitality, their inertness suggests that the past cannot actively disrupt the present. This resonates with the broader themes of the poem, which often grapple with the weight of history and the haunting presence of memory. In a world characterized by despair, the line hints at a resigned acceptance of the past’s inability to inflict further harm, positioning it as a relic rather than an active force.
-
Shantih shantih shantih
The repetition of "Shantih shantih shantih" in the final lines of "The Waste Land" functions as a continuum rather than a concrete ending, embodying a search for peace amidst chaos. The term, meaning "the peace that passeth understanding," resonates deeply with the poem’s overarching themes of fragmentation and despair.
This triplet of peace is both a culmination and an invitation, suggesting that true tranquility may lie not in resolution but in the ongoing quest for harmony. Instead of providing a definitive conclusion, the repetition creates a rhythmic pulse that echoes throughout the poem, reminiscent of a mantra. It invites the reader to contemplate the cyclical nature of existence—where endings lead to new beginnings.
-
I do not know whether a man or a woman
The line "I do not know whether a man or a woman" emerges from the section "A Game of Chess," which delves into the disintegration of human connection. This line signifies the speaker’s existential uncertainty, reflecting a world where traditional gender roles and identities have become muddied and irrelevant.
Here, the ambiguity echoes the broader themes of alienation and fragmentation, as the characters struggle to communicate and connect. The uncertainty of gender mirrors the breakdown of personal relationships, suggesting that in a chaotic, post-war landscape, even the most fundamental aspects of identity are in flux. This disorientation emphasizes the emotional paralysis faced by individuals, reinforcing the haunting sense of isolation pervading the poem.
-
Who is the third who walks always beside you?
In T.S. Eliot's "The Waste Land," the line "Who is the third who walks always beside you?" evokes a haunting presence, layering the poem with existential uncertainty and a sense of companionship laced with disquiet. From the speaker's point of view, this line captures an intimate yet unsettling inquiry that transcends the immediate relationships, hinting at a spiritual or existential specter that accompanies the living.
The phrase suggests an ambiguous, almost spectral companionship—one that suggests both intimacy and alienation. The “third” figure implies a triangular relationship, where the speaker and another are not alone; instead, they are shadowed by an elusive presence. This presence is not just a literal figure but symbolizes collective trauma, history, or perhaps the weight of modern disillusionment. It evokes a sense of haunting, as if the past—whether personal or cultural—lingers ominously, shaping the present.
Moreover, the use of “who” implies a search for identity, suggesting that the speaker grapples with understanding not just the presence of this third entity, but also their own place within the existential landscape of the poem. The question is both a plea and a probe, inviting readers to ponder the nature of companionship in a fractured world. The haunting nature of this inquiry lies in its open-endedness; it suggests that the answer may elude the speaker, reinforcing the poem's overarching themes of fragmentation and despair.
Ultimately, this line encapsulates the haunting complexity of human experience—where the past, the present, and the metaphysical intertwine, leaving the speaker (and the reader) in a state of reflective disquiet. The third figure symbolizes both loss and the ongoing search for meaning, a companion that walks with us, whether we acknowledge it or not.
-
whirlpool.
The whirlpool contrasts with moments of stillness and clarity in the poem. It underscores the tension between chaos and order, reflecting the desire for meaning in a fragmented world. The whirlpool serves as a reminder of the relentless motion of time and the challenges of finding stability.
-
The river sweats Oil and tar
The lines "The river sweats / Oil and tar" reflect the industrial pollution of the environment and symbolizes the decay and corruption present in modern life. The river, typically a symbol of life and renewal is assigned a certain vitality and is transformed into a site of contamination, highlighting themes of desolation and moral decline in the post-war world.
-
Twit twit twit Jug jug jug jug jug jug So rudely forc’d. Tereu
In "The Waste Land," the lines "Twit twit twit / Jug jug jug jug jug jug / So rudely forc'd" evoke a jarring and fragmented sense of communication, drawing from the myth of Tereus, Procne, and Philomela. This reference introduces themes of violence, loss, and the disruption of natural order. The repetition of "twit" and "jug" creates a rhythmic yet unsettling sound, almost mocking in its simplicity. It highlights the stark contrast between the complexity of human emotion and the reduced, animalistic quality of the sounds. This mirrors the broader themes of disconnection and alienation throughout the poem. The reference to Tereus—who brutally silenced Philomela by cutting out her tongue—serves as a potent metaphor for silencing and trauma. In this context, the nymphs and their experiences are connected to loss and violence, underscoring the idea that beauty and vitality are often subjected to brutal realities.
-
departed.
The indentation of “departed” draws attention to the unusual experience of the nymphs, who traditionally symbolize beauty, love, and the natural world, often associated with life and abundance. However, in Eliot’s context, their presence serves to contrast the barrenness and emptiness of modern existence. Also, decapitalizing “departed” shifts the agency of the myths and implies a more passive experience as they have been swept away and lost without active control over their fate. This loss of agency aligns with the themes in "The Waste Land," where characters often feel powerless in the face of societal decay and personal disillusionment. The experience of the nymphs can be interpreted as a reflection of unfulfilled longing and the impact of a fragmented society on intimate relationships. Instead of celebrating love and connection, their references evoke a sense of nostalgia for a more vibrant, meaningful past that has been lost. This mirrors the sorrow expressed in Psalm 137, where the Israelites long for their homeland, suggesting a universal longing for wholeness and the deep human need for connection.
Ultimately, the nymphs' experience in "The Waste Land" draws attention to the contrast between the idealized past and the stark reality of the present, reinforcing the poem's exploration of loss, longing, and the search for identity in a desolate world.
The line "Departed, have left no addresses" from "The Waste Land" resonates deeply with the themes in Psalm 137, particularly the sense of dislocation and absence. In Psalm 137, the Israelites lament their exile in Babylon, feeling disconnected from their homeland and traditions. The line evokes a profound sense of loss and the inability to return to a place of belonging, mirroring the mournful sentiment of having no way to communicate or reconnect with what has been left behind. Both texts express a longing for something lost and the pain of separation, emphasizing the emotional weight of exile. Just as the Israelites mourn their captivity and the destruction of their identity, Eliot's line suggests a broader existential crisis where individuals feel untethered in a fragmented world, underscoring the despair and disconnection prevalent in both works.
-
-
www.nytimes.com
-
The Amazon region is suffering extreme drought for the second year in a row. There has never been so little rainfall, never has a drought lasted so long, and never has one affected such a large area. Wildfires made São Paulo the city with the worst air quality in the world. Most of the other countries of South America are affected as well. The causes are an El Niño intensified by global heating and the record temperatures of the North Atlantic. Detailed report: https://www.nytimes.com/2024/10/19/world/americas/south-america-drought-amazon-river.html
-
Julie Turkewitz, Ana Ionova and José María León Cabrera
-
-
drive.google.com
-
Immune system molecules in turtles: lysozymes with antibacterial activity, and other molecules that are cathelicidins with antifungal and antibacterial activity, even more potent than the drugs ampicillin and benzylpenicillin. A small cationic protein was isolated from the Siamese crocodile (Crocodylus siamensis), which demonstrated antibacterial activity against S. typhi, E. coli, S. aureus, Staphylococcus epidermidis, K. pneumoniae, P. aeruginosa and Vibrio cholerae (Preecharram et al., 2008). These antimicrobial peptides offer potent protection for reptiles against infection as well as provide exciting opportunities in the search for new clinical or agricultural antibiotics.
-
Defensins are proteins that have a characteristic β-sheet-rich fold as well as six disulphide-linked cysteines and have been found in all mammals that have been examined as well as in birds.
-
Defensins have been described with antibacterial activity against Escherichia coli and Salmonella typhimurium as well as antiviral activity against the Chandipura virus (Chattopadhyay et al., 2006). The first β-defensin from reptilian leukocytes was recently isolated from the European pond turtle Emys orbicularis. Known as TBD-1, the peptide demonstrated strong activity against E. coli, Listeria monocytogenes, Candida albicans and methicillin-resistant Staphylococcus aureus (Stegemann, 2009).
-
The thymus and spleen remain active during the summer and involute in winter, when the animals are in hibernation.
-
Reptiles are the only ectothermic amniotes, and therefore become a pivotal group to study in order to provide important insights into both the evolution of the immune system as well as the functioning of the immune system in an ecological setting.
-
-
www.repubblica.it
-
A "Mediterranean cyclone" is causing extreme storms throughout Italy this weekend. Last week, parts of Liguria saw 300 mm of rainfall within a few hours (some locations in France saw 600 mm). Near Piombino, 120 mm of rain fell in one hour. These "tropical" rainfall amounts deviate sharply from the usual Mediterranean climate. https://www.repubblica.it/italia/2024/10/19/news/previsioni_meteo_weekend_maltempo_nubifragi_allerta_rossa-423564680/
-
-
www.dailymaverick.co.za
-
Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa.
for - book - Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa - Caryn Dolley - Colombia drug trafficking in South Africa
-
Why you don’t see it is because it’s subtle, very sophisticated and it is a massive business.
for - quote - organized crime in Cape Town
quote - organized crime in Cape Town - Andre Lincoln - Caryn Dolley - (see below) - Why you don’t see it is because it’s subtle, very sophisticated and it is a massive business. - How many restaurants and clubs on these famous streets are paying protection money to criminals? It's pretty startling - And what about construction shakedowns? 63 billion Rand of projects impacted in 2019 - https://hyp.is/Smjb3I5CEe-fXHsx-Sy8kQ/www.inclusivesociety.org.za/post/overview-of-the-construction-mafia-crisis-in-south-africa
-
If you were to go down Sea Point main road, or into town into Long Street or Kloof Street, all those restaurant or club owners contribute to organised crime regularly. Most of them, unwillingly, but they have no other option. And they have no other option because of the way organised crime works,” said Lincoln.
for - organized crime - Cape Town - hidden protection scheme - Andre Lincoln
-
for - polycrisis - organized crime - Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18 - book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024 - https://viahtml.hypothes.is/proxy/https://shop.dailymaverick.co.za/product/man-alone-mandelas-top-cop-exposing-south-africas-ceaseless-sabotage/?_gl=11mkyl5s_gcl_auODI2MTMxODEuMTcyNjI0MDAwMg.._gaNzQ5NDM3NzE0LjE3MjMxODY0NzY._ga_Y7XD5FHQVG*MTcyOTM1MjgwOS4xLjAuMTcyOTM1MjgxOS41MC4wLjkyNTE5MDk2OA..
summary - This article revolves around the research of South African crime reporter Caryn Dolley on the organized web of crime in South Africa - She discusses the nexus of - trans-national drug cartels - local Cape Town gangs - South African state collusion with gangs - in her new book: Man Alone: Mandela's Top Cop - Exposing South Africa's Ceaseless Sabotage - It illustrates how on-the-ground efforts to fight crime are failing because they do not effectively address this criminal nexus - The book follows the life of retired top police investigator Andre Lincoln, whose exposé paints a picture of the deep level of criminal activity spanning government, trans-national criminal networks and local gangs - Such organized crime takes a huge toll on society and is an important contributor to the polycrisis. - Non-linear approaches are necessary to tackle this systemic problem - One possibility is a trans-national citizen-led effort
Tags
- trans-national drug cartels - South Africa - Colombia - Serbia
- Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18
- quote - organized crime in Cape Town
- book - Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa - Caryn Dolley
- polycrisis - organized crime
- book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024
- construction mafia stats - South Africa
- organized crime - Cape Town - hidden protection scheme - Andre Lincoln
-
-
www.inclusivesociety.org.za
-
In 2019, at least 183 infrastructure and construction projects worth more than R63-billion had been affected by the construction mafia.
for - stats - construction mafia impacts - South Africa - 2019 - R63 billion - Overview of the Construction Mafia Crisis in South Africa - Inclusive Society Institute - 2023
-
-
www.biorxiv.org
-
eLife Assessment
This important work advances our understanding of parabrachial CGRP threat function. The evidence supporting CGRP aversive outcome signaling is solid, while the evidence for cue signaling and fear behavior generation is incomplete. The work will be of interest to neuroscientists studying defensive behaviors.
-
Reviewer #1 (Public Review):
Summary
The authors asked if parabrachial CGRP neurons were only necessary for a threat alarm to promote freezing or were necessary for a threat alarm to promote a wider range of defensive behaviors, most prominently flight.
Major Strengths of Methods and Results
The authors performed careful single-unit recording and applied rigorous methodologies to optogenetically tag CGRP neurons within the PBN. Careful analyses show that single-units and the wider CGRP neuron population increases firing to a range of unconditioned stimuli. The optogenetic stimulation of experiment 2 was comparatively simpler but achieved its aim of determining the consequence of activating CGRP neurons in the absence of other stimuli. Experiment 3 used a very clever behavioral approach to reveal a setting in which both cue-evoked freezing and flight could be observed. This was done by having the unconditioned stimulus be a "robot" traveling along a circular path at a given speed. Subsequent cue presentation elicited mild flight in controls and optogenetic activation of CGRP neurons significantly boosted this flight response. This demonstrated for the first time that CGRP neuron activation does more than promote freezing. The authors conclude by demonstrating that bidirectional modulation of CGRP neuron activity bidirectionally affects freezing in a traditional fear conditioning setting and affects both freezing and flight in a setting in which the robot served as the unconditioned stimulus. Altogether, this is a very strong set of experiments that greatly expand the role of parabrachial CGRP neurons in threat alarm.
Weaknesses
In all of their conditioning studies the authors did not include a control cue, for example, a sound presented the same number of times but unrelated to US (shock or robot) presentation. This does not detract from their behavioral findings. However, it means the authors do not know whether the observed behavior is a consequence of pairing, or a behavior that would be observed to any cue played in the setting. This is particularly important for the experiments using the robot US.
The authors make claims about the contribution of CGRP neurons to freezing and fleeing behavior, however, all of the optogenetic manipulations are centered on the US presentation period. Presently, the experiments show a role for these neurons in processing aversive outcomes but show little role for these neurons in cue responding or behavior organizing. Claims of contributions to behavior should be substantiated by manipulations targeting the cue period.
Appraisal
The authors achieved their aims and have revealed a much greater role for parabrachial CGRP neurons in threat alarm.
Discussion
Understanding neural circuits for threat requires us (as a field) to examine diverse threat settings and behavioral outcomes. A commendable and rigorous aspect of this manuscript was the authors' decision to use a new behavioral paradigm and measure multiple behavioral outcomes. Indeed, this manuscript would not have been nearly as impactful had they not done that. This novel behavior was combined with excellent recording and optogenetic manipulations - a standard the field should aspire to. Studies like this are the only way that we as a field will map complete neural circuits for threat.
-
Reviewer #2 (Public Review):
-Summary of the Authors' Aims: The authors aimed to investigate the role of calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) in modulating defensive behaviors in response to threats. They sought to determine whether these neurons, previously shown to be involved in passive freezing behavior, also play a role in active defensive behaviors, such as fleeing, when faced with imminent threats.
-Major Strengths and Weaknesses of the Methods and Results: The authors utilized an innovative approach by employing a predator-like robot to create a naturalistic threat scenario. This method allowed for a detailed observation of both passive and active defensive behaviors in mice. The combination of electrophysiology, optogenetics, and behavioral analysis provided a comprehensive examination of CGRP neuron activity and its influence on defensive behaviors. The study's strengths lie in its robust methodology, clear results, and the multi-faceted approach that enhances the validity of the findings.
No notable weakness found.
-Appraisal of Aims and Results: The authors successfully achieved their aims by demonstrating that CGRP neurons in the PBN modulate both passive and active defensive behaviors. The results clearly show that activation of these neurons enhances fear memory and promotes conditioned fleeing behavior, while inhibition reduces these responses. The study provides strong evidence supporting the hypothesis that CGRP neurons act as a comprehensive alarm system in the brain.
-Impact on the Field and Utility of Methods and Data: This work has significant implications for the field of neuroscience, particularly in understanding the neural mechanisms underlying adaptive defensive behaviors. The innovative use of a predator-like robot to simulate naturalistic threats adds ecological validity to the findings and may inspire future studies to adopt similar approaches. The comprehensive analysis of CGRP neuron activity and its role in defensive behaviors provides valuable data that could be useful for researchers studying fear conditioning, neural circuitry, and behavior modulation.
-Additional Context: The study builds on previous research that primarily focused on the role of CGRP neurons in passive defensive responses, such as freezing. By extending this research to include active responses, the authors have provided a more complete picture of the role of these neurons in threat detection and response. The findings highlight the versatility of CGRP neurons in modulating different types of defensive behaviors based on the perceived intensity and immediacy of threats.
Overall, this manuscript makes a significant contribution to our understanding of the neural basis of defensive behaviors and offers valuable methodological insights for future research in the field.
-
Reviewer #3 (Public Review):
Strengths: The study used optogenetics together with in vivo electrophysiology to monitor CGRP neuron activity in response to various aversive stimuli including robot chasing to determine whether they encode noxious stimuli differentially. The study used an interesting conditioning paradigm to investigate the role of CGRP neurons in the PBN in both freezing and flight behaviors.
Weakness: The major weakness of this study is that the chasing robot threat conditioning model elicits weak unconditioned and conditioned flight responses, making it difficult to interpret the robustness of the findings. Furthermore, the conclusion that the CGRP neurons are capable of inducing flight is not substantiated by the data. No manipulations are made to influence the flight behavior of the mouse. Instead, the manipulations are designed to alter the intensity of the unconditioned stimulus.
-
-
www.biorxiv.org
-
eLife Assessment
This study presents a valuable finding on the identification of a complex consisting of NHE1, hERG1, β1 integrin and NaV1.5 on the membrane of breast cancer cells. The evidence supporting the claims of the authors is somewhat incomplete. Clarifying some aspects of the experimental design and amending the cropped Western blot data would have strengthened the study. The work will be of interest to scientists working on breast cancer.
-
Reviewer #1 (Public Review):
This manuscript by Capitani et al. extends previous studies of ion channel expression in triple-negative breast cancer cell lines. Probing four phenotypically different breast cancer cell lines, they used co-IP and confocal immunofluorescence (IF) colocalization to reveal that beta1 integrin forms a complex with the neonatal form of the Na+ channel NaV1.5 (nNaV1.5) and the Na+/H+ antiporter NHE1 in addition to previously reported hERG1. They used siRNA to show that silencing beta1 results in a co-depletion of hERG and Nav1.5, further supporting the conclusion that they form a complex; a complementary enhancement of Na current with increased hERG expression was also demonstrated. These data compellingly describe a complex of membrane proteins upregulated in breast cancer and thus present novel potential targets for treatment.
There are several concerns with experimental approaches. How fluorescence measurements were compared and controlled among experiments was not described, and masks drawn to define membrane expression seemed arbitrary and included in some cases large sections of cytoplasm. There are issues associated with the use of channel blocking agents and a bifunctional small-chain antibody that are not well rationalized. Why are they being used, to test what hypotheses or disrupt what processes? The extremely high concentrations of E-4031 (4000x IC50 for block), e.g., are not expected to have selective actions. The effects of E-4031 at high concentrations altering cytoskeleton properties associated with invasiveness (and thus cancer progression) are questionable. There are numerous problems with co-IPs carried out together with knock-down, which in one case depleted the protein targeted by the primary IP antibody. Western blots (WB) were quantified by comparing treatment to control, which does not control for loading errors. The control and treated signals should be divided by the respective tubulin signals to control for loading errors. Then the treated value can be compared with the control.
-
Reviewer #2 (Public Review):
The manuscript by Chiara Capitani and Annarosa Arcangeli reports the identification of a complex comprising NHE1, hERG1, β1 integrin, and NaV1.5 on the plasma membrane of breast cancer cells. The authors further investigated the mutual regulatory interactions among these proteins using Western blotting and co-immunoprecipitation assays. They also examined the downstream signaling pathways associated with this complex and assessed its impact on the malignant behavior of breast cancer cells.
Strengths
The manuscript used different breast cancer cell lines and combined Western blot, immunostaining, and electrophysiology to provide evidence for the proposed complex. The inhibitors are also used to test the requirement of channel activity to function in the development of breast cancer cells with in-vitro studies.
Weaknesses
The data shown in this manuscript include the western blots that are cropped and imaged separately to draw conclusions about protein levels and changes in immunoprecipitation. These cannot be done on separate, cropped blots but must be imaged together to make these comparisons.
Antibodies used for hERG, NaV1.5 and β1 integrin must be validated to work for IP using KO or KD cell lines for the respective proteins to demonstrate specificity. The same goes for all the immunofluorescence imaging shown in the manuscript as these are all key pieces of data to support the conclusions.
-
-
www.biorxiv.org
-
eLife Assessment
SGLT2 inhibitors (SGLT2i) have assumed important roles in reducing cardiovascular risk, particularly in those with diabetes. It has become appreciated that their protective effects likely extend beyond their ability to lower blood sugar levels. This research presents a novel approach to studying the SGLT2i mechanism of action, which is yet to be fully elucidated.
-
Reviewer #1 (Public Review):
The authors examined the hypothesis that plasma ApoM, which carries sphingosine-1-phosphate (S1P) and activates vascular S1P receptors to inhibit vascular leakage, is modulated by SGLT2 inhibitors (SGLTi) during endotoxemia. They also propose that this mechanism is mediated by SGLTi regulation of LRP2/megalin in the kidney and that this mechanism is critical for endotoxin-induced vascular leak and myocardial dysfunction. The hypothesis is novel and potentially exciting. However, the authors' experiments lack critical controls, lack rigor in multiple aspects, and overall do not support the conclusions.
-
Reviewer #2 (Public Review):
Apolipoprotein M (ApoM) is a plasma carrier for the vascular protective lipid mediator sphingosine 1-phosphate (S1P). The plasma levels of S1P and its chaperones ApoM and albumin rapidly decline in patients with severe sepsis, but the mechanisms for such reductions and their consequences for cardiovascular health remain elusive. In this study, Ripoll and colleagues demonstrate that the sodium-glucose co-transporter inhibitor dapagliflozin (Dapa) can preserve serum ApoM levels as well as cardiac function after LPS treatment of mice with diet-induced obesity. They further provide data to suggest that Dapa preserves serum ApoM by increasing megalin-mediated reabsorption of ApoM in renal proximal tubules and that ApoM improves vascular integrity in LPS-treated mice. These observations put forward a potential therapeutic approach to sustain vascular protective S1P signaling that could be relevant to other conditions of systemic inflammation where plasma levels of S1P decrease. However, although the authors are careful with their statements, the study falls short of directly implicating megalin in ApoM reabsorption, and ApoM/S1P depletion in LPS-induced cardiac dysfunction and the protective effects of Dapa.
The observations reported in this study are exciting and potentially of broad interest. The paper is well written and concise, and the statements made are mostly supported by the data presented. However, the mechanism proposed and implied is mostly based on circumstantial evidence, and the paper could be substantially improved by directly addressing the role of megalin in ApoM reabsorption and serum ApoM and S1P levels, and the importance of ApoM for the preservation of cardiac function during endotoxemia. Some observations that are not necessarily in line with the proposed model should also be discussed.
The authors show that Dapa preserves serum ApoM and cardiac function in LPS-treated obese mice. However, the evidence they provide to suggest that ApoM may be implicated in the protective effect of Dapa on cardiac function is indirect. Direct evidence could be sought by addressing the effect of Dapa on cardiac function in LPS-treated ApoM-deficient and littermate control mice (with DIO if necessary).
The authors also suggest that higher ApoM levels in mice treated with Dapa and LPS reflect increased megalin-mediated ApoM reabsorption and that this preserves S1PR signaling. This could be addressed more directly by assessing the clearance of labelled ApoM, by addressing the impact of megalin inhibition or deficiency on ApoM clearance in this context, and by measuring S1P as well as ApoM in serum samples.
Methods: More details should be provided in the manuscript on how ApoM-deficient and transgenic mice were generated, on sex and strain background, and on whether or not littermate controls were used. For intravital microscopy, more precision is needed on how vessel borders were outlined and whether this was done with or without regard to FITC-dextran. Please also specify the type of vessel chosen and the considerations made with regard to blood flow and patency of the vessels analyzed. For statistical analyses, data from each mouse should be pooled before performing statistical comparisons. The criteria used for the choice of test should be outlined, as different statistical tests are used for similar datasets. For all data, please be consistent in the use of post-tests and in the presentation of comparisons. In other words, if the authors choose to only display test results for groups that are significantly different, this should be done in all cases; and if comparisons are made between all groups, this should be done in all cases for similar sets of data.
-
Reviewer #3 (Public Review):
The authors have performed well-designed experiments that elucidate the protective role of Dapa in an LPS model of sepsis. This model shows that Dapa works, in part, by increasing expression of the receptor LRP2 in the kidney, which maintains circulating ApoM levels. ApoM binds S1P, which then interacts with the S1P receptor, stimulating cardiac function and epithelial and endothelial barrier function, thereby maintaining intravascular volume and cardiac output in the setting of severe inflammation. The authors used many experimental models, including transgenic mice, as well as several rigorous and reproducible techniques to measure the relevant parameters of cardiac, renal, vascular, and immune function. Furthermore, they employ a useful inhibitor of S1P function to show pharmacologically the essential role of this agonist in most but not all of the benefits of Dapa. A strength of the paper is the identification of the pathway responsible for the cardioprotective effects of SGLT2is, which may yield additional therapeutic targets. There are some weaknesses in the paper, such as studying only male mice and the lack of a power analysis to justify the number of animals used throughout the experiments. Overall, the paper should have a significant impact on the scientific community because SGLT2i drugs are likely to find many uses in inflammatory and metabolic diseases. This paper provides support for an important mechanism by which they work in conditions of severe sepsis and hemodynamic compromise.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This study provides a valuable new perspective on how motor learning occurring in one state generalizes to new states (for example, a different limb posture). The proposed model improves upon previous theories in its ability to predict patterns of generalization, but evidence supporting this specific proposed model over possible alternatives is incomplete. The newly proposed theory appears promising but would be more convincing if its conceptual and theoretical basis were clearer and more rigorously derived.
-
Reviewer #1 (Public Review):
This paper proposes a novel framework for explaining patterns of generalization of force field learning to novel limb configurations. The paper considers three potential coordinate systems: Cartesian, joint-based, and object-based. The authors propose a model in which the forces predicted under these different coordinate frames are combined according to the expected variability of the produced forces. The authors show, across a range of changes in arm configuration, that the generalization of a specific force field is quite well accounted for by the model.
The paper is well-written and the experimental data are very clear. The patterns of generalization exhibited by participants - the key aspect of the behavior that the model seeks to explain - are clear and consistent across participants. The paper clearly illustrates the importance of considering multiple coordinate frames for generalization, building on previous work by Berniker and colleagues (JNeurophys, 2014). The specific model proposed in this paper is parsimonious, but there remain a number of questions about its conceptual premises and the extent to which its predictions improve upon alternative models.
A major concern is with the model's premise. It is loosely inspired by cue integration theory but is really proposed in a fairly ad hoc manner, and not really concretely founded on firm underlying principles. It's by no means clear that the logic from cue integration can be extrapolated to the case of combining different possible patterns of generalization. I think there may in fact be a fundamental problem in treating this control problem as a cue-integration problem. In classic cue integration theory, the various cues are assumed to be independent observations of a single underlying variable. In this generalization setting, however, the different generalization patterns are NOT independent; if one is true, then the others must inevitably not be. For this reason, I don't believe that the proposed model can really be thought of as a normative or rational model (hence why I describe it as 'ad hoc'). That's not to say it may not ultimately be correct, but I think the conceptual justification for the model needs to be laid out much more clearly, rather than simply by alluding to cue-integration theory and using terms like 'reliability' throughout.
A more rational model might be based on Bayesian decision theory. Under such a model, the motor system would select motor commands that minimize some expected loss, averaging over the various possible underlying 'true' coordinate systems in which to generalize. It's not entirely clear without developing the theory a bit exactly how the proposed noise-based theory might deviate from such a Bayesian model. But the paper should more clearly explain the principles/assumptions of the proposed noise-based model and should emphasize how the model parallels (or deviates from) Bayesian-decision-theory-type models.
Another significant weakness is that it's not clear how closely the weighting of the different coordinate frames needs to match the model predictions in order to recover the observed generalization patterns. Given that the weighting for a given movement direction is over-parametrized (i.e., three variable weights, allowing for decay, predict a single observed force level), it seems that a broad range of models could generate a reasonable prediction. It would be helpful to compare the predictions using the weighting suggested by the model with the predictions using alternative weightings, e.g. a uniform weighting, or the weighting for a different posture. In fact, Fig. 7 shows that uniform weighting accounts for the data just as well as the noise-based model in which the weighting varies substantially across directions. A more comprehensive analysis comparing the proposed noise-based weightings to alternative weightings would help to argue more convincingly that the specificity of the noise-based predictions is necessary. The analysis in the appendix was not that clearly described, but it seemed to compare various potential fitted mixtures of coordinate frames without comparing these to the noise-based model predictions.
-
Reviewer #2 (Public Review):
Leib & Franklin assessed how the adaptation of intersegmental dynamics of the arm generalizes to changes in different factors: areas of extrinsic space, limb configurations, and 'object-based' coordinates. Participants reached in many different directions around 360°, adapting to velocity-dependent curl fields that varied depending on the reach angle. This learning was measured via the pattern of forces expressed upon the channel wall of "error clamps" that were randomly sampled from each of these different directions. The authors employed a clever method to predict how this pattern of forces should change if the set of targets was moved around the workspace. Some sets of locations resulted in a large change in joint angles or object-based coordinates, but Cartesian coordinates were always the same. Across three separate experiments, the observed shifts in the generalized force pattern never corresponded to a change made relative to any one reference frame. Instead, the authors found that the observed pattern of forces could be explained by a weighted combination of the change in Cartesian, joint, and object-based coordinates across test and training contexts.
In general, I believe the authors make a good argument for this specific mixed weighting of different contexts. I have a few questions that I hope are easily addressed.
Movements show different biases relative to the reach direction. Although very similar across people, this function of biases shifts when the arm is moved around the workspace (Ghilardi, Gordon, and Ghez, 1995). The origin of these biases is thought to arise from several factors that would change across the different test and training workspaces employed here (Vindras & Viviani, 2005). My concern is that the baseline biases in these different contexts differ, and that the observed change in the force pattern across contexts is not a function of generalization but rather a change in underlying biases. Baseline force channel measurements were taken in the different workspace locations and conditions, so these could be used to show whether such biases are meaningfully affecting the results.
Experiment 3, Test 1 has data that seems the worst fit with the overall story. I thought this might be an issue, but this is also the test set for a potentially awkwardly long arm. My understanding of the object-based coordinate system is that it's primarily a function of the wrist angle, or perceived angle, so I am a little confused why the length of this stick is also different across the conditions instead of just a different angle. Could the length be why this data looks a little odd?
The manuscript is written and organized in a way that focuses heavily on the noise element of the model. Other than it being reasonable to add noise to a model, it's not clear to me that the noise is adding anything specific. It seems like the model makes predictions based on how many specific components have been rotated in the different test conditions. I fear I'm just being dense, but it would be helpful to clarify whether the noise itself (and inverse variance estimation) are critical to why the model weights each reference frame how it does or whether this is just a method for scaling the weight by how much the joints or whatever have changed. It seems clear that this noise model is better than weighting by energy and smoothness.
Are there any force profiles for individual directions that are predicted to change shape substantially across some of these assorted changes in training and test locations (rather than merely being scaled)? If so, this might provide another test of the hypotheses.
I don't believe the decay factor that was used to scale the test functions was specified in the text, although I may have just missed this. It would be a good idea to state what this factor is where relevant in the text.
-
Reviewer #3 (Public Review):
The authors propose the minimum-variance principle for the memory representation, in addition to two alternative theories based on minimum energy and maximum smoothness. The strength of this paper is the match between the predictions computed from the explicit equations and the behavioral data taken in different conditions. The idea of weighting multiple coordinate systems is novel and is also able to reconcile a debate in the previous literature.
The weakness is that, although each model is based on an optimization principle, the derivation process is not written in the methods section. The authors did not describe how these weighting factors can be derived from the stated computational principles. Thus, it is not clear whether the weighting factors actually follow from these theories or are just ad hoc methods. If the authors argue that the weighting is the result of the minimum-variance principle, they should show how the weighting factors are derived as the result of an optimization process that minimizes the corresponding cost functions.
In addition, I am concerned that the proposed model can cancel the properties of the coordinate system through the predicted variance, so that it would work for any coordinate system, even one that is not used in the human brain. When the applied force is given in Cartesian coordinates, the directionality of the generalization of the force-field memory is characterized by the kinematic relationship (Jacobian) between the Cartesian coordinates and the coordinates of interest (Cartesian, joint, or object), as shown in Equation 3. At the same time, when a displacement (epsilon) in one space is linked to corresponding displacements in other spaces by the kinematic equations (e.g., joint displacement and hand displacement for the two-joint arm in this paper), the variances generated in the different coordinate systems are linked to each other by the same kinematics (Jacobian). Thus, how a small noise in a given coordinate system generates hand force noise (sigma_c, sigma_j, sigma_o) is also characterized by the kinematics (Jacobian). Consequently, when the predicted force fields (F_c, F_j, F_o) are divided by the variances (F_c/sigma_c^2, F_j/sigma_j^2, F_o/sigma_o^2), the directionality of the generalized force, which is characterized by the Jacobian, is canceled by the directionality of the sigmas, which is characterized by the same Jacobian. As can be read out from Fig*D and E (top), the weight of each coordinate system in E (top) is always the inverse of the shift of the force from the test force, by which the directionality of the generalization is always canceled. Once this directionality is canceled, no matter how the weighted sum is computed, it can replicate the memorized force. Thus, this model always works to replicate the test force no matter which coordinate system is assumed, and I am therefore suspicious of the falsifiability of this computational model: it is always true no matter which coordinate system is assumed. Even if, for instance, the robot's coordinate system, which is directly linked to the participant's hand by a kinematic equation (Jacobian), were used, the result could be replicated; but in that case the model would be nonsense. The falsifiability of this model was not explicitly addressed.
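To make the scheme concrete, the inverse-variance weighting being discussed presumably takes roughly the following form, using the review's own F and sigma notation (a sketch for the reader, not the paper's exact equations):

```latex
% Each coordinate frame k (Cartesian c, joint j, object o) contributes its
% predicted force F_k, weighted by the inverse of its predicted variance.
\hat{F} \;=\; \sum_{k \in \{c,\, j,\, o\}} w_k \, F_k,
\qquad
w_k \;=\; \frac{1/\sigma_k^{2}}{\sum_{m \in \{c,\, j,\, o\}} 1/\sigma_m^{2}}
```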
-
-
www.medrxiv.org www.medrxiv.org
-
eLife Assessment
This important work advances our understanding of factors influencing early childhood development. The large sample size and methodology applied make the findings of this study convincing; however, support for some of the claims made by the authors is incomplete. The work will be of interest to researchers in developmental science and early childhood pediatrics.
-
Reviewer #1 (Public Review):
Padilha et al. aimed to find prospective metabolite biomarkers in the serum of children aged 6-59 months that were indicative of neurodevelopmental outcomes. The authors leveraged data and samples from the cross-sectional Brazilian National Survey on Child Nutrition (ENANI-2019), and an untargeted multisegment injection-capillary electrophoresis-mass spectrometry (MSI-CE-MS) approach was used to measure metabolites in serum samples (n=5004), which were identified via a large library of standards. After correlating the metabolite levels against the developmental quotient (DQ), i.e., the degree to which age-appropriate developmental milestones were achieved as evaluated by the Survey of Well-being of Young Children, serum concentrations of phenylacetylglutamine (PAG), cresol sulfate (CS), hippuric acid (HA), and trimethylamine-N-oxide (TMAO) were found to be significantly negatively associated with DQ. Examination of the covariates revealed that the negative associations of PAG, HA, TMAO, and valine (Val) with DQ were specific to younger children (-1 SD, or 19 months old), whereas creatinine (Crtn) and methylhistidine (MeHis) had significant associations with DQ that changed direction with age (negative at -1 SD, or 19 months old, and positive at +1 SD, or 49 months old). Further, mediation analysis demonstrated that PAG was a significant mediator of the relationship of delivery mode, child's diet quality, and child fiber intake with DQ. HA and TMAO were additional significant mediators of the relationship of child fiber intake with DQ.
Strengths of this study include the large cohort size and a study design allowing for sampling at multiple time points along with neurodevelopmental assessment and a relatively detailed collection of potential confounding factors including diet. The untargeted metabolomics approach was also robust and comprehensive, allowing for level 1 identification of a wide breadth of potential biomarkers. Given their methodology, the authors should be able to achieve their aim of identifying candidate serum biomarkers of neurodevelopment for early childhood. The results of this work would be of broad interest to researchers who want to understand the biological underpinnings of development and also to those tracking development in pediatric populations, as it provides insight into putative mechanisms and targets from a relevant human cohort that can be probed in future studies. Such putative mechanisms and targets are currently lacking in the field due to the challenges of conducting this kind of study, so this work is important.
However, in the manuscript's current state, the presentation and analysis of data impede the reader from fully understanding and interpreting the study's findings. Particularly, the handling of confounding variables is incomplete. There is a different set of confounders listed in Table 1 versus Supplementary Table 1 versus Methods section Covariates versus Figure 4. For example, Region is listed in Supplementary Table 1 but not in Table 1, and Mode of Delivery is listed in Table 1 but not in Supplementary Table 1. Many factors are listed in Figure 4 that aren't mentioned anywhere else in the paper, such as gestational age at birth or maternal pre-pregnancy obesity.
The authors utilize the directed acyclic graph (DAG) in Figure 4 to justify the further investigation of certain covariates over others. However, the lack of inclusion of the microbiome in the DAG, especially considering that most of the study findings were microbially derived metabolite biomarkers, appears to be a fundamental flaw. Sanitation and micronutrients are proposed by the authors to have no effect on the host metabolome, yet both have been demonstrated in the literature to affect microbiome composition, which can in turn affect the host metabolome.
Additionally, the authors emphasized as part of the study selection criteria the following,<br /> "Due to the costs involved in the metabolome analysis, it was necessary to further reduce the sample size. Then, samples were stratified by age groups (6 to 11, 12 to 23, and 24 to 59 months) and health conditions related to iron metabolism, such as anemia and nutrient deficiencies. The selection process aimed to represent diverse health statuses, including those with no conditions, with specific deficiencies, or with combinations of conditions. Ultimately, through a randomized process that ensured a balanced representation across these groups, a total of 5,004 children were selected for the final sample (Figure 1)."
Therefore, anemia and nutrient deficiencies are assumed by the reader to be important covariates, yet, the data on the final distribution of these covariates in the study cohort is not presented, nor are these covariates examined further.
The inclusion of specific covariates in Table 1, Supplementary Table 1, the statistical models, and the mediation analysis is thus currently biased as it is not well justified.
Finally, it is unclear what the partial-least squares regression adds to the paper, other than to discard potentially interesting metabolites found by the initial correlation analysis.
-
Reviewer #2 (Public Review):
A strength of the work lies in the number of children Padilha et al. were able to assess (5,004 children aged 6-59 months) and in the extensive screening that the Authors performed for each participant. This type of large-scale study is uncommon in low-to-middle-income countries such as Brazil.
The Authors employ several approaches to narrow down the number of potentially causally associated metabolites.
Could the Authors justify on what basis the minimum dietary diversity score was dichotomized? Were sensitivity analyses undertaken to assess the effect of this dichotomization on the associations reported by the article? Consumption of each food group may have a differential effect that is obscured by this dichotomization.
Could the Authors specify the statistical power associated with each analysis?
Could the Authors describe in detail which metric they used to measure how predictive the PLSR models are, and how they determined what the "optimal" number of components was?
The Authors use directed acyclic graphs (DAG) to identify confounding variables of the association between metabolites and DQ. Could the dataset generated by the Authors have been used instead? Not all confounding variables identified in the literature may be relevant to the dataset generated by the Authors.
Were the systematic reviews or meta-analyses used in the DAG performed by the Authors, or were they based on previous studies? If so, more information about the methodology employed and the studies included should be provided by the Authors.
Approximately 72% of children included in the analyses lived in households with a monthly income superior to the Brazilian minimum wage. The cohort is also biased towards households with a higher level of education. Both of these measures correlate with developmental quotient. Could the Authors discuss how this may have affected their results and how generalizable they are?
Further to this, could the Authors describe how inequalities in access to care in the Brazilian population may have affected their results? Could they have included a measure of this possible discrepancy in their analyses?
The Authors state that the results of their study may be used to track children at risk for developmental delays. Could they discuss the potential for influencing policies and guidelines to address delayed development due to malnutrition and/or limited access to certain essential foods?
-
Reviewer #3 (Public Review):
The ENANI-2019 study provides valuable insights into child nutrition, development, and metabolomics in Brazil, highlighting both challenges and opportunities for improving child health outcomes through targeted interventions and further research.
Strengths of the methods and results:
(1) The study utilizes data from the already-existing ENANI-2019 cohort. This choice of cohort allows for longitudinal assessments and exploration of associations between metabolites and developmental outcomes. In addition, it conserves resources, which are scarce in all settings in the current scenario.
(2) The study aims to investigate the relationship between circulating metabolites (exposure) and early childhood development (outcome), specifically the developmental quotient (DQ). The objectives are clearly stated, which facilitates focused research questions and hypotheses. The population studied is clearly described.
(3) The study accessed a large number of children under five years, with blood collected from a final sample size of 5,004 children. The exclusion of infants under six months due to venipuncture challenges and a lack of reference values highlights practical considerations in research design.
The study sample reflects a diverse range of children in terms of age, sex distribution, weight status, maternal education, and monthly family income. This diversity enhances the generalizability of findings across different sociodemographic groups within Brazil.
(4) The study uses standardized measures (e.g., DQ assessments) and chronological age. Confounding variables, such as the child's age, diet quality, and nutritional status, are carefully considered and incorporated into analyses through a Directed Acyclic Graph (DAG). The mean DQ of 0.98 indicates overall developmental norms among the studied children, with variations noted across demographic factors such as age, region, and maternal education. The prevalence of Minimum Dietary Diversity (MDD), met by 59.3% of children, underscores dietary patterns and their potential impact on health outcomes. The association between nutritional status (weight-for-height z-scores) and developmental outcomes (DQ) provides insights into the interplay between nutrition and child development.
The study identified key metabolites associated with developmental quotient (DQ):
Component 1: Branched-chain amino acids (Leucine, Isoleucine, Valine).
Component 2: Uremic toxins (Cresol sulfate, Phenylacetylglutamine).
Component 3: Betaine and amino acids (Glutamine, Asparagine).
The study focused on several serum metabolites such as PAG (phenylacetylglutamine), CS (p-cresyl sulfate), HA (hippuric acid), TMAO (trimethylamine-N-oxide), MeHis (methylhistidine), and Crtn (creatinine). These metabolites are implicated in various metabolic pathways linked to gut microbiota activity, amino acid metabolism, and dietary factors.
These metabolites explained a significant portion of both the metabolite variance (39.8%) and the DQ variance (4.3%). The study suggests that these metabolites can be used as proxy measures of the gut microbiome in children.
(5) The use of partial least squares regression (PLSR) with cross-validation (80% training, 20% testing) is a robust approach to identify metabolites predictive of DQ while minimizing overfitting. This model allows outliers to remain outliers, for transparency.
The Directed Acyclic Graph (DAG) identifies and adjusts for confounding variables (e.g., child's diet quality, nutritional status) and strengthens the validity of the findings by controlling for potential biases.
Developmental and sex differences were studied by testing interactions with the child's age and sex.
Mediation analysis exploring metabolites as potential mediators provides insights into the underlying pathways linking exposures (e.g., diet, microbiome) with DQ.
The use of Benjamini-Hochberg correction for multiple comparisons and bootstrap tests (5,000 iterations) enhances the reliability of the results by controlling false discovery rates and assessing significance robustly.
Significant correlations between serum metabolites and DQ, particularly negative associations with certain metabolites like PAG and CS, suggest potential biomarkers or pathways influencing developmental outcomes. Notably, these associations varied with age, suggesting different metabolic impacts during early childhood development.
Weaknesses:
(1) The data collected were incomplete, especially those related to breastfeeding history and birth weight. These are mentioned in the limitations of the study, but they might have been potential confounders or even factors contributing to the particular metabolite state identified in the population.
(2) Tests other than mediation analysis might have been used to ensure the reliability and robustness of the data. Describing how the data were processed, the data-cleaning methods, how outliers were handled, and any sensitivity analyses would help establish the robustness of the findings.
(3) The generalizability of the data is limited, especially considering that the children mostly belonged to a higher socioeconomic group in Brazil, with mother or caregiver education above a certain level. Comparative studies with children from other socioeconomic groups and other cohorts would have been useful. Consideration of sample size adequacy and a power analysis might have helped in generalizing the findings.
(4) Caution is needed in interpreting causality from these data because of the nature of the study design. Discussing alternative explanations and potential confounding factors in more depth could strengthen the conclusions.
Appraisal
(1) The aim of the study was to identify associations between children's serum metabolome and early childhood development. This aim was met, and the results do support the conclusions.
Impact of the work on the field
(1) Unless the actual gut microbiome of children in this age group is examined directly, through analysis of gut bacteria or gastrointestinal examination, the causality of the gut metabolome on early childhood development cannot be established with certainty. Because this may not be possible in every situation, proxy methods such as the one elucidated here might be useful, considering the risk-benefit ratio.
(2) More research is needed on this theme through longitudinal studies to validate these findings and explore the underlying pathways involving gut-brain interactions and metabolic dysregulation.
Other reading: Readers are advised to consult research from other countries and in other languages to understand the connection between the gut microbiome, metabolite spectra, and child development, including the effect of these factors on children's mental development.
Readers might consider the following questions:
(1) Should investigators study families through direct observation of diet and other factors to look for a connection between food intake, the gut microbiome, and child development?
(2) Can an examination of the mother's gut microbiome shed light on the child's microbiome? Can the mother's or caregiver's microbiome influence early childhood development?
(3) Is the developmental quotient sufficient to study early childhood development? Is it comprehensive enough?
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important work addresses the role of Marcks/Markcksl during spinal cord development and regeneration. The study is exceptional in combining molecular approaches to understand the mechanisms of tissue regeneration with behavioural assays, which is not commonly employed in the field. The data presented is convincing and comprehensive, using many complementary methodologies.
-
Reviewer #1 (Public Review):
In this manuscript, El Amri et al. explore the role of Marcks and Marcksl1 proteins during spinal cord development and regeneration in Xenopus. Using two different techniques to knock down their expression, they argue that these proteins are important for neural progenitor proliferation and neurite outgrowth in both contexts. Finally, using a pharmacological approach, they suggest that Marcks and Marcksl1 work by modulating the activity of PLD and the levels of PIP2, while PKC could modulate Marcks activity.
The strength of this manuscript resides in the authors' ability to knock down the expression of four different genes using two different methods to assess the role of this protein family during early development and regeneration at the late tadpole stage. This has always been a limiting factor in the field, as the tools to perform conditional knockouts in Xenopus are very limited. However, this approach will not really be applicable to essential genes, as it relies on the general knockdown of protein expression. The generation of antibodies able to detect endogenous Marcks/Marcksl1 is also a powerful tool to assess the extent to which the expression of these proteins is down-regulated.
While a great amount of data is provided in this manuscript and there is strong evidence that Marcks proteins are important for spinal cord development and regeneration, their roles in both contexts are not explored fully. The description of the effect of knocking down Marcks/Marcksl1 on neurons and progenitors is rather superficial, and the evidence for the underlying mechanism underpinning their roles is not very convincing.
-
Reviewer #2 (Public Review):
M. El Amri et al. investigated the functions of Marcks and Marcks-like 1 during spinal cord (SC) development and regeneration in Xenopus laevis. The authors rigorously performed loss-of-function experiments with morpholino knock-down and CRISPR knock-out, combined with rescue experiments, in the developing spinal cord of embryos and in regenerating spinal cord at the tadpole stage.
For the assays in the developing spinal cord, a unilateral approach (knock-down/out on only one side of the embryo) allowed the authors to assess gene function by directly comparing one side (e.g., mutated SC) to the other (e.g., wild-type SC on the other side). For the assays in regenerating SC, the authors microinjected CRISPR reagents into 1-cell-stage embryos. When the embryos (F0 crispants) grew up to tadpoles (stage 50), the SC was transected. They then assessed neurite outgrowth and progenitor cell proliferation. The validation of the phenotypes was mostly based on the quantification of immunostaining images (neurite outgrowth: acetylated tubulin; neural progenitors: sox2, sox3; proliferation: EdU, PH3), which are simple but robust enough to support their conclusions. In both SC development and regeneration, the authors found that Marcks and Marcksl1 were necessary for neurite outgrowth and neural progenitor cell proliferation.
The authors performed rescue experiments on morpholino knock-down and CRISPR knock-out conditions, using Marcks and Marcksl1 mRNA injection for SC development and pharmacological treatments for SC development and regeneration. The unilateral mRNA injection rescued the loss-of-function phenotype in the developing SC. To explore the signalling role of these molecules, they rescued the loss-of-function animals with pharmacological reagents. They used S1P (PLD activator), FIPI (PLD inhibitor), NMI (PIP2 synthesis activator), and ISA-2011B (PIP2 synthesis inhibitor). The authors found that activator treatment rescued neurite outgrowth and progenitor cell proliferation under loss-of-function conditions. From these results, the authors propose that PIP2 and PLD are the mediators of Marcks and Marcksl1 in neurite outgrowth and progenitor cell proliferation during SC development and regeneration. The results of the rescue experiments are particularly important for assessing gene function in loss-of-function assays; therefore, the conclusions are solid. In addition, they performed gain-of-function assays by unilateral Marcks or Marcksl1 mRNA injection, showing that the injected side of the SC had more neurite outgrowth and proliferative progenitors. These conclusions are consistent with the loss-of-function phenotypes and the rescue results. Importantly, the authors showed the linkage between the phenotype and functional recovery by behavioral testing, which clearly showed that crispants with SC injury swam a shorter distance than wild types with SC injury at 10 days post surgery.
Prior to the functional assays, the authors analyzed the expression patterns of the genes by in situ hybridization and immunostaining in developing embryos and regenerating SC. They confirmed that protein expression was significantly reduced in the loss-of-function samples by immunostaining with the specific antibodies that they made against Marcks and Marcksl1. Although the expression patterns during embryogenesis are mostly known from previous work, these data provide readers with the relevant expression information and demonstrate the efficiency of the knock-out as well.
MARCKS family genes have long been known to be expressed in the nervous system, yet few studies have focused on their function in nerves. This research introduces these genes as new players in SC development and regeneration. These findings could attract broad interest from researchers working on neural disease models and in the medical field. Although efficient knock-out is a typical requirement for loss-of-function assays in Xenopus laevis, I believe that the efficient knock-out of four genes by CRISPR/Cas9 here derives from the authors' dedication to designing, testing, and validating the gRNAs, and is exemplary.
Weaknesses:
1) Why did the authors choose Marcks and Marcksl1?
The authors mention that these genes were identified in a recent proteomic analysis comparing SC-regenerative tadpoles and non-regenerative froglets (Lines (L) 54-57). However, although the proteomic analysis appears to be their own dataset, the authors did not give any details of how promising genes were selected for the functional assays in this article. The proteomic analysis must contain other candidate genes that might seem more likely to be related to SC development and regeneration based on previous studies, but it is unclear what the criteria were for selecting Marcks and Marcksl1.
2) Gene knock-out experiments with F0 crispants.
The authors describe designing and testing 18 sgRNAs to find the most efficient and consistent gRNA (L191-195). However, this cannot practically guarantee the same phenotypes, due to, for example, different injection timing, different strains of Xenopus laevis, etc. Although the authors mention the concerns of mosaicism themselves (L180-181, L289-292), and the immunostaining results nicely show uniformly reduced Marcks and Marcksl1 expression in the crispants, they did not address this issue explicitly.
3) Limitations of pharmacological compound rescue.
In the methods, the authors describe performing titration experiments for the drugs (L702-704), which is a minimal requirement for this type of assay. However, it is known that even a well-characterized drug can target different molecules when applied at different concentrations (Gujral TS et al., 2014, PNAS). Therefore, it is difficult to eliminate the possibility of side effects and off-target action when testing only a few compounds.
-
Reviewer #3 (Public Review):
El Amri et al conducted an analysis on the function of marcks and marcksl in Xenopus spinal cord development and regeneration. Their study revealed these proteins are crucial for neurite outgrowth and cell proliferation, including Sox2+ progenitors. Furthermore, they suggested these genes may act through the PLD pathway. The study is well-executed with appropriate controls and validation experiments, distinguishing it from typical regeneration research by including behavioral assays. The manuscript is commendable for its quantifications, literature referencing, careful conclusions, and detailed methods. Conclusions are well-supported by the experiments performed in this study. Overall, this manuscript contributes to the field of spinal cord regeneration and sets a good example for future research in this area.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.
Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.
So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.
Now there are a few things that you need to do before we get started.
First, as always, make sure that you're logged in to the IAM admin user of the general AWS account, and you'll also need the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and click that now.
This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.
Now everything's pre-configured you just need to check this box at the bottom and click on create stack.
Now that's going to take a few minutes to create and we need this to be in a create complete state.
So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.
So now this stack is in a create complete state and we're good to continue.
Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.
Now I'm a fan of typing all commands in manually because I personally think that it helps you learn, but if you are the type of person who has a habit of making mistakes when typing long commands out, then you can copy and paste from this document to avoid any typos.
Now one final thing before we get started: at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.
If you're going to do that then you should pre sign up for a Docker Hub account if you don't already have one and the link for this is included attached to this lesson.
If you already have a Docker Hub account then you're good to continue.
Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.
Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.
So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.
Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.
So to do that just right-click and then select connect.
You need to pick Session Manager from the tabs across the top here and then just click on connect.
Now that will take a few minutes but once connected you should see this prompt.
So it should say sh-, then a version number, and then a dollar sign.
Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.
The Docker engine is the thing that allows Docker containers to run on this EC2 instance.
So we need to install the Docker engine package and we'll do that using this command.
So we're using sudo to get admin permissions, then the package manager DNF, then install, then docker.
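As spoken, that command is:

```bash
# Install the Docker engine package (sudo for admin rights, DNF as the package manager)
sudo dnf install docker
```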
So go ahead and run that and that will begin the installation of Docker.
It might take a few moments to complete it might have to download some prerequisites and you might have to answer that you're okay with the install.
So press Y for yes and then press enter.
Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.
So sudo again to get admin permissions, then service, then the docker service, and then start.
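In full, that's:

```bash
# Start the Docker service
sudo service docker start
```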
So type that and press enter and that starts the Docker service.
Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.
So the most simple way to do that is to type docker space and then ps and press enter.
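That test command, as described:

```bash
# List running containers; this will error until permissions are sorted out
docker ps
```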
Now you're going to get an error.
This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.
We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.
So sudo for admin permissions, then usermod, then -a -G for append-to-group, then the docker group, and then ec2-user.
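Put together, that's:

```bash
# Add ec2-user to the docker group so it can interact with the Docker engine
sudo usermod -a -G docker ec2-user
```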
Now that will allow a local user of this system, specifically ec2-user, to interact with the Docker engine.
Okay, so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.
So the next thing is that we need to log out of this instance and log back in.
So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.
So connect back in to this EC2 instance.
Now once you're connected back into this EC2 instance we need to run another command which moves us into ec2-user, so it basically logs us in as ec2-user.
So that's this command, and the result of this would be the same as if you directly logged in as ec2-user.
Now the reason we're doing it this way is because we're using Session Manager, so that we don't need a local SSH client or to worry about SSH keys.
We can log in directly via the console UI; we just then need to switch to ec2-user.
So run this command and press enter, and we're now logged into the instance as ec2-user. To test everything's okay we need to use a command with the Docker engine, and that command is docker ps. If everything's okay you shouldn't see any output beyond a list of column headers.
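The switch-user command itself isn't quoted in the narration, so the exact form here is an assumption (check the lesson commands document for the real one); it is presumably something like:

```bash
# Switch to ec2-user (exact command assumed, not quoted in the transcript)
sudo su - ec2-user
# Verify Docker access; should print only the column headers, with no error
docker ps
```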
What we've essentially done is told the Docker engine to give us a list of any running containers and even though we don't have any it's not erred it's simply displayed this empty list and that means everything's okay.
So good job.
Now, to see what I've done to speed things up, just run ls and press enter: the instance has been configured to download the sample application that we're going to be using, and that's the file container.zip within this folder.
I've configured the instance to automatically extract that zip file which has created the folder container.
So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.
Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.
Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.
It's a simple web page: we've got index.html, which is the index, a number of images which this index.html references, and then we have a docker file.
Now this docker file is the thing that the docker engine will use to create our docker image.
I want to spend a couple of moments just stepping you through exactly what's within this docker file.
So I'm going to move across to my text editor and this is the docker file that's been automatically downloaded to your EC2 instance.
Each of these lines is a directive to the docker engine to perform a specific task and remember we're using this to create a docker image.
This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.
This next line sets the maintainer label it's essentially a brief description of what the image is and who's maintaining it in this case it's just a placeholder of animals for life.
This next line runs a command specifically the yum command to install some software specifically the Apache web server.
This next command, COPY, copies files from the local directory when you use the docker command to create an image. It's copying that index.html file from the local folder I've just been talking about, and it's going to put it inside the docker image at this path: /var/www/html, which is where an Apache web server expects index.html to be located.
This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.
This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.
In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.
Now this doesn't actually perform any configuration it simply tells the docker engine what port is exposed in this case port 80 which is HTTP.
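Putting those directives together, the docker file described above looks roughly like this; the exact base-image reference, label text, and ENTRYPOINT form are assumptions based on the narration, so treat this as a sketch rather than the lesson's verbatim file:

```dockerfile
# Base: version 8 of the Red Hat Universal Base Image (exact reference assumed)
FROM redhat/ubi8
# Maintainer label (placeholder, per the lesson)
LABEL maintainer="Animals for Life"
# Install the Apache web server
RUN yum -y install httpd
# Copy the web page and its images into Apache's document root
COPY index.html /var/www/html/
COPY *.jpg /var/www/html/
# Run Apache in the foreground when a container starts from this image
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
# Document that the container exposes HTTP on port 80
EXPOSE 80
```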
Now this docker file is going to be used when we run the next command which is to create a docker image.
So essentially this file is the same docker file that's been downloaded to your EC2 instance and that's what we're going to run next.
So this is the next command within the lesson commands document and this command builds a container image.
What we're essentially doing is giving it the location of the docker file.
This dot at the end contains the working directory so it's here where we're going to find the docker file and any associated files that that docker file uses.
So we're going to run this command and this is going to create our docker image.
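The build command is presumably along these lines; the image name containerofcats is inferred from how the image is referred to later in the lesson, so check the lesson commands document for the exact form:

```bash
# Build an image from the docker file in the current working directory (the trailing dot)
docker build -t containerofcats .
```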
So let's go ahead and run this command.
It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.
Remember from the theory lesson each line within the docker file generally creates a new file system layer so a new layer of a docker image and that's how docker images are efficient because you can reuse those layers.
Now in this case this has been successful.
We've successfully built a docker image with this ID so it's giving it a unique ID and it's tagged this docker image with this tag colon latest.
So this means that we have a docker image that's now stored on this EC2 instance.
Now I'll go ahead and clear the screen to make it easier to see. Let's run the next command from the lesson commands document, which is going to show us a list of images on this EC2 instance, filtered on the name container of cats, and this will show us the docker image which we've just created.
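Something like this, with the image name assumed to match the build step:

```bash
# List local images, filtered to the image we just built
docker images --filter reference=containerofcats
```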
So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.
So this is the command that we're going to use it's the next one within the lesson commands document.
It's docker run, and then it's telling it to map port 80 on the container to port 80 on the EC2 instance, and it's telling it to use the container of cats image. If we run that command, docker is going to take the docker image that we've got on this EC2 instance and run it to create a running container, and we should be able to interact with that container.
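Again assuming the image name from the build step, the command is roughly:

```bash
# Run a container from the image, mapping instance port 80 to container port 80
docker run -p 80:80 containerofcats
```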
So go back to the AWS console, click on Instances, and look for the a4l-public EC2 instance that's in the running state.
I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.
Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.
Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.
We just need to use the IP address directly.
So copy that into your clipboard, open a new tab, and then open that IP address. Now we can see the amazing application, "if it fits, i sits, in a container, in a container", and this amazing-looking enterprise application is what's contained in the docker image that you just created; it's now running inside a container based off that image.
So that's great everything's working as expected and that's running locally on the EC2 instance.
Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.
You can either use my docker image which is this image that I've just created or you can use your own docker image.
If you're going to use my docker image then you can skip this next step.
You don't need a docker hub account and you don't need to upload your image.
If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.
So I'm going to move back to the session manager tab and I'm going to control C to exit out of this running container and I'm going to type clear to clear the screen and make it easier to see.
Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.
So it's docker space login space double hyphen username equals and then your username.
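As dictated, with a placeholder for the username:

```bash
# Log in to Docker Hub (replace YOUR_DOCKERHUB_USER with your own username)
docker login --username=YOUR_DOCKERHUB_USER
```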
So if you're doing this in your own environment you need to delete this placeholder and type your username.
I'm going to type my username because I'll be uploading this image to my docker hub.
So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.
So I'm going to paste in my password if you're logging into your docker hub you should use your password.
Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.
Now you don't have to worry about the security message because whilst your docker hub password is going to be stored on the EC2 instance shortly we're going to terminate this instance which will remove all traces of this password from this machine.
Okay so again we're going to upload our docker image to docker hub so let's run this command again and you'll see because we're just using the docker images command we can see the base image as well as our image.
So we can see red hat UBI 8.
We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.
So this is the top line in my case container of cats latest and then the image ID.
So then we need to run this command so docker space tag and then the image ID that you've just copied into your clipboard and then a space and then your docker hub username.
In my case it's actrl with 1L if you're following along you need to use your own username and then forward slash and then the name of the image that you want this to be stored as on docker hub so I'm going to use container of cats.
So that's the command you need to use so docker tag and then your image ID for container of cats and then your username forward slash container of cats and press enter and that's everything we need to do to prepare to upload this image to docker hub.
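Putting that together, with placeholders for the values that are yours:

```bash
# Tag the local image with your Docker Hub namespace and repository name
# (IMAGE_ID and YOUR_DOCKERHUB_USER are placeholders)
docker tag IMAGE_ID YOUR_DOCKERHUB_USER/containerofcats
```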
So the last command that we need to run is the one that actually uploads the image to Docker Hub, and that command is docker push. So we're going to push the image to Docker Hub; then we need to specify the Docker Hub username (again, this is my username, but if you're doing this in your environment it needs to be your username), then forward slash, then the image name, in my case container of cats, and then colon latest. Once you've got all that, go ahead and press enter, and that's going to push the docker image that you've just created up to your Docker Hub account. Once it's up there, it means that we can deploy from that docker image to other EC2 instances and even ECS, and we're going to do that in a later demo in this section of the course.
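In command form, with the same placeholders as before (repository name assumed to match the tag step):

```bash
# Push the tagged image to Docker Hub
docker push YOUR_DOCKERHUB_USER/containerofcats:latest
```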
Now that's everything that you need to do in this demo lesson. You've essentially installed and configured the Docker engine, used a Dockerfile to create a Docker image from some local assets, tested that Docker image by running a container using that image, and then uploaded that image to Docker Hub. As I mentioned before, we're going to use that in a future demo lesson in this section of the course.
Now the only thing that remains is to clear up the infrastructure that we've used in this demo lesson. So go ahead and close down all of these extra tabs and go back to the CloudFormation console. This is the stack that's been created by the one-click deployment link, so all you need to do is select this stack (it should be called EC2 Docker), click on delete, and confirm that deletion. That will return the account to the same state as it was in at the start of this demo lesson.
Now that is everything you need to do in this demo lesson I hope it's been useful and I hope you've enjoyed it so go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.
That is a very valid concern. AI-generated content, such as output from ChatGPT, can flood online platforms like email and Wikipedia with misinformation, eroding people's trust in those platforms. Because Wikipedia, for example, enables users to edit entries, it is highly susceptible to the addition of false information. There are systems in place for moderation, but it's tough to keep up with how quickly AI can generate content. Maintaining the reliability of such platforms requires stronger editorial controls and awareness on the part of users.
-
-
pressbooks.lib.jmu.edu pressbooks.lib.jmu.eduWork4
-
Does anyone know the original Italian word for "work"?
-
What did the Italian word for "work" convey in Montessori's time?
-
Work
Language evolution! Historical discussion.
-
[MAPS 2024 conversation] Italian translations of the term "work":
* "meaningful activity"
* "play" (lavora) ... i.e., "meaningful play"
Context: English translations of Montessori's original writing. The Italian has different meanings than the English translations. Historical context matters as it relates to the meaning of terms.
-
-
www.nytimes.com www.nytimes.com
Tags
- Marc-André Parisien
- by: Manuela Andreoni
- Ellen Whitman
- Canada
- Drivers and Impacts of the Record-Breaking 2023 Wildfire Season in Canada
- increasing risk of wildfires
- flash droughts
- regeneration failure
- Brendan Byrne
- Carbon emissions from the 2023 Canadian wildfires
- Natural Resources Canada
Annotators
URL
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.
I completely agree with this. As TikTok gained popularity with its short videos, many other platforms quickly adopted this feature for creating and sharing short-form content. Instagram introduced Reels, and YouTube launched Shorts, both experiencing significant growth as a result. Even Spotify has now incorporated a similar short video format.
-
-
pressbooks.lib.jmu.edu pressbooks.lib.jmu.edu
-
Related terms
Discipline
-
moral development
Related to agency
-
-
pressbooks.lib.jmu.edu pressbooks.lib.jmu.edu
-
This is related to normalization
-
-
pressbooks.lib.jmu.edu pressbooks.lib.jmu.edu
-
Social newborn
Broader /related Term: Third Plane of Development; Adolescence
-
-
coalab.space coalab.space
-
quarks
What are quarks?
-
-
www.repubblica.it www.repubblica.it
-
The current global coral bleaching event is already the fourth in 25 years. This summer, temperatures across a large part of the tropical seas were 3° above average. Speaking with Repubblica, coral expert Roberto Danovaro explains that a quarter of the world's coral stocks have already been lost. Coral reefs, the ecosystems with the greatest biodiversity, are particularly vulnerable to global heating. https://www.repubblica.it/green-and-blue/dossier/negazionisti-climatici/2024/10/07/news/barriera_corallina_sbiancamento_crisi_clima_roberto_danovaro-423531902/
-
-
pierce.instructure.com pierce.instructure.com
-
aspects of women's health
-
needle exchange websites
-
The Tuskegee Experiment based on information presented in different genres
-
approaches to nursing regarding various patient conditions
-
examination of rhetorical strategies in various genres
-
-
www.theatlantic.com www.theatlantic.com
-
In psychology, the belief that only conservatives can be authoritarians, and that therefore only conservative authoritarians warrant serious study, has proved self-reinforcing over the course of decades.
!
-
“powerful pressures to maintain discipline among members, advocate aggressive and censorious means of stifling opposition, [and] believe in top-down absolutist leadership.”
!
-
Intriguingly, the researchers found some common traits between left-wing and right-wing authoritarians, including a “preference for social uniformity, prejudice towards different others, willingness to wield group authority to coerce behavior, cognitive rigidity, aggression and punitiveness towards perceived enemies, outsized concern for hierarchy, and moral absolutism.”
!
-
But one reason left-wing authoritarianism barely shows up in social-psychology research is that most academic experts in the field are based at institutions where prevailing attitudes are far to the left of society as a whole. Scholars who personally support the left’s social vision—such as redistributing income, countering racism, and more—may simply be slow to identify authoritarianism among people with similar goals.
!
-
-
www.liberation.fr www.liberation.fr
-
The extreme rainfall in the southeast and centre of France in September was part of a so-called "Mediterranean episode" (épisode méditerranéen). These events are intensifying with global heating. Libération asked climate researchers about this, and it points to an updated study on warming in France. https://www.liberation.fr/environnement/climat/les-intemperies-dans-le-sud-est-de-la-france-enieme-illustration-des-effets-devastateurs-du-rechauffement-climatique-20241018_HCAHWSJACRADZNAVD5G7IZE66M/
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
menselijk handelen (human acts)
buying, borrowing, renting, etc.
-
Blote rechtsfeiten (bare legal facts)
being born, dying, ageing, etc.
-
-
www.biorxiv.org www.biorxiv.org
-
Overall Assessment (4/5)
Summary: The authors provide a software tool, NeuroVar, that helps visualize genetic variations and gene expression profiles of biomarkers in different neurological diseases.
Technical Release criteria
Is the language of sufficient quality? * The language of the document is of sufficient quality. I did not notice any major issues.
Is there a clear statement of need explaining what problems the software is designed to solve and who the target audience is? * Yes, the authors provide a statement of need. They mention that there is a need for a specialized software tool to identify genes from transcriptomic data and genetic variations such as SNPs, specifically for neurological diseases. Perhaps the authors could expand a bit in the introduction on how they chose the diseases; e.g. stroke is not listed among the neurological diseases.
Is the source code available, and has an appropriate Open Source Initiative license been assigned to the code? * Yes, the source code is available on GitHub at the following link: https://github.com/omicscodeathon/neurovar. Additionally, the authors deposited the source code and additional supplementary data in a permanent repository on Zenodo under the following DOI: https://zenodo.org/records/13375493. They also provided test data: https://zenodo.org/records/13375591. I was able to download and access the complete set of data.
As Open Source Software, are there guidelines on how to contribute, report issues or seek support on the code? * I did not find any way to contribute, report issues or seek support. I would recommend that the authors add this information to the GitHub README file.
Is the code executable? * Yes, I could execute the code using R 4.3.3 in RStudio.
Is the documentation provided clear and user friendly? * The documentation is provided and is user friendly. I was able to install, test and run the tool using RStudio. The authors may also consider offering a simple website link for the R Shiny tool if possible; this may enable access for scientists who are not familiar with R. It is especially great that the authors provided a demonstration video, and I was able to reproduce the steps. However, I would recommend adding more information to the YouTube video; e.g. a reference to the preprint/paper and the GitHub link would be helpful to connect the data. Perhaps the authors could also expand a bit on the possibilities to export data from their software, and provide different formats, e.g. PDF/PNG/JPEG. I think this is important, as many researchers will want to export their outputs, e.g. the heatmaps.
Is installation/deployment sufficiently outlined in the paper and documentation, and does it proceed as outlined? * I could follow the installation process, but perhaps the authors could describe the download from GitHub in more detail, as some scientists may have trouble with it. An installation video (in addition to the video demonstration of the NeuroVar Shiny app) might also be helpful.
Is there a clearly-stated list of dependencies, and is the core functionality of the software documented to a satisfactory level? * Yes, dependencies are listed and are installed automatically. It worked for me with R 4.3.3 in RStudio. In the manuscript and in the
Have any claims of performance been sufficiently tested and compared to other commonly-used packages? * not applicable
Are there (ideally real world) examples demonstrating use of the software? * Yes, the authors use the example of epilepsy (focal epilepsy) and the gene of interest DEPDC5. I replicated their search and got the same results. However, I find that the label of the gene's transcript in Figure 1 could be a bit clearer; e.g. it is not clear to me what transcript start and end refer to. It might also be helpful if the authors provided an example dataset for the expression data that is loaded in the software by default. Furthermore, the authors present case study results using RNA-seq in ALS patients with mutations in the FUS, TARDBP, SOD1 and VCP genes.
Is test data available, either included with the submission or openly available via cited third party sources (e.g. accession numbers, data DOIs, etc.)? * Yes, the authors provide test data with DOIs: https://zenodo.org/records/13375591.
Is automated testing used or are there manual steps described so that the functionality of the software can be verified? * Automated testing is not used, as far as I can assess.
Overall Recommendation: * Accept with revisions
Reviewer Information: Ruslan Rust is an assistant professor in neuroscience and physiology at University of Southern California working on stem cell therapies on stroke. His lab is particularly interested in working with genomic data and the development of new biomarkers for stroke, AD and other neurological diseases.
Dr. Ruslan Rust's profile on ResearchHub: https://www.researchhub.com/author/4945925
ResearchHub Peer Reviewer Statement: This peer review has been uploaded from ResearchHub as part of a paid peer review initiative. ResearchHub aims to accelerate the pace of scientific research using novel incentive structures.
-
-
www.liberation.fr www.liberation.fr
-
Lyon, governed by the French Greens, is responding to global heating with a "permeable city" strategy. This includes letting water soak into the ground rather than draining it away in every single new construction project, without exception. A vice-president of the region explains the strategy, and the state's unwillingness to support the city financially, in a conversation with Libération prompted by the extreme rainfall in the Rhône département. https://www.liberation.fr/societe/inondations-dans-la-metropole-de-lyon-nous-payons-des-annees-damenagements-urbains-qui-nont-pas-tenu-compte-du-dereglement-climatique-20241018_FT2OJG5YNVFWBJMHM37NJGB634/?redirected=1
-
-
terada-202410-streamlit.gihyo-python-monthly.pages.dev terada-202410-streamlit.gihyo-python-monthly.pages.dev
-
しいます
typo: します
-
ウェジェット
typo: ウィジェット
-
づつ
typo: ずつ
-
事
Style: write this as ごと (hiragana).
-
グラフを表現する機能があります ("it has functionality for rendering graphs")
Streamlit seems to use this internally, so it might be worth introducing it somewhere.
-
t.session_state.dices.append
Rather than rewriting this, wouldn't it be enough to put dices = st.session_state.dices just before the if statement? (Then nothing else would need rewriting.)
-
if "dices" not in st.session_state: # セッションデータの初期化 st.session_state.dices = []
Streamlit's own sample code is written this way too, but it struck me as a little tricky that the existence check uses not in while the initialization uses attribute access. I'd like that explained. st.session_state["dices"] = [] appears to behave the same way.
-
これらを ("these")
What exactly does "these" refer to here?
-
上記のステップの3つ目 ("the third of the steps above")
If you're going to phrase it this way, it would be clearer to make the steps above a numbered list and write "up to step 3".
-
。
At this point, I'd like to see an app that simply passes various data types to st.write, with screenshots showing how each one is displayed nicely.
-
サンプルアプリ(2)
Please give this one a name as well.
-
それ相応
It's hard to tell what それ相応 ("correspondingly appropriate") is pointing at, and I wonder who is judging it appropriate. Wording like 適切な ("appropriate") or データ型にあった ("suited to the data type") might work better.
-
プロパティ
Shouldn't this be 引数 (argument) rather than プロパティ (property)?
-
以下のとおりです。 ("It is as follows.")
This jumps straight to the result; I'd like two screenshots, one of the initial state and one after entering text and pressing the button. (An animated GIF would be even better.)
-
randam
typo: random
-
入力されたもを
typo: 入力されたものを?
-
# 入力ボックス ("input box")
The code feels cluttered; putting each comment on the line above the code it describes, and adding blank lines where the functionality changes, would make it easier to read.
-
st.text_input
I'd like an explanation just before this point, something like "Next, we explain ○○."
-
splited_text
-
choice
I'd prefer choiced.
-
replace(" ", " ")
Converting full-width to half-width spaces is unrelated to the main topic; simply using text.split() should be enough.
-
スペース区切りの文字列から一つの単語を選択する ("select one word from a space-separated string")
This title doesn't match what the code passes to st.title(). What is the intent of this title?
-
内容 ("contents")
Please write this out as prose.
-
サンプルアプリ(1
"(1)" is hard to parse, so I think it's better to give it a name, e.g. サンプル - ランダム選択アプリ ("Sample: random selection app") or similar.
-
起動 ("launch")
Something like アプリを起動 ("Launch the app") would be better.
-
```bash
The markup here looks like it's broken somehow.
-
import streamlit as st
-
st.title("サンプルアプリ")
I'm in the camp that wants a blank line left here.
-
多くの依存パッケージがあり、pandasなども依存しており多くのパッケージがインストールされます。
Redundant wording. How about something like: pandasなど多くの依存パッケージが一緒にインストールされます。 ("Many dependency packages, such as pandas, are installed along with it.")
-
# venvの作成と有効化 ("create and activate a venv")
Overall, these would work better as captions than as code comments.
-
venvについては ("as for venv")
Neither the April article nor previous authors' articles explained venv in this much depth, so the venv explanation could probably be dropped.
https://terada-202410-streamlit.gihyo-python-monthly.pages.dev/2024/202404
-
されている
Make it している (active): Streamlit is the subject here, so there's no need for the passive.
-
開発開発
typo (duplicated word)
-
複雑な処理を ("complex processing")
I think you want to say that the complex processing happens on the server side while the front end stays simple, but I don't think that comes across as written.
-
これらの ("these")
"これら" appears twice in a row, which is hard to read, and this one seems to refer to something different from the previous one. Writing it out concretely instead of using pronouns would be better.
-
、
This sentence has too many commas, which makes it hard to read. Please restructure it.
-
これはら
typo: これらは
-
機能にフォーカスを当てて、よく使う機能を紹介します ("focusing on features, introducing commonly used features")
"機能" appears twice; you could drop the first occurrence.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.
Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.
And also if you are following along, you need to be logged in as the IAM admin user of the general AWS account.
So the management account of the organization and have the Northern Virginia region selected.
Now again, this is going to be very brief.
So it's probably not worth doing in your own environment unless you really want to.
Now what I want to demonstrate is termination protection.
So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.
Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.
And this is assuming that the instance is currently in a running state.
Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.
But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.
And that can result in data loss, which isn't ideal.
What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.
If you click that option, you get this dialogue where you can enable termination protection.
So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.
So I'm going to enable it and click on save.
And now that instance is protected against termination.
If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.
The message says the instance (followed by the instance ID) may not be terminated, and that you should modify its disableApiTermination instance attribute and then try again.
So this instance is now protected against accidental termination.
Now this presents a number of advantages.
One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.
So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.
So you have the option of role separation.
You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.
So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.
It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.
So you might not have this for development and test environments, but for anything in production, this might be a standard feature.
If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
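As a sketch of that automated approach, termination protection maps to the disableApiTermination attribute and can be toggled from the CLI; the instance ID below is a placeholder:

```bash
# Enable termination protection on an instance
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --disable-api-termination

# A terminate attempt now fails with an OperationNotPermitted error
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Disable protection again before a deliberate termination
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-disable-api-termination
```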
So this is a really useful feature to be aware of.
And for the SysOps exam, it's essential that you understand when and where you'd use this feature.
And for both the SysOps and the developer exams, you should pay attention to this disableApiTermination attribute.
You might be required to know which attribute needs to be modified in order to allow terminations.
So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.
At this point though, that is everything that I wanted to cover about this feature.
So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.
One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.
Now logically, the default is stop, but if you are running an environment where you don't want to consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.
You might not want to have an account with lots of stopped instances.
You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
Now at this point, that is everything I wanted to cover.
If you were following along with this in your own environment, you do need to clear up the infrastructure.
So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.
And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.
If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.
So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.
-
-
pressbooks.lib.jmu.edu pressbooks.lib.jmu.edu
-
Erdkinder
Land school
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson either you're going to get the experience or you can watch me interacting with an Amazon machine image.
So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.
It had an install of WordPress, it had the cowsay application installed, and a custom login banner.
Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.
So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.
The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.
So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.
Now if I go ahead and change the region, maybe from Northern Virginia, which is US-East-1, to US East (Ohio), which is US-East-2, what we'll see is that we go back to the same area of the console, only now we won't see any AMIs. That's because an AMI is tied to the region in which it's created.
Every AMI belongs in one region and it has a unique AMI ID.
So let's move back to Northern Virginia.
Now we are able to copy AMIs between regions. This allows us to make one AMI and use it for a global infrastructure platform. So we can right-click and select Copy AMI, then select the destination region. For this example, let's say that I did want to copy it to Ohio; I would select that in the drop-down. It would allow me to change the name if I wanted, or I could keep it the same. For the description, it would show that it's been copied from this AMI ID in this region, and then it would have the existing description at the end.
So at this point I'm going to go ahead and click Copy AMI, and that process has now started. So if I close down this dialogue and then change the region from US-East-1 to US-East-2, we now have a pending AMI, and this is the AMI that's being copied from the US-East-1 region into this region. If we go ahead and click on Snapshots under Elastic Block Store, then we're going to see the snapshot or snapshots which belong to this AMI.
Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.
In our case we only have the one which is the boot volume that's used for our custom AMI.
Now the time taken to copy a snapshot between regions depends on many factors what the source and destination region are and the distance between the two the size of the snapshot and the amount of data it contains and it can take anywhere from a few minutes to much much longer so this is not an immediate process.
Once the snapshot copy completes then the AMI copy process will complete and that AMI is then available in the destination region but an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.
AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.
If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.
So take note of this AMI ID in this region, and if we switch back to the original source region, so US-East-1, note how this AMI has a different ID. So they are different AMIs, completely different AMIs; you're creating a new one as part of the copy process.
So while the data is going to be the same conceptually they are completely separate objects and that's critical for you to understand both for production usage and when answering any exam questions.
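To make that concrete, here's a minimal sketch of the same copy using the CLI, with a placeholder source AMI ID and an assumed name. Note that the call returns a brand-new AMI ID in the destination region, which is exactly the point being made here:

```bash
# Copy an AMI from us-east-1 into us-east-2; the response contains a new ImageId
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-east-2 \
    --name "a4l-wordpress-template"
```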
Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.
So if I right-click on this AMI and edit AMI permissions by default an AMI is private.
Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.
Now you can change the permission of the AMI you could set it to be public and if you set it to public it means that any AWS account can access this AMI and so you need to be really careful if you select this option because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.
A much safer way is if you do want to share the AMI with anyone else then you can select private but explicitly add other AWS accounts to be able to interact with this AMI.
So I could click in this box and then for example if I clicked on services and I just moved to the AWS organization service I'll open that in a new tab and let's say that I chose to share this AMI with my production account so I selected my production account ID and then I could add this into this box which would grant my production AWS account the ability to access this AMI.
Now note that there's also this checkbox, and this adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.
Generally if you are sharing an AMI to another account inside your organization then you can afford to be relatively liberal with permissions so generally if you're sharing this internally I would definitely check this box and that gives full permissions on the AMI as well as the snapshots so that anyone can create volumes from those snapshots as well as accessing the AMI.
So these are all things that you need to consider.
Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.
If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.
While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.
You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.
There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.
Now if you want to, you can also share an AMI with an organizational unit or organization, and you can do that using this option.
This makes it easier if you want to share an AMI with all AWS accounts within your organization.
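For reference, a minimal sketch of sharing with a single account from the CLI, assuming placeholder AMI, snapshot and account IDs; the second command is the CLI equivalent of the create volume permissions checkbox mentioned above:

```bash
# Grant one account launch permission on the AMI
aws ec2 modify-image-attribute \
    --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=111122223333}]"

# Also grant create-volume permission on the AMI's backing snapshot
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 111122223333
```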
At this point though I'm not going to do that we don't need to do that in this demo.
What we're going to do now though is move back to US-East-2.
That's everything I wanted to cover in this demo lesson.
Now that this AMI is available, we can right click, select Deregister AMI, and then move back to US-East-1; and now that we've done this demo lesson, we can do the same process with this AMI.
So we can right click, select Deregister AMI, and that will remove that AMI.
Click on Snapshots; this is the snapshot created by this AMI, so we need to delete this as well. Right click, delete that snapshot, and confirm that. Then we'll need to do the same process in the region that we copied the AMI and the snapshots to.
So select US-East-2. It should be the only snapshot in the region; make sure it is the correct one, right click, delete, and confirm that deletion. Now you've cleared up all of the extra things created within this demo lesson.
Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.
Go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So the first step is to shut down this instance.
So we don't want to create an AMI from a running instance because that can cause consistency issues.
So we're going to close down this tab.
We're going to return to instances, right-click, and we're going to stop the instance.
We need to acknowledge this and then we need to wait for the instance to change into the stopped state.
It will start with stopping.
We'll need to refresh it a few times.
There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.
So this is going to create an AMI.
And first we need to give the AMI a name.
So let's go ahead and use Animals for Life template WordPress.
And we'll use the same for Description.
Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.
It's going to create a block device mapping, which maps those snapshots onto a particular device ID.
And it's going to use the same device ID as this instance is using.
So it's going to set up the storage in the same way.
It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.
So you'll see here that it's using EBS.
It's got the original device ID.
The volume type is set to the same as the volume that our instance is using, and the size is set to 8.
Now you can adjust the size during this process as well as being able to add volumes.
But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.
Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.
If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.
You can force that not to occur, so create an AMI without rebooting.
But again, that's even less ideal.
The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
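The console steps that follow map onto a short CLI sequence; a sketch with a placeholder instance ID looks like this:

```bash
# Stop the instance first so the AMI is taken from consistent storage
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Create the AMI; block device mappings are copied from the instance
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "Animals-for-Life-template-WordPress"
```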
So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.
Now that process will take some time.
If we just scroll down, look under Elastic Block Store and click on Snapshots.
You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.
So that's the first step.
So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.
So that needs to complete first.
Initially it's going to be in a pending state.
We'll need to give that a few moments to complete.
If we move to AMIs, we'll see that the AMI is also being created.
It is in a pending state, and it's waiting for that snapshot to complete.
Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.
And the time taken to create a snapshot can vary.
The initial snapshot always takes much longer because it has to take that full copy of data.
And obviously depending on the size of the original volume and how much data is being used, will influence how long a snapshot takes to create.
So the more data, the larger the volume, the longer the snapshot will take.
After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from pending status.
So let's just refresh it.
After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.
So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.
We've shut down the EC2 instance and generated an AMI from that instance.
And now we have this AMI in a state where we can use it to create additional instances.
So we're going to do that.
We're going to launch an additional instance using this AMI.
While we're doing this, I want you to consider exactly how much quicker this process now is.
So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.
So right click on this AMI and select launch.
Now this will step you through the launch process for an EC2 instance.
You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.
You'll be asked to select all of the normal configuration options.
So first let's put a name for this instance.
So we'll use the name "Instance from AMI".
Then we'll scroll down.
As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.
Scroll down.
You'll need to specify an instance type just as normal.
We'll use a free tier eligible instance.
This is likely to be t2.micro or t3.micro.
Below that, go ahead and click and select Proceed without a key pair not recommended.
Scroll down.
We'll need to enter some networking settings.
So click on Edit next to Network Settings.
Click in VPC and select A4L-VPC1.
Click in Subnet and make sure that SN-Web-A is selected.
Make sure the boxes below are both set to Enable for the auto-assign IP settings.
Under Firewall, click on Select Existing Security Group.
Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.
And that will have some random characters at the end.
That's absolutely fine.
Select that.
Scroll down.
And notice that the storage is configured exactly the same as the instance which you generated this AMI from.
Everything else looks good.
So we can go ahead and click on Launch Instance.
So this is launching an instance using our custom created AMI.
So let's close down this dialog and we'll see the instance initially in a pending state.
Remember, this is launching from our custom AMI.
So it won't just have the base Amazon Linux 2 operating system.
Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.
So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.
So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.
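As a sketch, launching from the custom AMI via the CLI looks like this; every ID below is a placeholder for the values selected in the console steps above:

```bash
# Launch one instance from the custom AMI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --associate-public-ip-address \
    --count 1
```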
So let's give this a few minutes to launch.
Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.
Now one thing you will need to change because we're using a custom AMI, AWS can't necessarily detect the correct username to use.
And so you might see sometimes it says root.
Just go ahead and change this to EC2-user and then go ahead and click Connect.
And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.
So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.
If we go back to the AWS console and select instances, make sure we still have the instance from AMI selected and then locate its public IP version for address.
Don't use this link, because that will use HTTPS. Instead, copy the IP address into your clipboard and open that in a new tab.
Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.
So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.
If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.
And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.
Now that's everything that I wanted to cover in this demo lesson.
You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.
So let's clear up all of the infrastructure that we've used in this lesson.
So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.
So right click and then go to terminate instance.
You'll need to confirm that.
That will start the process of termination.
Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.
So we're going to need to leave this in place.
So we're not going to delete the AMI or the snapshots created within this lesson.
Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.
And that will remove all of the infrastructure that we've created within this demo lesson.
And at this point, that's everything that I wanted you to do in this demo.
So go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.
So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.
Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?
Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.
What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.
So this is a fairly common use case so let's jump in and get started.
Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link, go ahead and click that link.
This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.
We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.
Okay so that stacks now moved into a create complete state, we're good to continue with the demo.
Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon machine image so also attached to this lesson is the lessons command document which contains all of those commands so go ahead and open that document.
Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation, and that's the case; we're running the same manual installation process as part of setting up our Animals for Life AMI. So you're going to need all of these commands, but as you've already experienced them in the previous demo lesson, I'm going to run through them a lot quicker in this demo lesson. So go back to the AWS console; we need to move to the EC2 area of the console, so click on the services drop down, type EC2 into the search box, and then open that in a new tab.
Once you're there, go ahead and click on running instances and close down any dialogues about console changes; we want to maximize the amount of screen space that we have. We're going to connect to this A4L public EC2 instance. This is the instance that we're going to use to create our AMI, so we're going to set the instance up manually, how we want it to be, and then we're going to use it to generate an AMI. So we need to connect to this instance: right click, select connect, and we're going to use EC2 Instance Connect to do the work within our browser, so make sure the username is ec2-user and then connect to this instance. Then, once connected, we're going to run through the commands to install WordPress really quickly. We're going to start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press enter. Now we're going to run through all of the next set of commands really quickly, because you used them in the previous demo lesson. First we're going to go ahead and install the MariaDB server, Apache and the wget utility. While that's installing, copy all of the commands from step 3; these are the commands which enable and start Apache and MariaDB, so go ahead and paste all four of those in and press enter. So now Apache and MariaDB are both set to start when the instance boots, as well as being set to currently started. I'll just clear the screen to make this easier to see. Next we're going to set the DB root password; again, that's this command, using the contents of the variable that you set at the start. Next we download WordPress. Once it's downloaded, we move into the web root folder, we extract the download, and we copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root. Once we've done that, we remove the WordPress folder itself and then tidy up by deleting the download. I'm going to clear the screen. We copy the template configuration file into its final file name, so wp-config.php. Then we're going to replace the placeholders in that file: we start with the database name, using the variable that you set at the start; next the database user, which you also set at the start; and finally the database password. Then we're going to set the ownership on all of these files to be the Apache user and the Apache group. Clear the screen. Next we need to create the DB setup script that I demonstrated in the previous demo, so we need to run a collection of commands: the first to enter the create database command, the next one to enter the create user command and set that password, the next one to grant permissions on the database to that user, and then one to flush the permissions. Then we need to run that script using the MySQL command line interface, which runs all of those commands and performs all of those operations, and then we tidy up by deleting that file. Now at this point we've done the exact same process that we did in the previous demo: we've installed and set up WordPress. And if everything's working okay, we can go back to the AWS console, click on instances, select the running A4L public EC2 instance and copy down its IP address; again, make sure you copy that down, don't click this link, and then open that in a new tab. If everything's working as expected, you should see the WordPress installation dialogue. Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that when anyone uses this AMI, they're also greeted with this installation.
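For reference, the package installation and WordPress download portion of that run-through condenses to something like the sketch below; the variable values are illustrative placeholders, and the lesson's commands document remains the authoritative version:

```bash
# Variables used throughout the install (illustrative values)
DBName='a4lwordpress'
DBUser='a4lwordpress'
DBPassword='CHANGEME'
DBRootPassword='CHANGEME'

# Install the stack and enable it at boot
sudo yum install -y mariadb-server httpd wget
sudo systemctl enable --now mariadb httpd

# Set the database root password
sudo mysqladmin -u root password "$DBRootPassword"

# Download WordPress into the web root and tidy up
cd /var/www/html
sudo wget https://wordpress.org/latest.tar.gz
sudo tar -zxf latest.tar.gz
sudo cp -r wordpress/* .
sudo rm -r wordpress latest.tar.gz
```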
So we're going to leave this at this point; we're not going to perform the installation. Instead, we're going to go back to the EC2 instance. Now, because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance. To do that, we're going to install an animal-themed utility called cowsay. I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what cowsay does, I'm going to run cowsay "oh hi". If all goes well, we see a cow using ASCII art saying the "oh hi" message that we just typed. So we're going to use this to create a message-of-the-day welcome shown when anyone connects to this EC2 instance. To do that, we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano to create this file: /etc/update-motd.d/40-cow. This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance. So we're going to copy in these two lines and then press enter; this means that when anyone logs in to the EC2 instance, they're going to get an animal-themed welcome. Use control O to save that file and control X to exit, and clear the screen to make it easier to see. We're going to make sure that the file we've just edited has the correct permissions, then we're going to force an update of the message of the day; this is going to be what's displayed when anyone logs in to this instance. And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance, so we're going to use this command to reboot it. And just to illustrate how this works, I'm going to close down that tab, return to the EC2 console, and give this a few moments to restart. That should have rebooted by now, so we're going to select it, right click, go to connect, and again use EC2 Instance Connect. Assuming everything's working, when we connect to the instance we'll now see an animal-themed login banner. So this is just a nice way that we can ensure that anyone logging in to this instance understands that (a) it uses the Amazon Linux 2 AMI and (b) it belongs to Animals for Life. So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, we've customized the banner, and now we're going to use this as our template instance to create our AMI, which can then be used to launch other instances. Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
-
-
www.youtube.com www.youtube.com
-
Video summary [00:00:23] - [00:32:19]:
This video explores the history of the republican school in France, its debates and its open questions, highlighting its evolution since 1792 and its link with the Republic.
Highlights: + [00:00:23] Introduction and context * Presentation of Jean-François Chanet * Aims of the association of history and geography teachers * Importance of the republican school + [00:01:01] History of the republican school * Link with the Republic since 1792 * The Ferry laws and the unification of the state * The dark period under the Vichy regime + [00:02:21] Current debates and questions * Laïcité and republican values * Adapting to contemporary challenges * The importance of preserving fundamental values + [00:05:01] Historical examples and anecdotes * Gaston Bonheur and his book * The role of schoolteachers and schools * The impact of the wars on education + [00:10:00] Unity and separation * Separation of morality and religion * Separation of the sexes and of social classes * Competition between public and religious schools
Video summary [00:32:22] - [01:03:04]:
This part of the video explores the evolution of the republican school in France, focusing on the social and educational transformations since the 1960s.
Highlights: + [00:32:22] Single-class schools * Longevity despite urbanization * Feminization of the teaching profession * Teacher mobility + [00:34:00] Transformation of schools * Bringing the sexes together * Separation by age * Increase in mixed schools + [00:38:00] The problem of grade repetition * High repetition rates * Impact on the length of studies * Learning difficulties + [00:42:00] Educational inequalities * Attendance at rural schools * Disparities between centre and periphery * Collapse of the birth rate during the war + [00:50:00] Educational reforms * Political debates over the reforms * The importance of schoolteachers * Criticism of the inequalities perpetuated by the school
Video summary [01:03:07] - [01:34:05]:
This part of the video explores the history of, and the debates around, the republican school in France, focusing on educational reforms and the social challenges they encountered.
Highlights: + [01:03:07] Debates over educational reforms * The importance of political consensus * Historical opposition to education laws * The complexity of major reforms + [01:05:01] Literary and social critiques * Zola and the Dreyfus affair * Jules Romains and education * Criticism of educational inequalities + [01:09:02] Evolution of secondary education * Accessibility and inequalities * Criticism from practitioners * Reforms and resistance + [01:17:01] The concept of the single school (école unique) * Post-war ideas * Obstacles and resistance * Persistent social differences + [01:25:03] Jean Zay's reforms * Extension of compulsory schooling * Introduction of guidance (orientation) * Criticism and impact of the reforms
Video summary [01:34:08] - [01:37:07]:
This part of the video explores the challenges and crises of republican education in France, concentrating on Charles Péguy's reflections on education and society.
Highlights: + [01:34:08] Propaganda and emancipation * Spreading ideas to emancipate minds * The republican problem of the school * The opposition between the mystical and the political + [01:34:50] Charles Péguy and education * Péguy, an orphan and a brilliant pupil * His exceptional school career * Killed in the war in 1914 + [01:35:28] Crises of education * Crises of life and crises of teaching * Education reflects society * Modern society and its educational challenges
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So this is the folder containing the WordPress installation files.
Now there's one particular file that's really important, and that's the configuration file.
So there's a file called wp-config-sample.php, and this is actually the file that contains a template of the configuration items for WordPress.
So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.
So we're going to create a copy of this file with the correct name.
And to do that, we run this command.
So we're copying the template or the sample file to its real file name, so wp-config.php.
And this is the name that WordPress expects when it initially loads its configuration information.
So run that command, and that now means that we have a live config file.
Now this command isn't in the instructions, but if I just take a moment to open up this file, you don't need to do this.
I'm just demonstrating what's in this file for your benefit.
But if I run a sudo nano, and then wp, and then hyphen-config, and then php, this is how the file looks.
So this has got all the configuration information in.
So it stores the database name, the database user, the database host, and lots of other information.
Now notice how it has some placeholders.
So this is where we would need to replace the placeholders with the actual configuration information.
So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.
Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.
We're going to use the Linux utility sed, or S-E-D.
And this is a utility which can perform a search and replace within a text file.
It's actually much more complex and capable than that.
It can perform many different manipulation operations.
But for this demonstration, we're going to use it as a simple search and replace.
Now we're going to do this a number of times.
First, we're going to run this command, which is going to replace this placeholder.
Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.
We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.
So this is going to replace the placeholder with our actual database name.
So I'm going to enter that so you can do the same.
We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.
So use that command as well.
And then lastly, it will do the same for the database password.
So type or copy and paste this command and press enter.
And that now means that this wp-config has the actual configuration information inside.
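As a reference sketch, those three substitutions look like this, assuming the web root is /var/www/html and the variables were set at the start of the demo (the exact quoting in the lesson's commands document may differ):

```bash
# Swap the wp-config.php placeholders for the real values
sudo sed -i "s/'database_name_here'/'$DBName'/" /var/www/html/wp-config.php
sudo sed -i "s/'username_here'/'$DBUser'/" /var/www/html/wp-config.php
sudo sed -i "s/'password_here'/'$DBPassword'/" /var/www/html/wp-config.php
```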
And just to demonstrate that, you don't need to do this part.
I'll just do it to demonstrate.
If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.
So I'm going to control X out of that and then clear the screen.
And that concludes the configuration for the WordPress application.
So now it's ready.
Now it knows how to communicate with the database.
What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.
And to do that, we use this command.
So we're using the chown command to set the ownership of all of the files in this folder, and any subfolders, to the Apache user and the Apache group.
And the Apache user and Apache group belong to the web server.
So this just makes sure that the web server is able to access and control all of the files in the web root folder.
So run that command and press enter.
And that concludes the installation part of the WordPress application.
There's one final thing that we need to do and that's to create the database that WordPress will use.
So I'm going to clear the screen to make it easy to see.
Now what we're going to do in order to configure the database is we're going to make a database setup script.
We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.
So what we need to do is enter the commands into this file that will create the database.
After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.
Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.
So we're going to run a number of commands.
These are all in the lessons commands document.
The first one is this.
So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.
Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.
So forward slash TMP, forward slash DB setup.
So run that and that command is going to create the WordPress database.
Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.
This is going to create our WordPress database user.
It's going to set its password and then it's going to append this text to the DB setup file that we're creating.
Now all of these are actually database commands that we're going to execute within the MariaDB database.
So enter that to add that line to DB.setup.
Then we have another line which uses the same architecture as the ones above it.
It echoes the text.
It replaces these variable names with their contents and then outputs that to this DB.setup file, and this command grants our database user permissions on our WordPress database.
And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
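Taken together, the four lines look something like this as a sketch, again using the variable names spoken in this lesson; the definitive versions are in the Lesson Commands document:

    # build /tmp/DB.setup line by line; the shell expands the variables as it writes
    echo "CREATE DATABASE $dbname;" >> /tmp/DB.setup
    echo "CREATE USER '$dbuser'@'localhost' IDENTIFIED BY '$dbpassword';" >> /tmp/DB.setup
    echo "GRANT ALL ON $dbname.* TO '$dbuser'@'localhost';" >> /tmp/DB.setup
    echo "FLUSH PRIVILEGES;" >> /tmp/DB.setup

Because each line is double quoted, the shell substitutes the variable values as the lines are written, which is why the file will show the real names and passwords rather than the variable names when we cat it in a moment.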
So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.
So cat, then a space, then /tmp/DB.setup.
So as you'll see it's replaced all of these variable names with the actual contents.
So this is what the contents of this script actually looks like.
So these are commands which will be run by the MariaDB database platform.
To run those commands we use this.
So this is the MySQL command line interface.
So we're using MySQL to connect to the MariaDB database server.
We're using the username of root.
We're passing in the password and then using the contents of the DB root password variable.
And then once we've authenticated to the database, we're passing in the contents of our DB.setup script.
And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.
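As a sketch, that command is something like this, using the lowercase variable name from this lesson:

    # authenticate as root and run every statement in DB.setup
    mysql -u root --password="$dbrootpassword" < /tmp/DB.setup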
So go ahead and press enter.
That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.
And then lastly, just to keep things secure, because we don't want to leave files lying around on the file system with authentication information inside, we're just going to run this command to delete this DB.setup file.
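Something along the lines of:

    # remove the setup script, since it contains credentials
    sudo rm /tmp/DB.setup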
Okay, so that concludes the setup process for WordPress.
It's been a fairly long, intensive process, but it means that we now have an installation of WordPress on this EC2 instance, together with a database which has been installed and configured.
So now what we can do is to go back to the AWS console, click on instances.
We need to select the A4L-PublicEC2 and then we need to locate its IP address.
Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.
Instead, just copy the IP address into your clipboard and then open that in a new tab.
If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.
So pick English, United States for the language.
For the blog title, just put 'all the cats' and then admin as the username.
You can accept the default strong password.
Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.
It doesn't have to be a correct one.
So I normally use test@test.com and then go ahead and click on install WordPress.
You should see a success dialog.
Go ahead and click on login.
Username will be admin, the password that you just copied into your clipboard and then click on login.
And there you go.
We've got a working WordPress installation.
We're not going to configure it in any detail, but if you want to check that it works properly, go ahead and click on 'all the cats' at the top, then Visit Site, and you'll be able to see a generic WordPress blog.
And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.
So this has been a slow process.
It's been manual, and it's a process which is wide open for mistakes at every point.
Can you imagine doing this twice?
What about 10 times?
What about a hundred times?
It gets pretty annoying pretty quickly.
In reality, this is never done manually.
We use automation or infrastructure as code systems such as CloudFormation.
And as we move through the course, you're going to get experience of using all of these different methods.
Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.
And these are really awesome to use.
And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.
For now though, that is the end of this demo lesson.
Now what we're going to do is to clear up our account.
So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.
To do that, just move back to the AWS console.
If you still have the CloudFormation tab open, move back to that tab; otherwise click on Services and then click on CloudFormation.
If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select Delete, and then confirm that deletion.
And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.
So from this point onward in the course, we're going to start using automation.
Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon Machine Image which is going to contain a pre-baked copy of the WordPress application.
So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.
So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
cisternae
Hollow curves
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
substantive criminal law (materiële strafrecht)
Wetboek van Strafrecht (Sr), the Dutch Criminal Code
-
procedural criminal law (formele strafrecht)
Wetboek van Strafvordering (Sv), the Dutch Code of Criminal Procedure
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.
Joking aside though, WordPress is used in a lot of courses as a very simple example of an application stack.
The problem is that most courses don't take this any further.
But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.
What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.
We're going to be doing this so you can get the experience of how not to do things.
My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.
As we move through the course we can compare this to various different automated ways of installing software within AWS.
So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.
By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.
It's going to be architecturally monolithic with everything running on the one single instance.
In this case that means both the application and the database.
The design is fairly straightforward.
It's just the Animals for Life VPC.
We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.
So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.
At this point it's time to get started and implement this architecture.
So let's go ahead and switch over to our AWS console.
To get started with this demo lesson you're going to need to do a few preparation steps.
First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.
So go ahead and open the one-click deployment link that's attached to this lesson.
That link is going to take you to the Quick Create Stack screen.
Everything should be pre-populated.
The stack name should be WordPress.
All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.
And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.
So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.
Also attached to this lesson is a Lesson Commands document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.
So go ahead and open that as well.
So that should look something like this and these are all of the commands that we're going to be using.
So these are the commands that perform a manual WordPress installation.
Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.
So click on the Services drop-down and then locate EC2 in this All Services part of the screen.
If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.
And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.
Go ahead and right-click on this instance.
This is the instance we'll be installing WordPress within.
So right-click, select Connect.
We're going to be using our browser to connect to this instance, so we'll be using Instance Connect. Just verify that the username is ec2-user and then go ahead and connect to this instance.
Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.
It's not just about saving time and effort.
It's also about error reduction and the ability to keep things consistent.
Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.
So we're going to create four variables.
One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.
So let's start off by using the pre-populated values from the Lesson Commands document.
So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.
So for example, dbname and press Enter and that will show us the value stored within that variable.
So now we can use these at later points of the installation.
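As a sketch, that first stage looks something like this; the values here are placeholders only, and the real names and values are in the Lesson Commands document:

    # configuration values used by the rest of the install
    dbname=a4lwordpress            # example value only
    dbuser=a4lwordpress            # example value only
    dbpassword=REPLACE_ME          # use the value from the Lesson Commands document
    dbrootpassword=REPLACE_ME      # use the value from the Lesson Commands document
    echo $dbname                   # confirm a variable is set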
So at this point I'm going to clear the screen to keep things easy to see, and stage two of this installation process is to install some system software.
So there are a few things that we need to install in order to allow a WordPress installation.
We'll install those using the DNF package manager.
We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the database server, which is mariadb-server, the Apache web server, which is httpd, and then a utility called wget, which we're going to use to download further components of the installation.
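As a sketch, and assuming Amazon Linux 2023, where the MariaDB server package is named mariadb105-server, the command looks like this; note that WordPress also needs PHP, so expect the definitive command in the Lesson Commands document to list additional php packages:

    # install the database server, the web server and the wget download utility
    sudo dnf install -y mariadb105-server httpd wget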
So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
Now they're done, so I'll clear the screen to keep this easy to read.
Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.
So to do that we need to enable and start those services.
So enabling and starting means that the services are started right now and that they'll also start automatically if the machine reboots.
So first we'll use this command.
So we're using admin privileges again with systemctl, which allows us to start and stop system services, and then we use enable and then httpd, which is the web server.
So type that in and press enter, and that ensures that the web server is enabled.
We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.
So type or copy and paste and press enter.
So that means both of those processes will start if ever the instance is rebooted, and now we need to manually start both of them so they're running and we can interact with them.
So we need to use the same structure of command but instead of enable we need to start both of these processes.
So first the web server and then the database server.
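Written out, the four commands are along these lines:

    # start on every boot
    sudo systemctl enable httpd
    sudo systemctl enable mariadb
    # start right now
    sudo systemctl start httpd
    sudo systemctl start mariadb

On modern systemd versions, systemctl enable --now httpd would do both steps for a service in a single command.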
So that means the EC2 instance now has a running web server and database server, both of which are required for WordPress.
So I'll clear the screen to keep this easy to read.
Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.
So this is the username and password that will be used to perform all of the initial configuration of the database server.
Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.
So we're using the DB root password variable that we configured right at the start.
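As a sketch, using the lowercase variable name spoken in this lesson:

    # set the database root password from the variable set earlier
    sudo mysqladmin -u root password "$dbrootpassword"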
So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
The next step which is step 5 is to install the WordPress application files.
Now to do that we need to install these files inside what's known as the web root.
So whenever you browse to a web server, either using an IP address or a DNS name, if you don't specify a path (so if you just use the server name, for example netflix.com), then it loads those initial files from a folder known as the web root.
Now on this particular server the web root is stored in /var/www/html, so we need to download WordPress into that folder.
Now we're going to use this command, wget, and that's one of the packages that we installed at the start of this lesson.
So we're giving it admin privileges and we're using wget to download latest.tar.gz from wordpress.org, and then we're putting it inside this web root.
So /var/www/html.
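As a sketch, that command looks something like this; -P tells wget which directory to save into:

    # download the latest WordPress release into the web root
    sudo wget https://wordpress.org/latest.tar.gz -P /var/www/html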
So go ahead and copy and paste or type that in and press enter.
That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.
Next we need to move into that folder, so cd /var/www/html and press enter.
We need to use a Linux utility called tar to extract that file.
So sudo, then tar, then the command line options -zxvf, and then the name of the file, latest.tar.gz. So copy and paste or type that in and press enter, and that will extract the WordPress download into this folder.
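So those two steps, written out:

    cd /var/www/html
    sudo tar -zxvf latest.tar.gz    # extracts into a wordpress/ subfolder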
So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.
Now we actually don't want them inside a WordPress folder.
We want them directly inside the web root.
So the next thing we're going to do is run this command, which is going to copy all of the files from inside this WordPress folder to ., where . represents the current folder.
So it's going to copy everything inside WordPress into the current working directory which is the web root directory.
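As a sketch, that copy command is something like this:

    # copy everything from wordpress/ into the current folder, the web root
    sudo cp -rvf wordpress/* .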
So enter that and that copies all of those files.
And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.
And then lastly for the installation part we need to tidy up the mess that we've made.
So we need to delete this WordPress folder and the file that we downloaded.
So to do that we'll run sudo rm -r wordpress to delete that folder.
And then we'll delete the download with sudo rm and then a space and then the name of the file.
So latest.tar.gz.
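So the cleanup, written out:

    sudo rm -r wordpress       # remove the now-redundant wordpress folder
    sudo rm latest.tar.gz      # remove the downloaded archive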
And that means that we have a nice clean folder.
So I'll clear the screen to make it easy to see.
And then I'll just do another listing.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
-
-
grham.hypotheses.org grham.hypotheses.org
-
the “factuality” of chatGPT or, more prosaically, penalizes “hallucinations” more heavily
I didn't understand this
-
l’alignement
concept clé
-